Quality Analysis with AWS Account

Requirements

Let’s see how to configure an Amazon AWS Account according to the following requirements:

You need an Amazon AWS Account with an IAM user that has some specific permissions.
To add permissions to the user, go to IAM > Users > username, open the Permissions tab, click Add permissions → Add permissions, choose Attach policies directly and select these policies:

  • Amazon Transcribe to transform call recordings from audio to text (Amazon Transcribe – Speech to Text)

  • Amazon Comprehend to run sentiment analysis on the transcripts (Amazon Comprehend) - text analysis, key phrases and sentiment

  • On the AWS account, you need to configure an S3 bucket in your chosen AWS Region (from AWS Console Home → S3 → Buckets; see the Amazon user guide Create your first S3 bucket).

  • For Transcribe and the other AI features, on the bucket’s Permissions tab, check in the grantee section that you have write permissions on Objects and Bucket ACL. A minimal scripted version of these requirements is sketched after this list.
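These requirements can also be scripted. Below is a minimal boto3 sketch, assuming a hypothetical IAM user name, bucket name and Region; the AWS managed policies used here (AmazonTranscribeFullAccess, ComprehendFullAccess, AmazonS3FullAccess) are one possible choice, and you may prefer narrower custom policies.

  # Minimal sketch (hypothetical names): attach the policies to the IAM user
  # and create the S3 bucket in the chosen AWS Region.
  import boto3

  USER_NAME = "xcally-quality-analysis"   # hypothetical IAM user
  BUCKET_NAME = "my-xcally-recordings"    # hypothetical bucket name
  REGION = "eu-west-1"                    # your chosen AWS Region

  iam = boto3.client("iam")
  for policy_arn in (
      "arn:aws:iam::aws:policy/AmazonTranscribeFullAccess",
      "arn:aws:iam::aws:policy/ComprehendFullAccess",
      "arn:aws:iam::aws:policy/AmazonS3FullAccess",
  ):
      iam.attach_user_policy(UserName=USER_NAME, PolicyArn=policy_arn)

  s3 = boto3.client("s3", region_name=REGION)
  s3.create_bucket(
      Bucket=BUCKET_NAME,
      # Omit CreateBucketConfiguration when the Region is us-east-1.
      CreateBucketConfiguration={"LocationConstraint": REGION},
  )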

 

 

Configuration

  • From XCALLY Settings, configure the Cloud Provider. Click on the button:

  • Add the Amazon AWS Account:

To retrieve the Access Key ID and Secret Access Key, open your user Account on Amazon AWS under IAM > Users > username

and click on Security Credentials.

Under Access keys you can view the existing keys or generate a new one.
If you need to create a new key, click on Create Access Key.

The system asks you to choose among the “access key best practices & alternatives”: select Third-party service, then Retrieve your access key.

You need to copy the secret access key before closing the window because when you click on Done, it will no longer be visible.
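If you prefer to create the key programmatically instead of through the console, a minimal boto3 sketch (with a hypothetical user name) looks like this; as in the console, the secret is returned only once.

  import boto3

  iam = boto3.client("iam")
  resp = iam.create_access_key(UserName="xcally-quality-analysis")  # hypothetical IAM user
  access_key_id = resp["AccessKey"]["AccessKeyId"]
  # Store the secret now: it cannot be retrieved again later.
  secret_access_key = resp["AccessKey"]["SecretAccessKey"]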

 

  • To use Post Call Analytics, from the Settings menu → General → Global section, enable Audio split for voice recordings

  • Configure the Quality Analysis you want to use:

 

Enable Transcribe, Sentiment Analysis and Post call analytics.

Region: choose from the list of proposed values the geographic region closest to you (the S3 URI must point to the correct region: it must be the same as the region configured in AWS; a quick check is sketched after these settings).

Account: choose one of the accounts previously configured.
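A quick way to verify that the bucket really lives in the Region you select here is the following boto3 sketch (bucket name and Region are hypothetical).

  import boto3

  BUCKET_NAME = "my-xcally-recordings"   # hypothetical bucket name
  EXPECTED_REGION = "eu-west-1"          # the Region configured in XCALLY

  s3 = boto3.client("s3")
  # get_bucket_location returns None for buckets in us-east-1.
  location = s3.get_bucket_location(Bucket=BUCKET_NAME)["LocationConstraint"] or "us-east-1"
  assert location == EXPECTED_REGION, f"Bucket is in {location}, XCALLY expects {EXPECTED_REGION}"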

 

 

How it works

If you want to launch AI features using the New Client Experience, explore this documentation.

1- Run the Transcribe option, choosing Region and Language for the transcription process (the Language shown here by default is the same one indicated in General Settings)

The default Account is always selected, but you can switch to another account: for example, if you have multilingual recordings you may know that AWS Transcribe performs better with some languages, while others are transcribed better with OpenAI.
If you select a different account from the default one, the default in the Settings section will not be changed.

When you click on Start Transcription, your audio is sent to the S3 bucket configured on AWS; once the upload is complete, Amazon Transcribe takes the audio from this bucket and launches the transcription (a minimal sketch of this flow is shown below).
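For illustration only, this is roughly what such an upload-then-transcribe flow looks like with boto3; bucket, object key, job name and language are hypothetical placeholders, not XCALLY’s internal names.

  import boto3

  BUCKET = "my-xcally-recordings"     # bucket configured in XCALLY (hypothetical)
  KEY = "recordings/call-1234.wav"    # hypothetical object key
  REGION = "eu-west-1"

  # 1. Upload the call recording to the configured bucket.
  s3 = boto3.client("s3", region_name=REGION)
  s3.upload_file("call-1234.wav", BUCKET, KEY)

  # 2. Start the transcription job that reads the audio from that bucket.
  transcribe = boto3.client("transcribe", region_name=REGION)
  transcribe.start_transcription_job(
      TranscriptionJobName="call-1234",               # hypothetical job name
      Media={"MediaFileUri": f"s3://{BUCKET}/{KEY}"},
      MediaFormat="wav",
      LanguageCode="en-US",                           # the Language chosen in the dialog
      OutputBucketName=BUCKET,                        # where the transcript JSON is written
  )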
You can see the transcript in the specific tab (Edit Voice Recording):

Record: Call recording audio file (mono format).

If split voice recording has been enabled, the system generates one audio per speaker (stereo format).

When post call analytics is run, the audio file is colored depending on the sentiment analysis (positive, negative or neutral).

Transcription complete: if split voice recording has been enabled, the call transcript is split into two sides, one per conversation turn/speaker.

When post call analytics is run, the text side box is colored depending on the sentiment analysis (POSITIVE, NEUTRAL or NEGATIVE).

Transcript: Call transcript. Amazon Transcribe uses machine learning models to convert speech to text.

 

 

 

Transcript feature:

  1. Click on the icon to play the single-speaker audio

  2. Click on the icon to mute the single-speaker audio

 

2- Run Sentiment → you will always have an AWS Account (with Region and Language) to choose; you can then see the Sentiment Analysis in the specific tab (Edit Voice Recording):

 

Sentiment analysis:

Sentiment analysis inspects the call transcript text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE) together with the corresponding confidence levels.

Sentiment determination returns the following values:

  • Positive – The text expresses an overall positive sentiment.

  • Negative – The text expresses an overall negative sentiment.

  • Mixed – The text expresses both positive and negative sentiments.

  • Neutral – The text does not express either positive or negative sentiments.

 

SENTIMENT (first box): The inferred sentiment that Amazon Comprehend has the highest level of confidence in.

POSITIVE, NEGATIVE, NEUTRAL, MIXED: Amazon Comprehend confidence levels for each sentiment.
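For reference, this is how Amazon Comprehend exposes both values: a minimal boto3 sketch (Region and sample text are hypothetical) that returns the prevailing sentiment and the confidence score for each label.

  import boto3

  comprehend = boto3.client("comprehend", region_name="eu-west-1")  # hypothetical Region
  resp = comprehend.detect_sentiment(
      Text="Thanks a lot, you solved my problem!",  # the call transcript text (very long texts may need chunking)
      LanguageCode="en",
  )
  print(resp["Sentiment"])        # e.g. "POSITIVE" -- the highest-confidence label (SENTIMENT box)
  print(resp["SentimentScore"])   # {"Positive": ..., "Negative": ..., "Neutral": ..., "Mixed": ...}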

 

 

3- Run Post Call Analytics (choosing Account with Region and Language) and see the analytics in the specific tab (Edit Voice Recording):

 

Post-call Analytics sentiment analysis estimates how the customer and agent are feeling throughout the call. This metric is represented as a quantitative value (with a range from -5 to 5). Quantitative values are provided per quarter and per call.

This metric can help identify if your agent is able to delight an upset customer by the time the call ends.
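This per-quarter metric matches the output of Amazon Transcribe Call Analytics. Assuming that is the underlying service, starting such a job with boto3 could look like the sketch below; the job name, bucket, IAM role ARN and channel mapping are all hypothetical placeholders.

  import boto3

  transcribe = boto3.client("transcribe", region_name="eu-west-1")
  transcribe.start_call_analytics_job(
      CallAnalyticsJobName="call-1234-analytics",                       # hypothetical job name
      Media={"MediaFileUri": "s3://my-xcally-recordings/recordings/call-1234.wav"},
      OutputLocation="s3://my-xcally-recordings/analytics/",
      DataAccessRoleArn="arn:aws:iam::123456789012:role/TranscribeDataAccessRole",  # hypothetical role
      ChannelDefinitions=[
          {"ChannelId": 0, "ParticipantRole": "AGENT"},
          {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
      ],
  )
  # The resulting JSON contains overall and per-quarter sentiment scores
  # (range -5 to 5) for each participant.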

 

Post-call analytics: call overall sentiment score per speaker, with a range from 0 to 5. XCALLY rescales Amazon’s metrics (for example, -5 (Amazon) corresponds to 0 (XCALLY), 0 (Amazon) corresponds to 2.5 (XCALLY), and 5 (Amazon) corresponds to 5 (XCALLY)).
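Assuming the rescaling is linear (which is consistent with the three example points above), the conversion is simply:

  def amazon_to_xcally(score: float) -> float:
      """Map Amazon's overall sentiment score (-5 .. 5) onto XCALLY's 0 .. 5 scale."""
      return (score + 5) / 2

  assert amazon_to_xcally(-5) == 0
  assert amazon_to_xcally(0) == 2.5
  assert amazon_to_xcally(5) == 5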

 

 

Time graph sentiment: It displays the overall sentiment per speaker per quarter, with a range from -5 to 5.

By clicking on one of the four points of the line (the call’s quarters), the sentiment scores per speaker are shown.

 
