Quality Analysis on the New Client Experience
This feature is available starting from version 3.35.0; to use it, Quality Analysis must be enabled on your license.
With the New Client Experience, you can apply Quality Analysis to Voice Recordings using the AI features.
If you don't have the New Client Experience, you can still access most of the features described below, although with some limitations.
Voice Recordings on New Client Experience
Starting from version 3.40.0, the Voice Recordings section features a new GUI that improves the user experience.
On the New Client Experience, the Voice Recordings section shows search filters on each column.
You can manage the table columns by selecting the ones you need to view.
Moreover, you can customize pagination using the button at the bottom, which defines the number of items per page (for example, a maximum of 10, 20, or 50).
By clicking the arrows icon you can sort elements in ascending or descending order, while the funnel icon on each column lets you apply a series of filters to find specific voice recordings:
Column | Operator | Value |
---|---|---|
Type | checkbox option | inbound, internal, outbound, dialer, chanspy |
Unique ID | starts with / contains / not contains / ends with / equals / not equals | number or text value |
Caller / Called or Connected (agent’s internal number) | starts with / contains / not contains / ends with / equals / not equals | text value |
Queue on which the call arrived | starts with / contains / not contains / ends with / equals / not equals | text value |
Agent who managed the conversation | starts with / contains / not contains / ends with / equals / not equals | text value |
Rating | equals / not equals / less than / less than or equal to / greater than / greater than or equal to | numeric value |
Audio | no filter | audio file |
Duration | ascending or descending order | no value |
Created at | select from calendar | calendar to insert a specific day or a preset range (today, yesterday, this week, last week, this month, last month, this year, last year) |
Disposition (1st, 2nd, 3rd level) | starts with / contains / not contains / ends with / equals / not equals | text value |
Transcribe / Sentiment Analysis / Post Call Analytics | no filter | status (Completed, New, Failed, or blank if not launched) |
QA Categories | select values (created AWS Categories) | the matched value |
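The text operators used by the column filters behave like standard string predicates. As a purely illustrative sketch (the function and mapping names below are assumptions, not the product's API; the real filtering happens server-side), they can be modeled like this:

```python
# Illustrative mapping of the column filter operators to string predicates.
# These helper names are hypothetical and only demonstrate the semantics.
OPERATORS = {
    "starts with": lambda value, term: value.startswith(term),
    "ends with": lambda value, term: value.endswith(term),
    "contains": lambda value, term: term in value,
    "not contains": lambda value, term: term not in value,
    "equals": lambda value, term: value == term,
    "not equals": lambda value, term: value != term,
}

def matches(value: str, operator: str, term: str) -> bool:
    """Return True when `value` satisfies the chosen filter operator."""
    return OPERATORS[operator](value, term)
```

For example, filtering the Queue column with the "starts with" operator and the term "sup" would keep a recording whose queue is "support_queue".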
To filter Voice Recordings, click Apply; to clear all filters, use the clear-filters icon instead.
If you click on a specific voice recording, an edit modal opens where you can insert a rating or a comment.
If you click the three-dots button instead, a menu appears that lets you edit or delete the voice recording, download the file, or start Transcribe, Sentiment Analysis, and Post Call Analytics.
A voice recording can have one or more Transcriptions, Post-Call Analytics, or Sentiment Analyses. To launch Transcribe, Sentiment Analysis, and Post Call Analytics, select the voice recording and use the icons at the top right. There is also a dedicated button to export files in .csv format.
It is also possible to see the results of all actions related to a Voice Recording by editing a specific recording. From this interface, you can see the details of the Voice Recording, which cannot be modified. It is only possible to define a rating, while from the top bar you can play the audio, download the file, or run Transcribe, Sentiment Analysis, and Post Call Analytics.
In the Transcribe tab, you can see the details of each transcription, showing Created at (default descending sorting, with a calendar filter), Status (with a checkbox menu), Service (with a checkbox menu to choose Amazon AWS or OpenAI), and Language; by clicking the three-dots button you can view the transcription.
Status can be:
New: the job has just been created and is waiting for processing
UploadingData: the file is being uploaded to the provider's server
InProgress: the provider is processing the uploaded file
Unknown: unknown status (due to some error)
Completed: processing completed successfully
Failed: processing failed. In this case, if the provider returns an error message, a warning icon is shown next to the status, with a tooltip displaying that message.
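The job lifecycle above can be summarized as a small enumeration. This is a sketch for reference only, not the product's actual data model:

```python
from enum import Enum

class JobStatus(Enum):
    """Processing statuses of a transcription/analysis job, as listed above."""
    NEW = "New"                       # job just created, waiting for processing
    UPLOADING_DATA = "UploadingData"  # file being uploaded to the provider's server
    IN_PROGRESS = "InProgress"        # provider is processing the uploaded file
    UNKNOWN = "Unknown"               # status unknown (due to some error)
    COMPLETED = "Completed"           # processing completed successfully
    FAILED = "Failed"                 # processing failed
```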
Requirements
Transcribe
You can launch a new transcription by selecting the Provider:
If you choose AWS, you can indicate the Region and language from the dropdown menus.
If you choose OpenAI, you don’t need to indicate the language, because it is recognized automatically.
By clicking on the eye button it is possible to see the audio details:
Two audio bars, indicating the User and Customer audio channels (when split voice recordings are enabled in the Settings section).
The right channel, represented by the agent, shows downward bands, while the left channel, containing the client's audio, shows upward bands. By scrolling the vertical bar to a position on the plot, you can see the corresponding part of the transcript. Word confidence colors indicate the speech recognition's reliability level: white text means confidence above 90%, orange between 50% and 90%, and red below 50% (the system had difficulty identifying the correct words).
It is possible to Hide confidence to remove the above-mentioned colors from the text body. On the left there is the conversation transcription, divided by the two speakers (User and Customer); the duration of each exchange is reported in seconds.
To see a more detailed view of the transcription, click on Show table visualisation
From this visualisation, you can click on Show chat visualisation to turn back to the previous interface
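The word-confidence thresholds described above can be sketched as a simple mapping. This is illustrative only; the actual rendering is done by the GUI:

```python
def confidence_color(confidence: float) -> str:
    """Map a word's recognition confidence (0.0-1.0) to the display color
    used in the transcript view: white above 90%, orange between 50% and
    90%, red below 50%."""
    if confidence > 0.9:
        return "white"
    if confidence >= 0.5:
        return "orange"
    return "red"
```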
Sentiment Analysis
Sentiment analysis can be run on the latest transcription produced by AWS Transcribe, OpenAI Whisper, or Post-Call Analytics. This feature inspects the call transcript text and returns the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE) with the corresponding confidence levels, expressed as percentages.
The list of sentiment analysis shows the following columns:
Created at (default descending sorting and filter from calendar)
Status (the same statuses described above, with checkbox menu)
Main Sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE)
Percentages of each sentiment
Language
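Assuming a Comprehend-style score object (one confidence value per sentiment label, which is how AWS Comprehend structures its response), the main sentiment and the percentages shown in the list can be derived like this. The function name and input shape are assumptions for illustration:

```python
def sentiment_summary(scores: dict[str, float]) -> tuple[str, dict[str, float]]:
    """Given per-sentiment confidence scores (0.0-1.0), return the main
    sentiment (highest score) and each score expressed as a percentage."""
    main = max(scores, key=scores.get)
    percentages = {label: round(score * 100, 1) for label, score in scores.items()}
    return main, percentages

# Example with hypothetical scores:
main, pct = sentiment_summary(
    {"POSITIVE": 0.72, "NEUTRAL": 0.20, "MIXED": 0.05, "NEGATIVE": 0.03}
)
# main == "POSITIVE", pct["POSITIVE"] == 72.0
```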
Post-Call Analytics
Post-Call Analytics estimates how the customer and agent have been feeling throughout the call.
The columns show Created at, Status, and Language code, while the three-dots button lets you view the post-call analytics.
This feature can be useful when the sentiment output is negative, so the user can choose to run the post-call analytics, to find out in which parts of the conversation the issue occurred.
You can view general details about the analysis:
general sentiment, with scores assigned to the agent and the client
word confidence colors indicating the speech recognition's reliability level of the transcription
matched categories: by default all created categories are searched inside the conversation, but by clicking the filter icon you can also restrict the search to specific categories
audio file divided per speaker
conversation with the relative duration of each part
For each piece of conversation, you can view the speaker’s tone of voice, expressed using emojis:
(positive emoji) = appropriate, non-aggressive tone
(neutral emoji) = conversation improvable, but appropriate to the context
(negative emoji) = threatening or aggressive tone, inappropriate vocabulary
Moreover, when categories are matched in the conversation by AWS (when the audio is sent for analysis), you will see a green star near the phrase and an orange bar to the left.
You can also track pauses, for example by filtering on the non_talk category to match moments in which pauses last at least the number of seconds entered in the category configuration (pause moments can also be found inside the conversation's speech bubbles).
By selecting a piece of conversation you can view its sentiment, matched categories, and duration.
Data Redaction
This function allows users to choose whether to enable the data redaction process before starting the post-call analysis. By enabling this mode, when you launch a Post-Call Analytics job, sensitive information (credit cards, phone numbers, addresses, etc.) will be hidden in the transcript returned by AWS (sensitive data are masked with asterisks).
When launching a Post-Call Analytics job, if you choose “English US” or “Spanish US” as the language code, you can manually activate redaction (disabled by default) by selecting the “Allow data redaction” option.
When the post-call analytics job is finished, click the eye icon to view the content: the sensitive data in the conversation will be obscured with asterisks.
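The redaction itself is performed by AWS. Purely to illustrate what the masked output looks like, here is a sketch that replaces long digit runs (such as card or phone numbers) with asterisks; this is not the AWS PII-redaction algorithm:

```python
import re

def mask_digit_runs(text: str, min_len: int = 6) -> str:
    """Replace runs of `min_len` or more digits (allowing spaces/dashes
    between groups) with asterisks, mimicking the look of a redacted
    transcript. NOT the actual AWS redaction logic, only a visual example."""
    pattern = re.compile(r"\d(?:[\s-]?\d){%d,}" % (min_len - 1))
    return pattern.sub(lambda m: "*" * len(m.group()), text)

# "call me on 333-123-4567" -> "call me on ************"
```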
Troubleshooting
To check if the Redis container is started correctly, you can follow this procedure:
connect to the machine via SSH and launch this command as the root user:
docker ps
in the list of active containers, a container using the Redis image (IMAGE column) and named “bullmq-v1” (NAMES column) should appear.
In the PORTS column, you can also see that host port 21000 is mapped (in this case to internal port 6379 on the container).
curl -u 'public:bs4#)W]h8+VK),RV' --silent --location https://repository.xcally.com/repository/provisioning/Scripts/motionV3_new_feature_update | bash
Then check the environment variables in the .env file (the variables listed in the Requirements paragraph).
Finally, as the motion user:
su - motion
go to the folder:
cd /var/opt/motion2
and launch the command that applies the changes to the environment variables. Note that it restarts the API and may cause a service interruption, so it is recommended to launch it while the server is not in use.