Quality Analysis on New Experience

This feature is available from version 3.35.0. To use it, you need to enable the Quality Analysis Add-on on your license.

With the New Client Experience, it is possible to apply Quality Analysis to Voice Recordings using the AI features.

If you don't have the New Client Experience, you can still see most of the features described below, although with some limitations.

What it’s about

Starting from version 3.40.0, the Voice Recordings section has a new GUI that improves the User Experience.

On the New Client Experience, the Voice Recordings section shows search filters on each column.

image (12)-20240917-074023.jpg

 

It is possible to manage the table columns by selecting the ones you need to view.

Moreover, you can customize the pagination from the bottom button, defining the number of items per page (for example, a maximum of 10, 20 or 50).

By clicking on the arrows icon you can sort elements in ascending or descending order, while with the funnel filter icon on each column you can apply a series of filters to find specific voice recordings:

image-20240917-142531.png

| Column | Operator | Value |
| --- | --- | --- |
| Type | checkbox option | inbound, internal, outbound, dialer, chanspy |
| Unique ID | starts with / contains / not contains / ends with / equals / not equals | number or text value |
| Caller / Called or Connected (agent’s internal number) | starts with / contains / not contains / ends with / equals / not equals | text value |
| Queue on which the call arrived | starts with / contains / not contains / ends with / equals / not equals | text value |
| Agent who managed the conversation | starts with / contains / not contains / ends with / equals / not equals | text value |
| Rating | equals / not equals / less than / less than or equal to / greater than / greater than or equal to | numeric value |
| Audio | no filter | audio file |
| Duration | ascending or descending order | |
| Created at | select from calendar | calendar to insert a specific day or a preset range (today, yesterday, this week, last week, this month, last month, this year, last year) |
| Disposition (1st, 2nd, 3rd level) | starts with / contains / not contains / ends with / equals / not equals | text value |
| Transcribe / Sentiment Analysis / Post Call Analytics | no filter | status (Completed, New, Failed, or blank if not launched) |
| QA Categories | select values (created AWS Categories) | the matched value |

To filter Voice Recordings, click on Apply.

image-20240917-143835.png

Instead, with this icon you can clear all filters

 

If you click on a specific voice recording, you will see the edit modal, where you can insert a rating or a comment.

image-20240917-144026.png

 

If you click on the 3 dots button instead, you will see this menu, where you can edit or delete the voice recording, download the file, or start Transcribe, Sentiment Analysis and Post Call Analytics.

image-20240917-144242.png

Remember that if you have never launched Transcribe, you will not see the Sentiment Analysis and Post Call Analytics options: you must launch Transcribe first.

A voice recording can have one or more Transcriptions, Post-Call Analytics, or Sentiment Analyses. To launch Transcribe, Sentiment Analysis and Post Call Analytics you can select the voice recording and use the icons on the top right. There is also a specific button to export files in .csv format.

image-20240917-074425.png

 

It’s possible to select more than one Voice Recording and then launch these actions, but consider that each Voice Recording can be analysed according to specific permissions; therefore, not all actions are enabled for every Voice Recording.
Click on the three dots menu next to each audio recording to see which actions are enabled.

It is also possible to see all the action results relative to a Voice Recording, by editing a specific recording. From this interface, you can see the details of the Voice Recording, which cannot be modified; it’s only possible to define a rating, while from the top bar you can play the audio, download the file, or run Transcribe, Sentiment Analysis and Post Call Analytics.

On the edit page, you can see the Transcribe/Sentiment Analysis/Post Call Analytics buttons/sections only if you have enabled the features under Settings → General → Quality Analysis.

image-20241015-104719.png

In the Transcribe tab, you can see the details of each transcription: Created at (default descending sorting, with a calendar filter), Status (with a checkbox menu), Service (with a checkbox menu to choose Amazon AWS or OpenAI) and Language. By clicking on the 3 dots button you can View the transcription.

image-20241016-083255.png

 

Status can be:

  • New: the job has just been created and is waiting for processing

  • UploadingData: the file is being uploaded to the provider's server

  • InProgress: the provider is processing the uploaded file

  • Unknown: unknown status (due to some error)

  • Completed: processing completed successfully

  • Failed: processing failed (in this case, if the provider returns an error message, a warning icon is placed next to the status, with a tooltip showing that message)

Requirements

  • For Quality Analysis, a Redis container and the following environment variables have been added to the installation script .env starting from Version 3.35.0.
    So for all installations (new and existing ones, for which the update script should be executed) the variable values must be:

# Quality Analysis section
XC_QA_QUEUE_WORKERS=10
# Redis port
XC_QA_REDIS_PORT=21000
# Redis DB
XC_QA_REDIS_DB=0
# Redis username
XC_QA_REDIS_USERNAME=
# Redis password
XC_QA_REDIS_PASSWORD=
# Remove failed jobs after 1 hour
XC_QA_REMOVE_FAILED_JOBS_AFTER=3600
# Remove completed jobs after 7 days
XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800
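As a quick sanity check, you can verify with grep that all seven variables are present. A minimal sketch (it writes a sample file under /tmp for illustration; on a real server, point grep at your installation's actual .env file instead):

```shell
# Sketch: verify the Quality Analysis variables are present in an .env file.
# A sample file is written under /tmp for illustration; on a real server use
# your installation's .env file (the path depends on your setup).
cat > /tmp/sample_qa.env <<'EOF'
XC_QA_QUEUE_WORKERS=10
XC_QA_REDIS_PORT=21000
XC_QA_REDIS_DB=0
XC_QA_REDIS_USERNAME=
XC_QA_REDIS_PASSWORD=
XC_QA_REMOVE_FAILED_JOBS_AFTER=3600
XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800
EOF

# Count the Quality Analysis variables: all seven should be present.
grep -c '^XC_QA_' /tmp/sample_qa.env   # prints 7
```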

See the Troubleshooting paragraph if you want to verify that the required Redis container is running.

  • If you want to launch Post Call Analytics, you need to enable split voice recordings in the Settings section. If the voice recording is a mono file, the Post Call Analytics button is not visible.


Transcribe

You can launch a new transcription, by inserting the Provider:

If you choose AWS, you can indicate Region and language from the dropdown menu

image-20240702-141832.jpg

 

If you choose OpenAI, you don’t need to indicate the language, because it has automatic recognition

image-20240702-141958.png

 

By clicking on the 3 dots button → View, it is possible to see audio details:

image-20250310-101639.png
  • Two audio bars, indicating the User and Customer audio channels (available if split voice recordings is enabled in the Settings section).
    The agent channel shows downward bands, while the customer's audio shows upward bands.

  • Buttons to play, rewind, forward audio or download it

  • Conversation transcription, with indication of role, begin at, and duration time reported in seconds

  • Word confidence colors are labels indicating the speech recognition's reliability level of the transcription. Specifically, black text is above 90% sure, yellow around 50%, and red less than 50% (the system had difficulty identifying the correct words).

Confidence colors are visible only if you use AWS as the provider for Transcription.

  • A red dot identifies the part of the conversation being played
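The confidence thresholds above can be sketched as a simple mapping (an illustration of the documented thresholds, not actual product code):

```shell
# Illustration only: map a word-confidence percentage to the colors described
# above (black above 90%, yellow from 50% to 90%, red below 50%).
confidence_color() {
  conf="$1"
  if   [ "$conf" -gt 90 ]; then echo black
  elif [ "$conf" -ge 50 ]; then echo yellow
  else                          echo red
  fi
}

confidence_color 95   # prints black
confidence_color 72   # prints yellow
confidence_color 30   # prints red
```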

Sentiment Analysis

Sentiment analysis can be run on the latest transcription produced by AWS Transcribe, OpenAI Whisper, or Post-Call Analytics. This feature inspects the call transcript text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE, each expressed as a percentage).

Sentiment Analysis can be run only if a transcription already exists, and only with an AWS account (by indicating Region and Language).

image-20241016-083738.png

 

The list of sentiment analysis shows the following columns:

  • Created at (default descending sorting and filter from calendar)

  • Status: (the same described above, with checkbox menu)

  • Main Sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE)

  • Percentages of each sentiment

  • Language
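To illustrate the relation between the Main Sentiment and the percentages: the label with the highest percentage is the prevailing one. A sketch with made-up values (the figures below are not real output):

```shell
# Illustration with made-up values: the main sentiment is the label with the
# highest percentage among POSITIVE, NEUTRAL, MIXED and NEGATIVE.
printf 'POSITIVE 12\nNEUTRAL 55\nMIXED 8\nNEGATIVE 25\n' \
  | sort -k2,2 -nr | head -n1 | cut -d' ' -f1   # prints NEUTRAL
```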

Post-Call Analytics

Post-Call Analytics estimates how the customer and agent have been feeling throughout the call.

Post-Call Analytics can be run only with an AWS account, by indicating Region and Language

 

image-20241016-083920.png

 

The columns show Created at, Status and Language code, while by clicking on the 3 dots button you can View the post-call analytics.

This feature can be useful when the sentiment output is negative, so the user can choose to run the post-call analytics, to find out in which parts of the conversation the issue occurred.

image-20250310-100500.png


You can view General tab, with details about the analysis:

image-20250310-101242.png
  • Two audio bars, indicating the User and Customer audio channels (available if split voice recordings is enabled in the Settings section).
    The agent channel shows downward bands, while the customer's audio shows upward bands.
    It’s also possible to navigate the audio bar with the mouse to listen to only a desired part

  • Buttons to play, rewind, forward audio or download it

image-20250310-100747.png
  • This graph shows the sentiment score over time: the score ranges from -5 to +5 and displays the sentiment detected during the call for customer and agent.
    The system starts with a neutral sentiment at 0 and computes the score for each quarter of the call

 

image-20250310-100622.png

Beside it, you can view the talk time graph, i.e. the division of the call into seconds, with an indication of how many seconds each participant spoke (also expressed as percentages). Moreover, you can see the seconds and percentage of non-talk time.
By clicking on the graph, it is also possible to remove one of the participants from the analysis, to view the details of the agent or the customer alone.

image-20250310-100815.png

Below, the graph shows the sentiment per quarter as a colour heatmap, divided into 2 rows, one for the customer and one for the agent.
The score ranges from -5 (very negative) to +5 (very positive).
The data is shown per quarter of the call (Q1, Q2, Q3, Q4), plus one box for the overall (average) result.

image-20250310-100852.png

Then the system marks issues detected during the call: by analysing the call sentiment, the AI recognises negative parts of the conversation and reports them in this section, with the interval and the issue detected.

image-20250310-101031.png

Finally, the next section shows the categories detected during the call, i.e. the categories matched in the conversation, with an indication of the intervals in which each was found.

Starting from Version 3.36.0, in fact, it’s possible to create AWS Categories; by default, all created categories are searched inside the conversation.

Instead, on Conversation tab you can view:

image-20250310-100534.png
  • Two audio bars, indicating the User and Customer audio channels (enabling split voice recordings in settings section).
    The agent channel has downward values, while the client's audio shows upward bands.

  • Buttons to play, rewind, forward audio or download it

  • Conversation transcription, with indication of role, sentiment (Positive, Neutral, Negative), begin at, duration time reported in seconds, and matched QA categories

  • Word confidence colors are labels indicating the speech recognition's reliability level. Specifically, black text is above 90% sure, yellow around 50%, and red less than 50% (the system had difficulty identifying the correct words).

  • A red dot identifies the part of the conversation being played: if the agent and the customer talk over each other, two sentences will show the red dot at the same time

image-20250310-101505.png

 

Data Redaction

Starting from version 3.39.0, the Data Redaction feature is available, only on the New Client Experience.
Redaction with batch transcriptions is available only with the languages US English ("en-US") and US Spanish ("es-US").

This function allows users to choose whether to enable the data redaction process before starting the post-call analysis. By enabling this mode, when you launch a Post-Call Analytics, sensitive information (credit cards, phone numbers, addresses, etc.) will be hidden in the transcript returned by AWS (sensitive data is masked with asterisks).
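The language restriction above can be sketched as a simple check (an illustration, not product code):

```shell
# Illustration: data redaction is only available for the en-US and es-US
# language codes (per the restriction described above).
redaction_available() {
  case "$1" in
    en-US|es-US) echo yes ;;
    *)           echo no  ;;
  esac
}

redaction_available en-US   # prints yes
redaction_available it-IT   # prints no
```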

image-20240829-092504.png

 

When launching a Post-Call Analytics, if you choose "English US" or "Spanish US" as the language code, you can manually activate the redaction by selecting the "Allow data redaction" option (disabled by default).

When the Post-Call Analytics has finished, by clicking on the eye icon to view the content, the sensitive data in the conversation will be obscured with asterisks.

Troubleshooting

If you see an error like this

image-20250217-132348.png

to check if the Redis container is started correctly, you can follow this procedure:

  • connect to the machine in SSH and launch this command as root user

docker ps
  • in the list of active containers, a container using the Redis image (IMAGE column) and named “bullmq-v1” (NAMES column) should appear.
    Moreover, in the PORTS column you can see that host port 21000 is mapped (in this case to internal port 6379 on the container).

image-20240902-135816.png
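The check above can also be scripted; a minimal sketch (container name and port taken from the text; it also handles the case where Docker is not installed):

```shell
# Sketch: check whether the bullmq-v1 Redis container is up.
check_bullmq() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker not installed"
  elif docker ps --format '{{.Names}}' 2>/dev/null | grep -qx 'bullmq-v1'; then
    echo "bullmq-v1 running"
  else
    echo "bullmq-v1 not found"
  fi
}

check_bullmq
```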

If the output of the docker ps command is bash: docker: command not found (i.e. Docker is not installed), or the Redis container named bullmq-v1 does not appear, you need to run this script as root user to install and start the container. The script should start the Redis container on port 21000, but it’s important to verify it.
Consider that launching this script causes a disservice, because it restarts the Motion service and updates the NGINX configuration (creating a backup file), so it is recommended to launch it while the server is not in use.

NGINX Best Practice

Our configuration files should not be modified, so if you need to customise the nginx conf file, it is strongly recommended not to modify our file, but to create a copy and customise your own.

We overwrite the configuration file each time we update it: for example, when a script is launched, the nginx file is modified. In any case, the script saves a backup copy of the modified nginx file and highlights on screen the differences between versions when the script is run.

curl -u 'public:bs4#)W]h8+VK),RV' --silent --location https://repository.xcally.com/repository/provisioning/Scripts/motionV3_new_feature_update | bash
  • Then you need to check the environment variables in the .env file:

# Quality Analysis section
XC_QA_QUEUE_WORKERS=10
# Redis port
XC_QA_REDIS_PORT=21000
# Redis DB
XC_QA_REDIS_DB=0
# Redis username
XC_QA_REDIS_USERNAME=
# Redis password
XC_QA_REDIS_PASSWORD=
# Remove failed jobs after 1 hour
XC_QA_REMOVE_FAILED_JOBS_AFTER=3600
# Remove completed jobs after 7 days
XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800

  • Finally, as motion user

su - motion

go to the folder /var/opt/motion2 and launch this command

cd /var/opt/motion2
npm run initialize

to apply the changes to the environment variables. Consider that it restarts the API and can cause a disservice, so it is recommended to launch it while the server is not in use.

 
