V3 Recordings
What’s about
From the Cally Square section, it is possible to access Recordings.
The Recordings section shows all the recordings made inside Cally Square projects:
For each recording, it is possible to see the following information:
Filename: the name of the file
Project: the name of the Cally Square project the call was recorded in
UniqueID: uniqueID of the call
Phone: caller phone number
Exten: the extension the caller dialed
Audio: hear a preview (ONLY for .wav format) or download call recordings
Created at: the date and the time of the recording
Application: the application from which the recording has been made (record, googleasr, awsasr)
Duration: the duration of the recording
From the three dots menu, it is possible to:
Download Square Recording
Delete Square Recording
Edit Square Recording
Download Recording Transcription (available only for recordings made by Google ASR and AWS ASR):
the transcription file is downloaded in .txt format
the transcription reports the Confidence rate: the reliability of the transcription produced by the ASR provider from the voice recording
Please note the default recording path is /var/opt/motion2/server/files/recordings/
Recordings on New Client Experience
FROM VERSION 3.43.0
If you enable the New Client Experience, you can access this view, showing the list of Cally Square recordings with indication of:
ID and Unique ID
Filename
Cally Square Project and related Application
Phone and extension
Audio file and duration
Quality analysis information (transcription, sentiment analysis, post call analytics and QA categories), from version 3.50.0
you can:
search for a specific recording
clear all filters
manage columns, by selecting or deselecting them
activate the advanced search for each field
By clicking on the 3 dots of a specific recording, you can go to the edit section, download the audio file, delete it, or run quality analysis features (transcribe, sentiment analysis, post-call analytics).
Quality Analysis on New Client Experience
FROM VERSION 3.50.0
The Quality Analysis feature allows you to perform transcription, sentiment analysis, and post-call analytics on Cally Square recorded calls using AI tools powered by AWS and OpenAI.
Requirements
This feature requires an active Quality Analysis Add-on license.
For Quality Analysis, a Redis container and the following environment variables have been added to the installation script .env starting from Version 3.35.0.
For all installations (new and existing ones, for which the update script should be executed), the variable values must be:
XC_QA_QUEUE_WORKERS=10 # Quality Analysis SECTION
XC_QA_REDIS_PORT=21000 # timings redis port
XC_QA_REDIS_DB=0 # timings redis db
XC_QA_REDIS_USERNAME=
XC_QA_REDIS_PASSWORD=
XC_QA_REMOVE_FAILED_JOBS_AFTER=3600 # 1 hour
XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800 # 7 days
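As a quick sanity check, the presence of these variables in the .env file can be verified with a short shell snippet. This is a minimal sketch: it writes a sample file to a temporary path purely for illustration, so point ENV_FILE at your real installation .env instead.

```shell
# Minimal sketch: check that the XC_QA_* variables exist in a .env file.
# The sample file below is illustrative; use your real .env path in practice.
ENV_FILE="$(mktemp)"
cat > "$ENV_FILE" <<'EOF'
XC_QA_QUEUE_WORKERS=10
XC_QA_REDIS_PORT=21000
XC_QA_REDIS_DB=0
XC_QA_REMOVE_FAILED_JOBS_AFTER=3600
XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800
EOF
missing=0
for var in XC_QA_QUEUE_WORKERS XC_QA_REDIS_PORT XC_QA_REDIS_DB \
           XC_QA_REMOVE_FAILED_JOBS_AFTER XC_QA_REMOVE_COMPLETED_JOBS_AFTER; do
  # each required variable must appear at the start of a line, as NAME=value
  grep -q "^${var}=" "$ENV_FILE" || { echo "MISSING: $var"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all XC_QA_ variables present"
```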
See the Troubleshooting paragraph to verify that the required Redis container is running.
Remember to enable split voice recordings in the Settings section.
To initiate the quality analysis of a Cally Square recorded call, click the three-dot menu of a specific recording.
From there, you can choose to start transcription, sentiment analysis, or post-call analytics.
Please note that Transcription is the first required step.
Transcribe
The transcription feature converts recorded calls into text, making it easier to review and analyze conversations.
When you launch a new transcription by clicking on Run Transcribe, you first need to select the Provider:
If you choose AWS, you should specify Region and language from the dropdown menu.
If you choose OpenAI, you don’t need to indicate the language, because it is recognised automatically.
After you click on Start transcription, wait until the Transcribe value becomes Completed:
Then, click on the three dots → Edit → Transcribe:
By clicking on the 3 dots button → View, it is possible to see the result:
The audio bar, indicating the Customer audio channel.
Buttons to play, rewind, forward audio or download it.
Conversation transcription, with indication of role (Customer), begin at and duration time reported in seconds.
Word confidence colors label the speech recognition's reliability level of the transcription. Specifically, black text means the system is more than 90% sure, yellow between 50% and 90%, and red less than 50% (the system had difficulty identifying the correct words).
Confidence colors are visible only if you use AWS as the provider for Transcription.
A red dot identifying the part of the conversation currently playing in the audio file.
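The confidence thresholds described above can be sketched as a simple mapping. The function name and exact cut-offs below are assumptions based on the text, not XCALLY internals:

```shell
# Hypothetical helper mirroring the thresholds above:
# >90% -> black, 50-90% -> yellow, <50% -> red.
confidence_color() {
  pct="$1"   # confidence as an integer percentage (0-100)
  if [ "$pct" -gt 90 ]; then
    echo black
  elif [ "$pct" -ge 50 ]; then
    echo yellow
  else
    echo red
  fi
}
confidence_color 95   # black
confidence_color 60   # yellow
confidence_color 30   # red
```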
Sentiment Analysis
The sentiment analysis identifies the emotional tone of the conversation, highlighting whether the call was positive, negative, or neutral.
Sentiment analysis can be run on the latest transcription produced by AWS Transcribe, OpenAI Whisper or post-call analytics. This feature inspects the call transcript text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE, expressed in percentage).
Sentiment Analysis can be run - only if Transcribe is already enabled - with an AWS account (by indicating Region and Language). From version 3.50.0 Sentiment Analysis is available with an OpenAI account as well.
The sentiment with OpenAI is based on the new features of OpenAI API responses.
The output of the score calculation function is not deterministic, because it is subject to interpretation by the AI. Therefore, running the sentiment on the same transcription multiple times may result in slightly different scores.
You can run the sentiment analysis feature by clicking on the three dots and then on Run Sentiment Analysis.
A modal will open to let you select the preferred provider:
If you choose AWS, you should specify Region and language from the dropdown menu.
If you choose OpenAI, you don’t need to indicate the language, because it is recognised automatically.
After you click on Start analysis, wait until the Sentiment Analysis value becomes Completed:
Then, click on the three dots → Edit → Sentiment analysis to see the result:
The sentiment analysis section shows the following columns:
Created at (default descending sorting and filter from calendar)
Status
Service (Amazon AWS or OpenAI)
Main Sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE)
Percentages of each sentiment
Language
Post-Call Analytics
Post-Call Analytics estimates how the customer has been feeling throughout the call. This feature is especially useful when the sentiment analysis returns a negative result, enabling the supervisor to run post-call analytics and pinpoint the parts of the conversation where issues may have occurred.
You can run the post-call analytics feature by clicking on the three dots and then on Run Post-Call Analytics. It can be run only with an AWS account, by indicating Region and Language.
After you click on Start analysis, wait until the Post-Call Analytics value becomes Completed:
Then, click on the three dots → Edit → Post Call Analytics.
As columns, you can see Created at, Status, Language code.
By clicking on the 3 dots button → View, it is possible to see more details.
You can view General tab, with sections showing the result of the analysis.
The audio bar on the top is the Customer audio channel.
It’s also possible to navigate the bar to listen to only the desired part of the recording.
You can find buttons to play, rewind, forward audio or download it.
Sentiment score over time graph: the score is expressed from -5 to +5 and displays the customer sentiment detected during the call. The system starts with a neutral sentiment at 0 and detects the score for every quarter of the call.
Customer Talk Time graph: it shows the talk time in seconds
Sentiment per quarter: a heatmap graphed with colours.
Score ranging from -5 (very negative) to +5 (very positive).
The data is shown per quarter call (Q1,Q2,Q3,Q4) + one box for the overall (average result).
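As an illustration of how the overall box relates to the four quarters, the quarter scores can be averaged. The scores below are made up, on the -5..+5 scale stated above; the real UI may round differently:

```shell
# Illustrative per-quarter sentiment scores on the -5..+5 scale.
q1=2; q2=0; q3=-3; q4=1
# The overall box is the average of the four quarter scores
# (integer arithmetic here, purely for illustration).
overall=$(( (q1 + q2 + q3 + q4) / 4 ))
echo "Q1=$q1 Q2=$q2 Q3=$q3 Q4=$q4 overall=$overall"
```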
Then, if the system automatically detects issues during the call, it marks them in a dedicated section. By analysing the call sentiment, the AI tools are able to recognise negative parts of the conversation and report them, with the interval and the issue detected.
Finally the last section shows the categories detected during the call, with indication of detection intervals.
See how to create AWS Categories
Instead, on Conversation tab you can view:
The audio bar, indicating the Customer audio channel.
Buttons to play, rewind, forward audio or download it
Conversation transcription, with indication of role, sentiment (Positive, Neutral, Negative), begin at and duration time reported in seconds, and matched QA categories
Word confidence colors label the speech recognition's reliability level. Specifically, black text means the system is more than 90% sure, yellow between 50% and 90%, and red less than 50% (the system had difficulty identifying the correct words).
A red dot to identify the part of conversation played.
Data Redaction
Redaction with batch transcriptions is available only with languages US English "en-US" and US Spanish "es-US".
This function allows users to choose whether to enable the data redaction process before starting the post-call analysis. When this mode is enabled and you launch a Post-Call Analytics, sensitive information (credit cards, phone numbers, addresses, etc.) is hidden in the transcript returned by AWS (sensitive data is masked with asterisks).
When launching a post-call analytics, if you choose “English US” or “Spanish US” as the language code, you can manually activate the redaction, if you want (the option is disabled by default), by selecting the option “Allow data redaction”.
When the post-call analytics is finished, click on the eye icon to view the content: the sensitive data in the conversation is obscured with asterisks.
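Purely to illustrate what a redacted transcript looks like, the snippet below masks digits with asterisks. The actual redaction is performed server-side by AWS, not by XCALLY or this sketch:

```shell
# Illustration only: replace every digit with an asterisk, mimicking how
# sensitive data appears in a redacted transcript.
line='My card number is 4111 1111 1111 1111'
masked="$(printf '%s\n' "$line" | sed 's/[0-9]/*/g')"
echo "$masked"
```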
Troubleshooting
If you see an error like this, you can follow this procedure to check whether the Redis container started correctly:
connect to the machine via SSH and launch this command as root user
docker ps
In the list of active containers, a container using the Redis image (IMAGE column) and named “bullmq-v1” (NAMES column) should appear.
In the PORTS column you can also see that host port 21000 is mapped (in this case to internal port 6379 on the container).
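The check above can also be scripted. The snippet below greps a captured sample line rather than live output, so the sample string is an assumption matching the format described; on a live system you would pipe `docker ps --format '{{.Names}} {{.Ports}}'` into the same grep:

```shell
# Sketch: detect the Redis container from a docker ps-style line
# (sample text below is illustrative, not live output).
sample='bullmq-v1 0.0.0.0:21000->6379/tcp'
if printf '%s\n' "$sample" | grep -q '^bullmq-v1 .*21000->6379'; then
  status="bullmq-v1 running, host port 21000 mapped to 6379"
else
  status="bullmq-v1 not found"
fi
echo "$status"
```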
If running the docker ps command returns bash: docker: command not found (meaning Docker is not installed), or no Redis container named bullmq-v1 appears, you need to run the following script as root user to install and start the container. The script should start the Redis container on port 21000, but it’s important to verify it.
Consider that launching this script causes a service disruption, because it restarts the Motion service and updates the NGINX configuration (creating a backup file), so it is recommended to launch it while the server is not in use.
NGINX Best Practice: our configuration files should not be modified. If you need to customise the nginx conf file, it is strongly recommended not to modify our file, but to create a copy and customise your own, because we overwrite the configuration file each time we update it (for example, when a script is launched, the nginx file is modified). In any case, the script saves a backup copy of the modified nginx file and highlights on screen the differences between versions when it is run.
curl -u 'public:bs4#)W]h8+VK),RV' --silent --location https://repository.xcally.com/repository/provisioning/Scripts/motionV3_new_feature_update | bash
Then you need to check the environment variables in the .env file:
XC_QA_QUEUE_WORKERS=10 # Quality Analysis SECTION
XC_QA_REDIS_PORT=21000 # timings redis port
XC_QA_REDIS_DB=0 # timings redis db
XC_QA_REDIS_USERNAME=
XC_QA_REDIS_PASSWORD=
XC_QA_REMOVE_FAILED_JOBS_AFTER=3600 # 1 hour
XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800 # 7 days
Finally, switch to the motion user:
su - motion
go to the folder:
cd /var/opt/motion2
and launch this command to apply the changes to the environment variables:
npm run initialize
Consider that it restarts the API and can cause a service disruption, so it is recommended to launch it while the server is not in use.