Panel

This feature is available from version 3.35.0. To use it, you need to enable Quality Analysis on your license.

With the New Client Experience, on Voice Recordings, it is possible to apply a Quality Analysis using the AI features.

Info

If you don't have the New Client Experience, you can still see most of the features described below. However, you might find some limitations.

📋 Voice Recordings on New Client Experience

Info

Starting from version 3.40.0 you will see the new GUI of the Voice Recordings section, designed to improve the User Experience.

On the New Client Experience, the Voice Recordings section shows more advanced search filters and custom columns.

image (12)-20240917-074023.jpg

It is possible to manage the columns and fields of the table, and the table width can be flexible, namely fitted to the surrounding space, or fixed, according to a specific dimension measured in pixels. You can also add as many columns as you need, by selecting the ones you want to view.

Moreover, you can customize the pagination from the bottom button defining the Number of items per page: 10, 20 or 50.

By clicking on the arrows icon you can order elements in ascending or descending order, while with the funnel icon of each column you can apply a series of filters to find specific voice recordings:

image-20240917-142531.png

Each column has its own filter operators and accepted values:

Column | Operator | Value
Type | starts with / contains / not contains / ends with / equals / not equals | internal, inbound, outbound or dialer
Unique ID | starts with / contains / not contains / ends with / equals / not equals | number or text value
Caller / Called or Connected (agent’s internal number) | starts with / contains / not contains / ends with / equals / not equals | text value
Queue on which the call arrived | starts with / contains / not contains / ends with / equals / not equals | text value
Agent who managed the conversation | starts with / contains / not contains / ends with / equals / not equals | text value
Rating | equals / not equals / less than / less than or equal to / greater than / greater than or equal to | numeric value
Audio | no filter | audio file
Duration | equals / not equals / less than / less than or equal to / greater than / greater than or equal to | numeric value
Created at | calendar to select a specific day or a time range
Disposition (1°, 2°, 3° level) | starts with / contains / not contains / ends with / equals / not equals | text value

To filter Voice Recordings, click on Apply Filters.

image-20240917-143835.png

Instead, with the clear icon you can remove all the applied filters.

If you click on a specific voice recording, you will see the edit modal, where you can insert a rating or a comment.

image-20240917-144026.png

Instead, if you click on the 3 dots button, you will see a menu to edit or delete the voice recording, download the file, start Transcribe, Sentiment Analysis and Post-Call Analytics.

image-20240917-144242.png
    Info

Remember that if you have never launched Transcribe, you will not see the Sentiment and Analytics options, because you must launch Transcribe first.

    A voice recording can have one or more Transcriptions, Post-Call Analytics, or Sentiment Analyses. To launch Transcribe, Sentiment Analysis and Post Call Analytics you can select the voice recording and use the icons on the top right. There is also a specific button to export files in .csv format.

image-20240917-074425.png

    Note

It’s possible to select more than one Voice Recording and then launch these actions, but consider that each Voice Recording can be analysed according to specific permissions; therefore, not all the actions are enabled for every Voice Recording.
Click on the three dots menu next to each audio recording to see which actions are enabled.

It is also possible to see the results of all the actions relative to a Voice Recording, by clicking on a specific recording. From this interface, you can see the details of the Voice Recording, which cannot be modified, and the number of actions taken (e.g. 11 Transcriptions).

    image-20240517-081254.png

Here, you can see the details of each transcription, with an indication of Date (default descending sorting), Status, Service (Amazon AWS or OpenAI), Language code and the Display column (represented by the eye icon).

    image-20240517-081306.png

    Status can be:

    • New: the job has just been created and is waiting for processing

    • UploadingData: the file is being uploaded to the provider's server

    • InProgress: the provider is processing the uploaded file

    • Unknown: unknown status (due to some error)

    • Completed: processing completed successfully

• Failed: processing failed (in this case, if the provider gives an error message, a warning icon is placed next to the status, with a tooltip showing that message)

(info) Requirements

    Panel

For Quality Analysis, a Redis container and the following environment variables have been added to the installation script .env starting from Version 3.35.0.
So for all installations (new and existing ones, for which the update script should be executed) the variable values must be:

    Quality Analysis SECTION | XC_QA_QUEUE_WORKERS=10
    timings redis port | XC_QA_REDIS_PORT=21000
    timings redis db | XC_QA_REDIS_DB=0
    timings redis username | XC_QA_REDIS_USERNAME=
    timings redis password | XC_QA_REDIS_PASSWORD=
    1 hour | XC_QA_REMOVE_FAILED_JOBS_AFTER=3600
    7 days | XC_QA_REMOVE_COMPLETED_JOBS_AFTER=604800

See the Troubleshooting paragraph if you want to verify that the required Redis container is launched.
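As a quick sanity check, the presence of the variables above in the .env file can be verified with a small shell sketch. Note that `check_qa_env` is a hypothetical helper, not a tool shipped with the product, and the .env path shown in the usage comment is the installation folder mentioned in the Troubleshooting paragraph.

```shell
# Sketch: verify that all Quality Analysis variables from the table above
# are present in a given .env file. Prints any missing variable names.
check_qa_env() {
  # $1 = path to the .env file to check
  missing=0
  for var in XC_QA_QUEUE_WORKERS XC_QA_REDIS_PORT XC_QA_REDIS_DB \
             XC_QA_REDIS_USERNAME XC_QA_REDIS_PASSWORD \
             XC_QA_REMOVE_FAILED_JOBS_AFTER XC_QA_REMOVE_COMPLETED_JOBS_AFTER; do
    if ! grep -q "^${var}=" "$1"; then
      echo "missing: ${var}"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all XC_QA_ variables present"
  fi
  return "$missing"
}

# Example usage on the server (path assumed, adjust to your installation):
# check_qa_env /var/opt/motion2/.env
```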

    ✏️ Transcribe

You can launch a new transcription by selecting the Provider:

If you choose AWS, you can indicate the Region and Language from the dropdown menus.

    image-20240702-141832.jpg

If you choose OpenAI, you don’t need to indicate the language, because it is recognised automatically.

    image-20240702-141958.png

    By clicking on the eye button image-20240521-125138.png it is possible to see audio details:

    image-20240517-081539.png
• Two audio bars, indicating the User and Customer audio channels (available when split voice recordings is enabled in the Settings section).
  The right channel, represented by the agent, has downward values, while the left channel, containing the client’s audio, shows upward bands. By scrolling the vertical bar to a position on the plot, you can see the detail of the corresponding part of the transcript.

• Word confidence colors are labels indicating the speech recognition’s reliability level. Specifically, white text indicates above 90% confidence, orange around 50%, and red less than 50% (the system had difficulty identifying the correct words).
  It is possible to select Hide confidence to remove the above-mentioned colors from the text body.

    • On the left there is the conversation transcription, divided by the two speakers (User and Customer). The duration time of each exchange is reported in seconds

    • To see a more detailed view of the transcription, click on Show table visualisation

    image-20240517-081740.png

    From this visualisation, you can click on Show chat visualisation to turn back to the previous interface

    (heart) Sentiment Analysis

Sentiment analysis can be run on the latest transcription produced by AWS Transcribe, OpenAI Whisper or post-call analytics. This feature inspects the call transcript text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE, expressed as a percentage) with the corresponding confidence levels.

    Info

Sentiment Analysis can be run only if Transcribe is enabled; when selecting the AWS account to use, you always need to indicate Region and Language.

    image-20240517-082022.png

    The list of sentiment analysis shows the following columns:

    • Date (default descending sorting)

• Status (described above)

    • Service (only Amazon AWS)

    The detail on the right shows the sentiment of the entire conversation, with percentages for each individual category.

    📈 Post-Call Analytics

    Post-Call Analytics estimates how the customer and agent have been feeling throughout the call.

    Info

    Post-Call Analytics can be run only with an AWS account, by indicating Region and Language

    image-20240521-145939.png

    As columns, you can see Date, Status, Service (only Amazon AWS), Language code and Visualisation column (eye icon).

By clicking on a row, you can see the details of the analysis. This feature can be useful when the sentiment output is negative: the user can run the post-call analytics to find out in which parts of the conversation the issue occurred.

    Info

Starting from Version 3.36.0 you can also view the AWS Categories related to the post-call analytics.

    image-20240612-091618.png
    Info

Moving along the audio with the mouse, the system selects the corresponding speech bubble.

    You can view general details about the analysis:

• general sentiment, with scores assigned to the agent and the client

• Word confidence colors, indicating the speech recognition’s reliability level of the transcription

• matched categories: by default all created categories are searched within the conversation, but by clicking on this icon you can also filter for specific categories

    image-20240612-093432.png

    • audio file divided per speaker

    • conversation with relative duration of each part

    For each piece of conversation, you can view the speaker’s tones of the voice, expressed using emojis:

    image-20240517-081938.png

    (smile) = appropriate, non-aggressive tone

😐 = conversation improvable, but appropriate to the context (neutral emoji)

    😡 = threatening or aggressive tone, inappropriate vocabulary

Moreover, when categories are matched in the conversation by AWS (when the audio is sent for analysis), you will see a green star near the phrase and an orange bar to the left.

    image-20240612-092203.png

So it is also possible to track pauses, for example by filtering by the non_talk category to match moments in which pauses last at least the number of seconds entered in the category configuration (pause moments can also be found inside the speech bubbles of the conversation).

    image-20240612-093325.png

    image-20240612-093338.png

    image-20240612-093211.jpg

By selecting a piece of conversation, you can view its sentiment, matched categories and duration.

    👁️ Data Redaction

    Info

Starting from version 3.39.0, the Redaction feature is available only on the New Client Experience.
Redaction with batch transcriptions is available only with the languages US English "en-US" and US Spanish "es-US".

This function allows users to choose whether to enable the data redaction process before starting the post-call analysis. With this mode enabled, when you launch a Post-Call Analytics, sensitive information (credit cards, phone numbers, addresses, etc.) will be hidden by asterisks in the transcript returned by AWS.

    image-20240829-092504.png

When launching a post-call analytics, if you choose “English US” or “Spanish US” as the language code, you will be able to manually activate the redaction (the option is disabled by default) by selecting the option “Allow data redaction”.

When the post-call analytics is finished, by clicking on the eye icon to view the content, the sensitive data in the conversation will be obscured with asterisks.

    image-20240829-093109.png

    🔧 Troubleshooting

    To check if the Redis container is started correctly, you can follow this procedure:

• connect to the machine via SSH and launch this command as the root user

    Code Block
    docker ps

• in the list of active containers, a container using the Redis image (IMAGE column) and named “bullmq-v1” (NAMES column) should appear.
  Moreover, in the PORTS column you can see that the host port 21000 is mapped (in this case to internal port 6379 on the container).

    image-20240902-135816.png
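If you prefer not to read the docker ps output by eye, the same check can be scripted. The function below is a hypothetical helper (not a tool shipped with the product): it reads docker ps output from stdin and reports whether the bullmq-v1 container appears with port 21000 mapped.

```shell
# Sketch: parse `docker ps` output and confirm the bullmq-v1 Redis container
# is listed with host port 21000 mapped.
check_redis_container() {
  # reads `docker ps` output on stdin
  if grep 'bullmq-v1' | grep -q '21000'; then
    echo "bullmq-v1 Redis container is up with port 21000 mapped"
  else
    echo "bullmq-v1 Redis container not found or port 21000 not mapped"
  fi
}

# Example usage on the server:
# docker ps | check_redis_container
```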
    Note

If the output of the docker ps command is bash: docker: command not found (so Docker is not installed), or the Redis container named bullmq-v1 does not appear, you need to run this script as the root user to install and start the container. The script should start the Redis container on port 21000, but it’s important to verify it.

    Code Block
    curl -u 'public:bs4#)W]h8+VK),RV' --silent --location https://repository.xcally.com/repository/provisioning/Scripts/motionV3_new_feature_update | bash
• Then check the environment variables in the .env file (the variables published in the Requirements paragraph).

• Finally, as the motion user

    Code Block
    su - motion

go to the folder /var/opt/motion2 (cd /var/opt/motion2) and launch this command

    Code Block
    npm run initialize

to apply the changes to the environment variables. Consider that it restarts the API and can cause a disservice, so it is recommended to launch it while the server is not in use.