Bots
This section is visible only if AI Bots are enabled on your license
OpenAI ChatGPT
available from version 3.36.0
This block allows you to create a voicebot via ChatGPT, by configuring an OpenAI account under the Cloud Providers section.
Requirements:
Set up an OpenAI Cloud Provider with a valid API key
Configure an OpenAI bot in Tools → OpenAI Bots
Label: here you can type a brief description
Choose an OpenAI bot: select the created bot from the list (mandatory field)
Text: the text you want to send (e.g. the variable {GOOGLE_ASR_TRANSCRIPT}); see below the IVR project built with ASR and TTS blocks
Sound to play while waiting (available from version 3.40.0): a playback sound, previously uploaded in the Tools/Sound section, played to the customer while waiting for a response from OpenAI, such as "We are analyzing your request, we will get back to you shortly", thus improving the user experience and avoiding audio gaps.
This option is not mandatory and by default it is set to "No option chosen". If you select a sound, it is played right after the HTTP request is sent, while the server waits for the OpenAI answer. The playback always runs to the end and is not interrupted, even if OpenAI's response is ready before the audio completes.
In the Cally Square context, you can configure the project by using one of the ASR blocks to transcribe the conversation, then the ChatGPT block to generate the response, and one of the TTS blocks to play it back.
Requirements for the project:
Set up a Google-Type Cloud Provider or an AWS Cloud Provider with a valid API key
Set up an OpenAI Cloud Provider with a valid API key
Check if available or set these variables in Tools:
GOOGLE_ASR_TRANSCRIPT or AWS_ASR_TRANSCRIPT to send to OpenAI the text of what the user said
OPENAI_CHATGPT_RESULT to get the text response from OpenAI, passed then to TTS block
Example
In the TTS block, set the Google provider
In the ASR block, set the Google API key and the variable {GOOGLE_ASR_TRANSCRIPT}
In the OpenAI ChatGPT block, retrieve your OpenAI Bot configured in the Tools section
In the TTS block, set the variable {OPENAI_CHATGPT_RESULT} to play back the text response from OpenAI.
The text result is saved in the square_messages database table.
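As a reference, here is a minimal Python sketch of what the ChatGPT block conceptually does with the ASR transcript, using the official openai library. This is not the Cally Square implementation: the API key placeholder and the model name are assumptions; in the real flow the transcript comes from {GOOGLE_ASR_TRANSCRIPT} and the answer ends up in {OPENAI_CHATGPT_RESULT}.

```
# Illustrative sketch only (not the Cally Square implementation).
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # the key configured under Cloud Providers

# In the IVR flow this text would be the value of {GOOGLE_ASR_TRANSCRIPT}
transcript = "I would like to know the clinic's opening hours"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: the model is defined on the OpenAI bot in Tools
    messages=[{"role": "user", "content": transcript}],
)

# Roughly what ends up in {OPENAI_CHATGPT_RESULT} and in the square_messages table
chatgpt_result = response.choices[0].message.content
print(chatgpt_result)  # this text is then passed to the TTS block
```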
Dialogflow V2
This box allows you to build a voice bot using the Google Dialogflow integration
Explore this documentation to find out How to retrieve Google Key for Cally Square blocks
An Internet connection is required for this box to work
Remember:
This software is managed by others. Check if it works properly.
Label: here you can type a brief description
Project ID: Cloud Platform project ID
Client Email: email address associated to Service Account Key
Private Key: private key associated to Service Account Key
Language: the language you want to use for the bot
Text: the text you want to send
Please note that Dialogflow requires a valid Service Account Key and a sufficient amount of acquired credits.
Furthermore, this integration is purely experimental and may lead to unexpected behaviour.
The DialogflowV2 block saves the results in the following variables:
DIALOGFLOW_ACTION: Matched Dialogflow intent action name
DIALOGFLOW_ALLREQUIREDPARAMSPRESENT: True if all required parameter values have been collected (true/false)
DIALOGFLOW_ENDCONVERSATION: True when the 'end conversation' flag is set for the matched Dialogflow intent. It is useful when you want to transfer a call to an agent (true/false)
DIALOGFLOW_FULLFILLMENTTEXT: The text to be pronounced to the user or shown on the screen
DIALOGFLOW_INTENTNAME: The unique identifier of the intent
DIALOGFLOW_INTENTDISPLAYNAME: The display name of the intent
DIALOGFLOW_ISFALLBACKINTENT: True when the matched Dialogflow intent is the fallback intent (true/false)
DIALOGFLOW_LANGUAGECODE: The language that was triggered during intent detection
DIALOGFLOW_QUERYTEXT: User input
DIALOGFLOW_RESPONSEID: The unique identifier of the response
DIALOGFLOW_SCORE: Matching score for the intent (0-1)
DIALOGFLOW_SPEECH: Text to be pronounced to the user
DIALOGFLOW_RESOLVEDQUERY: The query that was used to produce the result.
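As a reference, here is a minimal Python sketch of an equivalent Dialogflow V2 detect-intent request made with the official google-cloud-dialogflow library, showing where the DIALOGFLOW_* values come from. This is not the Cally Square implementation; the project ID, session ID and sample text are placeholders, and authentication is assumed to come from the same Service Account Key configured on the block.

```
# Illustrative sketch only (not the Cally Square implementation).
from google.cloud import dialogflow

client = dialogflow.SessionsClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS points to the Service Account Key
session = client.session_path("my-project-id", "caller-session-1")  # Project ID plus a session id

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(
        text="I would like to know the clinic's opening hours",  # the Text field
        language_code="en-US",                                   # the Language field
    )
)

result = client.detect_intent(
    request={"session": session, "query_input": query_input}
).query_result

# Rough mapping to the block variables:
# DIALOGFLOW_QUERYTEXT         -> result.query_text
# DIALOGFLOW_FULLFILLMENTTEXT  -> result.fulfillment_text
# DIALOGFLOW_ACTION            -> result.action
# DIALOGFLOW_INTENTNAME        -> result.intent.name
# DIALOGFLOW_INTENTDISPLAYNAME -> result.intent.display_name
# DIALOGFLOW_ISFALLBACKINTENT  -> result.intent.is_fallback
# DIALOGFLOW_SCORE             -> result.intent_detection_confidence
# DIALOGFLOW_LANGUAGECODE      -> result.language_code
print(result.fulfillment_text)
```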
Exit Arrows
This box provides just one arrow out to the next step
TIP
Click here to learn more about voice bot triggers useful to enable conversation transcript view
Example - Healthcare scenario
Call comes in
A playback starts e.g. with the welcome message "Welcome, how can I help you?"
The customer says their request, for example:
"I would like to know the clinic's opening hours", or
"I would like to speak with an operator" or
"I would like to know the date of my reservation that I cannot remember"
AWS ASR turns the voice into text and sends it to Dialogflow V2, which understands the customer's intent
the call passes to the switch option (see the sketch after this example):
if the request is for the clinic hours, a playback will say them
if the request is to speak to an operator, a playback will say "please hold, we are transferring your call" and the interaction will pass to a queue
if the request is for a reservation time, a playback will say "Please hold for a moment, I search for the required information", the Database block retrieves the data and AWS Polly says "the date of your reservation is Saturday, March 9 at 11 a.m."
finally, the interaction is ended with a Hangup
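In the project designer this branching is configured with the Switch block; purely as an illustration, the following Python sketch mirrors the same logic in code form. The intent names and helper functions are hypothetical placeholders, not part of the product.

```
# Illustrative sketch only: the intent names and helpers below are hypothetical.
def play(text):                    # stands in for a Playback / TTS step
    print(f"[PLAYBACK] {text}")

def transfer_to_queue(queue):      # stands in for handing the call to a queue
    print(f"[QUEUE] transferring to {queue}")

def lookup_reservation(caller_id): # stands in for the Database block lookup
    return "Saturday, March 9 at 11 a.m."

intent = "reservation_date"        # e.g. the value of DIALOGFLOW_INTENTDISPLAYNAME

if intent == "clinic_opening_hours":
    play("Our clinic is open Monday to Friday, from 8 a.m. to 6 p.m.")
elif intent == "speak_to_operator":
    play("Please hold, we are transferring your call")
    transfer_to_queue("reception")
elif intent == "reservation_date":
    play("Please hold for a moment, I search for the required information")
    play(f"The date of your reservation is {lookup_reservation('caller-12345')}")
# finally, the interaction is ended with a Hangup
```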
Dialogflow
Deprecated from rel. 2.5.7
This box allows you to build a voice bot using the Google Dialogflow integration (click clip1 and clip2 to learn more about this topic)
An Internet connection is required for this box to work
Remember:
This software is managed by others. Check if it works properly.
Label: here you can type a brief description
Key: your acquired client api key from the console.dialogflow.com account
Text: the text you want to send
Language: the language you want to use for the bot
Please note that Dialogflow requires a valid key from the console.dialogflow.com website and a sufficient amount of acquired credits. Furthermore, it is purely experimental and may lead to unexpected behaviour.
The Dialogflow block saves the results in the following variables:
DIALOGFLOW_SOURCE: Request source name.
DIALOGFLOW_RESOLVEDQUERY: The query that was used to produce the result.
DIALOGFLOW_ACTION: Matched Dialogflow intent action name.
DIALOGFLOW_SPEECH: Text to be pronounced to the user.
DIALOGFLOW_SCORE: Matching score for the intent (0-1).
DIALOGFLOW_STATUSCODE: Response status code. For more information, please see https://dialogflow.com/docs/fulfillment#errors
DIALOGFLOW_ENDCONVERSATION: True when the 'end conversation' flag is set for the matched Dialogflow intent. It is useful when you want to transfer a call to an agent (true/false).
DIALOGFLOW_ISFALLBACKINTENT: True when the matched Dialogflow intent is the fallback intent (true/false).
Release notes
DIALOGFLOW_ENDCONVERSATION and DIALOGFLOW_ISFALLBACKINTENT variables are available from version 2.0.77
Exit Arrows
This box provides just one arrow out to the next step
TIP
Click here to learn more about voice bot triggers useful to enable conversation transcript view
Amazon Lex
available from rel. 2.0.77
This box allows you to build a voice bot using the Amazon Lex integration.
For additional information see https://docs.aws.amazon.com/en_us/lex/latest/dg/getting-started.html
An Internet connection is required for this box to work
Remember:
This software is managed by others. Check if it works properly.
Label: here you can type a brief description
Access Key ID and Secret Access Key: AWS security credentials. Required: Yes (see http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html)
Region: AWS regional endpoint. Required: Yes (see http://docs.aws.amazon.com/general/latest/gr/rande.html#pol_region)
Bot name: the bot name. Required: Yes
Text: the text you want to send. Required: Yes
The Amazon Lex block saves the results in the following variables:
AWS_LEX_INTENTNAME: The current user intent that Amazon Lex is aware of. (Read more)
AWS_LEX_MESSAGE: The message to convey to the user.
AWS_LEX_MESSAGEFORMAT: The format of the response message. (Read more)
AWS_LEX_DIALOGSTATE: Identifies the current state of the user interaction. (Read more)
AWS_LEX_SLOTTOELICIT: If the AWS_LEX_DIALOGSTATE value is ElicitSlot, returns the name of the slot for which Amazon Lex is eliciting a value.
AWS_LEX_SLOT_*: The intent slots that Amazon Lex detected from the user input in the conversation. (ex. AWS_LEX_SLOT_PICKUPCITY)
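As a reference, here is a minimal Python (boto3) sketch of an equivalent Amazon Lex runtime call, showing where the AWS_LEX_* values come from. This is not the Cally Square implementation; the bot name, alias, credentials and sample text are placeholders.

```
# Illustrative sketch only (not the Cally Square implementation).
import boto3

client = boto3.client(
    "lex-runtime",
    region_name="us-east-1",                       # the Region configured on the block
    aws_access_key_id="YOUR_ACCESS_KEY_ID",        # AWS security credentials
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

response = client.post_text(
    botName="OrderFlowers",                        # the Bot name field (placeholder)
    botAlias="$LATEST",
    userId="caller-12345",
    inputText="I would like to order flowers",     # the Text field
)

# Rough mapping to the block variables:
# AWS_LEX_INTENTNAME    -> response["intentName"]
# AWS_LEX_MESSAGE       -> response["message"]
# AWS_LEX_MESSAGEFORMAT -> response["messageFormat"]
# AWS_LEX_DIALOGSTATE   -> response["dialogState"]
# AWS_LEX_SLOTTOELICIT  -> response.get("slotToElicit")
# AWS_LEX_SLOT_*        -> response["slots"] (e.g. AWS_LEX_SLOT_PICKUPCITY)
print(response.get("message"))
```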
TIP
Click here to learn more about voice bot triggers useful for enabling conversation transcript view