Bots

This section is visible only if the AI Bots option is enabled on your license.

 

OpenAI ChatGPT

available from VERSION 3.36.0


This block allows you to create a voicebot via ChatGPT, using an OpenAI account configured under the Cloud Providers section.

Requirements:

  • Set up an OpenAI Cloud Provider with a valid API key

  • Configure an OpenAI bot in Tools → OpenAI Bots

 


 

  • Label: here you can type a brief description

  • Choose an OpenAI bot: select the created bot from the list (mandatory field)

  • Text you want to send: e.g. the variable {GOOGLE_ASR_TRANSCRIPT} (see below the IVR project built with ASR and TTS blocks)

  • Sound to play while waiting (available from version 3.40.0): a playback sound, previously uploaded in the Tools/Sound section, played to the customer while waiting for the OpenAI response, e.g. “We are analyzing your request, we will get back to you shortly”, thus improving the user experience and avoiding audio gaps.
    This option is not mandatory; by default it is set to “No option chosen”. If you select a sound, it is played right after the HTTP request is sent, while the server waits for the OpenAI answer. The playback always runs to the end and is not interrupted even if OpenAI's response is ready before the audio completes (see the timing sketch after this list).
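The timing described above can be summarized with a minimal sketch (illustrative only, not the Cally Square implementation; durations are invented): the request and the waiting sound start together, and the reply is used only after the playback has run to the end.

    # Illustrative timing only: the waiting sound starts right after the
    # request is sent and always plays to the end, even if the OpenAI
    # response arrives first. Durations are made up for the example.
    import asyncio

    async def call_openai(text: str) -> str:
        await asyncio.sleep(1.5)          # pretend the API answers in 1.5 s
        return f"Bot reply to: {text}"

    async def play_waiting_sound() -> None:
        print("Playing: 'We are analyzing your request...'")
        await asyncio.sleep(4.0)          # the playback runs to completion
        print("Waiting sound finished")

    async def main() -> None:
        # Both coroutines start together; the reply is used once both are done.
        reply, _ = await asyncio.gather(call_openai("transcript"), play_waiting_sound())
        print("Now playing bot reply:", reply)

    asyncio.run(main())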

In the Cally Square context, you can configure a project that uses one of the ASR blocks to understand the conversation, the ChatGPT block to generate the response, and one of the TTS blocks to play the response.
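As a rough sketch of that flow, assuming the Python openai package and an API key (recognize_speech and synthesize_speech are hypothetical stand-ins for the ASR and TTS blocks, not Cally Square functions):

    # Conceptual outline of one ASR -> ChatGPT -> TTS turn, not Cally Square's
    # internal code. recognize_speech() and synthesize_speech() are hypothetical
    # placeholders for the Google ASR and TTS blocks; only the OpenAI call is real.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_OPENAI_API_KEY")    # key from the OpenAI Cloud Provider

    def recognize_speech() -> str:
        # Placeholder for the ASR block: would return GOOGLE_ASR_TRANSCRIPT
        return "I would like to book a table for two"

    def synthesize_speech(text: str) -> None:
        # Placeholder for the TTS block: would play the reply to the caller
        print("TTS:", text)

    transcript = recognize_speech()                   # ASR block
    completion = client.chat.completions.create(      # ChatGPT block
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": transcript}],
    )
    synthesize_speech(completion.choices[0].message.content)   # TTS block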

Requirements for the project:

  • Set up a Google-Type Cloud Provider with a valid API key

  • Set up an OpenAI Cloud Provider with a valid API key

  • Check that a variable called GOOGLE_ASR_TRANSCRIPT is available in Tools → Variables, or create it

  • Set in the TTS blocks the Google provider

  • Set in the ASR block the Google API key

  • Choose in the Set Variable block the variable GOOGLE_ASR_TRANSCRIPT

 

  1. In TTS blocks, set the Google provider

  2. In ASR block set the Google API Key

  3. In Set Variable block, choose GOOGLE_ASR_TRANSCRIPT

  4. In Tools → OpenAI Bots, configure an OpenAI bot with the parameters reported below (a request sketch follows the parameter list)

  • Model: "GPT-3.5-turbo" is recommended for simple bots; otherwise use "GPT-4" (slower, but better at handling unclear transcripts)

  • Max tokens: 2000

  • Exit phrase: "I am redirecting you to a human operator".

  • ChatGPT prompt:
    You are a voice bot assistant. Help users as much as possible.
    Answer in a concise way. Answer only in English.
    Answer like in a phone conversation.
    If the customer asks for a reservation, ask first for the day, then ask for the time.
    If the customer asks for warranty information, ask for the customer number, then redirect to a human operator.
    If the customer asks for a cancellation, ask for the customer number, and then the reservation number.
    If you don't know how to help the user, tell the customer "I am redirecting you to a human operator".

  • Temperature: 0.1

  • Analyze chat (RECOMMENDED ENABLED):
    Answer in a JSON format. The key is 'CHAT_PROGRESS', which is equal to '1' if chat is complete. Else it is equal to '0'. Write a key REASON about why CHAT_PROGRESS should be 1. If you say "Have a good day", CHAT_PROGRESS = 1. If you say "I am redirecting you to a human operator", CHAT_PROGRESS = 1.

  • Forward message: “I am redirecting you to a human operator”

  • Error message: “Sorry, I ran into an error, I am redirecting you to a human operator”

  • Attachment message: “/” (attachments cannot be sent to a voicebot)
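As a rough idea of how these settings map onto an OpenAI Chat Completions request (illustrative only; Cally Square builds the request internally from the bot configuration, and the prompt and user text below are shortened placeholders):

    # Rough mapping of the recommended bot settings onto a Chat Completions
    # request; not the product's actual code. The API key and user text are
    # placeholders.
    import json
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

    SYSTEM_PROMPT = (
        "You are a voice bot assistant. Help users as much as possible. "
        "Answer in a concise way. Answer only in English. "
        "Answer like in a phone conversation. "
        "If you don't know how to help the user, tell the customer "
        '"I am redirecting you to a human operator".'
    )

    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",          # or "gpt-4" for unclear transcripts
        max_tokens=2000,
        temperature=0.1,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "I would like to book a table for tomorrow"},
        ],
    )
    print(completion.choices[0].message.content)

    # With "Analyze chat" enabled, the analysis answer is JSON such as the
    # string below; '1' means the conversation is complete and the call can
    # be forwarded.
    analysis = '{"CHAT_PROGRESS": "1", "REASON": "The bot said goodbye"}'
    print(json.loads(analysis)["CHAT_PROGRESS"])      # -> "1"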


Dialogflow V2

available from rel. 2.5.7

This box allows you to build a voice bot using the Google Dialogflow integration.

Explore this documentation to find out How to retrieve Google Key for Cally Square blocks

 

  • Label: here you can type a brief description

  • Project ID: the Cloud Platform project ID

  • Client Email: the email address associated with the Service Account Key

  • Private Key: the private key associated with the Service Account Key

  • Language: the language you want to use for the bot

  • Text: the text you want to send

The DialogflowV2 block saves the results in the following variables (see the sketch after the list):

  • DIALOGFLOW_ACTION: Matched Dialogflow intent action name

  • DIALOGFLOW_ALLREQUIREDPARAMSPRESENT: True if all required parameter values have been collected (true/false)

  • DIALOGFLOW_ENDCONVERSATION: True when the 'end conversation' flag is set for the matched Dialogflow intent; useful when you want to transfer the call to an agent (true/false)

  • DIALOGFLOW_FULLFILLMENTTEXT: The text to be pronounced to the user or shown on the screen

  • DIALOGFLOW_INTENTNAME: The unique identifier of the intent

  • DIALOGFLOW_INTENTDISPLAYNAME: The display name of the intent

  • DIALOGFLOW_ISFALLBACKINTENT: True when the matched Dialogflow intent is the fallback intent (true/false)

  • DIALOGFLOW_LANGUAGECODE: The language that was triggered during intent detection

  • DIALOGFLOW_QUERYTEXT: User input

  • DIALOGFLOW_RESPONSEID: The unique identifier of the response

  • DIALOGFLOW_SCORE: Matching score for the intent (0-1)

  • DIALOGFLOW_SPEECH: Text to be pronounced to the user

  • DIALOGFLOW_RESOLVEDQUERY: The query that was used to produce the result.
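A minimal sketch of the equivalent detect-intent call, assuming the google-cloud-dialogflow Python client (project ID, client email, private key, session ID and input text are placeholders; the field-to-variable mapping in the comments follows the names listed above):

    # Illustrative detect-intent call; Cally Square performs the equivalent
    # request internally from the Project ID, Client Email and Private Key fields.
    from google.cloud import dialogflow_v2 as dialogflow
    from google.oauth2 import service_account

    credentials = service_account.Credentials.from_service_account_info({
        "type": "service_account",
        "project_id": "my-project-id",
        "client_email": "bot@my-project-id.iam.gserviceaccount.com",
        "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
        "token_uri": "https://oauth2.googleapis.com/token",
    })
    session_client = dialogflow.SessionsClient(credentials=credentials)
    session = session_client.session_path("my-project-id", "caller-session-1")

    text_input = dialogflow.TextInput(text="I want to book a table", language_code="en")
    response = session_client.detect_intent(
        request={"session": session, "query_input": dialogflow.QueryInput(text=text_input)}
    )
    qr = response.query_result

    # Approximate field-to-variable mapping:
    #   qr.action                      -> DIALOGFLOW_ACTION
    #   qr.all_required_params_present -> DIALOGFLOW_ALLREQUIREDPARAMSPRESENT
    #   qr.intent.end_interaction      -> DIALOGFLOW_ENDCONVERSATION
    #   qr.fulfillment_text            -> DIALOGFLOW_FULLFILLMENTTEXT / DIALOGFLOW_SPEECH
    #   qr.intent.name                 -> DIALOGFLOW_INTENTNAME
    #   qr.intent.display_name         -> DIALOGFLOW_INTENTDISPLAYNAME
    #   qr.intent.is_fallback          -> DIALOGFLOW_ISFALLBACKINTENT
    #   qr.language_code               -> DIALOGFLOW_LANGUAGECODE
    #   qr.query_text                  -> DIALOGFLOW_QUERYTEXT / DIALOGFLOW_RESOLVEDQUERY
    #   response.response_id           -> DIALOGFLOW_RESPONSEID
    #   qr.intent_detection_confidence -> DIALOGFLOW_SCORE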

Exit Arrows

This box provides just one arrow out to the next step

 


Dialogflow

Deprecated from rel. 2.5.7

This box allows you to build a voice bot using the Google Dialogflow integration (see clip1 and clip2 to learn more about this topic!).

 

  • Label: here you can type a brief description

  • Key: the client API key obtained from your console.dialogflow.com account

  • Text: the text you want to send

  • Language: the language you want to use for the bot

The Dialogflow block saves the results in the following variables:

Exit Arrows

This box provides just one arrow out to the next step


Amazon Lex

available from rel. 2.0.77

This box allows you to build a voice bot using the Amazon Lex integration.

For additional information see https://docs.aws.amazon.com/en_us/lex/latest/dg/getting-started.html

 

The Amazon Lex block saves the results in the following variables (see the sketch after the list):

  • AWS_LEX_INTENTNAME: The current user intent that Amazon Lex is aware of. (Read more)

  • AWS_LEX_MESSAGE: The message to convey to the user. 

  • AWS_LEX_MESSAGEFORMAT: The format of the response message. (Read more)

  • AWS_LEX_DIALOGSTATE: Identifies the current state of the user interaction. (Read more)

  • AWS_LEX_SLOTTOELICIT: If the AWS_LEX_DIALOGSTATE value is ElicitSlot, returns the name of the slot for which Amazon Lex is eliciting a value.

  • AWS_LEX_SLOT_*: The intent slots that Amazon Lex detected from the user input in the conversation (e.g. AWS_LEX_SLOT_PICKUPCITY)
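A minimal sketch of the equivalent Amazon Lex (V1) runtime call, assuming boto3 and configured AWS credentials (bot name, alias, region, user ID and input text are placeholders; the response-to-variable mapping follows the names above):

    # Illustrative PostText call against the Lex V1 runtime; Cally Square
    # performs the equivalent request internally from the block configuration.
    import boto3

    lex = boto3.client("lex-runtime", region_name="us-east-1")

    response = lex.post_text(
        botName="OrderFlowers",                 # placeholder bot name
        botAlias="prod",                        # placeholder alias
        userId="caller-12345",                  # one ID per conversation
        inputText="I would like to order flowers",
    )

    # Approximate response-to-variable mapping:
    #   response["intentName"]    -> AWS_LEX_INTENTNAME
    #   response["message"]       -> AWS_LEX_MESSAGE
    #   response["messageFormat"] -> AWS_LEX_MESSAGEFORMAT
    #   response["dialogState"]   -> AWS_LEX_DIALOGSTATE
    #   response["slotToElicit"]  -> AWS_LEX_SLOTTOELICIT
    #   response["slots"]         -> AWS_LEX_SLOT_* (e.g. slots["PickupCity"]
    #                                becomes AWS_LEX_SLOT_PICKUPCITY)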

Related topics