OpenAI bots
What's about?
AVAILABLE FROM VERSION 3.36.0
The OpenAI bots section (Tools → OpenAI bots) allows you to configure ChatGPT bots for the Voice channel and other channels
Requirements
The section is visible only if the AI feature is enabled on your license
You need to configure an OpenAI Cloud Provider to use a ChatGPT bot
As a user, you need permissions on this section to view the created bots
You also need an OpenAI account: remember that bot costs are billed directly to your OpenAI account and depend on the number of AI-managed conversations
How to configure an OpenAI bot
By clicking on the + button, you can create a new OpenAI bot
You can then configure your bot:
Choose your OpenAI Cloud Provider Account
ChatGPT Assistant: you can choose from the list of assistants configured on your provider, or select "no assistant" and continue with the configuration.
In fact, starting from version 3.35.0 you can also use Assistants inside the bot configuration.
Explore the related wiki page for more information.
ChatGPT Model: several models are available, each with different capabilities and prices (expressed per 1M or per 1K tokens). You can explore the OpenAI ChatGPT Pricing documentation to understand which model to use according to your needs. This is a handpicked list of ChatGPT default models:
gpt-3.5-turbo: The fastest model. Same responses as the free ChatGPT experience;
gpt-4: More creative than 3.5 but slower;
gpt-4-32k: Same as gpt-4 but allows longer conversations.
gpt-4-turbo
gpt-4o
To understand which model to use, you can explore the OpenAI documentation
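As a practical reference, here is a minimal sketch (assuming the official openai Python SDK; the API key below is a placeholder) of how you could list the models your OpenAI account can actually access before choosing one for the bot. This is only an aid for exploring models and is not part of the XCALLY configuration itself:

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # placeholder: use your own key

# List the models available to your account and keep only the chat (gpt-*) ones
for model in client.models.list():
    if model.id.startswith("gpt-"):
        print(model.id)
```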
Name of the ChatGPT bot: this name will be saved to the XCALLY database to help identify which bot is talking
Max conversation length in ChatGPT tokens (for each session): you can think of tokens as pieces of words, where 1000 tokens is about 750 words. When your session reaches this number of tokens, the conversation has to end. You can see how ChatGPT calculates tokens at this link: OpenAI Tokenizer
The maximum conversation length that you can enter in the configuration is 4000 tokens
Remember that this parameter considers the sum of the customer's input messages + ChatGPT's output messages + the length of the inserted prompt
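If you want to estimate in advance how many tokens a message or prompt consumes, a minimal sketch using the tiktoken library (gpt-3.5-turbo is chosen here only as an example; the sample text is illustrative) looks like this:

```python
import tiktoken

# Get the tokenizer used by the chosen model
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Hello, I need help with my last order."
tokens = encoding.encode(text)
print(f"{len(tokens)} tokens for {len(text.split())} words")
```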
Exit Phrase: this is the key phrase that ends the chat with the bot. When the bot says it, the chat moves forward to the next action application in the routing (e.g. it is forwarded to a queue with human operators). In this case the sentence has to be "I am redirecting you to a human operator".
Welcome message: this message will be sent as the first message in every new chat
Instructions: these are the bot "training" instructions that explain what it has to say, which questions it can manage, the phrase it has to say when it doesn't know how to help the customer, and so on.
If you want to manage the re-routing from the bot to a human agent, you must explicitly include these key phrases, literally, in the instructions:
If you don't know how to help user, tell the customer "I am redirecting you to a human operator".
If customer asks to be redirected to human operator, tell the customer "I am redirecting you to a human operator".
Consider that you can enter the prompt in a specific language, but ChatGPT supports automatic language detection: if the customer writes in another language, the bot will understand it and write the next messages in the customer's language.
Let's explore in this paragraph how you can configure the instructions (and see in the official OpenAI documentation how to properly configure the prompt: https://platform.openai.com/docs/guides/prompt-engineering/prompt-engineering).
ChatGPT Temperature: a creativity parameter ranging from 0.0 to 2.0. A lower temperature is recommended for more consistent results. At 0 the bot does literally what is written in the prompt; at 1 it will try to fill the gaps in its knowledge; at 2 it will generate very creative results, which can be factually incorrect.
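As a conceptual illustration of how these fields relate to OpenAI's Chat Completions API (a hedged sketch, not XCALLY's internal implementation; the shop scenario, API key and messages are placeholders), the instructions act as the system message and the temperature is passed as a request parameter:

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # placeholder: use your own key

# Instructions equivalent to what you would paste in the bot configuration,
# including the literal exit phrase required for re-routing to a human agent
instructions = (
    "You are a customer support bot for an online shop. Answer in a concise way. "
    'If you don\'t know how to help user, tell the customer "I am redirecting you to a human operator". '
    'If customer asks to be redirected to human operator, tell the customer "I am redirecting you to a human operator".'
)

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.2,  # low value: more consistent, prompt-faithful answers
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Hi, where is my order?"},
    ],
)
print(response.choices[0].message.content)
```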
Analyze chat to know if you should pass to an agent: when this option is enabled, the Escape Prompt you insert is sent to ChatGPT as a system message, so that it can recognize when the chat with the bot should be interrupted and passed to an agent.
The prompt must ask ChatGPT to answer in JSON format, with the following keys: CHAT_PROGRESS and REASON.
CHAT_PROGRESS is a number and must be 0 if the chat has to continue, or 1 if:
the customer wants to talk with an operator;
the bot cannot help the customer;
the bot has finished the questions inserted in the prompt.
REASON is the value containing the reason why CHAT_PROGRESS is set to 1.
An example of Escape Prompt can be: "Answer in a JSON format. The key is 'chatProgress', which is equal to '1' if the customer asks for a human operator, or the virtual assistant talks about passing to a human operator. Else it is equal to '0'. Write also another key named REASON where you write why you think chat must pass to an operator."
You can customize the key name for chatProgress. It is mandatory to include the word JSON in the prompt, while the REASON key is optional
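To make the expected structure concrete, here is a minimal sketch of how such an answer could be interpreted (the JSON string is a hypothetical example consistent with the prompt above, not an actual ChatGPT response, and the handover step is only printed for illustration):

```python
import json

# Hypothetical answer returned by ChatGPT for the escape prompt above
raw_answer = '{"chatProgress": 1, "REASON": "The customer asked for a human operator."}'

data = json.loads(raw_answer)
if int(data.get("chatProgress", 0)) == 1:
    # At this point the chat would be handed over to the next action
    # application in the routing (e.g. a queue with human operators)
    print("Hand over to agent:", data.get("REASON"))
else:
    print("Bot keeps handling the conversation")
```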
Forward message: the message sent when the max token limit is reached
Error message: the message sent when there is an error with the OpenAI APIs
ChatGPT Attachment Message: if the customer sends an attachment, this message will be sent
How does it work?
Once bots are configured in this section, you can retrieve them in the action flows of the different channels (Chat, SMS, Open Channel, WhatsApp) and in the Cally Square OpenAI ChatGPT block. If, for example, you decide to insert a ChatGPT bot as the first application, all messages will be managed by the bot until it doesn't know what to say, and the chat will then pass to the next block configured in the action flow (for example, a queue). When the chat enters a queue and is assigned to an agent, ChatGPT is disabled.
When selecting the ChatGPT block, you need to choose the OpenAI Cloud Provider account and the OpenAI bot you created
Scenarios
These are some possible scenarios after starting a chat with ChatGPT (followed by a queue application):
Every interaction starts with a welcome message
Chat with ChatGPT:
If ChatGPT says the exit phrase, the interaction will be passed to the queue
If ChatGPT can't help the user, the interaction will be passed to the queue
If the total tokens used reach the max tokens value, the forward message will be shown and the interaction will be passed to the queue
If the application runs into an error, the error message will be shown and the interaction will be passed to the queue
During the chatbot conversation, agents cannot manage it (but if an admin wants to see the bot conversation, it is possible to use the spy section).
In fact, while the chatbot talks with the user, interactions are created on the account and are visible to all associated users as unassigned interactions. If an agent clicks on an unassigned interaction, XCALLY assigns it to that operator and the bot stops replying from the next message onwards.
Instructions tips and tricks
Insert "Answer in a concise way" in the prompt to push ChatGPT to use as few words as possible while still being useful, so you spend less in billed tokens
ChatGPT can be used to build multi-step dialogues, with prompt instructions like:
“If the customer asks for help with an order, you have to request the order ID.
When the customer answers with that information, then you have to ask for the order date.
When the customer tells you this information, then redirect to the operator”
If you realize that, despite the configured prompt, ChatGPT is not replying the way you want, you can try changing the ChatGPT model or the temperature