OpenAI bots
Overview
FROM VERSION 3.36.0
The OpenAI bots section allows you to configure ChatGPT bots for Voice and other channels. From the same section you can also configure DeepSeek bots for the Voice channel.
Benefits
The use of the chatbot ensures:
Instant Response Time: Bots can instantly reply to inquiries, reducing wait times and improving customer satisfaction by providing immediate assistance;
Cost Efficiency: by automating routine tasks and handling common customer queries, Bots free up human agents to focus on more complex issues, lowering operational costs;
Scalability: Bots can handle a large volume of interactions simultaneously, allowing businesses to scale their customer support without needing to add more staff.
Requirements
You need to configure an OpenAI Cloud Provider to use a ChatGPT bot
You need to configure a DeepSeek Cloud Provider to use a DeepSeek bot
As a user, you need permissions on this section to view the created bots
Bot costs will be billed:
on your OpenAI or DeepSeek account, and
on XCALLY based on the number of AI Conversations tracked in the AI Conversation Dashboards
OpenAI bots section
This page describes the OpenAI configuration on the New Client Experience, available from version 3.50.0. We recommend using the New Experience web interface to take full advantage of the latest features.
Under the Tools → OpenAI bots section you can find the list of created bots (OpenAI or DeepSeek):
When you enter the OpenAI bots section you find the list of created bots, which you can edit or delete by clicking the three-dot button (⋮) next to the bot of interest.
Furthermore, in the Tools → OpenAI bots section you can:
search for a specific bot
set and clear filters
manage columns (starting from version 3.51.0, you will see also the column Provider)
activate the advanced search for each field
export data to CSV, if you select at least one bot
edit a specific bot
create a new bot
Create an OpenAI bot
To create a new OpenAI bot click the Add button, under Tools → OpenAI.
Configure the fields:
Choose your OpenAI Cloud Provider Account
Select, if needed, the ChatGPT Assistant. Alternatively, select “No assistant”.
FROM VERSION 3.35.0
You can also use Assistants inside the bot configuration.
Explore the relative wiki for more information.
Select the ChatGPT Model. Please note that each model has different capabilities and prices, expressed per 1M or 1K tokens.
Define a name for the ChatGPT bot
Set the Max conversation length in ChatGPT tokens for each session
Enter the Exit Phrase to terminate the chat with the bot. In this way the chat will move forward to the next action application in the routing.
Enter the Welcome message that will be sent as first message in every new chat interaction
Write the bot Instructions
Set ChatGPT Temperature, the parameter related to creativity from 0.0 to 2.0.
Enable, if needed, Analyze chat to know if you should pass to an agent. If enabled, the escape Prompt inserted is a System message sent to ChatGPT to recognize when the chat with the bot should be interrupted and passed to an agent.
Set Forward message, sent when max token limit is reached
Set Error message, sent when there is an error with OpenAI APIs
Set ChatGPT Attachment Message, sent in case of an attachment
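To see how these fields relate to each other, here is a minimal sketch of how the configured values could map onto a Chat Completions-style request body. The field names mirror the OpenAI Chat Completions API; the model name, instructions, and messages below are illustrative assumptions, not XCALLY's actual internals.

```python
def build_chat_payload(model, instructions, temperature, history, user_message):
    # Assemble a Chat Completions-style request body from the bot settings:
    # the Instructions become the system message, the Welcome message and
    # previous turns form the history, and Temperature is passed through.
    messages = [{"role": "system", "content": instructions}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "temperature": temperature, "messages": messages}

payload = build_chat_payload(
    model="gpt-4o-mini",  # hypothetical model choice
    instructions="You are a support bot. Answer in a concise way.",
    temperature=0.7,
    history=[{"role": "assistant", "content": "Hello! How can I help you?"}],  # Welcome message
    user_message="Where is my order?",
)
```

The Max conversation length then caps the total tokens accumulated across these messages over the session.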
Create a DeepSeek bot
FROM VERSION 3.51.0
Requirements:
Setup a DeepSeek Cloud Provider with a valid API key and credit loaded on the account.
Discover DeepSeek models and pricing in the official documentation: Models & Pricing | DeepSeek API Docs
Use the XCALLY new Client Experience
If you don’t use the XCALLY New Experience interface, you can’t configure a DeepSeek bot
To create a new DeepSeek bot, click the blue + button located at the bottom-right corner of the interface, under Tools → OpenAI.
Configure the fields:
In the Bot provider service section, you can choose the DeepSeek provider and the related DeepSeek cloud provider account
Select the DeepSeek Chat Model, choosing between:
deepseek-chat: this is the recommended model because it has lower latency
deepseek-reasoner
The Name of the bot will be predefined, but you can customize it
Max conversation length in bot tokens (for each session): you can think of tokens as pieces of words, where 1000 tokens is about 750 words. When your session reaches this number of tokens, the conversation has to end. You can see how DeepSeek calculates tokens at this link:
Token & Token Usage | DeepSeek API Docs
Exit Phrase: this is the key phrase to finish the chat with the bot. In this way the chat will move forward to the next action application in the routing
Welcome message: this message will be sent as the first message in every new chat
Instructions: these are the bot training “instructions” that explain what it has to say, which questions it can manage, the phrase it has to say when it doesn't know how to help the customer, and so on.
If you want to manage the re-routing from the bot to a human agent, it is necessary to explicitly include this key phrase literally in the instructions:
If you don't know how to help user, tell the customer "I am redirecting you to a human operator".
If customer asks to be redirected to human operator, tell the customer "I am redirecting you to a human operator".
Bot Temperature: parameter related to creativity, from 0.0 to 2.0. A lower temperature is recommended for more solid results. At 0, the bot will do literally what is written in the prompt; at 1, it will try to fill the gaps in its knowledge; at 2, it will generate very creative results that can be factually incorrect.
Analyze chat to know if you should pass to an agent: by enabling this option, the escape Prompt inserted is a System message sent to DeepSeek to recognize when the chat with the bot should be interrupted and passed to an agent.
It’s recommended to enable this option because DeepSeek does not correctly transpose the command that is supposed to say “I am redirecting you to a human operator”.
You need to enter an answer in JSON format, with the following keys: CHAT_PROGRESS and REASON.
Forward message: message sent when the max token limit is reached
Error message: message sent when there is an error with the DeepSeek APIs
Attachment Message: in case of an attachment, this message will be sent
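The escape-prompt answer described above (JSON with the keys CHAT_PROGRESS and REASON) lends itself to a simple parsing step. The sketch below shows one way such an answer could be interpreted; the concrete CHAT_PROGRESS values ("CONTINUE" / "STOP") are an assumption for illustration and should match whatever values you instructed the model to use in your escape prompt.

```python
import json

def should_hand_off(model_answer: str):
    # Parse the escape-prompt answer (JSON with CHAT_PROGRESS and REASON keys)
    # and decide whether the chat should be passed to a human agent.
    try:
        data = json.loads(model_answer)
    except json.JSONDecodeError:
        # Malformed answer: keep the bot in the loop rather than drop the chat.
        return False, "unparsable answer"
    return data.get("CHAT_PROGRESS") == "STOP", data.get("REASON", "")

hand_off, reason = should_hand_off(
    '{"CHAT_PROGRESS": "STOP", "REASON": "user asked for an operator"}'
)
```

With this answer, `hand_off` is true and `reason` carries the model's explanation, so the routing can move the interaction to the next application.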
How does it work?
After you complete the configuration of bots in this section, you can use them in the action flows of different channels (Chat, SMS, Open Channel, WhatsApp) and in the Cally Square OpenAI ChatGPT block or the Cally Square DeepSeek block.
For example, if you choose to insert a bot as the first application, all incoming messages will be managed by the bot until it reaches a point where it has no suitable response. At that moment, the chat automatically moves to the next block configured in the action flow (for instance, a queue). Once the chat is assigned to an agent, the bot is disabled.
When selecting the block, you must specify the OpenAI cloud provider account and the OpenAI bot previously created.
Scenarios
These are some possible scenarios (followed by queue application):
Every interaction starts with a welcome message
Chat with ChatGPT:
If ChatGPT says the exit phrase, the interaction will be passed to the queue
If ChatGPT can't help the user, the interaction will be passed to the queue
If the total tokens used reach the max tokens value, the forward message will be shown and the interaction will be passed to the queue
If the application runs into an error, the error message will be shown and the interaction will be passed to the queue
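The scenarios above amount to a single routing decision taken after each bot turn. The sketch below expresses that decision under hypothetical names; in practice the routing is handled by the XCALLY action flow, not by user code.

```python
def route_interaction(bot_reply, *, exit_phrase, tokens_used, max_tokens, error=False):
    # Decide what happens after each bot turn, mirroring the scenarios above.
    # Returns "queue" when the interaction should move to the next application
    # (a queue, in these examples) and "bot" when the bot keeps the chat.
    if error:
        return "queue"  # the error message is shown, then the queue takes over
    if tokens_used >= max_tokens:
        return "queue"  # the forward message is shown, then the queue takes over
    if exit_phrase.lower() in bot_reply.lower():
        return "queue"  # the bot said the exit phrase
    return "bot"        # the bot keeps handling the conversation

step = route_interaction(
    "I am redirecting you to a human operator",
    exit_phrase="redirecting you to a human operator",
    tokens_used=120,
    max_tokens=4000,
)
```

Here `step` is `"queue"`, because the bot's reply contains the exit phrase even though the token limit has not been reached.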
During a chatbot conversation, the agent cannot intervene or manage the interaction. However, if the administrator wishes to monitor the chatbot, it is possible to do so from the Interactions Spy section.
Interactions are created under the account and remain visible to all associated users as unassigned interactions while the chatbot is communicating with the user. If an agent clicks on one of these unassigned interactions, XCALLY automatically assigns it to that operator, and the bot stops interacting from the next message onward.
Instructions tips and tricks
Insert in the prompt "Answer in a concise way" to encourage ChatGPT to use as few words as possible while still being useful, spending less in billed tokens
ChatGPT can be used to make multistep dialogues, with prompt instructions like:
“If the customer asks for help with an order, you have to request the order ID.
When the customer answers with that information, then you have to ask for the order date.
When the customer tells you this information, then redirect to an operator”
If you notice that ChatGPT isn’t responding as expected despite the configured prompt, try adjusting the ChatGPT model or temperature settings to fine-tune its behavior and response style.