In this tutorial, we will learn how to use ChatGPT with Rasa.
ChatGPT is a chatbot built on a Large Language Model (LLM), created by OpenAI, that kicked off a whole new wave in the artificial intelligence industry.
Rasa, on the other hand, is an open-source framework that allows developers to build, improve, and deploy tailor-made chatbots that can understand and respond to natural language input in both text and voice mode.
You likely know about ChatGPT's capabilities, with a rapidly growing number of applications being built on the technology. One question may arise in your mind: if ChatGPT is this powerful, why do we need Rasa?
Rasa provides a way to define the bot response that is sent back to the user once an intent is detected from the user's query. This gives us control over what type of response we send back to the user. That control is hard to achieve with ChatGPT alone in a chatbot scenario, as we don't have much say over the responses ChatGPT generates.
What we are going to do today is combine the power of ChatGPT with Rasa!
Rasa is a powerful tool when it comes to catering to one specific domain at a time. However, it starts behaving incorrectly for off-topic user queries. For example, a health-based chatbot will not be able to answer questions about the stock market and may return a random response.
That's where ChatGPT comes into play. We will integrate ChatGPT in such a way that if the user strays from the course of the conversation and asks something that is not handled by our Rasa chatbot, ChatGPT will handle it without causing the conversation to deviate from its path.
Prerequisites
- Basic Rasa understanding: If you are new to Rasa, I would suggest you go through How to Create a ChatBot using Rasa and Python — A Step-by-Step Guide (Simple).
- Basic understanding of ChatGPT: If you are new to ChatGPT, please check out How to use OpenAI ChatGPT API in Python.
Coding Section
We are going to use the code for Rasa from here: How to Create a ChatBot using Rasa and Python — A Step-by-Step Guide (Simple) and add ChatGPT functionality in it.
To add ChatGPT functionality we need to do three things:
- Update our config.yml file to handle nlu_fallback cases.
- Update the default fallback method in actions.py to add ChatGPT support.
- Add the action to the actions section of your domain.yml file.
To update config.yml, replace the previous content with this one:
# The config recipe.
# https://rasa.com/docs/rasa/model-configuration/
recipe: default.v1

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: "char_wb"
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 120
    entity_recognition: False
  - name: ResponseSelector
    epochs: 100
  - name: FallbackClassifier
    threshold: 0.85

policies:
  - name: TEDPolicy
    max_history: 10
    epochs: 20
  - name: AugmentedMemoizationPolicy
    max_history: 6
  - name: RulePolicy
    core_fallback_threshold: 0.4
    core_fallback_action_name: "action_default_fallback"
    enable_fallback_prediction: True
Our main focus here is the RulePolicy. We can see that it contains three settings. Let's understand them one by one.
- core_fallback_threshold: When a user asks a question, Rasa first predicts the intent along with a confidence score for that prediction. This threshold sets the confidence value below which the prediction is treated as nlu_fallback.
- core_fallback_action_name: The name of the default action that will be called when nlu_fallback occurs.
- enable_fallback_prediction: Setting this parameter to true is important when conversing with a Rasa-based chatbot. Typically, the chatbot follows a predefined conversation path in stories.yml. However, if the user types something that falls outside of this path, the chatbot can get stuck and not know how to get back on track. By setting the parameter to true, we allow the chatbot to remember the conversation path. When we use ChatGPT to handle an off-path query, the chatbot can then return to the same conversation path and continue the conversation smoothly.
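Conceptually, the threshold check works like the small, hypothetical helper below (the real decision is made inside Rasa's RulePolicy; this is only a sketch of the comparison it performs):

```python
def should_fallback(intent_confidence: float, threshold: float = 0.4) -> bool:
    """Return True when the predicted confidence falls below the threshold,
    which is when Rasa triggers action_default_fallback."""
    return intent_confidence < threshold

# A confident prediction stays on the normal conversation path...
print(should_fallback(0.92))  # False
# ...while a low-confidence one is routed to our ChatGPT fallback.
print(should_fallback(0.25))  # True
```

With the config above, any user message whose intent confidence lands below 0.4 is handed to the fallback action we define next.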
The config.yml changes are done. Let's now update our actions.py file with the following code.
# This file contains your custom actions which can be used to run
# custom Python code.
#
# See this guide on how to implement these actions:
# https://rasa.com/docs/rasa/custom-actions

from typing import Any, Text, Dict, List

import requests
from rasa_sdk import Action, Tracker
from rasa_sdk.events import UserUtteranceReverted
from rasa_sdk.executor import CollectingDispatcher


class ActionDefaultFallback(Action):
    def name(self) -> Text:
        return "action_default_fallback"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Get the latest user message from the Rasa tracker.
        user_message = tracker.latest_message.get("text")

        url = "https://api.openai.com/v1/chat/completions"
        headers = {
            "Authorization": "Bearer ADD_YOUR_OPENAI_API_KEY_HERE",
            "Content-Type": "application/json",
        }
        data = {
            "model": "gpt-3.5-turbo",
            "messages": [
                {
                    "role": "system",
                    "content": "You are an AI assistant for the user. You help to solve user queries.",
                },
                {"role": "user", "content": user_message},
            ],
            "max_tokens": 100,
        }

        response = requests.post(url, headers=headers, json=data)
        if response.status_code == 200:
            chatgpt_response = response.json()
            message = chatgpt_response["choices"][0]["message"]["content"]
            dispatcher.utter_message(message)
        else:
            # Tell the user something went wrong instead of failing silently.
            dispatcher.utter_message(
                "Sorry, I couldn't generate a response at the moment. "
                "Please try again later."
            )

        # Revert the user message which led to the fallback.
        return [UserUtteranceReverted()]
We have created a class inheriting from the Action class, with the same name we defined in our config.yml file under core_fallback_action_name. When the predicted confidence for a user query falls below the set threshold, Rasa invokes this action.
It calls the ChatGPT API and sends back the response generated by the model. You can adjust the system prompt to make the model behave as per your requirements.
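For reference, here is roughly what the chat completions response body looks like, trimmed to the fields the action actually reads (the content string is made up for illustration):

```python
# A trimmed example of the JSON returned by the chat completions endpoint.
chatgpt_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The stock market is a place where shares are traded.",
            },
            "finish_reason": "stop",
        }
    ]
}

# The reply text lives at choices[0].message.content, which is exactly
# what the action forwards to dispatcher.utter_message().
message = chatgpt_response["choices"][0]["message"]["content"]
print(message)
```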
You may be wondering why we are returning a UserUtteranceReverted event at the end. This is connected to the last RulePolicy setting, enable_fallback_prediction. Once the user strays from the main path of the conversation and asks something off-topic, we answer with ChatGPT's help and then take the user back to the same conversation path by reverting this off-topic turn from the user's conversation tracker.
Moving on to the last change, update your domain.yml to add the action.
version: "3.0"

intents:
  - greet
  - goodbye
  - affirm
  - deny
  - mood_great
  - mood_unhappy
  - bot_challenge

responses:
  utter_greet:
    - text: "Hey! How are you?"
  utter_cheer_up:
    - text: "Here is something to cheer you up:"
      image: "https://i.imgur.com/nGF1K8f.jpg"
  utter_did_that_help:
    - text: "Did that help you?"
  utter_happy:
    - text: "Great, carry on!"
  utter_goodbye:
    - text: "Bye"
  utter_iamabot:
    - text: "I am a bot, powered by Rasa."

actions:
  - utter_greet
  - utter_cheer_up
  - utter_did_that_help
  - utter_happy
  - utter_goodbye
  - utter_iamabot
  - action_default_fallback

session_config:
  session_expiration_time: 60
  carry_over_slots_to_new_session: true
Test our updated bot
We have completed our changes. Go ahead and re-train your model with the rasa train command, then start your Rasa server with rasa run --enable-api and, in another terminal, your action server with rasa run actions. Once started, it will look something like this.
Let's try our integration of ChatGPT with Rasa by starting the conversation with a greeting, then asking something off-topic, and seeing whether our approach works.
Here are the snapshots for the conversation.
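If you prefer the command line over a chat UI, you can also exercise the bot through Rasa's REST channel. The helper below is a sketch that assumes the server is running locally on the default port 5005:

```python
REST_URL = "http://localhost:5005/webhooks/rest/webhook"  # default Rasa REST channel

def ask_bot(message, sender="test_user", url=REST_URL):
    """Send one message over Rasa's REST channel and return the bot's replies."""
    import requests  # imported here so defining the helper needs no server

    # sender identifies the conversation; message is the user's text.
    payload = {"sender": sender, "message": message}
    response = requests.post(url, json=payload)
    # Rasa replies with a list of bot messages: [{"recipient_id": ..., "text": ...}]
    return [m.get("text") for m in response.json()]

# With both the Rasa server and the action server running, this would
# print the bot's answer (off-topic questions go through ChatGPT):
# print(ask_bot("what is the stock market?"))
```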
You can see that our integration of ChatGPT with Rasa has actually worked!!
You can find the code on 🖥️ GitHub | Feel free to give ⭐