Rapid LLM app development is now achievable with Chainlit, a feature-packed Python library. Its intuitive user interface and comprehensive feature set have sparked widespread interest. Its headline features are:
- Build LLM Apps fast: Integrate seamlessly with an existing code base or start from scratch in minutes
- Visualize multi-step reasoning: Understand the intermediary steps that produced an output at a glance
- Iterate on prompts: Deep dive into prompts in the Prompt Playground to understand where things went wrong and iterate
- Collaborate with teammates: Invite your teammates, create annotated datasets and run experiments together
- Share your app: Publish your LLM app and share it with the world (coming soon)
In this article we are going to create a Chainlit application that uses LlamaIndex at its core for the LLM app.
Requirements
To get started building an LLM app with Chainlit, we need the following:
- An OpenAI API key, which can be found here
- Chainlit, installed with the `pip install chainlit` command
- LlamaIndex, installed with the `pip install llama-index` command
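The app reads the key from the `OPENAI_API_KEY` environment variable, so it is worth sanity-checking that it is exported before launching. A tiny helper for this (hypothetical, not part of Chainlit or LlamaIndex):

```python
import os


def have_openai_key() -> bool:
    # Returns True only when OPENAI_API_KEY is set to a non-empty value.
    return bool(os.environ.get("OPENAI_API_KEY"))
```

If it returns `False`, export the key (e.g. `export OPENAI_API_KEY=...`) before starting the app.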
Coding Section
Once you have installed all the requirements, we just need to add a few lines of code in a `.py` file and voila! Your Chainlit app is ready to be served. Copy the following code and save it in a Python file.
```python
import os

import openai
from llama_index.query_engine.retriever_query_engine import RetrieverQueryEngine
from llama_index.callbacks.base import CallbackManager
from llama_index import (
    LLMPredictor,
    ServiceContext,
    StorageContext,
    load_index_from_storage,
)
from langchain.chat_models import ChatOpenAI
import chainlit as cl

openai.api_key = os.environ.get("OPENAI_API_KEY")

try:
    # rebuild storage context
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    # load index
    index = load_index_from_storage(storage_context)
except Exception:
    from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

    # Provide the folder path here to read files
    documents = SimpleDirectoryReader("./data").load_data()
    index = GPTVectorStoreIndex.from_documents(documents)
    index.storage_context.persist()


@cl.on_chat_start
async def factory():
    llm_predictor = LLMPredictor(
        llm=ChatOpenAI(
            temperature=0,
            model_name="gpt-3.5-turbo",
            streaming=True,
        ),
    )
    service_context = ServiceContext.from_defaults(
        llm_predictor=llm_predictor,
        chunk_size=512,
        callback_manager=CallbackManager([cl.LlamaIndexCallbackHandler()]),
    )
    query_engine = index.as_query_engine(
        service_context=service_context,
        streaming=True,
    )
    # storing the query_engine instance in the Chainlit session
    cl.user_session.set("query_engine", query_engine)


@cl.on_message
async def main(message):
    query_engine = cl.user_session.get("query_engine")  # type: RetrieverQueryEngine
    response = await cl.make_async(query_engine.query)(message)

    response_message = cl.Message(content="")

    for token in response.response_gen:
        await response_message.stream_token(token=token)

    if response.response_txt:
        response_message.content = response.response_txt

    await response_message.send()
```
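Note that `query_engine.query` is a blocking call, so the handler wraps it with `cl.make_async` to keep Chainlit's event loop responsive. The same idea can be sketched in plain asyncio (this is an illustration of the pattern, not Chainlit's implementation; `slow_query` is a made-up stand-in for the query engine):

```python
import asyncio
import time


def slow_query(question: str) -> str:
    # Stand-in for a blocking call such as query_engine.query.
    time.sleep(0.1)  # simulate blocking I/O
    return f"answer to: {question}"


async def main():
    # Run the blocking call in a worker thread so the event loop stays free;
    # cl.make_async does roughly this for the wrapped function.
    return await asyncio.to_thread(slow_query, "What is Chainlit?")


result = asyncio.run(main())
```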
Let’s understand the above code.
- In the `try/except` block we build or load our index. If you are not familiar with LlamaIndex, I highly recommend you check Building with LlamaIndex.
- `@cl.on_chat_start` is the starting point of our app. Whenever you visit the page or click the new chat button, this function runs and creates a `query_engine` instance.
- `@cl.on_message`, as the name suggests, is invoked for every message the user sends from the UI. Response generation using the LLM takes place in this part.
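The `try/except` block is an instance of a general build-once, load-afterwards pattern that is not specific to LlamaIndex. A framework-free sketch of the same idea (the JSON file and the `build` callback here are illustrative stand-ins for the `./storage` directory and `GPTVectorStoreIndex.from_documents`):

```python
import json


def load_or_build(path: str, build):
    # Load a cached artifact from disk, or build and persist it on first run.
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        artifact = build()          # expensive first-time work
        with open(path, "w") as f:
            json.dump(artifact, f)  # persist for subsequent runs
        return artifact
```

On the first run the expensive `build` step executes and its result is written to disk; every later run loads the persisted copy instead, which is exactly why the Chainlit app starts faster after the index has been built once.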
Now that we have a basic understanding of the code, let’s run it to understand it better. To run the app, type the following command in your terminal: `chainlit run main.py -w` (the `-w` flag reloads the app automatically whenever you edit the file).
Keep in mind that the first run will take some time, as the index-building process happens then; afterwards the index loads from disk and startup is faster. If everything worked fine, you should see something like this in your browser.
You now have a working chatbot UI without ever working on the UI. Isn’t this great! This lets developers focus on the logic side rather than the presentation side.
So take up your arms, jump into the fray, and build something cool with Chainlit!