40. Streamlit#
Streamlit is a Python library for building apps: https://streamlit.io
Generative AI and Streamlit: https://blog.streamlit.io/generative-ai-and-streamlit-a-perfect-match/
Tutorials: https://blog.streamlit.io/tag/tutorials/
We begin with the first tutorial, which shows a basic generative AI use case combining OpenAI's ChatGPT models and LangChain in Streamlit: https://blog.streamlit.io/langchain-tutorial-1-build-an-llm-powered-app-in-18-lines-of-code/
40.1. Install all the packages for GAI#
The following packages are required and can be installed from the command line or in an IDE like VS Code:
pip install openai
pip install langchain
pip install streamlit
Test the installation with the following code:
streamlit hello
and then try out the different demos it presents.
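As a further quick check that the packages are importable from Python, you can print their installed versions (a minimal sketch; the version numbers you see will differ):
# Confirm the required packages are installed and importable
from importlib.metadata import version

for pkg in ["openai", "langchain", "streamlit"]:
    print(pkg, version(pkg))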
40.2. Load in API Keys#
Many of the services used in this notebook require API keys, and they charge fees once you exhaust the free tier. We store the keys in a separate file so as not to reveal them here; running that file adds all the keys to the environment.
Set up your own file with the API keys; it should look as follows:
import os

# OpenAI
OPENAI_KEY = '<Your API Key here>'
os.environ['OPENAI_API_KEY'] = OPENAI_KEY
# Hugging Face Hub
HF_API_KEY = '<Your API Key here>'
os.environ['HUGGINGFACEHUB_API_TOKEN'] = HF_API_KEY
# SerpAPI (web search)
SERPAPI_KEY = '<Your API Key here>'
os.environ['SERPAPI_API_KEY'] = SERPAPI_KEY
# Wolfram Alpha
WOLFRAM_ALPHA_KEY = '<Your API Key here>'
os.environ['WOLFRAM_ALPHA_APPID'] = WOLFRAM_ALPHA_KEY
# Google (GOOGLE_API_KEY is the variable name commonly expected by Google's client libraries)
GOOGLE_KEY = '<Your API Key here>'
os.environ['GOOGLE_API_KEY'] = GOOGLE_KEY

keys = ['OPENAI_KEY', 'HF_API_KEY', 'SERPAPI_KEY', 'WOLFRAM_ALPHA_KEY', 'GOOGLE_KEY']
print("Keys available: ", keys)
Save these in a file called keys.py and then execute that file in the app to ingest all the keys:
exec(open('keys.py').read())
You may use the above line of code in the app below.
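As a sanity check (a minimal sketch, assuming the key names used in keys.py above), you can confirm that the expected environment variables were actually set after ingesting the file:
import os

exec(open('keys.py').read())
# Verify that each expected environment variable was set by keys.py
for var in ['OPENAI_API_KEY', 'HUGGINGFACEHUB_API_TOKEN', 'SERPAPI_API_KEY', 'WOLFRAM_ALPHA_APPID']:
    print(var, 'set' if os.getenv(var) else 'MISSING')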
If you are using OpenAI, please also install:
pip install langchain-openai
40.3. Code for the App#
Ref: https://docs.streamlit.io/develop/tutorials/llms/llm-quickstart
This is a very simple app to show a first use case. We call this file langchain_app.py.
import streamlit as st
from langchain_openai.chat_models import ChatOpenAI

st.title("🦜🔗 LangChain App")

# Ask the user for their OpenAI key in the sidebar
openai_api_key = st.sidebar.text_input("OpenAI API Key", type="password")

def generate_response(input_text):
    # Call the chat model and display the response in an info box
    model = ChatOpenAI(temperature=0.7, api_key=openai_api_key)
    st.info(model.invoke(input_text).content)

with st.form("my_form"):
    text = st.text_area(
        "Enter text:",
        "What is the meaning of life?",
    )
    submitted = st.form_submit_button("Submit")
    if not openai_api_key.startswith("sk-"):
        st.warning("Please enter your OpenAI API key!", icon="⚠")
    if submitted and openai_api_key.startswith("sk-"):
        generate_response(text)
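A small variation, not part of the original tutorial, streams the answer token by token instead of waiting for the full reply. This sketch reuses the openai_api_key collected in the sidebar above and assumes a recent Streamlit release that provides st.write_stream together with the chat model's stream() method:
def generate_response_streaming(input_text):
    # Stream the model's answer chunk by chunk into the app
    model = ChatOpenAI(temperature=0.7, api_key=openai_api_key)
    st.write_stream(chunk.content for chunk in model.stream(input_text))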
40.4. Run the App#
From the command line, start the Streamlit server:
streamlit run langchain_app.py
You will see the app open at the URL http://localhost:8501
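If port 8501 is already taken, Streamlit's --server.port flag lets you choose another one, for example:
streamlit run langchain_app.py --server.port 8502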
40.5. Using Ollama with LangChain#
import streamlit as st
from langchain_ollama import ChatOllama

st.title("🦜🔗 LangChain Ollama App")

def generate_response(input_text):
    # Call the local Ollama model and display the response in an info box
    llm = ChatOllama(model="llama3.2")
    st.info(llm.invoke(input_text).content)

with st.form("my_form"):
    text = st.text_area(
        "Enter text:",
        "What is the meaning of life?",
    )
    submitted = st.form_submit_button("Submit")
    if submitted:
        generate_response(text)
To use the above:
Make sure Ollama is installed: https://ollama.com
Pull the model used in the app from the command line:
ollama pull llama3.2
Also install the LangChain integration for Ollama:
pip install langchain-ollama
Then run the app with streamlit as before. A quick sanity check of the Ollama setup, done outside Streamlit, is sketched below.
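This minimal sketch simply invokes the same ChatOllama class from plain Python to confirm the model has been pulled and the Ollama server is running:
from langchain_ollama import ChatOllama

# One-off call to confirm the local model responds
llm = ChatOllama(model="llama3.2")
print(llm.invoke("Say hello in one sentence.").content)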
40.6. Documentation#
You can build more elaborate apps based on these examples and use the documentation here: https://docs.streamlit.io
For a simple chatbot, see: https://docs.streamlit.io/library/get-started/installation
An interesting application using Amazon Kendra, LangChain, Streamlit is here: https://aws.amazon.com/blogs/machine-learning/harnessing-the-power-of-enterprise-data-with-generative-ai-insights-from-amazon-kendra-langchain-and-large-language-models/
A simple chat interface to Ollama with Streamlit: Gravtas-J/Ollama-Chat
An even simpler repo that you can modify and contribute to is: iamaziz/ollachat
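In the same spirit as those repositories, here is a minimal chat-style interface to a local Ollama model. It is a sketch under the same assumptions as Section 40.5 (Ollama running locally with the llama3.2 model pulled), using Streamlit's chat elements and st.session_state to keep the conversation across reruns:
import streamlit as st
from langchain_ollama import ChatOllama

st.title("Ollama Chat")

llm = ChatOllama(model="llama3.2")

# Keep the conversation in session state so it survives Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for role, content in st.session_state.messages:
    st.chat_message(role).write(content)

# Handle a new user turn
if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append(("user", prompt))
    st.chat_message("user").write(prompt)
    # Pass the full history so the model sees the context of the chat
    reply = llm.invoke(st.session_state.messages).content
    st.session_state.messages.append(("assistant", reply))
    st.chat_message("assistant").write(reply)
Save this as, say, chat_app.py (a filename chosen here for illustration) and run it with streamlit run chat_app.py.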