Building a Simple LLM Application with LangChain: A Quickstart Guide
This article walks you through building a simple large language model (LLM) application using LangChain. The application will translate text from English into another language. While it’s a straightforward use case (just a single LLM call with some prompting), it offers a great introduction to LangChain’s robust features. You’ll explore components like language models, prompt templates, and LangChain Expression Language (LCEL) to chain different modules together. Additionally, you’ll see how to debug using LangSmith and deploy your app with LangServe.
By the end of this guide, you’ll have a foundational understanding of:
- Working with language models
- Using PromptTemplates and OutputParsers
- Chaining components together using LCEL
- Debugging and tracing with LangSmith
- Deploying your application with LangServe
Let’s dive right in!
Setup
Jupyter Notebook
This tutorial is optimized for use with Jupyter notebooks, as they’re perfect for interactive learning, especially when working with LLMs where things can go wrong (like unexpected outputs or API issues). If you’re new to Jupyter notebooks, installation instructions are available on the official Jupyter website.
Installation
To get started with LangChain, install it using pip:
pip install langchain
For more detailed installation options, check out the LangChain installation guide.
Using Language Models
LangChain supports many language models, allowing you to switch between them easily. In this example, we’ll use OpenAI’s GPT-4 model.
First, install the necessary package:
pip install -qU langchain-openai
Then, set up your API key:
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass() # Input your OpenAI API key
Now, let’s instantiate the model:
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4")
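If you want more consistent translations, ChatOpenAI also accepts standard generation parameters such as temperature at construction time; the value below is illustrative:
model = ChatOpenAI(model="gpt-4", temperature=0)  # temperature=0 makes outputs more deterministic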
Next, use the model to translate a simple message:
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]
model.invoke(messages)
The output should be something like this:
AIMessage(content='ciao!', ...)
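Chat models also support token-by-token streaming through the standard .stream() method, which is handy for longer outputs; a minimal sketch:
for chunk in model.stream(messages):
    print(chunk.content, end="")  # prints each piece of the translation as it arrives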
Output Parsers
When calling the model, the result comes back as an AIMessage, containing not just the output string but also metadata. Often, you'll want to work with just the string. This is where output parsers come in handy.
from langchain_core.output_parsers import StrOutputParser
parser = StrOutputParser()
result = model.invoke(messages)
parser.invoke(result) # Output: 'Ciao!'
Chaining Components Together
LangChain allows chaining components like language models and parsers. You can chain them using the | operator, making the output of one component the input for the next:
chain = model | parser
chain.invoke(messages) # Output: 'Ciao!'
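Because the chain is itself a Runnable, it also supports batched calls through .batch(); for example (exact outputs will vary between runs):
messages_french = [
    SystemMessage(content="Translate the following from English into French"),
    HumanMessage(content="hi!"),
]
chain.batch([messages, messages_french])  # e.g. ['Ciao!', 'Salut !']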
Prompt Templates
Often, the messages passed to the language model will be a mix of user input and application logic. PromptTemplates help transform raw user input into a format ready for the LLM.
Let’s create a prompt template for language translation:
from langchain_core.prompts import ChatPromptTemplate
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ("system", system_template),
    ("user", "{text}")
])
result = prompt_template.invoke({"language": "italian", "text": "hi"})
result.to_messages()
The output would be a structured list of messages, including the system prompt and user input.
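Concretely, the result looks roughly like this (exact message fields vary by LangChain version):
result.to_messages()
# [SystemMessage(content='Translate the following into italian:'),
#  HumanMessage(content='hi')]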
Chaining with LCEL
You can chain the prompt template, language model, and output parser together with LCEL (LangChain Expression Language). Here’s how:
chain = prompt_template | model | parser
chain.invoke({"language": "italian", "text": "hi"}) # Output: 'Ciao!'
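Because LCEL chains expose the same Runnable interface as individual components, you can also stream the chained output; since StrOutputParser is the last step, each chunk is a plain string:
for chunk in chain.stream({"language": "italian", "text": "hi"}):
    print(chunk, end="")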
These simple examples show the power of LangChain to link multiple steps seamlessly. You can also visualize each component’s contribution using LangSmith’s tracing feature.
Debugging with LangSmith
LangSmith offers tools to inspect and trace your LLM application, especially as your chain becomes more complex. You can enable LangSmith by setting environment variables:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
Or in a notebook:
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
LangSmith provides detailed traces, allowing you to pinpoint issues and understand how each chain component contributes to the final output.
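Optionally, traces can be grouped under a named project by setting one more environment variable; the project name below is just an example:
os.environ["LANGCHAIN_PROJECT"] = "translation-quickstart"  # hypothetical project name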
Serving with LangServe
Once your app is ready, you can deploy it with LangServe, which helps developers expose LangChain applications as REST APIs.
Installing LangServe
pip install "langserve[all]"
Creating the Server
Here’s a basic Python server for our translation app:
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langserve import add_routes
# Set up the components
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ('system', system_template),
    ('user', '{text}')
])
model = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()
# Chain components
chain = prompt_template | model | parser
# FastAPI setup
app = FastAPI(
    title="LangChain Translation API",
    version="1.0",
)
add_routes(app, chain, path="/chain")
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
Save the script as serve.py, then run the server using:
python serve.py
Your application will be available at http://localhost:8000, and you can test it through the built-in playground at http://localhost:8000/chain/playground/.
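LangServe also exposes plain REST endpoints for each route, so any HTTP client works; here is a minimal sketch using the requests library, assuming LangServe’s default /invoke request and response schema:
import requests

response = requests.post(
    "http://localhost:8000/chain/invoke",
    json={"input": {"language": "italian", "text": "hi"}},
)
print(response.json()["output"])  # e.g. 'Ciao!'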
Client Interaction
To interact with the served chain programmatically, use the RemoteRunnable:
from langserve import RemoteRunnable
remote_chain = RemoteRunnable("http://localhost:8000/chain/")
remote_chain.invoke({"language": "italian", "text": "hi"}) # Output: 'Ciao'
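Since RemoteRunnable implements the same Runnable interface as a local chain, methods like .batch() also work transparently over HTTP:
remote_chain.batch([
    {"language": "italian", "text": "hi"},
    {"language": "german", "text": "hi"},
])  # e.g. ['Ciao!', 'Hallo!'] (outputs will vary)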
Conclusion
In this guide, we explored how to build, debug, and deploy a simple LLM application using LangChain. You’ve learned how to:
- Use language models
- Chain components using LCEL
- Debug with LangSmith
- Serve your application with LangServe
This is just the beginning — LangChain provides extensive tools to help you build sophisticated LLM-based applications. For further exploration, check out the official documentation on LCEL and LangSmith.