Integrating LangChain, LangGraph, and LangSmith in a Single LLM Project


How to combine LangChain, LangGraph, and LangSmith into a unified, production-ready LLM application.

Introduction: The Power of a Unified Ecosystem

Building a successful, production-ready Large Language Model (LLM) application requires more than just a single tool. It demands a holistic approach that covers everything from foundational components and complex logic to monitoring and evaluation. The true power of the LangChain ecosystem is revealed when its three core components—**LangChain**, **LangGraph**, and **LangSmith**—are seamlessly integrated. While each serves a distinct purpose, they work together in a synergistic workflow to enable the creation, debugging, and continuous improvement of sophisticated LLM agents. This guide will walk you through a practical example of how to combine these three tools into a single, unified project.

The Project: A Dynamic Web-Browsing Agent

Let's imagine we're building a conversational agent that can answer complex questions by searching the web. When the user asks a question, the agent needs to decide whether a search is required, perform the search, and then use the results to formulate a final answer. This workflow is a perfect candidate for our integrated approach.

1. LangChain: The Foundational Components

We'll start with LangChain to define the basic "ingredients" of our application: the LLM that will drive the agent and the web search tool it can call. LangChain provides the modularity to easily define and interchange these components.

import os
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# 1. Define the LLM and the Tool
# The LLM will serve as the "brain" of our agent.
llm = ChatOpenAI(model="gpt-4o")

# LangChain's @tool decorator turns an ordinary function into a callable
# tool that our agent can use. Here's a placeholder for a web search.
@tool
def search_the_web(query: str) -> str:
    """Searches the web for information on the given query."""
    # In a real app, this would call a search API like Tavily or Google Search.
    return f"Result for query: '{query}'"

# A list of tools available to our agent
tools = [search_the_web]

2. LangGraph: The Orchestration Engine

Now, we'll use LangGraph to build the dynamic logic of our agent. Instead of a linear chain, we'll create a graph with nodes that represent the steps of our agent's reasoning process. This graph will have a cyclic structure, allowing the agent to loop back and re-evaluate its next move based on tool outputs. We'll set up a `State` to maintain the conversation history and tool outputs throughout the process.
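Before wiring this up, it helps to see what the state reducer actually does. The sketch below (plain Python, no LangGraph required; the message strings are illustrative) mimics how each node returns a partial state update and how the reducer attached to the `messages` channel appends updates rather than overwriting them:

```python
import operator

def apply_update(state, update, reducers):
    # Merge a node's partial update into the shared state. Channels with
    # a reducer combine old and new values; others are simply replaced.
    merged = dict(state)
    for key, value in update.items():
        reducer = reducers.get(key)
        merged[key] = reducer(merged[key], value) if reducer else value
    return merged

reducers = {"messages": operator.add}  # append, don't overwrite
state = {"messages": ["user: What is the capital of Australia?"]}
state = apply_update(state, {"messages": ["ai: I should search the web."]}, reducers)
state = apply_update(state, {"messages": ["tool: Canberra"]}, reducers)
print(state["messages"])  # all three messages preserved, in order
```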

import operator
from typing import TypedDict, Annotated, List

from langgraph.graph import StateGraph, END
from langchain_core.messages import BaseMessage, ToolMessage

# Define the state of our graph. This will be passed between nodes.
# The operator.add reducer appends new messages instead of overwriting them.
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

# Bind the tools to the LLM so it can emit structured tool calls.
llm_with_tools = llm.bind_tools(tools)

# Define the nodes (steps) in our graph
def call_llm(state):
    # This node is where the LLM decides what to do next.
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state):
    # This node executes the tool calls chosen by the LLM.
    last_message = state["messages"][-1]
    results = []
    for tool_call in last_message.tool_calls:
        output = search_the_web.invoke(tool_call["args"])
        results.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))
    return {"messages": results}

# A conditional edge decides where the flow goes next: the graph loops
# between the LLM and the tool until the LLM stops requesting tool calls.
def should_continue(state):
    if state["messages"][-1].tool_calls:
        return "tools"
    return END

# Assemble and compile the graph to create the runnable agent.
graph = StateGraph(AgentState)
graph.add_node("llm", call_llm)
graph.add_node("tools", call_tool)
graph.set_entry_point("llm")
graph.add_conditional_edges("llm", should_continue, {"tools": "tools", END: END})
graph.add_edge("tools", "llm")
app = graph.compile()
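Under the hood, the compiled graph implements a simple control loop. The sketch below stubs out both the LLM and the tool (no API key needed) purely to make the loop's shape concrete; the stub behavior and message strings are illustrative assumptions, not real model output:

```python
def stub_llm(messages):
    # Pretend the model requests one search, then answers.
    if not any(m.startswith("tool:") for m in messages):
        return "tool_call: search_the_web('capital of Australia')"
    return "final: The capital of Australia is Canberra."

def stub_tool(request):
    return "tool: Result for query: 'capital of Australia'"

messages = ["user: What is the capital of Australia?"]
while True:
    response = stub_llm(messages)       # "llm" node
    messages.append(response)
    if not response.startswith("tool_call:"):
        break                           # conditional edge -> END
    messages.append(stub_tool(response))  # "tools" node, then loop back

print(messages[-1])  # final: The capital of Australia is Canberra.
```

This is exactly the cycle the conditional edge encodes: LLM, maybe tool, back to LLM, until no further tool call is requested.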

3. LangSmith: The Observability Layer

The final and most crucial step for a production-ready application is connecting it to LangSmith. By setting a few environment variables, our LangGraph agent will automatically log every trace, allowing us to visualize, debug, and evaluate its performance. Without this, debugging a complex agent would be a near-impossible task.

# Set the environment variables for LangSmith
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "YOUR_LANGSMITH_API_KEY"
os.environ["LANGCHAIN_PROJECT"] = "langgraph_agent_project"

# Now, when you run your LangGraph application,
# every step will be logged and visible in the LangSmith UI.
# You can see the full trace, from the initial prompt to the final answer,
# including all tool calls and the LLM's reasoning process.

# Example of an execution run that will be traced:
# from langchain_core.messages import HumanMessage
# result = app.invoke({"messages": [HumanMessage(content="What is the capital of Australia?")]})

The Unified Workflow: A Full Development Cycle

By using these three tools together, you create a complete, virtuous development cycle:

  • Build with LangChain & LangGraph: You use LangChain for the core components and LangGraph for the complex, stateful orchestration.
  • Debug & Monitor with LangSmith: As you run your application, you observe the traces in LangSmith. If an agent gives a poor response, you can open the trace to see exactly why it failed.
  • Improve & Evaluate with LangSmith: Based on the insights from LangSmith, you can refine your prompts, adjust your agent's logic, or fix tool errors. You can then use LangSmith's evaluation features to test the new version of your application against a dataset to ensure a measurable improvement.
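The evaluation step can be sketched as follows. The `exact_match` evaluator and the dataset name `"web-agent-eval"` are illustrative assumptions; the commented-out call follows the `langsmith` SDK's `evaluate` helper and needs a LangSmith API key to actually run:

```python
def exact_match(outputs: dict, reference_outputs: dict) -> bool:
    # A simple evaluator: does the agent's answer match the reference?
    return outputs["answer"].strip() == reference_outputs["answer"].strip()

# With a dataset uploaded to LangSmith, a new version of the agent can be
# scored against it (requires a LangSmith API key, so commented out here):
# from langsmith import evaluate
# results = evaluate(
#     lambda inputs: {"answer": run_agent(inputs["question"])},  # hypothetical target
#     data="web-agent-eval",                                     # hypothetical dataset
#     evaluators=[exact_match],
# )

print(exact_match({"answer": "Canberra"}, {"answer": " Canberra"}))  # True
```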

This integration is not just about using different libraries; it's about establishing a robust, data-driven workflow that is essential for building and maintaining intelligent LLM applications in a production environment.
