Building Stateful AI Agents with LangGraph.js and TypeScript

Most AI chatbots are stateless — they process a message and forget. Real-world AI applications need agents that remember, reason, and decide what to do next based on context.
LangGraph.js solves this. It gives you a framework for building AI agents as directed graphs — where each node is a step (call an LLM, use a tool, make a decision) and edges define the flow. The state travels through the graph, accumulating context at every step.
In this tutorial, you'll build a complete AI agent that can:
- Search the web for information
- Perform calculations
- Decide which tools to use based on the user's question
- Maintain conversation memory across turns
- Handle errors gracefully
By the end, you'll have a production-ready agent pattern you can extend for any use case.
Prerequisites
Before starting, make sure you have:
- Node.js 20+ installed (check with `node --version`)
- TypeScript basics (types, interfaces, async/await)
- An OpenAI API key (or any LLM provider — we'll show alternatives)
- Basic understanding of what LLMs and prompts are
- A terminal and code editor (VS Code recommended)
What is LangGraph.js?
LangGraph.js is the JavaScript/TypeScript port of LangGraph — a library built by the LangChain team for creating stateful, multi-step AI agent workflows.
Think of it like a state machine for AI:
| Concept | What It Means |
|---|---|
| State | The data that flows through your agent (messages, results, decisions) |
| Node | A function that takes state, does something (calls LLM, runs tool), and returns updated state |
| Edge | The connection between nodes — can be fixed or conditional |
| Graph | The complete workflow: nodes + edges + state schema |
Why not just chain function calls? Because real agents need branching logic. An agent might need to call a tool, check the result, decide to call another tool, then formulate a response. LangGraph makes this explicit and debuggable.
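To make the graph idea concrete before touching LangGraph itself, here is a dependency-free sketch of the same pattern: nodes are functions from state to state, and a router function decides the next node based on the current state. All names here are illustrative, not the LangGraph API.

```typescript
// Dependency-free sketch of the graph idea (not the LangGraph API):
// nodes transform state, and a router picks the next node from state.
type State = { steps: string[]; needsTool: boolean };

const nodes: Record<string, (s: State) => State> = {
  agent: (s) => ({
    // The "LLM" asks for a tool once, then answers directly.
    steps: [...s.steps, "agent"],
    needsTool: !s.steps.includes("tools"),
  }),
  tools: (s) => ({ steps: [...s.steps, "tools"], needsTool: false }),
};

// Conditional edge: after "agent", branch on state; after "tools", loop back.
function nextNode(current: string, s: State): string | null {
  if (current === "agent") return s.needsTool ? "tools" : null; // null = END
  return "agent";
}

function runGraph(): State {
  let state: State = { steps: [], needsTool: false };
  let current: string | null = "agent";
  while (current !== null) {
    state = nodes[current](state);
    current = nextNode(current, state);
  }
  return state;
}

console.log(runGraph().steps); // ["agent", "tools", "agent"]
```

The visited order (agent, tools, agent, end) is exactly the loop the real agent will follow in Step 4, but with the routing made explicit and inspectable.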
Step 1: Project Setup
Create a new project and install dependencies:
mkdir ai-agent-langgraph && cd ai-agent-langgraph
npm init -y
npm install @langchain/langgraph @langchain/openai @langchain/core zod
npm install -D typescript tsx @types/node

Initialize TypeScript:
npx tsc --init

Update your tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "ES2022",
"moduleResolution": "bundler",
"strict": true,
"esModuleInterop": true,
"outDir": "./dist",
"rootDir": "./src",
"declaration": true
},
"include": ["src/**/*"]
}

Create a .env file for your API key:
OPENAI_API_KEY=sk-your-key-here

Your project structure should look like this:
ai-agent-langgraph/
├── src/
│ ├── agent.ts # Main agent graph
│ ├── tools.ts # Tool definitions
│ ├── state.ts # State schema
│ └── index.ts # Entry point
├── .env
├── tsconfig.json
└── package.json
Step 2: Define the Agent State
The state is the backbone of your agent. Every node reads from it and writes to it. Let's define what our agent needs to track.
Create src/state.ts:
import { Annotation, MessagesAnnotation } from "@langchain/langgraph";
// Define the state schema for our agent
export const AgentState = Annotation.Root({
// Messages accumulate through the conversation
...MessagesAnnotation.spec,
// Track which tools have been called (for debugging)
toolCallCount: Annotation<number>({
reducer: (current, update) => (update ?? current ?? 0),
default: () => 0,
}),
// Final answer from the agent
finalAnswer: Annotation<string>({
reducer: (current, update) => update ?? current ?? "",
default: () => "",
}),
});
export type AgentStateType = typeof AgentState.State;

Key concepts here:
- `MessagesAnnotation` — A built-in annotation that handles message accumulation. New messages get appended to the list automatically.
- `Annotation` — Defines a typed state field with a `reducer` (how updates are merged) and a `default` value.
- Reducers — Functions that decide how to merge new values into existing state. For messages, they append. For our counter, they replace.
💡 The reducer pattern is what makes LangGraph powerful. Each node can return a partial state update, and the framework knows how to merge it correctly.
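A minimal, dependency-free illustration of that merge logic, assuming two channels shaped like our state (this is a hypothetical helper, not LangGraph's internal implementation):

```typescript
// Each channel declares how a partial update merges into existing state.
// (Illustrative sketch of the reducer pattern, not the LangGraph API.)
const channels = {
  // Messages append…
  messages: {
    reducer: (cur: string[], upd?: string[]) => [...cur, ...(upd ?? [])],
    default: () => [] as string[],
  },
  // …while the counter replaces.
  toolCallCount: {
    reducer: (cur: number, upd?: number) => upd ?? cur,
    default: () => 0,
  },
};

type State = { messages: string[]; toolCallCount: number };

function applyUpdate(state: State, update: Partial<State>): State {
  return {
    messages: channels.messages.reducer(state.messages, update.messages),
    toolCallCount: channels.toolCallCount.reducer(
      state.toolCallCount,
      update.toolCallCount
    ),
  };
}

let state: State = {
  messages: channels.messages.default(),
  toolCallCount: channels.toolCallCount.default(),
};
// Each node returns only a partial update; the reducers merge it.
state = applyUpdate(state, { messages: ["hi"] });
state = applyUpdate(state, { messages: ["hello!"], toolCallCount: 1 });
console.log(state); // { messages: ["hi", "hello!"], toolCallCount: 1 }
```

Notice that neither update had to know the full state: the append reducer grew the message list while the replace reducer overwrote the counter.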
Step 3: Create Tools
Tools are functions the AI can call. LangGraph uses LangChain's tool format — you define the name, description, input schema, and implementation.
Create src/tools.ts:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// Tool 1: Calculator for math operations
export const calculatorTool = tool(
async ({ expression }) => {
try {
// Simple safe math evaluation
const sanitized = expression.replace(/[^0-9+\-*/().%\s]/g, "");
if (!sanitized || sanitized !== expression.trim()) {
return "Error: Invalid expression. Only numbers and basic operators (+, -, *, /, %) are allowed.";
}
const result = new Function(`return (${sanitized})`)();
return `Result: ${result}`;
} catch (error) {
return `Calculation error: ${(error as Error).message}`;
}
},
{
name: "calculator",
description:
"Performs mathematical calculations. Input should be a mathematical expression like '2 + 2' or '(10 * 5) / 3'.",
schema: z.object({
expression: z
.string()
.describe("The mathematical expression to evaluate"),
}),
}
);
// Tool 2: Web search (simulated for this tutorial)
export const webSearchTool = tool(
async ({ query }) => {
// In production, integrate with Brave Search, Tavily, or SerpAPI
// This simulates a search result for demonstration
console.log(`[Tool] Searching web for: "${query}"`);
const results = [
{
title: `Top result for: ${query}`,
snippet: `Here's comprehensive information about ${query}. This covers the latest developments and key facts as of 2026.`,
url: `https://example.com/search?q=${encodeURIComponent(query)}`,
},
];
return JSON.stringify(results, null, 2);
},
{
name: "web_search",
description:
"Searches the web for current information. Use this when you need up-to-date facts, news, or data.",
schema: z.object({
query: z.string().describe("The search query"),
}),
}
);
// Tool 3: Date/time utility
export const dateTimeTool = tool(
async ({ timezone }) => {
const now = new Date();
const formatter = new Intl.DateTimeFormat("en-US", {
timeZone: timezone || "UTC",
dateStyle: "full",
timeStyle: "long",
});
return formatter.format(now);
},
{
name: "get_current_datetime",
description:
"Gets the current date and time. Optionally specify a timezone like 'America/New_York' or 'Europe/London'.",
schema: z.object({
timezone: z
.string()
.optional()
.describe("IANA timezone name (default: UTC)"),
}),
}
);
// Export all tools as an array
export const allTools = [calculatorTool, webSearchTool, dateTimeTool];

⚠️ Security note: The calculator tool uses basic sanitization. In production, use a proper math parser like `mathjs` instead of `new Function()`.
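To see what that sanitization guard actually buys you, here is the same check pulled out into a standalone function, fed one valid expression and one injection attempt:

```typescript
// Standalone copy of the calculator's guard, to show what it accepts and rejects.
function safeEval(expression: string): string {
  // Strip everything except digits, basic operators, parens, and whitespace.
  const sanitized = expression.replace(/[^0-9+\-*/().%\s]/g, "");
  // If stripping changed the input, something disallowed was present.
  if (!sanitized || sanitized !== expression.trim()) {
    return "Error: invalid expression";
  }
  return String(new Function(`return (${sanitized})`)());
}

console.log(safeEval("(10 * 5) / 2"));    // "25"
console.log(safeEval("process.exit(1)")); // rejected: letters get stripped, so the check fails
```

The guard works by comparison, not by parsing: any character outside the allowlist makes the stripped string differ from the input, so arbitrary code never reaches `new Function()`. A real parser such as `mathjs` is still the safer production choice.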
Step 4: Build the Agent Graph
This is the core of the tutorial. We'll create a graph where:
- The LLM node receives messages and decides what to do
- If it wants to use tools → route to the tools node
- The tools node executes the tools and returns results
- Loop back to the LLM to process results
- When the LLM has a final answer → end
Create src/agent.ts:
import { StateGraph, END, START } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import {
AIMessage,
HumanMessage,
SystemMessage,
} from "@langchain/core/messages";
import { AgentState } from "./state.js";
import { allTools } from "./tools.js";
// Initialize the LLM with tool binding
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
}).bindTools(allTools);
// System prompt that defines agent behavior
const SYSTEM_PROMPT = `You are a helpful AI assistant with access to tools.
Your capabilities:
- calculator: For any math calculations
- web_search: For finding current information online
- get_current_datetime: For getting the current date/time
Guidelines:
- Use tools when you need factual data or calculations
- Think step by step for complex questions
- If a tool returns an error, explain it to the user
- Be concise but thorough in your final answers`;
// Node 1: Call the LLM
async function callModel(
state: typeof AgentState.State
): Promise<Partial<typeof AgentState.State>> {
const messages = [new SystemMessage(SYSTEM_PROMPT), ...state.messages];
const response = await llm.invoke(messages);
return {
messages: [response],
};
}
// Node 2: Execute tools (using the prebuilt ToolNode)
const toolNode = new ToolNode(allTools);
// Conditional edge: should we continue to tools or end?
function shouldContinue(
state: typeof AgentState.State
): "tools" | typeof END {
const lastMessage = state.messages[state.messages.length - 1];
// If the last message has tool calls, route to tools
if (
lastMessage instanceof AIMessage &&
lastMessage.tool_calls &&
lastMessage.tool_calls.length > 0
) {
return "tools";
}
// Otherwise, we're done
return END;
}
// Build the graph
export function createAgentGraph() {
const graph = new StateGraph(AgentState)
// Add nodes
.addNode("agent", callModel)
.addNode("tools", toolNode)
// Add edges
.addEdge(START, "agent")
.addConditionalEdges("agent", shouldContinue, {
tools: "tools",
[END]: END,
})
.addEdge("tools", "agent");
// Compile the graph
return graph.compile();
}

Let's break down what's happening:
- `createAgentGraph()` builds the workflow as a directed graph
- The `agent` node calls the LLM. The LLM either returns a direct answer or requests tool calls
- `shouldContinue()` inspects the LLM's response — if it contains tool calls, route to the tools node; otherwise, end
- The `tools` node (prebuilt by LangGraph) executes each tool call and returns results as messages
- After tools execute, flow returns to the agent node, which processes the results
- This loop continues until the LLM gives a final answer without tool calls
Here's a visual of the graph:
┌─────────┐
│ START │
└────┬─────┘
│
▼
┌─────────┐ has tool calls ┌─────────┐
│ Agent │ ──────────────────► │ Tools │
│ (LLM) │ ◄────────────────── │ (Execute)│
└────┬─────┘ return results └──────────┘
│
│ no tool calls
▼
┌─────────┐
│ END │
└──────────┘
🚀 Need help implementing AI agents in your product? Noqta builds AI-powered solutions for teams who want production-ready results, not prototypes.
Step 5: Run the Agent
Create src/index.ts:
import "dotenv/config";
import { HumanMessage } from "@langchain/core/messages";
import { createAgentGraph } from "./agent.js";
async function main() {
const agent = createAgentGraph();
console.log("🤖 AI Agent ready. Let's test some queries.\n");
// Test 1: Simple calculation
console.log("--- Test 1: Math ---");
const result1 = await agent.invoke({
messages: [
new HumanMessage(
"What is 15% of 2,340? And then add 99 to that result."
),
],
});
const lastMsg1 = result1.messages[result1.messages.length - 1];
console.log("Answer:", lastMsg1.content, "\n");
// Test 2: Web search
console.log("--- Test 2: Search ---");
const result2 = await agent.invoke({
messages: [
new HumanMessage(
"Search for the latest trends in TypeScript development in 2026"
),
],
});
const lastMsg2 = result2.messages[result2.messages.length - 1];
console.log("Answer:", lastMsg2.content, "\n");
// Test 3: Multi-tool usage
console.log("--- Test 3: Multi-tool ---");
const result3 = await agent.invoke({
messages: [
new HumanMessage(
"What time is it in Tokyo? And calculate how many hours until midnight there."
),
],
});
const lastMsg3 = result3.messages[result3.messages.length - 1];
console.log("Answer:", lastMsg3.content, "\n");
}
main().catch(console.error);

Run it:

npx tsx src/index.ts

You should see the agent reasoning through each question, calling tools as needed, and returning coherent answers.
Step 6: Add Conversation Memory
A stateless agent forgets everything after each invocation. Let's add persistent memory so the agent remembers previous turns.
Create src/memory-agent.ts:
import "dotenv/config";
import { StateGraph, END, START, MemorySaver } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import {
AIMessage,
HumanMessage,
SystemMessage,
} from "@langchain/core/messages";
import { AgentState } from "./state.js";
import { allTools } from "./tools.js";
const memory = new MemorySaver();
const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 }).bindTools(
allTools
);
const SYSTEM_PROMPT = `You are a helpful AI assistant with memory.
You remember previous messages in the conversation.
Use tools when needed: calculator, web_search, get_current_datetime.`;
async function callModel(state: typeof AgentState.State) {
const messages = [new SystemMessage(SYSTEM_PROMPT), ...state.messages];
const response = await llm.invoke(messages);
return { messages: [response] };
}
function shouldContinue(state: typeof AgentState.State) {
const last = state.messages[state.messages.length - 1];
if (
last instanceof AIMessage &&
last.tool_calls &&
last.tool_calls.length > 0
) {
return "tools";
}
return END;
}
const agentWithMemory = new StateGraph(AgentState)
.addNode("agent", callModel)
.addNode("tools", new ToolNode(allTools))
.addEdge(START, "agent")
.addConditionalEdges("agent", shouldContinue, {
tools: "tools",
[END]: END,
})
.addEdge("tools", "agent")
.compile({ checkpointer: memory }); // ← Memory enabled!
async function main() {
// Configuration with a thread ID — this is the "conversation ID"
const config = { configurable: { thread_id: "user-123" } };
// Turn 1
console.log("👤 User: My name is Sarah and I work at a startup.");
const res1 = await agentWithMemory.invoke(
{
messages: [
new HumanMessage("My name is Sarah and I work at a startup."),
],
},
config
);
console.log(
"🤖 Agent:",
res1.messages[res1.messages.length - 1].content
);
console.log();
// Turn 2 — the agent should remember the name
console.log("👤 User: What's my name?");
const res2 = await agentWithMemory.invoke(
{ messages: [new HumanMessage("What's my name?")] },
config
);
console.log(
"🤖 Agent:",
res2.messages[res2.messages.length - 1].content
);
console.log();
// Turn 3 — multi-step with memory context
console.log(
"👤 User: Calculate the number of letters in my name times 100."
);
const res3 = await agentWithMemory.invoke(
{
messages: [
new HumanMessage(
"Calculate the number of letters in my name times 100."
),
],
},
config
);
console.log(
"🤖 Agent:",
res3.messages[res3.messages.length - 1].content
);
}
main().catch(console.error);

Run it:

npx tsx src/memory-agent.ts

The agent now remembers that the user's name is Sarah across turns. The `thread_id` in the config acts as a session identifier — different thread IDs give you isolated conversations.

Tip: `MemorySaver` stores state in-memory (lost on restart). For production, use a persistent checkpointer like `PostgresSaver` or `SqliteSaver` from `@langchain/langgraph-checkpoint-*` packages.
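Conceptually, a checkpointer is a store of conversation state keyed by thread ID. This toy version (purely illustrative; the real `MemorySaver` persists full graph checkpoints, not just message lists) shows why different thread IDs give isolated conversations:

```typescript
// Toy sketch of what a checkpointer provides: state keyed by thread_id.
class TinyCheckpointer {
  private store = new Map<string, string[]>();

  load(threadId: string): string[] {
    return this.store.get(threadId) ?? [];
  }

  save(threadId: string, messages: string[]): void {
    this.store.set(threadId, messages);
  }
}

const memory = new TinyCheckpointer();

function invoke(threadId: string, userMessage: string): string[] {
  // Resume from the saved history, append the new turn, persist.
  const history = [...memory.load(threadId), userMessage];
  memory.save(threadId, history);
  return history;
}

console.log(invoke("user-123", "My name is Sarah.")); // 1 message
console.log(invoke("user-123", "What's my name?"));   // 2 messages — same thread
console.log(invoke("user-456", "Hello"));             // 1 message — isolated thread
```

Swapping `MemorySaver` for `PostgresSaver` changes where this map lives, not the programming model: the graph code and the `thread_id` config stay the same.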
Step 7: Add Error Handling and Retries
Production agents need to handle failures gracefully. Let's add a retry wrapper and error boundaries.
Create src/utils.ts:
import { AIMessage } from "@langchain/core/messages";
// Retry wrapper for tool calls
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      console.warn(
        `Attempt ${attempt}/${maxRetries} failed:`,
        (error as Error).message
      );
      if (attempt === maxRetries) break;
      // Exponential backoff: 2s, 4s, 8s, ...
      await new Promise((r) =>
        setTimeout(r, Math.pow(2, attempt) * 1000)
      );
    }
  }
  throw lastError;
}
// Timeout wrapper for agent invocations
export async function invokeWithTimeout(
agent: any,
input: any,
config: any,
timeoutMs = 30000
) {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), timeoutMs);
try {
const result = await agent.invoke(input, {
...config,
signal: controller.signal,
});
return result;
} catch (error) {
if ((error as Error).name === "AbortError") {
throw new Error(
`Agent timed out after ${timeoutMs}ms. The query may be too complex.`
);
}
throw error;
} finally {
clearTimeout(timeout);
}
}
// Token usage tracking
export function extractTokenUsage(result: any) {
const messages = result.messages || [];
let totalTokens = 0;
for (const msg of messages) {
if (msg instanceof AIMessage && msg.usage_metadata) {
totalTokens +=
(msg.usage_metadata.input_tokens || 0) +
(msg.usage_metadata.output_tokens || 0);
}
}
return totalTokens;
}

These utilities give you:
- Retry with exponential backoff — handles transient API failures
- Timeout protection — prevents infinite loops if the agent gets stuck
- Token tracking — monitors costs per invocation
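Here is the retry behavior exercised against a simulated flaky API. To keep the demo fast, this standalone variant takes a configurable backoff base instead of the fixed 2-second base used in `utils.ts`; the logic is otherwise the same:

```typescript
// Standalone retry helper with a configurable backoff base (a fast-running
// variation on withRetry from utils.ts).
async function retry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 10
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        // Exponential backoff between attempts.
        await new Promise((r) =>
          setTimeout(r, Math.pow(2, attempt) * baseDelayMs)
        );
      }
    }
  }
  throw lastError;
}

// A flaky "API" that fails twice, then succeeds on the third call.
let calls = 0;
async function flakyCall(): Promise<string> {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}

const result = await retry(flakyCall);
console.log(result, "after", calls, "attempts"); // ok after 3 attempts
```

With `maxRetries = 3`, two transient failures are absorbed silently and only a third consecutive failure would surface to the caller.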
Step 8: Streaming Responses
For better UX, stream the agent's output token by token instead of waiting for the full response.
Create src/stream.ts:
import "dotenv/config";
import { HumanMessage } from "@langchain/core/messages";
import { createAgentGraph } from "./agent.js";
async function streamAgent() {
const agent = createAgentGraph();
const input = {
messages: [
new HumanMessage(
"Explain quantum computing in simple terms, then calculate 2^64."
),
],
};
console.log("🤖 Streaming response:\n");
// Stream events from the graph
for await (const event of await agent.streamEvents(input, {
version: "v2",
})) {
// Filter for LLM token events
if (event.event === "on_chat_model_stream") {
const chunk = event.data.chunk;
if (chunk.content) {
process.stdout.write(chunk.content);
}
}
// Log tool calls
if (event.event === "on_tool_start") {
console.log(`\n\n🔧 Calling tool: ${event.name}`);
console.log(` Input: ${JSON.stringify(event.data.input)}`);
}
if (event.event === "on_tool_end") {
console.log(` Result: ${event.data.output.content}\n`);
}
}
console.log("\n\n✅ Stream complete.");
}
streamAgent().catch(console.error);

Run with:

npx tsx src/stream.ts

You'll see tokens appear one by one, with tool calls logged in real-time.
💡 Ready to build AI-powered features into your product? Talk to our team about implementing agent workflows that actually ship.
Step 9: Testing Your Agent
Create src/test-agent.ts for a quick validation suite:
import "dotenv/config";
import { HumanMessage } from "@langchain/core/messages";
import { createAgentGraph } from "./agent.js";
interface TestCase {
name: string;
input: string;
expectToolCall?: string;
expectInAnswer?: string;
}
const testCases: TestCase[] = [
{
name: "Direct question (no tools)",
input: "What is TypeScript?",
expectInAnswer: "TypeScript",
},
{
name: "Math calculation",
input: "What is 42 * 58?",
expectToolCall: "calculator",
expectInAnswer: "2436",
},
{
name: "Date/time query",
input: "What is the current date and time in UTC?",
expectToolCall: "get_current_datetime",
},
{
name: "Web search",
input: "Search for LangGraph.js latest release",
expectToolCall: "web_search",
},
];
async function runTests() {
const agent = createAgentGraph();
let passed = 0;
let failed = 0;
for (const tc of testCases) {
console.log(`\n🧪 Test: ${tc.name}`);
try {
const result = await agent.invoke({
messages: [new HumanMessage(tc.input)],
});
const messages = result.messages;
const lastMsg = messages[messages.length - 1];
const answer =
typeof lastMsg.content === "string"
? lastMsg.content
: JSON.stringify(lastMsg.content);
// Check if expected tool was called
if (tc.expectToolCall) {
const toolCalled = messages.some(
(m: any) =>
m.tool_calls?.some(
(tc2: any) => tc2.name === tc.expectToolCall
)
);
if (!toolCalled) {
console.log(` ❌ Expected tool call: ${tc.expectToolCall}`);
failed++;
continue;
}
}
// Check if answer contains expected text
if (tc.expectInAnswer) {
if (!answer.includes(tc.expectInAnswer)) {
console.log(
` ❌ Expected "${tc.expectInAnswer}" in answer`
);
console.log(` Got: ${answer.slice(0, 200)}`);
failed++;
continue;
}
}
console.log(` ✅ Passed`);
passed++;
} catch (error) {
console.log(` ❌ Error: ${(error as Error).message}`);
failed++;
}
}
console.log(`\n📊 Results: ${passed} passed, ${failed} failed`);
}
runTests().catch(console.error);

Step 10: Using Alternative LLM Providers
Not tied to OpenAI? LangGraph.js works with any LangChain-compatible model. Here are quick swaps:
Anthropic (Claude)
npm install @langchain/anthropic

import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-sonnet-4-20250514",
temperature: 0,
}).bindTools(allTools);

Google Gemini

npm install @langchain/google-genai

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const llm = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash",
temperature: 0,
}).bindTools(allTools);

Local models via Ollama

npm install @langchain/ollama

import { ChatOllama } from "@langchain/ollama";
const llm = new ChatOllama({
model: "llama3.3",
temperature: 0,
}).bindTools(allTools);

Tip: For production, consider running multiple providers with a fallback chain. If OpenAI is down, fall back to Anthropic automatically.
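A fallback chain can be as simple as trying providers in order and returning the first success. This sketch uses generic stand-ins for the providers; in a real agent each entry would wrap a model client's `invoke()`, and the provider names here are placeholders:

```typescript
// Generic fallback chain: try each provider in order, return the first success.
// (Sketch only — real entries would wrap ChatOpenAI / ChatAnthropic invokes.)
type Provider = {
  name: string;
  invoke: (prompt: string) => Promise<string>;
};

async function invokeWithFallback(
  providers: Provider[],
  prompt: string
): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.invoke(prompt);
    } catch (error) {
      // Record the failure and move on to the next provider.
      errors.push(`${p.name}: ${(error as Error).message}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}

// Simulated providers: the first is "down", the second answers.
const providers: Provider[] = [
  {
    name: "primary",
    invoke: async () => {
      throw new Error("503 Service Unavailable");
    },
  },
  { name: "fallback", invoke: async (p) => `answer to: ${p}` },
];

const answer = await invokeWithFallback(providers, "ping");
console.log(answer); // answer to: ping
```

Because the chain only throws after every provider has failed, a single outage degrades latency slightly instead of taking your agent down.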
Production Considerations
Before deploying your agent, address these critical areas:
1. Rate Limiting
import { RateLimiter } from "limiter";
const limiter = new RateLimiter({
tokensPerInterval: 10,
interval: "minute",
});
async function rateLimitedInvoke(agent: any, input: any) {
await limiter.removeTokens(1);
return agent.invoke(input);
}

2. Observability with LangSmith

npm install langsmith

Set environment variables:
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-key
LANGCHAIN_PROJECT=my-ai-agent

Every agent invocation is now traced — you can see each node execution, tool call, and LLM response in the LangSmith dashboard.
3. Human-in-the-Loop
For high-stakes actions (sending emails, making purchases), add an approval step:
import { interrupt } from "@langchain/langgraph";
async function sensitiveToolNode(state: typeof AgentState.State) {
const lastMessage = state.messages[state.messages.length - 1];
// Pause and ask for human approval
const approval = interrupt({
action: "tool_call",
description: "Agent wants to perform a sensitive action",
toolCalls: (lastMessage as any).tool_calls,
});
if (!approval.approved) {
return {
messages: [new HumanMessage("Action was rejected by the user.")],
};
}
// Proceed with tool execution
const toolNode = new ToolNode(allTools);
return toolNode.invoke(state);
}

4. Cost Control
Set a maximum number of LLM calls per invocation to prevent runaway loops:
const MAX_ITERATIONS = 10;
function shouldContinue(state: typeof AgentState.State) {
// Safety: prevent infinite loops
const aiMessages = state.messages.filter(
(m) => m instanceof AIMessage
);
if (aiMessages.length >= MAX_ITERATIONS) {
console.warn("Max iterations reached, forcing end");
return END;
}
const last = state.messages[state.messages.length - 1];
if (last instanceof AIMessage && (last.tool_calls?.length ?? 0) > 0) {
return "tools";
}
return END;
}

Summary
You've built a complete AI agent system with LangGraph.js:
| What You Built | Why It Matters |
|---|---|
| State schema with annotations | Type-safe data flow through the agent |
| Tool definitions with Zod schemas | AI can call external functions safely |
| Graph-based workflow | Explicit, debuggable agent logic |
| Conditional routing | Agent decides its own path |
| Conversation memory | Multi-turn interactions |
| Error handling & retries | Production resilience |
| Streaming responses | Better user experience |
| Multi-provider support | No vendor lock-in |
Key takeaways:
- Graphs > Chains — When your agent needs branching logic, LangGraph makes it explicit
- State is king — Design your state schema carefully; everything flows through it
- Tools need schemas — Well-described tools with Zod schemas help the LLM make better decisions
- Memory needs persistence — Use `PostgresSaver` or `SqliteSaver` in production, not `MemorySaver`
- Always add safety limits — Max iterations, timeouts, and rate limiting prevent disasters
Next Steps
- Add more tools (database queries, email sending, file operations)
- Implement sub-graphs for complex multi-agent orchestration
- Deploy as an API with Express or Hono
- Add LangSmith tracing for production observability
- Explore LangGraph Studio for visual debugging
💡 Building AI agents for your business? Talk to our team — we design and deploy production agent systems that integrate with your existing stack.
Discuss Your Project with Us
We're here to help with your web development needs. Schedule a call to discuss your project and how we can assist you.
Let's find the best solutions for your needs.
Related Articles

Introduction to Model Context Protocol (MCP)
Learn about the Model Context Protocol (MCP), its use cases, advantages, and how to build and use an MCP server with TypeScript.

Getting Started with ALLaM-7B-Instruct-preview
Learn how to use the ALLaM-7B-Instruct-preview model with Python, and how to interact with it from JavaScript via a hosted API (e.g., on Hugging Face Spaces).

Building a Custom Code Interpreter for LLM Agents
Learn how to create a custom code interpreter for Large Language Model (LLM) agents, enabling dynamic tool calling and isolated code execution for enhanced flexibility and security.