Featured Blog

How to Use LangSmith in 2026: Complete Beginner-to-Advanced Guide

2026-05-14
FILE · langsmith_guide.py RUN · 2026.05.14
SESSION · ACTIVE
DOC / 015 Developer Field Guide
— LangSmith 2026 · Beginner → Advanced —
BUILD 26.05.14 ai.cc dev desk
Tutorial · 5 Spans · 18 min read

Stop debugging agents
with print().
Use LangSmith.

LangSmith is the leading observability, evaluation, and deployment platform for LLM applications and Agentic AI — built by the LangChain team. It traces every model call, tool use, and decision your agents make, then gives you the dashboards, evals, and prompt tools to debug them like grown-up software. This is the practitioner's playbook: account, tracing, LangGraph debugging, evaluations, and production-grade workflow.

Read Time
18m
+ hands-on setup
Spans
5/5
Setup → Production
Stack
Py·JS
+ LangGraph
Free Tier
$0/mo
No card needed
§ Why now

Why LangSmith matters in 2026.

As Agentic AI becomes mainstream, simple print() debugging no longer works. A modern agent run is a tree of nested LLM calls, tool invocations, retries, and conditional branches — and you cannot debug that with logger.info. LangSmith gives you five capabilities that compose into a real workflow:

F / 01
End-to-end tracing
Every LLM call, tool use, and decision captured.
F / 02
Visual debugging
Multi-agent workflows as inspectable trees.
F / 03
Automated evals
Regression testing with datasets & LLM judges.
F / 04
Prompt management
Versioning, A/B testing, team review.
F / 05
Production monitoring
Cost, latency, error-rate at scale.
LangSmith tracing dashboard showing agent execution trace with LLM calls and tool usage
Trace Overview The LangSmith tracing dashboard — every nested span, every tool call, every token cost in one timeline.
▾ ROOT · run_id: 5a3f...c91d 5 spans · total ~45m
SPAN · 01 SETUP
t = 00:00 ~5 min

Create your LangSmith account & API key.

The free Developer tier is enough to get started — no credit card. Service Keys are for production, Personal Access Tokens for development. Don't mix them.

  1. Go to smith.langchain.com and sign up. Developer plan is free — no card needed.
  2. Log in with Google, GitHub, or email — your choice.
  3. Navigate to Settings → API Keys.
  4. Create a Personal Access Token for dev, or a Service Key for production deployments.
  5. Copy the key — you only see it once. Store it in your secret manager, not in git.
Heads Up Service Keys carry org-level scope. If a key leaks, rotate it immediately from the same Settings page — old key dies, new one takes over, no downtime if you re-deploy promptly.
SPAN · 02 INSTALL
t = 00:05 ~3 min

Install langsmith & configure your environment.

One pip install, two environment variables. The optional third one is for EU residency or custom endpoint routing.

terminal Bash
pip install langsmith

Set environment variables — add to .env or export directly:

.env Bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=lsv2_xxxxxxxxxxxx
# Optional: For EU or custom region
# export LANGSMITH_ENDPOINT=https://eu.api.smith.langchain.com
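Before running anything, a quick sanity check that these variables are actually visible to your process can save a debugging loop. A minimal stdlib sketch — the helper name is ours, not part of the langsmith package:

```python
import os

# The two variables the quickstart relies on (LANGSMITH_ENDPOINT is optional).
REQUIRED = ["LANGSMITH_TRACING", "LANGSMITH_API_KEY"]

def check_langsmith_env(env=None):
    """Return the list of missing required variables so you can fail fast."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

missing = check_langsmith_env()
if missing:
    print(f"Missing before tracing will work: {missing}")
```

Run it once at startup; an empty list means tracing has what it needs.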
SPAN · 03 QUICKSTART
t = 00:08 ~10 min

Enable tracing in your code.

The @traceable decorator wraps any function and pushes its inputs, outputs, latency, and tool calls into LangSmith. That's the whole quickstart.

my_agent.py Python
from langchain_openai import ChatOpenAI
from langsmith import traceable

@traceable
def my_agent(query: str):
    llm = ChatOpenAI(model="gpt-4o")
    return llm.invoke(query)

# Or enable globally
import os
os.environ["LANGSMITH_TRACING"] = "true"

Run your code. Traces appear in the LangSmith dashboard automatically — usually within seconds.
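To build intuition for what the decorator records, here is a toy stand-in that captures inputs, output, and latency per call. This is an illustration of the concept only, not langsmith's actual implementation:

```python
import functools
import time

# In-memory stand-in for the trace store; LangSmith sends this to its backend.
CAPTURED = []

def toy_traceable(fn):
    """Toy version of @traceable: record inputs, output, and latency per call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        CAPTURED.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@toy_traceable
def my_agent(query: str) -> str:
    # Stand-in for an LLM call.
    return f"echo: {query}"

my_agent("hello")
```

The real decorator also nests spans, so an agent calling tools produces the tree you see in the dashboard, not a flat list.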

LangSmith API key creation in settings
API Key Creation Settings → API Keys — pick the right key type (PAT for dev, Service Key for production).
Python code paired with corresponding LangSmith trace view
Code → Trace Left: your decorated function. Right: the trace LangSmith captures automatically.
SPAN · 04 LANGGRAPH
t = 00:18 ~15 min

Debug complex agents with LangGraph.

LangSmith shines brightest when paired with LangGraph: stateful, multi-node, multi-agent workflows. The trace tree becomes the graph execution itself.

  • Visualize the full graph execution — every node, every edge, every state transition.
  • See state changes at every node. No more print(state) in twelve places.
  • Replay traces with different models or prompts — test a hypothesis without re-running production.
  • Use LangSmith Studio for visual, step-through agent debugging.
LangGraph multi-agent workflow visualized in LangSmith Studio
Studio · LangGraph View Multi-agent workflow as a visual graph — drag-step debugging, state inspection at every node.
SPAN · 05 EVALS
t = 00:33 ~12 min

Run evaluations & experiments.

Evals are what separate a demo from a product. Without them, every prompt tweak is a guess. LangSmith bakes the workflow in:

  • Create Datasets of test cases — start with 10 examples, grow to hundreds.
  • Define evaluators — LLM-as-judge, custom code, or human feedback.
  • Run experiments and compare versions side-by-side, with regression flags.
  • Track regression as you iterate. Did the new prompt break example #7? You'll see it before it ships.
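A custom code evaluator is just a function that scores an output against a reference. A minimal sketch with plain dicts — the names and the exact signature LangSmith expects are illustrative here, so check the evaluation docs before wiring it in:

```python
def exact_match(outputs: dict, reference_outputs: dict) -> dict:
    """Score 1.0 when the answer matches the reference exactly, else 0.0."""
    score = float(outputs.get("answer") == reference_outputs.get("answer"))
    return {"key": "exact_match", "score": score}

# A tiny in-memory dataset: grow it every time a real-world failure surfaces.
dataset = [
    {"inputs": {"q": "2+2"}, "reference_outputs": {"answer": "4"}},
    {"inputs": {"q": "capital of France"}, "reference_outputs": {"answer": "Paris"}},
]

def fake_agent(inputs: dict) -> dict:
    # Stand-in for your real agent under test.
    table = {"2+2": "4", "capital of France": "Lyon"}
    return {"answer": table.get(inputs["q"], "")}

results = [exact_match(fake_agent(ex["inputs"]), ex["reference_outputs"]) for ex in dataset]
```

Run against the dataset, the second example fails — exactly the regression signal you want before a prompt change ships.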
Production Gate This is the protocol that separates a hobby project from production-grade Agentic AI. Treat the dataset like a test suite — small at first, growing every time a real-world failure surfaces.
§ Advanced · 2026

Power-user tips for production deployments.

  • Use LangSmith Engine (new in 2026) — an AI layer that analyzes traces and suggests fixes for failing or expensive runs.
  • Integrate with LangGraph for persistent memory and human-in-the-loop checkpoints — the two patterns that matter most past prototype stage.
  • Monitor costs, latency, and error rates in production. Set thresholds. Get alerted before customers do.
  • Set up anomaly alerts. A 3× spike in tokens-per-call is a signal, not a coincidence.
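The tokens-per-call spike check above is simple enough to sketch; LangSmith's built-in alerting covers this in the UI, and the threshold and names below are purely illustrative:

```python
from statistics import mean

def spike_alert(history, current, factor=3.0):
    """Flag when current tokens-per-call exceeds `factor` times the recent average."""
    baseline = mean(history)
    return current > factor * baseline

recent = [900, 1100, 1000, 950]   # tokens per call over the last window
alert = spike_alert(recent, 4200)  # well past 3x the ~988 baseline
```

The same shape works for latency or error rate: keep a rolling window, compare the newest point to a multiple of the baseline, and page someone when it trips.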
LangSmith production monitoring dashboard with cost and performance metrics
Production Monitoring Cost, latency, error rates, p95 — the dashboards you want before going live.
§ Pricing

Pricing overview · 2026 tiers.

Plan · Pricing · Capacity UPDATED 2026.05
Developer
For solo builders and prototypes. Limited trace volume. No credit card required.
FREE
Plus
For small teams shipping agents to production. Seat-based plus usage. Full evals + Studio.
$39 / seat / mo
Enterprise
For larger orgs. Self-hosted and hybrid deployments available. Custom SLAs and security review.
Custom
§ Use cases

What teams are actually using it for.

Case / 01 · Debugging Figuring out why your agent hallucinates, loops, or stops mid-task — by inspecting the actual trace tree, not by guessing.
Case / 02 · Model A/B Comparing Claude vs GPT-4o on identical inputs — same dataset, same evaluators, head-to-head results.
Case / 03 · Compliance Audit trails for regulated industries — every prompt, output, and decision captured immutably.
Case / 04 · Team Prompts Collaborative prompt engineering — versioning, comments, review, and rollout across the team.

Action checklist — execute today.

  1. Create your LangSmith account and generate an API key.
  2. Enable tracing in your current project — one decorator, two env vars.
  3. Run one agent flow and inspect the resulting trace tree.
  4. Build a small (5–10 example) evaluation dataset.
  5. Open LangSmith Studio and step through your agent visually.

LangSmith has become the de facto standard for serious Agentic AI development in 2026. Start with the free tier today and level up as your agents grow in complexity. What are you building with LangSmith? Share your use case in the comments — happy to give specific advice. Last updated May 14, 2026. Always check the official LangSmith Docs for the latest features.

// END OF TRACE · run_id: 5a3f...c91d ai.cc · field_guide · DOC·015 · 2026.05.14
