Now in public beta

See inside your agent's context window

Visual debugger for AI agent developers. Inspect context segments, track token usage, and identify memory bottlenecks before they break your agent.

Customer Support Agent
Token Usage: 77%
System Prompt: 4,200
Tools: 12,800
Conversation: 62,400
Memories: 8,900
RAG Docs: 7,200
Results: 2,920
Conversation history consuming 63.4% of total context
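The figures in the mock-up above are internally consistent, and the math is easy to reproduce. A minimal sketch, assuming a 128,000-token max window (not stated on the page, but consistent with the 77% usage figure):

```python
# Segment sizes from the mock-up above; the 128,000-token max window
# is an assumption consistent with the 77% usage figure shown.
segments = {
    "System Prompt": 4_200,
    "Tools": 12_800,
    "Conversation": 62_400,
    "Memories": 8_900,
    "RAG Docs": 7_200,
    "Results": 2_920,
}
MAX_WINDOW = 128_000

total = sum(segments.values())                     # 98,420 tokens in use
usage_pct = round(100 * total / MAX_WINDOW)        # 77% of the window
largest = max(segments, key=segments.get)          # "Conversation"
share = round(100 * segments[largest] / total, 1)  # 63.4% of used context

print(f"Token Usage: {usage_pct}%")
print(f"{largest} history consuming {share}% of total context")
```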

Built for agent developers

Everything you need to understand and optimize your agent's context window.

Segment Visualizer

Color-coded breakdown of system prompts, tools, conversation history, memories, and RAG documents.

Token Counter

Real-time token counts with percentage of max window. Supports GPT-4, Claude, and custom models.

Bottleneck Alerts

Automatic detection of segments approaching their limits, with actionable optimization suggestions.

Prompt Inspector

Deep-dive into system prompts with length analysis, section breakdown, and compression hints.

JSON Export

Export full context snapshots for sharing, version control, or automated analysis.

Session Comparison

Compare context usage across sessions to identify regressions and optimization wins.
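To make the Bottleneck Alerts feature above concrete, here is a minimal sketch of a threshold check. The 75% alert threshold, the suggestion text, and the 128,000-token window are illustrative assumptions, not ContextLens's actual rules:

```python
# Illustrative bottleneck check; threshold and window size are assumptions.
MAX_WINDOW = 128_000    # assumed context window size
ALERT_THRESHOLD = 0.75  # warn once the window is three-quarters full

def check_bottlenecks(segments: dict[str, int]) -> list[str]:
    """Return alert strings when total usage crosses the threshold."""
    alerts = []
    total = sum(segments.values())
    if total / MAX_WINDOW >= ALERT_THRESHOLD:
        name, tokens = max(segments.items(), key=lambda kv: kv[1])
        alerts.append(
            f"Window {100 * total / MAX_WINDOW:.0f}% full; "
            f"largest segment is '{name}' ({tokens:,} tokens)"
        )
    return alerts

usage = {
    "System Prompt": 4_200, "Tools": 12_800, "Conversation": 62_400,
    "Memories": 8_900, "RAG Docs": 7_200, "Results": 2_920,
}
for alert in check_bottlenecks(usage):
    print(alert)
```

With the usage figures from the demo above, this fires a single alert pointing at the conversation segment.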

How it works

1

Paste or Connect

Paste an API response payload or connect via API key

2

Parse & Visualize

Context is parsed into color-coded segments with token counts

3

Identify Bloat

See which segments are consuming the most tokens

4

Optimize

Export, modify, and re-test your optimized context
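Steps 2 and 4 above can be sketched in a few lines. This assumes an OpenAI-style messages payload and uses a crude whitespace word count in place of a real tokenizer; the segment names and export shape are illustrative, not the tool's actual schema:

```python
# Sketch of "Parse & Visualize" and "Export": bucket an OpenAI-style
# messages payload into named segments, then dump a JSON snapshot.
import json
from collections import Counter

def estimate_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per whitespace word."""
    return len(text.split())

def segment_payload(messages: list[dict]) -> Counter:
    """Bucket each message into a context segment based on its role."""
    role_to_segment = {"system": "System Prompt", "tool": "Results"}
    buckets: Counter = Counter()
    for msg in messages:
        segment = role_to_segment.get(msg["role"], "Conversation")
        buckets[segment] += estimate_tokens(msg.get("content") or "")
    return buckets

messages = [
    {"role": "system", "content": "You are a customer support agent."},
    {"role": "user", "content": "My order never arrived."},
    {"role": "assistant", "content": "Let me look that up."},
    {"role": "tool", "content": "order 1234: shipped, in transit"},
]
segments = segment_payload(messages)
snapshot = json.dumps({"segments": dict(segments)}, indent=2)  # step 4
print(snapshot)
```

A production tool would swap `estimate_tokens` for the target model's own tokenizer, since whitespace counts diverge badly from real BPE token counts.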

Trusted by agent builders

Finally, I can see why my agent loses context after 20 turns. Conversation history was eating 70% of my window.

Sarah Chen
ML Engineer @ Vercel

ContextLens saved us hours of debugging. We found 3 duplicate memory entries that were wasting 8K tokens.

Marcus Rivera
Founder, AgentOps

The segment visualizer is a game-changer. I optimized my tool definitions and got 40% more conversation room.

Priya Sharma
AI Dev @ Stripe

Start debugging your agent's context

Free for solo developers. No credit card required.

Open Debugger →