Oct 29, 2025
8 min read

Deep Agents 0.2: A Conversational Learning Guide to Pluggable Backends and Beyond

An in-depth, conversational exploration of LangChain's DeepAgents 0.2 release, focusing on pluggable backends, composite backends, and building portable agent infrastructure

Hey there! Welcome back to our ongoing journey into the world of Deep Agents. If you’ve been following my earlier posts, you know we’ve been exploring how LangChain’s DeepAgents library transforms simple AI agents into sophisticated, long-running autonomous systems capable of complex multi-step tasks.

Today, we’re diving into something exciting: DeepAgents 0.2 and its game-changing pluggable backend architecture. This is where things get really interesting for folks like us who love building portable, flexible infrastructure.

What’s Got Me Excited About 0.2?

Just recently, LangChain dropped DeepAgents 0.2 - two months after the initial release - and honestly, it’s addressing exactly the kinds of challenges I’ve been thinking about. Remember how we discussed the four pillars of Deep Agents architecture in my first post?

  1. Planning tools
  2. Filesystem access
  3. Subagents
  4. Detailed prompts

Well, version 0.2 takes that filesystem access pillar and completely reimagines it. Instead of being locked into a single storage approach, we now have a pluggable backend system that opens up a world of possibilities.

The Evolution: From Virtual to Pluggable

The Old Way (0.1)

In the original DeepAgents release, the “filesystem” was essentially a virtual one - it used LangGraph’s state management to store files. Think of it like this:

graph TB subgraph "DeepAgents 0.1 Architecture" Agent["Deep Agent"] VFS["Virtual Filesystem
(LangGraph State)"] State["State Storage
(In-Memory)"] Agent -->|read/write| VFS VFS -->|stores in| State end style VFS fill:#3498db,stroke:#2c3e50,stroke-width:3px,color:#fff style State fill:#e74c3c,stroke:#2c3e50,stroke-width:3px,color:#fff style Agent fill:#95a5a6,stroke:#2c3e50,stroke-width:3px,color:#fff

This worked fine for single-session work, but there was a problem: everything disappeared when your session ended. Sure, you could persist state, but it wasn’t designed for truly portable, multi-environment agent deployments.

The New Way (0.2): Backends All The Way Down

Now, with 0.2’s Backend abstraction, we have a completely different ballgame:

graph TB subgraph "DeepAgents 0.2 Architecture" Agent["Deep Agent"] Backend["Backend Abstraction
(Pluggable Interface)"] subgraph "Built-in Implementations" LGState["LangGraph State
(Original Behavior)"] LGStore["LangGraph Store
(Cross-Thread Persistence)"] LocalFS["Local Filesystem
(Actual Files)"] end subgraph "Custom Backends" S3["S3 / MinIO
(Object Storage)"] DB["PostgreSQL
(Structured Data)"] Redis["Redis
(Fast Cache)"] Custom["Your Custom Backend"] end Agent -->|uses| Backend Backend -.->|implements| LGState Backend -.->|implements| LGStore Backend -.->|implements| LocalFS Backend -.->|extends to| S3 Backend -.->|extends to| DB Backend -.->|extends to| Redis Backend -.->|extends to| Custom end style Agent fill:#95a5a6,stroke:#2c3e50,stroke-width:3px,color:#fff style Backend fill:#27ae60,stroke:#2c3e50,stroke-width:3px,color:#fff style LGState fill:#3498db,stroke:#2c3e50,stroke-width:2px,color:#fff style LGStore fill:#16a085,stroke:#2c3e50,stroke-width:2px,color:#fff style LocalFS fill:#e74c3c,stroke:#2c3e50,stroke-width:2px,color:#fff style S3 fill:#9b59b6,stroke:#2c3e50,stroke-width:2px,color:#fff style DB fill:#2980b9,stroke:#2c3e50,stroke-width:2px,color:#fff style Redis fill:#c0392b,stroke:#2c3e50,stroke-width:2px,color:#fff style Custom fill:#f39c12,stroke:#2c3e50,stroke-width:2px,color:#fff

This is huge. You’re no longer constrained by how LangChain decided to handle storage. You can:

  • Use the local filesystem for development
  • Swap to LangGraph Store for production cross-thread persistence
  • Write your own backend to integrate with any data store
  • Add guardrails and validation by subclassing or wrapping existing backends (sketched below)
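To make that last point concrete, here’s a minimal sketch of what a guardrail wrapper could look like. The `Backend` protocol below is my own stand-in for whatever interface deepagents actually defines (the real method names and signatures may differ), and `GuardrailBackend` is hypothetical, not a class that ships with the library:

```python
from typing import Protocol


class Backend(Protocol):
    """Stand-in for the real backend interface; method names are assumed."""
    def read(self, path: str) -> str: ...
    def write(self, path: str, content: str) -> None: ...


class GuardrailBackend:
    """Wraps any backend and refuses writes outside an allow-list.

    Illustrates the "add guardrails by wrapping" idea; this class
    does not ship with deepagents.
    """

    def __init__(self, inner: Backend, allowed_prefixes: tuple[str, ...]):
        self.inner = inner
        self.allowed_prefixes = allowed_prefixes

    def read(self, path: str) -> str:
        return self.inner.read(path)

    def write(self, path: str, content: str) -> None:
        # str.startswith accepts a tuple, so this checks every prefix.
        if not path.startswith(self.allowed_prefixes):
            raise PermissionError(f"Writes to {path!r} are blocked")
        self.inner.write(path, content)
```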

Understanding Composite Backends

Here’s where it gets really interesting. The 0.2 release introduces “composite backends” - a powerful pattern that allows you to layer different backends on top of each other at different directory paths. Let me show you why this is brilliant.

The Mental Model

Think of a composite backend like a router for your filesystem operations:

graph TB subgraph "Composite Backend Architecture" Agent["Deep Agent"] Router["Composite Backend
(Path Router)"] subgraph "Base Layer" BaseFS["Base Backend
Local Filesystem"] end subgraph "Mounted Overlays" Memory["/memories/
S3/MinIO Backend"] Temp["/temp/
Redis Backend"] Logs["/logs/
PostgreSQL Backend"] end Agent -->|file operation| Router Router -->|default path| BaseFS Router -->|/memories/*| Memory Router -->|/temp/*| Temp Router -->|/logs/*| Logs end style Agent fill:#95a5a6,stroke:#2c3e50,stroke-width:3px,color:#fff style Router fill:#f39c12,stroke:#2c3e50,stroke-width:3px,color:#fff style BaseFS fill:#7f8c8d,stroke:#2c3e50,stroke-width:2px,color:#fff style Memory fill:#3498db,stroke:#2c3e50,stroke-width:2px,color:#fff style Temp fill:#e67e22,stroke:#2c3e50,stroke-width:2px,color:#fff style Logs fill:#27ae60,stroke:#2c3e50,stroke-width:2px,color:#fff

A Practical Example: Long-Term Memory

The LangChain blog post gives us a perfect example. Imagine you want your agent to have:

  1. Fast local access for working files (base backend = local filesystem)
  2. Persistent memories that survive machine restarts (overlay = S3/MinIO at /memories/)
  3. Temporary caching for frequently accessed data (overlay = Redis at /temp/)
  4. Structured logs for debugging and analysis (overlay = PostgreSQL at /logs/)

Here’s how that might look conceptually. Note that the S3Backend, RedisBackend, and PostgresBackend shown here are hypothetical custom implementations you would need to write yourself - they are not built-in backends:

```python
from deepagents import create_deep_agent
from deepagents.backends import (
    LocalFilesystemBackend,
    CompositeBackend
)
# These would be your custom backend implementations:
from my_backends import (
    S3Backend,
    RedisBackend,
    PostgresBackend
)

# Create the base backend
base_backend = LocalFilesystemBackend(root_path="./agent_workspace")

# Create specialized backends for different paths
memory_backend = S3Backend(
    bucket="agent-memories",
    endpoint_url="http://minio:9000"  # MinIO endpoint
)

cache_backend = RedisBackend(
    host="redis",
    db=0,
    prefix="agent:cache:"
)

log_backend = PostgresBackend(
    connection_string="postgresql://postgres:5432/agent_logs",
    table_name="agent_operations"
)

# Compose them together
composite_backend = CompositeBackend(
    base=base_backend,
    mounts={
        "/memories/": memory_backend,
        "/temp/": cache_backend,
        "/logs/": log_backend
    }
)

# Create your agent with the composite backend
agent = create_deep_agent(
    tools=[...],
    instructions="You are a helpful assistant...",
    backend=composite_backend
)
```

Now when your agent:

  • Writes to /memories/user_preferences.json → Goes to MinIO
  • Writes to /temp/api_cache.json → Goes to Redis
  • Writes to /logs/operation_log.json → Goes to PostgreSQL
  • Writes to /workspace/current_task.md → Goes to local filesystem
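Once that’s wired up, using it is no different from any other agent call. Here’s a hedged sketch - the prompt is just an illustration, and the `invoke` payload shape follows the usual LangGraph conventions, so double-check it against your installed version:

```python
# Deep agents are LangGraph graphs, so they take the usual messages input.
result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": (
            "Summarize today's findings into /memories/summary.md "
            "and keep scratch notes in /workspace/notes.md"
        ),
    }]
})

# /memories/summary.md is routed to MinIO via the mounted S3 backend;
# /workspace/notes.md stays on the local filesystem.
```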

The Three New Power Features

Beyond pluggable backends, 0.2 ships with three intelligent context management features that make agents way more robust:

1. Large Tool Result Eviction

The Problem: Sometimes tools return massive results that blow up your context window.

The Solution: DeepAgents can now automatically detect when a tool result exceeds a token limit and dump it to the filesystem instead of keeping it in the message history.

```mermaid
sequenceDiagram
    participant Agent
    participant Tool
    participant ContextManager
    participant Filesystem
    Agent->>Tool: Execute tool
    Tool->>Agent: Return large result (10K tokens)
    Agent->>ContextManager: Check result size
    ContextManager->>ContextManager: Exceeds limit (5K tokens)
    ContextManager->>Filesystem: Write to /tool_results/result_123.json
    ContextManager->>Agent: Replace with reference:<br/>"Result saved to /tool_results/result_123.json"
    Note over Agent,Filesystem: Context window stays manageable!
```

2. Conversation History Summarization

The Problem: Long conversations eat up context window space, leaving less room for actual reasoning.

The Solution: Automatically compress old conversation history when token usage gets too large.

```mermaid
graph LR
    subgraph "Before Summarization"
        A1["Turn 1<br/>1000 tokens"]
        A2["Turn 2<br/>1000 tokens"]
        A3["Turn 3<br/>1000 tokens"]
        A4["Turn 4<br/>1000 tokens"]
        A5["Turn 5<br/>1000 tokens"]
    end
    subgraph "After Summarization"
        B1["Summary of<br/>Turns 1-3<br/>500 tokens"]
        B2["Turn 4<br/>1000 tokens"]
        B3["Turn 5<br/>1000 tokens"]
    end
    A1 --> B1
    A2 --> B1
    A3 --> B1
    A4 --> B2
    A5 --> B3
    style A1 fill:#95a5a6,stroke:#2c3e50,stroke-width:2px,color:#fff
    style A2 fill:#95a5a6,stroke:#2c3e50,stroke-width:2px,color:#fff
    style A3 fill:#95a5a6,stroke:#2c3e50,stroke-width:2px,color:#fff
    style A4 fill:#95a5a6,stroke:#2c3e50,stroke-width:2px,color:#fff
    style A5 fill:#95a5a6,stroke:#2c3e50,stroke-width:2px,color:#fff
    style B1 fill:#27ae60,stroke:#2c3e50,stroke-width:3px,color:#fff
    style B2 fill:#3498db,stroke:#2c3e50,stroke-width:2px,color:#fff
    style B3 fill:#3498db,stroke:#2c3e50,stroke-width:2px,color:#fff
```

3. Dangling Tool Call Repair

The Problem: Sometimes tool calls get interrupted or cancelled before execution, leaving broken message history.

The Solution: Automatically detect and fix these dangling tool calls to maintain clean conversation state.
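The core trick is easy to picture: scan the history for tool calls that never got a response, and patch in a synthetic response so the next model call sees a valid sequence. Here’s a simplified sketch using langchain_core message types - DeepAgents’ built-in repair may differ in its details:

```python
from langchain_core.messages import AIMessage, ToolMessage


def repair_dangling_tool_calls(messages: list) -> list:
    """Add a synthetic ToolMessage for every tool call left unanswered."""
    answered = {
        m.tool_call_id for m in messages if isinstance(m, ToolMessage)
    }
    repaired = []
    for msg in messages:
        repaired.append(msg)
        if isinstance(msg, AIMessage):
            for call in msg.tool_calls:
                if call["id"] not in answered:
                    repaired.append(ToolMessage(
                        content="Tool call was interrupted before it ran.",
                        tool_call_id=call["id"],
                    ))
    return repaired
```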

When to Use DeepAgents vs. LangChain vs. LangGraph

The LangChain team provides great clarity on this:

graph TB subgraph "The LangChain Ecosystem" LG["LangGraph
Agent Runtime

For: Workflows + Agents
Custom control flows"] LC["LangChain
Agent Framework

For: Core agent loop
Build from scratch"] DA["DeepAgents
Agent Harness

For: Autonomous agents
Long-running tasks"] end DA -->|built on| LC LC -->|built on| LG style DA fill:#27ae60,stroke:#2c3e50,stroke-width:3px,color:#fff style LC fill:#3498db,stroke:#2c3e50,stroke-width:3px,color:#fff style LG fill:#f39c12,stroke:#2c3e50,stroke-width:3px,color:#fff

Use LangGraph when:

  • You need custom control flow
  • Building complex workflows with agents
  • Need fine-grained control over execution

Use LangChain when:

  • You want the basic agent loop
  • Building prompts and tools from scratch
  • Need maximum flexibility

Use DeepAgents when:

  • Building autonomous, long-running agents
  • Need built-in planning, filesystem, subagents
  • Want to focus on your domain logic, not infrastructure

What’s Next?

I’m planning to dive deeper into several aspects:

  1. Advanced Subagent Patterns: How to structure complex agent hierarchies
  2. Production Deployment: Best practices for scaling DeepAgents
  3. Integration Patterns: Connecting DeepAgents with existing systems
  4. Testing Strategies: How to test deep agent behaviors

Wrapping Up

DeepAgents 0.2 represents a major step forward in building production-ready autonomous agents. The pluggable backend architecture gives us the flexibility to:

  • Start simple with local development
  • Scale to production with proper infrastructure
  • Mix and match storage backends for optimal performance
  • Build truly portable agent systems

You can combine different backend types to create a powerful, flexible foundation that grows with your needs.

I’m genuinely excited about where this is heading. The ability to build agents with proper separation of concerns, persistent memory, and production-ready infrastructure is game-changing.

What are you planning to build with DeepAgents 0.2? Drop me a note - I’d love to hear about your use cases and learn from your experiences!

This is part of my ongoing series on Deep Agents. Check out my previous posts to understand the foundational concepts, and stay tuned for more deep dives into advanced agent architectures!
