Jan 04, 2026
6 min read

A2A Protocol Improvements: Making My Blog Agent-Ready

How I fixed 9 issues in my A2A implementation to make my blog truly discoverable and usable by AI agents, plus new endpoints for related posts and tag exploration.

The Problem: Promise vs. Reality

I’ve been running an A2A (Agent-to-Agent) implementation on this blog for a few months now. The idea was simple: let AI agents discover and interact with my content programmatically. An agent could ask “what posts do you have about Kubernetes?” and get a structured response it could reason about.

The agent.json file advertised all these great capabilities—tag filtering, relevance scores, multiple output formats. On paper, it looked solid.

Then I actually tested what an AI agent would experience.

An agent asking for posts tagged “AI” would get all 247 posts. Search results came back without relevance scores. The metadata endpoint ignored the type parameter entirely. The documentation said one thing; the implementation did another.

Not great for agent trust. And honestly, a little embarrassing once I dug in.

The Root Cause: Dual Implementation

Digging into the codebase, I found the culprit: dual implementations.

Here’s what happened. I had a proper skill router system with handlers that did everything right—tag filtering, relevance scoring, format conversion, the works. But at some point (probably during a late-night debugging session), I’d also created “simplified” API endpoints that talked directly to the data layer, bypassing the skill router entirely.

You know how this goes. The simplified version was easier to test. It worked for basic cases. So it stuck around. And over time, the two implementations drifted apart. The fancy features lived in code that never got called.

The Fix: One Line Per Endpoint

The good news? The solution was already sitting in my codebase, waiting to be used.

I had a createMethodHandler() factory function that properly routes requests through the skill handlers. It handles JSON-RPC validation, authentication, rate limiting, caching—all the things I’d been reimplementing (poorly) in each endpoint.
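The internals of that factory aren't shown in this post, but the shape is worth sketching. The following is a hypothetical, simplified version (POST only, no auth/rate limiting/caching) of what a `createMethodHandler()` that funnels every endpoint through one skill registry can look like; the types and registry wiring are assumptions, not the blog's actual code:

```typescript
// Minimal JSON-RPC 2.0 shapes for the sketch.
type JsonRpcRequest = { jsonrpc: '2.0'; params?: unknown; id: number | string };
type JsonRpcResponse = {
  jsonrpc: '2.0';
  result?: unknown;
  error?: { code: number; message: string };
  id: number | string | null;
};

type SkillHandler = (params: unknown) => Promise<unknown>;

// The factory closes over the skill name, so every endpoint file reduces to
// one call: all validation and dispatch live here, in one place.
export function createMethodHandler(
  skill: string,
  registry: Map<string, SkillHandler>,
) {
  const POST = async (request: {
    json(): Promise<JsonRpcRequest>;
  }): Promise<JsonRpcResponse> => {
    const body = await request.json();
    const handler = registry.get(skill);
    if (!handler) {
      // JSON-RPC "method not found"
      return {
        jsonrpc: '2.0',
        error: { code: -32601, message: `Unknown skill: ${skill}` },
        id: body.id ?? null,
      };
    }
    try {
      return { jsonrpc: '2.0', result: await handler(body.params), id: body.id };
    } catch (err) {
      // JSON-RPC "internal error"
      return {
        jsonrpc: '2.0',
        error: { code: -32603, message: String(err) },
        id: body.id,
      };
    }
  };
  return { POST };
}
```

Because the duplicated endpoints bypassed exactly this kind of central pipeline, every feature added to a skill handler had to be reimplemented per endpoint; routing through one factory is what makes the drift impossible.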

Each endpoint went from ~150 lines of duplicated logic to this:

import { createMethodHandler, prerender } from '../../../../lib/a2a/core/endpoint-handler';

export { prerender };
export const { POST, OPTIONS, GET } = createMethodHandler('blog.list_posts');

That’s it. Four lines. The factory does the heavy lifting, and 650 lines of duplicated code became 20 lines across 4 files.

I love it when the fix is deleting code.

What’s Fixed

Here’s the rundown of what actually changed. I’m grouping these by how badly they were breaking things.

Urgent Fixes (Breaking Agent Expectations)

Tags Filter — list_posts now actually filters by tags:

curl -X POST /api/a2a/blog/list \
  -d '{"jsonrpc":"2.0","params":{"tags":["AI"],"limit":5},"id":1}'
# Returns 47 posts, not 247
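The filtering itself is straightforward once the request actually reaches the skill handler. A sketch of the logic (assuming OR semantics across tags and case-insensitive matching — the post doesn't specify either):

```typescript
interface Post {
  id: string;
  tags: string[];
}

// Keep posts that match at least one requested tag, case-insensitively,
// then apply the limit.
export function filterByTags(posts: Post[], tags: string[], limit = 10): Post[] {
  const wanted = new Set(tags.map((t) => t.toLowerCase()));
  return posts
    .filter((p) => p.tags.some((t) => wanted.has(t.toLowerCase())))
    .slice(0, limit);
}
```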

Relevance Scores — Search results include scoring:

{
  "id": "deep-agents-part-1",
  "title": "Deep Agents Part 1...",
  "relevanceScore": 113,
  "matchedExcerpt": "...context around the match..."
}
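How that score is computed isn't covered in the post, so here is a hypothetical scoring sketch: weight title hits above body hits. The weights are invented for illustration — the `113` above is just whatever the real implementation produced for that query:

```typescript
interface SearchablePost {
  id: string;
  title: string;
  body: string;
}

// Count case-insensitive occurrences of the query in a text.
function countHits(text: string, query: string): number {
  return text.toLowerCase().split(query.toLowerCase()).length - 1;
}

// Title matches are worth 10x a body match in this sketch.
export function relevanceScore(post: SearchablePost, query: string): number {
  return countHits(post.title, query) * 10 + countHits(post.body, query);
}
```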

Metadata Types — The type parameter now works:

  • type: "site" — Site name, description, license
  • type: "stats" — Post counts, averages, trends
  • type: "tags" — All tags with counts
  • type: "author" — Author info
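Supporting the parameter amounts to dispatching on it inside the handler. A sketch, with placeholder return shapes (the actual payloads are assumptions):

```typescript
type MetadataType = 'site' | 'stats' | 'tags' | 'author';

// Exhaustive switch over the union: adding a fifth type without handling
// it becomes a compile error rather than a silently ignored parameter.
export function getMetadata(type: MetadataType): Record<string, unknown> {
  switch (type) {
    case 'site':
      return { name: 'example.blog', license: 'CC BY 4.0' }; // placeholder values
    case 'stats':
      return { postCount: 247 };
    case 'tags':
      return { tags: [{ name: 'AI', count: 47 }] };
    case 'author':
      return { author: { name: 'Example Author' } }; // placeholder
  }
}
```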

High Priority Fixes

Format Parameter — Get post content in different formats:

{"params": {"id": "my-post", "format": "html"}}
// Returns rendered HTML instead of markdown

Reading Time — All post responses now include estimated reading time based on word count.
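A word-count estimate is only a few lines. This sketch assumes roughly 200 words per minute — the post doesn't state the rate the blog actually uses:

```typescript
// Estimate reading time in whole minutes from a word count,
// never reporting less than 1 minute.
export function estimateReadingTime(text: string, wordsPerMinute = 200): number {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return Math.max(1, Math.ceil(words / wordsPerMinute));
}
```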

New Endpoints

While I was in there fixing things, I figured I’d add a few capabilities that make the A2A implementation more useful for agents doing real work:

Related Posts

POST /api/a2a/blog/related
{"params": {"id": "small-language-models-ai-workflows", "limit": 5}}

Returns posts that share tags with the source post, ranked by overlap count. Useful for agents building content recommendations.
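The ranking described — shared tags, ordered by overlap count — can be sketched like this (tie-breaking and the exact data shapes are assumptions):

```typescript
interface TaggedPost {
  id: string;
  tags: string[];
}

// Rank every other post by how many tags it shares with the source,
// drop non-overlapping posts, and return the top `limit`.
export function relatedPosts(
  source: TaggedPost,
  all: TaggedPost[],
  limit = 5,
): TaggedPost[] {
  const sourceTags = new Set(source.tags);
  return all
    .filter((p) => p.id !== source.id)
    .map((p) => ({
      post: p,
      overlap: p.tags.filter((t) => sourceTags.has(t)).length,
    }))
    .filter((r) => r.overlap > 0)
    .sort((a, b) => b.overlap - a.overlap)
    .slice(0, limit)
    .map((r) => r.post);
}
```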

Tags List

POST /api/a2a/blog/tags
{"params": {"sortBy": "count", "limit": 10}}

Dedicated endpoint for tag exploration with post counts and most recent post per tag.

Date Range Filtering

POST /api/a2a/blog/list
{"params": {"dateFrom": "2025-01-01", "dateTo": "2025-12-31"}}

Filter posts by publication date. Both parameters are optional and work independently.
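One nice property of ISO `yyyy-mm-dd` strings is that lexicographic order matches chronological order, so the filter needs no `Date` parsing at all. A sketch with both bounds optional, as described above:

```typescript
interface DatedPost {
  id: string;
  date: string; // ISO date, e.g. '2025-06-15'
}

// Keep posts inside the (inclusive) range; each bound is applied
// only when provided, so the two parameters work independently.
export function filterByDateRange(
  posts: DatedPost[],
  dateFrom?: string,
  dateTo?: string,
): DatedPost[] {
  return posts.filter(
    (p) => (!dateFrom || p.date >= dateFrom) && (!dateTo || p.date <= dateTo),
  );
}
```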

Validation Schema Updates

Part of the fix involved aligning validation schemas with handler interfaces. The Zod schemas now match what the handlers expect:

export const listPostsSchema = z.object({
  limit: z.number().int().min(1).max(100).optional().default(10),
  offset: z.number().int().min(0).optional().default(0),
  tags: z.array(z.string().max(100)).max(20).optional(),
  sortBy: z.enum(['date', 'title']).optional().default('date'),
  sortOrder: z.enum(['asc', 'desc']).optional().default('desc'),
  dateFrom: z.string().regex(/^\d{4}-\d{2}-\d{2}/).optional(),
  dateTo: z.string().regex(/^\d{4}-\d{2}-\d{2}/).optional(),
}).strict();

The .strict() mode rejects unknown parameters, preventing silent failures when agents send malformed requests.

Lessons Learned

If I had to boil this down to a few takeaways:

  1. Test what you document — If your agent.json says you support a feature, verify it works. Agents will trust your schema. Don’t betray that trust.

  2. Avoid dual implementations — When you have both “simple” and “full” versions of an endpoint, they will drift. Use a factory pattern to ensure consistency.

  3. Validation schemas are contracts — Keep them in sync with your handlers. Mismatches cause confusing errors for consumers.

  4. Factory patterns scale — Adding a new endpoint is now trivial. Create the handler, register it, create a 5-line endpoint file. Done.

What’s Next

The A2A implementation is now solid for read operations. It’s actually kind of fun to think about AI agents browsing my blog, finding related posts, exploring tags. The content becomes more than just something humans scroll through—it becomes data that agents can reason about.

Future improvements I’m considering:

  • WebSocket support for real-time updates (new post published? agents get notified)
  • Batch request optimization for agents that want to grab multiple posts at once
  • Agent capability negotiation so different agents can discover what features they can use
  • Usage analytics to see which agents are actually using this stuff

If you’re thinking about adding A2A support to your own site, the key lesson here is simple: don’t let your documentation promise things your implementation doesn’t deliver. Test it like an agent would. Send the JSON-RPC requests. Check the responses match your schema.

And if you find yourself with two versions of something? Pick one and delete the other. Your future self will thank you.


Nine issues tracked in Linear, fixed in two commits, tested on production. Sometimes the best refactoring is just deleting code.
