Two agents. Different teams. Different frameworks. One built on LangGraph, the other on Google ADK. Both running in production. No standard way to make them talk to each other.

Before April 2025, this was just the reality of multi-agent systems. Every agent-to-agent connection was custom glue code. Research agent, coding agent, approval agent: you wrote bespoke integrations between each pair. Custom REST APIs. Kafka topics with one-off schemas. Direct function calls that broke the moment you stepped outside a single framework.

That doesn't scale. The moment you want to use an agent from a different vendor, it doesn't work at all.

The Agent2Agent protocol (A2A) is the wire-level answer to this. It's an open standard for how autonomous AI agents discover each other, delegate tasks, and coordinate work across frameworks, organizations, and vendors. This post covers how it works, how it sits alongside MCP, the patterns it unlocks, and where the spec still has gaps.


The pre-A2A problem

The interoperability problem had four dimensions.

Discovery: How does one agent learn what another agent can do? There was no standard. You hardcoded capabilities or wrote internal documentation that aged poorly.

Delegation: How do you hand off a task with a defined result contract? Every pair had its own schema. No common task lifecycle.

Long-running coordination: How do you track a task that takes minutes or hours across an organizational boundary? You built your own state machine.

Authentication: How do you authorize cross-agent calls when the agents live in different systems? API key per integration, with all the management overhead that implies.

The result: every team building multi-agent systems was also building proprietary middleware. Framework-specific approaches (LangGraph's internal node system, OpenAI's handoff abstraction) only worked within a single framework. Cross-framework or cross-org agent collaboration required custom engineering on both sides.

A2A standardizes the handshake. Any framework that implements it can talk to any other framework that implements it.


What A2A actually is

A2A is not a framework. It is not an SDK. It is a wire-level communication standard.

Every A2A interaction runs over:

  • HTTP(S) as the transport layer
  • JSON-RPC 2.0 as the payload format for requests and responses
  • Server-Sent Events (SSE) for streaming long-running task updates
  • gRPC (added in v0.3.0, July 2025) as an optional alternative transport binding
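Concretely, a task delegation over this stack is just an HTTP POST carrying a JSON-RPC 2.0 envelope. Here is a hedged sketch of building one: the `message/send` method and the `role`/`parts`/`messageId` field names follow the v0.2.x spec, but verify them against the `protocolVersion` the remote agent declares before relying on this shape.

```python
import json
import uuid

def build_message_send(text: str) -> dict:
    """Build a JSON-RPC 2.0 envelope for A2A's message/send method.

    Field names follow the A2A spec as of v0.2.x; this is an
    illustrative sketch, not the official SDK.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # JSON-RPC request id
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

payload = build_message_send("I want to order 2 classic cheeseburgers")
print(json.dumps(payload, indent=2))
# POST this body to the agent's service endpoint with
# Content-Type: application/json, plus whatever auth the Agent Card requires.
```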

The design principle is opacity. A2A treats every agent as a black box. The client agent never sees the remote agent's internal prompts, memory systems, tool implementations, or reasoning chains. It only sees what the remote agent advertises: what it can do, and what it returns.

This is intentional. It means organizations can combine specialist agents across teams and vendors without exposing proprietary implementations. The interface is the contract. Internals stay private.

What A2A explicitly does not standardize: internal prompts, memory internals, tool call implementations, or the LLM backend.

If MCP is a tool workshop (workers know where each tool is and how to pick it up), A2A is a conference room where different specialists negotiate, delegate, and hand off work without needing to understand each other's internal processes.


The spec: how it works

A2A has a three-layer architecture:

  1. Canonical Data Model - the core data structures
  2. Abstract Operations - the capabilities agents must support, transport-agnostic
  3. Protocol Bindings - concrete mappings to HTTP/JSON-RPC and gRPC

Agent Cards

Before any task gets sent, the client agent needs to know the remote agent exists and what it can do. This is what the Agent Card is for.

Every A2A server publishes a JSON document at a well-known path:

```text
GET https://<base_url>/.well-known/agent-card.json
```

The Agent Card contains the agent's identity, service endpoint, supported protocol version, capabilities (streaming, push notifications), authentication requirements, and the list of specific skills it can perform.

Here's a real example from the official A2A codelab:

```json
{
  "capabilities": {
    "streaming": true
  },
  "defaultInputModes": ["text", "text/plain"],
  "defaultOutputModes": ["text", "text/plain"],
  "description": "Helps with creating burger orders",
  "name": "burger_seller_agent",
  "protocolVersion": "0.2.6",
  "skills": [
    {
      "description": "Helps with creating burger orders",
      "examples": ["I want to order 2 classic cheeseburgers"],
      "id": "create_burger_order",
      "name": "Burger Order Creation Tool",
      "tags": ["burger order creation"]
    }
  ],
  "url": "https://burger-agent-109790610330.us-central1.run.app",
  "version": "1.0.0"
}
```

The client reads this card, confirms the agent supports what it needs, reads the authentication requirements, and starts sending tasks.
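The discovery step is small enough to sketch directly. This is an illustrative helper, not the official SDK: `fetch_agent_card` and `supports_skill` are names I've made up, and only the well-known path comes from the spec.

```python
import json
import urllib.request

AGENT_CARD_PATH = "/.well-known/agent-card.json"

def fetch_agent_card(base_url: str) -> dict:
    """Fetch and parse the remote agent's card from the well-known path."""
    with urllib.request.urlopen(base_url.rstrip("/") + AGENT_CARD_PATH) as resp:
        return json.load(resp)

def supports_skill(card: dict, skill_id: str) -> bool:
    """True if the card advertises a skill with the given id."""
    return any(s.get("id") == skill_id for s in card.get("skills", []))

# Usage (network call, assumes the codelab agent is reachable):
# card = fetch_agent_card("https://burger-agent-109790610330.us-central1.run.app")
# if supports_skill(card, "create_burger_order"):
#     ...  # authenticate per the card, then start sending tasks
```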

The Task object

The Task is the fundamental unit of work. Every delegation creates a Task with a unique ID. Tasks move through a defined lifecycle:

```text
submitted → working → completed
                    ↘ failed
                    ↘ canceled
                    ↘ input-required
                    ↘ auth-required
```

The input-required state matters: when the remote agent needs more information before continuing, it pauses and signals back. The client sends a new message into the same task and the agent continues. Multi-turn interactions happen within a single task, not across multiple tasks.

The auth-required state is the human-in-the-loop escalation path. When the agent reaches a decision that needs human authorization, it transitions to auth-required and hands the authorization step back to the calling client. This is how A2A handles step-up authentication mid-task without breaking the task lifecycle.
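The lifecycle can be encoded as a small transition table. Note the hedge: the state names and the terminal/non-terminal split come from the spec's state list, but the exact set of allowed edges here is my assumption, not something the spec enumerates.

```python
# Assumed transition edges for the A2A task lifecycle (illustrative).
TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"completed", "failed", "canceled",
                "input-required", "auth-required"},
    "input-required": {"working", "canceled"},  # client supplied the missing info
    "auth-required": {"working", "canceled"},   # authorization was granted
    "completed": set(),  # terminal
    "failed": set(),     # terminal
    "canceled": set(),   # terminal
}

def is_terminal(state: str) -> bool:
    """A task in a terminal state accepts no further transitions."""
    return not TRANSITIONS[state]

def advance(state: str, new_state: str) -> str:
    """Move a task to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

A client tracking remote tasks can use a table like this to detect protocol violations, e.g. a server reporting `completed → working`.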

Messages, Parts, and Artifacts

A Message is one communication turn between client and remote agent. Each has a role (user or agent) and contains one or more Parts.

Parts are the smallest unit of content:

  • TextPart - plain text
  • FilePart - file content or a URI
  • DataPart - structured JSON

Artifacts are the outputs the remote agent produces. They are also composed of Parts and can be streamed incrementally as they are generated.

Three communication modes

Synchronous request/response: A standard HTTP JSON-RPC 2.0 request. Use this for fast, simple tasks where you can wait for the result.

SSE streaming: The client subscribes to a streaming endpoint. The server sends incremental status updates and artifact chunks as text/event-stream. The stream terminates when the task hits a terminal state. Use this for longer tasks where you want progress updates without polling.

Push notifications (webhooks): The client provides a webhook URL in the task. The server POSTs to that URL on significant state changes. Use this for tasks that may take hours or where a connection cannot stay open.
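In the SSE mode, the client consumes a `text/event-stream` body of `event:`/`data:` lines separated by blank lines. A minimal parser sketch (deliberately incomplete: a real client should also handle `id:`, `retry:`, comment lines, and reconnection per the SSE spec):

```python
def parse_sse(stream_text: str) -> list[tuple[str, str]]:
    """Parse a text/event-stream body into (event_type, data) pairs."""
    events: list[tuple[str, str]] = []
    event, data = "message", []  # "message" is the SSE default event type
    for line in stream_text.splitlines():
        if not line:  # blank line dispatches the pending event
            if data:
                events.append((event, "\n".join(data)))
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
    return events
```

For A2A, each `data` payload would be a JSON-RPC result carrying a task status update or an artifact chunk; the stream ends when the task hits a terminal state.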


A2A and MCP: two different axes

The confusion here is understandable. Both involve AI agents and external communication. They solve different problems at different layers of the stack.

|                   | MCP                                                   | A2A                                         |
|-------------------|-------------------------------------------------------|---------------------------------------------|
| Created by        | Anthropic (Nov 2024)                                  | Google (Apr 2025)                           |
| Axis              | Vertical (agent to tools/resources)                   | Horizontal (agent to agent, peer-to-peer)   |
| Who's on each end | Model (client) + tool server                          | Two autonomous agents                       |
| Directionality    | One-directional capability exposure                   | Bidirectional task delegation               |
| Primary use case  | Giving an agent access to a database, API, filesystem | Delegating a sub-task to a specialist agent |

They are complementary. The typical production pattern looks like this:

An orchestrator agent (on LangGraph) receives a complex user request. Via A2A, it delegates "research this topic" to a specialist research agent (on Google ADK). The ADK agent internally uses MCP to call a web search tool and a database reader. It returns results to the orchestrator via A2A.

MCP handles the vertical connection between agent and tool. A2A handles the horizontal connection between agent and agent. Both protocols now sit under the same governance umbrella at the Linux Foundation's Agentic AI Foundation (AAIF).


Multi-agent patterns A2A enables

Orchestrator and specialists

The most common pattern. An orchestrator decomposes a complex request, reads Agent Cards to discover which specialists handle each sub-task, and dispatches A2A tasks to each. Specialists run independently and return Artifacts. The orchestrator aggregates.

Real example: a hiring manager agent receives a request to find candidates for a senior ML role. It reads Agent Cards, discovers a sourcing agent, an interview scheduling agent, and a background check agent. Tasks go out to each. Results come back. Report assembled.

Parallel execution with aggregation

Tasks go to multiple specialist agents simultaneously. Each runs independently. Results come back via SSE or polling. The orchestrator aggregates when all tasks hit terminal state.

A research orchestrator dispatches "search academic papers," "search news," and "search patents" to three different agents at once. Each returns an Artifact. Combined report assembled from the three.
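The fan-out shape maps directly onto async concurrency. In this sketch, `delegate` is a stub standing in for a real A2A `message/send` call plus waiting for the task's terminal state; the agent names are hypothetical.

```python
import asyncio

async def delegate(agent_name: str, query: str) -> dict:
    """Stub for one A2A delegation; a real version would POST to the
    agent's endpoint and await the task reaching a terminal state."""
    await asyncio.sleep(0)  # stands in for network I/O
    return {"agent": agent_name, "artifact": f"results for {query!r}"}

async def fan_out(query: str, agents: list[str]) -> list[dict]:
    """Dispatch the same query to every specialist concurrently,
    aggregating once all tasks complete. gather preserves input order."""
    return await asyncio.gather(*(delegate(a, query) for a in agents))

results = asyncio.run(fan_out(
    "solid-state batteries",
    ["papers-agent", "news-agent", "patents-agent"]))
print([r["agent"] for r in results])
# → ['papers-agent', 'news-agent', 'patents-agent']
```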

Capability routing

The orchestrator does not hardcode which agent handles which task type. It reads Agent Cards dynamically and routes based on declared skills. New specialist agents register themselves. Routing happens automatically at runtime based on skill definitions in the cards.

This is the agent marketplace model. It works today for teams that build an internal agent registry and let the orchestrator discover agents at runtime rather than at deploy time.

Human-in-the-loop via auth-required

When an agent needs human authorization before continuing, it transitions the task to auth-required. The calling system surfaces that to a human. The human approves. The task resumes.

You can also wire auth-required to a "human agent": an A2A server backed by a human approval queue rather than an LLM. From the orchestrator's perspective, it just delegated a task. The implementation on the other side is irrelevant.

Cross-organization deployment

A2A works across organizational boundaries. Two companies expose their agents as A2A servers. Authorized external agents call them using the authentication schemes declared in the Agent Card (OAuth 2.0, mTLS, API keys).

Tyson Foods and Gordon Food Service are running this in production. Their agents share product data and supply chain leads in real time across company lines via A2A.

Sequential pipelines

Agent A's output Artifact becomes Agent B's input. A2A supports DataPart and FilePart, so structured handoffs are straightforward. Each step depends on the previous step's output. Less parallelism, more correctness guarantees.
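The pipeline shape reduces to threading one agent's output artifact into the next agent's input. In this sketch each step is a plain callable standing in for an A2A delegation, and the artifacts are DataPart-style dicts; the step functions are hypothetical.

```python
def run_pipeline(steps, initial_data: dict) -> dict:
    """Feed each step's output artifact into the next step's input.

    Each callable stands in for one A2A delegation returning a
    DataPart-style dict as its artifact.
    """
    data = initial_data
    for step in steps:
        data = step(data)
    return data

# Hypothetical two-stage pipeline: extract entities, then summarize.
extract = lambda d: {**d, "entities": ["Tyson Foods"]}
summarize = lambda d: {**d, "summary": f"{len(d['entities'])} entity found"}

result = run_pipeline([extract, summarize], {"doc": "supply chain memo"})
```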


Security: the trust problem

A2A delegates authentication to standard web mechanisms. The Agent Card declares what security schemes the agent supports:

  • OAuth 2.0 - most common for enterprise deployments
  • OpenID Connect (OIDC) - for federated identity
  • API keys - simpler deployments
  • Mutual TLS (mTLS) - for high-security agent-to-agent communication

The client agent reads the Agent Card, picks the right scheme, authenticates, then sends tasks. That part of the design holds up.

The vulnerability is the Agent Card itself.

Agent Card spoofing

The client decides which remote agent to call and what to trust based on the Agent Card. If a rogue agent presents a crafted Agent Card with a description designed to manipulate the orchestrator's LLM-based routing logic, it can insert itself into the workflow.

Trustwave SpiderLabs demonstrated this in 2025. A rogue agent with an inflated card description exploited the orchestrator's LLM-based selection to route all tasks to itself. A prompt injection attack at the discovery layer.

Signed Agent Cards (v0.3+)

A2A v0.3+ added support for Agent Card signing. The card gets signed with the agent's private key. The client verifies the signature against the domain before acting on the card's contents. This closes the forgery attack.

The problem: signing is supported, not enforced. There's no protocol-level requirement to sign Agent Cards. An unsigned card is still valid per the spec. Implementations that skip signing have a real security gap.

For anything beyond internal development:

  • Sign Agent Cards
  • Validate that the Agent Card domain matches the service URL before trusting it
  • Use mTLS for transport-level security in addition to application-level auth
  • Maintain an allowlist of trusted remote agent domains for cross-org calls

Until Agent Card signing is mandatory in the spec, this stays an open attack surface.


Where the ecosystem stands

A2A launched in April 2025 with 50+ partners. By April 2026 that had grown to 150+ organizations, including every major cloud provider and enterprise software vendor.

Native support: Google ADK (Python and Go), Google Agent Engine, AWS Bedrock AgentCore.

Framework integrations: LangGraph, CrewAI, AutoGen, LlamaIndex, and Semantic Kernel all have A2A support.

Enterprise adopters: ServiceNow (built "AI Agent Fabric" on A2A), S&P Global Market Intelligence, Adobe, SAP.

The ACP merger: IBM's Agent Communication Protocol launched in March 2025 as a competing standard. By late 2025, the ACP team merged into A2A under the Linux Foundation. One less competing standard.

What's still missing:

  • Universal agent registry. A2A defines how to read an Agent Card. It does not define how you discover which agents exist in the first place. Each org maintains its own internal registry or hardcodes URLs.
  • Mandatory signing. Agent Card signing is recommended, not required. This needs to change.
  • Observability. No standard for tracing A2A task execution across agent boundaries. Distributed tracing for multi-agent workflows is still custom work per team.
  • Cost attribution. When Agent A calls Agent B which calls Agent C, there's no protocol-level mechanism for tracking cost across that chain.
  • Formal agent identity. PKI for agents is still ad-hoc. Who is this agent, really? Nobody has a clean answer yet.

Is A2A the MCP moment for agent-to-agent?

MCP launched in November 2024 and became the dominant agent-to-tool standard within about 12 months. A2A is following a similar arc for the agent-to-agent layer.

The structural conditions are the same: a clear, unsolved interoperability problem; a protocol designed to use the existing web stack (HTTP, JSON-RPC, OAuth) rather than invent new primitives; governance under a neutral foundation with buy-in from competing vendors. The ACP merger eliminated the only serious competing standard. That's significant.

The full stack is settling:

  • MCP - agent to tools and resources (vertical)
  • A2A - agent to agent (horizontal)
  • AG-UI - agent backend to human-facing frontend
  • Framework-internal coordination (LangGraph nodes, OpenAI handoffs) for within-harness orchestration

A2A is the interoperability layer that lets the framework-internal parts of different systems talk to each other. If you're designing a multi-agent system today, design for A2A compatibility from the start. Not because it's inevitable, but because every integration you build without it is custom glue that A2A will eventually replace.

If you're new to the agent harness concept and want to start from the foundation, the first post in this series covers what harnesses are and why they're the layer that actually runs your AI.


References and sources

  1. A2A Protocol Official Site - canonical spec home
  2. A2A Specification - full technical specification
  3. A2A Roadmap (March 2026) - official roadmap
  4. A2A and MCP Comparison (official) - from the A2A project team
  5. Google Developers Blog: A2A Announcement - original launch post
  6. Google Cloud Blog: Agent2Agent Protocol Upgrade - v0.3 release notes
  7. A2A GitHub Repository - source of truth, Apache 2.0
  8. Google ADK + A2A Integration Docs - ADK implementation guide
  9. IBM: What is A2A? - third-party technical overview
  10. Semgrep: A Security Engineer's Guide to A2A - security analysis
  11. Auth0: MCP vs A2A Guide - protocol comparison
  12. AWS Open Source Blog: Inter-Agent Communication on A2A - AWS perspective
  13. Microsoft Cloud Blog: Empowering Multi-Agent Apps with A2A - Microsoft integration
  14. AWS Bedrock AgentCore: A2A Protocol Contract - Bedrock implementation docs
  15. ArXiv: Secure Agentic AI with A2A - security research paper
  16. ArXiv: Survey of Agent Interoperability Protocols - MCP, ACP, A2A, ANP comparison
  17. ArXiv: Orchestration of Multi-Agent Systems - enterprise adoption research
  18. O'Reilly: Designing Collaborative Multi-Agent Systems with A2A
  19. Palo Alto Networks: A2A Protocol Security Guide
  20. Red Hat: How to Enhance A2A Security
  21. Google Codelabs: Purchasing Concierge A2A Example - hands-on A2A tutorial
  22. Protocol Version History
  23. The Register: Alphabet Soup of Agentic AI Protocols - industry overview
  24. NextPj: MCP vs A2A vs AG-UI Guide 2026 - protocol stack explainer

Rahul Kashyap is CTO & Co-founder at Designare Solutions and DeepStory, based in Bangalore.