April 19, 2026 · 9 min read

A2A Protocol: How AI Agents Will Talk to Each Other in Production


The Model Context Protocol solved agent-to-tool communication. The Agent-to-Agent Protocol (A2A) solves the harder problem: how does an agent built by one team, running on one framework, talk to an agent built by a different team, running a different framework, in a different cloud?

Google published the A2A specification in early 2025. By April 2026, more than 50 partner organizations are implementing it, Microsoft has committed to shipping A2A support in Agent Framework 1.1, and enterprise teams are running the first cross-vendor multi-agent deployments in production.

This is the protocol that turns isolated agents into networked agent systems. Here's exactly how it works, what it enables, and what you need to understand to build on it.


The Problem A2A Solves

Today's multi-agent systems are siloed by framework. An AutoGen agent can coordinate with other AutoGen agents. An ADK agent can coordinate with other ADK agents. A LangGraph agent works with other LangGraph agents. But cross-framework coordination — an ADK orchestrator calling a LangChain specialist, or a CrewAI agent delegating to a LlamaIndex retrieval agent — requires custom integration code at every boundary.

This matters because agent specialization creates natural framework diversity:

  • Data retrieval specialists often run LlamaIndex or Haystack (optimized for RAG)
  • Code execution agents often run Codex CLI or custom sandboxes
  • Research agents often run ADK or AutoGen (strong multi-agent coordination)
  • Domain specialists (legal, medical, financial) often run proprietary frameworks

A2A defines a standard protocol for these agents to call each other without framework-specific adapters.


The A2A Architecture: Four Core Concepts

1. Agent Cards

Every A2A-compatible agent publishes an Agent Card — a JSON document at a well-known URL (typically /.well-known/agent.json) that describes:

  • What the agent can do (capabilities, task types)
  • What inputs it accepts (schema, required fields)
  • What outputs it produces (response format, streaming support)
  • How to authenticate with it (OAuth2, API key, mutual TLS)
  • What it costs to call (optional pricing metadata)

Agent Cards make agent discovery machine-readable. An orchestrator can query /.well-known/agent.json on any A2A endpoint and know how to call it without prior knowledge of the agent's internal implementation.
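
As a sketch, here's what card-driven routing can look like in Python. The card below is a hypothetical example shaped like the fields listed above; the exact field names are illustrative assumptions, not the normative A2A schema.

```python
import json

# Hypothetical Agent Card, as an orchestrator might receive it after
# fetching /.well-known/agent.json. Field names are illustrative.
AGENT_CARD = json.loads("""
{
  "name": "research-specialist",
  "version": "1.2.0",
  "capabilities": ["web-research", "summarization"],
  "input_schema": {"type": "object", "required": ["query"]},
  "streaming": true,
  "auth": {"type": "oauth2"}
}
""")

def can_handle(card: dict, capability: str) -> bool:
    """Check an advertised capability before routing a task to the agent."""
    return capability in card.get("capabilities", [])
```

An orchestrator can run this check against any freshly fetched card, which is what makes discovery dynamic: routing decisions come from the card, not from hard-coded knowledge of the remote agent.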

2. Tasks

A2A's execution model is task-oriented, not request-response. When an agent calls another agent, it creates a Task — a persistent, addressable unit of work with an ID, status, and structured input/output.

The task lifecycle:

```
submitted → working → [input-required → working]* → completed | failed | cancelled
```

Tasks can be long-running. A task submitted to a research agent might take minutes to complete. The calling agent polls the task ID or subscribes to streaming updates, rather than waiting synchronously.
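
The lifecycle above can be sketched as a small transition table. This is a minimal sketch; the legal transitions are inferred from the diagram, and a real client should follow the spec's own state definitions.

```python
# Legal state changes, read off the lifecycle diagram above.
TRANSITIONS = {
    "submitted": {"working", "cancelled"},
    "working": {"input-required", "completed", "failed", "cancelled"},
    "input-required": {"working", "cancelled"},
}
TERMINAL = {"completed", "failed", "cancelled"}

def advance(state: str, new_state: str) -> str:
    """Validate a task state change against the lifecycle diagram."""
    if state in TERMINAL:
        raise ValueError(f"task already terminal: {state}")
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```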

3. Streaming with Server-Sent Events

A2A requires support for Server-Sent Events (SSE) on the streaming endpoint. As a task runs, the agent emits incremental artifacts — partial outputs, status updates, tool invocations — as SSE events.

This enables real-time observability across agent boundaries. An orchestrator watching a task on a remote agent can surface progress updates to end users in real time, even if the underlying agent is implemented in a completely different stack.
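
On the wire, SSE is a plain text framing of `event:` and `data:` lines separated by blank lines. A minimal parser, as a sketch (it ignores comments, `id:` fields, and retry handling that a production client would need):

```python
def parse_sse(stream: str) -> list:
    """Parse raw SSE text into (event, data) pairs."""
    events, event, data = [], "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            # A blank line terminates one event; "message" is the SSE default.
            events.append((event, "\n".join(data)))
            event, data = "message", []
    return events

sample = "event: status\ndata: working\n\ndata: done\n\n"
print(parse_sse(sample))  # [('status', 'working'), ('message', 'done')]
```

The event names an A2A agent emits (status updates, partial artifacts, tool invocations) ride inside this framing; the parser doesn't need to know anything about the agent producing them.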

4. Artifacts

When a task completes, its output is structured as Artifacts — typed, named outputs that the calling agent can consume programmatically. An artifact might be:

  • A text document with a specific format
  • A structured JSON object
  • A code file
  • A binary result (image, data file)

Artifacts decouple agent output from agent implementation. The calling agent works with the artifact type, not the internal representation the producing agent used.
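
A sketch of type-driven consumption, using a simplified stand-in for the protocol's artifact structure: the caller branches on the declared media type, never on the producing agent's internals.

```python
import json
from dataclasses import dataclass

@dataclass
class Artifact:
    """Simplified artifact: a named, typed output from a completed task."""
    name: str
    media_type: str
    content: bytes

def consume(artifact: Artifact):
    """Dispatch on the declared type, not on the producer's implementation."""
    if artifact.media_type == "application/json":
        return json.loads(artifact.content)
    if artifact.media_type.startswith("text/"):
        return artifact.content.decode("utf-8")
    return artifact.content  # binary results pass through untouched
```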


How Multi-Agent Workflows Change With A2A

Without A2A, a multi-agent pipeline looks like this:

```
Orchestrator (Python, ADK) → [custom adapter] → Specialist (JS, LangGraph)
```

The custom adapter encodes knowledge about both frameworks, has to be maintained independently, and breaks when either framework updates.

With A2A:

```
Orchestrator (any framework) → [A2A protocol] → Specialist (any framework)
```

The orchestrator queries the specialist's Agent Card, creates a Task via the standard HTTP API, and reads structured Artifacts when the task completes. The protocol handles discovery, authentication, and result serialization — no adapter code required.

The practical upside: you can mix the best-in-class agent for each role in your pipeline. Use LlamaIndex for retrieval, ADK for orchestration, a specialized proprietary model for domain expertise, and a sandboxed code execution agent for implementation. A2A makes this composition pattern production-viable without custom glue code.
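
The adapter-free call path can be sketched in a few lines: discover the card, submit a task, poll to completion, read artifacts. Here `http_get` and `http_post` stand in for any HTTP client, and the task-endpoint paths are illustrative assumptions rather than the normative A2A routes.

```python
def call_specialist(base_url: str, payload: dict, http_get, http_post) -> list:
    """Call a remote A2A agent without any framework-specific adapter."""
    card = http_get(f"{base_url}/.well-known/agent.json")
    # A real orchestrator inspects the card here (capabilities, auth, schema).
    assert "capabilities" in card
    task = http_post(f"{base_url}/tasks", {"input": payload})
    while task["status"] not in ("completed", "failed", "cancelled"):
        task = http_get(f"{base_url}/tasks/{task['id']}")  # poll by task ID
    if task["status"] != "completed":
        raise RuntimeError(f"task ended in state {task['status']}")
    return task["artifacts"]
```

Because the transport is injected, the same function works whether the specialist is LangGraph behind nginx or a proprietary service in another cloud; only the card and task endpoints matter.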


A2A vs MCP: Complementary Protocols

A common source of confusion is how A2A relates to MCP. They're complementary, not competing.

| Aspect | MCP | A2A |
| --- | --- | --- |
| Connects | Agents to tools, data sources, and services | Agents to other agents |
| Direction | Agent → Tool (agent is always the caller) | Bidirectional (either agent can orchestrate) |
| Statefulness | Stateless tool invocations | Stateful tasks with lifecycle |
| Discovery | Tool catalog from server | Agent Cards at well-known URLs |
| Streaming | Optional | Required |
| Governance | Linux Foundation (AAIF) | Google-led, 50+ partner orgs |

The production stack uses both: MCP for tool access within an agent, A2A for coordination between agents. Picture an ADK orchestrator that uses MCP tool servers to retrieve data while calling A2A-connected specialist agents for domain tasks.


Enterprise A2A in April 2026

Three significant A2A production deployments have been reported:

Google's enterprise pilot (undisclosed financial services firm): An ADK orchestrator routes compliance analysis tasks to A2A-connected specialist agents from three different vendors. The result: compliance review time reduced 70%, with full audit trail across agent boundaries because A2A task IDs propagate through the call chain.

AWS Bedrock AgentCore: Amazon's Agent Registry (launched April 13, 2026) uses A2A-compatible agent descriptions. Agents registered in the AgentCore catalog can be discovered and called by other agents through the registry, with CloudTrail audit trails capturing cross-agent task chains.

Microsoft Agent Framework 1.1 preview: Microsoft confirmed A2A support is the top priority for 1.1, expected Q2 2026. .NET and Python Agent Framework agents will publish Agent Cards and expose A2A task endpoints by default.


What You Need to Build With A2A Now

For teams building orchestrators:

  1. Implement Agent Card discovery — query /.well-known/agent.json before calling any A2A endpoint to understand capabilities and authentication
  2. Use task IDs for observability — propagate task IDs from parent to child agents so you can trace execution chains across boundaries
  3. Handle the `input-required` state — A2A tasks can pause and request additional context from the calling agent; build your orchestrator to handle this lifecycle state
  4. Consume Artifacts by type — don't parse raw response text; consume the structured Artifact schema the agent declares in its Agent Card
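
Item 3 above is the one most orchestrators get wrong, so here is a minimal sketch of handling the `input-required` pause: the remote task asks for more context, the caller answers, and the task resumes. `get_task`, `send_input`, and the field names are illustrative assumptions, not the normative API.

```python
import time

def run_to_completion(task_id, get_task, send_input, answer_fn,
                      poll_interval=1.0):
    """Drive a remote task to a terminal state, answering pauses as they come."""
    while True:
        task = get_task(task_id)
        if task["status"] == "input-required":
            # The remote agent paused and described what it needs.
            send_input(task_id, answer_fn(task["prompt"]))
        elif task["status"] in ("completed", "failed", "cancelled"):
            return task
        else:
            time.sleep(poll_interval)  # still working; poll again
```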

For teams exposing A2A endpoints:

  1. Publish a complete Agent Card — include capability descriptions detailed enough for an LLM orchestrator to understand when to call you
  2. Implement SSE streaming — required by the spec; don't ship a polling-only endpoint
  3. Version your Agent Card — changes to input schema or capability set should increment the version field so calling agents can adapt
  4. Include cost metadata — optional but strongly recommended; orchestrators use cost estimates to make routing decisions
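
From the caller's side, item 3 translates into a check before trusting a cached integration. A sketch, assuming semantic versioning in the card's `version` field where a major bump signals a breaking change to the input schema or capability set:

```python
def still_compatible(card_version: str, integrated_major: int) -> bool:
    """True if the card's major version matches what we integrated against."""
    major = int(card_version.split(".")[0])
    return major == integrated_major
```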

What This Means for AgenticNode

AgenticNode's workflow canvas is designed around the same principle A2A formalizes: agents are composable units with typed inputs, outputs, and execution state.

In the canvas today, each node represents an agent step with defined inputs and outputs. The execution trace surfaces task state, tool calls, and token consumption for every node in real time. Multi-provider model routing is built in.

A2A support will make it possible to drop any A2A-compatible external agent into a canvas workflow as a node — an external specialist agent would appear in the canvas with its capabilities pulled from its Agent Card, and its task lifecycle visible in the Glass Window execution trace.

The workflow composition model is already there. A2A is the protocol that connects it to the broader agent ecosystem.


Summary

A2A Protocol is the coordination layer for the multi-agent era:

  1. Agent Cards — machine-readable capability descriptions at well-known URLs enable dynamic agent discovery
  2. Tasks — stateful, long-running work units with lifecycle management replace stateless request-response
  3. SSE streaming — real-time progress updates across agent boundaries enable end-to-end observability
  4. Artifacts — typed, structured outputs decouple agent result consumption from agent implementation
  5. A2A + MCP — complementary: MCP for agent-to-tool, A2A for agent-to-agent
  6. Enterprise adoption is real — AWS, Microsoft, and Google all shipping A2A production support in Q2 2026

The agent coordination problem is solved at the protocol level. The workflow tooling layer is where the next differentiation happens.

Build your first agentic workflow

The visual workflow editor is live. Design, execute, and observe multi-agent pipelines — no framework code required.

Open Editor