MCP vs A2A vs Function Calling: AI Agent Protocol Comparison
Four protocols cover most of how AI agents reach the outside world in 2026: Model Context Protocol (MCP), Agent-to-Agent (A2A), OpenAI function calling, and plain REST. They are not interchangeable. This page compares them side by side, explains which solves which problem, and shows why most production AI stacks now run more than one.
The short answer
A 30-second comparison:
- MCP - one AI host talks to many external tools. Vendor-neutral, JSON-RPC 2.0. Use when a model needs to call out to services, read data, or run prompt templates.
- A2A - separate AI agents discover and talk to each other. Vendor-neutral, backed by AWS, Cisco, Google, IBM, Microsoft, Salesforce, SAP, ServiceNow. Use when multiple agents need to coordinate.
- OpenAI function calling - a single-vendor mechanism inside OpenAI APIs. The developer registers functions per request; GPT decides what to invoke. Works only against OpenAI APIs.
- REST APIs - the baseline. Works everywhere but requires the developer (or model) to know endpoint URLs, auth schemes, and response shapes up front. No discovery, no shared tool format.
MCP and A2A are complements, not rivals. Function calling and REST are baselines that MCP and A2A both improve on for specific cases. The interesting comparison is MCP vs A2A, because that is where most architectural questions actually live.
What each protocol actually is
Model Context Protocol (MCP)
Open protocol for exposing tools, data resources, and prompt templates to a single AI host. Introduced by Anthropic in late 2024, now multi-vendor. JSON-RPC 2.0 over stdio (local) or HTTP+SSE (remote). Three primitives: tools (functions the model invokes), resources (data the model reads), prompts (templates the host surfaces).
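The JSON-RPC framing is simple enough to show inline. Below is a minimal sketch of a `tools/call` request and the matching response; the method name and the `content` block shape come from the MCP spec, while the `get_weather` tool and its arguments are purely illustrative, not from any real server.

```python
import json

# A hypothetical MCP tool invocation. "tools/call" is the MCP method name;
# the "get_weather" tool and its arguments are illustrative only.
request = {
    "jsonrpc": "2.0",          # MCP is JSON-RPC 2.0 on the wire
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# What a server's reply looks like: a result carrying content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,                   # echoes the request id
    "result": {
        "content": [{"type": "text", "text": "18°C, overcast"}],
    },
}

wire = json.dumps(request)     # what actually crosses stdio or HTTP+SSE
print(wire)
```

The same envelope carries `tools/list` (discovery), `resources/read`, and the other primitives; only `method` and `params` change.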
MCP-aware hosts in 2026: Claude Code, Cursor, Windsurf, OpenCode (6.5 million monthly developers, opencode.ai), OpenClaw, Continue.dev, Zed, Cline, Visual Studio Code. A server you write once works in all of them.
For the protocol overview, see What is Model Context Protocol. For the server side, see What is an MCP server.
Agent-to-Agent (A2A)
Open protocol for how separate AI agents discover, advertise capabilities, and exchange messages. Backed by a coalition: AWS, Cisco, Google, IBM, Microsoft, Salesforce, SAP, ServiceNow. The protocol covers agent identity, capability descriptors, message formats, and security primitives.
A2A operates at a different layer from MCP. Where MCP makes one model more capable by giving it tools, A2A lets two agents collaborate by giving them a shared message format. Many production systems use both: each agent uses MCP for tool access internally, and A2A for coordination with other agents.
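The delegation decision this enables is easy to sketch. The descriptor below is an illustrative Python rendering of the kind of metadata an A2A agent advertises; the field names and the endpoint URL are simplified assumptions for the sketch, not copied from the spec.

```python
# An illustrative A2A-style capability descriptor. Field names are
# simplified assumptions, not spec-exact; the endpoint is hypothetical.
agent_card = {
    "name": "invoice-agent",
    "description": "Extracts line items from invoice PDFs",
    "endpoint": "https://agents.example.com/invoice",
    "capabilities": ["extract_line_items", "classify_vendor"],
}

def pick_agent(cards, needed_capability):
    """Delegation: choose the first agent advertising a capability."""
    for card in cards:
        if needed_capability in card["capabilities"]:
            return card["name"]
    return None

print(pick_agent([agent_card], "classify_vendor"))  # -> invoice-agent
```

The point is the standardisation: because every agent advertises capabilities in the same shape, the calling agent can make this choice without per-agent integration code.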
Bing search volume for "a2a protocol" reached 4.5K impressions per quarter as of week 19 of 2026 (May), with India recently overtaking the US as the top market for the query (1.7K vs 936 impressions). The ecosystem is still emerging, but the enterprise backing is unusual.
OpenAI function calling
A single-vendor mechanism inside OpenAI APIs. Developer registers function definitions in each request, GPT decides whether to invoke them, developer code runs the actual function. Returns a structured tool-call message; developer dispatches and feeds the result back into the next API call.
Function calling works only against OpenAI APIs. Functions are not discoverable across hosts; they have to be registered per request, per developer, per app. It predates MCP by about a year and the two mechanisms now coexist in OpenAI hosts (some apps use function calling for in-prompt tools and MCP for everything else).
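The per-request registration and dispatch loop looks like this. The `tools` schema below follows the Chat Completions shape; the `lookup_order` function is a hypothetical example, and the tool call is faked locally rather than fetched from the API, so the sketch stays self-contained.

```python
import json

# Per-request registration: this is the developer's burden that MCP's
# handshake-time discovery removes. The lookup_order tool is hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch an order by id",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

def lookup_order(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}  # stand-in logic

# The model replies with a structured tool call; we fake one here
# instead of hitting the API, then dispatch it as developer code must.
tool_call = {"name": "lookup_order",
             "arguments": json.dumps({"order_id": "A-17"})}

dispatch = {"lookup_order": lookup_order}
result = dispatch[tool_call["name"]](**json.loads(tool_call["arguments"]))

# The result goes back into the next API call as a "tool" role message.
tool_message = {"role": "tool", "content": json.dumps(result)}
print(tool_message["content"])
```

Every app repeats this registration-dispatch-feedback loop itself; nothing about `lookup_order` is visible to any other host or app.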
REST APIs
Not an AI protocol. The baseline that everything else compares against. REST works everywhere, has decades of tooling, and is fully understood. Where REST falls short for AI is discovery (the model has to know endpoint URLs and response shapes up front) and self-description (the model has no standardised way to learn what calls are available). MCP and A2A both add layers above REST to solve these.
Side-by-side comparison
The same dimensions across all four protocols:
Who talks to whom
- MCP: one AI host (with one model) to many external tool servers
- A2A: one AI agent to another AI agent (each may be a different host with a different model)
- Function calling: one model to one developer codebase, all in-process
- REST: any caller to any HTTP endpoint
Wire format
- MCP: JSON-RPC 2.0 over stdio or HTTP+SSE
- A2A: open spec, JSON-based, multiple transports (HTTP, gRPC, queues)
- Function calling: JSON inside OpenAI request and response bodies
- REST: HTTP + whatever body format the API picks (JSON, XML, Protobuf)
Discovery
- MCP: server announces tools at handshake; host registers them with the model automatically
- A2A: agent capability descriptors, advertised via the protocol's discovery layer
- Function calling: developer registers function definitions per request
- REST: none built in. OpenAPI / Swagger is a common addition but not part of REST itself.
Vendor scope
- MCP: vendor-neutral. Anthropic, Microsoft, Google, AWS, Cloudflare, Figma, Zerodha, Stripe all ship MCP servers; same protocol across all of them.
- A2A: vendor-neutral. AWS, Cisco, Google, IBM, Microsoft, Salesforce, SAP, ServiceNow all back the spec.
- Function calling: OpenAI only. Other model vendors have their own non-interoperable equivalents.
- REST: universal but not standardised across APIs.
Process model
- MCP: each server runs as a separate process. A misbehaving server cannot crash the host.
- A2A: each agent is a separate process (often a separate machine or organisation).
- Function calling: in-process. Function code runs in the developer's app context.
- REST: any deployment shape; the protocol is indifferent.
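MCP's process isolation is concrete: the host spawns each server as a child process and exchanges newline-delimited JSON-RPC over its stdin/stdout. The sketch below stands in a one-line echo child so it runs anywhere; a real host would launch an actual server command instead, and a crash in the child leaves the host process untouched.

```python
import json
import subprocess
import sys

# Minimal demonstration of MCP's stdio process model. The child here just
# echoes a canned result; a real host would spawn a real server binary.
child_code = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'],"
    " 'result': {'ok': True}}))\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
out, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(out)
print(response["result"])  # a crashing child cannot take the host down
```

Contrast with function calling, where the tool code runs inside the developer's own process and shares its fate.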
When to use which
Use MCP when
- A single AI host (Claude Code, Cursor, Windsurf, OpenCode) needs to call out to external services or read data
- You want the same tool to work in multiple hosts without rewriting
- You want process isolation between the model and the tool layer
- You are exposing a service to the AI tool ecosystem, as vendors now do when they ship an "official MCP server" for their product
Use A2A when
- Two or more AI agents (each potentially with a different host or model) need to discover each other and exchange messages
- You are building a multi-agent system where agents may live in different organisations or trust boundaries
- You need standardised capability advertisement so a calling agent can decide which other agent to delegate to
- Enterprise integration matters and the vendor coalition behind A2A is the relevant one for your buyers
Use OpenAI function calling when
- You are building only against OpenAI APIs and have no plan to support Claude, Gemini, or local models
- You want a fully in-process tool layer with no extra running servers
- The functions are small, app-specific, and not worth packaging as a separate MCP server
Use REST when
- Either side of the call is a non-AI client (humans browsing, web apps, scripts)
- You already have an existing API and adding MCP or A2A is not justified by usage volume yet
- The interaction is one-shot rather than discovery-driven
MCP and A2A together: the typical production stack
The interesting architectural question is not "MCP or A2A". It is "how do MCP and A2A fit together". The typical production stack in 2026 looks like this:
- Each AI agent runs in its own host (Claude Code, Cursor, OpenCode, etc.) with its own MCP servers attached for tool access
- Agents that need to talk to other agents do so over A2A, with capability descriptors advertised through A2A's discovery layer
- Cross-agent file transfer, scheduling, and coordination ride on A2A or on specialised transports built on top
- Where audit trails matter, A2A messages get logged at the host boundary; MCP tool calls get logged inside each host
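The layering above can be rendered as a toy sketch: each agent fulfils incoming A2A-style messages by dispatching to its own MCP-attached tools. Every class and field name here is illustrative, invented for the sketch rather than taken from either spec.

```python
# Toy rendering of the two-layer stack: MCP inside each agent,
# A2A-style messages between agents. All names are illustrative.
class Agent:
    def __init__(self, name, mcp_tools):
        self.name = name
        self.mcp_tools = mcp_tools        # inner layer: MCP tool access

    def call_tool(self, tool, **args):    # MCP: host -> tool server
        return self.mcp_tools[tool](**args)

    def handle_message(self, message):    # A2A layer: agent -> agent
        if message["capability"] in self.mcp_tools:
            result = self.call_tool(message["capability"],
                                    **message["args"])
            return {"from": self.name, "result": result}
        return {"from": self.name, "error": "capability not offered"}

reviewer = Agent("reviewer", {"lint": lambda text: text.strip().lower()})

# Another agent delegates over the A2A layer; the reviewer fulfils
# the request internally through its MCP tool.
reply = reviewer.handle_message(
    {"capability": "lint", "args": {"text": "  Hello  "}})
print(reply)
```

The boundary in `handle_message` is also where the audit logging mentioned above naturally sits: cross-agent messages are visible there, while tool calls stay inside each agent.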
AgentDrop is one example of a system that lives in both layers. It ships as an MCP server (so any host can call its send_file, check_inbox, and download_transfer tools) while transporting files between separate agents on different machines or accounts. The encryption layer (X25519 ECDH + AES-256-GCM) is documented at encrypted file transfer for AI agents. For a worked Claude Code to Cursor handoff example, see MCP server for file transfer.
Vendor adoption status (May 2026)
A snapshot of which protocols the major vendors back today.
- Anthropic: MCP (creator + reference implementations)
- OpenAI: function calling (own vendor mechanism) + MCP (now supported in their API as a tool layer)
- Google: MCP (Cloud, Stitch) + A2A (founding backer) + function calling-equivalent in Gemini API
- Microsoft: MCP (Azure, VS Code, Visual Studio, three SERP slots for "mcp server" on Bing) + A2A (founding backer)
- AWS: MCP (server hit GA May 2026) + A2A (founding backer)
- Cloudflare, Stripe, Figma, Zerodha: MCP only
- IBM, Cisco, Salesforce, SAP, ServiceNow: A2A primarily, with MCP-compatible servers shipping case-by-case
The pattern: cloud platforms back both. Vertical SaaS vendors lead with MCP. Enterprise software backs A2A first. The two protocols have not split into rival camps because they solve different problems.
What this means for buying decisions
If you are evaluating which protocols to adopt, the practical summary:
- Adopt MCP today. The ecosystem is mature. Every major host supports it. Servers are MIT-licensed. The cost of integration is low and the lock-in is near zero.
- Watch A2A and adopt as multi-agent systems become a requirement. The vendor coalition is strong. The protocol is open. The use case (multi-agent coordination) is real but still emerging in production.
- Avoid building OpenAI-function-calling-only. Even inside OpenAI hosts, MCP is now the better-supported mechanism. Function calling still works but the trajectory is toward MCP parity across vendors.
- Keep REST as the boring foundation. Most APIs should still be REST. MCP and A2A are layers on top, not replacements.
FAQ
What is the difference between MCP and A2A?
MCP standardises how a single AI host talks to external tools, data sources, and prompt templates. A2A standardises how separate AI agents discover and talk to each other. They solve adjacent problems. MCP makes a model more capable; A2A lets agents collaborate. Most production stacks use both.
Can I use both MCP and A2A in the same system?
Yes. The two protocols operate at different layers and do not conflict. A typical stack: each agent uses MCP servers for its own tool access, and A2A for coordinating with other agents. AgentDrop is an example of a system that runs as both an MCP server (for in-host tool calls) and a cross-agent transport (for file transfer between separate agents).
Is A2A replacing MCP?
No. A2A and MCP are complementary, not competing. The same vendors backing A2A (AWS, Cisco, Google, IBM, Microsoft, Salesforce, SAP, ServiceNow) also support MCP. Both protocols are open and vendor-neutral.
Does A2A work without MCP?
Technically yes. A2A defines its own discovery, capability advertisement, and message exchange. In practice, the agents that participate in A2A systems are usually the same ones using MCP for tool access, so the two appear together.
Which protocol does Claude Code use?
Claude Code uses MCP for external tool access. It does not natively speak A2A as of May 2026, but it can communicate with other AI agents through MCP servers that bridge to A2A or to direct transports. AgentDrop is an example of an MCP server that handles cross-agent communication. For Claude Code MCP setup specifically, see Claude Code MCP.
What is the relationship between MCP and OpenAI function calling?
OpenAI function calling is a single-vendor mechanism inside OpenAI APIs. The developer registers function definitions with each request, GPT decides whether to invoke them, the developer code executes. MCP is vendor-neutral with separate processes per server, formal handshake, and standardised primitives. Function calling works only against OpenAI APIs; MCP works in Claude Code, Cursor, Windsurf, OpenCode, and any other MCP-aware host.
Where to go next
Related guides on agent-drop.com:
- What is Model Context Protocol - the MCP overview if you have not read it yet
- What is an MCP server - the server-side mechanics with real examples
- Claude Code MCP - Claude Code config layer and the most-used MCP servers
- MCP server documentation index - every MCP doc in one place
- MCP server for file transfer - worked example of one MCP server end-to-end
To run AgentDrop as an MCP server in any MCP-aware host, the quickstart gets you running in about 60 seconds. Free tier: 50 transfers and 50 MB files per month, no card required.