What is Model Context Protocol (MCP)?

Model Context Protocol (MCP) is the open standard that lets AI assistants call external tools, read structured data, and use prompt templates over a common wire format. Anthropic introduced it in late 2024 and the spec is now multi-vendor: Microsoft, Google, AWS, Cloudflare, Figma, and Zerodha all ship official MCP servers as of 2026.


The short answer

MCP is a protocol. It defines how an AI host (Claude Code, Cursor, Windsurf, OpenCode, OpenClaw, and others) talks to external programs that provide capabilities to the language model. Those external programs are called MCP servers. The host launches each server, asks it what it can do, and routes calls from the model to whichever server claims the matching tool name.

The full spec lives at modelcontextprotocol.io. Reference implementations in TypeScript and Python are open source and MIT-licensed at github.com/modelcontextprotocol.

For a focused look at the server side specifically, see What is an MCP server. This page is about the protocol itself: what it standardises, what wire format it uses, and how it differs from other ways AI assistants reach external systems.


What problem MCP solves

Language models are useful inside their context window. They become dramatically more useful when they can reach outside that window: read a database, browse a webpage, send a message, transfer a file, query your filesystem. Before MCP, every host application had its own way to expose those capabilities to the model.

A tool you wrote for Cursor would not work in Claude Code without rewriting. A tool you wrote for Windsurf would not work in OpenClaw. Every host required per-host integration code. The result: a fragmented ecosystem where each AI assistant had its own incompatible tool layer.

MCP standardises the contract. A tool authored once as an MCP server works in every MCP-aware host. The host handles tool discovery, the model handles tool selection, and the server handles execution. None of them need to know how the others are implemented.

For developers in India and across emerging markets, this is especially significant. Many local services (Zerodha's Kite trading API, regional payment processors, language-specific tooling) ship their own MCP servers and inherit access to every MCP-aware AI tool simultaneously. Build once, run in every host.


The protocol primitives

The spec defines three things an MCP server can expose to the host:

Tools

Functions the model can decide to call. Each tool has a name, a human-readable description, a JSON Schema describing its input, and an implementation. The model selects a tool when its description matches the user's intent. Tools are the most-used primitive in production today.
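To make this concrete, here is a minimal sketch of what a single tool definition looks like inside a tools/list response. The send_file tool, its description, and its input fields are hypothetical; the name/description/inputSchema shape follows the reference implementations, but treat the details as illustrative.

```python
import json

# Hypothetical tool definition, shaped like one entry in a
# tools/list response: a name, a description, and a JSON Schema
# for the arguments the model must supply.
send_file_tool = {
    "name": "send_file",
    "description": "Send a local file to another machine by recipient ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Local file path"},
            "recipient": {"type": "string", "description": "Recipient ID"},
        },
        "required": ["path", "recipient"],
    },
}

# The host forwards this list to the model, which matches the
# description against the user's intent before issuing a call.
tools_list_result = {"tools": [send_file_tool]}
print(json.dumps(tools_list_result, indent=2))
```

The description field does real work here: it is the only signal the model has when deciding whether this tool fits the user's request.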

Resources

Data the model can read but not invoke. Addressed by URI: a file path, a database table identifier, an API endpoint. The host fetches resources on the model's behalf and hands the contents back as context. Useful for putting reference material in front of the model without spending a tool call.
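A hedged sketch of what one resource read might look like on the wire: the host sends a resources/read request naming a URI, and the server replies with the contents for the host to place in context. The file path and CSV payload are invented for illustration.

```python
# Hypothetical resources/read exchange. The host asks for a URI...
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "file:///tmp/report.csv"},
}

# ...and the server returns the contents, keyed by the same URI,
# ready to be handed to the model as context.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "contents": [
            {
                "uri": "file:///tmp/report.csv",
                "mimeType": "text/csv",
                "text": "region,revenue\nsouth,1200\n",
            }
        ]
    },
}

# JSON-RPC replies are matched to requests by id.
assert response["id"] == request["id"]
```

Note that the model never invokes anything here: the host performs the fetch, which is what distinguishes a resource from a tool.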

Prompts

Reusable prompt templates the host can offer to the user. They show up as slash commands or quick actions in the host UI. They are less common than tools or resources today, but the spec supports them and adoption is growing as hosts ship richer command palettes.

A single MCP server can expose any combination. Most production servers ship tools first, add resources second, add prompts when the use case calls for it.


Wire format and architecture

MCP is JSON-RPC 2.0. No custom binary protocol, no Protobuf, no REST. Plain JSON moving over standard transports.

Two transports are defined in the spec: stdio, where the host launches the server as a child process and exchanges newline-delimited JSON-RPC messages over stdin and stdout, and Streamable HTTP, where the server runs as a standalone (possibly remote) service and receives messages over HTTP, with optional streamed responses.

The architecture has three roles: the host (the AI application the user runs), the client (the protocol connection the host maintains, one per server), and the server (the external process that exposes tools, resources, and prompts).

One host can run many servers in parallel. Each server runs in its own process, so a misbehaving server cannot crash the host or affect other servers. Hosts can additionally restrict what each server may read or write at the OS level.
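Under stated assumptions (the stdio transport with newline-delimited messages, an example protocol version string, and a hypothetical send_file tool), the host side of a session can be sketched as three JSON-RPC messages: initialize, tools/list, then tools/call.

```python
import json

def frame(msg: dict) -> bytes:
    # stdio transport: one JSON-RPC message per line on stdin/stdout
    return (json.dumps(msg) + "\n").encode()

# 1. The host opens the session and states its protocol version
#    (the version string here is an example, not authoritative).
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-host", "version": "0.1"},
    },
}

# 2. The host discovers what the server offers.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. The model selects a tool; the host relays the call with the
#    arguments the model produced. Tool name and arguments are invented.
call = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {
        "name": "send_file",
        "arguments": {"path": "/tmp/a.txt", "recipient": "bob"},
    },
}

wire = frame(initialize) + frame(list_tools) + frame(call)
print(wire.decode())
```

Everything above is plain JSON over a pipe, which is why MCP servers are easy to write in any language that can read stdin and print to stdout.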


MCP versus other agent protocols

MCP versus OpenAI function calling

Function calling is a single-vendor mechanism inside OpenAI's API. The developer registers function definitions with each request, GPT decides whether to invoke them, and the developer's code runs the functions. It works only against OpenAI APIs. Functions are not discoverable across hosts.

MCP is vendor-neutral. Every MCP-aware host shares one tool format. A server you write today works in Claude Code, Cursor, OpenCode, and any future MCP host without modification. Tools are discoverable at handshake, not registered per-request.

MCP versus A2A protocol

A2A (Agent-to-Agent) is a separate emerging protocol for how independent AI agents discover and talk to each other, backed by AWS, Cisco, Google, IBM, Microsoft, Salesforce, SAP, and ServiceNow. For the side-by-side breakdown, including OpenAI function calling and REST as baselines, see agent protocol comparison.

The short version: MCP makes a single AI host more capable, while A2A lets two AI agents collaborate. Many systems will speak both. AgentDrop is an example: it ships as an MCP server (so a Claude Code or Cursor user can send files via tool calls) while transporting those files between discrete agents on different machines.

MCP versus REST APIs

REST APIs work fine for one-off model calls. Where MCP wins is repeated, cross-host, self-describing tool use. A server announces what it does; the model picks tools without knowing implementation details; new tools added to the server appear automatically next session. REST requires the developer (or the model) to know endpoint URLs, auth schemes, and response shapes up front.


Who actually ships MCP servers

As of May 2026, MCP has crossed from emerging standard to enterprise mainstream, with official servers shipping from Microsoft, Google, AWS, Cloudflare, Figma, and Zerodha, among others.

Beyond the big vendors, the community has shipped MCP servers for almost every common dev tool: Playwright (browser automation), Slack, Reddit, Stripe, Mem0 (long-term memory), Matrix (semantic memory for Claude Code), and AgentDrop (encrypted file transfer between agents). For a curated directory grouped by category, see MCP servers directory.


A short history

Anthropic introduced MCP in late 2024, framing it as the missing layer between language models and external tools. The first reference implementations covered filesystem, GitHub, Postgres, and SQLite access. Claude Desktop and Claude Code shipped MCP support out of the gate.

Through 2025, third-party MCP servers proliferated. Cursor and Windsurf added native MCP support. By early 2026, OpenClaw and OpenCode entered with their own MCP-aware clients, and major cloud vendors began shipping official servers. The protocol moved from "Anthropic standard" to "industry standard" in roughly 18 months.


FAQ

Who created the Model Context Protocol?

Anthropic introduced MCP and published the open specification in late 2024. The protocol is now an open standard maintained at modelcontextprotocol.io with multi-vendor adoption: Microsoft, Google, AWS, Cloudflare, Figma, and Zerodha all ship official MCP servers as of 2026.

Is Model Context Protocol open source?

Yes. The specification is open and free to implement. Reference implementations in TypeScript and Python are MIT-licensed and live at github.com/modelcontextprotocol. Any host application or server can speak the protocol with no licensing fee.

How is MCP different from OpenAI function calling?

OpenAI function calling is a single-vendor, in-prompt mechanism for letting GPT models invoke functions. MCP is a vendor-neutral wire protocol with a separate process per server, formal handshake, and standard primitives (tools, resources, prompts). MCP servers work in any MCP-aware host. Function-calling configs are specific to OpenAI APIs.

How is MCP different from A2A protocol?

MCP standardises how an AI host talks to external tools. A2A (Agent-to-Agent) standardises how separate AI agents talk to each other. They solve adjacent problems. AgentDrop is one example of a system using both: it ships as an MCP server (so any host can call its tools) and supports cross-agent file transfer on top.

What hosts support Model Context Protocol?

Major hosts as of 2026: Claude Code, Cursor, Windsurf, OpenClaw, OpenCode (6.5 million monthly developers), Continue.dev, Zed, and Cline. Microsoft Visual Studio and VS Code both ship MCP integration. The spec is vendor-neutral so any client can implement it.

Why does MCP exist when REST APIs already work?

REST APIs require the model (or its developer) to know the endpoint, the auth, and the schema. MCP lets a server announce itself to the host on startup, hand the model a self-describing tool list, and run in process isolation. The result is a tool ecosystem that works across hosts without per-host integration code.


Where to go next

Related guides on agent-drop.com: What is an MCP server, the agent protocol comparison, and the MCP servers directory.

For the canonical specification, visit modelcontextprotocol.io. To run AgentDrop as an MCP server in Claude Code, Cursor, Windsurf, or OpenCode, the quickstart gets you running in about 60 seconds. Free tier: 50 transfers and 50 MB files per month, no card required.