What is an MCP server?

An MCP server is a small program that exposes tools, resources, and prompts an AI assistant can call. The host application (Claude Code, Cursor, Windsurf, OpenClaw, and other Model Context Protocol clients) launches the server on startup, registers its capabilities, and routes function calls from the language model to whichever server claims the matching tool name.


The short answer

An MCP server is a separate process the AI host spawns. The server announces what it can do (its tools, its readable resources, its prompt templates) at handshake. The host registers those capabilities with the language model. When the model decides to call a tool, the host routes the call to the right server, the server executes, the response goes back to the model.

The protocol that ties this together is the Model Context Protocol, MCP for short. Anthropic published the spec at modelcontextprotocol.io and any host or server can implement it. MCP is open and vendor-neutral.

The reason the protocol exists at all: language models keep needing to reach into external systems (filesystems, databases, browsers, APIs), and every host application was inventing its own way to expose tools to the model. MCP standardises the contract so a tool written once works in every MCP-aware host.


How an MCP server actually works

The end-to-end mechanics when you launch an MCP-aware host:

  1. Host reads the MCP config file (mcp.json for Claude Code, equivalent settings panel for Cursor or Windsurf, plugin manager for other hosts).
  2. For each declared server entry, the host spawns the process with the configured command and environment variables.
  3. Host sends an initialize request over the server's stdin, negotiating protocol version and capabilities.
  4. Server responds with its declared tools, resources, and prompts.
  5. Host registers those with the language model so the model knows what it can call.
  6. When the model decides to call a tool, the host writes the call to the server's stdin as a JSON-RPC request.
  7. Server runs the tool, writes the result to stdout as a JSON-RPC response.
  8. Host hands the result back to the model, which continues the conversation with the new information in context.
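The traffic in steps 6 and 7 can be sketched in a few lines of Python. The `tools/call` method name and the `jsonrpc`/`id`/`result` envelope come from the MCP spec; the `read_file` tool and its arguments are hypothetical:

```python
import json

# Step 6: the host writes a tool call to the server's stdin as a
# JSON-RPC 2.0 request. Tool name and arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "README.md"},
    },
}

# Step 7: the server runs the tool and writes a response with the
# matching id to stdout.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "# My project\n..."}],
    },
}

# Each message is serialized as a single JSON object on the stream.
wire = json.dumps(request)
print(wire)
```

The `id` field is what lets the host pair a response with the request it sent, since multiple calls can be in flight on the same pipe.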

The wire format is JSON-RPC 2.0 over stdio for local servers (the common case). Remote MCP servers use HTTP plus SSE for streaming. There is no custom binary protocol, no Protobuf, no REST. Plain JSON over standard streams.


What an MCP server can expose

Three primitives in the spec, in order of how often they get used:

Tools

Functions the model decides to call. Each tool declares a name, a human-readable description, and a JSON Schema for its input; the server supplies the implementation. The model picks tools by matching their descriptions against the user's intent.

Examples: read_file, query_database, send_email, send_file, browser_navigate. The model has no idea how a tool is implemented, only what it does. That separation is a security feature: the host controls which tools the model may invoke, and the server controls what each tool does when invoked.
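Concretely, a tool declaration is just data. This sketch shows one entry as a server might return it in a tools/list response; the field names (`name`, `description`, `inputSchema`) follow the MCP spec, while the `query_database` tool itself is illustrative:

```python
# One tool entry from a hypothetical server's tools/list response.
# The model only ever sees this declaration, never the implementation.
tool = {
    "name": "query_database",
    "description": "Run a read-only SQL query and return rows as JSON.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {
                "type": "string",
                "description": "The SELECT statement to run.",
            },
        },
        "required": ["sql"],
    },
}
```

The description carries most of the weight: it is the only signal the model has for deciding when this tool matches the user's request.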

Resources

Data the model can read but not invoke. Resources are addressed by URI (file paths, database table identifiers, API endpoints) and the host fetches them on the model's behalf. Useful for handing the model context it should consider without spending a tool call to read it.

Prompts

Reusable prompt templates the host can offer to the user. These show up as slash commands or quick actions in the host UI. Less commonly used than tools or resources right now, but the spec supports them.

Most MCP servers in production primarily expose tools. Resources are growing. Prompts are still niche.


Real-world MCP servers, by category

  - Filesystem and local files
  - Code repositories and version control
  - Databases
  - Browser automation
  - Cloud and infrastructure
  - Agent-to-agent communication
  - Memory and persistent context

For the longer curated list, see our MCP servers directory.


How to add an MCP server to your host

Every MCP-aware host accepts the same JSON config shape. The host launches each declared server on startup, asks for its tools, and exposes those tools to the model.

{
  "mcpServers": {
    "agentdrop": {
      "command": "npx",
      "args": ["-y", "agentdrop@latest"],
      "env": {
        "AGENTDROP_API_KEY": "agd_..."
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}

Drop this into your host's MCP config (mcp.json for Claude Code, the equivalent settings panel for Cursor or Windsurf).

The host launches both servers on next start, registers their tools, and the model picks them up. No further glue code on your side.


MCP server vs traditional API

People sometimes ask: why not just give the model an HTTP client and let it call your API directly? Two reasons MCP wins for repeated tool use.

Discovery and self-description. A traditional API requires the model (or its developer) to know the endpoint URL, the auth scheme, the request schema, the response shape. An MCP server announces itself to the host on startup. The host hands the model a curated tool list with descriptions. The model picks tools without knowing implementation details. New tools added to the server show up automatically next session.

Process isolation. An MCP server runs in its own process. It can crash, get rate-limited, time out, get killed and restarted, all without affecting the host or the conversation. A host that sandboxes its servers at the OS level can also limit what each one reads or writes, so a misbehaving server cannot leak data it was never granted access to.

For one-off API calls during a single conversation, just calling the API works fine. For tools the model uses repeatedly across sessions, MCP servers are the better pattern.


FAQ

What languages can I write an MCP server in?

Any language. The Model Context Protocol is just JSON-RPC over stdio or HTTP. Reference implementations exist in TypeScript and Python (Anthropic). Community implementations cover Go, Rust, Ruby, and other languages.

Do MCP servers need to be public?

No. Most MCP servers run locally on your machine, spawned by the host application as a child process. Remote MCP servers exist over HTTP + SSE but are the minority of deployments.

Is MCP only for Anthropic and Claude?

No. The Model Context Protocol is an open standard. Cursor, Windsurf, OpenClaw, Continue.dev, and other hosts all implement it. Google shipped an official Stitch MCP server in April 2026. Microsoft ships Azure MCP. The protocol is vendor-neutral.

Can one MCP server talk to another MCP server?

Yes. AgentDrop is built on this pattern: one agent's MCP server sends an encrypted file to another agent's MCP server through end-to-end encrypted transport. Multi-server graphs and inter-agent flows are valid uses of the protocol.

How do MCP servers handle authentication?

The server reads credentials from environment variables that the host config passes in at spawn time. There is no shared auth layer across servers. Each server handles its own auth flow (API key, OAuth, or whatever the backend service requires).
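Inside the server, that is an ordinary environment read. A minimal sketch, reusing the AGENTDROP_API_KEY variable from the config example earlier; the first line only simulates what the host does when it spawns the process:

```python
import os

# Simulate the host injecting the "env" block from its config into the
# child process's environment at spawn time (the host does this for you).
os.environ.setdefault("AGENTDROP_API_KEY", "agd_example")

# The server side: read the credential like any process would.
api_key = os.environ["AGENTDROP_API_KEY"]
```

A real server would fail fast with a clear error if the variable is missing, since the host config is the only place it can come from.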

Are MCP servers safe to install?

The protocol itself is safe. The risk is the same as any tool ecosystem: a malicious server can execute code with whatever access the host grants it. The Bawbel security scanner found that 22% of MCP servers on Smithery had at least one vulnerability (tool description injection, output exfiltration, PII leakage), and CVE records track these. Vet servers before installing them.


Where to go next

Related guides are available on AgentDrop.

For the canonical protocol spec, see the official site at modelcontextprotocol.io.

To try AgentDrop as your first MCP server, the quickstart gets the server running in about 60 seconds. Free tier covers 50 transfers and 50 MB files per month, no card required.