MCP server for file transfer between AI agents

The Model Context Protocol (MCP) gives AI agents a way to call tools. It does not give them a way to exchange files. AgentDrop is an MCP server that fills that gap. It adds encrypted file transfer between agents running in Claude Code, Cursor, Windsurf, or any MCP host, without changing the MCP UX or the model's tool-calling experience.


What is an MCP server?

An MCP server is a small program that exposes tools an AI assistant can call. The host (Claude Code, Cursor, Windsurf, OpenClaw) launches the server on startup, registers its tools, and routes function calls from the language model to whichever server claims the matching tool name. AgentDrop is the MCP server that extends that reach to encrypted file transfer between agents, including agents running in different MCP hosts on different machines.

For the full definition, mechanics, and examples by category, see What is an MCP server.


What MCP solves, and what it leaves on the table

MCP (the Model Context Protocol) is the standard Anthropic released for connecting AI assistants to external tools. It's narrow on purpose. Each MCP server exposes a set of named functions with typed arguments. The host application - Claude Code, Cursor, Windsurf, and others - lets the user enable servers and exposes their function calls to the model. The model decides when to call a function. The host runs the call, returns the result. The model continues.

What MCP is excellent at:

  - Giving the model a uniform way to call named functions with typed arguments
  - Letting one server work unchanged in every MCP host
  - Keeping the user in control of which servers are enabled and what the model can call

What MCP is silent on:

  - How two agents in different hosts find and trust each other
  - How a payload too large for the context window moves between them
  - How a tool result reaches an agent on another machine, possibly later

That last gap is the one we care about. MCP says "here's how an LLM in Claude Code calls a function." It doesn't say "here's how that function returns a 200 MB PDF that the LLM running in Cursor across the office can pick up tomorrow."


The four wrong ways to move files inside MCP

Option A: Base64-encode the file in the JSON response

Hits the LLM's context window in seconds. A 5 MB PDF base64-encoded is roughly 6.7 MB of text, well over a million tokens. You'll exhaust the model's context on the file payload itself, leaving no room for the actual conversation. If the file exceeds the model's max output tokens, it can't even return the file in one response - you're into multi-call streaming territory just to ferry bytes the model never needed to read.
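
The arithmetic is unforgiving. A quick sketch of the base64 overhead (the 4/3 expansion is standard base64; the token estimate assumes a typical ~4 characters per token):

```typescript
// Rough cost of Option A: base64 maps every 3 input bytes to 4 output
// characters (with padding), so a file grows by 4/3 before it even
// reaches the tokenizer.
function base64Size(bytes: number): number {
  return Math.ceil(bytes / 3) * 4;
}

const pdfBytes = 5 * 1024 * 1024; // the 5 MB PDF from above
const chars = base64Size(pdfBytes);
const mb = chars / (1024 * 1024);
const roughTokens = chars / 4; // assumption: ~4 chars per token

console.log(mb.toFixed(1)); // "6.7" MB of base64 text
console.log(Math.round(roughTokens)); // ~1.7 million tokens
```

Whatever tokenizer you assume, the payload alone dwarfs any model's context window.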

Option B: Save the file locally, return the path

Works only when both agents share a filesystem. In practice, that's rare. Most users have one Claude Code instance per machine; the user running Cursor on a different laptop sees a different filesystem. Even on one machine, MCP servers running under different user accounts have different views of disk. You end up coordinating over a shared volume that breaks the moment one side moves.

Option C: Upload to S3, return a presigned URL

You've now leaked the URL into the LLM provider's request log (OpenAI, Anthropic, Google). The model sees the URL. The model's response containing the URL is logged again by your observability stack. And the URL has to stay valid long enough to be useful, which means it stays valid long enough to be exfiltrated. We covered this attack vector at length on our encrypted file transfer page.

Option D: Build a custom protocol on top of MCP

Now every tool in your agent stack needs to understand your custom protocol. You've fragmented the ecosystem you went to MCP to consolidate. The point of MCP is that one tool works in every host; the moment you bolt on a custom side protocol, you're back to N-by-M integration work.


What the AgentDrop MCP server adds

We ship an MCP server (npm: agentdrop) that exposes file-transfer tools to any MCP host. From the LLM's perspective, they're just tools, called the same way as any other MCP tool. From the user's perspective, the MCP host config gains one entry. From the protocol's perspective, nothing changes - we're a normal MCP server, not an extension or a fork.

The tools the LLM sees:

  - send_file - encrypt one or more local files and send them to a named agent
  - download_transfer - fetch and decrypt an incoming transfer by its ID
  - an inbox check for listing transfers waiting to be picked up
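
In tool-definition form, the surface might look something like this. This is a sketch: send_file's and download_transfer's arguments are taken from the handoff workflow later in this article, while the descriptions and everything else are illustrative, not AgentDrop's published schemas:

```typescript
// Hypothetical shape of the tool definitions an MCP host would list for
// AgentDrop. Argument names mirror the calls shown in the workflow section;
// descriptions are illustrative.
const tools = [
  {
    name: "send_file",
    description: "Encrypt local files and send them to a named agent",
    inputSchema: {
      type: "object",
      properties: {
        recipient: { type: "string" },
        file_paths: { type: "array", items: { type: "string" } },
        message: { type: "string" },
      },
      required: ["recipient", "file_paths"],
    },
  },
  {
    name: "download_transfer",
    description: "Fetch and decrypt an incoming transfer by ID",
    inputSchema: {
      type: "object",
      properties: { transfer_id: { type: "string" } },
      required: ["transfer_id"],
    },
  },
];

console.log(tools.map((t) => t.name).join(", "));
// "send_file, download_transfer"
```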

When the LLM calls send_file, the SDK does the X25519 ECDH key derivation, AES-256-GCM encryption, R2 upload, and metadata persistence. The model gets back a small JSON response containing just the transfer ID and a download URL. The bytes never travel through the LLM's context. The file's contents never appear in the conversation log.
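
The crypto path can be sketched with Node's built-in primitives. This is an illustration of the named algorithms (X25519 ECDH plus AES-256-GCM), not AgentDrop's actual code - the real SDK presumably runs the shared secret through a KDF before using it as a key:

```typescript
import crypto from "node:crypto";

// X25519 ECDH: each agent holds a keypair; both sides can derive the same
// 32-byte secret from their own private key and the peer's public key.
const sender = crypto.generateKeyPairSync("x25519");
const recipient = crypto.generateKeyPairSync("x25519");
const secret = crypto.diffieHellman({
  privateKey: sender.privateKey,
  publicKey: recipient.publicKey,
}); // 32 bytes, used here directly as the AES-256 key (sketch only)

// AES-256-GCM: encrypt the file bytes, keep the auth tag for integrity.
const iv = crypto.randomBytes(12); // standard 96-bit GCM nonce
const cipher = crypto.createCipheriv("aes-256-gcm", secret, iv);
const ciphertext = Buffer.concat([
  cipher.update("dump.sql bytes"),
  cipher.final(),
]);
const tag = cipher.getAuthTag();

// Recipient side: derive the identical secret, then decrypt and verify.
const theirSecret = crypto.diffieHellman({
  privateKey: recipient.privateKey,
  publicKey: sender.publicKey,
});
const decipher = crypto.createDecipheriv("aes-256-gcm", theirSecret, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([
  decipher.update(ciphertext),
  decipher.final(),
]);
console.log(plaintext.toString()); // "dump.sql bytes"
```

The point of the shape: only ciphertext ever leaves the machine, and only the small JSON result (transfer ID, URL) ever reaches the model.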


Configuring it

The MCP server is a single npm package run via npx. No persistent install. No auth flow. Drop the config block into your MCP host's settings and the tools appear on next launch.

Claude Code (~/.claude/mcp.json or via claude mcp add):

{
  "mcpServers": {
    "agentdrop": {
      "command": "npx",
      "args": ["-y", "agentdrop@latest"],
      "env": {
        "AGENTDROP_API_KEY": "agd_..."
      }
    }
  }
}
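
The claude mcp add route mentioned above looks like this. A sketch: exact flag shapes can vary by Claude Code version, so check claude mcp add --help before copying:

```shell
# Register the AgentDrop MCP server via Claude Code's CLI instead of editing
# the JSON by hand. Env var and package name match the config block above.
claude mcp add agentdrop \
  -e AGENTDROP_API_KEY=agd_... \
  -- npx -y agentdrop@latest
```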

Cursor - same shape, added through Cursor's MCP settings panel. Same env var.

Windsurf - same shape, added through Windsurf's plugin manager.

Any other MCP host - same JSON. The MCP standard means the config travels.


A real workflow: Claude Code to Cursor handoff

Suppose you're in Claude Code finishing a database migration. The model produces a 30 MB Postgres dump and you want it in a colleague's Cursor for review. The mechanical steps:

  1. Claude Code's LLM calls send_file(recipient="cursor-charlie", file_paths=["./dump.sql"], message="feature/migrations dump for review")
  2. AgentDrop's MCP server encrypts the file with Charlie's public key, uploads to R2, returns the transfer ID
  3. Claude Code's LLM tells you "shipped, transfer tr_abc..."
  4. Charlie opens Cursor. The Cursor LLM does an inbox check on launch (or on demand). It sees a new transfer.
  5. Charlie's LLM calls download_transfer(transfer_id="tr_abc...")
  6. Cursor's MCP server fetches the encrypted blob, decrypts client-side using Charlie's private key, writes the plaintext to disk
  7. Charlie's LLM now has the file ready to read or hand to other tools
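
On the wire, steps 1 and 5 are ordinary MCP tools/call requests from each host to its AgentDrop server. A sketch of the JSON-RPC 2.0 messages, with illustrative request IDs and the placeholder transfer ID from the steps above:

```typescript
// Step 1: Claude Code's host invokes send_file on the AgentDrop server.
const sendCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "send_file",
    arguments: {
      recipient: "cursor-charlie",
      file_paths: ["./dump.sql"],
      message: "feature/migrations dump for review",
    },
  },
};

// Step 5: Cursor's host invokes download_transfer on its own machine.
const downloadCall = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "download_transfer",
    arguments: { transfer_id: "tr_abc..." },
  },
};

console.log(sendCall.method === downloadCall.method); // true - same MCP verb
```

Neither message carries file bytes; the blob moves server-to-storage-to-server, outside both conversations.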

Two LLMs in two MCP hosts on two machines. No shared filesystem. No URL leaked into either LLM provider's logs. No file contents in either conversation history. Each side called regular MCP tools the same way they'd call any other tool.


Identity is the part most MCP file-transfer attempts miss

The trickiest aspect of MCP-to-MCP file exchange isn't encryption - it's identity. When Claude Code sends a file to "cursor-charlie", how does the system know which Charlie? Across organisations, this matters. An agent identifier on its own is a string. Without an identity layer beneath it, you have no way to tell whether the recipient is the Charlie you meant or a Charlie someone registered to phish your transfers.

AgentDrop solves this with two layers: account-level connections (humans agree their accounts can interact) and agent-level pairings (specific agents within those accounts are authorised to communicate). Same-account agents can talk immediately. Cross-account agents need both layers active before any transfer succeeds. The MCP server respects all of that without exposing it to the model - the LLM only ever sees the agent slugs it's allowed to send to.
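
A minimal sketch of that two-layer gate, using hypothetical types and key formats (not AgentDrop's actual API):

```typescript
// Illustrative only: models the rule that same-account agents can talk
// immediately, while cross-account transfers need both an account-level
// connection and an agent-level pairing.
interface Agent {
  account: string;
  slug: string;
}

function canTransfer(
  sender: Agent,
  recipient: Agent,
  accountConnections: Set<string>, // "acctA->acctB": accounts humans connected
  agentPairings: Set<string>, // "slugA->slugB": agents explicitly paired
): boolean {
  // Layer 0: same-account agents are trusted by default.
  if (sender.account === recipient.account) return true;
  // Cross-account: both layers must be active before any transfer succeeds.
  return (
    accountConnections.has(`${sender.account}->${recipient.account}`) &&
    agentPairings.has(`${sender.slug}->${recipient.slug}`)
  );
}

const alice = { account: "acme", slug: "claude-alice" };
const charlie = { account: "beta", slug: "cursor-charlie" };

console.log(canTransfer(alice, charlie, new Set(), new Set())); // false
console.log(
  canTransfer(
    alice,
    charlie,
    new Set(["acme->beta"]),
    new Set(["claude-alice->cursor-charlie"]),
  ),
); // true
```

A connection without a pairing (or vice versa) fails closed, which is what makes a look-alike "Charlie" registered by a stranger harmless.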

If you want the deep dive, see AI agent identity and trust.


The MCP-native way to move files

AgentDrop is the file-transfer layer MCP doesn't define. A standard MCP server, dropped into your existing Claude Code, Cursor, or Windsurf config, giving any LLM the ability to send and receive encrypted files between identified agents in one tool call. No spec changes, no protocol forks, no context-window gymnastics. The MCP host handles tool calls; we handle the bytes, the keys, and the identity beneath them.


Where to go next

To get started in under 60 seconds, install the MCP server and run the quickstart.

For the full SDK + MCP server reference, the create-transfer API docs map directly onto the MCP send_file tool.

Related topic pages:

  - What is an MCP server
  - Encrypted file transfer
  - AI agent identity and trust

Or sign up free - 50 transfers and 50 MB files included, no credit card required.