Model Context Protocol
The Model Context Protocol (MCP) is an open standard for connecting large language model (LLM) applications to external data sources and tools. It standardizes how AI systems share context, expose capabilities, and compose integrations. [1]
Architecture
MCP separates concerns across three roles in a client-host-server architecture:
- Hosts: LLM applications (such as Claude Desktop, IDEs, or custom agents) that initiate connections.
- Clients: Connectors within the host, each maintaining a dedicated protocol session with a single server.
- Servers: Lightweight programs that provide specific capabilities like data access or tool execution. [2]
Communication is handled via JSON-RPC 2.0 messages, allowing for a structured and language-agnostic interface. [1]
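To make the framing concrete, the snippet below sketches what a JSON-RPC 2.0 exchange might look like on the wire. The method name `tools/list` comes from the MCP specification; the tool in the response is purely illustrative.

```python
import json

# A JSON-RPC 2.0 request as a client might send to a server.
# "tools/list" asks the server to enumerate the tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# A corresponding response; the tool shown here is hypothetical.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "search_logs", "description": "Search application logs"}
        ]
    },
}

wire = json.dumps(request)       # serialized form sent over the transport
decoded = json.loads(wire)
# Responses are matched to requests by their "id" field.
assert decoded["id"] == response["id"]
```

Because the envelope is plain JSON, any language with a JSON library can implement either side of the protocol, which is what makes the interface language-agnostic.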
Transports
MCP supports two primary transport mechanisms for message exchange:
- stdio: The client launches the server as a local subprocess and communicates via standard input/output. This is common for local tools and desktop integrations.
- Streamable HTTP: The server runs as a separate network service. It uses HTTP POST for client-to-server messages and optionally Server-Sent Events (SSE) for server-to-client notifications. [3]
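A minimal sketch of the stdio transport, assuming newline-delimited JSON messages: the client spawns the server as a subprocess, writes a request to its stdin, and reads the response from its stdout. The inline one-request "server" here is a stand-in, not a real MCP server; `ping` is a method name taken from the protocol.

```python
import json
import subprocess
import sys

# Hypothetical one-shot "server": answers a single JSON-RPC request read
# from stdin with a matching response on stdout.
SERVER = (
    "import sys, json; "
    "req = json.loads(sys.stdin.readline()); "
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': {}}))"
)

# The client launches the server as a local subprocess (stdio transport).
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Client-to-server: write a request to the subprocess's standard input.
request = {"jsonrpc": "2.0", "id": 7, "method": "ping", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

# Server-to-client: read the response from its standard output.
response = json.loads(proc.stdout.readline())
proc.wait()
assert response["id"] == 7
```

The same request/response shapes travel over Streamable HTTP; only the framing (HTTP POST bodies and optional SSE streams) differs.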
Capabilities
Servers can register three main types of capabilities to expose functionality to the model and user:
| Capability | Purpose | Structure | Interaction |
|---|---|---|---|
| Resources | Expose read-only data (files, logs, APIs) | Identified by a URI and described by a MIME type (text or binary). | Clients read content or subscribe to push updates. [4] |
| Tools | Perform executable actions | Defined by a JSON Schema for required arguments. | Model generates a call; Client (with approval) executes it. [5] |
| Prompts | Standardize workflows and context | Defined by name, description, and arguments. | User selects via UI (e.g., slash commands) to inject context. [6] |
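As an example of the second row, here is the shape of a tool definition: a name, a description, and a JSON Schema (`inputSchema`) describing its arguments. The `get_weather` tool and the `validate_call` helper are illustrative, not part of any SDK; a real client would run a full JSON Schema validator rather than the minimal required-field check shown.

```python
# Hypothetical tool definition in the shape MCP servers advertise.
tool = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def validate_call(tool, arguments):
    """Check that a model-generated call supplies every required argument."""
    schema = tool["inputSchema"]
    missing = [k for k in schema.get("required", []) if k not in arguments]
    return not missing

# The model proposes arguments; the client validates before executing.
assert validate_call(tool, {"city": "Oslo"})
assert not validate_call(tool, {})
```

Declaring arguments as a schema lets the client reject malformed calls before anything executes, which complements the user-approval step in the interaction flow.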
Sampling
A distinctive feature of MCP is Sampling, which allows a server to request an LLM completion back from the client. The term derives from the underlying mechanism of generative AI, where the model "samples" the next token from a predicted probability distribution. This capability enables agentic behaviors where a server-side tool can "ask" the model to analyze data or make decisions as part of its execution. Crucially, the client retains control over which model is used and can require user approval for any sampling request, ensuring that the human operator remains in the loop. [7]
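The control flow described above can be sketched as a client-side handler that gates every server-initiated sampling request on user approval and keeps model selection on the client. All names here (`handle_sampling_request`, `approve`, `run_model`) are illustrative, not the API of any MCP SDK.

```python
# Sketch of client-side sampling control; names are assumptions, not SDK API.
def handle_sampling_request(request, approve, run_model):
    """A server asks for an LLM completion; the client decides whether and how."""
    if not approve(request):          # the human operator stays in the loop
        return {"error": "sampling request denied by user"}
    # The client, not the server, chooses which model actually runs.
    text = run_model(request["messages"])
    return {"role": "assistant", "content": text}

# Example wiring with a stub approval policy and a stub model:
result = handle_sampling_request(
    {"messages": [{"role": "user", "content": "Summarize this log file"}]},
    approve=lambda req: True,
    run_model=lambda msgs: "stub summary",
)
assert result["content"] == "stub summary"
```

The key design point is the inversion of the usual direction: capabilities normally flow from server to client, but sampling lets a server borrow the client's model without ever holding model credentials itself.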
History and Adoption
MCP was announced and open-sourced by Anthropic in November 2024. [8] Since then, it has seen growing adoption across the AI ecosystem, including integration into development environments such as Cursor and a growing catalog of open-source servers and SDKs. [9]
- ^a ^b "Specification" (2025-03-26). Model Context Protocol. https://modelcontextprotocol.io/specification/2025-03-26.
- ^ "Architecture". Model Context Protocol. https://modelcontextprotocol.io/docs/concepts/architecture.
- ^ "Transports". Model Context Protocol. https://modelcontextprotocol.io/docs/concepts/transports.
- ^ "Resources". Model Context Protocol. https://modelcontextprotocol.io/docs/concepts/resources.
- ^ "Tools". Model Context Protocol. https://modelcontextprotocol.io/docs/concepts/tools.
- ^ "Prompts". Model Context Protocol. https://modelcontextprotocol.io/docs/concepts/prompts.
- ^ "Sampling". Model Context Protocol. https://modelcontextprotocol.io/docs/concepts/sampling.
- ^ Anthropic (2024-11-25). "Introducing the Model Context Protocol". https://www.anthropic.com/news/model-context-protocol.
- ^ modelcontextprotocol/modelcontextprotocol: Specification and documentation for the Model Context Protocol. GitHub. https://github.com/modelcontextprotocol/modelcontextprotocol.