
What is MCP and what is it for
Introduction
The Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how AI applications connect to external data sources. Think of it as a USB-C for AI integrations: instead of creating a specific adapter for each data source, you have a universal standard that works anywhere.
Launched in November 2024, MCP solves a critical problem in LLM development: integration fragmentation. Before it, each tool needed to implement its own custom connections, resulting in duplicated code and complex maintenance.
Basic architecture
MCP works like a corporate telephone system:
Hosts are the companies that need to make external calls. In our case, tools like VS Code with Cline, Claude Desktop, or Zed. Each company has an internal telephone exchange that manages the calls.
The MCP client is the company’s telephone exchange. You don’t see it working, but when someone in the company needs to call out, the exchange takes the number, dials, establishes the connection, and manages the call. If the company needs to talk to 5 different suppliers, the exchange manages all 5 lines simultaneously.
Servers are the external suppliers you call. Each offers specific services: one provides PostgreSQL data, another accesses Google Drive, another connects to GitHub. Each supplier handles calls from multiple companies simultaneously.
When you configure an MCP server, you’re basically adding a number to the telephone exchange’s directory. The exchange (client) handles dialing, keeping the line open, and translating conversations into a format both sides understand.
Communication happens via JSON-RPC 2.0, over either stdio (a direct line) or SSE (a shared line), which gives implementations flexibility in how they connect.
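To make that concrete, here is roughly what one exchange looks like on the wire when a client asks a server which tools it offers (the id and the example tool are illustrative):
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
The server replies with a result carrying the same id:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "add_todo",
        "description": "Adds a new task to the list",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}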
Main primitives
The protocol defines three fundamental primitives:
Resources are data that can be read by LLMs. They can be files, database records, web pages, or any contextual content. The server exposes URIs that the client can request.
Tools are functions the LLM can execute. Unlike resources which are passive, tools enable actions: creating records, executing queries, making HTTP requests. The LLM decides when and how to use each tool based on context.
Prompts are reusable templates that structure interactions. They encapsulate common workflows and can include predefined resources and tools.
Practical use cases
MCP shines in scenarios where you need to connect LLMs to private data or specific tools:
Database integration allows the LLM to query and manipulate business data without exposing credentials or business logic in the prompt. An SQLite server, for example, can safely execute queries while the LLM only decides which query to make.
Access to local or remote file systems facilitates document analysis, log processing, and configuration manipulation without needing to manually upload each file.
External API connection standardizes how the LLM interacts with third-party services. A server can encapsulate authentication, rate limiting, and data transformation, letting the LLM focus only on high-level logic.
Workflow automation allows orchestrating multiple tools in a coordinated way. The LLM can fetch data from a CRM via MCP, process it in Google Sheets via another server, and send notifications via Slack, all through the same protocol.
Why use MCP
The main advantage is reusability. An MCP server works with any compatible host. You write the integration once and it works in Claude Desktop, your custom CLI, or any other tool that implements the protocol.
The separation of concerns is also clear: the server handles authentication, security, and data access. The LLM only decides what to do with available capabilities. This keeps prompts clean and focused on business logic.
The ecosystem is growing fast. Anthropic maintains official servers for common cases, and the community has already created dozens of implementations for various services and platforms.
Implementing an MCP Server
Anatomy of a server
Let’s build a functional MCP server to manage a todo list. The implementation uses Anthropic’s official SDK in TypeScript, but the concept applies to any language.
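If you want to follow along, the only runtime dependency is the official SDK package referenced in the imports below (plus whatever TypeScript build setup you prefer):
npm install @modelcontextprotocol/sdk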
Every MCP server needs to declare its capabilities during initialization. The host asks what the server offers, and the server responds with the list of available resources, tools, and prompts.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
const server = new Server(
  {
    name: "todo-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);
Defining tools
Tools are actions the LLM can execute. Each tool needs a JSON schema describing its parameters and a handler implementing the logic.
// Simple in-memory storage
let todos: Array<{ id: number; task: string; done: boolean }> = [];
let nextId = 1;
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "add_todo",
      description: "Adds a new task to the list",
      inputSchema: {
        type: "object",
        properties: {
          task: {
            type: "string",
            description: "Task description",
          },
        },
        required: ["task"],
      },
    },
    {
      name: "list_todos",
      description: "Lists all tasks",
      inputSchema: {
        type: "object",
        properties: {},
      },
    },
    {
      name: "complete_todo",
      description: "Marks a task as completed",
      inputSchema: {
        type: "object",
        properties: {
          id: {
            type: "number",
            description: "Task ID",
          },
        },
        required: ["id"],
      },
    },
  ],
}));
The execution handler processes tool calls:
server.setRequestHandler(CallToolRequestSchema, async request => {
  const { name, arguments: args } = request.params;
  switch (name) {
    case "add_todo": {
      const todo = {
        id: nextId++,
        task: args.task as string,
        done: false,
      };
      todos.push(todo);
      return {
        content: [
          {
            type: "text",
            text: `Task added with ID ${todo.id}`,
          },
        ],
      };
    }
    case "list_todos": {
      const list = todos
        .map(t => `[${t.done ? "x" : " "}] ${t.id}. ${t.task}`)
        .join("\n");
      return {
        content: [
          {
            type: "text",
            text: list || "No tasks registered",
          },
        ],
      };
    }
    case "complete_todo": {
      const todo = todos.find(t => t.id === args.id);
      if (!todo) {
        throw new Error(`Task ${args.id} not found`);
      }
      todo.done = true;
      return {
        content: [
          {
            type: "text",
            text: `Task ${todo.id} marked as completed`,
          },
        ],
      };
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});
Initializing the transport
The server needs a transport to communicate with the host. Stdio is most common for local tools:
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for JSON-RPC messages on the stdio transport
  console.error("Todo MCP Server running on stdio");
}
main().catch(error => {
  console.error("Fatal error:", error);
  process.exit(1);
});
Configuring in VS Code
To connect the server to VS Code with the Cline extension (or any other MCP-compatible tool like Continue, Claude Desktop, Zed), add the configuration to the extension’s settings file.
For Cline, edit the cline_mcp_settings.json file:
{
  "mcpServers": {
    "todo": {
      "command": "node",
      "args": ["/absolute/path/to/todo-server/build/index.js"]
    }
  }
}
Each MCP host has its own configuration method. Claude Desktop uses claude_desktop_config.json, Continue has configuration in VS Code’s settings.json, and so on. The important thing is they all follow the same pattern of declaring the command and arguments to start the server.
After restarting the extension or editor, the tools icon lists the server’s tools, ready to use.
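If you’d rather exercise the server before wiring it into an editor, Anthropic also publishes an interactive MCP Inspector. Assuming the server was compiled to build/index.js, an invocation along these lines lets you call the tools by hand:
npx @modelcontextprotocol/inspector node build/index.js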
Complete interaction flow
Let’s see what happens when you ask the assistant to manage your tasks:
sequenceDiagram
    participant User
    participant Host as MCP Host (VS Code/Cline)
    participant Server as Todo Server
    User->>Host: "Add a task: buy milk"
    Host->>Server: list_tools()
    Server-->>Host: [add_todo, list_todos, complete_todo]
    Host->>Server: call_tool(add_todo, {task: "buy milk"})
    Server-->>Host: "Task added with ID 1"
    Host-->>User: "I added the task 'buy milk' to your list"
    User->>Host: "What tasks do I have?"
    Host->>Server: call_tool(list_todos)
    Server-->>Host: "[ ] 1. buy milk"
    Host-->>User: "You have 1 pending task: buy milk"
    User->>Host: "Mark the first one as completed"
    Host->>Server: call_tool(complete_todo, {id: 1})
    Server-->>Host: "Task 1 marked as completed"
    Host-->>User: "Done! Task 1 is now marked as completed"
Let’s trace another request end to end, this time looking at the actual JSON messages exchanged.
User input
Add three tasks: study MCP, write article, and review code
Internal processing
The LLM analyzes the request and identifies it needs to use the add_todo tool three times. It makes the calls sequentially:
Request 1
{
  "method": "tools/call",
  "params": {
    "name": "add_todo",
    "arguments": {
      "task": "study MCP"
    }
  }
}
Response 1
{
  "content": [
    {
      "type": "text",
      "text": "Task added with ID 1"
    }
  ]
}
This repeats for the other two tasks, generating IDs 2 and 3.
Output to user
I added the three tasks to your list:
1. study MCP
2. write article
3. review code
Want me to mark any as completed or add more tasks?
Complete architecture diagram
graph TB
    subgraph "MCP Host (VS Code, Claude Desktop, Zed, etc)"
        UI[User Interface]
        Client[MCP Client]
    end
    subgraph "Todo Server (MCP Server)"
        Handlers[Request Handlers]
        Storage[(In-Memory Storage)]
    end
    UI -->|"User input"| Client
    Client <-->|"JSON-RPC via stdio"| Handlers
    Handlers <-->|"Read/Write"| Storage
    Client -->|"LLM response"| UI
    style UI fill:#e1f5ff
    style Client fill:#fff4e1
    style Handlers fill:#ffe1f5
    style Storage fill:#e1ffe1
Expanding with resources
Besides tools, we can expose the task list as a resource the LLM can read anytime:
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
// Remember to also declare resources: {} in the capabilities passed to the Server constructor
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "todo://list",
      name: "Task List",
      description: "All registered tasks",
      mimeType: "text/plain",
    },
  ],
}));
server.setRequestHandler(ReadResourceRequestSchema, async request => {
  const { uri } = request.params;
  if (uri === "todo://list") {
    const content = todos
      .map(t => `${t.id}. [${t.done ? "DONE" : "PENDING"}] ${t.task}`)
      .join("\n");
    return {
      contents: [
        {
          uri,
          mimeType: "text/plain",
          text: content || "No tasks",
        },
      ],
    };
  }
  throw new Error(`Unknown resource: ${uri}`);
});
Now the host can attach the task list as context whenever it is relevant, without the LLM having to call the list_todos tool explicitly.
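Prompts, the third primitive, follow the same handler pattern. As a sketch (the daily_review prompt below is invented for illustration, and the prompts capability would also need to be declared at initialization), the server could expose a reusable template that pulls in the current task list:
import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
// Lists the prompts this server offers
server.setRequestHandler(ListPromptsRequestSchema, async () => ({
  prompts: [
    {
      name: "daily_review",
      description: "Reviews pending tasks and suggests priorities",
    },
  ],
}));
// Builds the prompt messages on demand, embedding the current tasks
server.setRequestHandler(GetPromptRequestSchema, async request => {
  if (request.params.name !== "daily_review") {
    throw new Error(`Unknown prompt: ${request.params.name}`);
  }
  const pending = todos.filter(t => !t.done).map(t => t.task).join(", ");
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Review my pending tasks (${pending || "none"}) and suggest what to tackle first.`,
        },
      },
    ],
  };
});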
Best practices and security considerations
When exposing data and functionality through MCP, some precautions are essential to maintain system integrity.
Input validation
Never blindly trust arguments arriving in tools. The LLM might misinterpret a user instruction or generate unexpected values:
case "complete_todo": {
const id = args.id as number;
if (typeof id !== "number" || id < 1) {
throw new Error("Invalid ID");
}
const todo = todos.find((t) => t.id === id);
if (!todo) {
throw new Error(`Task ${id} not found`);
}
todo.done = true;
return {
content: [{ type: "text", text: `Task ${id} completed` }]
};
}
Limits and rate limiting
For servers making external calls or costly operations, implement usage controls:
let requestCount = 0;
const MAX_REQUESTS_PER_MINUTE = 60;
// Reset the counter every minute
setInterval(() => {
  requestCount = 0;
}, 60000);
// In practice this check goes at the top of the existing CallToolRequestSchema
// handler, before the switch over tool names
server.setRequestHandler(CallToolRequestSchema, async request => {
  if (++requestCount > MAX_REQUESTS_PER_MINUTE) {
    throw new Error("Rate limit exceeded, try again in a few seconds");
  }
  // normal processing
});
Data persistence
The example uses volatile memory, but real servers need to persist data. SQLite is a simple option to start:
import Database from "better-sqlite3";
const db = new Database("todos.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS todos (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    task TEXT NOT NULL,
    done INTEGER DEFAULT 0
  )
`);
const addTodo = db.prepare("INSERT INTO todos (task) VALUES (?)");
const listTodos = db.prepare("SELECT * FROM todos");
const completeTodo = db.prepare("UPDATE todos SET done = 1 WHERE id = ?");
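Inside the tool handlers, the prepared statements replace the in-memory array. A rough sketch of how the add_todo and list_todos cases could look with this approach (result typings simplified):
case "add_todo": {
  // run() returns info about the insert, including the generated row ID
  const info = addTodo.run(args.task as string);
  return {
    content: [{ type: "text", text: `Task added with ID ${info.lastInsertRowid}` }],
  };
}
case "list_todos": {
  // all() returns every row matching the prepared SELECT
  const rows = listTodos.all() as Array<{ id: number; task: string; done: number }>;
  const list = rows
    .map(r => `[${r.done ? "x" : " "}] ${r.id}. ${r.task}`)
    .join("\n");
  return {
    content: [{ type: "text", text: list || "No tasks registered" }],
  };
}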
Error handling
Always return clear error messages to the LLM. This helps it understand what went wrong and how to fix it:
try {
  // dangerous operation
} catch (error) {
  return {
    content: [
      {
        type: "text",
        text: `Processing error: ${error.message}. Check parameters and try again.`,
      },
    ],
    isError: true,
  };
}
Secrets and credentials
Never hardcode API keys or passwords in code. Use environment variables:
import dotenv from "dotenv";
dotenv.config();
const API_KEY = process.env.EXTERNAL_API_KEY;
if (!API_KEY) {
  throw new Error("EXTERNAL_API_KEY not configured");
}
In the host configuration file, you can pass environment variables:
{
  "mcpServers": {
    "external-service": {
      "command": "node",
      "args": ["build/index.js"],
      "env": {
        "EXTERNAL_API_KEY": "your-key-here"
      }
    }
  }
}
Ecosystem and next steps
MCP is growing rapidly. Anthropic maintains official servers in the modelcontextprotocol/servers repository, including integrations with:
- Databases: PostgreSQL, SQLite, MySQL
- Cloud storage: Google Drive, AWS S3
- Dev tools: GitHub, GitLab, Linear
- Productivity: Slack, Gmail, Google Calendar
- Search and analysis: Brave Search, Exa, Puppeteer
The community is also creating servers for specific cases. You can find implementations in Python, Go, Rust, and other languages besides TypeScript.
If you want to explore more:
- Official SDK: github.com/modelcontextprotocol/typescript-sdk
- Complete specification: spec.modelcontextprotocol.io
- Practical examples: modelcontextprotocol.io/quickstart
Conclusion
The Model Context Protocol solves a fundamental problem in LLM development: integration fragmentation. Instead of each tool implementing its own custom connections, we now have a universal standard that works with any compatible host.
The architecture based on hosts, clients, and servers creates a clear separation of concerns. The LLM decides what to do, the server handles how to do it, and the protocol ensures everyone speaks the same language.
The three primitives - resources, tools, and prompts - cover the most common use cases: reading contextual data, executing actions, and structuring workflows. And implementation is surprisingly straightforward when using the official SDKs.
What’s most interesting is we’re only seeing the beginning. With ecosystem growth, the trend is for MCP servers to become as common as npm packages or Ruby gems. You’ll find ready-made integrations for virtually any service or data source you need to connect.
It’s worth investing time learning MCP now. The protocol is still in early phase, but adoption is growing fast. Companies are already building internal tools based on it, and the community is creating servers for the most diverse use cases.
If you work with LLMs in production, MCP will eventually be part of your stack. Better to understand the fundamentals now than have to learn in a rush when it becomes inevitable.