
What Is MCP (Model Context Protocol) and How Does It Work?

MCP is an open standard that lets AI agents connect to external tools through a single universal interface. One server, any agent — no custom plugins needed.
Apr 17, 2026

The Short Version

MCP (Model Context Protocol) is an open standard created by Anthropic that defines how AI agents discover and use external tools. Think of it as USB-C for AI — one universal connector that works with any device. Before MCP, every AI agent needed its own custom plugin format. MCP replaces that with a single protocol that any agent can speak.

Why MCP Exists

AI models like Claude, GPT, and Gemini are powerful reasoners, but they can't do anything outside their sandbox. They can't browse the web, query a database, or control a browser on their own. To interact with the real world, they need tools.
Before MCP, connecting an AI agent to external tools meant building a custom integration for every combination. Want Claude to read files? Write a plugin. Want Cursor to do the same? Write a different plugin. Want Codex to join? Another plugin. The tools are identical — the wiring is not.
MCP solves this by standardizing the wiring. A tool author writes one MCP server. Every MCP-compatible agent — Claude Code, Codex, Cursor, Windsurf, VS Code Copilot, and dozens more — can connect to it immediately.

How MCP Works

MCP uses a client-server architecture built on JSON-RPC 2.0. There are three participants:
  • Host — the AI application (Claude Desktop, Cursor, VS Code). It manages connections and runs the language model.
  • Client — a connector inside the host that maintains a 1:1 link with a specific MCP server.
  • Server — a program that exposes tools, resources, or prompts to the AI agent.
When an agent starts, it connects to its configured MCP servers and asks each one: "What tools do you have?" The server responds with a list of tools, each described by a name, a human-readable description, and a typed parameter schema. The agent stores this catalog and uses it when deciding how to fulfill user requests.
For example, an MCP server might describe a tool like this:

```json
{
  "name": "browser_parallel_navigate",
  "description": "Navigate all connected browsers to a URL in parallel",
  "inputSchema": {
    "type": "object",
    "properties": {
      "url": { "type": "string", "description": "Target URL" }
    },
    "required": ["url"]
  }
}
```
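The discovery step can be sketched as a JSON-RPC 2.0 exchange. The method name `tools/list` comes from the MCP specification; the single-tool reply below is a made-up catalog for illustration, reusing the example tool from this post:

```python
import json

# Request the client sends to each configured server at startup.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A hypothetical server reply: one tool, described by name,
# human-readable description, and a typed JSON Schema for its input.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "browser_parallel_navigate",
                "description": "Navigate all connected browsers to a URL in parallel",
                "inputSchema": {
                    "type": "object",
                    "properties": {"url": {"type": "string"}},
                    "required": ["url"],
                },
            }
        ]
    },
}

# The agent keeps a catalog of {tool name: input schema} and consults it
# when deciding how to fulfill a user request.
catalog = {t["name"]: t["inputSchema"] for t in response["result"]["tools"]}
print(json.dumps(list(catalog), indent=2))
```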

What MCP Servers Can Provide

MCP defines three types of capabilities a server can expose:
  • Tools — functions the AI can call to perform actions (navigate a browser, send an email, query a database). The most common capability.
  • Resources — read-only data the AI can access (file contents, API responses, configuration). Like GET endpoints.
  • Prompts — reusable templates that help users interact with the AI in consistent ways.
Most MCP servers focus on tools. For example, Ornold MCP exposes 30+ browser automation tools — navigate, click, fill forms, solve CAPTCHAs, take screenshots, run JavaScript — all executable across multiple browser profiles in parallel.

Transport: How Agent and Server Communicate

MCP supports multiple transport mechanisms:
  • stdio — the server runs as a local child process. The agent spawns it and communicates via stdin/stdout. Simple, fast, no network. This is how most local MCP servers work (including Ornold MCP via `npx ornold-mcp`).
  • Streamable HTTP — the server runs remotely and communicates over HTTP. Supports multiple concurrent clients. Recommended for remote/shared servers.
  • SSE (Server-Sent Events) — older HTTP-based transport, kept for backwards compatibility.
For local tools like browser automation, stdio is the standard choice. The agent starts the MCP server process, and they communicate directly — no network latency, no authentication complexity.
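The stdio transport is simple enough to sketch end to end: the host spawns the server as a child process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. The child below is a stand-in that echoes a canned reply; a real host would spawn an actual server process (e.g. `npx ornold-mcp`) instead:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC message from stdin and answers it.
child_src = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}))\n"
)
proc = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# The host writes a request as one line of JSON and reads the reply back.
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
proc.wait()
print(reply)
```

No sockets, no TLS, no auth tokens: the pipe between parent and child is the whole transport, which is why stdio is the default for local servers.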

Which AI Agents Support MCP?

MCP adoption has grown rapidly since its release. As of 2025, these agents support MCP natively:
  • Claude Code — Anthropic's CLI agent
  • Claude Desktop — Anthropic's desktop application
  • Codex — OpenAI's coding agent (CLI)
  • Cursor — AI code editor by Anysphere
  • Windsurf — AI IDE by Codeium
  • VS Code Copilot — GitHub's AI assistant
  • Cline — open-source AI coding assistant
  • Roo Code, Kilo Code, Augment Code — and many more
Any application that implements the MCP client specification can connect to any MCP server. This is the core value proposition — write once, use everywhere.

MCP vs Function Calling vs Plugins

How does MCP compare to other approaches for giving AI agents tools?
  • Function calling (OpenAI, Anthropic) — the model API supports tool definitions, but the tools run in YOUR code. You define the schema, the model returns a tool call, and you execute it. MCP standardizes the server side so tools are portable across agents.
  • ChatGPT Plugins (deprecated) — OpenAI's earlier attempt at tool integration. Proprietary, only worked with ChatGPT, required OpenAI approval. MCP is open, works with any agent, no approval needed.
  • Custom integrations — bespoke code for each agent-tool pair. Works but doesn't scale. MCP replaces the N×M integration problem with N+M.
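The N×M vs N+M claim is easy to put in numbers (the counts below are hypothetical): with bespoke integrations every agent-tool pair needs its own adapter, while MCP needs one client per agent plus one server per tool:

```python
# Hypothetical ecosystem: 6 agents, 20 tools.
agents, tools = 6, 20

custom_integrations = agents * tools  # one adapter per agent-tool pair
mcp_components = agents + tools       # one client per agent + one server per tool

print(custom_integrations, mcp_components)  # 120 vs 26
```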
MCP doesn't replace function calling — it builds on top of it. The AI agent uses its native function calling to invoke MCP tools. MCP standardizes how tools are discovered, described, and served.
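Concretely, the bridge from MCP to native function calling is a small translation. The sketch below maps an MCP tool entry onto the OpenAI-style function-calling shape (`type`/`function`/`parameters`); the MCP entry reuses the example tool from earlier in this post:

```python
# An MCP tool description, as returned by tools/list.
mcp_tool = {
    "name": "browser_parallel_navigate",
    "description": "Navigate all connected browsers to a URL in parallel",
    "inputSchema": {
        "type": "object",
        "properties": {"url": {"type": "string"}},
        "required": ["url"],
    },
}

def to_openai_tool(tool: dict) -> dict:
    """Translate an MCP tool entry into an OpenAI function-calling definition.

    The JSON Schema carries over unchanged; only the envelope differs.
    """
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["inputSchema"],
        },
    }

print(to_openai_tool(mcp_tool)["function"]["name"])
```

When the model returns a call to this function, the host routes it back to the MCP server as a `tools/call` request, which is the sense in which MCP builds on top of function calling rather than replacing it.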

A Practical Example: Browser Automation

To make this concrete, here's how MCP works with Ornold for browser automation:
  • You install the Ornold MCP server: `npx ornold-mcp --token YOUR_TOKEN --linken-port 40080`
  • Your AI agent (say, Claude Code) connects to it and discovers 30+ tools: navigate, click, fill, screenshot, solve CAPTCHA, etc.
  • You tell the agent: "Open google.com in all browsers and search for MCP protocol"
  • The agent plans the steps, calls `browser_parallel_navigate`, then `browser_parallel_fill`, then `browser_parallel_click` — all through MCP tool calls
  • Each tool executes across all connected antidetect browser profiles in parallel
The same Ornold MCP server works identically with Codex, Cursor, Windsurf, or any other MCP-compatible agent. No code changes, no separate plugins.
Connect Ornold MCP to Claude Code:

```shell
claude mcp add --transport stdio ornold-browser -- npx ornold-mcp --token YOUR_TOKEN --linken-port 40080
```

Connect to Codex:

```shell
codex mcp add ornold-browser -- npx ornold-mcp --token YOUR_TOKEN --linken-port 40080
```

Connect to Cursor (`~/.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "ornold-browser": {
      "command": "npx",
      "args": ["ornold-mcp", "--token", "YOUR_TOKEN", "--linken-port", "40080"]
    }
  }
}
```

Getting Started with MCP

If you want to use MCP tools, you just need an MCP-compatible AI agent and an MCP server. No SDK, no framework, no boilerplate. For browser automation with Ornold:
  • Get an API token at mcp.ornold.com
  • Add the MCP server to your agent's config (one command for CLI agents, one JSON block for editors)
  • Start your antidetect browser and talk to the AI
For more details on the protocol itself, see the official MCP specification. For agent-specific setup guides, check our Claude Code guide, Codex guide, or Cursor guide.
