
How to Build an MCP Server in 2026

The Model Context Protocol lets any tool talk to any AI agent through a standardized interface. Building an MCP server means your API, database, or service becomes natively accessible to Claude, Cursor, Copilot, and every other MCP-compatible client. Here's how to do it — from zero to a working server in one sitting.

What MCP actually is

MCP (Model Context Protocol) is a JSON-RPC-based protocol that defines how AI clients communicate with external tools and data sources. When you build an MCP server, you're exposing capabilities through three primitives:

  • Tools — functions the AI can call (search a database, send an email, run a query)
  • Resources — data sources the AI can read (files, API responses, database records)
  • Prompts — reusable prompt templates the client can invoke

The protocol runs over stdio (for local servers) or HTTP+SSE (for remote servers). Claude Desktop, Cursor, VS Code Copilot, Cline, and dozens of other clients implement the protocol — so a server you build once works everywhere.
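On the wire, every interaction is a JSON-RPC 2.0 message. As a rough sketch (the tool name and arguments here are illustrative), a client invokes a tool with a tools/call request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_products",
    "arguments": { "query": "widget", "limit": 5 }
  }
}
```

and the server answers with typed content:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "[{\"id\": 1, \"name\": \"Widget\"}]" }]
  }
}
```

An SDK or framework generates and parses these messages for you; you rarely touch them directly.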

As of March 2026, the AgentRank index tracks 25,632 MCP-related repositories on GitHub. The top-scoring ones follow the same patterns. This guide shows you those patterns.

Framework options ranked

You have two decisions: language (Python vs TypeScript) and abstraction level (official SDK vs higher-level framework). The AgentRank index scores these frameworks by freshness, issue health, contributor count, and inbound dependents. Here's where they stand:

1. modelcontextprotocol/python-sdk — official Python SDK for building MCP servers and clients. Score 92.14 · 4,821 stars · low-level, full control · Python
2. modelcontextprotocol/typescript-sdk — official TypeScript/Node.js SDK for MCP servers and clients. Score 91.88 · 5,102 stars · low-level, full control · TypeScript
3. jlowin/fastmcp — high-level Python framework for building MCP servers with minimal boilerplate. Score 89.44 · 6,734 stars · high-level, FastAPI-style · Python
4. wong2/litestar-mcp — MCP server integration for the Litestar async Python web framework. Score 76.21 · 887 stars · framework integration · Python

Which to choose: If you're building in Python and want the fastest path to a working server, use FastMCP. If you need full protocol control or are working in TypeScript, use the official SDK. For existing Python web apps, the Litestar integration adds MCP alongside your existing HTTP routes.

Quickstart: Python with FastMCP

jlowin/fastmcp is the most-starred Python MCP framework at 6,734 stars and a score of 89.44. It uses a decorator pattern borrowed from FastAPI — if you've used FastAPI, this will feel immediately familiar.

Install
pip install fastmcp
server.py — minimal working server
from fastmcp import FastMCP

mcp = FastMCP("My Tool Server")

@mcp.tool()
def search_products(query: str, limit: int = 10) -> list[dict]:
    """Search the product catalog by keyword."""
    # Your implementation here
    return [{"id": 1, "name": "Widget", "price": 9.99}]

@mcp.resource("catalog://products")
def get_all_products() -> str:
    """Full product catalog as JSON."""
    return '{"products": [...]}'

if __name__ == "__main__":
    mcp.run()  # stdio transport by default

That's a complete MCP server. The @mcp.tool() decorator registers a callable tool. The docstring becomes the tool's description that the AI uses to decide when to call it. Type hints become the parameter schema. FastMCP handles all the JSON-RPC plumbing.

To run it locally with Claude Desktop, add the server to your claude_desktop_config.json:

claude_desktop_config.json
{
  "mcpServers": {
    "my-tool-server": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}

Quickstart: TypeScript SDK

The official TypeScript SDK has 5,102 stars and scores 91.88 — second only to the official Python SDK in the index. It gives you full protocol access with strong typing.

Install
npm install @modelcontextprotocol/sdk
server.ts — minimal working server
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "My Tool Server",
  version: "1.0.0",
});

server.tool(
  "search_products",
  "Search the product catalog by keyword",
  { query: z.string(), limit: z.number().optional().default(10) },
  async ({ query, limit }) => {
    // Your implementation here
    return {
      content: [{ type: "text", text: JSON.stringify([{ id: 1, name: "Widget" }]) }],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

The TypeScript SDK uses Zod for schema validation on tool parameters. The content return format follows the MCP spec — text, image, or resource content types.

Core concepts: tools, resources, prompts

Tools — what the agent can do

Tools are functions the AI calls to take action or retrieve data. They're the most-used primitive — the vast majority of MCP servers expose tools and nothing else. Good tool design follows the same rules as good function design: narrow scope, clear name, useful description. The description is critical — it's what the model reads to decide whether to call your tool, so be explicit about what it does and when to use it.

Resources — what the agent can read

Resources are URIs that expose data — like a filesystem path or an API endpoint mapped to a stable identifier. Resources are read-only. Use them for data that the AI should be able to pull into context without triggering an action: config files, documentation, data schemas, reference data. Resources are less commonly implemented than tools, but they're the right primitive for "give the agent background context" use cases.
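Under the hood, a client pulls a resource into context with a resources/read request. For the catalog resource from the FastMCP example above, the exchange looks roughly like this:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "catalog://products" }
}
```

with the server returning the data in a contents array:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "contents": [
      { "uri": "catalog://products", "mimeType": "application/json", "text": "{\"products\": []}" }
    ]
  }
}
```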

Prompts — reusable instruction templates

Prompts let you package specific workflows as named templates that clients can invoke. Think of them as saved workflows — "analyze this PR for security issues", "summarize this database schema". Most servers don't implement prompts. Implement them when you have known, high-value workflows your users run repeatedly.
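A framework-agnostic sketch of what a prompt template produces: a named function that expands its arguments into the messages the client hands to the model. The function name and message shape below are illustrative; frameworks like FastMCP register such functions through a prompt decorator.

```python
def review_schema_prompt(schema_sql: str) -> list[dict]:
    """Reusable template: ask the model to summarize a database schema."""
    return [
        {
            "role": "user",
            "content": (
                "Summarize the following database schema: list each table, "
                "its purpose, and any foreign-key relationships.\n\n"
                + schema_sql
            ),
        }
    ]

# The client fills in the argument when the user invokes the prompt
messages = review_schema_prompt("CREATE TABLE products (id INT PRIMARY KEY);")
```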

Transport: stdio vs HTTP

Stdio transport is the default for local servers — simple, no port management, works immediately with Claude Desktop and Cursor. HTTP+SSE is for remote servers you want multiple clients to connect to. Most servers start with stdio. Migrate to HTTP when you need multi-tenant access or want to run the server as a standalone service.

Taking it to production

Error handling

Tools should return structured error messages, not raise exceptions. The AI needs to read and understand errors to decide what to do next. Raise an exception and the connection may drop. Return {"error": "No results found for query 'xyz'"} and the AI can retry with a different query.
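A minimal sketch of the return-don't-raise pattern. The backend lookup and the error shape are illustrative, not part of the MCP spec — the point is that failures come back as data the model can read and act on.

```python
def run_catalog_query(query: str, limit: int) -> list[dict]:
    """Stand-in for a real backend call (illustrative)."""
    catalog = [{"id": 1, "name": "Widget", "price": 9.99}]
    return [p for p in catalog if query.lower() in p["name"].lower()][:limit]

def search_products(query: str, limit: int = 10) -> dict:
    """Tool body that reports failures as structured output."""
    try:
        results = run_catalog_query(query, limit)
    except ConnectionError as exc:
        # Surface the failure instead of crashing the connection
        return {"error": f"Catalog backend unreachable: {exc}. Try again shortly."}
    if not results:
        # Tell the model what failed and what to try next
        return {"error": f"No results found for query '{query}'. Try a broader keyword."}
    return {"results": results}
```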

Rate limiting and auth

For servers that access paid APIs or sensitive data, implement authentication at the transport level. HTTP servers can use bearer tokens in the SSE handshake headers. Stdio servers can read credentials from environment variables passed through the client config.
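For a stdio server, Claude Desktop's config accepts an env block alongside command and args; the server then reads the credential from its environment at startup. The key name below is an example:

```json
{
  "mcpServers": {
    "my-tool-server": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": { "CATALOG_API_KEY": "sk-..." }
    }
  }
}
```

Inside the server, read it with os.environ rather than hardcoding secrets in source.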

Schema quality

Tool parameter schemas drive the AI's ability to call your tools correctly. Use descriptive field names. Add description strings to each parameter. Mark optional fields as optional with defaults. Poorly documented schemas mean the AI guesses — and guesses wrong.
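What the model ultimately sees is a JSON Schema per tool. A sketch of a well-documented parameter schema — field names and descriptions are illustrative, and in practice FastMCP or Zod generates this from your annotations rather than you writing it by hand:

```python
search_products_schema = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Keyword to match against product names and descriptions.",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of results to return.",
            "default": 10,
        },
    },
    "required": ["query"],  # 'limit' is optional because it has a default
}
```

Every property carries a description and optional fields declare defaults — that is the difference between the model calling your tool correctly and guessing.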

Testing

Both the Python SDK and TypeScript SDK include test utilities. The Python SDK includes a client fixture you can use in pytest. Alternatively, run your server locally and connect to it through Claude Desktop or the MCP Inspector CLI (npx @modelcontextprotocol/inspector) to validate tool responses interactively.

Publishing

Once your server works locally, publish it to PyPI or npm. This is what gets you inbound dependents — the strongest signal in the AgentRank score. Servers with npm packages score significantly higher on average than source-only repos because real usage shows up in the dependency graph.
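A minimal pyproject.toml sketch for publishing a FastMCP-based server to PyPI — the package name and module path are placeholders. The console script lets users launch the server with a single command (main() should call mcp.run()):

```toml
[project]
name = "my-tool-server"            # placeholder package name
version = "0.1.0"
dependencies = ["fastmcp"]

[project.scripts]
my-tool-server = "server:main"     # placeholder module:function

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```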

Get indexed on AgentRank

The AgentRank crawler runs nightly against GitHub and picks up new MCP repositories automatically. If your server is public and has a description mentioning MCP or Model Context Protocol, it will be indexed within 24 hours of creation.

To maximize your score from day one:

  • Add clear GitHub topics: mcp, mcp-server, model-context-protocol
  • Write a description that mentions what the server does and the MCP protocol
  • Respond to issues quickly — issue health is weighted 25% of the score
  • Keep commits coming — freshness decays hard after 90 days
  • Get other repos to depend on yours — inbound dependents are the strongest signal

Browse existing MCP servers: Explore 25,000+ indexed tools — use them as implementation references.

Compare frameworks head-to-head: Tool comparison widget — see the full signal breakdown side by side.

Already built something? Submit your server to verify it's indexed and get your initial score.
