
Best Python Libraries for Building MCP Servers in 2026

Python has 9,869 MCP repositories in the AgentRank index — 38.5% of the entire ecosystem. The language's dominance in ML and scripting means Python developers were first to build production MCP servers at scale. These are the libraries that matter.

Top Python MCP libraries by score

Ranked by the composite AgentRank score — stars (15%), freshness (25%), issue health (25%), contributors (10%), inbound dependents (25%). Average score across Python MCP repos is 36.2. The libraries below are clustered at the top of the distribution.
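The composite can be read as a plain weighted sum. As a minimal sketch (the article gives the weights but not the normalization, so assume each signal is already scaled to 0–100):

```python
# Hypothetical sketch of the AgentRank composite: a weighted sum of five
# signals. The real normalization is not documented here; assume each
# input arrives pre-scaled to 0-100.
WEIGHTS = {
    "stars": 0.15,
    "freshness": 0.25,
    "issue_health": 0.25,
    "contributors": 0.10,
    "dependents": 0.25,
}

def composite_score(signals: dict[str, float]) -> float:
    """Weighted sum of normalized (0-100) signals, rounded to one decimal."""
    return round(sum(WEIGHTS[name] * value for name, value in signals.items()), 1)

# Example: a repo strong on freshness and issue health, weaker on dependents.
print(composite_score({
    "stars": 90.0,
    "freshness": 95.0,
    "issue_health": 85.0,
    "contributors": 70.0,
    "dependents": 60.0,
}))  # 80.5
```

Note that freshness, issue health, and inbound dependents together carry 75% of the weight, which is why raw star counts alone do not determine the ranking below.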

# | Repository | Description | Score | Stars | Role | Issues closed
1 | PrefectHQ/fastmcp | The fast, Pythonic way to build MCP servers and clients | 90.7 | 23,659 | High-level server framework | 83%
2 | oraios/serena | Semantic code retrieval and editing toolkit built as a Python MCP server | 66.6 | 21,474 | Production Python MCP server | 85%
3 | modelcontextprotocol/python-sdk | Official Python SDK, the authoritative low-level implementation | 62.3 | 22,124 | Official SDK / low-level | 65%
4 | lastmile-ai/mcp-agent | Python framework for building agents that use MCP as their tool interface | 58.7 | 8,099 | MCP agent client framework | 48%

FastMCP — the fastest way to build a Python MCP server

PrefectHQ/fastmcp scores 90.7 — the highest of any Python MCP library in the index. It's the result of FastAPI-inspired thinking applied to MCP: use decorators, let the framework handle the protocol plumbing, ship a working server in under 10 lines.

23,659 stars and 208 contributors signal broad community adoption. An 83% issue close rate (1,153 of 1,388) points to an active maintainer team that ships fixes. The last commit was 4 days ago. This is the library to reach for when you have a Python function and want to expose it to Claude or any other MCP-compatible agent.

FastMCP — minimal server
pip install fastmcp
server.py
from fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    # your implementation here
    return f"72°F and sunny in {city}"

@mcp.resource("config://settings")
def get_settings() -> str:
    """Return server configuration."""
    return "{\"units\": \"imperial\"}"

if __name__ == "__main__":
    mcp.run()  # stdio by default; mcp.run(transport="sse") for HTTP

The decorator API is deliberately minimal: @mcp.tool() for tools, @mcp.resource() for resources, @mcp.prompt() for prompts. FastMCP infers the JSON schema from Python type annotations. You write functions; the framework builds the protocol.
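The schema inference itself is internal to FastMCP (it uses Pydantic under the hood), but the core idea, mapping annotations to JSON Schema, can be sketched with the stdlib alone. This is a toy illustration, not FastMCP's actual code:

```python
import inspect

# Toy version of annotation-to-JSON-Schema inference. FastMCP's real
# implementation is far more complete (Pydantic models, optionals,
# nested types, docstring parsing).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def infer_input_schema(fn) -> dict:
    """Build a JSON Schema object from a function's parameter annotations."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value means the caller must supply it
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str, units: str = "imperial") -> str:
    """Get current weather for a city."""
    return f"72°F and sunny in {city}"

print(infer_input_schema(get_weather))
# {'type': 'object',
#  'properties': {'city': {'type': 'string'}, 'units': {'type': 'string'}},
#  'required': ['city']}
```

Parameters with defaults become optional in the schema, which is why adding a defaulted argument to a FastMCP tool does not break existing callers.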

FastMCP also ships a client library. If you need to both build a server and programmatically test or call it in Python, the same package covers both ends of the connection.

FastMCP — calling a tool from Python
import asyncio

from fastmcp import Client

async def main():
    # Point the client at the server script; FastMCP spawns it over stdio.
    async with Client("server.py") as client:
        result = await client.call_tool("get_weather", {"city": "Austin"})
        print(result)  # "72°F and sunny in Austin"

asyncio.run(main())

Official Python SDK — when you need full protocol control

modelcontextprotocol/python-sdk scores 62.3 with 22,124 stars and 181 contributors. It is maintained by Anthropic and has full Model Context Protocol spec coverage — every primitive, every transport, every capability flag.

The SDK's lower score relative to FastMCP reflects its maintenance signal, not its quality. A 65% issue close rate (787 of 1,204) lags behind FastMCP's 83%, and its larger open-issue backlog reflects the volume of edge-case reports that comes with being the authoritative reference implementation. The last commit was 4 days ago; the SDK is actively maintained.

Official Python SDK — minimal server
pip install mcp
server.py
import asyncio
from mcp.server import Server
from mcp.server.stdio import stdio_server
import mcp.types as types

app = Server("weather")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_weather":
        city = arguments["city"]
        return [types.TextContent(type="text", text=f"72°F and sunny in {city}")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read, write):
        await app.run(read, write, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())

The verbosity is the point. You define the JSON schema explicitly, handle each tool name in a conditional, and manage transport directly. That explicitness is a liability when prototyping but an asset in production: every behavior is visible and auditable. The SDK also exposes the full lifecycle — initialization, capability negotiation, request routing — which you cannot access through FastMCP's abstraction layer.
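That explicit schema is also something you can enforce before dispatch. Here is a stdlib-only sketch of argument validation against an inputSchema; the helper is hypothetical, not part of the SDK:

```python
def validate_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return validation errors for arguments against a JSON Schema object.
    Covers only the subset used above: required keys and primitive types."""
    errors = []
    type_map = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    # Check that every required key is present.
    for key in schema.get("required", []):
        if key not in arguments:
            errors.append(f"missing required argument: {key}")
    # Check that every supplied key is declared and has the right type.
    for key, value in arguments.items():
        prop = schema.get("properties", {}).get(key)
        if prop is None:
            errors.append(f"unexpected argument: {key}")
        elif not isinstance(value, type_map[prop["type"]]):
            errors.append(f"{key}: expected {prop['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}
print(validate_arguments(schema, {"city": "Austin"}))  # []
print(validate_arguments(schema, {"town": "Austin"}))
```

In production you would reach for a real JSON Schema validator (the jsonschema package, for instance), but the point stands: with the low-level SDK, nothing happens to a request that you did not write down.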

FastMCP v2 (the PrefectHQ version) is built on top of this SDK. If you hit an edge case in FastMCP that you cannot resolve, the underlying SDK is where that behavior is defined and documented.

lastmile-ai/mcp-agent — Python framework for MCP agent workflows

lastmile-ai/mcp-agent scores 58.7 with 8,099 stars and 60 contributors. It covers a different half of the MCP problem: not building a server that exposes tools, but building an agent that uses MCP servers as its tool interface.

If your use case is orchestrating multiple MCP servers in a single Python agent workflow — routing calls between a file-system server, a GitHub server, and a database server — this is the right layer. The library implements MCP client logic, session management, and multi-server coordination. The last commit was roughly one month ago; activity is moderate but the project is under active development.

mcp-agent — orchestrating multiple servers
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM

app = MCPApp(name="my_agent")

async def main():
    async with app.run() as agent_app:
        agent = Agent(
            name="researcher",
            instruction="You are a research assistant with tool access.",
            server_names=["fetch", "filesystem"],  # servers defined in mcp-agent config
        )
        async with agent as a:
            llm = await a.attach_llm(AnthropicAugmentedLLM)
            result = await llm.generate_str("Summarize the README at https://example.com")
            print(result)

if __name__ == "__main__":
    asyncio.run(main())

The 48% issue close rate is the weakest signal in this set. That reflects the library's scope — multi-server orchestration produces a long tail of edge cases. If you are building a single-purpose agent that wraps one server, FastMCP's client library is simpler. If you are building an orchestration layer that routes across many servers, mcp-agent is purpose-built for the problem.
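The routing problem mcp-agent solves can be illustrated without the library: given several named servers, each exposing a set of tools, dispatch a call to whichever server owns the tool. A toy stdlib sketch (all names invented for illustration; the real library adds sessions, transports, and LLM integration on top):

```python
from typing import Any, Callable

class ToolRouter:
    """Toy multi-server tool router: maps tool names to the server that owns them."""

    def __init__(self) -> None:
        self._servers: dict[str, dict[str, Callable[..., Any]]] = {}

    def register(self, server_name: str, tools: dict[str, Callable[..., Any]]) -> None:
        """Register a named server and the tools it exposes."""
        self._servers[server_name] = tools

    def call(self, tool_name: str, **kwargs: Any) -> Any:
        """Dispatch to the first registered server that exposes the tool."""
        for tools in self._servers.values():
            if tool_name in tools:
                return tools[tool_name](**kwargs)
        raise LookupError(f"no server exposes tool: {tool_name}")

router = ToolRouter()
router.register("fetch", {"fetch_url": lambda url: f"<html from {url}>"})
router.register("filesystem", {"read_file": lambda path: f"<contents of {path}>"})

print(router.call("read_file", path="README.md"))  # <contents of README.md>
```

Every extra server multiplies the tool namespace and the failure modes (name collisions, dead sessions, partial results), which is exactly the long tail behind that 48% close rate.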

FastMCP vs official SDK: when to use each

Use FastMCP when

You want to expose Python functions as MCP tools quickly. Your team is familiar with FastAPI or Flask decorator patterns. You want a library that ships both a server and a test client. You are building a server, not studying the protocol. FastMCP's 90.7 score, 23,659 stars, and 208 contributors make it the most-validated choice in the ecosystem. Start here unless you have a specific reason not to.

Use the official Python SDK when

You need access to MCP primitives that FastMCP's abstraction layer doesn't expose. You are building infrastructure that implements the MCP spec itself — not just using it. You need to handle custom transport layers, implement capability negotiation manually, or debug protocol-level behavior. The official SDK gives you every knob; FastMCP gives you the defaults that are right for most cases.

Use mcp-agent when

You are building an agent that consumes multiple MCP servers and need session management, multi-server routing, and LLM integration in one Python package. This library is not for building servers — it is for orchestrating agents that use servers.

The score gap explains the hierarchy

FastMCP scores 90.7 vs the official SDK's 62.3 — a 28-point gap that reflects maintenance velocity and dependency adoption, not technical quality. The official SDK has a higher open issue count because it receives every edge-case bug report in the Python MCP ecosystem. FastMCP has fewer open issues because most of those edge cases never reach it — FastMCP handles them by calling the underlying SDK. Both are production-ready. The score gap reflects who catches the long tail, not which is better built.

Browse Python MCP servers: Full AgentRank index — 9,869 Python MCP repositories scored, updated daily.

Built a Python MCP server? Submit it to get indexed and scored alongside PrefectHQ/fastmcp and the official SDK.
