
MCP Server Performance: What the Data Shows

Most MCP server comparisons focus on features. The question that actually matters is: will it work six months from now? We analyzed maintenance velocity, issue response rates, and transport characteristics across 25,750+ indexed repositories to find out which servers perform — and which ones will let you down.

What performance means for MCP servers

Traditional API performance benchmarks measure throughput, p99 latency, and error rates under load. MCP servers are different. Most run as local processes serving a single AI agent — not distributed services under concurrent traffic. The performance questions that actually matter are:

  • How fast does the server cold-start? Every time your agent client launches, local stdio servers spin up fresh. A Python server with heavy imports can add 300–600ms of startup latency to every session.
  • How fast are the tool calls? That depends almost entirely on the external service being wrapped. An MCP server calling GitHub's API is only as fast as GitHub's API.
  • Is the server maintained? This is the dominant performance question. A server that worked in January 2026 may be broken today if its dependencies changed or the upstream API was updated.
  • How quickly do issues get resolved? When something breaks — and in a fast-moving ecosystem, things break — the issue close rate tells you how long you'll be blocked.

The AgentRank score captures the signals that predict long-term performance: freshness (25%), issue health (25%), dependents (25%), stars (15%), and contributors (10%). A score above 80 is our threshold for production-ready — the server is actively maintained and has proven real-world use. See the full scoring methodology for details.
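The published weights reduce to a simple weighted sum. A minimal sketch, assuming each signal has already been normalized to a 0–100 scale (the normalization itself is not part of the published weights):

```python
# AgentRank signal weights as published: freshness 25%, issue health 25%,
# dependents 25%, stars 15%, contributors 10%.
WEIGHTS = {
    "freshness": 0.25,
    "issue_health": 0.25,
    "dependents": 0.25,
    "stars": 0.15,
    "contributors": 0.10,
}

def agentrank_score(signals: dict[str, float]) -> float:
    """Weighted sum of signals, each assumed pre-normalized to 0-100."""
    return sum(weight * signals[name] for name, weight in WEIGHTS.items())

def production_ready(signals: dict[str, float]) -> bool:
    """A score above 80 is the production-ready threshold used here."""
    return agentrank_score(signals) > 80
```

Because freshness, issue health, and dependents carry 75% of the weight combined, a server with middling stars but active maintenance and real dependents can still clear the 80-point bar.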

Transport latency: stdio vs HTTP+SSE

MCP servers communicate over one of two transports today, stdio or HTTP+SSE, with a third (Streamable HTTP) in the spec draft. The choice determines the baseline latency before any application logic runs.

| Transport | Latency | Cold Start | Scalability | Best For |
|---|---|---|---|---|
| stdio (local) | <1ms (IPC) | 50–500ms (Python/TS process) | Single user, single process | Local tools, filesystem access, IDE integrations |
| HTTP + SSE (remote) | 5–50ms (network) | Near-zero (persistent server) | Multi-user, horizontally scalable | SaaS integrations, shared team tools, APIs |
| Streamable HTTP (spec draft) | 5–50ms (network) | Near-zero | Multi-user, stateless-friendly | Serverless deployments, Cloudflare Workers |

stdio cold start: the hidden latency

Local stdio servers (the majority of the index) start a new OS process per session. Measured cold-start times from common frameworks:

  • TypeScript / Node.js: 80–200ms typical. Node's startup is fast; heavy npm dependency trees can push this to 400ms.
  • Python (FastMCP / official SDK): 150–500ms. Python's import time dominates. A server with pandas or numpy imports can hit 800ms.
  • Go: 10–40ms. Compiled binaries start near-instantly. MCP-Go servers are the fastest-starting option.
  • Rust: 5–20ms. Same advantage as Go — compiled, no runtime startup.

Cold start is a one-time cost per session, not per tool call. For interactive use (IDE integrations, Claude Desktop), 300ms of startup is imperceptible. For batch agent pipelines that restart servers frequently, Go or Rust servers materially reduce overhead.
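You can quantify this cost yourself by timing the gap between spawning the server process and receiving its first response. A minimal sketch: it sends a JSON-RPC initialize message over stdin (one message per line, per the MCP stdio framing) and blocks until the server answers. The protocolVersion string and the example command are placeholders, not prescriptions:

```python
import json
import subprocess
import time

def measure_cold_start(cmd: list[str]) -> float:
    """Seconds from process spawn to the server's first stdout line
    (for an MCP stdio server, its initialize response)."""
    init = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # placeholder; match your client
            "capabilities": {},
            "clientInfo": {"name": "coldstart-bench", "version": "0.0.1"},
        },
    }
    start = time.perf_counter()
    proc = subprocess.Popen(
        cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    proc.stdin.write(json.dumps(init) + "\n")
    proc.stdin.flush()
    proc.stdout.readline()  # blocks until the server responds
    elapsed = time.perf_counter() - start
    proc.terminate()
    return elapsed

# Example (any stdio server command works here):
# measure_cold_start(["python", "-m", "my_mcp_server"])
```

Run it a few times and take the median; the first run often pays extra for cold filesystem caches.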

HTTP+SSE latency

Remote servers add network round-trips. For a server deployed on the same cloud region as your agent, expect 5–15ms overhead per call. For cross-region calls, 50–150ms. The MCP spec's Server-Sent Events stream means the connection stays open across tool calls, so you pay the TCP handshake only once per session — tool call overhead is just the round-trip time.

The Streamable HTTP transport (in the MCP spec draft as of Q1 2026) enables stateless deployments on Cloudflare Workers and similar edge runtimes. Expect this to become the dominant remote transport once clients adopt it — it eliminates SSE connection management complexity.

Maintenance velocity as a performance signal

We define maintenance velocity as the rate at which a server receives commits, closes issues, and responds to bugs. From the AgentRank index, the distribution across 25,750 repositories:

  • Committed in the last 7 days: 8.4% of repositories
  • Committed in the last 30 days: 26.2% of repositories
  • Committed in the last 90 days: 41.1% of repositories
  • No commit in 90+ days: 58.9% of repositories — scoring heavily penalized

More than half the index is effectively stale. This matters for performance because stale servers break silently: the MCP protocol doesn't have a concept of "server version" that agents can check against the upstream service's API version. When Slack updates their API and a community Slack MCP server isn't updated to match, the server returns errors that look like agent failures, not dependency failures.

Issue close rate: the reliability predictor

Issue close rate (closed issues / total issues) is our best proxy for how quickly bugs get fixed. Distribution in the index:

  • Above 80% close rate: 9.2% of repositories — elite maintenance
  • 60–80% close rate: 17.8% — solid, production-viable
  • Below 60% close rate: 73% of repositories with issues

Servers with close rates above 70% resolve reported bugs 3–5x faster than those below 40%. For production use, this translates directly to reduced downtime when upstream services change.
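Close rate is simple to compute from a repo's open/closed issue counts. A minimal sketch, with tier boundaries taken from the distribution above (the tier names are shorthand, not an official taxonomy):

```python
def issue_close_rate(closed: int, open_issues: int) -> float:
    """Closed issues / total issues; 0.0 for repos with no issues at all."""
    total = closed + open_issues
    return closed / total if total else 0.0

def maintenance_tier(rate: float) -> str:
    """Map a close rate onto the tiers described above."""
    if rate > 0.80:
        return "elite"
    if rate >= 0.60:
        return "production-viable"
    return "at-risk"
```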

Category health breakdown

Performance varies significantly by category. Enterprise and AI-native categories outperform community-built personal automation tools by a wide margin.

| Category | Freshness % | Issue Close Rate | Avg Score | Notes |
|---|---|---|---|---|
| Memory & Knowledge | 37.6% | 71% | 68.4 | Highest freshness in the index — AI-native use cases drive active maintenance |
| DevOps & Infrastructure | 31.2% | 74% | 72.1 | Enterprise maintainers (Microsoft, Red Hat, AWS) anchor high issue close rates |
| Database | 29.8% | 69% | 70.3 | Vendor-backed servers (MongoDB, Redis, Neon) outperform community forks |
| Web Browser & Scraping | 28.4% | 65% | 67.9 | Playwright-based servers benefit from upstream spec updates |
| Code Generation | 26.1% | 68% | 66.2 | GitHub-adjacent tools score well; editor plugins vary |
| API Integration | 22.3% | 62% | 63.8 | Slack/Google/Notion maintain official servers; third-party forks lag |
| Productivity | 19.7% | 58% | 59.4 | Calendar and task tools have high abandonment after initial novelty |
| Social & Personal | 11.2% | 41% | 44.1 | Lowest maintenance health; most servers are personal experiments |

Memory & Knowledge Management leads all categories because it includes core agent infrastructure (knowledge graphs, codebase indexers, context windows) that AI-native teams depend on and actively maintain. Social & Personal servers lag because most are personal experiments published once and never updated.

Top performers in the index

These are the servers with the highest AgentRank scores as of March 2026 — all scoring above 88. The pattern is consistent: enterprise backing or a single highly engaged maintainer, recent commits, and a strong issue close rate.

#1 microsoft/playwright-mcp · score 97.44 · ⭐ 29,180 · last commit 2d ago · issue close 91%
Microsoft ownership, 29K+ stars, commits every 1–2 days, 91% issue close rate

#2 microsoft/azure-devops-mcp · score 97.17 · ⭐ 1,406 · last commit 4d ago · issue close 88%
Enterprise-grade maintenance with a dedicated Azure DevOps team

#3 modelcontextprotocol/python-sdk · score 92.14 · ⭐ 4,821 · last commit 1d ago · issue close 94%
Spec-authoritative implementation, Anthropic-maintained, daily commits

#4 jlowin/fastmcp · score 89.44 · ⭐ 6,734 · last commit 3d ago · issue close 87%
Most-starred Python MCP framework, active solo maintainer with rapid response

#5 mongodb-js/mongodb-mcp-server · score 88.44 · ⭐ 612 · last commit 6d ago · issue close 86%
Official MongoDB server; database team handles issues within hours

Full rankings are updated nightly. View the live leaderboard for current scores.

Performance red flags to watch

Last commit over 90 days ago

The AgentRank freshness signal decays hard after 90 days. In the MCP ecosystem, 90 days without a commit typically means the maintainer has moved on. We've seen servers that passed all quality checks at 30 days and had broken tool schemas at 120 days after an upstream API change.

Issue close rate below 40%

A growing pile of open issues with no responses is the clearest sign of abandonment. Community servers often reach 50–100 open issues with no activity — the maintainer found the initial build interesting but isn't investing in upkeep.

Zero inbound dependents after 6 months

Inbound dependents (other repos importing this server as a dependency) are the strongest signal of real production use. A server with zero dependents after six months hasn't found a user base that depends on it. That's correlated with slow bug fixes and eventual abandonment.

Single-file, single-commit repositories

26% of the index is a single server.py or index.ts with one commit and no README. These are experiments, not maintained tools. Filter them out immediately.
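The four red flags above make a quick pre-screening filter. A minimal sketch — the field names here are illustrative assumptions, not an actual AgentRank API:

```python
from dataclasses import dataclass

@dataclass
class RepoSignals:
    days_since_commit: int
    issue_close_rate: float  # 0.0-1.0
    dependents: int
    age_months: int
    commit_count: int
    file_count: int

def red_flags(repo: RepoSignals) -> list[str]:
    """Return the red flags described above that this repo trips."""
    flags = []
    if repo.days_since_commit > 90:
        flags.append("last commit over 90 days ago")
    if repo.issue_close_rate < 0.40:
        flags.append("issue close rate below 40%")
    if repo.dependents == 0 and repo.age_months >= 6:
        flags.append("zero inbound dependents after 6 months")
    if repo.commit_count == 1 and repo.file_count <= 1:
        flags.append("single-file, single-commit repository")
    return flags
```

Any non-empty result is worth investigating before you invest integration time; a repo tripping all four is almost certainly abandoned.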

How to measure MCP server performance yourself

MCP Inspector for tool call latency

The official MCP Inspector is the fastest way to measure tool call latency:

npx @modelcontextprotocol/inspector

Open the Inspector, connect to any stdio or HTTP server, and call tools individually. The Inspector shows response time per call. This isolates MCP protocol overhead from your agent framework's overhead.

Checking health signals in the AgentRank index

For any server you're evaluating, the AgentRank tool page shows the five signals that predict long-term reliability: score, freshness, issue close rate, contributors, and dependents. Use this as a quick screen before you invest integration time.

Use the score checker to look up any GitHub repo by URL and get the current AgentRank score with a full signal breakdown.

What to benchmark in production

Once integrated, log timestamps around tool calls in your agent framework. The metrics worth tracking:

  • p50 / p95 tool call latency — baseline performance under normal conditions
  • Error rate by tool — tools that error frequently signal broken upstream dependencies
  • Session startup time — time from agent launch to first successful tool call

High p95 latency on a specific tool usually means the external API that tool wraps has rate limits or reliability issues — not an MCP problem. Route the alert to the right place.
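Computing p50/p95 from logged latencies takes a few lines with the standard library. A minimal sketch:

```python
import statistics

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """p50 / p95 tool-call latency from logged samples (milliseconds)."""
    # quantiles(n=100) returns 99 cut points; index 49 is the median,
    # index 94 is the 95th percentile.
    cuts = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": cuts[49], "p95": cuts[94]}
```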
