Google PageRank for AI agents. 25,000+ tools indexed.

Top 10 MCP Servers for AI Agents in 2026 (Ranked by Real Data)

The AgentRank index now tracks 25,632 MCP servers and agent tools. Average score: 29.6. The tools below are the top 10 across the entire index — not sorted by GitHub stars, not by recent hype, but by a composite signal of five real quality indicators scored daily. This is what the data says.

The top 10 MCP servers

Scored by the AgentRank composite: stars (15%), freshness (25%), issue health (25%), contributors (10%), inbound dependents (25%). Average score across all 25,632 indexed tools is 29.6. These ten sit at the top of the distribution. All were last updated within the past two weeks.

| # | Repository | Description | Score | Stars | Issue close % | Category | Lang |
|---|------------|-------------|-------|-------|---------------|----------|------|
| 1 | CoplayDev/unity-mcp | Direct AI control over Unity Editor: assets, scenes, scripts, and automation | 98.67 | 7,003 | 95.7% | Game Dev / IDE | C# |
| 2 | microsoft/azure-devops-mcp | Official Microsoft MCP server for Azure DevOps: work items, PRs, pipelines, repos | 97.17 | 1,406 | 96.8% | DevOps / Cloud | TypeScript |
| 3 | laravel/boost | AI-augmented local development for Laravel applications | 96.94 | 3,333 | 92.7% | Web Dev / PHP | PHP |
| 4 | Pimzino/spec-workflow-mcp | Spec-driven AI development workflow with real-time dashboard and VSCode extension | 96.02 | 3,999 | 97.7% | Dev Workflow | TypeScript |
| 5 | zcaceres/markdownify-mcp | Convert almost anything to clean Markdown: PDFs, URLs, files, and more | 94.73 | 2,449 | 100% | Content / Utilities | TypeScript |
| 6 | samanhappy/mcphub | Unified hub for managing and routing multiple MCP servers with flexible strategies | 94.47 | 1,876 | 88% | Orchestration | TypeScript |
| 7 | perplexityai/modelcontextprotocol | Official Perplexity AI MCP server: real-time web search for agents | 94.27 | 2,016 | 97.7% | Search / AI | TypeScript |
| 8 | microsoft/playwright-mcp | Microsoft's official Playwright MCP server for browser automation and testing | 94.26 | 28,849 | 97.1% | Browser Automation | TypeScript |
| 9 | oraios/serena | Semantic code retrieval and editing toolkit for coding agents via MCP | 93.61 | 21,474 | 85.2% | Coding Agents | Python |
| 10 | mcp-use/mcp-use | Full-stack MCP framework connecting any LLM to any MCP server | 93.1 | 9,434 | 76% | Frameworks | TypeScript |

All data reflects the AgentRank crawler run from March 2026. The full scoring methodology is at /methodology.

Deep dives

#1 — CoplayDev/unity-mcp (98.67)

CoplayDev/unity-mcp is the highest-scoring MCP server in the entire index at 98.67. It bridges AI assistants — Claude, Cursor, Copilot — directly to the Unity Editor. An agent can manage assets, create and modify scenes, edit C# scripts, and automate game development tasks entirely through MCP calls.

The numbers behind the score: 7,003 stars, 43 contributors, last commit March 12 — three days before this snapshot. Issue close rate is 95.7% (353 closed, 16 open). That combination of recency, responsiveness, and adoption is what drives the top score. It's not just popular — it's actively maintained with a disciplined issue queue.

Why Unity MCP specifically? Game development has a massive surface area of repetitive tasks — scripting prefabs, tweaking scene hierarchies, generating boilerplate. An agent with direct Editor access handles these faster than a human can navigate the UI. The high star count tells you Unity developers recognized that value immediately.

#2 — microsoft/azure-devops-mcp (97.17)

microsoft/azure-devops-mcp is the official Microsoft MCP server for Azure DevOps at 97.17. It exposes work items, pull requests, pipelines, repositories, and test plans as MCP tools — meaning an agent on Claude or Copilot can create issues, query build statuses, review PRs, and trigger deployments without leaving the conversation.

The score is driven by exceptional issue health: 96.8% close rate (425 closed, 14 open). For a tool with 45 contributors and direct Microsoft backing, that responsiveness is consistent and predictable. Teams running Azure DevOps can integrate this into existing sprint workflows and trust that it'll track the Azure DevOps API accurately as it evolves.

#3 — laravel/boost (96.94)

laravel/boost is the official Laravel MCP server for AI-augmented local development at 96.94. It connects AI assistants to a Laravel project's artisan commands, routes, models, and database schema — giving agents context about the application structure that's otherwise locked inside the codebase.

What makes this stand out in the rankings: 1,032 dependent repos. In the AgentRank scoring model, inbound dependents carry 25% weight because they signal real-world adoption, not just passing interest. 76 contributors and active weekly commits complete the picture. The Laravel ecosystem is large, and the community moved quickly to adopt the official server once it landed.

#4 — Pimzino/spec-workflow-mcp (96.02)

Pimzino/spec-workflow-mcp scores 96.02 with the highest issue close rate in the top 10: 97.7% (130 closed, 3 open). It provides structured, spec-driven development workflow tools for AI-assisted software development — complete with a real-time web dashboard and VSCode extension to monitor agent progress.

The core workflow: an agent reads a spec, breaks it into tasks, executes them, and updates the dashboard. Humans monitor via the VSCode extension. This is an early implementation of the "agentic development loop" that most agent frameworks describe but few ship as a ready-to-use MCP server. 3,999 stars with essentially no open issues is a strong credibility signal for a single-maintainer project.

#5 — zcaceres/markdownify-mcp (94.73)

zcaceres/markdownify-mcp scores 94.73 and holds a distinction unique in this list: a 100% issue close rate across 25 issues, with not a single issue open at the time of the snapshot. It converts almost anything to Markdown: URLs, PDFs, Word documents, HTML, plain text.

For AI agents, Markdown is the closest thing to a universal input format. Raw HTML and PDFs are noisy and token-heavy. Markdownify-mcp serves as the preprocessing layer that strips structure down to clean, agent-readable text. The 2,449-star count is modest relative to the other top-10 tools, but the maintenance discipline is exceptional, and a retrieval pipeline that converts sources to Markdown first typically spends fewer tokens and parses more reliably than one that feeds agents raw web content.
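The preprocessing idea is easy to illustrate. The stdlib sketch below strips HTML down to bare text. It is a toy stand-in, not markdownify-mcp's implementation, which emits structured Markdown and also handles PDFs and Word documents:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Minimal HTML-to-text pass: keep visible text, drop tags and scripts.

    Illustrates the preprocessing idea only; markdownify-mcp itself
    produces real Markdown and supports many more input formats.
    """
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

html = "<h1>Title</h1><script>var x=1;</script><p>Body text.</p>"
print(to_text(html))  # Title / Body text. on separate lines; script dropped
```

Even this toy version shows the payoff: the script tag and markup never reach the model's context window.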

#6 — samanhappy/mcphub (94.47)

samanhappy/mcphub scores 94.47 and solves a problem that grows proportionally with the size of your MCP stack: orchestration. It's a unified hub that centrally manages multiple MCP servers and routes requests between them with flexible strategies — load balancing, failover, endpoint isolation.

As MCP adoption increases, most production agent deployments will run several servers simultaneously: a GitHub server, a database server, a search server, a code execution server. Without an orchestration layer, each gets configured separately and managed independently. MCPHub provides the central configuration plane. The 88% issue close rate and 23 contributors signal active community development, not a one-person side project.
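The routing idea reduces to "try servers in priority order, fall through on failure." The sketch below uses in-process stub functions in place of real MCP connections; the names and the single failover strategy are illustrative, not MCPHub's actual API:

```python
# Illustrative failover router over multiple "servers". Real MCPHub speaks
# MCP over network transports; only the routing strategy is sketched here.
from typing import Callable

def flaky_server(payload: str) -> str:
    # Stub for a server that is currently down.
    raise ConnectionError("server unavailable")

def backup_server(payload: str) -> str:
    # Stub for a healthy fallback server.
    return f"handled: {payload}"

class FailoverRouter:
    def __init__(self, servers: list[Callable[[str], str]]):
        self.servers = servers  # priority order: first healthy server wins

    def call(self, payload: str) -> str:
        last_err = None
        for server in self.servers:
            try:
                return server(payload)
            except ConnectionError as err:
                last_err = err  # fall through to the next server
        raise RuntimeError("all servers failed") from last_err

router = FailoverRouter([flaky_server, backup_server])
print(router.call("search query"))  # flaky_server fails, backup handles it
```

Load balancing and endpoint isolation are variations on the same dispatcher shape: change how the next server is picked, or partition which servers a given caller can see.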

#7 — perplexityai/modelcontextprotocol (94.27)

perplexityai/modelcontextprotocol scores 94.27. It's the official Perplexity AI MCP server — giving any MCP-connected agent access to Perplexity's real-time web search. An agent that only has access to its training data is epistemically bounded. An agent with Perplexity access can answer questions about things that happened last week.

The 97.7% issue close rate (42 closed, 1 open) reflects direct Perplexity engineering ownership. Official backing here matters: when Perplexity updates their API, this server tracks it. Community wrappers for the Perplexity API will lag. 2,016 stars in a competitive search-MCP category confirms adoption, but it's the maintenance signals that put it in the top 10.

#8 — microsoft/playwright-mcp (94.26)

microsoft/playwright-mcp scores 94.26 and is the most-starred tool in this list at 28,849 stars. It's the official Playwright MCP server from Microsoft — browser automation, UI testing, and web scraping exposed as MCP tools.

For agents that need to navigate the web, fill forms, screenshot pages, or run end-to-end tests, Playwright MCP is the standard. The 97.1% issue close rate (743/765) across 62 contributors and active weekly commits makes it the most battle-tested browser automation server in the index. The main reason it ranks 8th rather than 1st: the index tracks inbound dependents for library packages that other repos import, not for tools consumed as standalone executables, which suppresses the dependent signal despite widespread real-world adoption.

#9 — oraios/serena (93.61)

oraios/serena scores 93.61 with the most contributors in the top 10: 134. It's a coding agent toolkit that provides semantic code retrieval and editing via MCP — not file reads, but language-server-level symbol navigation, definition jumps, and semantic search.

21,474 stars place it second only to Playwright in raw star count. The breadth of contributor base signals this has moved beyond a single team's project — it's become the de facto semantic code MCP for the broader agent development community. Agents using Serena can navigate large codebases efficiently without loading entire files into context, which compounds cost savings at scale.

#10 — mcp-use/mcp-use (93.1)

mcp-use/mcp-use scores 93.1 and occupies a different niche than the other nine: it's the framework layer that connects any LLM — not just Claude — to any MCP server. 9,434 stars and 377 dependent repos signal that developers are building on top of it, not just using it directly.

The 76.0% issue close rate is the lowest in the top 10, which reflects the broader, more open issue queue typical of infrastructure-layer projects. The dependent count matters more here: 377 repos depending on mcp-use means it is embedded in other tools and workflows. For teams building multi-model setups, say Claude for reasoning and a smaller model for classification, mcp-use is the connection layer that lets all of those models share the same MCP servers.

What the top 10 have in common

Issue close rate above 85% is table stakes

Nine of the ten tools have issue close rates above 85%. The one exception — mcp-use at 76% — compensates with 377 dependent repos and 9,434 stars. Issue health is the single most predictive signal in the AgentRank model for whether a tool will be usable in production six months from now. A server with 200 open issues and no activity is a maintenance burden waiting to happen.
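Close rate as used throughout this article is simply closed issues divided by total issues. A quick sketch, checked against the (closed, open) pairs quoted in the deep dives above, reproduces the reported percentages:

```python
# Close rate = closed / (closed + open), reported to one decimal place.

def close_rate(closed: int, open_issues: int) -> float:
    total = closed + open_issues
    return round(100 * closed / total, 1) if total else 0.0

# (closed, open) pairs quoted in this article
tools = {
    "CoplayDev/unity-mcp": (353, 16),
    "microsoft/azure-devops-mcp": (425, 14),
    "Pimzino/spec-workflow-mcp": (130, 3),
    "perplexityai/modelcontextprotocol": (42, 1),
}
for name, (closed, open_issues) in tools.items():
    rate = close_rate(closed, open_issues)
    flag = "ok" if rate >= 85 else "below threshold"
    print(f"{name}: {rate}% ({flag})")
```

All four clear the 85% screen; mcp-use at 76% is the only top-10 entry that does not.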

Official backing runs throughout the list

Four of the top 10 (microsoft/azure-devops-mcp, laravel/boost, perplexityai/modelcontextprotocol, and microsoft/playwright-mcp) are officially maintained by the organization that owns the underlying platform. Official servers track API changes faster and ship fixes when the underlying API breaks. Community forks lag. For production agent deployments with uptime requirements, official backing reduces risk.

Stars alone do not predict rank

microsoft/playwright-mcp has 28,849 stars and ranks 8th. CoplayDev/unity-mcp has 7,003 stars and ranks 1st. Stars capture awareness. AgentRank captures quality. The gap between the two is maintenance discipline, freshness, and ecosystem adoption — things that don't show up in a star count but matter enormously when you're shipping production agents.

Category diversity at the top

The top 10 spans game dev, DevOps, web frameworks, workflow tooling, content processing, orchestration, search, browser automation, coding, and LLM connectivity. There's no single dominant category. The best MCP servers solve focused problems extremely well — they don't try to be everything. A mature agent stack probably touches at least three or four categories from this list.

Browse the full index: agentrank-ai.com/tools — 25,632 tools scored, updated daily.

Maintain an MCP server? Claim your listing to add context, links, and documentation to your entry.

How the score works

AgentRank computes a composite score for every indexed tool from five signals:

  • Stars (15%) — raw popularity signal, normalized across the index
  • Freshness (25%) — days since last commit; scores decay past 90 days
  • Issue health (25%) — closed issues divided by total issues; higher is more responsive
  • Contributors (10%) — bus factor proxy; more contributors reduces single-maintainer risk
  • Inbound dependents (25%) — how many repos depend on this one; the strongest adoption signal

The composite is scaled 0–100. Average across all 25,632 indexed tools is 29.6. The top 10 here range from 93.1 to 98.67. That's what genuine outlier quality looks like in the distribution. Full methodology at /methodology.
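The weighting above reduces to a dot product. The sketch below assumes each signal has already been normalized to the 0-1 range (how AgentRank normalizes each raw signal is not documented here), and the example values are hypothetical:

```python
# Sketch of the weighted composite described above. Each signal is assumed
# to be pre-normalized to 0-1; the example values are hypothetical.

WEIGHTS = {
    "stars": 0.15,
    "freshness": 0.25,
    "issue_health": 0.25,
    "contributors": 0.10,
    "dependents": 0.25,
}

def composite_score(signals: dict) -> float:
    """Weighted sum of normalized signals, scaled to 0-100."""
    raw = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    return round(raw * 100, 2)

# Hypothetical tool: fresh and responsive, but lightly starred.
example = {
    "stars": 0.20,
    "freshness": 0.95,
    "issue_health": 0.97,
    "contributors": 0.40,
    "dependents": 0.60,
}
print(composite_score(example))  # 70.0: freshness and issue health dominate
```

Note how the weighting plays out: freshness, issue health, and dependents together carry 75% of the score, which is why a moderately starred but well-maintained tool can outrank a famous one.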
