State of the MCP Ecosystem — March 2026
The Model Context Protocol turned 16 months old this March. In that time, a GitHub ecosystem of 25,750 repositories has formed around it, more than most mature open-source frameworks accumulate over years. We ran the numbers on all of them. Here is what the data shows about where MCP actually stands, which tools are winning, and where the ecosystem is healthy versus hollow.
Ecosystem overview
The AgentRank index crawled GitHub in March 2026 using search queries for "mcp server", "mcp-server", "model context protocol", "agent tool", and "a2a agent". The result: 25,750 repositories scored and ranked using five signals.
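A crawl along these lines can be sketched against GitHub's public repository-search API. The endpoint is real; the `sort` and `per_page` parameters below are illustrative defaults, not AgentRank's actual crawl configuration.

```python
from urllib.parse import quote

# The five search terms named in the report.
QUERIES = [
    "mcp server",
    "mcp-server",
    "model context protocol",
    "agent tool",
    "a2a agent",
]

def search_url(query: str, page: int = 1) -> str:
    """Build a GitHub repository-search URL for one query term."""
    return (
        "https://api.github.com/search/repositories"
        f"?q={quote(query)}&sort=updated&per_page=100&page={page}"
    )

urls = [search_url(q) for q in QUERIES]
```

In practice a crawler would page through results, deduplicate repositories that match several queries, and respect GitHub's search-API rate limits.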
The average score of 29.83 tells a story: most of the ecosystem is early-stage. The median project has fewer than 10 stars, one contributor, and was last updated several months ago. But the long tail matters less than the head. The tools at the top of the rankings are genuinely excellent, and the gap between the top 200 and the bottom 20,000 is enormous.
The top 163 repositories (those scoring above 80) represent less than 1% of the index but are responsible for the majority of real-world usage. This is normal for open-source ecosystems — but the MCP ecosystem is unusually concentrated even by those standards.
Growth: how fast is this moving?
The most striking data point in the index is the monthly commit activity curve. We can estimate ecosystem growth by counting how many repositories had their last commit in each month — a proxy for how many tools were actively developed during that period.
| Month | Active repos (last commit) | vs. same month prior year |
|---|---|---|
| October 2024 | 6 | — |
| November 2024 | 26 | — |
| December 2024 | 207 | — |
| January 2025 | 181 | — |
| February 2025 | 216 | — |
| March 2025 | 1,063 | — |
| Q2 2025 (avg/mo) | ~1,604 | — |
| Q3 2025 (avg/mo) | ~1,545 | — |
| Q4 2025 (avg/mo) | ~1,742 | — |
| January 2026 | 2,149 | +1,087% vs Jan 2025 |
| February 2026 | 3,357 | +1,454% vs Feb 2025 |
| March 2026 (partial) | 4,542 | +327% vs Mar 2025 |
The inflection point is clear: late 2024 saw the initial Anthropic launch push, 2025 was steady growth as early adopters built and shared tools, and early 2026 is a second explosion — driven by MCP support landing in GitHub Copilot, VS Code, Gemini, and every major AI coding client simultaneously. The protocol went from "Claude-specific" to "universal AI interface" in the span of about six months.
The March 2026 number (4,542 repos actively committed) is a partial-month count from our March crawl. Annualized, that pace suggests over 50,000 active MCP projects by year-end 2026 if growth holds — double the current index in nine months.
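The year-over-year column in the table above follows from simple percentage arithmetic; the monthly counts are taken directly from the table.

```python
# (repos active in 2026, repos active in the same month of 2025)
pairs = {
    "January": (2_149, 181),
    "February": (3_357, 216),
    "March (partial)": (4_542, 1_063),
}

def yoy_growth(current: int, prior: int) -> int:
    """Percentage growth versus the same month a year earlier."""
    return round((current - prior) / prior * 100)

for month, (cur, prior) in pairs.items():
    print(f"{month}: +{yoy_growth(cur, prior)}%")
# January: +1087%, February: +1454%, March (partial): +327%
```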
Ecosystem health: active vs abandoned
Counting repositories is easy. Understanding which ones are real is harder. We use days since last commit as the primary freshness signal, weighted heavily in the score (25% of the composite). Here is how the current index breaks down:
| Last commit age | Repository count | Share of index | Avg AgentRank score |
|---|---|---|---|
| 0–7 days | 2,006 | 7.8% | 53.65 |
| 8–30 days | 4,213 | 16.4% | 47.96 |
| 31–90 days | 4,416 | 17.1% | 33.55 |
| 91–180 days | 4,362 | 16.9% | 22.27 |
| Over 180 days | 10,753 | 41.8% | 19.82 |
The active core of the ecosystem — tools updated within the last 90 days — is 10,635 repositories (41.3%). The dormant long tail — tools last touched more than 180 days ago — is 10,753 repositories (41.8%). A meaningful 16.9% sits in the middle at 91–180 days: not dead, but not thriving either.
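The bands above amount to a simple bucketing of days since last commit. A minimal sketch of that classification:

```python
def freshness_band(days_since_commit: int) -> str:
    """Map days since last commit to the report's five freshness bands."""
    if days_since_commit <= 7:
        return "0–7 days"
    if days_since_commit <= 30:
        return "8–30 days"
    if days_since_commit <= 90:
        return "31–90 days"
    if days_since_commit <= 180:
        return "91–180 days"
    return "over 180 days"
```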
The score gap between fresh and stale tools is dramatic. A tool updated in the last week averages a score of 53.65. A tool that hasn't been touched in six months averages 19.82 — less than half. This is not just the scoring system penalizing old commits; it reflects a real correlation: recently committed tools also tend to have better issue close rates, more contributors, and more dependents.
Only 357 repositories (1.4%) are formally archived. The rest of the dormant long tail exists in a graveyard-without-a-tombstone state: technically available, practically abandoned. This is the main quality problem in the MCP ecosystem right now — not that there are too few tools, but that it is hard to distinguish alive from undead without diving into commit history.
Issue health tells a similar story. Of the repositories with enough issue activity to measure responsiveness:
- 139 repos (1.7%) close 80%+ of their issues — highly responsive maintainers. Average score: 79.46.
- 331 repos (4.1%) close 50–80% — moderate responsiveness. Average score: 69.70.
- 8,483 repos (94.2%) close fewer than 50% of issues. Average score: 27.03.
The 94% with poor issue hygiene are not necessarily bad tools — many are solo projects or proof-of-concepts that don't expect issue volume. But for a developer evaluating whether to depend on a tool in production, issue close rate is one of the strongest signals of whether the maintainer will be around when something breaks.
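The responsiveness tiers above reduce to a close-rate cutoff at 80% and 50%. A minimal sketch, assuming repos with too few issues to measure have already been filtered out:

```python
def close_rate_tier(closed: int, total: int) -> str:
    """Classify a repo's issue responsiveness using the report's cutoffs."""
    rate = closed / total
    if rate >= 0.8:
        return "highly responsive"
    if rate >= 0.5:
        return "moderate"
    return "low"
```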
Language breakdown
Python and TypeScript dominate, as expected given the AI tooling ecosystem. But the quality distribution across languages reveals something interesting: the smaller language communities are shipping higher-quality work.
| Language | Repositories | Share | Avg AgentRank score |
|---|---|---|---|
| Python | 9,919 | 38.5% | 28.94 |
| TypeScript | 7,047 | 27.4% | 31.22 |
| JavaScript | 3,104 | 12.1% | 26.41 |
| Go | 1,230 | 4.8% | 33.54 |
| Rust | 673 | 2.6% | 35.60 |
| Java | 544 | 2.1% | 27.30 |
| C# | 512 | 2.0% | 32.86 |
| Other | 2,721 | 10.6% | 30.10 |
Among languages with a meaningful sample size, Rust leads at an average score of 35.60, ahead of Go (33.54) and C# (32.86). C++ (36.75) and Swift (35.87) score higher still, but on very small samples, and PHP's 35.54 is skewed by laravel/boost at rank #3. Python, despite its volume dominance, sits below the ecosystem average, and TypeScript only barely above it; both attract the most experimental and one-off projects.
The Go ecosystem stands out: 1,230 repositories with an average score of 33.54, including mark3labs/mcp-go at #4 overall and modelcontextprotocol/go-sdk at #16. Go developers building MCP tools appear to be experienced systems engineers shipping production-grade implementations, not experimenters learning the protocol.
The C# ecosystem is surprisingly strong given its smaller developer community in the AI space. The top-ranked tool overall — CoplayDev/unity-mcp — is a C# project, as is the #5 ranked CoderGamester/mcp-unity. Unity integration is clearly a serious use case driving C# MCP development.
Score distribution: who is actually winning
The AgentRank score runs 0–100, and the distribution is steeply bottom-heavy. Only 589 tools (2.3% of the index) score above 60, the threshold where an AgentRank score starts to be a reliable indicator of quality. Everything below that is a mix of signal and noise. The scoring system is intentionally harsh: a tool needs active maintenance, community adoption, and responsive issue management to break into the Strong tier.
Star count and score are correlated but not identical. The highest-starred tool in the index — punkpeye/awesome-mcp-servers with 83,395 stars — scores 71.01, landing in the Strong tier but not Elite. That's because an awesome-list doesn't get evaluated on issue health or contributor count the way an active MCP implementation does. The score rewards maintainers, not curators.
Top 10 MCP tools by AgentRank score
These are the ten highest-scoring repositories in the March 2026 index. Scores reflect all five signals; star counts are provided for comparison.
| # | Repository | Score | Stars | Language | Last commit |
|---|---|---|---|---|---|
| 1 | CoplayDev/unity-mcp | 98.67 | 7,003 | C# | 7 days ago |
| 2 | microsoft/azure-devops-mcp | 97.31 | 1,423 | TypeScript | 2 days ago |
| 3 | laravel/boost | 96.94 | 3,333 | PHP | 6 days ago |
| 4 | mark3labs/mcp-go | 96.09 | 8,353 | Go | 8 days ago |
| 5 | CoderGamester/mcp-unity | 96.06 | 1,412 | C# | 8 days ago |
| 6 | PrefectHQ/fastmcp | 95.61 | 23,775 | Python | 1 day ago |
| 7 | samanhappy/mcphub | 94.37 | 1,893 | TypeScript | 1 day ago |
| 8 | homeassistant-ai/ha-mcp | 94.28 | 1,190 | Python | 5 days ago |
| 9 | jgravelle/jcodemunch-mcp | 94.25 | 1,082 | Python | 5 days ago |
| 10 | microsoft/playwright-mcp | 94.23 | 29,150 | TypeScript | 3 days ago |
A few observations worth noting:
- Unity integration dominates the top 5. CoplayDev/unity-mcp (#1) and CoderGamester/mcp-unity (#5) both scored above 96. Game engine integration is one of the fastest-moving categories in the MCP ecosystem — Unity has a massive developer community with a strong reason to use AI tooling.
- Microsoft shows up twice in the top 10 (azure-devops-mcp at #2 and playwright-mcp at #10). Both have large contributor teams (45 and 62 contributors respectively), which boosts the contributor signal significantly.
- PrefectHQ/fastmcp at #6 has the highest star count in the top 10 at 23,775 — and it was updated just 1 day before our crawl. fastmcp is the de facto standard Python framework for building MCP servers, and its combination of raw popularity, freshness, and 208 contributors makes it the strongest Python ecosystem signal.
- mark3labs/mcp-go at #4 is the Go community's answer to fastmcp — 8,353 stars and 170 contributors for a Go-native MCP library. If you're building in Go, this is where the ecosystem has coalesced.
The skills layer: 3,859 indexed
Alongside MCP server repositories, AgentRank indexes agent skills — reusable instruction sets and tool configurations designed for AI coding platforms like Claude Code, Cursor, GitHub Copilot, and Gemini. These are a distinct category from MCP servers: they extend agent behavior rather than providing tool access.
The skills index contains 3,859 entries across two primary sources: skills.sh (2,373 skills) and Glama (1,414 MCP server entries with install data). The average skill score is 37.16 — notably higher than the 29.83 average for MCP repos, reflecting the more curated nature of the skills registries.
Top 10 skills by AgentRank score
| # | Skill | Score | Installs | Source |
|---|---|---|---|---|
| 1 | Playwright MCP Server | 84.27 | 1,777,315 | Glama |
| 2 | browser-use | 81.54 | 50,320 | skills.sh |
| 3 | turborepo | 80.18 | 11,453 | skills.sh |
| 4 | docker-expert | 78.84 | 6,709 | skills.sh |
| 5 | gh-cli | 77.65 | 11,781 | skills.sh |
| 6 | git-commit | 77.32 | 14,933 | skills.sh |
| 7 | excalidraw-diagram-generator | 77.2 | 9,491 | skills.sh |
| 8 | frontend-design | 76.76 | 11,331 | skills.sh |
| 9 | microsoft-docs | 76.75 | 7,621 | skills.sh |
| 10 | mcp-cli | 76.7 | 7,450 | skills.sh |
Platform distribution
A single skill can target multiple platforms. Claude Code's skill count (814) is notably lower than that of the other major platforms, partly because Claude Code's skill ecosystem — distributed via skills.sh and the Paperclip/Superpowers network — is newer and more curated. The Playwright MCP Server leads all skills in raw install count at 1,777,315 installs via Glama, roughly 35x the next highest-installed skills.sh entry.
Registry presence and download data
Not every MCP tool publishes to a package registry, but registry presence is a strong quality signal — it takes effort, and tools that bother tend to be more serious implementations.
- 595 repositories have a linked PyPI package. Combined PyPI download count: 450,212,036. Python dominates the server-side MCP ecosystem, and those download numbers include the fastmcp and official python-sdk packages which pull enormous volume.
- 116 repositories have a linked npm package. Combined npm downloads: 368,192. TypeScript/JavaScript MCP tools are less likely to be published to npm directly — many are server processes rather than libraries.
- 196 tools have Glama weekly download data. Combined Glama weekly downloads: 2,660,777. Glama tracks tool calls directly through their hosted infrastructure, making this the cleanest actual-usage signal we have.
The 2.66 million weekly tool calls through Glama alone is a meaningful number for an ecosystem that is less than 18 months old. The official Glama leaderboard suggests the vast majority of those calls come from a handful of high-volume servers — Playwright MCP, fetch/browser tools, and a few database connectors.
What the data predicts for Q2 2026
Based on the trends in the current index, here are the patterns most likely to define the next quarter:
- Game engine integration accelerates. The strength of Unity MCP tools (#1 and #5 in the index) signals that game studios and indie developers have discovered MCP as a workflow enhancement. Unreal Engine equivalents are likely already in development and will surface in Q2.
- Enterprise vendors consolidate position. Microsoft has two tools in the top 10. AWS, Google, and others with large engineering teams will follow, using official MCP servers to lock in their respective developer ecosystems. Expect official MCP servers for every major cloud service by mid-year.
- The skills layer outgrows the tools layer in practical importance. Skills — reusable agent behaviors — are growing faster than raw MCP server counts because they are composable across platforms. A skill works the same way whether the underlying tool is an MCP server, a REST API, or a CLI command. The abstraction is more durable.
- Quality stratification intensifies. As the index grows, the gap between the top 2% and the bottom 80% will become more visible to developers. Discovery tools — including AgentRank — will become essential infrastructure for navigating the ecosystem. Blind GitHub search is already inadequate for finding trustworthy tools.
- Go and Rust ecosystem tools punch above their weight. Both languages produce higher average scores than Python or TypeScript despite smaller communities. If performance-sensitive production deployments of MCP servers become common (as MCP usage scales from developer laptops to cloud infrastructure), Go and Rust implementations will see disproportionate adoption.
Methodology
All data in this report comes from the AgentRank index, which is computed from GitHub data crawled in March 2026. The full index covers 25,750 repositories matched via keyword search for "mcp server", "mcp-server", "model context protocol", "agent tool", and "a2a agent".
The AgentRank composite score uses five signals:

- Stars (15%)
- Freshness (25%): days since last commit
- Issue health (25%): ratio of closed to total issues
- Contributors (10%)
- Inbound dependents (25%): repositories that depend on this one via GitHub's dependency graph

Scores are normalized 0–100 across the index.
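A minimal sketch of that weighted composite. The weights are those stated in the methodology; how each raw signal is normalized to 0–100 is not specified, so the example simply assumes pre-normalized inputs.

```python
# Signal weights as stated in the methodology.
WEIGHTS = {
    "stars": 0.15,
    "freshness": 0.25,     # fresher (fewer days since last commit) scores higher
    "issue_health": 0.25,  # closed / total issues
    "contributors": 0.10,
    "dependents": 0.25,
}

def composite_score(normalized: dict[str, float]) -> float:
    """Weighted sum of five signals, each already normalized to 0–100."""
    return round(sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS), 2)

# Hypothetical repo: fresh and well-maintained, modest star count.
score = composite_score({
    "stars": 40.0, "freshness": 95.0, "issue_health": 88.0,
    "contributors": 30.0, "dependents": 70.0,
})
# score == 72.25
```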
Skills data comes from the skills.sh registry and Glama's MCP tool index, crawled in March 2026. Install and download counts are provided by those registries and were not independently verified.
See the full methodology page for signal weights, normalization approach, and data freshness guarantees.
Explore the full index: agentrank-ai.com — sort 25,750 MCP tools by score, stars, freshness, or issue health. Updated daily.
Browse by category: Database · Browser Automation · DevOps · MCP Servers
Check a specific tool: /check — paste any GitHub URL and get a full score breakdown with signal detail.
API access: AgentRank API — query the index programmatically. Free tier available.
Get the weekly AgentRank digest
Top movers, new tools, ecosystem insights — straight to your inbox.