Google PageRank for AI agents. 25,000+ tools indexed.

Why a score?

GitHub search sorted by stars shows you the most popular tools — not the most trustworthy ones. A tool with 10,000 stars whose last commit was in 2023 is not a better choice than one with 500 stars that shipped yesterday. AgentRank separates popularity from health.

The score is a single number — 0 to 100 — that answers the question an agent or developer actually cares about: is this tool actively maintained and likely to work?

The Eight Signals

Each signal is normalized to a 0–1 value, then weighted and summed to produce the final 0–100 score. When a signal has no data (e.g., no registry package to measure downloads), its weight is redistributed proportionally across the remaining signals.

Freshness 20%

How recently was the repository updated?

Score 1.0: committed within the last 7 days
Linear decay: days 7–90, score scales from 1.0 → 0.0
Exponential decay: beyond 90 days, rapid drop toward 0
Floor for established tools: repos with 200+ stars or 5+ dependents never drop below 0.3 — stable, adopted projects shouldn't be penalized for not needing daily updates
Archived repos: score 0.0 — maintenance has explicitly ended

Freshness is the #1 predictor of reliability. Stale repos accumulate unpatched bugs and bitrotting dependencies. The MCP protocol itself is evolving fast — a tool that hasn't been touched in six months may not work with current clients.
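The rules above can be sketched as a small piecewise function. This is a minimal sketch, not the engine's implementation: the published curve already reaches 0.0 at day 90, so the exponential tail beyond that is approximated here as effectively zero, and boundary handling is an assumption.

```python
def freshness_score(days_since_commit: float, stars: int = 0,
                    dependents: int = 0, archived: bool = False) -> float:
    """Freshness signal sketch: 0-1, higher = more recently maintained."""
    if archived:
        return 0.0                                  # maintenance explicitly ended
    if days_since_commit <= 7:
        score = 1.0                                 # committed within the last week
    elif days_since_commit <= 90:
        # linear decay from 1.0 at day 7 down to 0.0 at day 90
        score = 1.0 - (days_since_commit - 7) / 83.0
    else:
        # beyond 90 days the published curve describes a rapid exponential
        # drop toward 0; modelled here as effectively zero
        score = 0.0
    if stars >= 200 or dependents >= 5:
        score = max(score, 0.3)                     # floor for established, adopted tools
    return score
```

Note that the floor is applied after the decay, so an established repo that goes quiet settles at 0.3 rather than 0.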

Issue Health 20%

Does the maintainer respond to bug reports?

Formula: closed_issues / (open_issues + closed_issues)
No issues at all: score 0.3 — new or unused projects haven't had a chance to demonstrate responsiveness

A high closed-to-total ratio means the maintainer actively triages reports. A repo with 200 open issues and 10 closed is a red flag — problems are piling up, not getting resolved.
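The ratio itself is a one-liner; the only special case is an empty issue history. A minimal sketch:

```python
def issue_health(open_issues: int, closed_issues: int) -> float:
    """Share of all issues that have been closed, 0-1."""
    total = open_issues + closed_issues
    if total == 0:
        return 0.3  # no issues yet: no chance to demonstrate responsiveness
    return closed_issues / total
```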

Dependents 22%

How many other repos depend on this one?

Source: GitHub dependency graph
Normalization ceiling: 100 dependents = max score (1.0)
Not available: weight redistributed to the other signals — a missing signal isn't a penalty

When other real projects import your package, that's real-world validation. It's hard to fake and hard to buy. High dependent counts are the strongest signal that a tool actually works in production.

Downloads 13%

Weekly installs from npm or PyPI.

Source: npm weekly download count or PyPI weekly download count (whichever is higher)
Normalization ceiling: 10,000 weekly downloads = max score (1.0)
Not available: weight redistributed — only applies when a registered package exists

Download counts reflect actual usage, not just curiosity. A tool being installed 50,000 times a week is being used in real workflows.
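Dependents and downloads — like the star and contributor ceilings that follow — share one normalization shape: linear up to the ceiling, then pinned at the maximum. A minimal sketch (the 4,000-download example value is illustrative):

```python
def capped(value: float, ceiling: float) -> float:
    """Capped linear normalization: value/ceiling, never exceeding 1.0."""
    return min(value / ceiling, 1.0)

dependents_signal = capped(2938, 100)      # far past the ceiling -> 1.0
downloads_signal = capped(4000, 10_000)    # below the ceiling   -> 0.4
```

The cap is what keeps a mega-popular tool from drowning out every other signal: past the ceiling, more adoption adds nothing.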

Stars 10%

Raw GitHub star count.

Normalization ceiling: 1,000 stars = max score (1.0)

Stars are the weakest signal — easy to inflate, often reflecting hype rather than utility. We include them because broad recognition does correlate weakly with quality, but we deliberately weight them lowest among the popularity signals. A tool shouldn't rank #1 just because it went viral on Twitter.

Contributors 8%

How many distinct contributors has the repo had?

Floor: minimum 1 (every repo has at least its creator)
Normalization ceiling: 20 contributors = max score (1.0)

A single-contributor repo carries single-maintainer risk: if that person disappears, the project dies. Two or more contributors reduce the bus-factor risk. We cap the benefit at 20 — beyond that, additional contributors don't meaningfully increase reliability for a tool of this type.

Description Quality 4%

Does the repo have a meaningful description?

Score 0.0: no description at all
Score 0.3: description under 50 characters
Score 0.7: description 50–150 characters
Score 1.0: description over 150 characters

A maintainer who can't write three sentences about what their tool does is either in stealth mode or not thinking about users. It's a weak proxy for maintainer investment, but it's measurable and consistent.

License Health 3%

Is the tool usable by others?

Score 1.0: MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause, ISC, Unlicense
Score 0.6: other recognized licenses (GPL, LGPL, etc.)
Score 0.2: no license or unrecognized license — legal ambiguity, safer to avoid

Permissive licenses maximize the likelihood that the tool can be integrated into any project without legal friction.

The Formula

Base weights (when all signals have data):

Dependents: 22%
Freshness: 20%
Issue Health: 20%
Downloads: 13%
Stars: 10%
Contributors: 8%
Description Quality: 4%
License Health: 3%

When Dependents or Downloads data isn't available for a tool, those weights are redistributed proportionally across the remaining signals. A missing signal is not a penalty — it's just fewer data points.
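Proportional redistribution is equivalent to dropping the missing signals and renormalizing the remaining weights to sum to 1. A minimal sketch of the composite, assuming each signal arrives as a 0–1 value or None when there is no data:

```python
BASE_WEIGHTS = {
    "dependents": 0.22, "freshness": 0.20, "issue_health": 0.20,
    "downloads": 0.13, "stars": 0.10, "contributors": 0.08,
    "description": 0.04, "license": 0.03,
}

def agent_rank(signals: dict) -> float:
    """Weighted sum of 0-1 signals, scaled to 0-100.

    Signals with value None are excluded and the surviving weights are
    renormalized -- which is exactly a proportional redistribution of
    the missing weight.
    """
    present = {k: w for k, w in BASE_WEIGHTS.items()
               if signals.get(k) is not None}
    total_weight = sum(present.values())
    return 100.0 * sum(signals[k] * w for k, w in present.items()) / total_weight
```

A tool with no registry package and no dependency-graph data is scored on the remaining six signals alone, at full scale — it can still reach 100.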

Live Examples

Here's how the scoring works in practice, using real tools from the index as of March 2026.

microsoft/playwright-mcp Browser automation
94.81
Stars: 28,849 (capped at 1,000 → max signal)
Freshness: updated 3 days ago → 1.0 — active development
Issue Health: 743 closed / 765 total → 97% — Microsoft triages aggressively
Contributors: 62 (capped at 20 → max signal)

Scores high on every health signal. The 97% issue resolution rate is exceptional — most tools sit at 40–60%.

mcp-go Go SDK
96.29
Stars: 8,353 (capped at 1,000 → max signal)
Freshness: updated 6 days ago → 1.0 — consistently active
Issue Health: 239 closed / 266 total → 90% — issues don't pile up
Dependents: 2,938 (far exceeds the 100 ceiling → max signal)

The dependents signal is the key driver here. Nearly 3,000 other repos build on top of mcp-go — real adoption by real developers.

PrefectHQ/fastmcp Python framework
94.73
Stars: 23,659 (capped at 1,000 → max signal)
Freshness: updated yesterday → 1.0 — daily commits
Issue Health: 1,153 closed / 1,388 total → 83% — large volume, well managed
Dependents: 7,718 (far exceeds the 100 ceiling → max signal)

The most widely adopted Python MCP framework. 7,718 dependents across the ecosystem is a strong signal that this has become infrastructure.

NapthaAI/automcp Auto-conversion tool
39.96
Stars: 301 — a solid count, but that's not the problem
Freshness: last commit April 2025 → ~11 months stale — heavy penalty
Issue Health: 0 closed / 6 open → 0% — no issue has ever been resolved
Contributors: 1 — single-maintainer bus-factor risk

301 stars but a score just under 40. This is exactly what AgentRank is designed to surface: a project that got initial traction but shows no signs of ongoing maintenance. Stars don't tell you this. The score does.

Update Frequency

The crawler runs nightly. Every morning, all 25,000+ tools are re-scored from fresh GitHub data. Score and rank changes from the previous day are tracked and surfaced in the Movers feed.

Scores reflect reality as of the previous night's crawl. A tool that ships a critical fix today will see its score improve within 24 hours.

How AgentRank differs from alternatives

Source | Approach | What's missing
GitHub search (stars) | Sort by raw star count | No health signals; stale viral repos rank above actively maintained ones.
MCPMarket | Engagement-based ranking (clicks, installs) | No composite score; new tools with no traffic are invisible regardless of quality.
PulseMCP | Popularity only (GitHub stars + social) | No maintenance signals, no issue health; popularity ≠ reliability.
Glama | Per-tool security scans | No public composite score; a security-only lens misses freshness, adoption, and maintainer responsiveness.
Awesome-MCP-Servers | Curated flat list | No scoring; human curation means latency — new tools take weeks to appear.
AgentRank | 8-signal composite score, updated nightly | No competitor publishes a transparent 0–100 composite score; this is the differentiator.

Open source

The scoring engine is open source. You can read the exact implementation, propose weight changes, or run your own scoring pass against the dataset.