How to Use AgentRank Scores in Claude Code
Claude Code supports MCP servers, which means you can give it live access to
AgentRank scores for 25,000+ MCP servers while you work. This guide walks through
installing the AgentRank MCP server, the prompts that work best, and using the
@agentrank/sdk to query rankings programmatically from your projects.
Why this is useful
The hardest part of building with MCP isn't writing the code — it's knowing which servers are worth depending on. There are over 25,000 on GitHub, they vary wildly in quality, and many haven't been touched in months.
AgentRank scores every server daily on five signals: stars, freshness (days since last commit), issue health (close rate), contributor count, and downstream dependents. The composite score runs 0–100. A score above 60 means actively maintained and community-validated. Below 40 means proceed carefully.
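To make the composite concrete, here is an illustrative sketch in TypeScript. The signal names come from the list above, but the equal weighting is an assumption for illustration only, not AgentRank's published formula.

```typescript
// Illustrative only: five 0-100 signals combined into a 0-100 composite.
// The equal weights implied by a plain mean are an assumption, not the real formula.
interface Signals {
  stars: number;        // normalized star count
  freshness: number;    // recency of last commit
  issueHealth: number;  // issue close rate
  contributors: number; // normalized contributor count
  dependents: number;   // normalized downstream dependents
}

function compositeScore(s: Signals): number {
  const values = [s.stars, s.freshness, s.issueHealth, s.contributors, s.dependents];
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return Math.round(mean * 10) / 10; // one decimal place, like the reported scores
}

// A server with strong stars but stale commits lands mid-range:
compositeScore({ stars: 90, freshness: 20, issueHealth: 70, contributors: 60, dependents: 55 });
// → 59
```

The point of the sketch is the shape of the metric: no single signal dominates, so a popular but unmaintained server still falls below the "proceed carefully" line.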
With the AgentRank MCP server installed in Claude Code, you can ask Claude to check a server's score before adding it as a dependency, search for the highest-ranked option in a category, or surface what's trending this week — all without leaving your editor.
Step 1 — Install the MCP server
Claude Code reads MCP server configuration from a .mcp.json file. You can
place this at the project root (project-level) or in your home directory as
~/.mcp.json (user-level, available in every project).
Add the agentrank entry to your .mcp.json:
```json
{
  "mcpServers": {
    "agentrank": {
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"]
    }
  }
}
```
No npm install is required: with the -y flag, npx downloads and runs the latest
version automatically on first use. Subsequent runs use the cached version.
Claude Code checks for .mcp.json in the current working directory first,
then walks up the directory tree. A file at ~/.mcp.json applies globally.
Project-level files take precedence over user-level ones.
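The resolution order described above can be modeled as a small helper. This is only a sketch of the precedence rules (Claude Code does this internally); the function name and paths are mine, and the file set is passed in explicitly so the logic is easy to follow.

```typescript
// Sketch of .mcp.json resolution: walk up from the working directory,
// then fall back to the user-level file in the home directory.
// `existingFiles` stands in for a filesystem check; paths are illustrative.
function resolveMcpConfig(existingFiles: Set<string>, cwd: string, home: string): string | null {
  let dir = cwd;
  while (true) {
    const candidate = `${dir}/.mcp.json`;
    if (existingFiles.has(candidate)) return candidate; // nearest project-level file wins
    const parent = dir.slice(0, dir.lastIndexOf('/'));
    if (!parent || parent === dir) break; // reached the filesystem root
    dir = parent;
  }
  // Fall back to the user-level file.
  const userLevel = `${home}/.mcp.json`;
  return existingFiles.has(userLevel) ? userLevel : null;
}

// The project file wins even when the user-level file exists:
resolveMcpConfig(
  new Set(['/home/me/app/.mcp.json', '/home/me/.mcp.json']),
  '/home/me/app',
  '/home/me'
); // → '/home/me/app/.mcp.json'
```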
Step 2 — Verify it's working
After saving .mcp.json, start or restart Claude Code in that directory.
The AgentRank MCP server starts as a background process. To verify it loaded correctly,
ask Claude:
List the MCP tools you have available. Do you have an agentrank tool?
You should see three tools listed under agentrank:
| Tool | What it does | Auth required |
|---|---|---|
| search | Keyword search across 25,000+ MCP servers, returns ranked results | No |
| lookup | Get the AgentRank score and verdict for a specific GitHub repo | No |
| get_badge_url | Generate an embeddable score badge URL for a README | No |
All three tools use the public v1 API — no API key, no sign-up required. Rate limit is 100 requests per minute per IP, which is more than enough for interactive use.
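The 100 requests/minute limit rarely matters in interactive use, but a batch script (say, scoring every dependency in a monorepo) can hit it. A minimal client-side throttle keeps you under the limit; this helper is a generic pattern I'm sketching here, not part of any AgentRank package.

```typescript
// Minimal sliding-window throttle: at most `maxPerMinute` calls in any
// 60-second window. Generic pattern, not part of @agentrank/sdk.
function makeThrottle(maxPerMinute: number) {
  const timestamps: number[] = [];
  return async function throttled<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    // Drop call timestamps that have left the 60-second window.
    while (timestamps.length && now - timestamps[0] > 60_000) timestamps.shift();
    if (timestamps.length >= maxPerMinute) {
      // Wait until the oldest call in the window expires.
      const waitMs = 60_000 - (now - timestamps[0]);
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
    timestamps.push(Date.now());
    return fn();
  };
}

// Usage: wrap each API call so bursts stay under the limit, e.g.
// const throttled = makeThrottle(100);
// const tool = await throttled(() => ar.getTool('jlowin/fastmcp'));
```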
Step 3 — Prompts that work
Claude calls the AgentRank tools automatically when your prompt is about tool selection or quality. Here are the prompt patterns that work best.
Finding the best tool for a task
What's the best MCP server for querying a PostgreSQL database?
Use AgentRank to find and compare the top options.
Claude will call search("postgresql database"), get back the top-ranked
results, and explain the differences — score, stars, freshness, and what each server does.
Checking a specific tool before adding it
I'm considering adding benborla29/mcp-server-mysql to my project.
Can you look up its AgentRank score and tell me if it's well-maintained?
Claude calls lookup("benborla29/mcp-server-mysql") and returns the score
with a plain-English verdict: "score 58.4 — solid, active maintainer, 1,800+ stars".
Comparative selection
I need to add web scraping capability to my agent.
Search AgentRank for the top browser automation MCP servers and recommend one
based on the scores and my use case (headless Chrome, need JavaScript rendering).

This is the highest-value use case: Claude searches for options, compares scores, and makes a recommendation grounded in real data rather than training knowledge that may be months out of date.
Gate-check before adding a dependency
Before I commit to using modelcontextprotocol/servers for filesystem access,
check its AgentRank score. If it's below 50, suggest alternatives.

You can use this as a quality gate in your development workflow: Claude checks the score and only proceeds if it meets a threshold you define.
Phrasing like "use AgentRank to check" or "look up the AgentRank score" reliably triggers tool use. Generic questions like "what's the best MCP server?" may or may not invoke the tool depending on context. Explicit references are more consistent.
Step 4 — Use the SDK in your project
For programmatic use — scripts, CI checks, agent pipelines — install the official TypeScript SDK. It's a thin wrapper around the AgentRank REST API with full type coverage.
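Because the SDK is a thin wrapper, you can also call the REST API directly if you'd rather not add a dependency. Note that the base URL and query-parameter names below are assumptions for illustration; confirm the actual endpoint shape against the API docs before relying on it.

```typescript
// Direct REST access without the SDK. ASSUMPTION: the base URL and the
// `q`/`limit` parameter names are illustrative, not confirmed API details.
const BASE = 'https://api.agentrank-ai.com/v1'; // assumed base URL

function searchUrl(query: string, limit = 5): string {
  const params = new URLSearchParams({ q: query, limit: String(limit) });
  return `${BASE}/search?${params}`;
}

// Then fetch and parse as usual:
// const res = await fetch(searchUrl('browser automation'));
// const data = await res.json();
```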
```shell
npm install @agentrank/sdk
```

Search for tools

```typescript
import { AgentRank } from '@agentrank/sdk';

const ar = new AgentRank();

// Search for the best browser automation MCP servers
const results = await ar.search('browser automation', { limit: 5 });
for (const tool of results.results) {
  console.log(`#${tool.rank} ${tool.name} score=${tool.score.toFixed(1)}`);
}
// #1 microsoft/playwright-mcp score=71.3
// #2 browserbase/mcp-server-browserbase score=67.8
// #3 MarkusPfundstein/mcp-server-puppeteer score=61.2
```

Look up a specific tool
```typescript
import { AgentRank } from '@agentrank/sdk';

const ar = new AgentRank();

// Check a specific tool before adding it to your project
const tool = await ar.getTool('jlowin/fastmcp');
console.log(`Score: ${tool.score}`); // Score: 89.4
console.log(`Rank: #${tool.rank}`);  // Rank: #1
// Verdict: highly recommended — active maintainer, 6700+ stars, 89% issue close rate
```

See what's trending
```typescript
import { AgentRank } from '@agentrank/sdk';

const ar = new AgentRank();

// See what's trending this week
const movers = await ar.getMovers({ direction: 'up', limit: 5 });
for (const mover of movers.results) {
  const delta = mover.rankDelta > 0 ? `+${mover.rankDelta}` : String(mover.rankDelta);
  console.log(`${delta.padStart(4)} spots ${mover.name} score=${mover.score.toFixed(1)}`);
}
```

Real workflows
Pre-commit MCP quality check
Add a script to your project that checks the AgentRank score of every MCP server in your
.mcp.json before you commit. Tools below your threshold fail the check.
```typescript
import { AgentRank } from '@agentrank/sdk';
import mcpConfig from '../.mcp.json' assert { type: 'json' };

const ar = new AgentRank();
const MIN_SCORE = 50;
let failed = false;

for (const [name, config] of Object.entries(mcpConfig.mcpServers)) {
  // Skip servers that aren't GitHub repos (local paths, etc.)
  const pkg = (config as any).args?.find((a: string) => a.includes('/'));
  if (!pkg) continue;
  try {
    const tool = await ar.getTool(pkg);
    const status = tool.score >= MIN_SCORE ? 'PASS' : 'FAIL';
    console.log(`[${status}] ${name}: score=${tool.score.toFixed(1)} (threshold: ${MIN_SCORE})`);
    if (tool.score < MIN_SCORE) failed = true;
  } catch {
    console.log(`[SKIP] ${name}: not in AgentRank index`);
  }
}

if (failed) process.exit(1);
```

Agent tool selection at runtime
If you're building an AI agent that dynamically selects MCP servers at runtime, use the SDK to rank candidates and pick the best one:
```typescript
import { AgentRank } from '@agentrank/sdk';

const ar = new AgentRank();

async function pickBestTool(task: string): Promise<string> {
  const results = await ar.search(task, { limit: 3 });
  if (!results.results.length) throw new Error('No tools found for: ' + task);
  const best = results.results[0];
  console.log(`Selected: ${best.name} (score: ${best.score.toFixed(1)}, rank: #${best.rank})`);
  return best.name;
}

// Use it:
const tool = await pickBestTool('query postgres database');
// Selected: benborla29/mcp-server-mysql (score: 58.4, rank: #18)
```

Get an API key for richer data
The public API returns rank, score, name, and description. With a free API key you get the full signal breakdown — individual sub-scores for stars, freshness, issue health, contributors, and dependents — plus rank history.
```json
{
  "mcpServers": {
    "agentrank": {
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"],
      "env": {
        "AGENTRANK_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

```typescript
import { AgentRank } from '@agentrank/sdk';

// With API key: gets richer data (score breakdown, rank history)
const ar = new AgentRank({ apiKey: process.env.AGENTRANK_API_KEY });
const tool = await ar.getTool('modelcontextprotocol/python-sdk');
// Returns full signal breakdown: stars score, freshness score, issue health, contributors, dependents
```

Get a free API key at agentrank-ai.com/docs. The free tier supports 1,000 v2 requests per day, which covers most development workflows.
What's next
- Cursor users — see the Cursor integration guide for the same workflow with Cursor-specific config
- Build your own agent — the full API integration tutorial covers OpenAI function calling and LangChain
- Browse the index — agentrank-ai.com/tools for the full ranked leaderboard
- Submit a tool — agentrank-ai.com/submit to add any GitHub repo to the index