How to Use AgentRank in Cursor
Cursor supports MCP servers in both Chat and Agent modes. This guide shows you how to add the AgentRank MCP server so Cursor can query live rankings for 25,000+ MCP servers while you work — helping you pick the highest-quality tools instead of guessing from outdated training data.
Why use AgentRank in Cursor
When you ask Cursor to recommend an MCP server or help you choose between packages, it draws on training data that's months old. The MCP ecosystem moves fast — servers get abandoned, new ones become the community standard, and quality varies enormously.
AgentRank scores every MCP server daily on five signals: GitHub stars, commit freshness, issue close rate, contributor count, and downstream dependents. The composite score runs 0–100. With the MCP server installed, Cursor can look up current scores in real time instead of guessing.
The result: when you ask Cursor "what's the best MCP server for querying a database?", it can answer with live data rather than stale training knowledge.
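To make the scoring model concrete, here is a toy sketch of a weighted composite over the five signals. The caps, weights, and normalization below are illustrative guesses for explanation only; AgentRank does not publish its exact formula.

```python
# Toy composite score over the five signals AgentRank describes.
# Weights and normalization caps are ILLUSTRATIVE, not the real formula.

def composite_score(stars, days_since_commit, issue_close_rate,
                    contributors, dependents):
    """Blend five signals, each normalized into [0, 1], into a 0-100 score."""
    s_stars = min(stars / 10_000, 1.0)
    s_fresh = max(0.0, 1.0 - days_since_commit / 365)  # fresher commit = higher
    s_issues = issue_close_rate                         # already a 0-1 rate
    s_contrib = min(contributors / 100, 1.0)
    s_deps = min(dependents / 1_000, 1.0)

    weights = [0.25, 0.25, 0.20, 0.15, 0.15]
    signals = [s_stars, s_fresh, s_issues, s_contrib, s_deps]
    return 100 * sum(w * s for w, s in zip(weights, signals))

print(round(composite_score(8_000, 14, 0.9, 40, 300), 1))
```

The real weighting may differ; the point is that one blended 0–100 number summarizes several independent health signals.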
Step 1 — Configure the MCP server
Cursor reads MCP server configuration from ~/.cursor/mcp.json (global) or
.cursor/mcp.json in your project root (per-project). Global is recommended
since you'll want AgentRank available in every project.
Global configuration (recommended)
// ~/.cursor/mcp.json — available in all projects
{
  "mcpServers": {
    "agentrank": {
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"]
    }
  }
}
Project-level configuration
// .cursor/mcp.json — project-level config
{
  "mcpServers": {
    "agentrank": {
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"]
    }
  }
}
No installation required. npx -y downloads and runs
agentrank-mcp-server automatically on first use and caches it locally.
Subsequent runs start in under a second.
You can also add MCP servers through Cursor's UI: open Settings
→ Features → MCP Servers →
Add Server. Set type to stdio, command to
npx, and args to -y agentrank-mcp-server. The UI writes
to the same ~/.cursor/mcp.json file.
Step 2 — Verify it's working
After saving the config, restart Cursor or reload the window (Cmd/Ctrl+Shift+P → "Developer: Reload Window"). The MCP server starts automatically.
Open Cursor Chat and type:
What MCP tools do you have available? List them.
You should see the AgentRank tools listed. Three are exposed:
| Tool | What it does | Key inputs |
|---|---|---|
| search | Find the best MCP server for a task by keyword; returns ranked results | query, limit, category, sort |
| lookup | Get the AgentRank score for a specific GitHub repo | GitHub full name (e.g. jlowin/fastmcp) |
| get_badge_url | Generate an embeddable score badge URL for a README | GitHub full name |
All tools use the free public API — no authentication required. The rate limit (100 req/min per IP) is well above what interactive Cursor usage generates.
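Interactive use stays well under that limit, but a batch script calling the API in a loop might not. A minimal retry sketch for HTTP 429 responses (standard rate-limit handling, not an AgentRank-specific guarantee; the fetch and sleep callables are injected so the logic is easy to test):

```python
import time

def get_with_backoff(fetch, url, max_retries=3, sleep=time.sleep):
    """Call fetch(url), retrying on HTTP 429 with exponential backoff.

    `fetch` is any callable returning a response object with a
    .status_code attribute (e.g. requests.get).
    """
    for attempt in range(max_retries + 1):
        resp = fetch(url)
        if resp.status_code != 429:
            return resp
        if attempt < max_retries:
            sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
    return resp  # still rate-limited after all retries
```

In practice you would pass `requests.get` as `fetch` and leave `sleep` at its default.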
Step 3 — Prompts that work
Cursor invokes MCP tools automatically when the prompt is relevant. These are the patterns that consistently trigger AgentRank tool use.
Find the best tool for a task
Search AgentRank for the best MCP server to handle web scraping with JavaScript rendering.
Compare the top 3 options and recommend one.
Cursor calls search("web scraping javascript rendering"), gets back
the top-ranked results with scores, and gives you a grounded recommendation.
Check a specific tool
Look up the AgentRank score for modelcontextprotocol/servers.
Is it still actively maintained? What's the issue health score?
Direct lookup via the lookup tool. Useful when you already know which
tool you're considering and want a quick quality check.
Score-gated workflow
I'm about to add apify/mcp-server-rag-web-browser to my project.
Check its AgentRank score first. Only proceed with the integration if it's above 55.
This pattern uses AgentRank as a quality gate: Cursor checks the score first and only proceeds if the threshold is met. Useful when evaluating new dependencies.
Consolidate overlapping tools
I have these MCP servers in my .mcp.json: firecrawl-dev/mcp-server-firecrawl,
browserbase/mcp-server-browserbase, and apify/mcp-server-rag-web-browser.
Use AgentRank to look up all three scores and recommend which one to keep.
Pass multiple candidates and ask Cursor to look them all up and compare. Good for reducing MCP server sprawl in a project.
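The consolidation pattern boils down to "score every candidate, keep the best". A minimal sketch of that step, again with the score source injected as a callable rather than a live API call:

```python
def pick_keeper(candidates, get_score):
    """Score each GitHub full name and return (best_repo, all_scores)."""
    scores = {repo: get_score(repo) for repo in candidates}
    best = max(scores, key=scores.get)
    return best, scores
```

Feeding it the three repos from the prompt above (with whatever scores the lookup returns) yields the single server worth keeping, plus the full score map for the comparison write-up.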
Both modes support MCP tools. In Chat mode, tools are called when explicitly relevant. In Agent mode, Cursor can chain multiple tool calls autonomously — for example, searching for options, looking up each candidate's score, and writing the integration code, all in one run.
Step 4 — Add to .cursorrules
To make AgentRank lookups automatic — without having to ask explicitly — add an
instruction to your .cursorrules file. Cursor applies these rules to every
chat session in the project.
# .cursorrules
When recommending or evaluating MCP servers, always check the AgentRank score
using the agentrank_search or agentrank_lookup tools before making a recommendation.
Only recommend servers with a score above 50. If the top result is below 50, say so
and explain the tradeoffs.
With this rule, any time you ask Cursor about MCP servers — even in a passing question like "what should I use for file access?" — it will call AgentRank before responding.
Step 5 — Use the SDK in your project
For programmatic access — build scripts, CI checks, runtime tool selection — use the
official @agentrank/sdk. Cursor can read the type definitions and help you
write code against it.
npm install @agentrank/sdk
TypeScript — search
import { AgentRank } from '@agentrank/sdk';
const ar = new AgentRank();
// Find the top-ranked database MCP servers
const results = await ar.search('postgres database', {
  limit: 5,
  sort: 'score',
  category: 'tool',
});
for (const tool of results.results) {
  const verdict = tool.score >= 60 ? 'recommended' : 'proceed with caution';
  console.log(`#${tool.rank} score=${tool.score.toFixed(1)} ${verdict} ${tool.name}`);
}
TypeScript — look up a specific tool
import { AgentRank } from '@agentrank/sdk';
const ar = new AgentRank();
// Check a specific tool before adding it as a dependency
const tool = await ar.getTool('jlowin/fastmcp');
console.log(`${tool.name}: score=${tool.score}, rank=#${tool.rank}`);
// jlowin/fastmcp: score=89.4, rank=#1
TypeScript — trending this week
import { AgentRank } from '@agentrank/sdk';
const ar = new AgentRank();
// See which MCP servers are gaining momentum this week
const movers = await ar.getMovers({ direction: 'up', limit: 5 });
for (const m of movers.results) {
  console.log(`+${m.rankDelta} spots ${m.name} score=${m.score.toFixed(1)}`);
}
Python — search
If you're working in Python, use the REST API directly with requests:
import requests
def find_best_mcp_server(task: str, top_n: int = 3) -> list[dict]:
    """Find top-ranked MCP servers for a given task."""
    resp = requests.get(
        "https://agentrank-ai.com/api/v1/search",
        params={"q": task, "category": "tool", "sort": "score", "limit": top_n},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

# Find the best database MCP server
tools = find_best_mcp_server("postgres database")
for t in tools:
    verdict = "recommended" if t["score"] >= 60 else "proceed with caution"
    print(f"#{t['rank']:3d} score={t['score']:.1f} {verdict} {t['name']}")
Get richer data with an API key
The public v1 API returns rank, score, name, and description. With a free API key you unlock the v2 API: full signal breakdown (individual sub-scores for each of the five signals), rank history, and higher rate limits.
// ~/.cursor/mcp.json — with API key for richer data
{
  "mcpServers": {
    "agentrank": {
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"],
      "env": {
        "AGENTRANK_API_KEY": "your_api_key_here"
      }
    }
  }
}
Get a free key at agentrank-ai.com/docs. The free tier covers 1,000 v2 requests per day.
With the API key set, the lookup tool returns a full signal breakdown instead
of just the composite score — helpful when you want to understand why a tool
scored the way it did.
As a rough guide to interpreting composite scores:
- 80+ — highly recommended: active development, strong community, low risk.
- 60–79 — solid: maintained and used, minor concerns at worst.
- 40–59 — proceed with caution: may be early-stage or slow to respond to issues.
- Below 40 — higher risk: check the GitHub repo manually before depending on it.
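If you script against the API, those bands translate directly into a small helper, with cutoffs exactly as listed in this guide:

```python
def score_band(score):
    """Map an AgentRank composite score (0-100) to this guide's risk band."""
    if score >= 80:
        return "highly recommended"
    if score >= 60:
        return "solid"
    if score >= 40:
        return "proceed with caution"
    return "higher risk"
```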
What's next
- Claude Code users — see the Claude Code integration guide for the same workflow with Claude Code config
- Build a custom agent — the full integration tutorial covers OpenAI function calling, LangChain, and the Claude API
- Browse the leaderboard — agentrank-ai.com/tools for the full ranked index
- Submit a tool — agentrank-ai.com/submit to add any GitHub repo
Get the weekly AgentRank digest
Top movers, new tools, ecosystem insights — straight to your inbox.