How to Integrate AgentRank Scores Into Your AI Agent
The AgentRank API indexes 25,000+ MCP servers and agent tools, scored daily by real signals: stars, freshness, issue health, contributors, and dependents. This tutorial shows you how to query those rankings from your own agent — so it recommends the best tool for the job instead of guessing.
What you'll build
By the end of this tutorial you'll have a recommend_tools(use_case) function
that takes a natural-language description and returns the top-ranked MCP servers for that
use case — scored by real quality signals, updated daily. You'll wire it into:
- Claude — via the AgentRank MCP server (zero code required)
- OpenAI — as a function-calling tool
- LangChain — as a structured tool
The recommendation logic is ~20 lines. All the intelligence comes from the AgentRank index.
Prerequisites
- Basic Python 3.10+ or TypeScript/Node.js 18+ knowledge
- The AgentRank API base URL: https://agentrank-ai.com/api/v1 (no auth for read queries)
- For advanced endpoints: a free API key from agentrank-ai.com/docs
The /api/v1 endpoints are public — no API key, no signup. Rate limit: 100 req/min per IP.
The /api/v2 endpoints require a Bearer token and return richer data:
full score breakdowns, rank history, and Glama usage stats.
Get a free key at agentrank-ai.com/docs.
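At 100 requests per minute, a busy multi-agent setup can get throttled. A minimal retry sketch using only the standard library; note that the 429 status code and the exponential-backoff schedule are assumptions on our part, since the rate-limit response format isn't documented above:

```python
import json
import time
import urllib.error
import urllib.parse
import urllib.request

API = "https://agentrank-ai.com/api/v1/search"

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))

def search_with_retry(query: str, limit: int = 10, max_attempts: int = 4) -> list[dict]:
    """Query the public search endpoint, backing off when rate-limited.

    Assumes the API signals throttling with HTTP 429 (unverified).
    """
    url = f"{API}?{urllib.parse.urlencode({'q': query, 'limit': limit})}"
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.loads(resp.read())["results"]
        except urllib.error.HTTPError as e:
            if e.code != 429:
                raise
            time.sleep(backoff_delay(attempt))
    raise RuntimeError(f"still rate-limited after {max_attempts} attempts")
```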
Step 1 — Query the API
The search endpoint returns tools ranked by AgentRank score. The simplest query is a keyword search with no authentication required.
curl

```bash
curl "https://agentrank-ai.com/api/v1/search?q=database&limit=5"
```

```json
{
  "query": "database",
  "category": "all",
  "sort": "score",
  "results": [
    {
      "type": "tool",
      "id": "benborla29--mcp-server-mysql",
      "name": "benborla29/mcp-server-mysql",
      "description": "A MySQL MCP server with read and write operations",
      "score": 72.4,
      "rank": 18,
      "url": "https://agentrank-ai.com/tool/benborla29--mcp-server-mysql/",
      "raw": { "stars": 1842, "lastCommitAt": "2026-03-10T14:22:00Z" }
    }
  ],
  "meta": { "total": 47, "limit": 5, "offset": 0 }
}
```

Python
```python
import requests

def search_tools(query: str, limit: int = 10) -> list[dict]:
    """Search the AgentRank index for tools matching a query."""
    resp = requests.get(
        "https://agentrank-ai.com/api/v1/search",
        params={"q": query, "limit": limit, "sort": "score"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

results = search_tools("database")
for tool in results:
    print(f"#{tool['rank']:3d} {tool['name']:<45} score={tool['score']}")
```

TypeScript
```typescript
interface AgentRankResult {
  type: "tool" | "skill" | "agent";
  id: string;
  name: string;
  description: string | null;
  score: number;
  rank: number;
  url: string;
}

async function searchTools(query: string, limit = 10): Promise<AgentRankResult[]> {
  const url = new URL("https://agentrank-ai.com/api/v1/search");
  url.searchParams.set("q", query);
  url.searchParams.set("limit", String(limit));
  url.searchParams.set("sort", "score");

  const res = await fetch(url.toString());
  if (!res.ok) throw new Error(`AgentRank API error: ${res.status}`);
  const data = await res.json();
  return data.results;
}

const results = await searchTools("database");
results.forEach(t => console.log(`#${t.rank} ${t.name} score=${t.score}`));
```

Step 2 — Filter by category
The category parameter narrows results. Valid values: tool
(GitHub repos), skill (registry entries), agent (A2A agents),
or omit for all. The sort parameter accepts score (default),
rank, stars, or freshness.
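If you want an ordering the API doesn't expose, such as score with a freshness boost, you can re-rank client-side over fields the results already carry. A sketch; the 0.7/0.3 blend and the 365-day freshness window are arbitrary illustrations, not an AgentRank formula:

```python
from datetime import datetime, timezone

def freshness_score(last_commit_iso: str, now: datetime) -> float:
    """Map days since last commit onto 0..100 (0 days -> 100, 365+ days -> 0)."""
    commit = datetime.fromisoformat(last_commit_iso.replace("Z", "+00:00"))
    days = (now - commit).days
    return max(0.0, 100.0 * (1 - days / 365))

def rerank(results: list[dict], now: datetime) -> list[dict]:
    """Order results by a custom blend of AgentRank score and recency."""
    def blend(tool: dict) -> float:
        return 0.7 * tool["score"] + 0.3 * freshness_score(tool["raw"]["lastCommitAt"], now)
    return sorted(results, key=blend, reverse=True)
```

With this blend, a slightly lower-scored tool that shipped a commit this week can outrank a higher-scored one that has been quiet for a year.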
```bash
# MCP tool repos only
curl "https://agentrank-ai.com/api/v1/search?q=browser&category=tool&limit=5"

# Registry skills only
curl "https://agentrank-ai.com/api/v1/search?q=playwright&category=skill&limit=5"
```

```python
def search_mcp_tools(query: str, sort: str = "score", limit: int = 10) -> list[dict]:
    resp = requests.get(
        "https://agentrank-ai.com/api/v1/search",
        params={"q": query, "category": "tool", "sort": sort, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

top = search_mcp_tools("browser automation")                      # ranked by score
fresh = search_mcp_tools("browser automation", sort="freshness")  # by recency
```

Step 3 — Get tool details
The /api/v2/tool/{id} endpoint returns the full signal breakdown:
individual sub-scores, weights, rank history, GitHub topics, and Glama usage stats.
This endpoint requires an API key. The tool ID uses -- instead of /
(e.g. modelcontextprotocol/python-sdk → modelcontextprotocol--python-sdk).
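To see which signal drives (or drags) a tool's ranking, you can compute each signal's weighted contribution from the v2 payload. This is a sketch against the sample response in this step; exactly how AgentRank combines these into the headline score isn't documented here, so treat the sums as indicative only:

```python
def signal_contributions(signals: dict) -> dict[str, float]:
    """Map each signal name to score * weight, rounded to 2 decimals."""
    return {name: round(s["score"] * s["weight"], 2) for name, s in signals.items()}

# Sub-scores and weights from the sample v2 response
sample = {
    "stars":        {"score": 85.0, "weight": 0.16},
    "freshness":    {"score": 98.0, "weight": 0.30},
    "issueHealth":  {"score": 74.0, "weight": 0.32},
    "contributors": {"score": 72.0, "weight": 0.12},
    "dependents":   {"score": 0.0,  "weight": 0.00},
}

contrib = signal_contributions(sample)
biggest = max(contrib, key=contrib.get)  # the signal contributing most weight
```

For this tool, freshness and issue health dominate: together they account for over half of the weighted total, which is why a stale repo drops quickly even with many stars.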
```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
  "https://agentrank-ai.com/api/v2/tool/modelcontextprotocol--python-sdk"
```

```json
{
  "id": "modelcontextprotocol--python-sdk",
  "name": "modelcontextprotocol/python-sdk",
  "score": 62.3,
  "rank": 3,
  "signals": {
    "stars":        { "raw": 14820, "score": 85.0, "weight": 0.16 },
    "freshness":    { "daysSinceLastCommit": 3, "score": 98.0, "weight": 0.30 },
    "issueHealth":  { "closeRate": 0.74, "score": 74.0, "weight": 0.32 },
    "contributors": { "count": 89, "score": 72.0, "weight": 0.12 },
    "dependents":   { "count": 0, "score": 0.0, "weight": 0.00 }
  },
  "rankHistory": [
    { "date": "2026-03-17", "rank": 4, "score": 61.8 },
    { "date": "2026-03-16", "rank": 4, "score": 61.5 }
  ]
}
```

Python
```python
import requests

API_KEY = "your_api_key"  # get a free key at agentrank-ai.com/docs

def get_tool_details(github_full_name: str) -> dict:
    # Tool IDs use "--" in place of "/"
    tool_id = github_full_name.replace("/", "--")
    resp = requests.get(
        f"https://agentrank-ai.com/api/v2/tool/{tool_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

details = get_tool_details("modelcontextprotocol/python-sdk")
print(f"Score: {details['score']} | Rank: #{details['rank']}")
for signal, data in details["signals"].items():
    print(f"  {signal:<16} {data['score']:5.1f} (weight {data['weight']:.0%})")
```

Step 4 — Build a recommendation function
Combine search and filtering into a single recommend_tools function. It takes plain-English input and returns a formatted summary ready to hand to an agent.
Python

```python
import requests

BASE_URL = "https://agentrank-ai.com/api/v1/search"

def recommend_tools(use_case: str, top_n: int = 5) -> str:
    """Return top-ranked MCP tools for a given use case.

    Args:
        use_case: e.g. "scrape web pages" or "query postgres"
        top_n: number of results

    Returns:
        Formatted string ready to inject into an LLM prompt.
    """
    resp = requests.get(
        BASE_URL,
        params={"q": use_case, "category": "tool", "sort": "score", "limit": top_n},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["results"]
    if not results:
        return f"No ranked tools found for '{use_case}'."

    def verdict(s: float) -> str:
        return "highly rated" if s >= 80 else "solid" if s >= 60 else "use with caution"

    lines = [f"Top {len(results)} MCP tools for '{use_case}':\n"]
    for i, tool in enumerate(results, 1):
        lines.append(
            f"{i}. {tool['name']} — score {tool['score']:.1f} "
            f"({verdict(tool['score'])}, #{tool['rank']})"
        )
        if tool.get("description"):
            lines.append(f"   {tool['description']}")
        lines.append(f"   {tool['url']}")
    return "\n".join(lines)
```

TypeScript

```typescript
const BASE_URL = "https://agentrank-ai.com/api/v1/search";

export async function recommendTools(useCase: string, topN = 5): Promise<string> {
  const url = new URL(BASE_URL);
  url.searchParams.set("q", useCase);
  url.searchParams.set("category", "tool");
  url.searchParams.set("sort", "score");
  url.searchParams.set("limit", String(topN));

  const res = await fetch(url.toString());
  if (!res.ok) throw new Error(`AgentRank error: ${res.status}`);
  const data = await res.json();
  const results: any[] = data.results;
  if (!results.length) return `No ranked tools found for '${useCase}'.`;

  const verdict = (s: number) =>
    s >= 80 ? "highly rated" : s >= 60 ? "solid" : "use with caution";

  const lines = [`Top ${results.length} MCP tools for '${useCase}':\n`];
  results.forEach((tool, i) => {
    lines.push(
      `${i + 1}. ${tool.name} — score ${tool.score.toFixed(1)} (${verdict(tool.score)}, #${tool.rank})`
    );
    if (tool.description) lines.push(`   ${tool.description}`);
    lines.push(`   ${tool.url}`);
  });
  return lines.join("\n");
}
```

Step 5 — Add to your agent
OpenAI function calling
Expose recommend_tools as a tool in your OpenAI chat loop.
The model calls it automatically when the user asks for a tool recommendation.
```python
from openai import OpenAI
from recommend_tools import recommend_tools
import json, os

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "recommend_tools",
            "description": (
                "Search AgentRank for MCP servers ranked by quality signals. "
                "Use when the user asks for a tool recommendation."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "use_case": {"type": "string", "description": "What the tool should do"},
                    "top_n": {"type": "integer", "default": 5},
                },
                "required": ["use_case"],
            },
        },
    }
]

def chat(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=TOOLS, tool_choice="auto"
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        return msg.content
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": recommend_tools(**args)})
    final = client.chat.completions.create(model="gpt-4o", messages=messages)
    return final.choices[0].message.content

print(chat("What's the best MCP server for browsing the web?"))
```

LangChain
Wrap recommend_tools as a LangChain Tool and add it to any
ReAct or OpenAI Functions agent.
```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from recommend_tools import recommend_tools

agentrank_tool = Tool(
    name="recommend_tools",
    func=lambda q: recommend_tools(q),
    description=(
        "Search AgentRank for top-ranked MCP servers. "
        "Input: plain-English use case. Returns: ranked recommendations."
    ),
)

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant. Use AgentRank to recommend tools."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_openai_functions_agent(llm, [agentrank_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[agentrank_tool], verbose=True)

result = executor.invoke({"input": "I need to browse the web from my AI agent"})
print(result["output"])
```

Bonus — Use the AgentRank MCP server
If you're using Claude Desktop, Cursor, Windsurf, or any other MCP-capable client, skip the API integration entirely. Install the agentrank-mcp-server npm package and Claude will automatically query AgentRank whenever it needs to recommend a tool.
```bash
npm install -g agentrank-mcp-server

# or without installing:
npx -y agentrank-mcp-server
```

Claude Desktop & Cursor
```json
{
  "mcpServers": {
    "agentrank": {
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"]
    }
  }
}
```

VS Code (Copilot)
```json
{
  "servers": {
    "agentrank": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "agentrank-mcp-server"]
    }
  }
}
```

The MCP server exposes three tools:
| Tool | Description | Example prompt |
|---|---|---|
| search | Search 25,000+ MCP servers by keyword | "Best tool for querying a database?" |
| lookup | Get score and verdict for a specific GitHub repo | "Is modelcontextprotocol/python-sdk any good?" |
| get_badge_url | Generate an embeddable score badge | "Give me a README badge for my MCP server" |
Calls /api/v1/search — no API key needed, no configuration required.
Rankings update nightly, so recommendations always reflect the current ecosystem.
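Because rankings only change nightly, an agent that recommends tools on every turn can cache responses instead of re-querying. A minimal in-process TTL cache sketch; the 24-hour default is our assumption about tolerable staleness, and the injectable clock exists only to make the cache testable:

```python
import time
from typing import Any, Callable

class DailyCache:
    """Memoize fetched values until a TTL expires."""

    def __init__(self, ttl_seconds: float = 86_400,
                 clock: Callable[[], float] = time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store: dict[Any, tuple[float, Any]] = {}

    def get(self, key: Any, fetch: Callable[[], Any]) -> Any:
        """Return the cached value for `key`, calling `fetch` only on miss/expiry."""
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        value = fetch()
        self._store[key] = (now, value)
        return value

# Usage sketch, wrapping the search_tools helper from Step 1:
#   cache = DailyCache()
#   results = cache.get(("search", "database"), lambda: search_tools("database"))
```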
What's next
- Add rank history — the v2 API with ?history=1 detects trending tools
- Scope by language — search for "python database" to narrow results
- Embed score badges — show AgentRank scores in your tool catalog or README
- Submit your own tool — add any GitHub repo at agentrank-ai.com/submit
The complete runnable code samples are in the scripts/quickstart-code-samples directory of the AgentRank repo.