
Smithery vs AgentRank: Which MCP Directory Should You Use?

Smithery and AgentRank are the two most-referenced MCP directories in the ecosystem right now. They are not the same kind of product. One is a hosted deployment platform with a directory on the side. The other is a discovery and ranking engine with 6x the coverage. This is a direct comparison across coverage, methodology, API, and use case fit — with a clear recommendation for each scenario.

Side-by-side comparison

The most important thing to understand upfront: Smithery and AgentRank are solving different primary problems. Smithery's core product is hosted MCP server deployment — the directory is a byproduct. AgentRank's core product is the ranked index — discovery is the primary value. That shapes every difference in the table below.

Dimension              Smithery.ai                    AgentRank
Coverage               ~4,000 tools                   25,000+ tools
Update frequency       Continuous (platform-driven)   Nightly crawl
Ranking method         Smithery platform usage        Composite quality score (5 signals)
Scoring transparency   Opaque                         Fully documented
Public API             Yes (auth required)            Yes (no key for basic access)
Hosted deployment      Yes — core feature             No
Open data              No                             Yes
Freshness signal       No                             Yes (25% weight)
Issue health signal    No                             Yes (25% weight)
Claim listing          Via server submission          Yes — free claim flow

Data as of March 2026. Coverage figures from publicly available platform data.

Coverage gap

Smithery lists approximately 4,000 MCP servers. AgentRank indexes 25,000+. That 6x gap reflects how each platform adds tools.

Smithery adds servers by submission. A maintainer (or the community) submits a server to be deployed on Smithery's infrastructure. This means Smithery's catalog skews toward servers that maintainers actively want to support and promote — the motivated subset of the ecosystem.

AgentRank adds servers by crawling. The nightly crawler runs GitHub searches across MCP-related patterns and ingests every matching repository automatically. New repos appear in the index without any action from the maintainer. The result is broader coverage, including early-stage projects, niche tools, and servers that haven't been submitted anywhere else.
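As a rough sketch of that ingestion shape, a crawler could build GitHub search-API queries from a list of MCP-related patterns. The patterns below are invented for illustration; AgentRank's actual query set is not published.

```python
import urllib.parse

# Invented example patterns; the real crawler's queries are not documented here.
PATTERNS = [
    '"mcp-server" in:name,description',
    'topic:mcp',
    '"modelcontextprotocol" in:readme',
]

def search_urls(patterns, per_page=100):
    """Build one GitHub search-API URL per pattern (auth and pagination omitted)."""
    base = "https://api.github.com/search/repositories"
    return [f"{base}?q={urllib.parse.quote(p)}&per_page={per_page}&sort=updated"
            for p in patterns]
```

A nightly job would fetch each URL, page through the results, and upsert every matching repository into the index — which is why no maintainer action is needed to appear.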

If you're asking "does an MCP server exist for X?" — you'll get more complete answers from AgentRank. The 21,000 servers that appear in AgentRank but not Smithery are real GitHub projects. Some are actively maintained. Some are abandoned. The score separates the two.

Ranking methodology

Smithery: platform usage signals

Smithery surfaces tools based on how often they've been deployed or run through the Smithery platform. This is a legitimate signal — servers with more Smithery installs are clearly being used by real developers. But it measures adoption specifically within Smithery's ecosystem.

The limitation: a server can have massive real-world adoption on GitHub, heavy use in production agent workflows, and active weekly development — and rank poorly on Smithery because it hasn't been installed through Smithery specifically. Stars, commits, issue health, and ecosystem dependents are not factored in.

The ranking methodology is not publicly documented. You can't inspect the weights or reproduce the scores.

AgentRank: composite quality score

AgentRank computes a 0–100 composite score from five GitHub signals for every indexed tool:

  • Freshness (25%) — days since last commit; scores decay past 90 days idle
  • Issue health (25%) — closed issues divided by total; measures maintainer responsiveness
  • Inbound dependents (25%) — repos that depend on this one; the strongest real adoption signal
  • Stars (15%) — normalized popularity signal
  • Contributors (10%) — bus factor proxy; single-maintainer risk reduction
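Under assumptions about how each signal is normalized to [0, 1] — the decay curve and normalization below are illustrative, not AgentRank's published implementation — the weighted combination can be sketched as:

```python
def freshness_score(days_since_commit: float) -> float:
    """Perfect score up to 90 idle days, then an assumed linear decay to zero at one year."""
    if days_since_commit <= 90:
        return 1.0
    return max(0.0, 1.0 - (days_since_commit - 90) / 275.0)

def composite_score(days_since_commit: float, issue_close_rate: float,
                    dependents: float, stars: float, contributors: float) -> float:
    """0-100 score using the documented weights: 25/25/25/15/10.
    The last three inputs are assumed pre-normalized to [0, 1]."""
    return 100.0 * (0.25 * freshness_score(days_since_commit)
                    + 0.25 * issue_close_rate
                    + 0.25 * dependents
                    + 0.15 * stars
                    + 0.10 * contributors)
```

With this shape, a repo idle for a year scores zero on freshness no matter how many stars it has, which is the point of weighting maintenance over popularity.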

The weights reflect a specific philosophy: maintenance signals should outweigh popularity signals because they predict future reliability. A server with an 80% issue close rate and weekly commits will be easier to depend on than a 5,000-star server that last committed in 2024.

The scoring weights and methodology are fully documented. Every tool in the index is scored by the same formula — no editorial exceptions.

API and programmatic access

Both platforms offer APIs. They're built for different use cases.

Smithery API

Smithery's API is deployment-oriented. It supports listing, searching, and triggering server runs on the Smithery platform. Requires authentication. Well-suited for applications that want to spin up hosted MCP servers programmatically.

For discovery queries — "find me the best MCP servers for database operations" — the API covers Smithery's ~4,000-server catalog, ranked by Smithery usage. No composite quality score is returned with results.

Auth required · ~4,000 tools · Deployment-focused

AgentRank API

AgentRank's REST API is discovery-oriented. Query by keyword, category, score range, or language. Every result includes the full score breakdown: the composite score plus the individual signal values (freshness, issue health, dependents, stars, contributors).

Basic queries require no API key. Agents can select MCP tools programmatically using quality signals as filters — not just name matching or popularity sorting.

No key for basic access · 25,000+ tools · Score breakdown per result

For agentic tool selection — an agent that decides which MCP server to use for a task — the AgentRank API is more useful. The score breakdown allows conditional logic: "select the freshest server with >80% issue close rate that handles database queries." Smithery's API doesn't expose those signals.
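That kind of conditional selection is straightforward once the breakdown is in hand. A minimal sketch, assuming each API result arrives as a dict with per-signal fields — the field names here are illustrative, not the actual response schema:

```python
def pick_server(results, min_close_rate=0.80):
    """Return the freshest candidate whose issue close rate clears the bar,
    or None if nothing qualifies. `results` mimics the score breakdown
    described above; the keys are assumptions, not the real API schema."""
    eligible = [r for r in results if r["issue_close_rate"] > min_close_rate]
    return min(eligible, key=lambda r: r["days_since_commit"], default=None)

# Illustrative candidates for a "database queries" search:
candidates = [
    {"name": "pg-mcp",     "issue_close_rate": 0.92, "days_since_commit": 3},
    {"name": "sqlite-mcp", "issue_close_rate": 0.60, "days_since_commit": 1},
    {"name": "mysql-mcp",  "issue_close_rate": 0.85, "days_since_commit": 14},
]
```

Here `pick_server(candidates)` selects the pg-mcp entry: sqlite-mcp is fresher but fails the close-rate bar. Popularity sorting alone can't express that rule.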

Deployment: where Smithery wins

One area where Smithery has no competition from AgentRank: hosted infrastructure.

If you want to use an MCP server without running it yourself — no Docker container, no Node process to manage, no server to monitor — Smithery's hosted deployment is a real differentiator. You get a managed endpoint. Updates are handled by the platform. For developers who want zero-configuration MCP access and don't have infrastructure in place, that's significant.

AgentRank does not offer hosting. It's a directory and scoring engine. It tells you what's worth installing; it doesn't run it for you. If you need hosted deployment alongside a discovery layer, the practical workflow is to discover via AgentRank, then deploy through Smithery.

When to use each

Use AgentRank when you need to discover what exists

25,000+ indexed repos vs ~4,000. If a server exists on GitHub, AgentRank has it. For any discovery question — "is there an MCP server for Notion?", "what database MCP servers are actively maintained?", "which tools have the best issue health?" — AgentRank's broader coverage and quality signals give you more complete answers.

Use AgentRank when quality comparison matters

When you have several candidate servers for the same category, AgentRank's composite score lets you compare on objective signals. Sort by score within a category, examine freshness and issue health, check dependent count. Smithery shows you which servers are popular on Smithery; AgentRank shows you which are actively maintained in the broader ecosystem.

Use AgentRank for agentic tool selection pipelines

If you're building an agent that selects MCP tools programmatically, AgentRank's API returns quality scores with every result. You can filter by score threshold, sort by freshness, and return only servers with >85% issue close rates. No equivalent capability in Smithery's API today.

Use Smithery when you want hosted, zero-infrastructure deployment

Smithery's core value is managed hosting. If you've identified the server you want (via AgentRank or elsewhere) and it's available on Smithery, you can deploy without setting up your own infrastructure. That's a legitimate time-saving tradeoff for many teams.

Use both together

The strongest workflow: use AgentRank to discover and evaluate quality across the full 25,000+ ecosystem, then check Smithery to see if your chosen server is available for hosted deployment. The two platforms are complementary more often than they're competitive.

As an MCP server maintainer

List on both. Submit to Smithery for deployment reach in that audience. Claim your AgentRank listing to add context to your entry and ensure the description is accurate. Your AgentRank score is computed automatically from your GitHub repo's health signals — you don't need to do anything to appear.

Browse the full AgentRank index: 25,000+ MCP servers, scored daily — find what Smithery's catalog doesn't cover.

API access: Query for tools by quality score, category, or keyword. API docs here.

Scoring methodology: Freshness 25%, issue health 25%, dependents 25%, stars 15%, contributors 10%. Full docs.
