Best MCP Servers for AI & Machine Learning
Top-ranked MCP servers for LLM APIs, vector databases, embeddings, model serving, and AI infrastructure. Scored from the AgentRank index.
AI and ML MCP servers give agents access to model APIs, vector stores, embedding services, and local inference engines. They let your agent call other models, store and retrieve embeddings, and plug into the full AI infrastructure stack.
This guide ranks the top AI and ML MCP servers from the AgentRank index — covering Ollama for local inference, vector databases (Pinecone, Qdrant, Chroma, Weaviate), embedding APIs, and multi-model gateways.
Top AI & Machine Learning MCP servers
Ranked by the composite AgentRank score — a weighted blend of stars (15%), freshness (25%), issue health (25%), contributors (10%), and inbound dependents (25%). Average score across these 8 tools: 89.0.
| # | Repository | Description | Score | Stars | Lang | Updated |
|---|---|---|---|---|---|---|
| 1 | CoplayDev/unity-mcp | Unity MCP acts as a bridge, allowing AI assistants (like Claude, Cursor) to interact direc… | 98.7 | 7.0k | C# | 6d ago |
| 2 | mark3labs/mcp-go | A Go implementation of the Model Context Protocol (MCP), enabling seamless integration bet… | 96.1 | 8.4k | Go | 8d ago |
| 3 | stacklok/toolhive | ToolHive is an enterprise-grade platform for running and managing Model Context Protocol (… | 91.3 | 1.7k | Go | 1d ago |
| 4 | ForLoopCodes/contextplus | Semantic Intelligence for Large-Scale Engineering. Context+ is an MCP server designed for … | 88.0 | 1.5k | TypeScript | 8d ago |
| 5 | Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server | Kindly Web Search MCP Server: Web search + robust content retrieval for AI coding tools (C… | 86.0 | 221 | Python | 10d ago |
| 6 | lanbaoshen/mcp-jenkins | The Model Context Protocol (MCP) is an open-source implementation that bridges Jenkins wit… | 85.5 | 95 | Python | 9d ago |
| 7 | RLabs-Inc/gemini-mcp | MCP Server that enables Claude code to interact with Gemini | 83.6 | 159 | TypeScript | 5d ago |
| 8 | husnainpk/SymDex | Code-indexer MCP server for AI agents — 97% fewer tokens per lookup. Supports 13 languages… | 82.9 | 95 | Python | 5d ago |
Choosing by use case
- Local model inference (Ollama): Run Llama, Mistral, and other open models locally and call them from your MCP-enabled agent.
- Vector databases: Store and retrieve embeddings for RAG, semantic search, and memory (Pinecone, Qdrant, Chroma, Weaviate).
- LLM API gateways: Route between OpenAI, Anthropic, Gemini, and open models through a unified MCP interface.
- Embedding generation: Generate text embeddings on demand for clustering, similarity search, and knowledge base construction.
Quick setup
Most MCP servers follow the same config pattern. Add this to your Claude Desktop, Cursor, or Windsurf MCP config file:
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"]
    }
  }
}
```

See the integrations page for platform-specific setup guides for Claude Code, Cursor, VS Code, Windsurf, and Cline.
Reading the AgentRank signals
Each tool in this guide is scored on five signals that predict long-term reliability:
| Signal | Weight | What it means |
|---|---|---|
| Stars | 15% | Raw popularity — how many developers have found and bookmarked this tool |
| Freshness | 25% | Days since last commit — tools with no recent commits decay hard after 90 days |
| Issue health | 25% | Ratio of closed to total issues — measures maintainer responsiveness |
| Contributors | 10% | More contributors means lower bus-factor risk and a broader review surface |
| Dependents | 25% | How many other repos depend on this — the strongest signal of real-world adoption |
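As a concrete illustration, the composite score is a weighted sum of the five signal scores using the weights above. A minimal sketch in Python, assuming each raw signal has already been normalized to a 0–100 sub-score (AgentRank's exact normalization of raw stars, commit dates, and issue counts isn't documented here, so the example inputs are hypothetical):

```python
# AgentRank signal weights from the table above.
WEIGHTS = {
    "stars": 0.15,
    "freshness": 0.25,
    "issue_health": 0.25,
    "contributors": 0.10,
    "dependents": 0.25,
}

def composite_score(subscores: dict) -> float:
    """Weighted blend of the five normalized (0-100) signal sub-scores."""
    return sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS)

# Hypothetical sub-scores for an actively maintained, widely depended-on tool.
example = {
    "stars": 90.0,
    "freshness": 100.0,
    "issue_health": 95.0,
    "contributors": 80.0,
    "dependents": 100.0,
}

print(composite_score(example))  # 95.25
```

Because freshness, issue health, and dependents carry 75% of the weight between them, a heavily starred but stale repo will still rank below a smaller, actively maintained one.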
Query these rankings live from your editor: install AgentRank in Cursor, VS Code, or Claude Code, and your AI agent can pull current scores on demand.
Missing a tool? Submit it to the index — new tools are scored in the next nightly crawl.
Get the weekly AgentRank digest
Top movers, new tools, ecosystem insights — straight to your inbox.