Best MCP Servers for AI & Machine Learning
Top-ranked MCP servers for LLM APIs, vector databases, embeddings, model serving, and AI infrastructure. Scored from the AgentRank index.
AI and ML MCP servers give agents access to model APIs, vector stores, embedding services, and local inference engines. They let your agent call other models, store and retrieve embeddings, and plug into the full AI infrastructure stack.
This guide ranks the top AI and ML MCP servers from the AgentRank index — covering Ollama for local inference, vector databases (Pinecone, Qdrant, Chroma, Weaviate), embedding APIs, and multi-model gateways.
Top AI & Machine Learning MCP servers
Ranked by the composite AgentRank score — a weighted blend of stars (15%), freshness (25%), issue health (25%), contributors (10%), and inbound dependents (25%). Average score across these 8 tools: 89.8.
| # | Repository | Description | Score | Stars | Lang | Updated |
|---|---|---|---|---|---|---|
| 1 | CoplayDev/unity-mcp | Unity MCP acts as a bridge, allowing AI assistants (like Claude, Cursor) to interact direc… | 97.6 | 7.6k | C# | 2d ago |
| 2 | mark3labs/mcp-go | A Go implementation of the Model Context Protocol (MCP), enabling seamless integration bet… | 96.7 | 8.4k | Go | 2d ago |
| 3 | cablate/mcp-google-map | A powerful Model Context Protocol (MCP) server providing comprehensive Google Maps API int… | 91.5 | 234 | TypeScript | 2d ago |
| 4 | stacklok/toolhive | ToolHive is an enterprise-grade platform for running and managing Model Context Protocol (… | 91.2 | 1.7k | Go | 1d ago |
| 5 | Mng-dev-ai/agentrove | Your own Claude Code UI, sandbox, in-browser VS Code, terminal, multi-provider support (An… | 88.9 | 250 | TypeScript | 2d ago |
| 6 | lanbaoshen/mcp-jenkins | The Model Context Protocol (MCP) is an open-source implementation that bridges Jenkins wit… | 88.2 | 103 | Python | 11d ago |
| 7 | Dicklesworthstone/ultimate_mcp_server | Comprehensive MCP server exposing dozens of capabilities to AI agents: multi-provider LLM … | 82.5 | 144 | Python | 6d ago |
| 8 | atlassian/atlassian-mcp-server | Remote MCP Server that securely connects Jira and Confluence with your LLM, IDE, or agent … | 81.8 | 495 | JavaScript | 3d ago |
Choosing by use case
- **Local model inference (Ollama):** Run Llama, Mistral, and other open models locally and call them from your MCP-enabled agent.
- **Vector databases:** Store and retrieve embeddings for RAG, semantic search, and memory: Pinecone, Qdrant, Chroma, Weaviate.
- **LLM API gateways:** Route between OpenAI, Anthropic, Gemini, and open models through a unified MCP interface.
- **Embedding generation:** Generate text embeddings on demand for clustering, similarity search, and knowledge base construction.
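To make the vector-database and embedding use cases concrete, here is a minimal sketch of what an agent does behind an MCP vector-store tool: embed documents, store the vectors, and rank them by cosine similarity against a query embedding. The toy three-dimensional vectors and the in-memory `store` dict are illustrative stand-ins, not any particular server's API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy in-memory "vector store": document id -> embedding.
# A real server would hold high-dimensional vectors in Pinecone, Qdrant, etc.
store = {
    "doc-a": [0.9, 0.1, 0.0],
    "doc-b": [0.0, 1.0, 0.2],
    "doc-c": [0.7, 0.3, 0.1],
}

def top_k(query, k=2):
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(store, key=lambda i: cosine_similarity(query, store[i]),
                    reverse=True)
    return ranked[:k]
```

A RAG pipeline is just this loop at scale: embed the user's question, call `top_k`, and feed the retrieved documents back into the model's context.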
Quick setup
Most MCP servers follow the same config pattern. Add this to your Claude Desktop, Cursor, or Windsurf MCP config file:
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"]
    }
  }
}
```

See the integrations page for platform-specific setup guides for Claude Code, Cursor, VS Code, Windsurf, and Cline.
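Multiple servers live side by side under the same `mcpServers` key, each with its own `command` and `args`. A sketch with a second entry added (the `qdrant-mcp` package name here is purely illustrative; check the server's own README for its actual install command):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"]
    },
    "qdrant": {
      "command": "npx",
      "args": ["-y", "qdrant-mcp"]
    }
  }
}
```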
Reading the AgentRank signals
Each tool in this guide is scored on five signals that predict long-term reliability:
| Signal | Weight | What it means |
|---|---|---|
| Stars | 15% | Raw popularity — how many developers have found and bookmarked this tool |
| Freshness | 25% | Days since last commit — the freshness score decays sharply once a tool goes more than 90 days without a commit |
| Issue health | 25% | Ratio of closed to total issues — measures maintainer responsiveness |
| Contributors | 10% | More contributors = less bus-factor risk, broader review surface |
| Dependents | 25% | How many other repos depend on this — the strongest signal of real-world adoption |
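The composite score is then a straightforward weighted blend of the five signals. A sketch of the arithmetic, assuming each signal has already been normalized to a 0–100 subscore (the index's exact normalization curves are not published here, so the `example` numbers are invented for illustration):

```python
# Weights from the signals table above; they sum to 1.0.
WEIGHTS = {
    "stars": 0.15,
    "freshness": 0.25,
    "issue_health": 0.25,
    "contributors": 0.10,
    "dependents": 0.25,
}

def composite_score(subscores: dict) -> float:
    """Weighted blend of five normalized (0-100) signals, rounded to 1 dp."""
    return round(sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS), 1)

# Hypothetical subscores for a well-maintained tool.
example = {
    "stars": 80.0,
    "freshness": 95.0,
    "issue_health": 90.0,
    "contributors": 70.0,
    "dependents": 85.0,
}
```

Note how the 25% weights on freshness, issue health, and dependents mean an actively maintained, widely depended-on tool can outrank one with far more stars.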
Query these rankings live from your editor: install AgentRank in Cursor, VS Code, or Claude Code, and your AI agent can pull current scores on demand.
Missing a tool? Submit it to the index — new tools are scored in the next nightly crawl.