Build Your First MCP Server: A Complete Tutorial (2026)
The Model Context Protocol is the fastest-growing standard in the AI tooling ecosystem. 25,750+ MCP server repositories are already indexed on AgentRank — built by developers who wanted their APIs, databases, and services to talk natively to AI agents. This tutorial walks you through the entire journey: from a blank directory to a published, scored MCP server in one sitting.
What you're building
By the end of this tutorial, you'll have a fully working MCP server called weather-mcp. It will expose three tools:
- get_current_weather — returns current conditions for a city
- get_forecast — returns a multi-day forecast
- list_locations — returns cached locations the user has queried before
We'll use the Open-Meteo API (free, no key needed) so you don't need to sign up for anything. The same patterns apply to any API, database, or service you want to expose through MCP.
The finished server will connect to Claude Desktop, Cursor, and any other MCP client. It will be published on PyPI. It will show up in the AgentRank index with an initial score within 24 hours of going public on GitHub.
Prerequisites
You need:
- Python 3.10+ — check with `python3 --version`
- pip — included with Python
- A GitHub account — for publishing and getting indexed
- Basic Python knowledge — functions, type hints, decorators. That's it.
You do not need to understand the MCP protocol spec. You do not need to know JSON-RPC. FastMCP handles all of that. You write Python functions and FastMCP makes them speak MCP.
Optionally install Claude Desktop to test the full client-server connection in the later steps. You can also verify everything works using the MCP Inspector CLI tool, which is covered in the testing section.
What MCP actually does
Without MCP, every AI client that wants to call an external tool writes its own integration: Claude has its own tool calling format, Cursor has its own, Copilot has its own. Every developer who wants their service to work with these clients maintains N separate integrations.
MCP collapses that into one interface. You write one server. Any MCP client — Claude, Cursor, Windsurf, Cline, Zed — connects to it using the same protocol. The client asks your server "what tools do you have?" Your server responds with a list. The client calls them. Your server runs the function and returns the result. That's the whole protocol.
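Concretely, the "what tools do you have?" exchange is a `tools/list` request and response. Here's a simplified sketch of the JSON-RPC messages (the field names follow the MCP spec; the tool shown is the one this tutorial builds, and the arrows are annotations, not part of the wire format):

```json
// client → server
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// server → client
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_current_weather",
        "description": "Get the current weather conditions for a city.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "city": {"type": "string"},
            "country_code": {"type": "string", "default": "US"}
          },
          "required": ["city"]
        }
      }
    ]
  }
}
```

The client then issues a `tools/call` request with arguments matching `inputSchema`, and your server replies with the function's result.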
As of March 2026, the AgentRank index tracks 25,750+ MCP-related repositories. The top-ranked ones — the ones scoring 85+ — share common patterns: clear tool descriptions, good error handling, responsive maintainers, and consistent publishing to a package registry. This tutorial follows those patterns.
Looking for a quick reference? Browse the top-ranked MCP servers on AgentRank to see real-world examples of well-structured servers you can use as implementation references.
The reference implementations
The three most important repositories for understanding MCP are all indexed on AgentRank:
| Repo | Score | Stars | Why it matters |
|---|---|---|---|
| modelcontextprotocol/python-sdk | 92.14 | 4,821 | Official Python SDK — the specification-authoritative implementation |
| jlowin/fastmcp | 89.44 | 6,734 | Most-starred Python MCP framework — what we use in this tutorial |
| modelcontextprotocol/typescript-sdk | 91.88 | 5,102 | Official TypeScript SDK — the specification-authoritative implementation for TypeScript and Node |
Project setup
Create a project directory and install FastMCP:
```bash
mkdir weather-mcp
cd weather-mcp
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install fastmcp httpx
```

fastmcp is the server framework. httpx is for making HTTP requests to the Open-Meteo API. We'll also create a minimal project structure:

```bash
touch server.py README.md
echo ".venv/" > .gitignore
echo "__pycache__/" >> .gitignore
```

That's it. No configuration files, no complex scaffolding. MCP servers are just Python files that get run as a subprocess by the client.
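"Run as a subprocess by the client" means something very concrete: the client spawns `python server.py` and exchanges newline-delimited JSON-RPC messages with it over stdin/stdout. A sketch of the framing, just to demystify it (the `protocolVersion` value is illustrative, and FastMCP handles all of this for you):

```python
import json


def encode_message(method: str, params: dict, msg_id: int) -> bytes:
    """Frame a JSON-RPC request the way the MCP stdio transport expects:
    one JSON object per line, UTF-8 encoded."""
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")


# The first message a client sends after launching your server
frame = encode_message("initialize", {"protocolVersion": "2025-03-26"}, 1)
```

You will never write this framing by hand — FastMCP owns the transport — but knowing that stdout is the protocol channel explains a classic bug: `print()` debugging inside a tool corrupts the stream. Log to stderr instead.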
Your first tool
Open server.py and add the following. Read the comments — they explain what each line does:

```python
import httpx
from fastmcp import FastMCP

# Create the server. The name appears in MCP client UIs.
mcp = FastMCP("Weather MCP")


@mcp.tool()
async def get_current_weather(city: str, country_code: str = "US") -> dict:
    """
    Get the current weather conditions for a city.

    Use this when the user asks about current weather, temperature,
    wind speed, or conditions in a specific location.

    Args:
        city: The city name (e.g., "San Francisco", "Tokyo")
        country_code: Two-letter ISO country code (default: "US")

    Returns:
        Dictionary with temperature_c, wind_speed_kmh, weather_description,
        and timestamp fields.
    """
    # First, geocode the city to get coordinates
    geo_url = "https://geocoding-api.open-meteo.com/v1/search"
    async with httpx.AsyncClient() as client:
        geo_resp = await client.get(geo_url, params={
            "name": city,
            "count": 1,
            "language": "en",
            "format": "json",
        })
        geo_data = geo_resp.json()

    if not geo_data.get("results"):
        return {"error": f"Location not found: {city}, {country_code}"}

    result = geo_data["results"][0]
    lat, lon = result["latitude"], result["longitude"]

    # Now fetch current weather from Open-Meteo (no API key needed)
    weather_url = "https://api.open-meteo.com/v1/forecast"
    async with httpx.AsyncClient() as client:
        weather_resp = await client.get(weather_url, params={
            "latitude": lat,
            "longitude": lon,
            "current": "temperature_2m,wind_speed_10m,weather_code",
            "timezone": "auto",
        })
        weather_data = weather_resp.json()

    current = weather_data["current"]
    weather_codes = {
        0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
        45: "Foggy", 51: "Light drizzle", 61: "Slight rain", 71: "Slight snow",
        80: "Slight showers", 95: "Thunderstorm",
    }
    description = weather_codes.get(current["weather_code"], "Unknown conditions")

    return {
        "city": result["name"],
        "country": result.get("country", country_code),
        "temperature_c": current["temperature_2m"],
        "wind_speed_kmh": current["wind_speed_10m"],
        "weather_description": description,
        "timestamp": current["time"],
    }


if __name__ == "__main__":
    mcp.run()  # stdio transport — works with all MCP clients
```

A few things worth noting before we add more tools:
- The docstring is critical. The AI reads it to decide when to call your tool. Be explicit: describe what the tool does, when to use it, and what it returns. Poor docstrings = the AI calls your tool at the wrong time, or doesn't call it when it should.
- Type hints become your schema. FastMCP reads `city: str` and `country_code: str = "US"` and generates the JSON schema the client uses to populate the call. Always type your parameters.
- Return errors as data, not exceptions. If you raise an exception, the client connection may drop. Return `{"error": "..."}` and the AI can read the error and decide what to do — retry with different parameters, ask the user, etc.
- Use `async` when making network calls. MCP's I/O loop is async. Blocking calls inside a sync function will stall the server.
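The "errors as data" rule is easy to apply consistently if you factor it into a small helper that wraps each tool body. A sketch — the `call_safely` name and the deliberately broad `except` are my own choices, not part of FastMCP:

```python
import asyncio
from typing import Awaitable, Callable


async def call_safely(fn: Callable[[], Awaitable[dict]]) -> dict:
    """Run an async tool body, turning any exception into an error payload
    the AI can read instead of dropping the client connection."""
    try:
        return await fn()
    except Exception as exc:  # broad on purpose: every failure becomes data
        return {"error": f"{type(exc).__name__}: {exc}"}


async def flaky() -> dict:
    """Stand-in for a tool body whose upstream API timed out."""
    raise TimeoutError("Open-Meteo did not respond")


result = asyncio.run(call_safely(flaky))
# result == {"error": "TimeoutError: Open-Meteo did not respond"}
```

With this pattern, a timeout or malformed response becomes a string the model can reason about ("the weather API timed out, let me retry") rather than a dead connection.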
Adding more tools
A server with one tool is complete but limited. Add a forecast tool and a location history tool. Each follows the same pattern: decorator, type-annotated parameters, docstring, implementation.
```python
# Simple in-memory location history.
# In production, you'd persist this to a database or file.
_location_history: list[str] = []


@mcp.tool()
async def get_forecast(city: str, days: int = 7) -> dict:
    """
    Get a multi-day weather forecast for a city.

    Use this when the user asks about upcoming weather, next week's forecast,
    or wants to plan for future days.

    Args:
        city: The city name (e.g., "Chicago", "London")
        days: Number of forecast days (1-16, default: 7)

    Returns:
        Dictionary with city name and a list of daily forecasts, each containing
        date, max_temp_c, min_temp_c, and weather_description.
    """
    days = max(1, min(16, days))  # Clamp to valid range

    geo_url = "https://geocoding-api.open-meteo.com/v1/search"
    async with httpx.AsyncClient() as client:
        geo_resp = await client.get(geo_url, params={"name": city, "count": 1})
        geo_data = geo_resp.json()

    if not geo_data.get("results"):
        return {"error": f"Location not found: {city}"}

    result = geo_data["results"][0]
    lat, lon = result["latitude"], result["longitude"]

    # Track this location
    if city not in _location_history:
        _location_history.append(city)

    weather_url = "https://api.open-meteo.com/v1/forecast"
    async with httpx.AsyncClient() as client:
        weather_resp = await client.get(weather_url, params={
            "latitude": lat,
            "longitude": lon,
            "daily": "temperature_2m_max,temperature_2m_min,weather_code",
            "forecast_days": days,
            "timezone": "auto",
        })
        weather_data = weather_resp.json()

    daily = weather_data["daily"]
    weather_codes = {
        0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
        45: "Foggy", 51: "Light drizzle", 61: "Slight rain", 71: "Slight snow",
        80: "Slight showers", 95: "Thunderstorm",
    }

    forecasts = []
    for i, date in enumerate(daily["time"]):
        forecasts.append({
            "date": date,
            "max_temp_c": daily["temperature_2m_max"][i],
            "min_temp_c": daily["temperature_2m_min"][i],
            "weather_description": weather_codes.get(daily["weather_code"][i], "Unknown"),
        })

    return {"city": result["name"], "forecast": forecasts}


@mcp.tool()
def list_locations() -> list[str]:
    """
    List all locations that have been queried in this session.

    Use this when the user asks what locations they've looked up,
    wants to revisit a previous city, or asks for their search history.

    Returns:
        List of city names queried during this session (most recent last).
        Returns an empty list if no locations have been queried yet.
    """
    return list(_location_history)
```
Notice that `list_locations` is synchronous — it doesn't make any network calls, so there's no need for `async`. FastMCP handles both sync and async tools correctly.
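If a sync tool must call a blocking library (a sync database driver, `requests`, and so on), a standard workaround — my suggestion, not something this tutorial's server needs — is to push the blocking call onto a worker thread so the event loop stays responsive:

```python
import asyncio
import time


def slow_lookup(city: str) -> dict:
    """Stand-in for a blocking HTTP or database call."""
    time.sleep(0.05)
    return {"city": city, "temperature_c": 21.0}


async def get_weather_blocking_safe(city: str) -> dict:
    # asyncio.to_thread runs the blocking function in a thread pool,
    # so other tool calls can proceed while it waits
    return await asyncio.to_thread(slow_lookup, city)


result = asyncio.run(get_weather_blocking_safe("Oslo"))
```

`asyncio.to_thread` requires Python 3.9+, which the 3.10+ prerequisite already covers.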
Adding a resource
Resources expose read-only data at a URI. They're less common than tools — most MCP servers don't implement them — but they're the right primitive when you want to give the AI background context without triggering computation.
For this server, add a resource that exposes a list of supported weather codes so the AI can explain conditions accurately:
```python
import json  # add this to the imports at the top of server.py


@mcp.resource("weather://codes")
def get_weather_codes() -> str:
    """
    Reference list of WMO weather interpretation codes and their descriptions.
    Use this resource when you need to explain what a weather code means.
    """
    codes = {
        "0": "Clear sky",
        "1": "Mainly clear",
        "2": "Partly cloudy",
        "3": "Overcast",
        "45": "Fog",
        "48": "Depositing rime fog",
        "51": "Light drizzle",
        "53": "Moderate drizzle",
        "55": "Dense drizzle",
        "61": "Slight rain",
        "63": "Moderate rain",
        "65": "Heavy rain",
        "71": "Slight snow fall",
        "73": "Moderate snow fall",
        "75": "Heavy snow fall",
        "80": "Slight rain showers",
        "81": "Moderate rain showers",
        "82": "Violent rain showers",
        "95": "Thunderstorm",
        "96": "Thunderstorm with slight hail",
        "99": "Thunderstorm with heavy hail",
    }
    return json.dumps(codes, indent=2)
```
The URI scheme `weather://codes` is arbitrary — you define it. The client can request this resource by URI and receive the JSON string. A static resource like this one takes no parameters; if you need parameterized data access, use a tool.
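For example, if you wanted per-code lookup with a parameter, that logic belongs in a tool rather than a resource. A minimal sketch — `describe_weather_code` is a hypothetical name of my own, and in server.py you'd put `@mcp.tool()` above it:

```python
# Subset of the WMO code table, for illustration only
WMO_CODES = {0: "Clear sky", 45: "Fog", 61: "Slight rain", 95: "Thunderstorm"}


def describe_weather_code(code: int) -> str:
    """Explain a single WMO weather interpretation code.

    Unknown codes come back as data rather than an exception,
    following the error-handling pattern from earlier.
    """
    return WMO_CODES.get(code, f"Unknown weather code: {code}")
```

The resource gives the AI the whole table as background context; the tool answers a specific question on demand.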
Testing with MCP Inspector
Before connecting to a full client, use the MCP Inspector — a CLI tool that lets you call your server's tools directly:
```bash
npx @modelcontextprotocol/inspector python server.py
```
This launches the Inspector with your server. Open the URL it prints in a browser (the port varies by Inspector version). You'll see:
- A list of your tools with their schemas
- The ability to call each tool with custom parameters
- Raw request and response JSON
- A list of your resources
Test each tool. Verify the outputs look right. Check that the error path works by passing a city name that doesn't exist. The Inspector shows exactly what the AI client sees when it calls your tools.
Alternatively, test with a minimal Python client (no Node required):
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print("Tools:", [t.name for t in tools.tools])

            # Call get_current_weather
            result = await session.call_tool(
                "get_current_weather",
                arguments={"city": "San Francisco"},
            )
            print("Weather:", result.content[0].text)


asyncio.run(main())
```

Save this as test_client.py, then install the client library and run it:

```bash
pip install mcp
python test_client.py
```

Connecting to Claude Desktop
If you have Claude Desktop installed, you can connect your server in two minutes. Find the config file at:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add your server to the `mcpServers` object:

```json
{
  "mcpServers": {
    "weather-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/weather-mcp/server.py"],
      "env": {}
    }
  }
}
```
Use an absolute path — Claude Desktop launches your server from its own working directory, not yours. Restart Claude Desktop. Your tools will appear in the tool call menu. Try asking Claude: "What's the weather in Tokyo right now?" — it will call `get_current_weather` automatically.
The same pattern applies to Cursor (`.cursor/mcp.json`) and other clients. See our MCP setup guide for client-specific configuration details.
Publishing to PyPI
Publishing to PyPI transforms your local tool into something other developers can install with `pip install weather-mcp`. It also dramatically improves your AgentRank score — inbound dependents (other repos using your package) carries the joint-highest weight in the scoring formula at 25%.
Step 1 — Add a pyproject.toml
```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "weather-mcp"
version = "0.1.0"
description = "MCP server for real-time weather data via Open-Meteo"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "fastmcp>=2.0",
    "httpx>=0.27",
]

[project.scripts]
weather-mcp = "server:mcp.run"
```

Step 2 — Write a README
Your `README.md` needs to clearly state what your MCP server does and how to install it. The first two paragraphs are what the AgentRank crawler indexes for your tool description. Include the GitHub topics `mcp` and `mcp-server` in your repo settings — the crawler uses these to confirm classification.
````markdown
# weather-mcp

An MCP server that exposes real-time weather data via Open-Meteo.
Works with Claude Desktop, Cursor, Cline, and all MCP-compatible clients.

## Installation

```bash
pip install weather-mcp
```

## Usage (Claude Desktop)

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "weather-mcp": {
      "command": "weather-mcp"
    }
  }
}
```

## Tools

- `get_current_weather(city, country_code)` — current conditions
- `get_forecast(city, days)` — multi-day forecast
- `list_locations()` — session query history

## License

MIT
````

Step 3 — Push to GitHub and publish
```bash
# Push to GitHub first (creates the repo for indexing)
git init
git add .
git commit -m "Initial release"
git remote add origin https://github.com/YOUR_USERNAME/weather-mcp.git
git push -u origin main

# Add GitHub topics in the repo settings: mcp, mcp-server, model-context-protocol

# Then publish to PyPI
pip install build twine
python -m build
twine upload dist/*
```

After pushing to GitHub, the AgentRank crawler will pick up your repo within 24 hours and assign an initial score. After publishing to PyPI, you'll start accumulating inbound dependents as other developers install your package — the most powerful long-term score signal.
How AgentRank will score your server
Once your repo is public and indexed, you'll get a score from 0–100 based on five signals. Understanding the formula helps you prioritize where to invest time after launch. Full details are at /methodology.
| Signal | Weight | What drives it | Day-1 reality |
|---|---|---|---|
| Stars | 15% | GitHub star count, normalized to the index | Low — grows with distribution |
| Freshness | 25% | Days since last commit. Decays hard past 90 days | High — you just committed |
| Issue health | 25% | Closed issues ÷ total issues. Respond fast | N/A (no issues yet) — starts at 100% |
| Contributors | 10% | Unique contributor count. >1 = less bus factor | Low — builds over time |
| Inbound dependents | 25% | Repos depending on yours via package registries | Low — grows after PyPI publish |
A new, well-maintained server typically scores in the 40–55 range on day one — high on freshness and issue health, low on stars and dependents. The fastest paths to a higher score:
- Close every issue promptly. Issue health (25% weight) is entirely in your control. Even if you can't fix the bug immediately, respond, acknowledge, and label it.
- Keep committing. Freshness decays hard past 90 days. Even minor improvements (better error messages, updated dependencies) reset the clock.
- Get real users. Stars come from distribution — Twitter, Reddit, being listed in awesome-mcp-servers. Dependents come from real usage — publish to PyPI, write a good README, answer questions.
- Welcome contributors. Even a single external contributor lifts your contributor score meaningfully. A `CONTRIBUTING.md` and issues labeled `good first issue` help.
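To build intuition for where that 40–55 day-one estimate comes from, here's a back-of-envelope weighted blend using the weights from the table. The flat 0–1 inputs are my own simplification — AgentRank's actual per-signal normalization is not reproduced here:

```python
# Weights taken from the scoring table above
WEIGHTS = {
    "stars": 0.15,
    "freshness": 0.25,
    "issue_health": 0.25,
    "contributors": 0.10,
    "dependents": 0.25,
}


def estimated_score(signals: dict[str, float]) -> float:
    """Blend normalized 0-1 signals into a 0-100 score via the table's weights."""
    return round(100 * sum(WEIGHTS[k] * signals[k] for k in WEIGHTS), 1)


# A hypothetical day-one server: fresh commit, perfect issue health,
# but almost no stars, a single contributor, and no dependents yet.
day_one = estimated_score({
    "stars": 0.02,
    "freshness": 1.0,
    "issue_health": 1.0,
    "contributors": 0.10,
    "dependents": 0.0,
})
# day_one lands around 51 — inside the 40-55 range described above
```

Plugging in different inputs makes the priorities obvious: freshness and issue health together control half the score and are fully in your hands, while stars and dependents grow slowly with adoption.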
See how top servers are built: Browse 25,000+ indexed MCP servers — filter by language, category, and score to find well-ranked examples you can use as references.
Understand the full scoring model: Read the methodology — every signal, every weight, and the normalization approach explained.
Compare frameworks: FastMCP vs the official Python SDK — if you outgrow FastMCP or need lower-level protocol access, this is the comparison to read next.