The reputation layer for AI skills, tools & agents

firstbatchxyz/mem-agent-mcp

Score: 28.4 Rank #7365

mem-agent mcp server

Overview

firstbatchxyz/mem-agent-mcp is a Python MCP server licensed under Apache-2.0. Repository description: "mem-agent mcp server".

Ranked #7365 out of 25632 indexed tools.

Ecosystem

Python Apache-2.0

Signal Breakdown

Stars: 612
Freshness: 3mo ago
Issue Health: 33%
Contributors: 5
Dependents: 0
Forks: 98
Description: Brief
License: Apache-2.0

How to Improve

Description (low impact)

Expand your description to 150+ characters for better discoverability

Freshness (high impact)

Last commit was 119 days ago — a recent commit would boost your freshness score

Issue Health (high impact)

You have 4 open vs 2 closed issues — triaging stale issues improves health

Badge

AgentRank score for firstbatchxyz/mem-agent-mcp
[![AgentRank](https://agentrank-ai.com/api/badge/tool/firstbatchxyz--mem-agent-mcp)](https://agentrank-ai.com/tool/firstbatchxyz--mem-agent-mcp)
<a href="https://agentrank-ai.com/tool/firstbatchxyz--mem-agent-mcp"><img src="https://agentrank-ai.com/api/badge/tool/firstbatchxyz--mem-agent-mcp" alt="AgentRank"></a>

Matched Queries

"mcp server""mcp-server"

From the README

# mem-agent-mcp

This is an MCP server for our model [driaforall/mem-agent](https://huggingface.co/driaforall/mem-agent), which can be connected to apps like Claude Desktop or LM Studio to interact with an Obsidian-like memory system.

## Supported Platforms

- macOS (Metal backend)
- Linux (with GPU, vLLM backend)

### Platform note: aarch64 (ARM64) Linux
- On ARM64 Linux, vLLM is not installed by default to avoid build failures (there are no stable wheels, and source builds can fail).
- Installation will succeed without vLLM; you can:
  - Use the default OpenRouter/OpenAI path (no local vLLM needed), or
  - Run vLLM on a compatible x86_64 host and point the client at it (see agent/model.py create_vllm_client).
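Pointing the client at a remote vLLM host can be sketched as below. This is a hypothetical helper (not part of this repo, which handles this in `agent/model.py`'s `create_vllm_client`); the `VLLM_HOST` / `VLLM_PORT` variable names follow the `.env` convention shown in the running instructions:

```python
import os

# Hypothetical helper: build the base URL for a vLLM server that may be
# running on a different (x86_64) host than the client.
# VLLM_HOST / VLLM_PORT follow this project's .env convention.
def vllm_base_url() -> str:
    host = os.environ.get("VLLM_HOST", "localhost")
    port = os.environ.get("VLLM_PORT", "8000")
    return f"http://{host}:{port}/v1"

# Any OpenAI-compatible client can then be constructed with this URL, e.g.:
#   client = OpenAI(base_url=vllm_base_url(), api_key="EMPTY")
```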

## Running Instructions

### Using a LiteLLM proxy (OpenAI-compatible)
- If you have a LiteLLM proxy running locally (e.g., on port 4000), configure the client via .env:
```
VLLM_HOST=localhost
VLLM_PORT=4000
```
- Verify connectivity:
```
curl http://localhost:4000/v1/models
```
- Then …
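The same connectivity check as the `curl` command can be done from Python; a minimal sketch assuming any OpenAI-compatible `/v1/models` endpoint (the `list_models` and `parse_model_ids` names are hypothetical, not from this repo):

```python
import json
import urllib.request

def parse_model_ids(payload: dict) -> list[str]:
    # OpenAI-compatible servers return {"object": "list", "data": [{"id": ...}, ...]}
    return [m["id"] for m in payload.get("data", [])]

def list_models(base_url: str = "http://localhost:4000") -> list[str]:
    # Equivalent to: curl http://localhost:4000/v1/models
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return parse_model_ids(json.load(resp))
```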
Read full README on GitHub →