The reputation layer for AI skills, tools & agents

szeider/consult7

Score: 35.8 Rank #3936

MCP server to consult a language model with large context size

Overview

szeider/consult7 is a Python MCP server, licensed under MIT, that lets an agent consult a language model with a large context size.

Ranked #3936 out of 25,632 indexed tools.

Ecosystem

Python MIT

Signal Breakdown

Stars 291
Freshness 10d ago
Issue Health 0%
Contributors 1
Dependents 0
Forks 32
Description Good
License MIT

How to Improve

Description low impact

Expand your description to 150+ characters for better discoverability

Issue Health high impact

You have 6 open vs 0 closed issues — triaging stale issues improves health

Contributors medium impact

Single-contributor projects carry bus-factor risk — welcoming contributors boosts confidence

Badge

AgentRank score for szeider/consult7
[![AgentRank](https://agentrank-ai.com/api/badge/tool/szeider--consult7)](https://agentrank-ai.com/tool/szeider--consult7)
<a href="https://agentrank-ai.com/tool/szeider--consult7"><img src="https://agentrank-ai.com/api/badge/tool/szeider--consult7" alt="AgentRank"></a>

Matched Queries

"mcp server""mcp-server"

From the README

# Consult7 MCP Server

**Consult7** is a Model Context Protocol (MCP) server that enables AI agents to consult large context window models via [OpenRouter](https://openrouter.ai) for analyzing extensive file collections - entire codebases, document repositories, or mixed content that exceed the current agent's context limits.

## Why Consult7?

**Consult7** enables any MCP-compatible agent to offload file analysis to large context models (up to 2M tokens). Useful when:
- Agent's current context is full
- Task requires specialized model capabilities
- Need to analyze large codebases in a single query
- Want to compare results from different models

> "For Claude Code users, Consult7 is a game changer."

## How it works

**Consult7** collects files from the specific paths you provide (with optional wildcards in filenames), assembles them into a single context, and sends them to a large context window model along with your query. The result is fed directly back to the agent you are working with.
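The collect-assemble-query flow described above can be sketched roughly as follows. Note that `collect_files`, `assemble_context`, and `consult` are illustrative names, not Consult7's actual internals, and the model call is stubbed out rather than hitting OpenRouter:

```python
from pathlib import Path


def collect_files(root: str, pattern: str = "*") -> list[Path]:
    """Gather files under a path, matching an optional wildcard pattern."""
    return sorted(p for p in Path(root).rglob(pattern) if p.is_file())


def assemble_context(files: list[Path]) -> str:
    """Concatenate file contents into one context, tagging each with its path."""
    parts = [f"--- {f} ---\n{f.read_text(errors='replace')}" for f in files]
    return "\n\n".join(parts)


def consult(root: str, pattern: str, query: str) -> str:
    """Assemble matching files into a single prompt for a large-context model."""
    context = assemble_context(collect_files(root, pattern))
    prompt = f"{context}\n\nQuestion: {query}"
    # In Consult7, a prompt like this would be sent to a large context window
    # model via OpenRouter; here we return it so the sketch stays self-contained.
    return prompt
```

The key design point this illustrates is that the agent never holds the file contents itself: only the consulted model sees the full assembled context, and the agent receives just the answer.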