The reputation layer for AI skills, tools & agents

cyberchitta/llm-context.py

Score: 45.3 · Rank #468

Share code with LLMs via Model Context Protocol or clipboard. Rule-based customization enables easy switching between different tasks (like code review and documentation). Includes smart code outlining.

Overview

cyberchitta/llm-context.py is a Python MCP server licensed under Apache-2.0. It shares code with LLMs via the Model Context Protocol or the clipboard; rule-based customization enables easy switching between tasks (such as code review and documentation), and it includes smart code outlining. Topics: coding, tools, cli, claude-desktop, model-context-protocol.

Ranked #468 out of 25,632 indexed tools.

In the top 2% of all indexed tools.
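The "top 2%" claim follows directly from the rank and total shown above; a quick sanity check:

```python
# Sanity check: rank 468 out of 25,632 indexed tools.
rank = 468
total = 25632
percentile = rank / total * 100
print(f"Top {percentile:.1f}% of indexed tools")  # ≈ 1.8%, i.e. within the top 2%
```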

Actively maintained with commits in the last week.

Ecosystem

Python · Apache-2.0 · 2 weekly downloads
Tags: coding, tools, cli, claude-desktop, model-context-protocol

Signal Breakdown

Stars 295
Freshness 7d ago
Issue Health 29%
Contributors 2
Dependents 0
Forks 24
Description Detailed
License Apache-2.0
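AgentRank does not publish its scoring formula on this page. Purely as an illustration of how signals like the ones above could combine into a single 0–100 score, here is a normalized weighted-sum sketch; every weight and normalization cap below is an assumption, not AgentRank's actual method, and the result is not expected to match the 45.3 shown:

```python
# Purely illustrative: fold repo signals into a 0-100 composite score.
# Weights and caps are hypothetical, NOT AgentRank's real formula.
signals = {
    "stars": 295,
    "forks": 24,
    "contributors": 2,
    "issue_health": 0.29,  # fraction of issues closed
    "dependents": 0,
}
# Cap each raw signal, normalize to [0, 1], then take a weighted sum.
caps = {"stars": 1000, "forks": 100, "contributors": 10,
        "issue_health": 1.0, "dependents": 50}
weights = {"stars": 0.35, "forks": 0.10, "contributors": 0.15,
           "issue_health": 0.25, "dependents": 0.15}

score = 100 * sum(weights[k] * min(signals[k] / caps[k], 1.0) for k in signals)
print(f"Illustrative composite score: {score:.1f}")
```

The cap-then-normalize step keeps any single signal (e.g. a viral star count) from dominating the composite.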

How to Improve

Issue Health high impact

You have 5 open issues vs. 2 closed; triaging stale issues improves health.

Dependents medium impact

No downstream dependents detected yet; adoption by other projects is the strongest trust signal.
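The 29% Issue Health figure in the signal breakdown is consistent with the issue counts cited above, assuming (my assumption, not a documented formula) health is the closed-issue ratio:

```python
# Assumed metric: issue health = closed / (open + closed).
open_issues, closed_issues = 5, 2
health = closed_issues / (open_issues + closed_issues)
print(f"Issue health: {health:.0%}")  # 2/7 ≈ 29%
```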

Badge

AgentRank score for cyberchitta/llm-context.py
[![AgentRank](https://agentrank-ai.com/api/badge/tool/cyberchitta--llm-context.py)](https://agentrank-ai.com/tool/cyberchitta--llm-context.py)
<a href="https://agentrank-ai.com/tool/cyberchitta--llm-context.py"><img src="https://agentrank-ai.com/api/badge/tool/cyberchitta--llm-context.py" alt="AgentRank"></a>

Matched Queries

"model context protocol""model-context-protocol"

From the README

# LLM Context

**Smart context management for LLM development workflows.** Share relevant project files instantly through intelligent selection and rule-based filtering.

## The Problem

Getting the right context into LLM conversations is friction-heavy:

- Manually finding and copying relevant files wastes time
- Too much context hits token limits; too little misses important details
- AI requests for additional files require manual fetching
- Hard to track what changed during development sessions

## The Solution

llm-context provides focused, task-specific project context through composable rules.

**For humans using chat interfaces:**
```bash
lc-select   # Smart file selection
lc-context  # Copy formatted context to clipboard
# Paste and work - AI can access additional files via MCP
```

**For AI agents with CLI access:**
```bash
lc-preview tmp-prm-auth    # Validate rule selects right files
lc-context tmp-prm-auth    # Get focused context for sub-agent
```

**For AI agents in chat** …
Read full README on GitHub →