mcp-turboquant by ShipItAndPray
Score: 23
MCP server for LLM quantization. Compresses any Hugging Face model to GGUF, GPTQ, or AWQ format. Exposes six tools: info, check, recommend, quantize, evaluate, and push. Self-contained Python server; no external CLI required.
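For context, here is a minimal sketch of how an MCP client might drive these tools, using the official MCP Python SDK (pip install mcp). The launch command (python -m mcp_turboquant) and the quantize argument names are assumptions; the listing names the six tools but not their schemas.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command for the self-contained Python server.
    server = StdioServerParameters(command="python", args=["-m", "mcp_turboquant"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Should list the six advertised tools:
            # info, check, recommend, quantize, evaluate, push.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Hypothetical arguments; the real schema may differ.
            result = await session.call_tool(
                "quantize",
                arguments={"model_id": "gpt2", "format": "gguf"},
            )
            print(result.content)


asyncio.run(main())
```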
Ranked #6133 out of 7050 indexed skills.
Signal Breakdown
Installs: 0
Freshness: GitHub not linked
Issue Health: GitHub not linked
Stars: GitHub not linked
Platform Breadth: 1 platform
Contributors: GitHub not linked
Description: Detailed
How to Improve
Platforms: medium impact