toolfence MCP Server
Rik-Banerjee/toolfence
Deterministic runtime security for AI agent tools
Overview
Rik-Banerjee/toolfence is a Python agent tool licensed under MIT. It provides deterministic runtime security for AI agent tools.
Ranked #55 out of 100 indexed tools.
Actively maintained with commits in the last week.
Score Breakdown
| Metric | Weight | Value | Note |
|---|---|---|---|
| Stars | 15% | 2 | 2 stars → early stage |
| Freshness | 25% | 3d ago | Last commit 3d ago → actively maintained |
| Issue Health | 25% | 50% | No issues filed → no history to score |
| Contributors | 10% | 1 | 1 contributor → solo project |
| Dependents | 25% | 0 | No dependents → no downstream usage |
| npm Downloads | n/a | N/A | |
| PyPI Downloads | 13% | 122/wk | 122 weekly installs → early adoption |
| Forks | n/a | 0 | |
| Description | n/a | Brief | |
| License | n/a | MIT | |
Weights: Freshness 25% · Issue Health 25% · Dependents 25% · Stars 15% · Contributors 10%
How to Improve
- Description: low impact
- Contributors: medium impact
- Dependents: medium impact
From the README

# ToolFence

**Deterministic runtime security for AI agent tools.**

LLMs can hallucinate and be prompt-engineered, allowing faulty agent actions to slip through. ToolFence helps solve this by enforcing strict, deterministic rules.

ToolFence is a lightweight Python framework that sits between your LLM and your tool functions. When an agent calls a tool, ToolFence intercepts the call, evaluates your rules, and either passes it through, blocks it, or escalates it for user approval, all before your tool function runs. Rules are Python code, not LLM instructions, so they cannot be overridden by a clever prompt.

---

## Why ToolFence

LLMs are good at deciding *what* to do. They are not reliable enforcers of *what is allowed*. A well-crafted prompt can convince an LLM to ignore its own safety instructions. ToolFence provides a quick and simple way to enforce policy at the code layer: outside the prompt, outside the model, and outside the reach of any user input.

```
User prompt → LLM
```

Read full README on GitHub →
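The intercept-then-decide flow described in the README can be sketched in plain Python. This is an illustrative sketch, not ToolFence's actual API: the `Rule`, `Decision`, and `guard` names are invented here to show how deterministic, code-level rules can run before a tool function and cannot be talked around by a prompt.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Decision(Enum):
    """Possible outcomes of a rule check (illustrative names)."""
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"

@dataclass
class Rule:
    """A deterministic check: plain Python, outside the model's reach."""
    name: str
    check: Callable[[dict], Decision]

def guard(rules: list[Rule]):
    """Decorator that evaluates every rule before the tool function runs."""
    def wrap(tool):
        def guarded(**kwargs):
            for rule in rules:
                decision = rule.check(kwargs)
                if decision is Decision.BLOCK:
                    raise PermissionError(f"blocked by rule: {rule.name}")
                if decision is Decision.ESCALATE:
                    # A real framework would pause here for user approval;
                    # this sketch just refuses.
                    raise PermissionError(f"needs approval: {rule.name}")
            return tool(**kwargs)
        return guarded
    return wrap

# Example rule: only allow deletes under /tmp, no matter what the LLM asks.
only_tmp = Rule(
    name="only_tmp_deletes",
    check=lambda args: Decision.ALLOW
    if args.get("path", "").startswith("/tmp/")
    else Decision.BLOCK,
)

@guard([only_tmp])
def delete_file(path: str) -> str:
    return f"deleted {path}"
```

Because the rule is ordinary code evaluated before `delete_file` executes, a prompt-injected call like `delete_file(path="/etc/passwd")` raises `PermissionError` deterministically, regardless of what the model was convinced to do.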