elallali/multi_agent_tool
Overview
elallali/multi_agent_tool is a Python agent tool.
Ranked #92 out of 104 indexed tools.
Actively maintained with commits in the last week.
Ecosystem
Python · No license
Score Breakdown
Stars (15%): 1 · 1 star → early stage
Freshness (25%): last commit 1 day ago → actively maintained
Issue Health (25%): 50% · no issues filed → no history to score
Contributors (10%): 3 · 3 contributors → small team
Dependents (25%): 0 · no dependents → no downstream usage
Forks: 0
Description: None
License: None
Weights: Freshness 25% · Issue Health 25% · Dependents 25% · Stars 15% · Contributors 10% · How we score →
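As a minimal sketch of how these weights could combine into a single rank score (the 0–1 normalization of each factor, the example inputs, and the function name are illustrative assumptions, not AgentRank's published formula):

```python
# Hypothetical weighted composite score using the listed weights.
# The per-factor normalization to 0..1 is an assumption for illustration.
WEIGHTS = {
    "freshness": 0.25,
    "issue_health": 0.25,
    "dependents": 0.25,
    "stars": 0.15,
    "contributors": 0.10,
}

def composite_score(factor_scores: dict[str, float]) -> float:
    """Combine per-factor scores (each already normalized to 0..1) into one score."""
    return sum(WEIGHTS[name] * factor_scores.get(name, 0.0) for name in WEIGHTS)

# Example inputs loosely mirroring this listing's breakdown (assumed values).
example = {
    "freshness": 1.0,      # last commit 1 day ago
    "issue_health": 0.5,   # no issue history -> neutral 50%
    "dependents": 0.0,     # no downstream usage
    "stars": 0.0,          # 1 star, early stage
    "contributors": 0.1,   # small team
}
print(f"{composite_score(example):.2f}")  # weighted sum, about 0.39 for these assumed values
```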
How to Improve
Add a description: low impact
Add a license: low impact
Gain dependents: medium impact
Matched Queries
From the README
# multi_agent_tool

## Environment Variables (Local LLM)

This project loads `.env` automatically via `python-dotenv`. Set the local LLM variables below to run against a local OpenAI-compatible server:

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `LLAMA_SERVER_BASE_URL` | Yes (for local mode) | none | Base URL of your local server, e.g. `http://127.0.0.1:8080/v1`. |
| `LLAMA_SERVER_MODEL_ID` | Yes (for local mode) | none | Model name/id sent as `model` in chat completion requests. |
| `LLAMA_SERVER_API_KEY` | No | empty | Optional bearer token for local server auth. |
| `LLAMA_SERVER_TIMEOUT` | No | `120` | Request timeout in seconds. Must be `> 0`. |
| `LLAMA_SERVER_FOLLOWUP_PROMPT` | No | `Please provide your response.` | Prompt appended when the previous message role is `assistant` or `tool`. |
| `LLAMA_SERVER_CONTINUE_PROMPT` | No | `Continue.` | Prompt used when the model stops with `finish_reason=length`. |
| `LLAMA_SERVER_MAX_CONTINUATIONS` | No | … | … |

Read full README on GitHub →
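The variables above point the tool at a local OpenAI-compatible endpoint. Below is a rough sketch of how they are typically wired together using `python-dotenv` and the standard `openai` client; the client choice and this helper snippet are assumptions for illustration, not code taken from multi_agent_tool itself:

```python
# Hypothetical sketch: load the README's environment variables and call a
# local OpenAI-compatible server. Not taken from multi_agent_tool's source.
import os

from dotenv import load_dotenv  # python-dotenv, loaded automatically per the README
from openai import OpenAI

load_dotenv()  # reads .env from the working directory

client = OpenAI(
    base_url=os.environ["LLAMA_SERVER_BASE_URL"],          # e.g. http://127.0.0.1:8080/v1
    api_key=os.getenv("LLAMA_SERVER_API_KEY") or "none",    # optional bearer token for local auth
    timeout=float(os.getenv("LLAMA_SERVER_TIMEOUT", "120")),  # seconds
)

response = client.chat.completions.create(
    model=os.environ["LLAMA_SERVER_MODEL_ID"],
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```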