# MCP Bridge Setup — Qwen3 via Ollama

This document describes how the MCP (Model Context Protocol) bridge connects Qwen3 models running in Ollama to Timmy's tool ecosystem.

## Architecture

```
User Prompt
     │
     ▼
┌──────────────┐      /api/chat      ┌──────────────────┐
│  MCPBridge   │ ──────────────────▶ │  Ollama (Qwen3)  │
│  (Python)    │ ◀────────────────── │  tool_calls JSON │
└──────┬───────┘                     └──────────────────┘
       │
       │ Execute tool calls
       ▼
┌──────────────────────────────────────────────┐
│              MCP Tool Handlers               │
├──────────────┬───────────────┬───────────────┤
│  Gitea API   │  Shell Exec   │ Custom Tools  │
│  (httpx)     │  (ShellHand)  │  (pluggable)  │
└──────────────┴───────────────┴───────────────┘
```

## Bridge Options Evaluated

| Option | Verdict | Reason |
|--------|---------|--------|
| **Direct Ollama `/api/chat`** | **Selected** | Zero extra deps, native Qwen3 tool support, full control |
| qwen-agent MCP | Rejected | Adds heavy dependency (qwen-agent), overlaps with Agno |
| ollmcp | Rejected | External Go binary, limited error handling |
| mcphost | Rejected | Generic host, doesn't integrate with existing tool safety |
| ollama-mcp-bridge | Rejected | Purpose-built but unmaintained, Node.js dependency |

The direct Ollama approach was chosen because it:

- Uses `httpx` (already a project dependency)
- Gives full control over the tool-call loop and error handling
- Integrates with existing tool safety (the ShellHand allow-list)
- Follows the project's graceful-degradation pattern
- Works with any Ollama model that supports tool calling

## Prerequisites

1. **Ollama** running locally (default: `http://localhost:11434`)
2. **Qwen3 model** pulled:

   ```bash
   ollama pull qwen3:14b   # or qwen3:30b for better tool accuracy
   ```

3.
   **Gitea** (optional) running with a valid API token

## Configuration

All settings are in `config.py` via environment variables or `.env`:

| Setting | Default | Description |
|---------|---------|-------------|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `OLLAMA_MODEL` | `qwen3:30b` | Default model for tool calling |
| `OLLAMA_NUM_CTX` | `4096` | Context window cap |
| `MCP_BRIDGE_TIMEOUT` | `60` | HTTP timeout for bridge calls (seconds) |
| `GITEA_URL` | `http://localhost:3000` | Gitea instance URL |
| `GITEA_TOKEN` | (empty) | Gitea API token |
| `GITEA_REPO` | `rockachopa/Timmy-time-dashboard` | Target repository |

## Usage

### Basic usage

```python
from timmy.mcp_bridge import MCPBridge

async def main():
    bridge = MCPBridge()
    async with bridge:
        result = await bridge.run("List open issues in the repo")
        print(result.content)
        print(f"Tool calls: {len(result.tool_calls_made)}")
        print(f"Latency: {result.latency_ms:.0f}ms")
```

### With custom tools

```python
from timmy.mcp_bridge import MCPBridge, MCPToolDef

async def my_handler(**kwargs):
    return f"Processed: {kwargs}"

custom_tool = MCPToolDef(
    name="my_tool",
    description="Does something custom",
    parameters={
        "type": "object",
        "properties": {
            "input": {"type": "string", "description": "Input data"},
        },
        "required": ["input"],
    },
    handler=my_handler,
)

bridge = MCPBridge(extra_tools=[custom_tool])
```

### Selective tool loading

```python
# Gitea tools only (no shell)
bridge = MCPBridge(include_shell=False)

# Shell only (no Gitea)
bridge = MCPBridge(include_gitea=False)

# Custom model
bridge = MCPBridge(model="qwen3:14b")
```

## Available Tools

### Gitea Tools (enabled when `GITEA_TOKEN` is set)

| Tool | Description |
|------|-------------|
| `list_issues` | List issues by state (open/closed/all) |
| `create_issue` | Create a new issue with title and body |
| `read_issue` | Read details of a specific issue by number |

### Shell Tool (enabled by default)

| Tool | Description |
|------|-------------|
| `shell_exec` | Execute sandboxed shell commands (allow-list enforced) |

The shell tool uses the project's `ShellHand` with its allow-list of safe commands (make, pytest, git, ls, cat, grep, etc.). Dangerous commands are blocked.

## How Tool Calling Works

1. The user prompt is sent to Ollama along with the tool definitions.
2. Qwen3 generates a response — either text or `tool_calls` JSON.
3. If tool calls are present, the bridge executes each one.
4. Tool results are appended to the message history as `role: "tool"`.
5. The updated history is sent back to the model.
6. Steps 2-5 repeat until the model produces a final text response.
7. Safety valve: a maximum of 10 rounds (configurable via `max_rounds`).

### Example tool-call flow

```
User: "How many open issues are there?"

Round 1:
  Model  → tool_call: list_issues(state="open")
  Bridge → executes list_issues → "#1: Bug one\n#2: Feature two"

Round 2:
  Model  → "There are 2 open issues: Bug one (#1) and Feature two (#2)."
  Bridge → returns BridgeResult(content="There are 2 open issues...")
```

## Integration with Existing MCP Infrastructure

The bridge complements (not replaces) the existing Agno-based MCP integration:

| Component | Use Case |
|-----------|----------|
| `mcp_tools.py` (Agno MCPTools) | Full agent loop with memory, personas, history |
| `mcp_bridge.py` (MCPBridge) | Lightweight direct tool calling, testing, scripts |

Both share the same Gitea and shell infrastructure. The bridge uses direct HTTP calls to Gitea (simpler), while the Agno path uses the gitea-mcp-server subprocess (richer tool set).
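The tool-call loop described in "How Tool Calling Works" can be sketched as a minimal, transport-agnostic function. This is an illustration only, not the actual `MCPBridge` implementation: it assumes Ollama's `/api/chat` message shape (a `tool_calls` list whose entries carry `function.name` and `function.arguments`), and it takes the chat transport as a parameter so the loop logic can be read (and tested) without a running server.

```python
import json
from typing import Any, Callable

def run_tool_loop(
    chat: Callable[[list], dict],            # sends history, returns one assistant message
    handlers: dict,                          # tool name -> callable
    prompt: str,
    max_rounds: int = 10,                    # safety valve, mirrors max_rounds
) -> str:
    """Minimal sketch of steps 1-7 of the bridge's tool-call loop."""
    history: list = [{"role": "user", "content": prompt}]
    for _ in range(max_rounds):
        message = chat(history)              # step 2: model responds
        history.append(message)
        tool_calls = message.get("tool_calls") or []
        if not tool_calls:                   # final text answer: we're done
            return message.get("content", "")
        for call in tool_calls:              # steps 3-4: execute each call
            fn = call["function"]
            args = fn["arguments"]
            if isinstance(args, str):        # arguments may arrive JSON-encoded
                args = json.loads(args)
            result = handlers[fn["name"]](**args)
            history.append({"role": "tool", "content": str(result)})
        # steps 5-6: the next iteration sends the updated history back
    raise RuntimeError("max tool-call rounds reached")
```

Injecting `chat` also makes the two-round example flow above easy to replay with canned responses, which is the same trick the unit tests can use to avoid a live Ollama.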
## Testing

```bash
# Unit tests (no Ollama required)
tox -e unit -- tests/timmy/test_mcp_bridge.py

# Live test (requires running Ollama with qwen3)
tox -e ollama -- tests/timmy/test_mcp_bridge.py
```

## Troubleshooting

| Problem | Solution |
|---------|----------|
| "Ollama connection failed" | Ensure `ollama serve` is running |
| "Model not found" | Run `ollama pull qwen3:14b` |
| Tool calls return errors | Check the tool allow-list in ShellHand |
| "max tool-call rounds reached" | The model is looping — simplify the prompt |
| Gitea tools return empty | Check `GITEA_TOKEN` and `GITEA_URL` |
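The first two troubleshooting rows can be distinguished programmatically before invoking the bridge. The sketch below is not part of the project; it assumes only Ollama's standard `GET /api/tags` endpoint (which lists locally pulled models) and uses the standard library so it adds no dependencies:

```python
import json
import urllib.error
import urllib.request

def preflight(ollama_url: str = "http://localhost:11434",
              model: str = "qwen3:14b") -> list:
    """Return a list of problems found; an empty list means Ollama looks ready."""
    try:
        # /api/tags lists the models pulled into the local Ollama instance
        with urllib.request.urlopen(f"{ollama_url}/api/tags", timeout=5) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError):
        return [f"Ollama connection failed: is 'ollama serve' running at {ollama_url}?"]
    names = [m.get("name", "") for m in tags.get("models", [])]
    if not any(name.startswith(model) for name in names):
        return [f"Model not found: run 'ollama pull {model}'"]
    return []
```

Running `preflight()` at startup lets scripts fail fast with a specific message instead of surfacing a generic HTTP error from the first bridge call.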