feat: add MCP bridge for Qwen3 via Ollama (#1067)
Implements a lightweight MCP bridge that connects Qwen3 models in Ollama
directly to Timmy's tool ecosystem (Gitea, shell) via Ollama's native
/api/chat tool-calling API. This complements the existing Agno-based MCP
integration with a simpler, standalone path for direct tool calling.

- src/timmy/mcp_bridge.py: MCPBridge class with tool-call loop, Gitea
  tools (list/create/read issues), and shell execution via ShellHand
- tests/timmy/test_mcp_bridge.py: 24 unit tests covering tool schema
  conversion, bridge lifecycle, tool-call loop, error handling
- docs/mcp-setup.md: architecture, bridge option evaluation, setup guide
- config.py: mcp_bridge_timeout setting (60s default)

Fixes #1067

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docs/mcp-setup.md (new file, 195 lines)
@@ -0,0 +1,195 @@
# MCP Bridge Setup — Qwen3 via Ollama

This document describes how the MCP (Model Context Protocol) bridge connects
Qwen3 models running in Ollama to Timmy's tool ecosystem.

## Architecture

```
User Prompt
     │
     ▼
┌──────────────┐      /api/chat      ┌──────────────────┐
│  MCPBridge   │ ──────────────────▶ │  Ollama (Qwen3)  │
│  (Python)    │ ◀────────────────── │  tool_calls JSON │
└──────┬───────┘                     └──────────────────┘
       │
       │ Execute tool calls
       ▼
┌──────────────────────────────────────────────┐
│              MCP Tool Handlers               │
├──────────────┬───────────────┬───────────────┤
│  Gitea API   │  Shell Exec   │  Custom Tools │
│  (httpx)     │  (ShellHand)  │  (pluggable)  │
└──────────────┴───────────────┴───────────────┘
```

## Bridge Options Evaluated

| Option | Verdict | Reason |
|--------|---------|--------|
| **Direct Ollama /api/chat** | **Selected** | Zero extra deps, native Qwen3 tool support, full control |
| qwen-agent MCP | Rejected | Adds a heavy dependency (qwen-agent), overlaps with Agno |
| ollmcp | Rejected | External Go binary, limited error handling |
| mcphost | Rejected | Generic host, doesn't integrate with existing tool safety |
| ollama-mcp-bridge | Rejected | Purpose-built but unmaintained, Node.js dependency |

The direct Ollama approach was chosen because it:

- Uses `httpx` (already a project dependency)
- Gives full control over the tool-call loop and error handling
- Integrates with existing tool safety (ShellHand allow-list)
- Follows the project's graceful-degradation pattern
- Works with any Ollama model that supports tool calling

## Prerequisites

1. **Ollama** running locally (default: `http://localhost:11434`)
2. **Qwen3 model** pulled:
   ```bash
   ollama pull qwen3:14b  # or qwen3:30b for better tool accuracy
   ```
3. **Gitea** (optional) running with a valid API token

## Configuration

All settings are in `config.py` via environment variables or `.env`:

| Setting | Default | Description |
|---------|---------|-------------|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `OLLAMA_MODEL` | `qwen3:30b` | Default model for tool calling |
| `OLLAMA_NUM_CTX` | `4096` | Context window cap |
| `MCP_BRIDGE_TIMEOUT` | `60` | HTTP timeout for bridge calls (seconds) |
| `GITEA_URL` | `http://localhost:3000` | Gitea instance URL |
| `GITEA_TOKEN` | (empty) | Gitea API token |
| `GITEA_REPO` | `rockachopa/Timmy-time-dashboard` | Target repository |

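As an illustration, a `.env` that points the bridge at a remote Ollama host and enables the Gitea tools might look like this (hostname and token are placeholders, not values from this project):

```bash
# Illustrative .env — override only the defaults you need to change
OLLAMA_URL=http://ollama-box:11434
OLLAMA_MODEL=qwen3:14b
MCP_BRIDGE_TIMEOUT=120
GITEA_TOKEN=your-gitea-api-token
```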
## Usage

### Basic usage

```python
from timmy.mcp_bridge import MCPBridge


async def main():
    bridge = MCPBridge()
    async with bridge:
        result = await bridge.run("List open issues in the repo")
        print(result.content)
        print(f"Tool calls: {len(result.tool_calls_made)}")
        print(f"Latency: {result.latency_ms:.0f}ms")
```

### With custom tools

```python
from timmy.mcp_bridge import MCPBridge, MCPToolDef


async def my_handler(**kwargs):
    return f"Processed: {kwargs}"


custom_tool = MCPToolDef(
    name="my_tool",
    description="Does something custom",
    parameters={
        "type": "object",
        "properties": {
            "input": {"type": "string", "description": "Input data"},
        },
        "required": ["input"],
    },
    handler=my_handler,
)

bridge = MCPBridge(extra_tools=[custom_tool])
```

### Selective tool loading

```python
# Gitea tools only (no shell)
bridge = MCPBridge(include_shell=False)

# Shell only (no Gitea)
bridge = MCPBridge(include_gitea=False)

# Custom model
bridge = MCPBridge(model="qwen3:14b")
```

## Available Tools

### Gitea Tools (enabled when `GITEA_TOKEN` is set)

| Tool | Description |
|------|-------------|
| `list_issues` | List issues by state (open/closed/all) |
| `create_issue` | Create a new issue with title and body |
| `read_issue` | Read details of a specific issue by number |

### Shell Tool (enabled by default)

| Tool | Description |
|------|-------------|
| `shell_exec` | Execute sandboxed shell commands (allow-list enforced) |

The shell tool uses the project's `ShellHand` with its allow-list of safe
commands (make, pytest, git, ls, cat, grep, etc.). Dangerous commands are
blocked.

## How Tool Calling Works

1. The user prompt is sent to Ollama with tool definitions
2. Qwen3 generates a response — either text or `tool_calls` JSON
3. If tool calls are present, the bridge executes each one
4. Tool results are appended to the message history as `role: "tool"`
5. The updated history is sent back to the model
6. Steps 2-5 repeat until the model produces a final text response
7. Safety valve: maximum 10 rounds (configurable via `max_rounds`)

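Stripped of error handling and HTTP details, the steps above reduce to a small loop. The sketch below is a simplified illustration, not the bridge's actual API: `chat` stands in for the Ollama `/api/chat` call, and `tools` is a plain name-to-handler dict.

```python
# Simplified sketch of the tool-call loop. `chat(messages)` is a
# stand-in for POST /api/chat and must return one assistant message.
def run_tool_loop(chat, tools, messages, max_rounds=10):
    """Repeat chat -> execute tool_calls -> append results until text."""
    for _ in range(max_rounds):
        msg = chat(messages)
        calls = msg.get("tool_calls", [])
        if not calls:
            # No tool calls: this is the final text response.
            return msg.get("content", "")
        # Keep the assistant turn (with its tool_calls) in the history.
        messages.append(msg)
        for tc in calls:
            fn = tc["function"]
            result = tools[fn["name"]](**fn["arguments"])
            # Feed the tool output back as a `role: "tool"` message.
            messages.append({"role": "tool", "content": str(result)})
    return "(max tool-call rounds reached)"
```

The real bridge does the same thing asynchronously, with per-tool error capture and a `BridgeResult` wrapper around the final content.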
### Example tool-call flow

```
User: "How many open issues are there?"

Round 1:
  Model  → tool_call: list_issues(state="open")
  Bridge → executes list_issues → "#1: Bug one\n#2: Feature two"

Round 2:
  Model  → "There are 2 open issues: Bug one (#1) and Feature two (#2)."
  Bridge → returns BridgeResult(content="There are 2 open issues...")
```

## Integration with Existing MCP Infrastructure

The bridge complements (not replaces) the existing Agno-based MCP integration:

| Component | Use Case |
|-----------|----------|
| `mcp_tools.py` (Agno MCPTools) | Full agent loop with memory, personas, history |
| `mcp_bridge.py` (MCPBridge) | Lightweight direct tool calling, testing, scripts |

Both share the same Gitea and shell infrastructure. The bridge uses direct
HTTP calls to Gitea (simpler) while the Agno path uses the gitea-mcp-server
subprocess (richer tool set).

## Testing

```bash
# Unit tests (no Ollama required)
tox -e unit -- tests/timmy/test_mcp_bridge.py

# Live test (requires running Ollama with qwen3)
tox -e ollama -- tests/timmy/test_mcp_bridge.py
```

## Troubleshooting

| Problem | Solution |
|---------|----------|
| "Ollama connection failed" | Ensure `ollama serve` is running |
| "Model not found" | Run `ollama pull qwen3:14b` |
| Tool calls return errors | Check the tool allow-list in ShellHand |
| "max tool-call rounds reached" | Model is looping — simplify the prompt |
| Gitea tools return empty | Check `GITEA_TOKEN` and `GITEA_URL` |

config.py
@@ -298,6 +298,7 @@ class Settings(BaseSettings):
    mcp_gitea_command: str = "gitea-mcp-server -t stdio"
    mcp_filesystem_command: str = "npx -y @modelcontextprotocol/server-filesystem"
    mcp_timeout: int = 15
    mcp_bridge_timeout: int = 60  # HTTP timeout for MCP bridge Ollama calls (seconds)

    # ── Loop QA (Self-Testing) ─────────────────────────────────────────
    # Self-test orchestrator that probes capabilities alongside the thinking loop.

src/timmy/mcp_bridge.py (new file, 540 lines)
@@ -0,0 +1,540 @@
"""MCP Bridge for Qwen3 via Ollama.

Provides a lightweight bridge between Ollama's native tool-calling API
and MCP tool servers (Gitea, Filesystem, Shell). Unlike the Agno-based
agent loop, this bridge talks directly to the Ollama ``/api/chat``
endpoint, translating MCP tool schemas into Ollama tool definitions and
executing tool calls in a loop until the model produces a final response.

Designed for Qwen3 models, which have first-class tool-calling support.

Usage::

    from timmy.mcp_bridge import MCPBridge

    bridge = MCPBridge()
    async with bridge:
        result = await bridge.run("List open issues in Timmy-time-dashboard")
        print(result.content)

The bridge evaluates available options in order of preference:

1. Direct Ollama /api/chat with native tool_calls (selected — best fit)
2. qwen-agent MCP (requires a separate qwen-agent install)
3. ollmcp / mcphost / ollama-mcp-bridge (external binaries)

Option 1 was selected because:

- Zero additional dependencies (uses httpx, already in the project)
- Native Qwen3 tool-calling support via Ollama's OpenAI-compatible API
- Full control over the tool-call loop and error handling
- Consistent with the project's graceful-degradation pattern
"""

from __future__ import annotations

import logging
import time
from dataclasses import dataclass, field
from typing import Any

import httpx

from config import settings

logger = logging.getLogger(__name__)

# Maximum tool-call round-trips before aborting (safety valve).
_MAX_TOOL_ROUNDS = 10


@dataclass
class BridgeResult:
    """Result from an MCP bridge run."""

    content: str
    tool_calls_made: list[dict] = field(default_factory=list)
    rounds: int = 0
    latency_ms: float = 0.0
    model: str = ""
    error: str = ""


@dataclass
class MCPToolDef:
    """An MCP tool definition translated for Ollama."""

    name: str
    description: str
    parameters: dict[str, Any]
    handler: Any  # async callable(**kwargs) -> str


def _mcp_schema_to_ollama_tool(tool: MCPToolDef) -> dict:
    """Convert an MCPToolDef into Ollama's tool format.

    Ollama uses OpenAI-compatible tool definitions::

        {
            "type": "function",
            "function": {
                "name": "...",
                "description": "...",
                "parameters": {"type": "object", "properties": {...}, "required": [...]}
            }
        }
    """
    # Normalise parameters — ensure it has the "type": "object" wrapper.
    params = tool.parameters
    if params.get("type") != "object":
        params = {
            "type": "object",
            "properties": params,
            "required": list(params.keys()),
        }

    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": params,
        },
    }


def _build_shell_tool() -> MCPToolDef | None:
    """Build the shell execution tool using the local ShellHand."""
    try:
        from infrastructure.hands.shell import shell_hand

        async def _handle_shell(**kwargs: Any) -> str:
            command = kwargs.get("command", "")
            timeout = kwargs.get("timeout")
            result = await shell_hand.run(command, timeout=timeout)
            if result.success:
                return result.stdout or "(no output)"
            return f"[error] exit={result.exit_code} {result.error or result.stderr}"

        return MCPToolDef(
            name="shell_exec",
            description=(
                "Execute a shell command in a sandboxed environment. "
                "Commands are validated against an allow-list. "
                "Returns stdout, stderr, and exit code."
            ),
            parameters={
                "type": "object",
                "properties": {
                    "command": {
                        "type": "string",
                        "description": "Shell command to execute (must match allow-list)",
                    },
                    "timeout": {
                        "type": "integer",
                        "description": "Timeout in seconds (default 60)",
                    },
                },
                "required": ["command"],
            },
            handler=_handle_shell,
        )
    except Exception as exc:
        logger.debug("Shell tool unavailable: %s", exc)
        return None


def _build_gitea_tools() -> list[MCPToolDef]:
    """Build Gitea MCP tool definitions for direct Ollama bridge use.

    These tools call the Gitea REST API directly via httpx rather than
    spawning an MCP server subprocess, keeping the bridge lightweight.
    """
    if not settings.gitea_enabled or not settings.gitea_token:
        return []

    base_url = settings.gitea_url
    token = settings.gitea_token
    owner, repo = settings.gitea_repo.split("/", 1)

    async def _list_issues(**kwargs: Any) -> str:
        state = kwargs.get("state", "open")
        limit = kwargs.get("limit", 10)
        try:
            async with httpx.AsyncClient(timeout=15) as client:
                resp = await client.get(
                    f"{base_url}/api/v1/repos/{owner}/{repo}/issues",
                    headers={"Authorization": f"token {token}"},
                    params={"state": state, "limit": limit, "type": "issues"},
                )
                resp.raise_for_status()
                issues = resp.json()
                if not issues:
                    return f"No {state} issues found."
                lines = []
                for issue in issues:
                    labels = ", ".join(lb["name"] for lb in issue.get("labels", []))
                    label_str = f" [{labels}]" if labels else ""
                    lines.append(f"#{issue['number']}: {issue['title']}{label_str}")
                return "\n".join(lines)
        except Exception as exc:
            return f"Error listing issues: {exc}"

    async def _create_issue(**kwargs: Any) -> str:
        title = kwargs.get("title", "")
        body = kwargs.get("body", "")
        if not title:
            return "Error: title is required"
        try:
            async with httpx.AsyncClient(timeout=15) as client:
                resp = await client.post(
                    f"{base_url}/api/v1/repos/{owner}/{repo}/issues",
                    headers={
                        "Authorization": f"token {token}",
                        "Content-Type": "application/json",
                    },
                    json={"title": title, "body": body},
                )
                resp.raise_for_status()
                data = resp.json()
                return f"Created issue #{data['number']}: {data['title']}"
        except Exception as exc:
            return f"Error creating issue: {exc}"

    async def _read_issue(**kwargs: Any) -> str:
        number = kwargs.get("number")
        if not number:
            return "Error: issue number is required"
        try:
            async with httpx.AsyncClient(timeout=15) as client:
                resp = await client.get(
                    f"{base_url}/api/v1/repos/{owner}/{repo}/issues/{number}",
                    headers={"Authorization": f"token {token}"},
                )
                resp.raise_for_status()
                issue = resp.json()
                labels = ", ".join(lb["name"] for lb in issue.get("labels", []))
                parts = [
                    f"#{issue['number']}: {issue['title']}",
                    f"State: {issue['state']}",
                ]
                if labels:
                    parts.append(f"Labels: {labels}")
                if issue.get("body"):
                    parts.append(f"\n{issue['body']}")
                return "\n".join(parts)
        except Exception as exc:
            return f"Error reading issue: {exc}"

    return [
        MCPToolDef(
            name="list_issues",
            description="List issues in the Gitea repository. Returns issue numbers and titles.",
            parameters={
                "type": "object",
                "properties": {
                    "state": {
                        "type": "string",
                        "description": "Filter by state: open, closed, or all (default: open)",
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Maximum number of issues to return (default: 10)",
                    },
                },
                "required": [],
            },
            handler=_list_issues,
        ),
        MCPToolDef(
            name="create_issue",
            description="Create a new issue in the Gitea repository.",
            parameters={
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Issue title (required)",
                    },
                    "body": {
                        "type": "string",
                        "description": "Issue body in markdown (optional)",
                    },
                },
                "required": ["title"],
            },
            handler=_create_issue,
        ),
        MCPToolDef(
            name="read_issue",
            description="Read details of a specific issue by number.",
            parameters={
                "type": "object",
                "properties": {
                    "number": {
                        "type": "integer",
                        "description": "Issue number to read",
                    },
                },
                "required": ["number"],
            },
            handler=_read_issue,
        ),
    ]


class MCPBridge:
    """Bridge between Ollama's tool-calling API and MCP tools.

    Manages a set of tool definitions and executes a chat loop with
    tool calling against a Qwen3 model via Ollama.

    The bridge:

    1. Registers available tools (Gitea, shell, custom)
    2. Sends prompts to Ollama with tool definitions
    3. Executes tool calls when the model requests them
    4. Returns tool results to the model for the next round
    5. Repeats until the model produces a final text response

    Attributes:
        model: Ollama model name (default from settings).
        ollama_url: Ollama API base URL (default from settings).
        tools: Registered tool definitions.
    """

    def __init__(
        self,
        model: str | None = None,
        ollama_url: str | None = None,
        *,
        include_gitea: bool = True,
        include_shell: bool = True,
        extra_tools: list[MCPToolDef] | None = None,
        max_rounds: int = _MAX_TOOL_ROUNDS,
    ) -> None:
        self.model = model or settings.ollama_model
        self.ollama_url = ollama_url or settings.normalized_ollama_url
        self.max_rounds = max_rounds
        self._tools: dict[str, MCPToolDef] = {}
        self._client: httpx.AsyncClient | None = None

        # Register built-in tools
        if include_gitea:
            for tool in _build_gitea_tools():
                self._tools[tool.name] = tool

        if include_shell:
            shell = _build_shell_tool()
            if shell:
                self._tools[shell.name] = shell

        # Register extra tools
        if extra_tools:
            for tool in extra_tools:
                self._tools[tool.name] = tool

        logger.info(
            "MCPBridge initialised: model=%s, tools=%s",
            self.model,
            list(self._tools.keys()),
        )

    async def __aenter__(self) -> MCPBridge:
        self._client = httpx.AsyncClient(timeout=settings.mcp_bridge_timeout)
        return self

    async def __aexit__(self, *exc: Any) -> None:
        if self._client:
            await self._client.aclose()
            self._client = None

    @property
    def tool_names(self) -> list[str]:
        """Return the names of all registered tools."""
        return list(self._tools.keys())

    def _build_ollama_tools(self) -> list[dict]:
        """Convert registered tools to Ollama tool format."""
        return [_mcp_schema_to_ollama_tool(t) for t in self._tools.values()]

    async def _chat(self, messages: list[dict], tools: list[dict]) -> dict:
        """Send a chat request to Ollama and return the response.

        Uses the ``/api/chat`` endpoint with tool definitions.
        """
        if not self._client:
            raise RuntimeError("MCPBridge must be used as an async context manager")

        payload: dict[str, Any] = {
            "model": self.model,
            "messages": messages,
            "stream": False,
        }
        if tools:
            payload["tools"] = tools

        # Set num_ctx if configured
        if settings.ollama_num_ctx > 0:
            payload["options"] = {"num_ctx": settings.ollama_num_ctx}

        resp = await self._client.post(
            f"{self.ollama_url}/api/chat",
            json=payload,
        )
        resp.raise_for_status()
        return resp.json()

    async def _execute_tool_call(self, tool_call: dict) -> str:
        """Execute a single tool call and return the result string."""
        func = tool_call.get("function", {})
        name = func.get("name", "")
        arguments = func.get("arguments", {})

        tool = self._tools.get(name)
        if not tool:
            return f"Error: unknown tool '{name}'"

        try:
            result = await tool.handler(**arguments)
            return str(result)
        except Exception as exc:
            logger.warning("Tool '%s' execution failed: %s", name, exc)
            return f"Error executing {name}: {exc}"

    async def run(
        self,
        prompt: str,
        *,
        system_prompt: str | None = None,
    ) -> BridgeResult:
        """Run a prompt through the MCP bridge with tool calling.

        Sends the prompt to the Ollama model with tool definitions.
        If the model requests tool calls, executes them and feeds the
        results back until the model produces a final text response.

        Args:
            prompt: User message to send.
            system_prompt: Optional system prompt override.

        Returns:
            BridgeResult with the final response and tool-call history.
        """
        start = time.time()
        messages: list[dict] = []

        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})

        messages.append({"role": "user", "content": prompt})

        tools = self._build_ollama_tools()
        tool_calls_made: list[dict] = []
        rounds = 0

        try:
            for round_num in range(self.max_rounds):
                rounds = round_num + 1
                response = await self._chat(messages, tools)
                msg = response.get("message", {})

                # Check if the model made tool calls
                model_tool_calls = msg.get("tool_calls", [])
                if not model_tool_calls:
                    # Final text response — done.
                    content = msg.get("content", "")
                    latency = (time.time() - start) * 1000
                    return BridgeResult(
                        content=content,
                        tool_calls_made=tool_calls_made,
                        rounds=rounds,
                        latency_ms=latency,
                        model=self.model,
                    )

                # Append the assistant message (with tool_calls) to history
                messages.append(msg)

                # Execute each tool call and add the results
                for tc in model_tool_calls:
                    func = tc.get("function", {})
                    tool_name = func.get("name", "unknown")
                    tool_args = func.get("arguments", {})

                    logger.info(
                        "Bridge tool call [round %d]: %s(%s)",
                        rounds,
                        tool_name,
                        tool_args,
                    )

                    result = await self._execute_tool_call(tc)
                    tool_calls_made.append(
                        {
                            "round": rounds,
                            "tool": tool_name,
                            "arguments": tool_args,
                            "result": result[:500],  # Truncate for logging
                        }
                    )

                    # Add the tool result to the message history
                    messages.append(
                        {
                            "role": "tool",
                            "content": result,
                        }
                    )

            # Hit max rounds
            latency = (time.time() - start) * 1000
            return BridgeResult(
                content="(max tool-call rounds reached)",
                tool_calls_made=tool_calls_made,
                rounds=rounds,
                latency_ms=latency,
                model=self.model,
                error=f"Exceeded maximum of {self.max_rounds} tool-call rounds",
            )

        except httpx.ConnectError as exc:
            latency = (time.time() - start) * 1000
            logger.warning("Ollama connection failed: %s", exc)
            return BridgeResult(
                content="",
                tool_calls_made=tool_calls_made,
                rounds=rounds,
                latency_ms=latency,
                model=self.model,
                error=f"Ollama connection failed: {exc}",
            )
        except httpx.HTTPStatusError as exc:
            latency = (time.time() - start) * 1000
            logger.warning("Ollama HTTP error: %s", exc)
            return BridgeResult(
                content="",
                tool_calls_made=tool_calls_made,
                rounds=rounds,
                latency_ms=latency,
                model=self.model,
                error=f"Ollama HTTP error: {exc.response.status_code}",
            )
        except Exception as exc:
            latency = (time.time() - start) * 1000
            logger.error("MCPBridge run failed: %s", exc)
            return BridgeResult(
                content="",
                tool_calls_made=tool_calls_made,
                rounds=rounds,
                latency_ms=latency,
                model=self.model,
                error=str(exc),
            )

    def status(self) -> dict:
        """Return bridge status for the dashboard."""
        return {
            "model": self.model,
            "ollama_url": self.ollama_url,
            "tools": self.tool_names,
            "max_rounds": self.max_rounds,
            "connected": self._client is not None,
        }

@@ -14,7 +14,6 @@ from infrastructure.guards.moderation import (
    get_moderator,
)


# ── Unit tests for data types ────────────────────────────────────────────────

tests/timmy/test_mcp_bridge.py (new file, 619 lines)
@@ -0,0 +1,619 @@
|
||||
"""Tests for the MCP bridge module (Qwen3 via Ollama)."""
|
||||
|
||||
from unittest.mock import AsyncMock, MagicMock, patch
|
||||
|
||||
import httpx
|
||||
import pytest
|
||||
|
||||
from timmy.mcp_bridge import (
|
||||
BridgeResult,
|
||||
MCPBridge,
|
||||
MCPToolDef,
|
||||
_build_gitea_tools,
|
||||
_build_shell_tool,
|
||||
_mcp_schema_to_ollama_tool,
|
||||
)
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _mcp_schema_to_ollama_tool
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_schema_to_ollama_tool_basic():
|
||||
"""Converts an MCPToolDef to Ollama tool format."""
|
||||
tool = MCPToolDef(
|
||||
name="test_tool",
|
||||
description="A test tool",
|
||||
parameters={
|
||||
"type": "object",
|
||||
"properties": {"arg1": {"type": "string"}},
|
||||
"required": ["arg1"],
|
||||
},
|
||||
handler=AsyncMock(),
|
||||
)
|
||||
result = _mcp_schema_to_ollama_tool(tool)
|
||||
assert result["type"] == "function"
|
||||
assert result["function"]["name"] == "test_tool"
|
||||
assert result["function"]["description"] == "A test tool"
|
||||
assert result["function"]["parameters"]["type"] == "object"
|
||||
assert "arg1" in result["function"]["parameters"]["properties"]
|
||||
|
||||
|
||||
def test_schema_to_ollama_tool_wraps_bare_params():
|
||||
"""Wraps bare parameter dicts in an object type."""
|
||||
tool = MCPToolDef(
|
||||
name="bare",
|
||||
description="Bare params",
|
||||
parameters={"x": {"type": "integer"}},
|
||||
handler=AsyncMock(),
|
||||
)
|
||||
result = _mcp_schema_to_ollama_tool(tool)
|
||||
params = result["function"]["parameters"]
|
||||
assert params["type"] == "object"
|
||||
assert "x" in params["properties"]
|
||||
assert "x" in params["required"]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _build_shell_tool
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_build_shell_tool_returns_def():
|
||||
"""Shell tool builder returns an MCPToolDef."""
|
||||
tool = _build_shell_tool()
|
||||
assert tool is not None
|
||||
assert tool.name == "shell_exec"
|
||||
assert "command" in tool.parameters["properties"]
|
||||
|
||||
|
||||
def test_build_shell_tool_graceful_on_import_error():
|
||||
"""Shell tool returns None when infrastructure is unavailable."""
|
||||
with patch.dict("sys.modules", {"infrastructure.hands.shell": None}):
|
||||
# Force re-import failure — but _build_shell_tool catches it
|
||||
with patch(
|
||||
"timmy.mcp_bridge._build_shell_tool",
|
||||
wraps=_build_shell_tool,
|
||||
):
|
||||
# The real function should handle import errors
|
||||
tool = _build_shell_tool()
|
||||
# May return tool if import cache succeeds, or None if not
|
||||
# Just verify it doesn't raise
|
||||
assert tool is None or isinstance(tool, MCPToolDef)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _build_gitea_tools
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_gitea_tools_empty_when_disabled():
|
||||
"""Gitea tools returns empty list when disabled."""
|
||||
with patch("timmy.mcp_bridge.settings") as mock_settings:
|
||||
mock_settings.gitea_enabled = False
|
||||
mock_settings.gitea_token = ""
|
||||
result = _build_gitea_tools()
|
||||
assert result == []
|
||||
|
||||
|
||||
def test_gitea_tools_empty_when_no_token():
|
||||
"""Gitea tools returns empty list when no token."""
|
||||
with patch("timmy.mcp_bridge.settings") as mock_settings:
|
||||
mock_settings.gitea_enabled = True
|
||||
mock_settings.gitea_token = ""
|
||||
result = _build_gitea_tools()
|
||||
assert result == []
|
||||
|
||||
|
||||
def test_gitea_tools_returns_three_tools():
|
||||
"""Gitea tools returns list_issues, create_issue, read_issue."""
|
||||
with patch("timmy.mcp_bridge.settings") as mock_settings:
|
||||
mock_settings.gitea_enabled = True
|
||||
mock_settings.gitea_token = "tok123"
|
||||
mock_settings.gitea_url = "http://localhost:3000"
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
result = _build_gitea_tools()
|
||||
assert len(result) == 3
|
||||
names = {t.name for t in result}
|
||||
assert names == {"list_issues", "create_issue", "read_issue"}
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# MCPBridge.__init__
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_bridge_init_default():
|
||||
"""MCPBridge initialises with default settings."""
|
||||
with patch("timmy.mcp_bridge.settings") as mock_settings:
|
||||
mock_settings.ollama_model = "qwen3:14b"
|
||||
mock_settings.normalized_ollama_url = "http://localhost:11434"
|
||||
mock_settings.gitea_enabled = False
|
||||
mock_settings.gitea_token = ""
|
||||
bridge = MCPBridge(include_gitea=False, include_shell=False)
|
||||
assert bridge.model == "qwen3:14b"
|
||||
assert bridge.tool_names == []
|
||||
|
||||
|
||||
def test_bridge_init_with_extra_tools():
    """MCPBridge accepts extra tool definitions."""
    custom = MCPToolDef(
        name="custom_tool",
        description="Custom",
        parameters={"type": "object", "properties": {}, "required": []},
        handler=AsyncMock(),
    )
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""
        bridge = MCPBridge(
            include_gitea=False,
            include_shell=False,
            extra_tools=[custom],
        )
        assert "custom_tool" in bridge.tool_names


# ---------------------------------------------------------------------------
# MCPBridge.run — tool-call loop
# ---------------------------------------------------------------------------

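The `MCPToolDef` objects above are presumably serialised into the function-call schema that Ollama's `/api/chat` endpoint accepts for tool calling. A minimal sketch of that conversion, assuming the OpenAI-style `tools` payload shape documented by Ollama; the helper name `to_ollama_tool` is illustrative, not part of the bridge:

```python
def to_ollama_tool(name: str, description: str, parameters: dict) -> dict:
    """Wrap a tool definition in the function-call schema /api/chat expects."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }


# Same JSON-schema parameter shape used by MCPToolDef in the tests above.
tool = to_ollama_tool(
    "custom_tool",
    "Custom",
    {"type": "object", "properties": {}, "required": []},
)
```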
@pytest.mark.asyncio
async def test_bridge_run_simple_response():
    """Bridge returns model content when no tool calls are made."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.ollama_num_ctx = 4096
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        mock_resp = MagicMock()
        mock_resp.json.return_value = {
            "message": {"role": "assistant", "content": "Hello!"}
        }
        mock_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(return_value=mock_resp)
        mock_client.aclose = AsyncMock()

        bridge._client = mock_client
        result = await bridge.run("Hi")

        assert result.content == "Hello!"
        assert result.rounds == 1
        assert result.tool_calls_made == []
        assert result.error == ""

@pytest.mark.asyncio
async def test_bridge_run_with_tool_call():
    """Bridge executes tool calls and returns final response."""
    handler = AsyncMock(return_value="tool result data")
    tool = MCPToolDef(
        name="my_tool",
        description="Test",
        parameters={"type": "object", "properties": {}, "required": []},
        handler=handler,
    )

    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.ollama_num_ctx = 0
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(
            include_gitea=False,
            include_shell=False,
            extra_tools=[tool],
        )

        # Round 1: model requests tool call
        tool_call_resp = MagicMock()
        tool_call_resp.json.return_value = {
            "message": {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "function": {
                            "name": "my_tool",
                            "arguments": {},
                        }
                    }
                ],
            }
        }
        tool_call_resp.raise_for_status = MagicMock()

        # Round 2: model returns final text
        final_resp = MagicMock()
        final_resp.json.return_value = {
            "message": {"role": "assistant", "content": "Done with tools!"}
        }
        final_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(side_effect=[tool_call_resp, final_resp])
        mock_client.aclose = AsyncMock()

        bridge._client = mock_client
        result = await bridge.run("Do something")

        assert result.content == "Done with tools!"
        assert result.rounds == 2
        assert len(result.tool_calls_made) == 1
        assert result.tool_calls_made[0]["tool"] == "my_tool"
        handler.assert_awaited_once()

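The `run()` tests in this section all exercise the same loop shape: post messages to the model, execute any returned `tool_calls`, append the results, and repeat until the model answers with plain content or a round cap is hit. A simplified, synchronous sketch of that loop, with scripted assistant messages standing in for Ollama; the function name and dict shapes are illustrative, not the bridge's actual internals:

```python
def run_tool_loop(responses, handlers, max_rounds=5):
    """Drive scripted assistant messages through a bridge-style loop."""
    calls_made = []
    rounds = 0
    for message in responses:
        rounds += 1
        if rounds > max_rounds:
            # Cap reached before the model produced a final answer.
            return {
                "content": "Stopped: max tool-call rounds reached.",
                "rounds": max_rounds,
                "tool_calls_made": calls_made,
                "error": "Exceeded max_rounds",
            }
        tool_calls = message.get("tool_calls") or []
        if not tool_calls:
            # Plain content with no tool calls ends the loop.
            return {
                "content": message["content"],
                "rounds": rounds,
                "tool_calls_made": calls_made,
                "error": "",
            }
        for call in tool_calls:
            name = call["function"]["name"]
            handler = handlers.get(name)
            if handler is None:
                # Unknown tools are reported back rather than raising.
                result = f"unknown tool: {name}"
            else:
                result = handler(**call["function"]["arguments"])
            calls_made.append({"tool": name, "result": result})
    return {"content": "", "rounds": rounds,
            "tool_calls_made": calls_made, "error": "no final response"}


# Mirrors test_bridge_run_with_tool_call: one tool round, then final text.
responses = [
    {"content": "", "tool_calls": [
        {"function": {"name": "my_tool", "arguments": {}}}]},
    {"content": "Done with tools!"},
]
out = run_tool_loop(responses, {"my_tool": lambda: "tool result data"})
```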
@pytest.mark.asyncio
async def test_bridge_run_unknown_tool():
    """Bridge handles calls to unknown tools gracefully."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.ollama_num_ctx = 0
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        # Model calls a tool that doesn't exist
        tool_call_resp = MagicMock()
        tool_call_resp.json.return_value = {
            "message": {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {"function": {"name": "nonexistent", "arguments": {}}}
                ],
            }
        }
        tool_call_resp.raise_for_status = MagicMock()

        final_resp = MagicMock()
        final_resp.json.return_value = {
            "message": {"role": "assistant", "content": "OK"}
        }
        final_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(side_effect=[tool_call_resp, final_resp])
        mock_client.aclose = AsyncMock()

        bridge._client = mock_client
        result = await bridge.run("test")

        assert len(result.tool_calls_made) == 1
        assert "unknown tool" in result.tool_calls_made[0]["result"]

@pytest.mark.asyncio
async def test_bridge_run_max_rounds():
    """Bridge stops after max_rounds and returns error."""
    handler = AsyncMock(return_value="result")
    tool = MCPToolDef(
        name="loop_tool",
        description="Loops forever",
        parameters={"type": "object", "properties": {}, "required": []},
        handler=handler,
    )

    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.ollama_num_ctx = 0
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(
            include_gitea=False,
            include_shell=False,
            extra_tools=[tool],
            max_rounds=2,
        )

        # Always return tool calls (never a final response)
        tool_call_resp = MagicMock()
        tool_call_resp.json.return_value = {
            "message": {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {"function": {"name": "loop_tool", "arguments": {}}}
                ],
            }
        }
        tool_call_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(return_value=tool_call_resp)
        mock_client.aclose = AsyncMock()

        bridge._client = mock_client
        result = await bridge.run("loop")

        assert "max tool-call rounds" in result.content
        assert "Exceeded" in result.error
        assert result.rounds == 2

@pytest.mark.asyncio
async def test_bridge_run_connection_error():
    """Bridge handles Ollama connection errors gracefully."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.ollama_num_ctx = 0
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(
            side_effect=httpx.ConnectError("Connection refused")
        )
        mock_client.aclose = AsyncMock()

        bridge._client = mock_client
        result = await bridge.run("test")

        assert result.error
        assert "connection" in result.error.lower()
        assert result.content == ""

@pytest.mark.asyncio
async def test_bridge_run_http_error():
    """Bridge handles Ollama HTTP errors gracefully."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.ollama_num_ctx = 0
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        mock_response = MagicMock()
        mock_response.status_code = 500

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(
            side_effect=httpx.HTTPStatusError(
                "Server Error",
                request=MagicMock(),
                response=mock_response,
            )
        )
        mock_client.aclose = AsyncMock()

        bridge._client = mock_client
        result = await bridge.run("test")

        assert result.error
        assert "500" in result.error

@pytest.mark.asyncio
async def test_bridge_run_without_context_manager():
    """Bridge returns error when used without async context manager."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        result = await bridge.run("test")
        assert result.error
        assert "context manager" in result.error.lower()

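The error asserted above exists because the bridge evidently creates its HTTP client only inside `__aenter__`. A self-contained sketch of that lifecycle, with a stand-in client instead of `httpx.AsyncClient`; the `Bridge` and `FakeClient` names here are illustrative:

```python
import asyncio


class FakeClient:
    """Stand-in for httpx.AsyncClient in this sketch."""

    async def aclose(self) -> None:
        pass


class Bridge:
    def __init__(self) -> None:
        self._client = None

    async def __aenter__(self) -> "Bridge":
        # Client exists only between __aenter__ and __aexit__.
        self._client = FakeClient()
        return self

    async def __aexit__(self, *exc) -> None:
        await self._client.aclose()
        self._client = None

    async def run(self, prompt: str) -> str:
        if self._client is None:
            return "error: use the bridge as an async context manager"
        return "ok"


async def main() -> list:
    bridge = Bridge()
    bare = await bridge.run("hi")        # no context manager -> error
    async with bridge:
        managed = await bridge.run("hi")  # client available -> ok
    return [bare, managed]


results = asyncio.run(main())
```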
# ---------------------------------------------------------------------------
# MCPBridge.status
# ---------------------------------------------------------------------------


def test_bridge_status():
    """Bridge status returns model and tool info."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        status = bridge.status()
        assert status["model"] == "qwen3:14b"
        assert status["connected"] is False
        assert isinstance(status["tools"], list)


# ---------------------------------------------------------------------------
# MCPBridge context manager
# ---------------------------------------------------------------------------

@pytest.mark.asyncio
async def test_bridge_context_manager():
    """Bridge opens and closes httpx client via async context manager."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.ollama_model = "qwen3:14b"
        mock_settings.normalized_ollama_url = "http://localhost:11434"
        mock_settings.mcp_bridge_timeout = 60
        mock_settings.gitea_enabled = False
        mock_settings.gitea_token = ""

        bridge = MCPBridge(include_gitea=False, include_shell=False)

        assert bridge._client is None

        async with bridge:
            assert bridge._client is not None

        assert bridge._client is None


# ---------------------------------------------------------------------------
# Gitea tool handlers (integration-style, mocked HTTP)
# ---------------------------------------------------------------------------

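The handler tests below assert on compact issue summaries such as `#1: Bug one [bug]`. A sketch of that formatting step, assuming the Gitea issues JSON shape used in the mocks (`number`, `title`, and a `labels` list of `{"name": ...}` objects); the helper name is illustrative:

```python
def format_issue_line(issue: dict) -> str:
    """Render one Gitea issue as '#<number>: <title> [label, ...]'."""
    labels = ", ".join(label["name"] for label in issue.get("labels", []))
    line = f"#{issue['number']}: {issue['title']}"
    # Only append the bracketed label list when there are labels.
    return f"{line} [{labels}]" if labels else line


issues = [
    {"number": 1, "title": "Bug one", "labels": [{"name": "bug"}]},
    {"number": 2, "title": "Feature two", "labels": []},
]
lines = [format_issue_line(i) for i in issues]
```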
@pytest.mark.asyncio
async def test_gitea_list_issues_handler():
    """list_issues handler calls Gitea API and formats results."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.gitea_enabled = True
        mock_settings.gitea_token = "tok123"
        mock_settings.gitea_url = "http://localhost:3000"
        mock_settings.gitea_repo = "owner/repo"
        tools = _build_gitea_tools()

        list_tool = next(t for t in tools if t.name == "list_issues")

        mock_resp = MagicMock()
        mock_resp.json.return_value = [
            {"number": 1, "title": "Bug one", "labels": [{"name": "bug"}]},
            {"number": 2, "title": "Feature two", "labels": []},
        ]
        mock_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.get = AsyncMock(return_value=mock_resp)
        mock_client.__aenter__ = AsyncMock(return_value=mock_client)
        mock_client.__aexit__ = AsyncMock(return_value=False)

        with patch("timmy.mcp_bridge.httpx.AsyncClient", return_value=mock_client):
            result = await list_tool.handler(state="open", limit=10)

        assert "#1: Bug one [bug]" in result
        assert "#2: Feature two" in result

@pytest.mark.asyncio
async def test_gitea_create_issue_handler():
    """create_issue handler calls Gitea API and returns confirmation."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.gitea_enabled = True
        mock_settings.gitea_token = "tok123"
        mock_settings.gitea_url = "http://localhost:3000"
        mock_settings.gitea_repo = "owner/repo"
        tools = _build_gitea_tools()

        create_tool = next(t for t in tools if t.name == "create_issue")

        mock_resp = MagicMock()
        mock_resp.json.return_value = {"number": 42, "title": "New bug"}
        mock_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.post = AsyncMock(return_value=mock_resp)
        mock_client.__aenter__ = AsyncMock(return_value=mock_client)
        mock_client.__aexit__ = AsyncMock(return_value=False)

        with patch("timmy.mcp_bridge.httpx.AsyncClient", return_value=mock_client):
            result = await create_tool.handler(title="New bug", body="Description")

        assert "#42" in result
        assert "New bug" in result

@pytest.mark.asyncio
async def test_gitea_create_issue_requires_title():
    """create_issue handler returns error when title is missing."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.gitea_enabled = True
        mock_settings.gitea_token = "tok123"
        mock_settings.gitea_url = "http://localhost:3000"
        mock_settings.gitea_repo = "owner/repo"
        tools = _build_gitea_tools()

        create_tool = next(t for t in tools if t.name == "create_issue")
        result = await create_tool.handler()
        assert "required" in result.lower()

@pytest.mark.asyncio
async def test_gitea_read_issue_handler():
    """read_issue handler calls Gitea API and formats result."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.gitea_enabled = True
        mock_settings.gitea_token = "tok123"
        mock_settings.gitea_url = "http://localhost:3000"
        mock_settings.gitea_repo = "owner/repo"
        tools = _build_gitea_tools()

        read_tool = next(t for t in tools if t.name == "read_issue")

        mock_resp = MagicMock()
        mock_resp.json.return_value = {
            "number": 5,
            "title": "Test issue",
            "state": "open",
            "body": "Issue body text",
            "labels": [{"name": "enhancement"}],
        }
        mock_resp.raise_for_status = MagicMock()

        mock_client = AsyncMock()
        mock_client.get = AsyncMock(return_value=mock_resp)
        mock_client.__aenter__ = AsyncMock(return_value=mock_client)
        mock_client.__aexit__ = AsyncMock(return_value=False)

        with patch("timmy.mcp_bridge.httpx.AsyncClient", return_value=mock_client):
            result = await read_tool.handler(number=5)

        assert "#5" in result
        assert "Test issue" in result
        assert "open" in result
        assert "enhancement" in result

@pytest.mark.asyncio
async def test_gitea_read_issue_requires_number():
    """read_issue handler returns error when number is missing."""
    with patch("timmy.mcp_bridge.settings") as mock_settings:
        mock_settings.gitea_enabled = True
        mock_settings.gitea_token = "tok123"
        mock_settings.gitea_url = "http://localhost:3000"
        mock_settings.gitea_repo = "owner/repo"
        tools = _build_gitea_tools()

        read_tool = next(t for t in tools if t.name == "read_issue")
        result = await read_tool.handler()
        assert "required" in result.lower()


# ---------------------------------------------------------------------------
# BridgeResult dataclass
# ---------------------------------------------------------------------------

def test_bridge_result_defaults():
    """BridgeResult has sensible defaults."""
    r = BridgeResult(content="hello")
    assert r.content == "hello"
    assert r.tool_calls_made == []
    assert r.rounds == 0
    assert r.latency_ms == 0.0
    assert r.model == ""
    assert r.error == ""
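The defaults asserted above imply a dataclass whose mutable list field uses a `default_factory`, since a bare `= []` default would be shared across instances (and is rejected by `dataclasses` at class-definition time). A sketch of an equivalent result type; field order and the `Result` name are assumptions, not the bridge's actual definition:

```python
from dataclasses import dataclass, field


@dataclass
class Result:
    content: str = ""
    # default_factory gives each instance its own fresh list.
    tool_calls_made: list = field(default_factory=list)
    rounds: int = 0
    latency_ms: float = 0.0
    model: str = ""
    error: str = ""


r = Result(content="hello")
```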