feat: upgrade to qwen3.5, self-hosted Gitea CI, optimize Docker image

Model upgrade:
- qwen2.5:14b → qwen3.5:latest across config, tools, and docs
- Added qwen3.5 to multimodal model registry

Self-hosted Gitea CI:
- .gitea/workflows/tests.yml: lint + test jobs via act_runner
- Unified Dockerfile: pre-baked deps from poetry.lock for fast CI
- sitepackages=true in tox for ~2s dep resolution (was ~40s)
- OLLAMA_URL points at a dead port in CI so stray real-LLM calls fail fast (see the sketch below)
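
The point of a dead port (rather than unsetting the variable, which would fall back to the default host) is that any code path slipping past the mocks fails immediately with a connection error instead of hanging the runner. A minimal sketch of that behavior — the URL value and test below are illustrative assumptions, not the repo's actual code:

```python
# Illustrative sketch only — assumes httpx and pytest; the real CI wiring
# lives in the workflow file, not in a test like this.
import os

import httpx
import pytest


def test_no_real_llm_calls_in_ci():
    # In CI, OLLAMA_URL points at a port nothing listens on (e.g. port 1),
    # so anything that tries to reach Ollama errors out immediately.
    url = os.environ.get("OLLAMA_URL", "http://127.0.0.1:1")
    with pytest.raises(httpx.ConnectError):
        httpx.get(f"{url}/api/tags", timeout=1.0)
```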

Test isolation fixes:
- Smoke test fixture mocks create_timmy (was hitting real Ollama)
- WebSocket sends initial_state before joining broadcast pool (race fix)
- Tests use settings.ollama_model/url instead of hardcoded values
- skip_ci marker for Ollama-dependent tests, excluded in CI tox envs (fixture and marker sketched below)
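
A rough sketch of what the fixture mock and marker could look like — `create_timmy` and the `skip_ci` marker come from the commit message, but the module path, fixture body, and pytest options are assumptions:

```python
# conftest.py — illustrative sketch only; real module paths differ.
from unittest.mock import AsyncMock

import pytest


@pytest.fixture(autouse=True)
def mock_create_timmy(monkeypatch):
    """Stub out the agent factory so smoke tests never reach a live Ollama."""
    fake_timmy = AsyncMock()
    # "app.agents.create_timmy" is a hypothetical import path.
    monkeypatch.setattr("app.agents.create_timmy", lambda *a, **kw: fake_timmy)
    return fake_timmy


# skip_ci must be registered as a custom marker (e.g. in pyproject.toml);
# CI tox envs would then run pytest with: -m "not skip_ci"
@pytest.mark.skip_ci
def test_real_model_roundtrip():
    ...
```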

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Trip T
2026-03-11 18:36:42 -04:00
parent 36fc10097f
commit f6a6c0f62e
24 changed files with 236 additions and 292 deletions


@@ -1,6 +1,5 @@
 # Timmy Time — Mission Control
-[![Tests](https://github.com/AlexanderWhitestone/Timmy-time-dashboard/actions/workflows/tests.yml/badge.svg)](https://github.com/AlexanderWhitestone/Timmy-time-dashboard/actions/workflows/tests.yml)
 ![Python](https://img.shields.io/badge/python-3.11+-blue)
 ![Coverage](https://img.shields.io/badge/coverage-73%25-brightgreen)
 ![License](https://img.shields.io/badge/license-MIT-green)
@@ -8,28 +7,26 @@
 A local-first, sovereign AI agent system. Talk to Timmy, watch his swarm, gate
 API access with Bitcoin Lightning — all from a browser, no cloud AI required.
 **[Live Docs →](https://alexanderwhitestone.github.io/Timmy-time-dashboard/)**
 ---
 ## Quick Start
 ```bash
-git clone https://github.com/AlexanderWhitestone/Timmy-time-dashboard.git
+git clone http://localhost:3000/rockachopa/Timmy-time-dashboard.git
 cd Timmy-time-dashboard
 make install # create venv + install deps
 cp .env.example .env # configure environment
 ollama serve # separate terminal
-ollama pull qwen2.5:14b # Required for reliable tool calling
+ollama pull qwen3.5:latest # Required for reliable tool calling
 make dev # http://localhost:8000
 make test # no Ollama needed
 ```
-**Note:** qwen2.5:14b is the primary model — better reasoning and tool calling
+**Note:** qwen3.5:latest is the primary model — better reasoning and tool calling
 than llama3.1:8b-instruct while still running locally on modest hardware.
-Fallback: llama3.1:8b-instruct if qwen2.5:14b is not available.
+Fallback: llama3.1:8b-instruct if qwen3.5:latest is not available.
 llama3.2 (3B) was found to hallucinate tool output consistently in testing.
 ---
@@ -82,7 +79,7 @@ cp .env.example .env
 | Variable | Default | Purpose |
 |----------|---------|---------|
 | `OLLAMA_URL` | `http://localhost:11434` | Ollama host |
-| `OLLAMA_MODEL` | `qwen2.5:14b` | Primary model for reasoning and tool calling. Fallback: `llama3.1:8b-instruct` |
+| `OLLAMA_MODEL` | `qwen3.5:latest` | Primary model for reasoning and tool calling. Fallback: `llama3.1:8b-instruct` |
 | `DEBUG` | `false` | Enable `/docs` and `/redoc` |
 | `TIMMY_MODEL_BACKEND` | `ollama` | `ollama` \| `airllm` \| `auto` |
 | `AIRLLM_MODEL_SIZE` | `70b` | `8b` \| `70b` \| `405b` |
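
For reference, the `settings.ollama_model`/`settings.ollama_url` access mentioned in the test-isolation notes could map onto these env vars via something like the following — a minimal sketch assuming pydantic-settings; the project's actual config module may differ:

```python
# Hypothetical sketch only: field names mirror the env vars in the table
# above; the real project may structure its config differently.
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # pydantic-settings matches env vars to fields case-insensitively,
    # so OLLAMA_URL populates ollama_url, and so on.
    ollama_url: str = "http://localhost:11434"
    ollama_model: str = "qwen3.5:latest"
    debug: bool = False


settings = Settings()
```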