feat: upgrade to qwen3.5, self-hosted Gitea CI, optimize Docker image

Model upgrade:
- qwen2.5:14b → qwen3.5:latest across config, tools, and docs
- Added qwen3.5 to multimodal model registry

Self-hosted Gitea CI:
- .gitea/workflows/tests.yml: lint + test jobs via act_runner
- Unified Dockerfile: pre-baked deps from poetry.lock for fast CI
- sitepackages=true in tox for ~2s dep resolution (was ~40s)
- OLLAMA_URL set to dead port in CI to prevent real LLM calls
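The dead-port guard in the last bullet can be sketched as follows (the helper name and default URL here are illustrative, not from the repo): pointing OLLAMA_URL at a port nothing listens on means any test that accidentally reaches for a real model fails fast with a connection error instead of silently hitting a live server.

```python
import os
import socket
from urllib.parse import urlparse

def ollama_is_unreachable(timeout: float = 0.2) -> bool:
    """Return True if nothing is listening at OLLAMA_URL, i.e. the CI guard holds."""
    url = urlparse(os.environ.get("OLLAMA_URL", "http://127.0.0.1:1"))
    try:
        # If this connects, a real server is reachable and the guard is broken.
        with socket.create_connection((url.hostname, url.port or 80), timeout=timeout):
            return False
    except OSError:
        # Connection refused or timed out: no accidental LLM call can land.
        return True
```

A sanity-check like this can run at the top of a CI-only conftest to confirm the environment is configured as intended.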

Test isolation fixes:
- Smoke test fixture mocks create_timmy (was hitting real Ollama)
- WebSocket sends initial_state before joining broadcast pool (race fix)
- Tests use settings.ollama_model/url instead of hardcoded values
- skip_ci marker for Ollama-dependent tests, excluded in CI tox envs
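The skip_ci convention from the last bullet can be sketched like this (the marker description text and test name are assumptions, not taken from the diff). CI tox environments then invoke pytest with `-m "not skip_ci"` so these tests are deselected there but still run locally.

```python
import pytest

# conftest.py (sketch): register the marker so `pytest --strict-markers` accepts it
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "skip_ci: test needs a live Ollama server; excluded in CI"
    )

# In a test module: tag anything that talks to a real model
@pytest.mark.skip_ci
def test_live_ollama_roundtrip():
    ...  # would hit the real OLLAMA_URL; CI runs `pytest -m "not skip_ci"`
```

Registering the marker keeps pytest from warning about an unknown mark, and keeps the CI/local split in one place instead of scattering skip conditions across test files.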

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Trip T
Date: 2026-03-11 18:36:42 -04:00
Parent: 36fc10097f
Commit: f6a6c0f62e
24 changed files with 236 additions and 292 deletions


@@ -240,11 +240,15 @@ class TestInviteParser:
     @pytest.mark.asyncio
     async def test_parse_image_no_deps(self):
         """parse_image returns None when pyzbar/Pillow are not installed."""
+        from unittest.mock import AsyncMock, patch
         from integrations.chat_bridge.invite_parser import InviteParser
         parser = InviteParser()
-        # With mocked pyzbar, this should gracefully return None
-        result = await parser.parse_image(b"fake-image-bytes")
+        # Mock out the Ollama vision call so we don't make a real HTTP request
+        with patch.object(parser, "_try_ollama_vision", new_callable=AsyncMock, return_value=None):
+            # With mocked pyzbar + mocked vision, this should gracefully return None
+            result = await parser.parse_image(b"fake-image-bytes")
         assert result is None