feat: add functional Ollama chat tests with containerised LLM
Add an ollama service (behind --profile ollama) to the test compose stack and a
new test suite that verifies real LLM inference end-to-end:

- docker-compose.test.yml: add ollama/ollama service with health check; make
  OLLAMA_URL and OLLAMA_MODEL configurable via env vars
- tests/functional/test_ollama_chat.py: session-scoped fixture that brings up
  Ollama + dashboard, pulls qwen2.5:0.5b (~400MB, CPU-only), and runs
  chat/history/multi-turn tests against the live stack
- Makefile: add `make test-ollama` target

Run with: make test-ollama
(or FUNCTIONAL_DOCKER=1 pytest tests/functional/test_ollama_chat.py -v)

https://claude.ai/code/session_01NTEzfRHSZQCfkfypxgyHKk
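The session-scoped fixture described above can be sketched roughly as follows. This is a hedged illustration, not the actual test code: the helper name wait_for_ollama, the compose invocation details, and the use of /api/tags as a readiness probe are assumptions; OLLAMA_URL and OLLAMA_TEST_MODEL mirror the env vars named in this commit.

```python
# Sketch of a session-scoped pytest fixture that brings up the Ollama
# profile, waits for the server, and pulls the small test model.
# Paths, service names, and the readiness endpoint are assumptions.
import os
import subprocess
import time
import urllib.request

import pytest

OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_TEST_MODEL", "qwen2.5:0.5b")


def wait_for_ollama(url: str, timeout: float = 120.0) -> None:
    """Poll the server until it answers or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # /api/tags responds once Ollama is up; used here as a health probe.
            with urllib.request.urlopen(f"{url}/api/tags", timeout=5):
                return
        except OSError:
            time.sleep(2)
    raise TimeoutError(f"Ollama at {url} did not become healthy")


@pytest.fixture(scope="session")
def ollama_stack():
    # Bring up the ollama profile + dashboard, pull the model once per
    # session, and tear the stack down afterwards.
    compose = ["docker", "compose", "-f", "docker-compose.test.yml"]
    subprocess.run(compose + ["--profile", "ollama", "up", "-d"], check=True)
    wait_for_ollama(OLLAMA_URL)
    subprocess.run(
        compose + ["exec", "-T", "ollama", "ollama", "pull", OLLAMA_MODEL],
        check=True,
    )
    yield OLLAMA_URL
    subprocess.run(compose + ["--profile", "ollama", "down", "-v"], check=False)
```

Scoping the fixture to the session means the ~400MB model pull happens at most once per test run, which keeps the chat/history/multi-turn tests fast after the first one.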
Makefile | 6 ++++++
1 file changed, 6 insertions(+)
@@ -62,6 +62,12 @@ test-cov-html:
 	$(PYTEST) tests/ --cov=src --cov-report=term-missing --cov-report=html -q
 	@echo "✓ HTML coverage report: open htmlcov/index.html"
 
+# Full-stack functional test: spins up Ollama (CPU, qwen2.5:0.5b) + dashboard
+# in Docker and verifies real LLM chat end-to-end.
+# Override model: make test-ollama OLLAMA_TEST_MODEL=tinyllama
+test-ollama:
+	FUNCTIONAL_DOCKER=1 $(PYTEST) tests/functional/test_ollama_chat.py -v --tb=long -x
+
 # ── Code quality ──────────────────────────────────────────────────────────────
 
 lint: