Add an ollama service (behind --profile ollama) to the test compose stack

Adds a new test suite that verifies real LLM inference end-to-end:
- docker-compose.test.yml: add ollama/ollama service with health check,
make OLLAMA_URL and OLLAMA_MODEL configurable via env vars
- tests/functional/test_ollama_chat.py: session-scoped fixture that
brings up Ollama + dashboard, pulls qwen2.5:0.5b (~400MB, CPU-only),
and runs chat/history/multi-turn tests against the live stack
- Makefile: add `make test-ollama` target
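The compose service could look roughly like the sketch below. This is not the actual diff: the image tag, port mapping, and healthcheck command are assumptions, shown only to illustrate the profile + healthcheck shape described above.

```yaml
# Hypothetical sketch of the ollama service in docker-compose.test.yml.
# Real values (image tag, ports, healthcheck) may differ from this commit's diff.
services:
  ollama:
    image: ollama/ollama
    profiles: ["ollama"]          # only started with --profile ollama
    ports:
      - "11434:11434"             # Ollama's default API port
    healthcheck:
      test: ["CMD", "ollama", "list"]   # succeeds once the server responds
      interval: 5s
      timeout: 5s
      retries: 24
```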
Run with: `make test-ollama` (or `FUNCTIONAL_DOCKER=1 pytest tests/functional/test_ollama_chat.py -v`)
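The session-scoped fixture's readiness check and request shape can be sketched as below. The helper names (`wait_for_ollama`, `build_chat_payload`) are illustrative, not the actual code in tests/functional/test_ollama_chat.py; only the OLLAMA_URL/OLLAMA_MODEL env vars and the qwen2.5:0.5b default come from this commit.

```python
# Illustrative sketch of the test suite's Ollama plumbing (helper names are
# hypothetical; env var names and the default model come from the commit).
import os
import time
import urllib.request

OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "qwen2.5:0.5b")


def build_chat_payload(model: str, messages: list, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {"model": model, "messages": messages, "stream": stream}


def wait_for_ollama(base_url: str, timeout: float = 120.0) -> bool:
    """Poll the Ollama base URL until it answers HTTP 200 or the timeout expires.

    A session-scoped pytest fixture would call this once before the chat,
    history, and multi-turn tests run, skipping the suite if it returns False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(base_url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            time.sleep(2)  # server not up yet; retry until the deadline
    return False
```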
https://claude.ai/code/session_01NTEzfRHSZQCfkfypxgyHKk