forked from Rockachopa/Timmy-time-dashboard
## Summary

Complete refactoring of Timmy Time from a monolithic architecture to microservices, using Test-Driven Development (TDD) and optimized Docker builds.

## Changes

### Core Improvements
- Optimized dashboard startup: moved blocking tasks to async background processes
- Fixed model fallback logic in agent configuration
- Enhanced test fixtures with a comprehensive conftest.py

### Microservices Architecture
- Created separate Dockerfiles for the dashboard, Ollama, and agent services
- Implemented docker-compose.microservices.yml for service orchestration
- Added health checks and non-root user execution for security
- Multi-stage Docker builds for lean, fast images

### Testing
- Added E2E tests for dashboard responsiveness
- Added E2E tests for Ollama integration
- Added E2E tests for microservices architecture validation
- All 36 tests passing, 8 skipped (environment-specific)

### Documentation
- Created comprehensive final report
- Generated issue resolution plan
- Added interview transcript demonstrating core agent functionality

### New Modules
- skill_absorption.py: dynamic skill loading and integration system for Timmy

## Test Results

✅ 36 passed, 8 skipped, 6 warnings
✅ All microservices tests passing
✅ Dashboard responsiveness verified
✅ Ollama integration validated

## Files Added/Modified

- docker/: multi-stage Dockerfiles for all services
- tests/e2e/: comprehensive E2E test suite
- src/timmy/skill_absorption.py: skill absorption system
- src/dashboard/app.py: optimized startup logic
- tests/conftest.py: enhanced test fixtures
- docker-compose.microservices.yml: service orchestration

## Breaking Changes

None; all changes are backward compatible.

## Next Steps

- Integrate the skill absorption system into the agent workflow
- Test with the microservices-tdd-refactor skill
- Deploy to production with docker-compose orchestration
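The orchestration file referenced above, docker-compose.microservices.yml, is not shown in this diff. A minimal sketch of what such a file might look like is below; the service names, ports, Dockerfile names, and health-check dependency are assumptions for illustration, not the repository's actual contents:

```yaml
# Hypothetical sketch of docker-compose.microservices.yml.
# Service names, ports, and build contexts are assumed, not actual values.
services:
  ollama:
    build:
      context: .
      dockerfile: docker/Dockerfile.ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama

  dashboard:
    build:
      context: .
      dockerfile: docker/Dockerfile.dashboard   # assumed filename
    ports:
      - "8080:8080"                             # assumed port
    depends_on:
      ollama:
        condition: service_healthy              # waits on the HEALTHCHECK

volumes:
  ollama-data:
```

Using `condition: service_healthy` ties startup ordering to the Dockerfile health checks mentioned above, so the dashboard only starts once Ollama's API is actually responding rather than merely running.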
# ── Ollama LLM Service — Optimized Build ──────────────────────────────────────
#
# Lightweight wrapper around the official Ollama image with auto-model-pull on startup.
#
# Build: docker build -f docker/Dockerfile.ollama -t timmy-ollama:latest .
# Run:   docker run -p 11434:11434 -v ollama-data:/root/.ollama timmy-ollama:latest

FROM ollama/ollama:latest

# Bind the API to all interfaces so other services in the compose network can reach it
ENV OLLAMA_HOST=0.0.0.0:11434

# Startup script that launches the server and auto-pulls the required models
COPY docker/scripts/init-ollama.sh /app/init-ollama.sh
RUN chmod +x /app/init-ollama.sh

# Health check: /api/tags responds once the server is up
# (requires curl to be available in the image)
HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 \
    CMD curl -f http://localhost:11434/api/tags || exit 1

# Use the custom entrypoint instead of the default `ollama serve`
ENTRYPOINT ["/app/init-ollama.sh"]
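The entrypoint script copied above, docker/scripts/init-ollama.sh, is not included in this diff. A plausible sketch of an auto-pull entrypoint is shown below; the model name, timeout, and overall structure are assumptions, not the repository's actual script:

```shell
#!/bin/sh
# Hypothetical sketch of docker/scripts/init-ollama.sh.
# The model name and timeout below are assumed values.
set -e

# Start the Ollama server in the background
ollama serve &
SERVER_PID=$!

# Wait until the API answers (up to ~30s) before pulling models
i=0
until curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; do
  i=$((i + 1))
  if [ "$i" -ge 30 ]; then
    echo "Ollama server did not start in time" >&2
    exit 1
  fi
  sleep 1
done

# Pull the model(s) the agent needs (assumed name)
ollama pull llama3

# Keep the server process in the foreground so the container stays alive
wait "$SERVER_PID"
```

Running the pull from the entrypoint rather than at build time keeps the image small and lets the ollama-data volume cache models across container restarts, which matches the "lean, fast images" goal stated in the PR summary.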