## Summary

Complete refactoring of Timmy Time from a monolithic architecture to microservices, using Test-Driven Development (TDD) and optimized Docker builds.

## Changes

### Core Improvements
- Optimized dashboard startup: moved blocking tasks to async background processes
- Fixed model fallback logic in agent configuration
- Enhanced test fixtures with a comprehensive conftest.py

### Microservices Architecture
- Created separate Dockerfiles for the dashboard, Ollama, and agent services
- Implemented docker-compose.microservices.yml for service orchestration (a bring-up sketch follows below)
- Added health checks and non-root user execution for security
- Multi-stage Docker builds for lean, fast images

### Testing
- Added E2E tests for dashboard responsiveness
- Added E2E tests for Ollama integration
- Added E2E tests for microservices architecture validation
- All 36 tests passing, 8 skipped (environment-specific)

### Documentation
- Created comprehensive final report
- Generated issue resolution plan
- Added interview transcript demonstrating core agent functionality

### New Modules
- skill_absorption.py: Dynamic skill loading and integration system for Timmy

## Test Results
✅ 36 passed, 8 skipped, 6 warnings
✅ All microservices tests passing
✅ Dashboard responsiveness verified
✅ Ollama integration validated

## Files Added/Modified
- docker/: Multi-stage Dockerfiles for all services
- tests/e2e/: Comprehensive E2E test suite
- src/timmy/skill_absorption.py: Skill absorption system
- src/dashboard/app.py: Optimized startup logic
- tests/conftest.py: Enhanced test fixtures
- docker-compose.microservices.yml: Service orchestration

## Breaking Changes
None - all changes are backward compatible

## Next Steps
- Integrate skill absorption system into agent workflow
- Test with microservices-tdd-refactor skill
- Deploy to production with docker-compose orchestration
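To make the orchestration concrete, here is a minimal sketch of bringing the stack up and verifying it locally. The compose file name and the tests/e2e/ path come from this PR; the dashboard port (8080) is an assumption for illustration, and the Ollama port matches the init script below.

```bash
#!/bin/bash
# Sketch: bring up the microservices stack and run the E2E suite against it.
# Assumptions: dashboard exposed on port 8080; Ollama on the default 11434.
set -e

# Build and start all services defined in the compose file from this PR.
docker compose -f docker-compose.microservices.yml up -d --build

# Wait for the Ollama API to answer (same endpoint the init script polls).
until curl -sf http://localhost:11434/api/tags > /dev/null; do
  echo "waiting for ollama..."
  sleep 2
done

# Probe the dashboard; adjust the port to whatever the compose file maps.
curl -sf http://localhost:8080/ > /dev/null && echo "dashboard is up"

# Run the E2E tests against the running stack.
pytest tests/e2e/ -v
```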
#!/bin/bash
# ── Ollama Initialization Script ──────────────────────────────────────────────
#
# Starts Ollama and pulls models on first run.

set -e

echo "🚀 Ollama startup — checking for models..."

# Start Ollama in background
ollama serve &
OLLAMA_PID=$!

# Wait for Ollama to be ready
echo "⏳ Waiting for Ollama to be ready..."
for i in {1..60}; do
  if curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
    echo "✓ Ollama is ready"
    break
  fi
  echo "  Attempt $i/60..."
  sleep 1
done

# Check if models are already present
echo "📋 Checking available models..."
MODELS=$(curl -s http://localhost:11434/api/tags | grep -o '"name":"[^"]*"' | wc -l)

if [ "$MODELS" -eq 0 ]; then
  echo "📥 No models found. Pulling llama3.2..."
  ollama pull llama3.2 || echo "⚠️ Failed to pull llama3.2 (may already be pulling)"
else
  echo "✓ Models available: $MODELS"
fi

echo "✓ Ollama initialization complete"

# Keep process running
wait $OLLAMA_PID
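One way to wire this in, matching the health checks mentioned in the summary, is to run the script as the Ollama container's entrypoint and reuse the same /api/tags endpoint as the Docker health check. This is a sketch only: the script filename, mount path, health-check timings, and the ollama/ollama base image are assumptions rather than values taken from the compose file in this PR.

```bash
#!/bin/bash
# Sketch: run the init script as the container entrypoint with a health check.
# "ollama-init.sh", the mount path, and the timings are illustrative assumptions.
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v "$(pwd)/docker/ollama-init.sh:/ollama-init.sh:ro" \
  --entrypoint /ollama-init.sh \
  --health-cmd "curl -sf http://localhost:11434/api/tags || exit 1" \
  --health-interval 10s \
  --health-retries 6 \
  ollama/ollama
```

In the compose-based setup described above, the equivalent entrypoint and healthcheck settings would live in docker-compose.microservices.yml instead of being passed on the command line.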