# Archon Kion

Local AI assistant daemon with Hermes integration. Processes Telegram and Gitea webhooks and routes queries to a local Ollama instance.
## Architecture

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Telegram   │────▶│ Archon Kion │────▶│   Ollama    │
│  Webhooks   │     │   Daemon    │     │  localhost  │
└─────────────┘     └──────┬──────┘     └─────────────┘
                           │
                    ┌──────┴──────┐
                    │   Hermes    │
                    │   Profile   │
                    └─────────────┘
```
## Components

- `src/main.py`: Daemon entry point, FastAPI web server
- `src/ollama_client.py`: Ollama API client
- `src/telegram_bot.py`: Telegram webhook handler
- `src/hermes_bridge.py`: Hermes profile integration
- `config/archon-kion.yaml`: Configuration file
- `hermes-profile/profile.yaml`: Thin Hermes profile
- `systemd/archon-kion.service`: systemd service definition
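As a rough sketch of what `src/ollama_client.py` might look like: the `/api/generate` endpoint and its `model`/`prompt`/`stream`/`response` JSON fields are Ollama's documented REST API, while the function names here are illustrative, not the repository's actual code:

```python
# Minimal Ollama client sketch using only the standard library.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(prompt: str, model: str = "gemma3:4b") -> dict:
    """Build the request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "gemma3:4b") -> str:
    """Send a prompt to the local Ollama instance and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With `stream: False`, Ollama returns a single JSON object whose `response` field holds the full completion, which keeps the client trivial at the cost of no incremental output.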
## Installation

```shell
# Clone the repository
git clone http://143.198.27.163:3000/ezra/archon-kion.git
cd archon-kion

# Install dependencies
pip install -r requirements.txt

# Configure (see Configuration below)
$EDITOR config/archon-kion.yaml

# Run the daemon
cd src && python main.py
```
## Configuration

Edit `config/archon-kion.yaml`:

```yaml
ollama:
  host: localhost
  port: 11434
  model: gemma3:4b

telegram:
  webhook_url: https://your-domain.com/webhook
  token: ${TELEGRAM_BOT_TOKEN}

hermes:
  profile_path: ./hermes-profile/profile.yaml
```
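The `${TELEGRAM_BOT_TOKEN}` placeholder suggests the token is injected from the environment. One common way to do that (a sketch assuming PyYAML; whether the daemon actually expands variables this way is a guess) is to expand environment references before parsing:

```python
# Hypothetical config loader with ${VAR}-style environment expansion.
import os
import yaml  # PyYAML

def load_config(path: str) -> dict:
    """Read a YAML config file, expanding ${ENV_VAR} references first."""
    with open(path) as f:
        raw = f.read()
    # os.path.expandvars replaces ${TELEGRAM_BOT_TOKEN} with its env value.
    return yaml.safe_load(os.path.expandvars(raw))
```

This keeps the bot token out of the repository: only the placeholder is committed, and the real value lives in the service environment (e.g. the systemd unit's `Environment=` setting).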
## Commands

- `/status` - Check daemon and Ollama status
- `/memory` - Show conversation memory
- `/query <text>` - Send query to Ollama
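The commands above could be routed with a simple dispatcher. This is a hypothetical sketch, not the actual `src/telegram_bot.py` logic, and the reply strings are placeholders:

```python
# Illustrative command dispatcher for the bot commands listed above.
def handle_command(text: str) -> str:
    """Map an incoming Telegram message to a bot command."""
    cmd, _, args = text.partition(" ")
    if cmd == "/status":
        return "daemon: running, ollama: reachable"  # placeholder status
    if cmd == "/memory":
        return "conversation memory: (omitted in this sketch)"
    if cmd == "/query":
        # The real handler would forward args to the Ollama client.
        return f"querying gemma3:4b with: {args}"
    return "unknown command"
```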
## Testing

```shell
cd tests
python -m pytest test_archon.py -v
```
## License

MIT - Hermes Project