* test: remove hardcoded sleeps, add pytest-timeout

  - Replace fixed time.sleep() calls with intelligent polling or WebDriverWait
  - Add the pytest-timeout dependency and --timeout=30 to prevent hangs
  - Fixes test flakiness and improves test suite speed

* feat: add Aider AI tool to Forge's toolkit

  - Add an Aider tool that calls local Ollama (qwen2.5:14b) for AI coding assistance
  - Register the tool in Forge's code toolkit
  - Add functional tests for the Aider tool

* config: add opencode.json with local Ollama provider for sovereign AI

* feat: Timmy fixes and improvements

  ## Bug Fixes

  - Fix read_file path resolution: add ~ expansion and proper relative path handling
  - Add repo_root to config.py with auto-detection from the .git location
  - Fix hardcoded llama3.2: the model name is now read dynamically from settings.ollama_model

  ## Timmy's Requests

  - Add a communication protocol to AGENTS.md (read context first, explain changes)
  - Create DECISIONS.md for architectural decision documentation
  - Add reasoning guidance to system prompts (step-by-step, state uncertainty)
  - Update tests to reflect the correct model name (llama3.1:8b-instruct)

  ## Testing

  - All 177 dashboard tests pass
  - All 32 prompt/tool tests pass

---------

Co-authored-by: Alexander Payne <apayne@MM.local>
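The sleep-to-polling change described above can be sketched as a small helper that polls a condition instead of pausing for a fixed duration. This is an illustrative sketch, not the PR's actual code; `wait_until` and its parameters are hypothetical names:

```python
import time


def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns truthy or `timeout` elapses.

    Unlike a fixed time.sleep(), this returns as soon as the condition
    holds, so the common case is fast and only the failure case waits
    the full timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

For browser-driven tests the same pattern is what Selenium's `WebDriverWait(driver, timeout).until(...)` provides; pytest-timeout's `--timeout=30` flag (or its `timeout` ini option) then acts as a backstop so a stuck poll cannot hang the whole suite.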
41 lines
1.1 KiB
Python
from timmy.prompts import TIMMY_SYSTEM_PROMPT, TIMMY_STATUS_PROMPT, get_system_prompt


def test_system_prompt_not_empty():
    assert TIMMY_SYSTEM_PROMPT.strip()


def test_system_prompt_has_timmy_identity():
    assert "Timmy" in TIMMY_SYSTEM_PROMPT


def test_system_prompt_mentions_sovereignty():
    assert "sovereignty" in TIMMY_SYSTEM_PROMPT.lower()


def test_system_prompt_references_local():
    assert "local" in TIMMY_SYSTEM_PROMPT.lower()


def test_system_prompt_is_multiline():
    assert "\n" in TIMMY_SYSTEM_PROMPT


def test_status_prompt_not_empty():
    assert TIMMY_STATUS_PROMPT.strip()


def test_status_prompt_has_timmy():
    assert "Timmy" in TIMMY_STATUS_PROMPT


def test_prompts_are_distinct():
    assert TIMMY_SYSTEM_PROMPT != TIMMY_STATUS_PROMPT


def test_get_system_prompt_injects_model_name():
    """System prompt should inject the actual model name from config."""
    prompt = get_system_prompt(tools_enabled=False)
    # Should contain the model name from settings, not a hardcoded one
    assert "llama3.1" in prompt or "qwen" in prompt or "{model_name}" in prompt
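For context on what the last test exercises, here is a minimal sketch of a `get_system_prompt` that injects the configured model name. The settings class, field name, and prompt text are all hypothetical stand-ins; the real timmy.prompts and config.py may differ:

```python
class _Settings:
    # Stand-in for the real config module; the field name
    # ollama_model is taken from the commit description above.
    ollama_model = "llama3.1:8b-instruct"


settings = _Settings()

# Hypothetical prompt template with a {model_name} placeholder.
TIMMY_SYSTEM_PROMPT = (
    "You are Timmy, a local-first assistant that values sovereignty.\n"
    "You are running on the {model_name} model.\n"
    "Reason step by step and state uncertainty explicitly."
)


def get_system_prompt(tools_enabled: bool = True) -> str:
    """Fill in the configured model name rather than hardcoding one."""
    prompt = TIMMY_SYSTEM_PROMPT.format(model_name=settings.ollama_model)
    if tools_enabled:
        prompt += "\nYou may call the registered tools."
    return prompt
```

Under this sketch, `get_system_prompt(tools_enabled=False)` contains `llama3.1:8b-instruct` and no unexpanded `{model_name}` placeholder, which is exactly what `test_get_system_prompt_injects_model_name` checks.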