cleanup: delete dead modules — ~7,900 lines removed
Closes #22, Closes #23

Deleted: brain/, swarm/, openfang/, paperclip/, cascade_adapter, memory_migrate,
agents/timmy.py, dead routes + all corresponding tests.
Updated pyproject.toml, app.py, loop_qa.py for removed imports.
config/agents.yaml  (new file, +190 lines)
@@ -0,0 +1,190 @@
# ── Agent Definitions ───────────────────────────────────────────────────────
#
# All agent differentiation lives here. The Python runtime reads this file
# and builds identical agent instances from a single seed class (SubAgent).
#
# To add a new agent: copy any block, change the values, restart.
# To remove an agent: delete or comment out its block.
# To change a model: update the model field. No code changes needed.
#
# Fields:
#   name            Display name
#   role            Functional role (used for routing and tool delegation)
#   model           Ollama model ID (null = use defaults.model)
#   tools           List of tool names this agent can access
#   prompt          System prompt — what makes this agent unique
#   prompt_tier     "full" (tool-capable models) or "lite" (small models)
#   max_history     Number of conversation turns to keep in context
#   context_window  Max context length (null = model default)
#
# ── Defaults ────────────────────────────────────────────────────────────────

defaults:
  model: qwen3.5:latest
  prompt_tier: lite
  max_history: 10
  tools: []
  context_window: null

# ── Routing ─────────────────────────────────────────────────────────────────
#
# Pattern-based routing replaces the old Helm LLM routing.
# Each agent lists keyword patterns that trigger delegation to it.
# First match wins. If nothing matches, the orchestrator handles it.

routing:
  method: pattern  # "pattern" (keyword matching) or "llm" (model-based)
  patterns:
    researcher:
      - search
      - research
      - find out
      - look up
      - what is
      - who is
      - news about
      - latest on
    coder:
      - code
      - implement
      - debug
      - fix bug
      - write function
      - refactor
      - test
      - programming
      - python
      - javascript
    writer:
      - write
      - draft
      - document
      - summarize
      - blog post
      - readme
      - changelog
    memory:
      - remember
      - recall
      - we discussed
      - we talked about
      - what did i say
      - remind me
      - have we
    experimenter:
      - experiment
      - train
      - fine-tune
      - benchmark
      - evaluate model
      - run trial

# ── Agents ──────────────────────────────────────────────────────────────────

agents:
  orchestrator:
    name: Timmy
    role: orchestrator
    model: qwen3:30b
    prompt_tier: full
    max_history: 20
    tools:
      - web_search
      - read_file
      - write_file
      - python
      - memory_search
      - memory_write
      - system_status
      - shell
    prompt: |
      You are Timmy, a sovereign local AI orchestrator.

      You are the primary interface between the user and the agent swarm.
      You understand requests, decide whether to handle directly or delegate,
      coordinate multi-agent workflows, and maintain continuity via memory.

      Hard Rules:
      1. NEVER fabricate tool output. Call the tool and wait for real results.
      2. If a tool returns an error, report the exact error.
      3. If you don't know something, say so. Then use a tool. Don't guess.
      4. When corrected, use memory_write to save the correction immediately.

  researcher:
    name: Seer
    role: research
    model: qwen3:30b
    prompt_tier: full
    max_history: 10
    tools:
      - web_search
      - read_file
      - memory_search
    prompt: |
      You are Seer, a research and information gathering specialist.
      Find, evaluate, and synthesize information from external sources.
      Be thorough, skeptical, concise, and cite sources.

  coder:
    name: Forge
    role: code
    model: qwen3:30b
    prompt_tier: full
    max_history: 15
    tools:
      - python
      - write_file
      - read_file
      - shell
    prompt: |
      You are Forge, a code generation and tool building specialist.
      Write clean code, be safe, explain your work, and test mentally.
      Follow existing patterns in the codebase. Never break tests.

  writer:
    name: Quill
    role: writing
    model: null  # uses defaults.model
    prompt_tier: lite
    max_history: 10
    tools:
      - write_file
      - read_file
      - memory_search
    prompt: |
      You are Quill, a writing and content generation specialist.
      Write clearly, know your audience, be concise, use formatting.

  memory:
    name: Echo
    role: memory
    model: null  # uses defaults.model
    prompt_tier: lite
    max_history: 10
    tools:
      - memory_search
      - read_file
      - write_file
    prompt: |
      You are Echo, a memory and context management specialist.
      Remember, retrieve, and synthesize information from the past.
      Be accurate, relevant, concise, and acknowledge uncertainty.

  experimenter:
    name: Lab
    role: experiment
    model: qwen3:30b
    prompt_tier: full
    max_history: 10
    tools:
      - run_experiment
      - prepare_experiment
      - shell
      - python
      - read_file
      - write_file
    prompt: |
      You are Lab, an autonomous ML experimentation specialist.
      You run time-boxed training experiments, evaluate metrics,
      modify training code to improve results, and iterate.
      Always report the metric delta. Never exceed the time budget.
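The routing comments in the config above describe first-match keyword delegation. A minimal sketch of that behavior, assuming an illustrative `route` function and an abridged in-code pattern table (the real runtime reads the yaml file and its full pattern lists):

```python
# Sketch of first-match keyword routing as described in config/agents.yaml.
# The pattern table is abridged; dict insertion order stands in for the
# "first match wins" priority of the yaml.
PATTERNS = {
    "researcher": ["search", "research", "look up", "what is"],
    "coder": ["code", "implement", "debug", "fix bug"],
    "writer": ["write", "draft", "summarize"],
}

def route(message: str, patterns: dict = PATTERNS) -> str:
    """Return the first agent whose keyword appears in the message;
    unmatched messages fall through to the orchestrator."""
    text = message.lower()
    for agent, keywords in patterns.items():  # dict order = priority
        if any(k in text for k in keywords):
            return agent
    return "orchestrator"

print(route("Please debug this traceback"))  # coder
print(route("What's for dinner?"))           # orchestrator
```

Because matching is plain substring containment, broad keywords like "test" or "write" will shadow later agents; the yaml's ordering of pattern blocks is therefore load-bearing.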
poetry.lock  (generated, 542 lines changed)
@@ -462,7 +462,6 @@ files = [
    {file = "attrs-25.4.0-py3-none-any.whl", hash = "sha256:adcf7e2a1fb3b36ac48d97835bb6d8ade15b8dcce26aba8bf1d14847b57a3373"},
    {file = "attrs-25.4.0.tar.gz", hash = "sha256:16d5969b87f0859ef33a48b35d55ac1be6e42ae49d5e853b597db70c35c57e11"},
]
markers = {main = "extra == \"discord\" or extra == \"dev\""}

[[package]]
name = "audioop-lts"
@@ -700,7 +699,7 @@ files = [
    {file = "cffi-2.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9"},
    {file = "cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529"},
]
markers = {main = "os_name == \"nt\" and implementation_name != \"pypy\" and extra == \"dev\"", dev = "os_name == \"nt\" and implementation_name != \"pypy\""}
markers = {main = "platform_python_implementation != \"PyPy\" or os_name == \"nt\" and implementation_name != \"pypy\" and extra == \"dev\"", dev = "os_name == \"nt\" and implementation_name != \"pypy\""}

[package.dependencies]
pycparser = {version = "*", markers = "implementation_name != \"PyPy\""}
@@ -1044,6 +1043,78 @@ markers = {main = "extra == \"dev\""}
[package.extras]
toml = ["tomli ; python_full_version <= \"3.11.0a6\""]

[[package]]
name = "cryptography"
version = "46.0.5"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main"]
files = [
    {file = "cryptography-46.0.5-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:351695ada9ea9618b3500b490ad54c739860883df6c1f555e088eaf25b1bbaad"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:c18ff11e86df2e28854939acde2d003f7984f721eba450b56a200ad90eeb0e6b"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d7e3d356b8cd4ea5aff04f129d5f66ebdc7b6f8eae802b93739ed520c47c79b"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:50bfb6925eff619c9c023b967d5b77a54e04256c4281b0e21336a130cd7fc263"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:803812e111e75d1aa73690d2facc295eaefd4439be1023fefc4995eaea2af90d"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3ee190460e2fbe447175cda91b88b84ae8322a104fc27766ad09428754a618ed"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:f145bba11b878005c496e93e257c1e88f154d278d2638e6450d17e0f31e558d2"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:e9251e3be159d1020c4030bd2e5f84d6a43fe54b6c19c12f51cde9542a2817b2"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:47fb8a66058b80e509c47118ef8a75d14c455e81ac369050f20ba0d23e77fee0"},
    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:4c3341037c136030cb46e4b1e17b7418ea4cbd9dd207e4a6f3b2b24e0d4ac731"},
    {file = "cryptography-46.0.5-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:890bcb4abd5a2d3f852196437129eb3667d62630333aacc13dfd470fad3aaa82"},
    {file = "cryptography-46.0.5-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:80a8d7bfdf38f87ca30a5391c0c9ce4ed2926918e017c29ddf643d0ed2778ea1"},
    {file = "cryptography-46.0.5-cp311-abi3-win32.whl", hash = "sha256:60ee7e19e95104d4c03871d7d7dfb3d22ef8a9b9c6778c94e1c8fcc8365afd48"},
    {file = "cryptography-46.0.5-cp311-abi3-win_amd64.whl", hash = "sha256:38946c54b16c885c72c4f59846be9743d699eee2b69b6988e0a00a01f46a61a4"},
    {file = "cryptography-46.0.5-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:94a76daa32eb78d61339aff7952ea819b1734b46f73646a07decb40e5b3448e2"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5be7bf2fb40769e05739dd0046e7b26f9d4670badc7b032d6ce4db64dddc0678"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fe346b143ff9685e40192a4960938545c699054ba11d4f9029f94751e3f71d87"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:c69fd885df7d089548a42d5ec05be26050ebcd2283d89b3d30676eb32ff87dee"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:8293f3dea7fc929ef7240796ba231413afa7b68ce38fd21da2995549f5961981"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:1abfdb89b41c3be0365328a410baa9df3ff8a9110fb75e7b52e66803ddabc9a9"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:d66e421495fdb797610a08f43b05269e0a5ea7f5e652a89bfd5a7d3c1dee3648"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:4e817a8920bfbcff8940ecfd60f23d01836408242b30f1a708d93198393a80b4"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:68f68d13f2e1cb95163fa3b4db4bf9a159a418f5f6e7242564fc75fcae667fd0"},
    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:a3d1fae9863299076f05cb8a778c467578262fae09f9dc0ee9b12eb4268ce663"},
    {file = "cryptography-46.0.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:c4143987a42a2397f2fc3b4d7e3a7d313fbe684f67ff443999e803dd75a76826"},
    {file = "cryptography-46.0.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:7d731d4b107030987fd61a7f8ab512b25b53cef8f233a97379ede116f30eb67d"},
    {file = "cryptography-46.0.5-cp314-cp314t-win32.whl", hash = "sha256:c3bcce8521d785d510b2aad26ae2c966092b7daa8f45dd8f44734a104dc0bc1a"},
    {file = "cryptography-46.0.5-cp314-cp314t-win_amd64.whl", hash = "sha256:4d8ae8659ab18c65ced284993c2265910f6c9e650189d4e3f68445ef82a810e4"},
    {file = "cryptography-46.0.5-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:4108d4c09fbbf2789d0c926eb4152ae1760d5a2d97612b92d508d96c861e4d31"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1f30a86d2757199cb2d56e48cce14deddf1f9c95f1ef1b64ee91ea43fe2e18"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:039917b0dc418bb9f6edce8a906572d69e74bd330b0b3fea4f79dab7f8ddd235"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ba2a27ff02f48193fc4daeadf8ad2590516fa3d0adeeb34336b96f7fa64c1e3a"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:61aa400dce22cb001a98014f647dc21cda08f7915ceb95df0c9eaf84b4b6af76"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3ce58ba46e1bc2aac4f7d9290223cead56743fa6ab94a5d53292ffaac6a91614"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:420d0e909050490d04359e7fdb5ed7e667ca5c3c402b809ae2563d7e66a92229"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:582f5fcd2afa31622f317f80426a027f30dc792e9c80ffee87b993200ea115f1"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:bfd56bb4b37ed4f330b82402f6f435845a5f5648edf1ad497da51a8452d5d62d"},
    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:a3d507bb6a513ca96ba84443226af944b0f7f47dcc9a399d110cd6146481d24c"},
    {file = "cryptography-46.0.5-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9f16fbdf4da055efb21c22d81b89f155f02ba420558db21288b3d0035bafd5f4"},
    {file = "cryptography-46.0.5-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:ced80795227d70549a411a4ab66e8ce307899fad2220ce5ab2f296e687eacde9"},
    {file = "cryptography-46.0.5-cp38-abi3-win32.whl", hash = "sha256:02f547fce831f5096c9a567fd41bc12ca8f11df260959ecc7c3202555cc47a72"},
    {file = "cryptography-46.0.5-cp38-abi3-win_amd64.whl", hash = "sha256:556e106ee01aa13484ce9b0239bca667be5004efb0aabbed28d353df86445595"},
    {file = "cryptography-46.0.5-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:3b4995dc971c9fb83c25aa44cf45f02ba86f71ee600d81091c2f0cbae116b06c"},
    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:bc84e875994c3b445871ea7181d424588171efec3e185dced958dad9e001950a"},
    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:2ae6971afd6246710480e3f15824ed3029a60fc16991db250034efd0b9fb4356"},
    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:d861ee9e76ace6cf36a6a89b959ec08e7bc2493ee39d07ffe5acb23ef46d27da"},
    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:2b7a67c9cd56372f3249b39699f2ad479f6991e62ea15800973b956f4b73e257"},
    {file = "cryptography-46.0.5-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:8456928655f856c6e1533ff59d5be76578a7157224dbd9ce6872f25055ab9ab7"},
    {file = "cryptography-46.0.5.tar.gz", hash = "sha256:abace499247268e3757271b2f1e244b36b06f8515cf27c4d49468fc9eb16e93d"},
]

[package.dependencies]
cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}

[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.5)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]

[[package]]
name = "discord-py"
version = "2.7.0"
@@ -1614,6 +1685,18 @@ http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
zstd = ["zstandard (>=0.18.0)"]

[[package]]
name = "httpx-sse"
version = "0.4.3"
description = "Consume Server-Sent Event (SSE) messages with HTTPX."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "httpx_sse-0.4.3-py3-none-any.whl", hash = "sha256:0ac1c9fe3c0afad2e0ebb25a934a59f4c7823b60792691f779fad2c5568830fc"},
    {file = "httpx_sse-0.4.3.tar.gz", hash = "sha256:9b1ed0127459a66014aec3c56bebd93da3c1bc8bb6618c8082039a44889a755d"},
]

[[package]]
name = "huggingface-hub"
version = "1.5.0"
@@ -1833,6 +1916,43 @@ files = [
    {file = "joblib-1.5.3.tar.gz", hash = "sha256:8561a3269e6801106863fd0d6d84bb737be9e7631e33aaed3fb9ce5953688da3"},
]

[[package]]
name = "jsonschema"
version = "4.26.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
    {file = "jsonschema-4.26.0-py3-none-any.whl", hash = "sha256:d489f15263b8d200f8387e64b4c3a75f06629559fb73deb8fdfb525f2dab50ce"},
    {file = "jsonschema-4.26.0.tar.gz", hash = "sha256:0c26707e2efad8aa1bfc5b7ce170f3fccc2e4918ff85989ba9ffa9facb2be326"},
]

[package.dependencies]
attrs = ">=22.2.0"
jsonschema-specifications = ">=2023.3.6"
referencing = ">=0.28.4"
rpds-py = ">=0.25.0"

[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "rfc3987-syntax (>=1.1.0)", "uri-template", "webcolors (>=24.6.0)"]

[[package]]
name = "jsonschema-specifications"
version = "2025.9.1"
description = "The JSON Schema meta-schemas and vocabularies, exposed as a Registry"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "jsonschema_specifications-2025.9.1-py3-none-any.whl", hash = "sha256:98802fee3a11ee76ecaca44429fda8a41bff98b00a0f2838151b113f210cc6fe"},
    {file = "jsonschema_specifications-2025.9.1.tar.gz", hash = "sha256:b540987f239e745613c7a9176f3edb72b832a4ac465cf02712288397832b5e8d"},
]

[package.dependencies]
referencing = ">=0.31.0"

[[package]]
name = "kombu"
version = "5.6.2"
@@ -1870,6 +1990,107 @@ sqs = ["boto3 (>=1.26.143)", "pycurl (>=7.43.0.5) ; sys_platform != \"win32\" an
yaml = ["PyYAML (>=3.10)"]
zookeeper = ["kazoo (>=2.8.0)"]

[[package]]
name = "librt"
version = "0.8.1"
description = "Mypyc runtime library"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
markers = "platform_python_implementation != \"PyPy\""
files = [
    {file = "librt-0.8.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:81fd938344fecb9373ba1b155968c8a329491d2ce38e7ddb76f30ffb938f12dc"},
    {file = "librt-0.8.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5db05697c82b3a2ec53f6e72b2ed373132b0c2e05135f0696784e97d7f5d48e7"},
    {file = "librt-0.8.1-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d56bc4011975f7460bea7b33e1ff425d2f1adf419935ff6707273c77f8a4ada6"},
    {file = "librt-0.8.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5cdc0f588ff4b663ea96c26d2a230c525c6fc62b28314edaaaca8ed5af931ad0"},
    {file = "librt-0.8.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:97c2b54ff6717a7a563b72627990bec60d8029df17df423f0ed37d56a17a176b"},
    {file = "librt-0.8.1-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8f1125e6bbf2f1657d9a2f3ccc4a2c9b0c8b176965bb565dd4d86be67eddb4b6"},
    {file = "librt-0.8.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:8f4bb453f408137d7581be309b2fbc6868a80e7ef60c88e689078ee3a296ae71"},
    {file = "librt-0.8.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:c336d61d2fe74a3195edc1646d53ff1cddd3a9600b09fa6ab75e5514ba4862a7"},
    {file = "librt-0.8.1-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:eb5656019db7c4deacf0c1a55a898c5bb8f989be904597fcb5232a2f4828fa05"},
    {file = "librt-0.8.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c25d9e338d5bed46c1632f851babf3d13c78f49a225462017cf5e11e845c5891"},
    {file = "librt-0.8.1-cp310-cp310-win32.whl", hash = "sha256:aaab0e307e344cb28d800957ef3ec16605146ef0e59e059a60a176d19543d1b7"},
    {file = "librt-0.8.1-cp310-cp310-win_amd64.whl", hash = "sha256:56e04c14b696300d47b3bc5f1d10a00e86ae978886d0cee14e5714fafb5df5d2"},
    {file = "librt-0.8.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:681dc2451d6d846794a828c16c22dc452d924e9f700a485b7ecb887a30aad1fd"},
    {file = "librt-0.8.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a3b4350b13cc0e6f5bec8fa7caf29a8fb8cdc051a3bae45cfbfd7ce64f009965"},
    {file = "librt-0.8.1-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:ac1e7817fd0ed3d14fd7c5df91daed84c48e4c2a11ee99c0547f9f62fdae13da"},
    {file = "librt-0.8.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:747328be0c5b7075cde86a0e09d7a9196029800ba75a1689332348e998fb85c0"},
    {file = "librt-0.8.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f0af2bd2bc204fa27f3d6711d0f360e6b8c684a035206257a81673ab924aa11e"},
    {file = "librt-0.8.1-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:d480de377f5b687b6b1bc0c0407426da556e2a757633cc7e4d2e1a057aa688f3"},
    {file = "librt-0.8.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d0ee06b5b5291f609ddb37b9750985b27bc567791bc87c76a569b3feed8481ac"},
    {file = "librt-0.8.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:9e2c6f77b9ad48ce5603b83b7da9ee3e36b3ab425353f695cba13200c5d96596"},
    {file = "librt-0.8.1-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:439352ba9373f11cb8e1933da194dcc6206daf779ff8df0ed69c5e39113e6a99"},
    {file = "librt-0.8.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:82210adabbc331dbb65d7868b105185464ef13f56f7f76688565ad79f648b0fe"},
    {file = "librt-0.8.1-cp311-cp311-win32.whl", hash = "sha256:52c224e14614b750c0a6d97368e16804a98c684657c7518752c356834fff83bb"},
    {file = "librt-0.8.1-cp311-cp311-win_amd64.whl", hash = "sha256:c00e5c884f528c9932d278d5c9cbbea38a6b81eb62c02e06ae53751a83a4d52b"},
    {file = "librt-0.8.1-cp311-cp311-win_arm64.whl", hash = "sha256:f7cdf7f26c2286ffb02e46d7bac56c94655540b26347673bea15fa52a6af17e9"},
    {file = "librt-0.8.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:a28f2612ab566b17f3698b0da021ff9960610301607c9a5e8eaca62f5e1c350a"},
    {file = "librt-0.8.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:60a78b694c9aee2a0f1aaeaa7d101cf713e92e8423a941d2897f4fa37908dab9"},
    {file = "librt-0.8.1-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:758509ea3f1eba2a57558e7e98f4659d0ea7670bff49673b0dde18a3c7e6c0eb"},
    {file = "librt-0.8.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:039b9f2c506bd0ab0f8725aa5ba339c6f0cd19d3b514b50d134789809c24285d"},
    {file = "librt-0.8.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5bb54f1205a3a6ab41a6fd71dfcdcbd278670d3a90ca502a30d9da583105b6f7"},
    {file = "librt-0.8.1-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:05bd41cdee35b0c59c259f870f6da532a2c5ca57db95b5f23689fcb5c9e42440"},
    {file = "librt-0.8.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:adfab487facf03f0d0857b8710cf82d0704a309d8ffc33b03d9302b4c64e91a9"},
    {file = "librt-0.8.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:153188fe98a72f206042be10a2c6026139852805215ed9539186312d50a8e972"},
    {file = "librt-0.8.1-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:dd3c41254ee98604b08bd5b3af5bf0a89740d4ee0711de95b65166bf44091921"},
    {file = "librt-0.8.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e0d138c7ae532908cbb342162b2611dbd4d90c941cd25ab82084aaf71d2c0bd0"},
    {file = "librt-0.8.1-cp312-cp312-win32.whl", hash = "sha256:43353b943613c5d9c49a25aaffdba46f888ec354e71e3529a00cca3f04d66a7a"},
    {file = "librt-0.8.1-cp312-cp312-win_amd64.whl", hash = "sha256:ff8baf1f8d3f4b6b7257fcb75a501f2a5499d0dda57645baa09d4d0d34b19444"},
    {file = "librt-0.8.1-cp312-cp312-win_arm64.whl", hash = "sha256:0f2ae3725904f7377e11cc37722d5d401e8b3d5851fb9273d7f4fe04f6b3d37d"},
    {file = "librt-0.8.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:7e6bad1cd94f6764e1e21950542f818a09316645337fd5ab9a7acc45d99a8f35"},
    {file = "librt-0.8.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cf450f498c30af55551ba4f66b9123b7185362ec8b625a773b3d39aa1a717583"},
    {file = "librt-0.8.1-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:eca45e982fa074090057132e30585a7e8674e9e885d402eae85633e9f449ce6c"},
    {file = "librt-0.8.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0c3811485fccfda840861905b8c70bba5ec094e02825598bb9d4ca3936857a04"},
    {file = "librt-0.8.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5e4af413908f77294605e28cfd98063f54b2c790561383971d2f52d113d9c363"},
    {file = "librt-0.8.1-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:5212a5bd7fae98dae95710032902edcd2ec4dc994e883294f75c857b83f9aba0"},
    {file = "librt-0.8.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e692aa2d1d604e6ca12d35e51fdc36f4cda6345e28e36374579f7ef3611b3012"},
    {file = "librt-0.8.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:4be2a5c926b9770c9e08e717f05737a269b9d0ebc5d2f0060f0fe3fe9ce47acb"},
    {file = "librt-0.8.1-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:fd1a720332ea335ceb544cf0a03f81df92abd4bb887679fd1e460976b0e6214b"},
    {file = "librt-0.8.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:93c2af9e01e0ef80d95ae3c720be101227edae5f2fe7e3dc63d8857fadfc5a1d"},
    {file = "librt-0.8.1-cp313-cp313-win32.whl", hash = "sha256:086a32dbb71336627e78cc1d6ee305a68d038ef7d4c39aaff41ae8c9aa46e91a"},
    {file = "librt-0.8.1-cp313-cp313-win_amd64.whl", hash = "sha256:e11769a1dbda4da7b00a76cfffa67aa47cfa66921d2724539eee4b9ede780b79"},
    {file = "librt-0.8.1-cp313-cp313-win_arm64.whl", hash = "sha256:924817ab3141aca17893386ee13261f1d100d1ef410d70afe4389f2359fea4f0"},
    {file = "librt-0.8.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:6cfa7fe54fd4d1f47130017351a959fe5804bda7a0bc7e07a2cdbc3fdd28d34f"},
    {file = "librt-0.8.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:228c2409c079f8c11fb2e5d7b277077f694cb93443eb760e00b3b83cb8b3176c"},
    {file = "librt-0.8.1-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:7aae78ab5e3206181780e56912d1b9bb9f90a7249ce12f0e8bf531d0462dd0fc"},
    {file = "librt-0.8.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:172d57ec04346b047ca6af181e1ea4858086c80bdf455f61994c4aa6fc3f866c"},
    {file = "librt-0.8.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6b1977c4ea97ce5eb7755a78fae68d87e4102e4aaf54985e8b56806849cc06a3"},
    {file = "librt-0.8.1-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:10c42e1f6fd06733ef65ae7bebce2872bcafd8d6e6b0a08fe0a05a23b044fb14"},
    {file = "librt-0.8.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:4c8dfa264b9193c4ee19113c985c95f876fae5e51f731494fc4e0cf594990ba7"},
    {file = "librt-0.8.1-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:01170b6729a438f0dedc4a26ed342e3dc4f02d1000b4b19f980e1877f0c297e6"},
    {file = "librt-0.8.1-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:7b02679a0d783bdae30d443025b94465d8c3dc512f32f5b5031f93f57ac32071"},
    {file = "librt-0.8.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:190b109bb69592a3401fe1ffdea41a2e73370ace2ffdc4a0e8e2b39cdea81b78"},
    {file = "librt-0.8.1-cp314-cp314-win32.whl", hash = "sha256:e70a57ecf89a0f64c24e37f38d3fe217a58169d2fe6ed6d70554964042474023"},
    {file = "librt-0.8.1-cp314-cp314-win_amd64.whl", hash = "sha256:7e2f3edca35664499fbb36e4770650c4bd4a08abc1f4458eab9df4ec56389730"},
    {file = "librt-0.8.1-cp314-cp314-win_arm64.whl", hash = "sha256:0d2f82168e55ddefd27c01c654ce52379c0750ddc31ee86b4b266bcf4d65f2a3"},
    {file = "librt-0.8.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2c74a2da57a094bd48d03fa5d196da83d2815678385d2978657499063709abe1"},
    {file = "librt-0.8.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a355d99c4c0d8e5b770313b8b247411ed40949ca44e33e46a4789b9293a907ee"},
    {file = "librt-0.8.1-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:2eb345e8b33fb748227409c9f1233d4df354d6e54091f0e8fc53acdb2ffedeb7"},
    {file = "librt-0.8.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9be2f15e53ce4e83cc08adc29b26fb5978db62ef2a366fbdf716c8a6c8901040"},
    {file = "librt-0.8.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:785ae29c1f5c6e7c2cde2c7c0e148147f4503da3abc5d44d482068da5322fd9e"},
    {file = "librt-0.8.1-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:1d3a7da44baf692f0c6aeb5b2a09c5e6fc7a703bca9ffa337ddd2e2da53f7732"},
    {file = "librt-0.8.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:5fc48998000cbc39ec0d5311312dda93ecf92b39aaf184c5e817d5d440b29624"},
    {file = "librt-0.8.1-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:e96baa6820280077a78244b2e06e416480ed859bbd8e5d641cf5742919d8beb4"},
    {file = "librt-0.8.1-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:31362dbfe297b23590530007062c32c6f6176f6099646bb2c95ab1b00a57c382"},
    {file = "librt-0.8.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:cc3656283d11540ab0ea01978378e73e10002145117055e03722417aeab30994"},
    {file = "librt-0.8.1-cp314-cp314t-win32.whl", hash = "sha256:738f08021b3142c2918c03692608baed43bc51144c29e35807682f8070ee2a3a"},
    {file = "librt-0.8.1-cp314-cp314t-win_amd64.whl", hash = "sha256:89815a22daf9c51884fb5dbe4f1ef65ee6a146e0b6a8df05f753e2e4a9359bf4"},
    {file = "librt-0.8.1-cp314-cp314t-win_arm64.whl", hash = "sha256:bf512a71a23504ed08103a13c941f763db13fb11177beb3d9244c98c29fb4a61"},
    {file = "librt-0.8.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3dff3d3ca8db20e783b1bc7de49c0a2ab0b8387f31236d6a026597d07fcd68ac"},
    {file = "librt-0.8.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:08eec3a1fc435f0d09c87b6bf1ec798986a3544f446b864e4099633a56fcd9ed"},
    {file = "librt-0.8.1-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:e3f0a41487fd5fad7e760b9e8a90e251e27c2816fbc2cff36a22a0e6bcbbd9dd"},
    {file = "librt-0.8.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:bacdb58d9939d95cc557b4dbaa86527c9db2ac1ed76a18bc8d26f6dc8647d851"},
    {file = "librt-0.8.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b6d7ab1f01aa753188605b09a51faa44a3327400b00b8cce424c71910fc0a128"},
    {file = "librt-0.8.1-cp39-cp39-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:4998009e7cb9e896569f4be7004f09d0ed70d386fa99d42b6d363f6d200501ac"},
    {file = "librt-0.8.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2cc68eeeef5e906839c7bb0815748b5b0a974ec27125beefc0f942715785b551"},
    {file = "librt-0.8.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:0bf69d79a23f4f40b8673a947a234baeeb133b5078b483b7297c5916539cf5d5"},
    {file = "librt-0.8.1-cp39-cp39-musllinux_1_2_riscv64.whl", hash = "sha256:22b46eabd76c1986ee7d231b0765ad387d7673bbd996aa0d0d054b38ac65d8f6"},
    {file = "librt-0.8.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:237796479f4d0637d6b9cbcb926ff424a97735e68ade6facf402df4ec93375ed"},
    {file = "librt-0.8.1-cp39-cp39-win32.whl", hash = "sha256:4beb04b8c66c6ae62f8c1e0b2f097c1ebad9295c929a8d5286c05eae7c2fc7dc"},
    {file = "librt-0.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:64548cde61b692dc0dc379f4b5f59a2f582c2ebe7890d09c1ae3b9e66fa015b7"},
    {file = "librt-0.8.1.tar.gz", hash = "sha256:be46a14693955b3bd96014ccbdb8339ee8c9346fbe11c1b78901b55125f14c73"},
|
||||
]
|
||||
|
||||
[[package]]
name = "markdown-it-py"
version = "4.0.0"
@@ -1993,6 +2214,39 @@ files = [
{file = "markupsafe-3.0.3.tar.gz", hash = "sha256:722695808f4b6457b320fdc131280796bdceb04ab50fe1795cd540799ebe1698"},
]

[[package]]
name = "mcp"
version = "1.26.0"
description = "Model Context Protocol SDK"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "mcp-1.26.0-py3-none-any.whl", hash = "sha256:904a21c33c25aa98ddbeb47273033c435e595bbacfdb177f4bd87f6dceebe1ca"},
{file = "mcp-1.26.0.tar.gz", hash = "sha256:db6e2ef491eecc1a0d93711a76f28dec2e05999f93afd48795da1c1137142c66"},
]

[package.dependencies]
anyio = ">=4.5"
httpx = ">=0.27.1"
httpx-sse = ">=0.4"
jsonschema = ">=4.20.0"
pydantic = ">=2.11.0,<3.0.0"
pydantic-settings = ">=2.5.2"
pyjwt = {version = ">=2.10.1", extras = ["crypto"]}
python-multipart = ">=0.0.9"
pywin32 = {version = ">=310", markers = "sys_platform == \"win32\""}
sse-starlette = ">=1.6.1"
starlette = ">=0.27"
typing-extensions = ">=4.9.0"
typing-inspection = ">=0.4.1"
uvicorn = {version = ">=0.31.1", markers = "sys_platform != \"emscripten\""}

[package.extras]
cli = ["python-dotenv (>=1.0.0)", "typer (>=0.16.0)"]
rich = ["rich (>=13.9.4)"]
ws = ["websockets (>=15.0.1)"]

[[package]]
name = "mdurl"
version = "0.1.2"
@@ -2181,6 +2435,79 @@ files = [
{file = "multidict-6.7.1.tar.gz", hash = "sha256:ec6652a1bee61c53a3e5776b6049172c53b6aaba34f18c9ad04f82712bac623d"},
]

[[package]]
name = "mypy"
version = "1.19.1"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "mypy-1.19.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5f05aa3d375b385734388e844bc01733bd33c644ab48e9684faa54e5389775ec"},
{file = "mypy-1.19.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:022ea7279374af1a5d78dfcab853fe6a536eebfda4b59deab53cd21f6cd9f00b"},
{file = "mypy-1.19.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee4c11e460685c3e0c64a4c5de82ae143622410950d6be863303a1c4ba0e36d6"},
{file = "mypy-1.19.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:de759aafbae8763283b2ee5869c7255391fbc4de3ff171f8f030b5ec48381b74"},
{file = "mypy-1.19.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:ab43590f9cd5108f41aacf9fca31841142c786827a74ab7cc8a2eacb634e09a1"},
{file = "mypy-1.19.1-cp310-cp310-win_amd64.whl", hash = "sha256:2899753e2f61e571b3971747e302d5f420c3fd09650e1951e99f823bc3089dac"},
{file = "mypy-1.19.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d8dfc6ab58ca7dda47d9237349157500468e404b17213d44fc1cb77bce532288"},
{file = "mypy-1.19.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e3f276d8493c3c97930e354b2595a44a21348b320d859fb4a2b9f66da9ed27ab"},
{file = "mypy-1.19.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2abb24cf3f17864770d18d673c85235ba52456b36a06b6afc1e07c1fdcd3d0e6"},
{file = "mypy-1.19.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a009ffa5a621762d0c926a078c2d639104becab69e79538a494bcccb62cc0331"},
{file = "mypy-1.19.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f7cee03c9a2e2ee26ec07479f38ea9c884e301d42c6d43a19d20fb014e3ba925"},
{file = "mypy-1.19.1-cp311-cp311-win_amd64.whl", hash = "sha256:4b84a7a18f41e167f7995200a1d07a4a6810e89d29859df936f1c3923d263042"},
{file = "mypy-1.19.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:a8174a03289288c1f6c46d55cef02379b478bfbc8e358e02047487cad44c6ca1"},
{file = "mypy-1.19.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ffcebe56eb09ff0c0885e750036a095e23793ba6c2e894e7e63f6d89ad51f22e"},
{file = "mypy-1.19.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b64d987153888790bcdb03a6473d321820597ab8dd9243b27a92153c4fa50fd2"},
{file = "mypy-1.19.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c35d298c2c4bba75feb2195655dfea8124d855dfd7343bf8b8c055421eaf0cf8"},
{file = "mypy-1.19.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:34c81968774648ab5ac09c29a375fdede03ba253f8f8287847bd480782f73a6a"},
{file = "mypy-1.19.1-cp312-cp312-win_amd64.whl", hash = "sha256:b10e7c2cd7870ba4ad9b2d8a6102eb5ffc1f16ca35e3de6bfa390c1113029d13"},
{file = "mypy-1.19.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e3157c7594ff2ef1634ee058aafc56a82db665c9438fd41b390f3bde1ab12250"},
{file = "mypy-1.19.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:bdb12f69bcc02700c2b47e070238f42cb87f18c0bc1fc4cdb4fb2bc5fd7a3b8b"},
{file = "mypy-1.19.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f859fb09d9583a985be9a493d5cfc5515b56b08f7447759a0c5deaf68d80506e"},
{file = "mypy-1.19.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c9a6538e0415310aad77cb94004ca6482330fece18036b5f360b62c45814c4ef"},
{file = "mypy-1.19.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:da4869fc5e7f62a88f3fe0b5c919d1d9f7ea3cef92d3689de2823fd27e40aa75"},
{file = "mypy-1.19.1-cp313-cp313-win_amd64.whl", hash = "sha256:016f2246209095e8eda7538944daa1d60e1e8134d98983b9fc1e92c1fc0cb8dd"},
{file = "mypy-1.19.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:06e6170bd5836770e8104c8fdd58e5e725cfeb309f0a6c681a811f557e97eac1"},
{file = "mypy-1.19.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:804bd67b8054a85447c8954215a906d6eff9cabeabe493fb6334b24f4bfff718"},
{file = "mypy-1.19.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:21761006a7f497cb0d4de3d8ef4ca70532256688b0523eee02baf9eec895e27b"},
{file = "mypy-1.19.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:28902ee51f12e0f19e1e16fbe2f8f06b6637f482c459dd393efddd0ec7f82045"},
{file = "mypy-1.19.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:481daf36a4c443332e2ae9c137dfee878fcea781a2e3f895d54bd3002a900957"},
{file = "mypy-1.19.1-cp314-cp314-win_amd64.whl", hash = "sha256:8bb5c6f6d043655e055be9b542aa5f3bdd30e4f3589163e85f93f3640060509f"},
{file = "mypy-1.19.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7bcfc336a03a1aaa26dfce9fff3e287a3ba99872a157561cbfcebe67c13308e3"},
{file = "mypy-1.19.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b7951a701c07ea584c4fe327834b92a30825514c868b1f69c30445093fdd9d5a"},
{file = "mypy-1.19.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b13cfdd6c87fc3efb69ea4ec18ef79c74c3f98b4e5498ca9b85ab3b2c2329a67"},
{file = "mypy-1.19.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4f28f99c824ecebcdaa2e55d82953e38ff60ee5ec938476796636b86afa3956e"},
{file = "mypy-1.19.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c608937067d2fc5a4dd1a5ce92fd9e1398691b8c5d012d66e1ddd430e9244376"},
{file = "mypy-1.19.1-cp39-cp39-win_amd64.whl", hash = "sha256:409088884802d511ee52ca067707b90c883426bd95514e8cfda8281dc2effe24"},
{file = "mypy-1.19.1-py3-none-any.whl", hash = "sha256:f1235f5ea01b7db5468d53ece6aaddf1ad0b88d9e7462b86ef96fe04995d7247"},
{file = "mypy-1.19.1.tar.gz", hash = "sha256:19d88bb05303fe63f71dd2c6270daca27cb9401c4ca8255fe50d1d920e0eb9ba"},
]

[package.dependencies]
librt = {version = ">=0.6.2", markers = "platform_python_implementation != \"PyPy\""}
mypy_extensions = ">=1.0.0"
pathspec = ">=0.9.0"
typing_extensions = ">=4.6.0"

[package.extras]
dmypy = ["psutil (>=4.0)"]
faster-cache = ["orjson"]
install-types = ["pip"]
mypyc = ["setuptools (>=50)"]
reports = ["lxml"]

[[package]]
name = "mypy-extensions"
version = "1.1.0"
description = "Type system extensions for programs checked with the mypy type checker."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "mypy_extensions-1.1.0-py3-none-any.whl", hash = "sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505"},
{file = "mypy_extensions-1.1.0.tar.gz", hash = "sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558"},
]

[[package]]
name = "networkx"
version = "3.6"
@@ -2622,6 +2949,24 @@ files = [
{file = "packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4"},
]

[[package]]
name = "pathspec"
version = "1.0.4"
description = "Utility library for gitignore style pattern matching of file paths."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pathspec-1.0.4-py3-none-any.whl", hash = "sha256:fb6ae2fd4e7c921a165808a552060e722767cfa526f99ca5156ed2ce45a5c723"},
{file = "pathspec-1.0.4.tar.gz", hash = "sha256:0210e2ae8a21a9137c0d470578cb0e595af87edaa6ebf12ff176f14a02e0e645"},
]

[package.extras]
hyperscan = ["hyperscan (>=0.7)"]
optional = ["typing-extensions (>=4)"]
re2 = ["google-re2 (>=1.1)"]
tests = ["pytest (>=9)", "typing-extensions (>=4.15)"]

[[package]]
name = "pluggy"
version = "1.6.0"
@@ -2835,7 +3180,7 @@ files = [
{file = "pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992"},
{file = "pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29"},
]
markers = {main = "os_name == \"nt\" and implementation_name != \"pypy\" and implementation_name != \"PyPy\" and extra == \"dev\"", dev = "os_name == \"nt\" and implementation_name != \"pypy\" and implementation_name != \"PyPy\""}
markers = {main = "(platform_python_implementation != \"PyPy\" or os_name == \"nt\" and implementation_name != \"pypy\" and extra == \"dev\") and implementation_name != \"PyPy\"", dev = "os_name == \"nt\" and implementation_name != \"pypy\" and implementation_name != \"PyPy\""}

[[package]]
name = "pydantic"
@@ -3032,6 +3377,27 @@ files = [
[package.extras]
windows-terminal = ["colorama (>=0.4.6)"]

[[package]]
name = "pyjwt"
version = "2.12.0"
description = "JSON Web Token implementation in Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pyjwt-2.12.0-py3-none-any.whl", hash = "sha256:9bb459d1bdd0387967d287f5656bf7ec2b9a26645d1961628cda1764e087fd6e"},
{file = "pyjwt-2.12.0.tar.gz", hash = "sha256:2f62390b667cd8257de560b850bb5a883102a388829274147f1d724453f8fb02"},
]

[package.dependencies]
cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""}

[package.extras]
crypto = ["cryptography (>=3.4.0)"]
dev = ["coverage[toml] (==7.10.7)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=8.4.2,<9.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"]
tests = ["coverage[toml] (==7.10.7)", "pytest (>=8.4.2,<9.0.0)"]

[[package]]
name = "pyobjc"
version = "12.1"
@@ -6826,10 +7192,10 @@ pywin32 = {version = "*", markers = "platform_system == \"Windows\""}
name = "pywin32"
version = "311"
description = "Python for Window Extensions"
optional = true
optional = false
python-versions = "*"
groups = ["main"]
markers = "extra == \"voice\" and platform_system == \"Windows\""
markers = "extra == \"voice\" and platform_system == \"Windows\" or sys_platform == \"win32\""
files = [
{file = "pywin32-311-cp310-cp310-win32.whl", hash = "sha256:d03ff496d2a0cd4a5893504789d4a15399133fe82517455e78bad62efbb7f0a3"},
{file = "pywin32-311-cp310-cp310-win_amd64.whl", hash = "sha256:797c2772017851984b97180b0bebe4b620bb86328e8a884bb626156295a63b3b"},
@@ -6959,6 +7325,23 @@ ocsp = ["cryptography (>=36.0.1)", "pyopenssl (>=20.0.1)", "requests (>=2.31.0)"
otel = ["opentelemetry-api (>=1.39.1)", "opentelemetry-exporter-otlp-proto-http (>=1.39.1)", "opentelemetry-sdk (>=1.39.1)"]
xxhash = ["xxhash (>=3.6.0,<3.7.0)"]

[[package]]
name = "referencing"
version = "0.37.0"
description = "JSON Referencing + Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "referencing-0.37.0-py3-none-any.whl", hash = "sha256:381329a9f99628c9069361716891d34ad94af76e461dcb0335825aecc7692231"},
{file = "referencing-0.37.0.tar.gz", hash = "sha256:44aefc3142c5b842538163acb373e24cce6632bd54bdb01b21ad5863489f50d8"},
]

[package.dependencies]
attrs = ">=22.2.0"
rpds-py = ">=0.7.0"
typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""}

[[package]]
name = "regex"
version = "2026.2.28"
@@ -7125,6 +7508,131 @@ pygments = ">=2.13.0,<3.0.0"
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<9)"]

[[package]]
name = "rpds-py"
version = "0.30.0"
description = "Python bindings to Rust's persistent data structures (rpds)"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "rpds_py-0.30.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:679ae98e00c0e8d68a7fda324e16b90fd5260945b45d3b824c892cec9eea3288"},
{file = "rpds_py-0.30.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4cc2206b76b4f576934f0ed374b10d7ca5f457858b157ca52064bdfc26b9fc00"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:389a2d49eded1896c3d48b0136ead37c48e221b391c052fba3f4055c367f60a6"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:32c8528634e1bf7121f3de08fa85b138f4e0dc47657866630611b03967f041d7"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f207f69853edd6f6700b86efb84999651baf3789e78a466431df1331608e5324"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:67b02ec25ba7a9e8fa74c63b6ca44cf5707f2fbfadae3ee8e7494297d56aa9df"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c0e95f6819a19965ff420f65578bacb0b00f251fefe2c8b23347c37174271f3"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_31_riscv64.whl", hash = "sha256:a452763cc5198f2f98898eb98f7569649fe5da666c2dc6b5ddb10fde5a574221"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e0b65193a413ccc930671c55153a03ee57cecb49e6227204b04fae512eb657a7"},
{file = "rpds_py-0.30.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:858738e9c32147f78b3ac24dc0edb6610000e56dc0f700fd5f651d0a0f0eb9ff"},
{file = "rpds_py-0.30.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:da279aa314f00acbb803da1e76fa18666778e8a8f83484fba94526da5de2cba7"},
{file = "rpds_py-0.30.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:7c64d38fb49b6cdeda16ab49e35fe0da2e1e9b34bc38bd78386530f218b37139"},
{file = "rpds_py-0.30.0-cp310-cp310-win32.whl", hash = "sha256:6de2a32a1665b93233cde140ff8b3467bdb9e2af2b91079f0333a0974d12d464"},
{file = "rpds_py-0.30.0-cp310-cp310-win_amd64.whl", hash = "sha256:1726859cd0de969f88dc8673bdd954185b9104e05806be64bcd87badbe313169"},
{file = "rpds_py-0.30.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a2bffea6a4ca9f01b3f8e548302470306689684e61602aa3d141e34da06cf425"},
{file = "rpds_py-0.30.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:dc4f992dfe1e2bc3ebc7444f6c7051b4bc13cd8e33e43511e8ffd13bf407010d"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:422c3cb9856d80b09d30d2eb255d0754b23e090034e1deb4083f8004bd0761e4"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:07ae8a593e1c3c6b82ca3292efbe73c30b61332fd612e05abee07c79359f292f"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12f90dd7557b6bd57f40abe7747e81e0c0b119bef015ea7726e69fe550e394a4"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:99b47d6ad9a6da00bec6aabe5a6279ecd3c06a329d4aa4771034a21e335c3a97"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33f559f3104504506a44bb666b93a33f5d33133765b0c216a5bf2f1e1503af89"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_31_riscv64.whl", hash = "sha256:946fe926af6e44f3697abbc305ea168c2c31d3e3ef1058cf68f379bf0335a78d"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:495aeca4b93d465efde585977365187149e75383ad2684f81519f504f5c13038"},
{file = "rpds_py-0.30.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d9a0ca5da0386dee0655b4ccdf46119df60e0f10da268d04fe7cc87886872ba7"},
{file = "rpds_py-0.30.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8d6d1cc13664ec13c1b84241204ff3b12f9bb82464b8ad6e7a5d3486975c2eed"},
{file = "rpds_py-0.30.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:3896fa1be39912cf0757753826bc8bdc8ca331a28a7c4ae46b7a21280b06bb85"},
{file = "rpds_py-0.30.0-cp311-cp311-win32.whl", hash = "sha256:55f66022632205940f1827effeff17c4fa7ae1953d2b74a8581baaefb7d16f8c"},
{file = "rpds_py-0.30.0-cp311-cp311-win_amd64.whl", hash = "sha256:a51033ff701fca756439d641c0ad09a41d9242fa69121c7d8769604a0a629825"},
{file = "rpds_py-0.30.0-cp311-cp311-win_arm64.whl", hash = "sha256:47b0ef6231c58f506ef0b74d44e330405caa8428e770fec25329ed2cb971a229"},
{file = "rpds_py-0.30.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a161f20d9a43006833cd7068375a94d035714d73a172b681d8881820600abfad"},
{file = "rpds_py-0.30.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6abc8880d9d036ecaafe709079969f56e876fcf107f7a8e9920ba6d5a3878d05"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca28829ae5f5d569bb62a79512c842a03a12576375d5ece7d2cadf8abe96ec28"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a1010ed9524c73b94d15919ca4d41d8780980e1765babf85f9a2f90d247153dd"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f8d1736cfb49381ba528cd5baa46f82fdc65c06e843dab24dd70b63d09121b3f"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d948b135c4693daff7bc2dcfc4ec57237a29bd37e60c2fabf5aff2bbacf3e2f1"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47f236970bccb2233267d89173d3ad2703cd36a0e2a6e92d0560d333871a3d23"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:2e6ecb5a5bcacf59c3f912155044479af1d0b6681280048b338b28e364aca1f6"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a8fa71a2e078c527c3e9dc9fc5a98c9db40bcc8a92b4e8858e36d329f8684b51"},
{file = "rpds_py-0.30.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:73c67f2db7bc334e518d097c6d1e6fed021bbc9b7d678d6cc433478365d1d5f5"},
{file = "rpds_py-0.30.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5ba103fb455be00f3b1c2076c9d4264bfcb037c976167a6047ed82f23153f02e"},
{file = "rpds_py-0.30.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7cee9c752c0364588353e627da8a7e808a66873672bcb5f52890c33fd965b394"},
{file = "rpds_py-0.30.0-cp312-cp312-win32.whl", hash = "sha256:1ab5b83dbcf55acc8b08fc62b796ef672c457b17dbd7820a11d6c52c06839bdf"},
{file = "rpds_py-0.30.0-cp312-cp312-win_amd64.whl", hash = "sha256:a090322ca841abd453d43456ac34db46e8b05fd9b3b4ac0c78bcde8b089f959b"},
{file = "rpds_py-0.30.0-cp312-cp312-win_arm64.whl", hash = "sha256:669b1805bd639dd2989b281be2cfd951c6121b65e729d9b843e9639ef1fd555e"},
{file = "rpds_py-0.30.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:f83424d738204d9770830d35290ff3273fbb02b41f919870479fab14b9d303b2"},
{file = "rpds_py-0.30.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e7536cd91353c5273434b4e003cbda89034d67e7710eab8761fd918ec6c69cf8"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2771c6c15973347f50fece41fc447c054b7ac2ae0502388ce3b6738cd366e3d4"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0a59119fc6e3f460315fe9d08149f8102aa322299deaa5cab5b40092345c2136"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:76fec018282b4ead0364022e3c54b60bf368b9d926877957a8624b58419169b7"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:692bef75a5525db97318e8cd061542b5a79812d711ea03dbc1f6f8dbb0c5f0d2"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9027da1ce107104c50c81383cae773ef5c24d296dd11c99e2629dbd7967a20c6"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:9cf69cdda1f5968a30a359aba2f7f9aa648a9ce4b580d6826437f2b291cfc86e"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a4796a717bf12b9da9d3ad002519a86063dcac8988b030e405704ef7d74d2d9d"},
{file = "rpds_py-0.30.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5d4c2aa7c50ad4728a094ebd5eb46c452e9cb7edbfdb18f9e1221f597a73e1e7"},
{file = "rpds_py-0.30.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ba81a9203d07805435eb06f536d95a266c21e5b2dfbf6517748ca40c98d19e31"},
{file = "rpds_py-0.30.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:945dccface01af02675628334f7cf49c2af4c1c904748efc5cf7bbdf0b579f95"},
{file = "rpds_py-0.30.0-cp313-cp313-win32.whl", hash = "sha256:b40fb160a2db369a194cb27943582b38f79fc4887291417685f3ad693c5a1d5d"},
{file = "rpds_py-0.30.0-cp313-cp313-win_amd64.whl", hash = "sha256:806f36b1b605e2d6a72716f321f20036b9489d29c51c91f4dd29a3e3afb73b15"},
{file = "rpds_py-0.30.0-cp313-cp313-win_arm64.whl", hash = "sha256:d96c2086587c7c30d44f31f42eae4eac89b60dabbac18c7669be3700f13c3ce1"},
{file = "rpds_py-0.30.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:eb0b93f2e5c2189ee831ee43f156ed34e2a89a78a66b98cadad955972548be5a"},
{file = "rpds_py-0.30.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:922e10f31f303c7c920da8981051ff6d8c1a56207dbdf330d9047f6d30b70e5e"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cdc62c8286ba9bf7f47befdcea13ea0e26bf294bda99758fd90535cbaf408000"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:47f9a91efc418b54fb8190a6b4aa7813a23fb79c51f4bb84e418f5476c38b8db"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1f3587eb9b17f3789ad50824084fa6f81921bbf9a795826570bda82cb3ed91f2"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:39c02563fc592411c2c61d26b6c5fe1e51eaa44a75aa2c8735ca88b0d9599daa"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51a1234d8febafdfd33a42d97da7a43f5dcb120c1060e352a3fbc0c6d36e2083"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_31_riscv64.whl", hash = "sha256:eb2c4071ab598733724c08221091e8d80e89064cd472819285a9ab0f24bcedb9"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6bdfdb946967d816e6adf9a3d8201bfad269c67efe6cefd7093ef959683c8de0"},
{file = "rpds_py-0.30.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:c77afbd5f5250bf27bf516c7c4a016813eb2d3e116139aed0096940c5982da94"},
{file = "rpds_py-0.30.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:61046904275472a76c8c90c9ccee9013d70a6d0f73eecefd38c1ae7c39045a08"},
{file = "rpds_py-0.30.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4c5f36a861bc4b7da6516dbdf302c55313afa09b81931e8280361a4f6c9a2d27"},
{file = "rpds_py-0.30.0-cp313-cp313t-win32.whl", hash = "sha256:3d4a69de7a3e50ffc214ae16d79d8fbb0922972da0356dcf4d0fdca2878559c6"},
{file = "rpds_py-0.30.0-cp313-cp313t-win_amd64.whl", hash = "sha256:f14fc5df50a716f7ece6a80b6c78bb35ea2ca47c499e422aa4463455dd96d56d"},
{file = "rpds_py-0.30.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:68f19c879420aa08f61203801423f6cd5ac5f0ac4ac82a2368a9fcd6a9a075e0"},
{file = "rpds_py-0.30.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:ec7c4490c672c1a0389d319b3a9cfcd098dcdc4783991553c332a15acf7249be"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f251c812357a3fed308d684a5079ddfb9d933860fc6de89f2b7ab00da481e65f"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ac98b175585ecf4c0348fd7b29c3864bda53b805c773cbf7bfdaffc8070c976f"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3e62880792319dbeb7eb866547f2e35973289e7d5696c6e295476448f5b63c87"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4e7fc54e0900ab35d041b0601431b0a0eb495f0851a0639b6ef90f7741b39a18"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47e77dc9822d3ad616c3d5759ea5631a75e5809d5a28707744ef79d7a1bcfcad"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:b4dc1a6ff022ff85ecafef7979a2c6eb423430e05f1165d6688234e62ba99a07"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4559c972db3a360808309e06a74628b95eaccbf961c335c8fe0d590cf587456f"},
{file = "rpds_py-0.30.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:0ed177ed9bded28f8deb6ab40c183cd1192aa0de40c12f38be4d59cd33cb5c65"},
{file = "rpds_py-0.30.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:ad1fa8db769b76ea911cb4e10f049d80bf518c104f15b3edb2371cc65375c46f"},
{file = "rpds_py-0.30.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:46e83c697b1f1c72b50e5ee5adb4353eef7406fb3f2043d64c33f20ad1c2fc53"},
{file = "rpds_py-0.30.0-cp314-cp314-win32.whl", hash = "sha256:ee454b2a007d57363c2dfd5b6ca4a5d7e2c518938f8ed3b706e37e5d470801ed"},
{file = "rpds_py-0.30.0-cp314-cp314-win_amd64.whl", hash = "sha256:95f0802447ac2d10bcc69f6dc28fe95fdf17940367b21d34e34c737870758950"},
{file = "rpds_py-0.30.0-cp314-cp314-win_arm64.whl", hash = "sha256:613aa4771c99f03346e54c3f038e4cc574ac09a3ddfb0e8878487335e96dead6"},
{file = "rpds_py-0.30.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:7e6ecfcb62edfd632e56983964e6884851786443739dbfe3582947e87274f7cb"},
{file = "rpds_py-0.30.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a1d0bc22a7cdc173fedebb73ef81e07faef93692b8c1ad3733b67e31e1b6e1b8"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d08f00679177226c4cb8c5265012eea897c8ca3b93f429e546600c971bcbae7"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5965af57d5848192c13534f90f9dd16464f3c37aaf166cc1da1cae1fd5a34898"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9a4e86e34e9ab6b667c27f3211ca48f73dba7cd3d90f8d5b11be56e5dbc3fb4e"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e5d3e6b26f2c785d65cc25ef1e5267ccbe1b069c5c21b8cc724efee290554419"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:626a7433c34566535b6e56a1b39a7b17ba961e97ce3b80ec62e6f1312c025551"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:acd7eb3f4471577b9b5a41baf02a978e8bdeb08b4b355273994f8b87032000a8"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fe5fa731a1fa8a0a56b0977413f8cacac1768dad38d16b3a296712709476fbd5"},
{file = "rpds_py-0.30.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:74a3243a411126362712ee1524dfc90c650a503502f135d54d1b352bd01f2404"},
{file = "rpds_py-0.30.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:3e8eeb0544f2eb0d2581774be4c3410356eba189529a6b3e36bbbf9696175856"},
{file = "rpds_py-0.30.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:dbd936cde57abfee19ab3213cf9c26be06d60750e60a8e4dd85d1ab12c8b1f40"},
{file = "rpds_py-0.30.0-cp314-cp314t-win32.whl", hash = "sha256:dc824125c72246d924f7f796b4f63c1e9dc810c7d9e2355864b3c3a73d59ade0"},
{file = "rpds_py-0.30.0-cp314-cp314t-win_amd64.whl", hash = "sha256:27f4b0e92de5bfbc6f86e43959e6edd1425c33b5e69aab0984a72047f2bcf1e3"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c2262bdba0ad4fc6fb5545660673925c2d2a5d9e2e0fb603aad545427be0fc58"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:ee6af14263f25eedc3bb918a3c04245106a42dfd4f5c2285ea6f997b1fc3f89a"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3adbb8179ce342d235c31ab8ec511e66c73faa27a47e076ccc92421add53e2bb"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:250fa00e9543ac9b97ac258bd37367ff5256666122c2d0f2bc97577c60a1818c"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9854cf4f488b3d57b9aaeb105f06d78e5529d3145b1e4a41750167e8c213c6d3"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:993914b8e560023bc0a8bf742c5f303551992dcb85e247b1e5c7f4a7d145bda5"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58edca431fb9b29950807e301826586e5bbf24163677732429770a697ffe6738"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:dea5b552272a944763b34394d04577cf0f9bd013207bc32323b5a89a53cf9c2f"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ba3af48635eb83d03f6c9735dfb21785303e73d22ad03d489e88adae6eab8877"},
|
||||
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:dff13836529b921e22f15cb099751209a60009731a68519630a24d61f0b1b30a"},
|
||||
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:1b151685b23929ab7beec71080a8889d4d6d9fa9a983d213f07121205d48e2c4"},
|
||||
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:ac37f9f516c51e5753f27dfdef11a88330f04de2d564be3991384b2f3535d02e"},
|
||||
{file = "rpds_py-0.30.0.tar.gz", hash = "sha256:dd8ff7cf90014af0c0f787eea34794ebf6415242ee1d6fa91eaba725cc441e84"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "ruff"
|
||||
version = "0.15.5"
|
||||
@@ -7582,6 +8090,28 @@ postgresql-psycopgbinary = ["psycopg[binary] (>=3.0.7)"]
pymysql = ["pymysql"]
sqlcipher = ["sqlcipher3_binary"]

[[package]]
name = "sse-starlette"
version = "3.3.2"
description = "SSE plugin for Starlette"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
    {file = "sse_starlette-3.3.2-py3-none-any.whl", hash = "sha256:5c3ea3dad425c601236726af2f27689b74494643f57017cafcb6f8c9acfbb862"},
    {file = "sse_starlette-3.3.2.tar.gz", hash = "sha256:678fca55a1945c734d8472a6cad186a55ab02840b4f6786f5ee8770970579dcd"},
]

[package.dependencies]
anyio = ">=4.7.0"
starlette = ">=0.49.1"

[package.extras]
daphne = ["daphne (>=4.2.0)"]
examples = ["aiosqlite (>=0.21.0)", "fastapi (>=0.115.12)", "sqlalchemy[asyncio] (>=2.0.41)", "uvicorn (>=0.34.0)"]
granian = ["granian (>=2.3.1)"]
uvicorn = ["uvicorn (>=0.34.0)"]

[[package]]
name = "starlette"
version = "0.52.1"
@@ -8478,4 +9008,4 @@ voice = ["pyttsx3"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<4"
content-hash = "bb8088a38625a65b8f7d296f912b0c1437b12c53f6b26698a91f030e82b1bf57"
content-hash = "50423b08ebb6bb00a2ce51b5cfc522a8f72d3b675ed720b1e8654d8f8f6e675d"

@@ -13,7 +13,7 @@ homepage = "http://localhost:3000/rockachopa/Timmy-time-dashboard"
repository = "http://localhost:3000/rockachopa/Timmy-time-dashboard"
packages = [
    { include = "config.py", from = "src" },
    { include = "brain", from = "src" },

    { include = "dashboard", from = "src" },
    { include = "infrastructure", from = "src" },
    { include = "integrations", from = "src" },
@@ -35,6 +35,7 @@ python-multipart = ">=0.0.12"
typer = ">=0.12.0"
rich = ">=13.0.0"
pydantic-settings = ">=2.0.0,<3.0"
mcp = ">=1.0.0"
# Optional extras
redis = { version = ">=5.0.0", optional = true }
celery = { version = ">=5.3.0", extras = ["redis"], optional = true }
@@ -44,7 +45,6 @@ airllm = { version = ">=2.9.0", optional = true }
pyttsx3 = { version = ">=2.90", optional = true }
sentence-transformers = { version = ">=2.0.0", optional = true }
numpy = { version = ">=1.24.0", optional = true }
mcp = { version = ">=1.0.0", optional = true }
requests = { version = ">=2.31.0", optional = true }
GitPython = { version = ">=3.1.40", optional = true }
pytest = { version = ">=8.0.0", optional = true }
@@ -63,7 +63,6 @@ voice = ["pyttsx3"]
celery = ["celery"]
embeddings = ["sentence-transformers", "numpy"]
git = ["GitPython"]
mcp = ["mcp"]
dev = ["pytest", "pytest-asyncio", "pytest-cov", "pytest-timeout", "pytest-randomly", "pytest-xdist", "selenium"]

[tool.poetry.group.dev.dependencies]
@@ -125,7 +124,7 @@ ignore = [
]

[tool.ruff.lint.isort]
known-first-party = ["brain", "config", "dashboard", "infrastructure", "integrations", "spark", "swarm", "timmy", "timmy_serve"]
known-first-party = ["config", "dashboard", "infrastructure", "integrations", "spark", "timmy", "timmy_serve"]

[tool.ruff.lint.per-file-ignores]
"tests/**" = ["S"]

@@ -1,24 +0,0 @@
|
||||
"""Distributed Brain — unified memory and task queue.
|
||||
|
||||
Provides:
|
||||
- **UnifiedMemory** — Single API for all memory operations (local SQLite or rqlite)
|
||||
- **BrainClient** — Direct rqlite interface for distributed operation
|
||||
- **DistributedWorker** — Task execution on Tailscale nodes
|
||||
- **LocalEmbedder** — Sentence-transformer embeddings (local, no cloud)
|
||||
|
||||
Default backend is local SQLite (data/brain.db). Set RQLITE_URL to
|
||||
upgrade to distributed rqlite over Tailscale — same API, replicated.
|
||||
"""
|
||||
|
||||
from brain.client import BrainClient
|
||||
from brain.embeddings import LocalEmbedder
|
||||
from brain.memory import UnifiedMemory, get_memory
|
||||
from brain.worker import DistributedWorker
|
||||
|
||||
__all__ = [
|
||||
"BrainClient",
|
||||
"DistributedWorker",
|
||||
"LocalEmbedder",
|
||||
"UnifiedMemory",
|
||||
"get_memory",
|
||||
]
|
||||
@@ -1,417 +0,0 @@
|
||||
"""Brain client — interface to distributed rqlite memory.
|
||||
|
||||
All devices connect to the local rqlite node, which replicates to peers.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import socket
|
||||
from datetime import datetime
|
||||
from typing import Any
|
||||
|
||||
import httpx
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
DEFAULT_RQLITE_URL = "http://localhost:4001"
|
||||
|
||||
|
||||
class BrainClient:
|
||||
"""Client for distributed brain (rqlite).
|
||||
|
||||
Connects to local rqlite instance, which handles replication.
|
||||
All writes go to leader, reads can come from local node.
|
||||
"""
|
||||
|
||||
def __init__(self, rqlite_url: str | None = None, node_id: str | None = None):
|
||||
from config import settings
|
||||
|
||||
self.rqlite_url = rqlite_url or settings.rqlite_url or DEFAULT_RQLITE_URL
|
||||
self.node_id = node_id or f"{socket.gethostname()}-{os.getpid()}"
|
||||
self.source = self._detect_source()
|
||||
self._client = httpx.AsyncClient(timeout=30)
|
||||
|
||||
def _detect_source(self) -> str:
|
||||
"""Detect what component is using the brain."""
|
||||
# Could be 'timmy', 'zeroclaw', 'worker', etc.
|
||||
# For now, infer from context or env
|
||||
from config import settings
|
||||
|
||||
return settings.brain_source
|
||||
|
||||
# ──────────────────────────────────────────────────────────────────────────
|
||||
# Memory Operations
|
||||
# ──────────────────────────────────────────────────────────────────────────
|
||||
|
||||
    async def remember(
        self,
        content: str,
        tags: list[str] | None = None,
        source: str | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Store a memory with embedding.

        Args:
            content: Text content to remember
            tags: Optional list of tags (e.g., ['shell', 'result'])
            source: Source identifier (defaults to self.source)
            metadata: Additional JSON-serializable metadata

        Returns:
            Dict with 'id' and 'status'
        """
        from brain.embeddings import get_embedder

        embedder = get_embedder()
        embedding_bytes = embedder.encode_single(content)

        query = """
            INSERT INTO memories (content, embedding, source, tags, metadata, created_at)
            VALUES (?, ?, ?, ?, ?, ?)
        """
        params = [
            content,
            embedding_bytes,
            source or self.source,
            json.dumps(tags or []),
            json.dumps(metadata or {}),
            datetime.utcnow().isoformat(),
        ]

        try:
            resp = await self._client.post(f"{self.rqlite_url}/db/execute", json=[query, params])
            resp.raise_for_status()
            result = resp.json()

            # Extract inserted ID
            last_id = None
            if "results" in result and result["results"]:
                last_id = result["results"][0].get("last_insert_id")

            logger.debug(f"Stored memory {last_id}: {content[:50]}...")
            return {"id": last_id, "status": "stored"}

        except Exception as e:
            logger.error(f"Failed to store memory: {e}")
            raise

    async def recall(
        self, query: str, limit: int = 5, sources: list[str] | None = None
    ) -> list[str]:
        """Semantic search for memories.

        Args:
            query: Search query text
            limit: Max results to return
            sources: Filter by source(s) (e.g., ['timmy', 'user'])

        Returns:
            List of memory content strings
        """
        from brain.embeddings import get_embedder

        embedder = get_embedder()
        query_emb = embedder.encode_single(query)

        # rqlite with sqlite-vec extension for vector search
        sql = "SELECT content, source, metadata, distance FROM memories WHERE embedding MATCH ?"
        params = [query_emb]

        if sources:
            placeholders = ",".join(["?"] * len(sources))
            sql += f" AND source IN ({placeholders})"
            params.extend(sources)

        sql += " ORDER BY distance LIMIT ?"
        params.append(limit)

        try:
            resp = await self._client.post(f"{self.rqlite_url}/db/query", json=[sql, params])
            resp.raise_for_status()
            result = resp.json()

            results = []
            if "results" in result and result["results"]:
                for row in result["results"][0].get("rows", []):
                    results.append(
                        {
                            "content": row[0],
                            "source": row[1],
                            "metadata": json.loads(row[2]) if row[2] else {},
                            "distance": row[3],
                        }
                    )

            return results

        except Exception as e:
            logger.error(f"Failed to search memories: {e}")
            # Graceful fallback - return empty list
            return []

    async def get_recent(
        self, hours: int = 24, limit: int = 20, sources: list[str] | None = None
    ) -> list[dict[str, Any]]:
        """Get recent memories by time.

        Args:
            hours: Look back this many hours
            limit: Max results
            sources: Optional source filter

        Returns:
            List of memory dicts
        """
        sql = """
            SELECT id, content, source, tags, metadata, created_at
            FROM memories
            WHERE created_at > datetime('now', ?)
        """
        params = [f"-{hours} hours"]

        if sources:
            placeholders = ",".join(["?"] * len(sources))
            sql += f" AND source IN ({placeholders})"
            params.extend(sources)

        sql += " ORDER BY created_at DESC LIMIT ?"
        params.append(limit)

        try:
            resp = await self._client.post(f"{self.rqlite_url}/db/query", json=[sql, params])
            resp.raise_for_status()
            result = resp.json()

            memories = []
            if "results" in result and result["results"]:
                for row in result["results"][0].get("rows", []):
                    memories.append(
                        {
                            "id": row[0],
                            "content": row[1],
                            "source": row[2],
                            "tags": json.loads(row[3]) if row[3] else [],
                            "metadata": json.loads(row[4]) if row[4] else {},
                            "created_at": row[5],
                        }
                    )

            return memories

        except Exception as e:
            logger.error(f"Failed to get recent memories: {e}")
            return []

    async def get_context(self, query: str) -> str:
        """Get formatted context for system prompt.

        Combines recent memories + relevant memories.

        Args:
            query: Current user query to find relevant context

        Returns:
            Formatted context string for prompt injection
        """
        recent = await self.get_recent(hours=24, limit=10)
        relevant = await self.recall(query, limit=5)

        lines = ["Recent activity:"]
        for m in recent[:5]:
            lines.append(f"- {m['content'][:100]}")

        lines.append("\nRelevant memories:")
        for r in relevant[:5]:
            lines.append(f"- {r['content'][:100]}")

        return "\n".join(lines)

    # ──────────────────────────────────────────────────────────────────────────
    # Task Queue Operations
    # ──────────────────────────────────────────────────────────────────────────

    async def submit_task(
        self,
        content: str,
        task_type: str = "general",
        priority: int = 0,
        metadata: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Submit a task to the distributed queue.

        Args:
            content: Task description/prompt
            task_type: Type of task (shell, creative, code, research, general)
            priority: Higher = processed first
            metadata: Additional task data

        Returns:
            Dict with task 'id'
        """
        query = """
            INSERT INTO tasks (content, task_type, priority, status, metadata, created_at)
            VALUES (?, ?, ?, 'pending', ?, ?)
        """
        params = [
            content,
            task_type,
            priority,
            json.dumps(metadata or {}),
            datetime.utcnow().isoformat(),
        ]

        try:
            resp = await self._client.post(f"{self.rqlite_url}/db/execute", json=[query, params])
            resp.raise_for_status()
            result = resp.json()

            last_id = None
            if "results" in result and result["results"]:
                last_id = result["results"][0].get("last_insert_id")

            logger.info(f"Submitted task {last_id}: {content[:50]}...")
            return {"id": last_id, "status": "queued"}

        except Exception as e:
            logger.error(f"Failed to submit task: {e}")
            raise

    async def claim_task(
        self, capabilities: list[str], node_id: str | None = None
    ) -> dict[str, Any] | None:
        """Atomically claim next available task.

        Uses UPDATE ... RETURNING pattern for atomic claim.

        Args:
            capabilities: List of capabilities this node has
            node_id: Identifier for claiming node

        Returns:
            Task dict or None if no tasks available
        """
        claimer = node_id or self.node_id

        # Try to claim a matching task atomically
        # This works because rqlite uses Raft consensus - only one node wins
        placeholders = ",".join(["?"] * len(capabilities))

        query = f"""
            UPDATE tasks
            SET status = 'claimed',
                claimed_by = ?,
                claimed_at = ?
            WHERE id = (
                SELECT id FROM tasks
                WHERE status = 'pending'
                AND (task_type IN ({placeholders}) OR task_type = 'general')
                ORDER BY priority DESC, created_at ASC
                LIMIT 1
            )
            AND status = 'pending'
            RETURNING id, content, task_type, priority, metadata
        """
        params = [claimer, datetime.utcnow().isoformat()] + capabilities

        try:
            resp = await self._client.post(f"{self.rqlite_url}/db/execute", json=[query, params])
            resp.raise_for_status()
            result = resp.json()

            if "results" in result and result["results"]:
                rows = result["results"][0].get("rows", [])
                if rows:
                    row = rows[0]
                    return {
                        "id": row[0],
                        "content": row[1],
                        "type": row[2],
                        "priority": row[3],
                        "metadata": json.loads(row[4]) if row[4] else {},
                    }

            return None

        except Exception as e:
            logger.error(f"Failed to claim task: {e}")
            return None

    async def complete_task(
        self, task_id: int, success: bool, result: str | None = None, error: str | None = None
    ) -> None:
        """Mark task as completed or failed.

        Args:
            task_id: Task ID
            success: True if task succeeded
            result: Task result/output
            error: Error message if failed
        """
        status = "done" if success else "failed"

        query = """
            UPDATE tasks
            SET status = ?,
                result = ?,
                error = ?,
                completed_at = ?
            WHERE id = ?
        """
        params = [status, result, error, datetime.utcnow().isoformat(), task_id]

        try:
            await self._client.post(f"{self.rqlite_url}/db/execute", json=[query, params])
            logger.debug(f"Task {task_id} marked {status}")

        except Exception as e:
            logger.error(f"Failed to complete task {task_id}: {e}")

    async def get_pending_tasks(self, limit: int = 100) -> list[dict[str, Any]]:
        """Get list of pending tasks (for dashboard/monitoring).

        Args:
            limit: Max tasks to return

        Returns:
            List of pending task dicts
        """
        sql = """
            SELECT id, content, task_type, priority, metadata, created_at
            FROM tasks
            WHERE status = 'pending'
            ORDER BY priority DESC, created_at ASC
            LIMIT ?
        """

        try:
            resp = await self._client.post(f"{self.rqlite_url}/db/query", json=[sql, [limit]])
            resp.raise_for_status()
            result = resp.json()

            tasks = []
            if "results" in result and result["results"]:
                for row in result["results"][0].get("rows", []):
                    tasks.append(
                        {
                            "id": row[0],
                            "content": row[1],
                            "type": row[2],
                            "priority": row[3],
                            "metadata": json.loads(row[4]) if row[4] else {},
                            "created_at": row[5],
                        }
                    )

            return tasks

        except Exception as e:
            logger.error(f"Failed to get pending tasks: {e}")
            return []

    async def close(self):
        """Close HTTP client."""
        await self._client.aclose()
@@ -1,90 +0,0 @@
|
||||
"""Local embeddings using sentence-transformers.
|
||||
|
||||
No OpenAI dependency. Runs 100% locally on CPU.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Model cache
|
||||
_model = None
|
||||
_model_name = "all-MiniLM-L6-v2"
|
||||
_dimensions = 384
|
||||
|
||||
|
||||
class LocalEmbedder:
|
||||
"""Local sentence transformer for embeddings.
|
||||
|
||||
Uses all-MiniLM-L6-v2 (80MB download, runs on CPU).
|
||||
384-dimensional embeddings, good enough for semantic search.
|
||||
"""
|
||||
|
||||
def __init__(self, model_name: str = _model_name):
|
||||
self.model_name = model_name
|
||||
self._model = None
|
||||
self._dimensions = _dimensions
|
||||
|
||||
def _load_model(self):
|
||||
"""Lazy load the model."""
|
||||
global _model
|
||||
if _model is not None:
|
||||
self._model = _model
|
||||
return
|
||||
|
||||
try:
|
||||
from sentence_transformers import SentenceTransformer
|
||||
|
||||
logger.info(f"Loading embedding model: {self.model_name}")
|
||||
_model = SentenceTransformer(self.model_name)
|
||||
self._model = _model
|
||||
logger.info(f"Embedding model loaded ({self._dimensions} dims)")
|
||||
except ImportError:
|
||||
logger.error(
|
||||
"sentence-transformers not installed. Run: pip install sentence-transformers"
|
||||
)
|
||||
raise
|
||||
|
||||
def encode(self, text: str | list[str]):
|
||||
"""Encode text to embedding vector(s).
|
||||
|
||||
Args:
|
||||
text: String or list of strings to encode
|
||||
|
||||
Returns:
|
||||
Numpy array of shape (dims,) for single string or (n, dims) for list
|
||||
"""
|
||||
if self._model is None:
|
||||
self._load_model()
|
||||
|
||||
# Normalize embeddings for cosine similarity
|
||||
return self._model.encode(text, normalize_embeddings=True)
|
||||
|
||||
def encode_single(self, text: str) -> bytes:
|
||||
"""Encode single text to bytes for SQLite storage.
|
||||
|
||||
Returns:
|
||||
Float32 bytes
|
||||
"""
|
||||
import numpy as np
|
||||
|
||||
embedding = self.encode(text)
|
||||
if len(embedding.shape) > 1:
|
||||
embedding = embedding[0]
|
||||
return embedding.astype(np.float32).tobytes()
|
||||
|
||||
def similarity(self, a, b) -> float:
|
||||
"""Compute cosine similarity between two vectors.
|
||||
|
||||
Vectors should already be normalized from encode().
|
||||
"""
|
||||
import numpy as np
|
||||
|
||||
return float(np.dot(a, b))
|
||||
|
||||
|
||||
def get_embedder() -> LocalEmbedder:
|
||||
"""Get singleton embedder instance."""
|
||||
return LocalEmbedder()
|
||||
@@ -1,677 +0,0 @@
|
||||
"""Unified memory interface (DEPRECATED).
|
||||
|
||||
New code should use ``timmy.memory.unified`` and the tools in
|
||||
``timmy.semantic_memory`` (memory_write, memory_read, memory_search,
|
||||
memory_forget). This module is retained for backward compatibility
|
||||
and the loop-QA self-test probes.
|
||||
|
||||
One API, two backends:
|
||||
- **Local SQLite** (default) — works immediately, no setup
|
||||
- **Distributed rqlite** — same API, replicated across Tailscale devices
|
||||
|
||||
Every module that needs to store or recall memory uses this interface.
|
||||
|
||||
Usage:
|
||||
from brain.memory import UnifiedMemory
|
||||
|
||||
memory = UnifiedMemory() # auto-detects backend
|
||||
|
||||
# Store
|
||||
await memory.remember("User prefers dark mode", tags=["preference"])
|
||||
|
||||
# Recall
|
||||
results = await memory.recall("what does the user prefer?")
|
||||
|
||||
# Facts
|
||||
await memory.store_fact("user_preference", "Prefers dark mode")
|
||||
facts = await memory.get_facts("user_preference")
|
||||
|
||||
# Context for prompt
|
||||
context = await memory.get_context("current user question")
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import sqlite3
|
||||
import uuid
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Default paths
|
||||
_PROJECT_ROOT = Path(__file__).parent.parent.parent
|
||||
_DEFAULT_DB_PATH = _PROJECT_ROOT / "data" / "brain.db"
|
||||
|
||||
# Schema version for migrations
|
||||
_SCHEMA_VERSION = 1
|
||||
|
||||
|
||||
def _get_db_path() -> Path:
|
||||
"""Get the brain database path from env or default."""
|
||||
from config import settings
|
||||
|
||||
if settings.brain_db_path:
|
||||
return Path(settings.brain_db_path)
|
||||
return _DEFAULT_DB_PATH
|
||||
|
||||
|
||||
class UnifiedMemory:
|
||||
"""Unified memory interface.
|
||||
|
||||
Provides a single API for all memory operations. Defaults to local
|
||||
SQLite. When rqlite is available (detected via RQLITE_URL env var),
|
||||
delegates to BrainClient for distributed operation.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
db_path: Path | None = None,
|
||||
source: str = "default",
|
||||
use_rqlite: bool | None = None,
|
||||
):
|
||||
self.db_path = db_path or _get_db_path()
|
||||
self.source = source
|
||||
self._embedder = None
|
||||
self._rqlite_client = None
|
||||
|
||||
# Auto-detect: use rqlite if RQLITE_URL is set, otherwise local SQLite
|
||||
if use_rqlite is None:
|
||||
from config import settings as _settings
|
||||
|
||||
use_rqlite = bool(_settings.rqlite_url)
|
||||
self._use_rqlite = use_rqlite
|
||||
|
||||
if not self._use_rqlite:
|
||||
self._init_local_db()
|
||||
|
||||
# ──────────────────────────────────────────────────────────────────────
|
||||
# Local SQLite Setup
|
||||
# ──────────────────────────────────────────────────────────────────────
|
||||
|
||||
def _init_local_db(self) -> None:
|
||||
"""Initialize local SQLite database with schema."""
|
||||
self.db_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
conn = sqlite3.connect(str(self.db_path))
|
||||
try:
|
||||
conn.execute("PRAGMA journal_mode=WAL")
|
||||
conn.execute("PRAGMA busy_timeout=5000")
|
||||
conn.executescript(_LOCAL_SCHEMA)
|
||||
conn.commit()
|
||||
logger.info("Brain local DB initialized at %s (WAL mode)", self.db_path)
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
def _get_conn(self) -> sqlite3.Connection:
|
||||
"""Get a SQLite connection with WAL mode and busy timeout."""
|
||||
conn = sqlite3.connect(str(self.db_path))
|
||||
conn.row_factory = sqlite3.Row
|
||||
conn.execute("PRAGMA busy_timeout=5000")
|
||||
return conn
|
||||
|
||||
def _get_embedder(self):
|
||||
"""Lazy-load the embedding model."""
|
||||
if self._embedder is None:
|
||||
from config import settings as _settings
|
||||
|
||||
if _settings.timmy_skip_embeddings:
|
||||
return None
|
||||
try:
|
||||
from brain.embeddings import LocalEmbedder
|
||||
|
||||
self._embedder = LocalEmbedder()
|
||||
except ImportError:
|
||||
logger.warning("sentence-transformers not available — semantic search disabled")
|
||||
self._embedder = None
|
||||
return self._embedder
|
||||
|
||||
# ──────────────────────────────────────────────────────────────────────
|
||||
# rqlite Delegation
|
||||
# ──────────────────────────────────────────────────────────────────────
|
||||
|
||||
def _get_rqlite_client(self):
|
||||
"""Lazy-load the rqlite BrainClient."""
|
||||
if self._rqlite_client is None:
|
||||
from brain.client import BrainClient
|
||||
|
||||
self._rqlite_client = BrainClient()
|
||||
return self._rqlite_client
|
||||
|
||||
# ──────────────────────────────────────────────────────────────────────
|
||||
# Core Memory Operations
|
||||
# ──────────────────────────────────────────────────────────────────────
|
||||
|
||||
    async def remember(
        self,
        content: str,
        tags: list[str] | None = None,
        source: str | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Store a memory.

        Args:
            content: Text content to remember.
            tags: Optional list of tags for categorization.
            source: Source identifier (defaults to self.source).
            metadata: Additional JSON-serializable metadata.

        Returns:
            Dict with 'id' and 'status'.
        """
        if self._use_rqlite:
            client = self._get_rqlite_client()
            return await client.remember(content, tags, source or self.source, metadata)

        return self.remember_sync(content, tags, source, metadata)

    def remember_sync(
        self,
        content: str,
        tags: list[str] | None = None,
        source: str | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Store a memory (synchronous, local SQLite only).

        Args:
            content: Text content to remember.
            tags: Optional list of tags.
            source: Source identifier.
            metadata: Additional metadata.

        Returns:
            Dict with 'id' and 'status'.
        """
        now = datetime.now(UTC).isoformat()
        embedding_bytes = None

        embedder = self._get_embedder()
        if embedder is not None:
            try:
                embedding_bytes = embedder.encode_single(content)
            except Exception as e:
                logger.warning("Embedding failed, storing without vector: %s", e)

        conn = self._get_conn()
        try:
            cursor = conn.execute(
                """INSERT INTO memories (content, embedding, source, tags, metadata, created_at)
                VALUES (?, ?, ?, ?, ?, ?)""",
                (
                    content,
                    embedding_bytes,
                    source or self.source,
                    json.dumps(tags or []),
                    json.dumps(metadata or {}),
                    now,
                ),
            )
            conn.commit()
            memory_id = cursor.lastrowid
            logger.debug("Stored memory %s: %s", memory_id, content[:50])
            return {"id": memory_id, "status": "stored"}
        finally:
            conn.close()

    async def recall(
        self,
        query: str,
        limit: int = 5,
        sources: list[str] | None = None,
    ) -> list[dict[str, Any]]:
        """Semantic search for memories.

        If embeddings are available, uses cosine similarity.
        Falls back to keyword search if no embedder.

        Args:
            query: Search query text.
            limit: Max results to return.
            sources: Filter by source(s).

        Returns:
            List of memory dicts with 'content', 'source', 'score'.
        """
        if self._use_rqlite:
            client = self._get_rqlite_client()
            return await client.recall(query, limit, sources)

        return self.recall_sync(query, limit, sources)

    def recall_sync(
        self,
        query: str,
        limit: int = 5,
        sources: list[str] | None = None,
    ) -> list[dict[str, Any]]:
        """Semantic search (synchronous, local SQLite).

        Uses numpy dot product for cosine similarity when embeddings
        are available. Falls back to LIKE-based keyword search.
        """
        embedder = self._get_embedder()

        if embedder is not None:
            return self._recall_semantic(query, limit, sources, embedder)
        return self._recall_keyword(query, limit, sources)

    def _recall_semantic(
        self,
        query: str,
        limit: int,
        sources: list[str] | None,
        embedder,
    ) -> list[dict[str, Any]]:
        """Vector similarity search over local SQLite."""
        import numpy as np

        try:
            query_vec = embedder.encode(query)
            if len(query_vec.shape) > 1:
                query_vec = query_vec[0]
        except Exception as e:
            logger.warning("Query embedding failed, falling back to keyword: %s", e)
            return self._recall_keyword(query, limit, sources)

        conn = self._get_conn()
        try:
            sql = "SELECT id, content, embedding, source, tags, metadata, created_at FROM memories WHERE embedding IS NOT NULL"
            params: list = []

            if sources:
                placeholders = ",".join(["?"] * len(sources))
                sql += f" AND source IN ({placeholders})"
                params.extend(sources)

            rows = conn.execute(sql, params).fetchall()

            # Compute similarities
            scored = []
            for row in rows:
                try:
                    stored_vec = np.frombuffer(row["embedding"], dtype=np.float32)
                    score = float(np.dot(query_vec, stored_vec))
                    scored.append((score, row))
                except Exception:
                    continue

            # Sort by similarity (highest first)
            scored.sort(key=lambda x: x[0], reverse=True)

            results = []
            for score, row in scored[:limit]:
                results.append(
                    {
                        "id": row["id"],
                        "content": row["content"],
                        "source": row["source"],
                        "tags": json.loads(row["tags"]) if row["tags"] else [],
                        "metadata": json.loads(row["metadata"]) if row["metadata"] else {},
                        "score": score,
                        "created_at": row["created_at"],
                    }
                )

            return results
        finally:
            conn.close()

    def _recall_keyword(
        self,
        query: str,
        limit: int,
        sources: list[str] | None,
    ) -> list[dict[str, Any]]:
        """Keyword-based fallback search."""
        conn = self._get_conn()
        try:
            sql = "SELECT id, content, source, tags, metadata, created_at FROM memories WHERE content LIKE ?"
            params: list = [f"%{query}%"]

            if sources:
                placeholders = ",".join(["?"] * len(sources))
                sql += f" AND source IN ({placeholders})"
                params.extend(sources)

            sql += " ORDER BY created_at DESC LIMIT ?"
            params.append(limit)

            rows = conn.execute(sql, params).fetchall()

            return [
                {
                    "id": row["id"],
                    "content": row["content"],
                    "source": row["source"],
                    "tags": json.loads(row["tags"]) if row["tags"] else [],
                    "metadata": json.loads(row["metadata"]) if row["metadata"] else {},
                    "score": 0.5,  # Keyword match gets a neutral score
                    "created_at": row["created_at"],
||||
}
|
||||
for row in rows
|
||||
]
|
||||
finally:
|
||||
conn.close()
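The keyword fallback builds its SQL incrementally with `?` placeholders rather than string interpolation of user input. A self-contained sketch of the same pattern against an in-memory database (the table and rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE memories (content TEXT, source TEXT)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [("likes rust", "agent"), ("prefers tea", "user"), ("rust crate tips", "user")],
)

query, sources = "rust", ["user"]
sql = "SELECT content FROM memories WHERE content LIKE ?"
params: list = [f"%{query}%"]
if sources:
    # One placeholder per source value; values travel in params, not in the SQL.
    placeholders = ",".join(["?"] * len(sources))
    sql += f" AND source IN ({placeholders})"
    params.extend(sources)

rows = conn.execute(sql, params).fetchall()
print([r["content"] for r in rows])  # ['rust crate tips']
```

Only the placeholder count is interpolated into the SQL string; every value goes through the driver, so the pattern stays injection-safe.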

    # ──────────────────────────────────────────────────────────────────────
    # Fact Storage (Long-Term Memory)
    # ──────────────────────────────────────────────────────────────────────

    async def store_fact(
        self,
        category: str,
        content: str,
        confidence: float = 0.8,
        source: str = "extracted",
    ) -> dict[str, Any]:
        """Store a long-term fact.

        Args:
            category: Fact category (user_preference, user_fact, learned_pattern).
            content: The fact text.
            confidence: Confidence score 0.0-1.0.
            source: Where this fact came from.

        Returns:
            Dict with 'id' and 'status'.
        """
        return self.store_fact_sync(category, content, confidence, source)

    def store_fact_sync(
        self,
        category: str,
        content: str,
        confidence: float = 0.8,
        source: str = "extracted",
    ) -> dict[str, Any]:
        """Store a long-term fact (synchronous)."""
        fact_id = str(uuid.uuid4())
        now = datetime.now(UTC).isoformat()

        conn = self._get_conn()
        try:
            conn.execute(
                """INSERT INTO facts (id, category, content, confidence, source, created_at, last_accessed, access_count)
                VALUES (?, ?, ?, ?, ?, ?, ?, 0)""",
                (fact_id, category, content, confidence, source, now, now),
            )
            conn.commit()
            logger.debug("Stored fact [%s]: %s", category, content[:50])
            return {"id": fact_id, "status": "stored"}
        finally:
            conn.close()

    async def get_facts(
        self,
        category: str | None = None,
        query: str | None = None,
        limit: int = 10,
    ) -> list[dict[str, Any]]:
        """Retrieve facts from long-term memory.

        Args:
            category: Filter by category.
            query: Keyword search within facts.
            limit: Max results.

        Returns:
            List of fact dicts.
        """
        return self.get_facts_sync(category, query, limit)

    def get_facts_sync(
        self,
        category: str | None = None,
        query: str | None = None,
        limit: int = 10,
    ) -> list[dict[str, Any]]:
        """Retrieve facts (synchronous)."""
        conn = self._get_conn()
        try:
            conditions = []
            params: list = []

            if category:
                conditions.append("category = ?")
                params.append(category)
            if query:
                conditions.append("content LIKE ?")
                params.append(f"%{query}%")

            where = " AND ".join(conditions) if conditions else "1=1"
            sql = f"""SELECT id, category, content, confidence, source, created_at, last_accessed, access_count
                FROM facts WHERE {where}
                ORDER BY confidence DESC, last_accessed DESC
                LIMIT ?"""
            params.append(limit)

            rows = conn.execute(sql, params).fetchall()

            # Update access counts
            for row in rows:
                conn.execute(
                    "UPDATE facts SET access_count = access_count + 1, last_accessed = ? WHERE id = ?",
                    (datetime.now(UTC).isoformat(), row["id"]),
                )
            conn.commit()

            return [
                {
                    "id": row["id"],
                    "category": row["category"],
                    "content": row["content"],
                    "confidence": row["confidence"],
                    "source": row["source"],
                    "created_at": row["created_at"],
                    "access_count": row["access_count"],
                }
                for row in rows
            ]
        finally:
            conn.close()
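`get_facts_sync` ranks by confidence first and recency second, so a high-confidence old fact beats a low-confidence new one. A self-contained sketch of that ordering against an in-memory table (rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE facts (content TEXT, confidence REAL, last_accessed TEXT)")
conn.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [
        ("user prefers dark mode", 0.9, "2025-01-02"),
        ("user may like jazz", 0.4, "2025-01-03"),
        ("user runs Linux", 0.8, "2025-01-01"),
    ],
)
# confidence DESC dominates; last_accessed DESC only breaks ties.
rows = conn.execute(
    "SELECT content FROM facts ORDER BY confidence DESC, last_accessed DESC LIMIT 2"
).fetchall()
print([r["content"] for r in rows])  # ['user prefers dark mode', 'user runs Linux']
```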

    # ──────────────────────────────────────────────────────────────────────
    # Recent Memories
    # ──────────────────────────────────────────────────────────────────────

    async def get_recent(
        self,
        hours: int = 24,
        limit: int = 20,
        sources: list[str] | None = None,
    ) -> list[dict[str, Any]]:
        """Get recent memories by time."""
        if self._use_rqlite:
            client = self._get_rqlite_client()
            return await client.get_recent(hours, limit, sources)

        return self.get_recent_sync(hours, limit, sources)

    def get_recent_sync(
        self,
        hours: int = 24,
        limit: int = 20,
        sources: list[str] | None = None,
    ) -> list[dict[str, Any]]:
        """Get recent memories (synchronous)."""
        conn = self._get_conn()
        try:
            sql = """SELECT id, content, source, tags, metadata, created_at
                FROM memories
                WHERE created_at > datetime('now', ?)"""
            params: list = [f"-{hours} hours"]

            if sources:
                placeholders = ",".join(["?"] * len(sources))
                sql += f" AND source IN ({placeholders})"
                params.extend(sources)

            sql += " ORDER BY created_at DESC LIMIT ?"
            params.append(limit)

            rows = conn.execute(sql, params).fetchall()

            return [
                {
                    "id": row["id"],
                    "content": row["content"],
                    "source": row["source"],
                    "tags": json.loads(row["tags"]) if row["tags"] else [],
                    "metadata": json.loads(row["metadata"]) if row["metadata"] else {},
                    "created_at": row["created_at"],
                }
                for row in rows
            ]
        finally:
            conn.close()

    # ──────────────────────────────────────────────────────────────────────
    # Identity
    # ──────────────────────────────────────────────────────────────────────

    def get_identity(self) -> str:
        """Return empty string — identity system removed."""
        return ""

    def get_identity_for_prompt(self) -> str:
        """Return empty string — identity system removed."""
        return ""

    # ──────────────────────────────────────────────────────────────────────
    # Context Building
    # ──────────────────────────────────────────────────────────────────────

    async def get_context(self, query: str) -> str:
        """Build formatted context for system prompt.

        Combines recent memories + relevant memories.

        Args:
            query: Current user query for relevance matching.

        Returns:
            Formatted context string for prompt injection.
        """
        parts = []

        # Recent activity
        recent = await self.get_recent(hours=24, limit=5)
        if recent:
            lines = ["## Recent Activity"]
            for m in recent:
                lines.append(f"- {m['content'][:100]}")
            parts.append("\n".join(lines))

        # Relevant memories
        relevant = await self.recall(query, limit=5)
        if relevant:
            lines = ["## Relevant Memories"]
            for r in relevant:
                score = r.get("score", 0)
                lines.append(f"- [{score:.2f}] {r['content'][:100]}")
            parts.append("\n".join(lines))

        return "\n\n---\n\n".join(parts)

    # ──────────────────────────────────────────────────────────────────────
    # Stats
    # ──────────────────────────────────────────────────────────────────────

    def get_stats(self) -> dict[str, Any]:
        """Get memory statistics.

        Returns:
            Dict with memory_count, fact_count, db_size_bytes, etc.
        """
        conn = self._get_conn()
        try:
            memory_count = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
            fact_count = conn.execute("SELECT COUNT(*) FROM facts").fetchone()[0]
            embedded_count = conn.execute(
                "SELECT COUNT(*) FROM memories WHERE embedding IS NOT NULL"
            ).fetchone()[0]

            db_size = self.db_path.stat().st_size if self.db_path.exists() else 0

            return {
                "memory_count": memory_count,
                "fact_count": fact_count,
                "embedded_count": embedded_count,
                "db_size_bytes": db_size,
                "backend": "rqlite" if self._use_rqlite else "local_sqlite",
                "db_path": str(self.db_path),
            }
        finally:
            conn.close()


# ──────────────────────────────────────────────────────────────────────────
# Module-level convenience
# ──────────────────────────────────────────────────────────────────────────

_default_memory: UnifiedMemory | None = None


def get_memory(source: str = "agent") -> UnifiedMemory:
    """Get the singleton UnifiedMemory instance.

    Args:
        source: Source identifier for this caller.

    Returns:
        UnifiedMemory instance.
    """
    global _default_memory
    if _default_memory is None:
        _default_memory = UnifiedMemory(source=source)
    return _default_memory
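One subtlety of `get_memory`: the `source` argument is honored only by whichever caller happens to construct the singleton first; later callers get the existing instance unchanged. A stripped-down model of that behavior (the class stub here is hypothetical, standing in for `UnifiedMemory`):

```python
class UnifiedMemory:
    def __init__(self, source: str = "agent"):
        self.source = source

_default_memory = None

def get_memory(source: str = "agent"):
    global _default_memory
    if _default_memory is None:
        _default_memory = UnifiedMemory(source=source)
    return _default_memory

a = get_memory(source="agent")
b = get_memory(source="dashboard")  # source silently ignored: singleton exists
print(a is b, a.source)  # True agent
```

Callers that need per-caller attribution should pass `source` per operation (as `remember`/`store_fact` allow) rather than relying on the constructor argument.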


# ──────────────────────────────────────────────────────────────────────────
# Local SQLite Schema
# ──────────────────────────────────────────────────────────────────────────

_LOCAL_SCHEMA = """
-- Unified memory table (replaces vector_store, semantic_memory, etc.)
CREATE TABLE IF NOT EXISTS memories (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    content TEXT NOT NULL,
    embedding BLOB,
    source TEXT DEFAULT 'agent',
    tags TEXT DEFAULT '[]',
    metadata TEXT DEFAULT '{}',
    created_at TEXT NOT NULL
);

-- Long-term facts (replaces memory_layers LongTermMemory)
CREATE TABLE IF NOT EXISTS facts (
    id TEXT PRIMARY KEY,
    category TEXT NOT NULL,
    content TEXT NOT NULL,
    confidence REAL NOT NULL DEFAULT 0.5,
    source TEXT DEFAULT 'extracted',
    created_at TEXT NOT NULL,
    last_accessed TEXT NOT NULL,
    access_count INTEGER DEFAULT 0
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_memories_source ON memories(source);
CREATE INDEX IF NOT EXISTS idx_memories_created ON memories(created_at);
CREATE INDEX IF NOT EXISTS idx_facts_category ON facts(category);
CREATE INDEX IF NOT EXISTS idx_facts_confidence ON facts(confidence);

-- Schema version
CREATE TABLE IF NOT EXISTS brain_schema_version (
    version INTEGER PRIMARY KEY,
    applied_at TEXT
);

INSERT OR REPLACE INTO brain_schema_version (version, applied_at)
VALUES (1, datetime('now'));
"""

@@ -1,96 +0,0 @@
"""Database schema for distributed brain.

SQL to initialize rqlite with memories and tasks tables.
"""

# Schema version for migrations
SCHEMA_VERSION = 1

INIT_SQL = """
-- Note: sqlite-vec extensions must be loaded programmatically
-- via conn.load_extension("vector0") / conn.load_extension("vec0")
-- before executing this schema. Dot-commands are CLI-only.

-- Memories table with vector search
CREATE TABLE IF NOT EXISTS memories (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    embedding BLOB,       -- 384-dim float32 array (normalized)
    source TEXT,          -- 'timmy', 'zeroclaw', 'worker', 'user'
    tags TEXT,            -- JSON array
    metadata TEXT,        -- JSON object
    created_at TEXT       -- ISO8601
);

-- Tasks table (distributed queue)
CREATE TABLE IF NOT EXISTS tasks (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    task_type TEXT DEFAULT 'general',  -- shell, creative, code, research, general
    priority INTEGER DEFAULT 0,        -- Higher = process first
    status TEXT DEFAULT 'pending',     -- pending, claimed, done, failed
    claimed_by TEXT,                   -- Node ID
    claimed_at TEXT,
    result TEXT,
    error TEXT,
    metadata TEXT,                     -- JSON
    created_at TEXT,
    completed_at TEXT
);

-- Node registry (who's online)
CREATE TABLE IF NOT EXISTS nodes (
    node_id TEXT PRIMARY KEY,
    capabilities TEXT,    -- JSON array
    last_seen TEXT,       -- ISO8601
    load_average REAL
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_memories_source ON memories(source);
CREATE INDEX IF NOT EXISTS idx_memories_created ON memories(created_at);
CREATE INDEX IF NOT EXISTS idx_tasks_status_priority ON tasks(status, priority DESC);
CREATE INDEX IF NOT EXISTS idx_tasks_claimed ON tasks(claimed_by, status);
CREATE INDEX IF NOT EXISTS idx_tasks_type ON tasks(task_type);

-- Virtual table for vector search (if using sqlite-vec)
-- Note: This requires sqlite-vec extension loaded
CREATE VIRTUAL TABLE IF NOT EXISTS vec_memories USING vec0(
    embedding float[384]
);

-- Schema version tracking
CREATE TABLE IF NOT EXISTS schema_version (
    version INTEGER PRIMARY KEY,
    applied_at TEXT
);

INSERT OR REPLACE INTO schema_version (version, applied_at)
VALUES (1, datetime('now'));
"""

MIGRATIONS = {
    # Future migrations go here
    # 2: "ALTER TABLE ...",
}


def get_init_sql() -> str:
    """Get SQL to initialize fresh database."""
    return INIT_SQL


def get_migration_sql(from_version: int, to_version: int) -> str:
    """Get SQL to migrate between versions."""
    if to_version <= from_version:
        return ""

    sql_parts = []
    for v in range(from_version + 1, to_version + 1):
        if v in MIGRATIONS:
            sql_parts.append(MIGRATIONS[v])
            sql_parts.append(
                f"UPDATE schema_version SET version = {v}, applied_at = datetime('now');"
            )

    return "\n".join(sql_parts)

@@ -1,359 +0,0 @@
"""Distributed Worker — continuously processes tasks from the brain queue.

Each device runs a worker that claims and executes tasks based on capabilities.
"""

from __future__ import annotations

import asyncio
import json
import logging
import os
import socket
import subprocess
from collections.abc import Callable
from typing import Any

from brain.client import BrainClient

logger = logging.getLogger(__name__)


class DistributedWorker:
    """Continuous task processor for the distributed brain.

    Runs on every device, claims tasks matching its capabilities,
    executes them immediately, stores results.
    """

    def __init__(self, brain_client: BrainClient | None = None):
        self.brain = brain_client or BrainClient()
        self.node_id = f"{socket.gethostname()}-{os.getpid()}"
        self.capabilities = self._detect_capabilities()
        self.running = False
        self._handlers: dict[str, Callable] = {}
        self._register_default_handlers()

    def _detect_capabilities(self) -> list[str]:
        """Detect what this node can do."""
        caps = ["general", "shell", "file_ops", "git"]

        # Check for GPU
        if self._has_gpu():
            caps.append("gpu")
            caps.append("creative")
            caps.append("image_gen")
            caps.append("video_gen")

        # Check for internet
        if self._has_internet():
            caps.append("web")
            caps.append("research")

        # Check memory
        mem_gb = self._get_memory_gb()
        if mem_gb > 16:
            caps.append("large_model")
        if mem_gb > 32:
            caps.append("huge_model")

        # Check for specific tools
        if self._has_command("ollama"):
            caps.append("ollama")
        if self._has_command("docker"):
            caps.append("docker")
        if self._has_command("cargo"):
            caps.append("rust")

        logger.info(f"Worker capabilities: {caps}")
        return caps

    def _has_gpu(self) -> bool:
        """Check for NVIDIA or AMD GPU."""
        try:
            # Check for nvidia-smi
            result = subprocess.run(["nvidia-smi"], capture_output=True, timeout=5)
            if result.returncode == 0:
                return True
        except (OSError, subprocess.SubprocessError):
            pass

        # Check for ROCm
        if os.path.exists("/opt/rocm"):
            return True

        # Check for Apple Silicon Metal
        if os.uname().sysname == "Darwin":
            try:
                result = subprocess.run(
                    ["system_profiler", "SPDisplaysDataType"],
                    capture_output=True,
                    text=True,
                    timeout=5,
                )
                if "Metal" in result.stdout:
                    return True
            except (OSError, subprocess.SubprocessError):
                pass

        return False

    def _has_internet(self) -> bool:
        """Check if we have internet connectivity."""
        try:
            result = subprocess.run(
                ["curl", "-s", "--max-time", "3", "https://1.1.1.1"], capture_output=True, timeout=5
            )
            return result.returncode == 0
        except (OSError, subprocess.SubprocessError):
            return False

    def _get_memory_gb(self) -> float:
        """Get total system memory in GB."""
        try:
            if os.uname().sysname == "Darwin":
                result = subprocess.run(
                    ["sysctl", "-n", "hw.memsize"], capture_output=True, text=True
                )
                bytes_mem = int(result.stdout.strip())
                return bytes_mem / (1024**3)
            else:
                with open("/proc/meminfo") as f:
                    for line in f:
                        if line.startswith("MemTotal:"):
                            kb = int(line.split()[1])
                            return kb / (1024**2)
        except (OSError, ValueError):
            pass
        return 8.0  # Assume 8GB if we can't detect

    def _has_command(self, cmd: str) -> bool:
        """Check if command exists."""
        try:
            result = subprocess.run(["which", cmd], capture_output=True, timeout=5)
            return result.returncode == 0
        except (OSError, subprocess.SubprocessError):
            return False

    def _register_default_handlers(self):
        """Register built-in task handlers."""
        self._handlers = {
            "shell": self._handle_shell,
            "creative": self._handle_creative,
            "code": self._handle_code,
            "research": self._handle_research,
            "general": self._handle_general,
        }

    def register_handler(self, task_type: str, handler: Callable[[str], Any]):
        """Register a custom task handler.

        Args:
            task_type: Type of task this handler handles
            handler: Async function that takes task content and returns result
        """
        self._handlers[task_type] = handler
        if task_type not in self.capabilities:
            self.capabilities.append(task_type)

    # ──────────────────────────────────────────────────────────────────────────
    # Task Handlers
    # ──────────────────────────────────────────────────────────────────────────

    async def _handle_shell(self, command: str) -> str:
        """Execute shell command via ZeroClaw or direct subprocess."""
        # Try ZeroClaw first if available
        if self._has_command("zeroclaw"):
            proc = await asyncio.create_subprocess_shell(
                f"zeroclaw exec --json '{command}'",
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
            )
            stdout, stderr = await proc.communicate()

            # Store result in brain
            await self.brain.remember(
                content=f"Shell: {command}\nOutput: {stdout.decode()}",
                tags=["shell", "result"],
                source=self.node_id,
                metadata={"command": command, "exit_code": proc.returncode},
            )

            if proc.returncode != 0:
                raise Exception(f"Command failed: {stderr.decode()}")
            return stdout.decode()

        # Fallback to direct subprocess (less safe)
        proc = await asyncio.create_subprocess_shell(
            command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        stdout, stderr = await proc.communicate()

        if proc.returncode != 0:
            raise Exception(f"Command failed: {stderr.decode()}")
        return stdout.decode()

    async def _handle_creative(self, prompt: str) -> str:
        """Generate creative media (requires GPU)."""
        if "gpu" not in self.capabilities:
            raise Exception("GPU not available on this node")

        # This would call creative tools (Stable Diffusion, etc.)
        # For now, placeholder
        logger.info(f"Creative task: {prompt[:50]}...")

        # Store result
        result = f"Creative output for: {prompt}"
        await self.brain.remember(
            content=result,
            tags=["creative", "generated"],
            source=self.node_id,
            metadata={"prompt": prompt},
        )

        return result

    async def _handle_code(self, description: str) -> str:
        """Code generation and modification."""
        # Would use LLM to generate code
        # For now, placeholder
        logger.info(f"Code task: {description[:50]}...")
        return f"Code generated for: {description}"

    async def _handle_research(self, query: str) -> str:
        """Web research."""
        if "web" not in self.capabilities:
            raise Exception("Internet not available on this node")

        # Would use browser automation or search
        logger.info(f"Research task: {query[:50]}...")
        return f"Research results for: {query}"

    async def _handle_general(self, prompt: str) -> str:
        """General LLM task via local Ollama."""
        if "ollama" not in self.capabilities:
            raise Exception("Ollama not available on this node")

        # Call Ollama
        try:
            proc = await asyncio.create_subprocess_exec(
                "curl",
                "-s",
                "http://localhost:11434/api/generate",
                "-d",
                json.dumps({"model": "llama3.1:8b-instruct", "prompt": prompt, "stream": False}),
                stdout=asyncio.subprocess.PIPE,
            )
            stdout, _ = await proc.communicate()

            response = json.loads(stdout.decode())
            result = response.get("response", "No response")

            # Store in brain
            await self.brain.remember(
                content=f"Task: {prompt}\nResult: {result}",
                tags=["llm", "result"],
                source=self.node_id,
                metadata={"model": "llama3.1:8b-instruct"},
            )

            return result

        except Exception as e:
            raise Exception(f"LLM failed: {e}") from e

    # ──────────────────────────────────────────────────────────────────────────
    # Main Loop
    # ──────────────────────────────────────────────────────────────────────────

    async def execute_task(self, task: dict[str, Any]) -> dict[str, Any]:
        """Execute a claimed task."""
        task_type = task.get("type", "general")
        content = task.get("content", "")
        task_id = task.get("id")

        handler = self._handlers.get(task_type, self._handlers["general"])

        try:
            logger.info(f"Executing task {task_id}: {task_type}")
            result = await handler(content)

            await self.brain.complete_task(task_id, success=True, result=result)
            logger.info(f"Task {task_id} completed")
            return {"success": True, "result": result}

        except Exception as e:
            error_msg = str(e)
            logger.error(f"Task {task_id} failed: {error_msg}")
            await self.brain.complete_task(task_id, success=False, error=error_msg)
            return {"success": False, "error": error_msg}

    async def run_once(self) -> bool:
        """Process one task if available.

        Returns:
            True if a task was processed, False if no tasks available
        """
        task = await self.brain.claim_task(self.capabilities, self.node_id)

        if task:
            await self.execute_task(task)
            return True

        return False

    async def run(self):
        """Main loop — continuously process tasks."""
        logger.info(f"Worker {self.node_id} started")
        logger.info(f"Capabilities: {self.capabilities}")

        self.running = True
        consecutive_empty = 0

        while self.running:
            try:
                had_work = await self.run_once()

                if had_work:
                    # Immediately check for more work
                    consecutive_empty = 0
                    await asyncio.sleep(0.1)
                else:
                    # No work available - adaptive sleep
                    consecutive_empty += 1
                    # Sleep 0.5s, but up to 2s if consistently empty
                    sleep_time = min(0.5 + (consecutive_empty * 0.1), 2.0)
                    await asyncio.sleep(sleep_time)

            except Exception as e:
                logger.error(f"Worker error: {e}")
                await asyncio.sleep(1)
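The adaptive sleep in the loop above is a linear backoff clamped to a ceiling; isolated as a pure function for clarity:

```python
def backoff(consecutive_empty: int) -> float:
    # Sleep grows 0.1 s per empty poll, from a 0.5 s floor to a 2 s ceiling.
    return min(0.5 + consecutive_empty * 0.1, 2.0)

print([backoff(n) for n in (0, 5, 15, 100)])
```

Polling stays snappy right after work dries up (first empty poll waits just over half a second) but never drops below one queue check every two seconds on an idle node.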

    def stop(self):
        """Stop the worker loop."""
        self.running = False
        logger.info("Worker stopping...")


async def main():
    """CLI entry point for worker."""
    import sys

    # Allow capability overrides from CLI
    if len(sys.argv) > 1:
        caps = sys.argv[1].split(",")
        worker = DistributedWorker()
        worker.capabilities = caps
        logger.info(f"Overriding capabilities: {caps}")
    else:
        worker = DistributedWorker()

    try:
        await worker.run()
    except KeyboardInterrupt:
        worker.stop()
        logger.info("Worker stopped.")


if __name__ == "__main__":
    asyncio.run(main())
@@ -34,15 +34,11 @@ from dashboard.routes.experiments import router as experiments_router
from dashboard.routes.grok import router as grok_router
from dashboard.routes.health import router as health_router
from dashboard.routes.loop_qa import router as loop_qa_router
from dashboard.routes.marketplace import router as marketplace_router
from dashboard.routes.memory import router as memory_router
from dashboard.routes.mobile import router as mobile_router
from dashboard.routes.models import api_router as models_api_router
from dashboard.routes.models import router as models_router
from dashboard.routes.paperclip import router as paperclip_router
from dashboard.routes.router import router as router_status_router
from dashboard.routes.spark import router as spark_router
from dashboard.routes.swarm import router as swarm_router
from dashboard.routes.system import router as system_router
from dashboard.routes.tasks import router as tasks_router
from dashboard.routes.telegram import router as telegram_router
@@ -50,7 +46,6 @@ from dashboard.routes.thinking import router as thinking_router
from dashboard.routes.tools import router as tools_router
from dashboard.routes.voice import router as voice_router
from dashboard.routes.work_orders import router as work_orders_router
from infrastructure.router.api import router as cascade_router


class _ColorFormatter(logging.Formatter):
@@ -467,7 +462,6 @@ from dashboard.templating import templates  # noqa: E402
# Include routers
app.include_router(health_router)
app.include_router(agents_router)
app.include_router(marketplace_router)
app.include_router(voice_router)
app.include_router(mobile_router)
app.include_router(briefing_router)
@@ -476,22 +470,18 @@ app.include_router(tools_router)
app.include_router(spark_router)
app.include_router(discord_router)
app.include_router(memory_router)
app.include_router(router_status_router)
app.include_router(grok_router)
app.include_router(models_router)
app.include_router(models_api_router)
app.include_router(chat_api_router)
app.include_router(thinking_router)
app.include_router(calm_router)
app.include_router(swarm_router)
app.include_router(tasks_router)
app.include_router(work_orders_router)
app.include_router(loop_qa_router)
app.include_router(system_router)
app.include_router(paperclip_router)
app.include_router(experiments_router)
app.include_router(db_explorer_router)
app.include_router(cascade_router)


@app.websocket("/ws")
|
||||
|
||||
@@ -1,93 +0,0 @@
"""Agent marketplace route — /marketplace endpoints.

DEPRECATED: Personas replaced by brain task queue.
This module is kept for UI compatibility.
"""

from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse

from brain.client import BrainClient
from dashboard.templating import templates

router = APIRouter(tags=["marketplace"])

# Orchestrator only — personas deprecated
AGENT_CATALOG = [
    {
        "id": "orchestrator",
        "name": "Orchestrator",
        "role": "Local AI",
        "description": (
            "Primary AI agent. Coordinates tasks, manages memory. Uses distributed brain."
        ),
        "capabilities": "chat,reasoning,coordination,memory",
        "rate_sats": 0,
        "default_status": "active",
    }
]


@router.get("/api/marketplace/agents")
async def api_list_agents():
    """Return agent catalog with current status (JSON API)."""
    try:
        brain = BrainClient()
        pending_tasks = len(await brain.get_pending_tasks(limit=1000))
    except Exception:
        pending_tasks = 0

    catalog = [dict(AGENT_CATALOG[0])]
    catalog[0]["pending_tasks"] = pending_tasks
    catalog[0]["status"] = "active"

    # Include 'total' for backward compatibility with tests
    return {"agents": catalog, "total": len(catalog)}


@router.get("/marketplace")
async def marketplace_json(request: Request):
    """Marketplace JSON API (backward compat)."""
    return await api_list_agents()


@router.get("/marketplace/ui", response_class=HTMLResponse)
async def marketplace_ui(request: Request):
    """Marketplace HTML page."""
    try:
        brain = BrainClient()
        tasks = await brain.get_pending_tasks(limit=20)
    except Exception:
        tasks = []

    # Enrich agents with fields the template expects
    enriched = []
    for agent in AGENT_CATALOG:
        a = dict(agent)
        a.setdefault("status", a.get("default_status", "active"))
        a.setdefault("tasks_completed", 0)
        a.setdefault("total_earned", 0)
        enriched.append(a)

    active = sum(1 for a in enriched if a["status"] == "active")

    return templates.TemplateResponse(
        request,
        "marketplace.html",
        {
            "agents": enriched,
            "pending_tasks": tasks,
            "message": "Personas deprecated — use Brain Task Queue",
            "page_title": "Agent Marketplace",
            "active_count": active,
            "planned_count": 0,
        },
    )


@router.get("/marketplace/{agent_id}")
async def agent_detail(agent_id: str):
    """Get agent details."""
    if agent_id == "orchestrator":
        return AGENT_CATALOG[0]
    return {"error": "Agent not found — personas deprecated"}
@@ -1,317 +0,0 @@
"""Paperclip AI integration routes.

Timmy-as-CEO: create issues, delegate to agents, review work, manage goals.
All business logic lives in the bridge — these routes stay thin.
"""

import logging

from fastapi import APIRouter, Request
from fastapi.responses import JSONResponse

from config import settings

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/paperclip", tags=["paperclip"])


def _disabled_response() -> JSONResponse:
    return JSONResponse({"enabled": False, "detail": "Paperclip integration is disabled"})


# ── Status ───────────────────────────────────────────────────────────────────


@router.get("/status")
async def paperclip_status():
    """Integration health check."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    status = await bridge.get_status()
    return status.model_dump()


# ── Issues (CEO creates & manages tickets) ───────────────────────────────────


@router.get("/issues")
async def list_issues(status: str | None = None):
    """List all issues in the company."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    issues = await bridge.client.list_issues(status=status)
    return [i.model_dump() for i in issues]


@router.get("/issues/{issue_id}")
async def get_issue(issue_id: str):
    """Get issue details with comments (CEO review)."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    return await bridge.review_issue(issue_id)


@router.post("/issues")
async def create_issue(request: Request):
    """Create a new issue and optionally assign to an agent."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()
    title = body.get("title")
    if not title:
        return JSONResponse({"error": "title is required"}, status_code=400)

    from integrations.paperclip.bridge import bridge

    issue = await bridge.create_and_assign(
        title=title,
        description=body.get("description", ""),
        assignee_id=body.get("assignee_id"),
        priority=body.get("priority"),
        wake=body.get("wake", True),
    )

    if not issue:
        return JSONResponse({"error": "Failed to create issue"}, status_code=502)

    return issue.model_dump()


@router.post("/issues/{issue_id}/delegate")
async def delegate_issue(issue_id: str, request: Request):
    """Delegate an issue to an agent (CEO assignment)."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()
    agent_id = body.get("agent_id")
    if not agent_id:
        return JSONResponse({"error": "agent_id is required"}, status_code=400)

    from integrations.paperclip.bridge import bridge

    ok = await bridge.delegate_issue(
        issue_id=issue_id,
        agent_id=agent_id,
        message=body.get("message"),
    )

    if not ok:
        return JSONResponse({"error": "Failed to delegate issue"}, status_code=502)

    return {"ok": True, "issue_id": issue_id, "agent_id": agent_id}


@router.post("/issues/{issue_id}/close")
async def close_issue(issue_id: str, request: Request):
    """Close an issue (CEO sign-off)."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()

    from integrations.paperclip.bridge import bridge

    ok = await bridge.close_issue(issue_id, comment=body.get("comment"))

    if not ok:
        return JSONResponse({"error": "Failed to close issue"}, status_code=502)

    return {"ok": True, "issue_id": issue_id}


@router.post("/issues/{issue_id}/comment")
async def add_comment(issue_id: str, request: Request):
    """Add a CEO comment to an issue."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()
    content = body.get("content")
    if not content:
        return JSONResponse({"error": "content is required"}, status_code=400)

    from integrations.paperclip.bridge import bridge

    comment = await bridge.client.add_comment(issue_id, f"[CEO] {content}")

    if not comment:
        return JSONResponse({"error": "Failed to add comment"}, status_code=502)

    return comment.model_dump()


# ── Agents (team management) ─────────────────────────────────────────────────


@router.get("/agents")
async def list_agents():
    """List all agents in the org."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    agents = await bridge.get_team()
    return [a.model_dump() for a in agents]


@router.get("/org")
async def org_chart():
    """Get the organizational chart."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    org = await bridge.get_org_chart()
    return org or {"error": "Could not retrieve org chart"}


@router.post("/agents/{agent_id}/wake")
async def wake_agent(agent_id: str, request: Request):
    """Wake an agent to start working."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()

    from integrations.paperclip.bridge import bridge

    result = await bridge.client.wake_agent(
        agent_id,
        issue_id=body.get("issue_id"),
        message=body.get("message"),
    )

    if not result:
        return JSONResponse({"error": "Failed to wake agent"}, status_code=502)

    return result


# ── Goals ────────────────────────────────────────────────────────────────────


@router.get("/goals")
async def list_goals():
    """List company goals."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    goals = await bridge.list_goals()
    return [g.model_dump() for g in goals]


@router.post("/goals")
async def create_goal(request: Request):
    """Set a new company goal (CEO directive)."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()
    title = body.get("title")
    if not title:
        return JSONResponse({"error": "title is required"}, status_code=400)

    from integrations.paperclip.bridge import bridge

    goal = await bridge.set_goal(title, body.get("description", ""))

    if not goal:
        return JSONResponse({"error": "Failed to create goal"}, status_code=502)

    return goal.model_dump()


# ── Approvals ────────────────────────────────────────────────────────────────


@router.get("/approvals")
async def list_approvals():
    """List pending approvals for CEO review."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    return await bridge.pending_approvals()


@router.post("/approvals/{approval_id}/approve")
async def approve(approval_id: str, request: Request):
    """Approve an agent's action."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()

    from integrations.paperclip.bridge import bridge

    ok = await bridge.approve(approval_id, body.get("comment", ""))

    if not ok:
        return JSONResponse({"error": "Failed to approve"}, status_code=502)

    return {"ok": True, "approval_id": approval_id}


@router.post("/approvals/{approval_id}/reject")
async def reject(approval_id: str, request: Request):
    """Reject an agent's action."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    body = await request.json()

    from integrations.paperclip.bridge import bridge

    ok = await bridge.reject(approval_id, body.get("comment", ""))

    if not ok:
        return JSONResponse({"error": "Failed to reject"}, status_code=502)

    return {"ok": True, "approval_id": approval_id}


# ── Runs (monitoring) ────────────────────────────────────────────────────────


@router.get("/runs")
async def list_runs():
    """List active heartbeat runs."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    return await bridge.active_runs()


@router.post("/runs/{run_id}/cancel")
async def cancel_run(run_id: str):
    """Cancel a running heartbeat execution."""
    if not settings.paperclip_enabled:
        return _disabled_response()

    from integrations.paperclip.bridge import bridge

    ok = await bridge.cancel_run(run_id)

    if not ok:
        return JSONResponse({"error": "Failed to cancel run"}, status_code=502)

    return {"ok": True, "run_id": run_id}
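Every route in the deleted module above repeats one guard: check the feature flag, then import the bridge lazily inside the handler so a disabled integration never pays the import cost (or crashes on missing optional dependencies at startup). A minimal sketch of that shape, with hypothetical names standing in for `settings.paperclip_enabled` and the bridge:

```python
# Hypothetical reduction of the flag-guard + lazy-import pattern used above.
ENABLED = False  # stands in for settings.paperclip_enabled


def _disabled() -> dict:
    return {"enabled": False, "detail": "integration disabled"}


def status() -> dict:
    """Handler body: bail out early when the feature is off."""
    if not ENABLED:
        return _disabled()
    # Lazy import: only executed when the feature is on, so the module
    # can be deleted (as in this commit) without breaking cold paths.
    import json
    return {"enabled": True, "module": json.__name__}
```

The early return keeps every handler flat, and the deferred import is what let this whole package be removed while `app.py` still imported cleanly when the flag was off.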
@@ -1,51 +0,0 @@
"""Cascade Router status routes."""

from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse

from dashboard.templating import templates
from timmy.cascade_adapter import get_cascade_adapter

router = APIRouter(prefix="/router", tags=["router"])


@router.get("/status", response_class=HTMLResponse)
async def router_status_page(request: Request):
    """Cascade Router status dashboard."""
    adapter = get_cascade_adapter()

    providers = adapter.get_provider_status()
    preferred = adapter.get_preferred_provider()

    # Calculate overall stats
    total_requests = sum(p["metrics"]["total"] for p in providers)
    total_success = sum(p["metrics"]["success"] for p in providers)
    total_failed = sum(p["metrics"]["failed"] for p in providers)

    avg_latency = 0.0
    if providers:
        avg_latency = sum(p["metrics"]["avg_latency_ms"] for p in providers) / len(providers)

    return templates.TemplateResponse(
        request,
        "router_status.html",
        {
            "page_title": "Router Status",
            "providers": providers,
            "preferred_provider": preferred,
            "total_requests": total_requests,
            "total_success": total_success,
            "total_failed": total_failed,
            "avg_latency_ms": round(avg_latency, 1),
        },
    )


@router.get("/api/providers")
async def get_providers():
    """API endpoint for provider status (JSON)."""
    adapter = get_cascade_adapter()
    return {
        "providers": adapter.get_provider_status(),
        "preferred": adapter.get_preferred_provider(),
    }
@@ -1,113 +0,0 @@
"""Swarm-related dashboard routes (events, live feed)."""

import logging

from fastapi import APIRouter, Request, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse

from dashboard.templating import templates
from infrastructure.ws_manager.handler import ws_manager
from spark.engine import spark_engine

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/swarm", tags=["swarm"])


@router.get("/events", response_class=HTMLResponse)
async def swarm_events(
    request: Request,
    task_id: str | None = None,
    agent_id: str | None = None,
    event_type: str | None = None,
):
    """Event log page."""
    events = spark_engine.get_timeline(limit=100)

    # Filter if requested
    if task_id:
        events = [e for e in events if e.task_id == task_id]
    if agent_id:
        events = [e for e in events if e.agent_id == agent_id]
    if event_type:
        events = [e for e in events if e.event_type == event_type]

    # Prepare summary and event types for template
    summary = {}
    event_types = set()
    for e in events:
        etype = e.event_type
        event_types.add(etype)
        summary[etype] = summary.get(etype, 0) + 1

    return templates.TemplateResponse(
        request,
        "events.html",
        {
            "events": events,
            "summary": summary,
            "event_types": sorted(list(event_types)),
            "filter_task": task_id,
            "filter_agent": agent_id,
            "filter_type": event_type,
        },
    )


@router.get("/live", response_class=HTMLResponse)
async def swarm_live(request: Request):
    """Live swarm activity page."""
    status = spark_engine.status()
    events = spark_engine.get_timeline(limit=20)

    return templates.TemplateResponse(
        request,
        "swarm_live.html",
        {
            "status": status,
            "events": events,
        },
    )


@router.websocket("/live")
async def swarm_ws(websocket: WebSocket):
    """WebSocket endpoint for live swarm updates."""
    await websocket.accept()
    # Send initial state before joining broadcast pool to avoid race conditions
    await websocket.send_json(
        {
            "type": "initial_state",
            "data": {
                "agents": {"total": 0, "active": 0, "list": []},
                "tasks": {"active": 0},
                "auctions": {"list": []},
            },
        }
    )
    await ws_manager.connect(websocket, accept=False)
    try:
        while True:
            await websocket.receive_text()
    except WebSocketDisconnect:
        ws_manager.disconnect(websocket)


@router.get("/agents/sidebar", response_class=HTMLResponse)
async def agents_sidebar(request: Request):
    """Sidebar partial showing agent status for the home page."""
    from config import settings

    agents = [
        {
            "id": "default",
            "name": settings.agent_name,
            "status": "idle",
            "type": "local",
            "capabilities": "chat,reasoning,research,planning",
            "last_seen": None,
        }
    ]
    return templates.TemplateResponse(
        request, "partials/swarm_agents_sidebar.html", {"agents": agents}
    )
@@ -1,18 +0,0 @@
"""OpenFang — vendored binary sidecar for agent tool execution.

OpenFang is a Rust-compiled Agent OS that provides real tool execution
(browser automation, OSINT, forecasting, social management) in a
WASM-sandboxed runtime. Timmy's coordinator dispatches to it as a
tool vendor rather than a co-orchestrator.

Usage:
    from infrastructure.openfang import openfang_client

    # Check if OpenFang is available
    if openfang_client.healthy:
        result = await openfang_client.execute_hand("browser", params)
"""

from infrastructure.openfang.client import OpenFangClient, openfang_client

__all__ = ["OpenFangClient", "openfang_client"]
@@ -1,206 +0,0 @@
"""OpenFang HTTP client — bridge between Timmy coordinator and OpenFang runtime.

Follows project conventions:
- Graceful degradation (log error, return fallback, never crash)
- Config via ``from config import settings``
- Singleton pattern for module-level import

The client wraps OpenFang's REST API and exposes its Hands
(Browser, Collector, Predictor, Lead, Twitter, Researcher, Clip)
as callable tool endpoints.
"""

import logging
import time
from dataclasses import dataclass, field
from typing import Any

from config import settings

logger = logging.getLogger(__name__)

# Hand names that OpenFang ships out of the box
OPENFANG_HANDS = (
    "browser",
    "collector",
    "predictor",
    "lead",
    "twitter",
    "researcher",
    "clip",
)


@dataclass
class HandResult:
    """Result from an OpenFang Hand execution."""

    hand: str
    success: bool
    output: str = ""
    error: str = ""
    latency_ms: float = 0.0
    metadata: dict = field(default_factory=dict)


class OpenFangClient:
    """HTTP client for the OpenFang sidecar.

    All methods degrade gracefully — if OpenFang is down the client
    returns a ``HandResult(success=False)`` rather than raising.
    """

    def __init__(self, base_url: str | None = None, timeout: int = 60) -> None:
        self._base_url = (base_url or settings.openfang_url).rstrip("/")
        self._timeout = timeout
        self._healthy = False
        self._last_health_check: float = 0.0
        self._health_cache_ttl = 30.0  # seconds
        logger.info("OpenFangClient initialised → %s", self._base_url)

    # ── Health ───────────────────────────────────────────────────────────────

    @property
    def healthy(self) -> bool:
        """Cached health check — hits /health at most once per TTL."""
        now = time.time()
        if now - self._last_health_check > self._health_cache_ttl:
            self._healthy = self._check_health()
            self._last_health_check = now
        return self._healthy

    def _check_health(self) -> bool:
        try:
            import urllib.request

            req = urllib.request.Request(
                f"{self._base_url}/health",
                method="GET",
                headers={"Accept": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status == 200
        except Exception as exc:
            logger.debug("OpenFang health check failed: %s", exc)
            return False

    # ── Hand execution ───────────────────────────────────────────────────────

    async def execute_hand(
        self,
        hand: str,
        params: dict[str, Any],
        timeout: int | None = None,
    ) -> HandResult:
        """Execute an OpenFang Hand and return the result.

        Args:
            hand: Hand name (browser, collector, predictor, etc.)
            params: Parameters for the hand (task-specific)
            timeout: Override default timeout for long-running hands

        Returns:
            HandResult with output or error details.
        """
        if hand not in OPENFANG_HANDS:
            return HandResult(
                hand=hand,
                success=False,
                error=f"Unknown hand: {hand}. Available: {', '.join(OPENFANG_HANDS)}",
            )

        start = time.time()
        try:
            import json
            import urllib.request

            payload = json.dumps({"hand": hand, "params": params}).encode()
            req = urllib.request.Request(
                f"{self._base_url}/api/v1/hands/{hand}/execute",
                data=payload,
                method="POST",
                headers={
                    "Content-Type": "application/json",
                    "Accept": "application/json",
                },
            )
            effective_timeout = timeout or self._timeout
            with urllib.request.urlopen(req, timeout=effective_timeout) as resp:
                body = json.loads(resp.read().decode())
            latency = (time.time() - start) * 1000

            return HandResult(
                hand=hand,
                success=body.get("success", True),
                output=body.get("output", body.get("result", "")),
                latency_ms=latency,
                metadata=body.get("metadata", {}),
            )

        except Exception as exc:
            latency = (time.time() - start) * 1000
            logger.warning(
                "OpenFang hand '%s' failed (%.0fms): %s",
                hand,
                latency,
                exc,
            )
            return HandResult(
                hand=hand,
                success=False,
                error=str(exc),
                latency_ms=latency,
            )

    # ── Convenience wrappers for common hands ────────────────────────────────

    async def browse(self, url: str, instruction: str = "") -> HandResult:
        """Web automation via OpenFang's Browser hand."""
        return await self.execute_hand("browser", {"url": url, "instruction": instruction})

    async def collect(self, target: str, depth: str = "shallow") -> HandResult:
        """OSINT collection via OpenFang's Collector hand."""
        return await self.execute_hand("collector", {"target": target, "depth": depth})

    async def predict(self, question: str, horizon: str = "1w") -> HandResult:
        """Superforecasting via OpenFang's Predictor hand."""
        return await self.execute_hand("predictor", {"question": question, "horizon": horizon})

    async def find_leads(self, icp: str, max_results: int = 10) -> HandResult:
        """Prospect discovery via OpenFang's Lead hand."""
        return await self.execute_hand("lead", {"icp": icp, "max_results": max_results})

    async def research(self, topic: str, depth: str = "standard") -> HandResult:
        """Deep research via OpenFang's Researcher hand."""
        return await self.execute_hand("researcher", {"topic": topic, "depth": depth})

    # ── Inventory ────────────────────────────────────────────────────────────

    async def list_hands(self) -> list[dict]:
        """Query OpenFang for its available hands and their status."""
        try:
            import json
            import urllib.request

            req = urllib.request.Request(
                f"{self._base_url}/api/v1/hands",
                method="GET",
                headers={"Accept": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=10) as resp:
                return json.loads(resp.read().decode())
        except Exception as exc:
            logger.debug("Failed to list OpenFang hands: %s", exc)
            return []

    def status(self) -> dict:
        """Return a status summary for the dashboard."""
        return {
            "url": self._base_url,
            "healthy": self.healthy,
            "available_hands": list(OPENFANG_HANDS),
        }


# ── Module-level singleton ──────────────────────────────────────────────────
openfang_client = OpenFangClient()
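The `healthy` property in the deleted client above caches a blocking probe behind a TTL so hot code paths never hammer `/health` on every access. The same pattern, reduced to a standalone sketch (class and callable names are hypothetical, not part of this codebase):

```python
import time


class TTLHealthCache:
    """Cache a boolean health probe, re-running it at most once per TTL."""

    def __init__(self, probe, ttl: float = 30.0):
        self._probe = probe          # zero-arg callable returning bool
        self._ttl = ttl
        self._healthy = False
        self._last_check = 0.0       # epoch 0 forces a probe on first access

    @property
    def healthy(self) -> bool:
        now = time.time()
        if now - self._last_check > self._ttl:
            # Stale (or never checked): refresh and remember when we did.
            self._healthy = self._probe()
            self._last_check = now
        return self._healthy
```

Note the trade-off inherited from the original: a failure can be reported as healthy for up to `ttl` seconds after the sidecar goes down, which is acceptable for a dashboard indicator.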
@@ -1,234 +0,0 @@
"""Register OpenFang Hands as MCP tools in Timmy's tool registry.

Each OpenFang Hand becomes a callable MCP tool that personas can use
during task execution. The mapping ensures the right personas get
access to the right hands:

    Mace (Security)  → collector (OSINT), browser
    Seer (Analytics) → predictor, researcher
    Echo (Research)  → researcher, browser, collector
    Helm (DevOps)    → browser
    Lead hand        → available to all personas via direct request

Call ``register_openfang_tools()`` during app startup (after config
is loaded) to populate the tool registry.
"""

import logging
from typing import Any

from infrastructure.openfang.client import OPENFANG_HANDS, openfang_client

try:
    from mcp.schemas.base import create_tool_schema
except ImportError:

    def create_tool_schema(**kwargs):
        return kwargs


logger = logging.getLogger(__name__)

# ── Tool schemas ─────────────────────────────────────────────────────────────

_HAND_SCHEMAS: dict[str, dict] = {
    "browser": create_tool_schema(
        name="openfang_browser",
        description=(
            "Web automation via OpenFang's Browser hand. "
            "Navigates URLs, extracts content, fills forms. "
            "Includes mandatory purchase confirmation gates."
        ),
        parameters={
            "url": {"type": "string", "description": "URL to navigate to"},
            "instruction": {
                "type": "string",
                "description": "What to do on the page",
            },
        },
        required=["url"],
    ),
    "collector": create_tool_schema(
        name="openfang_collector",
        description=(
            "OSINT intelligence and continuous monitoring via OpenFang's "
            "Collector hand. Gathers public information on targets."
        ),
        parameters={
            "target": {
                "type": "string",
                "description": "Target to investigate (domain, org, person)",
            },
            "depth": {
                "type": "string",
                "description": "Collection depth: shallow | standard | deep",
                "default": "shallow",
            },
        },
        required=["target"],
    ),
    "predictor": create_tool_schema(
        name="openfang_predictor",
        description=(
            "Superforecasting with calibrated reasoning via OpenFang's "
            "Predictor hand. Produces probability estimates with reasoning."
        ),
        parameters={
            "question": {
                "type": "string",
                "description": "Forecasting question to evaluate",
            },
            "horizon": {
                "type": "string",
                "description": "Time horizon: 1d | 1w | 1m | 3m | 1y",
                "default": "1w",
            },
        },
        required=["question"],
    ),
    "lead": create_tool_schema(
        name="openfang_lead",
        description=(
            "Prospect discovery and ICP-based qualification via OpenFang's "
            "Lead hand. Finds and scores potential leads."
        ),
        parameters={
            "icp": {
                "type": "string",
                "description": "Ideal Customer Profile description",
            },
            "max_results": {
                "type": "integer",
                "description": "Maximum leads to return",
                "default": 10,
            },
        },
        required=["icp"],
    ),
    "twitter": create_tool_schema(
        name="openfang_twitter",
        description=(
            "Social account management via OpenFang's Twitter hand. "
            "Includes approval gates for sensitive actions."
        ),
        parameters={
            "action": {
                "type": "string",
                "description": "Action: post | reply | search | analyze",
            },
            "content": {
                "type": "string",
                "description": "Content for the action",
            },
        },
        required=["action", "content"],
    ),
    "researcher": create_tool_schema(
        name="openfang_researcher",
        description=(
            "Deep autonomous research with source verification via "
            "OpenFang's Researcher hand. Produces cited reports."
        ),
        parameters={
            "topic": {
                "type": "string",
                "description": "Research topic or question",
            },
            "depth": {
                "type": "string",
                "description": "Research depth: quick | standard | deep",
                "default": "standard",
            },
        },
        required=["topic"],
    ),
    "clip": create_tool_schema(
        name="openfang_clip",
        description=(
            "Video processing and social media publishing via OpenFang's "
            "Clip hand. Edits, captions, and publishes video content."
        ),
        parameters={
            "source": {
                "type": "string",
                "description": "Source video path or URL",
            },
            "instruction": {
                "type": "string",
                "description": "What to do with the video",
            },
        },
        required=["source"],
    ),
}

# Map personas to the OpenFang hands they should have access to
PERSONA_HAND_MAP: dict[str, list[str]] = {
    "echo": ["researcher", "browser", "collector"],
    "seer": ["predictor", "researcher"],
    "mace": ["collector", "browser"],
    "helm": ["browser"],
    "forge": ["browser", "researcher"],
    "quill": ["researcher"],
    "pixel": ["clip", "browser"],
    "lyra": [],
    "reel": ["clip"],
}


def _make_hand_handler(hand_name: str):
    """Create an async handler that delegates to the OpenFang client."""

    async def handler(**kwargs: Any) -> str:
        result = await openfang_client.execute_hand(hand_name, kwargs)
        if result.success:
            return result.output
        return f"[OpenFang {hand_name} error] {result.error}"

    handler.__name__ = f"openfang_{hand_name}"
    handler.__doc__ = _HAND_SCHEMAS.get(hand_name, {}).get(
        "description", f"OpenFang {hand_name} hand"
    )
    return handler


def register_openfang_tools() -> int:
    """Register all OpenFang Hands as MCP tools.

    Returns the number of tools registered.
    """
    try:
        from mcp.registry import tool_registry
    except ImportError:
        logger.warning("MCP registry not available — skipping OpenFang tool registration")
        return 0

    count = 0
    for hand_name in OPENFANG_HANDS:
        schema = _HAND_SCHEMAS.get(hand_name)
        if not schema:
            logger.warning("No schema for OpenFang hand: %s", hand_name)
            continue

        tool_name = f"openfang_{hand_name}"
        handler = _make_hand_handler(hand_name)

        tool_registry.register(
            name=tool_name,
            schema=schema,
            handler=handler,
            category="openfang",
            tags=["openfang", hand_name, "vendor"],
            source_module="infrastructure.openfang.tools",
            requires_confirmation=(hand_name in ("twitter",)),
        )
        count += 1

    logger.info("Registered %d OpenFang tools in MCP registry", count)
    return count


def get_hands_for_persona(persona_id: str) -> list[str]:
    """Return the OpenFang tool names available to a persona."""
    hand_names = PERSONA_HAND_MAP.get(persona_id, [])
    return [f"openfang_{h}" for h in hand_names]
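The deleted `_make_hand_handler` factory above exists for a reason worth preserving: defining the coroutine directly inside the `for hand_name in OPENFANG_HANDS:` loop would make every handler close over the same late-bound loop variable, so all tools would dispatch to the last hand. Routing each definition through a factory freezes the name per call. A generic sketch of that pattern (registry and names hypothetical):

```python
import asyncio
from typing import Any

registry: dict[str, Any] = {}


def make_handler(name: str):
    # The factory's parameter is a fresh binding per call, so each
    # handler keeps its own `name` instead of sharing the loop variable.
    async def handler(**kwargs: Any) -> str:
        return f"{name} ran with {sorted(kwargs)}"

    handler.__name__ = f"tool_{name}"
    return handler


for tool in ("browser", "collector"):
    registry[f"tool_{tool}"] = make_handler(tool)
```

An equivalent idiom is a `name=hand_name` default argument on the inner function; the factory form is the one the deleted module chose because it also sets `__name__` and `__doc__` in one place.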
@@ -1,195 +0,0 @@
"""Paperclip bridge — CEO-level orchestration logic.

Timmy acts as the CEO: reviews issues, delegates to agents, tracks goals,
and approves/rejects work. All business logic lives here; routes stay thin.
"""

from __future__ import annotations

import logging
from typing import Any

from config import settings
from integrations.paperclip.client import PaperclipClient, paperclip
from integrations.paperclip.models import (
    CreateIssueRequest,
    PaperclipAgent,
    PaperclipGoal,
    PaperclipIssue,
    PaperclipStatusResponse,
    UpdateIssueRequest,
)

logger = logging.getLogger(__name__)


class PaperclipBridge:
    """Bidirectional bridge between Timmy and Paperclip.

    Timmy is the CEO — he creates issues, delegates to agents via wakeup,
    reviews results, and manages the company's goals.
    """

    def __init__(self, client: PaperclipClient | None = None):
        self.client = client or paperclip

    # ── status / health ──────────────────────────────────────────────────

    async def get_status(self) -> PaperclipStatusResponse:
        """Return integration status for the dashboard."""
        if not settings.paperclip_enabled:
            return PaperclipStatusResponse(
                enabled=False,
                paperclip_url=settings.paperclip_url,
            )

        connected = await self.client.healthy()
        agent_count = 0
        issue_count = 0
        error = None

        if connected:
            try:
                agents = await self.client.list_agents()
                agent_count = len(agents)
                issues = await self.client.list_issues()
                issue_count = len(issues)
            except Exception as exc:
                error = str(exc)
        else:
            error = "Cannot reach Paperclip server"

        return PaperclipStatusResponse(
            enabled=True,
            connected=connected,
            paperclip_url=settings.paperclip_url,
            company_id=settings.paperclip_company_id,
            agent_count=agent_count,
            issue_count=issue_count,
            error=error,
        )

    # ── CEO actions: issue management ────────────────────────────────────

    async def create_and_assign(
        self,
        title: str,
        description: str = "",
        assignee_id: str | None = None,
        priority: str | None = None,
        wake: bool = True,
    ) -> PaperclipIssue | None:
        """Create an issue and optionally assign + wake an agent.

        This is the primary CEO action: decide what needs doing, create
        the ticket, assign it to the right agent, and kick off execution.
        """
        req = CreateIssueRequest(
            title=title,
            description=description,
            priority=priority,
            assignee_id=assignee_id,
        )
        issue = await self.client.create_issue(req)
        if not issue:
            logger.error("Failed to create issue: %s", title)
            return None

        logger.info("Created issue %s: %s", issue.id, title)

        if assignee_id and wake:
            result = await self.client.wake_agent(assignee_id, issue_id=issue.id)
            if result:
                logger.info("Woke agent %s for issue %s", assignee_id, issue.id)
            else:
                logger.warning("Failed to wake agent %s", assignee_id)

        return issue

    async def delegate_issue(
        self,
        issue_id: str,
        agent_id: str,
        message: str | None = None,
    ) -> bool:
        """Assign an existing issue to an agent and wake them."""
        updated = await self.client.update_issue(
            issue_id,
            UpdateIssueRequest(assignee_id=agent_id),
        )
        if not updated:
            return False

        if message:
            await self.client.add_comment(issue_id, f"[CEO] {message}")

        await self.client.wake_agent(agent_id, issue_id=issue_id)
        return True

    async def review_issue(
        self,
        issue_id: str,
    ) -> dict[str, Any]:
        """Gather all context for CEO review of an issue."""
        issue = await self.client.get_issue(issue_id)
        comments = await self.client.list_comments(issue_id)

        return {
            "issue": issue.model_dump() if issue else None,
            "comments": [c.model_dump() for c in comments],
        }

    async def close_issue(self, issue_id: str, comment: str | None = None) -> bool:
        """Close an issue as the CEO."""
        if comment:
            await self.client.add_comment(issue_id, f"[CEO] {comment}")
        result = await self.client.update_issue(
            issue_id,
            UpdateIssueRequest(status="done"),
        )
        return result is not None

    # ── CEO actions: team management ─────────────────────────────────────

    async def get_team(self) -> list[PaperclipAgent]:
        """Get the full agent roster."""
        return await self.client.list_agents()

    async def get_org_chart(self) -> dict[str, Any] | None:
        """Get the organizational hierarchy."""
        return await self.client.get_org()

    # ── CEO actions: goal management ─────────────────────────────────────

    async def list_goals(self) -> list[PaperclipGoal]:
        return await self.client.list_goals()

    async def set_goal(self, title: str, description: str = "") -> PaperclipGoal | None:
        return await self.client.create_goal(title, description)

    # ── CEO actions: approvals ───────────────────────────────────────────

    async def pending_approvals(self) -> list[dict[str, Any]]:
        return await self.client.list_approvals()

    async def approve(self, approval_id: str, comment: str = "") -> bool:
        result = await self.client.approve(approval_id, comment)
        return result is not None

    async def reject(self, approval_id: str, comment: str = "") -> bool:
        result = await self.client.reject(approval_id, comment)
        return result is not None

    # ── CEO actions: monitoring ──────────────────────────────────────────

    async def active_runs(self) -> list[dict[str, Any]]:
        """Get currently running heartbeat executions."""
        return await self.client.list_heartbeat_runs()

    async def cancel_run(self, run_id: str) -> bool:
        result = await self.client.cancel_run(run_id)
        return result is not None


# Module-level singleton
bridge = PaperclipBridge()
@@ -1,306 +0,0 @@
"""Paperclip AI API client.

Async HTTP client for communicating with a remote Paperclip server.
All methods degrade gracefully — log the error, return a fallback, never crash.

Paperclip API is mounted at ``/api`` and uses ``local_trusted`` mode on the
VPS, so the board actor is implicit. When the server sits behind an nginx
auth-gate the client authenticates with Basic-auth on the first request and
re-uses the session cookie thereafter.
"""

from __future__ import annotations

import logging
from typing import Any

import httpx

from config import settings
from integrations.paperclip.models import (
    CreateIssueRequest,
    PaperclipAgent,
    PaperclipComment,
    PaperclipGoal,
    PaperclipIssue,
    UpdateIssueRequest,
)

logger = logging.getLogger(__name__)


class PaperclipClient:
    """Thin async wrapper around the Paperclip REST API.

    All public methods return typed results on success or ``None`` / ``[]``
    on failure so callers never need to handle exceptions.
    """

    def __init__(
        self,
        base_url: str | None = None,
        api_key: str | None = None,
        timeout: int = 30,
    ):
        self._base_url = (base_url or settings.paperclip_url).rstrip("/")
        self._api_key = api_key or settings.paperclip_api_key
        self._timeout = timeout or settings.paperclip_timeout
        self._client: httpx.AsyncClient | None = None

    # ── lifecycle ────────────────────────────────────────────────────────

    def _get_client(self) -> httpx.AsyncClient:
        if self._client is None or self._client.is_closed:
            headers: dict[str, str] = {"Accept": "application/json"}
            if self._api_key:
                headers["Authorization"] = f"Bearer {self._api_key}"
            self._client = httpx.AsyncClient(
                base_url=self._base_url,
                headers=headers,
                timeout=self._timeout,
            )
        return self._client

    async def close(self) -> None:
        if self._client and not self._client.is_closed:
            await self._client.aclose()

    # ── helpers ──────────────────────────────────────────────────────────

    async def _get(self, path: str, params: dict | None = None) -> Any | None:
        try:
            resp = await self._get_client().get(path, params=params)
            resp.raise_for_status()
            return resp.json()
        except Exception as exc:
            logger.warning("Paperclip GET %s failed: %s", path, exc)
            return None

    async def _post(self, path: str, json: dict | None = None) -> Any | None:
        try:
            resp = await self._get_client().post(path, json=json)
            resp.raise_for_status()
            return resp.json()
        except Exception as exc:
            logger.warning("Paperclip POST %s failed: %s", path, exc)
            return None

    async def _patch(self, path: str, json: dict | None = None) -> Any | None:
        try:
            resp = await self._get_client().patch(path, json=json)
            resp.raise_for_status()
            return resp.json()
        except Exception as exc:
            logger.warning("Paperclip PATCH %s failed: %s", path, exc)
            return None

    async def _delete(self, path: str) -> bool:
        try:
            resp = await self._get_client().delete(path)
            resp.raise_for_status()
            return True
        except Exception as exc:
            logger.warning("Paperclip DELETE %s failed: %s", path, exc)
            return False

    # ── health ───────────────────────────────────────────────────────────

    async def healthy(self) -> bool:
        """Quick connectivity check."""
        data = await self._get("/api/health")
        return data is not None

    # ── companies ────────────────────────────────────────────────────────

    async def list_companies(self) -> list[dict[str, Any]]:
        data = await self._get("/api/companies")
        return data if isinstance(data, list) else []

    # ── agents ───────────────────────────────────────────────────────────

    async def list_agents(self, company_id: str | None = None) -> list[PaperclipAgent]:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            logger.warning("paperclip_company_id not set — cannot list agents")
            return []
        data = await self._get(f"/api/companies/{cid}/agents")
        if not isinstance(data, list):
            return []
        return [PaperclipAgent(**a) for a in data]

    async def get_agent(self, agent_id: str) -> PaperclipAgent | None:
        data = await self._get(f"/api/agents/{agent_id}")
        return PaperclipAgent(**data) if data else None

    async def wake_agent(
        self,
        agent_id: str,
        issue_id: str | None = None,
        message: str | None = None,
    ) -> dict[str, Any] | None:
        """Trigger a heartbeat wake for an agent."""
        body: dict[str, Any] = {}
        if issue_id:
            body["issueId"] = issue_id
        if message:
            body["message"] = message
        return await self._post(f"/api/agents/{agent_id}/wakeup", json=body)

    async def get_org(self, company_id: str | None = None) -> dict[str, Any] | None:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            return None
        return await self._get(f"/api/companies/{cid}/org")

    # ── issues (tickets) ─────────────────────────────────────────────────

    async def list_issues(
        self,
        company_id: str | None = None,
        status: str | None = None,
    ) -> list[PaperclipIssue]:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            return []
        params: dict[str, str] = {}
        if status:
            params["status"] = status
        data = await self._get(f"/api/companies/{cid}/issues", params=params)
        if not isinstance(data, list):
            return []
        return [PaperclipIssue(**i) for i in data]

    async def get_issue(self, issue_id: str) -> PaperclipIssue | None:
        data = await self._get(f"/api/issues/{issue_id}")
        return PaperclipIssue(**data) if data else None

    async def create_issue(
        self,
        req: CreateIssueRequest,
        company_id: str | None = None,
    ) -> PaperclipIssue | None:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            logger.warning("paperclip_company_id not set — cannot create issue")
            return None
        data = await self._post(
            f"/api/companies/{cid}/issues",
            json=req.model_dump(exclude_none=True),
        )
        return PaperclipIssue(**data) if data else None

    async def update_issue(
        self,
        issue_id: str,
        req: UpdateIssueRequest,
    ) -> PaperclipIssue | None:
        data = await self._patch(
            f"/api/issues/{issue_id}",
            json=req.model_dump(exclude_none=True),
        )
        return PaperclipIssue(**data) if data else None

    async def delete_issue(self, issue_id: str) -> bool:
        return await self._delete(f"/api/issues/{issue_id}")

    # ── issue comments ───────────────────────────────────────────────────

    async def list_comments(self, issue_id: str) -> list[PaperclipComment]:
        data = await self._get(f"/api/issues/{issue_id}/comments")
        if not isinstance(data, list):
            return []
        return [PaperclipComment(**c) for c in data]

    async def add_comment(
        self,
        issue_id: str,
        content: str,
    ) -> PaperclipComment | None:
        data = await self._post(
            f"/api/issues/{issue_id}/comments",
            json={"content": content},
        )
        return PaperclipComment(**data) if data else None

    # ── issue workflow ───────────────────────────────────────────────────

    async def checkout_issue(self, issue_id: str) -> dict[str, Any] | None:
        """Assign an issue to Timmy (checkout)."""
        body: dict[str, Any] = {}
        if settings.paperclip_agent_id:
            body["agentId"] = settings.paperclip_agent_id
        return await self._post(f"/api/issues/{issue_id}/checkout", json=body)

    async def release_issue(self, issue_id: str) -> dict[str, Any] | None:
        """Release a checked-out issue."""
        return await self._post(f"/api/issues/{issue_id}/release")

    # ── goals ────────────────────────────────────────────────────────────

    async def list_goals(self, company_id: str | None = None) -> list[PaperclipGoal]:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            return []
        data = await self._get(f"/api/companies/{cid}/goals")
        if not isinstance(data, list):
            return []
        return [PaperclipGoal(**g) for g in data]

    async def create_goal(
        self,
        title: str,
        description: str = "",
        company_id: str | None = None,
    ) -> PaperclipGoal | None:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            return None
        data = await self._post(
            f"/api/companies/{cid}/goals",
            json={"title": title, "description": description},
        )
        return PaperclipGoal(**data) if data else None

    # ── heartbeat runs ───────────────────────────────────────────────────

    async def list_heartbeat_runs(
        self,
        company_id: str | None = None,
    ) -> list[dict[str, Any]]:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            return []
        data = await self._get(f"/api/companies/{cid}/heartbeat-runs")
        return data if isinstance(data, list) else []

    async def get_run_events(self, run_id: str) -> list[dict[str, Any]]:
        data = await self._get(f"/api/heartbeat-runs/{run_id}/events")
        return data if isinstance(data, list) else []

    async def cancel_run(self, run_id: str) -> dict[str, Any] | None:
        return await self._post(f"/api/heartbeat-runs/{run_id}/cancel")

    # ── approvals ────────────────────────────────────────────────────────

    async def list_approvals(self, company_id: str | None = None) -> list[dict[str, Any]]:
        cid = company_id or settings.paperclip_company_id
        if not cid:
            return []
        data = await self._get(f"/api/companies/{cid}/approvals")
        return data if isinstance(data, list) else []

    async def approve(self, approval_id: str, comment: str = "") -> dict[str, Any] | None:
        body: dict[str, Any] = {}
        if comment:
            body["comment"] = comment
        return await self._post(f"/api/approvals/{approval_id}/approve", json=body)

    async def reject(self, approval_id: str, comment: str = "") -> dict[str, Any] | None:
        body: dict[str, Any] = {}
        if comment:
            body["comment"] = comment
        return await self._post(f"/api/approvals/{approval_id}/reject", json=body)


# Module-level singleton
paperclip = PaperclipClient()
@@ -1,116 +0,0 @@
"""Pydantic models for Paperclip AI API objects."""

from __future__ import annotations

from pydantic import BaseModel, Field

# ── Inbound: Paperclip → Timmy ──────────────────────────────────────────────


class PaperclipIssue(BaseModel):
    """A ticket/issue in Paperclip's task system."""

    id: str
    title: str
    description: str = ""
    status: str = "open"
    priority: str | None = None
    assignee_id: str | None = None
    project_id: str | None = None
    labels: list[str] = Field(default_factory=list)
    created_at: str | None = None
    updated_at: str | None = None


class PaperclipComment(BaseModel):
    """A comment on a Paperclip issue."""

    id: str
    issue_id: str
    content: str
    author: str | None = None
    created_at: str | None = None


class PaperclipAgent(BaseModel):
    """An agent in the Paperclip org chart."""

    id: str
    name: str
    role: str = ""
    status: str = "active"
    adapter_type: str | None = None
    company_id: str | None = None


class PaperclipGoal(BaseModel):
    """A company goal in Paperclip."""

    id: str
    title: str
    description: str = ""
    status: str = "active"
    company_id: str | None = None


class HeartbeatRun(BaseModel):
    """A heartbeat execution run."""

    id: str
    agent_id: str
    status: str
    issue_id: str | None = None
    started_at: str | None = None
    finished_at: str | None = None


# ── Outbound: Timmy → Paperclip ─────────────────────────────────────────────


class CreateIssueRequest(BaseModel):
    """Request to create a new issue in Paperclip."""

    title: str
    description: str = ""
    priority: str | None = None
    assignee_id: str | None = None
    project_id: str | None = None
    labels: list[str] = Field(default_factory=list)


class UpdateIssueRequest(BaseModel):
    """Request to update an existing issue."""

    title: str | None = None
    description: str | None = None
    status: str | None = None
    priority: str | None = None
    assignee_id: str | None = None


class AddCommentRequest(BaseModel):
    """Request to add a comment to an issue."""

    content: str


class WakeAgentRequest(BaseModel):
    """Request to wake an agent via heartbeat."""

    issue_id: str | None = None
    message: str | None = None


# ── API route models ─────────────────────────────────────────────────────────


class PaperclipStatusResponse(BaseModel):
    """Response for GET /api/paperclip/status."""

    enabled: bool
    connected: bool = False
    paperclip_url: str = ""
    company_id: str = ""
    agent_count: int = 0
    issue_count: int = 0
    error: str | None = None
@@ -1,216 +0,0 @@
"""Paperclip task runner — automated issue processing loop.

Timmy grabs open issues assigned to him, processes each one, posts a
completion comment, marks the issue done, and creates a recursive
follow-up task for himself.

Green-path workflow:
1. Poll Paperclip for open issues assigned to Timmy
2. Check out the first issue in queue
3. Process it (delegate to orchestrator via execute_task)
4. Post completion comment with the result
5. Mark the issue done
6. Create a follow-up task for himself (recursive musing)
"""

from __future__ import annotations

import asyncio
import logging
from collections.abc import Callable, Coroutine
from typing import Any, Protocol, runtime_checkable

from config import settings
from integrations.paperclip.bridge import PaperclipBridge
from integrations.paperclip.bridge import bridge as default_bridge
from integrations.paperclip.models import PaperclipIssue

logger = logging.getLogger(__name__)


@runtime_checkable
class Orchestrator(Protocol):
    """Anything with an ``execute_task`` matching Timmy's orchestrator."""

    async def execute_task(self, task_id: str, description: str, context: dict) -> Any: ...


def _wrap_orchestrator(orch: Orchestrator) -> Callable:
    """Adapt an orchestrator's execute_task to the process_fn signature."""

    async def _process(task_id: str, description: str, context: dict) -> str:
        raw = await orch.execute_task(task_id, description, context)
        # execute_task may return str or dict — normalise to str
        if isinstance(raw, dict):
            return raw.get("result", str(raw))
        return str(raw)

    return _process


class TaskRunner:
    """Autonomous task loop: grab → process → complete → follow-up.

    Wire an *orchestrator* (anything with ``execute_task``) and the runner
    pushes issues through the real agent pipe. Falls back to a plain
    ``process_fn`` callable or a no-op default.

    The runner operates on a single cycle via ``run_once`` (testable) or
    continuously via ``start`` with ``paperclip_poll_interval``.
    """

    def __init__(
        self,
        bridge: PaperclipBridge | None = None,
        orchestrator: Orchestrator | None = None,
        process_fn: Callable[[str, str, dict], Coroutine[Any, Any, str]] | None = None,
    ):
        self.bridge = bridge or default_bridge
        self.orchestrator = orchestrator

        # Priority: explicit process_fn > orchestrator wrapper > default
        if process_fn:
            self._process_fn = process_fn
        elif orchestrator:
            self._process_fn = _wrap_orchestrator(orchestrator)
        else:
            self._process_fn = None

        self._running = False

    # ── single cycle ──────────────────────────────────────────────────

    async def grab_next_task(self) -> PaperclipIssue | None:
        """Grab the first open issue assigned to Timmy."""
        agent_id = settings.paperclip_agent_id
        if not agent_id:
            logger.warning("paperclip_agent_id not set — cannot grab tasks")
            return None

        issues = await self.bridge.client.list_issues(status="open")
        # Filter to issues assigned to Timmy, take the first one
        for issue in issues:
            if issue.assignee_id == agent_id:
                return issue

        return None

    async def process_task(self, issue: PaperclipIssue) -> str:
        """Process an issue: check out, run through the orchestrator, return result."""
        # Check out the issue so others know we're working on it
        await self.bridge.client.checkout_issue(issue.id)

        context = {
            "issue_id": issue.id,
            "title": issue.title,
            "priority": issue.priority,
            "labels": issue.labels,
        }

        if self._process_fn:
            result = await self._process_fn(issue.id, issue.description or issue.title, context)
        else:
            result = f"Processed task: {issue.title}"

        return result

    async def complete_task(self, issue: PaperclipIssue, result: str) -> bool:
        """Post completion comment and mark issue done."""
        # Post the result as a comment
        await self.bridge.client.add_comment(
            issue.id,
            f"[Timmy] Task completed.\n\n{result}",
        )

        # Mark the issue as done
        return await self.bridge.close_issue(issue.id, comment=None)

    async def create_follow_up(
        self, original: PaperclipIssue, result: str
    ) -> PaperclipIssue | None:
        """Create a recursive follow-up task for Timmy.

        Timmy muses about task automation and writes a follow-up issue
        assigned to himself — the recursive self-improvement loop.
        """
        follow_up_title = f"Follow-up: {original.title}"
        follow_up_description = (
            f"Automated follow-up from completed task '{original.title}' "
            f"(issue {original.id}).\n\n"
            f"Previous result:\n{result}\n\n"
            "Review the outcome and determine if further action is needed. "
            "Muse about task automation improvements and recursive self-improvement."
        )

        return await self.bridge.create_and_assign(
            title=follow_up_title,
            description=follow_up_description,
            assignee_id=settings.paperclip_agent_id,
            priority=original.priority,
            wake=False,  # Don't wake immediately — let the next poll pick it up
        )

    async def run_once(self) -> dict[str, Any] | None:
        """Execute one full cycle of the green-path workflow.

        Returns a summary dict on success, None if no work found.
        """
        # Step 1: Grab next task
        issue = await self.grab_next_task()
        if not issue:
            logger.debug("No tasks in queue for Timmy")
            return None

        logger.info("Grabbed task %s: %s", issue.id, issue.title)

        # Step 2: Process the task
        result = await self.process_task(issue)
        logger.info("Processed task %s", issue.id)

        # Step 3: Complete it
        completed = await self.complete_task(issue, result)
        if not completed:
            logger.warning("Failed to mark task %s as done", issue.id)

        # Step 4: Create follow-up
        follow_up = await self.create_follow_up(issue, result)
        follow_up_id = follow_up.id if follow_up else None
        if follow_up:
            logger.info("Created follow-up %s for task %s", follow_up.id, issue.id)

        return {
            "original_issue_id": issue.id,
            "original_title": issue.title,
            "result": result,
            "completed": completed,
            "follow_up_issue_id": follow_up_id,
        }

    # ── continuous loop ───────────────────────────────────────────────

    async def start(self) -> None:
        """Run the task loop continuously using paperclip_poll_interval."""
        interval = settings.paperclip_poll_interval
        if interval <= 0:
            logger.info("Task runner disabled (poll_interval=%d)", interval)
            return

        self._running = True
        logger.info("Task runner started (poll every %ds)", interval)

        while self._running:
            try:
                await self.run_once()
            except Exception as exc:
                logger.error("Task runner cycle failed: %s", exc)

            await asyncio.sleep(interval)

    def stop(self) -> None:
        """Signal the loop to stop."""
        self._running = False
        logger.info("Task runner stopping")


# Module-level singleton
task_runner = TaskRunner()
@@ -1 +0,0 @@
# swarm — task orchestration package
@@ -1,250 +0,0 @@
|
||||
"""Swarm event log — records system events to SQLite.
|
||||
|
||||
Provides EventType enum, EventLogEntry dataclass, and log_event() function
|
||||
used by error_capture, thinking engine, and the event broadcaster.
|
||||
|
||||
Events are persisted to SQLite and also published to the unified EventBus
|
||||
(infrastructure.events.bus) for subscriber notification.
|
||||
"""
|
||||
|
||||
import json
|
||||
import logging
|
||||
import sqlite3
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import UTC, datetime, timedelta
|
||||
from enum import Enum
|
||||
from pathlib import Path
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
DB_PATH = Path("data/events.db")
|
||||
|
||||
|
||||
class EventType(Enum):
|
||||
"""All recognised event types in the system."""
|
||||
|
||||
# Task lifecycle
|
||||
TASK_CREATED = "task.created"
|
||||
TASK_BIDDING = "task.bidding"
|
||||
TASK_ASSIGNED = "task.assigned"
|
||||
TASK_STARTED = "task.started"
|
||||
TASK_COMPLETED = "task.completed"
|
||||
TASK_FAILED = "task.failed"
|
||||
|
||||
# Agent lifecycle
|
||||
AGENT_JOINED = "agent.joined"
|
||||
AGENT_LEFT = "agent.left"
|
||||
AGENT_STATUS_CHANGED = "agent.status_changed"
|
||||
|
||||
# Bids
|
||||
BID_SUBMITTED = "bid.submitted"
|
||||
AUCTION_CLOSED = "auction.closed"
|
||||
|
||||
# Tools
|
||||
TOOL_CALLED = "tool.called"
|
||||
TOOL_COMPLETED = "tool.completed"
|
||||
TOOL_FAILED = "tool.failed"
|
||||
|
||||
# System
|
||||
SYSTEM_ERROR = "system.error"
|
||||
SYSTEM_WARNING = "system.warning"
|
||||
SYSTEM_INFO = "system.info"
|
||||
|
||||
# Error capture
|
||||
ERROR_CAPTURED = "error.captured"
|
||||
BUG_REPORT_CREATED = "bug_report.created"
|
||||
|
||||
# Thinking
|
||||
TIMMY_THOUGHT = "timmy.thought"
|
||||
|
||||
# Loop QA self-tests
|
||||
LOOP_QA_OK = "loop_qa.ok"
|
||||
LOOP_QA_FAIL = "loop_qa.fail"
|
||||
|
||||
|
||||
@dataclass
|
||||
class EventLogEntry:
|
||||
"""Single event in the log, used by the broadcaster for display."""
|
||||
|
||||
id: str
|
||||
event_type: EventType
|
||||
source: str
|
||||
timestamp: str
|
||||
data: dict = field(default_factory=dict)
|
||||
task_id: str = ""
|
||||
agent_id: str = ""
|
||||
|
||||
|
||||
def _ensure_db() -> sqlite3.Connection:
|
||||
DB_PATH.parent.mkdir(parents=True, exist_ok=True)
|
||||
conn = sqlite3.connect(str(DB_PATH))
|
||||
conn.row_factory = sqlite3.Row
|
||||
conn.execute("PRAGMA journal_mode=WAL")
|
||||
conn.execute("PRAGMA busy_timeout=5000")
|
||||
conn.execute("""
|
||||
CREATE TABLE IF NOT EXISTS events (
|
||||
id TEXT PRIMARY KEY,
|
||||
event_type TEXT NOT NULL,
|
||||
source TEXT DEFAULT '',
|
||||
task_id TEXT DEFAULT '',
|
||||
agent_id TEXT DEFAULT '',
|
||||
data TEXT DEFAULT '{}',
|
||||
timestamp TEXT NOT NULL
|
||||
)
|
||||
""")
|
||||
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_type ON events(event_type)")
|
||||
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_time ON events(timestamp)")
|
||||
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_agent ON events(agent_id)")
|
||||
conn.commit()
|
||||
return conn
|
||||
|
||||
|
||||
def _publish_to_event_bus(entry: EventLogEntry) -> None:
    """Publish an event to the unified EventBus (non-blocking).

    This bridges the synchronous log_event() callers to the async EventBus
    so subscribers get notified of all events regardless of origin.
    """
    try:
        import asyncio

        from infrastructure.events.bus import Event, event_bus

        event = Event(
            id=entry.id,
            type=entry.event_type.value,
            source=entry.source,
            data={
                **entry.data,
                "task_id": entry.task_id,
                "agent_id": entry.agent_id,
            },
            timestamp=entry.timestamp,
        )

        try:
            asyncio.get_running_loop()
            asyncio.create_task(event_bus.publish(event))
        except RuntimeError:
            # No event loop running — skip async publish
            pass
    except Exception:
        # Graceful degradation — never crash on EventBus integration
        pass

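The bridge above schedules a publish task only when an event loop is already running. The same fire-and-forget pattern in isolation (the `publish` coroutine and list are stand-ins, not the project's EventBus):

```python
import asyncio

received: list[str] = []

async def publish(event: str) -> None:
    received.append(event)

def fire_and_forget(event: str) -> None:
    # Only schedule if a loop is running; otherwise drop silently,
    # mirroring the RuntimeError fallback in _publish_to_event_bus.
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return
    asyncio.create_task(publish(event))

fire_and_forget("dropped")  # no loop yet: silently skipped

async def main() -> None:
    fire_and_forget("hello")
    await asyncio.sleep(0)  # yield so the scheduled task can run

asyncio.run(main())
print(received)  # ['hello']
```

Note the `await asyncio.sleep(0)`: `create_task` only schedules the coroutine, so the caller must yield to the loop at least once before the publish actually runs.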
def log_event(
    event_type: EventType,
    source: str = "",
    data: dict | None = None,
    task_id: str = "",
    agent_id: str = "",
) -> EventLogEntry:
    """Record an event and return the entry.

    Persists to SQLite, publishes to EventBus for subscribers,
    and broadcasts to WebSocket clients.
    """
    entry = EventLogEntry(
        id=str(uuid.uuid4()),
        event_type=event_type,
        source=source,
        timestamp=datetime.now(UTC).isoformat(),
        data=data or {},
        task_id=task_id,
        agent_id=agent_id,
    )

    # Persist to SQLite
    try:
        db = _ensure_db()
        try:
            db.execute(
                "INSERT INTO events (id, event_type, source, task_id, agent_id, data, timestamp) "
                "VALUES (?, ?, ?, ?, ?, ?, ?)",
                (
                    entry.id,
                    event_type.value,
                    source,
                    task_id,
                    agent_id,
                    json.dumps(data or {}),
                    entry.timestamp,
                ),
            )
            db.commit()
        finally:
            db.close()
    except Exception as exc:
        logger.debug("Failed to persist event: %s", exc)

    # Publish to unified EventBus (non-blocking)
    _publish_to_event_bus(entry)

    # Broadcast to WebSocket clients (non-blocking)
    try:
        from infrastructure.events.broadcaster import event_broadcaster

        event_broadcaster.broadcast_sync(entry)
    except Exception:
        pass

    return entry

def prune_old_events(keep_days: int = 90, keep_min: int = 200) -> int:
    """Delete events older than *keep_days*, always retaining at least *keep_min*.

    Returns the number of deleted rows.
    """
    db = _ensure_db()
    try:
        total = db.execute("SELECT COUNT(*) as c FROM events").fetchone()["c"]
        if total <= keep_min:
            return 0
        cutoff = (datetime.now(UTC) - timedelta(days=keep_days)).isoformat()
        cursor = db.execute(
            "DELETE FROM events WHERE timestamp < ? AND id NOT IN "
            "(SELECT id FROM events ORDER BY timestamp DESC LIMIT ?)",
            (cutoff, keep_min),
        )
        deleted = cursor.rowcount
        db.commit()
        return deleted
    except Exception as exc:
        logger.warning("Event pruning failed: %s", exc)
        return 0
    finally:
        db.close()

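The pruning query combines an age cutoff with a `NOT IN (… LIMIT keep_min)` subquery so the newest rows always survive. A self-contained sketch against an in-memory table, reduced to the two columns the query touches (the row ages are invented test data):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id TEXT PRIMARY KEY, timestamp TEXT)")
now = datetime.now(timezone.utc)
# Five rows aged 0, 40, 80, 120, and 160 days
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(str(i), (now - timedelta(days=i * 40)).isoformat()) for i in range(5)],
)

keep_days, keep_min = 90, 2
cutoff = (now - timedelta(days=keep_days)).isoformat()
cur = conn.execute(
    "DELETE FROM events WHERE timestamp < ? AND id NOT IN "
    "(SELECT id FROM events ORDER BY timestamp DESC LIMIT ?)",
    (cutoff, keep_min),
)
print(cur.rowcount)  # 2 — only the rows aged 120 and 160 days go
```

This works because identically formatted ISO-8601 timestamps compare correctly as strings, which is what the schema relies on.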
def get_task_events(task_id: str, limit: int = 50) -> list[EventLogEntry]:
    """Retrieve events for a specific task."""
    db = _ensure_db()
    try:
        rows = db.execute(
            "SELECT * FROM events WHERE task_id=? ORDER BY timestamp DESC LIMIT ?",
            (task_id, limit),
        ).fetchall()
    finally:
        db.close()

    entries = []
    for r in rows:
        try:
            et = EventType(r["event_type"])
        except ValueError:
            et = EventType.SYSTEM_INFO
        entries.append(
            EventLogEntry(
                id=r["id"],
                event_type=et,
                source=r["source"],
                timestamp=r["timestamp"],
                data=json.loads(r["data"]) if r["data"] else {},
                task_id=r["task_id"],
                agent_id=r["agent_id"],
            )
        )
    return entries
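Note how `get_task_events` maps unknown historical `event_type` strings to `SYSTEM_INFO` instead of failing, so old rows survive enum renames. The pattern in isolation (a two-member stand-in enum, not the full EventType):

```python
from enum import Enum

class EventType(Enum):
    SYSTEM_INFO = "system.info"
    TOOL_CALLED = "tool.called"

def parse_event_type(raw: str) -> EventType:
    # Tolerant lookup: unknown or legacy values degrade to SYSTEM_INFO
    try:
        return EventType(raw)
    except ValueError:
        return EventType.SYSTEM_INFO

print(parse_event_type("tool.called"))   # EventType.TOOL_CALLED
print(parse_event_type("legacy.event"))  # EventType.SYSTEM_INFO
```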
@@ -1 +0,0 @@
# swarm.task_queue — task queue bridge
@@ -1,121 +0,0 @@
"""Bridge module: exposes create_task() for programmatic task creation.

Used by infrastructure.error_capture to auto-create bug report tasks
in the same SQLite database the dashboard routes use.
"""

import logging
import sqlite3
import uuid
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path

logger = logging.getLogger(__name__)

# Use absolute path via settings.repo_root so tests can reliably redirect it
# and relative-path CWD differences don't cause DB leaks.
try:
    from config import settings as _settings

    DB_PATH = Path(_settings.repo_root) / "data" / "tasks.db"
except Exception:
    DB_PATH = Path("data/tasks.db")


@dataclass
class TaskRecord:
    """Lightweight return value from create_task()."""

    id: str
    title: str
    status: str


def _ensure_db() -> sqlite3.Connection:
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(str(DB_PATH))
    conn.row_factory = sqlite3.Row
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=5000")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tasks (
            id TEXT PRIMARY KEY,
            title TEXT NOT NULL,
            description TEXT DEFAULT '',
            status TEXT DEFAULT 'pending_approval',
            priority TEXT DEFAULT 'normal',
            assigned_to TEXT DEFAULT '',
            created_by TEXT DEFAULT 'operator',
            result TEXT DEFAULT '',
            created_at TEXT DEFAULT (datetime('now')),
            completed_at TEXT
        )
    """)
    conn.commit()
    return conn


def create_task(
    title: str,
    description: str = "",
    assigned_to: str = "default",
    created_by: str = "system",
    priority: str = "normal",
    requires_approval: bool = True,
    auto_approve: bool = False,
    task_type: str = "",
) -> TaskRecord:
    """Insert a task into the SQLite task queue and return a TaskRecord.

    Args:
        title: Task title (e.g. "[BUG] ConnectionError: ...")
        description: Markdown body with error details / stack trace
        assigned_to: Agent or queue to assign to
        created_by: Who created the task ("system", "operator", etc.)
        priority: "low" | "normal" | "high" | "urgent"
        requires_approval: If False and auto_approve, skip pending_approval
        auto_approve: If True, set status to "approved" immediately
        task_type: Optional tag (e.g. "bug_report")

    Returns:
        TaskRecord with the new task's id, title, and status.
    """
    valid_priorities = {"low", "normal", "high", "urgent"}
    if priority not in valid_priorities:
        priority = "normal"

    status = "approved" if (auto_approve and not requires_approval) else "pending_approval"
    task_id = str(uuid.uuid4())
    now = datetime.utcnow().isoformat()

    # Store task_type in description header if provided
    if task_type:
        description = f"**Type:** {task_type}\n{description}"

    db = _ensure_db()
    try:
        db.execute(
            "INSERT INTO tasks (id, title, description, status, priority, assigned_to, created_by, created_at) "
            "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            (task_id, title, description, status, priority, assigned_to, created_by, now),
        )
        db.commit()
    finally:
        db.close()

    logger.info("Task created: %s — %s [%s]", task_id[:8], title[:60], status)
    return TaskRecord(id=task_id, title=title, status=status)


def get_task_summary_for_briefing() -> dict:
    """Return a summary of task counts by status for the morning briefing."""
    db = _ensure_db()
    try:
        rows = db.execute("SELECT status, COUNT(*) as cnt FROM tasks GROUP BY status").fetchall()
    finally:
        db.close()

    summary = {r["status"]: r["cnt"] for r in rows}
    summary["total"] = sum(summary.values())
    return summary
@@ -229,15 +229,14 @@ def create_timmy(
        auto_pull=True,
    )

    # If Ollama is completely unreachable, fall back to Claude if available
    # If Ollama is completely unreachable, fail loudly.
    # Sovereignty: never silently send data to a cloud API.
    # Use --backend claude explicitly if you want cloud inference.
    if not _check_model_available(model_name):
        from timmy.backends import claude_available

        if claude_available():
            logger.warning("Ollama unreachable — falling back to Claude backend")
            from timmy.backends import ClaudeBackend

            return ClaudeBackend()
        logger.error(
            "Ollama unreachable and no local models available. "
            "Start Ollama with 'ollama serve' or use --backend claude explicitly."
        )

    if is_fallback:
        logger.info("Using fallback model %s (requested was unavailable)", model_name)

@@ -1,11 +1,43 @@
"""Agents package — Timmy orchestrator and configurable sub-agents."""
"""Agents package — YAML-driven agent factory.

All agent definitions live in config/agents.yaml.
The loader reads YAML and builds SubAgent instances from a single seed class.
"""

from timmy.agents.base import BaseAgent, SubAgent
from timmy.agents.timmy import TimmyOrchestrator, create_timmy_swarm
from timmy.agents.loader import (
    get_agent,
    list_agents,
    load_agents,
    reload_agents,
    route_request,
)

# Backwards compat — old code that imported create_timmy_swarm
# now gets the YAML-driven equivalent.


def create_timmy_swarm():
    """Load all agents from YAML config.

    Backwards-compatible wrapper for code that called create_timmy_swarm().
    Returns the orchestrator agent (or first agent if no orchestrator defined).
    """
    agents = load_agents()
    return agents.get("orchestrator", next(iter(agents.values())))


# Also alias TimmyOrchestrator for old imports
TimmyOrchestrator = SubAgent

__all__ = [
    "BaseAgent",
    "SubAgent",
    "TimmyOrchestrator",
    "create_timmy_swarm",
    "get_agent",
    "list_agents",
    "load_agents",
    "reload_agents",
    "route_request",
]

@@ -6,8 +6,8 @@ BaseAgent provides:
- Memory integration
- Structured logging

SubAgent is the concrete implementation used for all persona-based agents
(replacing the individual Helm/Echo/Seer/Forge/Quill classes).
SubAgent is the single seed class for ALL agents. Differentiation
comes entirely from config (agents.yaml), not from Python subclasses.
"""

import logging
@@ -29,7 +29,7 @@ logger = logging.getLogger(__name__)


class BaseAgent(ABC):
    """Base class for all sub-agents."""
    """Base class for all agents."""

    def __init__(
        self,
@@ -38,36 +38,47 @@ class BaseAgent(ABC):
        role: str,
        system_prompt: str,
        tools: list[str] | None = None,
        model: str | None = None,
        max_history: int = 10,
    ) -> None:
        self.agent_id = agent_id
        self.name = name
        self.role = role
        self.tools = tools or []
        self.model = model or settings.ollama_model
        self.max_history = max_history

        # Create Agno agent
        self.system_prompt = system_prompt
        self.agent = self._create_agent(system_prompt)

        # Event bus for communication
        self.event_bus: EventBus | None = None

        logger.info("%s agent initialized (id: %s)", name, agent_id)
        logger.info(
            "%s agent initialized (id: %s, model: %s)",
            name,
            agent_id,
            self.model,
        )

    def _create_agent(self, system_prompt: str) -> Agent:
        """Create the underlying Agno agent."""
        """Create the underlying Agno agent with per-agent model."""
        # Get tools from registry
        tool_instances = []
        for tool_name in self.tools:
            handler = tool_registry.get_handler(tool_name)
            if handler:
                tool_instances.append(handler)
        if tool_registry is not None:
            for tool_name in self.tools:
                handler = tool_registry.get_handler(tool_name)
                if handler:
                    tool_instances.append(handler)

        return Agent(
            name=self.name,
            model=Ollama(id=settings.ollama_model, host=settings.ollama_url, timeout=300),
            model=Ollama(id=self.model, host=settings.ollama_url, timeout=300),
            description=system_prompt,
            tools=tool_instances if tool_instances else None,
            add_history_to_context=True,
            num_history_runs=10,
            num_history_runs=self.max_history,
            markdown=True,
            telemetry=settings.telemetry_enabled,
        )
@@ -134,16 +145,18 @@ class BaseAgent(ABC):
            "agent_id": self.agent_id,
            "name": self.name,
            "role": self.role,
            "model": self.model,
            "status": "ready",
            "tools": self.tools,
        }


class SubAgent(BaseAgent):
    """Concrete agent configured by persona data (prompt + tools).
    """Concrete agent — the single seed class for all agents.

    Replaces the individual agent classes (Helm, Echo, Seer, Forge, Quill)
    which all shared the same structure and differed only by config.
    Every agent in the system is an instance of SubAgent, differentiated
    only by the config values passed in from agents.yaml. No subclassing
    needed — add new agents by editing YAML, not Python.
    """

    def __init__(
@@ -153,6 +166,8 @@ class SubAgent(BaseAgent):
        role: str,
        system_prompt: str,
        tools: list[str] | None = None,
        model: str | None = None,
        max_history: int = 10,
    ) -> None:
        super().__init__(
            agent_id=agent_id,
@@ -160,6 +175,8 @@ class SubAgent(BaseAgent):
            role=role,
            system_prompt=system_prompt,
            tools=tools,
            model=model,
            max_history=max_history,
        )

    async def execute_task(self, task_id: str, description: str, context: dict) -> Any:
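The single-seed-class idea — one concrete class, all differentiation from data — can be sketched without the Agno/Ollama machinery. Field names follow agents.yaml; this reduced `SubAgent` is an illustrative stand-in, not the project's class:

```python
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    # One concrete class; every difference between agents is config data
    agent_id: str
    name: str
    role: str = "general"
    tools: list[str] = field(default_factory=list)

# Parsed YAML would produce a mapping like this
agents_cfg = {
    "coder": {"name": "Forge", "role": "code", "tools": ["python"]},
    "writer": {"name": "Quill", "role": "writing"},
}
agents = {aid: SubAgent(agent_id=aid, **cfg) for aid, cfg in agents_cfg.items()}
print(agents["writer"].tools)  # [] — default applied, no subclass needed
```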
212
src/timmy/agents/loader.py
Normal file
@@ -0,0 +1,212 @@
"""YAML-driven agent factory.

Reads config/agents.yaml and builds agent instances from a single seed
class (SubAgent). All agent differentiation lives in YAML — no Python
changes needed to add, remove, or reconfigure agents.

Usage:
    from timmy.agents.loader import load_agents, get_agent, list_agents
    from timmy.agents.loader import get_routing_config, route_request

    agents = load_agents()             # dict of agent_id -> SubAgent
    forge = get_agent("coder")         # single agent by id
    target = route_request("fix bug")  # pattern-based routing
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import Any

import yaml

from config import settings

logger = logging.getLogger(__name__)

# Module-level cache
_agents: dict[str, Any] | None = None
_config: dict[str, Any] | None = None

# Default config path (relative to repo root)
_CONFIG_FILENAME = "config/agents.yaml"


def _find_config_path() -> Path:
    """Locate agents.yaml relative to the repo root."""
    repo_root = Path(settings.repo_root)
    config_path = repo_root / _CONFIG_FILENAME
    if not config_path.exists():
        raise FileNotFoundError(
            f"Agent config not found: {config_path}\nCreate {_CONFIG_FILENAME} in your repo root."
        )
    return config_path


def _load_config(force_reload: bool = False) -> dict[str, Any]:
    """Load and cache the agents.yaml config."""
    global _config
    if _config is not None and not force_reload:
        return _config

    config_path = _find_config_path()
    with open(config_path) as f:
        _config = yaml.safe_load(f)

    logger.info("Loaded agent config from %s", config_path)
    return _config


def _resolve_model(agent_model: str | None, defaults: dict) -> str:
    """Resolve agent model, falling back to defaults then settings."""
    if agent_model:
        return agent_model
    default_model = defaults.get("model")
    if default_model:
        return default_model
    return settings.ollama_model


def _resolve_prompt_tier(agent_tier: str | None, defaults: dict) -> str:
    """Resolve prompt tier, falling back to defaults."""
    return agent_tier or defaults.get("prompt_tier", "lite")


def _build_system_prompt(agent_cfg: dict, prompt_tier: str) -> str:
    """Build the full system prompt for an agent.

    Combines the agent's custom prompt with the appropriate base prompt
    (full or lite) from the prompts module.
    """
    from timmy.prompts import get_system_prompt

    # Get base prompt for the tier
    tools_enabled = prompt_tier == "full"
    base_prompt = get_system_prompt(tools_enabled=tools_enabled)

    # Prepend the agent's custom prompt
    custom_prompt = agent_cfg.get("prompt", "").strip()
    if custom_prompt:
        return f"{custom_prompt}\n\n{base_prompt}"

    return base_prompt


def load_agents(force_reload: bool = False) -> dict[str, Any]:
    """Load all agents from YAML config.

    Returns a dict of agent_id -> SubAgent instances.
    Agents are cached after first load; pass force_reload=True to re-read.
    """
    global _agents
    if _agents is not None and not force_reload:
        return _agents

    from timmy.agents.base import SubAgent

    config = _load_config(force_reload=force_reload)
    defaults = config.get("defaults", {})
    agents_cfg = config.get("agents", {})

    _agents = {}

    for agent_id, agent_cfg in agents_cfg.items():
        model = _resolve_model(agent_cfg.get("model"), defaults)
        prompt_tier = _resolve_prompt_tier(agent_cfg.get("prompt_tier"), defaults)
        system_prompt = _build_system_prompt(agent_cfg, prompt_tier)
        max_history = agent_cfg.get("max_history", defaults.get("max_history", 10))
        tools = agent_cfg.get("tools", defaults.get("tools", []))

        agent = SubAgent(
            agent_id=agent_id,
            name=agent_cfg.get("name", agent_id.title()),
            role=agent_cfg.get("role", "general"),
            system_prompt=system_prompt,
            tools=tools,
            model=model,
            max_history=max_history,
        )

        _agents[agent_id] = agent
        logger.info(
            "Loaded agent: %s (model=%s, tools=%d, tier=%s)",
            agent_id,
            model,
            len(tools),
            prompt_tier,
        )

    logger.info("Total agents loaded: %d", len(_agents))
    return _agents


def get_agent(agent_id: str) -> Any:
    """Get a single agent by ID. Loads config if not already loaded."""
    agents = load_agents()
    agent = agents.get(agent_id)
    if agent is None:
        available = ", ".join(sorted(agents.keys()))
        raise KeyError(f"Unknown agent: {agent_id!r}. Available: {available}")
    return agent


def list_agents() -> list[dict[str, Any]]:
    """List all agents with their metadata (for tools_intro, delegation, etc.)."""
    config = _load_config()
    defaults = config.get("defaults", {})
    agents_cfg = config.get("agents", {})

    result = []
    for agent_id, agent_cfg in agents_cfg.items():
        result.append(
            {
                "id": agent_id,
                "name": agent_cfg.get("name", agent_id.title()),
                "role": agent_cfg.get("role", "general"),
                "model": _resolve_model(agent_cfg.get("model"), defaults),
                "tools": agent_cfg.get("tools", defaults.get("tools", [])),
                "status": "available",
            }
        )
    return result


# ── Routing ────────────────────────────────────────────────────────────────


def get_routing_config() -> dict[str, Any]:
    """Get the routing configuration."""
    config = _load_config()
    return config.get("routing", {"method": "pattern", "patterns": {}})


def route_request(user_message: str) -> str | None:
    """Route a user request to an agent using pattern matching.

    Returns the agent_id of the best match, or None if no pattern matches
    (meaning the orchestrator should handle it directly).
    """
    routing = get_routing_config()

    if routing.get("method") != "pattern":
        return None

    patterns = routing.get("patterns", {})
    message_lower = user_message.lower()

    for agent_id, keywords in patterns.items():
        for keyword in keywords:
            if keyword.lower() in message_lower:
                logger.debug("Routed to %s (matched: %r)", agent_id, keyword)
                return agent_id

    return None


def reload_agents() -> dict[str, Any]:
    """Force reload agents from YAML. Call after editing agents.yaml."""
    global _agents, _config
    _agents = None
    _config = None
    return load_agents(force_reload=True)
@@ -1,547 +0,0 @@
"""Orchestrator agent.

Coordinates all sub-agents and handles user interaction.
Uses the three-tier memory system and MCP tools.
"""

import logging
from datetime import UTC, datetime
from pathlib import Path
from typing import Any

from config import settings
from infrastructure.events.bus import event_bus
from timmy.agents.base import BaseAgent, SubAgent

logger = logging.getLogger(__name__)

# Dynamic context that gets built at startup
_timmy_context: dict[str, Any] = {
    "git_log": "",
    "agents": [],
    "hands": [],
    "memory": "",
}


async def _load_hands_async() -> list[dict]:
    """Async helper to load hands.

    Hands registry removed — hand definitions live in TOML files under hands/.
    This will be rewired to read from brain memory.
    """
    return []


def build_timmy_context_sync() -> dict[str, Any]:
    """Build context at startup (synchronous version).

    Gathers git commits, active sub-agents, and hot memory.
    """
    global _timmy_context

    ctx: dict[str, Any] = {
        "timestamp": datetime.now(UTC).isoformat(),
        "repo_root": settings.repo_root,
        "git_log": "",
        "agents": [],
        "hands": [],
        "memory": "",
    }

    # 1. Get recent git commits
    try:
        from tools.git_tools import git_log

        result = git_log(max_count=20)
        if result.get("success"):
            commits = result.get("commits", [])
            ctx["git_log"] = "\n".join(
                [f"{c['short_sha']} {c['message'].split(chr(10))[0]}" for c in commits[:20]]
            )
    except Exception as exc:
        logger.warning("Could not load git log for context: %s", exc)
        ctx["git_log"] = "(Git log unavailable)"

    # 2. Get active sub-agents
    try:
        from swarm import registry as swarm_registry

        conn = swarm_registry._get_conn()
        rows = conn.execute(
            "SELECT id, name, status, capabilities FROM agents ORDER BY name"
        ).fetchall()
        ctx["agents"] = [
            {
                "id": r["id"],
                "name": r["name"],
                "status": r["status"],
                "capabilities": r["capabilities"],
            }
            for r in rows
        ]
        conn.close()
    except Exception as exc:
        logger.warning("Could not load agents for context: %s", exc)
        ctx["agents"] = []

    # 3. Read hot memory (via HotMemory to auto-create if missing)
    try:
        from timmy.memory_system import memory_system

        ctx["memory"] = memory_system.hot.read()[:2000]
    except Exception as exc:
        logger.warning("Could not load memory for context: %s", exc)
        ctx["memory"] = "(Memory unavailable)"

    _timmy_context.update(ctx)
    logger.info("Context built (sync): %d agents", len(ctx["agents"]))
    return ctx

async def build_timmy_context_async() -> dict[str, Any]:
    """Build complete context including hands (async version)."""
    ctx = build_timmy_context_sync()
    ctx["hands"] = await _load_hands_async()
    _timmy_context.update(ctx)
    logger.info("Context built (async): %d agents, %d hands", len(ctx["agents"]), len(ctx["hands"]))
    return ctx


# Keep old name for backwards compatibility
build_timmy_context = build_timmy_context_sync


def format_timmy_prompt(base_prompt: str, context: dict[str, Any]) -> str:
    """Format the system prompt with dynamic context."""

    # Format agents list
    agents_list = (
        "\n".join(
            [
                f"| {a['name']} | {a['capabilities'] or 'general'} | {a['status']} |"
                for a in context.get("agents", [])
            ]
        )
        or "(No agents registered yet)"
    )

    # Format hands list
    hands_list = (
        "\n".join(
            [
                f"| {h['name']} | {h['schedule']} | {'enabled' if h['enabled'] else 'disabled'} |"
                for h in context.get("hands", [])
            ]
        )
        or "(No hands configured)"
    )

    repo_root = context.get("repo_root", settings.repo_root)

    context_block = f"""
## Current System Context (as of {context.get("timestamp", datetime.now(UTC).isoformat())})

### Repository
**Root:** `{repo_root}`

### Recent Commits (last 20):
```
{context.get("git_log", "(unavailable)")}
```

### Active Sub-Agents:
| Name | Capabilities | Status |
|------|--------------|--------|
{agents_list}

### Hands (Scheduled Tasks):
| Name | Schedule | Status |
|------|----------|--------|
{hands_list}

### Hot Memory:
{context.get("memory", "(unavailable)")[:1000]}
"""

    # Replace {REPO_ROOT} placeholder with actual path
    base_prompt = base_prompt.replace("{REPO_ROOT}", repo_root)

    # Insert context after the first line
    lines = base_prompt.split("\n")
    if lines:
        return lines[0] + "\n" + context_block + "\n" + "\n".join(lines[1:])
    return base_prompt

# Base prompt with anti-hallucination hard rules
ORCHESTRATOR_PROMPT_BASE = """You are a local AI orchestrator running on this machine.

## Your Role

You are the primary interface between the user and the agent swarm. You:
1. Understand user requests
2. Decide whether to handle directly or delegate to sub-agents
3. Coordinate multi-agent workflows when needed
4. Maintain continuity using the three-tier memory system

## Sub-Agent Roster

| Agent | Role | When to Use |
|-------|------|-------------|
| Seer | Research | External info, web search, facts |
| Forge | Code | Programming, tools, file operations |
| Quill | Writing | Documentation, content creation |
| Echo | Memory | Past conversations, user profile |
| Helm | Routing | Complex multi-step workflows |

## Decision Framework

**Handle directly if:**
- Simple question about capabilities
- General knowledge
- Social/conversational

**Delegate if:**
- Requires specialized skills
- Needs external research (Seer)
- Involves code (Forge)
- Needs past context (Echo)
- Complex workflow (Helm)

## Hard Rules — Non-Negotiable

1. **NEVER fabricate tool output.** If you need data from a tool, call the tool and wait for the real result.

2. **If a tool call returns an error, report the exact error message.**

3. **If you do not know something, say so.** Then use a tool. Do not guess.

4. **Never say "I'll wait for the output" and then immediately provide fake output.**

5. **When corrected, use memory_write to save the correction immediately.**

6. **Your source code lives at the repository root shown above.** When using git tools, they automatically run from {REPO_ROOT}.

7. **When asked about your status, queue, agents, memory, or system health, use the `system_status` tool.**
"""

class TimmyOrchestrator(BaseAgent):
|
||||
"""Main orchestrator agent that coordinates the swarm."""
|
||||
|
||||
def __init__(self) -> None:
|
||||
# Build initial context (sync) and format prompt
|
||||
# Full context including hands will be loaded on first async call
|
||||
context = build_timmy_context_sync()
|
||||
formatted_prompt = format_timmy_prompt(ORCHESTRATOR_PROMPT_BASE, context)
|
||||
|
||||
super().__init__(
|
||||
agent_id="orchestrator",
|
||||
name="Orchestrator",
|
||||
role="orchestrator",
|
||||
system_prompt=formatted_prompt,
|
||||
tools=[
|
||||
"web_search",
|
||||
"read_file",
|
||||
"write_file",
|
||||
"python",
|
||||
"memory_search",
|
||||
"memory_write",
|
||||
"system_status",
|
||||
],
|
||||
)
|
||||
|
||||
# Sub-agent registry
|
||||
self.sub_agents: dict[str, BaseAgent] = {}
|
||||
|
||||
# Session tracking for init behavior
|
||||
self._session_initialized = False
|
||||
self._session_context: dict[str, Any] = {}
|
||||
self._context_fully_loaded = False
|
||||
|
||||
# Connect to event bus
|
||||
self.connect_event_bus(event_bus)
|
||||
|
||||
logger.info("Orchestrator initialized with context-aware prompt")
|
||||
|
||||
def register_sub_agent(self, agent: BaseAgent) -> None:
|
||||
"""Register a sub-agent with the orchestrator."""
|
||||
self.sub_agents[agent.agent_id] = agent
|
||||
agent.connect_event_bus(event_bus)
|
||||
logger.info("Registered sub-agent: %s", agent.name)
|
||||
|
||||
    async def _session_init(self) -> None:
        """Initialize session context on first user message.

        Silently reads git log and AGENTS.md to ground the orchestrator in real data.
        This runs once per session before the first response.
        """
        if self._session_initialized:
            return

        logger.debug("Running session init...")

        # Load full context including hands if not already done
        if not self._context_fully_loaded:
            await build_timmy_context_async()
            self._context_fully_loaded = True

        # Read recent git log --oneline -15 from repo root
        try:
            from tools.git_tools import git_log

            git_result = git_log(max_count=15)
            if git_result.get("success"):
                commits = git_result.get("commits", [])
                self._session_context["git_log_commits"] = commits
                # Format as oneline for easy reading
                self._session_context["git_log_oneline"] = "\n".join(
                    [f"{c['short_sha']} {c['message'].split(chr(10))[0]}" for c in commits]
                )
                logger.debug(f"Session init: loaded {len(commits)} commits from git log")
            else:
                self._session_context["git_log_oneline"] = "Git log unavailable"
        except Exception as exc:
            logger.warning("Session init: could not read git log: %s", exc)
            self._session_context["git_log_oneline"] = "Git log unavailable"

        # Read AGENTS.md for self-awareness
        try:
            agents_md_path = Path(settings.repo_root) / "AGENTS.md"
            if agents_md_path.exists():
                self._session_context["agents_md"] = agents_md_path.read_text()[:3000]
        except Exception as exc:
            logger.warning("Session init: could not read AGENTS.md: %s", exc)

        # Read CHANGELOG for recent changes
        try:
            changelog_path = Path(settings.repo_root) / "docs" / "CHANGELOG_2026-02-26.md"
            if changelog_path.exists():
                self._session_context["changelog"] = changelog_path.read_text()[:2000]
        except Exception:
            pass  # Changelog is optional

        # Build session-specific context block for the prompt
        recent_changes = self._session_context.get("git_log_oneline", "")
        if recent_changes and recent_changes != "Git log unavailable":
            self._session_context["recent_changes_block"] = f"""
## Recent Changes to Your Codebase (last 15 commits):
```
{recent_changes}
```
When asked "what's new?" or similar, refer to these commits for actual changes.
"""
        else:
            self._session_context["recent_changes_block"] = ""

        self._session_initialized = True
        logger.debug("Session init complete")
    def _get_enhanced_system_prompt(self) -> str:
        """Get system prompt enhanced with session-specific context.

        Prepends the recent git log to the system prompt for grounding.
        """
        base = self.system_prompt

        # Add recent changes block if available
        recent_changes = self._session_context.get("recent_changes_block", "")
        if recent_changes:
            # Insert after the first line
            lines = base.split("\n")
            if lines:
                return lines[0] + "\n" + recent_changes + "\n" + "\n".join(lines[1:])

        return base
    async def orchestrate(self, user_request: str) -> str:
        """Main entry point for user requests.

        Analyzes the request and either handles directly or delegates.
        """
        # Run session init on first message (loads git log, etc.)
        await self._session_init()

        # Quick classification
        request_lower = user_request.lower()

        # Direct response patterns (no delegation needed)
        direct_patterns = [
            "your name",
            "who are you",
            "what are you",
            "hello",
            "hi",
            "how are you",
            "help",
            "what can you do",
        ]

        for pattern in direct_patterns:
            if pattern in request_lower:
                return await self.run(user_request)

        # Check for memory references — delegate to Echo
        memory_patterns = [
            "we talked about",
            "we discussed",
            "remember",
            "what did i say",
            "what did we decide",
            "remind me",
            "have we",
        ]

        for pattern in memory_patterns:
            if pattern in request_lower:
                echo = self.sub_agents.get("echo")
                if echo:
                    return await echo.run(
                        f"Recall information about: {user_request}\nProvide relevant context from memory."
                    )

        # Complex requests — ask Helm to route
        helm = self.sub_agents.get("helm")
        if helm:
            routing_response = await helm.run(
                f"Analyze this request and determine the best agent to handle it:\n\n"
                f"Request: {user_request}\n\n"
                f"Respond with: Primary Agent: [agent name]"
            )
            # Extract agent name from routing response
            agent_id = self._extract_agent(routing_response)
            if agent_id in self.sub_agents and agent_id != "orchestrator":
                return await self.sub_agents[agent_id].run(user_request)

        # Default: handle directly
        return await self.run(user_request)
    @staticmethod
    def _extract_agent(text: str) -> str:
        """Extract agent name from routing text."""
        # Includes "lab" so Helm's routing to the Lab agent can actually resolve
        agents = ["seer", "forge", "quill", "echo", "helm", "lab"]
        text_lower = text.lower()
        for agent in agents:
            if agent in text_lower:
                return agent
        return "orchestrator"

    async def execute_task(self, task_id: str, description: str, context: dict) -> Any:
        """Execute a task (usually delegates to appropriate agent)."""
        return await self.orchestrate(description)

    def get_swarm_status(self) -> dict:
        """Get status of all agents in the swarm."""
        return {
            "orchestrator": self.get_status(),
            "sub_agents": {aid: agent.get_status() for aid, agent in self.sub_agents.items()},
            "total_agents": 1 + len(self.sub_agents),
        }
# ── Persona definitions ──────────────────────────────────────────────────────
# Each persona is a config dict that gets passed to SubAgent.
# Previously these were separate classes (SeerAgent, ForgeAgent, etc.)
# that differed only by these values.

_PERSONAS: list[dict[str, Any]] = [
    {
        "agent_id": "seer",
        "name": "Seer",
        "role": "research",
        "tools": ["web_search", "read_file", "memory_search"],
        "system_prompt": (
            "You are Seer, a research and information gathering specialist.\n"
            "Find, evaluate, and synthesize information from external sources.\n"
            "Be thorough, skeptical, concise, and cite sources."
        ),
    },
    {
        "agent_id": "forge",
        "name": "Forge",
        "role": "code",
        "tools": ["python", "write_file", "read_file", "list_directory"],
        "system_prompt": (
            "You are Forge, a code generation and tool building specialist.\n"
            "Write clean code, be safe, explain your work, and test mentally."
        ),
    },
    {
        "agent_id": "quill",
        "name": "Quill",
        "role": "writing",
        "tools": ["write_file", "read_file", "memory_search"],
        "system_prompt": (
            "You are Quill, a writing and content generation specialist.\n"
            "Write clearly, know your audience, be concise, use formatting."
        ),
    },
    {
        "agent_id": "echo",
        "name": "Echo",
        "role": "memory",
        "tools": ["memory_search", "read_file", "write_file"],
        "system_prompt": (
            "You are Echo, a memory and context management specialist.\n"
            "Remember, retrieve, and synthesize information from the past.\n"
            "Be accurate, relevant, concise, and acknowledge uncertainty."
        ),
    },
    {
        "agent_id": "helm",
        "name": "Helm",
        "role": "routing",
        "tools": ["memory_search"],
        "system_prompt": (
            "You are Helm, a routing and orchestration specialist.\n"
            "Analyze tasks and decide how to route them to other agents.\n"
            "Available agents: Seer (research), Forge (code), Quill (writing), Echo (memory), Lab (experiments).\n"
            "Respond with: Primary Agent: [agent name]"
        ),
    },
    {
        "agent_id": "lab",
        "name": "Lab",
        "role": "experiment",
        "tools": [
            "run_experiment",
            "prepare_experiment",
            "shell",
            "python",
            "read_file",
            "write_file",
        ],
        "system_prompt": (
            "You are Lab, an autonomous ML experimentation specialist.\n"
            "You run time-boxed training experiments, evaluate metrics,\n"
            "modify training code to improve results, and iterate.\n"
            "Always report the metric delta. Never exceed the time budget."
        ),
    },
]

def create_timmy_swarm() -> TimmyOrchestrator:
    """Create orchestrator with all sub-agents registered."""
    orch = TimmyOrchestrator()

    for persona in _PERSONAS:
        orch.register_sub_agent(SubAgent(**persona))

    return orch


# Convenience functions for refreshing context
def refresh_timmy_context_sync() -> dict[str, Any]:
    """Refresh context (sync version)."""
    return build_timmy_context_sync()


async def refresh_timmy_context_async() -> dict[str, Any]:
    """Refresh context including hands (async version)."""
    return await build_timmy_context_async()


# Keep old name for backwards compatibility
refresh_timmy_context = refresh_timmy_context_sync
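The routing in `orchestrate()` above is plain first-match-wins substring matching over keyword lists (the `routing: method: pattern` mode from `config/agents.yaml`). A minimal self-contained sketch of that pattern — the routing table and keywords here are illustrative, not the real config:

```python
# First-match-wins keyword routing, as used by orchestrate() above.
# ROUTES entries are illustrative; the real lists live in agents.yaml.

ROUTES: list[tuple[str, list[str]]] = [
    ("echo", ["remember", "we discussed", "remind me"]),
    ("seer", ["search", "look up", "find out"]),
]


def route(request: str) -> str:
    """Return the first agent whose keyword appears in the request."""
    request_lower = request.lower()
    for agent, patterns in ROUTES:
        if any(p in request_lower for p in patterns):
            return agent
    return "orchestrator"  # no match: the orchestrator handles it directly
```

Note the substring check inherits the same quirk as the real code: short keywords can match inside longer words, so pattern lists should avoid tokens like "hi".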
@@ -1,137 +0,0 @@
"""Cascade Router adapter for Timmy agent.

Provides automatic failover between LLM providers with:
- Circuit breaker pattern for failing providers
- Metrics tracking per provider
- Priority-based routing (local first, then APIs)
"""

import logging
from dataclasses import dataclass

from infrastructure.router.cascade import CascadeRouter
from timmy.prompts import SYSTEM_PROMPT

logger = logging.getLogger(__name__)


@dataclass
class TimmyResponse:
    """Response from Timmy via Cascade Router."""

    content: str
    provider_used: str
    latency_ms: float
    fallback_used: bool = False


class TimmyCascadeAdapter:
    """Adapter that routes Timmy requests through Cascade Router.

    Usage:
        adapter = TimmyCascadeAdapter()
        response = await adapter.chat("Hello")
        print(f"Response: {response.content}")
        print(f"Provider: {response.provider_used}")
    """

    def __init__(self, router: CascadeRouter | None = None) -> None:
        """Initialize adapter with Cascade Router.

        Args:
            router: CascadeRouter instance. If None, creates default.
        """
        self.router = router or CascadeRouter()
        logger.info("TimmyCascadeAdapter initialized with %d providers", len(self.router.providers))

    async def chat(self, message: str, context: str | None = None) -> TimmyResponse:
        """Send message through cascade router with automatic failover.

        Args:
            message: User message
            context: Optional conversation context

        Returns:
            TimmyResponse with content and metadata
        """
        # Build messages array
        messages = []
        if context:
            messages.append({"role": "system", "content": context})
        messages.append({"role": "user", "content": message})

        # Route through cascade
        import time

        start = time.time()

        try:
            result = await self.router.complete(
                messages=messages,
                system_prompt=SYSTEM_PROMPT,
            )

            latency = (time.time() - start) * 1000

            # Determine if fallback was used
            primary = self.router.providers[0] if self.router.providers else None
            fallback_used = primary and primary.status.value != "healthy"

            return TimmyResponse(
                content=result.content,
                provider_used=result.provider_name,
                latency_ms=latency,
                fallback_used=fallback_used,
            )

        except Exception as exc:
            logger.error("All providers failed: %s", exc)
            raise

    def get_provider_status(self) -> list[dict]:
        """Get status of all providers.

        Returns:
            List of provider status dicts
        """
        return [
            {
                "name": p.name,
                "type": p.type,
                "status": p.status.value,
                "circuit_state": p.circuit_state.value,
                "metrics": {
                    "total": p.metrics.total_requests,
                    "success": p.metrics.successful_requests,
                    "failed": p.metrics.failed_requests,
                    "avg_latency_ms": round(p.metrics.avg_latency_ms, 1),
                    "error_rate": round(p.metrics.error_rate, 3),
                },
                "priority": p.priority,
                "enabled": p.enabled,
            }
            for p in self.router.providers
        ]

    def get_preferred_provider(self) -> str | None:
        """Get name of highest-priority healthy provider.

        Returns:
            Provider name or None if all unhealthy
        """
        for provider in self.router.providers:
            if provider.status.value == "healthy" and provider.enabled:
                return provider.name
        return None


# Global singleton for reuse
_cascade_adapter: TimmyCascadeAdapter | None = None


def get_cascade_adapter() -> TimmyCascadeAdapter:
    """Get or create global cascade adapter singleton."""
    global _cascade_adapter
    if _cascade_adapter is None:
        _cascade_adapter = TimmyCascadeAdapter()
    return _cascade_adapter
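The deleted adapter's docstring mentions a per-provider circuit breaker. For the record, the core of that pattern fits in a few lines — this is a hedged sketch of the idea only, with illustrative threshold and field names, not the `CascadeRouter` internals:

```python
# Minimal per-provider circuit breaker: after N consecutive failures the
# provider is considered "open" (skipped) until a success resets it.
# failure_threshold and method names are illustrative.

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3) -> None:
        self.failure_threshold = failure_threshold
        self.failures = 0  # consecutive failures seen so far

    @property
    def open(self) -> bool:
        """True when the provider should be skipped."""
        return self.failures >= self.failure_threshold

    def record_success(self) -> None:
        self.failures = 0  # any success closes the circuit again

    def record_failure(self) -> None:
        self.failures += 1
```

A router would consult `breaker.open` before trying each provider in priority order, falling through to the next one when the circuit is open.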
@@ -65,14 +65,11 @@ def _get_vault():


 def _get_brain_memory():
-    """Lazy-import the brain unified memory.
+    """Return None — brain module removed.

-    Redirected to use unified memory.db (via vector_store) instead of
-    brain.db. The brain module is deprecated for new memory operations.
+    Memory operations now go through timmy.memory_system.
     """
-    from brain.memory import get_memory
-
-    return get_memory()
+    return None


 # ---------------------------------------------------------------------------
@@ -255,13 +252,8 @@ TEST_SEQUENCE: list[tuple[Capability, str]] = [


 def log_event(event_type, **kwargs):
-    """Proxy to swarm event_log.log_event — lazy import."""
-    try:
-        from swarm.event_log import log_event as _log_event
-
-        return _log_event(event_type, **kwargs)
-    except Exception as exc:
-        logger.debug("Failed to log event: %s", exc)
+    """No-op — swarm event_log removed."""
+    logger.debug("log_event(%s) — swarm module removed, skipping", event_type)


 def capture_error(exc, **kwargs):
@@ -275,10 +267,9 @@ def capture_error(exc, **kwargs):


 def create_task(**kwargs):
-    """Proxy to swarm.task_queue.models.create_task — lazy import."""
-    from swarm.task_queue.models import create_task as _create
-
-    return _create(**kwargs)
+    """No-op — swarm task queue removed."""
+    logger.debug("create_task() — swarm module removed, skipping")
+    return None


 class LoopQAOrchestrator:
@@ -341,10 +332,8 @@ class LoopQAOrchestrator:
                 "error_type": type(exc).__name__,
             }

-            # Log via event_log
-            from swarm.event_log import EventType
-
-            event_type = EventType.LOOP_QA_OK if result["success"] else EventType.LOOP_QA_FAIL
+            # Log result
+            event_type = "loop_qa_ok" if result["success"] else "loop_qa_fail"
             log_event(
                 event_type,
                 source="loop_qa",
@@ -21,6 +21,8 @@ Usage::
 from __future__ import annotations

 import logging
+import os
+import shutil
 import sqlite3
 import uuid
 from datetime import datetime
@@ -34,11 +36,55 @@ logger = logging.getLogger(__name__)
 _issue_session = None


+def _parse_command(command_str: str) -> tuple[str, list[str]]:
+    """Split a command string into (executable, args).
+
+    Handles ``~/`` expansion and resolves via PATH if needed.
+    E.g. ``"gitea-mcp -t stdio"`` → ``("/Users/x/go/bin/gitea-mcp", ["-t", "stdio"])``
+    """
+    parts = command_str.split()
+    executable = os.path.expanduser(parts[0])
+
+    # If not an absolute path, resolve via shutil.which
+    if not os.path.isabs(executable):
+        resolved = shutil.which(executable)
+        if resolved:
+            executable = resolved
+        else:
+            # Check common binary locations not always on PATH
+            for candidate_dir in ["~/go/bin", "~/.local/bin", "~/bin"]:
+                candidate = os.path.expanduser(os.path.join(candidate_dir, parts[0]))
+                if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
+                    executable = candidate
+                    break
+
+    return executable, parts[1:]
+
+
+def _gitea_server_params():
+    """Build ``StdioServerParameters`` for the Gitea MCP server."""
+    from mcp.client.stdio import StdioServerParameters
+
+    exe, args = _parse_command(settings.mcp_gitea_command)
+    return StdioServerParameters(
+        command=exe,
+        args=args,
+        env={
+            "GITEA_ACCESS_TOKEN": settings.gitea_token,
+            "GITEA_HOST": settings.gitea_url,
+            "PATH": os.environ.get("PATH", "/usr/bin:/bin"),
+        },
+    )
+
+
 def create_gitea_mcp_tools():
     """Create an MCPTools instance for the Gitea MCP server.

     Returns None if Gitea is disabled or not configured (no token).
     The returned MCPTools is lazy — Agno connects it on first ``arun()``.
+
+    Uses ``server_params`` instead of ``command`` to bypass Agno's
+    executable whitelist (gitea-mcp is a Go binary not in the list).
     """
     if not settings.gitea_enabled or not settings.gitea_token:
         logger.debug("Gitea MCP: disabled or no token configured")
@@ -47,22 +93,19 @@ def create_gitea_mcp_tools():
     try:
         from agno.tools.mcp import MCPTools

-        # Build command — gitea-mcp expects "-t stdio" for stdio transport
-        command = settings.mcp_gitea_command
-
         tools = MCPTools(
-            command=command,
-            env={
-                "GITEA_ACCESS_TOKEN": settings.gitea_token,
-                "GITEA_HOST": settings.gitea_url,
-            },
+            server_params=_gitea_server_params(),
             include_tools=[
-                "create_issue",
-                "list_repo_issues",
-                "create_issue_comment",
-                "edit_issue",
+                "issue_write",
+                "issue_read",
+                "list_issues",
+                "pull_request_write",
+                "pull_request_read",
+                "list_pull_requests",
+                "list_branches",
+                "list_commits",
             ],
-            timeout=settings.mcp_timeout,
+            timeout_seconds=settings.mcp_timeout,
         )
         logger.info("Gitea MCP tools created (lazy connect)")
         return tools
@@ -76,14 +119,28 @@ def create_filesystem_mcp_tools():

     Returns None if the command is not configured.
     Scoped to the project repo_root directory.
+
+    Uses ``server_params`` for consistency (npx is whitelisted by Agno
+    but server_params is the more robust approach).
     """
     try:
         from agno.tools.mcp import MCPTools
+        from mcp.client.stdio import StdioServerParameters

-        command = f"{settings.mcp_filesystem_command} {settings.repo_root}"
+        # Parse the base command, then append repo_root as an extra arg
+        exe, args = _parse_command(settings.mcp_filesystem_command)
+        args.append(str(settings.repo_root))
+
+        params = StdioServerParameters(
+            command=exe,
+            args=args,
+            env={
+                "PATH": os.environ.get("PATH", "/usr/bin:/bin"),
+            },
+        )
+
         tools = MCPTools(
-            command=command,
+            server_params=params,
             include_tools=[
                 "read_file",
                 "write_file",
@@ -92,7 +149,7 @@ def create_filesystem_mcp_tools():
                 "get_file_info",
                 "directory_tree",
             ],
-            timeout=settings.mcp_timeout,
+            timeout_seconds=settings.mcp_timeout,
         )
         logger.info("Filesystem MCP tools created (lazy connect)")
         return tools
@@ -147,6 +204,9 @@ async def create_gitea_issue_via_mcp(title: str, body: str = "", labels: str = "
     Used by the thinking engine's ``_maybe_file_issues()`` post-hook.
     Manages its own MCPTools session with lazy connect + graceful failure.
+
+    Uses ``tools.session.call_tool()`` for direct MCP invocation — the
+    ``MCPTools`` wrapper itself does not expose ``call_tool()``.

     Args:
         title: Issue title.
         body: Issue body (markdown).
@@ -165,12 +225,8 @@ async def create_gitea_issue_via_mcp(title: str, body: str = "", labels: str = "

     if _issue_session is None:
         _issue_session = MCPTools(
-            command=settings.mcp_gitea_command,
-            env={
-                "GITEA_ACCESS_TOKEN": settings.gitea_token,
-                "GITEA_HOST": settings.gitea_url,
-            },
-            timeout=settings.mcp_timeout,
+            server_params=_gitea_server_params(),
+            timeout_seconds=settings.mcp_timeout,
         )

     # Ensure connected
@@ -187,16 +243,17 @@ async def create_gitea_issue_via_mcp(title: str, body: str = "", labels: str = "
     # Parse owner/repo from settings
     owner, repo = settings.gitea_repo.split("/", 1)

-    # Build tool arguments
+    # Build tool arguments — gitea-mcp uses issue_write with method="create"
     args = {
+        "method": "create",
         "owner": owner,
         "repo": repo,
         "title": title,
        "body": full_body,
     }

-    # Call the MCP tool directly via the session
-    result = await _issue_session.call_tool("create_issue", arguments=args)
+    # Call via the underlying MCP session (MCPTools doesn't expose call_tool)
+    result = await _issue_session.session.call_tool("issue_write", arguments=args)

     # Bridge to local work order
     label_list = [tag.strip() for tag in labels.split(",") if tag.strip()] if labels else []
@@ -216,7 +273,7 @@ async def close_mcp_sessions() -> None:
     global _issue_session
     if _issue_session is not None:
         try:
-            await _issue_session.disconnect()
+            await _issue_session.close()
         except Exception as exc:
-            logger.debug("MCP session disconnect error: %s", exc)
+            logger.debug("MCP session close error: %s", exc)
         _issue_session = None
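The `_parse_command` helper added above boils down to two steps: expand `~`, then resolve bare executable names via PATH. A standalone sketch of just that core (without the extra fallback directories):

```python
# Core of the command-string parsing shown in the hunk above:
# expand "~", then resolve non-absolute names with shutil.which.

import os
import shutil


def parse_command(command_str: str) -> tuple[str, list[str]]:
    """Split "cmd -a -b" into (resolved_executable, ["-a", "-b"])."""
    parts = command_str.split()
    executable = os.path.expanduser(parts[0])
    # Bare names are looked up on PATH; absolute paths pass through
    if not os.path.isabs(executable):
        resolved = shutil.which(executable)
        if resolved:
            executable = resolved
    return executable, parts[1:]
```

Note this splits on whitespace only; arguments containing quoted spaces would need `shlex.split` instead.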
@@ -1,296 +0,0 @@
"""One-shot migration: consolidate old memory databases into data/memory.db.

Migrates:
- data/semantic_memory.db → memory.db (chunks table)
- data/swarm.db → memory.db (memory_entries → episodes table)
- data/brain.db → memory.db (facts table, if any rows exist)

After migration the old DB files are moved to data/archive/.

Usage:
    python -m timmy.memory_migrate          # dry-run (default)
    python -m timmy.memory_migrate --apply  # actually migrate
"""

import json
import logging
import shutil
import sqlite3
import sys
from pathlib import Path

logger = logging.getLogger(__name__)

PROJECT_ROOT = Path(__file__).parent.parent.parent
DATA_DIR = PROJECT_ROOT / "data"
ARCHIVE_DIR = DATA_DIR / "archive"
MEMORY_DB = DATA_DIR / "memory.db"


def _open(path: Path) -> sqlite3.Connection:
    conn = sqlite3.connect(str(path))
    conn.row_factory = sqlite3.Row
    return conn


def migrate_semantic_chunks(dry_run: bool = True) -> int:
    """Copy chunks from semantic_memory.db → memory.db."""
    src = DATA_DIR / "semantic_memory.db"
    if not src.exists():
        logger.info("semantic_memory.db not found — skipping")
        return 0

    src_conn = _open(src)
    # Check if source table exists
    has_table = src_conn.execute(
        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='chunks'"
    ).fetchone()[0]
    if not has_table:
        src_conn.close()
        return 0

    rows = src_conn.execute("SELECT * FROM chunks").fetchall()
    src_conn.close()

    if not rows:
        logger.info("semantic_memory.db: no chunks to migrate")
        return 0

    if dry_run:
        logger.info("[DRY RUN] Would migrate %d chunks from semantic_memory.db", len(rows))
        return len(rows)

    from timmy.memory.unified import get_connection

    dst = get_connection()
    migrated = 0
    for r in rows:
        try:
            dst.execute(
                "INSERT OR IGNORE INTO chunks (id, source, content, embedding, created_at, source_hash) "
                "VALUES (?, ?, ?, ?, ?, ?)",
                (
                    r["id"],
                    r["source"],
                    r["content"],
                    r["embedding"],
                    r["created_at"],
                    r["source_hash"],
                ),
            )
            migrated += 1
        except Exception as exc:
            logger.warning("Chunk migration error: %s", exc)
    dst.commit()
    dst.close()
    logger.info("Migrated %d chunks from semantic_memory.db", migrated)
    return migrated


def migrate_memory_entries(dry_run: bool = True) -> int:
    """Copy memory_entries from swarm.db → memory.db episodes table."""
    src = DATA_DIR / "swarm.db"
    if not src.exists():
        logger.info("swarm.db not found — skipping")
        return 0

    src_conn = _open(src)
    has_table = src_conn.execute(
        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='memory_entries'"
    ).fetchone()[0]
    if not has_table:
        src_conn.close()
        return 0

    rows = src_conn.execute("SELECT * FROM memory_entries").fetchall()
    src_conn.close()

    if not rows:
        logger.info("swarm.db: no memory_entries to migrate")
        return 0

    if dry_run:
        logger.info("[DRY RUN] Would migrate %d memory_entries from swarm.db → episodes", len(rows))
        return len(rows)

    from timmy.memory.unified import get_connection

    dst = get_connection()
    migrated = 0
    for r in rows:
        try:
            dst.execute(
                "INSERT OR IGNORE INTO episodes "
                "(id, content, source, context_type, embedding, metadata, agent_id, task_id, session_id, timestamp) "
                "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
                (
                    r["id"],
                    r["content"],
                    r["source"],
                    r["context_type"],
                    r["embedding"],
                    r["metadata"],
                    r["agent_id"],
                    r["task_id"],
                    r["session_id"],
                    r["timestamp"],
                ),
            )
            migrated += 1
        except Exception as exc:
            logger.warning("Episode migration error: %s", exc)
    dst.commit()
    dst.close()
    logger.info("Migrated %d memory_entries → episodes", migrated)
    return migrated


def migrate_brain_facts(dry_run: bool = True) -> int:
    """Copy facts from brain.db → memory.db facts table."""
    src = DATA_DIR / "brain.db"
    if not src.exists():
        logger.info("brain.db not found — skipping")
        return 0

    src_conn = _open(src)
    has_table = src_conn.execute(
        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='facts'"
    ).fetchone()[0]
    if not has_table:
        # Try 'memories' table (brain.db sometimes uses this name)
        has_memories = src_conn.execute(
            "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='memories'"
        ).fetchone()[0]
        if not has_memories:
            src_conn.close()
            return 0

        rows = src_conn.execute("SELECT * FROM memories").fetchall()
        src_conn.close()

        if not rows:
            return 0
        if dry_run:
            logger.info("[DRY RUN] Would migrate %d brain memories → facts", len(rows))
            return len(rows)

        from timmy.memory.unified import get_connection

        dst = get_connection()
        from datetime import UTC, datetime

        migrated = 0
        for r in rows:
            try:
                dst.execute(
                    "INSERT OR IGNORE INTO facts "
                    "(id, category, content, confidence, source, tags, created_at) "
                    "VALUES (?, ?, ?, ?, ?, ?, ?)",
                    (
                        r["id"],
                        "brain",
                        r.get("content", r.get("text", "")),
                        0.7,
                        "brain",
                        "[]",
                        r.get("created_at", datetime.now(UTC).isoformat()),
                    ),
                )
                migrated += 1
            except Exception as exc:
                logger.warning("Brain fact migration error: %s", exc)
        dst.commit()
        dst.close()
        return migrated

    rows = src_conn.execute("SELECT * FROM facts").fetchall()
    src_conn.close()

    if not rows:
        logger.info("brain.db: no facts to migrate")
        return 0

    if dry_run:
        logger.info("[DRY RUN] Would migrate %d facts from brain.db", len(rows))
        return len(rows)

    from timmy.memory.unified import get_connection

    dst = get_connection()
    migrated = 0
    for r in rows:
        try:
            dst.execute(
                "INSERT OR IGNORE INTO facts "
                "(id, category, content, confidence, source, tags, created_at) "
                "VALUES (?, ?, ?, ?, ?, ?, ?)",
                (
                    r["id"],
                    r.get("category", "brain"),
                    r["content"],
                    r.get("confidence", 0.7),
                    "brain",
                    json.dumps(r.get("tags", [])) if isinstance(r.get("tags"), list) else "[]",
                    r.get("created_at", ""),
                ),
            )
            migrated += 1
        except Exception as exc:
            logger.warning("Fact migration error: %s", exc)
    dst.commit()
    dst.close()
    logger.info("Migrated %d facts from brain.db", migrated)
    return migrated


def archive_old_dbs(dry_run: bool = True) -> list[str]:
    """Move old database files to data/archive/."""
    old_dbs = ["semantic_memory.db", "brain.db"]
    archived = []

    for name in old_dbs:
        src = DATA_DIR / name
        if not src.exists():
            continue
        if dry_run:
            logger.info("[DRY RUN] Would archive %s → data/archive/%s", name, name)
            archived.append(name)
        else:
            ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
            dst = ARCHIVE_DIR / name
            shutil.move(str(src), str(dst))
            logger.info("Archived %s → data/archive/%s", name, name)
            archived.append(name)

    return archived


def run_migration(dry_run: bool = True) -> dict:
    """Run the full migration pipeline."""
    results = {
        "chunks": migrate_semantic_chunks(dry_run),
        "episodes": migrate_memory_entries(dry_run),
        "facts": migrate_brain_facts(dry_run),
        "archived": archive_old_dbs(dry_run),
        "dry_run": dry_run,
    }
    total = results["chunks"] + results["episodes"] + results["facts"]
    mode = "DRY RUN" if dry_run else "APPLIED"
    logger.info("[%s] Migration complete: %d total records", mode, total)
    return results


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
    apply = "--apply" in sys.argv
    results = run_migration(dry_run=not apply)

    print(f"\n{'=' * 50}")
    print(f"Migration {'APPLIED' if apply else 'DRY RUN'}:")
    print(f"  Chunks migrated: {results['chunks']}")
    print(f"  Episodes migrated: {results['episodes']}")
    print(f"  Facts migrated: {results['facts']}")
    print(f"  Archived DBs: {results['archived']}")
    if not apply:
        print("\nRun with --apply to execute the migration.")
    print(f"{'=' * 50}")
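The deleted script's core move — copying rows between SQLite databases with `INSERT OR IGNORE` so that re-runs and ID collisions are harmless — is worth keeping on record. A self-contained sketch with in-memory databases (the table shape is illustrative):

```python
# Idempotent row copy between SQLite databases, as the deleted
# memory_migrate script did: INSERT OR IGNORE skips primary-key
# collisions, so repeating the migration never duplicates or clobbers.

import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE chunks (id TEXT PRIMARY KEY, content TEXT)")
src.executemany("INSERT INTO chunks VALUES (?, ?)", [("a", "one"), ("b", "two")])

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE chunks (id TEXT PRIMARY KEY, content TEXT)")
dst.execute("INSERT INTO chunks VALUES ('a', 'already there')")  # collides with src

for row in src.execute("SELECT id, content FROM chunks"):
    dst.execute("INSERT OR IGNORE INTO chunks VALUES (?, ?)", row)
dst.commit()

rows = dict(dst.execute("SELECT id, content FROM chunks"))
# 'a' keeps its existing value; 'b' is copied over
```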
@@ -44,10 +44,14 @@ SAFE_TOOLS = frozenset(
         "get_memory_status",
         "list_swarm_agents",
         # MCP Gitea tools
-        "create_issue",
-        "list_repo_issues",
-        "create_issue_comment",
-        "edit_issue",
+        "issue_write",
+        "issue_read",
+        "list_issues",
+        "pull_request_write",
+        "pull_request_read",
+        "list_pull_requests",
+        "list_branches",
+        "list_commits",
         # MCP filesystem tools (read-only)
         "list_directory",
         "search_files",
@@ -1,8 +1,7 @@
"""Timmy's delegation tools — submit tasks and list agents.

Delegation uses the orchestrator's sub-agent system. The old swarm
task-queue was removed; delegation now records intent and returns the
target agent information.
Reads agent roster from agents.yaml via the loader module.
No hardcoded agent lists.
"""

import logging
@@ -10,15 +9,6 @@ from typing import Any

logger = logging.getLogger(__name__)

# Agents available in the current orchestrator architecture
_VALID_AGENTS: dict[str, str] = {
    "seer": "research",
    "forge": "code",
    "echo": "memory",
    "helm": "routing",
    "quill": "writing",
}


def delegate_task(
    agent_name: str, task_description: str, priority: str = "normal"
@@ -26,19 +16,24 @@ def delegate_task(
    """Record a delegation intent to another agent.

    Args:
        agent_name: Name of the agent to delegate to
        agent_name: Name or ID of the agent to delegate to
        task_description: What you want the agent to do
        priority: Task priority - "low", "normal", "high"

    Returns:
        Dict with agent, status, and message
    """
    from timmy.agents.loader import list_agents

    agent_name = agent_name.lower().strip()

    if agent_name not in _VALID_AGENTS:
    # Build valid agents map from YAML config
    available = {a["id"]: a["role"] for a in list_agents()}

    if agent_name not in available:
        return {
            "success": False,
            "error": f"Unknown agent: {agent_name}. Valid agents: {', '.join(sorted(_VALID_AGENTS))}",
            "error": f"Unknown agent: {agent_name}. Valid agents: {', '.join(sorted(available))}",
            "task_id": None,
        }

@@ -54,32 +49,35 @@ def delegate_task(
        "success": True,
        "task_id": None,
        "agent": agent_name,
        "role": _VALID_AGENTS[agent_name],
        "role": available[agent_name],
        "status": "noted",
        "message": f"Delegation to {agent_name} ({_VALID_AGENTS[agent_name]}): {task_description[:100]}",
        "message": f"Delegation to {agent_name} ({available[agent_name]}): {task_description[:100]}",
    }


def list_swarm_agents() -> dict[str, Any]:
    """List all available sub-agents and their roles.

    Reads from agents.yaml — no hardcoded roster.

    Returns:
        Dict with agent list
    """
    try:
        from timmy.agents.timmy import _PERSONAS
        from timmy.agents.loader import list_agents

        agents = list_agents()
        return {
            "success": True,
            "agents": [
                {
                    "name": p["name"],
                    "id": p["agent_id"],
                    "role": p.get("role", ""),
                    "status": "available",
                    "capabilities": ", ".join(p.get("tools", [])),
                    "name": a["name"],
                    "id": a["id"],
                    "role": a["role"],
                    "status": a.get("status", "available"),
                    "capabilities": ", ".join(a.get("tools", [])),
                }
                for p in _PERSONAS
                for a in agents
            ],
        }
    except Exception as e:

@@ -219,25 +219,26 @@ def get_task_queue_status() -> dict[str, Any]:


def get_agent_roster() -> dict[str, Any]:
    """Get the agent roster from the orchestrator's sub-agent definitions.
    """Get the agent roster from agents.yaml config.

    Returns:
        Dict with agent list and summary.
    """
    try:
        from timmy.agents.timmy import _PERSONAS
        from timmy.agents.loader import list_agents

        roster = []
        for persona in _PERSONAS:
            roster.append(
                {
                    "id": persona["agent_id"],
                    "name": persona["name"],
                    "status": "available",
                    "capabilities": ", ".join(persona.get("tools", [])),
                    "role": persona.get("role", ""),
                }
            )
        agents = list_agents()
        roster = [
            {
                "id": a["id"],
                "name": a["name"],
                "status": a.get("status", "available"),
                "capabilities": ", ".join(a.get("tools", [])),
                "role": a.get("role", ""),
                "model": a.get("model", ""),
            }
            for a in agents
        ]

        return {
            "agents": roster,

@@ -1 +0,0 @@

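After this change the valid-agent check is driven by the YAML roster instead of the hardcoded `_VALID_AGENTS` dict. A hedged sketch of the caller-visible behavior, with a plain dict standing in for the `list_agents()` lookup (the agent IDs here are illustrative, not the full roster):

```python
# Stand-in for the agents.yaml-backed roster: agent id -> role.
available = {"seer": "research", "forge": "code"}


def delegate_task(agent_name: str, task_description: str, priority: str = "normal") -> dict:
    # Normalize the name the same way the tool does before the lookup.
    agent_name = agent_name.lower().strip()
    if agent_name not in available:
        return {
            "success": False,
            "error": f"Unknown agent: {agent_name}. Valid agents: {', '.join(sorted(available))}",
            "task_id": None,
        }
    return {
        "success": True,
        "task_id": None,
        "agent": agent_name,
        "role": available[agent_name],
        "status": "noted",
        "message": f"Delegation to {agent_name} ({available[agent_name]}): {task_description[:100]}",
    }


ok = delegate_task("Seer ", "summarize open issues")   # normalized to "seer"
bad = delegate_task("ghost", "anything")               # not in the roster
```

Note that `task_id` is always `None`: with the swarm queue gone, the tool records intent rather than enqueueing work.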
@@ -1,292 +0,0 @@
"""Tests for brain.client — BrainClient memory + task operations."""

from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from brain.client import DEFAULT_RQLITE_URL, BrainClient


class TestBrainClientInit:
    """Test BrainClient initialization."""

    def test_default_url(self):
        client = BrainClient()
        assert client.rqlite_url == DEFAULT_RQLITE_URL

    def test_custom_url(self):
        client = BrainClient(rqlite_url="http://custom:4001")
        assert client.rqlite_url == "http://custom:4001"

    def test_node_id_generated(self):
        client = BrainClient()
        assert client.node_id  # not empty

    def test_custom_node_id(self):
        client = BrainClient(node_id="my-node")
        assert client.node_id == "my-node"

    def test_source_detection(self):
        client = BrainClient()
        assert isinstance(client.source, str)


class TestBrainClientMemory:
    """Test memory operations (remember, recall, get_recent, get_context)."""

    def _make_client(self):
        return BrainClient(rqlite_url="http://test:4001", node_id="test-node")

    async def test_remember_success(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {"results": [{"last_insert_id": 42}]}
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        with patch("brain.client.BrainClient._detect_source", return_value="test"):
            with patch("brain.embeddings.get_embedder") as mock_emb:
                mock_embedder = MagicMock()
                mock_embedder.encode_single.return_value = b"\x00" * 16
                mock_emb.return_value = mock_embedder

                result = await client.remember("test memory", tags=["test"])
                assert result["id"] == 42
                assert result["status"] == "stored"

    async def test_remember_failure_raises(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock(side_effect=Exception("connection refused"))

        with patch("brain.embeddings.get_embedder") as mock_emb:
            mock_embedder = MagicMock()
            mock_embedder.encode_single.return_value = b"\x00" * 16
            mock_emb.return_value = mock_embedder

            with pytest.raises(Exception, match="connection refused"):
                await client.remember("fail")

    async def test_recall_success(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {
            "results": [
                {
                    "rows": [
                        ["memory content", "test", '{"key": "val"}', 0.1],
                    ]
                }
            ]
        }
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        with patch("brain.embeddings.get_embedder") as mock_emb:
            mock_embedder = MagicMock()
            mock_embedder.encode_single.return_value = b"\x00" * 16
            mock_emb.return_value = mock_embedder

            results = await client.recall("search query")
            assert len(results) == 1
            assert results[0]["content"] == "memory content"
            assert results[0]["metadata"] == {"key": "val"}

    async def test_recall_with_source_filter(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {"results": [{"rows": []}]}
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        with patch("brain.embeddings.get_embedder") as mock_emb:
            mock_embedder = MagicMock()
            mock_embedder.encode_single.return_value = b"\x00" * 16
            mock_emb.return_value = mock_embedder

            results = await client.recall("test", sources=["timmy", "user"])
            assert results == []
            # Check that sources were passed in the SQL
            call_args = client._client.post.call_args
            sql_params = call_args[1]["json"]
            assert "timmy" in sql_params[1] or "timmy" in str(sql_params)

    async def test_recall_error_returns_empty(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock(side_effect=Exception("timeout"))

        with patch("brain.embeddings.get_embedder") as mock_emb:
            mock_embedder = MagicMock()
            mock_embedder.encode_single.return_value = b"\x00" * 16
            mock_emb.return_value = mock_embedder

            results = await client.recall("test")
            assert results == []

    async def test_get_recent_success(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {
            "results": [
                {
                    "rows": [
                        [1, "recent memory", "test", '["tag1"]', "{}", "2026-03-06T00:00:00"],
                    ]
                }
            ]
        }
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        memories = await client.get_recent(hours=24, limit=10)
        assert len(memories) == 1
        assert memories[0]["content"] == "recent memory"
        assert memories[0]["tags"] == ["tag1"]

    async def test_get_recent_error_returns_empty(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock(side_effect=Exception("db error"))

        result = await client.get_recent()
        assert result == []

    async def test_get_context(self):
        client = self._make_client()
        client.get_recent = AsyncMock(
            return_value=[
                {"content": "Recent item 1"},
                {"content": "Recent item 2"},
            ]
        )
        client.recall = AsyncMock(
            return_value=[
                {"content": "Relevant item 1"},
            ]
        )

        ctx = await client.get_context("test query")
        assert "Recent activity:" in ctx
        assert "Recent item 1" in ctx
        assert "Relevant memories:" in ctx
        assert "Relevant item 1" in ctx


class TestBrainClientTasks:
    """Test task queue operations."""

    def _make_client(self):
        return BrainClient(rqlite_url="http://test:4001", node_id="test-node")

    async def test_submit_task(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {"results": [{"last_insert_id": 7}]}
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        result = await client.submit_task("do something", task_type="shell")
        assert result["id"] == 7
        assert result["status"] == "queued"

    async def test_submit_task_failure_raises(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock(side_effect=Exception("network error"))

        with pytest.raises(Exception, match="network error"):
            await client.submit_task("fail task")

    async def test_claim_task_found(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {
            "results": [{"rows": [[1, "task content", "shell", 5, '{"key": "val"}']]}]
        }
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        task = await client.claim_task(["shell", "general"])
        assert task is not None
        assert task["id"] == 1
        assert task["content"] == "task content"
        assert task["metadata"] == {"key": "val"}

    async def test_claim_task_none_available(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {"results": [{"rows": []}]}
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        task = await client.claim_task(["shell"])
        assert task is None

    async def test_claim_task_error_returns_none(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock(side_effect=Exception("raft error"))

        task = await client.claim_task(["general"])
        assert task is None

    async def test_complete_task(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock()

        # Should not raise
        await client.complete_task(1, success=True, result="done")
        client._client.post.assert_awaited_once()

    async def test_complete_task_failure(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock()

        await client.complete_task(1, success=False, error="oops")
        client._client.post.assert_awaited_once()

    async def test_get_pending_tasks(self):
        client = self._make_client()
        mock_response = MagicMock()
        mock_response.json.return_value = {
            "results": [
                {
                    "rows": [
                        [1, "task 1", "general", 0, "{}", "2026-03-06"],
                        [2, "task 2", "shell", 5, "{}", "2026-03-06"],
                    ]
                }
            ]
        }
        mock_response.raise_for_status = MagicMock()
        client._client = MagicMock()
        client._client.post = AsyncMock(return_value=mock_response)

        tasks = await client.get_pending_tasks()
        assert len(tasks) == 2

    async def test_get_pending_tasks_error(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.post = AsyncMock(side_effect=Exception("fail"))

        result = await client.get_pending_tasks()
        assert result == []

    async def test_close(self):
        client = self._make_client()
        client._client = MagicMock()
        client._client.aclose = AsyncMock()

        await client.close()
        client._client.aclose.assert_awaited_once()
@@ -1,243 +0,0 @@
"""Tests for brain.worker — DistributedWorker capability detection + task execution."""

from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from brain.worker import DistributedWorker


class TestWorkerInit:
    """Test worker initialization and capability detection."""

    @patch("brain.worker.DistributedWorker._detect_capabilities")
    def test_init_defaults(self, mock_caps):
        mock_caps.return_value = ["general"]
        worker = DistributedWorker()
        assert worker.running is False
        assert worker.node_id  # non-empty
        assert "general" in worker.capabilities

    @patch("brain.worker.DistributedWorker._detect_capabilities")
    def test_custom_brain_client(self, mock_caps):
        mock_caps.return_value = ["general"]
        mock_client = MagicMock()
        worker = DistributedWorker(brain_client=mock_client)
        assert worker.brain is mock_client

    @patch("brain.worker.DistributedWorker._detect_capabilities")
    def test_default_handlers_registered(self, mock_caps):
        mock_caps.return_value = ["general"]
        worker = DistributedWorker()
        assert "shell" in worker._handlers
        assert "creative" in worker._handlers
        assert "code" in worker._handlers
        assert "research" in worker._handlers
        assert "general" in worker._handlers


class TestCapabilityDetection:
    """Test individual capability detection methods."""

    @patch("brain.worker.DistributedWorker._detect_capabilities", return_value=["general"])
    def _make_worker(self, mock_caps):
        return DistributedWorker()

    @patch("brain.worker.subprocess.run")
    def test_has_gpu_nvidia(self, mock_run):
        worker = self._make_worker()
        mock_run.return_value = MagicMock(returncode=0)
        assert worker._has_gpu() is True

    @patch("brain.worker.subprocess.run", side_effect=OSError("no nvidia-smi"))
    @patch("brain.worker.os.path.exists", return_value=False)
    @patch("brain.worker.os.uname")
    def test_has_gpu_no_gpu(self, mock_uname, mock_exists, mock_run):
        worker = self._make_worker()
        mock_uname.return_value = MagicMock(sysname="Linux")
        assert worker._has_gpu() is False

    @patch("brain.worker.subprocess.run")
    def test_has_internet_true(self, mock_run):
        worker = self._make_worker()
        mock_run.return_value = MagicMock(returncode=0)
        assert worker._has_internet() is True

    @patch("brain.worker.subprocess.run", side_effect=OSError("no curl"))
    def test_has_internet_no_curl(self, mock_run):
        worker = self._make_worker()
        assert worker._has_internet() is False

    @patch("brain.worker.subprocess.run")
    def test_has_command_true(self, mock_run):
        worker = self._make_worker()
        mock_run.return_value = MagicMock(returncode=0)
        assert worker._has_command("docker") is True

    @patch("brain.worker.subprocess.run")
    def test_has_command_false(self, mock_run):
        worker = self._make_worker()
        mock_run.return_value = MagicMock(returncode=1)
        assert worker._has_command("nonexistent") is False

    @patch("brain.worker.subprocess.run", side_effect=OSError)
    def test_has_command_oserror(self, mock_run):
        worker = self._make_worker()
        assert worker._has_command("anything") is False


class TestRegisterHandler:
    """Test custom handler registration."""

    @patch("brain.worker.DistributedWorker._detect_capabilities", return_value=["general"])
    def test_register_adds_handler_and_capability(self, mock_caps):
        worker = DistributedWorker()

        async def custom_handler(content):
            return "custom result"

        worker.register_handler("custom_type", custom_handler)
        assert "custom_type" in worker._handlers
        assert "custom_type" in worker.capabilities


class TestTaskHandlers:
    """Test individual task handlers."""

    @patch("brain.worker.DistributedWorker._detect_capabilities", return_value=["general"])
    def _make_worker(self, mock_caps):
        worker = DistributedWorker()
        worker.brain = MagicMock()
        worker.brain.remember = AsyncMock()
        worker.brain.complete_task = AsyncMock()
        return worker

    async def test_handle_code(self):
        worker = self._make_worker()
        result = await worker._handle_code("write a function")
        assert "write a function" in result

    async def test_handle_research_no_internet(self):
        worker = self._make_worker()
        worker.capabilities = ["general"]  # no "web"
        with pytest.raises(Exception, match="Internet not available"):
            await worker._handle_research("search query")

    async def test_handle_creative_no_gpu(self):
        worker = self._make_worker()
        worker.capabilities = ["general"]  # no "gpu"
        with pytest.raises(Exception, match="GPU not available"):
            await worker._handle_creative("make an image")

    async def test_handle_general_no_ollama(self):
        worker = self._make_worker()
        worker.capabilities = ["general"]  # but not "ollama"
        # Remove "ollama" if present
        if "ollama" in worker.capabilities:
            worker.capabilities.remove("ollama")
        with pytest.raises(Exception, match="Ollama not available"):
            await worker._handle_general("answer this")


class TestExecuteTask:
    """Test execute_task orchestration."""

    @patch("brain.worker.DistributedWorker._detect_capabilities", return_value=["general"])
    def _make_worker(self, mock_caps):
        worker = DistributedWorker()
        worker.brain = MagicMock()
        worker.brain.complete_task = AsyncMock()
        return worker

    async def test_execute_task_success(self):
        worker = self._make_worker()

        async def fake_handler(content):
            return "result"

        worker._handlers["test_type"] = fake_handler

        result = await worker.execute_task(
            {
                "id": 1,
                "type": "test_type",
                "content": "do it",
            }
        )
        assert result["success"] is True
        assert result["result"] == "result"
        worker.brain.complete_task.assert_awaited_once_with(1, success=True, result="result")

    async def test_execute_task_failure(self):
        worker = self._make_worker()

        async def failing_handler(content):
            raise RuntimeError("oops")

        worker._handlers["fail_type"] = failing_handler

        result = await worker.execute_task(
            {
                "id": 2,
                "type": "fail_type",
                "content": "fail",
            }
        )
        assert result["success"] is False
        assert "oops" in result["error"]
        worker.brain.complete_task.assert_awaited_once()

    async def test_execute_task_falls_back_to_general(self):
        worker = self._make_worker()

        async def general_handler(content):
            return "general result"

        worker._handlers["general"] = general_handler

        result = await worker.execute_task(
            {
                "id": 3,
                "type": "unknown_type",
                "content": "something",
            }
        )
        assert result["success"] is True
        assert result["result"] == "general result"


class TestRunOnce:
    """Test run_once loop iteration."""

    @patch("brain.worker.DistributedWorker._detect_capabilities", return_value=["general"])
    def _make_worker(self, mock_caps):
        worker = DistributedWorker()
        worker.brain = MagicMock()
        worker.brain.claim_task = AsyncMock()
        worker.brain.complete_task = AsyncMock()
        return worker

    async def test_run_once_no_tasks(self):
        worker = self._make_worker()
        worker.brain.claim_task.return_value = None

        had_work = await worker.run_once()
        assert had_work is False

    async def test_run_once_with_task(self):
        worker = self._make_worker()
        worker.brain.claim_task.return_value = {"id": 1, "type": "code", "content": "write code"}

        had_work = await worker.run_once()
        assert had_work is True


class TestStopWorker:
    """Test stop method."""

    @patch("brain.worker.DistributedWorker._detect_capabilities", return_value=["general"])
    def test_stop_sets_running_false(self, mock_caps):
        worker = DistributedWorker()
        worker.running = True
        worker.stop()
        assert worker.running is False
@@ -1,416 +0,0 @@
"""Tests for brain.memory — Unified Memory interface.

Tests the local SQLite backend (default). rqlite tests are integration-only.

TDD: These tests define the contract that UnifiedMemory must fulfill.
Any substrate that reads/writes memory goes through this interface.
"""

from __future__ import annotations

import json

import pytest

from brain.memory import UnifiedMemory, get_memory


@pytest.fixture
def memory(tmp_path):
    """Create a UnifiedMemory instance with a temp database."""
    db_path = tmp_path / "test_brain.db"
    return UnifiedMemory(db_path=db_path, source="test", use_rqlite=False)


# ── Initialization ────────────────────────────────────────────────────────────


class TestUnifiedMemoryInit:
    """Validate database initialization and schema."""

    def test_creates_database_file(self, tmp_path):
        """Database file should be created on init."""
        db_path = tmp_path / "test.db"
        assert not db_path.exists()
        UnifiedMemory(db_path=db_path, use_rqlite=False)
        assert db_path.exists()

    def test_creates_parent_directories(self, tmp_path):
        """Should create parent dirs if they don't exist."""
        db_path = tmp_path / "deep" / "nested" / "brain.db"
        UnifiedMemory(db_path=db_path, use_rqlite=False)
        assert db_path.exists()

    def test_schema_has_memories_table(self, memory):
        """Schema should include memories table."""
        conn = memory._get_conn()
        try:
            cursor = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name='memories'"
            )
            assert cursor.fetchone() is not None
        finally:
            conn.close()

    def test_schema_has_facts_table(self, memory):
        """Schema should include facts table."""
        conn = memory._get_conn()
        try:
            cursor = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name='facts'"
            )
            assert cursor.fetchone() is not None
        finally:
            conn.close()

    def test_schema_version_recorded(self, memory):
        """Schema version should be recorded."""
        conn = memory._get_conn()
        try:
            cursor = conn.execute("SELECT version FROM brain_schema_version")
            row = cursor.fetchone()
            assert row is not None
            assert row["version"] == 1
        finally:
            conn.close()

    def test_idempotent_init(self, tmp_path):
        """Initializing twice on the same DB should not error."""
        db_path = tmp_path / "test.db"
        m1 = UnifiedMemory(db_path=db_path, use_rqlite=False)
        m1.remember_sync("first memory")
        m2 = UnifiedMemory(db_path=db_path, use_rqlite=False)
        # Should not lose data
        results = m2.recall_sync("first")
        assert len(results) >= 1

    def test_wal_mode_enabled(self, memory):
        """Database should use WAL journal mode for concurrency."""
        conn = memory._get_conn()
        try:
            mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
            assert mode == "wal", f"Expected WAL mode, got {mode}"
        finally:
            conn.close()

    def test_busy_timeout_set(self, memory):
        """Database connections should have busy_timeout configured."""
        conn = memory._get_conn()
        try:
            timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
            assert timeout == 5000, f"Expected 5000ms busy_timeout, got {timeout}"
        finally:
            conn.close()


# ── Remember (Sync) ──────────────────────────────────────────────────────────


class TestRememberSync:
    """Test synchronous memory storage."""

    def test_remember_returns_id(self, memory):
        """remember_sync should return dict with id and status."""
        result = memory.remember_sync("User prefers dark mode")
        assert "id" in result
        assert result["status"] == "stored"
        assert result["id"] is not None

    def test_remember_stores_content(self, memory):
        """Stored content should be retrievable."""
        memory.remember_sync("The sky is blue")
        results = memory.recall_sync("sky")
        assert len(results) >= 1
        assert "sky" in results[0]["content"].lower()

    def test_remember_with_tags(self, memory):
        """Tags should be stored and retrievable."""
        memory.remember_sync("Dark mode enabled", tags=["preference", "ui"])
        conn = memory._get_conn()
        try:
            row = conn.execute(
                "SELECT tags FROM memories WHERE content = ?", ("Dark mode enabled",)
            ).fetchone()
            tags = json.loads(row["tags"])
            assert "preference" in tags
            assert "ui" in tags
        finally:
            conn.close()

    def test_remember_with_metadata(self, memory):
        """Metadata should be stored as JSON."""
        memory.remember_sync("Test", metadata={"key": "value", "count": 42})
        conn = memory._get_conn()
        try:
            row = conn.execute("SELECT metadata FROM memories WHERE content = 'Test'").fetchone()
            meta = json.loads(row["metadata"])
            assert meta["key"] == "value"
            assert meta["count"] == 42
        finally:
            conn.close()

    def test_remember_with_custom_source(self, memory):
        """Source should default to self.source but be overridable."""
        memory.remember_sync("From timmy", source="timmy")
        memory.remember_sync("From user", source="user")
        conn = memory._get_conn()
        try:
            rows = conn.execute("SELECT source FROM memories ORDER BY id").fetchall()
            sources = [r["source"] for r in rows]
            assert "timmy" in sources
            assert "user" in sources
        finally:
            conn.close()

    def test_remember_default_source(self, memory):
        """Default source should be the one set at init."""
        memory.remember_sync("Default source test")
        conn = memory._get_conn()
        try:
            row = conn.execute("SELECT source FROM memories").fetchone()
            assert row["source"] == "test"  # set in fixture
        finally:
            conn.close()

    def test_remember_multiple(self, memory):
        """Multiple memories should be stored independently."""
        for i in range(5):
            memory.remember_sync(f"Memory number {i}")
        conn = memory._get_conn()
        try:
            count = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
            assert count == 5
        finally:
            conn.close()


# ── Recall (Sync) ─────────────────────────────────────────────────────────────


class TestRecallSync:
    """Test synchronous memory recall (keyword fallback)."""

    def test_recall_finds_matching(self, memory):
        """Recall should find memories matching the query."""
        memory.remember_sync("Bitcoin price is rising")
        memory.remember_sync("Weather is sunny today")
        results = memory.recall_sync("Bitcoin")
        assert len(results) >= 1
        assert "Bitcoin" in results[0]["content"]

    def test_recall_low_score_for_irrelevant(self, memory):
        """Recall should return low scores for irrelevant queries.

        Note: Semantic search may still return results (embeddings always
        have *some* similarity), but scores should be low for unrelated content.
        Keyword fallback returns nothing if no substring match.
        """
        memory.remember_sync("Bitcoin price is rising fast")
        results = memory.recall_sync("underwater basket weaving")
        if results:
            # If semantic search returned something, score should be low
            assert results[0]["score"] < 0.7, (
                f"Expected low score for irrelevant query, got {results[0]['score']}"
            )

    def test_recall_respects_limit(self, memory):
        """Recall should respect the limit parameter."""
        for i in range(10):
            memory.remember_sync(f"Bitcoin memory {i}")
        results = memory.recall_sync("Bitcoin", limit=3)
        assert len(results) <= 3

    def test_recall_filters_by_source(self, memory):
        """Recall should filter by source when specified."""
        memory.remember_sync("From timmy", source="timmy")
        memory.remember_sync("From user about timmy", source="user")
        results = memory.recall_sync("timmy", sources=["user"])
        assert all(r["source"] == "user" for r in results)

    def test_recall_returns_score(self, memory):
        """Recall results should include a score."""
        memory.remember_sync("Test memory for scoring")
        results = memory.recall_sync("Test")
        assert len(results) >= 1
        assert "score" in results[0]


# ── Facts ─────────────────────────────────────────────────────────────────────


class TestFacts:
    """Test long-term fact storage."""

    def test_store_fact_returns_id(self, memory):
        """store_fact_sync should return dict with id and status."""
        result = memory.store_fact_sync("user_preference", "Prefers dark mode")
        assert "id" in result
        assert result["status"] == "stored"

    def test_get_facts_by_category(self, memory):
        """get_facts_sync should filter by category."""
        memory.store_fact_sync("user_preference", "Likes dark mode")
        memory.store_fact_sync("user_fact", "Lives in Texas")
        prefs = memory.get_facts_sync(category="user_preference")
        assert len(prefs) == 1
        assert "dark mode" in prefs[0]["content"]

    def test_get_facts_by_query(self, memory):
        """get_facts_sync should support keyword search."""
        memory.store_fact_sync("user_preference", "Likes dark mode")
        memory.store_fact_sync("user_preference", "Prefers Bitcoin")
        results = memory.get_facts_sync(query="Bitcoin")
        assert len(results) == 1
        assert "Bitcoin" in results[0]["content"]

    def test_fact_access_count_increments(self, memory):
|
||||
"""Accessing a fact should increment its access_count."""
|
||||
memory.store_fact_sync("test_cat", "Test fact")
|
||||
# First access — count starts at 0, then gets incremented
|
||||
facts = memory.get_facts_sync(category="test_cat")
|
||||
first_count = facts[0]["access_count"]
|
||||
# Second access — count should be higher
|
||||
facts = memory.get_facts_sync(category="test_cat")
|
||||
second_count = facts[0]["access_count"]
|
||||
assert second_count > first_count, (
|
||||
f"Access count should increment: {first_count} -> {second_count}"
|
||||
)
|
||||
|
||||
def test_fact_confidence_ordering(self, memory):
|
||||
"""Facts should be ordered by confidence (highest first)."""
|
||||
memory.store_fact_sync("cat", "Low confidence fact", confidence=0.3)
|
||||
memory.store_fact_sync("cat", "High confidence fact", confidence=0.9)
|
||||
facts = memory.get_facts_sync(category="cat")
|
||||
assert facts[0]["confidence"] > facts[1]["confidence"]
|
||||
|
||||
|
||||
# ── Recent Memories ───────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestRecentSync:
|
||||
"""Test recent memory retrieval."""
|
||||
|
||||
def test_get_recent_returns_recent(self, memory):
|
||||
"""get_recent_sync should return recently stored memories."""
|
||||
memory.remember_sync("Just happened")
|
||||
results = memory.get_recent_sync(hours=1, limit=10)
|
||||
assert len(results) >= 1
|
||||
assert "Just happened" in results[0]["content"]
|
||||
|
||||
def test_get_recent_respects_limit(self, memory):
|
||||
"""get_recent_sync should respect limit."""
|
||||
for i in range(10):
|
||||
memory.remember_sync(f"Recent {i}")
|
||||
results = memory.get_recent_sync(hours=1, limit=3)
|
||||
assert len(results) <= 3
|
||||
|
||||
def test_get_recent_filters_by_source(self, memory):
|
||||
"""get_recent_sync should filter by source."""
|
||||
memory.remember_sync("From timmy", source="timmy")
|
||||
memory.remember_sync("From user", source="user")
|
||||
results = memory.get_recent_sync(hours=1, sources=["timmy"])
|
||||
assert all(r["source"] == "timmy" for r in results)
|
||||
|
||||
|
||||
# ── Stats ─────────────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestStats:
|
||||
"""Test memory statistics."""
|
||||
|
||||
def test_stats_returns_counts(self, memory):
|
||||
"""get_stats should return correct counts."""
|
||||
memory.remember_sync("Memory 1")
|
||||
memory.remember_sync("Memory 2")
|
||||
memory.store_fact_sync("cat", "Fact 1")
|
||||
stats = memory.get_stats()
|
||||
assert stats["memory_count"] == 2
|
||||
assert stats["fact_count"] == 1
|
||||
assert stats["backend"] == "local_sqlite"
|
||||
|
||||
def test_stats_empty_db(self, memory):
|
||||
"""get_stats should work on empty database."""
|
||||
stats = memory.get_stats()
|
||||
assert stats["memory_count"] == 0
|
||||
assert stats["fact_count"] == 0
|
||||
|
||||
|
||||
# ── Identity Integration ─────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestIdentityIntegration:
|
||||
"""Identity system removed — stubs return empty strings."""
|
||||
|
||||
def test_get_identity_returns_empty(self, memory):
|
||||
assert memory.get_identity() == ""
|
||||
|
||||
def test_get_identity_for_prompt_returns_empty(self, memory):
|
||||
assert memory.get_identity_for_prompt() == ""
|
||||
|
||||
|
||||
# ── Singleton ─────────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestSingleton:
|
||||
"""Test the module-level get_memory() singleton."""
|
||||
|
||||
def test_get_memory_returns_instance(self):
|
||||
"""get_memory() should return a UnifiedMemory instance."""
|
||||
import brain.memory as mem_module
|
||||
|
||||
# Reset singleton for test isolation
|
||||
mem_module._default_memory = None
|
||||
m = get_memory()
|
||||
assert isinstance(m, UnifiedMemory)
|
||||
|
||||
def test_get_memory_returns_same_instance(self):
|
||||
"""get_memory() should return the same instance on repeated calls."""
|
||||
import brain.memory as mem_module
|
||||
|
||||
mem_module._default_memory = None
|
||||
m1 = get_memory()
|
||||
m2 = get_memory()
|
||||
assert m1 is m2
|
||||
|
||||
|
||||
# ── Async Interface ───────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
class TestAsyncInterface:
|
||||
"""Test async wrappers (which delegate to sync for local SQLite)."""
|
||||
|
||||
async def test_async_remember(self, memory):
|
||||
"""Async remember should work."""
|
||||
result = await memory.remember("Async memory test")
|
||||
assert result["status"] == "stored"
|
||||
|
||||
async def test_async_recall(self, memory):
|
||||
"""Async recall should work."""
|
||||
await memory.remember("Async recall target")
|
||||
results = await memory.recall("Async recall")
|
||||
assert len(results) >= 1
|
||||
|
||||
async def test_async_store_fact(self, memory):
|
||||
"""Async store_fact should work."""
|
||||
result = await memory.store_fact("test", "Async fact")
|
||||
assert result["status"] == "stored"
|
||||
|
||||
async def test_async_get_facts(self, memory):
|
||||
"""Async get_facts should work."""
|
||||
await memory.store_fact("test", "Async fact retrieval")
|
||||
facts = await memory.get_facts(category="test")
|
||||
assert len(facts) >= 1
|
||||
|
||||
async def test_async_get_recent(self, memory):
|
||||
"""Async get_recent should work."""
|
||||
await memory.remember("Recent async memory")
|
||||
results = await memory.get_recent(hours=1)
|
||||
assert len(results) >= 1
|
||||
|
||||
async def test_async_get_context(self, memory):
|
||||
"""Async get_context should return formatted context."""
|
||||
await memory.remember("Context test memory")
|
||||
context = await memory.get_context("test")
|
||||
assert isinstance(context, str)
|
||||
assert len(context) > 0
|
||||
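The keyword-fallback behaviour these recall tests pin down can be illustrated with a standalone sketch. This is plain sqlite3 substring matching — not the project's `UnifiedMemory`, whose real implementation also has a semantic-search path; `remember` and `recall` here are illustrative stand-ins:

```python
import sqlite3

# Illustrative only: a minimal substring-match "recall" over SQLite,
# mimicking what the keyword fallback in the tests above asserts —
# matching queries return hits, unrelated queries return nothing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, source TEXT)")


def remember(content, source="user"):
    conn.execute("INSERT INTO memories (content, source) VALUES (?, ?)", (content, source))


def recall(query, limit=5, sources=None):
    sql = "SELECT content, source FROM memories WHERE content LIKE ?"
    params = [f"%{query}%"]
    if sources:  # optional source filter, as in recall_sync(..., sources=[...])
        sql += " AND source IN (%s)" % ",".join("?" * len(sources))
        params += sources
    sql += " LIMIT ?"
    params.append(limit)
    return [{"content": c, "source": s} for c, s in conn.execute(sql, params)]


remember("Bitcoin price is rising")
remember("Weather is sunny today")
hits = recall("Bitcoin")
```

A substring match either hits or it doesn't, which is why the keyword path returns an empty list for "underwater basket weaving" while the semantic path may still return low-scored results.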
@@ -20,6 +20,8 @@ except ImportError:
for _mod in [
    "airllm",
    "mcp",
    "mcp.client",
    "mcp.client.stdio",
    "mcp.registry",
    "telegram",
    "telegram.ext",
@@ -1,145 +0,0 @@
"""Tests for the Paperclip API routes."""

from unittest.mock import AsyncMock, MagicMock, patch


# ── GET /api/paperclip/status ────────────────────────────────────────────────


def test_status_disabled(client):
    """When paperclip_enabled is False, status returns disabled."""
    response = client.get("/api/paperclip/status")
    assert response.status_code == 200
    data = response.json()
    assert data["enabled"] is False


def test_status_enabled(client):
    mock_status = MagicMock()
    mock_status.model_dump.return_value = {
        "enabled": True,
        "connected": True,
        "paperclip_url": "http://vps:3100",
        "company_id": "comp-1",
        "agent_count": 3,
        "issue_count": 5,
        "error": None,
    }
    mock_bridge = MagicMock()
    mock_bridge.get_status = AsyncMock(return_value=mock_status)
    with patch("dashboard.routes.paperclip.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        with patch.dict("sys.modules", {}):
            with patch("integrations.paperclip.bridge.bridge", mock_bridge):
                response = client.get("/api/paperclip/status")
                assert response.status_code == 200
                assert response.json()["connected"] is True


# ── GET /api/paperclip/issues ────────────────────────────────────────────────


def test_list_issues_disabled(client):
    response = client.get("/api/paperclip/issues")
    assert response.status_code == 200
    assert response.json()["enabled"] is False


# ── POST /api/paperclip/issues ───────────────────────────────────────────────


def test_create_issue_disabled(client):
    response = client.post(
        "/api/paperclip/issues",
        json={"title": "Test"},
    )
    assert response.status_code == 200
    assert response.json()["enabled"] is False


def test_create_issue_missing_title(client):
    with patch("dashboard.routes.paperclip.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        response = client.post(
            "/api/paperclip/issues",
            json={"description": "No title"},
        )
        assert response.status_code == 400
        assert "title" in response.json()["error"]


# ── POST /api/paperclip/issues/{id}/delegate ─────────────────────────────────


def test_delegate_issue_missing_agent_id(client):
    with patch("dashboard.routes.paperclip.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        response = client.post(
            "/api/paperclip/issues/i1/delegate",
            json={"message": "Do this"},
        )
        assert response.status_code == 400
        assert "agent_id" in response.json()["error"]


# ── POST /api/paperclip/issues/{id}/comment ──────────────────────────────────


def test_add_comment_missing_content(client):
    with patch("dashboard.routes.paperclip.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        response = client.post(
            "/api/paperclip/issues/i1/comment",
            json={},
        )
        assert response.status_code == 400
        assert "content" in response.json()["error"]


# ── GET /api/paperclip/agents ────────────────────────────────────────────────


def test_list_agents_disabled(client):
    response = client.get("/api/paperclip/agents")
    assert response.status_code == 200
    assert response.json()["enabled"] is False


# ── GET /api/paperclip/goals ─────────────────────────────────────────────────


def test_list_goals_disabled(client):
    response = client.get("/api/paperclip/goals")
    assert response.status_code == 200
    assert response.json()["enabled"] is False


# ── POST /api/paperclip/goals ────────────────────────────────────────────────


def test_create_goal_missing_title(client):
    with patch("dashboard.routes.paperclip.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        response = client.post(
            "/api/paperclip/goals",
            json={"description": "No title"},
        )
        assert response.status_code == 400
        assert "title" in response.json()["error"]


# ── GET /api/paperclip/approvals ─────────────────────────────────────────────


def test_list_approvals_disabled(client):
    response = client.get("/api/paperclip/approvals")
    assert response.status_code == 200
    assert response.json()["enabled"] is False


# ── GET /api/paperclip/runs ──────────────────────────────────────────────────


def test_list_runs_disabled(client):
    response = client.get("/api/paperclip/runs")
    assert response.status_code == 200
    assert response.json()["enabled"] is False
@@ -94,23 +94,7 @@ def test_creative_page_returns_200(client):
# ---------------------------------------------------------------------------


def test_swarm_live_page_returns_200(client):
    """GET /swarm/live renders the live dashboard page."""
    response = client.get("/swarm/live")
    assert response.status_code == 200


def test_swarm_live_websocket_sends_initial_state(client):
    """WebSocket at /swarm/live sends initial_state on connect."""

    with client.websocket_connect("/swarm/live") as ws:
        data = ws.receive_json()
        # First message should be initial_state with swarm data
        assert data.get("type") == "initial_state", f"Unexpected WS message: {data}"
        payload = data.get("data", {})
        assert "agents" in payload
        assert "tasks" in payload
        assert "auctions" in payload
# Swarm live page tests removed — swarm module deleted.


# ---------------------------------------------------------------------------
@@ -262,9 +246,6 @@ def test_all_dashboard_pages_return_200(client):
        "/tasks",
        "/briefing",
        "/thinking",
        "/swarm/mission-control",
        "/swarm/live",
        "/swarm/events",
        "/bugs",
        "/tools",
        "/lightning/ledger",
@@ -1,66 +0,0 @@
"""Tests for swarm.event_log — WAL mode, basic operations, and EventBus bridge."""

import pytest

from swarm.event_log import EventType, _ensure_db, log_event


@pytest.fixture(autouse=True)
def tmp_event_db(tmp_path, monkeypatch):
    """Redirect event_log writes to a temp directory."""
    db_path = tmp_path / "events.db"
    monkeypatch.setattr("swarm.event_log.DB_PATH", db_path)
    yield db_path


class TestEventLogWAL:
    """Verify WAL mode is enabled for the event log database."""

    def test_event_db_uses_wal(self):
        conn = _ensure_db()
        try:
            mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
            assert mode == "wal", f"Expected WAL mode, got {mode}"
        finally:
            conn.close()

    def test_event_db_busy_timeout(self):
        conn = _ensure_db()
        try:
            timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
            assert timeout == 5000
        finally:
            conn.close()


class TestEventLogBasics:
    """Basic event logging operations."""

    def test_log_event_returns_entry(self):
        entry = log_event(EventType.SYSTEM_INFO, source="test", data={"msg": "hello"})
        assert entry.id
        assert entry.event_type == EventType.SYSTEM_INFO
        assert entry.source == "test"

    def test_log_event_persists(self):
        log_event(EventType.TASK_CREATED, source="test", task_id="t1")
        from swarm.event_log import get_task_events

        events = get_task_events("t1")
        assert len(events) == 1
        assert events[0].event_type == EventType.TASK_CREATED

    def test_log_event_with_agent_id(self):
        entry = log_event(
            EventType.AGENT_JOINED,
            source="test",
            agent_id="forge",
            data={"persona_id": "forge"},
        )
        assert entry.agent_id == "forge"

    def test_log_event_data_roundtrip(self):
        data = {"bid_sats": 42, "reason": "testing"}
        entry = log_event(EventType.BID_SUBMITTED, source="test", data=data)
        assert entry.data["bid_sats"] == 42
        assert entry.data["reason"] == "testing"
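The WAL checks above boil down to two PRAGMAs set at connection time. A minimal standalone sketch of that setup (assumed to be roughly what `_ensure_db` did; the path handling here is illustrative):

```python
import os
import sqlite3
import tempfile

# WAL journaling needs a real database file — ":memory:" databases
# report journal_mode "memory" instead.
path = os.path.join(tempfile.mkdtemp(), "events.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")   # readers no longer block the writer
conn.execute("PRAGMA busy_timeout=5000")  # wait up to 5 s on a locked database

mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
```

WAL plus a busy timeout is the usual recipe for SQLite databases shared by several processes, which is presumably why the deleted tests asserted both.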
@@ -1,205 +0,0 @@
"""Tests for the Paperclip bridge (CEO orchestration logic)."""

from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from integrations.paperclip.bridge import PaperclipBridge
from integrations.paperclip.client import PaperclipClient
from integrations.paperclip.models import PaperclipAgent, PaperclipGoal, PaperclipIssue


@pytest.fixture
def mock_client():
    client = MagicMock(spec=PaperclipClient)
    # Make all methods async
    client.healthy = AsyncMock(return_value=True)
    client.list_agents = AsyncMock(return_value=[])
    client.list_issues = AsyncMock(return_value=[])
    client.list_goals = AsyncMock(return_value=[])
    client.list_approvals = AsyncMock(return_value=[])
    client.list_heartbeat_runs = AsyncMock(return_value=[])
    client.get_issue = AsyncMock(return_value=None)
    client.get_org = AsyncMock(return_value=None)
    client.create_issue = AsyncMock(return_value=None)
    client.update_issue = AsyncMock(return_value=None)
    client.add_comment = AsyncMock(return_value=None)
    client.wake_agent = AsyncMock(return_value=None)
    client.create_goal = AsyncMock(return_value=None)
    client.approve = AsyncMock(return_value=None)
    client.reject = AsyncMock(return_value=None)
    client.cancel_run = AsyncMock(return_value=None)
    client.list_comments = AsyncMock(return_value=[])
    return client


@pytest.fixture
def bridge(mock_client):
    return PaperclipBridge(client=mock_client)


# ── status ───────────────────────────────────────────────────────────────────


async def test_status_when_disabled(bridge):
    with patch("integrations.paperclip.bridge.settings") as mock_settings:
        mock_settings.paperclip_enabled = False
        mock_settings.paperclip_url = "http://localhost:3100"
        status = await bridge.get_status()
        assert status.enabled is False


async def test_status_when_connected(bridge, mock_client):
    mock_client.healthy.return_value = True
    mock_client.list_agents.return_value = [
        PaperclipAgent(id="a1", name="Codex"),
    ]
    mock_client.list_issues.return_value = [
        PaperclipIssue(id="i1", title="Bug"),
        PaperclipIssue(id="i2", title="Feature"),
    ]

    with patch("integrations.paperclip.bridge.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        mock_settings.paperclip_url = "http://vps:3100"
        mock_settings.paperclip_company_id = "comp-1"
        status = await bridge.get_status()

    assert status.enabled is True
    assert status.connected is True
    assert status.agent_count == 1
    assert status.issue_count == 2


async def test_status_when_disconnected(bridge, mock_client):
    mock_client.healthy.return_value = False

    with patch("integrations.paperclip.bridge.settings") as mock_settings:
        mock_settings.paperclip_enabled = True
        mock_settings.paperclip_url = "http://vps:3100"
        mock_settings.paperclip_company_id = "comp-1"
        status = await bridge.get_status()

    assert status.enabled is True
    assert status.connected is False
    assert "Cannot reach" in status.error


# ── create and assign ────────────────────────────────────────────────────────


async def test_create_and_assign_with_wake(bridge, mock_client):
    issue = PaperclipIssue(id="i1", title="Deploy v2")
    mock_client.create_issue.return_value = issue
    mock_client.wake_agent.return_value = {"status": "queued"}

    result = await bridge.create_and_assign(
        title="Deploy v2",
        assignee_id="agent-codex",
        wake=True,
    )

    assert result is not None
    assert result.id == "i1"
    mock_client.wake_agent.assert_awaited_once_with("agent-codex", issue_id="i1")


async def test_create_and_assign_no_wake(bridge, mock_client):
    issue = PaperclipIssue(id="i2", title="Research task")
    mock_client.create_issue.return_value = issue

    result = await bridge.create_and_assign(
        title="Research task",
        assignee_id="agent-research",
        wake=False,
    )

    assert result is not None
    mock_client.wake_agent.assert_not_awaited()


async def test_create_and_assign_failure(bridge, mock_client):
    mock_client.create_issue.return_value = None

    result = await bridge.create_and_assign(title="Will fail")
    assert result is None


# ── delegate ─────────────────────────────────────────────────────────────────


async def test_delegate_issue(bridge, mock_client):
    mock_client.update_issue.return_value = PaperclipIssue(id="i1", title="Task")
    mock_client.wake_agent.return_value = {"status": "queued"}

    ok = await bridge.delegate_issue("i1", "agent-codex", message="Handle this")
    assert ok is True
    mock_client.add_comment.assert_awaited_once()
    mock_client.wake_agent.assert_awaited_once()


async def test_delegate_issue_update_fails(bridge, mock_client):
    mock_client.update_issue.return_value = None

    ok = await bridge.delegate_issue("i1", "agent-codex")
    assert ok is False


# ── close issue ──────────────────────────────────────────────────────────────


async def test_close_issue(bridge, mock_client):
    mock_client.update_issue.return_value = PaperclipIssue(id="i1", title="Done")

    ok = await bridge.close_issue("i1", comment="Shipped!")
    assert ok is True
    mock_client.add_comment.assert_awaited_once()


# ── goals ────────────────────────────────────────────────────────────────────


async def test_set_goal(bridge, mock_client):
    mock_client.create_goal.return_value = PaperclipGoal(id="g1", title="Ship MVP")

    goal = await bridge.set_goal("Ship MVP")
    assert goal is not None
    assert goal.title == "Ship MVP"


# ── approvals ────────────────────────────────────────────────────────────────


async def test_approve(bridge, mock_client):
    mock_client.approve.return_value = {"status": "approved"}
    ok = await bridge.approve("ap1")
    assert ok is True


async def test_reject(bridge, mock_client):
    mock_client.reject.return_value = {"status": "rejected"}
    ok = await bridge.reject("ap1", comment="Needs work")
    assert ok is True


async def test_approve_failure(bridge, mock_client):
    mock_client.approve.return_value = None
    ok = await bridge.approve("ap1")
    assert ok is False


# ── runs ─────────────────────────────────────────────────────────────────────


async def test_active_runs(bridge, mock_client):
    mock_client.list_heartbeat_runs.return_value = [
        {"id": "r1", "status": "running"},
    ]
    runs = await bridge.active_runs()
    assert len(runs) == 1


async def test_cancel_run(bridge, mock_client):
    mock_client.cancel_run.return_value = {"status": "cancelled"}
    ok = await bridge.cancel_run("r1")
    assert ok is True
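The `mock_client` fixture in the deleted bridge tests relied on a common pattern: a `MagicMock` carries the spec, while individual async methods are replaced with `AsyncMock` so they can be awaited and later asserted. A self-contained sketch of the pattern (the `Client` class here is a hypothetical stand-in, not `PaperclipClient`):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


class Client:
    """Hypothetical stand-in for an async API client's interface."""

    async def wake_agent(self, agent_id, issue_id=None): ...


# spec=Client restricts the mock to the real interface; the AsyncMock
# override makes the method awaitable with a canned return value.
client = MagicMock(spec=Client)
client.wake_agent = AsyncMock(return_value={"status": "queued"})


async def delegate(agent_id, issue_id):
    # code under test would await the client exactly like this
    return await client.wake_agent(agent_id, issue_id=issue_id)


result = asyncio.run(delegate("agent-codex", "i1"))
client.wake_agent.assert_awaited_once_with("agent-codex", issue_id="i1")
```

`assert_awaited_once_with` (rather than `assert_called_once_with`) is what proves the coroutine was actually awaited, not merely created.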
@@ -1,213 +0,0 @@
"""Tests for the Paperclip API client.

Uses httpx.MockTransport so every test exercises the real HTTP path
(_get/_post/_delete, status-code handling, JSON parsing, error paths)
instead of patching the transport methods away.
"""

import json
from unittest.mock import patch

import httpx

from integrations.paperclip.client import PaperclipClient
from integrations.paperclip.models import CreateIssueRequest

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _mock_transport(routes: dict[str, tuple[int, dict | list | None]]):
    """Build an httpx.MockTransport from a {method+path: (status, body)} map.

    Example:
        _mock_transport({
            "GET /api/health": (200, {"status": "ok"}),
            "DELETE /api/issues/i1": (204, None),
        })
    """

    def handler(request: httpx.Request) -> httpx.Response:
        key = f"{request.method} {request.url.path}"
        if key in routes:
            status, body = routes[key]
            content = json.dumps(body).encode() if body is not None else b""
            return httpx.Response(
                status, content=content, headers={"content-type": "application/json"}
            )
        return httpx.Response(404, json={"error": "not found"})

    return httpx.MockTransport(handler)


def _client_with(routes: dict[str, tuple[int, dict | list | None]]) -> PaperclipClient:
    """Create a PaperclipClient whose internal httpx.AsyncClient uses a mock transport."""
    client = PaperclipClient(base_url="http://fake:3100", api_key="test-key")
    client._client = httpx.AsyncClient(
        transport=_mock_transport(routes),
        base_url="http://fake:3100",
        headers={"Accept": "application/json", "Authorization": "Bearer test-key"},
    )
    return client


# ---------------------------------------------------------------------------
# health
# ---------------------------------------------------------------------------


async def test_healthy_returns_true_on_200():
    client = _client_with({"GET /api/health": (200, {"status": "ok"})})
    assert await client.healthy() is True


async def test_healthy_returns_false_on_500():
    client = _client_with({"GET /api/health": (500, {"error": "down"})})
    assert await client.healthy() is False


async def test_healthy_returns_false_on_404():
    client = _client_with({})  # no routes → 404
    assert await client.healthy() is False


# ---------------------------------------------------------------------------
# agents
# ---------------------------------------------------------------------------


async def test_list_agents_parses_response():
    raw = [{"id": "a1", "name": "Codex", "role": "engineer", "status": "active"}]
    client = _client_with({"GET /api/companies/comp-1/agents": (200, raw)})
    agents = await client.list_agents(company_id="comp-1")
    assert len(agents) == 1
    assert agents[0].name == "Codex"
    assert agents[0].id == "a1"


async def test_list_agents_empty_on_server_error():
    client = _client_with({"GET /api/companies/comp-1/agents": (503, None)})
    agents = await client.list_agents(company_id="comp-1")
    assert agents == []


async def test_list_agents_graceful_on_404():
    client = _client_with({})
    agents = await client.list_agents(company_id="comp-1")
    assert agents == []


# ---------------------------------------------------------------------------
# issues
# ---------------------------------------------------------------------------


async def test_list_issues():
    raw = [{"id": "i1", "title": "Fix bug"}]
    client = _client_with({"GET /api/companies/comp-1/issues": (200, raw)})
    issues = await client.list_issues(company_id="comp-1")
    assert len(issues) == 1
    assert issues[0].title == "Fix bug"


async def test_get_issue():
    raw = {"id": "i1", "title": "Fix bug", "description": "It's broken"}
    client = _client_with({"GET /api/issues/i1": (200, raw)})
    issue = await client.get_issue("i1")
    assert issue is not None
    assert issue.id == "i1"


async def test_get_issue_not_found():
    client = _client_with({"GET /api/issues/nonexistent": (404, None)})
    issue = await client.get_issue("nonexistent")
    assert issue is None


async def test_create_issue():
    raw = {"id": "i2", "title": "New feature"}
    client = _client_with({"POST /api/companies/comp-1/issues": (201, raw)})
    req = CreateIssueRequest(title="New feature")
    issue = await client.create_issue(req, company_id="comp-1")
    assert issue is not None
    assert issue.id == "i2"


async def test_create_issue_no_company_id():
    """Missing company_id returns None without making any HTTP call."""
    client = _client_with({})
    with patch("integrations.paperclip.client.settings") as mock_settings:
        mock_settings.paperclip_company_id = ""
        issue = await client.create_issue(CreateIssueRequest(title="Test"))
        assert issue is None


async def test_delete_issue_returns_true_on_success():
    client = _client_with({"DELETE /api/issues/i1": (204, None)})
    result = await client.delete_issue("i1")
    assert result is True


async def test_delete_issue_returns_false_on_error():
    client = _client_with({"DELETE /api/issues/i1": (500, None)})
    result = await client.delete_issue("i1")
    assert result is False


# ---------------------------------------------------------------------------
# comments
# ---------------------------------------------------------------------------


async def test_add_comment():
    raw = {"id": "c1", "issue_id": "i1", "content": "Done"}
    client = _client_with({"POST /api/issues/i1/comments": (201, raw)})
    comment = await client.add_comment("i1", "Done")
    assert comment is not None
    assert comment.content == "Done"


async def test_list_comments():
    raw = [{"id": "c1", "issue_id": "i1", "content": "LGTM"}]
    client = _client_with({"GET /api/issues/i1/comments": (200, raw)})
    comments = await client.list_comments("i1")
    assert len(comments) == 1


# ---------------------------------------------------------------------------
# goals
# ---------------------------------------------------------------------------


async def test_list_goals():
    raw = [{"id": "g1", "title": "Ship MVP"}]
    client = _client_with({"GET /api/companies/comp-1/goals": (200, raw)})
    goals = await client.list_goals(company_id="comp-1")
    assert len(goals) == 1
    assert goals[0].title == "Ship MVP"


async def test_create_goal():
    raw = {"id": "g2", "title": "Scale to 1000 users"}
    client = _client_with({"POST /api/companies/comp-1/goals": (201, raw)})
    goal = await client.create_goal("Scale to 1000 users", company_id="comp-1")
    assert goal is not None


# ---------------------------------------------------------------------------
# heartbeat runs
# ---------------------------------------------------------------------------


async def test_list_heartbeat_runs():
    raw = [{"id": "r1", "agent_id": "a1", "status": "running"}]
    client = _client_with({"GET /api/companies/comp-1/heartbeat-runs": (200, raw)})
    runs = await client.list_heartbeat_runs(company_id="comp-1")
    assert len(runs) == 1


async def test_list_heartbeat_runs_server_error():
    client = _client_with({"GET /api/companies/comp-1/heartbeat-runs": (500, None)})
    runs = await client.list_heartbeat_runs(company_id="comp-1")
    assert runs == []
@@ -1,936 +0,0 @@
"""Integration tests for the Paperclip task runner — full green-path workflow.

Tests the complete autonomous cycle with a StubOrchestrator that exercises
the real pipe (TaskRunner → orchestrator.execute_task → bridge → client)
while stubbing only the LLM intelligence layer.

Green path:
1. Timmy grabs first task in queue
2. Orchestrator.execute_task processes it (stub returns input-aware response)
3. Timmy posts completion comment and marks issue done
4. Timmy creates a recursive follow-up task for himself

The stub is deliberately input-aware — it echoes back task metadata so
assertions can prove data actually flowed through the pipe, not just that
methods were called.

Live-LLM tests (``@pytest.mark.ollama``) are at the bottom; they hit a real
tiny model via Ollama and are skipped when Ollama is not running.
Run them with: ``tox -e ollama`` or ``pytest -m ollama``
"""

from __future__ import annotations

from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from integrations.paperclip.bridge import PaperclipBridge
from integrations.paperclip.client import PaperclipClient
from integrations.paperclip.models import PaperclipIssue
from integrations.paperclip.task_runner import TaskRunner

# ── Constants ─────────────────────────────────────────────────────────────────

TIMMY_AGENT_ID = "agent-timmy"
COMPANY_ID = "comp-1"


# ── StubOrchestrator: exercises the pipe, stubs the intelligence ──────────────


class StubOrchestrator:
    """Deterministic orchestrator that proves data flows through the pipe.

    Returns responses that reference input metadata — so tests can assert
    the pipe actually connected (task_id, title, priority all appear in output).
    Tracks every call for post-hoc inspection.
    """

    def __init__(self) -> None:
        self.calls: list[dict] = []

    async def execute_task(self, task_id: str, description: str, context: dict) -> dict:
        call_record = {
            "task_id": task_id,
            "description": description,
            "context": dict(context),
        }
        self.calls.append(call_record)

        title = context.get("title", description[:50])
        priority = context.get("priority", "normal")

        return {
            "task_id": task_id,
            "agent": "orchestrator",
            "result": (
                f"[Orchestrator] Processed '{title}'. "
                f"Task {task_id} handled with priority {priority}. "
                "Self-reflection: my task automation loop is functioning. "
                "I should create a follow-up to review this pattern."
            ),
            "status": "completed",
        }


# ── Fixtures ──────────────────────────────────────────────────────────────────


@pytest.fixture
def stub_orchestrator():
    return StubOrchestrator()


@pytest.fixture
def mock_client():
    """Fully stubbed PaperclipClient with async methods."""
    client = MagicMock(spec=PaperclipClient)
    client.healthy = AsyncMock(return_value=True)
    client.list_issues = AsyncMock(return_value=[])
    client.get_issue = AsyncMock(return_value=None)
    client.create_issue = AsyncMock(return_value=None)
    client.update_issue = AsyncMock(return_value=None)
    client.delete_issue = AsyncMock(return_value=True)
    client.add_comment = AsyncMock(return_value=None)
    client.list_comments = AsyncMock(return_value=[])
    client.checkout_issue = AsyncMock(return_value={"ok": True})
    client.release_issue = AsyncMock(return_value={"ok": True})
    client.wake_agent = AsyncMock(return_value=None)
    client.list_agents = AsyncMock(return_value=[])
    client.list_goals = AsyncMock(return_value=[])
    client.create_goal = AsyncMock(return_value=None)
    client.list_approvals = AsyncMock(return_value=[])
    client.list_heartbeat_runs = AsyncMock(return_value=[])
    client.cancel_run = AsyncMock(return_value=None)
    client.approve = AsyncMock(return_value=None)
    client.reject = AsyncMock(return_value=None)
    return client


@pytest.fixture
def bridge(mock_client):
    return PaperclipBridge(client=mock_client)


@pytest.fixture
def settings_patch():
    """Patch settings for all task runner tests."""
    with (
        patch("integrations.paperclip.task_runner.settings") as ts,
        patch("integrations.paperclip.bridge.settings") as bs,
    ):
        for s in (ts, bs):
            s.paperclip_enabled = True
            s.paperclip_agent_id = TIMMY_AGENT_ID
            s.paperclip_company_id = COMPANY_ID
            s.paperclip_url = "http://fake:3100"
            s.paperclip_poll_interval = 0
        yield ts


# ── Helpers ───────────────────────────────────────────────────────────────────


def _make_issue(
    id: str = "issue-1",
    title: str = "Muse about task automation",
    description: str = "Reflect on how you handle tasks and write a recursive self-improvement task.",
    status: str = "open",
    assignee_id: str = TIMMY_AGENT_ID,
    priority: str = "normal",
    labels: list[str] | None = None,
) -> PaperclipIssue:
    return PaperclipIssue(
        id=id,
        title=title,
        description=description,
        status=status,
        assignee_id=assignee_id,
        priority=priority,
        labels=labels or [],
    )


def _make_done(id: str = "issue-1", title: str = "Done") -> PaperclipIssue:
    return PaperclipIssue(id=id, title=title, status="done")


def _make_follow_up(id: str = "issue-2") -> PaperclipIssue:
    return PaperclipIssue(
        id=id,
        title="Follow-up: Muse about task automation",
        description="Automated follow-up from completed task",
        status="open",
        assignee_id=TIMMY_AGENT_ID,
        priority="normal",
    )


# ═══════════════════════════════════════════════════════════════════════════════
# PIPE WIRING: verify orchestrator is actually connected
# ═══════════════════════════════════════════════════════════════════════════════


class TestOrchestratorWiring:
    """Verify the orchestrator parameter actually connects to the pipe."""

    async def test_orchestrator_execute_task_is_called(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """When orchestrator is wired, process_task calls execute_task."""
        issue = _make_issue()

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        await runner.process_task(issue)

        assert len(stub_orchestrator.calls) == 1
        call = stub_orchestrator.calls[0]
        assert call["task_id"] == "issue-1"
        assert call["context"]["title"] == "Muse about task automation"

    async def test_orchestrator_receives_full_context(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """Context dict passed to execute_task includes all issue metadata."""
        issue = _make_issue(
            id="ctx-test",
            title="Context verification",
            priority="high",
            labels=["automation", "meta"],
        )

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        await runner.process_task(issue)

        ctx = stub_orchestrator.calls[0]["context"]
        assert ctx["issue_id"] == "ctx-test"
        assert ctx["title"] == "Context verification"
        assert ctx["priority"] == "high"
        assert ctx["labels"] == ["automation", "meta"]

    async def test_orchestrator_dict_result_unwrapped(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """When execute_task returns a dict, the 'result' key is extracted."""
        issue = _make_issue()

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        result = await runner.process_task(issue)

        # StubOrchestrator returns dict with "result" key
        assert "[Orchestrator]" in result
        assert "issue-1" in result

    async def test_orchestrator_string_result_passthrough(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        """When execute_task returns a plain string, it passes through."""

        class StringOrchestrator:
            async def execute_task(self, task_id, description, context):
                return f"Plain string result for {task_id}"

        runner = TaskRunner(bridge=bridge, orchestrator=StringOrchestrator())
        result = await runner.process_task(_make_issue())

        assert result == "Plain string result for issue-1"

    async def test_process_fn_overrides_orchestrator(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """Explicit process_fn takes priority over orchestrator."""

        async def override(task_id, desc, ctx):
            return "override wins"

        runner = TaskRunner(
            bridge=bridge,
            orchestrator=stub_orchestrator,
            process_fn=override,
        )
        result = await runner.process_task(_make_issue())

        assert result == "override wins"
        assert len(stub_orchestrator.calls) == 0  # orchestrator NOT called


# ═══════════════════════════════════════════════════════════════════════════════
# STEP 1: Timmy grabs the first task in queue
# ═══════════════════════════════════════════════════════════════════════════════


class TestGrabNextTask:
    """Verify Timmy picks the first open issue assigned to him."""

    async def test_grabs_first_assigned_issue(self, mock_client, bridge, settings_patch):
        issue = _make_issue()
        mock_client.list_issues.return_value = [issue]

        runner = TaskRunner(bridge=bridge)
        grabbed = await runner.grab_next_task()

        assert grabbed is not None
        assert grabbed.id == "issue-1"
        assert grabbed.assignee_id == TIMMY_AGENT_ID
        mock_client.list_issues.assert_awaited_once_with(status="open")

    async def test_skips_issues_not_assigned_to_timmy(self, mock_client, bridge, settings_patch):
        other = _make_issue(id="other-1", assignee_id="agent-codex")
        mine = _make_issue(id="timmy-1")
        mock_client.list_issues.return_value = [other, mine]

        runner = TaskRunner(bridge=bridge)
        grabbed = await runner.grab_next_task()

        assert grabbed.id == "timmy-1"

    async def test_returns_none_when_queue_empty(self, mock_client, bridge, settings_patch):
        mock_client.list_issues.return_value = []
        runner = TaskRunner(bridge=bridge)
        assert await runner.grab_next_task() is None

    async def test_returns_none_when_no_agent_id(self, mock_client, bridge, settings_patch):
        settings_patch.paperclip_agent_id = ""
        runner = TaskRunner(bridge=bridge)
        assert await runner.grab_next_task() is None
        mock_client.list_issues.assert_not_awaited()

    async def test_grabs_first_of_multiple(self, mock_client, bridge, settings_patch):
        issues = [_make_issue(id=f"t-{i}", title=f"Task {i}") for i in range(3)]
        mock_client.list_issues.return_value = issues

        runner = TaskRunner(bridge=bridge)
        assert (await runner.grab_next_task()).id == "t-0"


# ═══════════════════════════════════════════════════════════════════════════════
# STEP 2: Timmy processes the task through the orchestrator
# ═══════════════════════════════════════════════════════════════════════════════


class TestProcessTask:
    """Verify checkout + orchestrator invocation + result flow."""

    async def test_checkout_before_orchestrator(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """Issue must be checked out before orchestrator runs."""
        issue = _make_issue()
        checkout_happened = {"before_execute": False}

        original_execute = stub_orchestrator.execute_task

        async def tracking_execute(task_id, desc, ctx):
            checkout_happened["before_execute"] = mock_client.checkout_issue.await_count > 0
            return await original_execute(task_id, desc, ctx)

        stub_orchestrator.execute_task = tracking_execute

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        await runner.process_task(issue)

        assert checkout_happened["before_execute"], "checkout must happen before execute_task"

    async def test_orchestrator_output_flows_to_result(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """The string returned by process_task comes from the orchestrator."""
        issue = _make_issue(id="flow-1", title="Flow verification", priority="high")

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        result = await runner.process_task(issue)

        # Verify orchestrator's output arrived — it references the input
        assert "Flow verification" in result
        assert "flow-1" in result
        assert "high" in result

    async def test_default_fallback_without_orchestrator(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        """Without orchestrator or process_fn, a default message is returned."""
        issue = _make_issue(title="Fallback test")
        runner = TaskRunner(bridge=bridge)  # no orchestrator
        result = await runner.process_task(issue)
        assert "Fallback test" in result


# ═══════════════════════════════════════════════════════════════════════════════
# STEP 3: Timmy completes the task — comment + close
# ═══════════════════════════════════════════════════════════════════════════════


class TestCompleteTask:
    """Verify orchestrator output flows into the completion comment."""

    async def test_orchestrator_output_in_comment(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """The comment posted to Paperclip contains the orchestrator's output."""
        issue = _make_issue(id="cmt-1", title="Comment pipe test")
        mock_client.update_issue.return_value = _make_done("cmt-1")

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        # Process to get orchestrator output
        result = await runner.process_task(issue)
        # Complete to post it as comment
        await runner.complete_task(issue, result)

        comment_content = mock_client.add_comment.call_args[0][1]
        assert "[Timmy]" in comment_content
        assert "[Orchestrator]" in comment_content
        assert "Comment pipe test" in comment_content

    async def test_marks_issue_done(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        issue = _make_issue()
        mock_client.update_issue.return_value = _make_done()

        runner = TaskRunner(bridge=bridge)
        ok = await runner.complete_task(issue, "any result")

        assert ok is True
        update_req = mock_client.update_issue.call_args[0][1]
        assert update_req.status == "done"

    async def test_returns_false_on_close_failure(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        mock_client.update_issue.return_value = None
        runner = TaskRunner(bridge=bridge)
        assert await runner.complete_task(_make_issue(), "result") is False


# ═══════════════════════════════════════════════════════════════════════════════
# STEP 4: Follow-up creation with orchestrator output embedded
# ═══════════════════════════════════════════════════════════════════════════════


class TestCreateFollowUp:
    """Verify orchestrator output flows into the follow-up description."""

    async def test_follow_up_contains_orchestrator_output(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """The follow-up description includes the orchestrator's result text."""
        issue = _make_issue(id="fu-1", title="Follow-up pipe test")
        mock_client.create_issue.return_value = _make_follow_up()

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        result = await runner.process_task(issue)
        await runner.create_follow_up(issue, result)

        create_req = mock_client.create_issue.call_args[0][0]
        # Orchestrator output should be embedded in description
        assert "[Orchestrator]" in create_req.description
        assert "fu-1" in create_req.description

    async def test_follow_up_assigned_to_self(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        mock_client.create_issue.return_value = _make_follow_up()
        runner = TaskRunner(bridge=bridge)
        await runner.create_follow_up(_make_issue(), "result")

        req = mock_client.create_issue.call_args[0][0]
        assert req.assignee_id == TIMMY_AGENT_ID

    async def test_follow_up_preserves_priority(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        mock_client.create_issue.return_value = _make_follow_up()
        runner = TaskRunner(bridge=bridge)
        await runner.create_follow_up(_make_issue(priority="high"), "result")

        req = mock_client.create_issue.call_args[0][0]
        assert req.priority == "high"

    async def test_follow_up_not_woken(self, mock_client, bridge, settings_patch):
        mock_client.create_issue.return_value = _make_follow_up()
        runner = TaskRunner(bridge=bridge)
        await runner.create_follow_up(_make_issue(), "result")
        mock_client.wake_agent.assert_not_awaited()

    async def test_returns_none_on_failure(self, mock_client, bridge, settings_patch):
        mock_client.create_issue.return_value = None
        runner = TaskRunner(bridge=bridge)
        assert await runner.create_follow_up(_make_issue(), "r") is None


# ═══════════════════════════════════════════════════════════════════════════════
# FULL GREEN PATH: orchestrator wired end-to-end
# ═══════════════════════════════════════════════════════════════════════════════


class TestGreenPathWithOrchestrator:
    """Full pipe: TaskRunner → StubOrchestrator → bridge → mock_client.

    Proves orchestrator output propagates to every downstream artefact:
    the comment, the follow-up description, and the summary dict.
    """

    async def test_full_cycle_orchestrator_output_everywhere(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """Orchestrator result appears in comment, follow-up, and summary."""
        original = _make_issue(
            id="green-1",
            title="Muse about task automation and write a recursive task",
            description="Reflect on your task processing. Create a follow-up.",
            priority="high",
        )
        mock_client.list_issues.return_value = [original]
        mock_client.update_issue.return_value = _make_done("green-1")
        mock_client.create_issue.return_value = _make_follow_up("green-fu")

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        summary = await runner.run_once()

        # ── Orchestrator was called with correct data
        assert len(stub_orchestrator.calls) == 1
        call = stub_orchestrator.calls[0]
        assert call["task_id"] == "green-1"
        assert call["context"]["priority"] == "high"
        assert "Reflect on your task processing" in call["description"]

        # ── Summary contains orchestrator output
        assert summary is not None
        assert summary["original_issue_id"] == "green-1"
        assert summary["completed"] is True
        assert summary["follow_up_issue_id"] == "green-fu"
        assert "[Orchestrator]" in summary["result"]
        assert "green-1" in summary["result"]

        # ── Comment posted contains orchestrator output
        comment_content = mock_client.add_comment.call_args[0][1]
        assert "[Timmy]" in comment_content
        assert "[Orchestrator]" in comment_content
        assert "high" in comment_content  # priority flowed through

        # ── Follow-up description contains orchestrator output
        follow_up_req = mock_client.create_issue.call_args[0][0]
        assert "[Orchestrator]" in follow_up_req.description
        assert "green-1" in follow_up_req.description
        assert follow_up_req.priority == "high"
        assert follow_up_req.assignee_id == TIMMY_AGENT_ID

        # ── Correct ordering of API calls
        mock_client.list_issues.assert_awaited_once()
        mock_client.checkout_issue.assert_awaited_once_with("green-1")
        mock_client.add_comment.assert_awaited_once()
        mock_client.update_issue.assert_awaited_once()
        assert mock_client.create_issue.await_count == 1

    async def test_no_tasks_returns_none(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        mock_client.list_issues.return_value = []
        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        assert await runner.run_once() is None
        assert len(stub_orchestrator.calls) == 0

    async def test_close_failure_still_creates_follow_up(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        mock_client.list_issues.return_value = [_make_issue()]
        mock_client.update_issue.return_value = None  # close fails
        mock_client.create_issue.return_value = _make_follow_up()

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        summary = await runner.run_once()

        assert summary["completed"] is False
        assert summary["follow_up_issue_id"] == "issue-2"
        assert len(stub_orchestrator.calls) == 1


# ═══════════════════════════════════════════════════════════════════════════════
# EXTERNAL INJECTION: task from Paperclip API → orchestrator processes it
# ═══════════════════════════════════════════════════════════════════════════════


class TestExternalTaskInjection:
    """External system creates a task → Timmy's orchestrator processes it."""

    async def test_external_task_flows_through_orchestrator(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        external = _make_issue(
            id="ext-1",
            title="Review quarterly metrics",
            description="Analyze Q1 metrics and prepare summary.",
        )
        mock_client.list_issues.return_value = [external]
        mock_client.update_issue.return_value = _make_done("ext-1")
        mock_client.create_issue.return_value = _make_follow_up("ext-fu")

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        summary = await runner.run_once()

        # Orchestrator received the external task
        assert stub_orchestrator.calls[0]["task_id"] == "ext-1"
        assert "Analyze Q1 metrics" in stub_orchestrator.calls[0]["description"]

        # Its output flowed to Paperclip
        assert "[Orchestrator]" in summary["result"]
        assert "Review quarterly metrics" in summary["result"]

    async def test_skips_tasks_for_other_agents(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        other = _make_issue(id="other-1", assignee_id="agent-codex")
        mine = _make_issue(id="mine-1", title="My task")
        mock_client.list_issues.return_value = [other, mine]
        mock_client.update_issue.return_value = _make_done("mine-1")
        mock_client.create_issue.return_value = _make_follow_up()

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        summary = await runner.run_once()

        assert summary["original_issue_id"] == "mine-1"
        mock_client.checkout_issue.assert_awaited_once_with("mine-1")


# ═══════════════════════════════════════════════════════════════════════════════
# RECURSIVE CHAIN: follow-up → grabbed → orchestrator → follow-up → ...
# ═══════════════════════════════════════════════════════════════════════════════


class TestRecursiveChain:
    """Multi-cycle chains where each follow-up becomes the next task."""

    async def test_two_cycle_chain(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        task_a = _make_issue(id="A", title="Initial musing")
        fu_b = PaperclipIssue(
            id="B",
            title="Follow-up: Initial musing",
            description="Continue",
            status="open",
            assignee_id=TIMMY_AGENT_ID,
            priority="normal",
        )
        fu_c = PaperclipIssue(
            id="C",
            title="Follow-up: Follow-up",
            status="open",
            assignee_id=TIMMY_AGENT_ID,
        )

        # Cycle 1
        mock_client.list_issues.return_value = [task_a]
        mock_client.update_issue.return_value = _make_done("A")
        mock_client.create_issue.return_value = fu_b

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        s1 = await runner.run_once()
        assert s1["original_issue_id"] == "A"
        assert s1["follow_up_issue_id"] == "B"

        # Cycle 2: follow-up B is now the task
        mock_client.list_issues.return_value = [fu_b]
        mock_client.update_issue.return_value = _make_done("B")
        mock_client.create_issue.return_value = fu_c

        s2 = await runner.run_once()
        assert s2["original_issue_id"] == "B"
        assert s2["follow_up_issue_id"] == "C"

        # Orchestrator was called twice — once per cycle
        assert len(stub_orchestrator.calls) == 2
        assert stub_orchestrator.calls[0]["task_id"] == "A"
        assert stub_orchestrator.calls[1]["task_id"] == "B"

    async def test_three_cycle_chain_all_through_orchestrator(
        self,
        mock_client,
        bridge,
        stub_orchestrator,
        settings_patch,
    ):
        """Three cycles — every task goes through the orchestrator pipe."""
        tasks = [_make_issue(id=f"c-{i}", title=f"Chain {i}") for i in range(3)]
        follow_ups = [
            PaperclipIssue(
                id=f"c-{i + 1}",
                title=f"Follow-up: Chain {i}",
                status="open",
                assignee_id=TIMMY_AGENT_ID,
            )
            for i in range(3)
        ]

        runner = TaskRunner(bridge=bridge, orchestrator=stub_orchestrator)
        ids = []

        for i in range(3):
            mock_client.list_issues.return_value = [tasks[i]]
            mock_client.update_issue.return_value = _make_done(tasks[i].id)
            mock_client.create_issue.return_value = follow_ups[i]

            s = await runner.run_once()
            ids.append(s["original_issue_id"])

        assert ids == ["c-0", "c-1", "c-2"]
        assert len(stub_orchestrator.calls) == 3


# ═══════════════════════════════════════════════════════════════════════════════
# LIFECYCLE: start/stop
# ═══════════════════════════════════════════════════════════════════════════════


class TestLifecycle:
    async def test_stop_halts_loop(self, mock_client, bridge, settings_patch):
        runner = TaskRunner(bridge=bridge)
        runner._running = True
        runner.stop()
        assert runner._running is False

    async def test_start_disabled_when_interval_zero(
        self,
        mock_client,
        bridge,
        settings_patch,
    ):
        settings_patch.paperclip_poll_interval = 0
        runner = TaskRunner(bridge=bridge)
        await runner.start()
        mock_client.list_issues.assert_not_awaited()


# ═══════════════════════════════════════════════════════════════════════════════
# LIVE LLM (manual e2e): runs only when Ollama is available
# ═══════════════════════════════════════════════════════════════════════════════


def _ollama_reachable() -> tuple[bool, list[str]]:
    """Return (reachable, model_names)."""
    try:
        import httpx

        resp = httpx.get("http://localhost:11434/api/tags", timeout=3)
        resp.raise_for_status()
        names = [m["name"] for m in resp.json().get("models", [])]
        return True, names
    except Exception:
        return False, []


def _pick_tiny_model(available: list[str]) -> str | None:
    """Pick the smallest model available for e2e tests."""
    candidates = ["tinyllama", "phi", "qwen2:0.5b", "llama3.2:1b", "gemma:2b"]
    for candidate in candidates:
        for name in available:
            if candidate in name:
                return name
    return None


class LiveOllamaOrchestrator:
    """Thin orchestrator that calls Ollama directly — no Agno dependency."""

    def __init__(self, model_name: str) -> None:
        self.model_name = model_name
        self.calls: list[dict] = []

    async def execute_task(self, task_id: str, description: str, context: dict) -> str:
        import httpx as hx

        self.calls.append({"task_id": task_id, "description": description})

        async with hx.AsyncClient(timeout=60) as client:
            resp = await client.post(
                "http://localhost:11434/api/generate",
                json={
                    "model": self.model_name,
                    "prompt": (
                        f"You are Timmy, a task automation agent. "
                        f"Task: {description}\n"
                        f"Respond in 1-2 sentences about what you did."
                    ),
                    "stream": False,
                    "options": {"num_predict": 64},
                },
            )
            resp.raise_for_status()
            return resp.json()["response"]


@pytest.mark.ollama
class TestLiveOllamaGreenPath:
    """Green-path with a real tiny LLM via Ollama.

    Run with: ``tox -e ollama`` or ``pytest -m ollama``
    Requires: Ollama running with a small model.
    """

    async def test_live_full_cycle(self, mock_client, bridge, settings_patch):
        """Wire a real tiny LLM through the full pipe and verify output."""
        reachable, models = _ollama_reachable()
        if not reachable:
            pytest.skip("Ollama not reachable at localhost:11434")

        chosen = _pick_tiny_model(models)
        if not chosen:
            pytest.skip(f"No tiny model found (have: {models[:5]})")

        issue = _make_issue(
            id="live-1",
            title="Reflect on task automation",
            description="Muse about how you process tasks and suggest improvements.",
        )
        mock_client.list_issues.return_value = [issue]
        mock_client.update_issue.return_value = _make_done("live-1")
        mock_client.create_issue.return_value = _make_follow_up("live-fu")

        live_orch = LiveOllamaOrchestrator(chosen)
        runner = TaskRunner(bridge=bridge, orchestrator=live_orch)
        summary = await runner.run_once()

        # The LLM produced *something* non-empty
        assert summary is not None
        assert len(summary["result"]) > 0
        assert summary["completed"] is True
        assert summary["follow_up_issue_id"] == "live-fu"

        # Orchestrator was actually called
        assert len(live_orch.calls) == 1
        assert live_orch.calls[0]["task_id"] == "live-1"

        # LLM output flowed into the Paperclip comment
        comment = mock_client.add_comment.call_args[0][1]
        assert "[Timmy]" in comment
        assert len(comment) > len("[Timmy] Task completed.\n\n")

        # LLM output flowed into the follow-up description
        fu_req = mock_client.create_issue.call_args[0][0]
        assert len(fu_req.description) > 0
        assert fu_req.assignee_id == TIMMY_AGENT_ID

    async def test_live_recursive_chain(self, mock_client, bridge, settings_patch):
        """Two-cycle chain with a real LLM — each cycle produces real output."""
        reachable, models = _ollama_reachable()
        if not reachable:
            pytest.skip("Ollama not reachable")
chosen = _pick_tiny_model(models)
|
||||
if not chosen:
|
||||
pytest.skip("No tiny model found")
|
||||
|
||||
task_a = _make_issue(id="live-A", title="Initial reflection")
|
||||
fu_b = PaperclipIssue(
|
||||
id="live-B",
|
||||
title="Follow-up: Initial reflection",
|
||||
description="Continue reflecting",
|
||||
status="open",
|
||||
assignee_id=TIMMY_AGENT_ID,
|
||||
priority="normal",
|
||||
)
|
||||
fu_c = PaperclipIssue(
|
||||
id="live-C",
|
||||
title="Follow-up: Follow-up",
|
||||
status="open",
|
||||
assignee_id=TIMMY_AGENT_ID,
|
||||
)
|
||||
|
||||
live_orch = LiveOllamaOrchestrator(chosen)
|
||||
runner = TaskRunner(bridge=bridge, orchestrator=live_orch)
|
||||
|
||||
# Cycle 1
|
||||
mock_client.list_issues.return_value = [task_a]
|
||||
mock_client.update_issue.return_value = _make_done("live-A")
|
||||
mock_client.create_issue.return_value = fu_b
|
||||
|
||||
s1 = await runner.run_once()
|
||||
assert s1 is not None
|
||||
assert len(s1["result"]) > 0
|
||||
|
||||
# Cycle 2
|
||||
mock_client.list_issues.return_value = [fu_b]
|
||||
mock_client.update_issue.return_value = _make_done("live-B")
|
||||
mock_client.create_issue.return_value = fu_c
|
||||
|
||||
s2 = await runner.run_once()
|
||||
assert s2 is not None
|
||||
assert len(s2["result"]) > 0
|
||||
|
||||
# Both cycles went through the LLM
|
||||
assert len(live_orch.calls) == 2
|
||||
@@ -1,234 +0,0 @@
"""Chunk 2: OpenFang HTTP client — test first, implement second.

Tests cover:
- Health check returns False when unreachable
- Health check TTL caching
- execute_hand() rejects unknown hands
- execute_hand() success with mocked HTTP
- execute_hand() graceful degradation on error
- Convenience wrappers call the correct hand
"""

import json
from unittest.mock import MagicMock, patch

import pytest

# ---------------------------------------------------------------------------
# Health checks
# ---------------------------------------------------------------------------


def test_health_check_false_when_unreachable():
    """Client should report unhealthy when OpenFang is not running."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")
    assert client._check_health() is False


def test_health_check_caching():
    """Repeated .healthy calls within TTL should not re-check."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")
    client._health_cache_ttl = 9999  # very long TTL
    # Force a first check (will be False)
    _ = client.healthy
    assert client._healthy is False

    # Manually flip the cached value — next access should use cache
    client._healthy = True
    assert client.healthy is True  # still cached, no re-check


# ---------------------------------------------------------------------------
# execute_hand — unknown hand
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_execute_hand_unknown_hand():
    """Requesting an unknown hand returns success=False immediately."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")
    result = await client.execute_hand("nonexistent_hand", {})
    assert result.success is False
    assert "Unknown hand" in result.error


# ---------------------------------------------------------------------------
# execute_hand — success path (mocked HTTP)
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_execute_hand_success_mocked():
    """When OpenFang returns 200 with output, HandResult.success is True."""
    from infrastructure.openfang.client import OpenFangClient

    response_body = json.dumps(
        {
            "success": True,
            "output": "Page loaded successfully",
            "metadata": {"url": "https://example.com"},
        }
    ).encode()

    mock_resp = MagicMock()
    mock_resp.status = 200
    mock_resp.read.return_value = response_body
    mock_resp.__enter__ = lambda s: s
    mock_resp.__exit__ = MagicMock(return_value=False)

    with patch("urllib.request.urlopen", return_value=mock_resp):
        client = OpenFangClient(base_url="http://localhost:8080")
        result = await client.execute_hand("browser", {"url": "https://example.com"})

    assert result.success is True
    assert result.output == "Page loaded successfully"
    assert result.hand == "browser"
    assert result.latency_ms > 0


# ---------------------------------------------------------------------------
# execute_hand — graceful degradation on connection error
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_execute_hand_connection_error():
    """When OpenFang is unreachable, HandResult.success is False (no crash)."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")
    result = await client.execute_hand("browser", {"url": "https://example.com"})

    assert result.success is False
    assert result.error  # non-empty error message
    assert result.hand == "browser"


# ---------------------------------------------------------------------------
# Convenience wrappers
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_browse_calls_browser_hand():
    """browse() should delegate to execute_hand('browser', ...)."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")

    calls = []
    original = client.execute_hand

    async def spy(hand, params, **kw):
        calls.append((hand, params))
        return await original(hand, params, **kw)

    client.execute_hand = spy
    await client.browse("https://example.com", "click button")

    assert len(calls) == 1
    assert calls[0][0] == "browser"
    assert calls[0][1]["url"] == "https://example.com"


@pytest.mark.asyncio
async def test_collect_calls_collector_hand():
    """collect() should delegate to execute_hand('collector', ...)."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")

    calls = []
    original = client.execute_hand

    async def spy(hand, params, **kw):
        calls.append((hand, params))
        return await original(hand, params, **kw)

    client.execute_hand = spy
    await client.collect("example.com", depth="deep")

    assert len(calls) == 1
    assert calls[0][0] == "collector"
    assert calls[0][1]["target"] == "example.com"


@pytest.mark.asyncio
async def test_predict_calls_predictor_hand():
    """predict() should delegate to execute_hand('predictor', ...)."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")

    calls = []
    original = client.execute_hand

    async def spy(hand, params, **kw):
        calls.append((hand, params))
        return await original(hand, params, **kw)

    client.execute_hand = spy
    await client.predict("Will BTC hit 100k?", horizon="1m")

    assert len(calls) == 1
    assert calls[0][0] == "predictor"
    assert calls[0][1]["question"] == "Will BTC hit 100k?"


# ---------------------------------------------------------------------------
# HandResult dataclass
# ---------------------------------------------------------------------------


def test_hand_result_defaults():
    """HandResult should have sensible defaults."""
    from infrastructure.openfang.client import HandResult

    r = HandResult(hand="browser", success=True)
    assert r.output == ""
    assert r.error == ""
    assert r.latency_ms == 0.0
    assert r.metadata == {}


# ---------------------------------------------------------------------------
# OPENFANG_HANDS constant
# ---------------------------------------------------------------------------


def test_openfang_hands_tuple():
    """The OPENFANG_HANDS constant should list all 7 hands."""
    from infrastructure.openfang.client import OPENFANG_HANDS

    assert len(OPENFANG_HANDS) == 7
    assert "browser" in OPENFANG_HANDS
    assert "collector" in OPENFANG_HANDS
    assert "predictor" in OPENFANG_HANDS
    assert "lead" in OPENFANG_HANDS
    assert "twitter" in OPENFANG_HANDS
    assert "researcher" in OPENFANG_HANDS
    assert "clip" in OPENFANG_HANDS


# ---------------------------------------------------------------------------
# status() summary
# ---------------------------------------------------------------------------


def test_status_returns_summary():
    """status() should return a dict with url, healthy flag, and hands list."""
    from infrastructure.openfang.client import OpenFangClient

    client = OpenFangClient(base_url="http://localhost:19999")
    s = client.status()

    assert "url" in s
    assert "healthy" in s
    assert "available_hands" in s
    assert len(s["available_hands"]) == 7
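The deleted client's `HandResult` is exercised above only through its constructor and defaults. A minimal dataclass satisfying those assertions might look like this (field names and defaults are taken directly from the tests; everything else about the real implementation is an assumption):

```python
from dataclasses import dataclass, field

@dataclass
class HandResult:
    """Result of one OpenFang hand execution — a sketch reconstructed from the tests."""
    hand: str                                    # which hand ran, e.g. "browser"
    success: bool                                # did the hand complete without error
    output: str = ""                             # hand output text
    error: str = ""                              # error message when success is False
    latency_ms: float = 0.0                      # wall-clock time of the HTTP round trip
    metadata: dict = field(default_factory=dict) # extra per-hand data

r = HandResult(hand="browser", success=True)
print(r.output == "" and r.error == "" and r.metadata == {})  # True
```

Using `field(default_factory=dict)` rather than `metadata: dict = {}` matters: a bare mutable default would be shared across instances (and is rejected by `dataclass` anyway).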
@@ -1,25 +0,0 @@
"""Chunk 1: OpenFang config settings — test first, implement second."""


def test_openfang_url_default():
    """Settings should expose openfang_url with a sensible default."""
    from config import settings

    assert hasattr(settings, "openfang_url")
    assert settings.openfang_url == "http://localhost:8080"


def test_openfang_enabled_default_false():
    """OpenFang integration should be opt-in (disabled by default)."""
    from config import settings

    assert hasattr(settings, "openfang_enabled")
    assert settings.openfang_enabled is False


def test_openfang_timeout_default():
    """Timeout should be generous (some hands are slow)."""
    from config import settings

    assert hasattr(settings, "openfang_timeout")
    assert settings.openfang_timeout == 120
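The three defaults asserted above can be captured in a plain settings object. This is only a sketch of the contract the tests pinned down; the real `config.settings` implementation is not shown in this commit and may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OpenFangSettings:
    """Defaults mirror the assertions in the deleted tests above."""
    openfang_url: str = "http://localhost:8080"
    openfang_enabled: bool = False   # integration is opt-in
    openfang_timeout: int = 120      # generous: some hands are slow

settings = OpenFangSettings()
print(settings.openfang_enabled)  # False
```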
@@ -1,43 +0,0 @@
"""Paperclip AI config settings."""


def test_paperclip_url_default():
    from config import settings

    assert hasattr(settings, "paperclip_url")
    assert settings.paperclip_url == "http://localhost:3100"


def test_paperclip_enabled_default_false():
    from config import settings

    assert hasattr(settings, "paperclip_enabled")
    assert settings.paperclip_enabled is False


def test_paperclip_timeout_default():
    from config import settings

    assert hasattr(settings, "paperclip_timeout")
    assert settings.paperclip_timeout == 30


def test_paperclip_agent_id_default_empty():
    from config import settings

    assert hasattr(settings, "paperclip_agent_id")
    assert settings.paperclip_agent_id == ""


def test_paperclip_company_id_default_empty():
    from config import settings

    assert hasattr(settings, "paperclip_company_id")
    assert settings.paperclip_company_id == ""


def test_paperclip_poll_interval_default_zero():
    from config import settings

    assert hasattr(settings, "paperclip_poll_interval")
    assert settings.paperclip_poll_interval == 0
@@ -103,16 +103,8 @@ class TestFeaturePages:
        r = client.get("/models")
        assert r.status_code == 200

    def test_swarm_live(self, client):
        r = client.get("/swarm/live")
        assert r.status_code == 200

    def test_swarm_events(self, client):
        r = client.get("/swarm/events")
        assert r.status_code == 200

    def test_marketplace(self, client):
        r = client.get("/marketplace")
    def test_memory_page(self, client):
        r = client.get("/memory")
        assert r.status_code in (200, 307)


@@ -162,10 +154,6 @@ class TestAPIEndpoints:
        r = client.get("/api/notifications")
        assert r.status_code == 200

    def test_providers_api(self, client):
        r = client.get("/router/api/providers")
        assert r.status_code == 200

    def test_mobile_status(self, client):
        r = client.get("/mobile/status")
        assert r.status_code == 200
@@ -182,10 +170,6 @@ class TestAPIEndpoints:
        r = client.get("/grok/status")
        assert r.status_code == 200

    def test_paperclip_status(self, client):
        r = client.get("/api/paperclip/status")
        assert r.status_code == 200


# ---------------------------------------------------------------------------
# No 500s — every GET route should survive without server error
@@ -223,19 +207,14 @@ class TestNo500:
            "/mobile/status",
            "/spark",
            "/models",
            "/swarm/live",
            "/swarm/events",
            "/marketplace",
            "/api/queue/status",
            "/api/tasks",
            "/api/chat/history",
            "/api/notifications",
            "/router/api/providers",
            "/discord/status",
            "/telegram/status",
            "/grok/status",
            "/grok/stats",
            "/api/paperclip/status",
        ],
    )
    def test_no_500(self, client, path):
@@ -1,265 +0,0 @@
"""Tests for timmy.agents.timmy — orchestrator, personas, context building."""

import sys
from unittest.mock import MagicMock, patch

# Ensure mcp.registry stub with tool_registry exists before importing agents
if "mcp" not in sys.modules:
    _mock_mcp = MagicMock()
    _mock_registry_mod = MagicMock()
    _mock_tool_reg = MagicMock()
    _mock_tool_reg.get_handler.return_value = None
    _mock_registry_mod.tool_registry = _mock_tool_reg
    sys.modules["mcp"] = _mock_mcp
    sys.modules["mcp.registry"] = _mock_registry_mod

from timmy.agents.timmy import (
    _PERSONAS,
    ORCHESTRATOR_PROMPT_BASE,
    TimmyOrchestrator,
    _load_hands_async,
    build_timmy_context_async,
    build_timmy_context_sync,
    create_timmy_swarm,
    format_timmy_prompt,
)


class TestLoadHandsAsync:
    """Test _load_hands_async."""

    async def test_returns_empty_list(self):
        result = await _load_hands_async()
        assert result == []


class TestBuildContext:
    """Test context building functions."""

    @patch("timmy.agents.timmy.settings")
    def test_build_context_sync_graceful_failures(self, mock_settings):
        mock_settings.repo_root = "/nonexistent"
        ctx = build_timmy_context_sync()

        assert "timestamp" in ctx
        assert isinstance(ctx["agents"], list)
        assert isinstance(ctx["hands"], list)
        # Git log should fall back gracefully
        assert isinstance(ctx["git_log"], str)
        # Memory should fall back gracefully
        assert isinstance(ctx["memory"], str)

    @patch("timmy.agents.timmy.settings")
    async def test_build_context_async(self, mock_settings):
        mock_settings.repo_root = "/nonexistent"
        ctx = await build_timmy_context_async()
        assert ctx["hands"] == []

    @patch("timmy.agents.timmy.settings")
    def test_build_context_reads_memory_file(self, mock_settings, tmp_path):
        memory_file = tmp_path / "MEMORY.md"
        memory_file.write_text("# Important memories\nRemember this.")
        mock_settings.repo_root = str(tmp_path)

        # Patch HotMemory path so it reads from tmp_path
        from timmy.memory_system import memory_system

        original_path = memory_system.hot.path
        memory_system.hot.path = memory_file
        memory_system.hot._content = None  # Clear cache
        try:
            ctx = build_timmy_context_sync()
            assert "Important memories" in ctx["memory"]
        finally:
            memory_system.hot.path = original_path
            memory_system.hot._content = None


class TestFormatPrompt:
    """Test format_timmy_prompt."""

    def test_inserts_context_block(self):
        base = "Line one.\nLine two."
        ctx = {
            "timestamp": "2026-03-06T00:00:00Z",
            "repo_root": "/home/user/project",
            "git_log": "abc123 initial commit",
            "agents": [],
            "hands": [],
            "memory": "some memory",
        }
        result = format_timmy_prompt(base, ctx)
        assert "Line one." in result
        assert "Line two." in result
        assert "abc123 initial commit" in result
        assert "some memory" in result

    def test_agents_list_formatted(self):
        ctx = {
            "timestamp": "now",
            "repo_root": "/tmp",
            "git_log": "",
            "agents": [
                {"name": "Forge", "capabilities": "code", "status": "ready"},
                {"name": "Seer", "capabilities": "research", "status": "ready"},
            ],
            "hands": [],
            "memory": "",
        }
        result = format_timmy_prompt("Base.", ctx)
        assert "Forge" in result
        assert "Seer" in result

    def test_hands_list_formatted(self):
        ctx = {
            "timestamp": "now",
            "repo_root": "/tmp",
            "git_log": "",
            "agents": [],
            "hands": [
                {"name": "backup", "schedule": "daily", "enabled": True},
            ],
            "memory": "",
        }
        result = format_timmy_prompt("Base.", ctx)
        assert "backup" in result
        assert "enabled" in result

    def test_repo_root_placeholder_replaced(self):
        ctx = {
            "timestamp": "now",
            "repo_root": "/my/repo",
            "git_log": "",
            "agents": [],
            "hands": [],
            "memory": "",
        }
        result = format_timmy_prompt("Root is {REPO_ROOT}.", ctx)
        assert "/my/repo" in result
        assert "{REPO_ROOT}" not in result


class TestExtractAgent:
    """Test TimmyOrchestrator._extract_agent static method."""

    def test_extracts_known_agents(self):
        assert TimmyOrchestrator._extract_agent("Primary Agent: Seer") == "seer"
        assert TimmyOrchestrator._extract_agent("Use Forge for this") == "forge"
        assert TimmyOrchestrator._extract_agent("Route to quill") == "quill"
        assert TimmyOrchestrator._extract_agent("echo can recall") == "echo"
        assert TimmyOrchestrator._extract_agent("helm decides") == "helm"

    def test_defaults_to_orchestrator(self):
        assert TimmyOrchestrator._extract_agent("no agent mentioned") == "orchestrator"

    def test_case_insensitive(self):
        assert TimmyOrchestrator._extract_agent("Use FORGE") == "forge"


class TestTimmyOrchestrator:
    """Test TimmyOrchestrator init and methods."""

    @patch("timmy.agents.timmy.settings")
    def test_init(self, mock_settings):
        mock_settings.repo_root = "/tmp"
        mock_settings.ollama_model = "test"
        mock_settings.ollama_url = "http://localhost:11434"
        mock_settings.telemetry_enabled = False

        orch = TimmyOrchestrator()
        assert orch.agent_id == "orchestrator"
        assert orch.name == "Orchestrator"
        assert orch.sub_agents == {}
        assert orch._session_initialized is False

    @patch("timmy.agents.timmy.settings")
    def test_register_sub_agent(self, mock_settings):
        mock_settings.repo_root = "/tmp"
        mock_settings.ollama_model = "test"
        mock_settings.ollama_url = "http://localhost:11434"
        mock_settings.telemetry_enabled = False

        orch = TimmyOrchestrator()

        from timmy.agents.base import SubAgent

        agent = SubAgent(
            agent_id="test-agent",
            name="Test",
            role="test",
            system_prompt="You are a test agent.",
        )
        orch.register_sub_agent(agent)
        assert "test-agent" in orch.sub_agents

    @patch("timmy.agents.timmy.settings")
    def test_get_swarm_status(self, mock_settings):
        mock_settings.repo_root = "/tmp"
        mock_settings.ollama_model = "test"
        mock_settings.ollama_url = "http://localhost:11434"
        mock_settings.telemetry_enabled = False

        orch = TimmyOrchestrator()
        status = orch.get_swarm_status()
        assert "orchestrator" in status
        assert status["total_agents"] == 1

    @patch("timmy.agents.timmy.settings")
    def test_get_enhanced_system_prompt_with_attr(self, mock_settings):
        mock_settings.repo_root = "/tmp"
        mock_settings.ollama_model = "test"
        mock_settings.ollama_url = "http://localhost:11434"
        mock_settings.telemetry_enabled = False

        orch = TimmyOrchestrator()
        # BaseAgent doesn't store system_prompt as attr; set it manually
        orch.system_prompt = "Test prompt.\nWith context."
        prompt = orch._get_enhanced_system_prompt()
        assert isinstance(prompt, str)
        assert "Test prompt." in prompt


class TestCreateTimmySwarm:
    """Test create_timmy_swarm factory."""

    @patch("timmy.agents.timmy.settings")
    def test_creates_all_personas(self, mock_settings):
        mock_settings.repo_root = "/tmp"
        mock_settings.ollama_model = "test"
        mock_settings.ollama_url = "http://localhost:11434"
        mock_settings.telemetry_enabled = False

        swarm = create_timmy_swarm()
        assert len(swarm.sub_agents) == len(_PERSONAS)
        assert "seer" in swarm.sub_agents
        assert "forge" in swarm.sub_agents
        assert "quill" in swarm.sub_agents
        assert "echo" in swarm.sub_agents
        assert "helm" in swarm.sub_agents


class TestPersonas:
    """Test persona definitions."""

    def test_all_personas_have_required_fields(self):
        required = {"agent_id", "name", "role", "system_prompt"}
        for persona in _PERSONAS:
            assert required.issubset(persona.keys()), f"Missing fields in {persona['name']}"

    def test_persona_ids_unique(self):
        ids = [p["agent_id"] for p in _PERSONAS]
        assert len(ids) == len(set(ids))

    def test_six_personas(self):
        assert len(_PERSONAS) == 6


class TestOrchestratorPrompt:
    """Test the ORCHESTRATOR_PROMPT_BASE constant."""

    def test_contains_hard_rules(self):
        assert "NEVER fabricate" in ORCHESTRATOR_PROMPT_BASE
        assert "do not know" in ORCHESTRATOR_PROMPT_BASE.lower()

    def test_contains_repo_root_placeholder(self):
        assert "{REPO_ROOT}" in ORCHESTRATOR_PROMPT_BASE
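The `_extract_agent` behavior pinned down by `TestExtractAgent` above (known-name scan, case-insensitive, orchestrator fallback) can be sketched as a free function. This is a reconstruction from the test assertions, not the deleted implementation; note it scans in `KNOWN_AGENTS` order rather than text order, which the tests do not distinguish:

```python
KNOWN_AGENTS = ("seer", "forge", "quill", "echo", "helm")

def extract_agent(text: str) -> str:
    """Return the first known agent name mentioned in text, else 'orchestrator'."""
    lowered = text.lower()                # case-insensitive matching
    for agent in KNOWN_AGENTS:
        if agent in lowered:
            return agent
    return "orchestrator"                 # fallback when no agent is named

print(extract_agent("Primary Agent: Seer"))  # seer
print(extract_agent("no agent mentioned"))   # orchestrator
```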
@@ -6,12 +6,42 @@ import pytest
|
||||
|
||||
from timmy.mcp_tools import (
|
||||
_bridge_to_work_order,
|
||||
_parse_command,
|
||||
close_mcp_sessions,
|
||||
create_filesystem_mcp_tools,
|
||||
create_gitea_issue_via_mcp,
|
||||
create_gitea_mcp_tools,
|
||||
)
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# _parse_command
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def test_parse_command_splits_correctly():
|
||||
"""_parse_command splits a command string into executable and args."""
|
||||
with patch("timmy.mcp_tools.shutil.which", return_value="/usr/local/bin/gitea-mcp"):
|
||||
exe, args = _parse_command("gitea-mcp -t stdio")
|
||||
assert exe == "/usr/local/bin/gitea-mcp"
|
||||
assert args == ["-t", "stdio"]
|
||||
|
||||
|
||||
def test_parse_command_expands_tilde():
|
||||
"""_parse_command expands ~/."""
|
||||
with patch("timmy.mcp_tools.shutil.which", return_value=None):
|
||||
exe, args = _parse_command("~/go/bin/gitea-mcp -t stdio")
|
||||
assert "/go/bin/gitea-mcp" in exe
|
||||
assert "~" not in exe
|
||||
assert args == ["-t", "stdio"]
|
||||
|
||||
|
||||
def test_parse_command_preserves_absolute_path():
|
||||
"""_parse_command preserves an absolute path without calling which."""
|
||||
exe, args = _parse_command("/usr/local/bin/gitea-mcp -t stdio")
|
||||
assert exe == "/usr/local/bin/gitea-mcp"
|
||||
assert args == ["-t", "stdio"]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# create_gitea_mcp_tools
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -36,25 +66,26 @@ def test_gitea_mcp_returns_none_when_no_token():
|
||||
|
||||
|
||||
def test_gitea_mcp_returns_tools_when_configured():
|
||||
"""Gitea MCP factory returns an MCPTools instance when properly configured."""
|
||||
"""Gitea MCP factory returns an MCPTools instance using server_params."""
|
||||
mock_mcp = MagicMock()
|
||||
mock_params = MagicMock()
|
||||
with (
|
||||
patch("timmy.mcp_tools.settings") as mock_settings,
|
||||
patch("agno.tools.mcp.MCPTools", return_value=mock_mcp) as mock_cls,
|
||||
patch("timmy.mcp_tools._gitea_server_params", return_value=mock_params),
|
||||
):
|
||||
mock_settings.gitea_enabled = True
|
||||
mock_settings.gitea_token = "tok123"
|
||||
mock_settings.mcp_gitea_command = "gitea-mcp -t stdio"
|
||||
mock_settings.gitea_url = "http://localhost:3000"
|
||||
mock_settings.mcp_timeout = 15
|
||||
result = create_gitea_mcp_tools()
|
||||
|
||||
assert result is mock_mcp
|
||||
mock_cls.assert_called_once()
|
||||
call_kwargs = mock_cls.call_args[1]
|
||||
assert call_kwargs["command"] == "gitea-mcp -t stdio"
|
||||
assert call_kwargs["env"]["GITEA_ACCESS_TOKEN"] == "tok123"
|
||||
assert "create_issue" in call_kwargs["include_tools"]
|
||||
assert call_kwargs["server_params"] is mock_params
|
||||
assert "command" not in call_kwargs
|
||||
assert "issue_write" in call_kwargs["include_tools"]
|
||||
assert "pull_request_write" in call_kwargs["include_tools"]
|
||||
|
||||
|
||||
def test_gitea_mcp_graceful_on_import_error():
|
||||
@@ -76,11 +107,14 @@ def test_gitea_mcp_graceful_on_import_error():
|
||||
|
||||
|
||||
def test_filesystem_mcp_returns_tools():
|
||||
"""Filesystem MCP factory returns an MCPTools instance."""
|
||||
"""Filesystem MCP factory returns an MCPTools instance using server_params."""
|
||||
mock_mcp = MagicMock()
|
||||
mock_params_cls = MagicMock()
|
||||
with (
|
||||
patch("timmy.mcp_tools.settings") as mock_settings,
|
||||
patch("agno.tools.mcp.MCPTools", return_value=mock_mcp) as mock_cls,
|
||||
patch("mcp.client.stdio.StdioServerParameters", mock_params_cls),
|
||||
patch("timmy.mcp_tools.shutil.which", return_value="/usr/local/bin/npx"),
|
||||
):
|
||||
mock_settings.mcp_filesystem_command = "npx -y @modelcontextprotocol/server-filesystem"
|
||||
mock_settings.repo_root = "/home/user/project"
|
||||
@@ -89,8 +123,11 @@ def test_filesystem_mcp_returns_tools():
|
||||
|
||||
assert result is mock_mcp
|
||||
call_kwargs = mock_cls.call_args[1]
|
||||
assert "/home/user/project" in call_kwargs["command"]
|
||||
assert "server_params" in call_kwargs
|
||||
assert "read_file" in call_kwargs["include_tools"]
|
||||
# Verify StdioServerParameters was called with repo_root as an arg
|
||||
params_kwargs = mock_params_cls.call_args[1]
|
||||
assert "/home/user/project" in params_kwargs["args"]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -110,23 +147,29 @@ async def test_issue_via_mcp_returns_not_configured():
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_issue_via_mcp_calls_tool():
|
||||
"""Issue creation calls the MCP tool with correct arguments."""
|
||||
"""Issue creation calls session.call_tool with correct arguments."""
|
||||
import timmy.mcp_tools as mcp_mod
|
||||
|
||||
mock_session = MagicMock()
|
||||
mock_session.connect = AsyncMock()
|
||||
mock_session.call_tool = AsyncMock(return_value="Issue #42 created")
|
||||
mock_session._connected = False
|
||||
# Mock the inner MCP session (tools.session.call_tool)
|
||||
mock_inner_session = MagicMock()
|
||||
mock_inner_session.call_tool = AsyncMock(return_value="Issue #42 created")
|
||||
|
||||
mock_tools = MagicMock()
|
||||
mock_tools.connect = AsyncMock()
|
||||
mock_tools.session = mock_inner_session
|
||||
mock_tools._connected = False
|
||||
|
||||
mock_params = MagicMock()
|
||||
|
||||
with (
|
||||
patch("timmy.mcp_tools.settings") as mock_settings,
|
||||
patch("agno.tools.mcp.MCPTools", return_value=mock_session),
|
||||
patch("agno.tools.mcp.MCPTools", return_value=mock_tools),
|
||||
patch("timmy.mcp_tools._gitea_server_params", return_value=mock_params),
|
||||
):
|
||||
mock_settings.gitea_enabled = True
|
||||
mock_settings.gitea_token = "tok123"
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
mock_settings.gitea_url = "http://localhost:3000"
|
||||
mock_settings.mcp_gitea_command = "gitea-mcp -t stdio"
|
||||
mock_settings.mcp_timeout = 15
|
||||
mock_settings.repo_root = "/tmp/test"
|
||||
|
||||
@@ -136,10 +179,12 @@ async def test_issue_via_mcp_calls_tool():
         result = await create_gitea_issue_via_mcp("Bug title", "Bug body", "bug")

     assert "Bug title" in result
-    mock_session.connect.assert_awaited_once()
-    mock_session.call_tool.assert_awaited_once()
-    call_args = mock_session.call_tool.call_args
-    assert call_args[0][0] == "create_issue"
+    mock_tools.connect.assert_awaited_once()
+    # Verify it calls session.call_tool (not tools.call_tool)
+    mock_inner_session.call_tool.assert_awaited_once()
+    call_args = mock_inner_session.call_tool.call_args
+    assert call_args[0][0] == "issue_write"
+    assert call_args[1]["arguments"]["method"] == "create"
+    assert call_args[1]["arguments"]["owner"] == "owner"
+    assert call_args[1]["arguments"]["repo"] == "repo"

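The nested-mock pattern this test relies on — an outer `MCPTools` mock whose `.session` attribute carries its own `AsyncMock` — can be reproduced standalone. In the sketch below, `create_issue` is a hypothetical stand-in for `create_gitea_issue_via_mcp`, reduced to the connect-then-call shape the test asserts; it is not the project's actual implementation.

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# Hypothetical stand-in for the code under test: connect the outer wrapper
# once, then invoke the tool on the *inner* session attribute.
async def create_issue(tools, title: str) -> str:
    if not tools._connected:
        await tools.connect()
    return await tools.session.call_tool(
        "issue_write", arguments={"method": "create", "title": title}
    )

# Inner session owns call_tool; outer wrapper owns connect and .session.
inner = MagicMock()
inner.call_tool = AsyncMock(return_value="Issue #42 created")

tools = MagicMock()
tools.connect = AsyncMock()
tools.session = inner  # the nested attribute the tests wire up
tools._connected = False

result = asyncio.run(create_issue(tools, "Bug title"))
tools.connect.assert_awaited_once()
inner.call_tool.assert_awaited_once()
```

Splitting the mocks this way lets the test distinguish `tools.call_tool` (never awaited) from `tools.session.call_tool` (awaited once), which is exactly the regression the updated assertions guard against.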
@@ -152,19 +197,21 @@ async def test_issue_via_mcp_graceful_failure():
     """Issue creation returns error string on MCP failure."""
     import timmy.mcp_tools as mcp_mod

-    mock_session = MagicMock()
-    mock_session.connect = AsyncMock(side_effect=ConnectionError("no process"))
-    mock_session._connected = False
+    mock_tools = MagicMock()
+    mock_tools.connect = AsyncMock(side_effect=ConnectionError("no process"))
+    mock_tools._connected = False

     mock_params = MagicMock()

     with (
         patch("timmy.mcp_tools.settings") as mock_settings,
-        patch("agno.tools.mcp.MCPTools", return_value=mock_session),
+        patch("agno.tools.mcp.MCPTools", return_value=mock_tools),
         patch("timmy.mcp_tools._gitea_server_params", return_value=mock_params),
     ):
         mock_settings.gitea_enabled = True
         mock_settings.gitea_token = "tok123"
         mock_settings.gitea_repo = "owner/repo"
         mock_settings.gitea_url = "http://localhost:3000"
         mock_settings.mcp_gitea_command = "gitea-mcp -t stdio"
         mock_settings.mcp_timeout = 15
         mock_settings.repo_root = "/tmp/test"

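The graceful-failure contract checked here — a `ConnectionError` surfaces as a returned error string rather than a raised exception — typically takes the form of a try/except around the connect-and-call path. A minimal sketch of that assumed shape (not the project's actual code; the `"MCP error: …"` message format is illustrative):

```python
import asyncio

# Hypothetical wrapper matching the test's expectation: failures at the MCP
# boundary come back as a string, never as an uncaught exception.
async def create_issue_safely(tools, title: str) -> str:
    try:
        await tools.connect()
        return await tools.session.call_tool(
            "issue_write", arguments={"method": "create", "title": title}
        )
    except Exception as exc:  # deliberately broad at the tool boundary
        return f"MCP error: {exc}"

class _FailingTools:
    """Stub whose connect() fails the same way the mock above does."""
    async def connect(self):
        raise ConnectionError("no process")

result = asyncio.run(create_issue_safely(_FailingTools(), "Bug title"))
```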
@@ -207,16 +254,16 @@ def test_bridge_to_work_order(tmp_path):

 @pytest.mark.asyncio
 async def test_close_mcp_sessions():
-    """close_mcp_sessions disconnects the cached session."""
+    """close_mcp_sessions closes the cached session."""
     import timmy.mcp_tools as mcp_mod

     mock_session = MagicMock()
-    mock_session.disconnect = AsyncMock()
+    mock_session.close = AsyncMock()
     mcp_mod._issue_session = mock_session

     await close_mcp_sessions()

-    mock_session.disconnect.assert_awaited_once()
+    mock_session.close.assert_awaited_once()
     assert mcp_mod._issue_session is None

@@ -240,8 +287,9 @@ def test_mcp_tools_classified_in_safety():
     from timmy.tool_safety import DANGEROUS_TOOLS, SAFE_TOOLS, requires_confirmation

     # Gitea MCP tools should be safe
-    assert "create_issue" in SAFE_TOOLS
-    assert "list_repo_issues" in SAFE_TOOLS
+    assert "issue_write" in SAFE_TOOLS
+    assert "list_issues" in SAFE_TOOLS
+    assert "pull_request_write" in SAFE_TOOLS

     # Filesystem read-only MCP tools should be safe
     assert "list_directory" in SAFE_TOOLS
@@ -251,6 +299,6 @@ def test_mcp_tools_classified_in_safety():
     assert "write_file" in DANGEROUS_TOOLS

     # Verify requires_confirmation logic
-    assert not requires_confirmation("create_issue")
+    assert not requires_confirmation("issue_write")
     assert not requires_confirmation("list_directory")
     assert requires_confirmation("write_file")

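For context, the classification scheme this test exercises can be sketched as two frozensets and a membership check. The entries below are inferred only from the asserts; the real `timmy.tool_safety` sets likely contain more tools, and treating unknown tools as needing confirmation is an assumed fail-safe choice, not something the test pins down.

```python
# Sketch of a tool-safety classifier; membership inferred from the test asserts.
SAFE_TOOLS = frozenset({
    "issue_write", "list_issues", "pull_request_write",  # Gitea MCP tools
    "list_directory", "read_file",                       # read-only filesystem tools
})
DANGEROUS_TOOLS = frozenset({"write_file"})

def requires_confirmation(tool_name: str) -> bool:
    # Assumed policy: anything not explicitly safe requires confirmation,
    # so newly added MCP tools default to the cautious path.
    return tool_name not in SAFE_TOOLS
```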
@@ -1,4 +1,9 @@
-"""Tests for timmy.tools_delegation — delegate_task and list_swarm_agents."""
+"""Tests for timmy.tools_delegation — delegate_task and list_swarm_agents.
+
+Agent IDs are now defined in config/agents.yaml, not hardcoded Python.
+Tests reference the YAML-defined IDs: orchestrator, researcher, coder,
+writer, memory, experimenter.
+"""

 from timmy.tools_delegation import delegate_task, list_swarm_agents

@@ -11,33 +16,37 @@ class TestDelegateTask:
         assert result["task_id"] is None

     def test_valid_agent_names_normalised(self):
-        # Should still fail at import (no swarm module), but agent name is accepted
-        result = delegate_task(" Seer ", "think about it")
-        # The swarm import will fail, so success=False but error is about import, not agent name
+        # Agent IDs are lowercased; whitespace should be stripped
+        result = delegate_task(" Researcher ", "think about it")
         assert "Unknown agent" not in result.get("error", "")

     def test_invalid_priority_defaults_to_normal(self):
         # Even with bad priority, delegate_task should not crash
-        result = delegate_task("forge", "build", priority="ultra")
+        result = delegate_task("coder", "build", priority="ultra")
         assert isinstance(result, dict)

     def test_all_valid_agents_accepted(self):
-        valid_agents = ["seer", "forge", "echo", "helm", "quill"]
+        # These IDs match config/agents.yaml
+        valid_agents = ["orchestrator", "researcher", "coder", "writer", "memory", "experimenter"]
         for agent in valid_agents:
             result = delegate_task(agent, "test task")
             assert "Unknown agent" not in result.get("error", ""), f"{agent} rejected"

-    def test_mace_no_longer_valid(self):
-        result = delegate_task("mace", "run security scan")
-        assert result["success"] is False
-        assert "Unknown agent" in result["error"]
+    def test_old_agent_names_no_longer_valid(self):
+        # Old hardcoded names should not work anymore
+        for old_name in ["seer", "forge", "echo", "helm", "quill", "mace"]:
+            result = delegate_task(old_name, "test")
+            assert result["success"] is False
+            assert "Unknown agent" in result["error"]


 class TestListSwarmAgents:
-    def test_returns_agents_from_personas(self):
+    def test_returns_agents_from_yaml(self):
         result = list_swarm_agents()
         assert result["success"] is True
         assert len(result["agents"]) > 0
         agent_names = [a["name"] for a in result["agents"]]
-        assert "Seer" in agent_names
-        assert "Forge" in agent_names
+        # These names come from config/agents.yaml
+        assert "Timmy" in agent_names
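The behaviour these delegation tests pin down — IDs stripped and lowercased, unknown names rejected with an "Unknown agent" error — can be sketched independently. The hard-coded ID set below mirrors the test list only; in the real code it is loaded from config/agents.yaml, and the full return shape of `delegate_task` is not reproduced here.

```python
# Sketch of the ID-normalisation path delegate_task is expected to take.
VALID_AGENT_IDS = {"orchestrator", "researcher", "coder", "writer", "memory", "experimenter"}

def resolve_agent(name: str) -> dict:
    agent_id = name.strip().lower()  # " Researcher " -> "researcher"
    if agent_id not in VALID_AGENT_IDS:
        # Old hardcoded names (seer, forge, echo, helm, quill, mace) land here.
        return {"success": False, "error": f"Unknown agent: {agent_id}"}
    return {"success": True, "agent_id": agent_id}
```

Normalising before validation is what makes `" Researcher "` acceptable while `"mace"` fails cleanly rather than crashing.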