Compare commits

..

1 Commit

Author SHA1 Message Date
Timmy
25dd988cc7 fix: #681
Some checks failed
Smoke Test / smoke (pull_request) Failing after 23s
2026-04-15 00:56:11 -04:00
4 changed files with 546 additions and 1027 deletions


@@ -0,0 +1,476 @@
# GENOME.md: burn-fleet
**Generated:** 2026-04-15
**Repo:** Timmy_Foundation/burn-fleet
**Purpose:** Laned tmux dispatcher for sovereign burn operations across Mac and Allegro
**Analyzed commit:** `2d4d9ab`
**Size:** 5 top-level source/config files + README | 985 total lines (`fleet-dispatch.py` 320, `fleet-christen.py` 205, `fleet-status.py` 143, `fleet-launch.sh` 126, `fleet-spec.json` 98, `README.md` 93)
---
## Project Overview
`burn-fleet` is a compact control-plane repo for the Hundred-Pane Fleet.
Its job is not model inference itself. Its job is to shape where inference runs, which panes wake up, which repos route to which windows, and how work is fanned out across Mac and VPS workers.
The repo turns a narrative naming scheme into executable infrastructure:
- Mac runs the local session (`BURN`) with windows like `CRUCIBLE`, `GNOMES`, `LOOM`, `FOUNDRY`, `WARD`, `COUNCIL`
- Allegro runs a remote session (`BURN`) with windows like `FORGE`, `ANVIL`, `CRUCIBLE-2`, `SENTINEL`
- `fleet-spec.json` is the single source of truth for pane counts, lanes, sublanes, glyphs, and names
- `fleet-launch.sh` materializes the tmux topology
- `fleet-christen.py` boots `hermes chat --yolo` in each pane and pushes identity prompts
- `fleet-dispatch.py` consumes Gitea issues, maps repos to windows through `MAC_ROUTE` and `ALLEGRO_ROUTE`, and sends `/queue` work into the right panes
- `fleet-status.py` inspects pane output and reports fleet health
The repo is small, but it sits on a high-blast-radius operational seam:
- it controls 100+ panes
- it writes to live tmux sessions
- it comments on live Gitea issues
- it depends on SSH reachability to the VPS
- it is effectively a narrative infrastructure orchestrator
This means the right way to read it is as a dispatch kernel, not just a set of scripts.
---
## Architecture
```mermaid
graph TD
A[fleet-spec.json] --> B[fleet-launch.sh]
A --> C[fleet-christen.py]
A --> D[fleet-dispatch.py]
A --> E[fleet-status.py]
B --> F[tmux session BURN on Mac]
B --> G[tmux session BURN on Allegro over SSH]
C --> F
C --> G
C --> H[hermes chat --yolo in every pane]
H --> I[identity + lane prompt]
J[Gitea issues on forge.alexanderwhitestone.com] --> D
D --> K[MAC_ROUTE]
D --> L[ALLEGRO_ROUTE]
D --> M[/queue prompt generation]
M --> F
M --> G
D --> N[comment_on_issue]
N --> J
D --> O[dispatch-state.json]
E --> F
E --> G
E --> P[get_pane_status]
P --> Q[fleet health summary]
```
### Structural reading
The repo has one real architecture pattern:
1. declarative topology in `fleet-spec.json`
2. imperative realization scripts that consume that topology
3. runtime state in `dispatch-state.json`
4. external side effects in tmux, SSH, and Gitea
That makes `fleet-spec.json` the nucleus and the four scripts adapters around it.
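The nucleus/adapter split can be sketched in a few lines. The fragment below mirrors only the spec shape this document names (`machines`, `windows`, `panes`, `lane`); every other key and value is an assumption, not the real file:

```python
import json

# Hypothetical fragment in the fleet-spec.json shape described above.
SPEC_TEXT = """
{
  "machines": {
    "mac": {
      "session": "BURN",
      "windows": [
        {"name": "CRUCIBLE", "panes": 4, "lane": "burn"},
        {"name": "GNOMES", "panes": 6, "lane": "support"}
      ]
    }
  }
}
"""

def window_pane_counts(spec: dict, machine: str) -> dict:
    """Derive {window name: pane count} from the declarative topology."""
    windows = spec["machines"][machine]["windows"]
    return {w["name"]: w["panes"] for w in windows}

spec = json.loads(SPEC_TEXT)
counts = window_pane_counts(spec, "mac")
```

Every adapter script is, at heart, a different fold over this one document.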
---
## Entry Points
| Entry point | Type | Role |
|-------------|------|------|
| `fleet-launch.sh [mac\|allegro\|both]` | Shell CLI | Creates tmux sessions and pane layouts from `fleet-spec.json` |
| `python3 fleet-christen.py [mac\|allegro\|both]` | Python CLI | Starts Hermes workers and injects identity/lane prompts |
| `python3 fleet-dispatch.py [--cycles N] [--interval S] [--machine mac\|allegro\|both]` | Python CLI | Pulls open Gitea issues, routes them, comments on issues, persists `dispatch-state.json` |
| `python3 fleet-status.py [--machine mac\|allegro\|both]` | Python CLI | Samples pane output and reports working/idle/error/dead state |
| `README.md` quick start | Human runbook | Documents the intended operator flow from launch to christening to dispatch to status |
### Hidden operational entry points
These are not CLI entry points, but they matter for behavior:
- `MAC_ROUTE` in `fleet-dispatch.py`
- `ALLEGRO_ROUTE` in `fleet-dispatch.py`
- `SKIP_LABELS` and `INACTIVE` filtering in `fleet-dispatch.py`
- `send_to_pane()` as the effectful dispatch primitive
- `comment_on_issue()` as the visible acknowledgement primitive
- `get_pane_status()` in `fleet-status.py` as the fleet health classifier
---
## Data Flow
### 1. Topology creation
`fleet-launch.sh` reads `fleet-spec.json`, parses each window's pane count, and creates the tmux layout.
Flow:
- load spec file path from `SCRIPT_DIR/fleet-spec.json`
- parse `machines.mac.windows` or `machines.allegro.windows`
- create `BURN` session locally or remotely
- create first window, then split panes, then create remaining windows
- continuously tile after splits
This script is layout-only. It does not launch Hermes.
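A minimal sketch of that layout step, assuming the tmux subcommands a launcher like this typically issues (`new-session`, `new-window`, `split-window`, `select-layout tiled`); the real script's flags may differ. The commands are built but deliberately not executed:

```python
def layout_commands(session: str, windows: dict) -> list:
    """Build (but do not run) the tmux argv list for a session layout.

    `windows` maps window name -> pane count, mirroring the flow above:
    first window via new-session, later windows via new-window, and a
    re-tile after every split.
    """
    cmds = []
    first = True
    for name, panes in windows.items():
        if first:
            cmds.append(["tmux", "new-session", "-d", "-s", session, "-n", name])
            first = False
        else:
            cmds.append(["tmux", "new-window", "-t", session, "-n", name])
        for _ in range(panes - 1):
            cmds.append(["tmux", "split-window", "-t", f"{session}:{name}"])
            cmds.append(["tmux", "select-layout", "-t", f"{session}:{name}", "tiled"])
    return cmds

cmds = layout_commands("BURN", {"CRUCIBLE": 3, "GNOMES": 2})
```

Keeping command construction separate from execution like this is also what would make the launcher testable without a live tmux server.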
### 2. Agent wake-up / identity seeding
`fleet-christen.py` reads the same `fleet-spec.json` and sends `hermes chat --yolo` into each pane.
After a fixed wait window, it sends a second `/queue` identity message containing:
- glyph
- pane name
- machine name
- window name
- pane number
- sublane
- sovereign operating instructions
That identity message is the bridge from infrastructure to narrative.
The worker is not just launched; it is assigned a mythic/operator identity with a lane.
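The shape of that identity message can be sketched as follows; only the fields come from the list above, while the template wording is invented:

```python
def identity_prompt(glyph, pane_name, machine, window, pane_num, sublane):
    """Assemble a /queue identity message from the fields listed above.

    Illustrative template only; the real wording lives in fleet-christen.py.
    """
    return (
        f"/queue You are {glyph} {pane_name}, pane {pane_num} of window "
        f"{window} on {machine}, sublane {sublane}. "
        "Operate within your lane and report completed burns."
    )

msg = identity_prompt("🜂", "EMBER-1", "mac", "CRUCIBLE", 0, "hotfix")
```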
### 3. Issue harvest and lane dispatch
`fleet-dispatch.py` is the center of the runtime.
Flow:
- load `fleet-spec.json`
- load `dispatch-state.json`
- load Gitea token
- fetch open issues per repo with `requests`
- filter PRs, meta labels, and previously dispatched issues
- build a candidate pool per machine/window
- assign issues pane-by-pane
- call `send_to_pane()` to inject `/queue ...`
- call `comment_on_issue()` to leave a visible burn dispatch comment
- persist the issue assignment into `dispatch-state.json`
Important: the data flow is not issue -> worker directly.
It is:
issue -> repo route table -> window -> pane -> `/queue` prompt -> worker.
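That chain can be sketched as a small routing function. The route-table entries are illustrative, and the round-robin pane cursor is an assumption about how panes get filled:

```python
# Hypothetical route table in the MAC_ROUTE shape: repo name -> window.
MAC_ROUTE = {"timmy-home": "CRUCIBLE", "the-nexus": "LOOM"}

def route_issue(issue: dict, route: dict, next_pane: dict):
    """Resolve issue -> window -> pane -> /queue prompt, per the chain above."""
    window = route.get(issue["repo"])
    if window is None:
        return None  # unrouted repo: the issue is skipped, not dispatched
    pane = next_pane.get(window, 0)
    next_pane[window] = pane + 1  # naive round-robin cursor per window
    return (window, pane, f"/queue Burn issue #{issue['number']}: {issue['title']}")

cursor = {}
target = route_issue(
    {"repo": "timmy-home", "number": 681, "title": "fix dispatch"}, MAC_ROUTE, cursor
)
```

The indirection is the point: workers never see raw issues, only lane-shaped prompts.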
### 4. Health sampling
`fleet-status.py` runs the inverse direction.
It samples pane output through `tmux capture-pane` locally or over SSH and classifies the last visible signal as:
- `working`
- `idle`
- `error`
- `dead`
It then summarizes by window, machine, and global fleet totals.
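A hedged sketch of that classification step; the actual patterns in `get_pane_status()` are not shown here, so these heuristics are stand-ins:

```python
def classify_pane(tail: str) -> str:
    """Heuristic pane-state classifier in the spirit of get_pane_status().

    Classifies the last visible line of `tmux capture-pane` output.
    The match strings are assumptions, not the real script's patterns.
    """
    if not tail.strip():
        return "dead"
    last = tail.strip().splitlines()[-1].lower()
    if "traceback" in last or "error" in last:
        return "error"
    if last.endswith("...") or "working" in last:
        return "working"
    return "idle"
```

Note how fragile this is by construction: any pane that happens to print the word "error" in benign output gets misclassified, which is exactly the risk flagged later under Key Abstractions.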
### 5. Runtime state persistence
`dispatch-state.json` is not checked in, but it is the only persistent memory of what the dispatcher already assigned.
That means the runtime depends on a local mutable file rather than a centralized dispatch ledger.
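The ledger's load/mark round-trip can be sketched like this; the path and JSON shape are assumptions, not the real file's schema:

```python
import json
import tempfile
from pathlib import Path

# Illustrative location; the real dispatcher keeps its own path.
STATE = Path(tempfile.gettempdir()) / "dispatch-state.json"

def load_state() -> dict:
    """Read the local ledger, tolerating a missing first-run file."""
    if STATE.exists():
        return json.loads(STATE.read_text())
    return {"dispatched": {}}

def mark_dispatched(state: dict, issue_key: str, target: str) -> bool:
    """Record an assignment once; False means it was already dispatched."""
    if issue_key in state["dispatched"]:
        return False
    state["dispatched"][issue_key] = target
    STATE.write_text(json.dumps(state, indent=2))
    return True
```

The whole deduplication guarantee hangs on this one file surviving between cycles, which is why losing or corrupting it causes re-dispatch.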
---
## Key Abstractions
### 1. `fleet-spec.json`
This is the primary abstraction in the repo.
It encodes:
- machine identity (`mac`, `allegro`)
- host / SSH details
- hardware metadata (`cores`, `ram_gb`)
- tmux session names
- default model/provider metadata
- windows with `panes`, `lane`, `sublanes`, `glyphs`, `names`
Everything else in the repo interprets this document.
If the spec drifts from the route tables or runtime assumptions, the fleet silently degrades.
### 2. Route tables: `MAC_ROUTE` and `ALLEGRO_ROUTE`
These tables are the repo's second control nucleus.
They map repo names to windows.
This is how `timmy-home`, `the-nexus`, `the-door`, `fleet-ops`, and `the-beacon` land in different operational lanes.
This split means routing logic is duplicated:
- once in the topology spec
- once in Python route dictionaries
That duplication is one of the most important maintainability risks in the repo.
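That drift is mechanically detectable. A sketch with illustrative window names, comparing the spec's window set against a route table's targets:

```python
def route_drift(spec_windows: set, route: dict) -> dict:
    """Report windows that exist on only one side of the duplicated truth."""
    routed = set(route.values())
    return {
        "in_spec_only": sorted(spec_windows - routed),
        "in_route_only": sorted(routed - spec_windows),
    }

drift = route_drift(
    {"CRUCIBLE", "LOOM", "GNOMES"},
    {"timmy-home": "CRUCIBLE", "the-door": "GATE"},  # hypothetical entries
)
```

A check like this run at dispatcher startup would turn silent degradation into a loud failure.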
### 3. Pane effect primitive: `send_to_pane()`
`send_to_pane()` is the real actuator.
It turns a dispatch decision into a tmux `send-keys` side effect.
It handles both:
- local tmux injection
- remote SSH + tmux injection
Everything operationally dangerous funnels through this function.
It is therefore a critical path even though the repo has no tests around it.
### 4. Issue acknowledgement primitive: `comment_on_issue()`
This is the repo's social trace primitive.
It posts a burn dispatch comment back to the issue so humans can see that the fleet claimed it.
This is the visible heartbeat of autonomous dispatch.
### 5. Runtime memory: `dispatch-state.json`
This file is the anti-duplication ledger for dispatch cycles.
Without it, the dispatcher would keep recycling the same issues every pass.
Because it is local-file state instead of centralized state, machine locality matters.
### 6. Health classifier: `get_pane_status()`
`fleet-status.py` does not know the true worker state.
It infers state from captured pane output using string heuristics.
So `get_pane_status()` is effectively a lightweight log classifier.
Its correctness depends on fragile output pattern matching.
---
## API Surface
The repo exposes CLI-level APIs rather than import-oriented libraries.
### Shell API
`fleet-launch.sh`
- `./fleet-launch.sh mac`
- `./fleet-launch.sh allegro`
- `./fleet-launch.sh both`
### Python CLIs
`fleet-christen.py`
- `python3 fleet-christen.py mac`
- `python3 fleet-christen.py allegro`
- `python3 fleet-christen.py both`
`fleet-dispatch.py`
- `python3 fleet-dispatch.py`
- `python3 fleet-dispatch.py --cycles 10 --interval 60`
- `python3 fleet-dispatch.py --machine mac`
`fleet-status.py`
- `python3 fleet-status.py`
- `python3 fleet-status.py --machine allegro`
### Internal function surface worth naming explicitly
`fleet-launch.sh`
- `parse_spec()`
- `launch_local()`
- `launch_remote()`
`fleet-christen.py`
- `send_keys()`
- `christen_window()`
- `christen_machine()`
- `christen_remote()`
`fleet-dispatch.py`
- `load_token()`
- `load_spec()`
- `load_state()`
- `save_state()`
- `get_issues()`
- `send_to_pane()`
- `comment_on_issue()`
- `build_prompt()`
- `dispatch_cycle()`
- `dispatch_council()`
`fleet-status.py`
- `get_pane_status()`
- `check_machine()`
These are the true API surface for future hardening and testing.
---
## Test Coverage Gaps
### Current state
Grounded from the pipeline dry run on `/tmp/burn-fleet-genome`:
- 0% estimated coverage
- untested modules called out by pipeline: `fleet-christen`, `fleet-dispatch`, `fleet-status`
- no checked-in automated test suite
### Critical paths with no tests
1. `send_to_pane()`
- local tmux command construction
- remote SSH command construction
- escaping of issue titles and prompts
- failure handling when tmux or SSH fails
2. `comment_on_issue()`
- Gitea comment formatting
- handling of non-200 responses so failures do not silently disappear
3. `get_issues()`
- PR filtering
- `SKIP_LABELS` filtering
- title-based meta filtering
- robustness when Gitea returns malformed or partial issue objects
4. `dispatch_cycle()`
- correct pooling by window
- deduplication via `dispatch-state.json`
- pane recycling behavior
- correctness when one repo has zero issues and another has many
5. `get_pane_status()`
- classification heuristics for working/idle/error/dead
- false positives from incidental strings like `error` in normal output
6. `fleet-launch.sh`
- parse correctness for pane counts
- layout creation behavior across first vs later windows
- remote script generation for Allegro
### Missing tests to generate next in the real target repo
If the goal is to harden `burn-fleet` itself, the first tests to add should be:
- `test_route_tables_cover_spec_windows`
- `test_send_to_pane_escapes_single_quotes_and_special_chars`
- `test_comment_on_issue_formats_machine_window_pane_body`
- `test_get_issues_skips_prs_and_meta_labels`
- `test_dispatch_cycle_persists_dispatch_state_once`
- `test_get_pane_status_classifies_spinner_vs_traceback_vs_empty`
These are the minimum critical-path tests.
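As one concrete example, the escaping test could start from a hypothetical quoting helper (`quote_for_send_keys` is invented here; the real escaping lives inside `send_to_pane()`):

```python
import shlex

def quote_for_send_keys(prompt: str) -> str:
    """Hypothetical helper: make an issue-derived prompt shell-safe."""
    return shlex.quote(prompt)

def test_send_to_pane_escapes_single_quotes_and_special_chars():
    hostile = "fix: it's broken; $(rm -rf /) `id`"
    quoted = quote_for_send_keys(hostile)
    # A POSIX shell must see exactly one literal word, with no live
    # substitutions surviving the quoting round-trip.
    assert shlex.split(quoted) == [hostile]

test_send_to_pane_escapes_single_quotes_and_special_chars()
```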
---
## Security Considerations
### 1. Command injection surface
`send_to_pane()` and the remote tmux/SSH command assembly are the biggest security surface.
Even though single quotes are escaped in prompts, this remains a command injection boundary because untrusted issue titles and repo metadata cross into shell commands.
This is why `command injection` is the right risk label for the repo.
The risk is not hypothetical; the repo is literally translating issue text into shell transport.
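A sketch of how that boundary can be hardened: pass list-form argv locally so no shell ever parses the payload, and `shlex.quote` every word before it crosses SSH. The function name and host below are illustrative, not the real implementation:

```python
import shlex
from typing import List, Optional

def send_keys_cmd(target: str, payload: str, ssh_host: Optional[str] = None) -> List[str]:
    """Build the argv for one tmux send-keys call, quoting untrusted text.

    Sketch only: send_to_pane() may assemble its commands differently,
    and the host used below is illustrative rather than the fleet's VPS.
    """
    local = ["tmux", "send-keys", "-t", target, payload, "Enter"]
    if ssh_host is None:
        # List-form argv handed straight to exec: no local shell parses it.
        return local
    # Over SSH the remote side runs a shell, so every word must be quoted.
    return ["ssh", ssh_host, " ".join(shlex.quote(w) for w in local)]

cmd = send_keys_cmd("BURN:CRUCIBLE.0", "/queue fix: it's #681", ssh_host="ops@203.0.113.7")
```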
### 2. Credential handling
The dispatcher uses a local token file for Gitea authentication.
That is a credential handling concern because:
- token locality is assumed
- file path and host assumptions are embedded into runtime code
- there is no retry / fallback / explicit missing-token UX beyond failure
### 3. SSH trust boundary
Remote pane control over `root@167.99.126.228` means the repo assumes a trusted SSH path to a root shell.
That is operationally powerful and dangerous.
A malformed remote command, stale known_hosts state, or wrong host mapping has fleet-wide consequences.
### 4. Runtime state tampering
`dispatch-state.json` is a local mutable state file with no locking, signing, or cross-machine reconciliation.
If it is corrupted or lost, deduplication semantics fail.
That can cause repeated dispatches or misleading status.
### 5. Live-forge mutation
`comment_on_issue()` mutates live issue threads on every dispatch cycle.
That means any bug in deduplication or routing will create visible comment spam on the forge.
### 6. Dependency risk
The repo depends on `requests` for Gitea API access but has no pinned dependency metadata or environment contract in-repo.
This is a small operational repo, but reproducibility is weak.
---
## Dependency Picture
### Runtime dependencies
- Python 3
- `requests`
- tmux
- SSH client
- SSH trust boundary to `root@167.99.126.228`
- access to a Gitea token file
### Implied environment dependencies
- active tmux sessions on Mac and Allegro
- SSH trust / connectivity to the VPS
- hermes available in pane environments
- Gitea reachable at `https://forge.alexanderwhitestone.com`
### Notably missing
- no `requirements.txt`
- no `pyproject.toml`
- no explicit test harness
- no schema validation for `fleet-spec.json`
---
## Performance Characteristics
For such a small repo, the performance question is not CPU time inside Python.
It is orchestration fan-out latency.
The main scaling costs are:
- repeated Gitea issue fetches across repos
- SSH round-trips to Allegro
- tmux pane fan-out across 100+ panes
- serialized `time.sleep(0.2)` dispatch staggering
This means the bottleneck is control-plane coordination, not computation.
The repo will scale until SSH / tmux / Gitea latency become dominant.
---
## Dead Code / Drift Risks
### 1. Spec vs route duplication
`fleet-spec.json` defines windows and lanes, while `fleet-dispatch.py` separately defines `MAC_ROUTE` and `ALLEGRO_ROUTE`.
That is the biggest drift risk.
A window can exist in the spec and be missing from a route table, or vice versa.
### 2. Runtime-generated files absent from repo contracts
`dispatch-state.json` is operationally critical but not described as a first-class contract in code.
The repo assumes it exists or can be created, but does not validate structure.
### 3. README drift risk
The README says "use fleet-christen.sh" in one place while the actual file is `fleet-christen.py`.
That is a small but real operator-footgun and a sign the human runbook can drift from the executable surface.
---
## Suggested Follow-up Work
1. Move repo-to-window routing into `fleet-spec.json` and derive `MAC_ROUTE` / `ALLEGRO_ROUTE` programmatically.
2. Add automated tests for `send_to_pane`, `get_issues`, `dispatch_cycle`, and `get_pane_status`.
3. Add a schema validator for `fleet-spec.json`.
4. Add explicit dependency metadata (`requirements.txt` or `pyproject.toml`).
5. Add dry-run / no-side-effect mode for dispatch and christening.
6. Add retry/backoff and error reporting around Gitea comments and SSH execution.
---
## Bottom Line
`burn-fleet` is a small repo with outsized operational leverage.
Its genome is simple:
- one declarative topology file
- four operational adapters
- one local runtime ledger
- many side effects across tmux, SSH, and Gitea
It already expresses the philosophy of narrative-driven infrastructure well.
What it lacks is not architecture.
What it lacks is hardening:
- tests around the dangerous paths
- centralization of duplicated routing truth
- stronger command / credential / runtime-state safeguards
That makes it a strong control-plane prototype and a weakly tested production surface.


@@ -1,290 +0,0 @@
#!/usr/bin/env python3
"""Codebase Test Generator — Fill Coverage Gaps (#667)."""
import ast
import os
import sys
import argparse
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple


@dataclass
class FunctionInfo:
    name: str
    module_path: str
    class_name: Optional[str] = None
    lineno: int = 0
    args: List[str] = field(default_factory=list)
    is_async: bool = False
    is_private: bool = False
    is_property: bool = False
    docstring: Optional[str] = None
    has_return: bool = False
    raises: List[str] = field(default_factory=list)
    decorators: List[str] = field(default_factory=list)

    @property
    def qualified_name(self):
        if self.class_name:
            return f"{self.class_name}.{self.name}"
        return self.name

    @property
    def test_name(self):
        safe_mod = self.module_path.replace("/", "_").replace(".py", "").replace("-", "_")
        safe_cls = self.class_name + "_" if self.class_name else ""
        return f"test_{safe_mod}_{safe_cls}{self.name}"


@dataclass
class CoverageGap:
    func: FunctionInfo
    reason: str
    test_priority: int


class SourceAnalyzer(ast.NodeVisitor):
    def __init__(self, module_path: str):
        self.module_path = module_path
        self.functions: List[FunctionInfo] = []
        self._class_stack: List[str] = []

    def visit_ClassDef(self, node):
        self._class_stack.append(node.name)
        self.generic_visit(node)
        self._class_stack.pop()

    def visit_FunctionDef(self, node):
        self._collect(node, False)
        self.generic_visit(node)

    def visit_AsyncFunctionDef(self, node):
        self._collect(node, True)
        self.generic_visit(node)

    def _collect(self, node, is_async):
        cls = self._class_stack[-1] if self._class_stack else None
        args = [a.arg for a in node.args.args if a.arg not in ("self", "cls")]
        has_ret = any(isinstance(c, ast.Return) and c.value for c in ast.walk(node))
        raises = []
        for c in ast.walk(node):
            if isinstance(c, ast.Raise) and c.exc:
                if isinstance(c.exc, ast.Call) and isinstance(c.exc.func, ast.Name):
                    raises.append(c.exc.func.id)
        decos = []
        for d in node.decorator_list:
            if isinstance(d, ast.Name):
                decos.append(d.id)
            elif isinstance(d, ast.Attribute):
                decos.append(d.attr)
        self.functions.append(FunctionInfo(
            name=node.name, module_path=self.module_path, class_name=cls,
            lineno=node.lineno, args=args, is_async=is_async,
            is_private=node.name.startswith("_") and not node.name.startswith("__"),
            is_property="property" in decos,
            docstring=ast.get_docstring(node), has_return=has_ret,
            raises=raises, decorators=decos))


def analyze_file(filepath, base_dir):
    module_path = os.path.relpath(filepath, base_dir)
    try:
        with open(filepath, "r", errors="replace") as f:
            tree = ast.parse(f.read(), filename=filepath)
    except (SyntaxError, UnicodeDecodeError):
        return []
    a = SourceAnalyzer(module_path)
    a.visit(tree)
    return a.functions


def find_source_files(source_dir):
    exclude = {"__pycache__", ".git", "venv", ".venv", "node_modules", ".tox", "build", "dist"}
    files = []
    for root, dirs, fs in os.walk(source_dir):
        dirs[:] = [d for d in dirs if d not in exclude and not d.startswith(".")]
        for f in fs:
            if f.endswith(".py") and f != "__init__.py" and not f.startswith("test_"):
                files.append(os.path.join(root, f))
    return sorted(files)


def find_existing_tests(test_dir):
    existing = set()
    for root, dirs, fs in os.walk(test_dir):
        for f in fs:
            if f.startswith("test_") and f.endswith(".py"):
                try:
                    with open(os.path.join(root, f)) as fh:
                        tree = ast.parse(fh.read())
                    for node in ast.walk(tree):
                        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
                            existing.add(node.name)
                except (SyntaxError, UnicodeDecodeError):
                    pass
    return existing


def identify_gaps(functions, existing_tests):
    gaps = []
    for func in functions:
        if func.name.startswith("__") and func.name != "__init__":
            continue
        # Substring-match against each test name, not the stringified set,
        # which could false-match across element boundaries.
        covered = any(func.name in t for t in existing_tests)
        if not covered:
            pri = 3 if func.is_private else (1 if (func.raises or func.has_return) else 2)
            gaps.append(CoverageGap(func=func, reason="no test found", test_priority=pri))
    gaps.sort(key=lambda g: (g.test_priority, g.func.module_path, g.func.name))
    return gaps


def generate_test(gap):
    func = gap.func
    lines = []
    lines.append("    # AUTO-GENERATED -- review before merging")
    lines.append(f"    # Source: {func.module_path}:{func.lineno}")
    lines.append(f"    # Function: {func.qualified_name}")
    lines.append("")
    mod_imp = func.module_path.replace("/", ".").replace("-", "_").replace(".py", "")
    call_args = []
    for a in func.args:
        if a in ("self", "cls"):
            continue
        if "path" in a or "file" in a or "dir" in a:
            call_args.append(f"{a}='/tmp/test'")
        elif "name" in a:
            call_args.append(f"{a}='test'")
        elif "id" in a or "key" in a:
            call_args.append(f"{a}='test_id'")
        elif "message" in a or "text" in a:
            call_args.append(f"{a}='test msg'")
        elif "count" in a or "num" in a or "size" in a:
            call_args.append(f"{a}=1")
        elif "flag" in a or "enabled" in a or "verbose" in a:
            call_args.append(f"{a}=False")
        else:
            call_args.append(f"{a}=None")
    args_str = ", ".join(call_args)
    if func.is_async:
        lines.append("    @pytest.mark.asyncio")
    # An async source function needs an async test for the asyncio marker to apply.
    def_kw = "async def" if func.is_async else "def"
    lines.append(f"    {def_kw} {func.test_name}(self):")
    lines.append(f'        """Test {func.qualified_name} -- auto-generated."""')
    if func.class_name:
        lines.append("        try:")
        lines.append(f"            from {mod_imp} import {func.class_name}")
        if func.is_private:
            lines.append("            pytest.skip('Private method')")
        elif func.is_property:
            lines.append(f"            obj = {func.class_name}()")
            lines.append(f"            _ = obj.{func.name}")
        elif func.raises:
            lines.append(f"            with pytest.raises(({', '.join(func.raises)})):")
            lines.append(f"                {func.class_name}().{func.name}({args_str})")
        else:
            lines.append(f"            obj = {func.class_name}()")
            lines.append(f"            result = obj.{func.name}({args_str})")
            if func.has_return:
                lines.append("            assert result is not None or result is None # Placeholder")
        lines.append("        except ImportError:")
        lines.append("            pytest.skip('Module not importable')")
    else:
        lines.append("        try:")
        lines.append(f"            from {mod_imp} import {func.name}")
        if func.is_private:
            lines.append("            pytest.skip('Private function')")
        elif func.raises:
            lines.append(f"            with pytest.raises(({', '.join(func.raises)})):")
            lines.append(f"                {func.name}({args_str})")
        else:
            lines.append(f"            result = {func.name}({args_str})")
            if func.has_return:
                lines.append("            assert result is not None or result is None # Placeholder")
        lines.append("        except ImportError:")
        lines.append("            pytest.skip('Module not importable')")
    return "\n".join(lines)


def generate_test_suite(gaps, max_tests=50):
    by_module = {}
    for gap in gaps[:max_tests]:
        by_module.setdefault(gap.func.module_path, []).append(gap)
    lines = []
    lines.append('"""Auto-generated test suite -- Codebase Genome (#667).')
    lines.append("")
    lines.append("Generated by scripts/codebase_test_generator.py")
    lines.append("Coverage gaps identified from AST analysis.")
    lines.append("")
    lines.append("These tests are starting points. Review before merging.")
    lines.append('"""')
    lines.append("")
    lines.append("import pytest")
    lines.append("from unittest.mock import MagicMock, patch")
    lines.append("")
    lines.append("")
    lines.append("# AUTO-GENERATED -- DO NOT EDIT WITHOUT REVIEW")
    for module, mgaps in sorted(by_module.items()):
        safe = module.replace("/", "_").replace(".py", "").replace("-", "_")
        cls_name = "".join(w.title() for w in safe.split("_"))
        lines.append("")
        lines.append(f"class Test{cls_name}Generated:")
        lines.append(f'    """Auto-generated tests for {module}."""')
        for gap in mgaps:
            lines.append("")
            lines.append(generate_test(gap))
    lines.append("")
    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Codebase Test Generator")
    parser.add_argument("--source", default=".")
    parser.add_argument("--output", default="tests/test_genome_generated.py")
    parser.add_argument("--max-tests", type=int, default=50)
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--include-private", action="store_true")
    args = parser.parse_args()
    source_dir = os.path.abspath(args.source)
    test_dir = os.path.join(source_dir, "tests")
    print(f"Scanning: {source_dir}")
    source_files = find_source_files(source_dir)
    print(f"Source files: {len(source_files)}")
    all_funcs = []
    for f in source_files:
        all_funcs.extend(analyze_file(f, source_dir))
    print(f"Functions/methods: {len(all_funcs)}")
    existing = find_existing_tests(test_dir)
    print(f"Existing tests: {len(existing)}")
    gaps = identify_gaps(all_funcs, existing)
    if not args.include_private:
        gaps = [g for g in gaps if not g.func.is_private]
    print(f"Coverage gaps: {len(gaps)}")
    by_pri = {1: 0, 2: 0, 3: 0}
    for g in gaps:
        by_pri[g.test_priority] += 1
    print(f"  High: {by_pri[1]}, Medium: {by_pri[2]}, Low: {by_pri[3]}")
    if args.dry_run:
        for g in gaps[:10]:
            print(f"  {g.func.module_path}:{g.func.lineno} {g.func.qualified_name}")
        return
    if gaps:
        # argparse stores --max-tests as max_tests; the old
        # hasattr(args, 'max-tests') dance was dead code.
        content = generate_test_suite(gaps, max_tests=args.max_tests)
        out = os.path.join(source_dir, args.output)
        os.makedirs(os.path.dirname(out), exist_ok=True)
        with open(out, "w") as f:
            f.write(content)
        print(f"Generated {min(len(gaps), args.max_tests)} tests -> {args.output}")
    else:
        print("No gaps found!")


if __name__ == "__main__":
    main()


@@ -0,0 +1,70 @@
from pathlib import Path

GENOME = Path('genomes/burn-fleet-GENOME.md')


def read_genome() -> str:
    assert GENOME.exists(), 'burn-fleet genome must exist at genomes/burn-fleet-GENOME.md'
    return GENOME.read_text(encoding='utf-8')


def test_genome_exists():
    assert GENOME.exists(), 'burn-fleet genome must exist at genomes/burn-fleet-GENOME.md'


def test_genome_has_required_sections():
    text = read_genome()
    for heading in [
        '# GENOME.md: burn-fleet',
        '## Project Overview',
        '## Architecture',
        '## Entry Points',
        '## Data Flow',
        '## Key Abstractions',
        '## API Surface',
        '## Test Coverage Gaps',
        '## Security Considerations',
    ]:
        assert heading in text


def test_genome_contains_mermaid_diagram():
    text = read_genome()
    assert '```mermaid' in text
    assert 'graph TD' in text or 'flowchart TD' in text


def test_genome_mentions_core_files_and_runtime_state():
    text = read_genome()
    for token in [
        'fleet-spec.json',
        'fleet-launch.sh',
        'fleet-christen.py',
        'fleet-dispatch.py',
        'fleet-status.py',
        'dispatch-state.json',
        'tmux',
        'ssh',
        'MAC_ROUTE',
        'ALLEGRO_ROUTE',
    ]:
        assert token in text


def test_genome_mentions_test_gap_and_risk_findings():
    text = read_genome()
    for token in [
        '0% estimated coverage',
        'send_to_pane',
        'comment_on_issue',
        'get_pane_status',
        'requests',
        'command injection',
        'credential handling',
    ]:
        assert token in text


def test_genome_is_substantial():
    text = read_genome()
    assert len(text) >= 6000


@@ -1,737 +0,0 @@
"""Auto-generated test suite -- Codebase Genome (#667).
Generated by scripts/codebase_test_generator.py
Coverage gaps identified from AST analysis.
These tests are starting points. Review before merging.
"""
import pytest
from unittest.mock import MagicMock, patch
# AUTO-GENERATED -- DO NOT EDIT WITHOUT REVIEW
class TestAngbandMcpServerGenerated:
"""Auto-generated tests for angband/mcp_server.py."""
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:319
# Function: call_tool
@pytest.mark.asyncio
def test_angband_mcp_server_call_tool(self):
"""Test call_tool -- auto-generated."""
try:
from angband.mcp_server import call_tool
result = call_tool(name='test', arguments=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:64
# Function: capture_screen
def test_angband_mcp_server_capture_screen(self):
"""Test capture_screen -- auto-generated."""
try:
from angband.mcp_server import capture_screen
result = capture_screen(lines=None, session_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:74
# Function: has_save
def test_angband_mcp_server_has_save(self):
"""Test has_save -- auto-generated."""
try:
from angband.mcp_server import has_save
result = has_save(user=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:234
# Function: keypress
def test_angband_mcp_server_keypress(self):
"""Test keypress -- auto-generated."""
try:
from angband.mcp_server import keypress
result = keypress(key='test_id', wait_ms=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:141
# Function: launch_game
def test_angband_mcp_server_launch_game(self):
"""Test launch_game -- auto-generated."""
try:
from angband.mcp_server import launch_game
result = launch_game(user=None, new_game=None, continue_splash=None, width='test_id', height=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:253
# Function: list_tools
@pytest.mark.asyncio
def test_angband_mcp_server_list_tools(self):
"""Test list_tools -- auto-generated."""
try:
from angband.mcp_server import list_tools
result = list_tools()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:130
# Function: maybe_continue_splash
def test_angband_mcp_server_maybe_continue_splash(self):
"""Test maybe_continue_splash -- auto-generated."""
try:
from angband.mcp_server import maybe_continue_splash
result = maybe_continue_splash(session_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:226
# Function: observe
def test_angband_mcp_server_observe(self):
"""Test observe -- auto-generated."""
try:
from angband.mcp_server import observe
result = observe(lines=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:57
# Function: pane_id
def test_angband_mcp_server_pane_id(self):
"""Test pane_id -- auto-generated."""
try:
from angband.mcp_server import pane_id
result = pane_id(session_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:108
# Function: send_key
def test_angband_mcp_server_send_key(self):
"""Test send_key -- auto-generated."""
try:
from angband.mcp_server import send_key
with pytest.raises((RuntimeError)):
send_key(key='test_id', session_name='test')
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:123
# Function: send_text
def test_angband_mcp_server_send_text(self):
"""Test send_text -- auto-generated."""
try:
from angband.mcp_server import send_text
with pytest.raises((RuntimeError)):
send_text(text='test msg', session_name='test')
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:53
# Function: session_exists
def test_angband_mcp_server_session_exists(self):
"""Test session_exists -- auto-generated."""
try:
from angband.mcp_server import session_exists
result = session_exists(session_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:203
# Function: stop_game
def test_angband_mcp_server_stop_game(self):
"""Test stop_game -- auto-generated."""
try:
from angband.mcp_server import stop_game
result = stop_game()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:46
# Function: tmux
def test_angband_mcp_server_tmux(self):
"""Test tmux -- auto-generated."""
try:
from angband.mcp_server import tmux
with pytest.raises(RuntimeError):
tmux(args=None, check=None)
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: angband/mcp_server.py:243
# Function: type_and_observe
def test_angband_mcp_server_type_and_observe(self):
"""Test type_and_observe -- auto-generated."""
try:
from angband.mcp_server import type_and_observe
result = type_and_observe(text='test msg', wait_ms=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
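# Reviewer sketch (not auto-generated): the try/except ImportError blocks above
# can be collapsed with pytest.importorskip, which skips the test when the import
# fails instead of requiring an explicit pytest.skip call. The stdlib module
# 'json' below stands in for a project module such as angband.mcp_server; the
# names here are illustrative only, not part of this repo.
def _import_or_skip_sketch():
    import pytest
    # Returns the imported module, or raises pytest.skip.Exception when missing
    mod = pytest.importorskip('json')
    return mod.loads('{"ok": true}')
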
class TestEvenniaTimmyWorldGameGenerated:
"""Auto-generated tests for evennia/timmy_world/game.py."""
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:495
# Function: ActionSystem.get_available_actions
def test_evennia_timmy_world_game_ActionSystem_get_available_actions(self):
"""Test ActionSystem.get_available_actions -- auto-generated."""
try:
from evennia.timmy_world.game import ActionSystem
obj = ActionSystem()
result = obj.get_available_actions(char_name='test', world=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:1485
# Function: PlayerInterface.get_available_actions
def test_evennia_timmy_world_game_PlayerInterface_get_available_actions(self):
"""Test PlayerInterface.get_available_actions -- auto-generated."""
try:
from evennia.timmy_world.game import PlayerInterface
obj = PlayerInterface()
result = obj.get_available_actions()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:55
# Function: get_narrative_phase
def test_evennia_timmy_world_game_get_narrative_phase(self):
"""Test get_narrative_phase -- auto-generated."""
try:
from evennia.timmy_world.game import get_narrative_phase
result = get_narrative_phase(tick=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:65
# Function: get_phase_transition_event
def test_evennia_timmy_world_game_get_phase_transition_event(self):
"""Test get_phase_transition_event -- auto-generated."""
try:
from evennia.timmy_world.game import get_phase_transition_event
result = get_phase_transition_event(old_phase=None, new_phase=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:347
# Function: World.get_room_desc
def test_evennia_timmy_world_game_World_get_room_desc(self):
"""Test World.get_room_desc -- auto-generated."""
try:
from evennia.timmy_world.game import World
obj = World()
result = obj.get_room_desc(room_name='test', char_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:1045
# Function: GameEngine.load_game
def test_evennia_timmy_world_game_GameEngine_load_game(self):
"""Test GameEngine.load_game -- auto-generated."""
try:
from evennia.timmy_world.game import GameEngine
obj = GameEngine()
result = obj.load_game()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:556
# Function: NPCAI.make_choice
def test_evennia_timmy_world_game_NPCAI_make_choice(self):
"""Test NPCAI.make_choice -- auto-generated."""
try:
from evennia.timmy_world.game import NPCAI
obj = NPCAI()
result = obj.make_choice(char_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:1454
# Function: GameEngine.play_turn
def test_evennia_timmy_world_game_GameEngine_play_turn(self):
"""Test GameEngine.play_turn -- auto-generated."""
try:
from evennia.timmy_world.game import GameEngine
obj = GameEngine()
result = obj.play_turn(action=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/game.py:1076
# Function: GameEngine.run_tick
def test_evennia_timmy_world_game_GameEngine_run_tick(self):
"""Test GameEngine.run_tick -- auto-generated."""
try:
from evennia.timmy_world.game import GameEngine
obj = GameEngine()
result = obj.run_tick(timmy_action=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvenniaTimmyWorldServerConfWebPluginsGenerated:
"""Auto-generated tests for evennia/timmy_world/server/conf/web_plugins.py."""
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/server/conf/web_plugins.py:31
# Function: at_webproxy_root_creation
def test_evennia_timmy_world_server_conf_web_plugins_at_webproxy_root_creation(self):
"""Test at_webproxy_root_creation -- auto-generated."""
try:
from evennia.timmy_world.server.conf.web_plugins import at_webproxy_root_creation
result = at_webproxy_root_creation(web_root=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/server/conf/web_plugins.py:6
# Function: at_webserver_root_creation
def test_evennia_timmy_world_server_conf_web_plugins_at_webserver_root_creation(self):
"""Test at_webserver_root_creation -- auto-generated."""
try:
from evennia.timmy_world.server.conf.web_plugins import at_webserver_root_creation
result = at_webserver_root_creation(web_root=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvenniaTimmyWorldWorldGameGenerated:
"""Auto-generated tests for evennia/timmy_world/world/game.py."""
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:400
# Function: ActionSystem.get_available_actions
def test_evennia_timmy_world_world_game_ActionSystem_get_available_actions(self):
"""Test ActionSystem.get_available_actions -- auto-generated."""
try:
from evennia.timmy_world.world.game import ActionSystem
obj = ActionSystem()
result = obj.get_available_actions(char_name='test', world=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:1289
# Function: PlayerInterface.get_available_actions
def test_evennia_timmy_world_world_game_PlayerInterface_get_available_actions(self):
"""Test PlayerInterface.get_available_actions -- auto-generated."""
try:
from evennia.timmy_world.world.game import PlayerInterface
obj = PlayerInterface()
result = obj.get_available_actions()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:254
# Function: World.get_room_desc
def test_evennia_timmy_world_world_game_World_get_room_desc(self):
"""Test World.get_room_desc -- auto-generated."""
try:
from evennia.timmy_world.world.game import World
obj = World()
result = obj.get_room_desc(room_name='test', char_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:880
# Function: GameEngine.load_game
def test_evennia_timmy_world_world_game_GameEngine_load_game(self):
"""Test GameEngine.load_game -- auto-generated."""
try:
from evennia.timmy_world.world.game import GameEngine
obj = GameEngine()
result = obj.load_game()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:461
# Function: NPCAI.make_choice
def test_evennia_timmy_world_world_game_NPCAI_make_choice(self):
"""Test NPCAI.make_choice -- auto-generated."""
try:
from evennia.timmy_world.world.game import NPCAI
obj = NPCAI()
result = obj.make_choice(char_name='test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:1258
# Function: GameEngine.play_turn
def test_evennia_timmy_world_world_game_GameEngine_play_turn(self):
"""Test GameEngine.play_turn -- auto-generated."""
try:
from evennia.timmy_world.world.game import GameEngine
obj = GameEngine()
result = obj.play_turn(action=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:911
# Function: GameEngine.run_tick
def test_evennia_timmy_world_world_game_GameEngine_run_tick(self):
"""Test GameEngine.run_tick -- auto-generated."""
try:
from evennia.timmy_world.world.game import GameEngine
obj = GameEngine()
result = obj.run_tick(timmy_action=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia/timmy_world/world/game.py:749
# Function: DialogueSystem.select
def test_evennia_timmy_world_world_game_DialogueSystem_select(self):
"""Test DialogueSystem.select -- auto-generated."""
try:
from evennia.timmy_world.world.game import DialogueSystem
obj = DialogueSystem()
result = obj.select(char_name='test', listener=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvenniaToolsLayoutGenerated:
"""Auto-generated tests for evennia_tools/layout.py."""
# AUTO-GENERATED -- review before merging
# Source: evennia_tools/layout.py:58
# Function: grouped_exits
def test_evennia_tools_layout_grouped_exits(self):
"""Test grouped_exits -- auto-generated."""
try:
from evennia_tools.layout import grouped_exits
result = grouped_exits()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia_tools/layout.py:54
# Function: room_keys
def test_evennia_tools_layout_room_keys(self):
"""Test room_keys -- auto-generated."""
try:
from evennia_tools.layout import room_keys
result = room_keys()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvenniaToolsTelemetryGenerated:
"""Auto-generated tests for evennia_tools/telemetry.py."""
# AUTO-GENERATED -- review before merging
# Source: evennia_tools/telemetry.py:8
# Function: telemetry_dir
def test_evennia_tools_telemetry_telemetry_dir(self):
"""Test telemetry_dir -- auto-generated."""
try:
from evennia_tools.telemetry import telemetry_dir
result = telemetry_dir(base_dir='/tmp/test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvenniaToolsTrainingGenerated:
"""Auto-generated tests for evennia_tools/training.py."""
# AUTO-GENERATED -- review before merging
# Source: evennia_tools/training.py:18
# Function: example_eval_path
def test_evennia_tools_training_example_eval_path(self):
"""Test example_eval_path -- auto-generated."""
try:
from evennia_tools.training import example_eval_path
result = example_eval_path(repo_root=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: evennia_tools/training.py:14
# Function: example_trace_path
def test_evennia_tools_training_example_trace_path(self):
"""Test example_trace_path -- auto-generated."""
try:
from evennia_tools.training import example_trace_path
result = example_trace_path(repo_root=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvolutionBitcoinScripterGenerated:
"""Auto-generated tests for evolution/bitcoin_scripter.py."""
# AUTO-GENERATED -- review before merging
# Source: evolution/bitcoin_scripter.py:18
# Function: BitcoinScripter.generate_script
def test_evolution_bitcoin_scripter_BitcoinScripter_generate_script(self):
"""Test BitcoinScripter.generate_script -- auto-generated."""
try:
from evolution.bitcoin_scripter import BitcoinScripter
obj = BitcoinScripter()
result = obj.generate_script(requirements=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvolutionLightningClientGenerated:
"""Auto-generated tests for evolution/lightning_client.py."""
# AUTO-GENERATED -- review before merging
# Source: evolution/lightning_client.py:18
# Function: LightningClient.plan_payment_route
def test_evolution_lightning_client_LightningClient_plan_payment_route(self):
"""Test LightningClient.plan_payment_route -- auto-generated."""
try:
from evolution.lightning_client import LightningClient
obj = LightningClient()
result = obj.plan_payment_route(destination=None, amount_sats=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestEvolutionSovereignAccountantGenerated:
"""Auto-generated tests for evolution/sovereign_accountant.py."""
# AUTO-GENERATED -- review before merging
# Source: evolution/sovereign_accountant.py:17
# Function: SovereignAccountant.generate_financial_report
def test_evolution_sovereign_accountant_SovereignAccountant_generate_financial_report(self):
"""Test SovereignAccountant.generate_financial_report -- auto-generated."""
try:
from evolution.sovereign_accountant import SovereignAccountant
obj = SovereignAccountant()
result = obj.generate_financial_report(transaction_history=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestInfrastructureTimmyBridgeClientTimmyClientGenerated:
"""Auto-generated tests for infrastructure/timmy-bridge/client/timmy_client.py."""
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/client/timmy_client.py:108
# Function: TimmyClient.create_artifact
def test_infrastructure_timmy_bridge_client_timmy_client_TimmyClient_create_artifact(self):
"""Test TimmyClient.create_artifact -- auto-generated."""
try:
from infrastructure.timmy_bridge.client.timmy_client import TimmyClient
obj = TimmyClient()
result = obj.create_artifact()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/client/timmy_client.py:167
# Function: TimmyClient.create_event
def test_infrastructure_timmy_bridge_client_timmy_client_TimmyClient_create_event(self):
"""Test TimmyClient.create_event -- auto-generated."""
try:
from infrastructure.timmy_bridge.client.timmy_client import TimmyClient
obj = TimmyClient()
result = obj.create_event(kind=None, content=None, tags=None)
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/client/timmy_client.py:74
# Function: TimmyClient.generate_observation
def test_infrastructure_timmy_bridge_client_timmy_client_TimmyClient_generate_observation(self):
"""Test TimmyClient.generate_observation -- auto-generated."""
try:
from infrastructure.timmy_bridge.client.timmy_client import TimmyClient
obj = TimmyClient()
result = obj.generate_observation()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
class TestInfrastructureTimmyBridgeMlxMlxIntegrationGenerated:
"""Auto-generated tests for infrastructure/timmy-bridge/mlx/mlx_integration.py."""
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/mlx/mlx_integration.py:122
# Function: MLXInference.available
def test_infrastructure_timmy_bridge_mlx_mlx_integration_MLXInference_available(self):
"""Test MLXInference.available -- auto-generated."""
try:
from infrastructure.timmy_bridge.mlx.mlx_integration import MLXInference
obj = MLXInference()
_ = obj.available
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/mlx/mlx_integration.py:125
# Function: MLXInference.get_stats
def test_infrastructure_timmy_bridge_mlx_mlx_integration_MLXInference_get_stats(self):
"""Test MLXInference.get_stats -- auto-generated."""
try:
from infrastructure.timmy_bridge.mlx.mlx_integration import MLXInference
obj = MLXInference()
result = obj.get_stats()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/mlx/mlx_integration.py:30
# Function: MLXInference.load_model
def test_infrastructure_timmy_bridge_mlx_mlx_integration_MLXInference_load_model(self):
"""Test MLXInference.load_model -- auto-generated."""
try:
from infrastructure.timmy_bridge.mlx.mlx_integration import MLXInference
obj = MLXInference()
result = obj.load_model(model_path='/tmp/test')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/mlx/mlx_integration.py:93
# Function: MLXInference.reflect
def test_infrastructure_timmy_bridge_mlx_mlx_integration_MLXInference_reflect(self):
"""Test MLXInference.reflect -- auto-generated."""
try:
from infrastructure.timmy_bridge.mlx.mlx_integration import MLXInference
obj = MLXInference()
result = obj.reflect()
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
# AUTO-GENERATED -- review before merging
# Source: infrastructure/timmy-bridge/mlx/mlx_integration.py:108
# Function: MLXInference.respond_to
def test_infrastructure_timmy_bridge_mlx_mlx_integration_MLXInference_respond_to(self):
"""Test MLXInference.respond_to -- auto-generated."""
try:
from infrastructure.timmy_bridge.mlx.mlx_integration import MLXInference
obj = MLXInference()
result = obj.respond_to(message='test msg', context='test msg')
assert result is not None or result is None # Placeholder
except ImportError:
pytest.skip('Module not importable')
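# Reviewer sketch (not auto-generated): the generated placeholder
# "assert result is not None or result is None" is a tautology and can never
# fail, so these tests only exercise importability. A stricter placeholder
# asserts a concrete invariant on the return value instead. 'has_payload' and
# the non-empty-dict shape are illustrative assumptions, not part of this repo;
# each generated test should substitute the invariant its function actually
# guarantees.
def has_payload(result):
    """Example invariant check: result must be a non-empty dict."""
    return isinstance(result, dict) and len(result) > 0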