# Compare commits

2 commits: `fix/520` ... `step35/973`

Commits: `414ab970e1`, `1fa6c3bad1`

**SOUL.md** (20 lines changed)
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.

---

## What Honesty Requires

These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.

The following are not optional. Any implementation that calls itself Timmy must build them.

**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.

**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.

**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.

**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.

**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
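A minimal sketch of what such an append-only audit record could look like as a local JSONL log. The field names and helpers here are illustrative assumptions, not the actual `AuditTrail` API in this repository:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class AuditRecord:
    """One logged response: inputs, sources consulted, confidence assessed.

    Illustrative only; the real AuditTrail/AuditEntry classes may differ.
    """
    prompt: str
    response: str
    sources: list
    confidence: str  # e.g. "high" | "medium" | "low" | "unknown"
    timestamp: str = ""


def append_audit_record(log_path: Path, record: AuditRecord) -> None:
    # Append-only JSONL: one record per line, never rewritten in place.
    record.timestamp = record.timestamp or datetime.now(timezone.utc).isoformat()
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")


def read_audit_log(log_path: Path) -> list:
    # Tracing "why did I say that?" means reading the log back in order.
    return [json.loads(line) for line in log_path.read_text(encoding="utf-8").splitlines() if line]
```

Append-only matters here: a trail that can be rewritten is not a trail.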

**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.

---

## For the Audio Overview

If you are listening to this as a NotebookLM Audio Overview:
**docs/integration/provision-core.md** (new file, 124 lines)
@@ -0,0 +1,124 @@
# Provision-core Integration

## Overview

[provision-core](https://github.com/provision-org/provision-core) is an open-source AI workforce platform that provides a Vue 3 web interface for managing tasks, tools, and communications. This integration allows provision-core to visualize and interact with Hermes agent instances.

## Quick Start

### Prerequisites

- Node.js 22+ and npm
- A running Hermes agent instance with its API accessible at `http://localhost:8642`
- (Optional) Docker, if using containerized deployment

### Installation

Run the setup script:

```bash
./scripts/setup-provision-core.sh
```

This will:

- Clone provision-core into `web/provision-core/`
- Install npm dependencies
- Build assets

### Running provision-core

```bash
cd web/provision-core
npm run dev
```

Open **http://localhost:8000** in your browser.

### Verification

Once provision-core is running:

1. **Task board**: should display current Hermes tasks (if any are active)
2. **Tool launcher**: execute a simple read-only tool (e.g., `date`) through the UI and verify that output appears
3. **Email viewer**: shows the last 3 Hermes notification messages (if any)

> **Note**: Full integration depends on the Hermes harness adapter being enabled. See "Hermes Adapter" below.

## Hermes API CORS Configuration

To allow provision-core's frontend (running on `http://localhost:8000`) to make API calls to Hermes, CORS must be enabled on the Hermes gateway.

Edit your Hermes configuration (`~/.hermes/config.yaml` or the gateway config) and add:

```yaml
gateway:
  cors:
    enabled: true
    allowed_origins:
      - http://localhost:8000
      - http://127.0.0.1:8000
    allowed_methods:
      - GET
      - POST
      - PUT
      - DELETE
      - OPTIONS
    allowed_headers:
      - Authorization
      - Content-Type
```

Then restart the Hermes gateway:

```bash
# If using systemd
sudo systemctl restart timmy-agent

# Or restart manually
pkill -f "gateway.run" || true
# The agent will restart via systemd or your process manager
```
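Conceptually, `allowed_origins` drives a per-request check that a CORS middleware performs against the browser's `Origin` header. A standalone sketch of that check (not the actual Hermes gateway code; names are illustrative):

```python
# Sketch of the origin check implied by the CORS config above.
# The real Hermes gateway middleware may differ.

ALLOWED_ORIGINS = {"http://localhost:8000", "http://127.0.0.1:8000"}


def cors_headers_for(origin: str) -> dict:
    """Return CORS response headers for a request Origin, or {} if denied."""
    if origin not in ALLOWED_ORIGINS:
        # No Access-Control-Allow-* headers: the browser blocks the response.
        return {}
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
        "Access-Control-Allow-Headers": "Authorization, Content-Type",
    }
```

When the origin is not whitelisted, the gateway simply omits the `Access-Control-Allow-*` headers, which surfaces in the browser as the "has been blocked" error covered under Troubleshooting.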

## Hermes Adapter (Task Board Integration)

The task board, tool launcher, and email viewer require a Hermes adapter within provision-core. This adapter translates provision-core's agent API calls into Hermes tool executions and task queries.

**Status**: Adapter implementation pending. See [#974] for tracking the Hermes harness plugin.

In the meantime, provision-core can run in a limited mode: the UI will load, but task data will be empty until the adapter is installed.
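Until the adapter lands, its role can be pictured as a thin translation layer. A hypothetical sketch — the class name, endpoint paths, and payload shapes are all assumptions for illustration, not an existing API:

```python
from typing import Any, Callable, Optional

# Hypothetical adapter shape; nothing here exists yet (see the tracking issue).


class HermesAdapter:
    """Translates provision-core agent API calls into Hermes requests.

    The transport callable is injected so the URL-mapping logic can be
    exercised without a running gateway.
    """

    def __init__(self, api_url: str = "http://localhost:8642",
                 transport: Optional[Callable[[str, str, Any], Any]] = None):
        self.api_url = api_url.rstrip("/")
        self.transport = transport

    def list_tasks(self) -> Any:
        # Task board: fetch current Hermes tasks.
        return self.transport("GET", f"{self.api_url}/api/tasks", None)

    def run_tool(self, name: str, args: Optional[list] = None) -> Any:
        # Tool launcher: translate a UI action into a Hermes tool execution.
        return self.transport("POST", f"{self.api_url}/api/tools/{name}", {"args": args or []})
```

Injecting the transport keeps the adapter testable in isolation, which matters while the real gateway endpoints are still being defined.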

## Troubleshooting

### CORS errors in browser console

If you see errors like `Access to fetch at 'http://localhost:8642' from origin 'http://localhost:8000' has been blocked`:

1. Verify the CORS section above is in your Hermes config
2. Confirm the Hermes gateway has restarted
3. Check gateway logs: `journalctl -u timmy-agent -f`

### provision-core fails to start (npm install errors)

- Ensure Node.js 22+ is installed: `node --version`
- Clear the npm cache: `npm cache clean --force`
- Delete `node_modules` and retry: `rm -rf node_modules package-lock.json && npm install`

### Cannot reach Hermes API

- Verify the Hermes gateway is running: `lsof -iTCP:8642 -sTCP:LISTEN`
- Test the API directly: `curl http://localhost:8642/api/status` (or the appropriate endpoint)
- If using a custom port, update provision-core's `.env` file:

  ```
  HERMES_API_URL=http://localhost:<port>
  ```

## Files Added

- `scripts/setup-provision-core.sh` — Automated setup script
- `docs/integration/provision-core.md` — This documentation

## References

- provision-core upstream: https://github.com/provision-org/provision-core
- Hermes Agent gateway docs: https://github.com/NousResearch/hermes-agent/tree/main/gateway
- Original issue: Timmy_Foundation/timmy-home#973
**scripts/fleet_cost_report.py** (deleted, 245 lines)
@@ -1,245 +0,0 @@
```python
#!/usr/bin/env python3
"""Fleet cost report generator.

Reads Timmy's sovereignty metrics database and estimates paid API spend by
agent/provider lane. Default output targets the local timmy-config reports
folder so the cost report can be filed from the sidecar repo.
"""

from __future__ import annotations

import argparse
import sqlite3
from datetime import datetime, timedelta
from pathlib import Path
from typing import Iterable

DB_PATH = Path.home() / ".timmy" / "metrics" / "model_metrics.db"


AGENT_LANES = (
    {
        "agent": "Timmy Cloud Lane",
        "provider": "OpenRouter",
        "patterns": ("openrouter/", "google/", "deepseek/", "x-ai/", "mistral/"),
        "notes": "Cloud fallback and external reasoning routed through OpenRouter-compatible lanes.",
    },
    {
        "agent": "Ezra",
        "provider": "Anthropic",
        "patterns": ("claude-", "anthropic/claude"),
        "notes": "Archivist / long-form reasoning house on Claude-family models.",
    },
    {
        "agent": "Bezalel",
        "provider": "OpenAI",
        "patterns": ("gpt-", "openai/", "codex"),
        "notes": "Forge / implementation house on Codex/OpenAI-backed execution lanes.",
    },
    {
        "agent": "Allegro",
        "provider": "Kimi / Moonshot",
        "patterns": ("kimi", "moonshot"),
        "notes": "Tempo-and-dispatch house on Kimi / Moonshot direct API lanes.",
    },
)


def default_report_path(report_date: str | None = None) -> Path:
    if report_date is None:
        report_date = datetime.now().strftime("%Y-%m-%d")
    return Path.home() / "code" / "timmy-config" / "reports" / "production" / f"{report_date}-fleet-cost-report.md"


def match_lane(model: str) -> dict | None:
    lowered = (model or "").lower()
    for lane in AGENT_LANES:
        if any(pattern in lowered for pattern in lane["patterns"]):
            return lane
    return None


def load_cost_rows(days: int = 30, db_path: Path = DB_PATH) -> list[tuple[str, int, int, int, float]]:
    if not db_path.exists():
        return []
    cutoff = (datetime.now() - timedelta(days=days)).timestamp()
    with sqlite3.connect(str(db_path)) as conn:
        rows = conn.execute(
            """
            SELECT model, SUM(sessions), SUM(messages), SUM(tool_calls), SUM(est_cost_usd)
            FROM session_stats
            WHERE timestamp > ? AND is_local = 0
            GROUP BY model
            ORDER BY SUM(est_cost_usd) DESC, model ASC
            """,
            (cutoff,),
        ).fetchall()
    return [
        (model, int(sessions or 0), int(messages or 0), int(tool_calls or 0), float(cost or 0.0))
        for model, sessions, messages, tool_calls, cost in rows
    ]


def summarize_rows(rows: Iterable[tuple[str, int, int, int, float]], days: int = 30) -> dict:
    rows = list(rows)
    agents: dict[str, dict] = {}
    providers_seen: set[str] = set()
    inventory = [
        {
            "agent": lane["agent"],
            "provider": lane["provider"],
            "notes": lane["notes"],
        }
        for lane in AGENT_LANES
    ]

    for lane in AGENT_LANES:
        agents[lane["agent"]] = {
            "provider": lane["provider"],
            "models": [],
            "sessions": 0,
            "messages": 0,
            "tool_calls": 0,
            "monthly_cost_usd": 0.0,
            "daily_cost_usd": 0.0,
            "notes": lane["notes"],
        }

    unassigned = {
        "provider": "Unassigned",
        "models": [],
        "sessions": 0,
        "messages": 0,
        "tool_calls": 0,
        "monthly_cost_usd": 0.0,
        "daily_cost_usd": 0.0,
        "notes": "Observed paid-model spend not yet mapped to a named wizard house.",
    }

    for model, sessions, messages, tool_calls, monthly_cost in rows:
        lane = match_lane(model)
        if lane is None:
            bucket = unassigned
        else:
            bucket = agents[lane["agent"]]
            providers_seen.add(lane["provider"])
        bucket["models"].append(
            {
                "model": model,
                "sessions": sessions,
                "messages": messages,
                "tool_calls": tool_calls,
                "monthly_cost_usd": round(monthly_cost, 4),
            }
        )
        bucket["sessions"] += sessions
        bucket["messages"] += messages
        bucket["tool_calls"] += tool_calls
        bucket["monthly_cost_usd"] += monthly_cost

    for bucket in list(agents.values()) + [unassigned]:
        bucket["monthly_cost_usd"] = round(bucket["monthly_cost_usd"], 4)
        bucket["daily_cost_usd"] = round(bucket["monthly_cost_usd"] / max(days, 1), 4)

    if unassigned["models"]:
        agents["Unassigned"] = unassigned
        providers_seen.add("Unassigned")

    total_monthly = round(sum(item["monthly_cost_usd"] for item in agents.values()), 4)
    total_daily = round(sum(item["daily_cost_usd"] for item in agents.values()), 4)

    provider_order = sorted(providers_seen)
    if "Unassigned" in provider_order:
        provider_order = [p for p in provider_order if p != "Unassigned"] + ["Unassigned"]

    return {
        "days": days,
        "providers": provider_order,
        "inventory": inventory,
        "agents": agents,
        "total_monthly_cost_usd": total_monthly,
        "total_daily_cost_usd": total_daily,
    }


def render_markdown(summary: dict, report_date: str | None = None) -> str:
    if report_date is None:
        report_date = datetime.now().strftime("%Y-%m-%d")
    lines = [
        f"# Fleet Cost Report — {report_date}",
        "",
        f"Window: last {summary['days']} days of paid-model session stats from `~/.timmy/metrics/model_metrics.db`.",
        "",
        "## Paid API inventory",
        "",
        "| Agent | Provider | Notes |",
        "| --- | --- | --- |",
    ]
    for item in summary["inventory"]:
        lines.append(f"| {item['agent']} | {item['provider']} | {item['notes']} |")

    lines.extend(
        [
            "",
            "## Estimated cost per agent per day",
            "",
            "| Agent | Provider | Daily cost | Monthly estimate | Sessions | Messages | Tool calls |",
            "| --- | --- | ---: | ---: | ---: | ---: | ---: |",
        ]
    )
    for agent, data in summary["agents"].items():
        lines.append(
            f"| {agent} | {data['provider']} | ${data['daily_cost_usd']:.2f} | ${data['monthly_cost_usd']:.2f} | {data['sessions']} | {data['messages']} | {data['tool_calls']} |"
        )

    lines.extend(
        [
            "",
            f"Total estimated daily paid spend: ${summary['total_daily_cost_usd']:.2f}",
            f"Total estimated monthly paid spend: ${summary['total_monthly_cost_usd']:.2f}",
            "",
            "## Model evidence",
            "",
        ]
    )
    for agent, data in summary["agents"].items():
        lines.append(f"### {agent}")
        if not data["models"]:
            lines.append("- No paid-model sessions observed in the selected window.")
        else:
            for model in data["models"]:
                lines.append(
                    f"- `{model['model']}` — {model['sessions']} sessions / {model['messages']} messages / {model['tool_calls']} tool calls / ${model['monthly_cost_usd']:.2f} est."
                )
        lines.append("")

    lines.append("Generated by `python3 scripts/fleet_cost_report.py --days 30`. Default output path targets the local timmy-config report lane.")
    lines.append("")
    return "\n".join(lines)


def write_report(output_path: Path, summary: dict, report_date: str | None = None) -> Path:
    output_path.parent.mkdir(parents=True, exist_ok=True)
    output_path.write_text(render_markdown(summary, report_date=report_date), encoding="utf-8")
    return output_path


def main() -> int:
    parser = argparse.ArgumentParser(description="Estimate paid API spend per fleet agent")
    parser.add_argument("--days", type=int, default=30, help="Lookback window in days")
    parser.add_argument("--db-path", default=str(DB_PATH), help="Path to model_metrics.db")
    parser.add_argument("--output", help="Optional markdown output path")
    parser.add_argument("--date", help="Override report date (YYYY-MM-DD)")
    args = parser.parse_args()

    rows = load_cost_rows(days=args.days, db_path=Path(args.db_path).expanduser())
    summary = summarize_rows(rows, days=args.days)
    report_date = args.date or datetime.now().strftime("%Y-%m-%d")
    output_path = Path(args.output).expanduser() if args.output else default_report_path(report_date)
    write_report(output_path, summary, report_date=report_date)
    print(output_path)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```
**scripts/setup-provision-core.sh** (new executable file, 69 lines)
@@ -0,0 +1,69 @@
```bash
#!/usr/bin/env bash
set -euo pipefail

# provision-core integration setup script for timmy-home
# This script clones and configures provision-core to work with Hermes

# Resolve the script's directory
SCRIPT_DIR="$(dirname "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(cd "$SCRIPT_DIR" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
PROVISION_DIR="${REPO_ROOT}/web/provision-core"

echo "=== provision-core Setup ==="
echo "Target directory: $PROVISION_DIR"

# Clone provision-core if not already present
if [ ! -d "$PROVISION_DIR/.git" ]; then
  echo "Cloning provision-core..."
  git clone https://github.com/provision-org/provision-core.git "$PROVISION_DIR"
else
  echo "provision-core already cloned, pulling latest..."
  (cd "$PROVISION_DIR" && git pull origin main)
fi

# Install dependencies
echo "Installing npm dependencies..."
cd "$PROVISION_DIR"
npm install

# Build assets
echo "Building assets..."
npm run build

echo ""
echo "=== Setup complete ==="
echo ""
echo "To run provision-core:"
echo "  cd $PROVISION_DIR"
echo "  npm run dev"
echo ""
echo "Then open http://localhost:8000 in your browser."
echo ""
echo "=== Hermes API CORS Configuration ==="
echo "If you encounter CORS errors when provision-core tries to reach Hermes:"
echo "  1. Locate your Hermes gateway configuration (~/.hermes/config.yaml or gateway config)"
echo "  2. Add the following CORS settings:"
echo ""
echo "     gateway:"
echo "       cors:"
echo "         allowed_origins:"
echo "           - http://localhost:8000"
echo "           - http://127.0.0.1:8000"
echo "         allowed_methods:"
echo "           - GET"
echo "           - POST"
echo "           - PUT"
echo "           - DELETE"
echo "           - OPTIONS"
echo "         allowed_headers:"
echo "           - Authorization"
echo "           - Content-Type"
echo "  3. Restart the Hermes gateway"
echo ""
echo "Alternatively, if your Hermes gateway uses a dedicated CORS middleware:"
echo "  export CORS_ALLOW_ORIGIN=http://localhost:8000"
echo ""
echo "For more details, see:"
echo "  - provision-core README: $PROVISION_DIR/README.md"
echo "  - Hermes config: ~/.hermes/config.yaml"
```
**src/timmy/__init__.py**
@@ -1 +1,12 @@
```python
# Timmy core module

from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry

__all__ = [
    "ClaimAnnotator",
    "AnnotatedResponse",
    "Claim",
    "AuditTrail",
    "AuditEntry",
]
```
**src/timmy/claim_annotator.py** (new file, 156 lines)
@@ -0,0 +1,156 @@
```python
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System

SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""

import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict


@dataclass
class Claim:
    """A single claim in a response, annotated with source type."""

    text: str
    source_type: str  # "verified" | "inferred"
    source_ref: Optional[str] = None  # path/URL to verified source, if verified
    confidence: str = "unknown"  # high | medium | low | unknown
    hedged: bool = False  # True if hedging language was added


@dataclass
class AnnotatedResponse:
    """Full response with annotated claims and rendered output."""

    original_text: str
    claims: List[Claim] = field(default_factory=list)
    rendered_text: str = ""
    has_unverified: bool = False  # True if any inferred claims without hedging


class ClaimAnnotator:
    """Annotates response claims with source distinction and hedging."""

    # Hedging phrases to prepend to inferred claims if not already present
    HEDGE_PREFIXES = [
        "I think ",
        "I believe ",
        "It seems ",
        "Probably ",
        "Likely ",
    ]

    def __init__(self, default_confidence: str = "unknown"):
        self.default_confidence = default_confidence

    def annotate_claims(
        self,
        response_text: str,
        verified_sources: Optional[Dict[str, str]] = None,
    ) -> AnnotatedResponse:
        """
        Annotate claims in a response text.

        Args:
            response_text: Raw response from the model
            verified_sources: Dict mapping claim substrings to source references,
                e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}

        Returns:
            AnnotatedResponse with claims marked and rendered text
        """
        verified_sources = verified_sources or {}
        claims = []
        has_unverified = False

        # Simple sentence splitting (naive, but sufficient for MVP)
        sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]

        for sent in sentences:
            # Check if sentence is a claim we can verify
            matched_source = None
            for claim_substr, source_ref in verified_sources.items():
                if claim_substr.lower() in sent.lower():
                    matched_source = source_ref
                    break

            if matched_source:
                # Verified claim
                claim = Claim(
                    text=sent,
                    source_type="verified",
                    source_ref=matched_source,
                    confidence="high",
                    hedged=False,
                )
            else:
                # Inferred claim (pattern-matched)
                claim = Claim(
                    text=sent,
                    source_type="inferred",
                    confidence=self.default_confidence,
                    hedged=self._has_hedge(sent),
                )
                if not claim.hedged:
                    has_unverified = True

            claims.append(claim)

        # Render the annotated response
        rendered = self._render_response(claims)

        return AnnotatedResponse(
            original_text=response_text,
            claims=claims,
            rendered_text=rendered,
            has_unverified=has_unverified,
        )

    def _has_hedge(self, text: str) -> bool:
        """Check if text already contains hedging language."""
        text_lower = text.lower()
        for prefix in self.HEDGE_PREFIXES:
            if text_lower.startswith(prefix.lower()):
                return True
        # Also check for inline hedges
        hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
        return any(word in text_lower for word in hedge_words)

    def _render_response(self, claims: List[Claim]) -> str:
        """
        Render response with source distinction markers.

        Verified claims: [V] claim text [source: ref]
        Inferred claims: [I] claim text (with hedging added if missing)
        """
        rendered_parts = []
        for claim in claims:
            if claim.source_type == "verified":
                part = f"[V] {claim.text}"
                if claim.source_ref:
                    part += f" [source: {claim.source_ref}]"
            else:  # inferred
                if not claim.hedged:
                    # Add hedging if missing
                    hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
                    part = f"[I] {hedged_text}"
                else:
                    part = f"[I] {claim.text}"
            rendered_parts.append(part)
        return " ".join(rendered_parts)

    def to_json(self, annotated: AnnotatedResponse) -> str:
        """Serialize annotated response to JSON."""
        return json.dumps(
            {
                "original_text": annotated.original_text,
                "rendered_text": annotated.rendered_text,
                "has_unverified": annotated.has_unverified,
                "claims": [asdict(c) for c in annotated.claims],
            },
            indent=2,
            ensure_ascii=False,
        )
```
**tests/test_fleet_cost_report.py** (deleted, 77 lines)
@@ -1,77 +0,0 @@
```python
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
import tempfile
import unittest


ROOT = Path(__file__).resolve().parent.parent
SCRIPT_PATH = ROOT / "scripts" / "fleet_cost_report.py"


def load_module():
    spec = spec_from_file_location("fleet_cost_report", SCRIPT_PATH)
    module = module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


class TestFleetCostReport(unittest.TestCase):
    def test_default_output_targets_timmy_config_report_path(self):
        module = load_module()
        output_path = module.default_report_path("2026-04-22")
        self.assertIn("timmy-config", str(output_path))
        self.assertTrue(str(output_path).endswith("2026-04-22-fleet-cost-report.md"))

    def test_summary_groups_paid_costs_by_agent_and_provider(self):
        module = load_module()
        rows = [
            ("claude-sonnet-4-6", 12, 120, 24, 6.0),
            ("gpt-5.4", 6, 60, 12, 3.0),
            ("openrouter/google/gemini-2.5-pro", 4, 40, 8, 2.0),
            ("kimi-k2", 2, 20, 4, 1.0),
        ]
        summary = module.summarize_rows(rows, days=30)

        self.assertEqual(summary["providers"], ["Anthropic", "Kimi / Moonshot", "OpenAI", "OpenRouter"])
        self.assertAlmostEqual(summary["agents"]["Ezra"]["monthly_cost_usd"], 6.0)
        self.assertAlmostEqual(summary["agents"]["Bezalel"]["monthly_cost_usd"], 3.0)
        self.assertAlmostEqual(summary["agents"]["Timmy Cloud Lane"]["monthly_cost_usd"], 2.0)
        self.assertAlmostEqual(summary["agents"]["Allegro"]["monthly_cost_usd"], 1.0)
        self.assertAlmostEqual(summary["agents"]["Ezra"]["daily_cost_usd"], 0.2)

    def test_report_render_mentions_inventory_and_agent_costs(self):
        module = load_module()
        rows = [
            ("claude-sonnet-4-6", 12, 120, 24, 6.0),
            ("gpt-5.4", 6, 60, 12, 3.0),
            ("openrouter/google/gemini-2.5-pro", 4, 40, 8, 2.0),
        ]
        summary = module.summarize_rows(rows, days=30)
        report = module.render_markdown(summary, report_date="2026-04-22")

        self.assertIn("# Fleet Cost Report — 2026-04-22", report)
        self.assertIn("## Paid API inventory", report)
        self.assertIn("Anthropic", report)
        self.assertIn("OpenRouter", report)
        self.assertIn("OpenAI", report)
        self.assertIn("## Estimated cost per agent per day", report)
        self.assertIn("Timmy Cloud Lane", report)
        self.assertIn("Ezra", report)
        self.assertIn("Bezalel", report)

    def test_write_report_creates_markdown_file(self):
        module = load_module()
        rows = [("claude-sonnet-4-6", 1, 10, 2, 0.5)]
        summary = module.summarize_rows(rows, days=30)
        with tempfile.TemporaryDirectory() as tmpdir:
            dest = Path(tmpdir) / "fleet-cost.md"
            module.write_report(dest, summary, report_date="2026-04-22")
            self.assertTrue(dest.exists())
            text = dest.read_text()
            self.assertIn("Fleet Cost Report", text)
            self.assertIn("Ezra", text)


if __name__ == "__main__":
    unittest.main()
```
**tests/timmy/test_claim_annotator.py** (new file, 103 lines)
@@ -0,0 +1,103 @@
```python
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""

import sys
import os
import json

# This file lives in tests/timmy/, so src/ is two directories up.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "src"))

from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse


def test_verified_claim_has_source():
    """Verified claims include a source reference."""
    annotator = ClaimAnnotator()
    verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
    response = "Paris is the capital of France. It is a beautiful city."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert len(result.claims) > 0
    verified_claims = [c for c in result.claims if c.source_type == "verified"]
    assert len(verified_claims) == 1
    assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
    assert "[V]" in result.rendered_text
    assert "[source:" in result.rendered_text


def test_inferred_claim_has_hedging():
    """Pattern-matched claims use hedging language."""
    annotator = ClaimAnnotator()
    response = "The weather is nice today. It might rain tomorrow."

    result = annotator.annotate_claims(response)
    inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
    assert len(inferred_claims) >= 1
    # Check that rendered text has the [I] marker
    assert "[I]" in result.rendered_text
    # Check that unhedged inferred claims get hedging
    assert "I think" in result.rendered_text or "I believe" in result.rendered_text


def test_hedged_claim_not_double_hedged():
    """Claims already containing hedging are not double-hedged."""
    annotator = ClaimAnnotator()
    response = "I think the sky is blue. It is a nice day."

    result = annotator.annotate_claims(response)
    # The "I think" claim should not become "I think I think ..."
    assert "I think I think" not in result.rendered_text


def test_rendered_text_distinguishes_types():
    """Rendered text clearly distinguishes verified vs inferred claims."""
    annotator = ClaimAnnotator()
    verified = {"Earth is round": "https://science.org/earth"}
    response = "Earth is round. Stars are far away."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert "[V]" in result.rendered_text  # verified marker
    assert "[I]" in result.rendered_text  # inferred marker


def test_to_json_serialization():
    """Annotated response serializes to valid JSON."""
    annotator = ClaimAnnotator()
    response = "Test claim."
    result = annotator.annotate_claims(response)
    json_str = annotator.to_json(result)
    parsed = json.loads(json_str)
    assert "claims" in parsed
    assert "rendered_text" in parsed
    assert parsed["has_unverified"] is True  # inferred claim without hedging


def test_audit_trail_integration():
    """Claims are logged with confidence and source type."""
    # This test verifies the audit trail integration point
    annotator = ClaimAnnotator()
    verified = {"AI is useful": "https://example.com/ai"}
    response = "AI is useful. It can help with tasks."

    result = annotator.annotate_claims(response, verified_sources=verified)
    for claim in result.claims:
        assert claim.source_type in ("verified", "inferred")
        assert claim.confidence in ("high", "medium", "low", "unknown")
        if claim.source_type == "verified":
            assert claim.source_ref is not None


if __name__ == "__main__":
    test_verified_claim_has_source()
    print("✓ test_verified_claim_has_source passed")
    test_inferred_claim_has_hedging()
    print("✓ test_inferred_claim_has_hedging passed")
    test_hedged_claim_not_double_hedged()
    print("✓ test_hedged_claim_not_double_hedged passed")
    test_rendered_text_distinguishes_types()
    print("✓ test_rendered_text_distinguishes_types passed")
    test_to_json_serialization()
    print("✓ test_to_json_serialization passed")
    test_audit_trail_integration()
    print("✓ test_audit_trail_integration passed")
    print("\nAll tests passed!")
```