Commit 3dcb48f433: docs: add USAGE.md — reading guide and how to use findings (#1518)
Date: 2026-04-16 02:17:03 +00:00
2 changed files with 0 additions and 285 deletions

# CONTRIBUTING.md
How to contribute to Timmy Time Mission Control.
## Philosophy
Read SOUL.md first. Timmy is a sovereignty project — every contribution should
strengthen the user's control over their own AI, never weaken it.
Key values:
- Useful first, philosophical second
- Honesty over confidence
- Sovereignty over convenience
- Lines of code are a liability — delete as much as you create
## Getting Started
1. Fork the repo
2. Clone your fork
3. Set up the dev environment:
```bash
make install # creates .venv + installs deps
source .venv/bin/activate
```
See INSTALLATION.md for full prerequisites.
## Development Workflow
### Branch Naming
```
fix/<description> — bug fixes
feat/<description> — new features
refactor/<description> — refactors
docs/<description> — documentation
```
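As a quick sketch, a branch name following this convention is just `<type>/<description>` (the type and description below are invented examples):

```shell
# Compose a branch name from the convention above -- values are illustrative
type="fix"
desc="dashboard-loading-state"
branch="${type}/${desc}"
```

You would then create it with `git switch -c "$branch"`.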
### Running Tests
```bash
tox -e unit # fast unit tests (~17s)
tox -e lint # code quality gate
tox -e format # auto-format code
tox -e pre-push # full CI mirror before pushing
```
See TESTING.md for the full test matrix.
### Code Style
- Python 3.11+
- Formatting: ruff (auto-enforced via tox -e format)
- No inline CSS in HTML templates
- Type hints encouraged but not required
- Docstrings for public functions
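A short sketch of what these conventions look like together (the function and its purpose are invented for illustration, not real project code):

```python
def summarize_memory(entries: list[str], limit: int = 3) -> str:
    """Return a one-line summary of the most recent memory entries.

    Type hints and a docstring on a public function, per the style
    guide above. This is a hypothetical example, not project code.
    """
    recent = entries[-limit:]
    return "; ".join(recent)
```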
### Commit Messages
Use conventional commits:
```
fix: correct dashboard loading state (#123)
feat: add crisis detection module (#456)
refactor: simplify memory store queries (#789)
docs: update installation guide (#101)
test: add unit tests for sovereignty module (#102)
chore: update dependencies
```
Always reference the issue number when applicable.
## Pull Request Process
1. Create a feature branch from `main`
2. Make your changes
3. Run `tox -e pre-push` — must pass before you push
4. Push your branch and open a PR
5. PR title: use a conventional-commit tag, a short description, and the issue number (e.g. `fix: correct dashboard loading state (#123)`)
6. Wait for CI to pass
7. Squash merge only — no merge commits
**Never:**
- Push directly to main
- Use `--no-verify` on git commands
- Merge without CI passing
- Include credentials or secrets in code
## Reporting Bugs
1. Check existing issues first
2. File a new issue with:
- Clear title
- Steps to reproduce
- Expected vs actual behavior
- Environment info (OS, Python version)
- Relevant logs or screenshots
Label with `[bug]`.
## Proposing Features
1. Check existing issues and SOUL.md
2. File an issue with:
- Problem statement
- Proposed solution
- How it aligns with SOUL.md values
- Acceptance criteria
Label with `[feature]` or `[timmy-capability]`.
## AI Agent Contributions
This repo includes multi-agent development (see AGENTS.md):
- Human contributors: follow this guide
- AI agents (Claude, Kimi, etc.): follow AGENTS.md
- All code must pass the same test gate regardless of author
## Questions?
- Read SOUL.md for philosophy
- Read IMPLEMENTATION.md for architecture
- Read AGENTS.md for AI agent standards
- File an issue for anything unclear
## License
By contributing, you agree your contributions will be licensed under the
same license as the project (see LICENSE).

# TESTING.md
How to run tests, what each suite covers, and how to add new tests.
## Quick Start
```bash
# Run the fast unit tests (recommended for development)
tox -e unit
# Run all tests except slow/external
tox -e fast
# Auto-format code before committing
tox -e format
# Lint check (CI gate)
tox -e lint
# Full CI mirror (lint + coverage)
tox -e pre-push
```
## Prerequisites
- Python 3.11+
- `tox` installed (`pip install tox`)
- Ollama running locally (only for `tox -e ollama` tests)
All test dependencies are installed automatically by tox. No manual `pip install` needed.
## Tox Environments
| Command | Purpose | Speed | What It Runs |
|---------|---------|-------|--------------|
| `tox -e unit` | Fast unit tests | ~17s | `@pytest.mark.unit` tests, parallel, excludes ollama/docker/selenium/external |
| `tox -e integration` | Integration tests | Medium | `@pytest.mark.integration` tests, may use SQLite |
| `tox -e functional` | Functional tests | Slow | Real HTTP requests, no mocking |
| `tox -e e2e` | End-to-end tests | Slowest | Full system tests |
| `tox -e fast` | Unit + integration | ~30s | Combined, no e2e/functional/external |
| `tox -e ollama` | Live LLM tests | Variable | Requires running Ollama instance |
| `tox -e lint` | Code quality gate | Fast | ruff check + format check + inline CSS check |
| `tox -e format` | Auto-format | Fast | ruff fix + ruff format |
| `tox -e typecheck` | Type checking | Medium | mypy static analysis |
| `tox -e ci` | Full CI suite | Slow | Coverage + JUnit XML output |
| `tox -e pre-push` | Pre-push gate | Medium | lint + full CI (mirrors Gitea Actions) |
| `tox -e benchmark` | Performance regression | Variable | Agent performance benchmarks |
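For orientation, the `unit` environment in `tox.ini` plausibly looks something like the sketch below. The deps, marker expression, and options are guesses inferred from the table above (parallel run, excluded markers), not the project's actual configuration:

```ini
[testenv:unit]
deps = .[dev]
commands =
    pytest -m "unit and not ollama and not docker and not selenium and not external_api" -n auto {posargs}
```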
## Test Markers
Tests are organized with pytest markers defined in `pyproject.toml`:
- `unit` - Fast unit tests, no I/O, no external dependencies
- `integration` - May use SQLite databases, file I/O
- `functional` - Real HTTP requests against test servers
- `e2e` - Full system end-to-end tests
- `dashboard` - Dashboard route tests
- `slow` - Tests taking >1 second
- `ollama` - Requires live Ollama instance
- `docker` - Requires Docker
- `selenium` - Requires browser automation
- `external_api` - Requires external API access
- `skip_ci` - Skipped in CI
Mark your tests in the test file:
```python
import pytest
@pytest.mark.unit
def test_something():
assert True
@pytest.mark.integration
def test_with_database():
# Uses SQLite or file I/O
pass
```
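The marker registration itself lives in `pyproject.toml`; a sketch of that section is below (list abbreviated, descriptions paraphrased from the marker list above):

```toml
[tool.pytest.ini_options]
markers = [
    "unit: fast unit tests, no I/O, no external dependencies",
    "integration: may use SQLite databases or file I/O",
    "ollama: requires a live Ollama instance",
]
```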
## Test Directory Structure
```
tests/
  unit/         - Fast unit tests
  integration/  - Integration tests (SQLite, file I/O)
  functional/   - Real HTTP tests
  e2e/          - End-to-end system tests
  conftest.py   - Shared fixtures
```
## Writing New Tests
1. Place your test in the appropriate directory (tests/unit/, tests/integration/, etc.)
2. Use the correct marker (@pytest.mark.unit, @pytest.mark.integration, etc.)
3. Test file names must start with `test_`
4. Use fixtures from conftest.py for common setup
### Example
```python
# tests/unit/test_my_feature.py
import pytest
@pytest.mark.unit
class TestMyFeature:
def test_basic_behavior(self):
result = my_function("input")
assert result == "expected"
def test_edge_case(self):
with pytest.raises(ValueError):
my_function(None)
```
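Shared setup belongs in `conftest.py`; a minimal sketch of what a fixture there might look like (the fixture name, helper, and schema are hypothetical, not the project's actual fixtures):

```python
# tests/conftest.py (illustrative sketch -- names are hypothetical)
import sqlite3

import pytest


def make_memory_db() -> sqlite3.Connection:
    """Create an in-memory SQLite connection with a throwaway schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    return conn


@pytest.fixture
def memory_db():
    """Yield a fresh in-memory database per test, closed on teardown."""
    conn = make_memory_db()
    yield conn
    conn.close()
```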
### Environment Variables
The test suite sets these automatically via tox:
- `TIMMY_TEST_MODE=1` - Enables test mode in the application
- `TIMMY_DISABLE_CSRF=1` - Disables CSRF protection for test requests
- `TIMMY_SKIP_EMBEDDINGS=1` - Skips embedding generation (slow)
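In application code, such a flag is typically read with a small helper like the one below (an illustrative sketch, not the project's actual API):

```python
import os


def in_test_mode() -> bool:
    """True when the suite has set TIMMY_TEST_MODE=1 (illustrative helper)."""
    return os.environ.get("TIMMY_TEST_MODE") == "1"
```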
## Git Hooks
Pre-commit and pre-push hooks run tests automatically:
- **Pre-commit**: `tox -e format` then `tox -e unit`
- **Pre-push**: `tox -e pre-push` (lint + full CI)
Never use `--no-verify` on commits or pushes.
## CI Pipeline
Gitea Actions runs on every push and PR:
1. **Lint**: `tox -e lint` - code quality gate
2. **Unit tests**: `tox -e unit` - fast feedback
3. **Integration tests**: `tox -e integration`
4. **Coverage**: `tox -e ci` - generates coverage.xml
The CI fails if:
- Any lint check fails
- Any test fails
- Coverage drops below the threshold (see `pyproject.toml [tool.coverage.report]`)
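A Gitea Actions workflow implementing these steps would look roughly like this (the file path, runner label, and setup commands are assumptions; Gitea Actions uses GitHub Actions-compatible workflow syntax):

```yaml
# .gitea/workflows/tests.yml -- illustrative sketch
name: Tests
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install tox
      - run: tox -e lint
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install tox
      - run: tox -e ci
```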
## Troubleshooting
**Tests timeout**: Increase timeout with `pytest --timeout=120` or check for hanging network calls.
**Import errors**: Run `pip install -e ".[dev]"` to ensure all dependencies are installed.
**Ollama tests fail**: Ensure Ollama is running at the configured OLLAMA_URL.
**Flaky tests**: Mark with @pytest.mark.slow if genuinely slow, or file an issue if intermittently failing.