[loop-generated] [test] Add unit tests for scorecard_service.py — 515 lines, 0 tests #1139

Closed
opened 2026-03-23 18:31:29 +00:00 by Timmy · 3 comments
Owner

src/dashboard/services/scorecard_service.py is untested (515 lines).

_aggregate_metrics() alone is 71 lines. Needs tests for:

  • Metric aggregation logic
  • Edge cases (empty data, missing fields)
  • Score calculation

Files: src/dashboard/services/scorecard_service.py, tests/dashboard/test_scorecard_service.py
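The edge cases above can be sketched with a stand-in aggregator. The real `_aggregate_metrics()` signature is not shown in this issue, so the function name and return shape here are illustrative only:

```python
# Hypothetical sketch of the edge cases worth covering; the real
# _aggregate_metrics() signature and return shape are unknown.
from collections import Counter

def aggregate_metrics(events):
    """Toy aggregator: counts events per type, tolerating missing fields."""
    counts = Counter()
    for event in events:
        counts[event.get("type", "unknown")] += 1
    return dict(counts)

assert aggregate_metrics([]) == {}                        # empty data
assert aggregate_metrics([{}]) == {"unknown": 1}          # missing fields
assert aggregate_metrics([{"type": "pr"}]) == {"pr": 1}   # normal case
```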

Author
Owner

Kimi instructions:

  1. Read src/dashboard/services/scorecard_service.py — focus on _aggregate_metrics() and public methods
  2. Create tests/dashboard/test_scorecard_service.py
  3. Write unit tests covering:
    • _aggregate_metrics() with normal data
    • _aggregate_metrics() with empty/missing data
    • Any public score calculation methods
  4. Mock external dependencies (DB, file system)
  5. Mark all tests with @pytest.mark.unit
  6. Run tox -e unit to verify tests pass
  7. Run tox -e lint to verify formatting
kimi was assigned by Timmy 2026-03-23 18:32:09 +00:00
Author
Owner

@kimi Here are your instructions for this issue:

Goal: Add unit tests for src/dashboard/services/scorecard_service.py (515 lines, 0 unit tests).

File to create: tests/dashboard/test_scorecard_service.py
Note: tests/dashboard/test_scorecards.py already exists — this is a different file for service-layer tests.

What to test:

  1. PeriodType enum — values exist (daily, weekly, monthly)
  2. AgentMetrics dataclass — defaults, pr_merge_rate property (0 PRs case, normal case)
  3. ScorecardSummary — to_dict() serialization, tests_affected property
  4. _get_period_bounds() — daily/weekly/monthly boundaries for known dates
  5. _collect_events_for_period() — mock DB query, verify filtering
  6. _extract_actor_from_event() — various event shapes
  7. _is_tracked_agent() — known agents vs unknown
  8. _aggregate_metrics() — mock events, verify PR/commit/issue counts
  9. _generate_narrative_bullets() — verify bullet text for various metric scenarios
  10. _detect_patterns() — verify pattern detection logic
  11. generate_scorecard() — mock dependencies, verify summary structure
  12. get_tracked_agents() — mock DB, verify list
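Item 2's zero-PR case for pr_merge_rate is the classic divide-by-zero guard. The real AgentMetrics fields are unknown, so this toy dataclass only shows the shape such a test would take:

```python
# Illustrative only: field names and the guard behavior are assumptions,
# not the actual AgentMetrics implementation.
from dataclasses import dataclass

@dataclass
class AgentMetricsStub:
    prs_opened: int = 0
    prs_merged: int = 0

    @property
    def pr_merge_rate(self) -> float:
        if self.prs_opened == 0:
            return 0.0          # guard against ZeroDivisionError
        return self.prs_merged / self.prs_opened

assert AgentMetricsStub().pr_merge_rate == 0.0          # 0 PRs case
assert AgentMetricsStub(4, 3).pr_merge_rate == 0.75     # normal case
```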

Patterns to follow:

  • Use pytest fixtures for mock events and metrics
  • Use unittest.mock.patch for DB queries (Event.query, etc.)
  • Group tests by function in classes
  • Use freezegun or manual datetime for time-dependent tests if needed
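For the last bullet, a dependency-free alternative to freezegun is passing a reference datetime into the helper under test. This bounds function is illustrative; the actual `_get_period_bounds()` logic is not shown in the issue:

```python
# Sketch of "manual datetime" testing: inject `now` instead of freezing time.
# The daily/weekly logic here is an assumption about what the service does.
from datetime import datetime, timedelta

def get_period_bounds(period, now):
    """Illustrative daily/weekly bounds for a known reference time."""
    day_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if period == "daily":
        return day_start, day_start + timedelta(days=1)
    if period == "weekly":
        week_start = day_start - timedelta(days=now.weekday())
        return week_start, week_start + timedelta(weeks=1)
    raise ValueError(f"unknown period: {period}")

known = datetime(2026, 3, 23, 18, 31)   # a known Monday, no freezing needed
start, end = get_period_bounds("daily", known)
assert start == datetime(2026, 3, 23)
assert end == datetime(2026, 3, 24)
```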

Verification: cd /path/to/repo && tox -e unit -- tests/dashboard/test_scorecard_service.py -v
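The Event.query mocking pattern mentioned above can be sketched like this. The `FakeQuery` helper and `collect_events()` stand-in are hypothetical, since the service's real query chain is unknown:

```python
# Hypothetical sketch of mocking a SQLAlchemy-style Event.query attribute.
from unittest import mock

class FakeQuery:
    """Minimal chainable stand-in for a query object."""
    def __init__(self, rows):
        self._rows = rows
    def filter_by(self, **kwargs):
        return self
    def all(self):
        return self._rows

def collect_events(event_cls):
    """Stand-in for _collect_events_for_period(): fetch rows via Event.query."""
    return event_cls.query.filter_by(kind="pull_request").all()

# A real test would mock.patch the Event name inside scorecard_service;
# here we simply hand the function a mock carrying a canned query.
fake_event = mock.Mock()
fake_event.query = FakeQuery([{"kind": "pull_request"}])
assert collect_events(fake_event) == [{"kind": "pull_request"}]
```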

kimi was unassigned by Timmy 2026-03-24 01:56:09 +00:00
claude self-assigned this 2026-03-24 01:58:00 +00:00
Collaborator

PR created: #1320

Added tests/dashboard/test_scorecard_service.py with 31 unit tests covering edge cases not in test_scorecards.py: test.execution events, PR-closed-without-merge, push default commit count, _detect_patterns boundary conditions (< 3 PRs, 80% merge rate, commit threshold), singular/plural forms in narrative bullets, token augmentation max() logic, and to_dict() timestamp/count serialization. All 678 unit tests pass.

Timmy closed this issue 2026-03-24 02:12:48 +00:00

Reference: Rockachopa/Timmy-time-dashboard#1139