feat(skills): add memento-flashcards optional skill (#3827)
* feat(skills): add memento-flashcards skill
* docs(skills): clarify memento-flashcards interaction model
* fix: use HERMES_HOME env var for profile-safe data path

Co-authored-by: Magnus Ahmad <magnus.ahmad@gmail.com>
324 additions  optional-skills/productivity/memento-flashcards/SKILL.md  (new file)
@@ -0,0 +1,324 @@
---
name: memento-flashcards
description: >-
  Spaced-repetition flashcard system. Create cards from facts or text,
  chat with flashcards using free-text answers graded by the agent,
  generate quizzes from YouTube transcripts, review due cards with
  adaptive scheduling, and export/import decks as CSV.
version: 1.0.0
author: Memento AI
license: MIT
platforms: [macos, linux]
metadata:
  hermes:
    tags: [Education, Flashcards, Spaced Repetition, Learning, Quiz, YouTube]
    requires_toolsets: [terminal]
    category: productivity
---

# Memento Flashcards — Spaced-Repetition Flashcard Skill

## Overview

Memento gives you a local, file-based flashcard system with spaced-repetition scheduling. Users can chat with their flashcards by answering in free text; the agent grades each response before scheduling the next review. Use it whenever the user wants to:

- **Remember a fact** — turn any statement into a Q/A flashcard
- **Study with spaced repetition** — review due cards with adaptive intervals and agent-graded free-text answers
- **Quiz from a YouTube video** — fetch a transcript and generate a 5-question quiz
- **Manage decks** — organise cards into collections, export/import CSV

All card data lives in a single JSON file. No external API keys are required — you (the agent) generate flashcard content and quiz questions directly.

User-facing response style for Memento Flashcards:

- Use plain text only. Do not use Markdown formatting in replies to the user.
- Keep review and quiz feedback brief and neutral. Avoid extra praise, pep, or long explanations.

## When to Use

Use this skill when the user wants to:

- Save facts as flashcards for later review
- Review due cards with spaced repetition
- Generate a quiz from a YouTube video transcript
- Import, export, inspect, or delete flashcard data

Do not use this skill for general Q&A, coding help, or non-memory tasks.

## Quick Reference

| User intent | Action |
|---|---|
| "Remember that X" / "save this as a flashcard" | Generate a Q/A card, call `memento_cards.py add` |
| Sends a fact without mentioning flashcards | Ask "Want me to save this as a Memento flashcard?" — only create if confirmed |
| "Create a flashcard" | Ask for Q, A, collection; call `memento_cards.py add` |
| "Review my cards" | Call `memento_cards.py due`, present cards one-by-one |
| "Quiz me on [YouTube URL]" | Call `youtube_quiz.py fetch VIDEO_ID`, generate 5 questions, call `memento_cards.py add-quiz` |
| "Export my cards" | Call `memento_cards.py export --output PATH` |
| "Import cards from CSV" | Call `memento_cards.py import --file PATH --collection NAME` |
| "Show my stats" | Call `memento_cards.py stats` |
| "Delete a card" | Call `memento_cards.py delete --id ID` |
| "Delete a collection" | Call `memento_cards.py delete-collection --collection NAME` |

## Card Storage

Cards are stored in a JSON file at:

```
$HERMES_HOME/skills/productivity/memento-flashcards/data/cards.json
```

`HERMES_HOME` defaults to `~/.hermes` when the environment variable is unset.

**Never edit this file directly.** Always use `memento_cards.py` subcommands. The script handles atomic writes (write to a temp file, then rename) to prevent corruption.
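For reference, the atomic-write pattern looks like this (a minimal sketch; the actual logic lives in `_save` inside `memento_cards.py`, and `save_atomically` is an illustrative name, not part of the shipped script):

```python
import json
import os
import tempfile
from pathlib import Path


def save_atomically(path: Path, data: dict) -> None:
    # Write to a temp file in the same directory, then rename over the target.
    # os.replace is atomic on POSIX, so readers never see a half-written file.
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp, path)
    except BaseException:
        try:
            os.unlink(tmp)
        except OSError:
            pass
        raise
```

Crashing mid-write leaves only a stale `.tmp` file behind; the real `cards.json` is either the old version or the new one, never a mix.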
The file is created automatically on first use.

## Procedure

### Creating Cards from Facts

#### Activation Rules

Not every factual statement should become a flashcard. Use this three-tier check:

1. **Explicit intent** — the user mentions "memento", "flashcard", "remember this", "save this card", "add a card", or similar phrasing that clearly requests a flashcard → **create the card directly**, no confirmation needed.
2. **Implicit intent** — the user sends a factual statement without mentioning flashcards (e.g. "The speed of light is 299,792 km/s") → **ask first**: "Want me to save this as a Memento flashcard?" Only create the card if the user confirms.
3. **No intent** — the message is a coding task, a question, instructions, normal conversation, or anything that is clearly not a fact to memorize → **do NOT activate this skill at all**. Let other skills or default behavior handle it.

When activation is confirmed (tier 1 directly, tier 2 after confirmation), generate a flashcard:

**Step 1:** Turn the statement into a Q/A pair. Use this format internally:

```
Turn the factual statement into a front-back pair.
Return exactly two lines:
Q: <question text>
A: <answer text>

Statement: "{statement}"
```

Rules:
- The question should test recall of the key fact
- The answer should be concise and direct

**Step 2:** Call the script to store the card:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py add \
  --question "What year did World War 2 end?" \
  --answer "1945" \
  --collection "History"
```

If the user doesn't specify a collection, use `"General"` as the default.

The script outputs JSON confirming the created card.

### Manual Card Creation

When the user explicitly asks to create a flashcard, ask them for:
1. The question (front of card)
2. The answer (back of card)
3. The collection name (optional — default to `"General"`)

Then call `memento_cards.py add` as above.

### Reviewing Due Cards

When the user wants to review, fetch all due cards:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py due
```

This returns a JSON array of cards where `next_review_at <= now`. If a collection filter is needed:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py due --collection "History"
```

**Review flow (free-text grading):**

Here is an example of the EXACT interaction pattern you must follow. The user answers, you grade them, tell them the correct answer, then rate the card.

**Example interaction:**

> **Agent:** What year did the Berlin Wall fall?
>
> **User:** 1991
>
> **Agent:** Not quite. The Berlin Wall fell in 1989. Next review is tomorrow.
> *(agent calls: memento_cards.py rate --id ABC --rating hard --user-answer "1991")*
>
> Next question: Who was the first person to walk on the moon?

**The rules:**

1. Show only the question. Wait for the user to answer.
2. After receiving their answer, compare it to the expected answer and grade it:
   - **correct** → user got the key fact right (even if worded differently)
   - **partial** → right track but missing the core detail
   - **incorrect** → wrong or off-topic
3. **You MUST tell the user the correct answer and how they did.** Keep it short and plain-text. Use this format:
   - correct: "Correct. Answer: {answer}. Next review in 7 days."
   - partial: "Close. Answer: {answer}. {what they missed}. Next review in 3 days."
   - incorrect: "Not quite. Answer: {answer}. Next review tomorrow."
4. Then call the rate command: correct→easy, partial→good, incorrect→hard.
5. Then show the next question.

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py rate \
  --id CARD_ID --rating easy --user-answer "what the user said"
```

**Never skip step 3.** The user must always see the correct answer and feedback before you move on.

If no cards are due, tell the user: "No cards due for review right now. Check back later!"

**Retire override:** At any point the user can say "retire this card" to permanently remove it from reviews. Use `--rating retire` for this.

### Spaced Repetition Algorithm

The rating determines the next review interval:

| Rating | Interval | ease_streak | Status change |
|---|---|---|---|
| **hard** | +1 day | reset to 0 | stays learning |
| **good** | +3 days | reset to 0 | stays learning |
| **easy** | +7 days | +1 | if ease_streak >= 3 → retired |
| **retire** | permanent | reset to 0 | → retired |

- **learning**: card is actively in rotation
- **retired**: card won't appear in reviews (user has mastered it or manually retired it)
- Three consecutive "easy" ratings automatically retire a card
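For reference, the scheduling rules above can be sketched as a small function (illustrative only — the real implementation is `cmd_rate` in `memento_cards.py`; `rate_card` is a hypothetical name):

```python
from datetime import datetime, timedelta, timezone


def rate_card(card: dict, rating: str) -> dict:
    # Mirrors the interval table: hard→+1d, good→+3d, easy→+7d, retire→permanent.
    now = datetime.now(timezone.utc)
    if rating == "retire":
        card["status"] = "retired"
        card["next_review_at"] = "9999-12-31T23:59:59+00:00"
        card["ease_streak"] = 0
    elif rating == "hard":
        card["next_review_at"] = (now + timedelta(days=1)).isoformat()
        card["ease_streak"] = 0
    elif rating == "good":
        card["next_review_at"] = (now + timedelta(days=3)).isoformat()
        card["ease_streak"] = 0
    elif rating == "easy":
        card["next_review_at"] = (now + timedelta(days=7)).isoformat()
        card["ease_streak"] = card.get("ease_streak", 0) + 1
        if card["ease_streak"] >= 3:
            card["status"] = "retired"  # three consecutive "easy" ratings retire the card
    return card
```

Note that "good" and "hard" both reset the streak, so only an unbroken run of "easy" ratings retires a card.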

### YouTube Quiz Generation

When the user sends a YouTube URL and wants a quiz:

**Step 1:** Extract the video ID from the URL (e.g. `dQw4w9WgXcQ` from `https://www.youtube.com/watch?v=dQw4w9WgXcQ`).
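Extraction covering both URL formats listed in the Pitfalls section can be sketched with a regex (illustrative — `extract_video_id` is not a shipped helper; the agent normally does this extraction itself):

```python
import re
from typing import Optional


def extract_video_id(url: str) -> Optional[str]:
    # Match watch?v=ID and youtu.be/ID; IDs are 11 chars of [A-Za-z0-9_-].
    m = re.search(r"(?:youtube\.com/watch\?(?:.*&)?v=|youtu\.be/)([A-Za-z0-9_-]{11})", url)
    return m.group(1) if m else None
```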

**Step 2:** Fetch the transcript:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/youtube_quiz.py fetch VIDEO_ID
```

This returns `{"ok": true, "video_id": "...", "transcript": "..."}` or a JSON error object.

If the script reports `missing_dependency`, tell the user to install it:
```bash
pip install youtube-transcript-api
```

**Step 3:** Generate 5 quiz questions from the transcript. Use these rules:

```
You are creating a 5-question quiz for a YouTube video.
Return ONLY a JSON array with exactly 5 objects.
Each object must contain keys 'question' and 'answer'.

Selection criteria:
- Prioritize important, surprising, or foundational facts.
- Skip filler, obvious details, and facts that require heavy context.
- Never return true/false questions.
- Never ask only for a date.

Question rules:
- Each question must test exactly one discrete fact.
- Use clear, unambiguous wording.
- Prefer What, Who, How many, Which.
- Avoid open-ended Describe or Explain prompts.

Answer rules:
- Each answer must be under 240 characters.
- Lead with the answer itself, not preamble.
- Add only minimal clarifying detail if needed.
```

Use the first 15,000 characters of the transcript as context. Generate the questions yourself (you are the LLM).

**Step 4:** Validate that the output is valid JSON with exactly 5 items, each having non-empty `question` and `answer` strings. If validation fails, retry once.
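This check can be sketched as (illustrative — `validate_quiz` is not a shipped helper; the agent performs the validation itself):

```python
import json


def validate_quiz(raw: str) -> bool:
    # Valid: a JSON array of exactly 5 objects with non-empty question/answer strings.
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(items, list) or len(items) != 5:
        return False
    return all(
        isinstance(it, dict)
        and isinstance(it.get("question"), str) and it["question"].strip()
        and isinstance(it.get("answer"), str) and it["answer"].strip()
        for it in items
    )
```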

**Step 5:** Store quiz cards:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py add-quiz \
  --video-id "VIDEO_ID" \
  --questions '[{"question":"...","answer":"..."},...]' \
  --collection "Quiz - Episode Title"
```

The script deduplicates by `video_id` — if cards for that video already exist, it skips creation and reports the existing cards.

**Step 6:** Present questions one-by-one using the same free-text grading flow:

1. Show "Question 1/5: ..." and wait for the user's answer. Never include the answer or hint at it.
2. Wait for the user to answer in their own words.
3. Grade their answer using the grading rules from "Reviewing Due Cards".
4. **IMPORTANT: You MUST reply to the user with feedback before doing anything else.** Show the grade, the correct answer, and when the card is next due. Do NOT silently skip to the next question. Keep it short and plain-text. Example: "Not quite. Answer: {answer}. Next review tomorrow."
5. **After showing feedback**, call the rate command and then show the next question in the same message:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py rate \
  --id CARD_ID --rating easy --user-answer "what the user said"
```

6. Repeat. Every answer MUST receive visible feedback before the next question.

### Export/Import CSV

**Export:**

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py export \
  --output ~/flashcards.csv
```

Produces a 3-column CSV: `question,answer,collection` (no header row).

**Import:**

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py import \
  --file ~/flashcards.csv \
  --collection "Imported"
```

Reads a CSV with columns: question, answer, and optionally collection (column 3). If the collection column is missing or empty, the `--collection` argument is used.

### Statistics

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py stats
```

Returns JSON with:
- `total`: total card count
- `learning`: cards in active rotation
- `retired`: mastered cards
- `due_now`: cards due for review right now
- `collections`: breakdown by collection name

## Pitfalls

- **Never edit `cards.json` directly** — always use the script subcommands to avoid corruption
- **Transcript failures** — some YouTube videos have no English transcript or have transcripts disabled; inform the user and suggest another video
- **Optional dependency** — `youtube_quiz.py` needs `youtube-transcript-api`; if missing, tell the user to run `pip install youtube-transcript-api`
- **Large imports** — CSV imports with thousands of rows work fine, but the JSON output may be verbose; summarize the result for the user
- **Video ID extraction** — support both `youtube.com/watch?v=ID` and `youtu.be/ID` URL formats

## Verification

Verify the helper scripts directly:

```bash
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py stats
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py add --question "Capital of France?" --answer "Paris" --collection "General"
python3 ~/.hermes/skills/productivity/memento-flashcards/scripts/memento_cards.py due
```

If you are testing from the repo checkout, run:

```bash
pytest tests/skills/test_memento_cards.py tests/skills/test_youtube_quiz.py -q
```

Agent-level verification:
- Start a review and confirm feedback is plain text, brief, and always includes the correct answer before the next card
- Run a YouTube quiz flow and confirm each answer receives visible feedback before the next question
@@ -0,0 +1,353 @@
#!/usr/bin/env python3
"""Memento card storage, spaced-repetition engine, and CSV I/O.

Stdlib-only. All output is JSON for agent parsing.
Data file: $HERMES_HOME/skills/productivity/memento-flashcards/data/cards.json
"""

import argparse
import csv
import json
import os
import sys
import tempfile
import uuid
from datetime import datetime, timedelta, timezone
from pathlib import Path

_HERMES_HOME = Path(os.environ.get("HERMES_HOME", Path.home() / ".hermes"))
DATA_DIR = _HERMES_HOME / "skills" / "productivity" / "memento-flashcards" / "data"
CARDS_FILE = DATA_DIR / "cards.json"

RETIRED_SENTINEL = "9999-12-31T23:59:59+00:00"


def _now() -> datetime:
    return datetime.now(timezone.utc)


def _iso(dt: datetime) -> str:
    return dt.isoformat()


def _parse_iso(s: str) -> datetime:
    return datetime.fromisoformat(s)


def _empty_store() -> dict:
    return {"cards": [], "version": 1}


def _load() -> dict:
    if not CARDS_FILE.exists():
        return _empty_store()
    try:
        with open(CARDS_FILE, "r", encoding="utf-8") as f:
            data = json.load(f)
        if not isinstance(data, dict) or "cards" not in data:
            return _empty_store()
        return data
    except (json.JSONDecodeError, OSError):
        return _empty_store()


def _save(data: dict) -> None:
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=DATA_DIR, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2, ensure_ascii=False)
            f.write("\n")
        os.replace(tmp, CARDS_FILE)
    except BaseException:
        try:
            os.unlink(tmp)
        except OSError:
            pass
        raise


def _out(obj: object) -> None:
    json.dump(obj, sys.stdout, indent=2, ensure_ascii=False)
    sys.stdout.write("\n")


# ── Subcommands ──────────────────────────────────────────────────────────────

def cmd_add(args: argparse.Namespace) -> None:
    data = _load()
    now = _now()
    card = {
        "id": str(uuid.uuid4()),
        "question": args.question,
        "answer": args.answer,
        "collection": args.collection or "General",
        "status": "learning",
        "ease_streak": 0,
        "next_review_at": _iso(now),
        "created_at": _iso(now),
        "video_id": None,
        "last_user_answer": None,
    }
    data["cards"].append(card)
    _save(data)
    _out({"ok": True, "card": card})


def cmd_add_quiz(args: argparse.Namespace) -> None:
    data = _load()
    now = _now()

    try:
        questions = json.loads(args.questions)
    except json.JSONDecodeError as exc:
        _out({"ok": False, "error": f"Invalid JSON for --questions: {exc}"})
        sys.exit(1)

    # Dedup: skip if cards with this video_id already exist
    existing_ids = {c["video_id"] for c in data["cards"] if c.get("video_id")}
    if args.video_id in existing_ids:
        existing = [c for c in data["cards"] if c.get("video_id") == args.video_id]
        _out({"ok": True, "skipped": True, "reason": "duplicate_video_id", "existing_count": len(existing), "cards": existing})
        return

    created = []
    for qa in questions:
        card = {
            "id": str(uuid.uuid4()),
            "question": qa["question"],
            "answer": qa["answer"],
            "collection": args.collection or "Quiz",
            "status": "learning",
            "ease_streak": 0,
            "next_review_at": _iso(now),
            "created_at": _iso(now),
            "video_id": args.video_id,
            "last_user_answer": None,
        }
        data["cards"].append(card)
        created.append(card)

    _save(data)
    _out({"ok": True, "created_count": len(created), "cards": created})


def cmd_due(args: argparse.Namespace) -> None:
    data = _load()
    now = _now()
    due = []
    for card in data["cards"]:
        if card["status"] == "retired":
            continue
        review_at = _parse_iso(card["next_review_at"])
        if review_at <= now:
            if args.collection and card["collection"] != args.collection:
                continue
            due.append(card)
    _out({"ok": True, "count": len(due), "cards": due})


def cmd_rate(args: argparse.Namespace) -> None:
    data = _load()
    now = _now()
    card = None
    for c in data["cards"]:
        if c["id"] == args.id:
            card = c
            break
    if not card:
        _out({"ok": False, "error": f"Card not found: {args.id}"})
        sys.exit(1)

    rating = args.rating
    user_answer = getattr(args, "user_answer", None)
    if user_answer is not None:
        card["last_user_answer"] = user_answer

    if rating == "retire":
        card["status"] = "retired"
        card["next_review_at"] = RETIRED_SENTINEL
        card["ease_streak"] = 0
    elif rating == "hard":
        card["next_review_at"] = _iso(now + timedelta(days=1))
        card["ease_streak"] = 0
    elif rating == "good":
        card["next_review_at"] = _iso(now + timedelta(days=3))
        card["ease_streak"] = 0
    elif rating == "easy":
        card["next_review_at"] = _iso(now + timedelta(days=7))
        card["ease_streak"] = card.get("ease_streak", 0) + 1
        if card["ease_streak"] >= 3:
            card["status"] = "retired"

    _save(data)
    _out({"ok": True, "card": card})


def cmd_list(args: argparse.Namespace) -> None:
    data = _load()
    cards = data["cards"]
    if args.collection:
        cards = [c for c in cards if c["collection"] == args.collection]
    if args.status:
        cards = [c for c in cards if c["status"] == args.status]
    _out({"ok": True, "count": len(cards), "cards": cards})


def cmd_stats(args: argparse.Namespace) -> None:
    data = _load()
    now = _now()
    total = len(data["cards"])
    learning = sum(1 for c in data["cards"] if c["status"] == "learning")
    retired = sum(1 for c in data["cards"] if c["status"] == "retired")
    due_now = 0
    for c in data["cards"]:
        if c["status"] != "retired" and _parse_iso(c["next_review_at"]) <= now:
            due_now += 1

    collections: dict[str, int] = {}
    for c in data["cards"]:
        name = c["collection"]
        collections[name] = collections.get(name, 0) + 1

    _out({
        "ok": True,
        "total": total,
        "learning": learning,
        "retired": retired,
        "due_now": due_now,
        "collections": collections,
    })


def cmd_export(args: argparse.Namespace) -> None:
    data = _load()
    output_path = Path(args.output).expanduser()
    with open(output_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, lineterminator="\n")
        for card in data["cards"]:
            writer.writerow([card["question"], card["answer"], card["collection"]])
    _out({"ok": True, "exported": len(data["cards"]), "path": str(output_path)})


def cmd_import(args: argparse.Namespace) -> None:
    data = _load()
    now = _now()
    file_path = Path(args.file).expanduser()

    if not file_path.exists():
        _out({"ok": False, "error": f"File not found: {file_path}"})
        sys.exit(1)

    created = 0
    with open(file_path, "r", encoding="utf-8") as f:
        reader = csv.reader(f)
        for row in reader:
            if len(row) < 2:
                continue
            question = row[0].strip()
            answer = row[1].strip()
            collection = row[2].strip() if len(row) >= 3 and row[2].strip() else (args.collection or "Imported")
            if not question or not answer:
                continue
            card = {
                "id": str(uuid.uuid4()),
                "question": question,
                "answer": answer,
                "collection": collection,
                "status": "learning",
                "ease_streak": 0,
                "next_review_at": _iso(now),
                "created_at": _iso(now),
                "video_id": None,
                "last_user_answer": None,
            }
            data["cards"].append(card)
            created += 1

    _save(data)
    _out({"ok": True, "imported": created})


def cmd_delete(args: argparse.Namespace) -> None:
    data = _load()
    original = len(data["cards"])
    data["cards"] = [c for c in data["cards"] if c["id"] != args.id]
    removed = original - len(data["cards"])
    if removed == 0:
        _out({"ok": False, "error": f"Card not found: {args.id}"})
        sys.exit(1)
    _save(data)
    _out({"ok": True, "deleted": args.id})


def cmd_delete_collection(args: argparse.Namespace) -> None:
    data = _load()
    original = len(data["cards"])
    data["cards"] = [c for c in data["cards"] if c["collection"] != args.collection]
    removed = original - len(data["cards"])
    _save(data)
    _out({"ok": True, "deleted_count": removed, "collection": args.collection})


# ── CLI ──────────────────────────────────────────────────────────────────────

def main() -> None:
    parser = argparse.ArgumentParser(description="Memento flashcard manager")
    sub = parser.add_subparsers(dest="command", required=True)

    p_add = sub.add_parser("add", help="Create one card")
    p_add.add_argument("--question", required=True)
    p_add.add_argument("--answer", required=True)
    p_add.add_argument("--collection", default="General")

    p_quiz = sub.add_parser("add-quiz", help="Batch-add quiz cards")
    p_quiz.add_argument("--video-id", required=True)
    p_quiz.add_argument("--questions", required=True, help="JSON array of {question, answer}")
    p_quiz.add_argument("--collection", default="Quiz")

    p_due = sub.add_parser("due", help="List due cards")
    p_due.add_argument("--collection", default=None)

    p_rate = sub.add_parser("rate", help="Rate a card")
    p_rate.add_argument("--id", required=True)
    p_rate.add_argument("--rating", required=True, choices=["easy", "good", "hard", "retire"])
    p_rate.add_argument("--user-answer", default=None)

    p_list = sub.add_parser("list", help="List cards")
    p_list.add_argument("--collection", default=None)
    p_list.add_argument("--status", default=None, choices=["learning", "retired"])

    sub.add_parser("stats", help="Show statistics")

    p_export = sub.add_parser("export", help="Export cards to CSV")
    p_export.add_argument("--output", required=True)

    p_import = sub.add_parser("import", help="Import cards from CSV")
    p_import.add_argument("--file", required=True)
    p_import.add_argument("--collection", default="Imported")

    p_del = sub.add_parser("delete", help="Delete one card")
    p_del.add_argument("--id", required=True)

    p_delcol = sub.add_parser("delete-collection", help="Delete all cards in a collection")
    p_delcol.add_argument("--collection", required=True)

    args = parser.parse_args()
    cmd_map = {
        "add": cmd_add,
        "add-quiz": cmd_add_quiz,
        "due": cmd_due,
        "rate": cmd_rate,
        "list": cmd_list,
        "stats": cmd_stats,
        "export": cmd_export,
        "import": cmd_import,
        "delete": cmd_delete,
        "delete-collection": cmd_delete_collection,
    }
    cmd_map[args.command](args)


if __name__ == "__main__":
    main()
@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""Fetch YouTube transcripts for Memento quiz generation.

Requires: pip install youtube-transcript-api

The quiz question *generation* is done by the agent's LLM — this script only fetches transcripts.
"""

import argparse
import json
import re
import sys


def _out(obj: object) -> None:
    json.dump(obj, sys.stdout, indent=2, ensure_ascii=False)
    sys.stdout.write("\n")


def _normalize_segments(segments: list) -> str:
    parts = []
    for seg in segments:
        text = str(seg.get("text", "")).strip()
        if text:
            parts.append(text)
    return re.sub(r"\s+", " ", " ".join(parts)).strip()


def cmd_fetch(args: argparse.Namespace) -> None:
    try:
        import youtube_transcript_api  # noqa: F811
    except ImportError:
        _out({
            "ok": False,
            "error": "missing_dependency",
            "message": "Run: pip install youtube-transcript-api",
        })
        sys.exit(1)

    video_id = args.video_id
    languages = ["en", "en-US", "en-GB", "en-CA", "en-AU"]

    api = youtube_transcript_api.YouTubeTranscriptApi()
    try:
        raw = api.fetch(video_id, languages=languages)
    except Exception as exc:
        error_type = type(exc).__name__
        _out({
            "ok": False,
            "error": "transcript_unavailable",
            "error_type": error_type,
            "message": f"Could not fetch transcript for {video_id}: {exc}",
        })
        sys.exit(1)

    segments = raw
    if hasattr(raw, "to_raw_data"):
        segments = raw.to_raw_data()

    text = _normalize_segments(segments)
    if not text:
        _out({
            "ok": False,
            "error": "empty_transcript",
            "message": f"Transcript for {video_id} contained no usable text.",
        })
        sys.exit(1)

    _out({
        "ok": True,
        "video_id": video_id,
        "transcript": text,
    })


def main() -> None:
    parser = argparse.ArgumentParser(description="Memento YouTube transcript fetcher")
    sub = parser.add_subparsers(dest="command", required=True)

    p_fetch = sub.add_parser("fetch", help="Fetch transcript for a video")
    p_fetch.add_argument("video_id", help="YouTube video ID")

    args = parser.parse_args()
    if args.command == "fetch":
        cmd_fetch(args)


if __name__ == "__main__":
    main()
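The whitespace normalization in `_normalize_segments` (strip each segment, join, collapse runs of whitespace) can be reproduced standalone for quick experimentation; a minimal sketch under the assumption that segments are dicts with a `"text"` key (the `normalize_segments` name here is illustrative):

```python
import re


def normalize_segments(segments: list) -> str:
    """Join transcript segment texts and collapse runs of whitespace."""
    parts = [str(seg.get("text", "")).strip() for seg in segments]
    joined = " ".join(p for p in parts if p)
    return re.sub(r"\s+", " ", joined).strip()


print(normalize_segments([{"text": "Hello \n"}, {"text": ""}, {"text": " world"}]))  # prints "Hello world"
```

Collapsing whitespace up front keeps the transcript a single compact line, which is convenient when the text is later pasted into an LLM prompt for quiz generation.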
427
tests/skills/test_memento_cards.py
Normal file
@@ -0,0 +1,427 @@
"""Tests for optional-skills/productivity/memento-flashcards/scripts/memento_cards.py"""

import csv
import json
import os
import sys
import uuid
from datetime import datetime, timedelta, timezone
from pathlib import Path
from unittest import mock

import pytest

# Add the scripts dir so we can import the module directly
SCRIPTS_DIR = Path(__file__).resolve().parents[2] / "optional-skills" / "productivity" / "memento-flashcards" / "scripts"
sys.path.insert(0, str(SCRIPTS_DIR))

import memento_cards


@pytest.fixture(autouse=True)
def isolated_data(tmp_path, monkeypatch):
    """Redirect card storage to a temp directory for every test."""
    data_dir = tmp_path / "data"
    data_dir.mkdir()
    monkeypatch.setattr(memento_cards, "DATA_DIR", data_dir)
    monkeypatch.setattr(memento_cards, "CARDS_FILE", data_dir / "cards.json")
    return data_dir


def _run(capsys, argv: list[str]) -> dict:
    """Run main() with given argv and return parsed JSON output."""
    with mock.patch("sys.argv", ["memento_cards"] + argv):
        memento_cards.main()
    captured = capsys.readouterr()
    return json.loads(captured.out)


# ── Add / List / Delete ──────────────────────────────────────────────────────

class TestCardCRUD:
    def test_add_creates_card(self, capsys):
        result = _run(capsys, ["add", "--question", "What is 2+2?", "--answer", "4", "--collection", "Math"])
        assert result["ok"] is True
        card = result["card"]
        assert card["question"] == "What is 2+2?"
        assert card["answer"] == "4"
        assert card["collection"] == "Math"
        assert card["status"] == "learning"
        assert card["ease_streak"] == 0
        uuid.UUID(card["id"])  # validates it's a real UUID

    def test_add_default_collection(self, capsys):
        result = _run(capsys, ["add", "--question", "Q?", "--answer", "A"])
        assert result["card"]["collection"] == "General"

    def test_list_all(self, capsys):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1", "--collection", "C1"])
        _run(capsys, ["add", "--question", "Q2", "--answer", "A2", "--collection", "C2"])
        result = _run(capsys, ["list"])
        assert result["count"] == 2

    def test_list_by_collection(self, capsys):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1", "--collection", "C1"])
        _run(capsys, ["add", "--question", "Q2", "--answer", "A2", "--collection", "C2"])
        result = _run(capsys, ["list", "--collection", "C1"])
        assert result["count"] == 1
        assert result["cards"][0]["collection"] == "C1"

    def test_list_by_status(self, capsys):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1"])
        result = _run(capsys, ["list", "--status", "learning"])
        assert result["count"] == 1
        result = _run(capsys, ["list", "--status", "retired"])
        assert result["count"] == 0

    def test_delete_card(self, capsys):
        result = _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = result["card"]["id"]
        del_result = _run(capsys, ["delete", "--id", card_id])
        assert del_result["ok"] is True
        assert del_result["deleted"] == card_id
        # Verify gone
        list_result = _run(capsys, ["list"])
        assert list_result["count"] == 0

    def test_delete_nonexistent(self, capsys):
        with pytest.raises(SystemExit):
            _run(capsys, ["delete", "--id", "nonexistent"])

    def test_delete_collection(self, capsys):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1", "--collection", "ToDelete"])
        _run(capsys, ["add", "--question", "Q2", "--answer", "A2", "--collection", "ToDelete"])
        _run(capsys, ["add", "--question", "Q3", "--answer", "A3", "--collection", "Keep"])
        result = _run(capsys, ["delete-collection", "--collection", "ToDelete"])
        assert result["ok"] is True
        assert result["deleted_count"] == 2
        list_result = _run(capsys, ["list"])
        assert list_result["count"] == 1
        assert list_result["cards"][0]["collection"] == "Keep"


# ── Due Filtering ────────────────────────────────────────────────────────────

class TestDueFiltering:
    def test_new_card_is_due(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        result = _run(capsys, ["due"])
        assert result["count"] == 1

    def test_future_card_not_due(self, capsys, monkeypatch):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        # Rate it good (pushes next_review_at to +3 days)
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        _run(capsys, ["rate", "--id", card_id, "--rating", "good"])
        result = _run(capsys, ["due"])
        assert result["count"] == 0

    def test_retired_card_not_due(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        _run(capsys, ["rate", "--id", card_id, "--rating", "retire"])
        result = _run(capsys, ["due"])
        assert result["count"] == 0

    def test_due_with_collection_filter(self, capsys):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1", "--collection", "C1"])
        _run(capsys, ["add", "--question", "Q2", "--answer", "A2", "--collection", "C2"])
        result = _run(capsys, ["due", "--collection", "C1"])
        assert result["count"] == 1
        assert result["cards"][0]["collection"] == "C1"


# ── Rating and Rescheduling ──────────────────────────────────────────────────

class TestRating:
    def test_hard_adds_1_day(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        before = datetime.now(timezone.utc)
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "hard"])
        after = datetime.now(timezone.utc)
        next_review = datetime.fromisoformat(result["card"]["next_review_at"])
        assert before + timedelta(days=1) <= next_review <= after + timedelta(days=1)
        assert result["card"]["ease_streak"] == 0

    def test_good_adds_3_days(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        before = datetime.now(timezone.utc)
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "good"])
        next_review = datetime.fromisoformat(result["card"]["next_review_at"])
        assert next_review >= before + timedelta(days=3)
        assert result["card"]["ease_streak"] == 0

    def test_easy_adds_7_days_and_increments_streak(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "easy"])
        assert result["card"]["ease_streak"] == 1
        assert result["card"]["status"] == "learning"

    def test_retire_sets_retired(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "retire"])
        assert result["card"]["status"] == "retired"
        assert result["card"]["ease_streak"] == 0

    def test_auto_retire_after_3_easys(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]

        # Force card to be due by manipulating next_review_at through rate
        for i in range(3):
            # Load and directly set next_review_at to now so it's ratable
            data = memento_cards._load()
            for c in data["cards"]:
                if c["id"] == card_id:
                    c["next_review_at"] = memento_cards._iso(memento_cards._now())
            memento_cards._save(data)

            result = _run(capsys, ["rate", "--id", card_id, "--rating", "easy"])

        assert result["card"]["ease_streak"] == 3
        assert result["card"]["status"] == "retired"

    def test_hard_resets_ease_streak(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]

        # Easy twice
        for _ in range(2):
            data = memento_cards._load()
            for c in data["cards"]:
                if c["id"] == card_id:
                    c["next_review_at"] = memento_cards._iso(memento_cards._now())
            memento_cards._save(data)
            _run(capsys, ["rate", "--id", card_id, "--rating", "easy"])

        # Verify streak is 2
        check = _run(capsys, ["list"])
        assert check["cards"][0]["ease_streak"] == 2

        # Hard resets
        data = memento_cards._load()
        for c in data["cards"]:
            if c["id"] == card_id:
                c["next_review_at"] = memento_cards._iso(memento_cards._now())
        memento_cards._save(data)
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "hard"])
        assert result["card"]["ease_streak"] == 0
        assert result["card"]["status"] == "learning"

    def test_rate_nonexistent_card(self, capsys):
        with pytest.raises(SystemExit):
            _run(capsys, ["rate", "--id", "nonexistent", "--rating", "easy"])


# ── CSV Export/Import ────────────────────────────────────────────────────────

class TestCSV:
    def test_export_import_roundtrip(self, capsys, tmp_path):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1", "--collection", "C1"])
        _run(capsys, ["add", "--question", "Q2", "--answer", "A2", "--collection", "C2"])

        csv_path = str(tmp_path / "export.csv")
        result = _run(capsys, ["export", "--output", csv_path])
        assert result["ok"] is True
        assert result["exported"] == 2

        # Verify CSV content
        with open(csv_path, "r") as f:
            reader = csv.reader(f)
            rows = list(reader)
        assert len(rows) == 2
        assert rows[0] == ["Q1", "A1", "C1"]
        assert rows[1] == ["Q2", "A2", "C2"]

        # Delete all and reimport
        data = memento_cards._load()
        data["cards"] = []
        memento_cards._save(data)

        result = _run(capsys, ["import", "--file", csv_path, "--collection", "Fallback"])
        assert result["ok"] is True
        assert result["imported"] == 2

        # Verify imported cards use CSV collection column
        list_result = _run(capsys, ["list"])
        collections = {c["collection"] for c in list_result["cards"]}
        assert collections == {"C1", "C2"}

    def test_import_without_collection_column(self, capsys, tmp_path):
        csv_path = str(tmp_path / "no_col.csv")
        with open(csv_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Q1", "A1"])
            writer.writerow(["Q2", "A2"])

        result = _run(capsys, ["import", "--file", csv_path, "--collection", "MyDeck"])
        assert result["imported"] == 2

        list_result = _run(capsys, ["list"])
        assert all(c["collection"] == "MyDeck" for c in list_result["cards"])

    def test_import_skips_empty_rows(self, capsys, tmp_path):
        csv_path = str(tmp_path / "sparse.csv")
        with open(csv_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Q1", "A1"])
            writer.writerow(["", ""])  # empty
            writer.writerow(["Q2"])  # only one column
            writer.writerow(["Q3", "A3"])

        result = _run(capsys, ["import", "--file", csv_path, "--collection", "Test"])
        assert result["imported"] == 2

    def test_import_nonexistent_file(self, capsys, tmp_path):
        with pytest.raises(SystemExit):
            _run(capsys, ["import", "--file", str(tmp_path / "nope.csv"), "--collection", "X"])


# ── Quiz Batch Add ───────────────────────────────────────────────────────────

class TestQuizBatchAdd:
    def test_add_quiz_creates_cards(self, capsys):
        questions = json.dumps([
            {"question": "Q1?", "answer": "A1"},
            {"question": "Q2?", "answer": "A2"},
        ])
        result = _run(capsys, ["add-quiz", "--video-id", "abc123", "--questions", questions, "--collection", "Quiz - Test"])
        assert result["ok"] is True
        assert result["created_count"] == 2
        for card in result["cards"]:
            assert card["video_id"] == "abc123"
            assert card["collection"] == "Quiz - Test"

    def test_add_quiz_deduplicates_by_video_id(self, capsys):
        questions = json.dumps([{"question": "Q?", "answer": "A"}])
        _run(capsys, ["add-quiz", "--video-id", "dup1", "--questions", questions])
        result = _run(capsys, ["add-quiz", "--video-id", "dup1", "--questions", questions])
        assert result["ok"] is True
        assert result["skipped"] is True
        assert result["reason"] == "duplicate_video_id"
        # Only 1 card total (not 2)
        list_result = _run(capsys, ["list"])
        assert list_result["count"] == 1

    def test_add_quiz_invalid_json(self, capsys):
        with pytest.raises(SystemExit):
            _run(capsys, ["add-quiz", "--video-id", "x", "--questions", "not json"])


# ── Statistics ───────────────────────────────────────────────────────────────

class TestStats:
    def test_stats_empty(self, capsys):
        result = _run(capsys, ["stats"])
        assert result["total"] == 0
        assert result["learning"] == 0
        assert result["retired"] == 0
        assert result["due_now"] == 0

    def test_stats_counts(self, capsys):
        _run(capsys, ["add", "--question", "Q1", "--answer", "A1", "--collection", "C1"])
        _run(capsys, ["add", "--question", "Q2", "--answer", "A2", "--collection", "C1"])
        _run(capsys, ["add", "--question", "Q3", "--answer", "A3", "--collection", "C2"])

        # Retire one
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        _run(capsys, ["rate", "--id", card_id, "--rating", "retire"])

        result = _run(capsys, ["stats"])
        assert result["total"] == 3
        assert result["learning"] == 2
        assert result["retired"] == 1
        assert result["due_now"] == 2  # 2 learning cards still due
        assert result["collections"] == {"C1": 2, "C2": 1}


# ── Edge Cases ───────────────────────────────────────────────────────────────

class TestEdgeCases:
    def test_empty_deck_operations(self, capsys):
        """Operations on empty deck shouldn't crash."""
        result = _run(capsys, ["due"])
        assert result["count"] == 0
        result = _run(capsys, ["list"])
        assert result["count"] == 0
        result = _run(capsys, ["stats"])
        assert result["total"] == 0

    def test_corrupt_json_recovery(self, capsys):
        """Corrupt JSON file should be treated as empty."""
        memento_cards.DATA_DIR.mkdir(parents=True, exist_ok=True)
        with open(memento_cards.CARDS_FILE, "w") as f:
            f.write("{corrupted json...")
        result = _run(capsys, ["list"])
        assert result["count"] == 0
        # Can still add
        result = _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        assert result["ok"] is True

    def test_missing_cards_key_recovery(self, capsys):
        """JSON without 'cards' key should be treated as empty."""
        memento_cards.DATA_DIR.mkdir(parents=True, exist_ok=True)
        with open(memento_cards.CARDS_FILE, "w") as f:
            json.dump({"version": 1}, f)
        result = _run(capsys, ["list"])
        assert result["count"] == 0

    def test_atomic_write_creates_dir(self, capsys):
        """Data dir is created automatically if missing."""
        import shutil
        if memento_cards.DATA_DIR.exists():
            shutil.rmtree(memento_cards.DATA_DIR)
        result = _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        assert result["ok"] is True
        assert memento_cards.CARDS_FILE.exists()

    def test_delete_collection_empty(self, capsys):
        """Deleting a nonexistent collection succeeds with 0 deleted."""
        result = _run(capsys, ["delete-collection", "--collection", "Nope"])
        assert result["ok"] is True
        assert result["deleted_count"] == 0


# ── User Answer Tracking ────────────────────────────────────────────────────

class TestUserAnswer:
    def test_rate_stores_user_answer(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "easy",
                               "--user-answer", "my answer"])
        assert result["card"]["last_user_answer"] == "my answer"

    def test_rate_without_user_answer_keeps_null(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        result = _run(capsys, ["rate", "--id", card_id, "--rating", "easy"])
        assert result["card"]["last_user_answer"] is None

    def test_new_card_has_last_user_answer_null(self, capsys):
        result = _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        assert result["card"]["last_user_answer"] is None

    def test_user_answer_persists_in_list(self, capsys):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        _run(capsys, ["rate", "--id", card_id, "--rating", "easy",
                      "--user-answer", "my answer"])
        result = _run(capsys, ["list"])
        assert result["cards"][0]["last_user_answer"] == "my answer"

    def test_export_excludes_user_answer(self, capsys, tmp_path):
        _run(capsys, ["add", "--question", "Q", "--answer", "A"])
        card_id = _run(capsys, ["list"])["cards"][0]["id"]
        _run(capsys, ["rate", "--id", card_id, "--rating", "easy",
                      "--user-answer", "my answer"])
        csv_path = str(tmp_path / "export.csv")
        _run(capsys, ["export", "--output", csv_path])
        with open(csv_path) as f:
            rows = list(csv.reader(f))
        # CSV stays 3-column (question, answer, collection) — user_answer is internal only
        assert len(rows[0]) == 3
128
tests/skills/test_youtube_quiz.py
Normal file
@@ -0,0 +1,128 @@
"""Tests for optional-skills/productivity/memento-flashcards/scripts/youtube_quiz.py"""

import json
import sys
from pathlib import Path
from types import SimpleNamespace
from unittest import mock

import pytest

SCRIPTS_DIR = Path(__file__).resolve().parents[2] / "optional-skills" / "productivity" / "memento-flashcards" / "scripts"
sys.path.insert(0, str(SCRIPTS_DIR))

import youtube_quiz


def _run(capsys, argv: list[str]) -> dict:
    """Run main() with given argv and return parsed JSON output."""
    with mock.patch("sys.argv", ["youtube_quiz"] + argv):
        youtube_quiz.main()
    captured = capsys.readouterr()
    return json.loads(captured.out)


class TestNormalizeSegments:
    def test_basic(self):
        segments = [{"text": "hello "}, {"text": " world"}]
        assert youtube_quiz._normalize_segments(segments) == "hello world"

    def test_empty_segments(self):
        assert youtube_quiz._normalize_segments([]) == ""

    def test_whitespace_only(self):
        assert youtube_quiz._normalize_segments([{"text": " "}, {"text": " "}]) == ""

    def test_collapses_multiple_spaces(self):
        segments = [{"text": "a  b"}, {"text": "c   d"}]
        assert youtube_quiz._normalize_segments(segments) == "a b c d"


class TestFetchMissingDependency:
    def test_missing_youtube_transcript_api(self, capsys, monkeypatch):
        """When youtube-transcript-api is not installed, report the error."""
        import builtins
        real_import = builtins.__import__

        def mock_import(name, *args, **kwargs):
            if name == "youtube_transcript_api":
                raise ImportError("No module named 'youtube_transcript_api'")
            return real_import(name, *args, **kwargs)

        monkeypatch.setattr(builtins, "__import__", mock_import)

        with pytest.raises(SystemExit) as exc_info:
            _run(capsys, ["fetch", "test123"])

        captured = capsys.readouterr()
        result = json.loads(captured.out)
        assert result["ok"] is False
        assert result["error"] == "missing_dependency"
        assert "pip install" in result["message"]


class TestFetchWithMockedAPI:
    def _make_mock_module(self, segments=None, raise_exc=None):
        """Create a mock youtube_transcript_api module."""
        mock_module = mock.MagicMock()

        mock_api_instance = mock.MagicMock()
        mock_module.YouTubeTranscriptApi.return_value = mock_api_instance

        if raise_exc:
            mock_api_instance.fetch.side_effect = raise_exc
        else:
            raw_data = segments or [{"text": "Hello world"}]
            result = mock.MagicMock()
            result.to_raw_data.return_value = raw_data
            mock_api_instance.fetch.return_value = result

        return mock_module

    def test_successful_fetch(self, capsys):
        mock_mod = self._make_mock_module(
            segments=[{"text": "This is a test"}, {"text": "transcript segment"}]
        )
        with mock.patch.dict("sys.modules", {"youtube_transcript_api": mock_mod}):
            result = _run(capsys, ["fetch", "abc123"])

        assert result["ok"] is True
        assert result["video_id"] == "abc123"
        assert "This is a test" in result["transcript"]
        assert "transcript segment" in result["transcript"]

    def test_fetch_error(self, capsys):
        mock_mod = self._make_mock_module(raise_exc=Exception("Video unavailable"))
        with mock.patch.dict("sys.modules", {"youtube_transcript_api": mock_mod}):
            with pytest.raises(SystemExit):
                _run(capsys, ["fetch", "bad_id"])

        captured = capsys.readouterr()
        result = json.loads(captured.out)
        assert result["ok"] is False
        assert result["error"] == "transcript_unavailable"

    def test_empty_transcript(self, capsys):
        mock_mod = self._make_mock_module(segments=[{"text": ""}, {"text": " "}])
        with mock.patch.dict("sys.modules", {"youtube_transcript_api": mock_mod}):
            with pytest.raises(SystemExit):
                _run(capsys, ["fetch", "empty_vid"])

        captured = capsys.readouterr()
        result = json.loads(captured.out)
        assert result["ok"] is False
        assert result["error"] == "empty_transcript"

    def test_segments_without_to_raw_data(self, capsys):
        """Handle plain list segments (no to_raw_data method)."""
        mock_mod = mock.MagicMock()
        mock_api = mock.MagicMock()
        mock_mod.YouTubeTranscriptApi.return_value = mock_api
        # Return a plain list (no to_raw_data attribute)
        mock_api.fetch.return_value = [{"text": "plain list"}]

        with mock.patch.dict("sys.modules", {"youtube_transcript_api": mock_mod}):
            result = _run(capsys, ["fetch", "plain123"])

        assert result["ok"] is True
        assert result["transcript"] == "plain list"