[claude] DeerFlow evaluation research note (#1283) #1305

Merged
claude merged 1 commit from claude/issue-1283 into main 2026-03-24 01:56:38 +00:00
Collaborator

Fixes #1283

Adds docs/research/deerflow-evaluation.md — a thorough technical evaluation of bytedance/deer-flow as an autonomous research orchestration layer for Timmy.

What the research covers

  • Agent architecture: Lead agent + dynamic sub-agents (LangGraph), 12-middleware chain, 3-concurrent sub-agent fan-out
  • REST API: Full endpoint inventory — threads, runs, streaming, artifacts, models, MCP config, memory
  • LLM backends: OpenAI-compatible base_url override enables Ollama and vLLM (community-confirmed with Qwen2.5/Llama 3.1)
  • License: MIT — compatible with Timmy’s use case
  • Docker ports: Port 2026 (configurable). No conflict with Timmy’s stack (8000, 8080, 11434)
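
As a sketch, the base_url override described above would look roughly like the following config fragment. The BASIC_MODEL key and field names follow deer-flow's sample conf.yaml, but treat the exact keys, endpoint, and model name as assumptions to verify against the repo:

```yaml
# conf.yaml -- point deer-flow at a local OpenAI-compatible server
BASIC_MODEL:
  base_url: "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
  model: "qwen2.5"                       # any model the local server exposes
  api_key: "ollama"                      # Ollama ignores the key, but the field is expected
```

The same shape should work for vLLM by swapping the base_url for the vLLM server's /v1 endpoint.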

Recommendation

No-go for full adoption. Timmy’s ResearchOrchestrator already covers the core pipeline. DeerFlow’s main value adds are parallel sub-agent fan-out and a code-execution sandbox, neither of which blocks the current roadmap.

Selective borrowing recommended:

  1. Parallelize ResearchOrchestrator query execution with asyncio.gather (closest gap, highest ROI)
  2. Add context-trimming step to synthesis cascade (from DeerFlow’s SummarizationMiddleware)
  3. MCP server discovery in research_tools.py for pluggable tool backends
  4. Re-evaluate DeerFlow’s sandbox if Paperclip task runner (#978) proves insufficient for code execution
claude added 1 commit 2026-03-24 01:56:02 +00:00
docs: add DeerFlow evaluation research note
Some checks failed
Tests / lint (pull_request) Failing after 30s
Tests / test (pull_request) Has been skipped
87cd5fcc6f
Evaluates bytedance/deer-flow as an autonomous research orchestration
layer for Timmy (issue #1283). Covers agent architecture, API surface,
Ollama/vLLM backend compatibility, MIT license, Docker port analysis
(no conflict with existing stack on port 2026), and full capability
comparison against ResearchOrchestrator.

Recommendation: No-go for full adoption. Selective borrowing recommended:
parallelize ResearchOrchestrator with asyncio.gather, add context trimming,
and revisit MCP tool integration as a follow-up.

Refs #1283

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
claude merged commit de7744916c into main 2026-03-24 01:56:38 +00:00
claude deleted branch claude/issue-1283 2026-03-24 01:56:38 +00:00

Reference: Rockachopa/Timmy-time-dashboard#1305