Offline Timmy struggling #49

Closed
opened 2026-03-28 20:47:23 +00:00 by Rockachopa · 2 comments
Owner

See logs when running against my local llama.cpp:
(⊙_⊙) deliberating...
(⊙_⊙) deliberating...
(⊙_⊙) deliberating...
── MM.local ────────────────────────────────────────────────────────────────────────────────────────────
🧠 memory ~user: "Hi Timmy" 0.0s

New message detected, interrupting...
Interrupted during API call.
─ ⏱ Timmy ──────────────────────────────────────────────────────────────────────────────────────────

Operation interrupted: waiting for model response (8.2s elapsed).


[Interrupted - processing new message]

──────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────
● Seems that you are looping.
────────────────────────────────────────

📨 Queued: 'Seems that you are looping.'

🧠 memory +user: "Seems that you are looping." 0.0s
🧠 memory +user: "Seems that you are looping." 0.0s
🧠 memory +user: "Seems that you are looping." 0.0s
🧠 memory +user: "Seems that you are looping." 0.0s
🧠 memory +user: "Seems that you are looping." 0.0s
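
The same queued message is committed to memory five times here. A minimal sketch of how the interrupt queue could be drained with consecutive-duplicate suppression before anything is written to memory — the function and queue names are hypothetical, not Timmy's actual API:

```python
# Hypothetical sketch: collapse consecutive duplicate queued interrupt
# messages before committing them to agent memory. Illustrative names only.

def drain_queue(queue):
    """Pop everything currently queued, dropping consecutive duplicates."""
    seen_last = None
    unique = []
    while queue:
        msg = queue.pop(0)
        if msg != seen_last:
            unique.append(msg)
            seen_last = msg
    return unique

pending = ["Seems that you are looping."] * 5
for msg in drain_queue(pending):
    print(f'memory +user: "{msg}"')  # committed once instead of five times
```

With this guard the log above would show a single `memory +user` line for the queued message instead of five.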

Interrupting agent... (press Ctrl+C again to force exit)
Interrupted during API call.
─ ⏱ Timmy ──────────────────────────────────────────────────────────────────────────────────────────

Operation interrupted: waiting for model response (7.2s elapsed).

──────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────
● Why?

────────────────────────────────────────

( ˘⌣˘)♡ cogitating...

⚕ auto │ 21.6K/65.536K │ [███░░░░░░░] 33% │ 6m
────────────────────────────────────────────────────────────────────────────────────────────────────────
⚕ ⏱ type a message + Enter to interrupt, Ctrl+C to cancel
────────────────────────────────────────────────────────────────────────────────────────────────────────
── MM.local ────────────────────────────────────────────────────────────────────────────────────────────
slot release: id 0 | task 808 | stop processing: n_tokens = 18795, truncated = 0
srv params_from_: Chat format: peg-native
slot get_availabl: id 0 | task -1 | selected slot by LCP similarity, sim_best = 0.923 (> 0.100 thold), f_keep = 1.000
slot launch_slot_: id 0 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> min-p -> ?xtc -> temp-ext -> dist
slot launch_slot_: id 0 | task 847 | processing task, is_child = 0
slot update_slots: id 0 | task 847 | new prompt, n_ctx_slot = 40960, n_keep = 0, task.n_tokens = 20369
slot update_slots: id 0 | task 847 | n_tokens = 18795, memory_seq_rm [18795, end)
slot init_sampler: id 0 | task 847 | init sampler, took 1.80 ms, tokens: text = 20369, total = 20369
slot update_slots: id 0 | task 847 | prompt processing done, n_tokens = 20369, batch.n_tokens = 1575
^Csrv params_from_: Chat format: peg-native
srv params_from_: Chat format: peg-native
srv log_server_r: done request: POST /v1/chat/completions 127.0.0.1 200
srv operator(): operator(): cleaning up before exit...
srv stop: cancel task, id_task = 806

Alexander 1:llama-server Mar 28 16:46
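
For what the server log is doing: llama.cpp picks the slot whose cached token prefix best matches the new prompt (`sim_best = 0.923`), keeps the shared prefix, and only evaluates the tail (`memory_seq_rm [18795, end)`, then a batch of ~1.5K new tokens). A rough sketch of that longest-common-prefix reuse — illustrative only, not llama.cpp's actual code:

```python
# Illustrative sketch of llama.cpp-style slot reuse: keep the longest
# common token prefix between the cached slot and the new prompt, and
# only evaluate the remaining tail.

def lcp_length(cached, prompt):
    """Length of the longest common token prefix of two token lists."""
    n = 0
    for a, b in zip(cached, prompt):
        if a != b:
            break
        n += 1
    return n

def reuse_plan(cached, prompt):
    keep = lcp_length(cached, prompt)
    sim = keep / max(len(prompt), 1)        # rough analogue of sim_best
    return keep, sim, len(prompt) - keep    # kept, similarity, tokens to eval

cached = list(range(18795))   # tokens already in the slot's KV cache
prompt = list(range(20369))   # new prompt: same prefix plus a new tail
keep, sim, to_eval = reuse_plan(cached, prompt)
```

With the token counts from the log above, this yields a similarity near the reported 0.923 and a tail of roughly 1.5K tokens to evaluate, which is why prompt processing is fast despite the ~20K-token prompt.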

Author
Owner

🧾 Request debug dump written to: /Users/apayne/.hermes/sessions/request_dump_20260328_163958_de2bd7_20260328_164731_174210.json
Owner

Uniwizard (#94) context: This was the original signal that led to the overnight loop and ultimately to the Uniwizard decision. Historical. Closing.
Timmy closed this issue 2026-03-30 15:41:46 +00:00
Reference: Timmy_Foundation/timmy-home#49