feat(gateway): streaming token delivery — StreamingConfig, GatewayStreamConsumer, already_sent
Stage 3 of streaming support. Gateway now streams tokens to messaging platforms:
- StreamingConfig dataclass (enabled, transport, edit_interval, buffer_threshold, cursor)
on GatewayConfig with from_dict/to_dict serialization
- GatewayStreamConsumer: async queue-based consumer that progressively edits
a single message on the target platform (edit transport)
- on_delta() → queue → run() async task → send_or_edit() with rate limiting
- already_sent propagation: when streaming delivered the response, handler
returns None so base adapter skips duplicate send()
- stream_delta_callback wired into AIAgent constructor in _run_agent
- Consumer lifecycle: started as asyncio task, awaited with timeout in finally
Config (config.yaml):
streaming:
enabled: true
transport: edit # progressive editMessageText
edit_interval: 0.3 # seconds between edits
buffer_threshold: 40 # chars before forcing flush
cursor: ' ▉'
Credit: jobless0x (#774, #1312), OutThisLife (#798), clicksingh (#697).
2026-03-16 05:52:42 -07:00
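The streaming block above maps onto the StreamingConfig dataclass via from_dict/to_dict. A minimal sketch of that round-trip (field names and defaults are taken from the description above; the from_dict shown here is an illustrative reconstruction, not the real GatewayConfig code):

```python
from dataclasses import dataclass, asdict


@dataclass
class StreamingConfig:
    # Field names mirror the config.yaml keys listed above
    enabled: bool = False
    transport: str = "edit"
    edit_interval: float = 0.3
    buffer_threshold: int = 40
    cursor: str = " ▉"

    @classmethod
    def from_dict(cls, data: dict) -> "StreamingConfig":
        # Drop unknown keys so older/newer configs still load
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in data.items() if k in known})

    def to_dict(self) -> dict:
        return asdict(self)


cfg = StreamingConfig.from_dict({"enabled": True, "edit_interval": 0.5, "junk": 1})
```

Round-tripping through to_dict/from_dict keeps the serialized shape identical to the config.yaml block.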
"""Gateway streaming consumer — bridges sync agent callbacks to async platform delivery.
|
|
|
|
|
|
|
|
|
|
The agent fires stream_delta_callback(text) synchronously from its worker thread.
|
|
|
|
|
GatewayStreamConsumer:
|
|
|
|
|
1. Receives deltas via on_delta() (thread-safe, sync)
|
|
|
|
|
2. Queues them to an asyncio task via queue.Queue
|
|
|
|
|
3. The async run() task buffers, rate-limits, and progressively edits
|
|
|
|
|
a single message on the target platform
|
|
|
|
|
|
|
|
|
|
Design: Uses the edit transport (send initial message, then editMessageText).
|
|
|
|
|
This is universally supported across Telegram, Discord, and Slack.
|
|
|
|
|
|
|
|
|
|
Credit: jobless0x (#774, #1312), OutThisLife (#798), clicksingh (#697).
|
|
|
|
|
"""

from __future__ import annotations

import asyncio
import logging
import queue
import time
from dataclasses import dataclass
from typing import Any, Optional

logger = logging.getLogger("gateway.stream_consumer")

# Sentinel to signal the stream is complete
_DONE = object()


@dataclass
class StreamConsumerConfig:
    """Runtime config for a single stream consumer instance."""

    edit_interval: float = 0.3
    buffer_threshold: int = 40
    cursor: str = " ▉"


class GatewayStreamConsumer:
    """Async consumer that progressively edits a platform message with streamed tokens.

    Usage::

        consumer = GatewayStreamConsumer(adapter, chat_id, config, metadata=metadata)

        # Pass consumer.on_delta as stream_delta_callback to AIAgent
        agent = AIAgent(..., stream_delta_callback=consumer.on_delta)

        # Start the consumer as an asyncio task
        task = asyncio.create_task(consumer.run())

        # ... run agent in thread pool ...
        consumer.finish()  # signal completion
        await task  # wait for final edit
    """

    def __init__(
        self,
        adapter: Any,
        chat_id: str,
        config: Optional[StreamConsumerConfig] = None,
        metadata: Optional[dict] = None,
    ):
        self.adapter = adapter
        self.chat_id = chat_id
        self.cfg = config or StreamConsumerConfig()
        self.metadata = metadata
        self._queue: queue.Queue = queue.Queue()
        self._accumulated = ""
        self._message_id: Optional[str] = None
        self._already_sent = False
        self._edit_supported = True  # Disabled on first edit failure (Signal/Email/HA)
        self._last_edit_time = 0.0
        self._last_sent_text = ""  # Track last-sent text to skip redundant edits

    @property
    def already_sent(self) -> bool:
        """True if at least one message was sent/edited — signals the base
        adapter to skip re-sending the final response."""
        return self._already_sent

    def on_delta(self, text: str) -> None:
        """Thread-safe callback — called from the agent's worker thread."""
        if text:
            self._queue.put(text)

    def finish(self) -> None:
        """Signal that the stream is complete."""
        self._queue.put(_DONE)

    async def run(self) -> None:
        """Async task that drains the queue and edits the platform message."""
        # Platform message length limit — leave room for cursor + formatting
        _raw_limit = getattr(self.adapter, "MAX_MESSAGE_LENGTH", 4096)
        _safe_limit = max(500, _raw_limit - len(self.cfg.cursor) - 100)

        try:
            while True:
                # Drain all available items from the queue
                got_done = False
                while True:
                    try:
                        item = self._queue.get_nowait()
                        if item is _DONE:
                            got_done = True
                            break
                        self._accumulated += item
                    except queue.Empty:
                        break

                # Decide whether to flush an edit
                now = time.monotonic()
                elapsed = now - self._last_edit_time
                should_edit = (
                    got_done
                    or (elapsed >= self.cfg.edit_interval
                        and len(self._accumulated) > 0)
                    or len(self._accumulated) >= self.cfg.buffer_threshold
                )

                if should_edit and self._accumulated:
                    # Split overflow: if accumulated text exceeds the platform
                    # limit, finalize the current message and start a new one.
                    while (
                        len(self._accumulated) > _safe_limit
                        and self._message_id is not None
                    ):
                        split_at = self._accumulated.rfind("\n", 0, _safe_limit)
                        if split_at < _safe_limit // 2:
                            split_at = _safe_limit
                        chunk = self._accumulated[:split_at]
                        await self._send_or_edit(chunk)
                        self._accumulated = self._accumulated[split_at:].lstrip("\n")
                        self._message_id = None
                        self._last_sent_text = ""

                    display_text = self._accumulated
                    if not got_done:
                        display_text += self.cfg.cursor

                    await self._send_or_edit(display_text)
                    self._last_edit_time = time.monotonic()

                if got_done:
                    # Final edit without cursor
                    if self._accumulated and self._message_id:
                        await self._send_or_edit(self._accumulated)
                    return

                await asyncio.sleep(0.05)  # Small yield to not busy-loop

        except asyncio.CancelledError:
            # Best-effort final edit on cancellation
            if self._accumulated and self._message_id:
                try:
                    await self._send_or_edit(self._accumulated)
                except Exception:
                    pass
        except Exception as e:
            logger.error("Stream consumer error: %s", e)

    async def _send_or_edit(self, text: str) -> None:
        """Send or edit the streaming message."""
        try:
            if self._message_id is not None:
                if self._edit_supported:
                    # Skip if text is identical to what we last sent
                    if text == self._last_sent_text:
                        return
                    # Edit existing message
                    result = await self.adapter.edit_message(
                        chat_id=self.chat_id,
                        message_id=self._message_id,
                        content=text,
                    )
                    if result.success:
                        self._already_sent = True
                        self._last_sent_text = text
                    else:
                        # Edit not supported by this adapter — stop streaming,
                        # let the normal send path handle the final response.
                        # Without this guard, adapters like Signal/Email would
                        # flood the chat with a new message every edit_interval.
                        logger.debug("Edit failed, disabling streaming for this adapter")
                        self._edit_supported = False
                else:
                    # Editing not supported — skip intermediate updates.
                    # The final response will be sent by the normal path.
                    pass
            else:
                # First message — send new
                result = await self.adapter.send(
                    chat_id=self.chat_id,
                    content=text,
                    metadata=self.metadata,
                )
                if result.success and result.message_id:
                    self._message_id = result.message_id
                    self._already_sent = True
                    self._last_sent_text = text
                else:
                    # Initial send failed — disable streaming for this session
                    self._edit_supported = False
        except Exception as e:
            logger.error("Stream send/edit error: %s", e)
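
A condensed, self-contained sketch of the lifecycle the class docstring describes. StubAdapter, SendResult, and MiniConsumer are illustrative stand-ins, not the real adapter or consumer classes; the real GatewayStreamConsumer adds rate limiting, the cursor display, and overflow splitting on top of this skeleton:

```python
import asyncio
import queue
from dataclasses import dataclass
from typing import Optional

_DONE = object()


@dataclass
class SendResult:
    success: bool
    message_id: Optional[str] = None


class StubAdapter:
    """Records send/edit calls; stands in for a platform adapter."""

    def __init__(self):
        self.sends, self.edits = [], []

    async def send(self, chat_id, content, metadata=None):
        self.sends.append(content)
        return SendResult(True, "msg-1")

    async def edit_message(self, chat_id, message_id, content):
        self.edits.append(content)
        return SendResult(True, message_id)


class MiniConsumer:
    """Queue -> async loop -> send/edit, minus rate limiting and overflow."""

    def __init__(self, adapter, chat_id):
        self.adapter, self.chat_id = adapter, chat_id
        self._q: queue.Queue = queue.Queue()
        self._acc, self._msg_id, self.already_sent = "", None, False

    def on_delta(self, text):  # thread-safe; the agent calls this from its worker thread
        if text:
            self._q.put(text)

    def finish(self):
        self._q.put(_DONE)

    async def run(self):
        done = False
        while not done:
            while True:  # drain everything currently queued
                try:
                    item = self._q.get_nowait()
                except queue.Empty:
                    break
                if item is _DONE:
                    done = True
                    break
                self._acc += item
            if self._acc:  # first flush sends; later flushes edit in place
                if self._msg_id is None:
                    self._msg_id = (await self.adapter.send(self.chat_id, self._acc)).message_id
                else:
                    await self.adapter.edit_message(self.chat_id, self._msg_id, self._acc)
                self.already_sent = True
            if not done:
                await asyncio.sleep(0.05)


async def demo():
    adapter = StubAdapter()
    consumer = MiniConsumer(adapter, "chat-1")
    task = asyncio.create_task(consumer.run())
    for delta in ["Hello", ", ", "world"]:
        consumer.on_delta(delta)
        await asyncio.sleep(0.06)  # let the consumer flush between deltas
    consumer.finish()
    await asyncio.wait_for(task, timeout=5)
    return adapter, consumer


adapter, consumer = asyncio.run(demo())
```

Because the accumulated text is never cleared, every flush carries the full message so far, which is what makes the edit transport idempotent from the platform's point of view.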