docs: add quick commands documentation
Documents the quick_commands config feature from PR #746:

- configuration.md: full section with examples (server status, disk, gpu, update) and behavior notes (timeout, priority, works everywhere)
- cli.md: brief section with a config example and a link to the config guide
@@ -131,6 +131,23 @@ Type `/` to see an autocomplete dropdown of all available commands.
Commands are case-insensitive — `/HELP` works the same as `/help`. Most commands work mid-conversation.
:::
## Quick Commands
You can define custom commands that run shell commands instantly without invoking the LLM. These work in both the CLI and messaging platforms (Telegram, Discord, etc.).
```yaml
# ~/.hermes/config.yaml
quick_commands:
  status:
    type: exec
    command: systemctl status hermes-agent
  gpu:
    type: exec
    command: nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader
```
Then type `/status` or `/gpu` in any chat. See the [Configuration guide](/docs/user-guide/configuration#quick-commands) for more examples.
## Skill Slash Commands
Every installed skill in `~/.hermes/skills/` is automatically registered as a slash command. The skill name becomes the command:
@@ -632,6 +632,33 @@ stt:
Requires `VOICE_TOOLS_OPENAI_KEY` in `.env` for OpenAI STT.
## Quick Commands
Define custom commands that run shell commands without invoking the LLM — zero token usage, instant execution. Especially useful from messaging platforms (Telegram, Discord, etc.) for quick server checks or utility scripts.
```yaml
quick_commands:
  status:
    type: exec
    command: systemctl status hermes-agent
  disk:
    type: exec
    command: df -h /
  update:
    type: exec
    command: cd ~/.hermes/hermes-agent && git pull && pip install -e .
  gpu:
    type: exec
    command: nvidia-smi --query-gpu=name,utilization.gpu,memory.used,memory.total --format=csv,noheader
```
Usage: type `/status`, `/disk`, `/update`, or `/gpu` in the CLI or any messaging platform. The command runs locally on the host and returns the output directly — no LLM call, no tokens consumed.
- **30-second timeout** — long-running commands are killed with an error message
- **Priority** — quick commands are checked before skill commands, so you can override skill names
- **Type** — only `exec` is supported (runs a shell command); other types show an error
- **Works everywhere** — CLI, Telegram, Discord, Slack, WhatsApp, Signal
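The dispatch behavior described above (exec-only type, 30-second timeout, fall-through when a name is not configured) can be sketched in Python. This is a hypothetical illustration, not the actual hermes-agent code; the function name, signature, and return convention are assumptions:

```python
import subprocess


def run_quick_command(quick_commands: dict, name: str, timeout: float = 30.0):
    """Dispatch /<name> against a parsed quick_commands config mapping.

    Returns the command output, an error string, or None when the name
    is not configured (so the caller can fall through to skill commands
    and, ultimately, the LLM). Hypothetical sketch only.
    """
    entry = quick_commands.get(name)
    if entry is None:
        return None  # not a quick command; check skill commands next
    if entry.get("type") != "exec":
        # Only exec is supported; other types produce an error message.
        return f"Unsupported quick command type: {entry.get('type')!r}"
    try:
        result = subprocess.run(
            entry["command"],
            shell=True,            # commands are shell one-liners
            capture_output=True,
            text=True,
            timeout=timeout,       # long-running commands are killed
        )
    except subprocess.TimeoutExpired:
        return f"/{name} timed out after {timeout:.0f}s"
    return result.stdout + result.stderr
```

Because the lookup happens before skill-command resolution, a `quick_commands` entry named the same as an installed skill would shadow that skill's slash command.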
## Human Delay
Simulate human-like response pacing in messaging platforms: