[OpenClaw 3/8] Install OpenClaw on Hermes VPS and complete onboarding #726

Closed
opened 2026-03-21 13:57:23 +00:00 by perplexity · 1 comment
Collaborator

Description

Install OpenClaw on the Hermes VPS and configure it to use the local Ollama backend.

Tasks

  1. Install OpenClaw — npm install -g openclaw@latest (or Docker if the bare install fails)
  2. Run onboarding — openclaw onboard --install-daemon
  3. Configure LLM backend — Point to local Ollama at http://127.0.0.1:11434/v1
    • Set api: openai-chat (NOT openai-completions)
    • Set apiKey: ollama-local
    • Configure model ID matching what's pulled
  4. Configure gateway — Set port, auth token, bind to localhost (Tailscale will handle external access)
  5. Test basic chat — Open http://localhost:18789 and verify the agent responds
  6. Verify tool use — Ask the agent to create a file, list files, or run a command
  7. Document the config — Save openclaw.json to the OpenClaw Gitea repo
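Step 3 requires the configured model ID to match a model actually pulled into Ollama. A minimal sketch for checking that before editing openclaw.json, assuming Ollama's OpenAI-compatible `/v1/models` endpoint is up (`<model-id>` stays a placeholder; the helper names are illustrative, not OpenClaw APIs):

```python
import json
import urllib.request

OLLAMA_BASE = "http://127.0.0.1:11434/v1"

def model_available(models_payload: dict, model_id: str) -> bool:
    """Return True if model_id appears in an OpenAI-style /models response."""
    return any(m.get("id") == model_id for m in models_payload.get("data", []))

def fetch_models(base_url: str = OLLAMA_BASE) -> dict:
    """Query the OpenAI-compatible model list that Ollama exposes."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return json.load(resp)
```

Usage: `model_available(fetch_models(), "<model-id>")` should print True before you commit the config; if it doesn't, pull the model first.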

Configuration Reference

{
  "models": {
    "providers": {
      "openai": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-chat",
        "models": [{ "id": "<model-id>", "contextWindow": 65536 }]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "<model-id>" },
      "workspace": "~/.openclaw/workspace"
    }
  },
  "gateway": {
    "port": 18789,
    "mode": "local",
    "auth": { "mode": "token", "token": "<generate-secure-token>" }
  }
}
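The gateway config above needs a real value for `<generate-secure-token>`. One way to produce one, using Python's standard secrets module (a generic sketch, not an OpenClaw-specific tool):

```python
import secrets

def generate_token(n_bytes: int = 32) -> str:
    """Produce a random hex token suitable for the gateway auth.token field."""
    return secrets.token_hex(n_bytes)

print(generate_token())  # 64 hex characters; paste into openclaw.json
```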

Acceptance Criteria

  • OpenClaw gateway is running and accessible
  • Agent responds to prompts using local Ollama model
  • Agent can use basic tools (file ops, shell)
  • Config is committed to Gitea repo
  • Gateway daemon starts on boot
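Before committing openclaw.json to the Gitea repo (the last item above), a quick sanity check that the fields from the reference config are present can catch typos early. A minimal sketch; the key paths are taken from the reference block in this issue and the helper name is illustrative:

```python
REQUIRED_PATHS = [
    ("models", "providers", "openai", "baseUrl"),
    ("models", "providers", "openai", "api"),
    ("agents", "defaults", "model", "primary"),
    ("gateway", "port"),
    ("gateway", "auth", "token"),
]

def missing_keys(config: dict) -> list:
    """Return the required key paths that are absent from config."""
    missing = []
    for path in REQUIRED_PATHS:
        node = config
        for key in path:
            if not isinstance(node, dict) or key not in node:
                missing.append(".".join(path))
                break
            node = node[key]
    return missing
```

An empty return means the reference fields are all present; anything else names the missing path (e.g. `gateway.auth.token`).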

Depends on

  • Ollama installed and working (Phase 1)

Parent epic: rockachopa/Timmy-time-dashboard#663


Migrated from perplexity/the-matrix#117

kimi was assigned by Timmy 2026-03-21 18:02:07 +00:00
kimi added this to the OpenClaw Sovereignty milestone 2026-03-21 20:24:23 +00:00
claude added the rejected-direction label 2026-03-23 13:51:18 +00:00
Author
Collaborator

🧹 Closed — Rejected Direction (OpenClaw)

OpenClaw direction was explicitly rejected by the principal. The harness is the product — sovereign AI runs on Hermes with local models, not OpenClaw.

Ref: Deep Backlog Triage #1076. Reopen if needed.
