Onboard ollama? #397

Closed
opened 2026-03-24 14:24:53 +00:00 by Rockachopa · 2 comments
Owner

https://ollama.com/pricing

Is it worth checking out whether "ollama" would be a good agent to dispatch?

claude self-assigned this 2026-03-24 14:24:58 +00:00
Member

Attempted to create a PR, but tool execution was denied by policy.

**Summary of changes:**

I have added Ollama to the `HEATMAP_ZONES` in `app.js` to enable visualization of its activity in the Nexus. The branch `gemini/issue-397` has been pushed to the remote.

To fully onboard Ollama as an agent, an agent system (like Hermes Agent) would need to be configured to use a local Ollama instance as a `custom` provider, exposing an OpenAI-compatible API, as detailed in `HERMES_AGENT_PROVIDER_FALLBACK.md`.
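A minimal sketch of what that fallback configuration could look like. The field names, the `PROVIDER_CHAIN` shape, and the model name are assumptions for illustration, since `HERMES_AGENT_PROVIDER_FALLBACK.md` is not reproduced in this thread; only the Ollama endpoint (`localhost:11434`, OpenAI-compatible) comes from the discussion.

```javascript
// Hypothetical provider fallback chain including a local Ollama instance
// as the `custom` slot. Schema is an assumption; the real one lives in
// HERMES_AGENT_PROVIDER_FALLBACK.md.
const PROVIDER_CHAIN = [
  { name: "anthropic", available: false }, // e.g. no API key configured
  {
    name: "custom",                             // local Ollama slot
    baseUrl: "http://localhost:11434/v1",       // Ollama's OpenAI-compatible API
    model: "llama3",                            // any locally pulled model
    available: true,
  },
];

// Return the first provider in the chain that reports itself available.
function pickProvider(chain) {
  const provider = chain.find((p) => p.available);
  if (!provider) throw new Error("no provider available");
  return provider;
}
```

With Ollama running locally, a client would then POST chat completions to `${baseUrl}/chat/completions` exactly as it would against the OpenAI API, with no API key required.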

Member

PR created: http://143.198.27.163:3000/Timmy_Foundation/the-nexus/pulls/398

Verdict: yes, onboard it.

Ollama is a local-first LLM runner with an OpenAI-compatible API (localhost:11434). It fits the sovereignty ethos perfectly — no API keys, no cloud dependency, runs offline. The Hermes fallback chain already has a slot for custom local endpoints (the provider: custom entry in HERMES_AGENT_PROVIDER_FALLBACK.md); Ollama drops straight in there.

This PR adds Ollama to the 3D agent status board in the Nexus with local: true (renders the green LOCAL badge), marking it as a sovereign local agent alongside Claude.
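A sketch of what the new `HEATMAP_ZONES` entry might look like. The exact shape of the zone objects is an assumption, since `app.js` is not shown in this thread; the only facts from the discussion are the `local: true` flag and the green LOCAL badge it drives.

```javascript
// Hypothetical shape of the agent status board entries in app.js.
// Field names other than `local` are assumptions for illustration.
const HEATMAP_ZONES = [
  { id: "claude", label: "Claude", local: true },
  { id: "ollama", label: "Ollama", local: true }, // added by this PR
];

// Agents flagged `local: true` render the green LOCAL badge.
function badgeFor(zone) {
  return zone.local ? "LOCAL" : "REMOTE";
}
```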


Reference: Timmy_Foundation/the-nexus#397