Comprehensive guide for using Hermes Agent with alternative LLM backends:

- Ollama (local models, zero config)
- vLLM (high-performance GPU inference)
- SGLang (RadixAttention, prefix caching)
- llama.cpp / llama-server (CPU & Metal inference)
- LiteLLM Proxy (multi-provider gateway)
- ClawRouter (cost-optimized routing with complexity scoring)
- A table of 10+ other compatible providers (Together, Groq, DeepSeek, etc.)
- A "Choosing the Right Setup" decision table
- General custom endpoint setup instructions

All of these work via the existing `OPENAI_BASE_URL` + `OPENAI_API_KEY` custom endpoint support; no code changes are needed.
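For example, pointing the agent at a local Ollama server could look like the following. This is a minimal sketch, assuming Ollama's default OpenAI-compatible endpoint on port 11434; the API key value is a placeholder, since Ollama does not check it:

```bash
# Route Hermes Agent through a local Ollama server using the generic
# custom-endpoint variables described above.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"  # placeholder; Ollama ignores the key
```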
# Website
This website is built using Docusaurus, a modern static website generator.
### Installation

```bash
yarn
```
### Local Development

```bash
yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
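If the default port is busy or the server needs to be reachable from other machines, extra flags are forwarded to the underlying `docusaurus start` command. A sketch, assuming port 3001 is free:

```bash
# Run the dev server on a different port and listen on all interfaces
# (both flags are passed through to `docusaurus start`).
yarn start --port 3001 --host 0.0.0.0
```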
### Build

```bash
yarn build
```
This command generates static content into the `build` directory, which can be served by any static content hosting service.
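To sanity-check the production output before deploying, the built site can be previewed locally. A sketch, assuming the default `serve` script that the Docusaurus scaffold adds to `package.json`:

```bash
# Serve the static output from ./build on http://localhost:3000.
yarn build
yarn serve  # wraps `docusaurus serve`
```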
### Deployment

Using SSH:

```bash
USE_SSH=true yarn deploy
```

Not using SSH:

```bash
GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push it to the `gh-pages` branch.
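In a non-interactive environment such as CI, the same command can run with credentials supplied inline. This is a sketch, assuming a GitHub personal access token stored under a hypothetical `GH_TOKEN` secret and a Docusaurus version whose deploy command reads `GIT_PASS`:

```bash
# Hypothetical CI deploy step: build and push to gh-pages over HTTPS.
# GH_TOKEN is an assumed secret name; GIT_USER/GIT_PASS are consumed
# by `docusaurus deploy` when USE_SSH is not set.
GIT_USER=octocat GIT_PASS="$GH_TOKEN" yarn deploy
```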