Gemma4 llama.cpp Hermes profile is ready.

Profile name:
- gemma4-llamacpp

Wrapper command:
- gemma4-llamacpp chat

Direct launcher:
- bash ~/.timmy/scripts/run_hermes_gemma4_llamacpp.sh

Current expectation:
- Hermes profile points at local llama.cpp on http://localhost:8081/v1
- You still need a llama-server process on 8081 with a real GGUF loaded
- Ideally that GGUF is Gemma 4 once the model side is finished
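A minimal sketch of how to satisfy that expectation, assuming llama.cpp is already built and a GGUF file exists on disk (the model path below is a placeholder, not a path from this setup):

```shell
# Start llama.cpp's OpenAI-compatible server on port 8081.
# /path/to/gemma-4.gguf is a placeholder -- point it at your real GGUF.
llama-server \
  -m /path/to/gemma-4.gguf \
  --host 127.0.0.1 \
  --port 8081
```

Once this is up, the profile's base URL http://localhost:8081/v1 should start answering requests.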

Current blocker:
- There is no active llama-server on :8081 right now
- Gemma4 is available in Ollama, but you said you want llama.cpp, so this profile is prepared for the llama.cpp path specifically
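A quick way to verify the blocker before launching the wrapper, a small sketch using only Python's standard socket module (host and port taken from the profile above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timeout, or unresolvable host all mean "not up".
        return False

# The Hermes profile expects llama-server on localhost:8081.
if port_open("localhost", 8081):
    print("something is listening on :8081 -- llama-server may be up")
else:
    print("nothing on :8081 -- start llama-server first")
```

This only confirms a TCP listener exists; it does not check that the listener is actually llama-server with the right GGUF loaded.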