feat: standardize llama.cpp backend for sovereign local inference (#1123)

commit 2f5f874e84 (parent ad98bd5ead)
Date: 2026-04-14 01:57:11 +00:00


@@ -46,5 +46,3 @@ Standardizes local LLM inference across the fleet using llama.cpp.
- Slow inference → set -t (threads) to the physical core count
- OOM → reduce the context size (-c)
- Port conflict → find the holding process with lsof -i :11435
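The fixes above can be sketched as a single invocation. This is a hedged example, not the commit's actual launch command: the binary name and flags (-m, -t, -c, --port) are llama.cpp's real llama-server options, but the model path and context value are placeholders.

```shell
# Sketch of a llama-server launch combining the troubleshooting knobs above.
# /models/model.gguf is a placeholder path, not from the commit.
THREADS="$(nproc)"   # -t: match the core count if inference is slow
CTX=2048             # -c: lower this value if the server OOMs
PORT=11435           # the port the notes probe with lsof
CMD="llama-server -m /models/model.gguf -t ${THREADS} -c ${CTX} --port ${PORT}"
echo "${CMD}"
```

Printing the command rather than executing it keeps the sketch runnable without the binary or a model on hand.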
See systemd/llama-server.service for production deployment.
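The referenced unit file systemd/llama-server.service is not part of this hunk; a minimal sketch of what such a unit might contain, with the binary path, model path, and flag values all assumptions rather than the repo's actual settings:

```ini
# Hypothetical sketch of systemd/llama-server.service; paths and values
# are placeholders, not the file shipped in this commit.
[Unit]
Description=llama.cpp inference server
After=network.target

[Service]
ExecStart=/usr/local/bin/llama-server -m /models/model.gguf -t 8 -c 2048 --port 11435
Restart=on-failure

[Install]
WantedBy=multi-user.target
```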