From cbf867f9d9869631cd00322bf45221ccf4615246 Mon Sep 17 00:00:00 2001
From: Alexander Whitestone
Date: Tue, 14 Apr 2026 01:57:11 +0000
Subject: [PATCH] feat: standardize llama.cpp backend for sovereign local inference (#1123)

---
 docs/local-llm.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/local-llm.md b/docs/local-llm.md
index 25072383..7fcae56c 100644
--- a/docs/local-llm.md
+++ b/docs/local-llm.md
@@ -46,5 +46,3 @@ Standardizes local LLM inference across the fleet using llama.cpp.
 - Slow → -t to core count
 - OOM → reduce -c
 - Port conflict → lsof -i :11435
-
-See systemd/llama-server.service for production deployment.
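
The troubleshooting bullets kept by this patch can be sketched as a small shell check. The port (11435) and the `-t`/`-c` flags come from the doc and are real `llama-server` options; the model path and the context value of 2048 are illustrative assumptions, so the actual server launch is left commented out:

```shell
#!/bin/sh
PORT=11435

# Port conflict: list any process bound to the port
# (lsof exits non-zero when the port is free).
lsof -i :"$PORT" || echo "port $PORT is free"

# Slow generation: pin the thread count to the available cores.
THREADS="$(nproc)"
echo "using $THREADS threads"

# OOM: shrink the context window via -c (2048 is an illustrative value).
# Model path below is a placeholder assumption, not from the doc:
# llama-server -m ./models/example.gguf -t "$THREADS" -c 2048 --port "$PORT"
```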