feat: standardize llama.cpp backend for sovereign local inference (#1123)
@@ -22,7 +22,6 @@ ProtectHome=read-only
ReadWritePaths=/opt/models
PrivateTmp=true
StandardOutput=journal
StandardError=journal
SyslogIdentifier=llama-server

[Install]
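For context, the directives in this hunk sit inside the `[Service]` section of the unit. A minimal sketch of the full unit file is below; the unit path, `Description`, `ExecStart` binary path, and model flags are assumptions for illustration and are not part of this diff.

```ini
# Hypothetical /etc/systemd/system/llama-server.service
# ExecStart path and flags are illustrative assumptions.
[Unit]
Description=llama.cpp inference server
After=network.target

[Service]
ExecStart=/usr/local/bin/llama-server --model /opt/models/model.gguf
# Hardening and logging directives shown in this commit:
ProtectHome=read-only
ReadWritePaths=/opt/models
PrivateTmp=true
StandardOutput=journal
StandardError=journal
SyslogIdentifier=llama-server

[Install]
WantedBy=multi-user.target
```

With `ProtectHome=read-only` and `PrivateTmp=true` in place, `ReadWritePaths=/opt/models` is what keeps the model directory writable for the sandboxed service; after editing, `systemctl daemon-reload` picks up the change.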