- Add EPIC.md with resurrection plan
- Create Hermes profile with Bezalel persona
- Add llama-server.sh for Gemma 4 inference
- Update start_bezalel.sh with stack checks
- Add README with quick start guide

Backend: llama.cpp
Model: Gemma 4 26B MoE (Apache 2.0)
Frontend: Hermes profile

No OpenAI. No cloud. Pure sovereign stack.
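The "stack checks" in start_bezalel.sh presumably verify that the llama.cpp server is up before launching the frontend. A minimal sketch of such a check, assuming llama-server listens on its default port 8080 and exposes the standard llama.cpp `/health` endpoint (the actual script and ports are not shown in the source):

```python
import json
import urllib.error
import urllib.request


def llama_server_healthy(base_url: str = "http://127.0.0.1:8080") -> bool:
    """Return True if a llama.cpp llama-server answers its /health endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
            # llama-server reports {"status": "ok"} once the model is loaded.
            return payload.get("status") == "ok"
    except (urllib.error.URLError, ValueError, OSError):
        # Server not running, unreachable, or returned non-JSON.
        return False


if __name__ == "__main__":
    print("llama-server up:", llama_server_healthy())
```

A launcher script can poll this check in a loop and only start the Hermes frontend once it returns True.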
10 lines
159 B
JSON
{
  "updated_at": "2026-04-02T20:09:18.074641",
  "platforms": {
    "telegram": [],
    "whatsapp": [],
    "signal": [],
    "email": [],
    "sms": []
  }
}
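The platforms file above can be maintained programmatically. A minimal sketch, assuming the file is named `platforms.json` and that each platform key holds a list of contact handles (the path, schema semantics, and function name here are illustrative, not taken from the repo):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Platform keys mirrored from the JSON file shown above.
PLATFORMS = ("telegram", "whatsapp", "signal", "email", "sms")


def register_contact(path: Path, platform: str, handle: str) -> dict:
    """Append a handle under the given platform and refresh updated_at."""
    if platform not in PLATFORMS:
        raise ValueError(f"unknown platform: {platform}")
    if path.exists():
        data = json.loads(path.read_text())
    else:
        # Bootstrap an empty file matching the schema above.
        data = {"updated_at": None, "platforms": {p: [] for p in PLATFORMS}}
    if handle not in data["platforms"][platform]:
        data["platforms"][platform].append(handle)
    data["updated_at"] = datetime.now(timezone.utc).isoformat()
    path.write_text(json.dumps(data, indent=2))
    return data
```

Writing the file atomically (temp file plus rename) would be a sensible hardening step if several processes touch it.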