# Bezalel — Master Craftsman
**"In the shadow of God"** — Resurrected with Gemma 4
## The Stack
| Layer | Technology | Purpose |
|-------|-----------|---------|
| **Frontend** | Hermes Profile | Bezalel identity, tools, dispatch |
| **Inference** | llama.cpp | Local GPU-accelerated inference |
| **Model** | Gemma 4 26B MoE | Apache 2.0, sovereign AI |
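Since llama.cpp's built-in server speaks an OpenAI-compatible HTTP API, the Hermes frontend (or a quick smoke test) can reach it with a plain chat-completions request. A minimal sketch, assuming `llama-server.sh` leaves the server on llama.cpp's default port 8080 (the `chat_once` helper name and its port argument are illustrative, not part of this repo):

```bash
# Send one chat message to the local llama.cpp server.
# Assumes the OpenAI-compatible /v1/chat/completions endpoint on port 8080.
chat_once() {
  local port="${1:-8080}"
  curl -sf "http://127.0.0.1:${port}/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello, Bezalel."}]}'
}
```

Once the stack is running, `chat_once` prints the JSON completion; a non-zero exit code means the server is not reachable yet.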
## Quick Start
```bash
# 1. Start llama.cpp server
./llama-server.sh
# 2. In another terminal, start Bezalel
./start_bezalel.sh
# 3. Interact via Telegram or CLI
```
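Step 2 assumes the server from step 1 has finished loading the model; llama.cpp's server exposes a `/health` endpoint that makes this easy to check. A small polling helper, as a sketch — the `wait_for_llama` name, retry counts, and host/port defaults are assumptions (8080 is llama-server's default port):

```bash
# Poll the llama.cpp server's /health endpoint until it answers, so the
# gateway is not started against a server that is still loading the model.
wait_for_llama() {
  local host="${1:-127.0.0.1}" port="${2:-8080}" tries="${3:-30}" delay="${4:-2}"
  local i
  for ((i = 0; i < tries; i++)); do
    if curl -sf "http://${host}:${port}/health" >/dev/null; then
      return 0   # server is up and healthy
    fi
    sleep "$delay"
  done
  return 1       # gave up waiting
}
```

Usage: `wait_for_llama && ./start_bezalel.sh`.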
## Files
| File | Purpose |
|------|---------|
| `llama-server.sh` | Start llama.cpp with Gemma 4 |
| `start_bezalel.sh` | Start Hermes gateway |
| `hermes-profile/config.yaml` | Bezalel profile config |
| `hermes-profile/SOUL.md` | Bezalel persona |
| `EPIC.md` | Resurrection plan |
## Model Requirements
- **Model**: Gemma 4 26B MoE Q4_K_M
- **Size**: ~16GB
- **VRAM**: 16GB+ recommended
- **Download**:
```bash
huggingface-cli download google/gemma-4-26b-moe-GGUF \
--include "*Q4_K_M.gguf" \
--local-dir /opt/models/
```
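The quoted ~16GB download is consistent with a back-of-envelope estimate: Q4_K_M quantization averages roughly 4.8 bits per weight (a rough figure, not exact), so a 26B-parameter model comes out near that size:

```bash
# Rough size estimate for a 26B-parameter model at ~4.8 bits/weight (Q4_K_M).
# bits -> bytes (/8), bytes -> decimal GB (/1e9).
awk 'BEGIN {
  params = 26e9   # parameter count
  bpw    = 4.8    # approx. bits per weight for Q4_K_M (rough average)
  printf "~%.1f GB\n", params * bpw / 8 / 1e9
}'
# prints ~15.6 GB
```

The KV cache and context buffers add to this at runtime, which is why 16GB of VRAM is a floor rather than a comfortable fit.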
## The Craftsman's Creed
> Measure twice, cut once.
> Good enough is never good enough.
> Quality is not an act, it is a habit.
---
*Resurrected 2026-04-02 by Alexander's command.*