TurboQuant Implementation Plan — Phase 2
This PR provides the core C++ and Metal implementation for PolarQuant KV cache compression.
Components Added
- `llama-turbo.h` / `llama-turbo.cpp`: CPU reference implementation of the PolarQuant algorithm (Walsh-Hadamard transform (WHT) + Lloyd-Max quantization).
- `ggml-metal-turbo.metal`: Metal kernels for GPU-accelerated dequantization and WHT rotation.
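The CPU reference combines a Walsh-Hadamard rotation with Lloyd-Max codebooks. As a rough sketch of those two building blocks (the function names, the in-place recursion, and the uniform codebook initialization are my assumptions, not the actual `llama-turbo.cpp` API):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// In-place fast Walsh-Hadamard transform (normalized), for n a power of two.
// Minimal sketch of the rotation step; the real implementation may differ.
static void wht_inplace(std::vector<float> & v) {
    const size_t n = v.size();
    for (size_t h = 1; h < n; h <<= 1) {
        for (size_t i = 0; i < n; i += h << 1) {
            for (size_t j = i; j < i + h; ++j) {
                const float a = v[j], b = v[j + h];
                v[j]     = a + b;
                v[j + h] = a - b;
            }
        }
    }
    const float scale = 1.0f / std::sqrt((float) n);
    for (float & x : v) x *= scale;
}

// One-dimensional Lloyd-Max codebook training: alternate between assigning
// samples to the nearest codeword and moving each codeword to the mean of
// the samples assigned to it.
static std::vector<float> lloyd_max(const std::vector<float> & x, int levels, int iters = 25) {
    // initialize the codebook uniformly over the data range (an assumption)
    float lo = x[0], hi = x[0];
    for (float v : x) { lo = std::min(lo, v); hi = std::max(hi, v); }
    std::vector<float> c(levels);
    for (int k = 0; k < levels; ++k)
        c[k] = lo + (hi - lo) * (k + 0.5f) / levels;

    std::vector<float> sum(levels);
    std::vector<int>   cnt(levels);
    for (int it = 0; it < iters; ++it) {
        std::fill(sum.begin(), sum.end(), 0.0f);
        std::fill(cnt.begin(), cnt.end(), 0);
        for (float v : x) {
            int best = 0;
            for (int k = 1; k < levels; ++k)
                if (std::fabs(v - c[k]) < std::fabs(v - c[best])) best = k;
            sum[best] += v; cnt[best]++;
        }
        for (int k = 0; k < levels; ++k)
            if (cnt[k] > 0) c[k] = sum[k] / cnt[k];
    }
    return c;
}
```

The normalized WHT is its own inverse, which is why the Metal dequantization kernel can apply the same rotation to undo it.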
Integration Steps for llama.cpp
To integrate this into a clean llama.cpp checkout:
- Add to ggml-metal.metal:
  - Copy the kernels from `ggml-metal-turbo.metal` into `ggml/src/ggml-metal.metal`.
  - Register the new kernels in `ggml-metal.m`.
- Add to llama.cpp:
  - Include `llama-turbo.h` in `llama.cpp`.
  - Add `GGML_TYPE_TURBO4` to the `ggml_type` enum in `ggml.h`.
  - Update the KV cache allocation logic to support the new type.
- Update Makefile/CMake:
  - Add `llama-turbo.cpp` to the build sources.
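The enum change above is small but position-sensitive. A sketch of what the edit to `ggml.h` might look like (the surrounding entries are illustrative, not the real `ggml_type` ordering):

```cpp
// Sketch only: the real ggml_type enum in ggml.h has many more entries.
// The new value should be appended before GGML_TYPE_COUNT so that the
// existing type IDs, which are serialized into GGUF files, keep their values.
enum ggml_type {
    GGML_TYPE_F32 = 0,
    GGML_TYPE_F16 = 1,
    // ... existing quantized types ...
    GGML_TYPE_TURBO4,   // new: 4-bit PolarQuant KV cache type
    GGML_TYPE_COUNT,
};
```

Appending rather than inserting keeps existing model files loadable without a format bump.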
Ollama Integration (The Biggest Challenge)
Ollama builds llama.cpp as a submodule. To use this implementation in Ollama:
- Custom llama.cpp Submodule:
  - Point Ollama's `llm/llama.cpp` submodule to our fork containing these changes.
- Update CGo Bindings:
  - If the `llama.h` API surface changed, update `llm/llama.go` to match.
- Build Ollama:
  - Run `go generate ./...` and then `go build .` to produce the custom Ollama binary.
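The CGo step matters because Go can only bind C-linkage symbols: any new entry point the fork adds must be declared `extern "C"` in the header that `llm/llama.go` imports. A hypothetical sketch of the shape involved (`llama_turbo_set_kv_type` is an invented name for illustration, not part of the actual API):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical declaration: CGo cannot call C++-mangled symbols, so a new
// API surface must use C linkage. The name and signature here are invented.
extern "C" int llama_turbo_set_kv_type(void * ctx, int32_t ggml_type_id);

// Stub body so this standalone sketch links; the fork's real implementation
// would live in llama-turbo.cpp and act on the context's KV cache.
extern "C" int llama_turbo_set_kv_type(void * ctx, int32_t ggml_type_id) {
    (void) ctx;
    // return 0 on success, non-zero for an invalid type id
    return ggml_type_id >= 0 ? 0 : 1;
}
```

On the Go side, the binding would then be an ordinary `C.llama_turbo_set_kv_type(...)` call from `llm/llama.go`.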
Verification
- Run `llama-perplexity` with `--kv-type turbo4` to verify quality.
- Run `llama-bench` to verify Metal shader performance.