
Commit e7d1c1d

fix(llamafile): resolve deferred experts data race and update README (#1646)
1 parent 51745a9 commit e7d1c1d

File tree: 3 files changed (+696, -88 lines)

kt-kernel/README.md

Lines changed: 132 additions & 85 deletions
@@ -2,45 +2,48 @@

High-performance kernel operations for KTransformers, featuring CPU-optimized MoE inference with AMX, AVX, KML, and BLIS (AMD library) support.

-- [Note](#note)
-- [Features](#features)
-- [Installation](#installation)
-- [Prerequisites](#prerequisites)
-- [Quick Installation (Recommended)](#quick-installation-recommended)
-- [Manual Configuration (Advanced)](#manual-configuration-advanced)
-- [Verification](#verification)
-- [Integration with SGLang](#integration-with-sglang)
-- [Installation Steps](#installation-steps)
-- [Complete Example: Qwen3-30B-A3B](#complete-example-qwen3-30b-a3b)
-- [KT-Kernel Parameters](#kt-kernel-parameters)
-- [Direct Python API Usage](#direct-python-api-usage)
-- [Advanced Options](#advanced-options)
-- [Build Configuration](#build-configuration)
-- [Manual Installation](#manual-installation)
-- [Error Troubleshooting](#error-troubleshooting)
-- [CUDA Not Found](#cuda-not-found)
-- [hwloc Not Found](#hwloc-not-found)
-- [Weight Quantization](#weight-quantization)
-- [CPU Weights (for "cold" experts on CPU)](#cpu-weights-for-cold-experts-on-cpu)
-- [GPU Weights (for "hot" experts on GPU)](#gpu-weights-for-hot-experts-on-gpu)
-- [Before Commit!](#before-commit)
+- [KT-Kernel](#kt-kernel)
+- [Note](#note)
+- [Features](#features)
+- [Installation](#installation)
+- [Prerequisites](#prerequisites)
+- [Quick Installation (Recommended)](#quick-installation-recommended)
+- [Manual Configuration (Advanced)](#manual-configuration-advanced)
+- [Verification](#verification)
+- [Integration with SGLang](#integration-with-sglang)
+- [Installation Steps](#installation-steps)
+- [1. Install SGLang](#1-install-sglang)
+- [2. Prepare Weights](#2-prepare-weights)
+- [3. Launch SGLang Server](#3-launch-sglang-server)
+- [Complete Example: Qwen3-30B-A3B](#complete-example-qwen3-30b-a3b)
+- [Option A: AMX Backend (AMXINT8)](#option-a-amx-backend-amxint8)
+- [Option B: LLAMAFILE Backend (GGUF)](#option-b-llamafile-backend-gguf)
+- [KT-Kernel Parameters](#kt-kernel-parameters)
+- [Direct Python API Usage](#direct-python-api-usage)
+- [Advanced Options](#advanced-options)
+- [Build Configuration](#build-configuration)
+- [Manual Installation](#manual-installation)
+- [1. Install System Dependencies](#1-install-system-dependencies)
+- [2. Set Build Configuration](#2-set-build-configuration)
+- [3. Build and Install](#3-build-and-install)
+- [Error Troubleshooting](#error-troubleshooting)
+- [CUDA Not Found](#cuda-not-found)
+- [hwloc Not Found](#hwloc-not-found)
+- [Weight Quantization](#weight-quantization)
+- [Before Commit!](#before-commit)
## Note

**Current Support Status:**
-- **Intel CPUs with AMX**: Fully supported
-- ⚠️ **Universal CPU with llamafile**: In preview, not yet fully complete
-- ⚠️ **AMD CPUs with BLIS**: Upcoming, not yet fully integrated
+- **Intel CPUs with AMX**: Fully supported (using weights converted to INT4/INT8 format)
+- **Universal CPU (llamafile backend)**: Supported (using GGUF-format weights)
+- ⚠️ **AMD CPUs with BLIS**: In progress, not yet fully integrated

## Features

-- **AMX Optimization**: Intel AMX (Advanced Matrix Extensions) support for INT4/INT8 quantized MoE inference
-- **Multi-Backend**: Unified `KTMoEWrapper` API supporting multiple backends (AMXINT4, AMXINT8, LLAMAFILE*)
-- **Flexible Backends**: AVX512, AVX2 via pluggable backend architecture
-- **Efficient MoE**: Optimized Mixture-of-Experts operations with NUMA-aware memory management
-- **Async Execution**: Non-blocking `submit_forward` / `sync_forward` API for improved pipelining
-- **Easy Integration**: Clean Python API with automatic backend selection
-
-**Note**: *LLAMAFILE backend support is currently in *preview* and not yet fully complete.
+- **CPU-Optimized MoE Kernels**: High-throughput MoE expert kernels optimized for modern CPU instruction sets.
+- **AMX INT4/INT8 Backend**: INT4 / INT8 quantized expert inference backend for AMX-capable servers.
+- **Llamafile CPU Backend**: AVX2/AVX512-based MoE backend built on Llamafile for universal CPU deployment.
+- **NUMA-Aware Execution**: Thread pool and memory layout designed for multi-socket / multi-NUMA machines.

## Installation

@@ -62,18 +65,18 @@ conda activate kt-kernel

You can now install in two clear steps using the same script.

-Option A: Two-step (explicit)
+Option A: Two-step (install dependencies and build separately)

```bash
# 1) Install system prerequisites (cmake, hwloc, pkg-config)
./install.sh deps

-# 2) Build and install kt-kernel (auto-detects CPU)
-# By default, the script cleans the local ./build directory before compiling.
+# 2) Build and install kt-kernel (auto-detects CPU instruction set)
+# By default, the script cleans the local ./build directory before compiling
./install.sh build
```

-Option B: One-step (deps + build)
+Option B: One-step

```bash
./install.sh
@@ -88,7 +91,9 @@ The install script will:
- AMX CPU detected → `NATIVE + AMX=ON`
- No AMX detected → `NATIVE + AMX=OFF`

-⚠️ **Important for LLAMAFILE backend users:** If you have an AMX-capable CPU and plan to use the LLAMAFILE backend, do NOT use auto-detection. Use manual mode with `AVX512` or `AVX2` instead of `NATIVE` to avoid compilation issues (see below).
+⚠️ **Important for LLAMAFILE backend users:**
+If you have an AMX-capable CPU but plan to use the LLAMAFILE backend, do NOT use the default auto-detection build.
+Use "manual mode" with `CPUINFER_CPU_INSTRUCT` set to `AVX512` or `AVX2` instead of `NATIVE` to avoid compilation issues (see below).

### Manual Configuration (Advanced)

@@ -99,7 +104,7 @@ If you need specific build options (e.g., for LLAMAFILE backend, compatibility,
export CPUINFER_CPU_INSTRUCT=AVX512 # Options: NATIVE, AVX512, AVX2, FANCY
export CPUINFER_ENABLE_AMX=OFF # Options: ON, OFF

-# Run with manual mode (build only)
+# Build only (skip auto-detection of instruction set)
./install.sh build --manual
```

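Before choosing between the auto-detected build and manual mode, it helps to confirm whether the CPU actually exposes AMX. A minimal check, assuming a Linux host where `lscpu` reports CPU flags:

```bash
# List any AMX feature flags (amx_bf16, amx_int8, amx_tile); no output means no AMX
lscpu | grep -o 'amx[_a-z0-9]*' | sort -u

# Equivalent check against /proc/cpuinfo
grep -o 'amx[_a-z0-9]*' /proc/cpuinfo | sort -u
```

If no `amx_*` flags appear, the auto-detected `NATIVE + AMX=OFF` build is the sensible default; if they do appear but you intend to use the LLAMAFILE backend, use the manual `AVX512`/`AVX2` configuration shown above.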
@@ -127,27 +132,35 @@ pip install -e "python[all]"

#### 2. Prepare Weights

-You need both GPU weights and CPU weights for heterogeneous inference:
+You need both GPU weights and CPU-side expert weights for heterogeneous inference. The exact format depends on the backend:

-**GPU Weights:** Use the original / quantized model weights.
+**GPU Weights (for all backends):**
+Use the model weights required by SGLang for GPU inference (for example, the original or already-quantized model directory from Hugging Face).

-**CPU Weights:** Quantize to AMX-optimized format using the conversion script:
+**CPU Weights (AMX backend: `AMXINT4` / `AMXINT8`):**
+Quantize weights to AMX-optimized INT4/INT8 format using the provided script:

```bash
python scripts/convert_cpu_weights.py \
  --input-path /path/to/model \
-  --input-type bf16 \ # Depends on your GPU weights type: fp8, fp16, or bf16
+  --input-type bf16 \
  --output /path/to/cpu-weights \
  --quant-method int8 # or int4
```

+- `--input-path`: Path to GPU-side original weights
+- `--input-type`: Depends on your GPU weights type (`fp8`, `fp16`, or `bf16`)
+
+In SGLang integration, `--kt-weight-path` should point to this converted CPU weights directory.
+
**Supported input formats:** FP8, FP16, BF16 → INT4/INT8.

-For more details, see:
-- [CPU Weights conversion](#cpu-weights-for-cold-experts-on-cpu)
-- [GPU Weights quantization](#gpu-weights-for-hot-experts-on-gpu)
+**CPU Weights (LLAMAFILE backend: `LLAMAFILE`):**
+LLAMAFILE uses pre-quantized **GGUF** weights on the CPU side directly, without running `convert_cpu_weights.py`. You need to:

-**Note:** LLAMAFILE backend supports GGUF format directly, but this feature is still in preview.
+- Download a GGUF model directly from the web (e.g., GGUF repos on Hugging Face / ModelScope);
+- In SGLang integration, use that GGUF directory as `--kt-weight-path`.
+KT-Kernel supports multiple GGUF quantization formats such as `Q4_K_M`, `Q4_K`, `Q5_K`, etc. Choose based on your latency and accuracy requirements.

#### 3. Launch SGLang Server

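Before launching, it is worth sanity-checking that `--kt-weight-path` points at what the chosen backend expects. A quick sketch with illustrative paths (substitute your own directories):

```bash
# AMX backends (AMXINT4/AMXINT8): the directory produced by convert_cpu_weights.py
ls /path/to/cpu-weights

# LLAMAFILE backend: the directory containing the downloaded .gguf file(s)
ls /path/to/gguf-model/*.gguf
```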
@@ -177,14 +190,12 @@ See [KT-Kernel Parameters](#kt-kernel-parameters) section below for detailed par

### Complete Example: Qwen3-30B-A3B

-This example demonstrates the full workflow from downloading weights to launching the server.
+This example demonstrates the full workflow from downloading weights to launching the server, showing both **AMX backend** and **LLAMAFILE backend** options.

**Hardware Configuration:**
- **GPU**: NVIDIA RTX 4090 24GB
- **CPU**: 2x Intel Xeon Gold 6454S (64 physical cores total, 128 threads, 2 NUMA nodes)
- **Model**: [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)
-- **GPU Weights**: BF16 original weights
-- **CPU Weights**: AMXINT8 quantized

**How to verify your system configuration:**
@@ -202,19 +213,25 @@ NUMA node(s): 2
- `--kt-cpuinfer 64`: Set to physical cores (64), not hyperthreads (128)
- `--kt-threadpool-count 2`: 2 NUMA nodes detected (dual-socket system)
- `--kt-num-gpu-experts 32`: With 24GB GPU memory, we can fit ~32 experts on GPU for this model (varies by model architecture and actual memory usage)
-- `--kt-max-deferred-experts-per-token 2`: Enable pipelined execution - allows CPU to process next batch while GPU completes current batch
+- `--kt-max-deferred-experts-per-token 2`: Enable pipelined execution; allows CPU to process next batch while GPU completes current batch
+
+---
+
+#### Option A: AMX Backend (AMXINT8)

-#### Step 1: Download model weights
+For Intel CPUs with AMX instruction set support.
+
+**Step 1: Download model weights**

```bash
# Install huggingface-cli if not already installed
pip install huggingface-hub

# Download model from Hugging Face
-hf download Qwen/Qwen3-30B-A3B --local-dir /mnt/data/models/Qwen3-30B-A3B
+huggingface-cli download Qwen/Qwen3-30B-A3B --local-dir /mnt/data/models/Qwen3-30B-A3B
```

-#### Step 2: Convert to CPU weights (AMXINT8)
+**Step 2: Convert to CPU weights (AMXINT8)**

```bash
python scripts/convert_cpu_weights.py \
@@ -224,7 +241,7 @@ python scripts/convert_cpu_weights.py \
  --quant-method int8
```

-#### Step 3: Launch SGLang server
+**Step 3: Launch SGLang server**

```bash
python -m sglang.launch_server \
@@ -244,23 +261,64 @@ python -m sglang.launch_server \
  --kt-max-deferred-experts-per-token 2
```

+---
+
+#### Option B: LLAMAFILE Backend (GGUF)
+
+For universal CPUs (no AMX required), using pre-quantized GGUF weights directly.
+
+**Step 1: Download GPU weights (original model)**
+
+```bash
+pip install huggingface-hub
+
+huggingface-cli download Qwen/Qwen3-30B-A3B --local-dir /mnt/data/models/Qwen3-30B-A3B
+```
+
+**Step 2: Download CPU weights (GGUF format)**
+
+```bash
+huggingface-cli download Qwen/Qwen3-30B-A3B-GGUF Qwen3-30B-A3B-Q4_K_M.gguf \
+  --local-dir /mnt/data/models/Qwen3-30B-A3B-Q4_K_M
+```
+
+**Step 3: Launch SGLang server**
+
+```bash
+python -m sglang.launch_server \
+  --host 0.0.0.0 \
+  --port 8000 \
+  --model /mnt/data/models/Qwen3-30B-A3B \
+  --trust-remote-code \
+  --mem-fraction-static 0.92 \
+  --chunked-prefill-size 4096 \
+  --served-model-name Qwen3-30B-A3B \
+  --enable-mixed-chunk \
+  --kt-method LLAMAFILE \
+  --kt-weight-path /mnt/data/models/Qwen3-30B-A3B-Q4_K_M \
+  --kt-cpuinfer 64 \
+  --kt-threadpool-count 2 \
+  --kt-num-gpu-experts 32 \
+  --kt-max-deferred-experts-per-token 2
+```
+
### KT-Kernel Parameters

| Parameter | Description | Example Value |
|-----------|-------------|---------------|
-| `--kt-method` | CPU inference backend method | `AMXINT4`, `AMXINT8`, or `LLAMAFILE` (preview) |
+| `--kt-method` | CPU inference backend method | `AMXINT4`, `AMXINT8`, or `LLAMAFILE` |
| `--kt-weight-path` | Path to quantized CPU weights | `/path/to/cpu-weights` |
| `--kt-cpuinfer` | Number of CPU inference threads | `64` (adjust based on CPU cores) |
| `--kt-threadpool-count` | Number of thread pools for parallel execution | `2` (typically 1-4) |
| `--kt-num-gpu-experts` | Number of experts to keep on GPU | `32` (remaining experts go to CPU) |
-| `--kt-max-deferred-experts-per-token` | Number of experts per token to defer for pipelined execution | `2` (0 to disable, 1-2 recommended) |
+| `--kt-max-deferred-experts-per-token` | Number of experts per token to defer for pipelined execution | `2` (0 to disable, 1-4 recommended) |

**Parameter Guidelines:**

- **`kt-method`**: Choose based on your CPU and weight format:
  - `AMXINT4`: Best performance on AMX CPUs with INT4 quantized weights (may cause a significant accuracy drop for some models, e.g., Qwen3-30B-A3B)
  - `AMXINT8`: Higher accuracy with INT8 quantized weights on AMX CPUs
-  - `LLAMAFILE`: Preview support for GGUF format (not fully complete)
+  - `LLAMAFILE`: GGUF-based backend

- **`kt-cpuinfer`**: Set to the number of **physical CPU cores** (not hyperthreads).
  - Check physical cores: `lscpu | grep -E "^CPU\(s\)|Thread\(s\) per core"`
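To turn these guidelines into concrete numbers for your own machine, standard Linux/NVIDIA tooling is enough (adjust if your environment differs):

```bash
# Physical core count (unique core/socket pairs) -> --kt-cpuinfer
lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l

# NUMA node count -> --kt-threadpool-count
lscpu | grep 'NUMA node(s)'

# Total and free GPU memory -> rough budget for --kt-num-gpu-experts
nvidia-smi --query-gpu=memory.total,memory.free --format=csv
```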
@@ -282,8 +340,8 @@ python -m sglang.launch_server \

- **`kt-max-deferred-experts-per-token`**: Enables pipelined execution:
  - `0`: Synchronous execution (simpler, higher latency)
-  - `1-2`: Deferred execution (better latency, requires tuning) - recommended
-  - `3-4`: Higher deferred count (possible but rarely beneficial)
+  - `1-4`: Deferred execution (recommended range; good latency/quality balance, requires tuning)
+  - `5-7`: Highest latency reduction but may introduce noticeable accuracy loss; use with care

## Direct Python API Usage

@@ -304,7 +362,7 @@ wrapper = KTMoEWrapper(
    threadpool_count=2,
    weight_path="/path/to/weights",
    chunked_prefill_size=512,
-    method="AMXINT4"  # Options: "AMXINT4", "AMXINT8", "LLAMAFILE" (preview)
+    method="AMXINT4"  # Options: "AMXINT4", "AMXINT8", "LLAMAFILE"
)

# Load weights (from disk - pre-quantized)
@@ -442,11 +500,7 @@ sudo make install

## Weight Quantization

-KT-Kernel provides weight quantization tools for CPU-GPU hybrid inference (e.g., integrating with SGLang). Both tools work together to enable heterogeneous expert placement across CPUs and GPUs.
-
-### CPU Weights (for "cold" experts on CPU)
-
-Quantize weights to INT4/INT8 format optimized for AMX inference:
+For AMX backends (`AMXINT4` / `AMXINT8`), CPU-side experts must be converted to AMX-friendly INT4/INT8 format using the provided script:

```bash
python scripts/convert_cpu_weights.py \
@@ -458,40 +512,33 @@ python scripts/convert_cpu_weights.py \

**Supported formats:** FP8, FP16, BF16 → INT4/INT8

-### GPU Weights (for "hot" experts on GPU)
-
-Apply GPTQ quantization to model weights:
-
-```bash
-# Install additional dependencies first
-pip install accelerate transformers llmcompressor datasets
-
-# Quantize GPU weights
-python scripts/convert_gpu_weights.py \
-  --model_id /path/to/model \
-  --output_dir /path/to/output \
-  --quant_type W4A16
-```
-
-**Supported types:** W4A16 (GPTQ4), W8A16 (GPTQ8)
+For LLAMAFILE backend (`LLAMAFILE`), CPU-side experts are loaded directly from **GGUF** weights. You do **not** need to run the AMX conversion script; instead, download a GGUF model from the web (e.g., a GGUF repo on Hugging Face) and point `weight_path` / SGLang `--kt-weight-path` (or `--model` when appropriate) to that GGUF directory. KT-Kernel supports multiple GGUF quantization types such as `Q4_K_M`, `Q4_K`, `Q5_K`, etc.

---

For detailed documentation, advanced options, and low-memory mode, see [scripts/README.md](scripts/README.md).

## Before Commit!
-your msg should match: Conventional Commits (https://www.conventionalcommits.org/) <br>and format your code before commit:
+
+Commit messages should follow the Conventional Commits specification: https://www.conventionalcommits.org/
+
+Please format your code before committing:
+
```shell
cmake -B build
cd build
make format
```
-and you may need a new clang-format at least 18, use this command in conda env:
+
+You may need a newer clang-format (at least version 18). In a conda environment:
+
```shell
conda install -c conda-forge clang-format=18
rm -rf build
```
-and you may need black for python format:
+
+It's also recommended to install black for Python code formatting:
+
```shell
conda install black
```
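As a reference point, the commit shown on this page follows that convention itself; an equivalent commit command would look like:

```shell
git commit -m "fix(llamafile): resolve deferred experts data race and update README"
```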
