dInfer is an efficient and extensible inference framework for dLLMs. As illustrated in the architecture below, it modularizes inference into four components: the model, the diffusion iteration manager, the decoder, and the KV-cache manager. It provides well-designed APIs for flexibly combining algorithms within each component, and it supports batched inference for improved throughput.
Figure: Overall Architecture of dInfer
dInfer supports multiple dLLM variants, including LLaDA and LLaDA-MoE.
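To make the modular design concrete, the sketch below shows how the four components could be composed into a generation loop. Every class and function name in it (`Decoder`, `KVCacheManager`, `IterationManager`, `generate`) is a hypothetical illustration of the composition pattern, not dInfer's actual API; see the quickstart guide for the real interfaces.

```python
# Conceptual sketch of the four-component modularization. All names here are
# hypothetical and for illustration only; they are NOT dInfer's actual API.
from typing import Protocol

class Decoder(Protocol):
    def commit(self, state, logits): ...          # choose which masked tokens to finalize

class KVCacheManager(Protocol):
    def refresh(self, state): ...                 # e.g. a vicinity-refresh policy

class IterationManager(Protocol):
    def init(self, prompt_ids, gen_len, block_length): ...
    def advance(self, state): ...
    def finished(self, state) -> bool: ...

def generate(model, it_mgr: IterationManager, decoder: Decoder,
             cache: KVCacheManager, prompt_ids, gen_len, block_length):
    """Drive diffusion iterations by composing the four pluggable components."""
    state = it_mgr.init(prompt_ids, gen_len, block_length)
    while not it_mgr.finished(state):
        logits = model(state.tokens)              # forward pass of the dLLM
        state = decoder.commit(state, logits)     # parallel decoding step
        cache.refresh(state)                      # keep cached KV states fresh
        state = it_mgr.advance(state)
    return state.tokens
```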
Algorithmic improvements:
- Soft diffusion iteration for smoother denoising
- Hierarchical and credit decoding for enhanced parallel decoding (a minimal sketch of confidence-gated parallel decoding follows this list)
- Vicinity refresh strategy for KV-cache management to mitigate cache staleness
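The benchmark commands later in this README also expose a simpler `threshold` parallel decoder (`--parallel_decoding threshold`). As a rough illustration of confidence-gated parallel decoding, here is a minimal sketch of a single threshold step; it is a simplified stand-in, not dInfer's implementation.

```python
# Minimal illustration of a confidence-threshold parallel decoding step for a
# masked diffusion model. Simplified stand-in, not dInfer's decoder.
import torch

def threshold_decode_step(logits, tokens, mask_id, threshold=0.9):
    """Commit every masked position whose top-1 probability exceeds `threshold`.

    logits: (seq_len, vocab_size) model outputs for the current block
    tokens: (seq_len,) current token ids, with `mask_id` at undecided positions
    """
    probs = torch.softmax(logits.float(), dim=-1)
    conf, pred = probs.max(dim=-1)             # per-position confidence and argmax token
    masked = tokens == mask_id
    commit = masked & (conf >= threshold)      # confident positions are decoded in parallel
    if masked.any() and not commit.any():
        # Guarantee progress: always commit the single most confident masked position.
        best = torch.where(masked, conf, torch.full_like(conf, -1.0)).argmax()
        commit[best] = True
    new_tokens = torch.where(commit, pred, tokens)
    return new_tokens, commit
```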
System-level optimizations:
- Tensor Parallelism (TP) and Expert Parallelism (EP) to maximize GPU utilization across batch sizes
- Dynamic batching support for improved throughput on multi-request workloads
- PyTorch compilation and NVIDIA CUDA Graphs for efficient kernel execution (see the sketch after this list)
- Loop unrolling mechanism to eliminate CUDA stream bubbles across diffusion iterations
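For the compilation point, the snippet below shows the standard PyTorch mechanism only: `torch.compile`'s `reduce-overhead` mode captures CUDA Graphs for static-shape calls, so repeated diffusion iterations replay a recorded graph instead of relaunching individual kernels. The tiny model and shapes are placeholders; dInfer's internal integration may differ.

```python
# Generic PyTorch sketch (requires a CUDA GPU): "reduce-overhead" mode enables
# CUDA Graph capture, so fixed-shape calls replay a recorded graph.
import torch

class TinyDenoiser(torch.nn.Module):
    def __init__(self, vocab=32000, dim=256):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.proj = torch.nn.Linear(dim, vocab)

    def forward(self, tokens):
        return self.proj(self.emb(tokens))

model = TinyDenoiser().cuda().bfloat16()
step = torch.compile(model, mode="reduce-overhead")   # compilation + CUDA Graphs

tokens = torch.randint(0, 32000, (1, 1024), device="cuda")
with torch.no_grad():
    for _ in range(4):        # static shapes across iterations -> graph replay
        logits = step(tokens)
```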
[2025/11/15] Added support for inference on block-diffusion LLMs (LLaDA2-mini and LLaDA2-flash).
[2025/10/10] Released the first version of the dInfer framework.
dInfer supports multiple diffusion language model variants with different architectures and sizes. Below are the HuggingFace model links and the corresponding implementation classes:
| Model | Size | Implementation | HuggingFace Link |
|---|---|---|---|
| LLaDA2.0-mini-preview | 16B | LLaDA2MoeModelLM | inclusionAI/LLaDA2.0-mini-preview |
| LLaDA2.0-flash-preview | 100B | LLaDA2MoeModelLM | inclusionAI/LLaDA2.0-flash-preview |
| LLaDA-MoE-7B-A1B-Base | 7B | LLaDAMoeModelLM | inclusionAI/LLaDA-MoE-7B-A1B-Base |
| LLaDA-MoE-7B-A1B-Instruct | 7B | LLaDAMoeModelLM | inclusionAI/LLaDA-MoE-7B-A1B-Instruct |
| LLaDA-8B-Base | 8B | LLaDAModelLM | GSAI-ML/LLaDA-8B-Base |
| LLaDA-8B-Instruct | 8B | LLaDAModelLM | GSAI-ML/LLaDA-8B-Instruct |
| LLaDA-1.5 | 8B | LLaDAModelLM | GSAI-ML/LLaDA-1.5 |
```bash
git clone https://github.com/inclusionAI/dInfer.git
cd dInfer
pip install .
```
```bash
pip install -U huggingface_hub hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1

# Download the Instruct checkpoint
hf download inclusionAI/LLaDA-MoE-7B-A1B-Instruct \
  --repo-type model \
  --local-dir /path/to/LLaDA-MoE-7B-A1B-Instruct
```
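Alternatively, the checkpoint can be fetched from Python with the standard `huggingface_hub` API (the local path below is a placeholder, as in the CLI example above):

```python
# Optional alternative to the `hf` CLI: download the checkpoint from Python.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="inclusionAI/LLaDA-MoE-7B-A1B-Instruct",
    local_dir="/path/to/LLaDA-MoE-7B-A1B-Instruct",  # placeholder path
)
```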
```bash
# Convert to FusedMoE
python -m tools.transfer \
  --input /path/to/LLaDA-MoE-7B-A1B-Instruct \
  --output /path/to/LLaDA-MoE-7B-A1B-Instruct-fused
```

```python
from dinfer.model import AutoModelForCausalLM
from transformers import AutoTokenizer

m = "/path/to/LLaDA-MoE-7B-A1B-Instruct-fused"
tok = AutoTokenizer.from_pretrained(m, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(m, trust_remote_code=True, torch_dtype="bfloat16")
```
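Continuing from the loading snippet, the prompt preparation below uses only the standard `transformers` tokenizer API and assumes the Instruct tokenizer ships a chat template; the prompt and device placement are illustrative, and the actual generation entry point is covered in the quickstart guide.

```python
# Illustrative prompt preparation (standard transformers API; assumes the
# Instruct tokenizer provides a chat template). Generation itself is done
# through dInfer's pipeline, described in the quickstart guide.
messages = [{"role": "user", "content": "Write a haiku about autumn."}]
input_ids = tok.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
model = model.to("cuda").eval()
```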
Benchmark (speed only)
- Measure throughput (TPS) only; predictions are saved under `--output_dir` with no automatic scoring.
- Example 1: Dataset profiling (LLaDA-MoE, threshold decoder, TP across 4 GPUs):

```bash
python benchmarks/benchmark_dataset.py \
  --model_name inclusionAI/LLaDA-MoE-7B-A1B-Instruct \
  --model_type llada_moe \
  --dataset dataset_path \
  --gen_len 1024 \
  --block_length 64 \
  --gpu 0,1,2,3 \
  --output_dir runs/llada_moe_threshold \
  --use_tp \
  --parallel_decoding threshold \
  --threshold 0.8 \
  --cache dual \
  --prefix_look 16 \
  --after_look 16 \
  --warmup_times 4 \
  --cont_weight 0.3
```
- Example 2: Dataset profiling (LLaDA2-flash, threshold decoder, TP across 4 GPUs):

```bash
python benchmarks/benchmark_dataset.py \
  --model_name inclusionAI/LLaDA2.0-flash-preview \
  --model_type llada2 \
  --dataset dataset_path \
  --gen_len 2048 \
  --block_length 32 \
  --gpu 0,1,2,3 \
  --output_dir runs/llada2_flash \
  --use_tp \
  --parallel_decoding threshold \
  --threshold 0.9 \
  --cache prefix \
  --use_bd
```

- Example 3: Single-sample profiling (LLaDA-8B-Instruct, threshold decoder, TP across 4 GPUs):

```bash
python benchmarks/benchmark.py \
  --model_name GSAI-ML/LLaDA-8B-Instruct \
  --model_type llada \
  --gen_len 2048 \
  --block_length 32 \
  --gpu 0,1,2,3 \
  --use_tp \
  --parallel_decoding threshold \
  --threshold 0.9 \
  --cache prefix
```
- Example 4: Single-sample profiling (LLaDA2-mini, threshold decoder, TP across 4 GPUs):

```bash
python benchmarks/benchmark.py \
  --model_name inclusionAI/LLaDA2.0-mini-preview \
  --model_type llada2 \
  --gen_len 2048 \
  --block_length 32 \
  --gpu 0,1,2,3 \
  --use_tp \
  --parallel_decoding threshold \
  --threshold 0.9 \
  --cache prefix \
  --use_bd
```
Evaluation (speed + accuracy)
- Built on `lm-eval-harness` to compute TPS and benchmark scores.
- Tasks provided:
  - `gsm8k_llada`: math reasoning.
  - `mbpp_sanitized_llada`: sanitized Python code generation.
- For more examples and comprehensive instructions, see our quickstart guide.
Performance:
- Over 1,100 TPS on HumanEval at batch size 1
- Averages 800+ TPS across six benchmarks on a single node with 8×H800 GPUs
Speedup comparisons:
- 10× faster than Fast-dLLM while maintaining accuracy
- LLaDA-MoE on dInfer is 2-3× faster than Qwen2.5-3B on vLLM, with comparable quality
- LLaDA2: supports at most 4-way TP (due to its 4 attention heads); LLaDA Dense/MoE models support up to 8-way TP
- Block Diffusion: not supported on LLaDA Dense/MoE models (use `--use_bd` with LLaDA2 only)
- Evaluation: `lm-eval` evaluations are currently configured for LLaDA-MoE only; support for LLaDA Dense/LLaDA2 will be added in the near future.
- WeChat Group
```bibtex
@article{dinfer,
  title={dInfer: An Efficient Inference Framework for Diffusion Language Models},
  author={Yuxin Ma and Lun Du and Lanning Wei and Kun Chen and Qian Xu and Kangyu Wang and Guofeng Feng and Guoshan Lu and Lin Liu and Xiaojing Qi and Xinyuan Zhang and Zhen Tao and Haibo Feng and Ziyun Jiang and Ying Xu and Zenan Huang and Yihong Zhuang and Haokai Xu and Jiaqi Hu and Zhenzhong Lan and Junbo Zhao and Jianguo Li and Da Zheng},
  year={2025},
  journal={arXiv preprint arXiv:2510.08666}
}
```
