
Commit 862139d

Merge branch 'main' into unbloat-pipeline-utilities
2 parents: fb9f540 + 041501a

67 files changed: +4011 additions, -4329 deletions

.github/workflows/pr_modular_tests.yml

Lines changed: 3 additions & 2 deletions
@@ -110,8 +110,9 @@ jobs:
       run: |
         python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
         python -m uv pip install -e [quality,test]
-        pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
-        pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
+        # Stopping this update temporarily until the Hub RC is fully shipped and integrated.
+        # pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+        # pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
 
     - name: Environment
       run: |

.github/workflows/pr_tests.yml

Lines changed: 7 additions & 5 deletions
@@ -116,8 +116,9 @@ jobs:
       run: |
         python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
         python -m uv pip install -e [quality,test]
-        pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
-        pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
+        # Stopping this update temporarily until the Hub RC is fully shipped and integrated.
+        # pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+        # pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
 
     - name: Environment
       run: |
@@ -253,9 +254,10 @@
         python -m uv pip install -e [quality,test]
         # TODO (sayakpaul, DN6): revisit `--no-deps`
         python -m pip install -U peft@git+https://github.com/huggingface/peft.git --no-deps
-        python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
-        python -m uv pip install -U tokenizers
-        pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
+        # Stopping this update temporarily until the Hub RC is fully shipped and integrated.
+        # python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+        # python -m uv pip install -U tokenizers
+        # pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
 
     - name: Environment
       run: |

.github/workflows/pr_tests_gpu.yml

Lines changed: 8 additions & 5 deletions
@@ -132,8 +132,9 @@ jobs:
       run: |
         python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
         python -m uv pip install -e [quality,test]
-        pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-        pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+        # Stopping this update temporarily until the Hub RC is fully shipped and integrated.
+        # pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
+        # pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
 
     - name: Environment
       run: |
@@ -203,8 +204,9 @@
         python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
         python -m uv pip install -e [quality,test]
         python -m uv pip install peft@git+https://github.com/huggingface/peft.git
-        pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-        pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+        # Stopping this update temporarily until the Hub RC is fully shipped and integrated.
+        # pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
+        # pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
 
     - name: Environment
       run: |
@@ -266,7 +268,8 @@
     - name: Install dependencies
       run: |
         python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-        pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+        # Stopping this update temporarily until the Hub RC is fully shipped and integrated.
+        # pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
         python -m uv pip install -e [quality,test,training]
 
     - name: Environment

docs/source/en/_toctree.yml

Lines changed: 7 additions & 5 deletions
@@ -23,11 +23,7 @@
   - local: using-diffusers/reusing_seeds
     title: Reproducibility
   - local: using-diffusers/schedulers
-    title: Load schedulers and models
-  - local: using-diffusers/models
-    title: Models
-  - local: using-diffusers/scheduler_features
-    title: Scheduler features
+    title: Schedulers
   - local: using-diffusers/other-formats
     title: Model files and layouts
   - local: using-diffusers/push_to_hub
@@ -68,10 +64,14 @@
     title: Accelerate inference
   - local: optimization/cache
     title: Caching
+  - local: optimization/attention_backends
+    title: Attention backends
   - local: optimization/memory
     title: Reduce memory usage
   - local: optimization/speed-memory-optims
     title: Compiling and offloading quantized models
+  - local: api/parallel
+    title: Parallel inference
   - title: Community optimizations
     sections:
     - local: optimization/pruna
@@ -82,6 +82,8 @@
     title: Token merging
   - local: optimization/deepcache
     title: DeepCache
+  - local: optimization/cache_dit
+    title: CacheDiT
   - local: optimization/tgate
     title: TGATE
   - local: optimization/xdit

docs/source/en/api/parallel.md

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+<!-- Copyright 2025 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License. -->
+
+# Parallelism
+
+Parallelism strategies speed up diffusion transformers by distributing computation across multiple devices, allowing for faster inference and training.
+
+## ParallelConfig
+
+[[autodoc]] ParallelConfig
+
+## ContextParallelConfig
+
+[[autodoc]] ContextParallelConfig
+
+[[autodoc]] hooks.apply_context_parallel
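
The new page documents the config classes but not how they fit together. As a rough usage sketch, assuming a `torchrun` launch and assuming that a `ContextParallelConfig` is applied through the transformer's `enable_parallelism` method (neither name is spelled out in this commit; verify both against the generated API reference):

```py
# Hedged sketch -- assumes `enable_parallelism` and `ring_degree` exist as shown.
# Launch with: torchrun --nproc-per-node=2 run_cp.py
import torch
from diffusers import ContextParallelConfig, QwenImagePipeline

torch.distributed.init_process_group(backend="nccl")
rank = torch.distributed.get_rank()
device = torch.device("cuda", rank % torch.cuda.device_count())
torch.cuda.set_device(device)

pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to(device)
# Shard attention over the sequence dimension across a ring of 2 devices.
pipeline.transformer.enable_parallelism(
    config=ContextParallelConfig(ring_degree=2)
)

image = pipeline("a photo of a cat surfing at golden hour").images[0]
if rank == 0:
    image.save("output.png")

torch.distributed.destroy_process_group()
```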

docs/source/en/api/pipelines/qwenimage.md

Lines changed: 33 additions & 1 deletion
@@ -26,6 +26,7 @@ Qwen-Image comes in the following variants:
 |:----------:|:--------:|
 | Qwen-Image | [`Qwen/Qwen-Image`](https://huggingface.co/Qwen/Qwen-Image) |
 | Qwen-Image-Edit | [`Qwen/Qwen-Image-Edit`](https://huggingface.co/Qwen/Qwen-Image-Edit) |
+| Qwen-Image-Edit Plus | [`Qwen/Qwen-Image-Edit-2509`](https://huggingface.co/Qwen/Qwen-Image-Edit-2509) |
 
 <Tip>
 
@@ -96,6 +97,29 @@ The `guidance_scale` parameter in the pipeline is there to support future guidan
 
 </Tip>
 
+## Multi-image reference with QwenImageEditPlusPipeline
+
+With [`QwenImageEditPlusPipeline`], one can provide multiple reference images as input.
+
+```py
+import torch
+from PIL import Image
+from diffusers import QwenImageEditPlusPipeline
+from diffusers.utils import load_image
+
+pipe = QwenImageEditPlusPipeline.from_pretrained(
+    "Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16
+).to("cuda")
+
+image_1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/grumpy.jpg")
+image_2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peng.png")
+image = pipe(
+    image=[image_1, image_2],
+    prompt='put the penguin and the cat at a game show called "Qwen Edit Plus Games"',
+    num_inference_steps=50
+).images[0]
+```
+
 ## QwenImagePipeline
 
 [[autodoc]] QwenImagePipeline
@@ -126,7 +150,15 @@ The `guidance_scale` parameter in the pipeline is there to support future guidan
 - all
 - __call__
 
-## QwenImaggeControlNetPipeline
+## QwenImageControlNetPipeline
+
+[[autodoc]] QwenImageControlNetPipeline
+- all
+- __call__
+
+## QwenImageEditPlusPipeline
+
+[[autodoc]] QwenImageEditPlusPipeline
 - all
 - __call__
 

docs/source/en/optimization/attention_backends.md

Lines changed: 114 additions & 0 deletions
@@ -0,0 +1,114 @@
+<!-- Copyright 2025 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License. -->
+
+# Attention backends
+
+> [!NOTE]
+> The attention dispatcher is an experimental feature. Please open an issue if you have any feedback or encounter any problems.
+
+Diffusers provides several optimized attention algorithms that are more memory and computationally efficient through its *attention dispatcher*. The dispatcher acts as a router for managing and switching between different attention implementations and provides a unified interface for interacting with them.
+
+Refer to the table below for an overview of the available attention families and to the [Available backends](#available-backends) section for a more complete list.
+
+| attention family | main feature |
+|---|---|
+| FlashAttention | minimizes memory reads/writes through tiling and recomputation |
+| SageAttention | quantizes attention to int8 |
+| PyTorch native | built-in PyTorch implementation using [scaled_dot_product_attention](./fp16#scaled-dot-product-attention) |
+| xFormers | memory-efficient attention with support for various attention kernels |
+
+This guide will show you how to set and use the different attention backends.
+
+## set_attention_backend
+
+The [`~ModelMixin.set_attention_backend`] method iterates through all the modules in the model and sets the appropriate attention backend to use. The attention backend setting persists until [`~ModelMixin.reset_attention_backend`] is called.
+
+The example below demonstrates how to enable the `_flash_3_hub` implementation for FlashAttention-3 from the [kernel](https://github.com/huggingface/kernels) library, which allows you to instantly use optimized compute kernels from the Hub without requiring any setup.
+
+> [!NOTE]
+> FlashAttention-3 is not supported on non-Hopper architectures; in that case, use FlashAttention with `set_attention_backend("flash")`.
+
+```py
+import torch
+from diffusers import QwenImagePipeline
+
+pipeline = QwenImagePipeline.from_pretrained(
+    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
+)
+pipeline.transformer.set_attention_backend("_flash_3_hub")
+
+prompt = """
+cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
+highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
+"""
+pipeline(prompt).images[0]
+```
+
+To restore the default attention backend, call [`~ModelMixin.reset_attention_backend`].
+
+```py
+pipeline.transformer.reset_attention_backend()
+```
+
+## attention_backend context manager
+
+The [attention_backend](https://github.com/huggingface/diffusers/blob/5e181eddfe7e44c1444a2511b0d8e21d177850a0/src/diffusers/models/attention_dispatch.py#L225) context manager temporarily sets an attention backend for a model within the context. Outside the context, the default attention (PyTorch's native scaled dot product attention) is used. This is useful if you want to use different backends for different parts of a pipeline or if you want to test the different backends.
+
+```py
+import torch
+from diffusers import QwenImagePipeline
+from diffusers.models.attention_dispatch import attention_backend  # module linked above
+
+pipeline = QwenImagePipeline.from_pretrained(
+    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
+)
+prompt = """
+cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
+highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
+"""
+
+with attention_backend("_flash_3_hub"):
+    image = pipeline(prompt).images[0]
+```
+
+> [!TIP]
+> Most attention backends support `torch.compile` without graph breaks and can be used to further speed up inference.
+
+## Available backends
+
+Refer to the table below for a complete list of available attention backends and their variants.
+
+<details>
+<summary>Expand</summary>
+
+| Backend Name | Family | Description |
+|--------------|--------|-------------|
+| `native` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | Default backend using PyTorch's scaled_dot_product_attention |
+| `flex` | [FlexAttention](https://docs.pytorch.org/docs/stable/nn.attention.flex_attention.html#module-torch.nn.attention.flex_attention) | PyTorch FlexAttention implementation |
+| `_native_cudnn` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | CuDNN-optimized attention |
+| `_native_efficient` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | Memory-efficient attention |
+| `_native_flash` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | PyTorch's FlashAttention |
+| `_native_math` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | Math-based attention (fallback) |
+| `_native_npu` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | NPU-optimized attention |
+| `_native_xla` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | XLA-optimized attention |
+| `flash` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | FlashAttention-2 |
+| `flash_varlen` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | Variable length FlashAttention |
+| `_flash_3` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | FlashAttention-3 |
+| `_flash_varlen_3` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | Variable length FlashAttention-3 |
+| `_flash_3_hub` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | FlashAttention-3 from kernels |
+| `sage` | [SageAttention](https://github.com/thu-ml/SageAttention) | Quantized attention (INT8 QK) |
+| `sage_varlen` | [SageAttention](https://github.com/thu-ml/SageAttention) | Variable length SageAttention |
+| `_sage_qk_int8_pv_fp8_cuda` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP8 PV (CUDA) |
+| `_sage_qk_int8_pv_fp8_cuda_sm90` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP8 PV (SM90) |
+| `_sage_qk_int8_pv_fp16_cuda` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP16 PV (CUDA) |
+| `_sage_qk_int8_pv_fp16_triton` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP16 PV (Triton) |
+| `xformers` | [xFormers](https://github.com/facebookresearch/xformers) | Memory-efficient attention |
+
+</details>
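
The TIP in the new doc mentions `torch.compile` compatibility without showing it. A minimal sketch, assuming the `flash` backend's dependency is installed (`pip install flash-attn`); the compile call itself is plain PyTorch, not a diffusers-specific API:

```py
# Minimal sketch -- assumes flash-attn is installed for the `flash` backend.
import torch
from diffusers import QwenImagePipeline

pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
pipeline.transformer.set_attention_backend("flash")
# fullgraph=True raises on graph breaks, confirming the backend compiles cleanly.
pipeline.transformer = torch.compile(pipeline.transformer, fullgraph=True)

image = pipeline("cinematic film still of a cat in Palm Springs").images[0]
```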
