
Commit 81bc534

Merge branch 'christopher-beckham/fix_flux_controlnet_modes' of github.com:christopher-beckham/diffusers into christopher-beckham/fix_flux_controlnet_modes
2 parents f8b6bb0 + d6d71fa commit 81bc534

File tree

11 files changed (+1699, −4 lines)


docs/source/en/api/pipelines/animatediff.md

Lines changed: 83 additions & 0 deletions
@@ -914,6 +914,89 @@ export_to_gif(frames, "animatelcm-motion-lora.gif")
</tr>
</table>

## Using FreeNoise

[FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling](https://arxiv.org/abs/2310.15169) by Haonan Qiu, Menghan Xia, Yong Zhang, Yingqing He, Xintao Wang, Ying Shan, Ziwei Liu.

FreeNoise is a sampling mechanism that can generate longer videos with short-video generation models by employing noise rescheduling, temporal attention over sliding windows, and weighted averaging of latent frames. It can also be used with multiple prompts to allow for interpolated video generations. More details are available in the paper.

The AnimateDiff pipelines that currently support FreeNoise are:

- [`AnimateDiffPipeline`]
- [`AnimateDiffControlNetPipeline`]
- [`AnimateDiffVideoToVideoPipeline`]
- [`AnimateDiffVideoToVideoControlNetPipeline`]

To use FreeNoise, add a single line to your inference code after loading the pipeline:

```diff
+ pipe.enable_free_noise()
```

After this, either a single prompt or a dictionary of integer-string pairs can be passed as the prompt. Each integer key is the frame index at which the influence of the corresponding prompt is maximal, and each frame index must map to a single string prompt. Prompts for intermediate frame indices that are not present in the dictionary are created by interpolating between the prompts of the surrounding keyframes. By default, simple linear interpolation is used, but you can customize this behaviour by passing a callback to the `prompt_interpolation_callback` parameter when enabling FreeNoise.

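The default linear behaviour can be pictured with a small standalone sketch. This is pure Python for illustration only, not the diffusers implementation, and `frame_interpolation_weights` is a hypothetical helper name:

```python
def frame_interpolation_weights(frame_prompts, num_frames):
    """For each frame index, return (prev_key, next_key, weight) where weight
    is the linear blend factor from the previous keyframe's prompt toward
    the next keyframe's prompt."""
    keys = sorted(frame_prompts)
    schedule = []
    for f in range(num_frames):
        # frames outside the keyframe range clamp to the nearest keyframe
        prev_k = max((k for k in keys if k <= f), default=keys[0])
        next_k = min((k for k in keys if k >= f), default=keys[-1])
        w = 0.0 if next_k == prev_k else (f - prev_k) / (next_k - prev_k)
        schedule.append((prev_k, next_k, w))
    return schedule

prompts = {0: "a caterpillar on a leaf", 40: "a cocoon on a leaf", 80: "a butterfly"}
schedule = frame_interpolation_weights(prompts, 81)
print(schedule[20])  # (0, 40, 0.5): halfway between the prompts at frames 0 and 40
```

In the real pipeline the blending happens on prompt embeddings rather than strings, but the keyframe-to-keyframe schedule is the same idea.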
Full example:

```python
import torch
from diffusers import AutoencoderKL, AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_video, load_image

# Load pipeline
dtype = torch.float16
motion_adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=dtype)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=dtype)

pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=motion_adapter, vae=vae, torch_dtype=dtype)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

pipe.load_lora_weights(
    "wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm_lora"
)
pipe.set_adapters(["lcm_lora"], [0.8])

# Enable FreeNoise for long prompt generation
pipe.enable_free_noise(context_length=16, context_stride=4)
pipe.to("cuda")

# Can be a single prompt, or a dictionary with frame timesteps
prompt = {
    0: "A caterpillar on a leaf, high quality, photorealistic",
    40: "A caterpillar transforming into a cocoon, on a leaf, near flowers, photorealistic",
    80: "A cocoon on a leaf, flowers in the background, photorealistic",
    120: "A cocoon maturing and a butterfly being born, flowers and leaves visible in the background, photorealistic",
    160: "A beautiful butterfly, vibrant colors, sitting on a leaf, flowers in the background, photorealistic",
    200: "A beautiful butterfly, flying away in a forest, photorealistic",
    240: "A cyberpunk butterfly, neon lights, glowing",
}
negative_prompt = "bad quality, worst quality, jpeg artifacts"

# Run inference
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=256,
    guidance_scale=2.5,
    num_inference_steps=10,
    generator=torch.Generator("cpu").manual_seed(0),
)

# Save video
frames = output.frames[0]
export_to_video(frames, "output.mp4", fps=16)
```
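The sliding-window mechanism configured by `context_length` and `context_stride` above can be illustrated conceptually: overlapping temporal windows are processed separately, and frames covered by several windows are averaged. The sketch below is a simplified NumPy illustration with hypothetical helper names (`sliding_windows`, `process_with_windows`), not the actual diffusers implementation:

```python
import numpy as np

def sliding_windows(num_frames, context_length, context_stride):
    """Start/end indices of overlapping temporal windows."""
    return [
        (start, start + context_length)
        for start in range(0, num_frames - context_length + 1, context_stride)
    ]

def process_with_windows(latents, window_fn, context_length=16, context_stride=4):
    """Apply window_fn to each overlapping window, then average overlaps
    by how many windows cover each frame."""
    num_frames = latents.shape[0]
    out = np.zeros_like(latents)
    counts = np.zeros((num_frames,) + (1,) * (latents.ndim - 1))
    for start, end in sliding_windows(num_frames, context_length, context_stride):
        out[start:end] += window_fn(latents[start:end])
        counts[start:end] += 1
    return out / counts

# With an identity window_fn, averaging the overlaps reconstructs the input,
# which shows the weighting is a proper average.
latents = np.random.randn(256, 4, 8, 8)  # (frames, channels, height, width)
recon = process_with_windows(latents, lambda w: w)
assert np.allclose(recon, latents)
```

With 256 frames, a context length of 16, and a stride of 4, each denoising step runs over 61 overlapping windows instead of one 256-frame attention pass.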

### FreeNoise memory savings

Since FreeNoise processes multiple frames together, there are parts of the model where the memory required exceeds that available on typical consumer GPUs. The main memory bottlenecks we identified are the spatial and temporal attention blocks, upsampling and downsampling blocks, resnet blocks, and feed-forward layers. Since most of these blocks operate effectively only on the channel/embedding dimension, one can perform chunked inference across the batch dimensions. The batch dimensions in AnimateDiff are either spatial (`[B x F, H x W, C]`) or temporal (`[B x H x W, F, C]`) in nature (this may seem counter-intuitive, but these batch dimensions are correct, because spatial blocks process across the `B x F` dimension while temporal blocks process across the `B x H x W` dimension). We introduce a `SplitInferenceModule` that makes it easier to chunk across any dimension and perform inference. This saves a lot of memory at the cost of longer inference time.

```diff
# Load pipeline and adapters
# ...
+ pipe.enable_free_noise_split_inference()
+ pipe.unet.enable_forward_chunking(16)
```
The `pipe.enable_free_noise_split_inference` method accepts two parameters: `spatial_split_size` (defaults to `256`) and `temporal_split_size` (defaults to `16`). These can be configured based on how much VRAM you have available. A lower split size results in lower memory usage but slower inference, whereas a larger split size results in faster inference at the cost of more memory.
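The chunking idea behind `SplitInferenceModule` can be sketched framework-agnostically: split the input along the batch dimension, run the wrapped module on each chunk, and concatenate the results. This is a simplified NumPy illustration with a hypothetical `split_inference` helper, not the actual diffusers module:

```python
import numpy as np

def split_inference(module, x, split_size, dim=0):
    """Run `module` on chunks of `x` along `dim` and concatenate the results.
    Peak memory scales with split_size instead of the full batch, at the
    cost of extra sequential calls."""
    n = x.shape[dim]
    slicer = [slice(None)] * x.ndim
    chunks = []
    for i in range(0, n, split_size):
        slicer[dim] = slice(i, i + split_size)
        chunks.append(module(x[tuple(slicer)]))
    return np.concatenate(chunks, axis=dim)

# A toy "feed-forward layer" that mixes only the channel dimension,
# so chunking the batch dimension does not change the result.
rng = np.random.default_rng(0)
weight = rng.standard_normal((8, 8))
feed_forward = lambda t: t @ weight

x = rng.standard_normal((64, 8))  # (batch, channels)
full = feed_forward(x)
chunked = split_inference(feed_forward, x, split_size=16)
assert np.allclose(full, chunked)
```

This equivalence is exactly why chunking is safe for the blocks listed above: they act per-element along the batch dimension, so splitting it is lossless.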

## Using `from_single_file` with the MotionAdapter

examples/controlnet/README_sd3.md

Lines changed: 152 additions & 0 deletions
@@ -0,0 +1,152 @@
# ControlNet training example for Stable Diffusion 3 (SD3)

The `train_controlnet_sd3.py` script shows how to implement the ControlNet training procedure and adapt it for [Stable Diffusion 3](https://arxiv.org/abs/2403.03206).

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/controlnet` folder and run

```bash
pip install -r requirements_sd3.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config
write_basic_config()
```

When running `accelerate config`, setting torch compile mode to True can give dramatic speedups.

## Circle filling dataset

The original dataset is hosted in the [ControlNet repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip). We re-uploaded it to be compatible with `datasets` [here](https://huggingface.co/datasets/fusing/fill50k). Note that `datasets` handles dataloading within the training script.
Please download the dataset and unzip it in the directory `fill50k` in the `examples/controlnet` folder.

## Training

First download the SD3 model from the [Hugging Face Hub](https://huggingface.co/stabilityai/stable-diffusion-3-medium). We will use it as a base model for the ControlNet training.
> [!NOTE]
> As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form, and accept the gate. Once you are in, you need to log in so that your system knows you've accepted the gate. Use the command below to log in:

```bash
huggingface-cli login
```

This will also allow us to push the trained model parameters to the Hugging Face Hub.

Our training examples use two test conditioning images. They can be downloaded by running

```sh
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png

wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png
```
Then run the following commands to train a ControlNet model.

```bash
export MODEL_DIR="stabilityai/stable-diffusion-3-medium-diffusers"
export OUTPUT_DIR="sd3-controlnet-out"

accelerate launch train_controlnet_sd3.py \
    --pretrained_model_name_or_path=$MODEL_DIR \
    --output_dir=$OUTPUT_DIR \
    --train_data_dir="fill50k" \
    --resolution=1024 \
    --learning_rate=1e-5 \
    --max_train_steps=15000 \
    --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
    --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
    --validation_steps=100 \
    --train_batch_size=1 \
    --gradient_accumulation_steps=4
```
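As a quick sanity check on these flags: the number of samples contributing to each optimizer step is the product of the per-device batch size, the gradient accumulation steps, and the number of processes. This generic calculation (the `effective_batch_size` helper is illustrative, not part of the training script) shows what the command above trains with:

```python
def effective_batch_size(train_batch_size, gradient_accumulation_steps, num_gpus=1):
    """Samples contributing to each optimizer step across all processes."""
    return train_batch_size * gradient_accumulation_steps * num_gpus

# The command above: per-device batch size 1, 4 accumulation steps, one GPU
print(effective_batch_size(1, 4, 1))  # -> 4
```

Raising `gradient_accumulation_steps` is the usual way to grow the effective batch size without increasing peak memory.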

To better track our training experiments, we use the `validation_image`, `validation_prompt`, and `validation_steps` flags to have the script run a few validation inference passes. This lets us qualitatively check whether training is progressing as expected.

Our experiments were conducted on a single 40GB A100 GPU.

### Inference

Once training is done, we can perform inference like so:

```python
from diffusers import StableDiffusion3ControlNetPipeline, SD3ControlNetModel
from diffusers.utils import load_image
import torch

base_model_path = "stabilityai/stable-diffusion-3-medium-diffusers"
controlnet_path = "sd3-controlnet-out/checkpoint-6500/controlnet"

controlnet = SD3ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet
)
pipe.to("cuda", torch.float16)

control_image = load_image("./conditioning_image_1.png").resize((1024, 1024))
prompt = "pale golden rod circle with old lace background"

# generate image
generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, generator=generator, control_image=control_image
).images[0]
image.save("./output.png")
```

## Notes

### GPU usage

SD3 is a large model and requires a lot of GPU memory.
We recommend using one GPU with at least 80GB of memory.
Make sure to use the right GPU when configuring the [accelerator](https://huggingface.co/docs/transformers/en/accelerate).

## Example results

#### After 500 steps with batch size 8

| conditioning image | pale golden rod circle with old lace background |
|:------------------:|:-----------------------------------------------:|
| ![conditioning image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png) | ![pale golden rod circle with old lace background](https://huggingface.co/datasets/DavyMorgan/sd3-controlnet-results/resolve/main/step-500.png) |

#### After 6500 steps with batch size 8

| conditioning image | pale golden rod circle with old lace background |
|:------------------:|:-----------------------------------------------:|
| ![conditioning image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png) | ![pale golden rod circle with old lace background](https://huggingface.co/datasets/DavyMorgan/sd3-controlnet-results/resolve/main/step-6500.png) |
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
accelerate>=0.16.0
torchvision
transformers>=4.25.1
ftfy
tensorboard
Jinja2
datasets
wandb

examples/controlnet/test_controlnet.py

Lines changed: 21 additions & 0 deletions
@@ -115,3 +115,24 @@ def test_controlnet_sdxl(self):
        run_command(self._launch_args + test_args)

        self.assertTrue(os.path.isfile(os.path.join(tmpdir, "diffusion_pytorch_model.safetensors")))


class ControlNetSD3(ExamplesTestsAccelerate):
    def test_controlnet_sd3(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                examples/controlnet/train_controlnet_sd3.py
                --pretrained_model_name_or_path=DavyMorgan/tiny-sd3-pipe
                --dataset_name=hf-internal-testing/fill10
                --output_dir={tmpdir}
                --resolution=64
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --controlnet_model_name_or_path=DavyMorgan/tiny-controlnet-sd3
                --max_train_steps=4
                --checkpointing_steps=2
                """.split()

            run_command(self._launch_args + test_args)

            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "diffusion_pytorch_model.safetensors")))
