a-r-r-o-w (Contributor) commented on Jul 12, 2025

Code:

import torch
from diffusers import (
    AutoGuidance,
    SkipLayerGuidance,
    ClassifierFreeGuidance,
    SmoothedEnergyGuidance,
    SmoothedEnergyGuidanceConfig,
    AdaptiveProjectedGuidance,
    PerturbedAttentionGuidance,
    ClassifierFreeZeroStarGuidance,
    TangentialClassifierFreeGuidance,
    LayerSkipConfig,
)
from diffusers.modular_pipelines import SequentialPipelineBlocks, ComponentSpec, ComponentsManager
from diffusers.modular_pipelines.wan import TEXT2VIDEO_BLOCKS
from diffusers.utils.logging import set_verbosity_debug
from diffusers.utils import export_to_video

set_verbosity_debug()

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

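# Assemble the Wan text-to-video pipeline from the standard block preset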
blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2VIDEO_BLOCKS)

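# Initialize the pipeline and load each component from the model repo
# (bf16 text encoder/transformer, fp32 VAE)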
pipeline = blocks.init_pipeline()
pipeline.load_components(["text_encoder"], repo=model_id, subfolder="text_encoder", torch_dtype=torch.bfloat16)
pipeline.load_components(["tokenizer"], repo=model_id, subfolder="tokenizer")
pipeline.load_components(["scheduler"], repo=model_id, subfolder="scheduler")
pipeline.load_components(["transformer"], repo=model_id, subfolder="transformer", torch_dtype=torch.bfloat16)
pipeline.load_components(["vae"], repo=model_id, subfolder="vae", torch_dtype=torch.float32)
pipeline.to("cuda")

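# Try each available guider and render one video per guider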
for guider_cls in [
    AutoGuidance,
    SkipLayerGuidance,
    ClassifierFreeGuidance,
    SmoothedEnergyGuidance,
    AdaptiveProjectedGuidance,
    PerturbedAttentionGuidance,
    ClassifierFreeZeroStarGuidance,
    TangentialClassifierFreeGuidance,
]:
    print(f"Testing {guider_cls.__name__}...")
    
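    # Per-guider settings: each guider takes its own config object plus scale/stop arguments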
    kwargs = {"guidance_scale": 5.0}
    if guider_cls is AutoGuidance:
        kwargs.update({"auto_guidance_config": LayerSkipConfig(indices=[13], skip_attention=True, skip_ff=True, dropout=0.1)})
        kwargs.update({"stop": 0.2})
    elif guider_cls is SkipLayerGuidance:
        kwargs.update({"skip_layer_config": LayerSkipConfig(indices=[21], skip_attention=True, skip_ff=True)})
        kwargs.update({"skip_layer_guidance_scale": 1.5})
        kwargs.update({"skip_layer_guidance_stop": 0.3})
    elif guider_cls is SmoothedEnergyGuidance:
        kwargs.update({"seg_guidance_config": SmoothedEnergyGuidanceConfig(indices=[21])})
        kwargs.update({"seg_guidance_scale": 2.0})
        kwargs.update({"seg_guidance_stop": 0.4})
    elif guider_cls is PerturbedAttentionGuidance:
        kwargs.update({"perturbed_guidance_config": LayerSkipConfig(indices=[11, 12, 13], skip_attention=False, skip_attention_scores=True, skip_ff=False)})
        kwargs.update({"perturbed_guidance_scale": 2.0})
        kwargs.update({"perturbed_guidance_stop": 0.25})
    elif guider_cls is AdaptiveProjectedGuidance:
        kwargs["adaptive_projected_guidance_rescale"] = 40.0

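    # Swap the guider component on the pipeline, built from the config dict above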
    pipeline.update_components(
        guider=ComponentSpec(
            name="cfg",
            type_hint=guider_cls,
            config=kwargs,
            default_creation_method="from_config",
        )
    )

    prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."
    negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

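    # Run 30 inference steps and export the decoded frames at 16 fps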
    video = pipeline(prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=30, output="videos", generator=torch.Generator().manual_seed(0))[0]
    output_filename = f"output_guider_{guider_cls.__name__.lower()}.mp4"
    export_to_video(video, output_filename, fps=16)

Results:

| CFG | APG | TCFG |
| --- | --- | --- |
| output_guider_classifierfreeguidance.mp4 | output_guider_adaptiveprojectedguidance.mp4 | output_guider_tangentialclassifierfreeguidance.mp4 |

| CFG-Zero* | PAG | AutoGuidance |
| --- | --- | --- |
| output_guider_classifierfreezerostarguidance.mp4 | output_guider_perturbedattentionguidance.mp4 | output_guider_autoguidance.mp4 |

a-r-r-o-w marked this pull request as ready for review on July 23, 2025 at 01:44
a-r-r-o-w requested a review from yiyixuxu on July 23, 2025 at 01:58
yiyixuxu (Collaborator) commented on Jul 23, 2025

once PR #11944 is in, you can do this:

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2VIDEO_BLOCKS)
pipeline = blocks.init_pipeline(model_id)
pipeline.load_default_components()

for now, can you push to "diffusers-internal-dev/modular-wan" and manually update the modular_model_index.json?

blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2VIDEO_BLOCKS)
pipeline = blocks.init_pipeline()
pipeline.push_to_hub("diffusers-internal-dev/modular-wan")
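
As a rough sketch of the follow-up, assuming the repo-aware `init_pipeline` / `load_default_components` API from the snippet above lands with #11944, and that the repo name matches the push target:

# hypothetical: load the pipeline back from the pushed repo once #11944 is in
blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2VIDEO_BLOCKS)
pipeline = blocks.init_pipeline("diffusers-internal-dev/modular-wan")
pipeline.load_default_components()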

yiyixuxu (Collaborator) left a comment

thanks! looking great! I left small comments, let's fix and merge:)

a-r-r-o-w (Contributor, Author) commented

@yiyixuxu Addressed the review comments! The repo is here: https://huggingface.co/diffusers-internal-dev/modular-wan-t2v

yiyixuxu merged commit f36ba9f into main on Jul 23, 2025 (14 of 15 checks passed).
yiyixuxu deleted the modular-diffusers-wan branch on July 23, 2025 at 16:19.