Add LoRA extraction, verification, and comparison scripts #865
base: main
Conversation
Summary of Changes
Hello @ShreejithSG, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a comprehensive set of utility scripts under scripts/lora_extraction to facilitate the management and validation of LoRA adapters within FastVideo models, specifically those built on the Wan 2.2 TI2V architecture. The scripts enable users to extract LoRA weights, verify their numerical accuracy, and perform perceptual comparisons of model outputs, providing essential tools for researchers and developers working with fine-tuned diffusion models.
Code Review
This pull request introduces a valuable set of scripts for LoRA extraction, verification, and comparison. The implementation is generally sound and serves its purpose well. My review focuses on enhancing the scripts' robustness, configurability, and adherence to Python best practices. The main suggestions include wrapping script logic in main functions guarded by if __name__ == "__main__":, using argparse for configuration instead of hardcoded constants, improving error handling with more specific exception catching, and addressing potential runtime errors. I have also noted a discrepancy between the documentation and script behavior for resuming extraction.
```python
# Configuration
BASE_MODEL = "Wan-AI/Wan2.2-TI2V-5B-Diffusers"
FINETUNED_MODEL = "FastVideo/FastWan2.2-TI2V-5B-FullAttn-Diffusers"
OUTPUT_PATH = "fastwan2.2_transformer_lora.pt"
CHECKPOINT_PATH = "lora_checkpoint.pt"
RANK = 16
```
The script's logic is executed at the module's top level. This is not good practice: it prevents the script from being imported without running its code and makes it hard to reuse. All script logic should be encapsulated in a main function called from an if __name__ == '__main__': block. That would also be a natural place to introduce argparse for configuration parameters such as model names, paths, and rank instead of relying on global constants, and it would resolve the discrepancy with the README.md regarding the --resume flag.
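For concreteness, here is a minimal sketch of that suggestion, reusing the constants from the diff above as argparse defaults. The flag names, and in particular the behaviour of --resume, are placeholders for illustration rather than the script's actual CLI.

```python
# Sketch only: wrap the script logic in main() and parse configuration with
# argparse instead of module-level constants. Flag names are hypothetical.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Extract a LoRA adapter from a finetuned checkpoint.")
    parser.add_argument("--base-model", default="Wan-AI/Wan2.2-TI2V-5B-Diffusers")
    parser.add_argument("--finetuned-model",
                        default="FastVideo/FastWan2.2-TI2V-5B-FullAttn-Diffusers")
    parser.add_argument("--output-path", default="fastwan2.2_transformer_lora.pt")
    parser.add_argument("--checkpoint-path", default="lora_checkpoint.pt")
    parser.add_argument("--rank", type=int, default=16)
    parser.add_argument("--resume", action="store_true",
                        help="Resume extraction from --checkpoint-path if it exists.")
    args = parser.parse_args()
    # ... call the extraction logic with args instead of global constants ...


if __name__ == "__main__":
    main()
```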
@SolitaryThinker, I’ve raised a PR adding the LoRA extraction, verification, and comparison scripts under scripts/lora_extraction.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Description:
This PR adds a set of scripts under scripts/lora_extraction for working with LoRA adapters in FastVideo models based on Wan 2.2 TI2V: extracting LoRA weights from a finetuned checkpoint, verifying their numerical accuracy, and performing perceptual comparisons of model outputs.
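As background for reviewers, a common recipe for obtaining such an adapter is to factor the difference between the finetuned and base weight matrices with a truncated SVD at the chosen rank, and the per-layer verification is then an MSE between the finetuned weight and the re-applied base plus the low-rank product. The sketch below illustrates that recipe; it is not the PR's exact implementation, and the shapes and names are placeholders.

```python
# Illustrative rank-r LoRA extraction from a weight delta, plus the kind of
# per-layer MSE check used for verification. Not the PR's exact code.
import torch


def extract_lora(w_base: torch.Tensor, w_finetuned: torch.Tensor, rank: int = 16):
    """Factor (w_finetuned - w_base) into low-rank factors so that delta ≈ B @ A."""
    delta = (w_finetuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]   # (out_features, rank), singular values folded in
    a = vh[:rank, :]             # (rank, in_features)
    return a, b


def layer_mse(w_base, w_finetuned, a, b) -> float:
    """MSE between the finetuned weight and the re-applied base + B @ A."""
    reconstructed = w_base.float() + b @ a
    return torch.mean((reconstructed - w_finetuned.float()) ** 2).item()


# Quick self-check on a synthetic layer whose update really is low-rank:
w0 = torch.randn(512, 512)
w1 = w0 + 1e-3 * torch.randn(512, 16) @ torch.randn(16, 512)
a, b = extract_lora(w0, w1, rank=16)
print(f"layer MSE: {layer_mse(w0, w1, a, b):.2e}")  # near zero for a rank-16 update
```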
Notes:
- Layer-level verification shows near-zero MSE (≈3.6e-08) between the original and re-applied weights.
- Preliminary perceptual checks show image-level differences (MSE ≈0.19, LPIPS ≈0.62, SSIM ≈0.128); the cause is under investigation. A sketch of how these metrics can be computed follows this list.
- The scripts are designed to be callable individually rather than as a single pipeline, aligning with the typical workflow (fused/finetuned checkpoint -> extract -> verify).
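For reference, here is a minimal sketch of how the quoted image-level metrics (MSE, LPIPS, SSIM) can be computed for a pair of output frames, assuming the lpips and scikit-image packages are installed. It is not the PR's comparison script, and the file paths are placeholders.

```python
# Sketch of per-frame comparison metrics: pixel MSE, LPIPS, and SSIM.
# Assumes the lpips and scikit-image packages; paths are placeholders.
import numpy as np
import torch
import lpips
from PIL import Image
from skimage.metrics import structural_similarity as ssim


def compare_frames(path_a: str, path_b: str) -> dict:
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32) / 255.0
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32) / 255.0

    mse = float(np.mean((a - b) ** 2))
    ssim_val = float(ssim(a, b, channel_axis=-1, data_range=1.0))

    # LPIPS expects NCHW tensors scaled to [-1, 1].
    loss_fn = lpips.LPIPS(net="alex")
    ta = torch.from_numpy(a).permute(2, 0, 1).unsqueeze(0) * 2 - 1
    tb = torch.from_numpy(b).permute(2, 0, 1).unsqueeze(0) * 2 - 1
    with torch.no_grad():
        lpips_val = float(loss_fn(ta, tb).item())

    return {"mse": mse, "lpips": lpips_val, "ssim": ssim_val}
```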