Fix/Improve vllm PTQ and Support multi-node with ray #484
Conversation
@mxinO does this maintain the support for non-ray + vLLM?
Sure, it still works for non-ray.
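For reference, the non-Ray path is just vLLM's regular single-node executor. A rough illustration only (generic vLLM usage with a hypothetical model name, not this PR's exact entry point):

```python
from vllm import LLM

# Non-Ray case: on a single node vLLM uses its multiprocessing executor,
# so no Ray cluster is required. Shown only to illustrate the non-Ray
# path discussed above; the model name is hypothetical.
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    tensor_parallel_size=2,
    distributed_executor_backend="mp",  # explicit; also the single-node default
)
```

The diff excerpt below, for instance, already guards the rank check with `torch.distributed.is_initialized()`, so it also runs without a distributed setup.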
```python
model.load_state_dict(current_state_dict)
torch.distributed.barrier()

if amax_file_path is None:
    # Sync amax across TP can be done here if needed
    pass
    # for name, buffer in model.named_buffers():
    #     if name.endswith("_amax"):
    #         print("syncing amax across TP for", name)
    #         torch.distributed.all_reduce(
    #             buffer, op=torch.distributed.ReduceOp.MAX, group=get_tp_group().device_group
    #         )
    # torch.distributed.barrier()

# Print the quantization summary once (rank 0 only when distributed).
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
    mtq.print_quant_summary(model)

# Fold the quantized weights and verify that every weight quantizer is now disabled.
mtq.fold_weight(model)
for name, module in model.named_modules():
    if name.endswith("weight_quantizer"):
        assert not module.is_enabled, f"quantizer {name} is still enabled"
```
Do we need to do this under the disable_compilation context?
I didn't find any issues here without disable_compilation.
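For reference, if it ever turns out to be needed, the fold/verify step could be wrapped the same way as calibration. A minimal sketch, assuming `disable_compilation(model)` is the context manager already used elsewhere in this example (its exact signature is an assumption):

```python
# Hypothetical: run the fold/verify step with vLLM compilation disabled,
# assuming the same disable_compilation(model) context manager used for
# calibration in this example.
with disable_compilation(model):
    mtq.fold_weight(model)
    for name, module in model.named_modules():
        if name.endswith("weight_quantizer"):
            assert not module.is_enabled, f"quantizer {name} is still enabled"
```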
Sorry, messed up the sign-offs.
Codecov Report

❌ Patch coverage is

```
@@           Coverage Diff            @@
##             main     #484     +/-   ##
=========================================
+ Coverage   73.39%   74.37%    +0.98%
=========================================
  Files         180      182        +2
  Lines       18138    18219       +81
=========================================
+ Hits        13312    13550      +238
+ Misses       4826     4669      -157
```

☔ View full report in Codecov by Sentry.
What does this PR do?
Type of change: Bug fix
Overview:
Fix or improve the vLLM PTQ.
SharedFusedMoE support.
Usage
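No snippet was included here; as a rough illustration of the multi-node case, vLLM can be pointed at a Ray cluster via its distributed executor backend (generic vLLM usage with a hypothetical model name, not necessarily the exact entry point added by this PR):

```python
# Rough illustration of multi-node vLLM via Ray, not this PR's exact API.
# Start Ray first, e.g.:
#   head node:    ray start --head --port=6379
#   worker nodes: ray start --address=<head-ip>:6379
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # hypothetical model name
    tensor_parallel_size=8,
    pipeline_parallel_size=2,                   # spans the two nodes
    distributed_executor_backend="ray",         # run workers on the Ray cluster
)
```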
Testing
Tested with the latest vLLM.
Additional Information
vLLM >0.11.0 changed the low-level API significantly. Some of these changes need to be removed once vLLM <=0.11.0 is outdated.
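Until the older releases are dropped, a version guard is one way to keep both code paths alive. A minimal sketch only; the concrete symbols that moved in newer vLLM are not named in this PR description:

```python
# Hypothetical version guard; the actual low-level APIs that changed after
# vllm 0.11.0 are not listed here.
from packaging.version import Version
import vllm

VLLM_NEW_API = Version(vllm.__version__) > Version("0.11.0")

if VLLM_NEW_API:
    ...  # new low-level API path
else:
    ...  # legacy path, to be deleted once vllm <= 0.11.0 is outdated
```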