[Deepseek] Fix OOM during DeepSeek R1 startup #30162
base: main
Conversation
Signed-off-by: Matthew Bonanni <[email protected]>
Code Review
This pull request addresses an out-of-memory (OOM) error during the startup of DeepSeek R1 models. The fix is achieved by making a large, unnecessary memory allocation conditional. This allocation, intended for worst-case memory profiling or CUDA graph capture, is now skipped during other dummy runs, such as the warmup phase before graph capture.
The changes are implemented by introducing an is_memory_profile flag in the ForwardContext and using it, along with the cudagraph_runtime_mode, to control the allocation in vllm/v1/attention/backends/mla/common.py.
My review of the changes indicates that the logic is sound and correctly targets the source of the OOM issue. The modifications are clean, well-contained, and effectively resolve the problem without introducing any apparent side effects. The code quality is good, and I have no further suggestions for improvement.
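The review above describes the gating in prose; the minimal, self-contained sketch below shows the shape of that logic. It is an illustration only: the `CUDAGraphMode` enum values, the `workspace_bytes` parameter, and the `maybe_allocate_workspace` helper are stand-ins, not the actual symbols in `vllm/v1/attention/backends/mla/common.py`.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class CUDAGraphMode(Enum):
    """Simplified stand-in for vLLM's CUDA graph runtime modes."""
    NONE = auto()   # eager execution, e.g. warmup dummy runs
    FULL = auto()   # CUDA graph capture / replay


@dataclass
class ForwardContext:
    # set dynamically for each forward pass
    cudagraph_runtime_mode: CUDAGraphMode = CUDAGraphMode.NONE
    # True during memory profiling, False otherwise
    is_memory_profile: bool = False


def maybe_allocate_workspace(ctx: ForwardContext, workspace_bytes: int) -> Optional[bytearray]:
    """Reserve the worst-case workspace only when it can actually matter:
    during memory profiling or CUDA graph capture. Other dummy runs skip
    the large allocation entirely."""
    if ctx.is_memory_profile or ctx.cudagraph_runtime_mode != CUDAGraphMode.NONE:
        return bytearray(workspace_bytes)  # stand-in for a large GPU buffer
    return None


# A plain dummy run: neither condition holds, so nothing is allocated.
assert maybe_allocate_workspace(ForwardContext(), 4 << 30) is None
```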
💡 Codex Review
Here are some automated review suggestions for this pull request.
Signed-off-by: Matthew Bonanni <[email protected]>
# set dynamically for each forward pass
# True during memory profiling, False otherwise
is_memory_profile: bool = False
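The hunk above adds the flag to ForwardContext. As a hypothetical illustration of the other half of the change, the sketch below shows how a model runner might set the flag only for the memory-profiling dummy run; the `set_forward_context` signature and the surrounding scaffolding are assumed for illustration and do not reproduce vLLM's actual API.

```python
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class ForwardContext:
    # True during memory profiling, False otherwise
    is_memory_profile: bool = False


_current_ctx: ForwardContext = ForwardContext()


@contextmanager
def set_forward_context(is_memory_profile: bool = False):
    """Install a per-forward-pass context; restore the previous one on exit."""
    global _current_ctx
    prev = _current_ctx
    _current_ctx = ForwardContext(is_memory_profile=is_memory_profile)
    try:
        yield _current_ctx
    finally:
        _current_ctx = prev


# Memory-profiling dummy run: the flag is set, so attention backends may
# reserve their worst-case workspace.
with set_forward_context(is_memory_profile=True):
    pass  # run the profiling forward pass here

# Any other dummy run (e.g. warmup around CUDA graph capture): the flag stays
# False and the large allocation is skipped.
with set_forward_context():
    pass
```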
Should we avoid adding too many things to the forward_context? It is becoming increasingly complicated and I am increasingly worried about this class getting more and more bloated. cc @WoosukKwon @youkaichao
zhuohan123 left a comment
Also seems like this is probably no longer needed with model runner v2?
Purpose
Starting up DeepSeek R1 DP8/EP on 8xH200 currently OOMs at the default
`--gpu-memory-utilization` (0.9). This PR prevents an unnecessary 4 GiB allocation during the post-CG-capture dummy run, allowing the server to start without reducing `--gpu-memory-utilization`. It still OOMs when prompted, though, so while this improves the situation and reflects the original intent of the code, it doesn't solve the problem.
Test Plan
Test Result
main: OOM during server startup
PR branch: no longer OOMs during startup