Add a naive fix for CDI (#1481) #1482
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: wokalski. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
Welcome @wokalski! It looks like this is your first PR to Project-HAMi/HAMi 🎉
Summary of Changes

Hello @wokalski, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request provides an initial, acknowledged as 'naive', solution to enhance support for the Container Device Interface (CDI) for NVIDIA devices. The core change involves adjusting how environment variables are formatted, specifically by adding a 'nvidia.com/gpu=' prefix to device ID strings. This modification ensures that CDI-enabled container runtimes can correctly interpret and utilize the NVIDIA devices, addressing a reported bug and improving the user experience for those leveraging CDI.
Code Review
This pull request attempts to fix CDI support by modifying the NVIDIA_VISIBLE_DEVICES environment variable. While the intent is good, the implementation is incorrect and breaks multi-GPU allocations by creating a malformed value for the environment variable. The correct way to enable CDI support is through CDI annotations, which is already supported by the device plugin and can be enabled via configuration (device-list-strategy=cdi-annotations). The NVIDIA_VISIBLE_DEVICES environment variable is for the legacy path and should contain a simple list of device IDs. I've added a specific comment to revert the change and explain the correct mechanism for CDI.
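As a sketch of the configuration path the review describes (assuming the HAMi device plugin exposes the NVIDIA device plugin's `--device-list-strategy` flag; the container name and structure below are placeholders, not HAMi's actual manifest):

```yaml
# Hypothetical excerpt from a device plugin DaemonSet spec:
# switch device advertisement from the legacy env-var path
# to CDI annotations.
containers:
  - name: nvidia-device-plugin   # placeholder name
    args:
      - --device-list-strategy=cdi-annotations
```

With this strategy, the runtime receives CDI device references via pod annotations rather than via `NVIDIA_VISIBLE_DEVICES`.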
```diff
 func (plugin *NvidiaDevicePlugin) apiEnvs(envvar string, deviceIDs []string) map[string]string {
 	return map[string]string{
-		envvar: strings.Join(deviceIDs, ","),
+		envvar: "nvidia.com/gpu=" + strings.Join(deviceIDs, ","),
```
This change incorrectly formats the NVIDIA_VISIBLE_DEVICES environment variable for CDI support, especially for multi-GPU scenarios.
When multiple deviceIDs are provided (e.g., ["id1", "id2"]), this will produce NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=id1,id2. This is not a valid format. A list of CDI devices would be nvidia.com/gpu=id1,nvidia.com/gpu=id2.
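The per-device form described above can be sketched as follows (`cdiDeviceList` is a hypothetical helper for illustration, not code from the HAMi repository):

```go
package main

import (
	"fmt"
	"strings"
)

// cdiDeviceList prefixes each device ID individually, so every entry
// in the list is a fully-qualified CDI device name. Contrast with the
// PR's change, which prefixes only the joined string once.
func cdiDeviceList(deviceIDs []string) string {
	qualified := make([]string, 0, len(deviceIDs))
	for _, id := range deviceIDs {
		qualified = append(qualified, "nvidia.com/gpu="+id)
	}
	return strings.Join(qualified, ",")
}

func main() {
	fmt.Println(cdiDeviceList([]string{"id1", "id2"}))
	// nvidia.com/gpu=id1,nvidia.com/gpu=id2
}
```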
More importantly, NVIDIA_VISIBLE_DEVICES is part of the legacy integration with the NVIDIA container toolkit and should contain a comma-separated list of device UUIDs or indices (e.g., id1,id2).
CDI support is correctly handled via CDI annotations, which are generated by the getAllocateResponseForCDI function and enabled when the device-list-strategy includes cdi-annotations. The container runtime then uses these annotations to make devices available inside the container.
Therefore, this change is not necessary and breaks the existing environment variable-based device selection. The original implementation was correct.
Suggested change:

```diff
-		envvar: "nvidia.com/gpu=" + strings.Join(deviceIDs, ","),
+		envvar: strings.Join(deviceIDs, ","),
```
Nah buddy, you're wrong.
Signed-off-by: Wojtek Czekalski <[email protected]>
Force-pushed: 2df5f1c to b2dca1e
Could you sign off your commit to pass the DCO check?
Codecov Report: ✅ All modified and coverable lines are covered by tests.
I have signed it.
My PR was just a dummy proof of concept, but I found a solution for my case: I can reconfigure the NVIDIA Container Toolkit. That said, I believe it should be solved in HAMi.
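For reference, the toolkit-side workaround mentioned above would typically be a change to the NVIDIA Container Toolkit configuration (a sketch based on the toolkit's standard config layout; whether this resolves the issue depends on the specific runtime setup):

```toml
# /etc/nvidia-container-runtime/config.toml (excerpt)
[nvidia-container-runtime]
# "auto" is the default; forcing "cdi" makes the runtime resolve
# devices via CDI specs instead of the legacy hook path.
mode = "cdi"
```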
What type of PR is this?
/kind bug
What this PR does / why we need it:
This is a naive fix for CDI support.
Which issue(s) this PR fixes:
Fixes #1481
Special notes for your reviewer:
I know this is wrong; I don't know how you determine CDI vs. the legacy container toolkit. I am happy to adjust it.
Does this PR introduce a user-facing change?:
Yes. The goal is to properly support users who use CDI.