
Conversation

zhonglin6666

@zhonglin6666 zhonglin6666 commented Aug 29, 2025

What type of PR is this?

Specifying NVIDIA GPUs by UUID is inconvenient because the UUIDs have to be looked up on each physical machine, and it is not easy to tell from the nvidia-smi output which GPU a workload is running on.
Specifying GPUs by index is much easier to use, and when a node has several cards the indexes can also be used to limit allocation to a subset of them.

What this PR does / why we need it:
Specify GPUs by index with the pod annotation 'nvidia.com/use-gpuindexes'.
Exclude GPU indexes with the pod annotation 'nvidia.com/nouse-gpuindexes'.
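
A minimal pod spec sketch of how these annotations could be set (the image, the nvidia.com/gpu resource request, and the index values are illustrative assumptions, not taken from this PR's example files):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-index-demo
  annotations:
    nvidia.com/use-gpuindexes: "0,1"    # allocate only from GPU indexes 0 and 1
    nvidia.com/nouse-gpuindexes: "2"    # never allocate GPU index 2
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu: 1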

Which issue(s) this PR fixes:

Special notes for your reviewer:
GPU Indexing Support: Introduces the ability to specify NVIDIA GPUs by their index using new pod annotations, offering a more intuitive selection method than UUIDs.
New Pod Annotations: Adds "nvidia.com/use-gpuindexes" and "nvidia.com/nouse-gpuindexes" for fine-grained control over which GPU indexes a pod can use or must avoid.
Improved Usability: Addresses the inconvenience of using GPU UUIDs by providing a more straightforward index-based selection method, simplifying GPU management for users.

Does this PR introduce a user-facing change?:

@hami-robot
Contributor

hami-robot bot commented Aug 29, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: zhonglin6666
Once this PR has been reviewed and has the lgtm label, please assign archlitchi for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @zhonglin6666, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances GPU resource allocation in Kubernetes by introducing a new mechanism to select or exclude NVIDIA GPUs based on their numerical index rather than their UUIDs. This change aims to simplify GPU management for users, making it easier to control which specific GPUs a pod can utilize or avoid, and provides clearer visibility into GPU usage.

Highlights

  • GPU Indexing Support: Introduces the ability to specify NVIDIA GPUs by their index using new pod annotations, offering a more intuitive selection method than UUIDs.
  • New Pod Annotations: Adds "nvidia.com/use-gpuindexes" and "nvidia.com/nouse-gpuindexes" for fine-grained control over which GPU indexes a pod can use or must avoid.
  • Improved Usability: Addresses the inconvenience of using GPU UUIDs by providing a more straightforward index-based selection method, simplifying GPU management for users.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a convenient feature for specifying NVIDIA GPUs by their indexes, using the nvidia.com/use-gpuindexes and nvidia.com/nouse-gpuindexes annotations. The changes include the core logic in the scheduler, new documentation, and example YAML files. The implementation is generally well-done, but I've identified a logical issue in how conflicting annotations are handled, which could lead to unexpected behavior. I've also suggested a minor improvement to the documentation for clarity.

Comment on lines +427 to +441
func (dev *NvidiaGPUDevices) checkIndex(annos map[string]string, d util.DeviceUsage) bool {
	userIndexes, ok := annos[GPUUseIndexes]
	if ok {
		klog.V(5).Infof("check index for nvidia user indexes [%s], device index is %d", userIndexes, d.Index)
		return util.ContainsTargetNonNegativeInt(userIndexes, ",", int(d.Index))
	}

	noUserIndexes, ok := annos[GPUNoUseIndexes]
	if ok {
		klog.V(5).Infof("check index for nvidia not user indexes [%s], device index is %d", noUserIndexes, d.Index)
		return !util.ContainsTargetNonNegativeInt(noUserIndexes, ",", int(d.Index))
	}

	return true
}
Contributor


high

The current implementation of checkIndex doesn't correctly handle cases where both nvidia.com/use-gpuindexes and nvidia.com/nouse-gpuindexes annotations are present. If use-gpuindexes is specified, nouse-gpuindexes is ignored. The logic should be that a device is selected only if it's in the use-gpuindexes list (if specified) AND not in the nouse-gpuindexes list (if specified). This ensures that both conditions are respected, which is more intuitive for users.

For example, with use-gpuindexes: "0,1" and nouse-gpuindexes: "1", the current logic would incorrectly allow GPU 1. The proposed change fixes this.

I'd also recommend updating the unit tests in pkg/device/nvidia/device_test.go to cover scenarios where both annotations are used together to verify the corrected logic.

func (dev *NvidiaGPUDevices) checkIndex(annos map[string]string, d util.DeviceUsage) bool {
	useIndexes, useOk := annos[GPUUseIndexes]
	if useOk {
		klog.V(5).Infof("check index for nvidia user indexes [%s], device index is %d", useIndexes, d.Index)
		if !util.ContainsTargetNonNegativeInt(useIndexes, ",", int(d.Index)) {
			return false
		}
	}

	noUseIndexes, noUseOk := annos[GPUNoUseIndexes]
	if noUseOk {
		klog.V(5).Infof("check index for nvidia not user indexes [%s], device index is %d", noUseIndexes, d.Index)
		if util.ContainsTargetNonNegativeInt(noUseIndexes, ",", int(d.Index)) {
			return false
		}
	}

	return true
}
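
As a sketch of the test recommended above, a case covering both annotations together against the corrected logic might look like the following. This assumes pkg/device/nvidia/device_test.go already imports the testing package and the util package used by checkIndex, that util.DeviceUsage exposes the integer Index field shown above, and that an empty NvidiaGPUDevices value is sufficient for calling this method.

func TestCheckIndexWithBothAnnotations(t *testing.T) {
	dev := &NvidiaGPUDevices{}
	annos := map[string]string{
		GPUUseIndexes:   "0,1",
		GPUNoUseIndexes: "1",
	}

	// Index 0 is in the use list and absent from the nouse list, so it should be allowed.
	if !dev.checkIndex(annos, util.DeviceUsage{Index: 0}) {
		t.Errorf("expected device index 0 to be selectable")
	}

	// Index 1 appears in both lists; with the corrected logic the nouse list wins and rejects it.
	if dev.checkIndex(annos, util.DeviceUsage{Index: 1}) {
		t.Errorf("expected device index 1 to be rejected")
	}
}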

String type, ie: "0,1"
If set, devices allocated by this pod will NOT in indexes defined in this string.
Contributor


medium

There's a small grammatical error in the description. It should be "must NOT be one of the indexes" to be clearer and consistent with the phrasing for use-gpuindexes.

Suggested change
If set, devices allocated by this pod will NOT in indexes defined in this string.
If set, devices allocated by this pod must NOT be one of the indexes defined in this string.

Signed-off-by: zhognlin6666 <[email protected]>

update examples nvidia specify index

Signed-off-by: zhognlin6666 <[email protected]>

update gpuindexes config

remove comment util test file