[API] stormservice support podgroup #1506
Conversation
Summary of Changes
Hello @Epsilon314, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the StormService API by integrating support for several Kubernetes coscheduling solutions: the Godel Scheduler, the Kubernetes coscheduling scheduler plugin, and the Volcano Scheduler. The core change introduces a flexible SchedulingStrategy field in the StormService specification, allowing users to define how groups of pods should be co-scheduled. The underlying controller logic has been updated to automatically manage the lifecycle of the corresponding PodGroup custom resources (CRs) for the selected scheduler, ensuring that pods are correctly grouped and scheduled together. This feature provides critical functionality for workloads requiring strict co-placement, improving resource utilization and performance for distributed applications.
Highlights
- Enhanced Scheduling Strategy API: The StormService API now supports specifying SchedulingStrategy at both the RoleSet and individual Role levels. This new API lets users choose between the Godel Scheduler, the coscheduling scheduler plugin, and the Volcano Scheduler for their pod groups, providing flexibility in how co-scheduled workloads are managed.
- Automated PodGroup Management: The controller logic has been updated to automatically create, reconcile, and finalize PodGroup custom resources (CRs) for the chosen scheduling strategy. This includes dynamic client interactions to manage these CRs across different scheduler API groups, ensuring seamless integration and lifecycle management.
- Improved PodGroup Identification and Permissions: New constants for PodGroup annotation and label keys have been introduced to correctly identify and associate pods with their respective PodGroups across the different schedulers. Additionally, RBAC rules have been expanded to grant the controller the necessary permissions to interact with these new PodGroup resources.
- Dependency Updates and Compatibility: The project's Go module dependencies have been updated to align with newer Kubernetes versions (v0.31.8) and to include the required sigs.k8s.io/scheduler-plugins and volcano.sh/apis modules, ensuring compatibility and access to the new scheduling functionality.
Code Review
This pull request introduces support for pod groups in StormService, enabling co-scheduling for pods via the Godel, coscheduling, and Volcano schedulers. The changes include API updates to the RoleSet and Role specs, controller logic for managing PodGroup custom resources, and the necessary RBAC and dependency updates. The implementation is comprehensive, but there are several areas for improvement regarding API validation, security hardening, and code maintainability through refactoring.
```go
			podSet.Labels[constants.VolcanoPodGroupNameAnnotationKey] = roleSet.Name
		}
	}
	if role.SchedulingStrategy != nil { // note that roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy should not be set concurrently
```
The comment `// note that roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy should not be set concurrently` indicates a critical usage constraint that is not enforced. Relying on comments for such constraints is error-prone. Please add a validation webhook to prevent both from being set, which will avoid ambiguous behavior and make the API more robust.
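A minimal sketch of such a check, for illustration only. The `Spec.Roles` slice, the role `Name` field, and the import path are assumptions about the aibrix API rather than code from this PR, and the webhook wiring (`ValidateCreate`/`ValidateUpdate`) is omitted:

```go
import (
	"fmt"

	orchestrationv1alpha1 "github.com/vllm-project/aibrix/api/orchestration/v1alpha1" // assumed import path
)

// validateSchedulingStrategy rejects a RoleSet that sets SchedulingStrategy at
// both the RoleSet level and on any individual role, enforcing the constraint
// that the code comment only documents.
func validateSchedulingStrategy(rs *orchestrationv1alpha1.RoleSet) error {
	if rs.Spec.SchedulingStrategy == nil {
		return nil // nothing set at the RoleSet level, so no conflict is possible
	}
	for i := range rs.Spec.Roles {
		if rs.Spec.Roles[i].SchedulingStrategy != nil {
			return fmt.Errorf("schedulingStrategy must not be set on both the RoleSet and role %q", rs.Spec.Roles[i].Name)
		}
	}
	return nil
}
```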
```yaml
- apiGroups:
    - scheduling.godel.kubewharf.io
  resources:
    - podgroups
  verbs:
    - '*'
- apiGroups:
    - scheduling.x-k8s.io
  resources:
    - podgroups
  verbs:
    - '*'
- apiGroups:
    - scheduling.volcano.sh
  resources:
    - podgroups
  verbs:
    - '*'
```
```go
	if podSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
		expectedGroup := &schedv1alpha1.PodGroup{
			ObjectMeta: podGroupMeta,
			Spec:       podSet.Spec.SchedulingStrategy.GodelSchedulingStrategy.PodGroupSpec,
		}
		expectedGroup.SetGroupVersionKind(schedv1alpha1.SchemeGroupVersion.WithKind("PodGroup"))
		if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, podSet.Name, podSet.Namespace); err != nil {
			return err
		} else if created {
			r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
		}
	}
	if podSet.Spec.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
		expectedGroup := &schedulerpluginsv1aplha1.PodGroup{
			ObjectMeta: podGroupMeta,
			Spec:       podSet.Spec.SchedulingStrategy.CoschedulingSchedulingStrategy.PodGroupSpec,
		}
		expectedGroup.SetGroupVersionKind(schedulerpluginsv1aplha1.SchemeGroupVersion.WithKind("PodGroup"))
		if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, podSet.Name, podSet.Namespace); err != nil {
			return err
		} else if created {
			r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
		}
	}
	if podSet.Spec.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
		expectedGroup := &volcanoschedv1beta1.PodGroup{
			ObjectMeta: podGroupMeta,
			Spec:       podSet.Spec.SchedulingStrategy.VolcanoSchedulingStrategy.PodGroupSpec,
		}
		expectedGroup.SetGroupVersionKind(volcanoschedv1beta1.SchemeGroupVersion.WithKind("PodGroup"))
		if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, podSet.Name, podSet.Namespace); err != nil {
			return err
		} else if created {
			r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
		}
	}
```
The logic for reconciling PodGroups for the different scheduling strategies is duplicated. It can be refactored to reduce repetition and improve maintainability: iterate over a list of strategy configurations, or use a switch statement, especially if the API is updated to enforce that only one scheduling strategy can be active at a time.
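For illustration, a sketch of the loop-based variant. It assumes `utils.EnsurePodGroupExist` accepts any `client.Object`, which its use with three different PodGroup types above suggests; this is a refactoring sketch, not code from this PR:

```go
// Build the list of expected PodGroups once, then reconcile them uniformly.
var expected []client.Object
if s := podSet.Spec.SchedulingStrategy; s != nil {
	if s.GodelSchedulingStrategy != nil {
		pg := &schedv1alpha1.PodGroup{ObjectMeta: podGroupMeta, Spec: s.GodelSchedulingStrategy.PodGroupSpec}
		pg.SetGroupVersionKind(schedv1alpha1.SchemeGroupVersion.WithKind("PodGroup"))
		expected = append(expected, pg)
	}
	if s.CoschedulingSchedulingStrategy != nil {
		pg := &schedulerpluginsv1aplha1.PodGroup{ObjectMeta: podGroupMeta, Spec: s.CoschedulingSchedulingStrategy.PodGroupSpec}
		pg.SetGroupVersionKind(schedulerpluginsv1aplha1.SchemeGroupVersion.WithKind("PodGroup"))
		expected = append(expected, pg)
	}
	if s.VolcanoSchedulingStrategy != nil {
		pg := &volcanoschedv1beta1.PodGroup{ObjectMeta: podGroupMeta, Spec: s.VolcanoSchedulingStrategy.PodGroupSpec}
		pg.SetGroupVersionKind(volcanoschedv1beta1.SchemeGroupVersion.WithKind("PodGroup"))
		expected = append(expected, pg)
	}
}
// One shared reconcile loop replaces the three near-identical blocks above.
for _, pg := range expected {
	if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, pg, podSet.Name, podSet.Namespace); err != nil {
		return err
	} else if created {
		r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
	}
}
```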
```go
	if roleSet.Spec.SchedulingStrategy != nil {
		if roleSet.Spec.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
			podSet.Labels[constants.CoschedulingPodGroupNameLabelKey] = roleSet.Name
		}
		if roleSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
			podSet.Annotations[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
			podSet.Labels[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
		}
		if roleSet.Spec.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
			podSet.Annotations[constants.VolcanoPodGroupNameAnnotationKey] = roleSet.Name
			podSet.Labels[constants.VolcanoPodGroupNameAnnotationKey] = roleSet.Name
		}
	}
	if role.SchedulingStrategy != nil { // note that roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy should not be set concurrently
		if role.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
			podSet.Labels[constants.CoschedulingPodGroupNameLabelKey] = podSet.Name
		}
		if role.SchedulingStrategy.GodelSchedulingStrategy != nil {
			podSet.Annotations[constants.GodelPodGroupNameAnnotationKey] = podSet.Name
			podSet.Labels[constants.GodelPodGroupNameAnnotationKey] = podSet.Name
		}
		if role.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
			podSet.Annotations[constants.VolcanoPodGroupNameAnnotationKey] = podSet.Name
			podSet.Labels[constants.VolcanoPodGroupNameAnnotationKey] = podSet.Name
		}
	}
```
```go
		}
		if roleSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
			podSet.Annotations[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
			podSet.Labels[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
```
The constant `constants.GodelPodGroupNameAnnotationKey` is being used as the key for a label. This is confusing, since the name implies it is only for an annotation. If this is intentional, consider renaming the constant to something more generic like `GodelPodGroupNameKey`, or creating a separate constant for the label key, to improve code clarity.
Before:

```go
func (r *RoleSetReconciler) syncPodGroup(ctx context.Context, roleSet *orchestrationv1alpha1.RoleSet, spec *orchestrationv1alpha1.RoleSetSpec) error {
	if spec.SchedulingStrategy.PodGroup == nil {
		return nil
	}
	expectedGroup := &schedv1alpha1.PodGroup{
		ObjectMeta: metav1.ObjectMeta{
			Name:      roleSet.Name,
			Namespace: roleSet.Namespace,
			Labels: map[string]string{
				constants.RoleSetNameLabelKey: roleSet.Name,
			},
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(roleSet, orchestrationv1alpha1.SchemeGroupVersion.WithKind(orchestrationv1alpha1.RoleSetKind)),
			},
		},
		Spec: *spec.SchedulingStrategy.PodGroup,
	}
	podGroup := &schedv1alpha1.PodGroup{}
	if err := r.Client.Get(ctx, client.ObjectKey{Name: roleSet.Name, Namespace: roleSet.Namespace}, podGroup); client.IgnoreNotFound(err) != nil {
		return err
	} else if err != nil {
		// not found pg, need to create
		if err = r.Client.Create(ctx, expectedGroup); err == nil {
			// ...
		}
		return err
	}
	return nil
}
```

After:

```go
func (r *RoleSetReconciler) syncPodGroup(ctx context.Context, roleSet *orchestrationv1alpha1.RoleSet, spec *orchestrationv1alpha1.RoleSetSpec) error {
	if spec.SchedulingStrategy == nil {
		return nil
	}
	podGroupMeta := metav1.ObjectMeta{
		Name:      roleSet.Name,
		Namespace: roleSet.Namespace,
		Labels: map[string]string{
			constants.RoleSetNameLabelKey: roleSet.Name,
		},
		OwnerReferences: []metav1.OwnerReference{
			*metav1.NewControllerRef(roleSet, orchestrationv1alpha1.SchemeGroupVersion.WithKind(orchestrationv1alpha1.RoleSetKind)),
		},
	}
	if spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
		expectedGroup := &schedv1alpha1.PodGroup{
			ObjectMeta: podGroupMeta,
			Spec:       spec.SchedulingStrategy.GodelSchedulingStrategy.PodGroupSpec,
		}
		expectedGroup.SetGroupVersionKind(schedv1alpha1.SchemeGroupVersion.WithKind("PodGroup"))
		if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, roleSet.Name, roleSet.Namespace); err != nil {
			return err
		} else if created {
			r.EventRecorder.Eventf(roleSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", roleSet.Name)
		}
	}
	if spec.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
		expectedGroup := &schedulerpluginsv1aplha1.PodGroup{
			ObjectMeta: podGroupMeta,
			Spec:       spec.SchedulingStrategy.CoschedulingSchedulingStrategy.PodGroupSpec,
		}
		expectedGroup.SetGroupVersionKind(schedulerpluginsv1aplha1.SchemeGroupVersion.WithKind("PodGroup"))
		if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, roleSet.Name, roleSet.Namespace); err != nil {
			return err
		} else if created {
			r.EventRecorder.Eventf(roleSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", roleSet.Name)
		}
	}
	if spec.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
		expectedGroup := &volcanoschedv1beta1.PodGroup{
			ObjectMeta: podGroupMeta,
			Spec:       spec.SchedulingStrategy.VolcanoSchedulingStrategy.PodGroupSpec,
		}
		expectedGroup.SetGroupVersionKind(volcanoschedv1beta1.SchemeGroupVersion.WithKind("PodGroup"))
		if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, roleSet.Name, roleSet.Namespace); err != nil {
			return err
		} else if created {
			r.EventRecorder.Eventf(roleSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", roleSet.Name)
		}
	}
	return nil
}
```
```go
			pod.Labels[constants.CoschedulingPodGroupNameLabelKey] = roleSet.Name
		}
		if roleSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
			pod.Labels[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
```
The constant `constants.GodelPodGroupNameAnnotationKey` is being used as the key for a label. This is confusing, since the name implies it is only for an annotation. If this is intentional, consider renaming the constant to something more generic like `GodelPodGroupNameKey`, or creating a separate constant for the label key, to improve code clarity. This issue is also present in `pkg/controller/roleset/podset_rollsyncer.go`.
```go
	GodelSchedulingStrategy *GodelSchedulingStrategySpec `json:"godelSchedulingStrategy,omitempty"`

	CoschedulingSchedulingStrategy *CoschedulingSchedulingStrategySpec `json:"coschedulingSchedulingStrategy,omitempty"`

	VolcanoSchedulingStrategy *VolcanoSchedulingStrategySpec `json:"volcanoSchedulingStrategy,omitempty"`
```
Regarding these dependencies: do the strategies require the corresponding components or CRDs in order to run? For example, does VolcanoSchedulingStrategy require the Volcano component to be installed to take effect? Also, do we need to integrate them with Aibrix (is that a large dependency)? And how can we check whether the CRD exists?
Imagine a user sets VolcanoSchedulingStrategy but doesn't have Volcano installed. What will the final behavior be? Is there a fallback mechanism?
Now we check whether the CRD exists before each PodGroup create/delete (with the dynamic client), and if the CRD does not exist we just stop there.
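For reference, a minimal sketch of that kind of pre-flight check using the discovery client. The helper name and its call sites are illustrative assumptions, not code from this PR:

```go
import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
)

// podGroupCRDExists reports whether a PodGroup resource is served for the
// given group/version, e.g. "scheduling.volcano.sh/v1beta1".
func podGroupCRDExists(dc discovery.DiscoveryInterface, groupVersion string) (bool, error) {
	resources, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if err != nil {
		if apierrors.IsNotFound(err) {
			return false, nil // the group/version is not served: CRD is missing
		}
		return false, err
	}
	for _, r := range resources.APIResources {
		if r.Name == "podgroups" {
			return true, nil
		}
	}
	return false, nil
}
```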
An alternative is to add a feature gate that requires the user to choose the scheduler implementation if they want to enable PodGroups. Then we can check that the CRD exists at start time and simply crash with a clear error if it does not (and if the CRD is determined at start time, we can add it to the informer, which is a plus).
A feature-gate approach seems like a good option. Regarding the CRD check, I understand we can check the CRD when starting the controller. If the feature gate is true, users should install the corresponding component (like Volcano). We could implement this similarly.
FYI: https://github.com/Jeffwan/aibrix/blob/1008ec3260f0a85f573c45d0a6487080f10758b1/pkg/controller/controller.go#L62-L73
Also, if PodGroup creation fails, we could emit an appropriate log or event to notify the user.
The tradeoff is:

1. Add a feature gate.
   Pro: since the CRD is known at start time, we can add it to the scheme, so watching and modification become easier.
   Con: since there is no obvious default value, users have to configure it explicitly, so it is not enabled out of the box.
2. Detect dynamically.
   Pro: configuration-free; takes effect immediately after the scheduler is installed.
   Con: hard to watch PodGroup changes; more API calls.

After rethinking, I feel adding a feature gate would be the better choice, and we can reduce the added configuration complexity on the release side, for example with some Helm tricks. A sketch of the feature-gate variant follows.
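A hypothetical sketch of that start-up path, reusing the `podGroupCRDExists` helper sketched earlier in this thread. The gate variable and flag plumbing are illustrative, not part of this PR:

```go
// At controller start-up: if the PodGroup feature gate is enabled, require the
// chosen scheduler's PodGroup CRD to be installed and fail fast otherwise.
// Because the CRD is then known at start time, it can also be registered with
// the scheme and watched via an informer.
if podGroupFeatureEnabled { // e.g. set from a --feature-gates flag (illustrative)
	ok, err := podGroupCRDExists(discoveryClient, "scheduling.volcano.sh/v1beta1")
	if err != nil {
		log.Fatalf("failed to check PodGroup CRD: %v", err)
	}
	if !ok {
		log.Fatalf("PodGroup feature gate is enabled but the scheduler's PodGroup CRD is not installed")
	}
}
```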
> After rethinking, I feel adding a feature gate would be the better choice, and we can reduce the added configuration complexity on the release side, for example with some Helm tricks.

@Epsilon314 Hi Yiqing, please help address the linter issue and let's get this merged ASAP.
Pull Request Description
Add podgroup support to the StormService spec, providing a coscheduling feature. Supported scheduler implementations are the coscheduling scheduler plugin, the Godel Scheduler, and the Volcano Scheduler.
API
Add schedulingStrategy at either the RoleSet level or the Role level, defined as follows:
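The full definition is not reproduced in this thread; reconstructed from the API diff above, it is presumably along these lines (the wrapper type name `SchedulingStrategy` is an assumption):

```go
// SchedulingStrategy selects which coscheduling implementation manages the
// PodGroup; at most one of the three fields should be set.
type SchedulingStrategy struct {
	GodelSchedulingStrategy        *GodelSchedulingStrategySpec        `json:"godelSchedulingStrategy,omitempty"`
	CoschedulingSchedulingStrategy *CoschedulingSchedulingStrategySpec `json:"coschedulingSchedulingStrategy,omitempty"`
	VolcanoSchedulingStrategy      *VolcanoSchedulingStrategySpec      `json:"volcanoSchedulingStrategy,omitempty"`
}
```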
All pods in the same RoleSet replica are bound to the same podgroup; e.g. a RoleSet-level strategy like the hypothetical sketch below will auto-create a matching PodGroup.
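The original example manifests are not preserved here; as an illustration only, in Go terms (the MinMember value is chosen arbitrarily):

```go
// RoleSet-level strategy: one PodGroup per RoleSet replica.
roleSet.Spec.SchedulingStrategy = &orchestrationv1alpha1.SchedulingStrategy{
	VolcanoSchedulingStrategy: &orchestrationv1alpha1.VolcanoSchedulingStrategySpec{
		PodGroupSpec: volcanoschedv1beta1.PodGroupSpec{MinMember: 2},
	},
}
// The controller auto-creates a Volcano PodGroup named roleSet.Name in
// roleSet.Namespace and stamps the matching PodGroup annotation/label onto
// every pod of the replica.
```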
Feasible only when the pod group size is greater than 1. All pods in the same podset of the role are bound to the same podgroup; e.g. a Role-level strategy like the hypothetical sketch below will auto-create one PodGroup per podset.
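Again illustrative only, with an arbitrary MinMember and a cleaner scheduler-plugins import alias than the one used in the diff:

```go
// Role-level strategy: one PodGroup per podset of the role, named podSet.Name.
role.SchedulingStrategy = &orchestrationv1alpha1.SchedulingStrategy{
	CoschedulingSchedulingStrategy: &orchestrationv1alpha1.CoschedulingSchedulingStrategySpec{
		PodGroupSpec: schedulerpluginsv1alpha1.PodGroupSpec{MinMember: 4},
	},
}
// The controller auto-creates one coscheduling PodGroup per podset and applies
// the corresponding pod-group label to the podset's pods.
```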
Controller
The PodGroup CR and the corresponding labels/annotations are automatically created and finalized.
Related Issues
Resolves: #1289