
Conversation

Epsilon314 (Contributor)

Pull Request Description

Add podgroup support to the StormService spec, which provides a coscheduling feature. Supported scheduler implementations are the coscheduling scheduler plugin, the Godel Scheduler, and the Volcano Scheduler.

API

Add schedulingStrategy at either the RoleSet level or the Role level, defined as:

// +enum
type SchedulingStrategy struct {
	GodelSchedulingStrategy *GodelSchedulingStrategySpec `json:"godelSchedulingStrategy,omitempty"`

	CoschedulingSchedulingStrategy *CoschedulingSchedulingStrategySpec `json:"coschedulingSchedulingStrategy,omitempty"`

	VolcanoSchedulingStrategy *VolcanoSchedulingStrategySpec `json:"volcanoSchedulingStrategy,omitempty"`
}

// GodelSchedulingStrategySpec uses godel scheduler's podgroup definition
type GodelSchedulingStrategySpec struct {
	schedv1alpha1.PodGroupSpec `json:",inline"`
}

// CoschedulingSchedulingStrategySpec uses coscheduling scheduler-plugin's podgroup definition
type CoschedulingSchedulingStrategySpec struct {
	schedulerpluginsv1aplha1.PodGroupSpec `json:",inline"`
}

// VolcanoSchedulingStrategySpec uses volcano's podgroup definition
type VolcanoSchedulingStrategySpec struct {
	volcanoschedv1beta1.PodGroupSpec `json:",inline"`
}
  • RoleSet level podgroup
    All pods in the same RoleSet replica are bound to the same podgroup.
    e.g.
apiVersion: orchestration.aibrix.ai/v1alpha1
kind: StormService
metadata:
  name: podgroupexample1
spec:
  replicas: 1
  stateful: true
  updateStrategy:
    type: "InPlaceUpdate"
  selector:
    matchLabels:
      app: podgroupexample1
  template:
    metadata:
      labels:
        app: podgroupexample1
    spec:
      schedulingStrategy:
        coschedulingSchedulingStrategy:
          minMember: 8
      roles:
        - name: role1
          replicas: 2
          podGroupSize: 2
          updateStrategy:
            maxSurge: "25%"
            maxUnavailable: "25%"
          template:
            spec:
              containers:
                - name: main
                  image: busybox
                  command:
                  - sh
                  - -c
                  args:
                  - sleep infinity
                  resources:
                    limits:
                      cpu: 1
        - name: role2
          replicas: 1
          podGroupSize: 4
          updateStrategy:
            maxSurge: "25%"
            maxUnavailable: "25%"
          template:
            spec:
              containers:
                - name: main
                  image: busybox
                  command:
                  - sh
                  - -c
                  args:
                  - sleep infinity
                  resources:
                    limits:
                      cpu: 1

will automatically create:

default    podgroupexample1-roleset-bwgjc
  • Role level podgroup
    Feasible only if podGroupSize > 1. All pods in the same podset of the role are bound to the same podgroup.
    e.g.
apiVersion: orchestration.aibrix.ai/v1alpha1
kind: StormService
metadata:
  name: podgroupexample2
spec:
  replicas: 1
  stateful: true
  updateStrategy:
    type: "InPlaceUpdate"
  selector:
    matchLabels:
      app: podgroupexample2
  template:
    metadata:
      labels:
        app: podgroupexample2
    spec:
      roles:
        - name: role1
          replicas: 2
          podGroupSize: 2
          updateStrategy:
            maxSurge: "25%"
            maxUnavailable: "25%"
          schedulingStrategy:
            coschedulingSchedulingStrategy:
              minMember: 2
          template:
            spec:
              containers:
                - name: main
                  image: busybox
                  command:
                  - sh
                  - -c
                  args:
                  - sleep infinity
                  resources:
                    limits:
                      cpu: 1
        - name: role2
          replicas: 1
          podGroupSize: 4
          updateStrategy:
            maxSurge: "25%"
            maxUnavailable: "25%"
          template:
            spec:
              containers:
                - name: main
                  image: busybox
                  command:
                  - sh
                  - -c
                  args:
                  - sleep infinity
                  resources:
                    limits:
                      cpu: 1

will automatically create:

default    podgroupexample2-roleset-8h6px-role1-6fc85b96bb-0
default    podgroupexample2-roleset-8h6px-role1-6fc85b96bb-1   

Controller

The PodGroup CR and the corresponding labels/annotations will be automatically created and finalized.

Related Issues

Resolves: #1289


gemini-code-assist bot left a comment

Summary of Changes

Hello @Epsilon314, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the StormService API by integrating support for various Kubernetes coscheduling solutions, including Godel Scheduler, the Kubernetes Scheduler Plugin, and Volcano Scheduler. The core change involves introducing a flexible SchedulingStrategy field within the StormService specification, allowing users to define how groups of pods should be co-scheduled. The underlying controller logic has been updated to automatically manage the lifecycle of corresponding PodGroup Custom Resources (CRs) for the selected scheduler, ensuring that pods are correctly grouped and scheduled together. This feature provides critical functionality for workloads requiring strict co-placement, improving resource utilization and performance for distributed applications.

Highlights

  • Enhanced Scheduling Strategy API: The StormService API now supports specifying SchedulingStrategy at both the RoleSet and individual Role levels. This new API allows users to choose between Godel Scheduler, Coscheduling Scheduler Plugin, and Volcano Scheduler for their pod groups, providing flexibility in how co-scheduled workloads are managed.
  • Automated PodGroup Management: The controller logic has been updated to automatically create, reconcile, and finalize PodGroup Custom Resources (CRs) for the chosen scheduling strategy. This includes dynamic client interactions to manage these CRs across different scheduler API groups, ensuring seamless integration and lifecycle management.
  • Improved PodGroup Identification and Permissions: New constants for PodGroup annotation and label keys have been introduced to correctly identify and associate pods with their respective PodGroups across different schedulers. Additionally, RBAC rules have been expanded to grant the controller the necessary permissions to interact with these new PodGroup resources.
  • Dependency Updates and Compatibility: The project's Go module dependencies have been updated to align with newer Kubernetes versions (v0.31.8) and to include the required sigs.k8s.io/scheduler-plugins and volcano.sh/apis modules, ensuring compatibility and access to the new scheduling functionalities.

gemini-code-assist bot left a comment

Code Review

This pull request introduces support for pod groups in StormService, enabling co-scheduling capabilities for pods using Godel, Coscheduling, and Volcano schedulers. The changes include API updates to RoleSet and Role specs, controller logic for managing PodGroup custom resources, and necessary RBAC and dependency updates. The implementation is comprehensive, but there are several areas for improvement regarding API validation, security hardening, and code maintainability through refactoring.

podSet.Labels[constants.VolcanoPodGroupNameAnnotationKey] = roleSet.Name
}
}
if role.SchedulingStrategy != nil { // note that roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy should not be set concurrently

high

The comment // note that roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy should not be set concurrently indicates a critical usage constraint that is not enforced. Relying on comments for such constraints is error-prone. Please add a validation webhook to prevent both from being set, which will prevent ambiguous behavior and make the API more robust.
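A minimal sketch of such a check (the helper name is illustrative, and it assumes the RoleSet spec exposes its roles as Roles and that fmt plus the orchestration API package are already imported):

// validateSchedulingStrategy rejects specs that set schedulingStrategy at both
// the RoleSet level and on any individual role.
func validateSchedulingStrategy(spec *orchestrationv1alpha1.RoleSetSpec) error {
	if spec.SchedulingStrategy == nil {
		return nil
	}
	for i := range spec.Roles {
		if spec.Roles[i].SchedulingStrategy != nil {
			return fmt.Errorf("schedulingStrategy must not be set on both the RoleSet and role %q", spec.Roles[i].Name)
		}
	}
	return nil
}

The same check could also run in the controller as a defensive guard until a webhook is in place.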

Comment on lines +294 to +311
- apiGroups:
- scheduling.godel.kubewharf.io
resources:
- podgroups
verbs:
- '*'
- apiGroups:
- scheduling.x-k8s.io
resources:
- podgroups
verbs:
- '*'
- apiGroups:
- scheduling.volcano.sh
resources:
- podgroups
verbs:
- '*'

medium

For enhanced security, it's a best practice to grant specific, minimal permissions instead of using a wildcard ('*'). Please replace the wildcard verb with the specific verbs required by the controller for managing podgroups, such as get, list, watch, create, update, patch, and delete.
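If these rules are generated from kubebuilder markers, the narrowed permissions could be expressed like this (marker placement is illustrative; the same verb list applies if the rules are maintained directly in the RBAC YAML):

// Narrowed RBAC for the external PodGroup resources; regenerate config/rbac after editing.
//+kubebuilder:rbac:groups=scheduling.godel.kubewharf.io,resources=podgroups,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=scheduling.x-k8s.io,resources=podgroups,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=scheduling.volcano.sh,resources=podgroups,verbs=get;list;watch;create;update;patch;delete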

Comment on lines 172 to 207
if podSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
expectedGroup := &schedv1alpha1.PodGroup{
ObjectMeta: podGroupMeta,
Spec: podSet.Spec.SchedulingStrategy.GodelSchedulingStrategy.PodGroupSpec,
}
expectedGroup.SetGroupVersionKind(schedv1alpha1.SchemeGroupVersion.WithKind("PodGroup"))
if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, podSet.Name, podSet.Namespace); err != nil {
return err
} else if created {
r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
}
}
if podSet.Spec.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
expectedGroup := &schedulerpluginsv1aplha1.PodGroup{
ObjectMeta: podGroupMeta,
Spec: podSet.Spec.SchedulingStrategy.CoschedulingSchedulingStrategy.PodGroupSpec,
}
expectedGroup.SetGroupVersionKind(schedulerpluginsv1aplha1.SchemeGroupVersion.WithKind("PodGroup"))
if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, podSet.Name, podSet.Namespace); err != nil {
return err
} else if created {
r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
}
}
if podSet.Spec.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
expectedGroup := &volcanoschedv1beta1.PodGroup{
ObjectMeta: podGroupMeta,
Spec: podSet.Spec.SchedulingStrategy.VolcanoSchedulingStrategy.PodGroupSpec,
}
expectedGroup.SetGroupVersionKind(volcanoschedv1beta1.SchemeGroupVersion.WithKind("PodGroup"))
if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, podSet.Name, podSet.Namespace); err != nil {
return err
} else if created {
r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
}
}

medium

The logic for reconciling PodGroups for different scheduling strategies is duplicated. This can be refactored to reduce code repetition and improve maintainability. You could iterate over a list of strategy configurations or use a switch statement, especially if the API is updated to enforce that only one scheduling strategy can be active at a time.
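One possible shape for that refactor, reusing utils.EnsurePodGroupExist from this PR (a sketch; it assumes the same import aliases as the surrounding file and that EnsurePodGroupExist accepts a client.Object, which matches it being called with three different PodGroup types here):

// expectedPodGroups builds one desired PodGroup object per configured strategy.
func expectedPodGroups(s *orchestrationv1alpha1.SchedulingStrategy, meta metav1.ObjectMeta) []client.Object {
	var out []client.Object
	if s == nil {
		return out
	}
	if s.GodelSchedulingStrategy != nil {
		pg := &schedv1alpha1.PodGroup{ObjectMeta: meta, Spec: s.GodelSchedulingStrategy.PodGroupSpec}
		pg.SetGroupVersionKind(schedv1alpha1.SchemeGroupVersion.WithKind("PodGroup"))
		out = append(out, pg)
	}
	if s.CoschedulingSchedulingStrategy != nil {
		pg := &schedulerpluginsv1aplha1.PodGroup{ObjectMeta: meta, Spec: s.CoschedulingSchedulingStrategy.PodGroupSpec}
		pg.SetGroupVersionKind(schedulerpluginsv1aplha1.SchemeGroupVersion.WithKind("PodGroup"))
		out = append(out, pg)
	}
	if s.VolcanoSchedulingStrategy != nil {
		pg := &volcanoschedv1beta1.PodGroup{ObjectMeta: meta, Spec: s.VolcanoSchedulingStrategy.PodGroupSpec}
		pg.SetGroupVersionKind(volcanoschedv1beta1.SchemeGroupVersion.WithKind("PodGroup"))
		out = append(out, pg)
	}
	return out
}

// Call site, replacing the three duplicated blocks above:
for _, pg := range expectedPodGroups(podSet.Spec.SchedulingStrategy, podGroupMeta) {
	created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, pg, podSet.Name, podSet.Namespace)
	if err != nil {
		return err
	}
	if created {
		r.EventRecorder.Eventf(podSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", podSet.Name)
	}
}

The same helper could back the RoleSet-level syncPodGroup so both controllers share a single code path.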

Comment on lines +403 to 428
if roleSet.Spec.SchedulingStrategy != nil {
if roleSet.Spec.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
podSet.Labels[constants.CoschedulingPodGroupNameLabelKey] = roleSet.Name
}
if roleSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
podSet.Annotations[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
podSet.Labels[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
}
if roleSet.Spec.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
podSet.Annotations[constants.VolcanoPodGroupNameAnnotationKey] = roleSet.Name
podSet.Labels[constants.VolcanoPodGroupNameAnnotationKey] = roleSet.Name
}
}
if role.SchedulingStrategy != nil { // note that roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy should not be set concurrently
if role.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
podSet.Labels[constants.CoschedulingPodGroupNameLabelKey] = podSet.Name
}
if role.SchedulingStrategy.GodelSchedulingStrategy != nil {
podSet.Annotations[constants.GodelPodGroupNameAnnotationKey] = podSet.Name
podSet.Labels[constants.GodelPodGroupNameAnnotationKey] = podSet.Name
}
if role.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
podSet.Annotations[constants.VolcanoPodGroupNameAnnotationKey] = podSet.Name
podSet.Labels[constants.VolcanoPodGroupNameAnnotationKey] = podSet.Name
}
}

medium

The logic for setting pod group labels and annotations is duplicated for roleSet.Spec.SchedulingStrategy and role.SchedulingStrategy. This can be refactored into a helper function to improve readability and maintainability.
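A small helper of roughly this shape (the name is illustrative; the constants are the ones added in this PR) would remove the duplication:

// applyPodGroupMeta stamps the pod group name into the given label/annotation maps.
// groupName is roleSet.Name at the RoleSet level and podSet.Name at the Role level.
func applyPodGroupMeta(labels, annotations map[string]string, s *orchestrationv1alpha1.SchedulingStrategy, groupName string) {
	if s == nil {
		return
	}
	if s.CoschedulingSchedulingStrategy != nil {
		labels[constants.CoschedulingPodGroupNameLabelKey] = groupName
	}
	if s.GodelSchedulingStrategy != nil {
		annotations[constants.GodelPodGroupNameAnnotationKey] = groupName
		labels[constants.GodelPodGroupNameAnnotationKey] = groupName
	}
	if s.VolcanoSchedulingStrategy != nil {
		annotations[constants.VolcanoPodGroupNameAnnotationKey] = groupName
		labels[constants.VolcanoPodGroupNameAnnotationKey] = groupName
	}
}

The two blocks above would then collapse to applyPodGroupMeta(podSet.Labels, podSet.Annotations, roleSet.Spec.SchedulingStrategy, roleSet.Name) and applyPodGroupMeta(podSet.Labels, podSet.Annotations, role.SchedulingStrategy, podSet.Name).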

}
if roleSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
podSet.Annotations[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name
podSet.Labels[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name

medium

The constant constants.GodelPodGroupNameAnnotationKey is being used as a key for a label. This is confusing as the name implies it's for an annotation. If this is intentional, consider renaming the constant to something more generic like GodelPodGroupNameKey or creating a separate constant for the label key to improve code clarity.
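For example, a split along these lines (the names and the placeholder string are illustrative; the real value must stay whatever key the Godel scheduler actually reads):

const (
	// GodelPodGroupNameAnnotationKey names the pod group via an annotation.
	GodelPodGroupNameAnnotationKey = "example.godel.kubewharf.io/pod-group-name" // placeholder value
	// GodelPodGroupNameLabelKey is the same string, but makes label call sites read correctly.
	GodelPodGroupNameLabelKey = GodelPodGroupNameAnnotationKey
)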

Comment on lines 43 to 96
func (r *RoleSetReconciler) syncPodGroup(ctx context.Context, roleSet *orchestrationv1alpha1.RoleSet, spec *orchestrationv1alpha1.RoleSetSpec) error {
if spec.SchedulingStrategy.PodGroup == nil {
if spec.SchedulingStrategy == nil {
return nil
}
expectedGroup := &schedv1alpha1.PodGroup{
ObjectMeta: metav1.ObjectMeta{
Name: roleSet.Name,
Namespace: roleSet.Namespace,
Labels: map[string]string{
constants.RoleSetNameLabelKey: roleSet.Name,
},
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(roleSet, orchestrationv1alpha1.SchemeGroupVersion.WithKind(orchestrationv1alpha1.RoleSetKind)),
},

podGroupMeta := metav1.ObjectMeta{
Name: roleSet.Name,
Namespace: roleSet.Namespace,
Labels: map[string]string{
constants.RoleSetNameLabelKey: roleSet.Name,
},
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(roleSet, orchestrationv1alpha1.SchemeGroupVersion.WithKind(orchestrationv1alpha1.RoleSetKind)),
},
Spec: *spec.SchedulingStrategy.PodGroup,
}
podGroup := &schedv1alpha1.PodGroup{}
if err := r.Client.Get(ctx, client.ObjectKey{Name: roleSet.Name, Namespace: roleSet.Namespace}, podGroup); client.IgnoreNotFound(err) != nil {
return err
} else if err != nil {
// not found pg, need to create
if err = r.Client.Create(ctx, expectedGroup); err == nil {

if spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
expectedGroup := &schedv1alpha1.PodGroup{
ObjectMeta: podGroupMeta,
Spec: spec.SchedulingStrategy.GodelSchedulingStrategy.PodGroupSpec,
}
expectedGroup.SetGroupVersionKind(schedv1alpha1.SchemeGroupVersion.WithKind("PodGroup"))
if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, roleSet.Name, roleSet.Namespace); err != nil {
return err
} else if created {
r.EventRecorder.Eventf(roleSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", roleSet.Name)
}
}
if spec.SchedulingStrategy.CoschedulingSchedulingStrategy != nil {
expectedGroup := &schedulerpluginsv1aplha1.PodGroup{
ObjectMeta: podGroupMeta,
Spec: spec.SchedulingStrategy.CoschedulingSchedulingStrategy.PodGroupSpec,
}
expectedGroup.SetGroupVersionKind(schedulerpluginsv1aplha1.SchemeGroupVersion.WithKind("PodGroup"))
if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, roleSet.Name, roleSet.Namespace); err != nil {
return err
} else if created {
r.EventRecorder.Eventf(roleSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", roleSet.Name)
}
}
if spec.SchedulingStrategy.VolcanoSchedulingStrategy != nil {
expectedGroup := &volcanoschedv1beta1.PodGroup{
ObjectMeta: podGroupMeta,
Spec: spec.SchedulingStrategy.VolcanoSchedulingStrategy.PodGroupSpec,
}
expectedGroup.SetGroupVersionKind(volcanoschedv1beta1.SchemeGroupVersion.WithKind("PodGroup"))
if created, err := utils.EnsurePodGroupExist(ctx, r.DynamicClient, r.Client, expectedGroup, roleSet.Name, roleSet.Namespace); err != nil {
return err
} else if created {
r.EventRecorder.Eventf(roleSet, v1.EventTypeNormal, PodGroupSyncedEventType, "pod group %s synced", roleSet.Name)
}
return err
}
return nil
}

medium

This syncPodGroup function contains duplicated logic for handling different scheduling strategies, similar to the one in the podset_controller. Consider refactoring this to reduce code repetition and improve maintainability.

pod.Labels[constants.CoschedulingPodGroupNameLabelKey] = roleSet.Name
}
if roleSet.Spec.SchedulingStrategy.GodelSchedulingStrategy != nil {
pod.Labels[constants.GodelPodGroupNameAnnotationKey] = roleSet.Name

medium

The constant constants.GodelPodGroupNameAnnotationKey is being used as a key for a label. This is confusing as the name implies it's for an annotation. If this is intentional, consider renaming the constant to something more generic like GodelPodGroupNameKey or creating a separate constant for the label key to improve code clarity. This issue is also present in pkg/controller/roleset/podset_rollsyncer.go.

Epsilon314 force-pushed the yiqing/github/podgroup branch 2 times, most recently from 5c0eaf7 to e673f09 (August 27, 2025 02:45)
googs1025 self-assigned this Aug 27, 2025
Epsilon314 force-pushed the yiqing/github/podgroup branch 2 times, most recently from 9e7848a to 3a3c0f5 (September 1, 2025 06:35)
Comment on lines +45 to +47
GodelSchedulingStrategy *GodelSchedulingStrategySpec `json:"godelSchedulingStrategy,omitempty"`

CoschedulingSchedulingStrategy *CoschedulingSchedulingStrategySpec `json:"coschedulingSchedulingStrategy,omitempty"`

VolcanoSchedulingStrategy *VolcanoSchedulingStrategySpec `json:"volcanoSchedulingStrategy,omitempty"`
Collaborator

Regarding these dependencies, do the strategies require these components or CRDs to run? For example, does VolcanoSchedulingStrategy require the Volcano component to be installed to take effect? Also, do we need to pull these into aibrix as dependencies (would that be a large dependency)? And how can we check whether the CRD exists?

googs1025 (Collaborator) commented Sep 1, 2025

Imagine a user sets VolcanoSchedulingStrategy but doesn't have Volcano installed. What will the final behavior be? Is there a fallback mechanism?

Epsilon314 (Contributor, Author)

Now we check whether the CRD exists before each podgroup create/delete (using the dynamic client), and if the CRD does not exist we just stop there.
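For reference, a check of that shape could use the discovery client (a sketch; the helper name and wiring are illustrative and not necessarily how this PR's EnsurePodGroupExist does it internally):

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
)

// podGroupCRDInstalled reports whether the "podgroups" resource is served for a
// given group/version, e.g. "scheduling.x-k8s.io/v1alpha1" or "scheduling.volcano.sh/v1beta1".
func podGroupCRDInstalled(dc discovery.DiscoveryInterface, groupVersion string) (bool, error) {
	list, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if err != nil {
		if apierrors.IsNotFound(err) {
			return false, nil // group/version not served: the scheduler's CRD is absent
		}
		return false, err
	}
	for _, res := range list.APIResources {
		if res.Name == "podgroups" {
			return true, nil
		}
	}
	return false, nil
}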

Epsilon314 (Contributor, Author)

An alternative is to add a feature gate that requires the user to choose the scheduler implementation if they want to enable podgroups; then we can check that the CRD exists at start time and simply crash and complain if it does not (if the CRD is determined at start time we can also add it to the informer, which is a plus).

Collaborator

It seems like the feature gate approach is a good option. Regarding the CRD check, I understand we can do that check when starting the controller. If the feature gate is enabled, users should install the corresponding component (like Volcano). We could implement this similarly.
FYI: https://github.com/Jeffwan/aibrix/blob/1008ec3260f0a85f573c45d0a6487080f10758b1/pkg/controller/controller.go#L62-L73

Also, if PodGroup creation fails, we could emit an appropriate log or event to notify the user.

Epsilon314 (Contributor, Author) commented Sep 1, 2025

The tradeoff is:

  1. add a feature gate
    pro: since the CRD is known at start time, we can add it to the scheme, so watching and modification are easier
    con: there is no obvious default value, so users have to configure it explicitly and it is not enabled out of the box
  2. detect dynamically
    pro: config-free; takes effect immediately after the scheduler is installed
    con: hard to watch PodGroup changes; more API calls

After rethinking, I feel adding a feature gate would be the better choice, and we can reduce the added configuration complexity in the release part, for example with some Helm tricks. A rough sketch of the startup check is below.
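A rough sketch of the feature-gate option, with a hypothetical flag name, reusing the discovery-based check sketched earlier in this thread (assumes flag, fmt, and discovery imports):

var podGroupScheduler = flag.String("pod-group-scheduler", "",
	"pod group scheduler backend: godel | coscheduling | volcano (empty disables the feature)")

// checkPodGroupCRDAtStartup fails fast if the selected scheduler's PodGroup CRD is not installed.
func checkPodGroupCRDAtStartup(dc discovery.DiscoveryInterface) error {
	if *podGroupScheduler == "" {
		return nil // feature disabled, nothing to verify
	}
	groupVersions := map[string]string{
		"godel":        "scheduling.godel.kubewharf.io/v1alpha1",
		"coscheduling": "scheduling.x-k8s.io/v1alpha1",
		"volcano":      "scheduling.volcano.sh/v1beta1",
	}
	gv, ok := groupVersions[*podGroupScheduler]
	if !ok {
		return fmt.Errorf("unknown pod-group-scheduler %q", *podGroupScheduler)
	}
	installed, err := podGroupCRDInstalled(dc, gv)
	if err != nil {
		return err
	}
	if !installed {
		return fmt.Errorf("pod-group-scheduler %q selected but its PodGroup CRD is not installed", *podGroupScheduler)
	}
	return nil
}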

Collaborator

> After rethinking, I feel adding a feature gate would be the better choice, and we can reduce the added configuration complexity in the release part, for example with some Helm tricks.

FYI:
InftyAI/llmaz#316 (comment)

Jeffwan (Collaborator) commented Sep 6, 2025

@Epsilon314 Hi yiqing, please help address the linter issue and let's get this merged asap

Epsilon314 force-pushed the yiqing/github/podgroup branch from 5f54b7d to e65cc2b (September 6, 2025 12:25)
Epsilon314 closed this Sep 6, 2025
Epsilon314 force-pushed the yiqing/github/podgroup branch from e65cc2b to d4f777a (September 6, 2025 12:36)
Epsilon314 reopened this Sep 6, 2025
Epsilon314 force-pushed the yiqing/github/podgroup branch from 14af40b to dcb9111 (September 6, 2025 14:34)