[INFRA-215] Plane-EE: Add Horizontal Pod Autoscaler (HPA) to Plane Enterprise Deployments #133
Conversation
…scaler (HPA) configurations for multiple services in values.yaml and deployment templates. The HPA settings include minimum and maximum replicas along with CPU and memory utilization targets.
Walkthrough
This pull request adds a Helm template helper to detect metrics-server availability, introduces conditional HorizontalPodAutoscaler manifests across multiple workloads, increases default container resource requests/limits for several services, and exposes per-service autoscaling configuration in values/questions and documentation.

Sequence Diagram

```mermaid
sequenceDiagram
    participant Helm as Helm renderer
    participant Helper as "enable.hpa" helper
    participant K8s as Kubernetes API
    participant Deploy as Deployment templates
    participant HPA as HPA templates
    Helm->>Helper: render helper
    Helper->>K8s: lookup ClusterRole "system:metrics-server"
    alt ClusterRole exists
        K8s-->>Helper: found
        Helper-->>Helm: return true
    else not found
        K8s-->>Helper: not found
        Helper-->>Helm: return false
    end
    Helm->>Deploy: render deployments (with updated resources)
    alt enable.hpa == true
        Helm->>HPA: render HPA manifests for workloads
        HPA-->>Helm: include HPAs in output
    else enable.hpa == false
        Helm-->>HPA: skip HPA rendering
    end
    Helm->>K8s: apply resulting manifests (Deployments ± HPAs)
```
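The "enable.hpa" helper at the centre of this flow is not quoted in full anywhere in the review. A minimal sketch of a lookup-based implementation, reconstructed from the description above (details may differ from the chart's actual code), would look like:

```yaml
{{/* _helpers.tpl — sketch: return "true" only when metrics-server appears installed */}}
{{- define "enable.hpa" -}}
{{- if lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "system:metrics-server" -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}
```

Note that Helm's `lookup` returns an empty result during `helm template` (no cluster access), so HPAs gated this way only render during a real install or upgrade.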
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ✅ 3 checks passed
Linked to Plane Work Item(s). (This comment was auto-generated by Plane.)
Actionable comments posted: 1
♻️ Duplicate comments (4)
charts/plane-enterprise/templates/workloads/silo.deployment.yaml (1)
92-128: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in `beat-worker.deployment.yaml`; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/web.deployment.yaml (1)
63-99: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in `beat-worker.deployment.yaml`; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/live.deployment.yaml (1)
70-105: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in `beat-worker.deployment.yaml`; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/api.deployment.yaml (1)
85-121: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in `beat-worker.deployment.yaml`; please see that comment for the suggested refactor.
🧹 Nitpick comments (5)
charts/plane-enterprise/values.yaml (2)
80-84: Consider workload-specific autoscaling parameters. The autoscaling configuration uses consistent defaults across all services, which is a good starting point. However, consider whether different workloads might benefit from service-specific tuning of these parameters based on their resource usage patterns and scaling characteristics.
Also applies to: 105-109, 120-124, 135-139, 150-154, 162-166, 173-177, 188-192

219-219: Fix trailing whitespace. Remove the trailing spaces to resolve the YAMLlint warning.

charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (1)
53-89: Consolidate the HPA block into a reusable partial to eliminate duplication. This entire HPA fragment is copy-pasted (with only the service name differing) across every workload template. Copy-paste will quickly go out of sync the next time you tweak fields such as `behavior`, labels, or extra metrics.

Helm already gives you `tpl`/`include` — move the generic YAML into, say, `_hpa.tpl` and call it with a dict containing the workload name, values path, etc.:

```diff
+{{- include "plane-enterprise.renderHPA" (dict
+  "ctx" .
+  "svc" "beatworker") }}
```

That removes ~150 duplicated lines and keeps future changes in one place.
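A minimal sketch of what such a partial could look like — the names `plane-enterprise.renderHPA` and `svc`, and the values layout, are taken from the suggestion above and are illustrative, not the chart's actual code:

```yaml
{{/* charts/plane-enterprise/templates/_hpa.tpl — hypothetical shared partial */}}
{{- define "plane-enterprise.renderHPA" -}}
{{- $ctx := .ctx -}}
{{- $svc := .svc -}}
{{- $cfg := index $ctx.Values.services $svc "autoscaling" -}}
{{- if eq (include "enable.hpa" $ctx | trim) "true" }}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ $ctx.Release.Name }}-{{ $svc }}-hpa
  namespace: {{ $ctx.Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ $ctx.Release.Name }}-{{ $svc }}-wl
  minReplicas: {{ $cfg.minReplicas | default 1 | int }}
  maxReplicas: {{ $cfg.maxReplicas | default 5 | int }}
  {{- if $cfg.targetCPUUtilizationPercentage }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ $cfg.targetCPUUtilizationPercentage | int }}
  {{- end }}
{{- end }}
{{- end -}}
```

Each workload template then collapses to a single `include` line, and any future change to labels, behavior, or metrics happens in one file.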
charts/plane-enterprise/templates/workloads/space.deployment.yaml (2)
78-80: Add safe defaults for replica bounds. If the chart consumer omits `services.space.autoscaling.{minReplicas,maxReplicas}`, Helm will render `<no value>` and fail the install/upgrade. Wrap the look-ups with `default` to keep the template resilient:

```diff
- minReplicas: {{ .Values.services.space.autoscaling.minReplicas }}
- maxReplicas: {{ .Values.services.space.autoscaling.maxReplicas }}
+ minReplicas: {{ .Values.services.space.autoscaling.minReplicas | default 1 }}
+ maxReplicas: {{ .Values.services.space.autoscaling.maxReplicas | default 5 }}
```

(Adjust the fallback numbers to match project conventions.)

71-73: Align labels with Kubernetes conventions. `app.name` is custom and duplicates information available in the standard `app.kubernetes.io/name`, `app.kubernetes.io/instance`, etc. Using the well-known labels improves compatibility with tooling and selectors:

```yaml
labels:
  app.kubernetes.io/name: space
  app.kubernetes.io/instance: {{ .Release.Name }}
  app.kubernetes.io/component: hpa
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- charts/plane-enterprise/Chart.yaml (1 hunks)
- charts/plane-enterprise/templates/_helpers.tpl (1 hunks)
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/api.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/space.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/web.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1 hunks)
- charts/plane-enterprise/values.yaml (7 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/plane-enterprise/values.yaml
[error] 219-219: trailing spaces
(trailing-spaces)
🔇 Additional comments (4)
charts/plane-enterprise/templates/_helpers.tpl (1)
9-16: LGTM! Well-implemented metrics-server detection. The `enable.hpa` helper function correctly uses the Kubernetes lookup API to detect the presence of the metrics-server ClusterRole, which is essential for HPA functionality. This approach ensures HPAs are only created when the required infrastructure is available.

charts/plane-enterprise/Chart.yaml (1)
8-8: Appropriate version bump for new functionality. The minor version increment from 1.2.7 to 1.3.0 correctly reflects the addition of HPA functionality as a new, backwards-compatible feature.
charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)
54-89: Well-implemented HPA resource. The HPA configuration is correctly structured with:
- Proper conditional rendering based on metrics-server availability
- Current autoscaling/v2 API version usage
- Correct target reference to the worker deployment
- Flexible metrics configuration supporting both CPU and memory utilization
- Values properly sourced from Helm configuration
charts/plane-enterprise/templates/workloads/admin.deployment.yaml (1)
64-99: Consistent and correct HPA implementation. The admin HPA resource follows the same well-structured pattern as other workloads, with proper conditional rendering, correct target references, and flexible metrics configuration. The implementation maintains consistency across all service HPAs.
```
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
```
🛠️ Refactor suggestion
Avoid emitting an empty YAML document when HPA is disabled.
The `---` delimiter sits outside the `{{- if ... }}` guard. When `enable.hpa` resolves to false, Helm will still render a standalone `---`, producing an empty document that some linters (kube-lint, kube-score, etc.) flag as a warning and that can break `kubectl apply -k` pipelines expecting non-empty manifests.

```diff
----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/space.deployment.yaml around
lines 63 to 65, the YAML document start delimiter `---` is outside the
conditional block that checks if HPA is enabled, causing an empty document to be
emitted when HPA is disabled. Move the `---` delimiter inside the `{{- if eq
(include "enable.hpa" . | trim) "true" }}` block to ensure it only appears when
HPA is enabled, preventing empty YAML documents and avoiding linter warnings or
pipeline errors.
…, maxReplicas, and averageUtilization for autoscaling metrics across multiple services.
…ues.yaml for multiple services. Update CPU and memory requests, limits, and autoscaling configurations to improve performance and resource management.
Actionable comments posted: 7
🔭 Outside diff range comments (1)
charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)
21-23: Wrong values key referenced for worker image & pullPolicy. The worker container pulls its image and pullPolicy from `.Values.services.api.*`, not `.Values.services.worker.*`. This looks like a copy-paste slip and prevents per-worker overrides.

```diff
- imagePullPolicy: {{ .Values.services.api.pullPolicy | default "Always" }}
- image: {{ .Values.services.api.image | default "artifacts.plane.so/makeplane/backend-commercial" }}:{{ .Values.planeVersion }}
+ imagePullPolicy: {{ .Values.services.worker.pullPolicy | default "Always" }}
+ image: {{ .Values.services.worker.image | default "artifacts.plane.so/makeplane/backend-commercial" }}:{{ .Values.planeVersion }}
```
🧹 Nitpick comments (1)
charts/plane-enterprise/questions.yml (1)
97-128: Default requests/limits & autoscaling targets are duplicated across every service. Every stanza repeats the same six keys (`cpuRequest`, `memoryRequest`, `cpuLimit`, `memoryLimit`, `autoscaling.*`). This is hard to maintain and makes accidental drift inevitable. Consider:
- Defining a YAML anchor / alias for the common block, or
- Moving the defaults to values.yaml and referencing them here only when you truly need to expose the knob in the Rancher UI.

This reduces ~300 duplicated lines and keeps all services consistent in the future.
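As a sketch of the first option, a shared anchor could look like this. The key names follow the stanzas quoted in the review; the exact layout and default values are assumptions, not the file's actual contents:

```yaml
_defaults: &resource_defaults
  cpuRequest: "50m"
  memoryRequest: "50Mi"
  cpuLimit: "500m"
  memoryLimit: "1000Mi"
  autoscaling:
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 90

services:
  api:
    <<: *resource_defaults
  web:
    <<: *resource_defaults
  worker:
    <<: *resource_defaults
    memoryRequest: "2000Mi"   # per-service overrides still work
```

One caveat: YAML merge keys (`<<:`) replace whole values rather than deep-merging, so overriding a single nested `autoscaling` field still requires restating the full `autoscaling` map.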
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- charts/plane-enterprise/questions.yml (8 hunks)
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/api.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/email.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/monitor.stateful.yaml (0 hunks)
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/space.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/web.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml (2 hunks)
- charts/plane-enterprise/values.yaml (2 hunks)
💤 Files with no reviewable changes (1)
- charts/plane-enterprise/templates/workloads/monitor.stateful.yaml
🚧 Files skipped from review as they are similar to previous changes (4)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml
- charts/plane-enterprise/values.yaml
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml
- charts/plane-enterprise/templates/workloads/space.deployment.yaml
```diff
         cpu: {{ .Values.services.api.cpuRequest | default "1000m" | quote }}
         memory: {{ .Values.services.api.memoryRequest | default "1000Mi" | quote }}
       limits:
-        memory: {{ .Values.services.api.memoryLimit | default "1000Mi" | quote }}
-        cpu: {{ .Values.services.api.cpuLimit | default "500m" | quote}}
+        cpu: {{ .Values.services.api.cpuLimit | default "2000m" | quote}}
+        memory: {{ .Values.services.api.memoryLimit | default "2000Mi" | quote }}
       command:
```
🛠️ Refactor suggestion
Default resource bump is very aggressive
Jumping from 50 m / 50 Mi to 1000 m / 1000 Mi will make the chart unschedulable on many development clusters (e.g. k3d, kind, 2-core nodes) and will silently fail before the HPA even has a chance to scale down. Consider retaining the previous low defaults and letting users opt-in to larger footprints via values.yaml.
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/api.deployment.yaml between lines
50 and 55, the default CPU and memory requests and limits are set too high
(1000m CPU and 1000Mi memory), which can cause scheduling failures on small
development clusters. Adjust the default values to the previous lower settings
(e.g., 50m CPU and 50Mi memory) to ensure compatibility with lightweight
environments, and allow users to increase these values explicitly via
values.yaml if needed.
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-api-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-api-wl
  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | quote }}
  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | quote }}
  {{- if or .Values.services.api.autoscaling.targetCPUUtilizationPercentage .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if .Values.services.api.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | quote }}
    {{- end }}
    {{- if .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetMemoryUtilizationPercentage | default 90 | quote }}
    {{- end }}
  {{- end }}
{{- end }}
```
minReplicas, maxReplicas, and averageUtilization rendered as strings will break kubectl apply.
The `| quote` filter forces the integers to YAML strings:

```yaml
minReplicas: "1"
```

Kubernetes rejects this with `cannot unmarshal string into Go value of type int32`. Same for `averageUtilization`.

```diff
- minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | quote }}
- maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | quote }}
+ minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | int }}
+ maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | int }}
- averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | quote }}
+ averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
```

Repeat the `| int` change for the memory metric block as well.
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/api.deployment.yaml between lines
85 and 121, the minReplicas, maxReplicas, and averageUtilization fields are
incorrectly rendered as strings due to the use of the | quote filter, which
causes kubectl apply to fail. Remove the | quote filter from these fields and
instead use the | int filter to ensure they are rendered as integers. Apply this
fix consistently to both the CPU and memory averageUtilization values.
…uests, limits, and autoscaling configurations for various services. Update default values for autoscaling parameters to improve deployment efficiency.
…ault values for minReplicas, maxReplicas, and averageUtilization settings across multiple services, enhancing clarity and consistency in autoscaling configurations.
Actionable comments posted: 2
♻️ Duplicate comments (2)
charts/plane-enterprise/templates/workloads/api.deployment.yaml (2)
50-55: Aggressive resource bump previously flagged, still present. Same concern as the earlier review: defaults of 1000m / 1000Mi requests will break light clusters; please reconsider.

100-121: Ensure HPA numeric fields are rendered as integers. `minReplicas`, `maxReplicas`, and both `averageUtilization` fields lack an explicit `| int` cast; any string values in values.yaml will surface as invalid YAML for the autoscaling API.

```diff
- minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 }}
- maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 }}
+ minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | int }}
+ maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | int }}
  ...
- averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+ averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
```

Apply the same to the memory metric section.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/api.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/email.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/space.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/web.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- charts/plane-enterprise/templates/workloads/email.deployment.yaml
- charts/plane-enterprise/templates/workloads/web.deployment.yaml
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml
- charts/plane-enterprise/templates/workloads/space.deployment.yaml
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml
```diff
         cpu: {{ .Values.services.live.cpuRequest | default "500m" | quote }}
         memory: {{ .Values.services.live.memoryRequest | default "1000Mi" | quote }}
       limits:
-        memory: {{ .Values.services.live.memoryLimit | default "1000Mi" | quote }}
-        cpu: {{ .Values.services.live.cpuLimit | default "500m" | quote}}
+        cpu: {{ .Values.services.live.cpuLimit | default "1000m" | quote}}
+        memory: {{ .Values.services.live.memoryLimit | default "2000Mi" | quote }}
       command:
```
🛠️ Refactor suggestion
Default requests/limits multiplied by 10×–20× risk unschedulable pods on dev clusters
Jumping from 50 m / 50 Mi to 500 m / 1000 Mi requests (and 1 CPU / 2 Gi limits) makes the live pod impossible to schedule on single-vCPU KIND/k3d nodes and many default CI runners. Consider keeping the previous conservative defaults and letting operators override via values.yaml, or at least document the implication very clearly.
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/live.deployment.yaml around lines
50 to 55, the default CPU and memory requests and limits are set too high,
causing pods to be unschedulable on small dev clusters. Reduce the default
values for cpuRequest and memoryRequest to more conservative levels similar to
the previous defaults (e.g., 50m CPU and 50Mi memory) and update cpuLimit and
memoryLimit accordingly. Also, ensure these defaults can be overridden via
values.yaml and add clear documentation about the resource implications for dev
environments.
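If the higher defaults are kept, a dev-cluster override in values.yaml is straightforward. The key names below match those quoted in the template; the specific numbers are illustrative:

```yaml
services:
  live:
    cpuRequest: "50m"
    memoryRequest: "50Mi"
    cpuLimit: "250m"
    memoryLimit: "256Mi"
```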
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-live-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-live-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-live-wl
  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
  {{- if or .Values.services.live.autoscaling.targetCPUUtilizationPercentage .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if .Values.services.live.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
    {{- end }}
    {{- if .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetMemoryUtilizationPercentage | default 90 }}
    {{- end }}
  {{- end }}
{{- end }}
```
Cast numeric knobs to int to avoid "cannot unmarshal string into int32".
If a user sets these values as strings in values.yaml (a common Helm pattern) the rendered YAML will emit quoted scalars, and kubectl apply will fail. Pipe through `| int` to guarantee the correct type:

```diff
- minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
- maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
+ minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 | int }}
+ maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 | int }}
- averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+ averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
```

Repeat the `| int` addition for the memory metric block as well.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-live-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-live-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-live-wl
  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 | int }}
  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 | int }}
  {{- if or .Values.services.live.autoscaling.targetCPUUtilizationPercentage .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if .Values.services.live.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
    {{- end }}
    {{- if .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetMemoryUtilizationPercentage | default 90 }}
    {{- end }}
  {{- end }}
{{- end }}
```
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/live.deployment.yaml between
lines 69 and 105, the minReplicas, maxReplicas, and averageUtilization values
for CPU and memory metrics may be rendered as strings if set as strings in
values.yaml, causing kubectl apply to fail. Fix this by piping these values
through the int function (| int) to ensure they are rendered as integers. Apply
this change to minReplicas, maxReplicas, targetCPUUtilizationPercentage, and
targetMemoryUtilizationPercentage fields.
* Introduced new Iframely service settings including replicas, resource limits, image, and deployment options. * Updated the values.yaml file to enhance service customization and resource management for the Iframely component.
* Set memory and CPU limits and requests for the monitor service to enhance resource management and performance optimization. * Updated values.yaml to reflect these changes, ensuring better control over resource allocation for the monitor component.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
charts/plane-enterprise/README.md (1)
358-365: Update README.md default for `env.silo_envs.mq_prefetch_count` to match values.yaml. Verified mismatch: README.md (line 358) documents the default as 10, but charts/plane-enterprise/values.yaml (line 404) sets it to 1. Update the README table row to:

| env.silo_envs.mq_prefetch_count | 1 | | Prefetch count for RabbitMQ |

to align documentation with the actual configuration.

charts/plane-enterprise/templates/workloads/web.deployment.yaml (1)
51-100: Remove duplicate `cpu` limit and move the HPA `---` inside the conditional.

Duplicate `cpu` limit (lines 54-56): the `limits` section defines `cpu` twice:

```yaml
limits:
  cpu: {{ .Values.services.web.cpuLimit | default "500m" | quote}}
  memory: {{ .Values.services.web.memoryLimit | default "1000Mi" | quote }}
  cpu: {{ .Values.services.web.cpuLimit | default "500m" | quote}}
```

Remove the redundant line:

```diff
 limits:
   cpu: {{ .Values.services.web.cpuLimit | default "500m" | quote}}
   memory: {{ .Values.services.web.memoryLimit | default "1000Mi" | quote }}
-  cpu: {{ .Values.services.web.cpuLimit | default "500m" | quote}}
```

Empty YAML document when HPA disabled: the `---` is rendered outside the HPA conditional, creating an orphaned document separator when `enable.hpa` is false. Move it inside:

```diff
----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---
 apiVersion: autoscaling/v2
```

charts/plane-enterprise/templates/workloads/email.deployment.yaml (1)
70-145: Email service resources approved; fix HPA separator to prevent empty YAML documents. The CPU/memory defaults for the email service are reasonable. However, the `---` separator before the HPA conditional does create an empty YAML document when HPA is disabled. This pattern is consistent across all workload deployments and should be corrected. Move `---` inside the conditional:

```diff
----
 {{- if eq (include "enable.hpa" . | trim) "true" }}
+---
 apiVersion: autoscaling/v2
```

charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)
`28-94`: **Fix HPA YAML empty document structure; resource defaults are context-dependent**

Two findings:

**Empty YAML document when HPA disabled: CONFIRMED**

The leading `---` at line 58 sits outside the `{{- if eq (include "enable.hpa" . | trim) "true" }}` condition. When HPA is disabled, Helm renders the `---` with no following object, creating a malformed empty YAML document. This mirrors the same issue in `space.deployment.yaml`. The suggested fix is correct:

```diff
       serviceAccountName: {{ .Release.Name }}-srv-account
----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---
 apiVersion: autoscaling/v2
 ...
   {{- end }}
-{{- end }}
+{{- end }}
```
**Worker resource defaults are reasonable, not outliers**

Comparing across deployments: worker requests 500m CPU/2000Mi memory (limits 1000m/4000Mi). While the memory request is on the higher end compared to web/space/admin (200Mi), it's lower than some other services and not problematic. For small dev clusters (k3d, kind), the concern is valid, but these defaults are defensible for production-oriented charts. Users can override via `values.yaml`.
♻️ Duplicate comments (2)
charts/plane-enterprise/templates/workloads/space.deployment.yaml (1)
`51-100`: **Clean up duplicate `cpu` limit and avoid emitting an empty YAML document**

Two issues in this block:

**Duplicate `cpu` limit key**

Under `resources.limits` you set `cpu` twice with the same expression:

```yaml
limits:
  cpu: {{ .Values.services.space.cpuLimit | default "500m" | quote}}
  memory: {{ .Values.services.space.memoryLimit | default "1000Mi" | quote }}
  cpu: {{ .Values.services.space.cpuLimit | default "500m" | quote}}
```

The second `cpu` silently overwrites the first and is confusing. Drop the duplicate:

```diff
- limits:
-   cpu: {{ .Values.services.space.cpuLimit | default "500m" | quote}}
-   memory: {{ .Values.services.space.memoryLimit | default "1000Mi" | quote }}
-   cpu: {{ .Values.services.space.cpuLimit | default "500m" | quote}}
+ limits:
+   cpu: {{ .Values.services.space.cpuLimit | default "500m" | quote}}
+   memory: {{ .Values.services.space.memoryLimit | default "1000Mi" | quote }}
```
**`---` outside the HPA guard yields an empty YAML document when HPA is disabled**

The `---` separator at line 64 is always rendered, even if `enable.hpa` returns `false`, which leaves you with an empty document between `---` lines. Some linters and tooling complain about this. Move the separator inside the `if`:

```diff
----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---
@@
 {{- end }}
-{{- end }}
+{{- end }}
```
charts/plane-enterprise/templates/workloads/api.deployment.yaml (1)

`60-180`: **API resources are very high by default and HPA `---` should be scoped under the guard**

Two concerns:

**Aggressive API resource defaults**

The API now requests `1000m` CPU and `1000Mi` memory, with limits at `2000m`/`2000Mi`. That's reasonable for heavier production loads but can make the chart fail scheduling on lightweight clusters (kind, k3d, single-node dev, etc.) before the HPA ever has a chance to scale down. Consider reverting to more modest defaults (e.g., closer to the previous 50m/50Mi range) and relying on per-environment overrides in `values.yaml` for larger deployments.

**Avoid empty YAML document from the HPA section**

As with other workloads, the `---` YAML document separator is outside the `{{- if eq (include "enable.hpa" . | trim) "true" }}` block, so when HPA is disabled you still emit an empty document. Suggested fix:

```diff
----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
-
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---
@@
 {{- end }}
-{{- end }}
----
+{{- end }}
```
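Applied consistently, the corrected tail of a workload template would emit the separator and the HPA together, or nothing at all. The following sketch shows the shape for the API service; the metadata names and exact metrics list are assumptions for illustration, not copied from the chart:

```yaml
{{- if eq (include "enable.hpa" . | trim) "true" }}
---
# The document separator and the HPA are now emitted together, only when
# the enable.hpa helper reports that metrics-server is available.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-api
  minReplicas: {{ .Values.services.api.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
```

With this shape, disabling HPA produces no trailing `---` and no empty document in the rendered output.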
🧹 Nitpick comments (1)
charts/plane-enterprise/templates/_helpers.tpl (1)
`30-36`: **Guarding HPA via ClusterRole lookup can break on restricted RBAC clusters**

`enable.hpa` depends on `lookup` against the cluster-scoped ClusterRole `system:metrics-server`. On clusters where the installing user cannot list cluster roles, this can fail template rendering entirely, even if the namespace and workloads are otherwise installable. It also assumes the metrics-server RBAC name is always `system:metrics-server`, which may not hold for all distros/operators.

Consider making this detection opt-in or value-driven, e.g.:

- Add a chart value like `global.hpa.enabled` (default `true`) to short-circuit the lookup, or
- Treat lookup failure as "unknown, but allow HPA" instead of a hard failure, and rely on documentation to say "you need metrics-server for HPA to work".
This keeps the default experience good while avoiding hard coupling to cluster‑admin‑level permissions and a specific ClusterRole name.
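A value-driven variant of the helper might look like the following sketch. The `global.hpa.enabled` value is an assumption taken from the suggestion above, not an existing chart key; the `lookup` call and ClusterRole name match the current helper:

```yaml
{{- define "enable.hpa" -}}
{{- $global := .Values.global | default dict -}}
{{- $hpa := $global.hpa | default dict -}}
{{- if hasKey $hpa "enabled" -}}
{{- /* Operator decided explicitly; skip the cluster lookup entirely */ -}}
{{- $hpa.enabled -}}
{{- else if lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "system:metrics-server" -}}
{{- /* Fall back to auto-detection via the metrics-server ClusterRole */ -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}
```

Because `lookup` returns an empty result during `helm template`/`--dry-run`, setting `global.hpa.enabled` explicitly also makes offline rendering deterministic.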
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- `charts/plane-enterprise/README.md` (9 hunks)
- `charts/plane-enterprise/questions.yml` (8 hunks)
- `charts/plane-enterprise/templates/_helpers.tpl` (1 hunk)
- `charts/plane-enterprise/templates/workloads/admin.deployment.yaml` (1 hunk)
- `charts/plane-enterprise/templates/workloads/api.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/email.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/live.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/silo.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/space.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/web.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/templates/workloads/worker.deployment.yaml` (2 hunks)
- `charts/plane-enterprise/values.yaml` (10 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml
- charts/plane-enterprise/templates/workloads/live.deployment.yaml
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-16T15:18:06.210Z
Learnt from: twtaylor
Repo: makeplane/helm-charts PR: 134
File: charts/plane-ce/templates/workloads/beat-worker.deployment.yaml:26-33
Timestamp: 2025-10-16T15:18:06.210Z
Learning: In the makeplane/helm-charts repository, prefer explicit indentation within conditional blocks in Helm templates over using nindent functions, as it improves readability and maintainability even if yamllint flags indentation warnings.
Applied to files:
charts/plane-enterprise/templates/workloads/space.deployment.yaml
🔇 Additional comments (5)
charts/plane-enterprise/values.yaml (1)
`117-315`: **Autoscaling/value defaults are consistent across services**

The added `autoscaling` blocks and resource defaults for `web`, `space`, `admin`, `live`, `api`, `worker`, `beatworker`, `silo`, and `email_service` are structurally consistent and line up with the HPA templates (min/max replicas and CPU/memory utilization at 90%). This should make per-service tuning straightforward for operators.

If you haven't already, you may want to dry-run (`helm template`) with a custom `values.yaml` that overrides a couple of these blocks (e.g., only CPU target set, memory target unset) to confirm the rendered HPAs look as expected.

charts/plane-enterprise/README.md (1)
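A minimal override file for such a dry-run might look like the following; the file name is illustrative, while the keys follow the chart's documented `services.*.autoscaling.*` convention:

```yaml
# hpa-dry-run.values.yaml (illustrative): override only the web service,
# setting a CPU target and explicitly unsetting the memory target to see
# how the rendered HPA handles a partial override.
services:
  web:
    autoscaling:
      minReplicas: 2
      maxReplicas: 8
      targetCPUUtilizationPercentage: 75
      targetMemoryUtilizationPercentage: null  # null removes the key during value coalescing
```

Rendering with `helm template <release> ./charts/plane-enterprise -f hpa-dry-run.values.yaml` then lets you inspect the emitted HorizontalPodAutoscaler objects before touching a cluster.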
`189-425`: **Autoscaling and resource docs match the new chart behavior**

The updated sections for Web/Space/Admin/Live/API/Silo/Worker/Beatworker/Email Service correctly mirror the new `cpuRequest`/`memoryRequest`, `cpuLimit`/`memoryLimit`, and `services.*.autoscaling.*` values. This will make it much easier for operators to reason about HPA tuning straight from the README.

charts/plane-enterprise/questions.yml (3)
`462-479`: **Verify monitor service intentionally excludes autoscaling configuration.**

Monitor service configuration lacks the autoscaling subquestions (minReplicas, maxReplicas, target utilization) that were added to all other scalable services. The AI summary states autoscaling is added to multiple services, but doesn't explicitly mention monitor. Confirm whether this is intentional (e.g., monitor is stateless and not expected to be auto-scaled) or an oversight.
`699-781`: **Clarify autoscaling support for optional services (outbox_poller, automation_consumer, iframely).**

Services `outbox_poller` (lines 699-781), `automation_consumer` (lines 783-829), and `iframely` (lines 831-873) have resource configuration (cpuRequest, memoryRequest, cpuLimit, memoryLimit) but do not include autoscaling subquestions. Verify whether:
- These services should support autoscaling but configuration was omitted
- These services are intentionally excluded from HPA support
If autoscaling should be added, the pattern should be consistent with other services (minReplicas=1, maxReplicas=5, target utilization=90%).
Also applies to: 831-873
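If autoscaling is added for these services, the subquestions would presumably follow the same shape as those for the nine main services. A sketch for `outbox_poller`, where the labels and group name are assumptions (the existing question entries are not shown in this review):

```yaml
- variable: services.outbox_poller.autoscaling.minReplicas
  label: "Minimum Replicas"
  type: int
  default: 1
  group: "Outbox Poller"
- variable: services.outbox_poller.autoscaling.maxReplicas
  label: "Maximum Replicas"
  type: int
  default: 5
  group: "Outbox Poller"
- variable: services.outbox_poller.autoscaling.targetCPUUtilizationPercentage
  label: "Target CPU Utilization (%)"
  type: int
  default: 90
  group: "Outbox Poller"
- variable: services.outbox_poller.autoscaling.targetMemoryUtilizationPercentage
  label: "Target Memory Utilization (%)"
  type: int
  default: 90
  group: "Outbox Poller"
```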
`113-144`: **Autoscaling configuration is properly coordinated with deployment templates and values.yaml for the nine main services.**

Verification confirms:

- All nine services (web, space, admin, live, silo, api, worker, beatworker, email_service) have autoscaling variables correctly defined in values.yaml and used in corresponding deployment templates (e.g., `.Values.services.web.autoscaling.minReplicas` in web.deployment.yaml)
- Consistent defaults applied: minReplicas=1, maxReplicas=5, targetCPUUtilizationPercentage=90, targetMemoryUtilizationPercentage=90
- Monitor is correctly excluded (uses StatefulSet, not Deployment)
However, note that `outbox_poller`, `automation_consumer`, and `iframely` have deployment templates but lack autoscaling configuration in both `questions.yml` and `values.yaml`. If this exclusion is intentional (since outbox_poller and automation_consumer are disabled by default), it's acceptable; otherwise, consider adding autoscaling for consistency.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
charts/plane-enterprise/values.yaml (1)
`138-151`: **Add autoscaling configuration to the `monitor` service to match peer services.**

The `monitor` service in `charts/plane-enterprise/values.yaml` (lines 138-151) lacks an `autoscaling` block, while all other services (web, space, admin, live, api, worker, beatworker, silo, and email) have consistent autoscaling configuration with `minReplicas: 1`, `maxReplicas: 5`, `targetCPUUtilizationPercentage: 90`, `targetMemoryUtilizationPercentage: 90`. Add the same autoscaling block to `monitor` unless there is a documented reason why it should not auto-scale.
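Based on the defaults quoted above, the missing block would presumably mirror its peers. A sketch, assuming `monitor` sits under the same `services.*` hierarchy as the other services:

```yaml
services:
  monitor:
    # ... existing monitor keys (image, resources, etc.) remain unchanged ...
    autoscaling:
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilizationPercentage: 90
      targetMemoryUtilizationPercentage: 90
```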
🧹 Nitpick comments (1)
charts/plane-enterprise/values.yaml (1)
`131-136`: **Autoscaling configuration uniformly targets 90% utilization across all services.**

All nine services with added autoscaling blocks use identical thresholds: `targetCPUUtilizationPercentage: 90` and `targetMemoryUtilizationPercentage: 90`. While this provides consistency, verify that 90% is appropriate for your workload profile. For typical web services, lower thresholds (70-80% CPU) often provide more responsive scaling and better headroom for traffic spikes. Document this choice or consider per-service tuning if workload patterns differ.

Also applies to: 162-166, 182-186, 202-206, 222-226, 239-243, 257-261, 278-282, 314-318
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `charts/plane-enterprise/values.yaml` (10 hunks)
🔇 Additional comments (3)
charts/plane-enterprise/values.yaml (3)
`101-116`: **Autoscaling configuration not added to `iframely` service.**

The `iframely` service has resource specifications defined but lacks the `autoscaling` block that other similar services now include. This is likely intentional since the service is disabled by default (`enabled: false`), but ensure this is the desired behavior: users enabling this service may need to manually add autoscaling configuration.
`325-338`: **Autoscaling not configured for `outbox_poller` and `automation_consumer`.**

These services are disabled by default but have resource specifications. Consider whether they should receive autoscaling blocks for consistency or document why they are excluded. If these services are enabled in production, lack of autoscaling config may be an oversight.
Also applies to: 340-353
`155-176`: **Resource configurations are technically correct and HPA-ready.**

All 8 services have valid configurations with cpuRequest < cpuLimit and memoryRequest < memoryLimit. HPA prerequisites are fully met: explicit requests defined, autoscaling configured with target utilization percentages, and settings are consistent across all services.

However, verifying that these values reflect actual workload demands and prevent pod evictions requires operational testing during deployment (monitoring real resource consumption, validating against actual traffic patterns, especially for higher-traffic services like `api` and `live`). This should be confirmed during your load testing and deployment validation phases.
Description
Summary by CodeRabbit
New Features
Chores