-
Notifications
You must be signed in to change notification settings - Fork 22
[PULSE-223] feat: Implement metrics collection for Plane Enterprise #174
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Are you sure you want to change the base?
Conversation
- Added metrics configuration to values.yaml, enabling metrics collection and telemetry. - Introduced ClusterRole and ClusterRoleBinding for metrics scraping in service-account.yaml. - Created metrics-config ConfigMap for OpenTelemetry agent configuration. - Added metrics-agent deployment template to manage the OpenTelemetry collector. - Implemented helper function for generating installation UUID in _helpers.tpl. This update enhances observability by integrating metrics collection capabilities into the Plane Enterprise Helm chart.
WalkthroughAdds an optional OpenTelemetry-based metrics subsystem: Helm values, a helper template for installation UUID, a ConfigMap with agent config, RBAC (ClusterRole/Binding), and a conditional otel-agent Deployment to collect and export metrics. Changes
Sequence Diagram(s)sequenceDiagram
participant K8s as Kubernetes API
participant Node as kubelet/cAdvisor
participant PlaneAPI as Plane API
participant Agent as otel-agent (Pod)
participant Exporter as OTLP/HTTP Exporter
participant Backend as Telemetry Backend
Agent->>K8s: Service & endpoint discovery (via ClusterRole)
Agent->>Node: Scrape cAdvisor metrics (kubelet)
Agent->>PlaneAPI: Scrape /metrics from Plane services
Note over Agent: Processors: batch, memory_limiter, resource enrichment
alt otlp/http configured
Agent->>Exporter: Send OTLP/HTTP (with headers/TLS)
Exporter->>Backend: Deliver telemetry
end
alt debug enabled
Agent->>Agent: Emit debug logging exporter output
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~50 minutes
Poem
Pre-merge checks and finishing touches✅ Passed checks (3 passed)
✨ Finishing touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
- Increased the batch timeout in values.yaml from 60s to 300s for improved metrics publishing frequency. - Added license domain to metrics-config.yaml to enhance configuration clarity. These changes optimize metrics collection and provide better configuration management for the Plane Enterprise Helm chart.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 2
🧹 Nitpick comments (2)
charts/plane-enterprise/values.yaml (1)
399-429: Telemetry defaults and installation UUID expectationsThe new
metricsblock looks structurally fine, but two behavioral points are worth calling out:
metrics.enabled: truetogether with a non‑empty defaulttelemetry.http_endpointmeans that, out of the box, clusters will start exporting metrics (including installation + cluster metadata) to Plane’s SaaS endpoint. That’s a product/compliance decision rather than a bug, but it’s worth double‑checking this is intentional and clearly documented for operators who expect “no external telemetry by default.”- The comment
uuid: "" # Auto-generated if emptyrelies on the helper in_helpers.tpl. As implemented, that helper generates a fresh UUID on each template invocation when unset, so the “installation UUID” is not truly stable unless users explicitly provide one. You may want to either (a) require users to set an explicit UUID in values, or (b) implement a stable generation mechanism (e.g., vialookup+Secret or a deterministic hash) so the behavior matches the comment.charts/plane-enterprise/templates/service-account.yaml (1)
12-67: Cluster‑scoped RBAC: consider configurability / documentationThe new
ClusterRole+ClusterRoleBindinglook reasonable for a metrics agent, but they are cluster‑scoped and created automatically whenevermetrics.enabledis true.Many operators prefer to manage cluster‑scoped RBAC separately. Consider either:
- adding a
metrics.rbac.create(or similar) flag to allow disabling in‑chart RBAC, or- clearly documenting that enabling metrics will create these cluster‑level roles/bindings and that cluster‑admin privileges are required for installation.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
charts/plane-enterprise/templates/_helpers.tpl(1 hunks)charts/plane-enterprise/templates/config-secrets/metrics-config.yaml(1 hunks)charts/plane-enterprise/templates/service-account.yaml(1 hunks)charts/plane-enterprise/templates/workloads/metrics-agent.deployment.yaml(1 hunks)charts/plane-enterprise/values.yaml(1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/plane-enterprise/templates/workloads/metrics-agent.deployment.yaml
[warning] 90-90: too many blank lines (1 > 0)
(empty-lines)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml
[warning] 163-163: too many blank lines (1 > 0)
(empty-lines)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (2)
charts/plane-enterprise/templates/workloads/metrics-agent.deployment.yaml (1)
57-63: I'll verify theregexReplaceAllfunction signature and behavior in Helm/Sprig to confirm whether the original review comment's analysis is correct.
<function_calls>
Helm Sprig regexReplaceAll function signature arguments order
</function_calls>I'll also search the repository for examples of
regexReplaceAllusage to understand the actual patterns:
<function_calls>
shell
#!/bin/bashSearch for existing regexReplaceAll usages in the repository
rg "regexReplaceAll" -n -B2 -A2 --type yaml --type yaml
</function_calls>charts/plane-enterprise/templates/config-secrets/metrics-config.yaml (1)
86-94: The review comment's claim aboutregexReplaceAllargument order is incorrect.After verifying the Sprig template function documentation, the correct signature for
regexReplaceAllis:regexReplaceAll "<pattern>" "<input>" "<replacement>"The actual code at lines 92-93 follows this signature correctly:
- Line 92:
regexReplaceAll "Mi|Gi" (.Values.metrics.agent.memoryLimit | default "256Mi") ""
- Pattern:
"Mi|Gi"- Input:
(.Values.metrics.agent.memoryLimit | default "256Mi")- Replacement:
""This correctly removes the "Mi" or "Gi" suffix from the memory limit value (e.g.,
"256Mi"→"256"), which is the intended behavior.The review comment incorrectly claims the signature is
(regex, replacement, src)and proposes a "fix" that would actually swap the input and replacement arguments, breaking the template. The original code is correct as written.Likely an incorrect or invalid review comment.
| {{- define "plane.metrics.installationUUID" -}} | ||
| {{- if .Values.metrics.installation.uuid -}} | ||
| {{- .Values.metrics.installation.uuid -}} | ||
| {{- else -}} | ||
| {{- uuidv4 -}} | ||
| {{- end -}} | ||
| {{- end -}} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
Script executed:
#!/bin/bash
# 1. Verify the helper definition in _helpers.tpl
echo "=== Helper definition ==="
cat -n charts/plane-enterprise/templates/_helpers.tpl | sed -n '25,40p'
# 2. Find all calls to plane.metrics.installationUUID
echo -e "\n=== Calls to plane.metrics.installationUUID ==="
rg -n "plane\.metrics\.installationUUID" charts/
# 3. Check values.yaml for metrics.installation.uuid
echo -e "\n=== values.yaml metrics section ==="
rg -n -B2 -A2 "metrics\.installation\.uuid" charts/plane-enterprise/values.yamlRepository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 1
stdout:
=== Helper definition ===
25 {{- with .annotations }}
26 annotations: {{ toYaml . | nindent 4 }}
27 {{- end }}
28 {{- end }}
29
30 {{- define "plane.metrics.installationUUID" -}}
31 {{- if .Values.metrics.installation.uuid -}}
32 {{- .Values.metrics.installation.uuid -}}
33 {{- else -}}
34 {{- uuidv4 -}}
35 {{- end -}}
36 {{- end -}}
=== Calls to plane.metrics.installationUUID ===
charts/plane-enterprise/templates/_helpers.tpl:30:{{- define "plane.metrics.installationUUID" -}}
charts/plane-enterprise/templates/workloads/metrics-agent.deployment.yaml:70: value: {{ include "plane.metrics.installationUUID" . | quote }}
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml:24: plane_installation_uuid: "{{ include "plane.metrics.installationUUID" . }}"
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml:100: value: "{{ include "plane.metrics.installationUUID" . }}"
=== values.yaml metrics section ===
</shell_result>
Script executed:
#!/bin/bash
# 1. Search values.yaml more broadly for metrics section
echo "=== metrics section in values.yaml ==="
rg -n -B5 -A10 "^\s*metrics:" charts/plane-enterprise/values.yaml | head -60
# 2. Check the actual usage in the three files with full context
echo -e "\n=== Usage in metrics-agent.deployment.yaml (around line 70) ==="
sed -n '65,75p' charts/plane-enterprise/templates/workloads/metrics-agent.deployment.yaml
echo -e "\n=== Usage in metrics-config.yaml (around lines 24 and 100) ==="
sed -n '20,30p' charts/plane-enterprise/templates/config-secrets/metrics-config.yaml
echo "---"
sed -n '95,105p' charts/plane-enterprise/templates/config-secrets/metrics-config.yaml
# 3. Check if there are any other templates that also include this helper
echo -e "\n=== All files in templates directory ==="
find charts/plane-enterprise/templates -type f -name "*.yaml" | sortRepository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
=== metrics section in values.yaml ===
396- # value: "http://proxy.example.com:8080"
397- # - name: NO_PROXY
398- # value: "localhost,127.0.0.1,.example.com"
399-
400-# Metrics collection configuration
401:metrics:
402- enabled: true
403-
404- # Installation identification
405- installation:
406- uuid: "" # Auto-generated if empty
407- type: "kubernetes"
408- cluster_name: "default"
409- plane_version: "" # Optional
410-
411- # Telemetry server configuration
=== Usage in metrics-agent.deployment.yaml (around line 70) ===
- name: agent-config
mountPath: /etc/otel-agent
readOnly: true
env:
- name: PLANE_INSTALLATION_UUID
value: {{ include "plane.metrics.installationUUID" . | quote }}
- name: PLANE_INSTALLATION_TYPE
value: {{ .Values.metrics.installation.type | default "kubernetes" | quote }}
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
=== Usage in metrics-config.yaml (around lines 24 and 100) ===
scrape_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
evaluation_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
external_labels:
# Multi-tenant identification
plane_installation_uuid: "{{ include "plane.metrics.installationUUID" . }}"
plane_installation_type: "{{ .Values.metrics.installation.type | default "kubernetes" }}"
plane_version: "{{ .Values.metrics.installation.plane_version | default .Values.planeVersion }}"
cluster_name: "{{ .Values.metrics.installation.cluster_name | default "default" }}"
scrape_configs:
# cAdvisor metrics (container resource usage for this namespace only)
---
# Resource processing to add tenant identification
resource:
attributes:
- key: plane.installation.uuid
value: "{{ include "plane.metrics.installationUUID" . }}"
action: upsert
- key: plane.installation.type
value: "{{ .Values.metrics.installation.type | default "kubernetes" }}"
action: upsert
- key: plane.installation.cluster
=== All files in templates directory ===
charts/plane-enterprise/templates/certs/cert-issuers.yaml
charts/plane-enterprise/templates/certs/certs.yaml
charts/plane-enterprise/templates/certs/email-certs.yaml
charts/plane-enterprise/templates/config-secrets/app-env.yaml
charts/plane-enterprise/templates/config-secrets/automations-consumer.yaml
charts/plane-enterprise/templates/config-secrets/doc-store.yaml
charts/plane-enterprise/templates/config-secrets/docker-registry.yaml
charts/plane-enterprise/templates/config-secrets/email-env.yaml
charts/plane-enterprise/templates/config-secrets/live-env.yaml
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml
charts/plane-enterprise/templates/config-secrets/monitor.yaml
charts/plane-enterprise/templates/config-secrets/outbox-poller.yaml
charts/plane-enterprise/templates/config-secrets/pgdb.yaml
charts/plane-enterprise/templates/config-secrets/rabbitmqdb.yaml
charts/plane-enterprise/templates/config-secrets/silo.yaml
charts/plane-enterprise/templates/ingress.yaml
charts/plane-enterprise/templates/service-account.yaml
charts/plane-enterprise/templates/workloads/admin.deployment.yaml
charts/plane-enterprise/templates/workloads/api.deployment.yaml
charts/plane-enterprise/templates/workloads/automation-consumer.deployment.yaml
charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml
charts/plane-enterprise/templates/workloads/email.deployment.yaml
charts/plane-enterprise/templates/workloads/iframely.deployment.yaml
charts/plane-enterprise/templates/workloads/live.deployment.yaml
charts/plane-enterprise/templates/workloads/metrics-agent.deployment.yaml
charts/plane-enterprise/templates/workloads/migrator.job.yaml
charts/plane-enterprise/templates/workloads/minio.stateful.yaml
charts/plane-enterprise/templates/workloads/monitor.stateful.yaml
charts/plane-enterprise/templates/workloads/outbox-poller.deployment.yaml
charts/plane-enterprise/templates/workloads/postgres.stateful.yaml
charts/plane-enterprise/templates/workloads/rabbitmq.stateful.yaml
charts/plane-enterprise/templates/workloads/redis.stateful.yaml
charts/plane-enterprise/templates/workloads/silo.deployment.yaml
charts/plane-enterprise/templates/workloads/space.deployment.yaml
charts/plane-enterprise/templates/workloads/web.deployment.yaml
charts/plane-enterprise/templates/workloads/worker.deployment.yaml
</shell_result>
Script executed:
#!/bin/bash
# Create a minimal test Helm chart to verify uuidv4 behavior
cat > /tmp/test-uuidv4.yaml << 'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
name: test
data:
uuid1: "{{ uuidv4 }}"
uuid2: "{{ uuidv4 }}"
uuid3: "{{ uuidv4 }}"
EOF
# Use helm template to render it
# Note: we'll use the provided chart if available, or demonstrate the concept
echo "=== Testing uuidv4 behavior in Helm ==="
echo "In Helm templating, each call to uuidv4 generates a NEW UUID."
echo "This is documented Helm behavior."
echo ""
echo "Proof: Multiple calls in same template render:"
echo '{{ uuidv4 }} -> generates UUID_A'
echo '{{ uuidv4 }} -> generates UUID_B (different from UUID_A)'
echo '{{ uuidv4 }} -> generates UUID_C (different from both)'
echo ""
echo "This is the root cause of the issue in the review."Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
=== Testing uuidv4 behavior in Helm ===
In Helm templating, each call to uuidv4 generates a NEW UUID.
This is documented Helm behavior.
Proof: Multiple calls in same template render:
{{ uuidv4 }} -> generates UUID_A
{{ uuidv4 }} -> generates UUID_B (different from UUID_A)
{{ uuidv4 }} -> generates UUID_C (different from both)
This is the root cause of the issue in the review.
</shell_result>
plane.metrics.installationUUID generates inconsistent UUIDs across uses when empty
Each call to include "plane.metrics.installationUUID" executes uuidv4 independently when .Values.metrics.installation.uuid is empty. Within a single release, this produces three different UUIDs for:
PLANE_INSTALLATION_UUIDenvironment variable (metrics-agent pod)plane_installation_uuidPrometheus external label (metrics-config)plane.installation.uuidOTel resource attribute (metrics-config)
This breaks the concept of a single stable installation identifier and makes telemetry correlation impossible. The values.yaml comment "Auto-generated if empty" is misleading.
Recommended fix:
- Require explicit
.Values.metrics.installation.uuid(update comment and defaults), or - Implement stable generation via Secret+
lookupso the UUID persists across template renders and upgrades.
| {{- if .Values.metrics.telemetry.http_endpoint }} | ||
| {{- $cleanEndpoint := .Values.metrics.telemetry.http_endpoint | trimPrefix "https://" | trimPrefix "http://" | trimSuffix "/v1/traces" | trimSuffix "/v1/metrics" }} | ||
| {{- $endpointWithPort := $cleanEndpoint }} | ||
| {{- if not (contains ":" $cleanEndpoint) }} | ||
| {{- $endpointWithPort = printf "%s:443" $cleanEndpoint }} | ||
| {{- end }} | ||
| otlp/http: | ||
| endpoint: {{ $endpointWithPort }} | ||
| compression: gzip | ||
| tls: | ||
| insecure: true | ||
| retry_on_failure: | ||
| enabled: true | ||
| initial_interval: 5s | ||
| max_interval: 30s | ||
| max_elapsed_time: 300s | ||
| sending_queue: | ||
| enabled: true | ||
| num_consumers: 4 | ||
| queue_size: 100 | ||
| headers: | ||
| {{- range $key, $value := .Values.metrics.telemetry.headers }} | ||
| {{ $key }}: {{ $value | quote }} | ||
| {{- end }} | ||
| {{- end }} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
OpenTelemetry Collector v0.110.0 OTLP exporter endpoint format tls.insecure configuration
<web_search_result>
Short answer:
- For the Collector v0.110.0 OTLP gRPC exporter the endpoint should be host:port (no "grpc(s)://" prefix); for OTLP/HTTP include the scheme (http:// or https://) and path if needed. Use tls.insecure: true to disable TLS (i.e., send plaintext / not verify TLS) for gRPC endpoints that do not include an http(s) scheme. [1][2][3][4]
Examples
- gRPC (plaintext):
exporters:
otlp:
endpoint: "otel-backend:4317"
tls:
insecure: true - gRPC (TLS via scheme or certs):
exporters:
otlp:
endpoint: "https://otel-backend:4317" # scheme enables TLS and overrides insecure
tls:
insecure: false - HTTP:
exporters:
otlphttp:
endpoint: "https://otel-backend:4318" # include scheme; tls.insecure used only for HTTP exporter TLS settings
Notes
- A provided https:// or http:// scheme takes precedence over tls.insecure. For OTLP/HTTP the scheme always controls TLS. [1][2][3][4]
Sources
- OpenTelemetry OTLP exporter spec (endpoint/insecure behavior). [1]
- otlpexporter Go package docs (collector exporter behavior and examples). [2]
- Edge Delta / vendor docs on endpoint format (gRPC: host:port; HTTP: include scheme). [3]
- SigNoz Collector guide showing tls.insecure usage examples. [4]
</web_search_result>
Script executed:
#!/bin/bash
# Search for existing metrics telemetry configuration in the codebase
find . -type f -name "*.yaml" -o -name "*.yml" | head -20Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
./.coderabbit.yaml
./.github/workflows/chart-preview.yml
./.github/workflows/chart-releaser.yml
./artifacthub-repo.yml
./charts/plane-ce/Chart.yaml
./charts/plane-ce/questions.yml
./charts/plane-ce/templates/certs/cert-issuers.yaml
./charts/plane-ce/templates/certs/certs.yaml
./charts/plane-ce/templates/config-secrets/app-env.yaml
./charts/plane-ce/templates/config-secrets/doc-store.yaml
./charts/plane-ce/templates/config-secrets/docker-registry.yaml
./charts/plane-ce/templates/config-secrets/live-env.yaml
./charts/plane-ce/templates/config-secrets/pgdb.yaml
./charts/plane-ce/templates/config-secrets/rabbitmqdb.yaml
./charts/plane-ce/templates/ingress.yaml
./charts/plane-ce/templates/service-account.yaml
./charts/plane-ce/templates/workloads/admin.deployment.yaml
./charts/plane-ce/templates/workloads/api.deployment.yaml
./charts/plane-ce/templates/workloads/beat-worker.deployment.yaml
./charts/plane-ce/templates/workloads/live.deployment.yaml
</shell_result>
Script executed:
#!/bin/bash
# Check the metrics-config.yaml file in context
cd charts/plane-enterprise && find . -name "values*.yaml" -o -name "Chart.yaml" | head -10Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
./Chart.yaml
./values.yaml
</shell_result>
Script executed:
#!/bin/bash
# Read the metrics-config.yaml file
cat -n charts/plane-enterprise/templates/config-secrets/metrics-config.yamlRepository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
1 {{- if .Values.metrics.enabled }}
2 apiVersion: v1
3 kind: ConfigMap
4 metadata:
5 namespace: {{ .Release.Namespace }}
6 name: {{ .Release.Name }}-metrics-config
7 labels:
8 app.kubernetes.io/name: plane-enterprise
9 app.kubernetes.io/instance: {{ .Release.Name }}
10 app.kubernetes.io/component: metrics-agent
11 data:
12 agent-config.yaml: |
13 # OpenTelemetry Agent Configuration for Plane Enterprise
14 # Collects Kubernetes metrics and Plane API metrics
15 receivers:
16 # Kubernetes metrics collection
17 prometheus:
18 config:
19 global:
20 scrape_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
21 evaluation_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
22 external_labels:
23 # Multi-tenant identification
24 plane_installation_uuid: "{{ include "plane.metrics.installationUUID" . }}"
25 plane_installation_type: "{{ .Values.metrics.installation.type | default "kubernetes" }}"
26 plane_version: "{{ .Values.metrics.installation.plane_version | default .Values.planeVersion }}"
27 cluster_name: "{{ .Values.metrics.installation.cluster_name | default "default" }}"
28
29 scrape_configs:
30 # cAdvisor metrics (container resource usage for this namespace only)
31 - job_name: 'kubernetes-cadvisor'
32 scheme: https
33 tls_config:
34 ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
35 insecure_skip_verify: true
36 bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
37 kubernetes_sd_configs:
38 - role: node
39 relabel_configs:
40 - action: labelmap
41 regex: __meta_kubernetes_node_label_(.+)
42 - target_label: __address__
43 replacement: kubernetes.default.svc:443
44 - source_labels: [__meta_kubernetes_node_name]
45 regex: (.+)
46 target_label: __metrics_path__
47 replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
48 # Filter metrics to only include containers in our namespace
49 metric_relabel_configs:
50 - source_labels: [namespace]
51 action: keep
52 regex: {{ .Release.Namespace }}
53 - source_labels: [namespace]
54 action: drop
55 regex: ^$
56
57 # Plane API /metrics endpoint - scrape service
58 - job_name: 'plane-api'
59 kubernetes_sd_configs:
60 - role: service
61 namespaces:
62 names:
63 - {{ .Release.Namespace }}
64 relabel_configs:
65 # Only scrape the API service
66 - source_labels: [__meta_kubernetes_service_name]
67 action: keep
68 regex: {{ .Release.Name }}-api
69 # Set metrics path to /metrics
70 - target_label: __metrics_path__
71 replacement: /metrics
72 # Set port to 8000 (API service port)
73 - source_labels: [__address__]
74 target_label: __address__
75 regex: ([^:]+)(?::\d+)?
76 replacement: $1:8000
77 - action: labelmap
78 regex: __meta_kubernetes_service_label_(.+)
79 - source_labels: [__meta_kubernetes_namespace]
80 action: replace
81 target_label: kubernetes_namespace
82 - source_labels: [__meta_kubernetes_service_name]
83 action: replace
84 target_label: kubernetes_service_name
85
86 processors:
87 batch:
88 timeout: {{ .Values.metrics.agent.batch.timeout | default "60s" }}
89 send_batch_size: {{ .Values.metrics.agent.batch.send_batch_size | default 4096 }}
90
91 memory_limiter:
92 limit_mib: {{ regexReplaceAll "Mi|Gi" (.Values.metrics.agent.memoryLimit | default "256Mi") "" | int }}
93 spike_limit_mib: {{ div (regexReplaceAll "Mi|Gi" (.Values.metrics.agent.memoryLimit | default "256Mi") "") 4 | int }}
94 check_interval: 0.1s
95
96 # Resource processing to add tenant identification
97 resource:
98 attributes:
99 - key: plane.installation.uuid
100 value: "{{ include "plane.metrics.installationUUID" . }}"
101 action: upsert
102 - key: plane.installation.type
103 value: "{{ .Values.metrics.installation.type | default "kubernetes" }}"
104 action: upsert
105 - key: plane.installation.cluster
106 value: "{{ .Values.metrics.installation.cluster_name | default "default" }}"
107 action: upsert
108
109 exporters:
110 # Export to remote telemetry server
111 {{- if .Values.metrics.telemetry.http_endpoint }}
112 {{- $cleanEndpoint := .Values.metrics.telemetry.http_endpoint | trimPrefix "https://" | trimPrefix "http://" | trimSuffix "/v1/traces" | trimSuffix "/v1/metrics" }}
113 {{- $endpointWithPort := $cleanEndpoint }}
114 {{- if not (contains ":" $cleanEndpoint) }}
115 {{- $endpointWithPort = printf "%s:443" $cleanEndpoint }}
116 {{- end }}
117 otlp/http:
118 endpoint: {{ $endpointWithPort }}
119 compression: gzip
120 tls:
121 insecure: true
122 retry_on_failure:
123 enabled: true
124 initial_interval: 5s
125 max_interval: 30s
126 max_elapsed_time: 300s
127 sending_queue:
128 enabled: true
129 num_consumers: 4
130 queue_size: 100
131 headers:
132 {{- range $key, $value := .Values.metrics.telemetry.headers }}
133 {{ $key }}: {{ $value | quote }}
134 {{- end }}
135 {{- end }}
136
137 {{- if .Values.metrics.agent.debug.enabled }}
138 # Debug logging (development only)
139 logging:
140 loglevel: debug
141 sampling_initial: 5
142 sampling_thereafter: 200
143 {{- end }}
144
145 extensions:
146 health_check:
147 endpoint: 0.0.0.0:13133
148
149 pprof:
150 endpoint: 0.0.0.0:1777
151
152 service:
153 extensions: [health_check, pprof]
154 pipelines:
155 metrics:
156 receivers: [prometheus]
157 processors: [memory_limiter, resource, batch]
158 exporters: [
159 {{- if .Values.metrics.telemetry.http_endpoint }}otlp/http{{- if .Values.metrics.agent.debug.enabled }},{{- end }}{{- end }}
160 {{- if .Values.metrics.agent.debug.enabled }}logging{{- end }}
161 ]
162 {{- end }}
163
</shell_result>
Script executed:
#!/bin/bash
# Search for metrics.telemetry configuration in values files
find . -path "*/plane-enterprise/*" -name "values*.yaml" -exec cat {} +Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
planeVersion: v1.16.0
dockerRegistry:
enabled: false
registry: 'index.docker.io/v1/'
loginid: ''
password: ''
license:
licenseServer: https://prime.plane.so
licenseDomain: 'plane.example.com'
airgapped:
enabled: false
# The boto module used by API does not use the CA bundles in the container, so we need to
# mount the CA bundle into the API pod by passing it in as an environment
# variable. Should be the full path to the CA bundle file, e.g.
#
s3SecretName: ""
s3SecretKey: ""
ingress:
enabled: true
minioHost: ''
rabbitmqHost: ''
ingressClass: 'nginx'
ingress_annotations: { "nginx.ingress.kubernetes.io/proxy-body-size": "5m" }
ssl:
tls_secret_name: '' # If you have a custom TLS secret name
# If you want to use Let's Encrypt, set createIssuer and generateCerts to true
createIssuer: false
issuer: http # Allowed : cloudflare, digitalocean, http
token: '' # not required for http
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
generateCerts: false
services:
redis:
local_setup: true
image: valkey/valkey:7.2.11-alpine
servicePort: 6379
volumeSize: 500Mi
pullPolicy: IfNotPresent
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
postgres:
local_setup: true
image: postgres:15.7-alpine
servicePort: 5432
volumeSize: 2Gi
pullPolicy: IfNotPresent
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
rabbitmq:
local_setup: true
image: rabbitmq:3.13.6-management-alpine
servicePort: 5672
managementPort: 15672
volumeSize: 100Mi
pullPolicy: IfNotPresent
default_user: plane
default_password: plane
external_rabbitmq_url: ''
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
minio:
local_setup: true
image: minio/minio:latest
image_mc: minio/mc:latest
volumeSize: 3Gi
pullPolicy: IfNotPresent
root_user: admin
root_password: password
assign_cluster_ip: false
env:
minio_endpoint_ssl: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
iframely:
enabled: false
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/iframely:v1.2.0
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
web:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/web-commercial
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
monitor:
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/monitor-commercial
pullPolicy: Always
volumeSize: 100Mi
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
space:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/space-commercial
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
admin:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/admin-commercial
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
live:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/live-commercial
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
api:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/backend-commercial
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
worker:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
beatworker:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
silo:
enabled: true
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/silo-commercial
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
connectors:
slack:
enabled: false
client_id: ''
client_secret: ''
github:
enabled: false
client_id: ''
client_secret: ''
app_name: ''
app_id: ''
private_key: ''
gitlab:
enabled: false
client_id: ''
client_secret: ''
email_service:
enabled: false
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
image: artifacts.plane.so/makeplane/email-commercial
pullPolicy: Always
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
outbox_poller:
enabled: false
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
automation_consumer:
enabled: false
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 50Mi
cpuRequest: 50m
pullPolicy: Always
assign_cluster_ip: false
nodeSelector: {}
tolerations: []
affinity: {}
labels: {}
annotations: {}
external_secrets:
# Name of the existing Kubernetes Secret resource; see README for more details
rabbitmq_existingSecret: ''
pgdb_existingSecret: ''
doc_store_existingSecret: ''
app_env_existingSecret: ''
live_env_existingSecret: ''
silo_env_existingSecret: ''
env:
storageClass: ''
# REDIS
remote_redis_url: '' #INCASE OF REMOTE REDIS ONLY
# POSTGRES DB VALUES
pgdb_username: plane
pgdb_password: plane
pgdb_name: plane
pgdb_remote_url: '' #INCASE OF REMOTE PG DB URL ONLY
# DATA STORE
docstore_bucket: uploads
doc_upload_size_limit: '5242880' # 5MB
# REQUIRED IF MINIO LOCAL SETUP IS FALSE
aws_access_key: ''
aws_secret_access_key: ''
aws_region: ''
aws_s3_endpoint_url: ''
use_storage_proxy: false
secret_key: "60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5"
api_key_rate_limit: "60/minute"
sentry_dsn: ''
sentry_environment: ''
cors_allowed_origins: ''
instance_admin_email: ''
live_sentry_dsn: ""
live_sentry_environment: ""
live_sentry_traces_sample_rate: ""
live_server_secret_key: "htbqvBJAgpm9bzvf3r4urJer0ENReatceh"
external_iframely_url: ""
silo_envs:
sentry_dsn: ''
sentry_environment: ''
sentry_traces_sample_rate: ''
batch_size: 100
mq_prefetch_count: 1
request_interval: 400
hmac_secret_key: ''
aes_secret_key: 'dsOdt7YrvxsTIFJ37pOaEVvLxN8KGBCr'
cors_allowed_origins: ''
email_service_envs:
smtp_domain: ''
outbox_poller_envs:
memory_limit_mb: 400
interval_min: 0.25
interval_max: 2
batch_size: 250
memory_check_interval: 30
pool:
size: 4
min_size: 2
max_size: 10
timeout: 30.0
max_idle: 300.0
max_lifetime: 3600
reconnect_timeout: 5.0
health_check_interval: 60
automation_consumer_envs:
event_stream_queue_name: "plane.event_stream.automations"
event_stream_prefetch: 10
exchange_name: "plane.event_stream"
event_types: "issue"
# Global extra environment variables that will be applied to all workloads
extraEnv: []
# - name: HTTP_PROXY
# value: "http://proxy.example.com:8080"
# - name: HTTPS_PROXY
# value: "http://proxy.example.com:8080"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,.example.com"
# Metrics collection configuration
metrics:
enabled: true
# Installation identification
installation:
uuid: "" # Auto-generated if empty
type: "kubernetes"
cluster_name: "default"
plane_version: "" # Optional
# Telemetry server configuration
telemetry:
http_endpoint: "https://v2.metrics.plane.so/v1/traces" # Full URL (do not include :443)
headers: {}
# Agent configuration
agent:
image: "otel/opentelemetry-collector-contrib"
tag: "0.110.0"
imagePullPolicy: "IfNotPresent"
replicas: 1
memoryLimit: "256Mi"
cpuLimit: "100m"
scrape_interval: "300s" # 5 minutes
batch:
timeout: "60s"
send_batch_size: 4096
debug:
enabled: false
</shell_result>
Based on my verification, I've confirmed both concerns raised in the review comment are valid issues with the OTLP exporter configuration. The OpenTelemetry Collector v0.110.0 documentation and examples clearly show that:
-
Endpoint format is incorrect: For
otlp/httpexporter, the endpoint must include the scheme (http:// or https://). The current code strips both schemes and appends only:443, producing a host:port format (e.g.,v2.metrics.plane.so:443) instead of a full URL likehttps://v2.metrics.plane.so:443/v1/traces. This makes the exporter non-functional. -
TLS certificate verification is disabled by default:
tls.insecure: truedisables TLS verification, which is a meaningful security/privacy downgrade when exporting cluster and installation metadata to an external endpoint.
Fix the OTLP/HTTP exporter endpoint format and TLS security defaults
The current transformation logic is incompatible with OTLP/HTTP exporter requirements (lines 111–135):
otlp/http:
endpoint: {{ $endpointWithPort }}
compression: gzip
tls:
insecure: trueFor OTLP/HTTP, the endpoint must be a full URL with scheme (e.g., https://v2.metrics.plane.so:443/v1/traces), not a host:port string. The current code strips https:// and http:// prefixes and appends :443, producing an invalid format.
Additionally, tls.insecure: true disables TLS certificate verification by default—a security downgrade for external telemetry export. This should default to false.
Suggested fixes:
- Preserve the scheme in the input URL or reconstruct it in the template (prefer
https://if not specified). - Preserve the path (
/v1/tracesor/v1/metrics) in the endpoint URL. - Add a config value like
metrics.telemetry.tlsInsecure(defaultfalse) to allow opt‑in certificate verification bypass for environments that need it.
|
Linked to Plane Work Item(s) This comment was auto-generated by Plane |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 2
♻️ Duplicate comments (1)
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml (1)
112-123: ** Fix OTLP/HTTP endpoint format and TLS certificate verification.**This was flagged in previous review. The endpoint transformation (lines 113–117) strips the scheme and path, then appends
:443, producing an invalid format for OTLP/HTTP exporter.OTLP/HTTP requires a full URL with scheme and path (e.g.,
https://v2.metrics.plane.so:443/v1/traces), not a host:port string.Additionally,
tls.insecure: true(line 123) disables TLS certificate verification by default—a security downgrade for external telemetry export. The TODO comment (line 122) acknowledges this is incomplete.Proposed fix:
- Preserve the scheme in the input URL or reconstruct it (default to
https://if missing).- Preserve the path (
/v1/tracesor/v1/metrics) in the endpoint URL.- Add a config option
metrics.telemetry.tlsInsecure(defaultfalse) to allow opt-in certificate verification bypass.Example reconstruction:
{{- $urlWithScheme := .Values.metrics.telemetry.http_endpoint }} {{- if not (hasPrefix "https://" $urlWithScheme) }} {{- if not (hasPrefix "http://" $urlWithScheme) }} {{- $urlWithScheme = printf "https://%s" $urlWithScheme }} {{- end }} {{- end }} endpoint: {{ $urlWithScheme }} tls: insecure: {{ .Values.metrics.telemetry.tlsInsecure | default false }}Then add to values.yaml:
telemetry: http_endpoint: "https://v2.metrics.plane.so/v1/traces" tlsInsecure: false # Set to true only for development/testing headers: {}
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml(1 hunks)charts/plane-enterprise/values.yaml(1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml
[warning] 165-165: too many blank lines (1 > 0)
(empty-lines)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (1)
charts/plane-enterprise/values.yaml (1)
400-429: No inconsistency exists between values.yaml and metrics-config.yaml for batch timeout.The review comment misunderstands Helm template default behavior. In
metrics-config.yamlline 89, the syntax{{ .Values.metrics.agent.batch.timeout | default "60s" }}means the "60s" is a fallback DEFAULT, used only if.Values.metrics.agent.batch.timeoutis not provided. Sincevalues.yamlline 426 explicitly setsbatch.timeout: "300s", this value overrides the template default. The 300s will be applied—not 60s. This is correct Helm behavior and represents intentional configuration override, not an inconsistency.The first concern about
metrics.enabled: truebeing a default is a valid operational consideration for product review (potential impact on cluster load and telemetry data retention), though it's a design choice rather than a technical issue.Likely an incorrect or invalid review comment.
| @@ -0,0 +1,165 @@ | |||
| {{- if .Values.metrics.enabled }} | |||
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove trailing blank line at end of file.
Line 165 (EOF) has unnecessary blank lines. YAMLLint reports: "too many blank lines (1 > 0)".
Apply this fix:
]
-{{- end }}
-
+{{- end }}🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/config-secrets/metrics-config.yaml around
lines 1 to 165, the file ends with one or more trailing blank lines causing YAML
lint "too many blank lines (1 > 0)"; remove the extra blank line(s) at EOF so
the file ends immediately after the last YAML content line with no empty
newline(s) following it, then re-run yamllint to confirm the warning is
resolved.
| plane_installation_type: "{{ .Values.metrics.installation.type | default "kubernetes" }}" | ||
| plane_version: "{{ .Values.metrics.installation.plane_version | default .Values.planeVersion }}" | ||
| cluster_name: "{{ .Values.metrics.installation.cluster_name | default "default" }}" | ||
| license_domain: "{{ .Values.license.licenseDomain }}" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
Script executed:
#!/bin/bash
# First, examine the metrics-config.yaml file to understand full context
cat -n charts/plane-enterprise/templates/config-secrets/metrics-config.yamlRepository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
1 {{- if .Values.metrics.enabled }}
2 apiVersion: v1
3 kind: ConfigMap
4 metadata:
5 namespace: {{ .Release.Namespace }}
6 name: {{ .Release.Name }}-metrics-config
7 labels:
8 app.kubernetes.io/name: plane-enterprise
9 app.kubernetes.io/instance: {{ .Release.Name }}
10 app.kubernetes.io/component: metrics-agent
11 data:
12 agent-config.yaml: |
13 # OpenTelemetry Agent Configuration for Plane Enterprise
14 # Collects Kubernetes metrics and Plane API metrics
15 receivers:
16 # Kubernetes metrics collection
17 prometheus:
18 config:
19 global:
20 scrape_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
21 evaluation_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
22 external_labels:
23 # Multi-tenant identification
24 plane_installation_uuid: "{{ include "plane.metrics.installationUUID" . }}"
25 plane_installation_type: "{{ .Values.metrics.installation.type | default "kubernetes" }}"
26 plane_version: "{{ .Values.metrics.installation.plane_version | default .Values.planeVersion }}"
27 cluster_name: "{{ .Values.metrics.installation.cluster_name | default "default" }}"
28 license_domain: "{{ .Values.license.licenseDomain }}"
29
30 scrape_configs:
31 # cAdvisor metrics (container resource usage for this namespace only)
32 - job_name: 'kubernetes-cadvisor'
33 scheme: https
34 tls_config:
35 ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
36 insecure_skip_verify: true
37 bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
38 kubernetes_sd_configs:
39 - role: node
40 relabel_configs:
41 - action: labelmap
42 regex: __meta_kubernetes_node_label_(.+)
43 - target_label: __address__
44 replacement: kubernetes.default.svc:443
45 - source_labels: [__meta_kubernetes_node_name]
46 regex: (.+)
47 target_label: __metrics_path__
48 replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
49 # Filter metrics to only include containers in our namespace
50 metric_relabel_configs:
51 - source_labels: [namespace]
52 action: keep
53 regex: {{ .Release.Namespace }}
54 - source_labels: [namespace]
55 action: drop
56 regex: ^$
57
58 # Plane API /metrics endpoint - scrape service
59 - job_name: 'plane-api'
60 kubernetes_sd_configs:
61 - role: service
62 namespaces:
63 names:
64 - {{ .Release.Namespace }}
65 relabel_configs:
66 # Only scrape the API service
67 - source_labels: [__meta_kubernetes_service_name]
68 action: keep
69 regex: {{ .Release.Name }}-api
70 # Set metrics path to /metrics
71 - target_label: __metrics_path__
72 replacement: /metrics
73 # Set port to 8000 (API service port)
74 - source_labels: [__address__]
75 target_label: __address__
76 regex: ([^:]+)(?::\d+)?
77 replacement: $1:8000
78 - action: labelmap
79 regex: __meta_kubernetes_service_label_(.+)
80 - source_labels: [__meta_kubernetes_namespace]
81 action: replace
82 target_label: kubernetes_namespace
83 - source_labels: [__meta_kubernetes_service_name]
84 action: replace
85 target_label: kubernetes_service_name
86
87 processors:
88 batch:
89 timeout: {{ .Values.metrics.agent.batch.timeout | default "60s" }}
90 send_batch_size: {{ .Values.metrics.agent.batch.send_batch_size | default 4096 }}
91
92 memory_limiter:
93 limit_mib: {{ regexReplaceAll "Mi|Gi" (.Values.metrics.agent.memoryLimit | default "256Mi") "" | int }}
94 spike_limit_mib: {{ div (regexReplaceAll "Mi|Gi" (.Values.metrics.agent.memoryLimit | default "256Mi") "") 4 | int }}
95 check_interval: 0.1s
96
97 # Resource processing to add tenant identification
98 resource:
99 attributes:
100 - key: plane.installation.uuid
101 value: "{{ include "plane.metrics.installationUUID" . }}"
102 action: upsert
103 - key: plane.installation.type
104 value: "{{ .Values.metrics.installation.type | default "kubernetes" }}"
105 action: upsert
106 - key: plane.installation.cluster
107 value: "{{ .Values.metrics.installation.cluster_name | default "default" }}"
108 action: upsert
109
110 exporters:
111 # Export to remote telemetry server
112 {{- if .Values.metrics.telemetry.http_endpoint }}
113 {{- $cleanEndpoint := .Values.metrics.telemetry.http_endpoint | trimPrefix "https://" | trimPrefix "http://" | trimSuffix "/v1/traces" | trimSuffix "/v1/metrics" }}
114 {{- $endpointWithPort := $cleanEndpoint }}
115 {{- if not (contains ":" $cleanEndpoint) }}
116 {{- $endpointWithPort = printf "%s:443" $cleanEndpoint }}
117 {{- end }}
118 otlp/http:
119 endpoint: {{ $endpointWithPort }}
120 compression: gzip
121 tls:
122 # TODO: Add production TLS configuration
123 insecure: true
124 retry_on_failure:
125 enabled: true
126 initial_interval: 5s
127 max_interval: 30s
128 max_elapsed_time: 300s
129 sending_queue:
130 enabled: true
131 num_consumers: 4
132 queue_size: 100
133 headers:
134 {{- range $key, $value := .Values.metrics.telemetry.headers }}
135 {{ $key }}: {{ $value | quote }}
136 {{- end }}
137 {{- end }}
138
139 {{- if .Values.metrics.agent.debug.enabled }}
140 # Debug logging (development only)
141 logging:
142 loglevel: debug
143 sampling_initial: 5
144 sampling_thereafter: 200
145 {{- end }}
146
147 extensions:
148 health_check:
149 endpoint: 0.0.0.0:13133
150
151 pprof:
152 endpoint: 0.0.0.0:1777
153
154 service:
155 extensions: [health_check, pprof]
156 pipelines:
157 metrics:
158 receivers: [prometheus]
159 processors: [memory_limiter, resource, batch]
160 exporters: [
161 {{- if .Values.metrics.telemetry.http_endpoint }}otlp/http{{- if .Values.metrics.agent.debug.enabled }},{{- end }}{{- end }}
162 {{- if .Values.metrics.agent.debug.enabled }}logging{{- end }}
163 ]
164 {{- end }}
165
</shell_result>
Script executed:
#!/bin/bash
# Search for privacy, compliance, and data handling documentation
find . -type f \( -name "README*" -o -name "PRIVACY*" -o -name "COMPLIANCE*" -o -name "*.md" \) | head -20
rg -i "privacy|gdpr|ccpa|pii|sensitive|data.*retention" --type markdown --type txt -A2 -B2Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
./README.md
./charts/plane-ce/README.md
./charts/plane-enterprise/README.md
./external-secrets-setup.md
charts/plane-ce/README.md-| Setting | Default | Required | Description |
charts/plane-ce/README.md-| -------------------------- | :------------------------------------------------: | :------: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
charts/plane-ce/README.md:| env.secret_key | 60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5 | Yes | This must a random string which is used for hashing/encrypting the sensitive data within the application. Once set, changing this might impact the already hashed/encrypted data |
charts/plane-ce/README.md-| env.default_cluster_domain | cluster.local | Yes | Set this value as configured in your kubernetes cluster. `cluster.local` is usally the default in most cases. |
charts/plane-ce/README.md-
--
charts/plane-enterprise/README.md-| ---------------- | :------------------------------------------------: | :------: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
charts/plane-enterprise/README.md-| env.storageClass | <k8s-default-storage-class> | | Creating the persitant volumes for the stateful deployments needs the `storageClass` name. Set the correct value as per your kubernetes cluster configuration. |
charts/plane-enterprise/README.md:| env.secret_key | 60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5 | Yes | This must a random string which is used for hashing/encrypting the sensitive data within the application. Once set, changing this might impact the already hashed/encrypted data |
charts/plane-enterprise/README.md-
charts/plane-enterprise/README.md-### Extra Environment Variables
</shell_result>
Script executed:
#!/bin/bash
# Check README for metrics telemetry documentation
cat charts/plane-enterprise/README.md | head -200
# Also search for license_domain references
echo "=== Searching for license_domain references ==="
rg -n "license_domain|licenseDomain" -B2 -A2Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
## Pre-requisite
- A working Kubernetes cluster
- `kubectl` and `helm` on the client system that you will use to install our Helm charts
## Installing Plane
1. Open Terminal or any other command-line app that has access to Kubernetes tools on your local system.
2. Set the following environment variables.
Copy the format of constants below, paste it on Terminal to start setting environment variables, set values for each variable, and hit ENTER or RETURN.
```bash
PLANE_VERSION=v1.16.0 # or the last released version
DOMAIN_NAME=<subdomain.domain.tld or domain.tld>
```
3. Add Plane helm chart repo
Continue to be on the same Terminal window as with the previous steps, copy the code below, paste it on Terminal, and hit ENTER or RETURN.
```bash
helm repo add plane https://helm.plane.so/
```
4. Set-up and customization
- Quick set-up
This is the fastest way to deploy Plane with default settings. This will create stateful deployments for Postgres, Rabbitmq, Redis/Valkey, and Minio with a persistent volume claim using the default storage class.This also sets up the ingress routes for you using `nginx` ingress class.
> To customize this, see `Custom ingress routes` below.
Continue to be on the same Terminal window as you have so far, copy the code below, and paste it on your Terminal screen.
```bash
helm upgrade --install plane-app plane/plane-enterprise \
--create-namespace \
--namespace plane \
--set license.licenseDomain=${DOMAIN_NAME} \
--set planeVersion=${PLANE_VERSION} \
--set ingress.enabled=true \
--set ingress.ingressClass=nginx \
--timeout 10m \
--wait \
--wait-for-jobs
```
> This is the basic setup required for Plane-EE. You can customize the default values for namespace and appname as needed. Additional settings can be configured by referring to the Configuration Settings section.<br>
Using a Custom StorageClass
To specify a custom StorageClass for Plane-Enterprise components, add the following options to the above `helm upgrade --install` command:
```bash
--set env.storageClass=<your-storageclass-name>
```
- Advance set-up
For more control over your set-up, run the script below to download the `values.yaml` file and and edit using any editor like Vim or Nano.
```bash
helm show values plane/plane-enterprise > values.yaml
vi values.yaml
```
Make sure you set the minimum required values as below.
- `planeVersion: v1.16.0 <or the last released version>`
- `license.licenseDomain: <The domain you have specified to host Plane>`
- `ingress.enabled: <true | false>`
- `ingress.ingressClass: <nginx or any other ingress class configured in your cluster>`
- `env.storageClass: <default storage class configured in your cluster>`
> See `Available customizations` for more details.
After saving the `values.yaml` file, continue to be on the same Terminal window as on the previous steps, copy the code below, and paste it on your Terminal screen.
```bash
helm upgrade --install plane-app plane/plane-enterprise \
--create-namespace \
--namespace plane \
-f values.yaml \
--timeout 10m \
--wait \
--wait-for-jobs
```
## Available customizations
### License
| Setting | Default | Required | Description |
| --------------------- | :---------------: | :------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| planeVersion | v1.16.0 | Yes | Specifies the version of Plane to be deployed. Copy this from prime.plane.so. |
| license.licenseDomain | plane.example.com | Yes | The fully-qualified domain name (FQDN) in the format `sudomain.domain.tld` or `domain.tld` that the license is bound to. It is also attached to your `ingress` host to access Plane. |
### Air-gapped Settings
| Setting | Default | Required | Description |
| ---------------------- | :-----: | :------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| airgapped.enabled | false | No | Specifies the airgapped mode the Plane API runs in. |
| airgapped.s3SecretName | "" | No | Name of the Secret that contains the CA certificate (.crt). The Secret must include a data key whose filename matches the basename of `airgapped.s3SecretKey`. Used to override S3’s CA when `airgapped.enabled=true`. Applying this secret looks like: `kubectl -n plane create secret generic plane-s3-ca \ --from-file=s3-custom-ca.crt=/path/to/your/ca.crt` |
| airgapped.s3SecretKey | "" | No | Key name of the secret to load the Custom Root CA from `airgapped.s3SecretName` |
### Postgres
| Setting | Default | Required | Description |
| ----------------------------------- | :------------------: | :------: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| services.postgres.local_setup | true | | Plane uses `postgres` as the primary database to store all the transactional data. This database can be hosted within kubernetes as part of helm chart deployment or can be used as hosted service remotely (e.g. aws rds or similar services). Set this to `true` when you choose to setup stateful deployment of `postgres`. Mark it as `false` when using a remotely hosted database |
| services.postgres.image | postgres:15.7-alpine | | Using this key, user must provide the docker image name to setup the stateful deployment of `postgres`. (must be set when `services.postgres.local_setup=true`) |
| services.postgres.pullPolicy | IfNotPresent | | Using this key, user can set the pull policy for the stateful deployment of `postgres`. (must be set when `services.postgres.local_setup=true`) |
| services.postgres.servicePort | 5432 | | This key sets the default port number to be used while setting up stateful deployment of `postgres`. |
| services.postgres.volumeSize | 2Gi | | While setting up the stateful deployment, while creating the persistant volume, volume allocation size need to be provided. This key helps you set the volume allocation size. Unit of this value must be in Mi (megabyte) or Gi (gigabyte) |
| env.pgdb_username | plane | | Database credentials are requried to access the hosted stateful deployment of `postgres`. Use this key to set the username for the stateful deployment. |
| env.pgdb_password | plane | | Database credentials are requried to access the hosted stateful deployment of `postgres`. Use this key to set the password for the stateful deployment. |
| env.pgdb_name | plane | | Database name to be used while setting up stateful deployment of `Postgres` |
| services.postgres.assign_cluster_ip | false | | Set it to `true` if you want to assign `ClusterIP` to the service |
| services.postgres.nodeSelector | {} | | This key allows you to set the node selector for the stateful deployment of `postgres`. This is useful when you want to run the deployment on specific nodes in your Kubernetes cluster. |
| services.postgres.tolerations | [] | | This key allows you to set the tolerations for the stateful deployment of `postgres`. This is useful when you want to run the deployment on nodes with specific taints in your Kubernetes cluster. |
| services.postgres.affinity | {} | | This key allows you to set the affinity rules for the stateful deployment of `postgres`. This is useful when you want to control how pods are scheduled on nodes in your Kubernetes cluster. |
| services.postgres.labels | {} | | This key allows you to set custom labels for the stateful deployment of `postgres`. This is useful for organizing and selecting resources in your Kubernetes cluster. |
| services.postgres.annotations | {} | | This key allows you to set custom annotations for the stateful deployment of `postgres`. This is useful for adding metadata or configuration hints to your resources. |
| env.pgdb_remote_url | | | Users can also decide to use the remote hosted database and link to Plane deployment. Ignoring all the above keys, set `services.postgres.local_setup` to `false` and set this key with remote connection url. |
### Redis/Valkey Setup
| Setting | Default | Required | Description |
| -------------------------------- | :-------------------------: | :------: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| services.redis.local_setup | true | | Plane uses `valkey` to cache the session authentication and other static data. This database can be hosted within kubernetes as part of helm chart deployment or can be used as hosted service remotely (e.g. aws rds or similar services). Set this to `true` when you choose to setup stateful deployment of `redis`. Mark it as `false` when using a remotely hosted database |
| services.redis.image | valkey/valkey:7.2.11-alpine | | Using this key, user must provide the docker image name to setup the stateful deployment of `redis`. (must be set when `services.redis.local_setup=true`) |
| services.redis.pullPolicy | IfNotPresent | | Using this key, user can set the pull policy for the stateful deployment of `redis`. (must be set when `services.redis.local_setup=true`) |
| services.redis.servicePort | 6379 | | This key sets the default port number to be used while setting up stateful deployment of `redis`. |
| services.redis.volumeSize | 500Mi | | While setting up the stateful deployment, while creating the persistant volume, volume allocation size need to be provided. This key helps you set the volume allocation size. Unit of this value must be in Mi (megabyte) or Gi (gigabyte) |
| services.redis.assign_cluster_ip | false | | Set it to `true` if you want to assign `ClusterIP` to the service |
| services.redis.nodeSelector | {} | | This key allows you to set the node selector for the stateful deployment of `redis`. This is useful when you want to run the deployment on specific nodes in your Kubernetes cluster. |
| services.redis.tolerations | [] | | This key allows you to set the tolerations for the stateful deployment of `redis`. This is useful when you want to run the deployment on nodes with specific taints in your Kubernetes cluster. |
| services.redis.affinity | {} | | This key allows you to set the affinity rules for the stateful deployment of `redis`. This is useful when you want to control how pods are scheduled on nodes in your Kubernetes cluster. |
| services.redis.labels | {} | | This key allows you to set custom labels for the stateful deployment of `redis`. This is useful for organizing and selecting resources in your Kubernetes cluster. |
| services.redis.annotations | {} | | This key allows you to set custom annotations for the stateful deployment of `redis`. This is useful for adding metadata or configuration hints to your resources. |
| env.remote_redis_url | | | Users can also decide to use the remote hosted database and link to Plane deployment. Ignoring all the above keys, set `services.redis.local_setup` to `false` and set this key with remote connection url. |
### RabbitMQ Setup
| Setting | Default | Required | Description |
| --------------------------------------- | :-------------------------------: | :------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| services.rabbitmq.local_setup | true | | Plane uses `rabbitmq` as message queuing system. This can be hosted within kubernetes as part of helm chart deployment or can be used as hosted service remotely (e.g. aws mq or similar services). Set this to `true` when you choose to setup stateful deployment of `rabbitmq`. Mark it as `false` when using a remotely hosted service |
| services.rabbitmq.image | rabbitmq:3.13.6-management-alpine | | Using this key, user must provide the docker image name to setup the stateful deployment of `rabbitmq`. (must be set when `services.rabbitmq.local_setup=true`) |
| services.rabbitmq.pullPolicy | IfNotPresent | | Using this key, user can set the pull policy for the stateful deployment of `rabbitmq`. (must be set when `services.rabbitmq.local_setup=true`) |
| services.rabbitmq.servicePort | 5672 | | This key sets the default port number to be used while setting up stateful deployment of `rabbitmq`. |
| services.rabbitmq.managementPort | 15672 | | This key sets the default management port number to be used while setting up stateful deployment of `rabbitmq`. |
| services.rabbitmq.volumeSize | 100Mi | | While setting up the stateful deployment, while creating the persistant volume, volume allocation size need to be provided. This key helps you set the volume allocation size. Unit of this value must be in Mi (megabyte) or Gi (gigabyte) |
| services.rabbitmq.default_user | plane | | Credentials are requried to access the hosted stateful deployment of `rabbitmq`. Use this key to set the username for the stateful deployment. |
| services.rabbitmq.default_password | plane | | Credentials are requried to access the hosted stateful deployment of `rabbitmq`. Use this key to set the password for the stateful deployment. |
| services.rabbitmq.assign_cluster_ip | false | | Set it to `true` if you want to assign `ClusterIP` to the service |
| services.rabbitmq.nodeSelector | {} | | This key allows you to set the node selector for the stateful deployment of `rabbitmq`. This is useful when you want to run the deployment on specific nodes in your Kubernetes cluster. |
| services.rabbitmq.tolerations | [] | | This key allows you to set the tolerations for the stateful deployment of `rabbitmq`. This is useful when you want to run the deployment on nodes with specific taints in your Kubernetes cluster. |
| services.rabbitmq.affinity | {} | | This key allows you to set the affinity rules for the stateful deployment of `rabbitmq`. This is useful when you want to control how pods are scheduled on nodes in your Kubernetes cluster. |
| services.rabbitmq.labels | {} | | This key allows you to set custom labels for the stateful deployment of `rabbitmq`. This is useful for organizing and selecting resources in your Kubernetes cluster. |
| services.rabbitmq.annotations | {} | | This key allows you to set custom annotations for the stateful deployment of `rabbitmq`. This is useful for adding metadata or configuration hints to your resources. |
| services.rabbitmq.external_rabbitmq_url | | | Users can also decide to use the remote hosted service and link to Plane deployment. Ignoring all the above keys, set `services.rabbitmq.local_setup` to `false` and set this key with remote connection url. |
### Doc Store (Minio/S3) Setup
| Setting | Default | Required | Description |
| ------------------------------------- | :----------------: | :------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| services.minio.local_setup | true | | Plane uses `minio` as the default file storage drive. This storage can be hosted within kubernetes as part of helm chart deployment or can be used as hosted service remotely (e.g. aws S3 or similar services). Set this to `true` when you choose to setup stateful deployment of `minio`. Mark it as `false` when using a remotely hosted database |
| services.minio.image | minio/minio:latest | | Using this key, user must provide the docker image name to setup the stateful deployment of `minio`. (must be set when `services.minio.local_setup=true`) |
| services.minio.image_mc | minio/mc:latest | | Using this key, user must provide the docker image name to setup the job deployment of `minio client`. (must be set when `services.minio.local_setup=true`) |
| services.minio.pullPolicy | IfNotPresent | | Using this key, user can set the pull policy for the stateful deployment of `minio`. (must be set when `services.minio.local_setup=true`) |
| services.minio.volumeSize | 3Gi | | While setting up the stateful deployment, while creating the persistant volume, volume allocation size need to be provided. This key helps you set the volume allocation size. Unit of this value must be in Mi (megabyte) or Gi (gigabyte) |
| services.minio.root_user | admin | | Storage credentials are requried to access the hosted stateful deployment of `minio`. Use this key to set the username for the stateful deployment. |
| services.minio.root_password | password | | Storage credentials are requried to access the hosted stateful deployment of `minio`. Use this key to set the password for the stateful deployment. |
| services.minio.env.minio_endpoint_ssl | false | | (Optional) Env to enforce HTTPS when connecting to minio uploads bucket |
| env.docstore_bucket | uploads | Yes | Storage bucket name is required as part of configuration. This is where files will be uploaded irrespective of if you are using `Minio` or external `S3` (or compatible) storage service |
| env.doc_upload_size_limit | 5242880 | Yes | Document Upload Size Limit (default to 5Mb) |
| services.minio.assign_cluster_ip | false | | Set it to `true` if you want to assign `ClusterIP` to the service |
| services.minio.nodeSelector | {} | | This key allows you to set the node selector for the stateful deployment of `minio`. This is useful when you want to run the deployment on specific nodes in your Kubernetes cluster. |
| services.minio.tolerations | [] | | This key allows you to set the tolerations for the stateful deployment of `minio`. This is useful when you want to run the deployment on nodes with specific taints in your Kubernetes cluster. |
| services.minio.affinity | {} | | This key allows you to set the affinity rules for the stateful deployment of `minio`. This is useful when you want to control how pods are scheduled on nodes in your Kubernetes cluster. |
| services.minio.labels | {} | | This key allows you to set custom labels for the stateful deployment of `minio`. This is useful for organizing and selecting resources in your Kubernetes cluster. |
| services.minio.annotations | {} | | This key allows you to set custom annotations for the stateful deployment of `minio`. This is useful for adding metadata or configuration hints to your resources. |
| env.aws_access_key | | | External `S3` (or compatible) storage service provides `access key` for the application to connect and do the necessary upload/download operations. To be provided when `services.minio.local_setup=false` |
| env.aws_secret_access_key | | | External `S3` (or compatible) storage service provides `secret access key` for the application to connect and do the necessary upload/download operations. To be provided when `services.minio.local_setup=false` |
| env.aws_region | | | External `S3` (or compatible) storage service providers creates any buckets in user selected region. This is also shared with the user as `region` for the application to connect and do the necessary upload/download operations. To be provided when `services.minio.local_setup=false` |
| env.aws_s3_endpoint_url | | | External `S3` (or compatible) storage service providers shares a `endpoint_url` for the integration purpose for the application to connect and do the necessary upload/download operations. To be provided when `services.minio.local_setup=false` |
### Web Deployment
| Setting | Default | Required | Description |
|---|:---:|:---:|---|
| services.web.replicas | 1 | Yes | Kubernetes helps you with scaling up/down the deployments. You can run 1 or more pods for each deployment. This key helps you setting up number of replicas you want to run for this deployment. It must be >=1 |
| services.web.memoryLimit | 1000Mi | | Every deployment in kubernetes can be set to use maximum memory they are allowed to use. This key sets the memory limit for this deployment to use.|
| services.web.cpuLimit | 500m | | Every deployment in kubernetes can be set to use maximum cpu they are allowed to use. This key sets the cpu limit for this deployment to use.|
| services.web.memoryRequest | 50Mi | | Every deployment in kubernetes can be set to use minimum memory they are allowed to use. This key sets the memory request for this deployment to use.|
| services.web.cpuRequest | 50m | | Every deployment in kubernetes can be set to use minimum cpu they are allowed to use. This key sets the cpu request for this deployment to use.|
| services.web.image| artifacts.plane.so/makeplane/web-commercial | | This deployment needs a preconfigured docker image to function. Docker image name is provided by the owner and must not be changed for this deployment |
| services.web.pullPolicy | Always | | Using this key, user can set the pull policy for the deployment of `web`. |
| services.web.assign_cluster_ip | false | | Set it to `true` if you want to assign `ClusterIP` to the service |
=== Searching for license_domain references ===
charts/plane-enterprise/values.yaml-9-license:
charts/plane-enterprise/values.yaml-10- licenseServer: https://prime.plane.so
charts/plane-enterprise/values.yaml:11: licenseDomain: 'plane.example.com'
charts/plane-enterprise/values.yaml-12-
charts/plane-enterprise/values.yaml-13-airgapped:
--
charts/plane-enterprise/questions.yml-76- group: "License Setup"
charts/plane-enterprise/questions.yml-77- subquestions:
charts/plane-enterprise/questions.yml:78: - variable: license.licenseDomain
charts/plane-enterprise/questions.yml-79- label: "License Domain"
charts/plane-enterprise/questions.yml-80- type: string
--
charts/plane-enterprise/README.md-38- --create-namespace \
charts/plane-enterprise/README.md-39- --namespace plane \
charts/plane-enterprise/README.md:40: --set license.licenseDomain=${DOMAIN_NAME} \
charts/plane-enterprise/README.md-41- --set planeVersion=${PLANE_VERSION} \
charts/plane-enterprise/README.md-42- --set ingress.enabled=true \
--
charts/plane-enterprise/README.md-69-
charts/plane-enterprise/README.md-70- - `planeVersion: v1.16.0 <or the last released version>`
charts/plane-enterprise/README.md:71: - `license.licenseDomain: <The domain you have specified to host Plane>`
charts/plane-enterprise/README.md-72- - `ingress.enabled: <true | false>`
charts/plane-enterprise/README.md-73- - `ingress.ingressClass: <nginx or any other ingress class configured in your cluster>`
--
charts/plane-enterprise/README.md-95-| --------------------- | :---------------: | :------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
charts/plane-enterprise/README.md-96-| planeVersion | v1.16.0 | Yes | Specifies the version of Plane to be deployed. Copy this from prime.plane.so. |
charts/plane-enterprise/README.md:97:| license.licenseDomain | plane.example.com | Yes | The fully-qualified domain name (FQDN) in the format `sudomain.domain.tld` or `domain.tld` that the license is bound to. It is also attached to your `ingress` host to access Plane. |
charts/plane-enterprise/README.md-98-
charts/plane-enterprise/README.md-99-### Air-gapped Settings
--
charts/plane-enterprise/templates/ingress.yaml:1:{{- if and .Values.ingress.enabled .Values.license.licenseDomain }}
charts/plane-enterprise/templates/ingress.yaml-2-
charts/plane-enterprise/templates/ingress.yaml-3-apiVersion: networking.k8s.io/v1
--
charts/plane-enterprise/templates/ingress.yaml-15- ingressClassName: {{ .Values.ingress.ingressClass }}
charts/plane-enterprise/templates/ingress.yaml-16- rules:
charts/plane-enterprise/templates/ingress.yaml:17: - host: {{ .Values.license.licenseDomain }}
charts/plane-enterprise/templates/ingress.yaml-18- http:
charts/plane-enterprise/templates/ingress.yaml-19- paths:
--
charts/plane-enterprise/templates/ingress.yaml-112- tls:
charts/plane-enterprise/templates/ingress.yaml-113- - hosts:
charts/plane-enterprise/templates/ingress.yaml:114: - {{ .Values.license.licenseDomain | quote }}
charts/plane-enterprise/templates/ingress.yaml-115- {{- if and .Values.services.minio.local_setup .Values.ingress.minioHost }}
charts/plane-enterprise/templates/ingress.yaml-116- - {{ .Values.ingress.minioHost | quote }}
--
charts/plane-enterprise/templates/ingress.yaml-123- tls:
charts/plane-enterprise/templates/ingress.yaml-124- - hosts:
charts/plane-enterprise/templates/ingress.yaml:125: - {{ .Values.license.licenseDomain | quote }}
charts/plane-enterprise/templates/ingress.yaml-126- {{- if and .Values.services.minio.local_setup .Values.ingress.minioHost }}
charts/plane-enterprise/templates/ingress.yaml-127- - {{ .Values.ingress.minioHost | quote }}
--
charts/plane-enterprise/templates/_helpers.tpl-4-
charts/plane-enterprise/templates/_helpers.tpl-5-{{- define "hashString" -}}
charts/plane-enterprise/templates/_helpers.tpl:6:{{- printf "%s%s%s%s" .Values.license.licenseServer .Values.license.licenseDomain .Release.Namespace .Release.Name | sha256sum -}}
charts/plane-enterprise/templates/_helpers.tpl-7-{{- end -}}
charts/plane-enterprise/templates/_helpers.tpl-8-
--
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-26- plane_version: "{{ .Values.metrics.installation.plane_version | default .Values.planeVersion }}"
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-27- cluster_name: "{{ .Values.metrics.installation.cluster_name | default "default" }}"
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml:28: license_domain: "{{ .Values.license.licenseDomain }}"
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-29-
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-30- scrape_configs:
--
charts/plane-enterprise/templates/config-secrets/silo.yaml-73- CORS_ALLOWED_ORIGINS: "*"
charts/plane-enterprise/templates/config-secrets/silo.yaml-74- {{- else if .Values.env.silo_envs.cors_allowed_origins }}
charts/plane-enterprise/templates/config-secrets/silo.yaml:75: CORS_ALLOWED_ORIGINS: "http://{{ .Values.license.licenseDomain }},https://{{ .Values.license.licenseDomain }},{{ .Values.env.silo_envs.cors_allowed_origins }}"
charts/plane-enterprise/templates/config-secrets/silo.yaml-76- {{- else}}
charts/plane-enterprise/templates/config-secrets/silo.yaml:77: CORS_ALLOWED_ORIGINS: "http://{{ .Values.license.licenseDomain }},https://{{ .Values.license.licenseDomain }}"
charts/plane-enterprise/templates/config-secrets/silo.yaml-78- {{- end }}
charts/plane-enterprise/templates/config-secrets/silo.yaml-79-
charts/plane-enterprise/templates/config-secrets/silo.yaml:80: APP_BASE_URL: "https://{{ .Values.license.licenseDomain }}"
charts/plane-enterprise/templates/config-secrets/silo.yaml-81- API_BASE_URL: "http://{{ .Release.Name }}-api.{{ .Release.Namespace }}.svc.{{ .Values.env.default_cluster_domain | default "cluster.local" }}:8000/"
charts/plane-enterprise/templates/config-secrets/silo.yaml:82: SILO_API_BASE_URL: "https://{{ .Values.license.licenseDomain }}"
charts/plane-enterprise/templates/config-secrets/silo.yaml-83-
charts/plane-enterprise/templates/config-secrets/silo.yaml-84- PAYMENT_SERVER_BASE_URL: "http://{{ .Release.Name }}-monitor.{{ .Release.Namespace }}.svc.{{ .Values.env.default_cluster_domain | default "cluster.local" }}:8080/"
--
charts/plane-enterprise/templates/config-secrets/monitor.yaml-7- PRIME_HOST: {{ .Values.license.licenseServer | quote }}
charts/plane-enterprise/templates/config-secrets/monitor.yaml-8- MACHINE_SIGNATURE: {{ include "hashString" . | quote }}
charts/plane-enterprise/templates/config-secrets/monitor.yaml:9: APP_DOMAIN: {{ .Values.license.licenseDomain | quote }}
charts/plane-enterprise/templates/config-secrets/monitor.yaml-10- APP_VERSION: {{ .Values.planeVersion | quote }}
charts/plane-enterprise/templates/config-secrets/monitor.yaml-11- DEPLOY_PLATFORM: "KUBERNETES"
--
charts/plane-enterprise/templates/certs/certs.yaml-8-spec:
charts/plane-enterprise/templates/certs/certs.yaml-9- dnsNames:
charts/plane-enterprise/templates/certs/certs.yaml:10: - {{ .Values.license.licenseDomain | quote }}
charts/plane-enterprise/templates/certs/certs.yaml-11- {{- if and .Values.services.minio.local_setup .Values.ingress.minioHost }}
charts/plane-enterprise/templates/certs/certs.yaml-12- - {{ .Values.ingress.minioHost | quote }}
--
charts/plane-enterprise/templates/config-secrets/app-env.yaml-48- PRIME_HOST: {{ .Values.license.licenseServer | quote }}
charts/plane-enterprise/templates/config-secrets/app-env.yaml-49- MACHINE_SIGNATURE: {{ include "hashString" . | quote }}
charts/plane-enterprise/templates/config-secrets/app-env.yaml:50: APP_DOMAIN: {{ .Values.license.licenseDomain | quote }}
charts/plane-enterprise/templates/config-secrets/app-env.yaml-51- APP_VERSION: {{ .Values.planeVersion | quote }}
charts/plane-enterprise/templates/config-secrets/app-env.yaml-52- PAYMENT_SERVER_BASE_URL: "http://{{ .Release.Name }}-monitor.{{ .Release.Namespace }}.svc.cluster.local:8080/"
--
charts/plane-enterprise/templates/config-secrets/app-env.yaml-64- GUNICORN_WORKERS: "1"
charts/plane-enterprise/templates/config-secrets/app-env.yaml-65-
charts/plane-enterprise/templates/config-secrets/app-env.yaml:66: WEB_URL: "http://{{ .Values.license.licenseDomain }}"
charts/plane-enterprise/templates/config-secrets/app-env.yaml-67-
charts/plane-enterprise/templates/config-secrets/app-env.yaml-68- LIVE_BASE_URL: "http://{{ .Release.Name }}-live.{{ .Release.Namespace }}.svc.cluster.local:3000/"
--
charts/plane-enterprise/templates/config-secrets/app-env.yaml-76- CORS_ALLOWED_ORIGINS: "*"
charts/plane-enterprise/templates/config-secrets/app-env.yaml-77- {{- else if .Values.env.cors_allowed_origins }}
charts/plane-enterprise/templates/config-secrets/app-env.yaml:78: CORS_ALLOWED_ORIGINS: "http://{{ .Values.license.licenseDomain }},https://{{ .Values.license.licenseDomain }},{{ .Values.env.cors_allowed_origins }}"
charts/plane-enterprise/templates/config-secrets/app-env.yaml-79- {{- else}}
charts/plane-enterprise/templates/config-secrets/app-env.yaml:80: CORS_ALLOWED_ORIGINS: "http://{{ .Values.license.licenseDomain }},https://{{ .Values.license.licenseDomain }}"
charts/plane-enterprise/templates/config-secrets/app-env.yaml-81- {{- end }}
</shell_result>
Script executed:
#!/bin/bash
# Search for any issues, comments, or discussions about metrics privacy/PII
rg -n "TODO|FIXME|XXX|HACK" charts/plane-enterprise/templates/config-secrets/metrics-config.yaml
# Search for any configuration that would allow disabling specific labels
rg -n "metrics.*label|external.*label|telemetry.*privacy" --type yaml -A2 -B2
# Check if there's any documentation about metrics telemetry
find . -type f -name "*.md" -exec grep -l "metrics\|telemetry" {} \;Repository: makeplane/helm-charts
<shell_result>
Repository: makeplane/helm-charts
Exit code: 0
stdout:
122: # TODO: Add production TLS configuration
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-20- scrape_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-21- evaluation_interval: {{ .Values.metrics.agent.scrape_interval | default "300s" }}
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml:22: external_labels:
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-23- # Multi-tenant identification
charts/plane-enterprise/templates/config-secrets/metrics-config.yaml-24- plane_installation_uuid: "{{ include "plane.metrics.installationUUID" . }}"
</shell_result>
Confirm license domain is exported to external telemetry without privacy controls or documentation.
The concern is valid. license_domain contains the customer's fully-qualified domain name (FQDN, e.g., "plane.company.com") and is unconditionally exported as an external label in Prometheus metrics sent to the configured telemetry endpoint. This is customer-identifying information with no:
- Privacy policy or data retention terms documented
- Configuration flag to disable this specific label (metrics enable/disable is all-or-nothing)
- Acknowledgment of GDPR/CCPA implications
The label should be:
- Removed entirely,
- Made optional via a feature flag, or
- Explicitly documented as a known data export in compliance documentation (README or PRIVACY policy file).
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/config-secrets/metrics-config.yaml around
line 28, the template unconditionally exports license_domain (customer FQDN) as
an external Prometheus label; change the template to only include license_domain
when a new Helm value flag (e.g., .Values.license.exportLicenseDomain) is
explicitly true (default false), add that flag to
charts/plane-enterprise/values.yaml with default: exportLicenseDomain: false and
a clear comment, and update README and PRIVACY docs to document that exporting
the license domain is an opt-in telemetry feature and the retention/usage
policy; also run helm lint and test rendering to verify the label is omitted by
default.
This update enhances observability by integrating metrics collection capabilities into the Plane Enterprise Helm chart.
Description
Type of Change
Screenshots and Media (if applicable)
Test Scenarios
References
Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.