diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..1eaf229 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,36 @@ +# Set default behavior to automatically normalize line endings +* text=auto eol=lf + +# Explicitly declare text files that should use LF +*.yaml text eol=lf +*.yml text eol=lf +*.json text eol=lf +*.js text eol=lf +*.ts text eol=lf +*.md text eol=lf +*.sh text eol=lf +*.ps1 text eol=lf +*.env text eol=lf + +# Declare all files that are binary and should not be modified +*.png binary +*.jpg binary +*.jpeg binary +*.gif binary +*.ico binary +*.zip binary +*.tar binary +*.gz binary +*.db binary +*.pdf binary +*.psd binary + +# Git configuration +.gitattributes text eol=lf +.gitignore text eol=lf +.gitconfig text eol=lf + +# Docker configuration +Dockerfile text eol=lf +docker-compose.yml text eol=lf +docker-compose.yaml text eol=lf diff --git a/README.md b/README.md index c7ad221..80af962 100644 --- a/README.md +++ b/README.md @@ -1 +1,30 @@ # n8n-hosting + +This repository contains various deployment options for n8n workflow automation tool, including Docker, Docker Compose, and Kubernetes. + +## Deployment Options + +### Docker Compose +Simple deployment options using Docker Compose for development and small production environments. + +* [Basic Setup](docker-compose/) +* [With Postgres](docker-compose/withPostgres/) +* [With Postgres and Worker](docker-compose/withPostgresAndWorker/) + +### Docker with Caddy +Deployment option using Docker with Caddy as a reverse proxy. + +* [Docker + Caddy](docker-caddy/) + +### Kubernetes +Enterprise-grade deployment option with Kustomize overlays for production environments. Features UI/Worker separation, autoscaling via KEDA, and production-ready security hardening. + +* [Kubernetes Deployment](kubernetes/) +* [Production Overlay](kubernetes/overlays/production/) +* [KEDA Autoscaling Configuration](kubernetes/configure-keda-prometheus.ps1) + +## Getting Started + +Choose a deployment option based on your requirements and follow the instructions in the corresponding directory. + +For enterprise production deployments, the Kubernetes option offers the most scalability and resilience. diff --git a/kubernetes/README.md b/kubernetes/README.md index 4b29e33..9d3bf69 100644 --- a/kubernetes/README.md +++ b/kubernetes/README.md @@ -1,23 +1,123 @@ # n8n-kubernetes-hosting -Get up and running with n8n on the following platforms: +This repository provides Kubernetes manifests for deploying n8n workflow automation tool in both development and production environments. It uses Kustomize overlays to manage environment-specific configurations. -* [AWS](https://docs.n8n.io/hosting/server-setups/aws/) -* [Azure](https://docs.n8n.io/hosting/server-setups/azure/) -* [Google Cloud Platform](https://docs.n8n.io/hosting/server-setups/google-cloud/) +## Architecture Overview -If you have questions after trying the tutorials, check out the [forums](https://community.n8n.io/). 
+The production deployment uses a scalable architecture with the following components: + +* **n8n UI**: Single pod handling the web interface (Deployment) +* **n8n Workers**: Multiple pods processing workflow executions (Deployment with KEDA autoscaling) +* **Redis**: Queue management for worker coordination (Deployment) +* **Postgres**: Persistence layer for workflow storage (Deployment) + +### Key Features + +* **UI/Worker Architecture**: Separates web interface and workflow processing for better scalability +* **Autoscaling**: Uses KEDA with Prometheus metrics to scale workers based on queue size +* **Security Hardening**: Non-root execution, network policies, and secret management +* **Resource Management**: Production-grade resource limits and requests +* **Zero-Downtime Updates**: RollingUpdate strategy for all components ## Prerequisites Self-hosting n8n requires technical knowledge, including: -* Setting up and configuring servers and containers +* Kubernetes cluster administration +* Setting up and configuring containers and orchestration * Managing application resources and scaling -* Securing servers and applications -* Configuring n8n +* Securing Kubernetes workloads +* Configuring n8n and related infrastructure + +### Required Components + +* Kubernetes cluster (v1.16+) +* Kubectl and Kustomize +* KEDA for autoscaling (v2.0+) +* Prometheus for metrics (optional, required for autoscaling) +* Ingress controller (e.g., nginx-ingress) +* Cert-Manager (optional, for SSL) + +## Deployment Structure + +The repository is organized using Kustomize overlays: + +``` +kubernetes/ +├── base/ # Base configuration shared across environments +├── overlays/ +│ └── production/ # Production-specific configurations +└── configure-keda-prometheus.ps1 # Configuration script +``` + +### Base Directory + +Contains core components: +* n8n UI deployment (single replica) +* n8n worker deployment +* Redis and Postgres deployments +* Services, PVCs, ConfigMaps, and Secrets +* Basic ingress configuration + +### Production Overlay + +Enhances the base with production-ready configurations: +* Increased worker replicas (3 by default) +* Enhanced resource limits +* Redis security with password authentication +* Network policies for Redis and Postgres +* Security contexts for non-root execution +* Custom hostname through ingress-patch +* KEDA ScaledObject for worker autoscaling + +## Deployment Instructions + +### Basic Deployment + +```bash +# Deploy the production configuration +kubectl apply -k kubernetes/overlays/production +``` + +### Configuring KEDA and Prometheus + +The repository includes a PowerShell script that automatically detects KEDA and Prometheus in your cluster and configures the ScaledObject accordingly: + +```powershell +# Run with default settings +.\kubernetes\configure-keda-prometheus.ps1 + +# Or customize parameters +.\kubernetes\configure-keda-prometheus.ps1 -MinReplicas 3 -MaxReplicas 30 -Threshold 10 +``` + +#### Script Parameters + +* `Namespace`: Your n8n namespace (default: "n8n") +* `PromNamespace`: Prometheus namespace (default: "prometheus") +* `ScaledObjectPatchFile`: Path to ScaledObject patch (default: "overlays/production/n8n-worker-scaledobject-patch.yaml") +* `MinReplicas`: Minimum worker replicas (default: 2) +* `MaxReplicas`: Maximum worker replicas (default: 20) +* `MetricName`: Metric name for scaling (default: "n8n_queue_waiting_jobs") +* `Threshold`: Queue size threshold to trigger scaling (default: 5) +* `Query`: Prometheus query (default: 
"sum(n8n_queue_waiting_jobs)") + +## Environment Variables + +Key environment variables in the production deployment: + +* `EXECUTIONS_MODE=queue`: Enables queue mode for distributed execution +* `QUEUE_BULL_REDIS_HOST`: Points to Redis service +* `QUEUE_HEALTH_CHECK_ACTIVE=true`: Enables worker health checks +* `DB_POSTGRESDB_HOST`: Points to Postgres service +* `N8N_ENCRYPTION_KEY`: From Secret for data encryption + +## Resource Allocations -n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). +* **n8n UI**: 500Mi-1Gi memory, 500m-1 CPU +* **n8n workers**: 500Mi-1Gi memory, 300m-1 CPU +* **Redis**: 128Mi-256Mi memory, 100m-300m CPU +* **Postgres**: 4Gi-8Gi memory, 2-4 CPU ## Contributions diff --git a/kubernetes/base/kustomization.yaml b/kubernetes/base/kustomization.yaml new file mode 100644 index 0000000..198a7f9 --- /dev/null +++ b/kubernetes/base/kustomization.yaml @@ -0,0 +1,26 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: n8n + +labels: + - pairs: + app: n8n + +resources: + - namespace.yaml + - n8n-deployment.yaml + - n8n-deployment-workers.yaml + - n8n-service.yaml + - redis-deployment.yaml + - redis-service.yaml + - pvcs.yaml + - postgres-deployment.yaml + - postgres-service.yaml + - postgres-pvc.yaml + - postgres-configmap.yaml + - postgres-secret.yaml + - n8n-secret.yaml + - n8n-ingress.yaml + - n8n-worker-scaledobject.yaml + - n8n-servicemonitor.yaml \ No newline at end of file diff --git a/kubernetes/n8n-deployment.yaml b/kubernetes/base/n8n-deployment-workers.yaml similarity index 59% rename from kubernetes/n8n-deployment.yaml rename to kubernetes/base/n8n-deployment-workers.yaml index 89d6d9c..105f6c6 100644 --- a/kubernetes/n8n-deployment.yaml +++ b/kubernetes/base/n8n-deployment-workers.yaml @@ -2,20 +2,22 @@ apiVersion: apps/v1 kind: Deployment metadata: labels: - service: n8n - name: n8n - namespace: n8n + service: n8n-worker + name: n8n-worker spec: - replicas: 1 + replicas: 2 selector: matchLabels: - service: n8n + service: n8n-worker strategy: - type: Recreate + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 template: metadata: labels: - service: n8n + service: n8n-worker spec: initContainers: - name: volume-permissions @@ -29,12 +31,12 @@ spec: - /bin/sh args: - -c - - sleep 5; n8n start + - sleep 5; n8n worker env: - name: DB_TYPE value: postgresdb - name: DB_POSTGRESDB_HOST - value: postgres-service.n8n.svc.cluster.local + value: postgres-service - name: DB_POSTGRESDB_PORT value: "5432" - name: DB_POSTGRESDB_DATABASE @@ -43,25 +45,41 @@ spec: valueFrom: secretKeyRef: name: postgres-secret - key: POSTGRES_NON_ROOT_USER + key: POSTGRES_USER - name: DB_POSTGRESDB_PASSWORD valueFrom: secretKeyRef: name: postgres-secret - key: POSTGRES_NON_ROOT_PASSWORD - - name: N8N_PROTOCOL - value: http - - name: N8N_PORT - value: "5678" + key: POSTGRES_PASSWORD + - name: EXECUTIONS_MODE + value: queue + - name: QUEUE_BULL_REDIS_HOST + value: redis-service + - name: QUEUE_HEALTH_CHECK_ACTIVE + value: "true" + - name: N8N_ENCRYPTION_KEY + valueFrom: + secretKeyRef: + name: n8n-secret + key: N8N_ENCRYPTION_KEY + - name: N8N_METRICS + value: "true" + - name: N8N_METRICS_PORT + value: "9100" + - name: N8N_METRICS_PREFIX + value: "n8n_" image: n8nio/n8n - name: n8n + name: n8n-worker ports: - - containerPort: 5678 + - containerPort: 9100 + name: metrics resources: requests: 
memory: "250Mi" + cpu: "100m" limits: memory: "500Mi" + cpu: "500m" volumeMounts: - mountPath: /home/node/.n8n name: n8n-claim0 diff --git a/kubernetes/base/n8n-deployment.yaml b/kubernetes/base/n8n-deployment.yaml new file mode 100644 index 0000000..806b1e2 --- /dev/null +++ b/kubernetes/base/n8n-deployment.yaml @@ -0,0 +1,112 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + service: n8n + name: n8n +spec: + replicas: 1 + selector: + matchLabels: + service: n8n + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + template: + metadata: + labels: + service: n8n + spec: + initContainers: + - name: volume-permissions + image: busybox:1.36 + command: ["sh", "-c", "chown 1000:1000 /data"] + volumeMounts: + - name: n8n-claim0 + mountPath: /data + containers: + - command: + - /bin/sh + args: + - -c + - sleep 5; n8n start + env: + - name: DB_TYPE + value: postgresdb + - name: DB_POSTGRESDB_HOST + value: postgres-service + - name: DB_POSTGRESDB_PORT + value: "5432" + - name: DB_POSTGRESDB_DATABASE + value: n8n + - name: DB_POSTGRESDB_USER + valueFrom: + secretKeyRef: + name: postgres-secret + key: POSTGRES_USER + - name: DB_POSTGRESDB_PASSWORD + valueFrom: + secretKeyRef: + name: postgres-secret + key: POSTGRES_PASSWORD + - name: N8N_ENCRYPTION_KEY + valueFrom: + secretKeyRef: + name: n8n-secret + key: N8N_ENCRYPTION_KEY + - name: EXECUTIONS_MODE + value: queue + - name: QUEUE_BULL_REDIS_HOST + value: redis-service + - name: QUEUE_HEALTH_CHECK_ACTIVE + value: "true" + - name: N8N_METRICS + value: "true" + - name: N8N_METRICS_PORT + value: "9100" + - name: N8N_METRICS_PREFIX + value: "n8n_" + image: n8nio/n8n + name: n8n + ports: + - containerPort: 5678 + name: http + - containerPort: 9100 + name: metrics + resources: + requests: + memory: "250Mi" + cpu: "200m" + limits: + memory: "500Mi" + cpu: "500m" + volumeMounts: + - mountPath: /home/node/.n8n + name: n8n-claim0 + livenessProbe: + httpGet: + path: /healthz + port: 5678 + initialDelaySeconds: 30 + periodSeconds: 10 + failureThreshold: 3 + readinessProbe: + httpGet: + path: /healthz/readiness + port: 5678 + initialDelaySeconds: 10 + periodSeconds: 5 + failureThreshold: 3 + restartPolicy: Always + volumes: + - name: n8n-claim0 + persistentVolumeClaim: + claimName: n8n-claim0 + - name: n8n-secret + secret: + secretName: n8n-secret + - name: postgres-secret + secret: + secretName: postgres-secret diff --git a/kubernetes/base/n8n-ingress.yaml b/kubernetes/base/n8n-ingress.yaml new file mode 100644 index 0000000..4fa346f --- /dev/null +++ b/kubernetes/base/n8n-ingress.yaml @@ -0,0 +1,19 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: n8n-ingress + annotations: + nginx.ingress.kubernetes.io/ssl-redirect: "false" +spec: + ingressClassName: nginx + rules: + - host: n8n.dev-k8s.cloud + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: n8n-service + port: + number: 5678 diff --git a/kubernetes/base/n8n-secret.yaml b/kubernetes/base/n8n-secret.yaml new file mode 100644 index 0000000..8d2af00 --- /dev/null +++ b/kubernetes/base/n8n-secret.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Secret +metadata: + name: n8n-secret +type: Opaque +stringData: + N8N_ENCRYPTION_KEY: "SuperSecretKeyChangeThis" diff --git a/kubernetes/base/n8n-service.yaml b/kubernetes/base/n8n-service.yaml new file mode 100644 index 0000000..4f687fe --- /dev/null +++ b/kubernetes/base/n8n-service.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + service: n8n + name: 
n8n-service + annotations: + prometheus.io/scrape: "true" + prometheus.io/path: "/metrics" + prometheus.io/port: "9100" +spec: + ports: + - name: http + port: 5678 + targetPort: 5678 + protocol: TCP + - name: metrics + port: 9100 + targetPort: 9100 + protocol: TCP + selector: + service: n8n + type: ClusterIP diff --git a/kubernetes/base/n8n-servicemonitor.yaml b/kubernetes/base/n8n-servicemonitor.yaml new file mode 100644 index 0000000..699b5be --- /dev/null +++ b/kubernetes/base/n8n-servicemonitor.yaml @@ -0,0 +1,15 @@ +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: n8n-metrics + annotations: + prometheus.io/scrape: "true" + prometheus.io/namespace: "prometheus" +spec: + selector: + matchLabels: + service: n8n + endpoints: + - port: metrics + path: /metrics + interval: 15s diff --git a/kubernetes/base/n8n-worker-scaledobject.yaml b/kubernetes/base/n8n-worker-scaledobject.yaml new file mode 100644 index 0000000..68e38a1 --- /dev/null +++ b/kubernetes/base/n8n-worker-scaledobject.yaml @@ -0,0 +1,18 @@ +apiVersion: keda.sh/v1alpha1 +kind: ScaledObject +metadata: + name: n8n-worker-scaler +spec: + scaleTargetRef: + name: n8n-worker + minReplicaCount: 1 + maxReplicaCount: 10 + pollingInterval: 30 + cooldownPeriod: 300 + triggers: + - type: prometheus + metadata: + serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090 + metricName: n8n_queue_size + threshold: "5" + query: sum(n8n_queue_waiting_jobs) diff --git a/kubernetes/namespace.yaml b/kubernetes/base/namespace.yaml similarity index 100% rename from kubernetes/namespace.yaml rename to kubernetes/base/namespace.yaml diff --git a/kubernetes/postgres-configmap.yaml b/kubernetes/base/postgres-configmap.yaml similarity index 95% rename from kubernetes/postgres-configmap.yaml rename to kubernetes/base/postgres-configmap.yaml index cfdac0e..45512e3 100644 --- a/kubernetes/postgres-configmap.yaml +++ b/kubernetes/base/postgres-configmap.yaml @@ -2,7 +2,6 @@ apiVersion: v1 kind: ConfigMap metadata: name: init-data - namespace: n8n data: init-data.sh: | #!/bin/bash @@ -14,4 +13,4 @@ data: EOSQL else echo "SETUP INFO: No Environment variables given!" 
- fi \ No newline at end of file + fi diff --git a/kubernetes/postgres-deployment.yaml b/kubernetes/base/postgres-deployment.yaml similarity index 97% rename from kubernetes/postgres-deployment.yaml rename to kubernetes/base/postgres-deployment.yaml index 21d2a1a..5ab8a01 100644 --- a/kubernetes/postgres-deployment.yaml +++ b/kubernetes/base/postgres-deployment.yaml @@ -4,7 +4,6 @@ metadata: labels: service: postgres-n8n name: postgres - namespace: n8n spec: replicas: 1 selector: @@ -63,7 +62,7 @@ spec: secretKeyRef: name: postgres-secret key: POSTGRES_NON_ROOT_PASSWORD - - name: POSTGRES_HOST + - name: POSTGRES_HOST value: postgres-service - name: POSTGRES_PORT value: '5432' diff --git a/kubernetes/postgres-claim0-persistentvolumeclaim.yaml b/kubernetes/base/postgres-pvc.yaml similarity index 90% rename from kubernetes/postgres-claim0-persistentvolumeclaim.yaml rename to kubernetes/base/postgres-pvc.yaml index 226dd48..48da07f 100644 --- a/kubernetes/postgres-claim0-persistentvolumeclaim.yaml +++ b/kubernetes/base/postgres-pvc.yaml @@ -2,7 +2,6 @@ kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgresql-pv - namespace: n8n spec: accessModes: - ReadWriteOnce diff --git a/kubernetes/postgres-secret.yaml b/kubernetes/base/postgres-secret.yaml similarity index 77% rename from kubernetes/postgres-secret.yaml rename to kubernetes/base/postgres-secret.yaml index 29d006c..d8d3e0d 100644 --- a/kubernetes/postgres-secret.yaml +++ b/kubernetes/base/postgres-secret.yaml @@ -1,7 +1,6 @@ apiVersion: v1 kind: Secret metadata: - namespace: n8n name: postgres-secret type: Opaque stringData: @@ -9,4 +8,4 @@ stringData: POSTGRES_PASSWORD: changePassword POSTGRES_DB: n8n POSTGRES_NON_ROOT_USER: changeUser - POSTGRES_NON_ROOT_PASSWORD: changePassword \ No newline at end of file + POSTGRES_NON_ROOT_PASSWORD: changePassword diff --git a/kubernetes/postgres-service.yaml b/kubernetes/base/postgres-service.yaml similarity index 86% rename from kubernetes/postgres-service.yaml rename to kubernetes/base/postgres-service.yaml index ab755fe..d715d87 100644 --- a/kubernetes/postgres-service.yaml +++ b/kubernetes/base/postgres-service.yaml @@ -4,9 +4,7 @@ metadata: labels: service: postgres-n8n name: postgres-service - namespace: n8n spec: - clusterIP: None ports: - name: "5432" port: 5432 diff --git a/kubernetes/base/pvcs.yaml b/kubernetes/base/pvcs.yaml new file mode 100644 index 0000000..9f3a601 --- /dev/null +++ b/kubernetes/base/pvcs.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: n8n-claim0 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: redis-claim +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/kubernetes/base/redis-deployment.yaml b/kubernetes/base/redis-deployment.yaml new file mode 100644 index 0000000..645f83c --- /dev/null +++ b/kubernetes/base/redis-deployment.yaml @@ -0,0 +1,55 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + service: redis + name: redis +spec: + replicas: 1 + selector: + matchLabels: + service: redis + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + template: + metadata: + labels: + service: redis + spec: + containers: + - image: redis:6-alpine + name: redis + ports: + - containerPort: 6379 + resources: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "256Mi" + cpu: "300m" + volumeMounts: + - mountPath: /data + name: redis-data 
+ livenessProbe: + exec: + command: + - redis-cli + - ping + initialDelaySeconds: 5 + periodSeconds: 10 + readinessProbe: + exec: + command: + - redis-cli + - ping + initialDelaySeconds: 5 + periodSeconds: 5 + restartPolicy: Always + volumes: + - name: redis-data + persistentVolumeClaim: + claimName: redis-claim diff --git a/kubernetes/base/redis-service.yaml b/kubernetes/base/redis-service.yaml new file mode 100644 index 0000000..54a83c5 --- /dev/null +++ b/kubernetes/base/redis-service.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + service: redis + name: redis-service +spec: + ports: + - port: 6379 + targetPort: 6379 + protocol: TCP + selector: + service: redis + type: ClusterIP diff --git a/kubernetes/configure-keda-prometheus.ps1 b/kubernetes/configure-keda-prometheus.ps1 new file mode 100644 index 0000000..ddbebf9 --- /dev/null +++ b/kubernetes/configure-keda-prometheus.ps1 @@ -0,0 +1,158 @@ +#!/usr/bin/env pwsh + +# Configure-KEDA-Prometheus.ps1 +# This script detects KEDA and Prometheus configurations in your Kubernetes cluster +# and updates n8n worker ScaledObject configuration accordingly + +# Parameters with default values that can be overridden +param( + [string]$Namespace = "n8n", + [string]$PromNamespace = "prometheus", + [string]$ScaledObjectPatchFile = "overlays/production/n8n-worker-scaledobject-patch.yaml", + [int]$MinReplicas = 2, + [int]$MaxReplicas = 20, + [string]$MetricName = "n8n_queue_waiting_jobs", + [int]$Threshold = 5, + [string]$Query = "sum(n8n_queue_waiting_jobs)" +) + +Write-Host "===== n8n KEDA & Prometheus Configuration Script =====" +Write-Host "Detecting environment and configuring autoscaling..." + +# Function to check if a Kubernetes resource exists +function Test-K8sResource { + param ( + [string]$Resource, + [string]$Name, + [string]$Namespace = "" + ) + + $nsParam = "" + if ($Namespace) { + $nsParam = "-n $Namespace" + } + + $result = kubectl get $Resource $Name $nsParam 2>&1 + return $LASTEXITCODE -eq 0 +} + +# Check if KEDA is installed in the cluster +Write-Host "Checking for KEDA installation..." +$kedaExists = Test-K8sResource -Resource "crd" -Name "scaledobjects.keda.sh" + +if (-not $kedaExists) { + Write-Host "ERROR: KEDA is not installed in the cluster. Please install KEDA first." -ForegroundColor Red + Write-Host "Installation instructions: https://keda.sh/docs/latest/deploy/" -ForegroundColor Yellow + exit 1 +} + +Write-Host "✅ KEDA is installed properly" -ForegroundColor Green + +# Detect Prometheus server URL +Write-Host "Detecting Prometheus server..." 
+$prometheusSvc = $null + +# First check in the provided namespace +$promSvcInNamespace = kubectl get svc -n $PromNamespace -o jsonpath="{.items[?(@.metadata.name=='prometheus-server' || @.metadata.name=='prometheus')].metadata.name}" 2>&1 +if ($promSvcInNamespace) { + $prometheusSvc = "$promSvcInNamespace.$PromNamespace.svc.cluster.local" + Write-Host "Found Prometheus service in $PromNamespace namespace: $prometheusSvc" -ForegroundColor Green +} + +# If not found, check for kube-prometheus-stack installation +if (-not $prometheusSvc) { + $kubePromSvc = kubectl get svc -n $PromNamespace -o jsonpath="{.items[?(@.metadata.name=='prometheus-kube-prometheus-prometheus')].metadata.name}" 2>&1 + if ($kubePromSvc) { + $prometheusSvc = "$kubePromSvc.$PromNamespace.svc.cluster.local" + Write-Host "Found kube-prometheus-stack service in $PromNamespace namespace: $prometheusSvc" -ForegroundColor Green + } +} + +# If still not found, search in all namespaces +if (-not $prometheusSvc) { + Write-Host "Searching for Prometheus in all namespaces (this may take a moment)..." -ForegroundColor Yellow + $allPromSvcs = kubectl get svc --all-namespaces -o jsonpath="{range .items[?(@.metadata.name=='prometheus-server' || @.metadata.name=='prometheus' || @.metadata.name=='prometheus-kube-prometheus-prometheus')]}{.metadata.namespace}/{.metadata.name}{'\n'}{end}" 2>&1 + + if ($allPromSvcs) { + $firstPromSvc = $allPromSvcs -split "`n" | Select-Object -First 1 + $promParts = $firstPromSvc -split "/" + if ($promParts.Length -eq 2) { + $promNamespace = $promParts[0] + $promName = $promParts[1] + $prometheusSvc = "$promName.$promNamespace.svc.cluster.local" + Write-Host "Found Prometheus service in $promNamespace namespace: $prometheusSvc" -ForegroundColor Green + } + } +} + +if (-not $prometheusSvc) { + Write-Host "WARNING: Could not automatically detect Prometheus service." -ForegroundColor Yellow + $prometheusSvc = Read-Host "Please enter the Prometheus service URL (e.g., prometheus-server.prometheus.svc.cluster.local)" + + if (-not $prometheusSvc) { + Write-Host "No Prometheus service provided. Exiting." -ForegroundColor Red + exit 1 + } +} + +# Determine Prometheus port +Write-Host "Detecting Prometheus port..." +$promPort = "9090" # Default Prometheus port +$foundPort = kubectl get svc -n $promNamespace $prometheusSvc.Split(".")[0] -o jsonpath="{.spec.ports[?(@.name=='http')].port}" 2>&1 +if ($foundPort) { + $promPort = $foundPort +} + +$prometheusUrl = "http://${prometheusSvc}:${promPort}" +Write-Host "Using Prometheus URL: $prometheusUrl" -ForegroundColor Green + +# Check if n8n namespace exists +Write-Host "Checking n8n namespace..." +$n8nNamespace = Test-K8sResource -Resource "namespace" -Name $Namespace +if (-not $n8nNamespace) { + Write-Host "Creating namespace $Namespace..." + kubectl create namespace $Namespace +} + +# Check path to ScaledObject patch file +$fullPath = Join-Path -Path (Get-Location) -ChildPath $ScaledObjectPatchFile +if (-not (Test-Path $fullPath)) { + Write-Host "ERROR: ScaledObject patch file not found at $fullPath" -ForegroundColor Red + exit 1 +} + +# Create or update the n8n-worker-scaledobject-patch.yaml file +Write-Host "Updating ScaledObject patch file with Prometheus configuration..." 
+$scaleObjectPatch = @" +apiVersion: keda.sh/v1alpha1 +kind: ScaledObject +metadata: + name: n8n-worker-scaler +spec: + scaleTargetRef: + name: n8n-worker + minReplicaCount: $MinReplicas + maxReplicaCount: $MaxReplicas + triggers: + - type: prometheus + metadata: + serverAddress: $prometheusUrl + metricName: $MetricName + threshold: "$Threshold" + query: $Query +"@ + +# Write the updated content to the file +Set-Content -Path $fullPath -Value $scaleObjectPatch + +Write-Host "✅ ScaledObject patch file updated successfully!" -ForegroundColor Green +Write-Host " Path: $fullPath" -ForegroundColor Green +Write-Host " Prometheus URL: $prometheusUrl" -ForegroundColor Green +Write-Host " Metric Name: $MetricName" -ForegroundColor Green +Write-Host " Threshold: $Threshold" -ForegroundColor Green +Write-Host " Min/Max Replicas: $MinReplicas/$MaxReplicas" -ForegroundColor Green + +Write-Host "`nTo apply the configuration, run:" +Write-Host "kubectl apply -k kubernetes/overlays/production" -ForegroundColor Cyan + +Write-Host "`n===== Configuration Complete =====" diff --git a/kubernetes/n8n-claim0-persistentvolumeclaim.yaml b/kubernetes/n8n-claim0-persistentvolumeclaim.yaml deleted file mode 100644 index 5395484..0000000 --- a/kubernetes/n8n-claim0-persistentvolumeclaim.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - labels: - service: n8n-claim0 - name: n8n-claim0 - namespace: n8n -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 2Gi diff --git a/kubernetes/n8n-service.yaml b/kubernetes/n8n-service.yaml deleted file mode 100644 index bd3748a..0000000 --- a/kubernetes/n8n-service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - service: n8n - name: n8n - namespace: n8n -spec: - type: LoadBalancer - ports: - - name: "5678" - port: 5678 - targetPort: 5678 - protocol: TCP - selector: - service: n8n diff --git a/kubernetes/overlays/production/ingress-patch.yaml b/kubernetes/overlays/production/ingress-patch.yaml new file mode 100644 index 0000000..febbe0b --- /dev/null +++ b/kubernetes/overlays/production/ingress-patch.yaml @@ -0,0 +1,19 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: n8n-ingress + annotations: + nginx.ingress.kubernetes.io/ssl-redirect: "true" + cert-manager.io/cluster-issuer: "letsencrypt-prod" +spec: + rules: + - host: n8n.production-domain.com # Override the host for production + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: prod-n8n-service # Updated to use the prefixed service name + port: + number: 5678 diff --git a/kubernetes/overlays/production/kustomization.yaml b/kubernetes/overlays/production/kustomization.yaml new file mode 100644 index 0000000..da44e0e --- /dev/null +++ b/kubernetes/overlays/production/kustomization.yaml @@ -0,0 +1,94 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namePrefix: prod- + +resources: + - ../../base + - redis-networkpolicy.yaml + - postgres-networkpolicy.yaml + - redis-secret.yaml + +patches: + # Strategic merge patches + - path: n8n-redis-credentials-patch.yaml + - path: ingress-patch.yaml + - path: n8n-worker-scaledobject-patch.yaml + - path: redis-container-patch.yaml + - path: n8n-init-container-patch.yaml + - path: n8n-env-patch.yaml + + # Enhanced resource settings for UI pod + - target: + kind: Deployment + name: n8n + patch: | + - op: replace + path: /spec/template/spec/containers/0/resources/limits/memory + value: 1Gi + - op: replace + path: 
/spec/template/spec/containers/0/resources/limits/cpu + value: "1" + - op: replace + path: /spec/template/spec/containers/0/resources/requests/memory + value: 500Mi + - op: replace + path: /spec/template/spec/containers/0/resources/requests/cpu + value: 500m + - op: add + path: /spec/template/spec/securityContext + value: + fsGroup: 1000 + runAsUser: 1000 + runAsNonRoot: true + + # Worker deployment patch + - target: + kind: Deployment + name: n8n-worker + patch: | + - op: replace + path: /spec/replicas + value: 3 + - op: replace + path: /spec/template/spec/containers/0/resources/limits/memory + value: 1Gi + - op: replace + path: /spec/template/spec/containers/0/resources/limits/cpu + value: "1" + - op: replace + path: /spec/template/spec/containers/0/resources/requests/memory + value: 500Mi + - op: replace + path: /spec/template/spec/containers/0/resources/requests/cpu + value: 300m + - op: add + path: /spec/template/spec/securityContext + value: + fsGroup: 1000 + runAsUser: 1000 + runAsNonRoot: true + + # Postgres resource and security hardening + - target: + kind: Deployment + name: postgres + patch: | + - op: replace + path: /spec/template/spec/containers/0/resources/limits/memory + value: 8Gi + - op: replace + path: /spec/template/spec/containers/0/resources/limits/cpu + value: "4" + - op: replace + path: /spec/template/spec/containers/0/resources/requests/memory + value: 4Gi + - op: replace + path: /spec/template/spec/containers/0/resources/requests/cpu + value: "2" + - op: add + path: /spec/template/spec/securityContext + value: + fsGroup: 999 + runAsUser: 70 + runAsNonRoot: true diff --git a/kubernetes/overlays/production/n8n-env-patch.yaml b/kubernetes/overlays/production/n8n-env-patch.yaml new file mode 100644 index 0000000..d4985ea --- /dev/null +++ b/kubernetes/overlays/production/n8n-env-patch.yaml @@ -0,0 +1,29 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: n8n +spec: + template: + spec: + containers: + - name: n8n + env: + - name: DB_POSTGRESDB_HOST + value: prod-postgres-service + - name: QUEUE_BULL_REDIS_HOST + value: prod-redis-service.n8n.svc.cluster.local +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: n8n-worker +spec: + template: + spec: + containers: + - name: n8n-worker + env: + - name: DB_POSTGRESDB_HOST + value: prod-postgres-service + - name: QUEUE_BULL_REDIS_HOST + value: prod-redis-service.n8n.svc.cluster.local diff --git a/kubernetes/overlays/production/n8n-init-container-patch.yaml b/kubernetes/overlays/production/n8n-init-container-patch.yaml new file mode 100644 index 0000000..473860c --- /dev/null +++ b/kubernetes/overlays/production/n8n-init-container-patch.yaml @@ -0,0 +1,25 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: n8n +spec: + template: + spec: + initContainers: + - name: volume-permissions + securityContext: + runAsUser: 0 + runAsNonRoot: false +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: n8n-worker +spec: + template: + spec: + initContainers: + - name: volume-permissions + securityContext: + runAsUser: 0 + runAsNonRoot: false diff --git a/kubernetes/overlays/production/n8n-redis-credentials-patch.yaml b/kubernetes/overlays/production/n8n-redis-credentials-patch.yaml new file mode 100644 index 0000000..08e6270 --- /dev/null +++ b/kubernetes/overlays/production/n8n-redis-credentials-patch.yaml @@ -0,0 +1,31 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: n8n +spec: + template: + spec: + containers: + - name: n8n + env: + - name: QUEUE_BULL_REDIS_PASSWORD + valueFrom: + 
secretKeyRef: + name: prod-redis-secret + key: REDIS_PASSWORD +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: n8n-worker +spec: + template: + spec: + containers: + - name: n8n-worker + env: + - name: QUEUE_BULL_REDIS_PASSWORD + valueFrom: + secretKeyRef: + name: prod-redis-secret + key: REDIS_PASSWORD diff --git a/kubernetes/overlays/production/n8n-worker-scaledobject-patch.yaml b/kubernetes/overlays/production/n8n-worker-scaledobject-patch.yaml new file mode 100644 index 0000000..b15b9a1 --- /dev/null +++ b/kubernetes/overlays/production/n8n-worker-scaledobject-patch.yaml @@ -0,0 +1,14 @@ +apiVersion: keda.sh/v1alpha1 +kind: ScaledObject +metadata: + name: n8n-worker-scaler +spec: + minReplicaCount: 2 # Production should have higher minimum + maxReplicaCount: 20 # Higher capacity for production + cooldownPeriod: 600 # Longer cooldown to prevent rapid scaling + triggers: + - type: prometheus + metadata: + serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090 + threshold: "3" # More aggressive scaling for production + query: sum(rate(n8n_queue_waiting_jobs[5m])) diff --git a/kubernetes/overlays/production/postgres-networkpolicy.yaml b/kubernetes/overlays/production/postgres-networkpolicy.yaml new file mode 100644 index 0000000..e561e66 --- /dev/null +++ b/kubernetes/overlays/production/postgres-networkpolicy.yaml @@ -0,0 +1,29 @@ +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: postgres-network-policy +spec: + podSelector: + matchLabels: + service: postgres-n8n + policyTypes: + - Ingress + - Egress + ingress: + - from: + # Allow only n8n components to access Postgres + - podSelector: + matchLabels: + app: n8n + ports: + - protocol: TCP + port: 5432 + egress: + # Restrict outbound traffic to essentials only + - to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: kube-system + ports: + - protocol: UDP + port: 53 # DNS diff --git a/kubernetes/overlays/production/redis-container-patch.yaml b/kubernetes/overlays/production/redis-container-patch.yaml new file mode 100644 index 0000000..cdd1a34 --- /dev/null +++ b/kubernetes/overlays/production/redis-container-patch.yaml @@ -0,0 +1,47 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis +spec: + template: + spec: + containers: + - name: redis + image: redis:6-alpine + args: ["--requirepass", "$(REDIS_PASSWORD)"] + ports: + - containerPort: 6379 + resources: + requests: + memory: "256Mi" + cpu: "200m" + limits: + memory: "512Mi" + cpu: "500m" + env: + - name: REDIS_PASSWORD + valueFrom: + secretKeyRef: + name: prod-redis-secret + key: REDIS_PASSWORD + volumeMounts: + - mountPath: /data + name: redis-data + livenessProbe: + exec: + command: + - redis-cli + - ping + initialDelaySeconds: 5 + periodSeconds: 10 + readinessProbe: + exec: + command: + - redis-cli + - ping + initialDelaySeconds: 5 + periodSeconds: 5 + securityContext: + fsGroup: 999 + runAsUser: 999 + runAsNonRoot: true diff --git a/kubernetes/overlays/production/redis-networkpolicy.yaml b/kubernetes/overlays/production/redis-networkpolicy.yaml new file mode 100644 index 0000000..de5961f --- /dev/null +++ b/kubernetes/overlays/production/redis-networkpolicy.yaml @@ -0,0 +1,29 @@ +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: redis-network-policy +spec: + podSelector: + matchLabels: + service: redis + policyTypes: + - Ingress + - Egress + ingress: + - from: + # Allow only n8n components to access Redis + - podSelector: + matchLabels: + app: n8n + 
ports: + - protocol: TCP + port: 6379 + egress: + # Restrict outbound traffic to essentials only + - to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: kube-system + ports: + - protocol: UDP + port: 53 # DNS diff --git a/kubernetes/overlays/production/redis-secret.yaml b/kubernetes/overlays/production/redis-secret.yaml new file mode 100644 index 0000000..90f5d81 --- /dev/null +++ b/kubernetes/overlays/production/redis-secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: redis-secret + namespace: n8n +type: Opaque +stringData: + REDIS_PASSWORD: "ChangeThisToAStrongPasswordInProduction!"
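Both secrets above ship with placeholder values (`SuperSecretKeyChangeThis`, `ChangeThisToAStrongPasswordInProduction!`). A minimal sketch of rotating them around deployment of the production overlay, assuming `kubectl` with built-in Kustomize support and `openssl` on the admin workstation; resource names follow the overlay's `prod-` namePrefix. Keep in mind that `N8N_ENCRYPTION_KEY` should be set before n8n first starts, since changing it later makes previously stored credentials unreadable.

```bash
# Generate strong replacement values (assumption: openssl is available locally)
N8N_KEY=$(openssl rand -base64 32)
REDIS_PW=$(openssl rand -base64 24)

# Preview the rendered manifests, then apply the production overlay
kubectl kustomize kubernetes/overlays/production | less
kubectl apply -k kubernetes/overlays/production

# Overwrite the placeholder secrets (names carry the prod- namePrefix, namespace n8n)
kubectl -n n8n create secret generic prod-n8n-secret \
  --from-literal=N8N_ENCRYPTION_KEY="$N8N_KEY" \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl -n n8n create secret generic prod-redis-secret \
  --from-literal=REDIS_PASSWORD="$REDIS_PW" \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the consumers so they pick up the new secret values
kubectl -n n8n rollout restart deployment/prod-n8n deployment/prod-n8n-worker deployment/prod-redis
```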