using the matching operator. | | |
+| `topologySpreadConstraints` _[TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#topologyspreadconstraint-v1-core) array_ | Specifies how to spread matching pods among the given topology. | | |
+| `replicas` _integer_ | Desired number of instances.
In case of core nodes, each instance has a consistent identity. | 2 | Minimum: 0
|
+| `minAvailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at least "minAvailable" pods selected by
"selector" will still be available after the eviction, i.e. even in the
absence of the evicted pod. So for example you can prevent all voluntary
evictions by specifying "100%". | | XIntOrString: \{\}
|
+| `maxUnavailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at most "maxUnavailable" pods selected by
"selector" are unavailable after the eviction, i.e. even in absence of
the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable". | | XIntOrString: \{\}
|
+| `command` _string array_ | Entrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `args` _string array_ | Arguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `ports` _[ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#containerport-v1-core) array_ | List of ports to expose from the container.
Exposing a port here gives the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that
port from being exposed. Any port which is listening on the default `0.0.0.0` address inside a
container will be accessible from the network. | | |
+| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | List of environment variables to set in the container. | | |
+| `envFrom` _[EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envfromsource-v1-core) array_ | List of sources to populate environment variables from in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence. | | |
+| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Compute Resources required by this container.
More info: https://kubernetes.io/docs/concepts/config/manage-resources-containers/ | | |
+| `podSecurityContext` _[PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podsecuritycontext-v1-core)_ | Pod-level security attributes and common container settings. | \{ fsGroup:1000 fsGroupChangePolicy:Always runAsGroup:1000 runAsUser:1000 supplementalGroups:[1000] \} | |
+| `containerSecurityContext` _[SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#securitycontext-v1-core)_ | Security options the container should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | \{ runAsGroup:1000 runAsNonRoot:true runAsUser:1000 \} | |
+| `initContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | List of initialization containers belonging to the pod.
Init containers are executed in order prior to containers being started. If any
init container fails, the pod is considered to have failed and is handled according
to its restartPolicy. The name for an init container or normal container must be
unique among all containers.
Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
The resourceRequirements of an init container are taken into account during scheduling
by finding the highest request/limit for each resource type, and then using the max of
that value or the sum of the normal containers. Limits are applied to init containers
in a similar fashion.
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ | | |
+| `extraContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | Additional containers to run alongside the main container. | | |
+| `extraVolumes` _[Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volume-v1-core) array_ | Additional volumes to provide to a Pod. | | |
+| `extraVolumeMounts` _[VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volumemount-v1-core) array_ | Specifies how additional volumes are mounted into the main container. | | |
+| `livenessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container liveness.
Container will be restarted if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:3 httpGet:map[path:/status port:dashboard] initialDelaySeconds:60 periodSeconds:30 \} | |
+| `readinessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:12 httpGet:map[path:/status port:dashboard] initialDelaySeconds:10 periodSeconds:5 \} | |
+| `startupProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the `livenessProbe` failed.
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | | |
+| `lifecycle` _[Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#lifecycle-v1-core)_ | Actions that the management system should take in response to container lifecycle events. | | |
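+
+For orientation, here is a minimal sketch of how several of these template fields might be combined, shown here under `coreTemplate`; the replica count, environment variable, and resource values are illustrative assumptions, not defaults:
+
+```yaml
+coreTemplate:
+  spec:
+    replicas: 3
+    # Keep at least two pods available during voluntary disruptions:
+    minAvailable: 2
+    env:
+      # Illustrative EMQX config override via an environment variable:
+      - name: EMQX_LOG__CONSOLE__LEVEL
+        value: debug
+    resources:
+      requests:
+        cpu: 250m
+        memory: 512Mi
+```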
+
+
+#### EMQXSpec
+
+
+
+EMQXSpec defines the desired state of EMQX.
+
+
+
+_Appears in:_
+- [EMQX](#emqx)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `image` _string_ | EMQX container image.
More info: https://kubernetes.io/docs/concepts/containers/images | | |
+| `imagePullPolicy` _[PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#pullpolicy-v1-core)_ | Container image pull policy.
One of `Always`, `Never`, `IfNotPresent`.
Defaults to `Always` if `:latest` tag is specified, or `IfNotPresent` otherwise.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | |
+| `imagePullSecrets` _[LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#localobjectreference-v1-core) array_ | ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec.
If specified, these secrets will be passed to individual puller implementations for them to use.
More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod | | |
+| `serviceAccountName` _string_ | ServiceAccount name.
Managed ReplicaSets and StatefulSets are associated with the specified ServiceAccount for authentication purposes.
More info: https://kubernetes.io/docs/concepts/security/service-accounts | | |
+| `bootstrapAPIKeys` _[BootstrapAPIKey](#bootstrapapikey) array_ | Bootstrap API keys to access EMQX API.
Cannot be updated. | | |
+| `config` _[Config](#config)_ | EMQX Configuration. | | |
+| `clusterDomain` _string_ | Kubernetes cluster domain. | cluster.local | |
+| `revisionHistoryLimit` _integer_ | Number of old ReplicaSets, old StatefulSets and old PersistentVolumeClaims to retain to allow rollback. | 3 | |
+| `updateStrategy` _[UpdateStrategy](#updatestrategy)_ | Cluster upgrade strategy settings. | \{ type:Recreate \} | |
+| `coreTemplate` _[EMQXCoreTemplate](#emqxcoretemplate)_ | Template for Pods running EMQX core nodes. | \{ spec:map[replicas:2] \} | |
+| `replicantTemplate` _[EMQXReplicantTemplate](#emqxreplicanttemplate)_ | Template for Pods running EMQX replicant nodes. | | |
+| `dashboardServiceTemplate` _[ServiceTemplate](#servicetemplate)_ | Template for Service exposing the EMQX Dashboard.
Dashboard Service always points to the set of EMQX core nodes. | | |
+| `listenersServiceTemplate` _[ServiceTemplate](#servicetemplate)_ | Template for Service exposing enabled EMQX listeners.
Listeners Service points to the set of EMQX replicant nodes if they are enabled and exist.
Otherwise, it points to the set of EMQX core nodes. | | |
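+
+Taken together, a minimal EMQX CR exercising the fields above might look like this sketch (the license key is a placeholder):
+
+```yaml
+apiVersion: apps.emqx.io/v2beta1
+kind: EMQX
+metadata:
+  name: emqx
+spec:
+  image: emqx/emqx:@EE_VERSION@
+  config:
+    data: |
+      license {
+        key = "..."
+      }
+  coreTemplate:
+    spec:
+      replicas: 3
+  listenersServiceTemplate:
+    spec:
+      type: LoadBalancer
+```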
+
+
+#### EMQXStatus
+
+
+
+EMQXStatus defines the observed state of EMQX.
+
+
+
+_Appears in:_
+- [EMQX](#emqx)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#condition-v1-meta) array_ | Conditions representing the current status of the EMQX Custom Resource. | | |
+| `coreNodes` _[EMQXNode](#emqxnode) array_ | Status of each core node in the cluster. | | |
+| `coreNodesStatus` _[EMQXNodesStatus](#emqxnodesstatus)_ | Summary status of the set of core nodes. | | |
+| `replicantNodes` _[EMQXNode](#emqxnode) array_ | Status of each replicant node in the cluster. | | |
+| `replicantNodesStatus` _[EMQXNodesStatus](#emqxnodesstatus)_ | Summary status of the set of replicant nodes. | | |
+| `nodeEvacuationsStatus` _[NodeEvacuationStatus](#nodeevacuationstatus) array_ | Status of active node evacuations in the cluster. | | |
+| `dsReplication` _[DSReplicationStatus](#dsreplicationstatus)_ | Status of EMQX Durable Storage replication. | | |
+
+
+#### EvacuationStrategy
+
+
+
+
+
+
+
+_Appears in:_
+- [UpdateStrategy](#updatestrategy)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `connEvictRate` _integer_ | Client disconnect rate (number per second).
Same as `conn-evict-rate` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 1000 | Minimum: 1
|
+| `sessEvictRate` _integer_ | Session evacuation rate (number per second).
Same as `sess-evict-rate` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 1000 | Minimum: 1
|
+| `waitTakeover` _integer_ | Amount of time (in seconds) to wait before starting session evacuation.
Same as `wait-takeover` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 10 | Minimum: 0
|
+| `waitHealthCheck` _integer_ | Duration (in seconds) during which the node waits for the Load Balancer to remove it from the active backend node list.
Same as `wait-health-check` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 60 | Minimum: 0
|
+
+
+#### KeyRef
+
+
+
+
+
+
+
+_Appears in:_
+- [SecretRef](#secretref)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `secretName` _string_ | Name of the Secret object. | | |
+| `secretKey` _string_ | Entry within the Secret data. | | Pattern: `^[a-zA-Z\d-_]+$`
|
+
+
+#### NodeEvacuationStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeName` _string_ | Name of the node being evacuated. | | |
+| `state` _string_ | Evacuation state. | | |
+| `sessionRecipients` _string array_ | Session recipients. | | |
+| `sessionEvictionRate` _integer_ | Session eviction rate, in sessions per second. | | |
+| `connectionEvictionRate` _integer_ | Connection eviction rate, in connections per second. | | |
+| `initialSessions` _integer_ | Initial number of sessions on this node. | | |
+| `initialConnections` _integer_ | Initial number of connections to this node. | | |
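+
+A hypothetical status entry using these fields (node names, state value, and counts are illustrative only):
+
+```yaml
+status:
+  nodeEvacuationsStatus:
+    - nodeName: emqx@emqx-core-0.emqx-headless.default.svc.cluster.local
+      state: evicting_conns
+      sessionRecipients:
+        - emqx@emqx-core-1.emqx-headless.default.svc.cluster.local
+      connectionEvictionRate: 1000
+      sessionEvictionRate: 1000
+      initialConnections: 10000
+      initialSessions: 10000
+```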
+
+
+#### SecretRef
+
+
+
+
+
+
+
+_Appears in:_
+- [BootstrapAPIKey](#bootstrapapikey)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `key` _[KeyRef](#keyref)_ | Reference to a Secret entry containing the EMQX API Key. | | |
+| `secret` _[KeyRef](#keyref)_ | Reference to a Secret entry containing the EMQX API Key's secret. | | |
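+
+For example, a bootstrap API key whose key and secret both come from entries of a pre-created Secret might be declared as follows. This sketch assumes `BootstrapAPIKey` exposes the reference through a `secretRef` field; the Secret name and entry keys are placeholders:
+
+```yaml
+spec:
+  bootstrapAPIKeys:
+    - secretRef:
+        key:
+          secretName: emqx-api-key
+          secretKey: key
+        secret:
+          secretName: emqx-api-key
+          secretKey: secret
+```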
+
+
+#### ServiceTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `enabled` _boolean_ | Specifies whether the Service should be created. | true | |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[ServiceSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#servicespec-v1-core)_ | Specification of the desired state of a Service.
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
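+
+A sketch of how a ServiceTemplate is typically filled in within an EMQX CR (the annotation is a hypothetical example):
+
+```yaml
+spec:
+  dashboardServiceTemplate:
+    enabled: true
+    metadata:
+      annotations:
+        example.com/team: mqtt   # hypothetical annotation
+    spec:
+      type: LoadBalancer
+```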
+
+
+#### UpdateStrategy
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `type` _string_ | Determines how cluster upgrade is performed.
* `Recreate`: Perform blue-green upgrade. | Recreate | Enum: [Recreate]
|
+| `initialDelaySeconds` _integer_ | Number of seconds before connection evacuation starts. | 10 | Minimum: 0
|
+| `evacuationStrategy` _[EvacuationStrategy](#evacuationstrategy)_ | Evacuation strategy settings. | | |
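+
+Putting these fields together with EvacuationStrategy, an update strategy spelled out with the documented defaults would look like this:
+
+```yaml
+spec:
+  updateStrategy:
+    type: Recreate
+    initialDelaySeconds: 10
+    evacuationStrategy:
+      connEvictRate: 1000
+      sessEvictRate: 1000
+      waitTakeover: 10
+      waitHealthCheck: 60
+```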
+
diff --git a/en_US/deploy/kubernetes/operator/api-reference.md b/en_US/deploy/kubernetes/operator/reference/v2beta1-reference.md
similarity index 99%
rename from en_US/deploy/kubernetes/operator/api-reference.md
rename to en_US/deploy/kubernetes/operator/reference/v2beta1-reference.md
index 26a7e6096..ca99fb6ba 100644
--- a/en_US/deploy/kubernetes/operator/api-reference.md
+++ b/en_US/deploy/kubernetes/operator/reference/v2beta1-reference.md
@@ -1,4 +1,4 @@
-# API Reference
+# API Reference (v2beta1)
## Packages
- [apps.emqx.io/v2beta1](#appsemqxiov2beta1)
@@ -608,4 +608,3 @@ _Appears in:_
| `initialDelaySeconds` _integer_ | Number of seconds before evacuation connection start. | | |
| `evacuationStrategy` _[EvacuationStrategy](#evacuationstrategy)_ | Number of seconds before evacuation connection timeout. | | |
-
diff --git a/en_US/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-repliant.png b/en_US/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-replicant.png
similarity index 100%
rename from en_US/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-repliant.png
rename to en_US/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-replicant.png
diff --git a/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png b/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png
index 3c362b208..76a963d7b 100644
Binary files a/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png and b/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png differ
diff --git a/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png b/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png
index 1714bd142..52ff54bd3 100644
Binary files a/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png and b/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png differ
diff --git a/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png b/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png
index c2086f99f..64c052ab9 100644
Binary files a/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png and b/en_US/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png differ
diff --git a/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png b/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png
index 2dc20d6d1..255a3f35a 100644
Binary files a/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png and b/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png differ
diff --git a/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png b/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png
index ee34a1993..1854ea903 100644
Binary files a/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png and b/en_US/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png differ
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
index 961893c00..6eeb0e917 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
@@ -1,20 +1,15 @@
-# Upgrade the EMQX Cluster Elegantly through Blue-Green Deployment
+# Perform Blue-Green Upgrade of EMQX Cluster
-This page demonstrates how to gracefully upgrade the EMQX cluster through blue-green deployment.
+## Objective
-:::tip
-
-This feature only supports `apps.emqx.io/v1beta4 EmqxEnterprise` and `apps.emqx.io/v2beta1 EMQX`.
-
-:::
+Perform a graceful upgrade of the EMQX cluster through blue-green deployment.
## Background
-1. In traditional EMQX cluster deployment, the default rolling upgrade strategy of StatefulSet is usually used to update EMQX Pods. However, this approach has the following two problems:
+In traditional EMQX cluster deployment, StatefulSet's default rolling upgrade strategy is usually used to update EMQX Pods. However, this approach has the following two problems:
- 1. During the rolling update, both new and old Pods are selected by the corresponding Service. This may cause MQTT clients to connect to the wrong Pod, resulting in frequent disconnections and reconnections.
-
- 2. During the rolling update process, only N - 1 Pods can provide services because it takes some time for new Pods to start up and become ready. This may lead to a decrease in service availability.
+* During the rolling update, both new and old Pods are selected by the corresponding Service. This may cause MQTT clients to connect to old Pods that are being terminated, resulting in frequent disconnections and reconnections.
+* During the rolling update process, only _N - 1_ Pods can provide services at any given time because it takes some time for new Pods to start up and become ready. This may lead to a decrease in service availability.
```mermaid
timeline
@@ -43,20 +38,14 @@ timeline
## Solution
-Regarding the issue of rolling updates mentioned in the previous text, EMQX Operator provides a blue-green deployment upgrade solution. When upgrading the EMQX cluster using EMQX custom resources, EMQX Operator will create a new EMQX cluster and redirect the Kubernetes Service to the new EMQX cluster after it is ready. It will then gradually delete Pods from the old EMQX cluster to achieve the purpose of updating the EMQX cluster.
-
-When deleting Pods from the old EMQX cluster, EMQX Operator can also take advantage of the node evacuation feature of EMQX to transfer MQTT connections to the new cluster at a desired rate, avoiding issues with a large number of connections for a period of time.
-
-The entire upgrade process can be roughly divided into the following steps:
-
-1. Create a cluster with the same specifications.
-
-2. After the new cluster is ready, redirect the service to the new cluster and remove the old cluster from the service. At this time, the new cluster starts to receive traffic, and existing connections in the old cluster are not affected.
-
-3. (Only supported by EMQX Enterprise Edition) Use EMQX node evacuation function to evacuate connections on each node one by one.
+EMQX Operator performs blue-green deployment by default: whenever the cluster specification is updated through the corresponding EMQX CR, the Operator initiates an upgrade.
-4. Gradually scale down the old cluster to 0 nodes.
+The entire upgrade process is roughly divided into the following steps:
+1. Create a set of new EMQX nodes with updated specifications.
+2. Redirect the Service resources to the new set of nodes once they are ready, ensuring that no new connections are routed to the old set.
+3. Safely migrate existing MQTT connections from the old set of nodes to the new set of nodes at a controlled rate to avoid reconnect storms.
+4. Gradually scale down the old set of EMQX nodes.
5. Complete the upgrade.
```mermaid
@@ -96,147 +85,82 @@ timeline
: pod-2
```
-## Configuration Update Strategy
-
-:::: tabs type:card
-::: tab apps.emqx.io/v2beta1
-
-Create `apps.emqx.io/v2beta1` EMQX and configure update strategy.
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx-ee
-spec:
- image: emqx/emqx-enterprise:5.10
- config:
- data: |
- license {
- key = "..."
- }
- updateStrategy:
- evacuationStrategy:
- connEvictRate: 1000
- sessEvictRate: 1000
- waitTakeover: 10
- initialDelaySeconds: 10
- type: Recreate
-```
-
-`initialDelaySeconds`:The waiting time before starting the update after all nodes are ready (unit: second).
-
-`waitTakeover`: Interval time when deleting a Pod (unit: second)。
-
-`connEvictRate`: MQTT client evacuation rate, only supported by EMQX Enterprise Edition (unit: count/second)。
-
-`sessEvictRate`: MQTT Session evacuation rate, only supported by EMQX Enterprise Edition (unit: count/second)。
-
-Save the above content as: `emqx-update.yaml`, execute the following command to deploy EMQX:
-
-```bash
-$ kubectl apply -f emqx-update.yaml
-
-emqx.apps.emqx.io/emqx-ee created
-```
-
-Check the status of the EMQX cluster, please make sure that `STATUS` is `Ready`. This may require some time to wait for the EMQX cluster to be ready.
-
-```bash
-$ kubectl get emqx
-
-NAME STATUS AGE
-emqx-ee Ready 8m33s
-```
-
-:::
-::: tab apps.emqx.io/v1beta4
-
-Create `apps.emqx.io/v1beta4 EmqxEnterprise` and configure update strategy.
-
-```yaml
-apiVersion: apps.emqx.io/v1beta4
-kind: EmqxEnterprise
-metadata:
- name: emqx-ee
-spec:
- blueGreenUpdate:
- initialDelaySeconds: 60
- evacuationStrategy:
- waitTakeover: 5
- connEvictRate: 200
- sessEvictRate: 200
- template:
- spec:
- emqxContainer:
- image:
- repository: emqx/emqx-ee
- version: 4.4.30
-```
-
-`initialDelaySeconds`: The waiting time before the start node is evacuated after all nodes are ready (unit: second).
-
-`waitTakeover`: The time to wait for the client to reconnect and take over the session after all connections are disconnected (unit: second).
-
-`connEvictRate`: MQTT client evacuation rate (unit: count/second)。
-
-`sessEvictRate`: MQTT Session evacuation speed (unit: count/second)。
-
-Save the above content as: `emqx-update.yaml`, execute the following command to deploy EMQX Enterprise Edition cluster:
-
-```bash
-$ kubectl apply -f emqx-update.yaml
+## Procedure
+
+### Configure the Update Strategy
+
+1. Create an `apps.emqx.io/v2beta1` EMQX CR and configure the update strategy.
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx-ee
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ updateStrategy:
+ evacuationStrategy:
+ # MQTT client evacuation rate, connections per second:
+ connEvictRate: 1000
+ # MQTT Session evacuation rate, sessions per second:
+ sessEvictRate: 1000
+ # Seconds to wait before starting session evacuation:
+ waitTakeover: 10
+ # Seconds to wait before connection evacuation starts:
+ initialDelaySeconds: 10
+ type: Recreate
+ ```
-emqxenterprise.apps.emqx.io/emqx-ee created
-```
+2. Save the above content as `emqx-update.yaml` and deploy it using `kubectl apply`:
-Check the status of the EMQX cluster, please make sure that `STATUS` is `Running`. This may require some time to wait for the EMQX cluster to be ready.
+ ```bash
+ $ kubectl apply -f emqx-update.yaml
+ emqx.apps.emqx.io/emqx-ee created
+ ```
-```bash
-$ kubectl get emqxenterprises
+3. Check the status of the EMQX cluster.
-NAME STATUS AGE
-emqx-ee Running 8m33s
-```
+ Make sure that `STATUS` is `Ready`. This may take a while.
-:::
-::::
+ ```bash
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx-ee Ready 8m33s
+ ```
-## Connect to EMQX Cluster Using MQTTX CLI
+### Connect to EMQX Cluster
-MQTT X CLI is an open-source MQTT 5.0 CLI Client that supports automatic reconnection. It is also a pure command-line mode MQTT X. It aims to help develop and debug MQTT services and applications faster without using a graphical interface. For documentation about MQTT X CLI, please refer to: [MQTTX CLI](https://mqttx.app/cli).
+[MQTTX](https://mqttx.app/cli) is an open-source, MQTT 5.0-compatible command-line client that supports automatic reconnection, designed to help with the development and debugging of MQTT services and applications.
-Execute the following command to connect to the EMQX cluster:
+Use MQTTX to connect to the EMQX cluster:
```bash
mqttx bench conn -h ${IP} -p ${PORT} -c 3000
-```
-
-Output is similar to:
-
-```bash
[10:05:21 AM] › ℹ Start the connect benchmarking, connections: 3000, req interval: 10ms
✔ success [3000/3000] - Connected
[10:06:13 AM] › ℹ Done, total time: 31.113s
```
-## Upgrade EMQX Cluster
+### Trigger the Upgrade
-- Any modifications made to the Pod Template will trigger the upgrade strategy of EMQX Operator.
+1. Trigger the upgrade by modifying the Pod template. Any modification to the Pod template triggers the update strategy of EMQX Operator.
- > In this article, we trigger the upgrade by modifying the Container ImagePullPolicy. Users can modify it according to their actual needs.
+ In this example, we trigger the upgrade by patching the `imagePullPolicy` field of the EMQX CR.
```bash
$ kubectl patch emqx emqx-ee --type=merge -p '{"spec": {"imagePullPolicy": "Never"}}'
-
emqx.apps.emqx.io/emqx-ee patched
```
-- Check status.
+2. Check the status of the upgrade process.
```bash
$ kubectl get emqx emqx-ee -o json | jq ".status.nodeEvacuationsStatus"
-
[
{
"connection_eviction_rate": 200,
@@ -260,42 +184,40 @@ Output is similar to:
]
```
- `connection_eviction_rate`: Node evacuation rate (unit: count/second).
-
- `node`: The node being evacuated currently.
+ | Field | Description |
+ |-------------------------|-----------------------------------------------------------------------|
+ | `node` | The node currently being evacuated. |
+ | `state` | Node evacuation phase. |
+ | `session_recipients` | MQTT session recipients. |
+ | `session_eviction_rate` | MQTT session eviction rate on this node (sessions per second). |
+ | `connection_eviction_rate` | MQTT connection eviction rate on this node (connections per second). |
+ | `initial_sessions` | Initial number of sessions on this node. |
+ | `initial_connected` | Initial number of connections on this node. |
+ | `current_sessions` | Current number of sessions on this node. |
+ | `current_connected` | Current number of connections on this node. |
- `session_eviction_rate`: Node session evacuation rate (unit: count/second).
-
- `session_recipients`: Session evacuation recipient list.
-
- `state`: Node evacuation phase.
-
- `stats`: Evacuation node statistical indicators, including current number of connections (current_connected), current number of sessions (current_sessions), initial number of connections (initial_connected), and initial number of sessions (initial_sessions).
-
-- Waiting for the upgrade to complete.
+3. Wait for the upgrade to complete.
```bash
$ kubectl get emqx
-
NAME STATUS AGE
emqx-ee Ready 8m33s
```
- Please make sure that the STATUS is Running, which requires some time to wait for the EMQX cluster to complete the upgrade.
-
- After the upgrade is completed, you can observe that the old EMQX nodes have been deleted by using the command $ kubectl get pods.
+ Make sure that the `STATUS` is `Ready`. Depending on the number of MQTT clients and sessions, the upgrade process may take a while.
+ After the upgrade is completed, you can verify that the old EMQX nodes have been deleted using `kubectl get pods`.
## Grafana Monitoring
-The monitoring graph of the number of connections during the upgrade process is shown below (using 10,000 connections as an example).
+The following monitoring graph shows the number of connections during the upgrade process, using 10,000 connections as an example.

-Total: Total number of connections, represented by the top line in the graph.
-
-emqx-ee-86f864f975: This prefix represents the 3 EMQX nodes before the upgrade.
-
-emqx-ee-648c45c747: This prefix represents the 3 EMQX nodes after the upgrade.
+| Label/Prefix | Description |
+|-------------------------|-----------------------------------------------------|
+| Total | Total number of connections; shown as the top line in the graph. |
+| `emqx-ee-86f864f975` | Name prefix for the set of 3 old EMQX nodes. |
+| `emqx-ee-648c45c747` | Name prefix for the set of 3 upgraded EMQX nodes. |
-As shown in the figure above, we have implemented graceful upgrade in Kubernetes through EMQX Kubernetes Operator's blue-green deployment. Through this solution, the total number of connections did not have a significant shake (depending on migration rate, server reception rate, client reconnection policy, etc.) during the upgrade process, which can greatly ensure the smoothness of the upgrade process, effectively prevent server overload, reduce business perception, and improve the stability of the service.
+As the graph illustrates, EMQX Operator performed a smooth blue-green upgrade: throughout the process, the total number of connections remained stable (subject to factors such as migration rate, server capacity, and client reconnection strategy). This approach ensures minimal disruption, prevents server overload, and enhances overall service stability.
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-config.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-config.md
index 87a7753ad..52e366855 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-config.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-config.md
@@ -1,16 +1,16 @@
-# Change EMQX Configurations
+# Change EMQX Configuration
-## Task Target
+## Objective
-Change EMQX configuration by `config.data` in EMQX Custom Resource.
+Change the EMQX configuration using the `.spec.config.data` field in the EMQX Custom Resource.
## Configure EMQX Cluster
-The main configuration file of EMQX is `/etc/emqx.conf`. Starting from version 5.0, EMQX adopts [HOCON](https://www.emqx.io/docs/en/v5.1/configuration/configuration.html#hocon-configuration-format) as the configuration file format.
+The EMQX CRD `apps.emqx.io/v2beta1` supports configuring the EMQX cluster through the `.spec.config.data` field. Refer to the [Configuration Manual](https://docs.emqx.com/en/enterprise/v6.0.0/hocon/) for the complete configuration reference.
-`apps.emqx.io/v2beta1 EMQX` supports configuring EMQX cluster through `.spec.config.data` field. For config.data configuration, please refer to the document: [Configuration Manual](https://www.emqx.io/docs/en/v5.1/configuration/configuration-manual.html#configuration-manual).
+EMQX uses [HOCON](../../../../configuration/configuration.md#hocon-configuration-format) as the configuration file format.
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following as a YAML file and deploy it using `kubectl apply`:
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -18,9 +18,10 @@ The main configuration file of EMQX is `/etc/emqx.conf`. Starting from version 5
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:5.10
+ image: emqx/emqx:@EE_VERSION@
imagePullPolicy: IfNotPresent
config:
+ # Configure a TCP listener named `test` listening on port 1884:
data: |
listeners.tcp.test {
bind = "0.0.0.0:1884"
@@ -37,51 +38,38 @@ The main configuration file of EMQX is `/etc/emqx.conf`. Starting from version 5
type: LoadBalancer
```
- > In the `.spec.config.data` field, we have configured a TCP listener for the EMQX cluster. The name of this listener is: test, and the listening port is: 1884.
+ ::: tip
+ The content of the `.spec.config.data` field is supplied to the EMQX container as its [`emqx.conf` configuration file](../../../../configuration/configuration.md#immutable-configuration-file).
+ :::
-+ Wait for the EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+2. Wait for the EMQX cluster to become ready. Check the status of the EMQX cluster using `kubectl get`, and make sure that `STATUS` is `Ready`. This may take some time.
```bash
$ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:5.10.0 Running 10m
+ NAME STATUS AGE
+ emqx Ready 10m
```
-+ Obtain the Dashboard External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
## Verify Configuration
-+ View EMQX cluster listener information
-
- ```bash
- $ kubectl exec -it emqx-core-0 -c emqx -- emqx ctl listeners
- ```
-
- You can get a print similar to the following, which means that the listener named `test` configured by us has taken effect.
-
- ```bash
- tcp:default
- listen_on: 0.0.0.0:1883
- acceptors: 16
- proxy_protocol : false
- running: true
- current_conn: 0
- max_conns : 1024000
- tcp:test
- listen_on: 0.0.0.0:1884
- acceptors: 16
- proxy_protocol : false
- running: true
- current_conn: 0
- max_conns : 1024000
- ```
+View the EMQX listeners' status.
+
+```bash
+$ kubectl exec -it emqx-core-0 -c emqx -- emqx ctl listeners
+tcp:default
+ listen_on: 0.0.0.0:1883
+ acceptors: 16
+ proxy_protocol : false
+ running: true
+ current_conn: 0
+ max_conns : 1024000
+tcp:test
+ listen_on: 0.0.0.0:1884
+ acceptors: 16
+ proxy_protocol : false
+ running: true
+ current_conn: 0
+ max_conns : 1024000
+```
+
+The output shows that the new listener `tcp:test` on port 1884 is running.
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
index 77c2ec7cb..a51dca93e 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
@@ -1,140 +1,125 @@
-# Enable Core + Replicant Cluster (EMQX 5.x)
+# Enable Core + Replicant Cluster
-## Task Target
+## Objective
-- Configure EMQX cluster Core node through `coreTemplate` field.
-- Configure EMQX cluster Replicant node through `replicantTemplate` field.
+- Configure EMQX cluster Core nodes through the `coreTemplate` field.
+- Configure EMQX cluster Replicant nodes through the `replicantTemplate` field.
-## Core Nodes And Replicant Nodes
+## Core and Replicant Nodes
-:::tip
-Just EMQX Enterprise Edition supports Core + Replicant cluster.
-:::
+Nodes in the EMQX cluster can have one of two roles: Core node and Replicant node.
+* Core nodes are responsible for data persistence in the cluster and serve as the authoritative source for shared cluster state such as routing tables, MQTT client channels, retained messages, cluster configuration, alarms, Dashboard user credentials, etc.
+* Replicant nodes are designed to be stateless and do not participate in database write operations. Adding or deleting Replicant nodes does not affect the redundancy of the cluster data.
-In EMQX 5.0, the nodes in the EMQX cluster can be divided into two roles: core (Core) node and replication (Replicant) node. The Core node is responsible for all write operations in the cluster, which is consistent with the behavior of the nodes in the EMQX 4.x cluster, and serves as the real data source of the EMQX database [Mria](https://github.com/emqx/mria) to store the routing table, Data such as sessions, configurations, alarms, and Dashboard user information. The Replicant node is designed to be stateless and does not participate in the writing of data. Adding or deleting Replicant nodes will not change the redundancy of the cluster data. For more information about the EMQX 5.0 architecture, please refer to the document: [EMQX 5.0 Architecture](../../../cluster/mria-introduction.md), the topological structure of the Core node and the Replicant node is shown in the following figure:
+Communication between Core and Replicant nodes in a typical EMQX cluster is illustrated in the following diagram:
-

+
+For more information about the EMQX Core-Replicant architecture, refer to the [Cluster Architecture](../../../cluster/mria-introduction.md) documentation.
+
:::tip
There must be at least one Core node in the EMQX cluster. For the purpose of high availability, EMQX Operator recommends that the EMQX cluster have at least three Core nodes.
:::
## Configure EMQX Cluster
-`apps.emqx.io/v2beta1 EMQX` supports configuring the Core node of the EMQX cluster through the `.spec.coreTemplate` field, and configuring the Replicant node of the EMQX cluster using the `.spec.replicantTemplate` field. For more information, please refer to: [API Reference](../api-reference.md#emqxspec).
-
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "..."
- }
- coreTemplate:
- spec:
- replicas: 2
- resources:
- requests:
- cpu: 250m
- memory: 512Mi
- replicantTemplate:
- spec:
- replicas: 3
- resources:
- requests:
- cpu: 250m
- memory: 1Gi
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
-
- > In the YAML above, we declared that this is an EMQX cluster consisting of two Core nodes and three Replicant nodes. Core nodes require a minimum of 512Mi of memory, and Replicant nodes require a minimum of 1Gi of memory. You can adjust according to the actual business load. In actual business, the Replicant node will accept all client requests, so the resources required by the Replicant node will be higher.
-
-+ Wait for the EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ Obtain the Dashboard External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring Core nodes of the EMQX cluster through the `.spec.coreTemplate` field, and configuring Replicant nodes of the EMQX cluster through the `.spec.replicantTemplate` field.
+
+1. Save the following content as a YAML file and deploy using `kubectl apply`.
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ replicas: 2
+ resources:
+ requests:
+ cpu: 250m
+ memory: 512Mi
+ replicantTemplate:
+ spec:
+ replicas: 3
+ resources:
+ requests:
+ cpu: 250m
+ memory: 1Gi
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
+
+ In the example above, the EMQX CR defines an EMQX cluster consisting of two Core nodes and three Replicant nodes.
+
+ Core nodes require a minimum of 512Mi of memory, and Replicant nodes require a minimum of 1Gi of memory. You can adjust these constraints according to the actual business load. Typically, Replicant nodes accept all client requests, so the resources required by Replicant nodes may be higher to accommodate many concurrent connections.
+
+2. Wait for the EMQX cluster to become ready. Check the status of the EMQX cluster with `kubectl get`, ensuring that `STATUS` is `Ready`. This may take some time.
+
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
## Verify EMQX Cluster
- Information about all the nodes in the cluster can be obtained by checking the `.status` of the EMQX custom resources.
-
- ```bash
- $ kubectl get emqx emqx -o json | jq .status.coreNodes
- [
- {
- "node": "emqx@emqx-core-0.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@emqx-core-1.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@emqx-core-2.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- }
- ]
- ```
-
-
- ```bash
- $ kubectl get emqx emqx -o json | jq .status.replicantNodes
- [
- {
- "node": "emqx@10.244.4.56",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@10.244.4.57",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@10.244.4.58",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- }
- ]
- ```
+You can view information about all nodes in the cluster by checking the `.status` field of the EMQX CR.
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.coreNodes
+[
+ {
+ "name": "emqx@emqx-core-adcdef012-0.emqx-headless.default.svc.cluster.local",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "core",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@emqx-core-adcdef012-1.emqx-headless.default.svc.cluster.local",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "core",
+ "version": "@EE_VERSION@"
+ }
+]
+```
+
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.replicantNodes
+[
+ {
+ "name": "emqx@10.244.4.56",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@10.244.4.57",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@10.244.4.58",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ }
+]
+```
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-license.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-license.md
index 1465a92d9..953429f23 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-license.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-license.md
@@ -1,111 +1,101 @@
-# License Configuration (EMQX Enterprise)
+# Manage License
-## Task Target
+## Objective
-- Configure EMQX Enterprise License.
-- Update EMQX Enterprise License.
+- Configure the EMQX Enterprise license.
+- Update the EMQX Enterprise license.
## Configure License
-EMQX Enterprise License can be applied for free on EMQ official website: [Apply for EMQX Enterprise License](https://www.emqx.com/en/apply-licenses/emqx).
+You can apply for an EMQX Enterprise license for free on the EMQX official website: [Apply for EMQX Enterprise License](https://www.emqx.com/en/apply-licenses/emqx).
## Configure EMQX Cluster
-`apps.emqx.io/v2beta1 EMQX` supports configuring EMQX cluster license through `.spec.config.data`. For config.data configuration, please refer to the document: [Configuration Manual](../../../../configuration/configuration.md). This field is only allowed to be configured when creating an EMQX cluster, and does not support updating.
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring the EMQX cluster license through the `.spec.config.data` field. Refer to the [Configuration Manual](https://docs.emqx.com/en/enterprise/v6.0.0/hocon/) for a complete configuration reference.
- > After the EMQX cluster is created, if the license needs to be updated, please update it through the EMQX Dashboard.
+1. Save the following as a YAML file and deploy it using `kubectl apply`.
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx-ee
+ spec:
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ image: emqx/emqx:@EE_VERSION@
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- config:
- data: |
- license {
- key = "..."
- }
- image: emqx/emqx-enterprise:@EE_VERSION@
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
+ ::: tip
- > The `license.key` in the `config.data` field represents the Licesne content. In this example, the License content is omitted, please fill it in by the user.
+ The `license.key` in the `.spec.config.data` field represents the license content. In this example, the license content is omitted. Please fill it in with your own license key.
-+ Wait for the EMQX cluster to be ready, you can check the status of the EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+ :::
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+2. Wait for the EMQX cluster to become ready.
-+ Obtain the Dashboard External IP of EMQX cluster and access EMQX console
+ Check the status of the EMQX cluster with `kubectl get` and ensure that `STATUS` is `Ready`. This may take some time.
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-ee-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
+ ```bash
+ $ kubectl get emqx emqx-ee
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
## Update License
-+ View License information
- ```bash
- $ pod_name="$(kubectl get pods -l 'apps.emqx.io/instance=emqx,apps.emqx.io/db-role=core' -o json | jq --raw-output '.items[0].metadata.name')"
- $ kubectl exec -it ${pod_name} -c emqx -- emqx_ctl license info
- ```
-
- The following output can be obtained. From the output, we can see the basic information of the license we applied for, including applicant's information, maximum connection supported by the license, and expiration time of the license.
- ```bash
- customer : Evaluation
- email : contact@emqx.io
- deployment : default
- max_connections : 100
- start_at : 2023-01-09
- expiry_at : 2028-01-08
- type : trial
- customer_type : 10
- expiry : false
- ```
-
-+ Modify EMQX custom resources to update the License.
- ```bash
- $ kubectl edit emqx emqx
- ...
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "${new_license_key}"
- }
- ...
- ```
-
- + Check if the EMQX cluster license has been updated.
- ```bash
- $ pod_name="$(kubectl get pods -l 'apps.emqx.io/instance=emqx,apps.emqx.io/db-role=core' -o json | jq --raw-output '.items[0].metadata.name')"
- $ kubectl exec -it ${pod_name} -c emqx -- emqx_ctl license info
- ```
-
- It can be seen from the "max_connections" field that the content of the License has been updated, indicating that the EMQX Enterprise Edition License update is successful. If the certificate information is not updated, you can wait for a while as there may be some delay in updating the License.
- ```bash
- customer : Evaluation
- email : contact@emqx.io
- deployment : default
- max_connections : 100000
- start_at : 2023-01-09
- expiry_at : 2028-01-08
- type : trial
- customer_type : 10
- expiry : false
- ```
+1. View the license information.
+
+ ```bash
+ $ kubectl exec -it service/emqx-ee-headless -c emqx -- emqx ctl license info
+ customer : Evaluation
+ email : contact@emqx.io
+ deployment : default
+ max_connections : 100
+ start_at : 2023-01-09
+ expiry_at : 2028-01-08
+ type : trial
+ customer_type : 10
+ expiry : false
+ ```
+
+ The output shows basic license information, including the applicant's information, the maximum number of connections supported by the license, and the expiration time.
+
+2. Modify the EMQX CR to update the license.
+
+ ```bash
+ $ kubectl edit emqx emqx-ee
+ ...
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "${new_license_key}"
+ }
+ ...
+ ```
+
+3. Verify that the license has been updated.
+
+ ```bash
+ $ kubectl exec -it service/emqx-ee-headless -c emqx -- emqx ctl license info
+ customer : Evaluation
+ email : contact@emqx.io
+ deployment : default
+ max_connections : 100000
+ start_at : 2023-01-09
+ expiry_at : 2028-01-08
+ type : trial
+ customer_type : 10
+ expiry : false
+ ```
+
+ The updated `max_connections` field clearly indicates that the EMQX Enterprise license has been updated successfully. Keep in mind that the license update may take time, so you may need to retry the command.
+
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
index 70a0c26b4..2f3d0a180 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
@@ -1,583 +1,604 @@
-# Collect EMQX Logs In Kubernetes
+# Collect EMQX Logs in Kubernetes
-## Task Target
+## Objective
Use ELK to collect EMQX cluster logs.
## Deploy ELK
-ELK is the capitalized abbreviation of the three open source frameworks of Elasticsearch, Logstash, and Kibana, and is also known as the Elastic Stack. [Elasticsearch](https://www.elastic.co/elasticsearch/) is a near-real-time search platform framework based on Lucene, distributed, and interactive through Restful, also referred to as: es. [Logstash](https://www.elastic.co/logstash/) is the central data flow engine of ELK, which is used to collect data in different formats from different targets (files/data storage/MQ), and supports after filtering Output to different destinations (file/MQ/redis/elasticsearch/kafka, etc.). [Kibana](https://www.elastic.co/kibana/) can display es data on a page and provide real-time analysis functions.
+**ELK** stands for Elasticsearch, Logstash, and Kibana (also known as the Elastic Stack):
+
+- [**Elasticsearch**](https://www.elastic.co/elasticsearch/): A distributed, near-real-time search and analytics engine based on Lucene that provides REST APIs for interacting with data.
+- [**Logstash**](https://www.elastic.co/logstash/): The data flow engine that collects, transforms, and forwards logs from various sources to different destinations.
+- [**Kibana**](https://www.elastic.co/kibana/): A web interface for visualizing and analyzing Elasticsearch data in real time.
### Deploy Single Node Elasticsearch
-The method of deploying single-node Elasticsearch is relatively simple. You can refer to the following YAML orchestration file to quickly deploy an Elasticsearch cluster.
-
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
-
- ```yaml
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: elasticsearch-logging
- namespace: kube-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- spec:
- ports:
- - port: 9200
- protocol: TCP
- targetPort: db
- selector:
- k8s-app: elasticsearch
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: elasticsearch-logging
- namespace: kube-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- ---
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- name: elasticsearch-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- rules:
- - apiGroups:
- - ""
- resources:
- - "services"
- - "namespaces"
- - "endpoints"
- verbs:
- - "get"
- ---
- kind: ClusterRoleBinding
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- namespace: kube-logging
- name: elasticsearch-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- subjects:
- - kind: ServiceAccount
- name: elasticsearch-logging
- namespace: kube-logging
- apiGroup: ""
- roleRef:
- kind: ClusterRole
- name: elasticsearch
- apiGroup: ""
- ---
- apiVersion: apps/v1
- kind: StatefulSet
- metadata:
- name: elasticsearch-logging
- namespace: kube-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- spec:
- serviceName: elasticsearch-logging
- replicas: 1
- selector:
- matchLabels:
- k8s-app: elasticsearch
- template:
- metadata:
- labels:
- k8s-app: elasticsearch
- spec:
- serviceAccountName: elasticsearch-logging
- containers:
- - image: docker.io/library/elasticsearch:7.9.3
- name: elasticsearch-logging
- limits:
- cpu: 1000m
- memory: 1Gi
- requests:
- cpu: 100m
- memory: 500Mi
- ports:
- - containerPort: 9200
- name: db
- protocol: TCP
- - containerPort: 9300
- name: transport
- protocol: TCP
- volumeMounts:
- - name: elasticsearch-logging
- mountPath: /usr/share/elasticsearch/data/
- env:
- - name: "NAMESPACE"
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- - name: "discovery.type"
- value: "single-node"
- - name: ES_JAVA_OPTS
- value: "-Xms512m -Xmx2g"
- # Elasticsearch requires vm.max_map_count to be at least 262144.
- # If your OS already sets up this number to a higher value, feel free
- # to remove this init container.
- initContainers:
- - name: elasticsearch-logging-init
- image: alpine:3.6
- command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
- securityContext:
- privileged: true
- - name: increase-fd-ulimit
- image: busybox
- imagePullPolicy: IfNotPresent
- command: ["sh", "-c", "ulimit -n 65536"]
- securityContext:
- privileged: true
- - name: elasticsearch-volume-init
- image: alpine:3.6
- command:
- -chmod
- - -R
- - "777"
- - /usr/share/elasticsearch/data/
- volumeMounts:
- - name: elasticsearch-logging
- mountPath: /usr/share/elasticsearch/data/
- volumeClaimTemplates:
- - metadata:
- name: elasticsearch-logging
- spec:
- storageClassName: ${storageClassName}
- accessModes: [ "ReadWriteOnce" ]
- resources:
- requests:
- storage: 10Gi
- ```
- > The `storageClassName` field indicates the name of `StorageClass`, you can use the command `kubectl get storageclass` to get the StorageClass that already exists in the Kubernetes cluster, or you can create a StorageClass according to your own needs.
-
-- Wait for the es to be ready, you can check the status of the es pod through the `kubectl get` command, make sure `STATUS` is `Running`
-
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=elasticsearch"
- NAME READY STATUS RESTARTS AGE
- elasticsearch-0 1/1 Running 0 16m
- ```
+Deploying a single-node Elasticsearch cluster is relatively simple. You can use the following YAML configuration file to quickly deploy an Elasticsearch cluster.
+
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: elasticsearch-logging
+ namespace: kube-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ spec:
+ ports:
+ - port: 9200
+ protocol: TCP
+ targetPort: db
+ selector:
+ k8s-app: elasticsearch
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: elasticsearch-logging
+ namespace: kube-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ ---
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+ name: elasticsearch-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ rules:
+ - apiGroups:
+ - ""
+ resources:
+ - "services"
+ - "namespaces"
+ - "endpoints"
+ verbs:
+ - "get"
+ ---
+ kind: ClusterRoleBinding
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+ namespace: kube-logging
+ name: elasticsearch-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ subjects:
+ - kind: ServiceAccount
+ name: elasticsearch-logging
+ namespace: kube-logging
+ apiGroup: ""
+ roleRef:
+ kind: ClusterRole
+ name: elasticsearch
+ apiGroup: ""
+ ---
+ apiVersion: apps/v1
+ kind: StatefulSet
+ metadata:
+ name: elasticsearch-logging
+ namespace: kube-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ spec:
+ serviceName: elasticsearch-logging
+ replicas: 1
+ selector:
+ matchLabels:
+ k8s-app: elasticsearch
+ template:
+ metadata:
+ labels:
+ k8s-app: elasticsearch
+ spec:
+ serviceAccountName: elasticsearch-logging
+ containers:
+ - image: docker.io/library/elasticsearch:7.9.3
+ name: elasticsearch-logging
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 500Mi
+ ports:
+ - containerPort: 9200
+ name: db
+ protocol: TCP
+ - containerPort: 9300
+ name: transport
+ protocol: TCP
+ volumeMounts:
+ - name: elasticsearch-logging
+ mountPath: /usr/share/elasticsearch/data/
+ env:
+ - name: "NAMESPACE"
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: "discovery.type"
+ value: "single-node"
+ - name: ES_JAVA_OPTS
+ value: "-Xms512m -Xmx2g"
+ # Elasticsearch requires vm.max_map_count to be at least 262144.
+ # If your OS already sets up this number to a higher value, feel free
+ # to remove this init container.
+ initContainers:
+ - name: elasticsearch-logging-init
+ image: alpine:3.6
+ command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
+ securityContext:
+ privileged: true
+ - name: increase-fd-ulimit
+ image: busybox
+ imagePullPolicy: IfNotPresent
+ command: ["sh", "-c", "ulimit -n 65536"]
+ securityContext:
+ privileged: true
+ - name: elasticsearch-volume-init
+ image: alpine:3.6
+ command:
+ -chmod
+ - -R
+ - "777"
+ - /usr/share/elasticsearch/data/
+ volumeMounts:
+ - name: elasticsearch-logging
+ mountPath: /usr/share/elasticsearch/data/
+ volumeClaimTemplates:
+ - metadata:
+ name: elasticsearch-logging
+ spec:
+ storageClassName: ${storageClassName}
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 10Gi
+ ```
+
+ :::tip
+
+ Use the `storageClassName` field to choose the appropriate [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). Run `kubectl get storageclass` to list the StorageClasses that already exist in the Kubernetes cluster, or create a StorageClass according to your needs.
+
+ :::
+
+2. Wait for Elasticsearch to be ready.
+
+ Check the status of the Elasticsearch pod using the `kubectl get` command and ensure that `STATUS` is `Running`.
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=elasticsearch"
+ NAME READY STATUS RESTARTS AGE
+ elasticsearch-0 1/1 Running 0 16m
+ ```
### Deploy Kibana
-This article uses `Deployment` to deploy Kibana to visualize the collected logs. `Service` uses `NodePort`.
-
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
-
- ```yaml
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: kibana
- namespace: kube-logging
- labels:
- k8s-app: kibana
- spec:
- type: NodePort
- - port: 5601
- nodePort: 35601
- protocol: TCP
- targetPort: ui
- selector:
- k8s-app: kibana
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: kibana
- namespace: kube-logging
- labels:
- k8s-app: kibana
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- spec:
- replicas: 1
- selector:
- matchLabels:
- k8s-app: kibana
- template:
- metadata:
- labels:
- k8s-app: kibana
- annotations:
- seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
- spec:
- containers:
- -name: kibana
- image: docker.io/kubeimages/kibana:7.9.3
- resources:
- limits:
- cpu: 1000m
- requests:
- cpu: 100m
- env:
- # The access address of ES
- - name: ELASTICSEARCH_HOSTS
- value: http://elasticsearch-logging:9200
- ports:
- - containerPort: 5601
- name: ui
- protocol: TCP
- ```
-
-- Wait for Kibana to be ready, you can check the status of the Kibana pod through the `kubectl get` command, make sure `STATUS` is `Running`
-
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=kibana"
- NAME READY STATUS RESTARTS AGE
- kibana-b7d98644-48gtm 1/1 Running 0 17m
- ```
-
- Finally, in the browser, enter `http://{node_ip}:35601`, and you will enter the kibana web interface
+This walkthrough uses a `Deployment` to deploy Kibana for visualizing the collected logs, and a `Service` of type `NodePort` to expose Kibana externally.
+
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: kibana
+ namespace: kube-logging
+ labels:
+ k8s-app: kibana
+ spec:
+ type: NodePort
+ ports:
+ - port: 5601
+ nodePort: 35601
+ protocol: TCP
+ targetPort: ui
+ selector:
+ k8s-app: kibana
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: kibana
+ namespace: kube-logging
+ labels:
+ k8s-app: kibana
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ k8s-app: kibana
+ template:
+ metadata:
+ labels:
+ k8s-app: kibana
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
+ spec:
+ containers:
+ - name: kibana
+ image: docker.io/kubeimages/kibana:7.9.3
+ resources:
+ limits:
+ cpu: 1000m
+ requests:
+ cpu: 100m
+ env:
+ # The access address of ES
+ - name: ELASTICSEARCH_HOSTS
+ value: http://elasticsearch-logging:9200
+ ports:
+ - containerPort: 5601
+ name: ui
+ protocol: TCP
+ ```
+
+2. Wait for Kibana to be ready.
+
+ Check the status of the Kibana pod using the `kubectl get` command and ensure that `STATUS` is `Running`.
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=kibana"
+ NAME READY STATUS RESTARTS AGE
+ kibana-b7d98644-48gtm 1/1 Running 0 17m
+ ```
+
+3. In your browser, navigate to `http://{node_ip}:35601` to access the Kibana web interface.
### Deploy Filebeat
-[Filebeat](https://www.elastic.co/beats/filebeat) is a lightweight eating log collection component, which is part of the Elastic Stack and can work seamlessly with Logstash, Elasticsearch and Kibana. Whether you're transforming or enriching logs and files with Logstash, throwing around some data analysis in Elasticsearch, or building and sharing dashboards in Kibana, Filebeat makes it easy to get your data where it matters most.
-
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
-
- ```yaml
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: filebeat-config
- namespace: kube-system
- labels:
- k8s-app: filebeat
- data:
- filebeat.yml: |-
- filebeat.inputs:
- - type: container
- paths:
- # The log path of the EMQX container on the host
- - /var/log/containers/^emqx.*.log
- processors:
- - add_kubernetes_metadata:
- host: ${NODE_NAME}
- matchers:
- - logs_path:
- logs_path: "/var/log/containers/"
- output.logstash:
- hosts: ["logstash:5044"]
- enabled: true
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: filebeat
- namespace: kube-logging
- labels:
- k8s-app: filebeat
- ---
- apiVersion: rbac.authorization.k8s.io/v1beta1
- kind: ClusterRole
- metadata:
- name: filebeat
- labels:
- k8s-app: filebeat
- rules:
- - apiGroups: [""]
- resources:
- - namespaces
- - pods
- verbs:
- - get
- - watch
- - list
- ---
- apiVersion: rbac.authorization.k8s.io/v1beta1
- kind: ClusterRoleBinding
- metadata:
- name: filebeat
- subjects:
- - kind: ServiceAccount
- name: filebeat
- namespace: kube-logging
- roleRef:
- kind: ClusterRole
- name: filebeat
- apiGroup: rbac.authorization.k8s.io
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: filebeat
- namespace: kube-logging
- labels:
- k8s-app: filebeat
- spec:
- selector:
- matchLabels:
- k8s-app: filebeat
- template:
- metadata:
- labels:
- k8s-app: filebeat
- spec:
- serviceAccountName: filebeat
- terminationGracePeriodSeconds: 30
- containers:
- - name: filebeat
- image: docker.io/kubeimages/filebeat:7.9.3
- args: [
- "-c", "/etc/filebeat.yml",
- "-e","-httpprof","0.0.0.0:6060"
- ]
- env:
- - name: NODE_NAME
- valueFrom:
- fieldRef:
- fieldPath: spec.nodeName
- - name: ELASTICSEARCH_HOST
- value: elasticsearch
- - name: ELASTICSEARCH_PORT
- value: "9200"
- securityContext:
- runAsUser: 0
- resources:
- limits:
- memory: 1000Mi
- cpu: 1000m
- requests:
- memory: 100Mi
- cpu: 100m
- volumeMounts:
- - name: config
- mountPath: /etc/filebeat.yml
- readOnly: true
- subPath: filebeat.yml
- - name: data
- mountPath: /usr/share/filebeat/data
- - name: varlibdockercontainers
- mountPath: /data/var/
- readOnly: true
- -name: varlog
- mountPath: /var/log/
- readOnly: true
- -name: timezone
- mountPath: /etc/localtime
- volumes:
- - name: config
- configMap:
- defaultMode: 0600
- name: filebeat-config
- - name: varlibdockercontainers
- hostPath:
- path: /data/var/
- -name: varlog
- hostPath:
- path: /var/log/
- - name: inputs
- configMap:
- defaultMode: 0600
- name: filebeat-inputs
- - name: data
- hostPath:
- path: /data/filebeat-data
- type: DirectoryOrCreate
- -name: timezone
- hostPath:
- path: /etc/localtime
- ```
-
-- Wait for Filebeat to be ready, you can check the status of the Filebeat pod through the `kubectl get` command, make sure `STATUS` is `Running`
-
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=filebeat"
- NAME READY STATUS RESTARTS AGE
- filebeat-82d2b 1/1 Running 0 45m
- filebeat-vwrjn 1/1 Running 0 45m
- ```
+[Filebeat](https://www.elastic.co/beats/filebeat) is a lightweight log collection component that is part of the Elastic Stack and works seamlessly with Logstash, Elasticsearch, and Kibana.
+
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: filebeat-config
+ namespace: kube-logging
+ labels:
+ k8s-app: filebeat
+ data:
+ filebeat.yml: |-
+ filebeat.inputs:
+ - type: container
+ paths:
+ # The log path of the EMQX container on the host
+ - /var/log/containers/^emqx.*.log
+ processors:
+ - add_kubernetes_metadata:
+ host: ${NODE_NAME}
+ matchers:
+ - logs_path:
+ logs_path: "/var/log/containers/"
+ output.logstash:
+ hosts: ["logstash:5044"]
+ enabled: true
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: filebeat
+ namespace: kube-logging
+ labels:
+ k8s-app: filebeat
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1beta1
+ kind: ClusterRole
+ metadata:
+ name: filebeat
+ labels:
+ k8s-app: filebeat
+ rules:
+ - apiGroups: [""]
+ resources:
+ - namespaces
+ - pods
+ verbs:
+ - get
+ - watch
+ - list
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1beta1
+ kind: ClusterRoleBinding
+ metadata:
+ name: filebeat
+ subjects:
+ - kind: ServiceAccount
+ name: filebeat
+ namespace: kube-logging
+ roleRef:
+ kind: ClusterRole
+ name: filebeat
+ apiGroup: rbac.authorization.k8s.io
+ ---
+ apiVersion: apps/v1
+ kind: DaemonSet
+ metadata:
+ name: filebeat
+ namespace: kube-logging
+ labels:
+ k8s-app: filebeat
+ spec:
+ selector:
+ matchLabels:
+ k8s-app: filebeat
+ template:
+ metadata:
+ labels:
+ k8s-app: filebeat
+ spec:
+ serviceAccountName: filebeat
+ terminationGracePeriodSeconds: 30
+ containers:
+ - name: filebeat
+ image: docker.io/kubeimages/filebeat:7.9.3
+ args: [
+ "-c", "/etc/filebeat.yml",
+ "-e","-httpprof","0.0.0.0:6060"
+ ]
+ env:
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: ELASTICSEARCH_HOST
+ value: elasticsearch
+ - name: ELASTICSEARCH_PORT
+ value: "9200"
+ securityContext:
+ runAsUser: 0
+ resources:
+ limits:
+ memory: 1000Mi
+ cpu: 1000m
+ requests:
+ memory: 100Mi
+ cpu: 100m
+ volumeMounts:
+ - name: config
+ mountPath: /etc/filebeat.yml
+ readOnly: true
+ subPath: filebeat.yml
+ - name: data
+ mountPath: /usr/share/filebeat/data
+ - name: varlibdockercontainers
+ mountPath: /data/var/
+ readOnly: true
+ - name: varlog
+ mountPath: /var/log/
+ readOnly: true
+ - name: timezone
+ mountPath: /etc/localtime
+ volumes:
+ - name: config
+ configMap:
+ defaultMode: 0600
+ name: filebeat-config
+ - name: varlibdockercontainers
+ hostPath:
+ path: /data/var/
+ - name: varlog
+ hostPath:
+ path: /var/log/
+ - name: inputs
+ configMap:
+ defaultMode: 0600
+ name: filebeat-inputs
+ - name: data
+ hostPath:
+ path: /data/filebeat-data
+ type: DirectoryOrCreate
+ - name: timezone
+ hostPath:
+ path: /etc/localtime
+ ```
+
+2. Wait for Filebeat to become ready.
+
+ Check the status of Filebeat pods using the `kubectl get` command and ensure that `STATUS` is `Running`.
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=filebeat"
+ NAME READY STATUS RESTARTS AGE
+ filebeat-82d2b 1/1 Running 0 45m
+ filebeat-vwrjn 1/1 Running 0 45m
+ ```
### Deploy Logstash
-This is mainly to combine the business needs and the secondary utilization of logs, and Logstash is added for log cleaning. This article uses the [Beats Input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html) of Logstash to collect logs, and uses the [Ruby filter plugin](https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html) to filter logs. Logstash also provides many other input and filtering plug-ins for users to use, and you can configure appropriate plug-ins according to your business needs.
-
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
-
- ```yaml
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: logstash
- namespace: kube-system
- spec:
- ports:
- - port: 5044
- targetPort: beats
- selector:
- k8s-app: logstash
- clusterIP: None
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: logstash
- namespace: kube-system
- spec:
- selector:
- matchLabels:
- k8s-app: logstash
- template:
- metadata:
- labels:
- k8s-app: logstash
- spec:
- containers:
- - image: docker.io/kubeimages/logstash:7.9.3
- name: logstash
- ports:
- - containerPort: 5044
- name: beats
- command:
- - logstash
- - '-f'
- - '/etc/logstash_c/logstash.conf'
- env:
- - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
- value: "http://elasticsearch-logging:9200"
- volumeMounts:
- - name: config-volume
- mountPath: /etc/logstash_c/
- - name: config-yml-volume
- mountPath: /usr/share/logstash/config/
- -name: timezone
- mountPath: /etc/localtime
- resources:
- limits:
- cpu: 1000m
- memory: 2048Mi
- requests:
- cpu: 512m
- memory: 512Mi
- volumes:
- - name: config-volume
- configMap:
- name: logstash-conf
- items:
- - key: logstash.conf
- path: logstash.conf
- -name: timezone
- hostPath:
- path: /etc/localtime
- - name: config-yml-volume
- configMap:
- name: logstash-yml
- items:
- - key: logstash.yml
- path: logstash.yml
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: logstash-conf
- namespace: kube-logging
- labels:
- k8s-app: logstash
- data:
- logstash.conf: |-
- input {
- beats {
- port => 5044
- }
- }
- filter {
- ruby {
- code => "
- ss = event.get('message').split(' ')
- len = ss. length()
- level = ''
- index = ''
- msg = ''
- if len == 0 || len < 2
- event.set('level','invalid')
- return
- end
- if ss[1][0] == '['
- l = ss[1].length()
- level = ss[1][1..l-2]
- index = 2
- else
- level = 'info'
- index = 0
- end
- event.set('level',level)
- for i in ss[index..len]
- msg = msg + i
- msg = msg + ' '
- end
- event.set('message',msg)
- "
- }
- if [level] == "invalid" {
- drop {}
- }
- }
- output {
- elasticsearch {
- hosts => ["http://elasticsearch-logging:9200"]
- codec => json
- index => "logstash-%{+YYYY.MM.dd}"
- }
- }
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: logstash
- namespace: kube-logging
- labels:
- k8s-app: logstash
- data:
- logstash.yml: |-
- http.host: "0.0.0.0"
- xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
- ```
-
-- Wait for Logstash to be ready, you can view the status of the Logstash pod through the `kubectl get` command, make sure `STATUS` is `Running`
-
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=logstash"
- NAME READY STATUS RESTARTS AGE
- filebeat-82d2b 1/1 Running 0 45m
- filebeat-vwrjn 1/1 Running 0 45m
- ```
+Logstash is used for log processing and cleaning.
+
+In this walkthrough, we use the [Beats Input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html) of Logstash to collect logs and the [Ruby filter plugin](https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html) to filter logs. Logstash also provides many other input and filtering plugins that you can configure according to your business needs.
+
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: logstash
+ namespace: kube-logging
+ spec:
+ ports:
+ - port: 5044
+ targetPort: beats
+ selector:
+ k8s-app: logstash
+ clusterIP: None
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: logstash
+ namespace: kube-logging
+ spec:
+ selector:
+ matchLabels:
+ k8s-app: logstash
+ template:
+ metadata:
+ labels:
+ k8s-app: logstash
+ spec:
+ containers:
+ - image: docker.io/kubeimages/logstash:7.9.3
+ name: logstash
+ ports:
+ - containerPort: 5044
+ name: beats
+ command:
+ - logstash
+ - '-f'
+ - '/etc/logstash_c/logstash.conf'
+ env:
+ - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
+ value: "http://elasticsearch-logging:9200"
+ volumeMounts:
+ - name: config-volume
+ mountPath: /etc/logstash_c/
+ - name: config-yml-volume
+ mountPath: /usr/share/logstash/config/
+ - name: timezone
+ mountPath: /etc/localtime
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 2048Mi
+ requests:
+ cpu: 512m
+ memory: 512Mi
+ volumes:
+ - name: config-volume
+ configMap:
+ name: logstash-conf
+ items:
+ - key: logstash.conf
+ path: logstash.conf
+ - name: timezone
+ hostPath:
+ path: /etc/localtime
+ - name: config-yml-volume
+ configMap:
+ name: logstash-yml
+ items:
+ - key: logstash.yml
+ path: logstash.yml
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: logstash-conf
+ namespace: kube-logging
+ labels:
+ k8s-app: logstash
+ data:
+ logstash.conf: |-
+ input {
+ beats {
+ port => 5044
+ }
+ }
+ filter {
+ ruby {
+ code => "
+ ss = event.get('message').split(' ')
+ len = ss.length()
+ level = ''
+ index = ''
+ msg = ''
+ if len == 0 || len < 2
+ event.set('level','invalid')
+ return
+ end
+ if ss[1][0] == '['
+ l = ss[1].length()
+ level = ss[1][1..l-2]
+ index = 2
+ else
+ level = 'info'
+ index = 0
+ end
+ event.set('level',level)
+ for i in ss[index..len]
+ msg = msg + i
+ msg = msg + ' '
+ end
+ event.set('message',msg)
+ "
+ }
+ if [level] == "invalid" {
+ drop {}
+ }
+ }
+ output {
+ elasticsearch {
+ hosts => ["http://elasticsearch-logging:9200"]
+ codec => json
+ index => "logstash-%{+YYYY.MM.dd}"
+ }
+ }
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: logstash-yml
+ namespace: kube-logging
+ labels:
+ k8s-app: logstash
+ data:
+ logstash.yml: |-
+ http.host: "0.0.0.0"
+ xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
+ ```
+
+2. Wait for Logstash to be ready.
+
+ Check the status of Logstash pods using the `kubectl get` command and ensure that `STATUS` is `Running`.
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=logstash"
+ NAME READY STATUS RESTARTS AGE
+ filebeat-82d2b 1/1 Running 0 45m
+ filebeat-vwrjn 1/1 Running 0 45m
+ ```
## Deploy EMQX Cluster
-To deploy the EMQX cluster, please refer to the document [Deploy EMQX](../getting-started.md).
+To deploy an EMQX cluster, please refer to the document [Deploy EMQX](../getting-started.md).
## Verify Log Collection
-- First log in to the Kibana interface, open the stack management module in the menu, click on the index management, you can find that there are already collected log indexes
+1. Log in to the Kibana interface, open the stack management module in the menu, and click on _Index Management_. You can see that log indices have already been collected.
- 
+ 
-- In order to be able to discover and view logs in Kibana, you need to set an index match, select index patterns, and click Create
+2. To discover and view logs in Kibana, you need to create an index pattern. Select index patterns and click _Create_.
- 
+ 
- 
+ 
-- Finally verify whether the EMQX cluster logs are collected
+3. Verify that the EMQX cluster logs are collected.
- 
+ 
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
index ad64a561d..14eae38cb 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
@@ -1,88 +1,72 @@
# Change EMQX Log Level
-## Task Target
+## Objective
-Modify the log level of EMQX cluster.
+Modify the log level in the EMQX cluster.
## Configure EMQX Cluster
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
-
-`apps.emqx.io/v2beta1 EMQX` supports configuration of EMQX cluster log level through `.spec.config.data`. The configuration of config.data can refer to the document: [Configuration Manual](https://www.emqx.io/docs/en/v5.1/configuration/configuration-manual.html#configuration-manual).
-
-> This field is only allowed to be configured when creating an EMQX cluster, and does not support updating. If you need to modify the cluster log level after creating EMQX, please modify it through EMQX Dashboard.
-
-+ Save the following content as a YAML file and deploy it with the kubectl apply command
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- log.console.level = debug
- license {
- key = "..."
- }
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- ```
-
- > The `.spec.config.data` field configures the EMQX cluster log level to `debug`.
-
-+ Wait for the EMQX cluster to be ready, you can check the status of the EMQX cluster through the kubectl get command, please make sure that `STATUS` is Running, this may take some time
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring the log level of the EMQX cluster through `.spec.config.data`. Refer to the [Configuration Manual](https://docs.emqx.com/en/enterprise/v6.0.0/hocon/) for a complete configuration reference.
+
+1. Save the following content as a YAML file and deploy it using `kubectl apply`:
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ # Enable debug logging:
+ data: |
+ log.console.level = debug
+ license {
+ key = "..."
+ }
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
+
+2. Wait for the EMQX cluster to become ready.
+
+ Check the status of the EMQX cluster with `kubectl get` and ensure that `STATUS` is `Ready`. This may take some time.
+
+ ```bash
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
## Verify Log Level
-[MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers to more Quickly develop and debug MQTT services and applications.
-
-+ Obtain the External IP of EMQX cluster
-
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+1. Obtain the External IP of the EMQX cluster.
-+ Use MQTTX CLI to connect to EMQX cluster
+ ```bash
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
+ ```
- ```bash
- $ mqttx conn -h ${external_ip} -p 1883
+2. Use MQTTX CLI to connect to the EMQX cluster.
- [4/17/2023] [5:17:31 PM] › … Connecting...
- [4/17/2023] [5:17:31 PM] › ✔ Connected
- ```
+ [MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers start using MQTT services and applications more quickly.
-+ Use the command line to view EMQX cluster log information
+ ```
+ $ mqttx conn -h ${external_ip} -p 1883
+ [4/17/2023] [5:17:31 PM] › … Connecting...
+ [4/17/2023] [5:17:31 PM] › ✔ Connected
+ ```
- ```bash
- $ kubectl logs emqx-core-0 -c emqx
- ```
+3. View EMQX container logs.
- You can get a print similar to the following, which means that EMQX has received a CONNECT message from the client and replied a CONNACK message to the client:
+ ```bash
+ $ kubectl logs emqx-core-0 -c emqx
+ ...
+ 2023-04-17T09:11:35.993031+00:00 [debug] msg: mqtt_packet_received, mfa: emqx_channel:handle_in/2, line: 360, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNECT(Q0, R0, D0, ClientId=mqttx_322680d9, ProtoName=MQTT, ProtoVsn=5, CleanStart=true, KeepAlive=30, Username=undefined, Password=), tag: MQTT
+ 2023-04-17T09:11:35.997066+00:00 [debug] msg: mqtt_packet_sent, mfa: emqx_connection:serialize_and_inc_stats_fun/1, line: 872, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNACK(Q0, R0, D0, AckFlags=0, ReasonCode=0), tag: MQTT
+ ```
- ```bash
- 2023-04-17T09:11:35.993031+00:00 [debug] msg: mqtt_packet_received, mfa: emqx_channel:handle_in/2, line: 360, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNECT(Q0, R0, D0, ClientId=mqttx_322680d9, ProtoName=MQTT, ProtoVsn=5, CleanStart=true, KeepAlive=30, Username=undefined, Password=), tag: MQTT
- 2023-04-17T09:11:35.997066+00:00 [debug] msg: mqtt_packet_sent, mfa: emqx_connection:serialize_and_inc_stats_fun/1, line: 872, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNACK(Q0, R0, D0, AckFlags=0, ReasonCode=0), tag: MQTT
- ```
+
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
index 91c9a0fd9..9d6f93d05 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
@@ -1,115 +1,109 @@
-# Enable Persistence In EMQX Cluster
+# Enable Persistence in EMQX Cluster
-## Task Target
+## Objective
-Configure EMQX 5.x cluster Core node persistence through `volumeClaimTemplates` field.
+Configure persistence for the set of Core nodes of an EMQX cluster through the `volumeClaimTemplates` field.
## Configure EMQX Cluster Persistence
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring persistence of each core node data through `.spec.coreTemplate.spec.volumeClaimTemplates`.
-`apps.emqx.io/v2beta1 EMQX` supports configuration of EMQX cluster Core node persistence through `.spec.coreTemplate.spec.volumeClaimTemplates` field. The semantics and configuration of `.spec.coreTemplate.spec.volumeClaimTemplates` field are consistent with `PersistentVolumeClaimSpec` of Kubernetes, and its configuration can refer to the document: [PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#persistentvolumeclaimspec-v1-core).
+The definition and semantics of the `.spec.coreTemplate.spec.volumeClaimTemplates` field are consistent with those of `PersistentVolumeClaimSpec` defined in the Kubernetes API.
-When the user configures the `.spec.coreTemplate.spec.volumeClaimTemplates` field, EMQX Operator will mount the `/opt/emqx/data` directory in the EMQX container to [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) in the PV and PVC created, when the EMQX Pod is deleted, the PV and PVC will not be deleted, so as to achieve the purpose of saving EMQX runtime data. For more information about PV and PVC, refer to the document [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
+When you specify the `.spec.coreTemplate.spec.volumeClaimTemplates` field, EMQX Operator configures the `/opt/emqx/data` volume of the EMQX container to be backed by a Persistent Volume Claim (PVC), which provisions a Persistent Volume (PV) using a specified [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). As a result, when an EMQX Pod is deleted, the associated PV and PVC are retained, preserving EMQX runtime data.
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
+For more details about PVs and PVCs, refer to the [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) documentation.
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "..."
- }
- coreTemplate:
- spec:
- volumeClaimTemplates:
- storageClassName: standard
- resources:
- requests:
- storage: 20Mi
- accessModes:
- - ReadWriteOnce
- replicas: 3
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
- > `storageClassName` field indicates the name of the StorageClass. You can use the command `kubectl get storageclass` to get the StorageClass that already exists in the Kubernetes cluster, or you can create a StorageClass according to your own needs.
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ volumeClaimTemplates:
+ storageClassName: standard
+ resources:
+ requests:
+ storage: 20Mi
+ accessModes:
+ - ReadWriteOnce
+ replicas: 3
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
-+ Wait for EMQX cluster to be ready, you can check the status of the EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+ ::: tip
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+ Use the `storageClassName` field to choose the appropriate [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) for EMQX data. Run `kubectl get storageclass` to list the StorageClasses that already exist in the Kubernetes cluster, or create a StorageClass according to your needs.
-+ Obtain the Dashboard External IP of the EMQX cluster and access the EMQX console
+ :::
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+2. Wait for the EMQX cluster to become ready.
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
+ Check the status of the EMQX cluster with `kubectl get` and ensure that `STATUS` is `Ready`. This may take some time.
- 192.168.1.200
- ```
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
+## Verify Persistence
-## Verify EMQX Cluster Persistence
+1. Create a test rule in the EMQX Dashboard.
-Verification scheme: 1) Passed in the old EMQX Dashboard creates a test rule; 2) Deletes the old cluster; 3) Recreates the EMQX cluster,and checks whether the previously created rule exists through the Dashboard.
+ ```bash
+ external_ip=$(kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
-+ Access EMQX Dashboard through browser to create test rules
+ - Log in to the EMQX Dashboard at `http://${external_ip}:18083`.
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+ - Navigate to **Integration** -> **Rules** to create a new rule.
- Login EMQX Dashboard by accessing `http://${external_ip}:18083`, and click Data Integration → Rules to enter the page for creating rules. Let’s first click the button to add an action Add a response action for this rule, and then click Create to generate a rule, as shown in the following figure:
+ - Attach a simple action to this rule.
- 
+ - Click **Save** to generate a rule, as shown in the following figure:
- When our rule is successfully created, a rule record will appear on the page with the rule ID: emqx-persistent-test, as shown in the figure below:
+ 
- 
+ Once the rule is created successfully, a corresponding record with `emqx-persistent-test` ID will appear on the page, as shown in the figure below:
-+ delete old EMQX cluster
+ 
- Execute the following command to delete the EMQX cluster:
+2. Delete the old EMQX cluster.
- ```bash
- $ kubectl delete -f emqx.yaml
+ Run the following command to delete the EMQX cluster, where `emqx.yaml` is the file you used to deploy the cluster earlier:
- emqx.apps.emqx.io "emqx" deleted
- # emqxenterprise.apps.emqx.io "emqx" deleted
- ```
+ ```bash
+ $ kubectl delete -f emqx.yaml
+ emqx.apps.emqx.io "emqx" deleted
+ ```
- > emqx-persistent.yaml is the YAML file used to deploy the EMQX cluster for the first time in this article, and this file does not need to be changed.
+3. Re-deploy the EMQX cluster.
-+ Recreate the EMQX cluster
+ Run the following command to re-deploy the EMQX cluster:
- Execute the following command to recreate the EMQX cluster:
+ ```bash
+ $ kubectl apply -f emqx.yaml
+ emqx.apps.emqx.io/emqx created
+ ```
- ```bash
- $ kubectl apply -f emqx.yaml
+4. Wait for the EMQX cluster to be ready. Access the EMQX Dashboard through your browser to verify that the previously created rule still exists, as shown in the following figure:
- emqx.apps.emqx.io/emqx created
- # emqxenterprise.apps.emqx.io/emqx created
- ```
+ 
- Wait for the EMQX cluster to be ready, and then access the EMQX Dashboard through the browser to check whether the previously created rules exist, as shown in the following figure:
-
- 
-
- It can be seen from the figure that the rule emqx-persistent-test created in the old cluster still exists in the new cluster, which means that the persistence we configured is in effect.
+ The `emqx-persistent-test` rule created in the old cluster still exists in the new cluster, which confirms that the persistence configuration is working correctly.
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
index eeaab5edd..7a0d6012e 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
@@ -1,18 +1,17 @@
-# Monitor EMQX cluster by Prometheus and Grafana
+# Monitor EMQX Cluster by Prometheus and Grafana
-## Task Target
-Deploy [EMQX Exporter](https://github.com/emqx/emqx-exporter) and monitor EMQX cluster by Prometheus and Grafana.
+## Objective
+
+Deploy [EMQX Exporter](https://github.com/emqx/emqx-exporter) and monitor an EMQX cluster using Prometheus and Grafana.
## Deploy Prometheus and Grafana
-Prometheus' deployment documentation can refer to [Prometheus](https://github.com/prometheus-operator/prometheus-operator)
-Grafana' deployment documentation can refer to [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/)
+* To learn more about Prometheus deployment, refer to the [Prometheus](https://github.com/prometheus-operator/prometheus-operator) documentation.
+* To learn more about Grafana deployment, refer to [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/) documentation.
## Deploy EMQX Cluster
-Here are the relevant configurations for EMQX Custom Resource. You can choose the corresponding APIVersion based on the version of EMQX you wish to deploy. For specific compatibility relationships, please refer to [EMQX Operator Compatibility](../operator.md):
-
-EMQX supports exposing indicators through the http interface. For all statistical indicators under the cluster, please refer to the document: [Integrate with Prometheus](../../../../observability/prometheus.md)
+EMQX exposes various metrics through the [Prometheus-compatible HTTP API](../../../../observability/prometheus.md).
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -20,7 +19,7 @@ kind: EMQX
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -32,24 +31,24 @@ Save the above content as `emqx.yaml` and execute the following command to deplo
```bash
$ kubectl apply -f emqx.yaml
-
emqx.apps.emqx.io/emqx created
```
-Check the status of the EMQX cluster and make sure that `STATUS` is `Running`, which may take some time to wait for the EMQX cluster to be ready.
+Check the status of the EMQX cluster and make sure that `STATUS` is `Ready`. This may take some time.
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+```bash
+$ kubectl get emqx emqx
+NAME STATUS AGE
+emqx Ready 10m
+```
-## Create API Secret
-emqx-exporter and Prometheus will pull metrics from EMQX dashboard API, so you need to sign in to dashboard to create an [API Key](../../../../dashboard/system.md#api-keys).
+## Create API secret
+
+Prometheus is going to pull metrics from the EMQX Dashboard API, so you need to sign in to the Dashboard to [create an API secret](../../../../dashboard/system.md#api-keys).
## Deploy [EMQX Exporter](https://github.com/emqx/emqx-exporter)
-The `emqx-exporter` is designed to expose partial metrics that are not included in the EMQX Prometheus API.
+The `emqx-exporter` is designed to expose partial metrics that are not exposed in the EMQX Prometheus API.
```yaml
apiVersion: v1
@@ -109,24 +108,24 @@ spec:
memory: 20Mi
```
-> Set the arg "--emqx.nodes" to the service name that creating by operator for exposing 18083 port. Check out the service name by call `kubectl get svc`.
+> Set the arg "--emqx.nodes" to the service name that creating by operator for exposing 18083 port. Look up the service name by calling `kubectl get svc`.
-Save the above content as `emqx-exporter.yaml`, replace `--emqx.auth-username` and `--emqx.auth-password` with your new creating API secret, then execute the following command to deploy the emqx-exporter:
+Save the above content as `emqx-exporter.yaml`, replacing `--emqx.auth-username` and `--emqx.auth-password` with your new API secret. Run the following command to deploy the `emqx-exporter`:
```bash
kubectl apply -f emqx-exporter.yaml
```
-Check the status of emqx-exporter pod。
+Check the status of the `emqx-exporter` pod.
```bash
$ kubectl get po -l="app=emqx-exporter"
-
-NAME STATUS AGE
+NAME STATUS AGE
emqx-exporter-856564c95-j4q5v Running 8m33s
```
## Configure Prometheus Monitor
-Prometheus-operator uses [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor) and [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor) CRD to define how to monitor a set of pods or services dynamically.
+
+Prometheus Operator uses [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor) and [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor) CRDs to define how to monitor a set of pods or services dynamically.
```yaml
apiVersion: monitoring.coreos.com/v1
@@ -203,8 +202,9 @@ spec:
#- default
```
- `path` indicates the path of the indicator collection interface. In EMQX 5, the path is: `/api/v5/prometheus/stats`. `selector.matchLabels` indicates the label of the matching Pod: `apps.emqx.io/instance: emqx`.
- The value of targetLabel `cluster` represents the name of current cluster, make sure its uniqueness.
+`path` indicates the path of the indicator collection interface. In EMQX 5, the path is: `/api/v5/prometheus/stats`. `selector.matchLabels` indicates the label of the matching Pod: `apps.emqx.io/instance: emqx`.
+
+The value of the targetLabel `cluster` represents the name of the current cluster. Make sure it is unique.
Save the above content as `monitor.yaml` and execute the following command:
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
index 49e1b849c..2be27c3c9 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
@@ -1,4 +1,4 @@
-# Cluster Load Rebalancing (EMQX Enterprise)
+# Rebalance Cluster Load
## Task Target
@@ -6,12 +6,12 @@ How to rebalance MQTT connections.
## Why Need Load Rebalancing
-Cluster load rebalancing is the act of forcibly migrating client connections and sessions from one set of nodes to another. It will automatically calculate the number of connections that need to be migrated to achieve node balance, and then migrate the corresponding number of connections and sessions from high-load nodes to low-load nodes, thereby achieving load balancing between nodes. This operation is usually required to achieve balance after a new join or restart of a node.
+Cluster load rebalancing is the act of forcibly migrating client connections and sessions from one set of nodes to another. It will automatically calculate the number of connections that need to be migrated to achieve node balance, and then migrate the corresponding number of connections and sessions from high-load nodes to low-load nodes, thereby achieving load balancing between nodes. This operation is usually required to achieve balance after a new join or a restart of a node.
The value of rebalancing mainly has the following two points:
- **Improve system scalability**: Due to the persistent nature of MQTT connections, connections to the original nodes will not automatically migrate to the new nodes when the cluster scales. To address this, you can use the load rebalancing feature to smoothly transfer connections from overloaded nodes to newly-added ones. This process ensures a more balanced distribution of load across the entire cluster and enhances throughput, response speed, and resource utilization rate.
-- **Reduce O&M costs**: For clusters with unevenly distributed loads, where some nodes are overloaded while others remain idle, you can use the load rebalancing feature to automatically adjust the load within the cluster. This helps achieve a more balanced distribution of work and reduces operation and maintenance costs.
+- **Reduce O&M costs**: For clusters with unevenly distributed loads, where some nodes are overloaded while others remain idle, you can use the load rebalancing feature to automatically adjust the load within the cluster. This helps achieve a more balanced distribution of work and reduces operational and maintenance costs.
For EMQX cluster load rebalancing, please refer to the document: [Rebalancing](../../../cluster/rebalancing.md)
@@ -37,19 +37,23 @@ spec:
relSessThreshold: "1.1"
```
-> For Rebalance configuration, please refer to the document: [Rebalance reference](../api-reference.md#rebalancestrategy).
+> For Rebalance configuration, please refer to the document: [Rebalance reference](../reference/v2beta1-reference.md#rebalancestrategy).
## Test Load Rebalancing
### Cluster Load Distribution Before Rebalancing
-Before Rebalancing, we built a cluster with unbalanced load. And use Grafana + Prometheus to monitor the load of EMQX cluster:
+Before rebalancing, we intentionally created an EMQX cluster with an uneven distribution of connections. We then used Grafana and Prometheus to monitor the cluster load:

-It can be seen from the figure that there are four EMQX nodes in the current cluster, three of which carry 10,000 connections, and the remaining one has 0 connections. Next, we will demonstrate how to perform a rebalancing operation so that the load of the four nodes reaches a balanced state. Next, we will demonstrate how to perform a rebalancing operation so that the load of the four nodes reaches a balanced state.
+As shown in the graph, the cluster consists of four EMQX nodes. Three nodes each handle 10,000 connections, while one node has **zero** connections.
-- Submit the Rebalance task
+In the following example, we demonstrate how to perform a rebalancing operation to evenly distribute the load across all four nodes.
+
+#### Submit a Rebalance Task
+
+Create a `Rebalance` resource to initiate the rebalancing process:
```yaml
apiVersion: apps.emqx.io/v1beta4
@@ -70,14 +74,16 @@ spec:
relSessThreshold: "1.1"
```
-Save the above content as: rebalance.yaml, and execute the following command to submit the Rebalance task:
+Save the file as `rebalance.yaml`, and execute the following command to submit the Rebalance task:
```bash
$ kubectl apply -f rebalance.yaml
rebalance.apps.emqx.io/rebalance-sample created
```
-Execute the following command to view the rebalancing status of the EMQX cluster:
+#### Check the Rebalance Progress
+
+Execute the following command to inspect the rebalancing status of the EMQX cluster:
```bash
$ kubectl get rebalances rebalance-sample -o json | jq '.status.rebalanceStates'
@@ -97,9 +103,11 @@ $ kubectl get rebalances rebalance-sample -o json | jq '.status.rebalanceStates'
"connection_eviction_rate": 10
}
```
-> For a detailed description of the rebalanceStates field, please refer to the document: [rebalanceStates reference](../api-reference.md#rebalancestate).
+> For a detailed description of the `rebalanceStates` field, refer to the documentation: [rebalanceStates reference](../reference/v2beta1-reference.md#rebalancestate).
+
+#### Wait for Completion
-Wait for the Rebalance task to complete:
+Monitor the task until its status becomes `Completed`:
```bash
$ kubectl get rebalances rebalance-sample
@@ -107,15 +115,23 @@ NAME STATUS AGE
rebalance-sample Completed 62s
```
-> There are three states of Rebalance: Processing, Completed, and Failed. Processing indicates that the rebalancing task is in progress, Completed indicates that the rebalancing task has been completed, and Failed indicates that the rebalancing task failed.
+> The `STATUS` field indicates the lifecycle state of the Rebalance task:
+>
+> | Status | Meaning |
+> | -------------- | --------------------------------------------- |
+> | **Processing** | Rebalancing is in progress. |
+> | **Completed** | Rebalancing has successfully finished. |
+> | **Failed** | Rebalancing encountered an error and stopped. |
### Cluster Load Distribution After Rebalancing

-The figure above shows the cluster load after Rebalance is completed. It can be seen from the graph that the entire Rebalance process is very smooth. It can be seen from the data that the total number of connections in the cluster is still 10,000, which is consistent with that before Rebalance. The connections of four nodes has changed, and some connections of three nodes have been migrated to newly expanded nodes. After rebalancing, the loads of the four nodes remain stable, and the connections is close to 2,500 and will not change.
+The figure above shows the cluster load after Rebalance has completed. As illustrated, the migration of client connections is smooth and stable throughout the entire operation. The total number of connections in the cluster remains **10,000**, the same as before rebalancing.
-According to the conditions for the cluster to reach balance:
+Before rebalancing, one node carried **0** connections while three nodes carried **10,000** connections each. After rebalancing, the connections have been redistributed evenly across all four nodes. The load on each node stabilizes around **2,500** connections and remains consistent.
+
+To determine whether the cluster has reached a balanced state, the EMQX Operator evaluates the following conditions:
```
avg(source node connection number) < avg(target node connection number) + abs_conn_threshold
@@ -123,4 +139,10 @@ or
avg(source node connection number) < avg(target node connection number) * rel_conn_threshold
```
-Substituting the configured Rebalance parameters and the number of connections can calculate `avg(2553 + 2553+ 2554) < 2340 * 1.1`, so the current cluster has reached a balanced state, and the Rebalance task has successfully rebalanced the cluster load.
+Using the configured Rebalance thresholds and real connection counts:
+
+- Source node average: `avg(2553 + 2553 + 2554) ≈ 2553`
+- Target node average: `2340`
+- Condition checked: `2553 < 2340 * 1.1`
+
+Since the condition holds true, the Operator concludes that the cluster has reached a balanced state and the rebalancing task has successfully completed.
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
index e01d84708..201567e9c 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
@@ -108,27 +108,29 @@ kubectl -n emqx wait --for=condition=Ready pods -l "control-plane=controller-man
## Configure EMQX Cluster
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- namespace: emqx
- spec:
- image: ${REGISTRY}/emqx/emqx-enterprise:${EMQX_VERSION}
- config:
- data: |
- license {
- key = "..."
- }
- ```
-
-+ Wait for the EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx my.private.registry/emqx/emqx-enterprise:5.10.0 Running 10m
- ```
+1. Save the following content as a YAML file and deploy it with the `kubectl apply` command:
+
+ ```bash
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ namespace: emqx
+ spec:
+ image: ${REGISTRY}/emqx/emqx-enterprise:${EMQX_VERSION}
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ ```
+
+2. Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster through `kubectl get` command. Make sure `STATUS` is `Running`. This may take some time.
+
+ ```bash
+ $ kubectl get emqx emqx
+ NAME IMAGE STATUS AGE
+ emqx my.private.registry/emqx/emqx-enterprise:5.10.0 Running 10m
+ ```
+
+
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-service.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-service.md
index b5dcad694..adc2439b1 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-service.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-service.md
@@ -1,100 +1,100 @@
-# Access EMQX Cluster Through LoadBalancer
+# Access EMQX Cluster through LoadBalancer
-## Task Target
+## Objective
-Access the EMQX cluster through the Service of LoadBalancer type.
+Access the EMQX cluster through a Service of type LoadBalancer.
## Configure EMQX Cluster
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` supports:
+* Configuring the EMQX Dashboard Service through `.spec.dashboardServiceTemplate`.
+* Configuring the EMQX cluster listener Service through `.spec.listenersServiceTemplate`.
-Operator supports configuring EMQX cluster Dashboard Service through `.spec.dashboardServiceTemplate`, and configuring EMQX cluster listener Service through `.spec.listenersServiceTemplate`, its documentation can refer to [Service](../api-reference.md#emqxspec).
+Refer to the [respective documentation](../reference/v2beta1-reference.md#emqxspec) for more details.
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following as a YAML file and deploy it using `kubectl apply`.
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "..."
- }
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
- > By default, EMQX will open an MQTT TCP listener `tcp-default` corresponding to port 1883 and Dashboard listener `dashboard-listeners-http-bind` corresponding to port 18083.
+ ::: tip
- > Users can add new listeners through `.spec.config.data` field or EMQX Dashboard. EMQX Operator will automatically inject the default listener information into the Service when creating the Service, but when there is a conflict between the Service configured by the user and the listener configured by EMQX (name or port fields are repeated), EMQX Operator will use the user's configuration prevail.
+ By default, EMQX starts an MQTT TCP listener `tcp-default` on port 1883 and a Dashboard HTTP listener on port 18083.
-+ Wait for the EMQX cluster to be ready, you can check the status of the EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+   Users can configure new or existing listeners through `.spec.config.data`, or manage them through the EMQX Dashboard; a configuration sketch follows this tip.
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-+ Obtain the Dashboard External IP of the EMQX cluster and access the EMQX console
+   EMQX Operator automatically reflects the default listener information in the Service resources. If a user-configured Service conflicts with an EMQX listener (duplicate name or port), EMQX Operator prioritizes the user configuration.
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+ :::
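+
+   For example, a listener declared through `.spec.config.data` might look like the following sketch (the listener name `test` and port `1884` are illustrative and match the Dashboard walkthrough below):
+
+   ```yaml
+   config:
+     data: |
+       # Illustrative extra TCP listener, declared in EMQX's HOCON configuration
+       listeners.tcp.test {
+         bind = "0.0.0.0:1884"
+       }
+   ```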
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
+2. Wait for the EMQX cluster to become ready.
- 192.168.1.200
- ```
+ Check the status of the EMQX cluster with `kubectl get` and make sure that `STATUS` is `Ready`. This may take some time.
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to log in to the EMQX console.
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
-## Connect To EMQX Cluster By MQTTX CLI
+## Add New Listener through EMQX Dashboard
-+ Obtain the External IP of the EMQX cluster
+1. Add a new listener.
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+ - Open the EMQX Dashboard and navigate to **Management** -> **Listeners**.
-+ Use MQTTX CLI to connect to the EMQX cluster
+   - Click the **Add Listener** button to add a new listener named `test` on port `1884`, as shown in the following figure:
- ```bash
- $ mqttx conn -h ${external_ip} -p 1883
+ 
- [4/17/2023] [5:17:31 PM] › … Connecting...
- [4/17/2023] [5:17:31 PM] › ✔ Connected
- ```
+ - Click **Add** to create the listener. As shown in the following figure, the new listener has been created.
-## Add New Listener Through EMQX Dashboard
+ 
-+ Add new Listener
+2. Check if the new listener is reflected in the Service.
- Open the browser to login the EMQX Dashboard and click Configuration → Listeners to enter the listener page, we first click the Add Listener button to add a name called test, port 1884 The listener, as shown in the following figure:
+ ```bash
+ kubectl get svc
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ emqx-dashboard NodePort 10.105.110.235 18083:32012/TCP 13m
+ emqx-listeners NodePort 10.106.1.58 1883:32010/TCP,1884:30763/TCP 12m
+ ```
-
-

-
- Then click the Add button to create the listener, as shown in the following figure:
+ This output shows that the newly added listener on port 1884 has been reflected in the `emqx-listeners` Service resource.
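+
+   As an optional check, `kubectl`'s JSONPath output can print only the listener names and ports (a sketch):
+
+   ```bash
+   kubectl get svc emqx-listeners -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\n"}{end}'
+   ```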
-
+## Connect to the New Listener Using MQTTX
- As can be seen from the figure, the test listener we created has taken effect.
+1. Obtain the external IP of the EMQX listeners service.
-+ Check whether the newly added listener is injected into the Service
+ ```bash
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
- ```bash
- kubectl get svc
+2. Connect to the new listener using MQTTX CLI.
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- emqx-dashboard NodePort 10.105.110.235 18083:32012/TCP 13m
- emqx-listeners NodePort 10.106.1.58 1883:32010/TCP,1884:30763/TCP 12m
- ```
+ ```bash
+ $ mqttx conn -h ${external_ip} -p 1884
+
+ [4/17/2023] [5:17:31 PM] › … Connecting...
+ [4/17/2023] [5:17:31 PM] › ✔ Connected
+ ```
- From the output results, we can see that the newly added listener 1884 has been injected into the `emqx-listeners` Service.
+
diff --git a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-tls.md b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
index aeca1af0a..d760940de 100644
--- a/en_US/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
+++ b/en_US/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
@@ -1,156 +1,160 @@
# Enable TLS In EMQX
-## Task Target
-
-Customize TLS certificates via the `extraVolumes` and `extraVolumeMounts` fields.
-
-## Create Secret Based On TLS Certificate
-
-Secret is an object that contains a small amount of sensitive information such as passwords, tokens, or keys. For its documentation, please refer to: [Secret](https://kubernetes.io/docs/concepts/configuration/secret/#working-with-secrets). In this article, we use Secret to save TLS certificate information, so we need to create Secret based on TLS certificate before creating EMQX cluster.
-
-+ Save the following as a YAML file and deploy it with the `kubectl apply` command
-
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: emqx-tls
- type: kubernetes.io/tls
- stringData:
- ca.crt: |
- -----BEGIN CERTIFICATE-----
- ...
- -----END CERTIFICATE-----
- tls.crt: |
- -----BEGIN CERTIFICATE-----
- ...
- -----END CERTIFICATE-----
- tls.key: |
- -----BEGIN RSA PRIVATE KEY-----
- ...
- -----END RSA PRIVATE KEY-----
- ```
-
- > `ca.crt` indicates the content of the CA certificate, `tls.crt` indicates the content of the server certificate, and `tls.key` indicates the content of the server private key. In this example, the contents of the above three fields are omitted, please fill them with the contents of your own certificate.
+## Objective
+
+Customize TLS certificates using the `extraVolumes` and `extraVolumeMounts` fields.
+
+## Create a Secret Based on a TLS Certificate
+
+A Secret is a Kubernetes object that contains a small amount of sensitive information, such as passwords, tokens, or keys. In this demonstration, we use a Secret to store the TLS certificate, so we need to create one before creating the EMQX cluster.
+
+For more information, please refer to the [Secret](https://kubernetes.io/docs/concepts/configuration/secret/#working-with-secrets) documentation.
+
+Save the following as a YAML file and deploy it using the `kubectl apply` command:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: emqx-tls
+type: kubernetes.io/tls
+stringData:
+ ca.crt: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+ tls.crt: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+ tls.key: |
+ -----BEGIN RSA PRIVATE KEY-----
+ ...
+ -----END RSA PRIVATE KEY-----
+```
+
+:::tip
+In this example, the contents of the above three fields are omitted. Please fill them with your own certificate contents; a sketch for generating test certificates follows this tip.
+* `ca.crt` should contain the CA certificate.
+* `tls.crt` should contain the server certificate.
+* `tls.key` should contain the server's private key.
+:::
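+
+If you only need certificates for testing, a self-signed set can be generated, for example with `openssl` (a sketch; subject names and validity period are illustrative):
+
+```bash
+# Self-signed CA for testing only
+openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
+  -days 365 -subj "/CN=test-ca"
+# Server key and certificate signing request
+openssl req -newkey rsa:2048 -nodes -keyout tls.key -out server.csr \
+  -subj "/CN=emqx-server"
+# Server certificate signed by the CA
+openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
+  -out tls.crt -days 365
+```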
## Configure EMQX Cluster
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
-
-`apps.emqx.io/v2beta1 EMQX` supports `.spec.coreTemplate.extraVolumes` and `.spec.coreTemplate.extraVolumeMounts` and `.spec.replicantTemplate.extraVolumes` and `.spec.replicantTemplate.extraVolumeMounts` fields to EMQX The cluster configures additional volumes and mount points. In this article, we can use these two fields to configure TLS certificates for the EMQX cluster.
-
-There are many types of Volumes. For the description of Volumes, please refer to the document: [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#secret). In this article we are using the `secret` type.
-
-+ Save the following as a YAML file and deploy it with the `kubectl apply` command
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- listeners.ssl.default {
- bind = "0.0.0.0:8883"
- ssl_options {
- cacertfile = "/mounted/cert/ca.crt"
- certfile = "/mounted/cert/tls.crt"
- keyfile = "/mounted/cert/tls.key"
- gc_after_handshake = true
- handshake_timeout = 5s
- }
- }
- license {
- key = "..."
- }
- coreTemplate:
- spec:
- extraVolumes:
- - name: emqx-tls
- secret:
- secretName: emqx-tls
- extraVolumeMounts:
- - name: emqx-tls
- mountPath: /mounted/cert
- replicantTemplate:
- spec:
- extraVolumes:
- - name: emqx-tls
- secret:
- secretName: emqx-tls
- extraVolumeMounts:
- - name: emqx-tls
- mountPath: /mounted/cert
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- ```
-
- > The `.spec.coreTemplate.extraVolumes` field configures the volume type as: secret, and the name as: emqx-tls.
-
- > The `.spec.coreTemplate.extraVolumeMounts` field configures the directory where the TLS certificate is mounted to EMQX: `/mounted/cert`.
-
- > The `.spec.config.data` field configures the TLS listener certificate path. For more TLS listener configurations, please refer to the document: [Configuration Manual](../../../../configuration/configuration.md).
-
-+ Wait for EMQX cluster to be ready, you can check the status of EMQX cluster through the `kubectl get` command, please make sure that `STATUS` is `Running`, this may take some time
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ Obtain the External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+EMQX CRD `apps.emqx.io/v2beta1` provides the following fields to configure additional volumes and mount points for the EMQX cluster:
+* `.spec.coreTemplate.extraVolumes`
+* `.spec.coreTemplate.extraVolumeMounts`
+* `.spec.replicantTemplate.extraVolumes`
+* `.spec.replicantTemplate.extraVolumeMounts`
+
+In this demonstration, we will use these fields to provide TLS certificates to the EMQX cluster.
+
+Kubernetes supports many volume types; see the [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#secret) documentation for details. This example uses the `secret` volume type.
+
+1. Save the following as a YAML file and deploy it using `kubectl apply`:
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ # Configure the TLS listener certificates mounted from the `emqx-tls` volume:
+ data: |
+ listeners.ssl.default {
+ bind = "0.0.0.0:8883"
+ ssl_options {
+ cacertfile = "/mounted/cert/ca.crt"
+ certfile = "/mounted/cert/tls.crt"
+ keyfile = "/mounted/cert/tls.key"
+ gc_after_handshake = true
+ handshake_timeout = 5s
+ }
+ }
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ extraVolumes:
+ - name: emqx-tls
+ secret:
+ secretName: emqx-tls
+ extraVolumeMounts:
+ - name: emqx-tls
+ mountPath: /mounted/cert
+ replicantTemplate:
+ spec:
+ extraVolumes:
+ # Create a `secret` volume type named `emqx-tls`:
+ - name: emqx-tls
+ secret:
+ secretName: emqx-tls
+ extraVolumeMounts:
+ - name: emqx-tls
+ # Directory where the TLS certificate is mounted to EMQX nodes:
+ mountPath: /mounted/cert
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
+
+2. Wait for the EMQX cluster to become ready.
+
+ Check the status of the EMQX cluster using `kubectl get`, and make sure that `STATUS` is `Ready`. This may take a while.
```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
+## Verify TLS Connection Using MQTTX
-## Verify TLS Connection Using MQTTX CLI
+[MQTTX CLI](https://mqttx.app/cli) is an open-source MQTT 5.0 command-line client tool, designed to help developers quickly get started with MQTT services and applications.
-[MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers to more Quickly develop and debug MQTT services and applications.
+1. Obtain the external IP of the EMQX listeners service.
-+ Obtain the External IP of EMQX cluster
-
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+ ```bash
+   external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
-+ Subscribe to messages using MQTTX CLI
+2. Subscribe to messages using MQTTX CLI.
- ```bash
- mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --insecure
+   Connect to the TLS listener on port 8883, using the `--insecure` flag to skip certificate verification.
- [10:00:25] › … Connecting...
- [10:00:25] › ✔ Connected
- [10:00:25] › … Subscribing to hello...
- [10:00:25] › ✔ Subscribed to hello
- ```
+ ```bash
+ mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --insecure
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
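+
+   If you prefer to verify the server certificate rather than skip verification, MQTTX CLI also accepts a CA file via `--ca` (a sketch, assuming the `ca.crt` from the Secret is available locally and the certificate is valid for the host you connect to):
+
+   ```bash
+   mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --ca ca.crt
+   ```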
-+ Create a new terminal window and publish a message using the MQTTX CLI
+3. In a separate terminal window, publish a message.
- ```bash
- mqttx pub -h ${external_ip} -p 8883 -t "hello" -m "hello world" -l mqtts --insecure
+ ```bash
+ mqttx pub -h ${external_ip} -p 8883 -t "hello" -m "hello world" -l mqtts --insecure
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
- [10:00:58] › … Connecting...
- [10:00:58] › ✔ Connected
- [10:00:58] › … Message Publishing...
- [10:00:58] › ✔ Message published
- ```
+4. Observe the subscriber client receiving the message.
-+ View messages received in the subscribed terminal window
+   ```bash
+   [10:00:58] › payload: hello world
+   ```
- ```bash
- [10:00:58] › payload: hello world
- ```
+   This confirms that both the publisher and the subscriber communicated with the broker over a TLS connection.
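+
+   To inspect the certificate the listener actually serves, `openssl s_client` can be pointed at the TLS port (a sketch):
+
+   ```bash
+   openssl s_client -connect ${external_ip}:8883 -CAfile ca.crt </dev/null
+   ```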
diff --git a/en_US/deploy/kubernetes/operator/tasks/overview.md b/en_US/deploy/kubernetes/operator/tasks/overview.md
index 3c93c2999..2ee1527a2 100644
--- a/en_US/deploy/kubernetes/operator/tasks/overview.md
+++ b/en_US/deploy/kubernetes/operator/tasks/overview.md
@@ -2,29 +2,26 @@
This chapter provides step-by-step instructions for performing common tasks and operations with EMQX in a Kubernetes cluster.
-The chapter is divided into sections covering
-
-**Configuration and Setup**
+## Configuration and Setup
- License and Security
- - [License Configuration (EMQX Enterprise)](./configure-emqx-license.md)
- - [Enable TLS In EMQX](./configure-emqx-tls.md)
+ - [Manage License](./configure-emqx-license.md)
+   - [Enable TLS for EMQX Listeners](./configure-emqx-tls.md)
- Cluster Configuration
- - [Change EMQX Configurations Via Operator](./configure-emqx-config.md)
- - [Enable Core + Replicant Cluster (EMQX 5.x)](./configure-emqx-core-replicant.md)
- - [Enable Persistence In EMQX Cluster](./configure-emqx-persistence.md)
- - [Access EMQX Cluster by Kubernetes Service](./configure-emqx-service.md)
- - [Cluster Load Rebalancing (EMQX Enterprise)](./configure-emqx-rebalance.md)
+ - [Change EMQX Configuration](./configure-emqx-config.md)
+ - [Enable Core-Replicant Deployment](./configure-emqx-core-replicant.md)
+ - [Enable Persistence](./configure-emqx-persistence.md)
+ - [Access EMQX Cluster through LoadBalancer](./configure-emqx-service.md)
+ - [Rebalance Cluster Load](./configure-emqx-rebalance.md)
-**Upgrades and Maintenance**
+## Upgrades and Maintenance
- Upgrade
- - [Configure Blue-Green Upgrade (EMQX Enterprise](./configure-emqx-blueGreenUpdate.md)
+ - [Perform Blue-Green Upgrade](./configure-emqx-blueGreenUpdate.md)
- Log Management
- - [Collect EMQX Logs in Kubernetes](./configure-emqx-log-collection.md)
+ - [Collect EMQX Logs](./configure-emqx-log-collection.md)
- [Change EMQX Log Level](./configure-emqx-log-level.md)
-**Monitoring and Performance**
-
-- [Monitor EMQX cluster by Prometheus](./configure-emqx-prometheus.md)
+## Monitoring and Performance
+- [Monitor EMQX Cluster using Prometheus](./configure-emqx-prometheus.md)
diff --git a/ja_JP/deploy/kubernetes/operator/aws-eks.md b/ja_JP/deploy/kubernetes/operator/aws-eks.md
index 23db5bf9b..eeb6a2f82 100644
--- a/ja_JP/deploy/kubernetes/operator/aws-eks.md
+++ b/ja_JP/deploy/kubernetes/operator/aws-eks.md
@@ -1,153 +1,161 @@
# Deploy EMQX on Amazon Elastic Kubernetes Service
-EMQX Operator supports deploying EMQX on Amazon Container Service EKS (Elastic Kubernetes Service). Amazon EKS is a managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications. EKS provides the Kubernetes control plane and node groups, automatically handling node replacements, upgrades, and patching. It supports AWS services such as Load Balancers, RDS, and IAM, and integrates seamlessly with other Kubernetes ecosystem tools. For details, please see [What is Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
+EMQX Operator supports deploying EMQX on Amazon Elastic Kubernetes Service (EKS). Amazon EKS is a managed Kubernetes service that simplifies the deployment, management, and scaling of containerized applications. EKS provides the Kubernetes control plane and node groups, automatically handling node replacements, upgrades, and patching. It integrates with AWS services such as Load Balancers, RDS, and IAM, and works seamlessly with other Kubernetes ecosystem tools.
-## Before You Begin
+For an in-depth introduction, refer to [What is Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).
-Before you begin, you must have the following:
-
-- Activate Amazon Container Service and create an EKS cluster. For details, please refer to: [Create an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
-
-- Connect to EKS cluster by installing kubectl tool locally: For details, please refer to: [Using kubectl to connect to the cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl)
-
-- Deploy an AWS Load Balancer Controller on a cluster, for details, please refer to: [Create a Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
-
-- Install the Amazon EBS CSI driver on the cluster, for details, please refer to: [Amazon EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html)
-
-- Install EMQX Operator: For details, please refer to: [Install EMQX Operator](./getting-started.md)
-
-## Quickly Deploy an EMQX Cluster
-
-The following is the relevant configuration of EMQX custom resources.
-
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
-
- ```yaml
- # Configure EBS StorageClass with WaitForFirstConsumer binding mode
- # This ensures volumes are created in the same AZ as the pods that will use them
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
- name: ebs-sc
- provisioner: ebs.csi.aws.com
- volumeBindingMode: WaitForFirstConsumer
- ---
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "..."
- }
- coreTemplate:
- spec:
- ## EMQX custom resources do not support updating this field at runtime
- volumeClaimTemplates:
- storageClassName: ebs-sc
- resources:
- requests:
- storage: 10Gi
- accessModes:
- - ReadWriteOnce
- dashboardServiceTemplate:
- metadata:
- ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/
- annotations:
- ## Specifies whether the NLB is Internet-facing or internal. If not specified, defaults to internal.
- service.beta.kubernetes.io/aws-load-balancer-type: external
- service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
- spec:
- type: LoadBalancer
- ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/
- loadBalancerClass: service.k8s.aws/nlb
- listenersServiceTemplate:
- metadata:
- ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/
- annotations:
- ## Specifies whether the NLB is Internet-facing or internal. If not specified, defaults to internal.
- service.beta.kubernetes.io/aws-load-balancer-type: external
- service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
- spec:
- type: LoadBalancer
- ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/
- loadBalancerClass: service.k8s.aws/nlb
- ```
-
-+ Wait for EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure that `STATUS` is `Running`, this may take some time
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ Obtain Dashboard External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
-## Use MQTTX application To Publish/Subscribe Messages
-
-[MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers to more Quickly develop and debug MQTT services and applications.
-
-+ Obtain External IP of EMQX cluster
-
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
-
-+ Subscribe to news
-
- ```bash
- $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
-
- [10:00:25] › … Connecting...
- [10:00:25] › ✔ Connected
- [10:00:25] › … Subscribing to hello...
- [10:00:25] › ✔ Subscribed to hello
- ```
-
-+ create a new terminal window and publish message
-
- ```bash
- $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
-
- [10:00:58] › … Connecting...
- [10:00:58] › ✔ Connected
- [10:00:58] › … Message Publishing...
- [10:00:58] › ✔ Message published
- ```
-
-+ View messages received in the subscribed terminal window
-
- ```bash
- [10:00:58] › payload: hello world
- ```
-
-## Terminate TLS Encryption With LoadBalancer
-
-On Amazon EKS, you can use the NLB to do TLS termination, which you can do in the following steps:
-
-1. Import relevant certificates in [AWS Console](https://us-east-2.console.aws.amazon.com/acm/home), then enter the details page by clicking the certificate ID, Then record the ARN information
-
- :::tip
-
- For the import format of certificates and keys, please refer to [import certificate](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate-format.html)
+## Before You Begin
+Before deploying EMQX on EKS, ensure you have completed the following prerequisites:
+
+- Create an EKS cluster.
See [Create an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) for more details.
+
+- Configure kubectl to connect to your EKS cluster.
See [Using kubectl to connect to the cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl) for more details.
+
+- Deploy the AWS Load Balancer Controller on the cluster.
See [Create a Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html) for more details.
+
+- Install the Amazon EBS CSI driver on the cluster.
See [Amazon EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) for further details.
+
+- Install EMQX Operator.
Please refer to [Install EMQX Operator](./getting-started.md) for further details.
+
+## Deploy EMQX Cluster Quickly
+
+The following example demonstrates the relevant EMQX Custom Resource (CR) configuration for deployment on EKS.
+
+1. Save the following content as a YAML file and deploy it with `kubectl apply`.
+
+ ```yaml
+ # Configure EBS StorageClass with WaitForFirstConsumer binding mode
+ # This ensures volumes are created in the same AZ as the pods that will use them
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: ebs-sc
+ provisioner: ebs.csi.aws.com
+ volumeBindingMode: WaitForFirstConsumer
+ ---
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ ## EMQX custom resources do not support updating this field at runtime
+ volumeClaimTemplates:
+ storageClassName: ebs-sc
+ resources:
+ requests:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ dashboardServiceTemplate:
+ metadata:
+ ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/
+ annotations:
+ ## Specifies whether the NLB is Internet-facing or internal. If not specified, defaults to internal.
+ service.beta.kubernetes.io/aws-load-balancer-type: external
+ service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
+ spec:
+ type: LoadBalancer
+ ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/
+ loadBalancerClass: service.k8s.aws/nlb
+ listenersServiceTemplate:
+ metadata:
+ ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/
+ annotations:
+ ## Specifies whether the NLB is Internet-facing or internal. If not specified, defaults to internal.
+ service.beta.kubernetes.io/aws-load-balancer-type: external
+ service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
+ spec:
+ type: LoadBalancer
+ ## More content: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/
+ loadBalancerClass: service.k8s.aws/nlb
+ ```
+
+2. Wait for the EMQX cluster to become ready.
+
+ Use the following command to check the status. The `STATUS` field must show `Ready`, which may take several minutes:
+
+ ```shell
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 55s
+ ```
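+
+   If the EMQX custom resource exposes a `Ready` condition (as the `STATUS` column suggests), `kubectl wait` can block until the cluster is ready instead of polling; a sketch:
+
+   ```shell
+   kubectl wait --for=condition=Ready emqx/emqx --timeout=10m
+   ```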
+
+3. Obtain the external IP of the EMQX Dashboard and access it.
+
+ The EMQX Operator creates a Service for the EMQX Dashboard based on your `dashboardServiceTemplate` configuration.
+
+ ```shell
+ $ kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip'
+ 192.168.1.200
+ ```
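+
+   Note that on EKS an NLB typically publishes a DNS name rather than an IP; if the query above returns `null`, read the hostname field instead:
+
+   ```shell
+   kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].hostname'
+   ```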
+
+4. Open the Dashboard at: `http://192.168.1.200:18083`.
+
+ Log in with the default credentials:
+
+ - **Username:** `admin`
+ - **Password:** `public`
+
+## Subscribe and Publish
+
+This walkthrough uses [MQTTX CLI](https://mqttx.app/cli), an open-source MQTT 5.0 command-line client tool that helps developers quickly test MQTT services and applications.
+
+1. Retrieve the external IP of the EMQX TCP listener.
+
+ The EMQX Operator automatically creates a Service resource for each configured listener.
+
+ ```shell
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
+
+2. Subscribe to a topic.
+
+ ```shell
+ $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
+
+3. In another terminal, connect to the EMQX cluster and publish a message.
+
+ ```shell
+ $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
+
+4. Observe the subscriber receiving the message.
+
+ ```shell
+ [10:00:58] › payload: hello world
+ ```
+
+## Terminate TLS Encryption with LoadBalancer
+
+You can use an AWS Network Load Balancer (NLB) to terminate TLS traffic for EMQX. Follow the steps below:
+
+1. Import the relevant certificates in the [AWS Console](https://us-east-2.console.aws.amazon.com/acm/home). Open the certificate details page by clicking the certificate ID and record the certificate ARN.
+
+ ::: tip
+   For certificate/key import formats, see [Importing certificates](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate-format.html).
:::
-2. Add some annotations in EMQX custom resources' metadata, just as shown below:
+2. Add annotations to the EMQX Service metadata, for example:
```yaml
## Specifies the ARN of one or more certificates managed by the AWS Certificate Manager.
@@ -159,4 +167,6 @@ On Amazon EKS, you can use the NLB to do TLS termination, which you can do in th
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "1883"
```
- > The value of `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` is the ARN information we record in step 1.
+ ::: tip
+   The value of `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` should match the ARN recorded in step 1. A placement sketch follows this tip.
+ :::
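+
+   For orientation, a sketch of where these annotations sit in the EMQX custom resource (the ARN value is a placeholder):
+
+   ```yaml
+   listenersServiceTemplate:
+     metadata:
+       annotations:
+         service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
+         service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "1883"
+     spec:
+       type: LoadBalancer
+       loadBalancerClass: service.k8s.aws/nlb
+   ```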
diff --git a/ja_JP/deploy/kubernetes/operator/azure-aks.md b/ja_JP/deploy/kubernetes/operator/azure-aks.md
index cbca19595..9fae4af27 100644
--- a/ja_JP/deploy/kubernetes/operator/azure-aks.md
+++ b/ja_JP/deploy/kubernetes/operator/azure-aks.md
@@ -1,114 +1,124 @@
# Deploy EMQX on Azure Kubernetes Service
-EMQX Operator supports deploying EMQX on Azure Kubernetes Service(AKS). AKS simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for and manage the nodes attached to the AKS cluster.
+EMQX Operator supports deploying EMQX on Azure Kubernetes Service (AKS). AKS simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks such as health monitoring and maintenance. When an AKS cluster is created, Azure automatically provisions and manages the Kubernetes control plane at no additional cost.
## Before You Begin
-Before you begin, you must have the following:
-- To create an AKS cluster on Azure, you first need to activate the AKS service in your Azure subscription. Refer to the [Azure Kubernetes Service](https://learn.microsoft.com/en-us/azure/aks/) documentation for more information.
+Before deploying EMQX on AKS, ensure the following prerequisites are met:
-- To connect to an AKS cluster using kubectl commands, you can install the kubectl tool locally and obtain the cluster's KubeConfig to connect to the cluster. Alternatively, you can use Cloud Shell through the Azure portal to manage the cluster with kubectl.
- - To connect to an AKS cluster using kubectl, you need to install and configure the kubectl tool on your local machine. Refer to the [Connect to an AKS cluster](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli) documentation for detailed instructions on how to do this.
- - To connect to an AKS cluster using CloudShell, use Azure CloudShell to connect to the AKS cluster and manage the cluster using kubectl. Refer to the [Manage an AKS cluster in Azure CloudShell](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) documentation for detailed instructions on how to connect to Azure CloudShell and use kubectl.
+- An AKS cluster in your Azure subscription
+  - Refer to the [Azure Kubernetes Service documentation](https://learn.microsoft.com/en-us/azure/aks/) for guidance on creating and configuring an AKS cluster.
+- A working `kubectl` configuration for connecting to the AKS cluster
+ - To connect using the locally installed `kubectl`, follow the instructions in [Connect to an AKS cluster](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli).
+ - To connect using Azure Cloud Shell, see [Manage an AKS cluster in Azure CloudShell](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli).
-- To install EMQX Operator, please refer to [Install EMQX Operator](./getting-started.md)
+- EMQX Operator installed on the cluster
+ - Refer to [Install EMQX Operator](./getting-started.md) for installation details.
+
+## Deploy EMQX Cluster Quickly
-## Quickly Deploy an EMQX Cluster
+The following example shows a basic configuration for an EMQX Custom Resource (CR).
-Here are the relevant configurations for EMQX Custom Resource. You can choose the corresponding APIVersion based on the version of EMQX you wish to deploy. For specific compatibility relationships, please refer to [EMQX Operator Compatibility](./operator.md):
+1. Save the following as a YAML file and deploy it with `kubectl apply`.
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx
-spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "..."
- }
- coreTemplate:
- spec:
- volumeClaimTemplates:
- ## more information about storage classes: https://learn.microsoft.com/en-us/azure/aks/concepts-storage#storage-classes
- storageClassName: default
- resources:
- requests:
- storage: 10Gi
- accessModes:
- - ReadWriteOnce
- dashboardServiceTemplate:
- spec:
- ## more information about load balancer: https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard
- type: LoadBalancer
- listenersServiceTemplate:
- spec:
- ## more information about load balancer: https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard
- type: LoadBalancer
-```
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ volumeClaimTemplates:
+ ## more information about storage classes: https://learn.microsoft.com/en-us/azure/aks/concepts-storage#storage-classes
+ storageClassName: default
+ resources:
+ requests:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ dashboardServiceTemplate:
+ spec:
+ ## more information about load balancer: https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+ ## more information about load balancer: https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard
+ type: LoadBalancer
+ ```
-Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster using the `kubectl get` command. Please ensure that the STATUS is `Running` which may take some time.
+2. Wait for the EMQX cluster to become ready.
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+   Check the cluster status using `kubectl get` and verify that the `STATUS` is `Ready`. Startup may take some time.
-Get the External IP of the EMQX cluster and access the EMQX console.
+ ```shell
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 1m5s
+ ```
-The EMQX Operator will create two EMQX Service resources, one is `emqx-dashboard`, and the other is `emqx-listeners`, corresponding to the EMQX console and EMQX listening port, respectively.
+3. Retrieve the external IP of the EMQX Dashboard and access it.
-```shell
-$ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
+ The EMQX Operator automatically creates a Service based on the `dashboardServiceTemplate` configuration.
-20.245.230.91
-```
+ ```shell
+ $ kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip'
+ 20.245.230.91
+ ```
-Access the EMQX console by opening a web browser and visiting http://20.245.230.91:18083. Login using the default username and password `admin/public`.
+4. Open the Dashboard at `http://20.245.230.91:18083`.
-## Connect to EMQX Cluster to Publish/Subscribe Messages Using MQTTX CLI
+ Log in with the default credentials:
-[MQTTX CLI](https://mqttx.app/cli) is an open-source MQTT 5.0 command-line client tool designed to help developers develop and debug MQTT services and applications faster without the need for a GUI.
+ - **Username:** `admin`
+ - **Password:** `public`
-- Retrieve the External IP of the EMQX cluster
+## Use MQTTX to Subscribe and Publish
-```shell
-external_ip=$(kubectl get svc emqx -o json | jq '.status.loadBalancer.ingress[0].ip')
-```
+This walkthrough uses [MQTTX CLI](https://mqttx.app/cli), an open-source MQTT 5.0 command-line client tool that helps developers quickly test MQTT services and applications.
-- Subscribe to messages
+1. Obtain the external IP of the EMQX TCP listener.
-```shell
-$ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+ The EMQX Operator automatically creates a Service resource for each configured listener.
-[10:00:25] › … Connecting...
-[10:00:25] › ✔ Connected
-[10:00:25] › … Subscribing to hello...
-[10:00:25] › ✔ Subscribed to hello
-```
+ ```shell
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
-- Create a new terminal window and send a message
+2. Subscribe to a topic.
-```shell
-$ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+ ```shell
+ $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
-[10:00:58] › … Connecting...
-[10:00:58] › ✔ Connected
-[10:00:58] › … Message Publishing...
-[10:00:58] › ✔ Message published
-```
+3. In another terminal, connect to the EMQX cluster and publish a message.
-- View messages received in the subscription terminal window
+ ```shell
+ $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
-```shell
-[10:00:58] › payload: hello world
-```
-
-## About LoadBalancer Offloading TLS
-
-Since Azure LoadBalancer does not support TCP certificates, please refer to this [document](https://github.com/emqx/emqx-operator/discussions/312) to resolve TCP certificate offloading issues.
+4. Observe the subscriber receiving the message.
+
+ ```shell
+ [10:00:58] › payload: hello world
+ ```
+
+## Notes on TLS Offloading with LoadBalancer
+
+As an L3/L4 load balancer, Azure LoadBalancer does not support TLS termination. Refer to this [discussion](https://github.com/emqx/emqx-operator/discussions/312) for possible workarounds.
diff --git a/ja_JP/deploy/kubernetes/operator/gcp-gke.md b/ja_JP/deploy/kubernetes/operator/gcp-gke.md
index c9fa822ca..dc2491c1f 100644
--- a/ja_JP/deploy/kubernetes/operator/gcp-gke.md
+++ b/ja_JP/deploy/kubernetes/operator/gcp-gke.md
@@ -1,125 +1,137 @@
# Deploy EMQX on Google Kubernetes Engine
-The EMQX Operator allows for the deployment of EMQX on Google Kubernetes Engine (GKE), which simplifies the process of deploying a managed Kubernetes cluster in GCP. With GKE, you can offload the operational overhead to GCP, enabling you to focus on your application deployment and management. By deploying EMQX on GKE, you can take advantage of the scalability and flexibility of Kubernetes, while benefiting from the simplicity and convenience of a managed service. With EMQX Operator and GKE, you can easily deploy and manage your MQTT broker on the cloud, allowing you to focus on your business goals and objectives.
-
+The EMQX Operator allows for the deployment of EMQX on Google Kubernetes Engine (GKE), which simplifies the process of deploying a managed Kubernetes cluster in GCP. With GKE, you can offload the operational overhead to GCP. By deploying EMQX on GKE, you can take advantage of the scalability and flexibility of Kubernetes, while benefiting from the simplicity and convenience of a managed service. With EMQX Operator on GKE, you can easily deploy and manage your MQTT broker in the cloud and focus on your business goals.
## Before You Begin
-Before you begin, you must have the following:
-- To create a GKE cluster on Google Cloud Platform, you will need to enable the GKE service in your GCP subscription. You can find more information on how to do this in the Google Kubernetes Engine documentation.
+Before deploying EMQX on GKE, ensure the following prerequisites are met:
+- A GKE cluster on Google Cloud Platform
+ - You must enable the GKE API in your project. Refer to the [Google Kubernetes Engine documentation](https://cloud.google.com/kubernetes-engine/) for setup instructions.
-- To connect to a GKE cluster using kubectl commands, you can install the kubectl tool on your local machine and obtain the cluster's KubeConfig to connect to the cluster. Alternatively, you can use Cloud Shell through the GCP Console to manage the cluster with kubectl.
+- A working `kubectl` configuration to connect to the GKE cluster
+ - To connect using a local `kubectl` installation, see [Connect to a GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl).
+  - To connect using Cloud Shell directly from the GCP Console, see [Manage a GKE cluster with Cloud Shell](https://cloud.google.com/code/docs/shell/create-configure-gke-cluster).
+
+- EMQX Operator installed on the cluster
+ - Refer to [Install EMQX Operator](./getting-started.md) for further details.
- - To connect to a GKE cluster using kubectl, you will need to install and configure the kubectl tool on your local machine. Refer to the [Connect to a GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) documentation for detailed instructions on how to do this.
+ ::: warning Note
+
+   Installing cert-manager on GKE with default settings may cause bootstrapping issues. Add the configuration `--set global.leaderElection.namespace=cert-manager` to use a different namespace in leader election. For details, see the [cert-manager compatibility documentation](https://cert-manager.io/docs/installation/compatibility/). An install-command sketch follows this note.
+
+ :::
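+
+   A sketch of such an install command, using the official cert-manager Helm chart:
+
+   ```shell
+   helm repo add jetstack https://charts.jetstack.io
+   helm install cert-manager jetstack/cert-manager \
+     --namespace cert-manager --create-namespace \
+     --set crds.enabled=true \
+     --set global.leaderElection.namespace=cert-manager
+   ```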
- - To connect to a GKE cluster using Cloud Shell, you can use the Cloud Shell directly from the GCP Console to connect to the GKE cluster and manage the cluster using kubectl. Refer to the [Manage a GKE cluster with Cloud Shell](https://cloud.google.com/code/docs/shell/create-configure-gke-cluster) documentation for detailed instructions on how to connect to Cloud Shell and use kubectl.
+## Deploy EMQX Cluster Quickly
-- To install EMQX Operator, please refer to [Install EMQX Operator](./getting-started.md)
+The following example shows the basic EMQX Custom Resource (CR) configuration.
-## Quickly Deploy an EMQX Cluster
+1. Save the following content as a YAML file and deploy it with `kubectl apply`.
-Here are the relevant configurations for EMQX Custom Resource. You can choose the corresponding APIVersion based on the version of EMQX you wish to deploy. For specific compatibility relationships, please refer to [EMQX Operator Compatibility](./operator.md):
+ ::: warning Note
- ::: warning
- If you want to request CPU and memory resources, you need to ensure that the CPU is greater than or equal to 250m and the memory is greater than or equal to 512M.
+   If you specify CPU and memory requests, ensure a minimum of 250m CPU and 512Mi memory. See [Resource requests in Autopilot](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests) for details.
- - [Resource requests in Autopilot](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests)
:::
-Save the following content as a YAML file and deploy it using the `kubectl apply` command.
-
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx
-spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "..."
- }
- coreTemplate:
- spec:
- volumeClaimTemplates:
- ## more information about storage classes: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#storageclasses
- storageClassName: standard
- resources:
- requests:
- storage: 10Gi
- accessModes:
- - ReadWriteOnce
- dashboardServiceTemplate:
- spec:
- ## more information about load balancer: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
- type: LoadBalancer
- listenersServiceTemplate:
- spec:
- ## more information about load balancer: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
- type: LoadBalancer
-```
-
-Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster using the `kubectl get` command. Please ensure that the STATUS is `Running` which may take some time.
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-Get the External IP of the EMQX cluster and access the EMQX console.
-
-The EMQX Operator will create two EMQX Service resources, one is `emqx-dashboard`, and the other is `emqx-listeners`, corresponding to the EMQX console and EMQX listening port, respectively.
-
-```shell
-$ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
-34.122.174.166
-```
-
-Access the EMQX console by opening a web browser and visiting http://34.122.174.166:18083. Login using the default username and password `admin/public`.
-
-## Connect to EMQX Cluster to Publish/Subscribe Messages Using MQTTX CLI
-
-[MQTTX CLI](https://mqttx.app/cli) is an open-source MQTT 5.0 command-line client tool designed to help developers develop and debug MQTT services and applications faster without the need for a GUI.
-
-- Retrieve External IP of the EMQX cluster
-
-```shell
-external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
-```
-
-- Subscribe to messages
-
-```shell
-$ mqttx sub -t 'hello' -h ${external_ip} -p 1883
-
-[10:00:25] › … Connecting...
-[10:00:25] › ✔ Connected
-[10:00:25] › … Subscribing to hello...
-[10:00:25] › ✔ Subscribed to hello
-```
-
-- Create a new terminal window and send a message
-
-```shell
-$ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
-
-[10:00:58] › … Connecting...
-[10:00:58] › ✔ Connected
-[10:00:58] › … Message Publishing...
-[10:00:58] › ✔ Message published
-```
-
-- View messages received in the subscription terminal window
-
-```shell
-[10:00:58] › payload: hello world
-```
-
-## Use LoadBalancer for TLS Offloading
-
-Since Google LoadBalancer doesn't support TCP certificates, please check [discussion](https://github.com/emqx/emqx-operator/discussions/312) to address TCP certificate offloading issues.
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ volumeClaimTemplates:
+ ## more information about storage classes: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#storageclasses
+ storageClassName: standard
+ resources:
+ requests:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ dashboardServiceTemplate:
+ spec:
+ ## more information about load balancer: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+ ## more information about load balancer: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
+ type: LoadBalancer
+ ```
+
+2. Wait for the EMQX cluster to become ready.
+
+   Check the status of the EMQX cluster using `kubectl get` and make sure that the `STATUS` is `Ready`. This may take some time.
+
+ ```shell
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 1m2s
+ ```
+
+3. Retrieve the external IP of the EMQX Dashboard.
+
+ EMQX Operator will create a Service resource for the EMQX Dashboard according to the `dashboardServiceTemplate` configuration.
+
+ ```shell
+ $ kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip'
+ 34.122.174.166
+ ```
+
+4. Open the Dashboard at `http://34.122.174.166:18083`.
+
+ Log in with the default credentials:
+
+ - **Username:** `admin`
+ - **Password:** `public`
+
+## Subscribe and Publish
+
+This walkthrough uses [MQTTX CLI](https://mqttx.app/cli), an open-source MQTT 5.0 command-line client tool that helps developers quickly test MQTT services and applications.
+
+1. Obtain the external IP of the EMQX TCP listener.
+
+ The EMQX Operator automatically creates a Service resource for each configured listener.
+
+ ```shell
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
+
+2. Subscribe to a topic.
+
+ ```shell
+ $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
+
+3. In a separate terminal, connect to the EMQX cluster and publish a message.
+
+ ```shell
+ $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
+
+4. Observe the subscriber receiving the message.
+
+ ```shell
+ [10:00:58] › payload: hello world
+ ```
+
+## Notes on TLS Offloading with LoadBalancer
+
+At the time of writing, Google Cloud LoadBalancer does not support TLS termination for plain TCP traffic. Refer to this [discussion](https://github.com/emqx/emqx-operator/discussions/312) for possible workarounds.
diff --git a/ja_JP/deploy/kubernetes/operator/getting-started.md b/ja_JP/deploy/kubernetes/operator/getting-started.md
index 3587bb0ae..9794a342c 100644
--- a/ja_JP/deploy/kubernetes/operator/getting-started.md
+++ b/ja_JP/deploy/kubernetes/operator/getting-started.md
@@ -1,12 +1,12 @@
# Install Operator and Deploy EMQX
-In this section, we will walk you through the steps required to efficiently set up the environment for the EMQX Operator, install the Operator, and then use it to deploy EMQX. By following the guidelines outlined in this section, you will be able to effectively install and manage EMQX using the EMQX Operator.
+This section guides you through preparing the environment for EMQX Operator, installing the Operator itself, and using it to deploy EMQX. By following the steps provided, you can install and manage EMQX efficiently and reliably with the Operator.
## Prepare the Environment
-Before deploying EMQX Operator, please confirm that the following components have been ready:
+Before deploying EMQX Operator, ensure that the following components are ready:
-- A running [Kubernetes cluster](https://kubernetes.io/docs/concepts/overview/), for a version of Kubernetes, please check [How to selector Kubernetes version](./operator.md#how-to-selector-kubernetes-version)
+- A [Kubernetes](https://kubernetes.io/docs/concepts/overview/) cluster running version 1.24 or higher.
- A [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) tool that can access the Kubernetes cluster. You can check the status of the Kubernetes cluster using `kubectl cluster-info` command.
@@ -31,12 +31,7 @@ Before deploying EMQX Operator, please confirm that the following components hav
--set crds.enabled=true
```
- Or you can follow the [cert-manager installation guide](https://cert-manager.io/docs/installation/) to install it.
-
- ::: warning
- If you install cert-manager on Google Kubernetes Engine (GKE) with default configuration may cause bootstrapping issues. Therefore, by adding the configuration of `--set global.leaderElection.namespace=cert-manager`, configure to use a different namespace in leader election. Please check [cert-manager compatibility](https://cert-manager.io/docs/installation/compatibility/)
- :::
-
+ Alternatively, follow the official [cert-manager installation guide](https://cert-manager.io/docs/installation/).
2. Install the EMQX Operator with the command below:
@@ -52,13 +47,10 @@ Before deploying EMQX Operator, please confirm that the following components hav
```bash
$ kubectl wait --for=condition=Ready pods -l "control-plane=controller-manager" -n emqx-operator-system
-
pod/emqx-operator-controller-manager-57bd7b8bd4-h2mcr condition met
```
-Now that you have successfully installed the operator, you are ready to proceed to the next step. In the [Deploy EMQX](#deploy-emqx) section, you will learn how to use the EMQX Operator to deploy EMQX.
-
-Alternatively, if you are interested in learning how to upgrade or uninstall EMQX using the operator, you can continue reading this section.
+Once the Operator is running, you can proceed to deploy EMQX.
## Deploy EMQX
@@ -74,7 +66,7 @@ Alternatively, if you are interested in learning how to upgrade or uninstall EMQ
metadata:
name: emqx-ee
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -82,18 +74,17 @@ Alternatively, if you are interested in learning how to upgrade or uninstall EMQ
}
```
- For more details about the EMQX CRD, please check the [reference document](./api-reference.md).
+ For more details about the EMQX CRD, check out the [reference documentation](./reference/v2beta1-reference.md).
-2. Wait, and the EMQX cluster is running.
+2. Wait until the EMQX cluster is ready.
```bash
$ kubectl get emqx
-
- NAME IMAGE STATUS AGE
- emqx-ee emqx/emqx-enterprise:@EE_VERSION@ Running 2m55s
+ NAME STATUS AGE
+ emqx-ee Ready 2m55s
```
- Make sure the `STATUS` is `Running`, it may take some time to wait for the EMQX cluster to be ready.
+ Make sure the `STATUS` is `Ready`. It may take some time for the EMQX cluster to become ready.
:::
@@ -110,18 +101,17 @@ Alternatively, if you are interested in learning how to upgrade or uninstall EMQ
image: emqx/emqx:@CE_VERSION@
```
- For more details about the EMQX CRD, please check the [reference document](./api-reference.md).
+ For more details about the EMQX CRD, check out the [reference documentation](./reference/v2beta1-reference.md).
-2. Wait the EMQX cluster is running.
+2. Wait until the EMQX cluster is ready.
```bash
$ kubectl get emqx
-
- NAME IMAGE STATUS AGE
- emqx emqx/emqx:@CE_VERSION@ Running 2m55s
+ NAME STATUS AGE
+ emqx Ready 2m55s
```
- Make sure the `STATUS` is `Running`, it maybe takes some time to wait for the EMQX cluster to be ready.
+   Make sure the `STATUS` is `Ready`. It may take some time for the EMQX cluster to become ready.
:::
@@ -129,7 +119,7 @@ Alternatively, if you are interested in learning how to upgrade or uninstall EMQ
## Deploy on Public Cloud
-Check out the following guides to deploy EMQX on public cloud platforms using the EMQX Operator:
+Use the following guides to deploy EMQX on managed Kubernetes services using the EMQX Operator:
- [Amazon Elastic Kubernetes Service (EKS)](./aws-eks.md)
- [Google Cloud GKE](./gcp-gke.md)
diff --git a/ja_JP/deploy/kubernetes/operator/operator.md b/ja_JP/deploy/kubernetes/operator/operator.md
index 57c306d77..678cfac91 100644
--- a/ja_JP/deploy/kubernetes/operator/operator.md
+++ b/ja_JP/deploy/kubernetes/operator/operator.md
@@ -1,24 +1,30 @@
# EMQX Operator Overview
-The EMQX Operator provides [Kubernetes](https://kubernetes.io/) native deployment and management of [EMQX](https://www.emqx.io/), including EMQX Broker and EMQX Enterprise. The purpose of this project is to simplify and automate the configuration of the EMQX cluster.
+The EMQX Operator provides native [Kubernetes](https://kubernetes.io/) support for deploying and managing [EMQX](https://www.emqx.io/) clusters. Its primary goal is to simplify and automate the lifecycle management of EMQX in Kubernetes environments.
-The EMQX Operator includes, but is not limited to, the following features:
+EMQX Operator requires Kubernetes 1.24 or higher.
-* **Simplified Deployment**: Declare EMQX clusters with EMQX custom resources and deploy them quickly. For more details, please check [Getting Started](./getting-started.md).
+EMQX Operator includes, but is not limited to, the following features:
-* **Manage EMQX Cluster**: Automate operations and maintenance for EMQX, including cluster upgrades, runtime data persistence, updating Kubernetes resources based on the status of EMQX, etc. For more details, please check [Manage EMQX](./tasks/overview.md).
+* **Simplified Deployment**: Declare EMQX clusters with EMQX custom resources and deploy them quickly.
+
+ For more details, see the [Getting Started](./getting-started.md) guide.
+
+* **Cluster Management**: Automate operations and maintenance of EMQX clusters, including cluster upgrades with workload migration, runtime data persistence, keeping Kubernetes-managed resources up to date, and more.
+
+ For more details, see the [Manage EMQX](./tasks/overview.md) section.
-## How to Selector Kubernetes Version
+## EMQX and EMQX Operator Compatibility
-The EMQX Operator requires a Kubernetes cluster of version `>=1.24`.
+The current EMQX Operator release series 2.2.x is compatible with the following EMQX versions:
+- EMQX Open Source & Enterprise 5.1.1 ~ 5.8.x
+- EMQX 5.9 & 5.10 (limited support)
+- EMQX 6.0 and higher (limited support)
-| Kubernetes Versions | EMQX Operator Compatibility | Notes |
-| ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| 1.24 or higher | All functions supported | |
-| 1.22 (included) ~ 1.23 | Supported, except [MixedProtocolLBService](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) | EMQX cluster can only use one protocol in `LoadBalancer` type of Service, for example TCP or UDP. |
-| 1.21 (included) ~ 1.22 | Supported, except [pod-deletion-cost](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost) | When using EMQX Core + Replicant mode cluster, updating the EMQX cluster cannot accurately delete Pods. |
-| 1.20 (included) ~ 1.21 | Supported, manual `.spec.ports[].nodePort` assignment required if using `NodePort` type of Service | For more details, please refer to [Kubernetes changelog](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#bug-or-regression-4). |
-| 1.16 (included) ~ 1.20 | Supported, not recommended due to lack of testing | |
-| Lower than 1.16 | Not supported | `apiextensions/v1` APIVersion is not supported. |
+The following API versions are supported:
+- [apps.emqx.io/v2beta1](./reference/v2beta1-reference.md)
+- apps.emqx.io/v2alpha1 (deprecated)
+- apps.emqx.io/v1beta4
+- apps.emqx.io/v1beta3 (deprecated)
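+
+To check which API versions are actually served by your cluster, you can inspect the EMQX CRD (a sketch using plain `kubectl`; it assumes the CRD follows the usual `<plural>.<group>` naming, i.e. `emqxes.apps.emqx.io`):
+
+```bash
+# List the API versions registered for the EMQX custom resource
+$ kubectl get crd emqxes.apps.emqx.io -o jsonpath='{.spec.versions[*].name}'
+```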
diff --git a/ja_JP/deploy/kubernetes/operator/reference/overview.md b/ja_JP/deploy/kubernetes/operator/reference/overview.md
new file mode 100644
index 000000000..56c3fd749
--- /dev/null
+++ b/ja_JP/deploy/kubernetes/operator/reference/overview.md
@@ -0,0 +1,4 @@
+# API Reference
+
++ [apps.emqx.io/v2](./v2-reference.md)
++ [apps.emqx.io/v2beta1](./v2beta1-reference.md) (partially deprecated)
diff --git a/ja_JP/deploy/kubernetes/operator/reference/v2-reference.md b/ja_JP/deploy/kubernetes/operator/reference/v2-reference.md
new file mode 100644
index 000000000..0961accea
--- /dev/null
+++ b/ja_JP/deploy/kubernetes/operator/reference/v2-reference.md
@@ -0,0 +1,426 @@
+# API Reference (v2)
+
+## Packages
+- [apps.emqx.io/v2](#appsemqxiov2)
+
+
+## apps.emqx.io/v2
+
+package v2 contains API Schema definitions for the apps v2 API group.
+
+### Resource Types
+- [EMQX](#emqx)
+
+
+
+#### BootstrapAPIKey
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `key` _string_ | | | Pattern: `^[a-zA-Z\d-_]+$`
|
+| `secret` _string_ | | | MaxLength: 128
MinLength: 3
|
+| `secretRef` _[SecretRef](#secretref)_ | Reference to a Secret entry containing the EMQX API Key. | | |
+
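+As an illustration, a bootstrap API key can be supplied either inline or through a Secret reference (a hedged sketch based on the fields above; the key, secret, and Secret names are placeholders):
+
+```yaml
+spec:
+  bootstrapAPIKeys:
+    # Inline key and secret
+    - key: my-api-key
+      secret: my-api-secret
+    # Reference entries in an existing Secret instead
+    - secretRef:
+        key:
+          secretName: emqx-api-key
+          secretKey: key
+        secret:
+          secretName: emqx-api-key
+          secretKey: secret
+```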
+
+#### Config
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `mode` _string_ | Determines how configuration updates are applied.
* `Merge`: Merge the new configuration into the existing configuration.
* `Replace`: Replace the whole configuration. | Merge | Enum: [Merge Replace]
|
+| `data` _string_ | EMQX configuration, in HOCON format.
This configuration will be supplied as `base.hocon` to the container. See respective
[documentation](https://docs.emqx.com/en/emqx/latest/configuration/configuration.html#base-configuration-file). | | |
+
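+For example, merging a HOCON snippet into the existing configuration might look like this (a sketch; the listener shown is illustrative):
+
+```yaml
+spec:
+  config:
+    mode: Merge
+    data: |
+      listeners.ssl.default {
+        bind = "0.0.0.0:8883"
+      }
+```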
+
+#### DSDBReplicationStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [DSReplicationStatus](#dsreplicationstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `name` _string_ | Name of the database | | |
+| `numShards` _integer_ | Number of shards of the database | | |
+| `numShardReplicas` _integer_ | Total number of shard replicas | | |
+| `lostShardReplicas` _integer_ | Total number of shard replicas belonging to lost sites | | |
+| `numTransitions` _integer_ | Current number of shard ownership transitions | | |
+| `minReplicas` _integer_ | Minimum replication factor among database shards | | |
+| `maxReplicas` _integer_ | Maximum replication factor among database shards | | |
+
+
+#### DSReplicationStatus
+
+
+
+Summary of DS replication status per database.
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `dbs` _[DSDBReplicationStatus](#dsdbreplicationstatus) array_ | | | |
+
+
+#### EMQX
+
+
+
+Custom Resource representing an EMQX cluster.
+
+
+
+
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `apiVersion` _string_ | `apps.emqx.io/v2` | | |
+| `kind` _string_ | `EMQX` | | |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[EMQXSpec](#emqxspec)_ | Specification of the desired state of the EMQX cluster. | | |
+| `status` _[EMQXStatus](#emqxstatus)_ | Current status of the EMQX cluster. | | |
+
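+A minimal manifest for this resource type could look like the following (a sketch; the name and image tag are placeholders):
+
+```yaml
+apiVersion: apps.emqx.io/v2
+kind: EMQX
+metadata:
+  name: emqx
+spec:
+  image: emqx/emqx-enterprise:latest
+```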
+
+#### EMQXCoreTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[EMQXCoreTemplateSpec](#emqxcoretemplatespec)_ | Specification of the desired state of a core node.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
+
+
+#### EMQXCoreTemplateSpec
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXCoreTemplate](#emqxcoretemplate)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeSelector` _object (keys:string, values:string)_ | Selector which must be true for the pod to fit on a node.
Must match a node's labels for the pod to be scheduled on that node.
More info: https://kubernetes.io/docs/concepts/config/assign-pod-node/ | | |
+| `nodeName` _string_ | Request to schedule this pod onto a specific node.
If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. | | |
+| `affinity` _[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)_ | Affinity for pod assignment
ref: https://kubernetes.io/docs/concepts/config/assign-pod-node/#affinity-and-anti-affinity | | |
+| `tolerations` _[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array_ | Pod tolerations.
If specified, Pod tolerates any taint that matches the triple using the matching operator. | | |
+| `topologySpreadConstraints` _[TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#topologyspreadconstraint-v1-core) array_ | Specifies how to spread matching pods among the given topology. | | |
+| `replicas` _integer_ | Desired number of instances.
In case of core nodes, each instance has a consistent identity. | 2 | Minimum: 0
|
+| `minAvailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at least "minAvailable" pods selected by
"selector" will still be available after the eviction, i.e. even in the
absence of the evicted pod. So for example you can prevent all voluntary
evictions by specifying "100%". | | XIntOrString: \{\}
|
+| `maxUnavailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at most "maxUnavailable" pods selected by
"selector" are unavailable after the eviction, i.e. even in absence of
the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable". | | XIntOrString: \{\}
|
+| `command` _string array_ | Entrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `args` _string array_ | Arguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `ports` _[ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#containerport-v1-core) array_ | List of ports to expose from the container.
Exposing a port here gives the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that
port from being exposed. Any port which is listening on the default `0.0.0.0` address inside a
container will be accessible from the network. | | |
+| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | List of environment variables to set in the container. | | |
+| `envFrom` _[EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envfromsource-v1-core) array_ | List of sources to populate environment variables from in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence. | | |
+| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Compute Resources required by this container.
More info: https://kubernetes.io/docs/concepts/config/manage-resources-containers/ | | |
+| `podSecurityContext` _[PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podsecuritycontext-v1-core)_ | Pod-level security attributes and common container settings. | \{ fsGroup:1000 fsGroupChangePolicy:Always runAsGroup:1000 runAsUser:1000 supplementalGroups:[1000] \} | |
+| `containerSecurityContext` _[SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#securitycontext-v1-core)_ | Security options the container should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | \{ runAsGroup:1000 runAsNonRoot:true runAsUser:1000 \} | |
+| `initContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | List of initialization containers belonging to the pod.
Init containers are executed in order prior to containers being started. If any
init container fails, the pod is considered to have failed and is handled according
to its restartPolicy. The name for an init container or normal container must be
unique among all containers.
Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
The resourceRequirements of an init container are taken into account during scheduling
by finding the highest request/limit for each resource type, and then using the max of
of that value or the sum of the normal containers. Limits are applied to init containers
in a similar fashion.
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ | | |
+| `extraContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | Additional containers to run alongside the main container. | | |
+| `extraVolumes` _[Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volume-v1-core) array_ | Additional volumes to provide to a Pod. | | |
+| `extraVolumeMounts` _[VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volumemount-v1-core) array_ | Specifies how additional volumes are mounted into the main container. | | |
+| `livenessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container liveness.
Container will be restarted if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:3 httpGet:map[path:/status port:dashboard] initialDelaySeconds:60 periodSeconds:30 \} | |
+| `readinessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:12 httpGet:map[path:/status port:dashboard] initialDelaySeconds:10 periodSeconds:5 \} | |
+| `startupProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the `livenessProbe` failed.
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | | |
+| `lifecycle` _[Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#lifecycle-v1-core)_ | Actions that the management system should take in response to container lifecycle events. | | |
+| `volumeClaimTemplates` _[PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#persistentvolumeclaimspec-v1-core)_ | PVC specification for core node data storage.
Note: this field is named inconsistently; it is actually a single `PersistentVolumeClaimSpec`, not an array of templates. | | |
+
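+For instance, persistent storage for core nodes might be requested like this (a sketch; the storage class and size are placeholders):
+
+```yaml
+spec:
+  coreTemplate:
+    spec:
+      volumeClaimTemplates:
+        storageClassName: standard
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 10Gi
+```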
+
+#### EMQXNode
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `name` _string_ | Node name | | |
+| `podName` _string_ | Corresponding pod name | | |
+| `status` _string_ | Node status | | |
+| `otpRelease` _string_ | Erlang/OTP version node is running on | | |
+| `version` _string_ | EMQX version | | |
+| `role` _string_ | Node role, either "core" or "replicant" | | |
+| `sessions` _integer_ | Number of MQTT sessions | | |
+| `connections` _integer_ | Number of connected MQTT clients | | |
+
+
+#### EMQXNodesStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `replicas` _integer_ | Total number of replicas. | | |
+| `readyReplicas` _integer_ | Number of ready replicas. | | |
+| `currentRevision` _string_ | Current revision of the respective core or replicant set. | | |
+| `currentReplicas` _integer_ | Number of replicas running current revision. | | |
+| `updateRevision` _string_ | Update revision of the respective core or replicant set.
When different from the current revision, the set is being updated. | | |
+| `updateReplicas` _integer_ | Number of replicas running update revision. | | |
+| `collisionCount` _integer_ | | | |
+
+
+#### EMQXReplicantTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[EMQXReplicantTemplateSpec](#emqxreplicanttemplatespec)_ | Specification of the desired state of a replicant node.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
+
+
+#### EMQXReplicantTemplateSpec
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXCoreTemplateSpec](#emqxcoretemplatespec)
+- [EMQXReplicantTemplate](#emqxreplicanttemplate)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeSelector` _object (keys:string, values:string)_ | Selector which must be true for the pod to fit on a node.
Must match a node's labels for the pod to be scheduled on that node.
More info: https://kubernetes.io/docs/concepts/config/assign-pod-node/ | | |
+| `nodeName` _string_ | Request to schedule this pod onto a specific node.
If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. | | |
+| `affinity` _[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)_ | Affinity for pod assignment
ref: https://kubernetes.io/docs/concepts/config/assign-pod-node/#affinity-and-anti-affinity | | |
+| `tolerations` _[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array_ | Pod tolerations.
If specified, Pod tolerates any taint that matches the triple using the matching operator. | | |
+| `topologySpreadConstraints` _[TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#topologyspreadconstraint-v1-core) array_ | Specifies how to spread matching pods among the given topology. | | |
+| `replicas` _integer_ | Desired number of instances.
In case of core nodes, each instance has a consistent identity. | 2 | Minimum: 0
|
+| `minAvailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at least "minAvailable" pods selected by
"selector" will still be available after the eviction, i.e. even in the
absence of the evicted pod. So for example you can prevent all voluntary
evictions by specifying "100%". | | XIntOrString: \{\}
|
+| `maxUnavailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at most "maxUnavailable" pods selected by
"selector" are unavailable after the eviction, i.e. even in absence of
the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable". | | XIntOrString: \{\}
|
+| `command` _string array_ | Entrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `args` _string array_ | Arguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `ports` _[ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#containerport-v1-core) array_ | List of ports to expose from the container.
Exposing a port here gives the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that
port from being exposed. Any port which is listening on the default `0.0.0.0` address inside a
container will be accessible from the network. | | |
+| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | List of environment variables to set in the container. | | |
+| `envFrom` _[EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envfromsource-v1-core) array_ | List of sources to populate environment variables from in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence. | | |
+| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Compute Resources required by this container.
More info: https://kubernetes.io/docs/concepts/config/manage-resources-containers/ | | |
+| `podSecurityContext` _[PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podsecuritycontext-v1-core)_ | Pod-level security attributes and common container settings. | \{ fsGroup:1000 fsGroupChangePolicy:Always runAsGroup:1000 runAsUser:1000 supplementalGroups:[1000] \} | |
+| `containerSecurityContext` _[SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#securitycontext-v1-core)_ | Security options the container should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | \{ runAsGroup:1000 runAsNonRoot:true runAsUser:1000 \} | |
+| `initContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | List of initialization containers belonging to the pod.
Init containers are executed in order prior to containers being started. If any
init container fails, the pod is considered to have failed and is handled according
to its restartPolicy. The name for an init container or normal container must be
unique among all containers.
Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
The resourceRequirements of an init container are taken into account during scheduling
by finding the highest request/limit for each resource type, and then using the max of
of that value or the sum of the normal containers. Limits are applied to init containers
in a similar fashion.
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ | | |
+| `extraContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | Additional containers to run alongside the main container. | | |
+| `extraVolumes` _[Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volume-v1-core) array_ | Additional volumes to provide to a Pod. | | |
+| `extraVolumeMounts` _[VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volumemount-v1-core) array_ | Specifies how additional volumes are mounted into the main container. | | |
+| `livenessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container liveness.
Container will be restarted if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:3 httpGet:map[path:/status port:dashboard] initialDelaySeconds:60 periodSeconds:30 \} | |
+| `readinessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:12 httpGet:map[path:/status port:dashboard] initialDelaySeconds:10 periodSeconds:5 \} | |
+| `startupProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the `livenessProbe` failed.
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | | |
+| `lifecycle` _[Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#lifecycle-v1-core)_ | Actions that the management system should take in response to container lifecycle events. | | |
+
+
+#### EMQXSpec
+
+
+
+EMQXSpec defines the desired state of EMQX.
+
+
+
+_Appears in:_
+- [EMQX](#emqx)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `image` _string_ | EMQX container image.
More info: https://kubernetes.io/docs/concepts/containers/images | | |
+| `imagePullPolicy` _[PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#pullpolicy-v1-core)_ | Container image pull policy.
One of `Always`, `Never`, `IfNotPresent`.
Defaults to `Always` if `:latest` tag is specified, or `IfNotPresent` otherwise.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | |
+| `imagePullSecrets` _[LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#localobjectreference-v1-core) array_ | ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec.
If specified, these secrets will be passed to individual puller implementations for them to use.
More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod | | |
+| `serviceAccountName` _string_ | ServiceAccount name.
Managed ReplicaSets and StatefulSets are associated with the specified ServiceAccount for authentication purposes.
More info: https://kubernetes.io/docs/concepts/security/service-accounts | | |
+| `bootstrapAPIKeys` _[BootstrapAPIKey](#bootstrapapikey) array_ | Bootstrap API keys to access EMQX API.
Cannot be updated. | | |
+| `config` _[Config](#config)_ | EMQX Configuration. | | |
+| `clusterDomain` _string_ | Kubernetes cluster domain. | cluster.local | |
+| `revisionHistoryLimit` _integer_ | Number of old ReplicaSets, old StatefulSets and old PersistentVolumeClaims to retain to allow rollback. | 3 | |
+| `updateStrategy` _[UpdateStrategy](#updatestrategy)_ | Cluster upgrade strategy settings. | \{ type:Recreate \} | |
+| `coreTemplate` _[EMQXCoreTemplate](#emqxcoretemplate)_ | Template for Pods running EMQX core nodes. | \{ spec:map[replicas:2] \} | |
+| `replicantTemplate` _[EMQXReplicantTemplate](#emqxreplicanttemplate)_ | Template for Pods running EMQX replicant nodes. | | |
+| `dashboardServiceTemplate` _[ServiceTemplate](#servicetemplate)_ | Template for Service exposing the EMQX Dashboard.
Dashboard Service always points to the set of EMQX core nodes. | | |
+| `listenersServiceTemplate` _[ServiceTemplate](#servicetemplate)_ | Template for Service exposing enabled EMQX listeners.
Listeners Service points to the set of EMQX replicant nodes if they are enabled and exist.
Otherwise, it points to the set of EMQX core nodes. | | |
+
+
+#### EMQXStatus
+
+
+
+EMQXStatus defines the observed state of EMQX.
+
+
+
+_Appears in:_
+- [EMQX](#emqx)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#condition-v1-meta) array_ | Conditions representing the current status of the EMQX Custom Resource. | | |
+| `coreNodes` _[EMQXNode](#emqxnode) array_ | Status of each core node in the cluster. | | |
+| `coreNodesStatus` _[EMQXNodesStatus](#emqxnodesstatus)_ | Summary status of the set of core nodes. | | |
+| `replicantNodes` _[EMQXNode](#emqxnode) array_ | Status of each replicant node in the cluster. | | |
+| `replicantNodesStatus` _[EMQXNodesStatus](#emqxnodesstatus)_ | Summary status of the set of replicant nodes. | | |
+| `nodeEvacuationsStatus` _[NodeEvacuationStatus](#nodeevacuationstatus) array_ | Status of active node evacuations in the cluster. | | |
+| `dsReplication` _[DSReplicationStatus](#dsreplicationstatus)_ | Status of EMQX Durable Storage replication. | | |
+
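+To inspect this status from the command line, something along these lines should work (assuming `jq` is available; `emqx-ee` is a placeholder CR name):
+
+```bash
+# Summarize the readiness conditions reported by the EMQX CR
+$ kubectl get emqx emqx-ee -o json | jq '.status.conditions[] | {type, status}'
+```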
+
+#### EvacuationStrategy
+
+
+
+
+
+
+
+_Appears in:_
+- [UpdateStrategy](#updatestrategy)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `connEvictRate` _integer_ | Client disconnect rate (number per second).
Same as `conn-evict-rate` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 1000 | Minimum: 1
|
+| `sessEvictRate` _integer_ | Session evacuation rate (number per second).
Same as `sess-evict-rate` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 1000 | Minimum: 1
|
+| `waitTakeover` _integer_ | Amount of time (in seconds) to wait before starting session evacuation.
Same as `wait-takeover` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 10 | Minimum: 0
|
+| `waitHealthCheck` _integer_ | Duration (in seconds) during which the node waits for the Load Balancer to remove it from the active backend node list.
Same as `wait-health-check` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 60 | Minimum: 0
|
+
+
+#### KeyRef
+
+
+
+
+
+
+
+_Appears in:_
+- [SecretRef](#secretref)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `secretName` _string_ | Name of the Secret object. | | |
+| `secretKey` _string_ | Entry within the Secret data. | | Pattern: `^[a-zA-Z\d-_]+$`
|
+
+
+#### NodeEvacuationStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeName` _string_ | Evacuated node name | | |
+| `state` _string_ | Evacuation state | | |
+| `sessionRecipients` _string array_ | Session recipients | | |
+| `sessionEvictionRate` _integer_ | Session eviction rate, in sessions per second. | | |
+| `connectionEvictionRate` _integer_ | Connection eviction rate, in connections per second. | | |
+| `initialSessions` _integer_ | Initial number of sessions on this node | | |
+| `initialConnections` _integer_ | Initial number of connections to this node | | |
+
+
+#### SecretRef
+
+
+
+
+
+
+
+_Appears in:_
+- [BootstrapAPIKey](#bootstrapapikey)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `key` _[KeyRef](#keyref)_ | Reference to a Secret entry containing the EMQX API Key. | | |
+| `secret` _[KeyRef](#keyref)_ | Reference to a Secret entry containing the EMQX API Key's secret. | | |
+
+
+#### ServiceTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `enabled` _boolean_ | Specifies whether the Service should be created. | true | |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[ServiceSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#servicespec-v1-core)_ | Specification of the desired state of a Service.
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
+
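+For example, the Dashboard could be exposed through a LoadBalancer Service like this (a sketch; the Service type depends on your environment):
+
+```yaml
+spec:
+  dashboardServiceTemplate:
+    enabled: true
+    spec:
+      type: LoadBalancer
+```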
+
+#### UpdateStrategy
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `type` _string_ | Determines how cluster upgrade is performed.
* `Recreate`: Perform blue-green upgrade. | Recreate | Enum: [Recreate]
|
+| `initialDelaySeconds` _integer_ | Number of seconds before connection evacuation starts. | 10 | Minimum: 0
|
+| `evacuationStrategy` _[EvacuationStrategy](#evacuationstrategy)_ | Evacuation strategy settings. | | |
+
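+Putting these fields together, an update strategy might be declared as follows (a sketch that mirrors the defaults listed above; tune the rates to your workload):
+
+```yaml
+spec:
+  updateStrategy:
+    type: Recreate
+    initialDelaySeconds: 10
+    evacuationStrategy:
+      connEvictRate: 1000
+      sessEvictRate: 1000
+      waitTakeover: 10
+      waitHealthCheck: 60
+```
+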
diff --git a/ja_JP/deploy/kubernetes/operator/api-reference.md b/ja_JP/deploy/kubernetes/operator/reference/v2beta1-reference.md
similarity index 99%
rename from ja_JP/deploy/kubernetes/operator/api-reference.md
rename to ja_JP/deploy/kubernetes/operator/reference/v2beta1-reference.md
index 26a7e6096..ca99fb6ba 100644
--- a/ja_JP/deploy/kubernetes/operator/api-reference.md
+++ b/ja_JP/deploy/kubernetes/operator/reference/v2beta1-reference.md
@@ -1,4 +1,4 @@
-# API Reference
+# API Reference (v2beta1)
## Packages
- [apps.emqx.io/v2beta1](#appsemqxiov2beta1)
@@ -608,4 +608,3 @@ _Appears in:_
| `initialDelaySeconds` _integer_ | Number of seconds before evacuation connection start. | | |
| `evacuationStrategy` _[EvacuationStrategy](#evacuationstrategy)_ | Number of seconds before evacuation connection timeout. | | |
-
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-repliant.png b/ja_JP/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-replicant.png
similarity index 100%
rename from ja_JP/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-repliant.png
rename to ja_JP/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-replicant.png
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
index 961893c00..6eeb0e917 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
@@ -1,20 +1,15 @@
-# Upgrade the EMQX Cluster Elegantly through Blue-Green Deployment
+# Perform Blue-Green Upgrade of EMQX Cluster
-This page demonstrates how to gracefully upgrade the EMQX cluster through blue-green deployment.
+## Objective
-:::tip
-
-This feature only supports `apps.emqx.io/v1beta4 EmqxEnterprise` and `apps.emqx.io/v2beta1 EMQX`.
-
-:::
+Perform a graceful upgrade of the EMQX cluster through blue-green deployment.
## Background
-1. In traditional EMQX cluster deployment, the default rolling upgrade strategy of StatefulSet is usually used to update EMQX Pods. However, this approach has the following two problems:
+In traditional EMQX cluster deployment, StatefulSet's default rolling upgrade strategy is usually used to update EMQX Pods. However, this approach has the following two problems:
- 1. During the rolling update, both new and old Pods are selected by the corresponding Service. This may cause MQTT clients to connect to the wrong Pod, resulting in frequent disconnections and reconnections.
-
- 2. During the rolling update process, only N - 1 Pods can provide services because it takes some time for new Pods to start up and become ready. This may lead to a decrease in service availability.
+* During the rolling update, both new and old Pods are selected by the corresponding Service. This may cause MQTT clients to connect to old Pods that are being terminated, resulting in frequent disconnections and reconnections.
+* During the rolling update process, only _N - 1_ Pods can provide services at any given time because it takes some time for new Pods to start up and become ready. This may lead to a decrease in service availability.
```mermaid
timeline
@@ -43,20 +38,14 @@ timeline
## Solution
-Regarding the issue of rolling updates mentioned in the previous text, EMQX Operator provides a blue-green deployment upgrade solution. When upgrading the EMQX cluster using EMQX custom resources, EMQX Operator will create a new EMQX cluster and redirect the Kubernetes Service to the new EMQX cluster after it is ready. It will then gradually delete Pods from the old EMQX cluster to achieve the purpose of updating the EMQX cluster.
-
-When deleting Pods from the old EMQX cluster, EMQX Operator can also take advantage of the node evacuation feature of EMQX to transfer MQTT connections to the new cluster at a desired rate, avoiding issues with a large number of connections for a period of time.
-
-The entire upgrade process can be roughly divided into the following steps:
-
-1. Create a cluster with the same specifications.
-
-2. After the new cluster is ready, redirect the service to the new cluster and remove the old cluster from the service. At this time, the new cluster starts to receive traffic, and existing connections in the old cluster are not affected.
-
-3. (Only supported by EMQX Enterprise Edition) Use EMQX node evacuation function to evacuate connections on each node one by one.
+EMQX Operator addresses these problems with a blue-green upgrade strategy, which it performs by default: when an EMQX cluster is updated through the corresponding EMQX CR, EMQX Operator initiates a blue-green upgrade.
-4. Gradually scale down the old cluster to 0 nodes.
+The entire upgrade process is roughly divided into the following steps:
+1. Create a set of new EMQX nodes with updated specifications.
+2. Redirect the Service resources to the new set of nodes once they are ready, ensuring that no new connections are routed to the old set.
+3. Safely migrate existing MQTT connections from the old set of nodes to the new set of nodes at a controlled rate to avoid reconnect storms.
+4. Gradually scale down the old set of EMQX nodes.
5. Complete the upgrade.
```mermaid
@@ -96,147 +85,82 @@ timeline
: pod-2
```
-## Configuration Update Strategy
-
-:::: tabs type:card
-::: tab apps.emqx.io/v2beta1
-
-Create `apps.emqx.io/v2beta1` EMQX and configure update strategy.
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx-ee
-spec:
- image: emqx/emqx-enterprise:5.10
- config:
- data: |
- license {
- key = "..."
- }
- updateStrategy:
- evacuationStrategy:
- connEvictRate: 1000
- sessEvictRate: 1000
- waitTakeover: 10
- initialDelaySeconds: 10
- type: Recreate
-```
-
-`initialDelaySeconds`:The waiting time before starting the update after all nodes are ready (unit: second).
-
-`waitTakeover`: Interval time when deleting a Pod (unit: second)。
-
-`connEvictRate`: MQTT client evacuation rate, only supported by EMQX Enterprise Edition (unit: count/second)。
-
-`sessEvictRate`: MQTT Session evacuation rate, only supported by EMQX Enterprise Edition (unit: count/second)。
-
-Save the above content as: `emqx-update.yaml`, execute the following command to deploy EMQX:
-
-```bash
-$ kubectl apply -f emqx-update.yaml
-
-emqx.apps.emqx.io/emqx-ee created
-```
-
-Check the status of the EMQX cluster, please make sure that `STATUS` is `Ready`. This may require some time to wait for the EMQX cluster to be ready.
-
-```bash
-$ kubectl get emqx
-
-NAME STATUS AGE
-emqx-ee Ready 8m33s
-```
-
-:::
-::: tab apps.emqx.io/v1beta4
-
-Create `apps.emqx.io/v1beta4 EmqxEnterprise` and configure update strategy.
-
-```yaml
-apiVersion: apps.emqx.io/v1beta4
-kind: EmqxEnterprise
-metadata:
- name: emqx-ee
-spec:
- blueGreenUpdate:
- initialDelaySeconds: 60
- evacuationStrategy:
- waitTakeover: 5
- connEvictRate: 200
- sessEvictRate: 200
- template:
- spec:
- emqxContainer:
- image:
- repository: emqx/emqx-ee
- version: 4.4.30
-```
-
-`initialDelaySeconds`: The waiting time before the start node is evacuated after all nodes are ready (unit: second).
-
-`waitTakeover`: The time to wait for the client to reconnect and take over the session after all connections are disconnected (unit: second).
-
-`connEvictRate`: MQTT client evacuation rate (unit: count/second)。
-
-`sessEvictRate`: MQTT Session evacuation speed (unit: count/second)。
-
-Save the above content as: `emqx-update.yaml`, execute the following command to deploy EMQX Enterprise Edition cluster:
-
-```bash
-$ kubectl apply -f emqx-update.yaml
+## Procedure
+
+### Configure the Update Strategy
+
+1. Create an `apps.emqx.io/v2beta1` EMQX CR and configure the update strategy.
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx-ee
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ updateStrategy:
+ evacuationStrategy:
+ # MQTT client evacuation rate, connections per second:
+ connEvictRate: 1000
+ # MQTT Session evacuation rate, sessions per second:
+ sessEvictRate: 1000
+ # Time to wait before deleting a Pod:
+ waitTakeover: 10
+ # Time to wait before starting the upgrade once all nodes are ready:
+ initialDelaySeconds: 10
+ type: Recreate
+ ```
-emqxenterprise.apps.emqx.io/emqx-ee created
-```
+2. Save the above content as `emqx-update.yaml` and deploy it using `kubectl apply`:
-Check the status of the EMQX cluster, please make sure that `STATUS` is `Running`. This may require some time to wait for the EMQX cluster to be ready.
+ ```bash
+ $ kubectl apply -f emqx-update.yaml
+ emqx.apps.emqx.io/emqx-ee created
+ ```
-```bash
-$ kubectl get emqxenterprises
+3. Check the status of the EMQX cluster.
-NAME STATUS AGE
-emqx-ee Running 8m33s
-```
+ Make sure that `STATUS` is `Ready`. This may take a while.
-:::
-::::
+ ```bash
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx-ee Ready 8m33s
+ ```
-## Connect to EMQX Cluster Using MQTTX CLI
+### Connect to EMQX Cluster
-MQTT X CLI is an open-source MQTT 5.0 CLI Client that supports automatic reconnection. It is also a pure command-line mode MQTT X. It aims to help develop and debug MQTT services and applications faster without using a graphical interface. For documentation about MQTT X CLI, please refer to: [MQTTX CLI](https://mqttx.app/cli).
+[MQTTX CLI](https://mqttx.app/cli) is an open-source, MQTT 5.0-compatible command-line client that supports automatic reconnection, designed to help develop and debug MQTT services and applications.
-Execute the following command to connect to the EMQX cluster:
+Use MQTTX to connect to the EMQX cluster:
```bash
mqttx bench conn -h ${IP} -p ${PORT} -c 3000
-```
-
-Output is similar to:
-
-```bash
[10:05:21 AM] › ℹ Start the connect benchmarking, connections: 3000, req interval: 10ms
✔ success [3000/3000] - Connected
[10:06:13 AM] › ℹ Done, total time: 31.113s
```
-## Upgrade EMQX Cluster
+### Trigger the Upgrade
-- Any modifications made to the Pod Template will trigger the upgrade strategy of EMQX Operator.
+1. Any modifications made to the Pod template will trigger the upgrade strategy of EMQX Operator.
- > In this article, we trigger the upgrade by modifying the Container ImagePullPolicy. Users can modify it according to their actual needs.
+   In this example, we trigger the upgrade by modifying `imagePullPolicy` in the EMQX CR.
```bash
$ kubectl patch emqx emqx-ee --type=merge -p '{"spec": {"imagePullPolicy": "Never"}}'
-
emqx.apps.emqx.io/emqx-ee patched
```
-- Check status.
+2. Check the status of the upgrade process.
```bash
$ kubectl get emqx emqx-ee -o json | jq ".status.nodeEvacuationsStatus"
-
[
{
"connection_eviction_rate": 200,
@@ -260,42 +184,40 @@ Output is similar to:
]
```
- `connection_eviction_rate`: Node evacuation rate (unit: count/second).
-
- `node`: The node being evacuated currently.
+ | Field | Description |
+ |-------------------------|-----------------------------------------------------------------------|
+ | `node` | The node currently being evacuated. |
+ | `state` | Node evacuation phase. |
+ | `session_recipients` | MQTT session recipients. |
+ | `session_eviction_rate` | MQTT session eviction rate on this node (sessions per second). |
+   | `connection_eviction_rate` | MQTT connection eviction rate on this node (connections per second). |
+ | `initial_sessions` | Initial number of sessions on this node. |
+ | `initial_connected` | Initial number of connections on this node. |
+ | `current_sessions` | Current number of sessions on this node. |
+ | `current_connected` | Current number of connections on this node. |
- `session_eviction_rate`: Node session evacuation rate (unit: count/second).
-
- `session_recipients`: Session evacuation recipient list.
-
- `state`: Node evacuation phase.
-
- `stats`: Evacuation node statistical indicators, including current number of connections (current_connected), current number of sessions (current_sessions), initial number of connections (initial_connected), and initial number of sessions (initial_sessions).
-
-- Waiting for the upgrade to complete.
+3. Wait for the upgrade to complete.
```bash
$ kubectl get emqx
-
NAME STATUS AGE
emqx-ee Ready 8m33s
```
- Please make sure that the STATUS is Running, which requires some time to wait for the EMQX cluster to complete the upgrade.
-
- After the upgrade is completed, you can observe that the old EMQX nodes have been deleted by using the command $ kubectl get pods.
+ Make sure that the `STATUS` is `Ready`. Depending on the number of MQTT clients and sessions, the upgrade process may take a while.
+ After the upgrade is completed, you can verify that the old EMQX nodes have been deleted using `kubectl get pods`.
## Grafana Monitoring
-The monitoring graph of the number of connections during the upgrade process is shown below (using 10,000 connections as an example).
+The following monitoring graph shows the number of connections during the upgrade process, using 10,000 connections as an example.

-Total: Total number of connections, represented by the top line in the graph.
-
-emqx-ee-86f864f975: This prefix represents the 3 EMQX nodes before the upgrade.
-
-emqx-ee-648c45c747: This prefix represents the 3 EMQX nodes after the upgrade.
+| Label/Prefix | Description |
+|-------------------------|-----------------------------------------------------|
+| Total | Total number of connections; shown as the top line in the graph. |
+| `emqx-ee-86f864f975` | Name prefix for the set of 3 old EMQX nodes. |
+| `emqx-ee-648c45c747` | Name prefix for the set of 3 upgraded EMQX nodes. |
-As shown in the figure above, we have implemented graceful upgrade in Kubernetes through EMQX Kubernetes Operator's blue-green deployment. Through this solution, the total number of connections did not have a significant shake (depending on migration rate, server reception rate, client reconnection policy, etc.) during the upgrade process, which can greatly ensure the smoothness of the upgrade process, effectively prevent server overload, reduce business perception, and improve the stability of the service.
+The graph above illustrates how EMQX Operator performs a smooth blue-green upgrade. Throughout the process, the total number of connections remains stable (subject to factors such as migration rate, server capacity, and client reconnection strategy). This approach minimizes disruption, prevents server overload, and improves overall service stability.
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-config.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-config.md
index 87a7753ad..2c803e831 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-config.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-config.md
@@ -1,16 +1,16 @@
-# Change EMQX Configurations
+# Change EMQX Configuration
-## Task Target
+## Objective
-Change EMQX configuration by `config.data` in EMQX Custom Resource.
+Change the EMQX configuration using the `.spec.config.data` field in the EMQX Custom Resource.
## Configure EMQX Cluster
-The main configuration file of EMQX is `/etc/emqx.conf`. Starting from version 5.0, EMQX adopts [HOCON](https://www.emqx.io/docs/en/v5.1/configuration/configuration.html#hocon-configuration-format) as the configuration file format.
+The EMQX CRD `apps.emqx.io/v2beta1` supports configuring the EMQX cluster through the `.spec.config.data` field. Refer to the [Configuration Manual](https://docs.emqx.com/en/enterprise/v6.0.0/hocon/) for the complete configuration reference.
-`apps.emqx.io/v2beta1 EMQX` supports configuring EMQX cluster through `.spec.config.data` field. For config.data configuration, please refer to the document: [Configuration Manual](https://www.emqx.io/docs/en/v5.1/configuration/configuration-manual.html#configuration-manual).
+EMQX uses [HOCON](../../../../configuration/configuration.md#hocon-configuration-format) as the configuration file format.
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following as a YAML file and deploy it using `kubectl apply`:
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -18,9 +18,10 @@ The main configuration file of EMQX is `/etc/emqx.conf`. Starting from version 5
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:5.10
+ image: emqx/emqx:@EE_VERSION@
imagePullPolicy: IfNotPresent
config:
+ # Configure a TCP listener named `test` listening on port 1884:
data: |
listeners.tcp.test {
bind = "0.0.0.0:1884"
@@ -37,51 +38,40 @@ The main configuration file of EMQX is `/etc/emqx.conf`. Starting from version 5
type: LoadBalancer
```
- > In the `.spec.config.data` field, we have configured a TCP listener for the EMQX cluster. The name of this listener is: test, and the listening port is: 1884.
+ ::: tip
+ The content of the `.spec.config.data` field is supplied as [`emqx.conf` configuration file](../../../../configuration/configuration.md#immutable-configuration-file) to the EMQX container.
+ :::
-+ Wait for the EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+2. Wait for the EMQX cluster to become ready.
+
+ Check the status of the EMQX cluster using `kubectl get`, and make sure that `STATUS` is `Ready`. This may take some time.
```bash
$ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:5.10.0 Running 10m
+ NAME STATUS AGE
+ emqx Ready 10m
```
-+ Obtain the Dashboard External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
## Verify Configuration
-+ View EMQX cluster listener information
-
- ```bash
- $ kubectl exec -it emqx-core-0 -c emqx -- emqx ctl listeners
- ```
-
- You can get a print similar to the following, which means that the listener named `test` configured by us has taken effect.
-
- ```bash
- tcp:default
- listen_on: 0.0.0.0:1883
- acceptors: 16
- proxy_protocol : false
- running: true
- current_conn: 0
- max_conns : 1024000
- tcp:test
- listen_on: 0.0.0.0:1884
- acceptors: 16
- proxy_protocol : false
- running: true
- current_conn: 0
- max_conns : 1024000
- ```
+View the EMQX listeners' status.
+
+```bash
+$ kubectl exec -it emqx-core-0 -c emqx -- emqx ctl listeners
+tcp:default
+ listen_on: 0.0.0.0:1883
+ acceptors: 16
+ proxy_protocol : false
+ running: true
+ current_conn: 0
+ max_conns : 1024000
+tcp:test
+ listen_on: 0.0.0.0:1884
+ acceptors: 16
+ proxy_protocol : false
+ running: true
+ current_conn: 0
+ max_conns : 1024000
+```
+
+The output shows that the new `tcp:test` listener on port 1884 is running.
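+
+As an optional sanity check, you can open a client connection against the new listener (a sketch using the MQTTX CLI; `${EXTERNAL_IP}` stands for the external IP of the `emqx-listeners` Service):
+
+```bash
+# Open a single MQTT connection to the listener on port 1884
+mqttx conn -h ${EXTERNAL_IP} -p 1884
+```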
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
index 77c2ec7cb..6e75bc9f0 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
@@ -1,31 +1,33 @@
-# Enable Core + Replicant Cluster (EMQX 5.x)
+# Enable Core + Replicant Cluster
-## Task Target
+## Objective
-- Configure EMQX cluster Core node through `coreTemplate` field.
-- Configure EMQX cluster Replicant node through `replicantTemplate` field.
+- Configure EMQX cluster Core nodes through the `coreTemplate` field.
+- Configure EMQX cluster Replicant nodes through the `replicantTemplate` field.
-## Core Nodes And Replicant Nodes
+## Core and Replicant Nodes
-:::tip
-Just EMQX Enterprise Edition supports Core + Replicant cluster.
-:::
+Nodes in the EMQX cluster can have one of two roles: Core node and Replicant node.
+* Core nodes are responsible for data persistence in the cluster and serve as the authoritative source for shared cluster state such as routing tables, MQTT client channels, retained messages, cluster configuration, alarms, Dashboard user credentials, etc.
+* Replicant nodes are designed to be stateless and do not participate in database operations. Adding or deleting Replicant nodes will not affect the redundancy of the cluster data.
-In EMQX 5.0, the nodes in the EMQX cluster can be divided into two roles: core (Core) node and replication (Replicant) node. The Core node is responsible for all write operations in the cluster, which is consistent with the behavior of the nodes in the EMQX 4.x cluster, and serves as the real data source of the EMQX database [Mria](https://github.com/emqx/mria) to store the routing table, Data such as sessions, configurations, alarms, and Dashboard user information. The Replicant node is designed to be stateless and does not participate in the writing of data. Adding or deleting Replicant nodes will not change the redundancy of the cluster data. For more information about the EMQX 5.0 architecture, please refer to the document: [EMQX 5.0 Architecture](../../../cluster/mria-introduction.md), the topological structure of the Core node and the Replicant node is shown in the following figure:
+Communication between Core and Replicant nodes in a typical EMQX cluster is illustrated in the following diagram:
-

+
+For more information about the EMQX Core-Replicant architecture, refer to the [Cluster Architecture](../../../cluster/mria-introduction.md) documentation.
+
:::tip
There must be at least one Core node in the EMQX cluster. For the purpose of high availability, EMQX Operator recommends that the EMQX cluster have at least three Core nodes.
:::
## Configure EMQX Cluster
-`apps.emqx.io/v2beta1 EMQX` supports configuring the Core node of the EMQX cluster through the `.spec.coreTemplate` field, and configuring the Replicant node of the EMQX cluster using the `.spec.replicantTemplate` field. For more information, please refer to: [API Reference](../api-reference.md#emqxspec).
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring Core nodes of the EMQX cluster through the `.spec.coreTemplate` field, and Replicant nodes of the EMQX cluster through the `.spec.replicantTemplate` field. For more information, refer to the [API reference](../reference/v2beta1-reference.md#emqxspec).
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following content as a YAML file and deploy using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -33,7 +35,7 @@ There must be at least one Core node in the EMQX cluster. For the purpose of hig
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -58,83 +60,68 @@ There must be at least one Core node in the EMQX cluster. For the purpose of hig
type: LoadBalancer
```
- > In the YAML above, we declared that this is an EMQX cluster consisting of two Core nodes and three Replicant nodes. Core nodes require a minimum of 512Mi of memory, and Replicant nodes require a minimum of 1Gi of memory. You can adjust according to the actual business load. In actual business, the Replicant node will accept all client requests, so the resources required by the Replicant node will be higher.
+ In the example above, the EMQX CR defines an EMQX cluster consisting of two Core nodes and three Replicant nodes.
-+ Wait for the EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+ Core nodes require a minimum of 512Mi of memory, and Replicant nodes require a minimum of 1Gi of memory. You can adjust these constraints according to the actual business load. Typically, Replicant nodes accept all client requests, so the resources required by Replicant nodes may be higher to accommodate many concurrent connections.
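+
+   Condensed, the sizing choices above map to the CR roughly like this (a recap sketch, not a complete manifest):
+
+   ```yaml
+   coreTemplate:
+     spec:
+       replicas: 2
+       resources:
+         requests:
+           memory: 512Mi
+   replicantTemplate:
+     spec:
+       replicas: 3
+       resources:
+         requests:
+           memory: 1Gi
+   ```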
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ Obtain the Dashboard External IP of EMQX cluster and access EMQX console
+2. Wait for the EMQX cluster to become ready.
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+ Check the status of the EMQX cluster with `kubectl get`, ensuring that `STATUS` is `Ready`. This may take some time.
```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
## Verify EMQX Cluster
- Information about all the nodes in the cluster can be obtained by checking the `.status` of the EMQX custom resources.
-
- ```bash
- $ kubectl get emqx emqx -o json | jq .status.coreNodes
- [
- {
- "node": "emqx@emqx-core-0.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@emqx-core-1.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@emqx-core-2.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- }
- ]
- ```
-
-
- ```bash
- $ kubectl get emqx emqx -o json | jq .status.replicantNodes
- [
- {
- "node": "emqx@10.244.4.56",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@10.244.4.57",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@10.244.4.58",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- }
- ]
- ```
+You can view information about all nodes in the cluster by checking the `.status` field of the EMQX CR.
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.coreNodes
+[
+ {
+ "name": "emqx@emqx-core-adcdef012-0.emqx-headless.default.svc.cluster.local",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "core",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@emqx-core-adcdef012-1.emqx-headless.default.svc.cluster.local",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "core",
+ "version": "@EE_VERSION@"
+ }
+]
+```
+
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.replicantNodes
+[
+ {
+ "name": "emqx@10.244.4.56",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@10.244.4.57",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@10.244.4.58",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ }
+]
+```
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-license.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-license.md
index 1465a92d9..c4930f383 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-license.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-license.md
@@ -1,71 +1,57 @@
-# License Configuration (EMQX Enterprise)
+# Manage License
-## Task Target
+## Objective
-- Configure EMQX Enterprise License.
-- Update EMQX Enterprise License.
+- Configure the EMQX Enterprise license.
+- Update the EMQX Enterprise license.
## Configure License
-EMQX Enterprise License can be applied for free on EMQ official website: [Apply for EMQX Enterprise License](https://www.emqx.com/en/apply-licenses/emqx).
+You can apply for an EMQX Enterprise license for free on the EMQX official website: [Apply for EMQX Enterprise License](https://www.emqx.com/en/apply-licenses/emqx).
## Configure EMQX Cluster
-`apps.emqx.io/v2beta1 EMQX` supports configuring EMQX cluster license through `.spec.config.data`. For config.data configuration, please refer to the document: [Configuration Manual](../../../../configuration/configuration.md). This field is only allowed to be configured when creating an EMQX cluster, and does not support updating.
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring the EMQX cluster license through the `.spec.config.data` field. Refer to the [Configuration Manual](https://docs.emqx.com/en/enterprise/v6.0.0/hocon/) for the complete configuration reference.
- > After the EMQX cluster is created, if the license needs to be updated, please update it through the EMQX Dashboard.
-
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
kind: EMQX
metadata:
- name: emqx
+ name: emqx-ee
spec:
config:
data: |
license {
key = "..."
}
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
dashboardServiceTemplate:
spec:
type: LoadBalancer
```
- > The `license.key` in the `config.data` field represents the Licesne content. In this example, the License content is omitted, please fill it in by the user.
-
-+ Wait for the EMQX cluster to be ready, you can check the status of the EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+ ::: tip
+ The `license.key` in the `.spec.config.data` field represents the license content. In this example, the license content is omitted. Please fill it in with your own license key.
+ :::
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+2. Wait for the EMQX cluster to become ready.
-+ Obtain the Dashboard External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+ Check the status of the EMQX cluster with `kubectl get` and ensure that `STATUS` is `Ready`. This may take some time.
```bash
- $ kubectl get svc emqx-ee-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
+ $ kubectl get emqx emqx-ee
+ NAME STATUS AGE
+   emqx-ee   Ready    10m
```
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
## Update License
-+ View License information
- ```bash
- $ pod_name="$(kubectl get pods -l 'apps.emqx.io/instance=emqx,apps.emqx.io/db-role=core' -o json | jq --raw-output '.items[0].metadata.name')"
- $ kubectl exec -it ${pod_name} -c emqx -- emqx_ctl license info
- ```
+1. View the license information.
- The following output can be obtained. From the output, we can see the basic information of the license we applied for, including applicant's information, maximum connection supported by the license, and expiration time of the license.
```bash
+ $ kubectl exec -it service/emqx-ee-headless -c emqx -- emqx ctl license info
customer : Evaluation
email : contact@emqx.io
deployment : default
@@ -77,12 +63,15 @@ EMQX Enterprise License can be applied for free on EMQ official website: [Apply
expiry : false
```
-+ Modify EMQX custom resources to update the License.
+ The output shows basic license information, including the applicant's information, the maximum number of connections supported by the license, and the expiration time.
+
+2. Modify the EMQX CR to update the license.
+
```bash
- $ kubectl edit emqx emqx
+ $ kubectl edit emqx emqx-ee
...
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -91,14 +80,10 @@ EMQX Enterprise License can be applied for free on EMQ official website: [Apply
...
```
- + Check if the EMQX cluster license has been updated.
- ```bash
- $ pod_name="$(kubectl get pods -l 'apps.emqx.io/instance=emqx,apps.emqx.io/db-role=core' -o json | jq --raw-output '.items[0].metadata.name')"
- $ kubectl exec -it ${pod_name} -c emqx -- emqx_ctl license info
- ```
+3. Verify that the license has been updated.
- It can be seen from the "max_connections" field that the content of the License has been updated, indicating that the EMQX Enterprise Edition License update is successful. If the certificate information is not updated, you can wait for a while as there may be some delay in updating the License.
```bash
+ $ kubectl exec -it service/emqx-ee-headless -c emqx -- emqx ctl license info
customer : Evaluation
email : contact@emqx.io
deployment : default
@@ -109,3 +94,6 @@ EMQX Enterprise License can be applied for free on EMQ official website: [Apply
customer_type : 10
expiry : false
```
+
+   The updated `max_connections` field indicates that the EMQX Enterprise license has been updated successfully. The update may take a short while to propagate, so retry the command if the output has not changed yet.
+
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
index 70a0c26b4..f6ae2b92d 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
@@ -1,18 +1,22 @@
-# Collect EMQX Logs In Kubernetes
+# Collect EMQX Logs in Kubernetes
-## Task Target
+## Objective
Use ELK to collect EMQX cluster logs.
## Deploy ELK
-ELK is the capitalized abbreviation of the three open source frameworks of Elasticsearch, Logstash, and Kibana, and is also known as the Elastic Stack. [Elasticsearch](https://www.elastic.co/elasticsearch/) is a near-real-time search platform framework based on Lucene, distributed, and interactive through Restful, also referred to as: es. [Logstash](https://www.elastic.co/logstash/) is the central data flow engine of ELK, which is used to collect data in different formats from different targets (files/data storage/MQ), and supports after filtering Output to different destinations (file/MQ/redis/elasticsearch/kafka, etc.). [Kibana](https://www.elastic.co/kibana/) can display es data on a page and provide real-time analysis functions.
+**ELK** stands for Elasticsearch, Logstash, and Kibana (also known as the Elastic Stack):
+
+- [**Elasticsearch**](https://www.elastic.co/elasticsearch/): Distributed, near-real-time search and analytics engine based on Lucene, providing REST APIs for interacting with data.
+- [**Logstash**](https://www.elastic.co/logstash/): Primary data flow engine for collecting, transforming, and forwarding logs from various sources to different destinations.
+- [**Kibana**](https://www.elastic.co/kibana/): Web interface for visualizing and analyzing Elasticsearch data in real time.
### Deploy Single Node Elasticsearch
-The method of deploying single-node Elasticsearch is relatively simple. You can refer to the following YAML orchestration file to quickly deploy an Elasticsearch cluster.
+Deploying a single-node Elasticsearch cluster is relatively simple. You can use the following YAML configuration file to quickly deploy an Elasticsearch cluster.
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
---
@@ -104,6 +108,7 @@ The method of deploying single-node Elasticsearch is relatively simple. You can
containers:
- image: docker.io/library/elasticsearch:7.9.3
name: elasticsearch-logging
+ resources:
limits:
cpu: 1000m
memory: 1Gi
@@ -164,9 +169,13 @@ The method of deploying single-node Elasticsearch is relatively simple. You can
requests:
storage: 10Gi
```
- > The `storageClassName` field indicates the name of `StorageClass`, you can use the command `kubectl get storageclass` to get the StorageClass that already exists in the Kubernetes cluster, or you can create a StorageClass according to your own needs.
+ :::tip
+ Use the `storageClassName` field to choose the appropriate [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). Run `kubectl get storageclass` to list the StorageClasses that already exist in the Kubernetes cluster, or create a StorageClass according to your needs.
+ :::
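+
+   For instance (a sketch; the names, provisioners, and ages will vary by cluster):
+
+   ```bash
+   $ kubectl get storageclass
+   NAME                 PROVISIONER            RECLAIMPOLICY   AGE
+   standard (default)   kubernetes.io/gce-pd   Delete          3d
+   ```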
+
+2. Wait for Elasticsearch to be ready.
-- Wait for the es to be ready, you can check the status of the es pod through the `kubectl get` command, make sure `STATUS` is `Running`
+ Check the status of the Elasticsearch pod using the `kubectl get` command and ensure that `STATUS` is `Running`.
```bash
$ kubectl get pod -n kube-logging -l "k8s-app=elasticsearch"
@@ -176,9 +185,9 @@ The method of deploying single-node Elasticsearch is relatively simple. You can
### Deploy Kibana
-This article uses `Deployment` to deploy Kibana to visualize the collected logs. `Service` uses `NodePort`.
+This walkthrough uses a `Deployment` to deploy Kibana for visualizing the collected logs, and a `Service` of type `NodePort` to expose Kibana externally.
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
---
@@ -191,6 +200,7 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
k8s-app: kibana
spec:
type: NodePort
+ ports:
- port: 5601
nodePort: 35601
protocol: TCP
@@ -220,7 +230,7 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
containers:
- -name: kibana
+ - name: kibana
image: docker.io/kubeimages/kibana:7.9.3
resources:
limits:
@@ -237,7 +247,9 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
protocol: TCP
```
-- Wait for Kibana to be ready, you can check the status of the Kibana pod through the `kubectl get` command, make sure `STATUS` is `Running`
+2. Wait for Kibana to be ready.
+
+ Check the status of the Kibana pod using the `kubectl get` command and ensure that `STATUS` is `Running`.
```bash
$ kubectl get pod -n kube-logging -l "k8s-app=kibana"
@@ -245,13 +257,13 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
kibana-b7d98644-48gtm 1/1 Running 0 17m
```
- Finally, in the browser, enter `http://{node_ip}:35601`, and you will enter the kibana web interface
+ Finally, in your browser, navigate to `http://{node_ip}:35601` to access the Kibana web interface.
### Deploy Filebeat
-[Filebeat](https://www.elastic.co/beats/filebeat) is a lightweight eating log collection component, which is part of the Elastic Stack and can work seamlessly with Logstash, Elasticsearch and Kibana. Whether you're transforming or enriching logs and files with Logstash, throwing around some data analysis in Elasticsearch, or building and sharing dashboards in Kibana, Filebeat makes it easy to get your data where it matters most.
+[Filebeat](https://www.elastic.co/beats/filebeat) is a lightweight log collection component that is part of the Elastic Stack and works seamlessly with Logstash, Elasticsearch, and Kibana.
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
---
@@ -259,7 +271,7 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
kind: ConfigMap
metadata:
name: filebeat-config
- namespace: kube-system
+ namespace: kube-logging
labels:
k8s-app: filebeat
data:
@@ -369,10 +381,10 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
- name: varlibdockercontainers
mountPath: /data/var/
readOnly: true
- -name: varlog
+ - name: varlog
mountPath: /var/log/
readOnly: true
- -name: timezone
+ - name: timezone
mountPath: /etc/localtime
volumes:
- name: config
@@ -382,7 +394,7 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
- name: varlibdockercontainers
hostPath:
path: /data/var/
- -name: varlog
+ - name: varlog
hostPath:
path: /var/log/
- name: inputs
@@ -393,12 +405,14 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
hostPath:
path: /data/filebeat-data
type: DirectoryOrCreate
- -name: timezone
+ - name: timezone
hostPath:
path: /etc/localtime
```
-- Wait for Filebeat to be ready, you can check the status of the Filebeat pod through the `kubectl get` command, make sure `STATUS` is `Running`
+2. Wait for Filebeat to become ready.
+
+ Check the status of Filebeat pods using the `kubectl get` command and ensure that `STATUS` is `Running`.
```bash
$ kubectl get pod -n kube-logging -l "k8s-app=filebeat"
@@ -409,9 +423,11 @@ This article uses `Deployment` to deploy Kibana to visualize the collected logs.
### Deploy Logstash
-This is mainly to combine the business needs and the secondary utilization of logs, and Logstash is added for log cleaning. This article uses the [Beats Input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html) of Logstash to collect logs, and uses the [Ruby filter plugin](https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html) to filter logs. Logstash also provides many other input and filtering plug-ins for users to use, and you can configure appropriate plug-ins according to your business needs.
+Logstash is used for log processing and cleaning.
+
+In this walkthrough, we use the [Beats Input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html) of Logstash to collect logs and the [Ruby filter plugin](https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html) to filter logs. Logstash also provides many other input and filtering plugins that you can configure according to your business needs.
-- Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
---
@@ -419,7 +435,7 @@ This is mainly to combine the business needs and the secondary utilization of lo
kind: Service
metadata:
name: logstash
- namespace: kube-system
+ namespace: kube-logging
spec:
ports:
- port: 5044
@@ -432,7 +448,7 @@ This is mainly to combine the business needs and the secondary utilization of lo
kind: Deployment
metadata:
name: logstash
- namespace: kube-system
+ namespace: kube-logging
spec:
selector:
matchLabels:
@@ -460,7 +476,7 @@ This is mainly to combine the business needs and the secondary utilization of lo
mountPath: /etc/logstash_c/
- name: config-yml-volume
mountPath: /usr/share/logstash/config/
- -name: timezone
+ - name: timezone
mountPath: /etc/localtime
resources:
limits:
@@ -476,7 +492,7 @@ This is mainly to combine the business needs and the secondary utilization of lo
items:
- key: logstash.conf
path: logstash.conf
- -name: timezone
+ - name: timezone
hostPath:
path: /etc/localtime
- name: config-yml-volume
@@ -504,7 +520,7 @@ This is mainly to combine the business needs and the secondary utilization of lo
ruby {
code => "
ss = event.get('message').split(' ')
- len = ss. length()
+ len = ss.length()
level = ''
index = ''
msg = ''
@@ -543,7 +559,7 @@ This is mainly to combine the business needs and the secondary utilization of lo
apiVersion: v1
kind: ConfigMap
metadata:
- name: logstash
+ name: logstash-yml
namespace: kube-logging
labels:
k8s-app: logstash
@@ -553,7 +569,9 @@ This is mainly to combine the business needs and the secondary utilization of lo
xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
```
-- Wait for Logstash to be ready, you can view the status of the Logstash pod through the `kubectl get` command, make sure `STATUS` is `Running`
+2. Wait for Logstash to be ready.
+
+ Check the status of Logstash pods using the `kubectl get` command and ensure that `STATUS` is `Running`.
```bash
$ kubectl get pod -n kube-logging -l "k8s-app=logstash"
@@ -564,20 +582,20 @@ This is mainly to combine the business needs and the secondary utilization of lo
## Deploy EMQX Cluster
-To deploy the EMQX cluster, please refer to the document [Deploy EMQX](../getting-started.md).
+To deploy an EMQX cluster, please refer to the document [Deploy EMQX](../getting-started.md).
## Verify Log Collection
-- First log in to the Kibana interface, open the stack management module in the menu, click on the index management, you can find that there are already collected log indexes
+1. Log in to the Kibana interface, open the stack management module in the menu, and click on _Index Management_. You can see that log indices have already been collected.

-- In order to be able to discover and view logs in Kibana, you need to set an index match, select index patterns, and click Create
+2. To discover and view logs in Kibana, you need to create an index pattern. Select _Index Patterns_ and click _Create_.


-- Finally verify whether the EMQX cluster logs are collected
+3. Finally, verify that the EMQX cluster logs are collected.

diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
index ad64a561d..771458a20 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
@@ -1,18 +1,14 @@
# Change EMQX Log Level
-## Task Target
+## Objective
-Modify the log level of EMQX cluster.
+Modify the log level in the EMQX cluster.
## Configure EMQX Cluster
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring the log level of the EMQX cluster through `.spec.config.data`. Refer to the [Configuration Manual](https://docs.emqx.com/en/enterprise/v6.0.0/hocon/) for the complete configuration reference.
-`apps.emqx.io/v2beta1 EMQX` supports configuration of EMQX cluster log level through `.spec.config.data`. The configuration of config.data can refer to the document: [Configuration Manual](https://www.emqx.io/docs/en/v5.1/configuration/configuration-manual.html#configuration-manual).
-
-> This field is only allowed to be configured when creating an EMQX cluster, and does not support updating. If you need to modify the cluster log level after creating EMQX, please modify it through EMQX Dashboard.
-
-+ Save the following content as a YAML file and deploy it with the kubectl apply command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`:
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -20,8 +16,9 @@ The following is the relevant configuration of EMQX Custom Resource. You can cho
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
+ # Enable debug logging:
data: |
log.console.level = debug
license {
@@ -35,54 +32,39 @@ The following is the relevant configuration of EMQX Custom Resource. You can cho
type: LoadBalancer
```
- > The `.spec.config.data` field configures the EMQX cluster log level to `debug`.
-
-+ Wait for the EMQX cluster to be ready, you can check the status of the EMQX cluster through the kubectl get command, please make sure that `STATUS` is Running, this may take some time
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+2. Wait for the EMQX cluster to become ready.
-+ EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+ Check the status of the EMQX cluster with `kubectl get` and ensure that `STATUS` is `Ready`. This may take some time.
```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
## Verify Log Level
-[MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers to more Quickly develop and debug MQTT services and applications.
-
-+ Obtain the External IP of EMQX cluster
+1. Obtain the External IP of the EMQX cluster.
```bash
-   external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
+   external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
```
-+ Use MQTTX CLI to connect to EMQX cluster
+2. Use MQTTX CLI to connect to the EMQX cluster.
+
+   [MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers quickly develop and debug MQTT services and applications.
```bash
$ mqttx conn -h ${external_ip} -p 1883
-
[4/17/2023] [5:17:31 PM] › … Connecting...
[4/17/2023] [5:17:31 PM] › ✔ Connected
```
-+ Use the command line to view EMQX cluster log information
+3. View EMQX container logs.
```bash
$ kubectl logs emqx-core-0 -c emqx
- ```
-
- You can get a print similar to the following, which means that EMQX has received a CONNECT message from the client and replied a CONNACK message to the client:
-
- ```bash
+ ...
2023-04-17T09:11:35.993031+00:00 [debug] msg: mqtt_packet_received, mfa: emqx_channel:handle_in/2, line: 360, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNECT(Q0, R0, D0, ClientId=mqttx_322680d9, ProtoName=MQTT, ProtoVsn=5, CleanStart=true, KeepAlive=30, Username=undefined, Password=), tag: MQTT
2023-04-17T09:11:35.997066+00:00 [debug] msg: mqtt_packet_sent, mfa: emqx_connection:serialize_and_inc_stats_fun/1, line: 872, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNACK(Q0, R0, D0, AckFlags=0, ReasonCode=0), tag: MQTT
```
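+
+   The `mqtt_packet_received` and `mqtt_packet_sent` debug entries show that EMQX logged the client's CONNECT packet and its CONNACK reply, confirming that the `debug` log level is in effect.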
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
index 91c9a0fd9..f3f3688f1 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
@@ -1,18 +1,20 @@
-# Enable Persistence In EMQX Cluster
+# Enable Persistence in EMQX Cluster
-## Task Target
+## Objective
-Configure EMQX 5.x cluster Core node persistence through `volumeClaimTemplates` field.
+Configure persistence for the Core nodes of an EMQX cluster through the `volumeClaimTemplates` field.
## Configure EMQX Cluster Persistence
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` supports configuring persistence of each Core node's data through `.spec.coreTemplate.spec.volumeClaimTemplates`.
-`apps.emqx.io/v2beta1 EMQX` supports configuration of EMQX cluster Core node persistence through `.spec.coreTemplate.spec.volumeClaimTemplates` field. The semantics and configuration of `.spec.coreTemplate.spec.volumeClaimTemplates` field are consistent with `PersistentVolumeClaimSpec` of Kubernetes, and its configuration can refer to the document: [PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#persistentvolumeclaimspec-v1-core).
+The definition and semantics of the `.spec.coreTemplate.spec.volumeClaimTemplates` field are consistent with those of `PersistentVolumeClaimSpec` defined in the Kubernetes API.
-When the user configures the `.spec.coreTemplate.spec.volumeClaimTemplates` field, EMQX Operator will mount the `/opt/emqx/data` directory in the EMQX container to [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) in the PV and PVC created, when the EMQX Pod is deleted, the PV and PVC will not be deleted, so as to achieve the purpose of saving EMQX runtime data. For more information about PV and PVC, refer to the document [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
+When you specify the `.spec.coreTemplate.spec.volumeClaimTemplates` field, EMQX Operator configures the `/opt/emqx/data` volume of the EMQX container to be backed by a Persistent Volume Claim (PVC), which provisions a Persistent Volume (PV) using a specified [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). As a result, when an EMQX Pod is deleted, the associated PV and PVC are retained, preserving EMQX runtime data.
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
+For more details about PVs and PVCs, refer to the [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) documentation.
+
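+The fragment below sketches only the persistence-related part of the CR; the complete example follows in step 1 (the `standard` StorageClass name is an assumption, substitute your own):
+
+```yaml
+spec:
+  coreTemplate:
+    spec:
+      # Interpreted as a PersistentVolumeClaimSpec; backs the /opt/emqx/data volume
+      volumeClaimTemplates:
+        storageClassName: standard
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 10Gi
+```
+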
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -20,7 +22,7 @@ When the user configures the `.spec.coreTemplate.spec.volumeClaimTemplates` fiel
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -44,72 +46,59 @@ When the user configures the `.spec.coreTemplate.spec.volumeClaimTemplates` fiel
type: LoadBalancer
```
- > `storageClassName` field indicates the name of the StorageClass. You can use the command `kubectl get storageclass` to get the StorageClass that already exists in the Kubernetes cluster, or you can create a StorageClass according to your own needs.
+ :::tip
+ Use the `storageClassName` field to choose the appropriate [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) for EMQX data. Run `kubectl get storageclass` to list the StorageClasses that already exist in the Kubernetes cluster, or create a StorageClass according to your needs.
+ :::
-+ Wait for EMQX cluster to be ready, you can check the status of the EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+2. Wait for the EMQX cluster to become ready.
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ Obtain the Dashboard External IP of the EMQX cluster and access the EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+ Check the status of the EMQX cluster with `kubectl get` and ensure that `STATUS` is `Ready`. This may take some time.
```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
-## Verify EMQX Cluster Persistence
-
-Verification scheme: 1) Passed in the old EMQX Dashboard creates a test rule; 2) Deletes the old cluster; 3) Recreates the EMQX cluster,and checks whether the previously created rule exists through the Dashboard.
+## Verify Persistence
-+ Access EMQX Dashboard through browser to create test rules
+1. Create a test rule in the EMQX Dashboard.
```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
+ external_ip=$(kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip')
```
- Login EMQX Dashboard by accessing `http://${external_ip}:18083`, and click Data Integration → Rules to enter the page for creating rules. Let’s first click the button to add an action Add a response action for this rule, and then click Create to generate a rule, as shown in the following figure:
+ - Log in to the EMQX Dashboard at `http://${external_ip}:18083`.
+ - Navigate to _Data Integration_ → _Rules_ to create a new rule.
+ - Attach a simple action to this rule.
+ - Click _Create_ to generate a rule, as shown in the following figure:

- When our rule is successfully created, a rule record will appear on the page with the rule ID: emqx-persistent-test, as shown in the figure below:
+   Once the rule is created successfully, a corresponding record with the ID `emqx-persistent-test` will appear on the page, as shown in the figure below:

-+ delete old EMQX cluster
+2. Delete the old EMQX cluster.
- Execute the following command to delete the EMQX cluster:
+ Run the following command to delete the EMQX cluster, where `emqx.yaml` is the file you used to deploy the cluster earlier:
```bash
$ kubectl delete -f emqx.yaml
-
emqx.apps.emqx.io "emqx" deleted
- # emqxenterprise.apps.emqx.io "emqx" deleted
```
- > emqx-persistent.yaml is the YAML file used to deploy the EMQX cluster for the first time in this article, and this file does not need to be changed.
-
-+ Recreate the EMQX cluster
+3. Re-deploy the EMQX cluster.
- Execute the following command to recreate the EMQX cluster:
+ Run the following command to re-deploy the EMQX cluster:
```bash
$ kubectl apply -f emqx.yaml
-
emqx.apps.emqx.io/emqx created
- # emqxenterprise.apps.emqx.io/emqx created
```
- Wait for the EMQX cluster to be ready, and then access the EMQX Dashboard through the browser to check whether the previously created rules exist, as shown in the following figure:
+ Wait for the EMQX cluster to be ready. Access the EMQX Dashboard through your browser to verify that the previously created rule still exists, as shown in the following figure:

- It can be seen from the figure that the rule emqx-persistent-test created in the old cluster still exists in the new cluster, which means that the persistence we configured is in effect.
+ The `emqx-persistent-test` rule created in the old cluster still exists in the new cluster, which confirms that the persistence configuration is working correctly.
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
index eeaab5edd..7a0d6012e 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
@@ -1,18 +1,17 @@
-# Monitor EMQX cluster by Prometheus and Grafana
+# Monitor EMQX Cluster with Prometheus and Grafana
-## Task Target
-Deploy [EMQX Exporter](https://github.com/emqx/emqx-exporter) and monitor EMQX cluster by Prometheus and Grafana.
+## Objective
+
+Deploy [EMQX Exporter](https://github.com/emqx/emqx-exporter) and monitor an EMQX cluster using Prometheus and Grafana.
## Deploy Prometheus and Grafana
-Prometheus' deployment documentation can refer to [Prometheus](https://github.com/prometheus-operator/prometheus-operator)
-Grafana' deployment documentation can refer to [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/)
+* To learn more about Prometheus deployment, refer to the [Prometheus](https://github.com/prometheus-operator/prometheus-operator) documentation.
+* To learn more about Grafana deployment, refer to the [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/) documentation.
## Deploy EMQX Cluster
-Here are the relevant configurations for EMQX Custom Resource. You can choose the corresponding APIVersion based on the version of EMQX you wish to deploy. For specific compatibility relationships, please refer to [EMQX Operator Compatibility](../operator.md):
-
-EMQX supports exposing indicators through the http interface. For all statistical indicators under the cluster, please refer to the document: [Integrate with Prometheus](../../../../observability/prometheus.md)
+EMQX exposes various metrics through the [Prometheus-compatible HTTP API](../../../../observability/prometheus.md).
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -20,7 +19,7 @@ kind: EMQX
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -32,24 +31,24 @@ Save the above content as `emqx.yaml` and execute the following command to deplo
```bash
$ kubectl apply -f emqx.yaml
-
emqx.apps.emqx.io/emqx created
```
-Check the status of the EMQX cluster and make sure that `STATUS` is `Running`, which may take some time to wait for the EMQX cluster to be ready.
+Check the status of the EMQX cluster and make sure that `STATUS` is `Ready`. This may take some time.
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+```bash
+$ kubectl get emqx emqx
+NAME STATUS AGE
+emqx Ready 10m
+```
-## Create API Secret
-emqx-exporter and Prometheus will pull metrics from EMQX dashboard API, so you need to sign in to dashboard to create an [API Key](../../../../dashboard/system.md#api-keys).
+## Create API Secret
+
+Both the `emqx-exporter` and Prometheus pull metrics from the EMQX Dashboard API, so you need to sign in to the Dashboard to [create an API key](../../../../dashboard/system.md#api-keys).
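+
+To sanity-check the key (a sketch; `<api-key>`, `<api-secret>`, and `dashboard_ip` are placeholders for your own values):
+
+```bash
+# Query the Prometheus stats endpoint using the API key as basic-auth credentials
+curl -s -u "<api-key>:<api-secret>" "http://${dashboard_ip}:18083/api/v5/prometheus/stats" | head
+```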
## Deploy [EMQX Exporter](https://github.com/emqx/emqx-exporter)
-The `emqx-exporter` is designed to expose partial metrics that are not included in the EMQX Prometheus API.
+The `emqx-exporter` is designed to expose additional metrics that are not included in the EMQX Prometheus API.
```yaml
apiVersion: v1
@@ -109,24 +108,24 @@ spec:
memory: 20Mi
```
-> Set the arg "--emqx.nodes" to the service name that creating by operator for exposing 18083 port. Check out the service name by call `kubectl get svc`.
+> Set the `--emqx.nodes` argument to the name of the Service that the Operator creates to expose port 18083. Look up the service name by calling `kubectl get svc`.
-Save the above content as `emqx-exporter.yaml`, replace `--emqx.auth-username` and `--emqx.auth-password` with your new creating API secret, then execute the following command to deploy the emqx-exporter:
+Save the above content as `emqx-exporter.yaml`, replacing the `--emqx.auth-username` and `--emqx.auth-password` values with your newly created API key and secret. Then run the following command to deploy the `emqx-exporter`:
```bash
kubectl apply -f emqx-exporter.yaml
```
-Check the status of emqx-exporter pod。
+Check the status of the `emqx-exporter` pod.
```bash
$ kubectl get po -l="app=emqx-exporter"
-
-NAME STATUS AGE
+NAME STATUS AGE
emqx-exporter-856564c95-j4q5v Running 8m33s
```
## Configure Prometheus Monitor
-Prometheus-operator uses [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor) and [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor) CRD to define how to monitor a set of pods or services dynamically.
+
+Prometheus Operator uses [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor) and [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor) CRDs to define how to monitor a set of pods or services dynamically.
```yaml
apiVersion: monitoring.coreos.com/v1
@@ -203,8 +202,9 @@ spec:
#- default
```
- `path` indicates the path of the indicator collection interface. In EMQX 5, the path is: `/api/v5/prometheus/stats`. `selector.matchLabels` indicates the label of the matching Pod: `apps.emqx.io/instance: emqx`.
- The value of targetLabel `cluster` represents the name of current cluster, make sure its uniqueness.
+`path` indicates the path of the metrics collection endpoint. In EMQX 5, the path is `/api/v5/prometheus/stats`. `selector.matchLabels` indicates the label used to match the EMQX Pods: `apps.emqx.io/instance: emqx`.
+
+The value of the targetLabel `cluster` represents the name of the current cluster. Make sure it is unique.
Save the above content as `monitor.yaml` and execute the following command:
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
index 49e1b849c..2be27c3c9 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
@@ -1,4 +1,4 @@
-# Cluster Load Rebalancing (EMQX Enterprise)
+# Rebalance Cluster Load
-## Task Target
+## Objective
@@ -6,12 +6,12 @@ How to rebalance MQTT connections.
## Why Need Load Rebalancing
-Cluster load rebalancing is the act of forcibly migrating client connections and sessions from one set of nodes to another. It will automatically calculate the number of connections that need to be migrated to achieve node balance, and then migrate the corresponding number of connections and sessions from high-load nodes to low-load nodes, thereby achieving load balancing between nodes. This operation is usually required to achieve balance after a new join or restart of a node.
+Cluster load rebalancing is the act of forcibly migrating client connections and sessions from one set of nodes to another. It automatically calculates the number of connections that need to be migrated to achieve balance, and then migrates that number of connections and sessions from high-load nodes to low-load nodes. This operation is usually required after a new node joins the cluster or an existing node restarts.
The value of rebalancing mainly has the following two points:
- **Improve system scalability**: Due to the persistent nature of MQTT connections, connections to the original nodes will not automatically migrate to the new nodes when the cluster scales. To address this, you can use the load rebalancing feature to smoothly transfer connections from overloaded nodes to newly-added ones. This process ensures a more balanced distribution of load across the entire cluster and enhances throughput, response speed, and resource utilization rate.
-- **Reduce O&M costs**: For clusters with unevenly distributed loads, where some nodes are overloaded while others remain idle, you can use the load rebalancing feature to automatically adjust the load within the cluster. This helps achieve a more balanced distribution of work and reduces operation and maintenance costs.
+- **Reduce O&M costs**: For clusters with unevenly distributed loads, where some nodes are overloaded while others remain idle, you can use the load rebalancing feature to automatically adjust the load within the cluster. This helps achieve a more balanced distribution of work and reduces operational and maintenance costs.
For EMQX cluster load rebalancing, please refer to the document: [Rebalancing](../../../cluster/rebalancing.md)
@@ -37,19 +37,23 @@ spec:
relSessThreshold: "1.1"
```
-> For Rebalance configuration, please refer to the document: [Rebalance reference](../api-reference.md#rebalancestrategy).
+> For Rebalance configuration, please refer to the document: [Rebalance reference](../reference/v2beta1-reference.md#rebalancestrategy).
## Test Load Rebalancing
### Cluster Load Distribution Before Rebalancing
-Before Rebalancing, we built a cluster with unbalanced load. And use Grafana + Prometheus to monitor the load of EMQX cluster:
+Before rebalancing, we intentionally created an EMQX cluster with an uneven distribution of connections. We then used Grafana and Prometheus to monitor the cluster load:

-It can be seen from the figure that there are four EMQX nodes in the current cluster, three of which carry 10,000 connections, and the remaining one has 0 connections. Next, we will demonstrate how to perform a rebalancing operation so that the load of the four nodes reaches a balanced state. Next, we will demonstrate how to perform a rebalancing operation so that the load of the four nodes reaches a balanced state.
+As shown in the graph, the cluster consists of four EMQX nodes. Three nodes each handle 10,000 connections, while one node has **zero** connections.
-- Submit the Rebalance task
+In the following example, we demonstrate how to perform a rebalancing operation to evenly distribute the load across all four nodes.
+
+#### Submit a Rebalance Task
+
+Create a `Rebalance` resource to initiate the rebalancing process:
```yaml
apiVersion: apps.emqx.io/v1beta4
@@ -70,14 +74,16 @@ spec:
relSessThreshold: "1.1"
```
-Save the above content as: rebalance.yaml, and execute the following command to submit the Rebalance task:
+Save the file as `rebalance.yaml`, and execute the following command to submit the Rebalance task:
```bash
$ kubectl apply -f rebalance.yaml
rebalance.apps.emqx.io/rebalance-sample created
```
-Execute the following command to view the rebalancing status of the EMQX cluster:
+#### Check the Rebalance Progress
+
+Execute the following command to inspect the rebalancing status of the EMQX cluster:
```bash
$ kubectl get rebalances rebalance-sample -o json | jq '.status.rebalanceStates'
@@ -97,9 +103,11 @@ $ kubectl get rebalances rebalance-sample -o json | jq '.status.rebalanceStates'
"connection_eviction_rate": 10
}
```
-> For a detailed description of the rebalanceStates field, please refer to the document: [rebalanceStates reference](../api-reference.md#rebalancestate).
+> For a detailed description of the `rebalanceStates` field, refer to the documentation: [rebalanceStates reference](../reference/v2beta1-reference.md#rebalancestate).
+
+#### Wait for Completion
-Wait for the Rebalance task to complete:
+Monitor the task until its status becomes `Completed`:
```bash
$ kubectl get rebalances rebalance-sample
@@ -107,15 +115,23 @@ NAME STATUS AGE
rebalance-sample Completed 62s
```
-> There are three states of Rebalance: Processing, Completed, and Failed. Processing indicates that the rebalancing task is in progress, Completed indicates that the rebalancing task has been completed, and Failed indicates that the rebalancing task failed.
+> The `STATUS` field indicates the lifecycle state of the Rebalance task:
+>
+> | Status | Meaning |
+> | -------------- | --------------------------------------------- |
+> | **Processing** | Rebalancing is in progress. |
+> | **Completed** | Rebalancing has successfully finished. |
+> | **Failed** | Rebalancing encountered an error and stopped. |
### Cluster Load Distribution After Rebalancing

-The figure above shows the cluster load after Rebalance is completed. It can be seen from the graph that the entire Rebalance process is very smooth. It can be seen from the data that the total number of connections in the cluster is still 10,000, which is consistent with that before Rebalance. The connections of four nodes has changed, and some connections of three nodes have been migrated to newly expanded nodes. After rebalancing, the loads of the four nodes remain stable, and the connections is close to 2,500 and will not change.
+The figure above shows the cluster load after Rebalance has completed. As illustrated, the migration of client connections is smooth and stable throughout the entire operation. The total number of connections in the cluster remains **10,000**, the same as before rebalancing.
-According to the conditions for the cluster to reach balance:
+Before rebalancing, one node carried **0** connections while three nodes carried **10,000** connections each. After rebalancing, the connections have been redistributed evenly across all four nodes. The load on each node stabilizes around **2,500** connections and remains consistent.
+
+To determine whether the cluster has reached a balanced state, the EMQX Operator evaluates the following conditions:
```
avg(source node connection number) < avg(target node connection number) + abs_conn_threshold
@@ -123,4 +139,10 @@ or
avg(source node connection number) < avg(target node connection number) * rel_conn_threshold
```
-Substituting the configured Rebalance parameters and the number of connections can calculate `avg(2553 + 2553+ 2554) < 2340 * 1.1`, so the current cluster has reached a balanced state, and the Rebalance task has successfully rebalanced the cluster load.
+Using the configured Rebalance thresholds and real connection counts:
+
+- Source node average: `avg(2553, 2553, 2554) ≈ 2553`
+- Target node average: `2340`
+- Condition checked: `2553 < 2340 * 1.1`
+
+Since the condition holds true, the Operator concludes that the cluster has reached a balanced state and the rebalancing task has successfully completed.
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
index e01d84708..f76492d80 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
@@ -108,7 +108,7 @@ kubectl -n emqx wait --for=condition=Ready pods -l "control-plane=controller-man
## Configure EMQX Cluster
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it with the `kubectl apply` command:
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -125,7 +125,7 @@ kubectl -n emqx wait --for=condition=Ready pods -l "control-plane=controller-man
}
```
-+ Wait for the EMQX cluster to be ready, you can check the status of EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+2. Wait for the EMQX cluster to be ready. Check the status of the EMQX cluster with `kubectl get` and make sure that `STATUS` is `Running`. This may take some time.
```bash
$ kubectl get emqx emqx
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-service.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-service.md
index b5dcad694..5a8817608 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-service.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-service.md
@@ -1,16 +1,18 @@
-# Access EMQX Cluster Through LoadBalancer
+# Access EMQX Cluster through LoadBalancer
-## Task Target
+## Objective
-Access the EMQX cluster through the Service of LoadBalancer type.
+Access the EMQX cluster through a Service of type LoadBalancer.
## Configure EMQX Cluster
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` supports:
+* Configuring the EMQX Dashboard Service through `.spec.dashboardServiceTemplate`.
+* Configuring the EMQX cluster listener Service through `.spec.listenersServiceTemplate`.
-Operator supports configuring EMQX cluster Dashboard Service through `.spec.dashboardServiceTemplate`, and configuring EMQX cluster listener Service through `.spec.listenersServiceTemplate`, its documentation can refer to [Service](../api-reference.md#emqxspec).
+Refer to the [API reference](../reference/v2beta1-reference.md#emqxspec) for more details.
-+ Save the following content as a YAML file and deploy it via the `kubectl apply` command
+1. Save the following as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -18,7 +20,7 @@ Operator supports configuring EMQX cluster Dashboard Service through `.spec.dash
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
data: |
license {
@@ -32,69 +34,67 @@ Operator supports configuring EMQX cluster Dashboard Service through `.spec.dash
type: LoadBalancer
```
- > By default, EMQX will open an MQTT TCP listener `tcp-default` corresponding to port 1883 and Dashboard listener `dashboard-listeners-http-bind` corresponding to port 18083.
+ ::: tip
+ By default, EMQX starts an MQTT TCP listener `tcp-default` on port 1883 and a Dashboard HTTP listener on port 18083.
- > Users can add new listeners through `.spec.config.data` field or EMQX Dashboard. EMQX Operator will automatically inject the default listener information into the Service when creating the Service, but when there is a conflict between the Service configured by the user and the listener configured by EMQX (name or port fields are repeated), EMQX Operator will use the user's configuration prevail.
+ Users can configure new or existing listeners through `.spec.config.data`, or manage them through the EMQX Dashboard.
-+ Wait for the EMQX cluster to be ready, you can check the status of the EMQX cluster through `kubectl get` command, please make sure `STATUS` is `Running`, this may take some time
+ EMQX Operator automatically reflects the default listener information in the Service resources. When there is a conflict between the Service configured by the user and the listener configured by EMQX (name or port fields are repeated), EMQX Operator prioritizes the user configuration.
+ :::
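+
+   For example, a new listener can also be declared directly in `.spec.config.data` (a sketch; the name `test` and port `1884` mirror the Dashboard steps below):
+
+   ```yaml
+   config:
+     data: |
+       # An additional MQTT TCP listener alongside tcp-default
+       listeners.tcp.test {
+         bind = "0.0.0.0:1884"
+         max_connections = 1024000
+       }
+   ```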
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-+ Obtain the Dashboard External IP of the EMQX cluster and access the EMQX console
+2. Wait for the EMQX cluster to become ready.
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
+ Check the status of the EMQX cluster with `kubectl get` and make sure that `STATUS` is `Ready`. This may take some time.
```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to log in to the EMQX console.
-
-## Connect To EMQX Cluster By MQTTX CLI
-
-+ Obtain the External IP of the EMQX cluster
-
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
-+ Use MQTTX CLI to connect to the EMQX cluster
-
- ```bash
- $ mqttx conn -h ${external_ip} -p 1883
-
- [4/17/2023] [5:17:31 PM] › … Connecting...
- [4/17/2023] [5:17:31 PM] › ✔ Connected
- ```
+## Add New Listener through EMQX Dashboard
-## Add New Listener Through EMQX Dashboard
+1. Add a new listener.
-+ Add new Listener
+ Open the EMQX Dashboard and navigate to _Configuration_ → _Listeners_.
- Open the browser to login the EMQX Dashboard and click Configuration → Listeners to enter the listener page, we first click the Add Listener button to add a name called test, port 1884 The listener, as shown in the following figure:
+ Click the _Add Listener_ button to add a listener with the name `test` and port `1884`, as shown in the following figure:
- Then click the Add button to create the listener, as shown in the following figure:
+
+ Then click the _Add_ button to create the listener, as shown in the following figure:
- As can be seen from the figure, the test listener we created has taken effect.
+ As seen in the figure, the new listener has been created.
-+ Check whether the newly added listener is injected into the Service
+2. Check if the new listener is reflected in the Service.
```bash
kubectl get svc
-
+
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
emqx-dashboard NodePort 10.105.110.235 18083:32012/TCP 13m
emqx-listeners NodePort 10.106.1.58 1883:32010/TCP,1884:30763/TCP 12m
```
- From the output results, we can see that the newly added listener 1884 has been injected into the `emqx-listeners` Service.
+ From this output, we can see that the newly added listener on port 1884 has been reflected in the `emqx-listeners` Service resource.
+
+## Connect to the New Listener Using MQTTX
+
+1. Obtain the external IP of the EMQX listeners service.
+
+ ```bash
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
+
+2. Connect to the new listener using MQTTX CLI.
+
+ ```bash
+ $ mqttx conn -h ${external_ip} -p 1884
+
+ [4/17/2023] [5:17:31 PM] › … Connecting...
+ [4/17/2023] [5:17:31 PM] › ✔ Connected
+ ```
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-tls.md b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
index aeca1af0a..bd4f825e3 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
@@ -1,47 +1,58 @@
# Enable TLS In EMQX
-## Task Target
-
-Customize TLS certificates via the `extraVolumes` and `extraVolumeMounts` fields.
-
-## Create Secret Based On TLS Certificate
-
-Secret is an object that contains a small amount of sensitive information such as passwords, tokens, or keys. For its documentation, please refer to: [Secret](https://kubernetes.io/docs/concepts/configuration/secret/#working-with-secrets). In this article, we use Secret to save TLS certificate information, so we need to create Secret based on TLS certificate before creating EMQX cluster.
-
-+ Save the following as a YAML file and deploy it with the `kubectl apply` command
-
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: emqx-tls
- type: kubernetes.io/tls
- stringData:
- ca.crt: |
- -----BEGIN CERTIFICATE-----
- ...
- -----END CERTIFICATE-----
- tls.crt: |
- -----BEGIN CERTIFICATE-----
- ...
- -----END CERTIFICATE-----
- tls.key: |
- -----BEGIN RSA PRIVATE KEY-----
- ...
- -----END RSA PRIVATE KEY-----
- ```
-
- > `ca.crt` indicates the content of the CA certificate, `tls.crt` indicates the content of the server certificate, and `tls.key` indicates the content of the server private key. In this example, the contents of the above three fields are omitted, please fill them with the contents of your own certificate.
+## Objective
+
+Customize TLS certificates using the `extraVolumes` and `extraVolumeMounts` fields.
+
+## Create a Secret Based on a TLS Certificate
+
+A secret is an object that contains a small amount of sensitive information, such as passwords, tokens, or keys. In this demonstration, we use secrets to store TLS certificate information, so we need to create one before creating the EMQX cluster.
+
+For more information, please refer to the [Secret](https://kubernetes.io/docs/concepts/configuration/secret/#working-with-secrets) documentation.
+
+Save the following as a YAML file and deploy it using the `kubectl apply` command:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: emqx-tls
+type: kubernetes.io/tls
+stringData:
+ ca.crt: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+ tls.crt: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+ tls.key: |
+ -----BEGIN RSA PRIVATE KEY-----
+ ...
+ -----END RSA PRIVATE KEY-----
+```
+
+:::tip
+In this example, the contents of the above three fields are omitted. Please fill them with your own certificate contents.
+* `ca.crt` should contain the CA certificate.
+* `tls.crt` should contain the server certificate.
+* `tls.key` should contain the server's private key.
+:::
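+
+If the certificate files are already on disk, an equivalent Secret can also be created imperatively. A sketch, assuming the three files sit in the current directory:
+
+```bash
+kubectl create secret generic emqx-tls \
+  --type=kubernetes.io/tls \
+  --from-file=ca.crt=./ca.crt \
+  --from-file=tls.crt=./tls.crt \
+  --from-file=tls.key=./tls.key
+```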
## Configure EMQX Cluster
-The following is the relevant configuration of EMQX Custom Resource. You can choose the corresponding APIVersion according to the version of EMQX you want to deploy. For the specific compatibility relationship, please refer to [EMQX Operator Compatibility](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` provides the following fields to configure additional volumes and mount points for the EMQX cluster:
+* `.spec.coreTemplate.extraVolumes`
+* `.spec.coreTemplate.extraVolumeMounts`
+* `.spec.replicantTemplate.extraVolumes`
+* `.spec.replicantTemplate.extraVolumeMounts`
-`apps.emqx.io/v2beta1 EMQX` supports `.spec.coreTemplate.extraVolumes` and `.spec.coreTemplate.extraVolumeMounts` and `.spec.replicantTemplate.extraVolumes` and `.spec.replicantTemplate.extraVolumeMounts` fields to EMQX The cluster configures additional volumes and mount points. In this article, we can use these two fields to configure TLS certificates for the EMQX cluster.
+In this demonstration, we will use these fields to provide TLS certificates to the EMQX cluster.
-There are many types of Volumes. For the description of Volumes, please refer to the document: [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#secret). In this article we are using the `secret` type.
+There are many types of Volumes. For information about Volumes, please refer to the [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#secret) documentation. Here we are using the `secret` volume type.
-+ Save the following as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following as a YAML file and deploy it using `kubectl apply`:
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -49,8 +60,9 @@ There are many types of Volumes. For the description of Volumes, please refer to
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
+ # Configure the TLS listener certificates mounted from the `emqx-tls` volume:
data: |
listeners.ssl.default {
bind = "0.0.0.0:8883"
@@ -77,11 +89,13 @@ There are many types of Volumes. For the description of Volumes, please refer to
replicantTemplate:
spec:
extraVolumes:
+          # Define a volume of type `secret` named `emqx-tls`:
- name: emqx-tls
secret:
secretName: emqx-tls
extraVolumeMounts:
- name: emqx-tls
+ # Directory where the TLS certificate is mounted to EMQX nodes:
mountPath: /mounted/cert
dashboardServiceTemplate:
spec:
@@ -91,65 +105,51 @@ There are many types of Volumes. For the description of Volumes, please refer to
type: LoadBalancer
```
- > The `.spec.coreTemplate.extraVolumes` field configures the volume type as: secret, and the name as: emqx-tls.
-
- > The `.spec.coreTemplate.extraVolumeMounts` field configures the directory where the TLS certificate is mounted to EMQX: `/mounted/cert`.
+2. Wait for the EMQX cluster to become ready.
- > The `.spec.config.data` field configures the TLS listener certificate path. For more TLS listener configurations, please refer to the document: [Configuration Manual](../../../../configuration/configuration.md).
-
-+ Wait for EMQX cluster to be ready, you can check the status of EMQX cluster through the `kubectl get` command, please make sure that `STATUS` is `Running`, this may take some time
+ Check the status of the EMQX cluster using `kubectl get`, and make sure that `STATUS` is `Ready`. This may take a while.
```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
-+ Obtain the External IP of EMQX cluster and access EMQX console
-
- EMQX Operator will create two EMQX Service resources, one is emqx-dashboard and the other is emqx-listeners, corresponding to EMQX console and EMQX listening port respectively.
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- Access `http://192.168.1.200:18083` through a browser, and use the default username and password `admin/public` to login EMQX console.
-
-## Verify TLS Connection Using MQTTX CLI
+## Verify TLS Connection Using MQTTX
-[MQTTX CLI](https://mqttx.app/cli) is an open source MQTT 5.0 command line client tool, designed to help developers to more Quickly develop and debug MQTT services and applications.
+[MQTTX CLI](https://mqttx.app/cli) is an open-source MQTT 5.0 command-line client tool, designed to help developers quickly get started with MQTT services and applications.
-+ Obtain the External IP of EMQX cluster
+1. Obtain the external IP of the EMQX listeners service.
```bash
external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
```
-+ Subscribe to messages using MQTTX CLI
+2. Subscribe to messages using MQTTX CLI.
+
+ Connect to the TLS listener port 8883, using the `--insecure` flag to skip certificate verification.
```bash
mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --insecure
-
[10:00:25] › … Connecting...
[10:00:25] › ✔ Connected
[10:00:25] › … Subscribing to hello...
[10:00:25] › ✔ Subscribed to hello
```
-+ Create a new terminal window and publish a message using the MQTTX CLI
+3. In a separate terminal window, publish a message.
```bash
mqttx pub -h ${external_ip} -p 8883 -t "hello" -m "hello world" -l mqtts --insecure
-
[10:00:58] › … Connecting...
[10:00:58] › ✔ Connected
[10:00:58] › … Message Publishing...
[10:00:58] › ✔ Message published
```
-+ View messages received in the subscribed terminal window
+4. Observe the subscriber client receiving the message.
+
+ This indicates that both the publisher and subscriber clients successfully communicate with the broker over a TLS connection.
```bash
[10:00:58] › payload: hello world
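+
+   ::: tip
+   The `--insecure` flag is only for testing. If the server certificate matches the host you connect to, you can verify it against the CA instead, e.g. `mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --ca ca.crt` (assuming `ca.crt` holds the CA certificate created earlier).
+   :::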
diff --git a/ja_JP/deploy/kubernetes/operator/tasks/overview.md b/ja_JP/deploy/kubernetes/operator/tasks/overview.md
index 3c93c2999..2ee1527a2 100644
--- a/ja_JP/deploy/kubernetes/operator/tasks/overview.md
+++ b/ja_JP/deploy/kubernetes/operator/tasks/overview.md
@@ -2,29 +2,26 @@
This chapter provides step-by-step instructions for performing common tasks and operations with EMQX in a Kubernetes cluster.
-The chapter is divided into sections covering
-
-**Configuration and Setup**
+## Configuration and Setup
- License and Security
- - [License Configuration (EMQX Enterprise)](./configure-emqx-license.md)
- - [Enable TLS In EMQX](./configure-emqx-tls.md)
+ - [Manage License](./configure-emqx-license.md)
+   - [Enable TLS for EMQX Listeners](./configure-emqx-tls.md)
- Cluster Configuration
- - [Change EMQX Configurations Via Operator](./configure-emqx-config.md)
- - [Enable Core + Replicant Cluster (EMQX 5.x)](./configure-emqx-core-replicant.md)
- - [Enable Persistence In EMQX Cluster](./configure-emqx-persistence.md)
- - [Access EMQX Cluster by Kubernetes Service](./configure-emqx-service.md)
- - [Cluster Load Rebalancing (EMQX Enterprise)](./configure-emqx-rebalance.md)
+ - [Change EMQX Configuration](./configure-emqx-config.md)
+ - [Enable Core-Replicant Deployment](./configure-emqx-core-replicant.md)
+ - [Enable Persistence](./configure-emqx-persistence.md)
+ - [Access EMQX Cluster through LoadBalancer](./configure-emqx-service.md)
+ - [Rebalance Cluster Load](./configure-emqx-rebalance.md)
-**Upgrades and Maintenance**
+## Upgrades and Maintenance
- Upgrade
- - [Configure Blue-Green Upgrade (EMQX Enterprise](./configure-emqx-blueGreenUpdate.md)
+ - [Perform Blue-Green Upgrade](./configure-emqx-blueGreenUpdate.md)
- Log Management
- - [Collect EMQX Logs in Kubernetes](./configure-emqx-log-collection.md)
+ - [Collect EMQX Logs](./configure-emqx-log-collection.md)
- [Change EMQX Log Level](./configure-emqx-log-level.md)
-**Monitoring and Performance**
-
-- [Monitor EMQX cluster by Prometheus](./configure-emqx-prometheus.md)
+## Monitoring and Performance
+- [Monitor EMQX Cluster using Prometheus](./configure-emqx-prometheus.md)
diff --git a/zh_CN/deploy/kubernetes/operator/aws-eks.md b/zh_CN/deploy/kubernetes/operator/aws-eks.md
index ec6eb8c2f..0411a2af8 100644
--- a/zh_CN/deploy/kubernetes/operator/aws-eks.md
+++ b/zh_CN/deploy/kubernetes/operator/aws-eks.md
@@ -1,27 +1,31 @@
# Deploy EMQX on Amazon EKS
-EMQX Operator supports deploying EMQX on Amazon Elastic Kubernetes Service (EKS). Amazon EKS is a managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications. EKS provides the Kubernetes control plane and node groups, and automatically handles node replacement, upgrades, and patching. It supports AWS services such as Load Balancers, RDS, and IAM, and integrates seamlessly with other tools in the Kubernetes ecosystem. For details, see [What is Amazon EKS](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/what-is-eks.html)
+EMQX Operator supports running on Amazon Elastic Kubernetes Service (EKS). Amazon EKS is a managed Kubernetes service that simplifies deploying, managing, and scaling containerized applications. EKS provides the Kubernetes control plane and node groups, and automatically handles node replacement, upgrades, and patching. It supports AWS services such as Load Balancers, RDS, and IAM, and integrates seamlessly with other tools in the Kubernetes ecosystem.
+
+For a detailed introduction, see [What is Amazon EKS](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/what-is-eks.html).
## Prerequisites
-Before you begin, prepare the following:
+Before deploying EMQX on EKS, make sure you have completed the following prerequisites:
+
+- Create an EKS cluster.
  For more details, see [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/getting-started.html).
-- Enable Amazon container services and create an EKS cluster; for details, see [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/getting-started.html)
+- Configure kubectl to connect to your EKS cluster.
  For more details, see [Connecting to a cluster with kubectl](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl).
-- Connect to the EKS cluster with a locally installed kubectl tool; for details, see [Connecting to a cluster with kubectl](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl)
+- Deploy the AWS Load Balancer Controller on the cluster.
  For more details, see [Creating a network load balancer](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/network-load-balancing.html).
-- Deploy the AWS Load Balancer Controller on the cluster; for details, see [Creating a network load balancer](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/network-load-balancing.html)
+- Install the Amazon EBS CSI driver on the cluster.
  For more details, see [Amazon EBS CSI driver](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/ebs-csi.html).
-- Install EMQX Operator; for details, see [Install EMQX Operator](./getting-started.md)
+- Install EMQX Operator.
  For more details, see [Install EMQX Operator](./getting-started.md).
-## Quickly Deploy an EMQX Cluster
+## Quickly Deploy an EMQX Cluster
-The following is the relevant configuration of the EMQX Custom Resource. You can choose the corresponding APIVersion according to the EMQX version you want to deploy. For compatibility details, see the [EMQX and EMQX Operator compatibility list](./operator.md).
+The following example shows the relevant EMQX Custom Resource (CR) configuration for a deployment on EKS.
:::: tabs type:card
::: tab apps.emqx.io/v2beta1
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -78,21 +82,21 @@ EMQX Operator supports Amazon Elastic Kubernetes Service (EKS)
loadBalancerClass: service.k8s.aws/nlb
```
-+ Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster with the `kubectl get` command; make sure `STATUS` is `Running`. This may take some time
+2. Wait for the EMQX cluster to be ready. You can check its status with the `kubectl get` command; make sure `STATUS` is `Ready`, which may take some time.
```bash
$ kubectl get emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx:latest Running 18m
+ NAME STATUS AGE
+ emqx Ready 55s
```
-+ Get the Dashboard External IP of the EMQX cluster and access the EMQX console
+3. Get the Dashboard External IP of the EMQX cluster and access the EMQX console.
EMQX Operator creates two EMQX Service resources, emqx-dashboard and emqx-listeners, corresponding to the EMQX console and the EMQX listener ports respectively.
```bash
$ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
+
192.168.1.200
```
@@ -101,7 +105,7 @@ EMQX Operator supports Amazon Elastic Kubernetes Service (EKS)
:::
::: tab apps.emqx.io/v1beta4
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v1beta4
@@ -151,7 +155,7 @@ EMQX Operator supports Amazon Elastic Kubernetes Service (EKS)
loadBalancerClass: service.k8s.aws/nlb
```
-+ Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster with the `kubectl get` command; make sure `STATUS` is `Running`. This may take some time
+2. Wait for the EMQX cluster to be ready. You can check its status with the `kubectl get` command; make sure `STATUS` is `Running`. This may take some time.
```bash
$ kubectl get emqxenterprises
@@ -159,11 +163,11 @@ EMQX Operator supports Amazon Elastic Kubernetes Service (EKS)
emqx-ee Running 26m
```
-+ Get the External IP of the EMQX cluster and access the EMQX console
+3. Get the External IP of the EMQX cluster and access the EMQX console.
```bash
$ kubectl get svc emqx-ee -o json | jq '.status.loadBalancer.ingress[0].ip'
-
+
192.168.1.200
```
@@ -176,7 +180,7 @@ EMQX Operator supports Amazon Elastic Kubernetes Service (EKS)
[MQTT X CLI](https://mqttx.app/zh/cli) is an open-source MQTT 5.0 command-line client tool designed to help developers develop and debug MQTT services and applications faster, without needing a graphical interface.
-+ Get the External IP of the EMQX cluster
+1. Get the External IP of the EMQX cluster.
:::: tabs type:card
::: tab apps.emqx.io/v2beta1
@@ -195,29 +199,29 @@ EMQX Operator supports Amazon Elastic Kubernetes Service (EKS)
:::
::::
-+ Subscribe to messages
+2. Subscribe to messages.
```bash
$ mqttx sub -t 'hello' -h ${external_ip} -p 1883
-
+
[10:00:25] › … Connecting...
[10:00:25] › ✔ Connected
[10:00:25] › … Subscribing to hello...
[10:00:25] › ✔ Subscribed to hello
```
-+ Open a new terminal window and publish a message
+3. Open a new terminal window and publish a message.
```bash
$ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
-
+
[10:00:58] › … Connecting...
[10:00:58] › ✔ Connected
[10:00:58] › … Message Publishing...
[10:00:58] › ✔ Message published
```
-+ View the message received in the subscriber terminal window
+4. View the message received in the subscriber terminal window.
```bash
[10:00:58] › payload: hello world
diff --git a/zh_CN/deploy/kubernetes/operator/azure-aks.md b/zh_CN/deploy/kubernetes/operator/azure-aks.md
index 39d02964e..00d8f1c9b 100644
--- a/zh_CN/deploy/kubernetes/operator/azure-aks.md
+++ b/zh_CN/deploy/kubernetes/operator/azure-aks.md
@@ -1,109 +1,124 @@
-# Deploy EMQX on Azure
-
-EMQX is a high-performance, open-source, distributed IoT MQTT messaging server that provides reliable and efficient message delivery. Azure Kubernetes Service (AKS), a managed Kubernetes service, provides convenient deployment and management of containerized applications. In this article, we describe how to use EMQX Operator to deploy EMQX on Azure AKS and build a powerful IoT MQTT communication solution.
+# Deploy EMQX on Azure Kubernetes Service
+EMQX Operator supports deploying EMQX on Azure Kubernetes Service (AKS). AKS simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a managed Kubernetes service, Azure handles critical tasks such as health monitoring and maintenance. When you create an AKS cluster, Azure automatically configures and manages the Kubernetes control plane at no extra cost.
## Prerequisites
-Before you begin, you must have the following:
+Before deploying EMQX on AKS, make sure the following prerequisites are met:
+
+- An AKS cluster in your Azure subscription
+  - For guidance on creating and configuring an AKS cluster, see the [Azure Kubernetes Service documentation](https://learn.microsoft.com/zh-cn/azure/aks/).
-- To create an AKS cluster on Azure, you first need to activate the AKS service in your Azure subscription. See the [Azure Kubernetes Service](https://learn.microsoft.com/zh-cn/azure/aks/) documentation for more information.
+- A working `kubectl` configuration for connecting to the AKS cluster
+  - To connect with a locally installed `kubectl`, follow the instructions in [Connect to an AKS cluster](https://learn.microsoft.com/zh-cn/azure/aks/learn/quick-kubernetes-deploy-cli).
+  - To connect using Azure Cloud Shell, see [Manage an AKS cluster in Azure Cloud Shell](https://learn.microsoft.com/zh-cn/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli).
-- To connect to an AKS cluster with the kubectl command, you can install the kubectl tool locally and obtain the cluster's KubeConfig, or manage the cluster with Cloud Shell in the Azure portal.
-  - To connect to an AKS cluster with kubectl, install and configure the kubectl tool on your local machine. See the [Connect to an AKS cluster](https://learn.microsoft.com/zh-cn/azure/aks/learn/quick-kubernetes-deploy-cli) documentation.
-  - To connect to an AKS cluster with Cloud Shell, use Azure Cloud Shell to connect to the cluster and manage it with kubectl. See the [Manage an AKS cluster in Azure Cloud Shell](https://learn.microsoft.com/zh-cn/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) documentation for detailed instructions.
+- EMQX Operator installed on the cluster
+  - For installation details, see [Install EMQX Operator](./getting-started.md).
-- To install EMQX Operator, see [Install EMQX Operator](./getting-started.md).
-## Quickly Deploy an EMQX Cluster
+## Quickly Deploy an EMQX Cluster
-The following is the relevant configuration of the EMQX Custom Resource. You can choose the corresponding APIVersion according to the EMQX version you want to deploy. For compatibility details, see [EMQX Operator compatibility](./operator.md).
+The following example shows a basic EMQX Custom Resource (CR) configuration.
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
-  name: emqx
-spec:
-  image: emqx/emqx-enterprise:@EE_VERSION@
-  coreTemplate:
-    spec:
-      volumeClaimTemplates:
-        ## More information about storage classes: https://learn.microsoft.com/zh-cn/azure/aks/concepts-storage#storage-classes
-        storageClassName: default
-        resources:
-          requests:
-            storage: 10Gi
-        accessModes:
-          - ReadWriteOnce
-  dashboardServiceTemplate:
-    spec:
-      ## More information about load balancers: https://learn.microsoft.com/zh-cn/azure/aks/load-balancer-standard
-      type: LoadBalancer
-  listenersServiceTemplate:
-    spec:
-      ## More information about load balancers: https://learn.microsoft.com/zh-cn/azure/aks/load-balancer-standard
-      type: LoadBalancer
-```
+1. Save the following as a YAML file and deploy it using `kubectl apply`.
-Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster with the `kubectl get` command. Make sure the status is `Running`; this may take some time.
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ volumeClaimTemplates:
+        ## More information about storage classes: https://learn.microsoft.com/zh-cn/azure/aks/concepts-storage#storage-classes
+ storageClassName: default
+ resources:
+ requests:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ dashboardServiceTemplate:
+ spec:
+      ## More information about load balancers: https://learn.microsoft.com/zh-cn/azure/aks/load-balancer-standard
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+      ## More information about load balancers: https://learn.microsoft.com/zh-cn/azure/aks/load-balancer-standard
+ type: LoadBalancer
+ ```
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+2. Wait for the EMQX cluster to become ready.
-Get the external IP of the EMQX cluster and access the EMQX console.
+   Check the cluster status with `kubectl get` and verify that `STATUS` is `Ready`. Startup may take some time.
-EMQX Operator will create two EMQX Service resources, emqx-dashboard and emqx-listeners, corresponding to the EMQX console and the EMQX listener ports respectively.
+ ```shell
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 1m5s
+ ```
-```shell
-$ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
+3. Get the external IP of the EMQX Dashboard and access it.
-52.132.12.100
-```
+   EMQX Operator automatically creates the Service based on the `dashboardServiceTemplate` configuration.
-Open a web browser and go to http://52.132.12.100:18083 to access the EMQX console. Log in with the default username and password admin/public.
+ ```shell
+ $ kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip'
+ 20.245.230.91
+ ```
-## Use MQTTX CLI to Connect to the EMQX Cluster to Publish/Subscribe Messages
+4. Open the Dashboard at `http://20.245.230.91:18083`.
-MQTTX CLI is an open-source MQTT 5.0 command-line client tool designed to help developers develop and debug MQTT services and applications faster without a GUI.
+   Log in with the default credentials:
-- Get the external IP of the EMQX cluster
+   - **Username:** `admin`
+   - **Password:** `public`
- ```shell
- external_ip=$(kubectl get svc emqx -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+## Subscribe and Publish Using MQTTX
-- Subscribe to messages
+This walkthrough uses [MQTTX CLI](https://mqttx.app/zh/cli), an open-source MQTT 5.0 command-line client tool that helps developers quickly test MQTT services and applications.
- ```shell
- $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+1. Get the external IP of the EMQX TCP listener.
- [10:00:25] › … Connecting...
- [10:00:25] › ✔ Connected
- [10:00:25] › … Subscribing to hello...
- [10:00:25] › ✔ Subscribed to hello
- ```
+   EMQX Operator automatically creates a Service resource for each configured listener.
-- Open a new terminal window and send a message
+ ```shell
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
- ```shell
- $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+2. Subscribe to a topic.
- [10:00:58] › … Connecting...
- [10:00:58] › ✔ Connected
- [10:00:58] › … Message Publishing...
- [10:00:58] › ✔ Message published
- ```
+ ```shell
+ $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
-- View the received message in the subscriber terminal window
+3. In another terminal, connect to the EMQX cluster and publish a message.
- ```shell
- [10:00:58] › payload: hello world
- ```
+ ```shell
+ $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
-## TLS Termination with LoadBalancer
+4. Observe the subscriber receiving the message.
-Because Azure LoadBalancer does not support TCP certificates, see this [document](https://github.com/emqx/emqx-operator/discussions/312) to work around TLS offloading for TCP.
+ ```shell
+ [10:00:58] › payload: hello world
+ ```
+
+## A Note on TLS Offloading with LoadBalancer
+
+As an L3/L4 load balancer, Azure LoadBalancer does not support TLS termination. See this [discussion](https://github.com/emqx/emqx-operator/discussions/312) for possible solutions.
diff --git a/zh_CN/deploy/kubernetes/operator/gcp-gke.md b/zh_CN/deploy/kubernetes/operator/gcp-gke.md
index 5fea60b5c..c23be76f2 100644
--- a/zh_CN/deploy/kubernetes/operator/gcp-gke.md
+++ b/zh_CN/deploy/kubernetes/operator/gcp-gke.md
@@ -1,118 +1,136 @@
-# Deploy EMQX on GCP
+# Deploy EMQX on Google Kubernetes Engine
-EMQX is a high-performance, open-source, distributed IoT MQTT messaging server that provides reliable and efficient message delivery. Google Kubernetes Engine (GKE), a managed Kubernetes service, provides convenient deployment and management of containerized applications. In this article, we describe how to use EMQX Operator to deploy EMQX on GCP GKE and build a powerful IoT MQTT communication solution.
+EMQX Operator allows deploying EMQX on Google Kubernetes Engine (GKE), which simplifies running a managed Kubernetes cluster in GCP. With GKE, you offload the operational overhead to GCP. By deploying EMQX on GKE, you can take advantage of the scalability and flexibility of Kubernetes while benefiting from the simplicity and convenience of a managed service. With EMQX Operator on GKE, you can easily deploy and manage MQTT brokers in the cloud and focus on your business goals.
## Prerequisites
-Before you begin, you must have the following:
+Before deploying EMQX on GKE, make sure the following prerequisites are met:
-- To create a GKE cluster on Google Cloud Platform, you need to enable the GKE service in your GCP subscription. You can find more information on how to do this in the Google Kubernetes Engine documentation.
+- A GKE cluster on Google Cloud Platform
+  - You must enable the GKE API in your project. For setup instructions, see the [Google Kubernetes Engine documentation](https://cloud.google.com/kubernetes-engine/).
-- To connect to a GKE cluster with the kubectl command, you can install the kubectl tool on your local machine and obtain the cluster's KubeConfig, or use Cloud Shell in the GCP console to manage the cluster with kubectl.
+- A working `kubectl` configuration for connecting to the GKE cluster
+  - To connect with a local `kubectl` installation, see [Connect to a GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl).
+  - To connect with Cloud Shell directly from the GCP console, see [Manage a GKE cluster with Cloud Shell](https://cloud.google.com/code/docs/shell/create-configure-gke-cluster).
+
+- EMQX Operator installed on the cluster
+  - For more details, see [Install EMQX Operator](./getting-started.md).
-  - To connect to a GKE cluster with kubectl, install and configure the kubectl tool on your local machine. For detailed instructions, see the [Connect to a GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) documentation.
+  ::: warning Note
-  - To connect to a GKE cluster with Cloud Shell, you can use Cloud Shell directly in the GCP console to connect to the GKE cluster and manage it with kubectl. For detailed instructions on connecting to Cloud Shell and using kubectl, see the [Manage a GKE cluster with Cloud Shell](https://cloud.google.com/code/docs/shell/create-configure-gke-cluster) documentation.
+  Installing cert-manager on GKE with default settings may cause bootstrapping issues. Add the configuration `--set global.leaderElection.namespace=cert-manager` to use a different namespace for leader election. For details, see the [cert-manager compatibility documentation](https://cert-manager.io/docs/installation/compatibility/).
-- To install EMQX Operator, see [Install EMQX Operator](./getting-started.md).
+  :::
## Quickly Deploy an EMQX Cluster
-The following is the relevant configuration of the EMQX Custom Resource. You can choose the corresponding APIVersion according to the EMQX version you want to deploy. For compatibility details, see [EMQX Operator compatibility](./operator.md).
+The following example shows a basic EMQX Custom Resource (CR) configuration.
- ::: warning
- If you request cpu and memory resources, make sure cpu is at least 250m and memory is at least 512M
+1. Save the following document as a YAML file and deploy it using `kubectl apply`.
- - [Resource requests in Autopilot](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests?hl=zh-cn)
- :::
-
-Save the following content as a YAML file and deploy it with the kubectl apply command.
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx
-spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- coreTemplate:
- spec:
- volumeClaimTemplates:
-        ## More information about storage classes: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#storageclasses
- storageClassName: standard
- resources:
- requests:
- storage: 10Gi
- accessModes:
- - ReadWriteOnce
- dashboardServiceTemplate:
- spec:
-      ## More information about load balancers: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
- type: LoadBalancer
- listenersServiceTemplate:
- spec:
-      ## More information about load balancers: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
- type: LoadBalancer
-```
-
-Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster with the kubectl get command. Make sure the status is Running; this may take some time.
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-Get the external IP address of the EMQX cluster and access the EMQX console.
-
-EMQX Operator will create two EMQX Service resources, emqx-dashboard and emqx-listeners, corresponding to the EMQX console and the EMQX listener ports respectively.
-
-```shell
-$ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
-34.122.174.166
-```
-
-Access the EMQX console by opening http://34.122.174.166:18083 in a web browser. Log in with the default username and password admin/public.
-
-## Connect to the EMQX Cluster with MQTTX CLI to Publish/Subscribe Messages
+ ::: warning Note
-MQTTX CLI is an open-source MQTT 5.0 command-line client tool designed to help developers develop and debug MQTT services and applications faster without a GUI.
+ If you specify CPU and memory limits, make sure they are at least 250m CPU and 512Mi memory. For details, see [Resource requests in Autopilot](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests).
-- Get the external IP address of the EMQX cluster
-
- ```shell
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
-
-- Subscribe to messages
-
- ```shell
- $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
-
- [10:00:25] › … Connecting...
- [10:00:25] › ✔ Connected
- [10:00:25] › … Subscribing to hello...
- [10:00:25] › ✔ Subscribed to hello
- ```
-
-- Send a message in a new terminal window
-
- ```shell
- $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
-
- [10:00:58] › … Connecting...
- [10:00:58] › ✔ Connected
- [10:00:58] › … Message Publishing...
- [10:00:58] › ✔ Message published
- ```
-
-- View the received message in the subscriber terminal window
-
- ```shell
- [10:00:58] › payload: hello world
- ```
-
-## TLS Termination with LoadBalancer
+ :::
-Because Google LoadBalancer does not support TCP certificates, see this [document](https://github.com/emqx/emqx-operator/discussions/312) to work around TLS offloading for TCP.
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ volumeClaimTemplates:
+ ## 有关存储类的更多信息:https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#storageclasses
+ storageClassName: standard
+ resources:
+ requests:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ dashboardServiceTemplate:
+ spec:
+ ## 有关负载均衡器的更多信息:https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+ ## 有关负载均衡器的更多信息:https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
+ type: LoadBalancer
+ ```
+
+2. Wait for the EMQX cluster to become ready.
+
+   Check the status of the EMQX cluster with `kubectl get` and make sure `STATUS` is `Ready`. This may take some time.
+
+ ```shell
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 1m2s
+ ```
+
+3. Get the external IP of the EMQX Dashboard.
+
+   EMQX Operator creates the Service resource based on the `dashboardServiceTemplate` configuration.
+
+ ```shell
+ $ kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip'
+ 34.122.174.166
+ ```
+
+4. Open the Dashboard at `http://34.122.174.166:18083`.
+
+   Log in with the default credentials:
+
+   - **Username:** `admin`
+   - **Password:** `public`
+
+## Subscribe and Publish
+
+This walkthrough uses [MQTTX CLI](https://mqttx.app/zh/cli), an open-source MQTT 5.0 command-line client tool that helps developers quickly test MQTT services and applications.
+
+1. Get the external IP of the EMQX TCP listener.
+
+   EMQX Operator automatically creates a Service resource for each configured listener.
+
+ ```shell
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
+
+2. Subscribe to a topic.
+
+ ```shell
+ $ mqttx sub -t 'hello' -h ${external_ip} -p 1883
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
+
+3. In a separate terminal, connect to the EMQX cluster and publish a message.
+
+ ```shell
+ $ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
+
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
+
+4. Observe the subscriber receiving the message.
+
+ ```shell
+ [10:00:58] › payload: hello world
+ ```
+
+## A Note on TLS Offloading with LoadBalancer
+
+As of this writing, Google LoadBalancer does not support terminating TLS into plain TCP traffic. See this [discussion](https://github.com/emqx/emqx-operator/discussions/312) for possible solutions.
diff --git a/zh_CN/deploy/kubernetes/operator/getting-started.md b/zh_CN/deploy/kubernetes/operator/getting-started.md
index 76f10a3fe..0f1974efe 100644
--- a/zh_CN/deploy/kubernetes/operator/getting-started.md
+++ b/zh_CN/deploy/kubernetes/operator/getting-started.md
@@ -1,12 +1,12 @@
-# Quick Start
+# Install the Operator and Deploy EMQX
-In this article, we walk you through the steps required to efficiently set up the EMQX Operator environment, install EMQX Operator, and then use it to deploy EMQX. By following the guidelines outlined in this section, you will be able to install and manage EMQX effectively with EMQX Operator.
+This section guides you through preparing the EMQX Operator environment, installing the Operator itself, and then using it to deploy EMQX. By following the steps provided, you can install and manage EMQX efficiently and reliably with the Operator.
## Prepare the Environment
-Before deploying EMQX Operator, confirm that the following components are ready:
+Before deploying EMQX Operator, make sure the following components are ready:
-- A running [Kubernetes cluster](https://kubernetes.io/docs/concepts/overview/). For Kubernetes versions, see [How to choose a Kubernetes version](./operator.md)
+- A [Kubernetes](https://kubernetes.io/docs/concepts/overview/) environment running Kubernetes 1.24 or later.
- A [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) tool with access to the Kubernetes cluster. You can check the status of the Kubernetes cluster with the `kubectl cluster-info` command.
@@ -31,11 +31,7 @@
--set crds.enabled=true
```
-   Or install it by following the [cert-manager installation guide](https://cert-manager.io/docs/installation/).
-
-   ::: warning
-   If you install it on Google Kubernetes Engine (GKE), installing with the default configuration may cause bootstrapping problems. Add the `--set global.leaderElection.namespace=cert-manager` configuration to use a different namespace for leader election. See [cert-manager compatibility](https://cert-manager.io/docs/installation/compatibility/)
-   :::
+   Or install it by following the official [cert-manager installation guide](https://cert-manager.io/docs/installation/).
2. Run the following command to install EMQX Operator.
@@ -47,15 +43,14 @@
--create-namespace
```
-3. Wait for EMQX Operator to be ready.
+3. Wait for EMQX Operator to become ready:
```bash
$ kubectl wait --for=condition=Ready pods -l "control-plane=controller-manager" -n emqx-operator-system
-
pod/emqx-operator-controller-manager-57bd7b8bd4-h2mcr condition met
```
-Now that you have successfully installed EMQX Operator, you can move on to the next step. In the Deploy EMQX section, you will learn how to use EMQX Operator to deploy EMQX.
+Once the Operator is running, you can proceed to deploy EMQX.
## Deploy EMQX
@@ -63,7 +58,7 @@
::: tab EMQX Enterprise 5
-1. Save the YAML configuration below as `emqx.yaml`.
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -71,30 +66,30 @@
metadata:
name: emqx-ee
spec:
- image: emqx/emqx-enterprise:5.8
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
```
-   And deploy EMQX with the `kubectl apply` command.
+   For more details on the EMQX CRD, see the [reference documentation](./reference/v2beta1-reference.md).
- ```bash
- $ kubectl apply -f emqx.yaml
- ```
-
-   For more information about the EMQX Custom Resource, see the [API reference](./api-reference.md)
-
-2. Check the EMQX cluster status. Make sure STATUS is Running; it may take some time for the EMQX cluster to become ready.
+2. Wait for the EMQX cluster to become ready.
```bash
$ kubectl get emqx
-
- NAME IMAGE STATUS AGE
- emqx-ee emqx/emqx-enterprise:5.8.6 Running 2m55s
+ NAME STATUS AGE
+ emqx-ee Ready 2m55s
```
-:::
+
+   Make sure `STATUS` is `Ready`. It may take some time for the EMQX cluster to become ready.
+ :::
::: tab EMQX Open Source 5
-1. Save the YAML configuration below as `emqx.yaml`.
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -102,32 +97,27 @@
metadata:
name: emqx
spec:
- image: emqx/emqx:latest
- ```
-
-   And deploy EMQX with the `kubectl apply` command.
-
- ```bash
- $ kubectl apply -f emqx.yaml
+ image: emqx/emqx:@CE_VERSION@
```
-   For more information about the EMQX Custom Resource, see the [API reference](./api-reference.md)
+   For more details on the EMQX CRD, see the [reference documentation](./reference/v2beta1-reference.md).
-2. Check the EMQX cluster status. Make sure STATUS is Running; it may take some time for the EMQX cluster to become ready.
+2. Wait for the EMQX cluster to become ready.
```bash
$ kubectl get emqx
-
- NAME IMAGE STATUS AGE
- emqx emqx/emqx:latest Running 2m55s
+ NAME STATUS AGE
+ emqx Ready 2m55s
```
-:::
+
+   Make sure `STATUS` is `Ready`. The EMQX cluster may take some time to become ready; a lot is happening behind the scenes.
+ :::
::::
## Deploy EMQX in a Public Cloud
-See the following guides to deploy EMQX on public cloud platforms with EMQX Operator:
+Use the following guides to deploy EMQX on managed Kubernetes services with EMQX Operator:
- [Deploy EMQX on Alibaba Cloud (ACK)](./alibaba-cloud.md)
- [Deploy EMQX on Huawei Cloud (CCE)](./huawei-cloud.md)
diff --git a/zh_CN/deploy/kubernetes/operator/huawei-cloud.md b/zh_CN/deploy/kubernetes/operator/huawei-cloud.md
index b38556771..432053f14 100644
--- a/zh_CN/deploy/kubernetes/operator/huawei-cloud.md
+++ b/zh_CN/deploy/kubernetes/operator/huawei-cloud.md
@@ -29,7 +29,7 @@ EMQX Operator supports Huawei Cloud Container Engine (CCE)
The following is the relevant configuration of the EMQX Custom Resource. You can choose the corresponding APIVersion according to the EMQX version you want to deploy. For compatibility details, see the [EMQX and EMQX Operator compatibility list](./operator.md).
-+ Save the following content as a YAML file and deploy it with the `kubectl apply` command
+1. Save the following content as a YAML file and deploy it using `kubectl apply`.
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -79,17 +79,17 @@ EMQX Operator supports Huawei Cloud Container Engine (CCE)
type: LoadBalancer
```
-+ Wait for the EMQX cluster to be ready. You can check the status of the EMQX cluster with the `kubectl get` command; make sure `STATUS` is `Running`. This may take some time
+2. Wait for the EMQX cluster to be ready. You can check its status with the `kubectl get` command; make sure `STATUS` is `Running`. This may take some time.
```bash
$ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
+ NAME STATUS AGE
+ emqx Running 10m
```
-+ Get the External IP of the EMQX cluster and access the EMQX console
+3. Get the External IP of the EMQX cluster and access the EMQX console.
-   EMQX Operator creates two EMQX Service resources, `emqx-dashboard` and `emqx-listeners`, corresponding to the EMQX console and the EMQX listener ports respectively.
+EMQX Operator creates two EMQX Service resources, `emqx-dashboard` and `emqx-listeners`, corresponding to the EMQX console and the EMQX listener ports respectively.
```bash
$ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
@@ -104,35 +104,35 @@ EMQX Operator supports Huawei Cloud Container Engine (CCE)
[MQTTX CLI](https://mqttx.app/zh/cli) is an open-source MQTT 5.0 command-line client tool designed to help developers develop and debug MQTT services and applications faster, without needing a graphical interface.
-+ Get the External IP of the EMQX cluster
+1. Get the External IP of the EMQX cluster.
```bash
external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
```
-+ Subscribe to messages
+2. Subscribe to messages.
```bash
$ mqttx sub -t 'hello' -h ${external_ip} -p 1883
-
+
[10:00:25] › … Connecting...
[10:00:25] › ✔ Connected
[10:00:25] › … Subscribing to hello...
[10:00:25] › ✔ Subscribed to hello
```
-+ Open a new terminal window and publish a message
+3. Open a new terminal window and publish a message.
```bash
$ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
-
+
[10:00:58] › … Connecting...
[10:00:58] › ✔ Connected
[10:00:58] › … Message Publishing...
[10:00:58] › ✔ Message published
```
-+ View the message received in the subscriber terminal window
+4. View the message received in the subscriber terminal window.
```bash
[10:00:58] › payload: hello world
diff --git a/zh_CN/deploy/kubernetes/operator/operator.md b/zh_CN/deploy/kubernetes/operator/operator.md
index 50e873716..1373181e8 100644
--- a/zh_CN/deploy/kubernetes/operator/operator.md
+++ b/zh_CN/deploy/kubernetes/operator/operator.md
@@ -1,24 +1,36 @@
-# Introduction to EMQX Operator
+# EMQX Operator Overview
-EMQX Broker/Enterprise is a cloud-native MQTT messaging middleware. We provide EMQX Kubernetes Operator to help you quickly create and manage EMQX Broker/Enterprise clusters in a Kubernetes environment. It greatly simplifies deploying and managing EMQX clusters and requires less knowledge of administration and configuration, turning deployment and management into a low-cost, standardized, repeatable capability.
+EMQX Operator provides native [Kubernetes](https://kubernetes.io/) support for deploying and managing [EMQX](https://www.emqx.io/) clusters. Its main goal is to simplify and automate the lifecycle management of EMQX in Kubernetes environments.
+
+EMQX Operator requires Kubernetes 1.24 or later.
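+
+To confirm that a cluster meets this requirement, a quick check of the server version is enough (a generic sketch):
+
+```bash
+kubectl version | grep -i 'server version'
+```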
EMQX Operator includes, but is not limited to, the following features:
-* **Simplified EMQX deployment**: Declare an EMQX cluster through the EMQX Custom Resource and deploy it quickly. For more information, see [Getting Started](./getting-started.md).
+* **Simplified deployment**: Declare an EMQX cluster through the EMQX Custom Resource and deploy it quickly.
+
+  For more details, see the [Getting Started](./getting-started.md) guide.
+
+* **Cluster management**: Automate operational tasks for EMQX clusters, including cluster upgrades with workload migration, runtime data persistence, keeping Kubernetes-managed resources up to date, and more.
-* **EMQX cluster management**: Automate operational tasks for EMQX, including cluster upgrades, runtime data persistence, and updating Kubernetes resources according to the state of EMQX. For more information, see [Manage EMQX clusters](./tasks/overview.md).
+  For more details, see the [Manage EMQX](./tasks/overview.md) section.
-## How to Choose a Kubernetes Version
+## EMQX and EMQX Operator Compatibility
+
+The current EMQX Operator 2.2.x release series is compatible with the following EMQX versions:
+- EMQX Open Source and Enterprise 5.1.1 ~ 5.8.x
+- EMQX 5.9 and 5.10 *
+- EMQX 6.0 and later *
+
+The following API versions are supported:
+- [apps.emqx.io/v2beta1](./reference/v2beta1-reference.md)
+- apps.emqx.io/v2alpha1 (deprecated)
+- apps.emqx.io/v1beta4
+- apps.emqx.io/v1beta3 (deprecated)
+
+::: tip
-EMQX Operator requires the Kubernetes cluster version to be `>=1.24`.
+* These versions do not yet support automatic management of Durable Storage replicas; that feature is planned for the upcoming 2.3.0 release.
-| Kubernetes Version | EMQX Operator Compatibility | Notes |
-| -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| 1.24 or later | All features supported | |
-| 1.22 (inclusive) ~ 1.23 | Supported, except [MixedProtocolLBService](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) | An EMQX cluster can only use one protocol in a LoadBalancer-type Service, for example TCP or UDP. |
-| 1.21 (inclusive) ~ 1.22 | Supported, except [Pod deletion cost](https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost) | When updating an EMQX Core + Replicant cluster, Pods cannot be deleted accurately. |
-| 1.20 (inclusive) ~ 1.21 | Supported, but with `NodePort`-type Services you must manage `.spec.ports[].nodePort` manually | For details, see the [Kubernetes changelog](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#bug-or-regression-4). |
-| 1.16 (inclusive) ~ 1.20 | Supported, but not recommended due to insufficient testing | |
-| Below 1.16 | Not supported | Kubernetes below 1.16 does not support the `apiextensions/v1` APIVersion. |
+:::
+:::
diff --git a/zh_CN/deploy/kubernetes/operator/reference/overview.md b/zh_CN/deploy/kubernetes/operator/reference/overview.md
new file mode 100644
index 000000000..56c3fd749
--- /dev/null
+++ b/zh_CN/deploy/kubernetes/operator/reference/overview.md
@@ -0,0 +1,4 @@
+# API Reference
+
++ [apps.emqx.io/v2](./v2-reference.md)
++ [apps.emqx.io/v2beta1](./v2beta1-reference.md) (partially deprecated)
diff --git a/zh_CN/deploy/kubernetes/operator/reference/v2-reference.md b/zh_CN/deploy/kubernetes/operator/reference/v2-reference.md
new file mode 100644
index 000000000..0961accea
--- /dev/null
+++ b/zh_CN/deploy/kubernetes/operator/reference/v2-reference.md
@@ -0,0 +1,426 @@
+# API Reference (v2)
+
+## Packages
+- [apps.emqx.io/v2](#appsemqxiov2)
+
+
+## apps.emqx.io/v2
+
+package v2 contains API Schema definitions for the apps v2 API group.
+
+### Resource Types
+- [EMQX](#emqx)
+
+
+
+#### BootstrapAPIKey
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `key` _string_ | | | Pattern: `^[a-zA-Z\d-_]+$`
|
+| `secret` _string_ | | | MaxLength: 128
MinLength: 3
|
+| `secretRef` _[SecretRef](#secretref)_ | Reference to a Secret entry containing the EMQX API Key. | | |
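+
+For illustration, a minimal sketch of how a bootstrap API key might be declared, assuming the field is exposed as `bootstrapAPIKeys` in the spec (key name and secret are placeholders):
+
+```yaml
+apiVersion: apps.emqx.io/v2
+kind: EMQX
+metadata:
+  name: emqx
+spec:
+  bootstrapAPIKeys:
+    # key must match ^[a-zA-Z\d-_]+$; secret must be 3-128 characters long
+    - key: operator-key
+      secret: a-sufficiently-long-secret
+```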
+
+
+#### Config
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `mode` _string_ | Determines how configuration updates are applied.
* `Merge`: Merge the new configuration into the existing configuration.
* `Replace`: Replace the whole configuration. | Merge | Enum: [Merge Replace]
|
+| `data` _string_ | EMQX configuration, in HOCON format.
This configuration will be supplied as `base.hocon` to the container. See respective
[documentation](https://docs.emqx.com/en/emqx/latest/configuration/configuration.html#base-configuration-file). | | |
+
+
+#### DSDBReplicationStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [DSReplicationStatus](#dsreplicationstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `name` _string_ | Name of the database | | |
+| `numShards` _integer_ | Number of shards of the database | | |
+| `numShardReplicas` _integer_ | Total number of shard replicas | | |
+| `lostShardReplicas` _integer_ | Total number of shard replicas belonging to lost sites | | |
+| `numTransitions` _integer_ | Current number of shard ownership transitions | | |
+| `minReplicas` _integer_ | Minimum replication factor among database shards | | |
+| `maxReplicas` _integer_ | Maximum replication factor among database shards | | |
+
+
+#### DSReplicationStatus
+
+
+
+Summary of DS replication status per database.
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `dbs` _[DSDBReplicationStatus](#dsdbreplicationstatus) array_ | | | |
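+
+To inspect these values on a running cluster, you can read them straight from the resource status (a generic sketch):
+
+```bash
+kubectl get emqx emqx -o json | jq '.status'
+```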
+
+
+#### EMQX
+
+
+
+Custom Resource representing an EMQX cluster.
+
+
+
+
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `apiVersion` _string_ | `apps.emqx.io/v2` | | |
+| `kind` _string_ | `EMQX` | | |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[EMQXSpec](#emqxspec)_ | Specification of the desired state of the EMQX cluster. | | |
+| `status` _[EMQXStatus](#emqxstatus)_ | Current status of the EMQX cluster. | | |
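+
+Putting the top-level fields together, a minimal sketch of the resource (the image tag follows this documentation's placeholder convention):
+
+```yaml
+apiVersion: apps.emqx.io/v2
+kind: EMQX
+metadata:
+  name: emqx
+spec:
+  image: emqx/emqx:@CE_VERSION@
+```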
+
+
+#### EMQXCoreTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[EMQXCoreTemplateSpec](#emqxcoretemplatespec)_ | Specification of the desired state of a core node.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
+
+
+#### EMQXCoreTemplateSpec
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXCoreTemplate](#emqxcoretemplate)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeSelector` _object (keys:string, values:string)_ | Selector which must be true for the pod to fit on a node.
Must match a node's labels for the pod to be scheduled on that node.
More info: https://kubernetes.io/docs/concepts/config/assign-pod-node/ | | |
+| `nodeName` _string_ | Request to schedule this pod onto a specific node.
If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. | | |
+| `affinity` _[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)_ | Affinity for pod assignment
ref: https://kubernetes.io/docs/concepts/config/assign-pod-node/#affinity-and-anti-affinity | | |
+| `tolerations` _[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array_ | Pod tolerations.
If specified, Pod tolerates any taint that matches the triple using the matching operator. | | |
+| `topologySpreadConstraints` _[TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#topologyspreadconstraint-v1-core) array_ | Specifies how to spread matching pods among the given topology. | | |
+| `replicas` _integer_ | Desired number of instances.
In case of core nodes, each instance has a consistent identity. | 2 | Minimum: 0
|
+| `minAvailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at least "minAvailable" pods selected by
"selector" will still be available after the eviction, i.e. even in the
absence of the evicted pod. So for example you can prevent all voluntary
evictions by specifying "100%". | | XIntOrString: \{\}
|
+| `maxUnavailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at most "maxUnavailable" pods selected by
"selector" are unavailable after the eviction, i.e. even in absence of
the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable". | | XIntOrString: \{\}
|
+| `command` _string array_ | Entrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `args` _string array_ | Arguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `ports` _[ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#containerport-v1-core) array_ | List of ports to expose from the container.
Exposing a port here gives the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that
port from being exposed. Any port which is listening on the default `0.0.0.0` address inside a
container will be accessible from the network. | | |
+| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | List of environment variables to set in the container. | | |
+| `envFrom` _[EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envfromsource-v1-core) array_ | List of sources to populate environment variables from in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence. | | |
+| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Compute Resources required by this container.
More info: https://kubernetes.io/docs/concepts/config/manage-resources-containers/ | | |
+| `podSecurityContext` _[PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podsecuritycontext-v1-core)_ | Pod-level security attributes and common container settings. | \{ fsGroup:1000 fsGroupChangePolicy:Always runAsGroup:1000 runAsUser:1000 supplementalGroups:[1000] \} | |
+| `containerSecurityContext` _[SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#securitycontext-v1-core)_ | Security options the container should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | \{ runAsGroup:1000 runAsNonRoot:true runAsUser:1000 \} | |
+| `initContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | List of initialization containers belonging to the pod.
Init containers are executed in order prior to containers being started. If any
init container fails, the pod is considered to have failed and is handled according
to its restartPolicy. The name for an init container or normal container must be
unique among all containers.
Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
The resourceRequirements of an init container are taken into account during scheduling
by finding the highest request/limit for each resource type, and then using the max of
of that value or the sum of the normal containers. Limits are applied to init containers
in a similar fashion.
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ | | |
+| `extraContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | Additional containers to run alongside the main container. | | |
+| `extraVolumes` _[Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volume-v1-core) array_ | Additional volumes to provide to a Pod. | | |
+| `extraVolumeMounts` _[VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volumemount-v1-core) array_ | Specifies how additional volumes are mounted into the main container. | | |
+| `livenessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container liveness.
Container will be restarted if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:3 httpGet:map[path:/status port:dashboard] initialDelaySeconds:60 periodSeconds:30 \} | |
+| `readinessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:12 httpGet:map[path:/status port:dashboard] initialDelaySeconds:10 periodSeconds:5 \} | |
+| `startupProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the `livenessProbe` failed.
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | | |
+| `lifecycle` _[Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#lifecycle-v1-core)_ | Actions that the management system should take in response to container lifecycle events. | | |
+| `volumeClaimTemplates` _[PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#persistentvolumeclaimspec-v1-core)_ | PVC specification for core node data storage.
Note: this field is named inconsistently; it is actually just a single `PersistentVolumeClaimSpec`. | | |
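+
+For illustration, a sketch of a core template tuning the replica count, the disruption budget, and persistent storage (all values are placeholders; `volumeClaimTemplates` takes a `PersistentVolumeClaimSpec`, as noted above):
+
+```yaml
+spec:
+  coreTemplate:
+    spec:
+      replicas: 3
+      # Allow voluntary evictions only while at least 2 pods stay available
+      minAvailable: 2
+      volumeClaimTemplates:
+        storageClassName: standard
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 10Gi
+```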
+
+
+#### EMQXNode
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `name` _string_ | Node name | | |
+| `podName` _string_ | Corresponding pod name | | |
+| `status` _string_ | Node status | | |
+| `otpRelease` _string_ | Erlang/OTP version node is running on | | |
+| `version` _string_ | EMQX version | | |
+| `role` _string_ | Node role, either "core" or "replicant" | | |
+| `sessions` _integer_ | Number of MQTT sessions | | |
+| `connections` _integer_ | Number of connected MQTT clients | | |
+
+
+#### EMQXNodesStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `replicas` _integer_ | Total number of replicas. | | |
+| `readyReplicas` _integer_ | Number of ready replicas. | | |
+| `currentRevision` _string_ | Current revision of the respective core or replicant set. | | |
+| `currentReplicas` _integer_ | Number of replicas running current revision. | | |
+| `updateRevision` _string_ | Update revision of the respective core or replicant set.
When different from the current revision, the set is being updated. | | |
+| `updateReplicas` _integer_ | Number of replicas running update revision. | | |
+| `collisionCount` _integer_ | | | |
+
+
+#### EMQXReplicantTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[EMQXReplicantTemplateSpec](#emqxreplicanttemplatespec)_ | Specification of the desired state of a replicant node.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
+
+
+#### EMQXReplicantTemplateSpec
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXCoreTemplateSpec](#emqxcoretemplatespec)
+- [EMQXReplicantTemplate](#emqxreplicanttemplate)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeSelector` _object (keys:string, values:string)_ | Selector which must be true for the pod to fit on a node.
Must match a node's labels for the pod to be scheduled on that node.
More info: https://kubernetes.io/docs/concepts/config/assign-pod-node/ | | |
+| `nodeName` _string_ | Request to schedule this pod onto a specific node.
If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. | | |
+| `affinity` _[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)_ | Affinity for pod assignment
ref: https://kubernetes.io/docs/concepts/config/assign-pod-node/#affinity-and-anti-affinity | | |
+| `tolerations` _[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array_ | Pod tolerations.
If specified, Pod tolerates any taint that matches the triple using the matching operator. | | |
+| `topologySpreadConstraints` _[TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#topologyspreadconstraint-v1-core) array_ | Specifies how to spread matching pods among the given topology. | | |
+| `replicas` _integer_ | Desired number of instances.
In case of core nodes, each instance has a consistent identity. | 2 | Minimum: 0
|
+| `minAvailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at least "minAvailable" pods selected by
"selector" will still be available after the eviction, i.e. even in the
absence of the evicted pod. So for example you can prevent all voluntary
evictions by specifying "100%". | | XIntOrString: \{\}
|
+| `maxUnavailable` _[IntOrString](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#intorstring-intstr-util)_ | An eviction is allowed if at most "maxUnavailable" pods selected by
"selector" are unavailable after the eviction, i.e. even in absence of
the evicted pod. For example, one can prevent all voluntary evictions
by specifying 0. This is a mutually exclusive setting with "minAvailable". | | XIntOrString: \{\}
|
+| `command` _string array_ | Entrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `args` _string array_ | Arguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced
to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will
produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless
of whether the variable exists or not.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell | | |
+| `ports` _[ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#containerport-v1-core) array_ | List of ports to expose from the container.
Exposing a port here gives the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that
port from being exposed. Any port which is listening on the default `0.0.0.0` address inside a
container will be accessible from the network. | | |
+| `env` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envvar-v1-core) array_ | List of environment variables to set in the container. | | |
+| `envFrom` _[EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#envfromsource-v1-core) array_ | List of sources to populate environment variables from in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence. | | |
+| `resources` _[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)_ | Compute Resources required by this container.
More info: https://kubernetes.io/docs/concepts/config/manage-resources-containers/ | | |
+| `podSecurityContext` _[PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podsecuritycontext-v1-core)_ | Pod-level security attributes and common container settings. | \{ fsGroup:1000 fsGroupChangePolicy:Always runAsGroup:1000 runAsUser:1000 supplementalGroups:[1000] \} | |
+| `containerSecurityContext` _[SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#securitycontext-v1-core)_ | Security options the container should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | \{ runAsGroup:1000 runAsNonRoot:true runAsUser:1000 \} | |
+| `initContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | List of initialization containers belonging to the pod.
Init containers are executed in order prior to containers being started. If any
init container fails, the pod is considered to have failed and is handled according
to its restartPolicy. The name for an init container or normal container must be
unique among all containers.
Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
The resourceRequirements of an init container are taken into account during scheduling
by finding the highest request/limit for each resource type, and then using the max of
that value or the sum of the normal containers. Limits are applied to init containers
in a similar fashion.
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ | | |
+| `extraContainers` _[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#container-v1-core) array_ | Additional containers to run alongside the main container. | | |
+| `extraVolumes` _[Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volume-v1-core) array_ | Additional volumes to provide to a Pod. | | |
+| `extraVolumeMounts` _[VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#volumemount-v1-core) array_ | Specifies how additional volumes are mounted into the main container. | | |
+| `livenessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container liveness.
Container will be restarted if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:3 httpGet:map[path:/status port:dashboard] initialDelaySeconds:60 periodSeconds:30 \} | |
+| `readinessProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | \{ failureThreshold:12 httpGet:map[path:/status port:dashboard] initialDelaySeconds:10 periodSeconds:5 \} | |
+| `startupProbe` _[Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#probe-v1-core)_ | StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the `livenessProbe` failed.
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes | | |
+| `lifecycle` _[Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#lifecycle-v1-core)_ | Actions that the management system should take in response to container lifecycle events. | | |
+
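+As an illustration, here is a hedged sketch combining a few of these fields in
+a replicant template; the values are placeholders rather than recommendations,
+and note that `minAvailable` and `maxUnavailable` are mutually exclusive:
+
+```yaml
+replicantTemplate:
+  spec:
+    replicas: 3
+    minAvailable: "50%"   # IntOrString: absolute count or percentage
+    resources:
+      requests:
+        cpu: 250m
+        memory: 1Gi
+    env:
+      # Hypothetical config override, shown only for illustration:
+      - name: EMQX_LOG__CONSOLE__LEVEL
+        value: debug
+```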
+
+#### EMQXSpec
+
+
+
+EMQXSpec defines the desired state of EMQX.
+
+
+
+_Appears in:_
+- [EMQX](#emqx)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `image` _string_ | EMQX container image.
More info: https://kubernetes.io/docs/concepts/containers/images | | |
+| `imagePullPolicy` _[PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#pullpolicy-v1-core)_ | Container image pull policy.
One of `Always`, `Never`, `IfNotPresent`.
Defaults to `Always` if `:latest` tag is specified, or `IfNotPresent` otherwise.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | |
+| `imagePullSecrets` _[LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#localobjectreference-v1-core) array_ | ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec.
If specified, these secrets will be passed to individual puller implementations for them to use.
More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod | | |
+| `serviceAccountName` _string_ | ServiceAccount name.
Managed ReplicaSets and StatefulSets are associated with the specified ServiceAccount for authentication purposes.
More info: https://kubernetes.io/docs/concepts/security/service-accounts | | |
+| `bootstrapAPIKeys` _[BootstrapAPIKey](#bootstrapapikey) array_ | Bootstrap API keys to access EMQX API.
Cannot be updated. | | |
+| `config` _[Config](#config)_ | EMQX Configuration. | | |
+| `clusterDomain` _string_ | Kubernetes cluster domain. | cluster.local | |
+| `revisionHistoryLimit` _integer_ | Number of old ReplicaSets, old StatefulSets and old PersistentVolumeClaims to retain to allow rollback. | 3 | |
+| `updateStrategy` _[UpdateStrategy](#updatestrategy)_ | Cluster upgrade strategy settings. | \{ type:Recreate \} | |
+| `coreTemplate` _[EMQXCoreTemplate](#emqxcoretemplate)_ | Template for Pods running EMQX core nodes. | \{ spec:map[replicas:2] \} | |
+| `replicantTemplate` _[EMQXReplicantTemplate](#emqxreplicanttemplate)_ | Template for Pods running EMQX replicant nodes. | | |
+| `dashboardServiceTemplate` _[ServiceTemplate](#servicetemplate)_ | Template for Service exposing the EMQX Dashboard.
Dashboard Service always points to the set of EMQX core nodes. | | |
+| `listenersServiceTemplate` _[ServiceTemplate](#servicetemplate)_ | Template for Service exposing enabled EMQX listeners.
Listeners Service points to the set of EMQX replicant nodes if they are enabled and exist.
Otherwise, it points to the set of EMQX core nodes. | | |
+
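+Putting the central fields together, a minimal hedged sketch of an EMQX custom
+resource (assuming the `apps.emqx.io/v2beta1` API version used elsewhere in
+this changeset; the image tag and replica count are placeholders):
+
+```yaml
+apiVersion: apps.emqx.io/v2beta1
+kind: EMQX
+metadata:
+  name: emqx
+spec:
+  image: emqx/emqx:latest
+  coreTemplate:
+    spec:
+      replicas: 3
+  listenersServiceTemplate:
+    spec:
+      type: LoadBalancer
+```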
+
+#### EMQXStatus
+
+
+
+EMQXStatus defines the observed state of EMQX.
+
+
+
+_Appears in:_
+- [EMQX](#emqx)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#condition-v1-meta) array_ | Conditions representing the current status of the EMQX Custom Resource. | | |
+| `coreNodes` _[EMQXNode](#emqxnode) array_ | Status of each core node in the cluster. | | |
+| `coreNodesStatus` _[EMQXNodesStatus](#emqxnodesstatus)_ | Summary status of the set of core nodes. | | |
+| `replicantNodes` _[EMQXNode](#emqxnode) array_ | Status of each replicant node in the cluster. | | |
+| `replicantNodesStatus` _[EMQXNodesStatus](#emqxnodesstatus)_ | Summary status of the set of replicant nodes. | | |
+| `nodeEvacuationsStatus` _[NodeEvacuationStatus](#nodeevacuationstatus) array_ | Status of active node evacuations in the cluster. | | |
+| `dsReplication` _[DSReplicationStatus](#dsreplicationstatus)_ | Status of EMQX Durable Storage replication. | | |
+
+
+#### EvacuationStrategy
+
+
+
+
+
+
+
+_Appears in:_
+- [UpdateStrategy](#updatestrategy)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `connEvictRate` _integer_ | Client disconnect rate (number per second).
Same as `conn-evict-rate` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 1000 | Minimum: 1
|
+| `sessEvictRate` _integer_ | Session evacuation rate (number per second).
Same as `sess-evict-rate` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 1000 | Minimum: 1
|
+| `waitTakeover` _integer_ | Amount of time (in seconds) to wait before starting session evacuation.
Same as `wait-takeover` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 10 | Minimum: 0
|
+| `waitHealthCheck` _integer_ | Duration (in seconds) during which the node waits for the Load Balancer to remove it from the active backend node list.
Same as `wait-health-check` in [EMQX Node Evacuation](https://docs.emqx.com/en/emqx/v5.10/deploy/cluster/rebalancing.html#node-evacuation). | 60 | Minimum: 0
|
+
+
+#### KeyRef
+
+
+
+
+
+
+
+_Appears in:_
+- [SecretRef](#secretref)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `secretName` _string_ | Name of the Secret object. | | |
+| `secretKey` _string_ | Entry within the Secret data. | | Pattern: `^[a-zA-Z\d-_]+$`
|
+
+
+#### NodeEvacuationStatus
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXStatus](#emqxstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `nodeName` _string_ | Evacuated node name | | |
+| `state` _string_ | Evacuation state | | |
+| `sessionRecipients` _string array_ | Nodes receiving the evacuated sessions | | |
+| `sessionEvictionRate` _integer_ | Session eviction rate, in sessions per second. | | |
+| `connectionEvictionRate` _integer_ | Connection eviction rate, in connections per second. | | |
+| `initialSessions` _integer_ | Initial number of sessions on this node | | |
+| `initialConnections` _integer_ | Initial number of connections to this node | | |
+
+
+#### SecretRef
+
+
+
+
+
+
+
+_Appears in:_
+- [BootstrapAPIKey](#bootstrapapikey)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `key` _[KeyRef](#keyref)_ | Reference to a Secret entry containing the EMQX API Key. | | |
+| `secret` _[KeyRef](#keyref)_ | Reference to a Secret entry containing the EMQX API Key's secret. | | |
+
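+To illustrate how `KeyRef` and `SecretRef` fit together, here is a hedged
+sketch of a bootstrap API key backed by a Secret. The `secretRef` field name on
+the `BootstrapAPIKey` entry is an assumption, since that type is documented
+elsewhere; the Secret name and data keys are arbitrary examples:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: emqx-bootstrap-api-key
+stringData:
+  key: my-api-key       # data entry names ("key", "secret") must match ^[a-zA-Z\d-_]+$
+  secret: my-api-secret
+---
+apiVersion: apps.emqx.io/v2beta1
+kind: EMQX
+metadata:
+  name: emqx
+spec:
+  image: emqx/emqx:latest
+  bootstrapAPIKeys:
+    - secretRef:          # assumed field name, see note above
+        key:
+          secretName: emqx-bootstrap-api-key
+          secretKey: key
+        secret:
+          secretName: emqx-bootstrap-api-key
+          secretKey: secret
+```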
+
+#### ServiceTemplate
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `enabled` _boolean_ | Specifies whether the Service should be created. | true | |
+| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
+| `spec` _[ServiceSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#servicespec-v1-core)_ | Specification of the desired state of a Service.
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | | |
+
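+For instance, a hedged sketch of a dashboard Service template using these
+fields (the label and Service type are arbitrary examples):
+
+```yaml
+dashboardServiceTemplate:
+  enabled: true
+  metadata:
+    labels:
+      app.kubernetes.io/part-of: emqx
+  spec:
+    type: LoadBalancer
+```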
+
+#### UpdateStrategy
+
+
+
+
+
+
+
+_Appears in:_
+- [EMQXSpec](#emqxspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `type` _string_ | Determines how a cluster upgrade is performed.
* `Recreate`: Perform a blue-green upgrade. | Recreate | Enum: [Recreate]
|
+| `initialDelaySeconds` _integer_ | Number of seconds before connection evacuation starts. | 10 | Minimum: 0
|
+| `evacuationStrategy` _[EvacuationStrategy](#evacuationstrategy)_ | Evacuation strategy settings. | | |
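+
+The snippet below sketches a complete update strategy; the values simply
+restate the defaults from the tables above:
+
+```yaml
+updateStrategy:
+  type: Recreate          # blue-green upgrade; currently the only accepted value
+  initialDelaySeconds: 10
+  evacuationStrategy:
+    connEvictRate: 1000
+    sessEvictRate: 1000
+    waitTakeover: 10
+    waitHealthCheck: 60
+```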
+
diff --git a/zh_CN/deploy/kubernetes/operator/api-reference.md b/zh_CN/deploy/kubernetes/operator/reference/v2beta1-reference.md
similarity index 99%
rename from zh_CN/deploy/kubernetes/operator/api-reference.md
rename to zh_CN/deploy/kubernetes/operator/reference/v2beta1-reference.md
index 26a7e6096..ca99fb6ba 100644
--- a/zh_CN/deploy/kubernetes/operator/api-reference.md
+++ b/zh_CN/deploy/kubernetes/operator/reference/v2beta1-reference.md
@@ -1,4 +1,4 @@
-# API Reference
+# API Reference (v2beta1)
## Packages
- [apps.emqx.io/v2beta1](#appsemqxiov2beta1)
@@ -608,4 +608,3 @@ _Appears in:_
| `initialDelaySeconds` _integer_ | Number of seconds before evacuation connection start. | | |
| `evacuationStrategy` _[EvacuationStrategy](#evacuationstrategy)_ | Number of seconds before evacuation connection timeout. | | |
-
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-repliant.png b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-replicant.png
similarity index 100%
rename from zh_CN/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-repliant.png
rename to zh_CN/deploy/kubernetes/operator/tasks/assets/configure-core-replicant/mria-core-replicant.png
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png
index 3c362b208..02eaa1ce3 100644
Binary files a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png and b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-action.png differ
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png
index 1714bd142..954acc890 100644
Binary files a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png and b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-new.png differ
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png
index c2086f99f..77a71eeb1 100644
Binary files a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png and b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-emqx-persistent/emqx-core-rule-old.png differ
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png
index 2dc20d6d1..7015cb5db 100644
Binary files a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png and b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-add-listener.png differ
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png
index ee34a1993..7baa2e6a2 100644
Binary files a/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png and b/zh_CN/deploy/kubernetes/operator/tasks/assets/configure-service/emqx-listeners.png differ
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
index 81ebd0ba8..779eda228 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-blueGreenUpdate.md
@@ -1,21 +1,16 @@
-# 通过蓝绿发布优雅的升级 EMQX 集群
+# 执行蓝绿升级
-## 任务目标
+## 目标
-如何通过蓝绿发布优雅的升级 EMQX 集群
-
-:::tip
-
-该功能仅支持 `apps.emqx.io/v1beta4 EmqxEnterprise` 及 `apps.emqx.io/v2beta1 EMQX`
-
-:::
+通过蓝绿部署执行 EMQX 集群的优雅升级。
## 背景
-在传统的 EMQX 集群部署中,通常使用 StatefulSet 默认的滚动升级策略来更新 EMQX Pod。然而,这种方式存在以下两个问题:
+在传统的 EMQX 集群部署中,通常使用 StatefulSet 的默认滚动升级策略来更新 EMQX Pod。然而,这种方法存在以下两个问题:
+
+- 在滚动更新期间,新 Pod 和旧 Pod 都会被相应的 Service 选中。这可能导致 MQTT 客户端连接到正在终止的旧 Pod,从而导致频繁断开连接和重连。
-1. 在进行滚动更新时,对应的 Service 会同时选中新的和旧的 Pod。这可能导致 MQTT 客户端连接到错误的 Pod 上,从而频繁断开连接并进行重连操作。
-2. 在滚动更新过程中,只有 N - 1 个 Pod 能够提供服务,因为新的 Pod 需要一定时间来启动和准备就绪。这可能导致服务的可用性下降。
+- 在滚动更新过程中,在任何给定时间只有 _N - 1_ 个 Pod 能够提供服务,因为新 Pod 需要一些时间来启动和就绪。这可能导致服务可用性下降。
```mermaid
timeline
@@ -44,16 +39,14 @@ timeline
## 解决方案
-针对上文提到的滚动更新的问题,EMQX Operator 提供了蓝绿发布的升级方案,通过 EMQX 自定义资源升级 EMQX 集群时,EMQX Operator 会创建新的 EMQX 集群,并在集群就绪后将 Kubernetes Service 指向新的 EMQX 集群,并逐步删除旧的 EMQX 集群的 Pod,从而达到更新 EMQX 集群的目的。
+EMQX Operator 默认执行蓝绿部署。当通过相应的 EMQX CR 更新 EMQX 集群时,EMQX Operator 会启动升级。
-在删除旧的 EMQX 集群的 Pod 时,EMQX Operator 还可以利用 EMQX 节点疏散的特性,以用户所希望的速率将 MQTT 连接转移到新的集群中,避免了段时间内大量连接的问题。
+整个升级过程大致分为以下步骤:
-整个升级流程大致可分为以下几步:
-
-1. 创建一个相同规格的集群。
-2. 新集群就绪后,将 service 指向新集群,并将旧集群从 service 中摘除,此时新集群开始接受流量,旧集群现有的连接不受影响。
-3. (仅支持 EMQX 企业版)通过 EMQX 节点疏散功能,逐个对节点上的连接进行疏散。
-4. 将旧的集群逐步缩容到 0 个节点。
+1. 创建一组具有更新规格的新 EMQX 节点。
+2. 一旦新节点集就绪,将 Service 资源重定向到新节点集,确保没有新连接路由到旧节点集。
+3. 以受控速率安全地将现有 MQTT 连接从旧节点集迁移到新节点集,以避免重连风暴。
+4. 逐步缩容旧 EMQX 节点集。
5. 完成升级。
```mermaid
@@ -93,144 +86,82 @@ timeline
: pod-2
```
-## 如何通过蓝绿发布更新 EMQX 集群
+## 步骤
### 配置更新策略
-:::: tabs type:card
-::: tab apps.emqx.io/v2beta1
-
-创建 `apps.emqx.io/v2beta1 EMQX`,并配置更新策略
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx
-spec:
- image: emqx/emqx:latest
- updateStrategy:
- evacuationStrategy:
- connEvictRate: 1000
- sessEvictRate: 1000
- waitTakeover: 10
- initialDelaySeconds: 10
- type: Recreate
-```
-
-`initialDelaySeconds`: 所有的节点就绪后,开始更新前的等待时间(单位: second)。
-
-`waitTakeover`: 删除 Pod 时的间隔时间(单位: second)。
-
-`connEvictRate`: MQTT 客户端疏散速率,仅支持 EMQX 企业版(单位: count/second)。
-
-`sessEvictRate`: MQTT Session 疏散速率,仅支持 EMQX 企业版(单位:count/second)。
-
-将上述内容保存为:emqx-update.yaml,执行如下命令部署 EMQX:
-
-```bash
-$ kubectl apply -f emqx-update.yaml
-
-emqx.apps.emqx.io/emqx-ee created
-```
-
-检查 EMQX 集群状态,请确保 `STATUS` 为 `Running`,这可能需要一些时间等待 EMQX 集群准备就绪。
-
-```bash
-$ kubectl get emqx
-
-NAME STATUS AGE
-emqx-ee Running 8m33s
-```
-
-:::
-::: tab apps.emqx.io/v1beta4
-
-创建 `apps.emqx.io/v1beta4 EmqxEnterprise` 并配置更新策略。
-
-```yaml
-apiVersion: apps.emqx.io/v1beta4
-kind: EmqxEnterprise
-metadata:
- name: emqx-ee
-spec:
- blueGreenUpdate:
- initialDelaySeconds: 60
- evacuationStrategy:
- waitTakeover: 5
- connEvictRate: 200
- sessEvictRate: 200
- template:
- spec:
- emqxContainer:
- image:
- repository: emqx/emqx-ee
- version: 4.4.14
-```
-
-`initialDelaySeconds`: 所有的节点就绪后,开始节点疏散前的等待时间(单位: second)。
-
-`waitTakeover`: 所有连接断开后,等待客户端重连以接管会话的时间(单位: second)。
-
-`connEvictRate`: MQTT 客户端疏散速率(单位: count/second)。
-
-`sessEvictRate`: MQTT Session 疏散速度(单位:count/second)。
-
-将上述内容保存为:emqx-update.yaml,执行如下命令部署 EMQX 企业版集群:
-
-```bash
-$ kubectl apply -f emqx-update.yaml
+1. 创建 `apps.emqx.io/v2beta1` EMQX CR 并配置更新策略。
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx-ee
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ updateStrategy:
+ evacuationStrategy:
+ # MQTT 客户端疏散速率,每秒连接数:
+ connEvictRate: 1000
+ # MQTT Session 疏散速率,每秒会话数:
+ sessEvictRate: 1000
+ # 删除 Pod 前的等待时间:
+ waitTakeover: 10
+ # 所有节点就绪后,开始升级前的等待时间:
+ initialDelaySeconds: 10
+ type: Recreate
+ ```
-emqxenterprise.apps.emqx.io/emqx-ee created
-```
+2. 将上述内容保存为 `emqx-update.yaml`,并使用 `kubectl apply` 部署:
-检查 EMQX 集群状态,请确保 `STATUS` 为 `Running`,这可能需要一些时间等待 EMQX 集群准备就绪。
+ ```bash
+ $ kubectl apply -f emqx-update.yaml
+ emqx.apps.emqx.io/emqx-ee created
+ ```
-```bash
-$ kubectl get emqxenterprises
+3. 检查 EMQX 集群的状态。
-NAME STATUS AGE
-emqx-ee Running 8m33s
-```
+ 确保 `STATUS` 为 `Ready`。这可能需要一些时间。
-:::
-::::
+ ```bash
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx-ee Ready 8m33s
+ ```
-### 使用 MQTT X CLI 连接 EMQX 集群
+### 连接到 EMQX 集群
-MQTT X CLI 是一个开源的,支持自动重连的 MQTT 5.0 CLI Client,也是一个纯命令行模式的 MQTT X。旨在帮助更快地开发和调试 MQTT 服务和应用程序,而无需使用图形界面。关于 MQTT X CLI 的文档可以参考:[MQTTX CLI](https://mqttx.app/docs/cli)。
+[MQTTX](https://mqttx.app/zh/cli) 是一款开源的、支持自动重连的 MQTT 5.0 兼容命令行客户端工具,旨在帮助开发和调试 MQTT 服务和应用。
-执行如下命令连接 EMQX 集群:
+使用 MQTTX 连接到 EMQX 集群:
```bash
mqttx bench conn -h ${IP} -p ${PORT} -c 3000
-```
-
-输出类似于:
-
-```bash
[10:05:21 AM] › ℹ Start the connect benchmarking, connections: 3000, req interval: 10ms
✔ success [3000/3000] - Connected
[10:06:13 AM] › ℹ Done, total time: 31.113s
```
-### 升级 EMQX 集群
+### 触发升级
-- 任何作用到 Pod Template 的修改都会触发 EMQX Operator 的升级策略
+1. 对 Pod 模板的任何修改都会触发 EMQX Operator 的升级策略。
- > 在本文中通过我们修改 EMQX Container Image 来触发升级,用户可根据实际需求自行修改。
+   在此示例中,我们通过修改 Pod 的 `imagePullPolicy` 来触发升级。
```bash
$ kubectl patch emqx emqx-ee --type=merge -p '{"spec": {"imagePullPolicy": "Never"}}'
-
emqx.apps.emqx.io/emqx-ee patched
```
-- 检查蓝绿升级的状态
+2. 检查升级过程的状态。
```bash
$ kubectl get emqx emqx-ee -o json | jq ".status.nodeEvacuationsStatus"
-
[
{
"connection_eviction_rate": 200,
@@ -254,41 +185,40 @@ mqttx bench conn -h ${IP} -p ${PORT} -c 3000
]
```
- `connection_eviction_rate`: 节点疏散速率(单位:count/second)。
-
- `node`: 当前正在进行疏散的节点。
-
- `session_eviction_rate`: 节点 session 疏散速率(单位:count/second)。
+ | 字段 | 描述 |
+ |-------------------------|-----------------------------------------------------------------------|
+ | `node` | 当前正在疏散的节点。 |
+ | `state` | 节点疏散阶段。 |
+ | `session_recipients` | MQTT 会话接收者。 |
+ | `session_eviction_rate` | 此节点上的 MQTT 会话疏散速率(每秒会话数)。 |
+ | `connection_eviction_rate`| 此节点上的 MQTT 连接疏散速率(每秒连接数)。 |
+ | `initial_sessions` | 此节点上的初始会话数。 |
+ | `initial_connected` | 此节点上的初始连接数。 |
+ | `current_sessions` | 此节点上的当前会话数。 |
+ | `current_connected` | 此节点上的当前连接数。 |
- `session_recipients`: session 疏散的接受者列表。
-
- `state`: 节点疏散阶段。
-
- `stats`: 疏散节点的统计指标,包括当前连接数(current_connected),当前 session 数(current_sessions),初始连接数(initial_connected),初始 session 数(initial_sessions)。
-
-- 等待完成升级
+3. 等待升级完成。
```bash
$ kubectl get emqx
-
NAME STATUS AGE
emqx-ee Ready 8m33s
```
- 请确保 `STATUS` 为 `Ready`, 这需要一些时间等待 EMQX 集群完成升级。
+ 确保 `STATUS` 为 `Ready`。根据 MQTT 客户端和会话的数量,升级过程可能需要一些时间。
- 升级完成后, 通过 `$ kubectl get pods` 命令可以观察到旧的 EMQX 节点已经被删除。
+ 升级完成后,您可以使用 `kubectl get pods` 验证旧的 EMQX 节点已被删除。
## Grafana 监控
-升级过程中连接数监控图如下(1万连接为例)
+以下监控图显示了升级过程中的连接数,以 10,000 个连接为例。

-Total:总的连接数,图中最上面的一条线
-
-emqx-ee-86f864f975:前缀表示的是升级前的 3 个 EMQX 节点
-
-emqx-ee-648c45c747:前缀表示升级后的 3 个 EMQX 节点
+| 标签/前缀 | 描述 |
+|-------------------------|-----------------------------------------------------|
+| Total | 连接总数;图中最上面的线。 |
+| `emqx-ee-86f864f975` | 3 个旧 EMQX 节点集的名称前缀。 |
+| `emqx-ee-648c45c747` | 3 个升级后的 EMQX 节点集的名称前缀。 |
-如上图,我们通过 EMQX Kubernetes Operator 的蓝绿发布在 Kubernetes 中实现了优雅升级,通过该方案升级,总连接数未出现较大抖动(取决于迁移速率、服务端能够接收的速率、客户端重连策略等),能够极大程度保障升级过程的平滑,有效防止服务端过载,减少业务感知,从而提升服务的稳定性。
+如上图所示,EMQX Operator 通过蓝绿升级实现了平滑过渡:在整个过程中,连接总数保持稳定(实际效果取决于迁移速率、服务端容量和客户端重连策略等因素)。这种方式能够最大限度地减少中断、防止服务端过载,并提升整体服务的稳定性。
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-config.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-config.md
index 4fd0f45e7..483d7a6e1 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-config.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-config.md
@@ -1,87 +1,77 @@
# 修改 EMQX 配置
-## 任务目标
+## 目标
-通过 `config.data` 字段修改 EMQX 节点配置。
+使用 EMQX 自定义资源中的 `.spec.config.data` 字段修改 EMQX 配置。
## 配置 EMQX 集群
-EMQX 主配置文件为 `etc/emqx.conf`,从 5.0 版本开始,EMQX 采用 [HOCON](https://www.emqx.io/docs/zh/v5.1/configuration/configuration.html#hocon-%E9%85%8D%E7%BD%AE%E6%A0%BC%E5%BC%8F) 作为配置文件格式。
-
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.config.data` 字段配置 EMQX 集群。EMQX 配置可以参考文档:[配置手册](https://www.emqx.io/docs/zh/v5.1/configuration/configuration-manual.html#%E8%8A%82%E7%82%B9%E8%AE%BE%E7%BD%AE)。
-
-:::tip
-如果在创建 EMQX 之后需要修改集群配置,请通过 EMQX Dashboard 进行修改。
-:::
-
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx:latest
- config:
- data: |
- listeners.tcp.test {
- bind = "0.0.0.0:1884"
- max_connections = 1024000
- }
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
-
-> 在 `.spec.config.data` 字段里面,我们为 EMQX 集群配置了一个 TCP listener,这个 listener 名称为:test,监听的端口为:1884。
-
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
-
- ```
- $ kubectl get emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx:latest Running 2m55s
- ```
-
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
-
- EMQX Operator 会创建两个 EMQX Service 资源,一个是 emqx-dashboard,一个是 emqx-listeners,分别对应 EMQX 控制台和 EMQX 监听端口。
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- 通过浏览器访问 `http://192.168.1.200:18083` ,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
+EMQX CRD `apps.emqx.io/v2beta1` 支持通过 `.spec.config.data` 字段配置 EMQX 集群。有关完整的配置参考,请参阅[配置手册](https://docs.emqx.com/zh/enterprise/v6.0.0/hocon/)。
+
+EMQX 使用 [HOCON](../../../../configuration/configuration.md#hocon-配置格式) 作为配置文件格式。
+
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署:
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ imagePullPolicy: IfNotPresent
+ config:
+ # 配置一个名为 `test` 的 TCP 监听器,监听端口 1884:
+ data: |
+ listeners.tcp.test {
+ bind = "0.0.0.0:1884"
+ max_connections = 1024000
+ }
+ license {
+ key = "..."
+ }
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
+
+ ::: tip
+ `.spec.config.data` 字段的内容作为 [`emqx.conf` 配置文件](../../../../configuration/configuration.md#不可变配置文件)提供给 EMQX 容器。
+ :::
+
+2. 等待 EMQX 集群就绪。
+
+ 使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
+
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
## 验证配置
-+ 查看 EMQX 集群 listener 信息
-
- ```bash
- $ kubectl exec -it emqx-core-0 -c emqx -- emqx_ctl listeners
- ```
-
- 可以获取到类似如下的打印,这意味着们配置的名称为 `test` 的 listener 已经生效。
-
- ```bash
- tcp:default
- listen_on : 0.0.0.0:1883
- acceptors : 16
- proxy_protocol : false
- running : true
- current_conn : 0
- max_conns : 1024000
- tcp:test
- listen_on : 0.0.0.0:1884
- acceptors : 16
- proxy_protocol : false
- running : true
- current_conn : 0
- max_conns : 1024000
- ```
+查看 EMQX 监听器的状态。
+
+```bash
+$ kubectl exec -it emqx-core-0 -c emqx -- emqx ctl listeners
+tcp:default
+  listen_on       : 0.0.0.0:1883
+  acceptors       : 16
+  proxy_protocol  : false
+  running         : true
+  current_conn    : 0
+  max_conns       : 1024000
+tcp:test
+  listen_on       : 0.0.0.0:1884
+  acceptors       : 16
+  proxy_protocol  : false
+  running         : true
+  current_conn    : 0
+  max_conns       : 1024000
+```
+
+这里我们可以看到端口 1884 上的新监听器正在运行。
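+
+作为补充验证(示例,假设 listeners Service 已对外暴露,`${IP}` 为其地址),也可以使用 MQTTX CLI 直接连接新监听器的 1884 端口:
+
+```bash
+mqttx conn -h ${IP} -p 1884
+```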
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
index eb0c6b117..44eac901d 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-core-replicant.md
@@ -1,135 +1,126 @@
-# 配置 Core + Replicant 集群 (EMQX 5.x)
+# 启用 Core-Replicant 部署
-## 任务目标
+## 目标
- 通过 `coreTemplate` 字段配置 EMQX 集群 Core 节点。
- 通过 `replicantTemplate` 字段配置 EMQX 集群 Replicant 节点。
-## Core 节点与 Replicant 节点
+## Core 和 Replicant 节点
-::: tip
-仅有 EMQX 企业版支持 Core + Replicant 节点集群。
-:::
+EMQX 集群中的节点可以具有两种角色之一:Core 节点和 Replicant 节点。
+
+- Core 节点负责集群中的数据持久化,并作为共享集群状态的权威来源,例如路由表、MQTT 客户端通道、保留消息、集群配置、告警、Dashboard 用户凭据等。
+- Replicant 节点被设计为无状态的,不参与数据库操作。添加或删除 Replicant 节点不会影响集群数据的冗余。
-在 EMQX 5.0 中,EMQX 集群中的节点可以分成两个角色:核心(Core)节点和 复制(Replicant)节点。Core 节点负责集群中所有的写操作,与 EMQX 4.x 集群中的节点行为一致,作为 EMQX 数据库 [Mria](https://github.com/emqx/mria) 的真实数据源来存储路由表、会话、配置、报警以及 Dashboard 用户信息等数据。而 Replicant 节点被设计成无状态的,不参与数据的写入,添加或者删除 Replicant 节点不会改变集群数据的冗余。更多关于 EMQX 5.0 架构的信息请参考文档:[EMQX 5.0 架构](../../../cluster/mria-introduction.md),Core 节点与 Replicant 节点的拓扑结构如下图所示:
+典型 EMQX 集群中 Core 和 Replicant 节点之间的通信如下图所示:
-

+
-::: tip
-EMQX 集群中至少要有一个 Core 节点,出于高可用的目的,EMQX Operator 建议 EMQX 集群至少有三个 Core 节点。
+有关 EMQX Core-Replicant 架构的更多信息,请参阅[集群架构](../../../cluster/mria-introduction.md)文档。
+
+:::tip
+EMQX 集群中必须至少有一个 Core 节点。为了高可用性,EMQX Operator 建议 EMQX 集群至少有三个 Core 节点。
:::
-## 部署 EMQX 集群
-
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.coreTemplate` 字段来配置 EMQX 集群 Core 节点,使用 `.spec.replicantTemplate` 字段来配置 EMQX 集群 Replicant 节点,更多信息请查看:[API 参考](../api-reference.md#emqxspec)。
-
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- coreTemplate:
- spec:
- replicas: 2
- resources:
- requests:
- cpu: 250m
- memory: 512Mi
- replicantTemplate:
- spec:
- replicas: 3
- resources:
- requests:
- cpu: 250m
- memory: 1Gi
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
-
- > 上文的 YAML 中,我们声明了这是一个由二个 Core 节点和三个 Replicant 节点组成的 EMQX 集群。Core 节点最低需要 512Mi 内存 ,Replicant 节点最低需要 1Gi 内存。你可以根据实际的业务负载进行调整。在实际业务中,Replicant 节点会接受全部的客户端请求,所以 Replicant 节点需要的资源会更高一些。
-
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
-
- ```
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
-
- EMQX Operator 会创建两个 EMQX Service 资源,一个是 emqx-dashboard,一个是 emqx-listeners,分别对应 EMQX 控制台和 EMQX 监听端口。
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- 通过浏览器访问 `http://192.168.1.200:18083` ,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
-
-## 检查 EMQX 集群
-
- 可以通过检查 EMQX 自定义资源的状态来获取所有集群中节点的信息。
-
- ```bash
- $ kubectl get emqx emqx -o json | jq .status.coreNodes
- [
- {
- "node": "emqx@emqx-core-0.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@emqx-core-1.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@emqx-core-2.emqx-headless.default.svc.cluster.local",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "core",
- "version": "@EE_VERSION@"
- }
- ]
- ```
-
-
- ```bash
- $ kubectl get emqx emqx -o json | jq .status.replicantNodes
- [
- {
- "node": "emqx@10.244.4.56",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@10.244.4.57",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- },
- {
- "node": "emqx@10.244.4.58",
- "node_status": "running",
- "otp_release": "27.2-3/15.2",
- "role": "replicant",
- "version": "@EE_VERSION@"
- }
- ]
- ```
+## 配置 EMQX 集群
+
+EMQX CRD `apps.emqx.io/v2beta1` 支持通过 `.spec.coreTemplate` 字段配置 EMQX 集群的 Core 节点,并通过 `.spec.replicantTemplate` 字段配置 EMQX 集群的 Replicant 节点。
+
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ replicas: 2
+ resources:
+ requests:
+ cpu: 250m
+ memory: 512Mi
+ replicantTemplate:
+ spec:
+ replicas: 3
+ resources:
+ requests:
+ cpu: 250m
+ memory: 1Gi
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
+
+ 在上面的示例中,EMQX CR 定义了一个由两个 Core 节点和三个 Replicant 节点组成的 EMQX 集群。
+
+ Core 节点至少需要 512Mi 内存,Replicant 节点至少需要 1Gi 内存。您可以根据实际业务负载调整这些约束。通常,Replicant 节点接受所有客户端请求,因此 Replicant 节点所需的资源可能更高,以适应许多并发连接。
+
+2. 等待 EMQX 集群就绪。使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
+
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
+
+## 验证 EMQX 集群
+
+您可以通过检查 EMQX CR 的 `.status` 字段查看集群中所有节点的信息。
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.coreNodes
+[
+ {
+ "name": "emqx@emqx-core-adcdef012-0.emqx-headless.default.svc.cluster.local",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "core",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@emqx-core-adcdef012-1.emqx-headless.default.svc.cluster.local",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "core",
+ "version": "@EE_VERSION@"
+ }
+]
+```
+
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.replicantNodes
+[
+ {
+ "name": "emqx@10.244.4.56",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@10.244.4.57",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ },
+ {
+ "name": "emqx@10.244.4.58",
+ "node_status": "running",
+ "otp_release": "27.2-3/15.2",
+ "role": "replicant",
+ "version": "@EE_VERSION@"
+ }
+]
+```
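+
+除逐节点信息外,还可以查看节点集的汇总状态。示例输出,假设集群包含两个 Core 节点,实际字段以 CR 状态为准:
+
+```bash
+$ kubectl get emqx emqx -o json | jq .status.coreNodesStatus
+{
+  "replicas": 2,
+  "readyReplicas": 2
+}
+```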
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-license.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-license.md
index bed5c9e72..71bde312f 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-license.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-license.md
@@ -1,116 +1,98 @@
-# License 配置 (EMQX 企业版)
+# 管理 License
-## 任务目标
+## 目标
- 配置 EMQX 企业版 License。
- 更新 EMQX 企业版 License。
## 配置 License
-EMQX 企业版 License 可以在 EMQ 官网免费申请:[申请 EMQX 企业版 License](https://www.emqx.com/zh/apply-licenses/emqx)。
+您可以在 EMQX 官网免费申请 EMQX 企业版 License:[申请 EMQX 企业版 License](https://www.emqx.com/zh/apply-licenses/emqx)。
## 配置 EMQX 集群
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.config.data` 配置 EMQX 集群 License,EMQX 配置可以参考文档:[配置手册](../../../../configuration/configuration.md)。
+EMQX CRD `apps.emqx.io/v2beta1` 支持通过 `.spec.config.data` 字段配置 EMQX 集群 License。有关完整的配置参考,请参阅[配置手册](https://docs.emqx.com/zh/enterprise/v6.0.0/hocon/)。
-> 在创建 EMQX 集群之后,如果需要更新 License,请通过 EMQX Dashboard 进行更新。
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx-ee
+ spec:
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ image: emqx/emqx:@EE_VERSION@
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- config:
- data: |
- license {
- key = "${your_license_key}"
- }
- image: emqx/emqx-enterprise:@EE_VERSION@
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
+ ::: tip
- > `config.data` 字段里面的 `license.key` 表示 Licesne 内容,此例中 License 内容被省略,请用户自行填充。
+ `.spec.config.data` 字段中的 `license.key` 表示 License 内容。在此示例中,License 内容被省略。请用您自己的 License 密钥填充。
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
+ :::
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+2. 等待 EMQX 集群就绪。使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
-
- EMQX Operator 会创建两个 EMQX Service 资源,一个是 emqx-dashboard,一个是 emqx-listeners,分别对应 EMQX 控制台和 EMQX 监听端口。
-
- ```bash
- $ kubectl get svc emqx-ee-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- 通过浏览器访问 `http://192.168.1.200:18083` ,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
+ ```bash
+ $ kubectl get emqx emqx-ee
+ NAME STATUS AGE
+   emqx-ee   Ready   10m
+ ```
## 更新 License
-+ 查看 License 信息
-
- ```bash
- $ pod_name="$(kubectl get pods -l 'apps.emqx.io/instance=emqx,apps.emqx.io/db-role=core' -o json | jq --raw-output '.items[0].metadata.name')"
- $ kubectl exec -it ${pod_name} -c emqx -- emqx_ctl license info
- ```
-
- 可以获取到如下输出,从输出结果可以看到我们申请的 License 的基本信息,包括申请人的信息和 License 支持最大连接数以及 License 过期时间等。
- ```bash
- customer : Evaluation
- email : contact@emqx.io
- deployment : default
- max_connections : 100
- start_at : 2023-01-09
- expiry_at : 2028-01-08
- type : trial
- customer_type : 10
- expiry : false
- ```
-
-+ 修改 EMQX 自定义资源以更新 License
- ```bash
- $ kubectl edit emqx emqx
- ...
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- license {
- key = "${new_license_key}"
- }
- ...
- ```
-
- + 查看 EMQX 集群 License 是否被更新
-
- ```bash
- $ pod_name="$(kubectl get pods -l 'apps.emqx.io/instance=emqx,apps.emqx.io/db-role=core' -o json | jq --raw-output '.items[0].metadata.name')"
- $ kubectl exec -it ${pod_name} -c emqx -- emqx_ctl license info
- ```
- 可以获取到类似如下的信息,从获取到 `max_connections` 字段可以看出 License 的内容已经更新,则说明 EMQX 企业版 License 更新成功。若证书信息没有更新,可以等待一会,License 的更新会有些时延。
-
- ```bash
- customer : Evaluation
- email : contact@emqx.io
- deployment : default
- max_connections : 100000
- start_at : 2023-01-09
- expiry_at : 2028-01-08
- type : trial
- customer_type : 10
- expiry : false
- ```
+1. 查看 License 信息。
+
+ ```bash
+ $ kubectl exec -it service/emqx-ee-headless -c emqx -- emqx ctl license info
+ customer : Evaluation
+ email : contact@emqx.io
+ deployment : default
+ max_connections : 100
+ start_at : 2023-01-09
+ expiry_at : 2028-01-08
+ type : trial
+ customer_type : 10
+ expiry : false
+ ```
+
+ 输出显示基本的 License 信息,包括申请人的信息、License 支持的最大连接数和过期时间。
+
+2. 修改 EMQX CR 以更新 License。
+
+ ```bash
+ $ kubectl edit emqx emqx-ee
+ ...
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "${new_license_key}"
+ }
+ ...
+ ```
+
+3. 验证 License 是否已更新。
+
+ ```bash
+ $ kubectl exec -it service/emqx-ee-headless -c emqx -- emqx ctl license info
+ customer : Evaluation
+ email : contact@emqx.io
+ deployment : default
+ max_connections : 100000
+ start_at : 2023-01-09
+ expiry_at : 2028-01-08
+ type : trial
+ customer_type : 10
+ expiry : false
+ ```
+
+ 更新的 `max_connections` 字段清楚地表明 EMQX 企业版 License 已成功更新。请注意,License 更新可能需要一些时间,因此您可能需要重试该命令。
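+
+   除 `kubectl edit` 外,也可以使用 `kubectl patch` 以非交互方式更新 License。以下为示例命令;注意 JSON merge patch 会整体替换 `.spec.config.data`,如其中还包含其他配置,请一并写入:
+
+   ```bash
+   kubectl patch emqx emqx-ee --type=merge -p \
+     '{"spec":{"config":{"data":"license { key = \"${new_license_key}\" }"}}}'
+   ```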
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
index 0bdcf5781..6467af2b9 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-collection.md
@@ -1,257 +1,267 @@
-# 采集 EMQX 的日志
+# 在 Kubernetes 中采集 EMQX 日志
-## 任务目标
+## 目标
使用 ELK 收集 EMQX 集群日志。
## 部署 ELK
-ELK 是 Elasticsearch、Logstash、Kibana 三大开源框架首字母大写简称,也被称为 Elastic Stack。[Elasticsearch](https://www.elastic.co/cn/elasticsearch/) 是一个基于 Lucene、分布式、通过 Restful 方式进行交互的近实时搜索平台框架,也被简称为:es。[Logstash](https://www.elastic.co/cn/logstash/) 是 ELK 的中央数据流引擎,用于从不同目标(文件/数据存储/MQ)收集的不同格式数据,经过过滤后支持输出到不同目的地(文件 /MQ/redis/elasticsearch/kafka 等)。[Kibana](https://www.elastic.co/cn/kibana/) 可以将 es 的数据通过页面展示出来,提供实时分析的功能。
+**ELK** 代表 Elasticsearch、Logstash 和 Kibana(也称为 Elastic Stack):
-### 部署单节点 Elasticsearch
-
-部署单节点 Elasticsearch 的方法较简单,可以参考下面的 YAML 编排文件,快速部署一个 Elasticsearch 集群。
-
-- 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: elasticsearch-logging
- namespace: kube-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- spec:
- ports:
- - port: 9200
- protocol: TCP
- targetPort: db
- selector:
- k8s-app: elasticsearch
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: elasticsearch-logging
- namespace: kube-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- ---
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- name: elasticsearch-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- rules:
- - apiGroups:
- - ""
- resources:
- - "services"
- - "namespaces"
- - "endpoints"
- verbs:
- - "get"
- ---
- kind: ClusterRoleBinding
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- namespace: kube-logging
- name: elasticsearch-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- subjects:
- - kind: ServiceAccount
- name: elasticsearch-logging
- namespace: kube-logging
- apiGroup: ""
- roleRef:
- kind: ClusterRole
- name: elasticsearch
- apiGroup: ""
- ---
- apiVersion: apps/v1
- kind: StatefulSet
- metadata:
- name: elasticsearch-logging
- namespace: kube-logging
- labels:
- k8s-app: elasticsearch
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- spec:
- serviceName: elasticsearch-svc
- replicas: 1
- selector:
- matchLabels:
- k8s-app: elasticsearch
- template:
- metadata:
- labels:
- k8s-app: elasticsearch
- spec:
- serviceAccountName: elasticsearch-logging
- containers:
- - image: docker.io/library/elasticsearch:7.9.3
- name: elasticsearch-logging
- limits:
- cpu: 1000m
- memory: 1Gi
- requests:
- cpu: 100m
- memory: 500Mi
- ports:
- - containerPort: 9200
- name: db
- protocol: TCP
- - containerPort: 9300
- name: transport
- protocol: TCP
- volumeMounts:
- - name: elasticsearch-logging
- mountPath: /usr/share/elasticsearch/data/
- env:
- - name: "NAMESPACE"
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- - name: "discovery.type"
- value: "single-node"
- - name: ES_JAVA_OPTS
- value: "-Xms512m -Xmx2g"
- # Elasticsearch requires vm.max_map_count to be at least 262144.
- # If your OS already sets up this number to a higher value, feel free
- # to remove this init container.
- initContainers:
- - name: elasticsearch-logging-init
- image: alpine:3.6
- command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
- securityContext:
- privileged: true
- - name: increase-fd-ulimit
- image: busybox
- imagePullPolicy: IfNotPresent
- command: ["sh", "-c", "ulimit -n 65536"]
- securityContext:
- privileged: true
- - name: elasticsearch-volume-init
- image: alpine:3.6
- command:
- - chmod
- - -R
- - "777"
- - /usr/share/elasticsearch/data/
- volumeMounts:
- - name: elasticsearch-logging
- mountPath: /usr/share/elasticsearch/data/
- volumeClaimTemplates:
- - metadata:
- name: elasticsearch-logging
- spec:
- storageClassName: ${storageClassName}
- accessModes: [ "ReadWriteOnce" ]
- resources:
- requests:
- storage: 10Gi
- ```
- > `storageClassName` 字段表示 `StorageClass` 的名称,可以使用命令 `kubectl get storageclass` 获取 Kubernetes 集群已经存在的 StorageClass,也可以根据自己需求自行创建 StorageClass。
+- [**Elasticsearch**](https://www.elastic.co/cn/elasticsearch/):基于 Lucene 的分布式、近实时搜索和分析引擎,提供 REST API 与数据交互。
+- [**Logstash**](https://www.elastic.co/cn/logstash/):用于从各种来源收集、转换和转发日志到不同目的地的主要数据流引擎。
+- [**Kibana**](https://www.elastic.co/cn/kibana/):用于实时可视化和分析 Elasticsearch 数据的 Web 界面。
-- 等待 es 就绪,可以通过 `kubectl get` 命令查看 es pod 的状态,请确保 `STATUS` 为 `Running`
+### 部署单节点 Elasticsearch
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=elasticsearch"
- NAME READY STATUS RESTARTS AGE
- elasticsearch-0 1/1 Running 0 16m
- ```
+部署单节点 Elasticsearch 集群相对简单。您可以使用以下 YAML 配置文件快速部署 Elasticsearch 集群。
+
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: elasticsearch-logging
+ namespace: kube-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ spec:
+ ports:
+ - port: 9200
+ protocol: TCP
+ targetPort: db
+ selector:
+ k8s-app: elasticsearch
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: elasticsearch-logging
+ namespace: kube-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ ---
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+ name: elasticsearch-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ rules:
+ - apiGroups:
+ - ""
+ resources:
+ - "services"
+ - "namespaces"
+ - "endpoints"
+ verbs:
+ - "get"
+ ---
+ kind: ClusterRoleBinding
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+ namespace: kube-logging
+ name: elasticsearch-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ subjects:
+ - kind: ServiceAccount
+ name: elasticsearch-logging
+ namespace: kube-logging
+ apiGroup: ""
+ roleRef:
+ kind: ClusterRole
+ name: elasticsearch
+ apiGroup: ""
+ ---
+ apiVersion: apps/v1
+ kind: StatefulSet
+ metadata:
+ name: elasticsearch-logging
+ namespace: kube-logging
+ labels:
+ k8s-app: elasticsearch
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ spec:
+ serviceName: elasticsearch-svc
+ replicas: 1
+ selector:
+ matchLabels:
+ k8s-app: elasticsearch
+ template:
+ metadata:
+ labels:
+ k8s-app: elasticsearch
+ spec:
+ serviceAccountName: elasticsearch-logging
+ containers:
+ - image: docker.io/library/elasticsearch:7.9.3
+ name: elasticsearch-logging
+           resources:
+             limits:
+               cpu: 1000m
+               memory: 1Gi
+             requests:
+               cpu: 100m
+               memory: 500Mi
+ ports:
+ - containerPort: 9200
+ name: db
+ protocol: TCP
+ - containerPort: 9300
+ name: transport
+ protocol: TCP
+ volumeMounts:
+ - name: elasticsearch-logging
+ mountPath: /usr/share/elasticsearch/data/
+ env:
+ - name: "NAMESPACE"
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: "discovery.type"
+ value: "single-node"
+ - name: ES_JAVA_OPTS
+ value: "-Xms512m -Xmx2g"
+ # Elasticsearch requires vm.max_map_count to be at least 262144.
+ # If your OS already sets up this number to a higher value, feel free
+ # to remove this init container.
+ initContainers:
+ - name: elasticsearch-logging-init
+ image: alpine:3.6
+ command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
+ securityContext:
+ privileged: true
+ - name: increase-fd-ulimit
+ image: busybox
+ imagePullPolicy: IfNotPresent
+ command: ["sh", "-c", "ulimit -n 65536"]
+ securityContext:
+ privileged: true
+ - name: elasticsearch-volume-init
+ image: alpine:3.6
+ command:
+ - chmod
+ - -R
+ - "777"
+ - /usr/share/elasticsearch/data/
+ volumeMounts:
+ - name: elasticsearch-logging
+ mountPath: /usr/share/elasticsearch/data/
+ volumeClaimTemplates:
+ - metadata:
+ name: elasticsearch-logging
+ spec:
+ storageClassName: ${storageClassName}
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 10Gi
+ ```
+
+ :::tip
+
+ 使用 `storageClassName` 字段选择合适的 [StorageClass](https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/)。运行 `kubectl get storageclass` 列出 Kubernetes 集群中已存在的 StorageClass,或根据您的需求创建 StorageClass。
+
+ :::
+
+2. 等待 Elasticsearch 就绪。使用 `kubectl get` 命令检查 Elasticsearch Pod 的状态,并确保 `STATUS` 为 `Running`。
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=elasticsearch"
+ NAME READY STATUS RESTARTS AGE
+ elasticsearch-0 1/1 Running 0 16m
+ ```
### 部署 Kibana
本文使用 `Deployment` 的方式部署 Kibana,对搜集到的日志进行可视化展示,`Service` 中使用的是 `NodePort`。
-- 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: kibana
- namespace: kube-logging
- labels:
- k8s-app: kibana
- spec:
- type: NodePort
- - port: 5601
- nodePort: 35601
- protocol: TCP
- targetPort: ui
- selector:
- k8s-app: kibana
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: kibana
- namespace: kube-logging
- labels:
- k8s-app: kibana
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- spec:
- replicas: 1
- selector:
- matchLabels:
- k8s-app: kibana
- template:
- metadata:
- labels:
- k8s-app: kibana
- annotations:
- seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
- spec:
- containers:
- - name: kibana
- image: docker.io/kubeimages/kibana:7.9.3
- resources:
- limits:
- cpu: 1000m
- requests:
- cpu: 100m
- env:
- # The access address of ES
- - name: ELASTICSEARCH_HOSTS
- value: http://elasticsearch-logging:9200
- ports:
- - containerPort: 5601
- name: ui
- protocol: TCP
- ```
-
-- 等待 Kibana 就绪,可以通过 `kubectl get` 命令查看 Kibana pod 的状态,请确保 `STATUS` 为 `Running`
-
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=kibana"
- NAME READY STATUS RESTARTS AGE
- kibana-b7d98644-48gtm 1/1 Running 0 17m
- ```
-
- 最后在浏览器中,输入 `http://{node_ip}:35601`,就会进入 kibana 的 web 界面
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
+
+   ```yaml
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: kibana
+ namespace: kube-logging
+ labels:
+ k8s-app: kibana
+ spec:
+ type: NodePort
+ ports:
+ - port: 5601
+ nodePort: 35601
+ protocol: TCP
+ targetPort: ui
+ selector:
+ k8s-app: kibana
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: kibana
+ namespace: kube-logging
+ labels:
+ k8s-app: kibana
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ k8s-app: kibana
+ template:
+ metadata:
+ labels:
+ k8s-app: kibana
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
+ spec:
+ containers:
+ - name: kibana
+ image: docker.io/kubeimages/kibana:7.9.3
+ resources:
+ limits:
+ cpu: 1000m
+ requests:
+ cpu: 100m
+ env:
+ # The access address of ES
+ - name: ELASTICSEARCH_HOSTS
+ value: http://elasticsearch-logging:9200
+ ports:
+ - containerPort: 5601
+ name: ui
+ protocol: TCP
+ ```
+
+2. 等待 Kibana 就绪,可以通过 `kubectl get` 命令查看 Kibana pod 的状态,请确保 `STATUS` 为 `Running`。
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=kibana"
+ NAME READY STATUS RESTARTS AGE
+ kibana-b7d98644-48gtm 1/1 Running 0 17m
+ ```
+
+3. 在浏览器中输入 `http://{node_ip}:35601`,进入 Kibana 的 Web 界面。
### 部署日志采集组件 Filebeat
 [Filebeat](https://www.elastic.co/cn/beats/filebeat) 是一个轻量级的日志采集组件,是 Elastic Stack 的一部分,能够与 Logstash、Elasticsearch 和 Kibana 无缝协作。无论您要使用 Logstash 转换或充实日志和文件,还是在 Elasticsearch 中随意处理一些数据分析,亦或在 Kibana 中构建和分享仪表板,Filebeat 都能轻松地将您的数据发送至最关键的地方。
-- 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
+1. 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署。
```yaml
---
@@ -398,7 +408,7 @@ ELK 是 Elasticsearch、Logstash、Kibana 三大开源框架首字母大写简
path: /etc/localtime
```
-- 等待 Filebeat 就绪,可以通过 `kubectl get` 命令查看 Filebeat pod 的状态,请确保 `STATUS` 为 `Running`
+2. 等待 Filebeat 就绪,可以通过 `kubectl get` 命令查看 Filebeat pod 的状态,请确保 `STATUS` 为 `Running`。
```bash
$ kubectl get pod -n kube-logging -l "k8s-app=filebeat"
@@ -407,177 +417,179 @@ ELK 是 Elasticsearch、Logstash、Kibana 三大开源框架首字母大写简
filebeat-vwrjn 1/1 Running 0 45m
```
-## 部署 Logstash 对日志进行清洗
-
-这里主要是结合业务需要和对日志的二次利用,加入了 Logstash 进行日志清洗。本文使用 Logstash 的 [Beats Input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html) 插件来采集日志,使用 [Ruby filter plugin](https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html) 插件来过滤日志。Logstash 还提供很多其他输入和过滤插件供用户使用,大家可以根据自己的业务需求配置合适的插件。
-
-- 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: logstash
- namespace: kube-system
- spec:
- ports:
- - port: 5044
- targetPort: beats
- selector:
- k8s-app: logstash
- clusterIP: None
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: logstash
- namespace: kube-system
- spec:
- selector:
- matchLabels:
- k8s-app: logstash
- template:
- metadata:
- labels:
- k8s-app: logstash
- spec:
- containers:
- - image: docker.io/kubeimages/logstash:7.9.3
- name: logstash
- ports:
- - containerPort: 5044
- name: beats
- command:
- - logstash
- - '-f'
- - '/etc/logstash_c/logstash.conf'
- env:
- - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
- value: "http://elasticsearch-logging:9200"
- volumeMounts:
- - name: config-volume
- mountPath: /etc/logstash_c/
- - name: config-yml-volume
- mountPath: /usr/share/logstash/config/
- - name: timezone
- mountPath: /etc/localtime
- resources:
- limits:
- cpu: 1000m
- memory: 2048Mi
- requests:
- cpu: 512m
- memory: 512Mi
- volumes:
- - name: config-volume
- configMap:
- name: logstash-conf
- items:
- - key: logstash.conf
- path: logstash.conf
- - name: timezone
- hostPath:
- path: /etc/localtime
- - name: config-yml-volume
- configMap:
- name: logstash-yml
- items:
- - key: logstash.yml
- path: logstash.yml
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: logstash-conf
- namespace: kube-logging
- labels:
- k8s-app: logstash
- data:
- logstash.conf: |-
- input {
- beats {
- port => 5044
- }
- }
- filter {
- ruby {
- code => "
- ss = event.get('message').split(' ')
- len = ss.length()
- level = ''
- index = ''
- msg = ''
- if len == 0 || len < 2
- event.set('level','invalid')
- return
- end
- if ss[1][0] == '['
- l = ss[1].length()
- level = ss[1][1..l-2]
- index = 2
- else
- level = 'info'
- index = 0
- end
- event.set('level',level)
- for i in ss[index..len]
- msg = msg + i
- msg = msg + ' '
- end
- event.set('message',msg)
- "
- }
- if [level] == "invalid" {
- drop {}
- }
- }
- output{
- elasticsearch {
- hosts => ["http://elasticsearch-logging:9200"]
- codec => json
- index => "logstash-%{+YYYY.MM.dd}"
- }
- }
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: logstash
- namespace: kube-logging
- labels:
- k8s-app: logstash
- data:
- logstash.yml: |-
- http.host: "0.0.0.0"
- xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
- ```
-
-- 等待 Logstash 就绪,可以通过 `kubectl get` 命令查看 Filogstash pod 的状态,请确保 `STATUS` 为 `Running`
-
- ```bash
- $ kubectl get pod -n kube-logging -l "k8s-app=logstash"
- NAME READY STATUS RESTARTS AGE
- filebeat-82d2b 1/1 Running 0 45m
- filebeat-vwrjn 1/1 Running 0 45m
- ```
+### 部署 Logstash
+
+Logstash 用于日志处理和清洗。
+
+在本演练中,我们使用 Logstash 的 [Beats Input 插件](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html) 收集日志,使用 [Ruby filter 插件](https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html) 过滤日志。Logstash 还提供了许多其他输入和过滤插件,您可以根据业务需求进行配置。
+
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: logstash
+      namespace: kube-logging
+ spec:
+ ports:
+ - port: 5044
+ targetPort: beats
+ selector:
+ k8s-app: logstash
+ clusterIP: None
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: logstash
+      namespace: kube-logging
+ spec:
+ selector:
+ matchLabels:
+ k8s-app: logstash
+ template:
+ metadata:
+ labels:
+ k8s-app: logstash
+ spec:
+ containers:
+ - image: docker.io/kubeimages/logstash:7.9.3
+ name: logstash
+ ports:
+ - containerPort: 5044
+ name: beats
+ command:
+ - logstash
+ - '-f'
+ - '/etc/logstash_c/logstash.conf'
+ env:
+ - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
+ value: "http://elasticsearch-logging:9200"
+ volumeMounts:
+ - name: config-volume
+ mountPath: /etc/logstash_c/
+ - name: config-yml-volume
+ mountPath: /usr/share/logstash/config/
+ - name: timezone
+ mountPath: /etc/localtime
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 2048Mi
+ requests:
+ cpu: 512m
+ memory: 512Mi
+ volumes:
+ - name: config-volume
+ configMap:
+ name: logstash-conf
+ items:
+ - key: logstash.conf
+ path: logstash.conf
+ - name: timezone
+ hostPath:
+ path: /etc/localtime
+ - name: config-yml-volume
+ configMap:
+ name: logstash-yml
+ items:
+ - key: logstash.yml
+ path: logstash.yml
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: logstash-conf
+ namespace: kube-logging
+ labels:
+ k8s-app: logstash
+ data:
+ logstash.conf: |-
+ input {
+ beats {
+ port => 5044
+ }
+ }
+ filter {
+ ruby {
+ code => "
+ ss = event.get('message').split(' ')
+ len = ss.length()
+ level = ''
+ index = ''
+ msg = ''
+ if len == 0 || len < 2
+ event.set('level','invalid')
+ return
+ end
+ if ss[1][0] == '['
+ l = ss[1].length()
+ level = ss[1][1..l-2]
+ index = 2
+ else
+ level = 'info'
+ index = 0
+ end
+ event.set('level',level)
+ for i in ss[index..len]
+ msg = msg + i
+ msg = msg + ' '
+ end
+ event.set('message',msg)
+ "
+ }
+ if [level] == "invalid" {
+ drop {}
+ }
+ }
+ output{
+ elasticsearch {
+ hosts => ["http://elasticsearch-logging:9200"]
+ codec => json
+ index => "logstash-%{+YYYY.MM.dd}"
+ }
+ }
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+      name: logstash-yml
+ namespace: kube-logging
+ labels:
+ k8s-app: logstash
+ data:
+ logstash.yml: |-
+ http.host: "0.0.0.0"
+ xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
+ ```
+
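+    上面 `logstash.conf` 中的 Ruby filter 会按空格切分日志行并尝试提取日志级别。下面的示例展示一条 EMQX 日志(仅为示意,实际日志格式以环境为准)经过该 filter 后的结果:
+
+    ```
+    # 输入事件的 message 字段:
+    2023-04-17T09:11:35.993031+00:00 [debug] msg: mqtt_packet_received, ...
+
+    # 经 Ruby filter 处理后:
+    #   level   => "debug"
+    #   message => "msg: mqtt_packet_received, ..."
+    ```
+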
+2. 等待 Logstash 就绪,可以通过 `kubectl get` 命令查看 Logstash pod 的状态,请确保 `STATUS` 为 `Running`。
+
+ ```bash
+ $ kubectl get pod -n kube-logging -l "k8s-app=logstash"
+ NAME READY STATUS RESTARTS AGE
+    logstash-5fd6d9bd86-k6plv            1/1     Running   0          45m
+ ```
## 部署 EMQX 集群
-部署 EMQX 集群可以参考文档 [部署 EMQX](../getting-started.md)
+要部署 EMQX 集群,请参阅文档[部署 EMQX](../getting-started.md)。
## 验证日志采集
-- 首先登录 Kibana 界面,打开菜单中的 stack management 模块,点开索引管理,可以发现,已经有采集到的日志索引了
+1. 登录 Kibana 界面,打开菜单中的 _Stack Management_,点击 _Index Management_,可以看到已经采集到的日志索引。
- 
+ 
-- 为了能够在 Kibana 中能够 discover 查看日志,因此需要设置一个索引匹配,选择 index patterns,然后点击创建
+2. 要在 Kibana 的 _Discover_ 页面中查看日志,需要先创建索引模式。选择 _Index Patterns_,然后点击 _Create_。
- 
+ 
- 
+ 
-- 最后验证是否采集到 EMQX 集群日志
+3. 最后,验证 EMQX 集群日志已被采集。
- 
+ 
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
index 1b49fb49e..187e66fe0 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-log-level.md
@@ -1,86 +1,68 @@
# 修改 EMQX 日志等级
-## 任务目标
+## 目标
-如何修改 EMQX 集群日志等级。
+修改 EMQX 集群中的日志等级。
## 配置 EMQX 集群
-下面是 EMQX Custom Resource 的相关配置,你可以根据希望部署的 EMQX 的版本来选择对应的 APIVersion,具体的兼容性关系,请参考 [EMQX Operator 兼容性](../operator.md):
-
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.config.data` 来配置 EMQX 集群日志等级,EMQX 配置可以参考文档:[配置手册](https://www.emqx.io/docs/zh/v5.1/configuration/configuration-manual.html#%E8%8A%82%E7%82%B9%E8%AE%BE%E7%BD%AE)。
-
-> 这个字段只允许在创建 EMQX 集群的时候配置,不支持更新。如果在创建 EMQX 之后需要修改集群日志等级,请通过 EMQX Dashboard 进行修改。
-
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- config:
- data: |
- log.console.level = debug
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- ```
-
- > `.spec.config.data` 字段配置 EMQX 集群日志等级为 `debug`。
-
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
-
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
-
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
-
- 通过浏览器访问 `http://192.168.1.200:18083` ,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
+EMQX CRD `apps.emqx.io/v2beta1` 支持通过 `.spec.config.data` 配置 EMQX 集群的日志等级。有关完整的配置参考,请参阅[配置手册](https://docs.emqx.com/zh/enterprise/v6.0.0/hocon/)。
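+
+例如,若希望在创建集群时同时调整文件日志的级别,可以使用类似下面的配置片段(仅为示意,完整配置项请以上述配置手册为准):
+
+```
+log.file.level = warning
+```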
+
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署:
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ # 启用 debug 日志记录:
+ data: |
+ log.console.level = debug
+ license {
+ key = "..."
+ }
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
+
+2. 等待 EMQX 集群就绪。使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
+
+ ```bash
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
## 验证日志等级
-[MQTTX CLI](https://mqttx.app/zh/cli) 是一款开源的 MQTT 5.0 命令行客户端工具,旨在帮助开发者在不需要使用图形化界面的基础上,也能更快的开发和调试 MQTT 服务与应用。
-
-+ 获取 EMQX 集群的 External IP
-
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
-
-+ 使用 MQTTX CLI 连接 EMQX 集群
-
- ```bash
- $ mqttx conn -h ${external_ip} -p 1883
+1. 获取 EMQX 集群的外部 IP。
- [4/17/2023] [5:17:31 PM] › … Connecting...
- [4/17/2023] [5:17:31 PM] › ✔ Connected
- ```
+ ```bash
+    external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
-+ 使用命令行查看 EMQX 集群日志信息
+2. 使用 MQTTX CLI 连接到 EMQX 集群。
- ```bash
- $ kubectl logs emqx-core-0 -c emqx
- ```
+ [MQTTX CLI](https://mqttx.app/zh/cli) 是一款开源的 MQTT 5.0 命令行客户端工具,旨在帮助开发者更快地开始使用 MQTT 服务和应用。
- 可以获取到类似如下的打印,这意味着 EMQX 收到了一个来自客户端的 CONNECT 报文,并向客户端回复了 CONNACK 报文:
+ ```bash
+ $ mqttx conn -h ${external_ip} -p 1883
+ [4/17/2023] [5:17:31 PM] › … Connecting...
+ [4/17/2023] [5:17:31 PM] › ✔ Connected
+ ```
- ```bash
- 2023-04-17T09:11:35.993031+00:00 [debug] msg: mqtt_packet_received, mfa: emqx_channel:handle_in/2, line: 360, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNECT(Q0, R0, D0, ClientId=mqttx_322680d9, ProtoName=MQTT, ProtoVsn=5, CleanStart=true, KeepAlive=30, Username=undefined, Password=), tag: MQTT
- 2023-04-17T09:11:35.997066+00:00 [debug] msg: mqtt_packet_sent, mfa: emqx_connection:serialize_and_inc_stats_fun/1, line: 872, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNACK(Q0, R0, D0, AckFlags=0, ReasonCode=0), tag: MQTT
- ```
+3. 查看 EMQX 容器日志。可以看到类似如下的 debug 日志,表明 EMQX 收到了客户端的 CONNECT 报文,并向客户端回复了 CONNACK 报文:
+ ```bash
+ $ kubectl logs emqx-core-0 -c emqx
+ ...
+ 2023-04-17T09:11:35.993031+00:00 [debug] msg: mqtt_packet_received, mfa: emqx_channel:handle_in/2, line: 360, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNECT(Q0, R0, D0, ClientId=mqttx_322680d9, ProtoName=MQTT, ProtoVsn=5, CleanStart=true, KeepAlive=30, Username=undefined, Password=), tag: MQTT
+ 2023-04-17T09:11:35.997066+00:00 [debug] msg: mqtt_packet_sent, mfa: emqx_connection:serialize_and_inc_stats_fun/1, line: 872, peername: 218.190.230.144:59457, clientid: mqttx_322680d9, packet: CONNACK(Q0, R0, D0, AckFlags=0, ReasonCode=0), tag: MQTT
+ ```
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
index a71e861e9..76ef84176 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-persistence.md
@@ -1,110 +1,105 @@
-# 在 EMQX 集群中开启持久化
+# 启用持久化
-## 任务目标
+## 目标
-通过 `volumeClaimTemplates` 字段配置 EMQX 5.x 集群 Core 节点持久化。
+通过 `volumeClaimTemplates` 字段为 EMQX 集群的 Core 节点配置持久化。
-## EMQX 集群持久化配置
+## 配置 EMQX 集群持久化
-下面是 EMQX Custom Resource 的相关配置,你可以根据希望部署的 EMQX 的版本来选择对应的 APIVersion,具体的兼容性关系,请参考[EMQX Operator 兼容性](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` 支持通过 `.spec.coreTemplate.spec.volumeClaimTemplates` 配置每个 Core 节点数据的持久化。
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.coreTemplate.spec.volumeClaimTemplates` 字段配置 EMQX 集群 Core 节点持久化。`.spec.coreTemplate.spec.volumeClaimTemplates` 字段的语义及配置与 Kubernetes 的 `PersistentVolumeClaimSpec` 一致,其配置可以参考文档:[PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#persistentvolumeclaimspec-v1-core) 。
+`.spec.coreTemplate.spec.volumeClaimTemplates` 字段的定义和语义与 Kubernetes API 中定义的 `PersistentVolumeClaimSpec` 一致。
-当用户配置了 `.spec.coreTemplate.spec.volumeClaimTemplates` 字段时,EMQX Operator 会将 PVC(PersistentVolumeClaim) 作为 Volume 挂载到 EMQX Pod 中,PVC 表示用户持久化请求,最终负责存储的是持久化卷(PersistentVolume,PV),PV 和 PVC 是一一对应的。EMQX Operator 使用 [StorageClass](https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/) 动态创建 PV,PV 存储了 EMQX 容器中 `/opt/emqx/data` 目录下的数据,当用户不再使用 PV 资源时,可以手动删除 PVC 对象,从而允许该 PV 资源被回收再利用。
+当您指定 `.spec.coreTemplate.spec.volumeClaimTemplates` 字段时,EMQX Operator 会将 EMQX 容器的 `/opt/emqx/data` 卷配置为由 Persistent Volume Claim (PVC) 支持,该 PVC 使用指定的 [StorageClass](https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/) 提供 Persistent Volume (PV)。因此,当删除 EMQX Pod 时,关联的 PV 和 PVC 会被保留,从而保留 EMQX 运行时数据。
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
+有关 PV 和 PVC 的更多详细信息,请参阅 [Persistent Volumes](https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/) 文档。
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- coreTemplate:
- spec:
- volumeClaimTemplates:
- storageClassName: standard
- resources:
- requests:
- storage: 20Mi
- accessModes:
- - ReadWriteOnce
- replicas: 3
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
- > `storageClassName` 字段表示 StorageClass 的名称,可以使用命令 `kubectl get storageclass` 获取 Kubernetes 集群已经存在的 StorageClass,也可以根据自己需求自行创建 StorageClass。
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ coreTemplate:
+ spec:
+ volumeClaimTemplates:
+ storageClassName: standard
+ resources:
+ requests:
+ storage: 20Mi
+ accessModes:
+ - ReadWriteOnce
+ replicas: 3
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
+ ::: tip
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+ 使用 `storageClassName` 字段为 EMQX 数据选择合适的 [StorageClass](https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/)。运行 `kubectl get storageclass` 列出 Kubernetes 集群中已存在的 StorageClass,或根据您的需求创建 StorageClass。
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
+ :::
- EMQX Operator 会创建两个 EMQX Service 资源,一个是 emqx-dashboard,一个是 emqx-listeners,分别对应 EMQX 控制台和 EMQX 监听端口。
+2. 等待 EMQX 集群就绪。使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
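+
+   集群就绪后,还可以查看 Operator 为每个 Core 节点创建的 PVC(以下输出仅为示意,PVC 名称与容量以实际环境为准):
+
+   ```bash
+   $ kubectl get pvc
+   NAME                         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+   emqx-core-data-emqx-core-0   Bound    pvc-...   20Mi       RWO            standard       10m
+   ```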
- 192.168.1.200
- ```
+## 验证持久化
- 通过浏览器访问 `http://192.168.1.200:18083` ,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
+1. 在 EMQX Dashboard 中创建测试规则。
-## 验证 EMQX 集群持久化
+ ```bash
+ external_ip=$(kubectl get svc emqx-dashboard -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
-验证方案: 1)在旧 EMQX 集群中通过 Dashboard 创建一条测试规则;2)删除旧集群;3)重新创建 EMQX 集群,通过 Dashboard 查看之前创建的规则是否存在。
+   - 在 `http://${external_ip}:18083` 使用默认用户名和密码 `admin/public` 登录 EMQX Dashboard。
-+ 通过浏览器访问 EMQX Dashboard 创建测试规则
+ - 导航到**集成** -> **规则**创建新规则。
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+ - 为此规则附加简单动作。
- 通过访问 `http://${external_ip}:18083` 进入 Dashboard 点击 数据集成 → 规则 进入创建规则的页面,我们先点击添加动作的按钮为这条规则添加响应动作,然后点击创建生成规则,如下图所示:
+ 
- 
+   - 点击**保存**生成规则。创建成功后,页面上会出现一条 ID 为 `emqx-persistent-test` 的规则记录,如下图所示:
- 当我们的规则创建成功之后,在页面会出现一条规则记录,规则 ID 为:emqx-persistent-test,如下图所示:
+ 
- 
+2. 删除旧 EMQX 集群。
-+ 删除旧 EMQX 集群
+ 运行以下命令删除 EMQX 集群,其中 `emqx.yaml` 是您之前用于部署集群的文件:
- 执行如下命令删除 EMQX 集群:
+ ```bash
+ $ kubectl delete -f emqx.yaml
+ emqx.apps.emqx.io "emqx" deleted
+ ```
- ```bash
- $ kubectl delete -f emqx.yaml
+3. 重新部署 EMQX 集群。
- emqx.apps.emqx.io "emqx" deleted
- # emqxenterprise.apps.emqx.io "emqx" deleted
- ```
+ 运行以下命令重新部署 EMQX 集群:
- > emqx-persistent.yaml 是本文中第一次部署 EMQX 集群所使用的 YAML 文件,这个文件不需要做任何的改动。
+ ```bash
+ $ kubectl apply -f emqx.yaml
+ emqx.apps.emqx.io/emqx created
+ ```
-+ 重新创建 EMQX 集群
+4. 等待 EMQX 集群就绪。通过浏览器访问 EMQX Dashboard 验证之前创建的规则是否仍然存在,如下图所示:
- 执行如下命令重新创建 EMQX 集群:
+ 
- ```bash
- $ kubectl apply -f emqx.yaml
-
- emqx.apps.emqx.io/emqx created
- # emqxenterprise.apps.emqx.io/emqx created
- ```
-
- 等待 EMQX 集群就绪,然后通过浏览器访问 EMQX Dashboard 查看之前创建的规则是否存在,如下如图所示:
-
- 
-
- 从图中可以看出:在旧集群中创建的规则 emqx-persistent-test 在新的集群中依旧存在,则说明我们配置的持久化是生效的。
+ 在旧集群中创建的 `emqx-persistent-test` 规则在新集群中仍然存在,这确认了持久化配置工作正常。
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
index 1049d9f03..08ceb2e02 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-prometheus.md
@@ -1,61 +1,54 @@
-# 使用 Prometheus+Grafana 监控 EMQX 集群
+# 使用 Prometheus 和 Grafana 监控 EMQX 集群
-## 任务目标
-部署 [EMQX Exporter](https://github.com/emqx/emqx-exporter) 并通过 Prometheus 和 Grafana 监控 EMQX 集群。
+## 目标
+
+部署 [EMQX Exporter](https://github.com/emqx/emqx-exporter) 并使用 Prometheus 和 Grafana 监控 EMQX 集群。
## 部署 Prometheus 和 Grafana
-Prometheus 部署文档可以参考:[Prometheus](https://github.com/prometheus-operator/prometheus-operator)
-Grafana 部署文档可以参考:[Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/)
+* 要了解更多关于 Prometheus 部署的信息,请参阅 [Prometheus](https://github.com/prometheus-operator/prometheus-operator) 文档。
+* 要了解更多关于 Grafana 部署的信息,请参阅 [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/) 文档。
## 部署 EMQX 集群
-下面是 EMQX Custom Resource 的相关配置,你可以根据希望部署的 EMQX 的版本来选择对应的 APIVersion,具体的兼容性关系,请参考 [EMQX Operator 兼容性](../operator.md)。
-
-EMQX 支持通过 http 接口对外暴露指标,集群下所有统计指标数据可以参考文档:[集成 Prometheus](../../../../observability/prometheus.md)。
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx
-spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- coreTemplate:
- spec:
- ports:
- # prometheus monitor requires the pod must name the target port
- - name: dashboard
- containerPort: 18083
- replicantTemplate:
- spec:
- ports:
- - name: dashboard
- containerPort: 18083
-```
-
-将上述内容保存为:`emqx.yaml`,并执行如下命令部署 EMQX 集群:
-
-```bash
-$ kubectl apply -f emqx.yaml
-
-emqx.apps.emqx.io/emqx created
-```
-
-检查 EMQX 集群状态,请确保 `STATUS` 为 `Running`,这可能需要一些时间等待 EMQX 集群准备就绪。
-
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+1. EMQX 通过 [Prometheus 兼容的 HTTP API](../../../../observability/prometheus.md) 暴露各种指标。定义如下的 EMQX 自定义资源:
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ ```
+
+2. 将上述内容保存为 `emqx.yaml` 并执行以下命令部署 EMQX 集群:
+
+ ```bash
+ $ kubectl apply -f emqx.yaml
+ emqx.apps.emqx.io/emqx created
+ ```
+
+3. 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
+
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
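+
+4. (可选)集群就绪后,可以从集群内直接访问指标接口,确认其返回正常。以下命令仅为示例,假设使用 Operator 默认创建的 `emqx-dashboard` Service:
+
+   ```bash
+   kubectl run curl --rm -it --image=curlimages/curl --restart=Never -- \
+     curl -s http://emqx-dashboard:18083/api/v5/prometheus/stats
+   ```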
## 创建 API Secret
-emqx-exporter 和 Prometheus 通过访问 EMQX dashboard API 拉取监控指标,因此需要提前登录 dashboard 创建 [API 密钥](../../../../dashboard/system.md#api-%E5%AF%86%E9%92%A5)。
+
+`emqx-exporter` 和 Prometheus 会通过 EMQX Dashboard API 拉取监控指标,因此您需要先登录 Dashboard [创建 API 密钥](../../../../dashboard/system.md#api-密钥)。
## 部署 [EMQX Exporter](https://github.com/emqx/emqx-exporter)
-`emqx-exporter` 的设计目的是用于暴露 EMQX Prometheus API 中未包含的部分指标。
+`emqx-exporter` 的设计目的是暴露 EMQX Prometheus API 中未暴露的部分指标。
```yaml
apiVersion: v1
@@ -95,7 +88,7 @@ spec:
image: emqx-exporter:latest
imagePullPolicy: IfNotPresent
args:
- # "emqx-dashboard-service-name" is the service name that creating by operator for exposing 18083 port
+ # "emqx-dashboard-service-name" 是 Operator 创建的用于暴露 18083 端口的服务名称
- --emqx.nodes=${emqx-dashboard-service-name}:18083
- --emqx.auth-username=${paste_your_new_api_key_here}
- --emqx.auth-password=${paste_your_new_secret_here}
@@ -115,25 +108,25 @@ spec:
memory: 20Mi
```
-> 参数"--emqx.nodes" 为暴露18083端口的 service name。不同的 EMQX 版本的 service name 不一样,可以通过命令 `kubectl get svc` 查看。
+> 将参数 "--emqx.nodes" 设置为 Operator 创建的用于暴露 18083 端口的服务名称。通过调用 `kubectl get svc` 查找服务名称。
-将上述内容保存为`emqx-exporter.yaml`,同时使用你新创建的 API 密钥(EMQX 4.4 则为用户名密码)替换其中的`--emqx.auth-username`以及`--emqx.auth-password`,并执行如下命令:
+将上述内容保存为 `emqx-exporter.yaml`,并将其中的 `--emqx.auth-username` 和 `--emqx.auth-password` 替换为您新创建的 API 密钥。运行以下命令部署 `emqx-exporter`:
```bash
kubectl apply -f emqx-exporter.yaml
```
-检查 emqx-exporter 状态,请确保 `STATUS` 为 `Running`。
+检查 `emqx-exporter` Pod 的状态,请确保 `STATUS` 为 `Running`。
```bash
$ kubectl get po -l="app=emqx-exporter"
-
-NAME STATUS AGE
+NAME STATUS AGE
emqx-exporter-856564c95-j4q5v Running 8m33s
```
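+
+(可选)也可以将 exporter 的指标端口临时转发到本地,确认其正常输出指标。以下命令仅为示例,假设 Deployment 名为 `emqx-exporter`,且使用默认监听端口 8085:
+
+```bash
+kubectl port-forward deploy/emqx-exporter 8085:8085 &
+curl -s http://127.0.0.1:8085/metrics | head -n 5
+```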
## 配置 Prometheus Monitor
-Prometheus-operator 使用 [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor) 和 [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor) CRD 定义如何动态的监视一组 pod 或者 service。
+
+Prometheus Operator 使用 [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor) 和 [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor) CRD 来定义如何动态监控一组 Pod 或服务。
```yaml
apiVersion: monitoring.coreos.com/v1
@@ -144,31 +137,31 @@ metadata:
app.kubernetes.io/name: emqx
spec:
podMetricsEndpoints:
- - interval: 5s
- path: /api/v5/prometheus/stats
- # the name of emqx dashboard containerPort
- port: dashboard
- relabelings:
- - action: replace
- # user-defined cluster name, requires unique
- replacement: emqx5
- targetLabel: cluster
- - action: replace
- # fix value, don't modify
- replacement: emqx
- targetLabel: from
- - action: replace
- # fix value, don't modify
- sourceLabels: ['pod']
- targetLabel: "instance"
+ - interval: 5s
+ path: /api/v5/prometheus/stats
+ # emqx dashboard containerPort 的名称
+ port: dashboard
+ relabelings:
+ - action: replace
+ # 用户定义的集群名称,需要唯一
+ replacement: emqx5
+ targetLabel: cluster
+ - action: replace
+ # 固定值,请勿修改
+ replacement: emqx
+ targetLabel: from
+ - action: replace
+ # 固定值,请勿修改
+ sourceLabels: ['pod']
+ targetLabel: "instance"
selector:
matchLabels:
- # the label in emqx pod
+ # 标签与 emqx pod 的标签相同
apps.emqx.io/instance: emqx
apps.emqx.io/managed-by: emqx-operator
namespaceSelector:
matchNames:
- # modify the namespace if your EMQX cluster deployed in other namespace
+ # 如果您的 EMQX 集群部署在其他命名空间中,请修改命名空间
#- default
---
apiVersion: monitoring.coreos.com/v1
@@ -180,58 +173,58 @@ metadata:
spec:
selector:
matchLabels:
- # the label in emqx exporter svc
+ # 标签与 emqx exporter svc 的标签相同
app: emqx-exporter
endpoints:
- - port: metrics
- interval: 5s
- path: /metrics
- relabelings:
- - action: replace
- # user-defined cluster name, requires unique
- replacement: emqx5
- targetLabel: cluster
- - action: replace
- # fix value, don't modify
- replacement: exporter
- targetLabel: from
- - action: replace
- # fix value, don't modify
- sourceLabels: ['pod']
- regex: '(.*)-.*-.*'
- replacement: $1
- targetLabel: "instance"
- - action: labeldrop
- # fix value, don't modify
- regex: 'pod'
+ - port: metrics
+ interval: 5s
+ path: /metrics
+ relabelings:
+ - action: replace
+ # 用户定义的集群名称,需要唯一
+ replacement: emqx5
+ targetLabel: cluster
+ - action: replace
+ # 固定值,请勿修改
+ replacement: exporter
+ targetLabel: from
+ - action: replace
+ # 固定值,请勿修改
+ sourceLabels: ['pod']
+ regex: '(.*)-.*-.*'
+ replacement: $1
+ targetLabel: "instance"
+ - action: labeldrop
+ # 固定值,请勿修改
+ regex: 'pod'
namespaceSelector:
matchNames:
- # modify the namespace if your exporter deployed in other namespace
+ # 如果您的 exporter 部署在其他命名空间中,请修改命名空间
#- default
```
- `path` 表示指标采集接口路径,在 EMQX 5 里面路径为:`/api/v5/prometheus/stats`。`selector.matchLabels` 表示匹配 Pod 的 label。
- 每个集群的 Monitor 配置中都需要为当前集群打上特定的标签,其中`targetLabel`为`cluster`的值表示当前集群的名字,需确保每个集群的名字唯一。
+`path` 表示指标采集接口的路径,在 EMQX 5 中为 `/api/v5/prometheus/stats`。`selector.matchLabels` 用于匹配 EMQX Pod 的标签:`apps.emqx.io/instance: emqx`。
+
+每个集群的 Monitor 配置都需要通过 targetLabel `cluster` 为当前集群设置名称(上例中为 `emqx5`),请确保每个集群的名称唯一。
-将上述内容保存为`monitor.yaml`,并执行如下命令:
+将上述内容保存为 `monitor.yaml` 并执行以下命令:
```bash
-kubectl apply -f monitor.yaml
+$ kubectl apply -f monitor.yaml
```
-## 访问 Prometheus 查看 EMQX 集群的指标
+## 在 Prometheus 上查看 EMQX 指标
-打开 Prometheus 的界面,切换到 Graph 页面,输入 emqx 显示如下图所示:
+打开 Prometheus 界面,切换到 Graph 页面,输入 `emqx`,即可看到如下图所示的 EMQX 指标:

-切换到 **Status** → **Targets** 页面,显示如下图,可以看到集群中所有被监控的 EMQX Pod 信息:
+切换到 **Status** → **Targets** 页面,可以看到集群中所有被监控的 EMQX Pod 信息,如下图所示:

## 导入 Grafana 模板
-导入所有 dashboard [模板](https://github.com/emqx/emqx-exporter/tree/main/grafana-dashboard/template)。
-集群的整体监控状态位于 **EMQX** 看板中。
+导入所有 dashboard [模板](https://github.com/emqx/emqx-exporter/tree/main/grafana-dashboard/template)。打开主 dashboard **EMQX** 并开始使用!

diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
index eb045e7c5..074614f95 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-rebalance.md
@@ -1,107 +1,114 @@
-# 集群负载重平衡(EMQX 企业版)
+# 重新平衡集群负载
-## 任务目标
+## 目标
-如何重平衡 MQTT 连接。
+如何重新平衡 MQTT 连接。
-## 为什么需重平衡
+## 为什么需要负载重平衡
-集群负载重平衡是将客户端连接与会话从一组节点强行迁移到其他节点的行为。它将自动计算得到到达成节点平衡所需迁移的连接数量,然后将对应数量的连接和会话数从高负载节点迁移到低负载节点,从而在节点之间实现负载均衡。通常在新加入节点或节点重启后,需要此操作来达成平衡。
+集群负载重平衡是将客户端连接和会话从一组节点强制迁移到另一组节点的操作。它将自动计算需要迁移的连接数量以实现节点平衡,然后将相应数量的连接和会话从高负载节点迁移到低负载节点,从而在节点之间实现负载均衡。通常在新节点加入或节点重启后需要此操作来实现平衡。
重平衡的价值主要有以下两点:
-- **提升系统扩展性**:由于 MQTT 连接是基于 TCP/IP 协议的长连接,当集群扩容后,旧节点的连接不会自动迁移到新节点上。如果希望新节点承载旧节点上部分负载,可以通过重平衡功能,将旧节点上的负载平滑地迁移到新节点上,从而使整个集群负载更加均衡,提高系统的吞吐量,响应速度以及资源利用率,使系统更好地扩展。
+- **提高系统可扩展性**:由于 MQTT 连接的持久性,当集群扩展时,对原始节点的连接不会自动迁移到新节点。为了解决这个问题,您可以使用负载重平衡功能将连接从过载节点平滑转移到新添加的节点。此过程确保整个集群的负载分布更加均衡,并提高吞吐量、响应速度和资源利用率。
+- **降低运维成本**:对于负载分布不均的集群,某些节点过载而其他节点保持空闲,您可以使用负载重平衡功能自动调整集群内的负载。这有助于实现更均衡的工作分布并降低运维成本。
-- **降低运维成本**:如果系统中某些节点负载过高或过低,需要对这些节点进行手动调整,而通过重平衡,可以自动调整节点的负载,降低运维成本。
+有关 EMQX 集群负载重平衡的更多信息,请参阅文档:[重平衡](../../../cluster/rebalancing.md)。
-关于 EMQX 集群负载重平衡可以参考文档:[重平衡](../../../cluster/rebalancing.md)
+## 如何使用负载重平衡
-- 如何使用重平衡
-
-集群重平衡在 EMQX Operator 里面对应的 CRD 为 `Rebalance`,其示例如下所示:
+EMQX Operator 中集群重平衡对应的 CRD 是 `Rebalance`,示例如下:
```yaml
apiVersion: apps.emqx.io/v2beta1
kind: Rebalance
metadata:
- name: rebalance-sample
+ name: rebalance-sample
spec:
- instanceName: emqx-ee
- rebalanceStrategy:
- connEvictRate: 10
- sessEvictRate: 10
- waitTakeover: 10
- waitHealthCheck: 10
- absConnThreshold: 100
- absSessThreshold: 100
- relConnThreshold: "1.1"
- relSessThreshold: "1.1"
+ instanceName: emqx-ee
+ rebalanceStrategy:
+ connEvictRate: 10
+ sessEvictRate: 10
+ waitTakeover: 10
+ waitHealthCheck: 10
+ absConnThreshold: 100
+ absSessThreshold: 100
+ relConnThreshold: "1.1"
+ relSessThreshold: "1.1"
```
-> 关于 Rebalance 配置可以参考文档:[Rebalance reference](../api-reference.md#rebalancestrategy)。
+> 有关 Rebalance 配置,请参阅文档:[Rebalance reference](../reference/v2beta1-reference.md#rebalancestrategy)。
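+
+下面对 `rebalanceStrategy` 的关键参数做简要说明(注释为概括性描述,准确语义请以上述参考文档为准):
+
+```yaml
+rebalanceStrategy:
+  connEvictRate: 10        # 每秒从高负载(donor)节点迁出的连接数
+  sessEvictRate: 10        # 每秒从 donor 节点迁出的会话数
+  waitTakeover: 10         # 等待客户端重连并接管会话的时间(秒)
+  waitHealthCheck: 10      # 等待负载均衡器摘除 donor 节点的健康检查时间(秒)
+  absConnThreshold: 100    # 判定集群平衡的绝对连接数阈值
+  absSessThreshold: 100    # 判定集群平衡的绝对会话数阈值
+  relConnThreshold: "1.1"  # 判定集群平衡的相对连接数阈值
+  relSessThreshold: "1.1"  # 判定集群平衡的相对会话数阈值
+```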
-## 测试集群负载重平衡
+## 测试负载重平衡
-### 集群负载情况(重平衡前)
+### 重平衡前的集群负载分布
-在之前执行 Rebalance 前,我们构建了一个负载不均衡的集群。并使用 Grafana + Prometheus 监控 EMQX 集群负载的情况:
+在重平衡之前,我们有意创建了一个连接分布不均的 EMQX 集群。然后使用 Grafana 和 Prometheus 监控集群负载:

-从图中可以看出,当前集群共有四个 EMQX 节点,其中三个节点承载了 10000 的连接,剩余一个节点的连接数为 0 。接下来我们将演示如何进行重平衡操作,使得四个节点的负载达到均衡状态。接下来我们将演示如何进行重平衡操作,使得四个节点的负载达到均衡状态。
+如图所示,集群由四个 EMQX 节点组成。其中三个节点共承载 10,000 个连接,另一个节点的连接数为 **零**。
+
+在以下示例中,我们演示如何执行重平衡操作,以在所有四个节点之间均匀分配负载。
-- 提交 Rebalance 任务
+#### 提交 Rebalance 任务
+
+创建 `Rebalance` 资源以启动重平衡过程:
```yaml
apiVersion: apps.emqx.io/v1beta4
kind: Rebalance
metadata:
- name: rebalance-sample
+ name: rebalance-sample
spec:
- instanceName: emqx-ee
- instanceKind: EmqxEnterprise
- rebalanceStrategy:
- connEvictRate: 10
- sessEvictRate: 10
- waitTakeover: 10
- waitHealthCheck: 10
- absConnThreshold: 100
- absSessThreshold: 100
- relConnThreshold: "1.1"
- relSessThreshold: "1.1"
+ instanceName: emqx-ee
+ instanceKind: EmqxEnterprise
+ rebalanceStrategy:
+ connEvictRate: 10
+ sessEvictRate: 10
+ waitTakeover: 10
+ waitHealthCheck: 10
+ absConnThreshold: 100
+ absSessThreshold: 100
+ relConnThreshold: "1.1"
+ relSessThreshold: "1.1"
```
-将上述内容保存为:rebalance.yaml,并执行如下命令提交 Rebalance 任务:
+将文件保存为 `rebalance.yaml`,并执行以下命令提交 Rebalance 任务:
```bash
$ kubectl apply -f rebalance.yaml
rebalance.apps.emqx.io/rebalance-sample created
```
-执行如下命令查看 EMQX 集群重平衡状态:
+#### 检查重平衡进度
+
+执行以下命令检查 EMQX 集群的重平衡状态:
```bash
$ kubectl get rebalances rebalance-sample -o json | jq '.status.rebalanceStates'
{
- "state": "wait_health_check",
- "session_eviction_rate": 10,
- "recipients":[
- "emqx-ee@emqx-ee-3.emqx-ee-headless.default.svc.cluster.local",
- ],
- "node": "emqx-ee@emqx-ee-0.emqx-ee-headless.default.svc.cluster.local",
- "donors":[
- "emqx-ee@emqx-ee-0.emqx-ee-headless.default.svc.cluster.local",
- "emqx-ee@emqx-ee-1.emqx-ee-headless.default.svc.cluster.local",
- "emqx-ee@emqx-ee-2.emqx-ee-headless.default.svc.cluster.local"
- ],
- "coordinator_node": "emqx-ee@emqx-ee-0.emqx-ee-headless.default.svc.cluster.local",
- "connection_eviction_rate": 10
+ "state": "wait_health_check",
+ "session_eviction_rate": 10,
+ "recipients":[
+ "emqx-ee@emqx-ee-3.emqx-ee-headless.default.svc.cluster.local",
+ ],
+ "node": "emqx-ee@emqx-ee-0.emqx-ee-headless.default.svc.cluster.local",
+ "donors":[
+ "emqx-ee@emqx-ee-0.emqx-ee-headless.default.svc.cluster.local",
+ "emqx-ee@emqx-ee-1.emqx-ee-headless.default.svc.cluster.local",
+ "emqx-ee@emqx-ee-2.emqx-ee-headless.default.svc.cluster.local"
+ ],
+ "coordinator_node": "emqx-ee@emqx-ee-0.emqx-ee-headless.default.svc.cluster.local",
+ "connection_eviction_rate": 10
}
```
-> 关于 rebalanceStates 字段的详细描述可以参考文档:[rebalanceStates reference](../api-reference.md#rebalancestate)。
+> 有关 `rebalanceStates` 字段的详细描述,请参阅文档:[rebalanceStates reference](../reference/v2beta1-reference.md#rebalancestate)。
-等待 Rebalance 任务完成:
+#### 等待完成
+
+监控任务直到其状态变为 `Completed`:
```bash
$ kubectl get rebalances rebalance-sample
@@ -109,15 +116,23 @@ NAME STATUS AGE
rebalance-sample Completed 62s
```
-> Rebalance 的状态有三种,分别是:Processing,Completed 以及 Failed。Processing 表示重平衡任务正在进行, Completed 表示重平衡任务已经完成,Failed 表示重平衡任务失败。
+> `STATUS` 字段表示 Rebalance 任务的生命周期状态:
+>
+> | 状态 | 含义 |
+> | -------------- | --------------------------------------------- |
+> | **Processing** | 重平衡正在进行中。 |
+> | **Completed** | 重平衡已成功完成。 |
+> | **Failed** | 重平衡遇到错误并停止。 |
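+>
+> 如果任务进入 **Failed** 状态,可以通过如下命令查看失败原因(示例):
+>
+> ```bash
+> kubectl describe rebalance rebalance-sample
+> ```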
-### 集群负载情况(重平衡后)
+### 重平衡后的集群负载分布

-上图是 Rebalance 完成后,集群负载情况。从图形上可以看出整个 Rebalance 过程非常平滑。从数据上可以看出,集群总连接数还是 10000,与 Rebalance 前一致。四个节点的连接数已经发生了变化,其中三个节点有部分连接迁移到了新扩容的节点上。重平衡结束后,四个节点的负载保持稳定状态,连接数都接近2500左右,不再变化。
+上图显示了 Rebalance 完成后的集群负载。如图所示,在整个操作过程中,客户端连接的迁移是平滑且稳定的。集群中的连接总数仍为 **10,000**,与重平衡前相同。
+
+在重平衡之前,一个节点承载 **0** 个连接,其余三个节点共承载 **10,000** 个连接。重平衡后,连接已重新分配到所有四个节点,每个节点的负载稳定在 **2,500** 个连接左右并保持一致。
-根据集群达到平衡的条件:
+要确定集群是否已达到平衡状态,EMQX Operator 评估以下条件:
```
avg(源节点连接数) < avg(目标节点连接数) + abs_conn_threshold
@@ -125,4 +140,10 @@ avg(源节点连接数) < avg(目标节点连接数) + abs_conn_threshold
avg(源节点连接数) < avg(目标节点连接数) * rel_conn_threshold
```
-代入配置的 Rebalance 参数和连接数可以计算出 `avg(2553 + 2553+ 2554) < 2340 * 1.1`,因此当前集群负载已经达到平衡状态,Rebalance 任务成功实现了集群负载的重新平衡。
+使用配置的 Rebalance 阈值和实际连接数:
+
+- 源节点平均值:`avg(2553 + 2553 + 2554) ≈ 2553`
+- 目标节点平均值:`2340`
+- 检查的条件:`2553 < 2340 * 1.1`
+
+由于条件成立,Operator 得出结论:集群已达到平衡状态,重平衡任务已成功完成。
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
index 4d987b276..4ee1f8d0e 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-restricted-k8s.md
@@ -111,27 +111,29 @@ kubectl -n emqx wait --for=condition=Ready pods -l "control-plane=controller-man
## 配置 EMQX 集群
-1. 将以下内容保存为 YAML 文件,并通过 `kubectl apply` 命令部署:
-
-```yaml
-apiVersion: apps.emqx.io/v2beta1
-kind: EMQX
-metadata:
- name: emqx
- namespace: emqx
-spec:
- image: ${REGISTRY}/emqx/emqx-enterprise:${EMQX_VERSION}
- config:
- data: |
- license {
- key = "..."
- }
-```
-
-2. 等待 EMQX 集群就绪。可通过以下命令检查状态,请确保 `STATUS` 为 `Running`(可能需要一些时间):
-
-```bash
-$ kubectl get emqx emqx
-NAME IMAGE STATUS AGE
-emqx my.private.registry/emqx/emqx-enterprise:5.10.0 Running 10m
-```
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 命令部署:
+
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ namespace: emqx
+ spec:
+ image: ${REGISTRY}/emqx/emqx-enterprise:${EMQX_VERSION}
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ ```
+
+2. 等待 EMQX 集群就绪,您可以通过 `kubectl get` 命令检查 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这可能需要一些时间。
+
+ ```bash
+ $ kubectl get emqx emqx
+ NAME IMAGE STATUS AGE
+ emqx my.private.registry/emqx/emqx-enterprise:5.10.0 Running 10m
+ ```
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-service.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-service.md
index 3d94625c0..4766e86bb 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-service.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-service.md
@@ -1,97 +1,98 @@
# 通过 LoadBalancer 访问 EMQX 集群
-## 任务目标
+## 目标
通过 LoadBalancer 类型的 Service 访问 EMQX 集群。
## 配置 EMQX 集群
-下面是 EMQX Custom Resource 的相关配置,你可以根据希望部署的 EMQX 的版本来选择对应的 APIVersion,具体的兼容性关系,请参考 [EMQX Operator 兼容性](../operator.md):
+EMQX CRD `apps.emqx.io/v2beta1` 支持:
+* 通过 `.spec.dashboardServiceTemplate` 配置 EMQX Dashboard Service。
+* 通过 `.spec.listenersServiceTemplate` 配置 EMQX 集群监听器 Service。
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.dashboardServiceTemplate` 配置 EMQX 集群 Dashboard Service ,通过 `.spec.listenersServiceTemplate` 配置 EMQX 集群 listener Service,其文档可以参考:[Service](../api-reference.md#emqxspec)。
+有关更多详细信息,请参阅[相应文档](../reference/v2beta1-reference.md#emqxspec)。
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署。
- ```yaml
- apiVersion: apps.emqx.io/v2beta1
- kind: EMQX
- metadata:
- name: emqx
- spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
- listenersServiceTemplate:
- spec:
- type: LoadBalancer
- dashboardServiceTemplate:
- spec:
- type: LoadBalancer
- ```
+ ```yaml
+ apiVersion: apps.emqx.io/v2beta1
+ kind: EMQX
+ metadata:
+ name: emqx
+ spec:
+ image: emqx/emqx:@EE_VERSION@
+ config:
+ data: |
+ license {
+ key = "..."
+ }
+ listenersServiceTemplate:
+ spec:
+ type: LoadBalancer
+ dashboardServiceTemplate:
+ spec:
+ type: LoadBalancer
+ ```
- > EMQX 默认会开启一个 MQTT TCP 监听器 `tcp-default` 对应的端口为1883 以及 Dashboard 监听器 `dashboard-listeners-http-bind` 对应的端口为18083 。
+ ::: tip
- > 用户可以通过 `.spec.config.data` 字段或者 EMQX Dashboard 增加新的监听器。EMQX Operator 在创建 Service 时会将缺省的监听器信息自动注入到 Service 里面,但是当用户配置的 Service 和 EMQX 配置的监听器有冲突时(name 或者 port 字段重复),EMQX Operator 会以用户的配置为准。
+ 默认情况下,EMQX 在端口 1883 上启动 MQTT TCP 监听器 `tcp-default`,在端口 18083 上启动 Dashboard HTTP 监听器。
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
+ 用户可以通过 `.spec.config.data` 配置新的或现有的监听器,或通过 EMQX Dashboard 管理它们。
- ```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
- ```
+ EMQX Operator 会自动在 Service 资源中反映默认监听器信息。当用户配置的 Service 与 EMQX 配置的监听器发生冲突时(名称或端口字段重复),EMQX Operator 会优先使用用户配置。
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
+ :::
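+
+   例如,以下片段(仅为示意)通过 `.spec.config.data` 额外声明一个端口为 1884 的 TCP 监听器:
+
+   ```yaml
+   spec:
+     config:
+       data: |
+         listeners.tcp.test {
+           bind = "0.0.0.0:1884"
+         }
+   ```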
- EMQX Operator 会创建两个 EMQX Service 资源,一个是 emqx-dashboard,一个是 emqx-listeners,分别对应 EMQX 控制台和 EMQX 监听端口。
+2. 等待 EMQX 集群就绪。使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
+ ```bash
+ $ kubectl get emqx emqx
+ NAME STATUS AGE
+ emqx Ready 10m
+ ```
- 192.168.1.200
- ```
+## 通过 EMQX Dashboard 添加新监听器
- 通过浏览器访问 `http://192.168.1.200:18083` ,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
+1. 添加新监听器。
-## 通过 MQTTX CLI 连接 EMQX Cluster
+ - 打开 EMQX Dashboard 并导航到 **管理** -> **监听器**。
-+ 获取 EMQX 集群的 External IP
+ - 点击**添加监听器**按钮添加一个名称为 `test`、端口为 `1884` 的监听器,如下图所示:
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
+
-+ 使用 MQTTX CLI 连接 EMQX 集群
+ - 点击**添加**按钮创建监听器,如下图所示:
- ```bash
- $ mqttx conn -h ${external_ip} -p 1883
+
- [4/17/2023] [5:17:31 PM] › … Connecting...
- [4/17/2023] [5:17:31 PM] › ✔ Connected
- ```
+ 从图中可以看出,新监听器已创建。
-## 通过 EMQX Dashboard 添加监听器
+2. 检查新监听器是否反映在 Service 中。
-+ 添加监听器
+ ```bash
+ kubectl get svc
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ emqx-dashboard NodePort 10.105.110.235 18083:32012/TCP 13m
+ emqx-listeners NodePort 10.106.1.58 1883:32010/TCP,1884:30763/TCP 12m
+ ```
- 打开浏览器登录 EMQX Dashboard,点击 Configuration → Listeners 进入监听器的页面,我们先点击 Add Listener 的按钮添加一个名称为 test,端口为1884的监听器,如下图所示:
+   从输出结果可以看到,新添加的 1884 端口监听器已经反映在 `emqx-listeners` Service 中。
-
-

-
+## 使用 MQTTX 连接到新监听器
- 然后点击 Add 按钮创建监听器,如下图所示:
+1. 获取 EMQX 监听器服务的外部 IP。
-
+ ```bash
+ external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
- 从图中可以看出,我们创建的 test 监听器已经生效。
+2. 使用 MQTTX CLI 连接到新监听器。
-+ 查看新增的监听器是否注入 Service
-
- ```bash
- kubectl get svc
-
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- emqx-dashboard NodePort 10.105.110.235 18083:32012/TCP 13m
- emqx-listeners NodePort 10.106.1.58 1883:32010/TCP,1884:30763/TCP 12m
- ```
-
- 从输出结果可以看到,刚才新增加的监听器1884已经注入到 `emqx-listeners` 这个 Service 里面。
+ ```bash
+ $ mqttx conn -h ${external_ip} -p 1884
+
+ [4/17/2023] [5:17:31 PM] › … Connecting...
+ [4/17/2023] [5:17:31 PM] › ✔ Connected
+ ```
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-tls.md b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
index 37757d3cd..11348926c 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/configure-emqx-tls.md
@@ -1,50 +1,58 @@
# 在 EMQX 中开启 TLS
-## 任务目标
+## 目标
-通过 `extraVolumes` 和 `extraVolumeMounts` 字段自定义 TLS 证书。
+使用 `extraVolumes` 和 `extraVolumeMounts` 字段自定义 TLS 证书。
## 基于 TLS 证书创建 Secret
-Secret 是一种包含少量敏感信息例如密码、令牌或密钥的对象,其文档可以参考:[Secret](https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/#working-with-secrets)。在本文中我们使用 Secret 保存 TLS 证书信息,因此在创建 EMQX 集群之前我们需要基于 TLS 证书创建好 Secret。
-
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
-
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: emqx-tls
- type: kubernetes.io/tls
- stringData:
- ca.crt: |
- -----BEGIN CERTIFICATE-----
- ...
- -----END CERTIFICATE-----
- tls.crt: |
- -----BEGIN CERTIFICATE-----
- ...
- -----END CERTIFICATE-----
- tls.key: |
- -----BEGIN RSA PRIVATE KEY-----
- ...
- -----END RSA PRIVATE KEY-----
- ```
-
- > `ca.crt` 表示 CA 证书内容,`tls.crt` 表示服务端证书内容,`tls.key` 表示服务端私钥内容。此例中上述三个字段的内容被省略,请用自己证书的内容进行填充。
+Secret 是一种包含少量敏感信息的对象,例如密码、令牌或密钥。在本演示中,我们使用 Secret 存储 TLS 证书信息,因此在创建 EMQX 集群之前需要创建一个 Secret。
+
+有关更多信息,请参阅 [Secret](https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/#working-with-secrets) 文档。
+
+将以下内容保存为 YAML 文件,并使用 `kubectl apply` 命令部署:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: emqx-tls
+type: kubernetes.io/tls
+stringData:
+ ca.crt: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+ tls.crt: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+ tls.key: |
+ -----BEGIN RSA PRIVATE KEY-----
+ ...
+ -----END RSA PRIVATE KEY-----
+```
+
+:::tip
+在此示例中,上述三个字段的内容被省略。请用您自己的证书内容填充。
+* `ca.crt` 应包含 CA 证书。
+* `tls.crt` 应包含服务器证书。
+* `tls.key` 应包含服务器的私钥。
+:::
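+
+也可以直接基于证书文件创建同样的 Secret(示例命令,证书文件路径请替换为实际路径):
+
+```bash
+kubectl create secret generic emqx-tls \
+  --type=kubernetes.io/tls \
+  --from-file=ca.crt=./ca.crt \
+  --from-file=tls.crt=./server.crt \
+  --from-file=tls.key=./server.key
+```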
## 配置 EMQX 集群
-下面是 EMQX Custom Resource 的相关配置,你可以根据希望部署的 EMQX 的版本来选择对应的 APIVersion,具体的兼容性关系,请参考 [EMQX Operator 兼容性](../operator.md):
-
-:::: tabs type:card
-::: tab apps.emqx.io/v2beta1
+EMQX CRD `apps.emqx.io/v2beta1` 提供以下字段来为 EMQX 集群配置额外的卷和挂载点:
+* `.spec.coreTemplate.extraVolumes`
+* `.spec.coreTemplate.extraVolumeMounts`
+* `.spec.replicantTemplate.extraVolumes`
+* `.spec.replicantTemplate.extraVolumeMounts`
-`apps.emqx.io/v2beta1 EMQX` 支持通过 `.spec.coreTemplate.extraVolumes` 和 `.spec.coreTemplate.extraVolumeMounts` 以及 `.spec.replicantTemplate.extraVolumes` 和 `.spec.replicantTemplate.extraVolumeMounts` 字段给 EMQX 集群配置额外的卷和挂载点。在本文中我们可以使用这个两个字段为 EMQX 集群配置 TLS 证书。
+在本演示中,我们将使用这些字段为 EMQX 集群提供 TLS 证书。
-Volumes 的类型有很多种,关于 Volumes 描述可以参考文档:[Volumes](https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/#secret)。在本文中我们使用的是 `secret` 类型。
+Volumes 的类型有很多种。有关 Volumes 的信息,请参阅 [Volumes](https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/#secret) 文档。这里我们使用的是 `secret` 卷类型。
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
+1. 将以下内容保存为 YAML 文件,并使用 `kubectl apply` 部署:
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -52,8 +60,9 @@ Volumes 的类型有很多种,关于 Volumes 描述可以参考文档:[Volum
metadata:
name: emqx
spec:
- image: emqx/emqx-enterprise:@EE_VERSION@
+ image: emqx/emqx:@EE_VERSION@
config:
+ # 配置从 `emqx-tls` 卷挂载的 TLS 监听器证书:
data: |
listeners.ssl.default {
bind = "0.0.0.0:8883"
@@ -65,6 +74,9 @@ Volumes 的类型有很多种,关于 Volumes 描述可以参考文档:[Volum
handshake_timeout = 5s
}
}
+ license {
+ key = "..."
+ }
coreTemplate:
spec:
extraVolumes:
@@ -77,11 +89,13 @@ Volumes 的类型有很多种,关于 Volumes 描述可以参考文档:[Volum
replicantTemplate:
spec:
extraVolumes:
+      # 创建一个名为 `emqx-tls`、类型为 `secret` 的卷:
- name: emqx-tls
secret:
secretName: emqx-tls
extraVolumeMounts:
- name: emqx-tls
+ # TLS 证书挂载到 EMQX 节点的目录:
mountPath: /mounted/cert
dashboardServiceTemplate:
spec:
@@ -91,66 +105,50 @@ Volumes 的类型有很多种,关于 Volumes 描述可以参考文档:[Volum
type: LoadBalancer
```
- > `.spec.coreTemplate.extraVolumes` 字段配置了卷的类型为:secret,名称为:emqx-tls。
-
- >`.spec.coreTemplate.extraVolumeMounts` 字段配置了 TLS 证书挂载到 EMQX 的目录为:`/mounted/cert`。
-
- >`.spec.config.data` 字段配置了 TLS 监听器证书路径,更多 TLS 监听器的配置可以参考文档:[配置手册](../../../../configuration/configuration.md)。
+2. 等待 EMQX 集群就绪。
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
+ 使用 `kubectl get` 检查 EMQX 集群的状态,并确保 `STATUS` 为 `Ready`。这可能需要一些时间。
```bash
- $ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
+ $ kubectl get emqx
+ NAME STATUS AGE
+ emqx Ready 10m
```
-+ 获取 EMQX 集群的 Dashboard External IP,访问 EMQX 控制台
+## 使用 MQTTX 验证 TLS 连接
- EMQX Operator 会创建两个 EMQX Service 资源,一个是 emqx-dashboard,一个是 emqx-listeners,分别对应 EMQX 控制台和 EMQX 监听端口。
+[MQTTX CLI](https://mqttx.app/zh/cli) 是一款开源的 MQTT 5.0 命令行客户端工具,旨在帮助开发者快速开始使用 MQTT 服务和应用。
- ```bash
- $ kubectl get svc emqx-dashboard -o json | jq '.status.loadBalancer.ingress[0].ip'
-
- 192.168.1.200
- ```
+1. 获取 EMQX 监听器服务的外部 IP。
- 通过浏览器访问 `http://192.168.1.200:18083`,使用默认的用户名和密码 `admin/public` 登录 EMQX 控制台。
+ ```bash
+   external_ip=$(kubectl get svc emqx-listeners -o json | jq -r '.status.loadBalancer.ingress[0].ip')
+ ```
-## 使用 MQTTX CLI 验证 TLS 连接
+2. 使用 MQTTX CLI 订阅消息。连接到 TLS 监听器端口 8883,并使用 `--insecure` 标志跳过服务端证书校验(生产环境建议改用 `--ca` 指定 CA 证书进行校验)。
-[MQTTX CLI](https://mqttx.app/zh/cli) 是一款开源的 MQTT 5.0 命令行客户端工具,旨在帮助开发者在不需要使用图形化界面的基础上,也能更快的开发和调试 MQTT 服务与应用。
+ ```bash
+ mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --insecure
+ [10:00:25] › … Connecting...
+ [10:00:25] › ✔ Connected
+ [10:00:25] › … Subscribing to hello...
+ [10:00:25] › ✔ Subscribed to hello
+ ```
-+ 获取 EMQX 集群的 External IP
+3. 在单独的终端窗口中发布消息。
- ```bash
- external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
- ```
-
-+ 使用 MQTTX CLI 订阅消息
-
- ```bash
- mqttx sub -h ${external_ip} -p 8883 -t "hello" -l mqtts --insecure
-
- [10:00:25] › … Connecting...
- [10:00:25] › ✔ Connected
- [10:00:25] › … Subscribing to hello...
- [10:00:25] › ✔ Subscribed to hello
- ```
+ ```bash
+ mqttx pub -h ${external_ip} -p 8883 -t "hello" -m "hello world" -l mqtts --insecure
+ [10:00:58] › … Connecting...
+ [10:00:58] › ✔ Connected
+ [10:00:58] › … Message Publishing...
+ [10:00:58] › ✔ Message published
+ ```
-+ 创建一个新的终端窗口并使用 MQTTX CLI 发布消息
+4. 观察订阅客户端接收消息。
- ```bash
- mqttx pub -h ${external_ip} -p 8883 -t "hello" -m "hello world" -l mqtts --insecure
-
- [10:00:58] › … Connecting...
- [10:00:58] › ✔ Connected
- [10:00:58] › … Message Publishing...
- [10:00:58] › ✔ Message published
- ```
+ ```bash
+ [10:00:58] › payload: hello world
+ ```
-+ 查看订阅终端窗口收到的消息
-
- ```bash
- [10:00:58] › payload: hello world
- ```
+ 这表明发布者和订阅者客户端都通过 TLS 连接成功与代理通信。
diff --git a/zh_CN/deploy/kubernetes/operator/tasks/overview.md b/zh_CN/deploy/kubernetes/operator/tasks/overview.md
index 0ab8891a6..7a0a408e6 100644
--- a/zh_CN/deploy/kubernetes/operator/tasks/overview.md
+++ b/zh_CN/deploy/kubernetes/operator/tasks/overview.md
@@ -2,29 +2,27 @@
本章提供了在 Kubernetes 集群中使用 EMQX 执行常见任务和操作的分步说明。
-本章分为几个部分,涵盖:
+## 配置和设置
-**配置和设置**
-
-- License 文件和安全性
- - [License 配置 (EMQX 企业版)](./configure-emqx-license.md)
- - [在 EMQX 中开启 TLS](./configure-emqx-tls.md)
+- License 和安全性
+ - [管理 License](./configure-emqx-license.md)
+ - [为 EMQX 监听器启用 TLS](./configure-emqx-tls.md)
- 集群配置
- - [通过 EMQX Operator 修改 EMQX 配置](./configure-emqx-config.md)
- - [开启 Core + Replicant 集群 (EMQX 5.x)](./configure-emqx-core-replicant.md)
- - [在 EMQX 集群中开启持久化](./configure-emqx-persistence.md)
- - [通过 Kubernetes Service 访问 EMQX 集群](./configure-emqx-service.md)
- - [集群负载重平衡(EMQX 企业版)](./configure-emqx-rebalance.md)
+ - [修改 EMQX 配置](./configure-emqx-config.md)
+ - [启用 Core-Replicant 部署](./configure-emqx-core-replicant.md)
+ - [启用持久化](./configure-emqx-persistence.md)
+ - [通过 LoadBalancer 访问 EMQX 集群](./configure-emqx-service.md)
+ - [重新平衡集群负载](./configure-emqx-rebalance.md)
-**升级和维护**
+## 升级和维护
- 升级
- - [配置蓝绿发布 (EMQX 企业版)](./configure-emqx-blueGreenUpdate.md)
+ - [执行蓝绿升级](./configure-emqx-blueGreenUpdate.md)
- 日志管理
- - [在 Kubernetes 中采集 EMQX 的日志](./configure-emqx-log-collection.md)
+ - [采集 EMQX 日志](./configure-emqx-log-collection.md)
- [修改 EMQX 日志等级](./configure-emqx-log-level.md)
-**监控和性能**
+## 监控和性能
-- [通过 Prometheus 监控 EMQX 集群](./configure-emqx-prometheus.md)
+- [使用 Prometheus 监控 EMQX 集群](./configure-emqx-prometheus.md)
diff --git a/zh_CN/deploy/kubernetes/operator/tencent-cloud.md b/zh_CN/deploy/kubernetes/operator/tencent-cloud.md
index 19b3f3794..de940c974 100644
--- a/zh_CN/deploy/kubernetes/operator/tencent-cloud.md
+++ b/zh_CN/deploy/kubernetes/operator/tencent-cloud.md
@@ -19,7 +19,7 @@ EMQX Operator 支持在腾讯云容器服务(Tencent Kubernetes Engine,TKE
下面是 EMQX 自定义资源的相关配置。你可以根据你想部署的 EMQX 版本选择相应的 APIVersion。关于具体的兼容性关系,请参考[ EMQX 与 EMQX Operator 的兼容性列表](./operator.md)。
-+ 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它
+1. 将下面的内容保存成 YAML 文件,并通过 `kubectl apply` 命令部署它。
```yaml
apiVersion: apps.emqx.io/v2beta1
@@ -56,15 +56,15 @@ EMQX Operator 支持在腾讯云容器服务(Tencent Kubernetes Engine,TKE
type: LoadBalancer
```
-+ 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这个可能需要一些时间
+2. 等待 EMQX 集群就绪,可以通过 `kubectl get` 命令查看 EMQX 集群的状态,请确保 `STATUS` 为 `Running`,这可能需要一些时间。
```bash
$ kubectl get emqx emqx
- NAME IMAGE STATUS AGE
- emqx emqx/emqx-enterprise:@EE_VERSION@ Running 10m
+ NAME STATUS AGE
+ emqx Running 10m
```
-+ 获取 EMQX 集群的 External IP,访问 EMQX 控制台
+3. 获取 EMQX 集群的 External IP,访问 EMQX 控制台。
EMQX Operator 会创建两个 EMQX Service 资源,一个是 `emqx-dashboard`,一个是 `emqx-listeners`,分别对应 EMQX 控制台和 EMQX 监听端口。
@@ -80,35 +80,35 @@ EMQX Operator 支持在腾讯云容器服务(Tencent Kubernetes Engine,TKE
[MQTTX CLI](https://mqttx.app/zh/cli) 是一款开源的 MQTT 5.0 命令行客户端工具,旨在帮助开发者在不需要使用图形化界面的基础上,也能更快的开发和调试 MQTT 服务与应用。
-+ 获取 EMQX 集群的 External IP
+1. 获取 EMQX 集群的 External IP。
```bash
external_ip=$(kubectl get svc emqx-listeners -o json | jq '.status.loadBalancer.ingress[0].ip')
```
-+ 订阅消息
+2. 订阅消息。
```bash
$ mqttx sub -t 'hello' -h ${external_ip} -p 1883
-
+
[10:00:25] › … Connecting...
[10:00:25] › ✔ Connected
[10:00:25] › … Subscribing to hello...
[10:00:25] › ✔ Subscribed to hello
```
-+ 创建一个新的终端窗口并发布消息
+3. 创建一个新的终端窗口并发布消息。
```bash
$ mqttx pub -t 'hello' -h ${external_ip} -p 1883 -m 'hello world'
-
+
[10:00:58] › … Connecting...
[10:00:58] › ✔ Connected
[10:00:58] › … Message Publishing...
[10:00:58] › ✔ Message published
```
-+ 查看订阅终端窗口收到的消息
+4. 查看订阅终端窗口收到的消息。
```bash
[10:00:58] › payload: hello world