From 65fa58a6eb14e7e3188782a28564e65548b872c7 Mon Sep 17 00:00:00 2001 From: Sunil Singh Date: Fri, 7 Nov 2025 12:39:45 -0800 Subject: [PATCH] RKE1 removal from specified pages in new user guides section. Signed-off-by: Sunil Singh --- .../about-rancher-agents.md | 30 ----- .../use-new-nodes-in-an-infra-provider.md | 126 +----------------- .../authorized-cluster-endpoint.md | 15 --- .../create-kubernetes-persistent-storage.md | 3 - .../use-new-nodes-in-an-infra-provider.md | 124 +---------------- .../authorized-cluster-endpoint.md | 15 --- .../create-kubernetes-persistent-storage.md | 4 - .../use-new-nodes-in-an-infra-provider.md | 124 +---------------- .../authorized-cluster-endpoint.md | 15 --- .../create-kubernetes-persistent-storage.md | 4 - .../use-new-nodes-in-an-infra-provider.md | 124 +---------------- .../authorized-cluster-endpoint.md | 15 --- .../create-kubernetes-persistent-storage.md | 4 - .../about-rancher-agents.md | 30 ----- .../use-new-nodes-in-an-infra-provider.md | 126 +----------------- .../authorized-cluster-endpoint.md | 15 --- .../create-kubernetes-persistent-storage.md | 3 - .../about-rancher-agents.md | 30 ----- .../use-new-nodes-in-an-infra-provider.md | 126 +----------------- .../authorized-cluster-endpoint.md | 15 --- .../create-kubernetes-persistent-storage.md | 3 - 21 files changed, 12 insertions(+), 939 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md index 375397eb5135..01aa252eb4ee 100644 --- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md +++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md @@ -27,39 +27,12 @@ The `cattle-cluster-agent` pod does not define the default CPU and memory reques To configure request values through the UI: - - - -1. When you [create](./launch-kubernetes-with-rancher.md) or edit an existing cluster, go to the **Cluster Options** section. -1. Expand the **Cluster Configuration** subsection. -1. Configure your request values using the **CPU Requests** and **Memory Requests** fields as needed. - - - - 1. When you [create](./launch-kubernetes-with-rancher.md) or edit an existing cluster, go to the **Cluster Configuration**. 1. Select the **Cluster Agent** subsection. 1. Configure your request values using the **CPU Reservation** and **Memory Reservation** fields as needed. - - - If you prefer to configure via YAML, add the following snippet to your configuration file: - - - -```yaml -cluster_agent_deployment_customization: - override_resource_requirements: - requests: - cpu: 50m - memory: 100Mi -``` - - - - ```yaml spec: clusterAgentDeploymentCustomization: @@ -69,9 +42,6 @@ spec: memory: 100Mi ``` - - - ### Scheduling rules The `cattle-cluster-agent` uses either a fixed set of tolerations, or dynamically-added tolerations based on taints applied to the control plane nodes. This structure allows [Taint based Evictions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) to work properly for `cattle-cluster-agent`. 
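For context, the `clusterAgentDeploymentCustomization` snippet kept above is only a fragment of a provisioning cluster object. A minimal sketch of how it might sit in a full manifest follows; the `apiVersion`/`kind` pair, the object name, the namespace, and the Kubernetes version are illustrative assumptions, not values taken from this patch:

```yaml
# Hedged sketch: a provisioning Cluster object carrying the cluster agent
# resource requests from the snippet above. Names and versions are assumed.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-rke2-cluster    # assumed cluster name
  namespace: fleet-default # assumed Rancher provisioning namespace
spec:
  kubernetesVersion: v1.31.4+rke2r1 # assumed RKE2 release
  clusterAgentDeploymentCustomization:
    overrideResourceRequirements:
      requests:
        cpu: 50m      # CPU reserved for the cattle-cluster-agent pod
        memory: 100Mi # memory reserved for the cattle-cluster-agent pod
```

Keeping the requests on the provisioning object, rather than editing the agent Deployment directly, should mean Rancher reapplies them whenever the agent is redeployed.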
diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md index 45093ee8232a..bbe848936ede 100644 --- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md +++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md @@ -6,130 +6,10 @@ title: Launching Kubernetes on New Nodes in an Infrastructure Provider -When you create an RKE or RKE2 cluster using a node template in Rancher, each resulting node pool is shown in a new **Machine Pools** tab. You can see the machine pools by doing the following: - -1. Click **☰ > Cluster Management**. -1. Click the name of the RKE or RKE2 cluster. - -## RKE Clusters - -Using Rancher, you can create pools of nodes based on a [node template](#node-templates). This node template defines the parameters you want to use to launch nodes in your infrastructure providers or cloud providers. - -One benefit of installing Kubernetes on node pools hosted by an infrastructure provider is that if a node loses connectivity with the cluster, Rancher can automatically create another node to join the cluster to ensure that the count of the node pool is as expected. - -The available cloud providers to create a node template are decided based on active [node drivers](#node-drivers). - -### Node Templates - -A node template is the saved configuration for the parameters to use when provisioning nodes in a specific cloud provider. These nodes can be launched from the UI. Rancher uses [Docker Machine](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) to provision these nodes. The available cloud providers to create node templates are based on the active node drivers in Rancher. - -After you create a node template in Rancher, it's saved so that you can use this template again to create node pools. Node templates are bound to your login. After you add a template, you can remove them from your user profile. - -#### Node Labels - -You can add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) on each node template, so that any nodes created from the node template will automatically have these labels on them. - -Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) - -#### Node Taints - -You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on each node template, so that any nodes created from the node template will automatically have these taints on them. - -Since taints can be added at a node template and node pool, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template. - -#### Administrator Control of Node Templates - -Administrators can control all node templates. Admins can now maintain all the node templates within Rancher. 
When a node template owner is no longer using Rancher, the node templates created by them can be managed by administrators so the cluster can continue to be updated and maintained. - -To access all node templates, an administrator will need to do the following: +When you create an RKE2 cluster using a node template in Rancher, each resulting node pool is shown in a new **Machine Pools** tab. You can see the machine pools by doing the following: 1. Click **☰ > Cluster Management**. -1. Click **RKE1 Configuration > Node Templates**. - -**Result:** All node templates are listed. The templates can be edited or cloned by clicking the **⋮**. - -### Node Pools - -Using Rancher, you can create pools of nodes based on a [node template](#node-templates). - -A node template defines the configuration of a node, like what operating system to use, number of CPUs, and amount of memory. - -The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected. - -Each node pool must have one or more nodes roles assigned. - -Each node role (i.e. etcd, controlplane, and worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters. - -The recommended setup is to have: - -- a node pool with the etcd node role and a count of three -- a node pool with the controlplane node role and a count of at least two -- a node pool with the worker node role and a count of at least two - -**RKE1 downstream cluster nodes in an air-gapped environment:** - -By default, Rancher tries to run the Docker Install script when provisioning RKE1 downstream cluster nodes, such as in vSphere. However, the Rancher Docker installation script would fail in air-gapped environments. To work around this issue, you may choose to skip installing Docker when creating a Node Template where Docker is pre-installed onto a VM image. You can accomplish this by selecting **None** in the dropdown list for `Docker Install URL` under **Engine Options** in the Rancher UI. - -
**Engine Options Dropdown:**
- -![Engine Options Dropdown](/img/node-template-engine-options-rke1.png) - -#### Node Pool Taints - -If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints to a node pool is that you can change the node template without having to first ensure that the taint exists in the new template. - -For each taint, they will automatically be added to any created node in the node pool. Therefore, if you add taints to a node pool that have existing nodes, the taints won't apply to existing nodes in the node pool, but any new node added into the node pool will get the taint. - -When there are taints on the node pool and node template, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template. - -#### About Node Auto-replace - -If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes. - -:::caution - -Self-healing node pools are designed to help you replace worker nodes for stateless applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications. - -::: - -Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool. - -#### Enabling Node Auto-replace - -When you create the node pool, you can specify the amount of time in minutes that Rancher will wait to replace an unresponsive node. - -1. In the form for creating or editing a cluster, go to the **Node Pools** section. -1. Go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter the number of minutes that Rancher should wait for a node to respond before replacing the node. -1. Fill out the rest of the form for creating or editing the cluster. - -**Result:** Node auto-replace is enabled for the node pool. - -#### Disabling Node Auto-replace - -You can disable node auto-replace from the Rancher UI with the following steps: - -1. Click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to disable node auto-replace and click **⋮ > Edit Config**. -1. In the **Node Pools** section, go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter 0. -1. Click **Save**. - -**Result:** Node auto-replace is disabled for the node pool. 
- -### Cloud Credentials - -Node templates can use cloud credentials to store credentials for launching nodes in your cloud provider, which has some benefits: - -- Credentials are stored as a Kubernetes secret, which is not only more secure, but it also allows you to edit a node template without having to enter your credentials every time. - -- After the cloud credential is created, it can be re-used to create additional node templates. - -- Multiple node templates can share the same cloud credential to create node pools. If your key is compromised or expired, the cloud credential can be updated in a single place, which allows all node templates that are using it to be updated at once. - -After cloud credentials are created, the user can start [managing the cloud credentials that they created](../../../../reference-guides/user-settings/manage-cloud-credentials.md). - -### Node Drivers - -If you don't find the node driver that you want to use, you can see if it is available in Rancher's built-in [node drivers and activate it](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#activatingdeactivating-node-drivers), or you can [add your own custom node driver](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#adding-custom-node-drivers). +1. Click the name of the RKE2 cluster. ## RKE2 Clusters @@ -147,8 +27,6 @@ The RKE2 CLI exposes two roles, `server` and `agent`, which represent the Kubern The same functionality of using `etcd`, `controlplane` and `worker` nodes is possible in the RKE2 CLI by using flags and node tainting to control where workloads and the Kubernetes master were scheduled. The reason those roles were not implemented as first-class roles in the RKE2 CLI is that RKE2 is conceptualized as a set of raw building blocks that are best leveraged through an orchestration system such as Rancher. -The implementation of the three node roles in Rancher means that Rancher managed RKE2 clusters are able to easily leverage all of the same architectural best practices that are recommended for RKE clusters. - In our [recommended cluster architecture](../../kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/recommended-cluster-architecture.md), we outline how many nodes of each role clusters should have: - At least three nodes with the role etcd to survive losing one node diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md b/docs/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md index 328537bb4fd7..982151628bc8 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md @@ -25,21 +25,6 @@ After you download the kubeconfig file, you are able to use the kubeconfig file If admins have [kubeconfig token generation turned off](../../../../api/api-tokens.md#disable-tokens-in-generated-kubeconfigs), the kubeconfig file requires that the [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) to be present in your PATH. 
-### Two Authentication Methods for RKE Clusters - -If the cluster is not an [RKE cluster,](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) the kubeconfig file allows you to access the cluster in only one way: it lets you be authenticated with the Rancher server, then Rancher allows you to run kubectl commands on the cluster. - -For RKE clusters, the kubeconfig file allows you to be authenticated in two ways: - -- **Through the Rancher server authentication proxy:** Rancher's authentication proxy validates your identity, then connects you to the downstream cluster that you want to access. -- **Directly with the downstream cluster's API server:** RKE clusters have an authorized cluster endpoint enabled by default. This endpoint allows you to access your downstream Kubernetes cluster with the kubectl CLI and a kubeconfig file, and it is enabled by default for RKE clusters. In this scenario, the downstream cluster's Kubernetes API server authenticates you by calling a webhook (the `kube-api-auth` microservice) that Rancher set up. - -This second method, the capability to connect directly to the cluster's Kubernetes API server, is important because it lets you access your downstream cluster if you can't connect to Rancher. - -To use the authorized cluster endpoint, you need to configure kubectl to use the extra kubectl context in the kubeconfig file that Rancher generates for you when the RKE cluster is created. This file can be downloaded from the cluster view in the Rancher UI, and the instructions for configuring kubectl are on [this page.](use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) - -These methods of communicating with downstream Kubernetes clusters are also explained in the [architecture page](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md) in the larger context of explaining how Rancher works and how Rancher communicates with downstream clusters. - ### About the kube-api-auth Authentication Webhook The `kube-api-auth` microservice is deployed to provide the user authentication functionality for the [authorized cluster endpoint](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint). When you access the user cluster using `kubectl`, the cluster's Kubernetes API server authenticates you by using the `kube-api-auth` service as a webhook. diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md index 67a8c9dffbd6..3c0803a9f53a 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md @@ -64,9 +64,6 @@ In clusters that store data on GlusterFS volumes, you may experience an issue wh In [Rancher Launched Kubernetes clusters](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. 
For details on resolving this issue, refer to [this page.](manage-persistent-storage/install-iscsi-volumes.md) -### hostPath Volumes -Before you create a hostPath volume, you need to set up an [extra_bind](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-binds/) in your cluster configuration. This will mount the path as a volume in your kubelets, which can then be used for hostPath volumes in your workloads. - ### Migrating VMware vSphere Cloud Provider from In-tree to Out-of-tree Kubernetes is moving away from maintaining cloud providers in-tree. vSphere has an out-of-tree cloud provider that can be used by installing the vSphere cloud provider and cloud storage plugins. diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md index 74855aac6b12..2fece02d331f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md @@ -6,130 +6,10 @@ title: 在云厂商的新节点上启动 Kubernetes -在 Rancher 中使用节点模板来创建 RKE 或 RKE2 集群时,每个生成的节点池都会显示在新的**主机池**选项卡中。你可以通过执行以下操作来查看主机池: +在 Rancher 中使用节点模板来创建 RKE2 集群时,每个生成的节点池都会显示在新的**主机池**选项卡中。你可以通过执行以下操作来查看主机池: 1. 点击**☰ > 集群管理**。 -1. 单击 RKE 或 RKE2 集群的名称。 - -## RKE 集群 - -使用 Rancher,你可以基于[节点模板](use-new-nodes-in-an-infra-provider.md#节点模板)创建节点池。此节点模板定义了要用于在基础设施提供商或云厂商中启动节点的参数。 - -在托管在云厂商的节点池上安装 Kubernetes 的一个好处是,如果一个节点与集群断开连接,Rancher 可以自动创建另一个节点并将其加入集群,从而确保节点池的数量符合要求。 - -可用于创建节点模板的云提供商是由[主机驱动](use-new-nodes-in-an-infra-provider.md#主机驱动)决定的。 - -### 节点模板 - -节点模板保存了用于在特定云提供商中配置节点时要使用的参数。这些节点可以从 UI 启动。Rancher 使用 [Docker Machine](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) 来配置这些节点。可用于创建节点模板的云提供商取决于 Rancher 中状态是 Active 的主机驱动。 - -在 Rancher 中创建节点模板后,模板会被保存,以便你可以再次使用该模板来创建节点池。节点模板绑定到你的登录名。添加模板后,你可以将其从用户配置文件中删除。 - -#### 节点标签 - -你可以为每个节点模板添加[标签](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/),这样,使用节点模板创建的节点都会自动带有这些标签。 - -无效标签会阻止升级,或阻止 Rancher 启动。有关标签语法的详细信息,请参阅 [Kubernetes 文档](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)。 - -#### 节点污点 - -你可以为每个节点模板添加[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),这样,使用节点模板创建的节点都会自动带有这些污点。 - -由于污点可以同时添加到节点模板和节点池中,因此如果添加了相同键的污点效果没有冲突,则所有污点都将添加到节点中。如果存在具有相同键但不同效果的污点,则节点池中的污点将覆盖节点模板中的污点。 - -#### 节点模板的管理员控制 - -管理员可以控制所有节点模板。现在,管理员可以维护 Rancher 中的所有节点模板。当节点模板所有者不再使用 Rancher 时,他们创建的节点模板可以由管理员管理,以便继续更新和维护集群。 - -要访问所有节点模板,管理员需要执行以下操作: - -1. 点击 **☰ > 集群管理**。 -1. 
单击 **RKE1 配置 > 节点模板**。 - -**结果**:列出所有节点模板。你可以通过单击 **⋮** 来编辑或克隆模板。 - -### 节点池 - -使用 Rancher,你可以基于[节点模板](#节点模板)创建节点池。 - -节点模板定义了节点的配置,例如要使用的操作系统、CPU 数量和内存量。 - -使用节点池的好处是,如果一个节点被销毁或删除,你可以增加 Active 节点的数量来补偿丢失的节点。节点池可以帮助你确保节点池的计数符合要求。 - -每个节点池必须分配一个或多个节点角色。 - -每个节点角色(即 etcd、controlplane 和 worker)都应分配给不同的节点池。虽然你可以将多个节点角色分配给同一个节点池,但不要在生产集群中执行此操作。 - -推荐的设置: - -- 具有 etcd 角色且计数为 3 的节点池 -- 具有 controlplane 角色且计数至少为 2 的节点池 -- 具有 worker 角色且计数至少为 2 的节点池 - -**离线环境中的 RKE1 下游集群节点**: - -默认情况下,在配置 RKE1 下游集群节点时(例如在 vSphere 中),Rancher 会尝试运行 Docker 安装脚本。但是,Rancher Docker 安装脚本在离线环境中会运行失败。要解决此问题,如果 Docker 已预安装到 VM 镜像上,你可以选择在创建节点模板时跳过安装 Docker。为此,你可以在 Rancher UI **引擎选项**下的 `Docker 安装 URL` 下拉列表中选择 **无**。 - -
**引擎选项下拉列表**
- -![引擎选项下拉列表](/img/node-template-engine-options-rke1.png) - -#### 节点池污点 - -如果你没有在节点模板上定义[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),则可以为每个节点池添加污点。将污点添加到节点池的好处是你可以更改节点模板,而不需要先确保污点存在于新模板中。 - -每个污点都将自动添加到节点池中已创建的节点。因此,如果你在已有节点的节点池中添加污点,污点不会应用到已有的节点,但是添加到该节点池中的新节点都将获得该污点。 - -如果污点同时添加到节点模板和节点池中,且添加了相同键的污点效果没有冲突,则所有污点都将添加到节点中。如果存在具有相同键但不同效果的污点,则节点池中的污点将覆盖节点模板中的污点。 - -#### 节点自动替换 - -Rancher 可以自动替换节点池中无法访问的节点。如果节点在指定的时间中处于 Inactive 状态,Rancher 将使用该节点池的节点模板来重新创建节点。 - -:::caution - -自我修复节点池的功能帮助你替换无状态应用的 worker 节点。不建议在 master 节点或连接了持久卷的节点的节点池上启用节点自动替换,因为虚拟机会被临时处理。节点池中的节点与集群断开连接时,其持久卷将被破坏,从而导致有状态应用的数据丢失。 - -::: - -节点自动替换基于 Kubernetes 节点控制器工作。节点控制器定期检查所有节点的状态(可通过 `kube-controller` 的 `--node-monitor-period` 标志配置)。一个节点不可访问时,节点控制器将污染该节点。发生这种情况时,Rancher 将开始其删除倒计时。你可以配置 Rancher 等待删除节点的时间。如果在删除倒计时结束前污点没有被删除,Rancher 将继续删除该节点。Rancher 会根据节点池设置的数量来创建新的节点。 - -#### 启用节点自动替换 - -创建节点池时,你可以指定 Rancher 替换无响应节点的等待时间(以分钟为单位)。 - -1. 在创建或编辑集群的表单中,转到**节点池**。 -1. 转到要启用节点自动替换的节点池。在 **Recreate Unreachable After** 字段中,输入 Rancher 在替换节点之前应该等待节点响应的分钟数。 -1. 填写表单的其余部分以创建或编辑集群。 - -**结果** :已为节点池启用节点自动替换。 - -#### 禁用节点自动替换 - -你可以执行以下步骤从 Rancher UI 禁用节点自动替换: - -1. 点击 **☰ > 集群管理**。 -1. 在**集群**页面上,转到要禁用节点自动替换的集群,然后单击 **⋮ > 编辑配置**。 -1. 在**节点池**部分中,转到要启用节点自动替换的节点池。在 **Recreate Unreachable After** 字段中,输入 0。 -1. 单击**保存**。 - -**结果**:已禁用节点池的节点自动替换。 - -### 云凭证 - -节点模板可以使用云凭证,来存储用于在云提供商中启动节点的凭证,其优点是: - -- 凭证会存储为更安全的 Kubernetes 密文,而且你无需每次都输入凭证便可编辑节点模板。 - -- 创建云凭证后,你可以重新使用该凭证来创建其他节点模板。 - -- 多个节点模板可以使用相同的云凭证来创建节点池。如果你的密钥被泄露或过期,则可以在一个位置更新云凭证,从而一次更新所有使用该凭证的节点模板。 - -创建云凭证后,用户可以[管理创建的云凭证](../../../../reference-guides/user-settings/manage-cloud-credentials.md)。 - -### 主机驱动 - -如果你找不到想要的主机驱动,你可以在 Rancher 的[内置主机驱动](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#激活停用主机驱动)中查看并激活它,也可以[添加自定义主机驱动](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#添加自定义主机驱动)。 +1. 单击 RKE2 集群的名称。
## RKE2 集群 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md index 587d3b1fc3bc..1544188714ab 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md @@ -21,21 +21,6 @@ kubeconfig 文件及其内容特定于各个集群。你可以从 Rancher 的** 如果管理员[关闭了 kubeconfig 令牌生成](../../../../api/api-tokens.md#在生成的-kubeconfig-中禁用令牌),则 kubeconfig 文件要求 [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) 存在于你的 PATH 中。 -## RKE 集群的两种身份验证方法 - -如果集群不是 [RKE 集群](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md),kubeconfig 文件只允许你以一种方式访问​​集群,即通过 Rancher Server 进行身份验证,然后 Rancher 允许你在集群上运行 kubectl 命令。 - -对于 RKE 集群,kubeconfig 文件允许你通过两种方式进行身份验证: - -- **通过 Rancher Server 身份验证代理**:Rancher 的身份验证代理会验证你的身份,然后将你连接到要访问的下游集群。 -- **直接使用下游集群的 API Server**:RKE 集群默认启用授权集群端点。此端点允许你使用 kubectl CLI 和 kubeconfig 文件访问下游 Kubernetes 集群,且 RKE 集群默认启用该端点。在这种情况下,下游集群的 Kubernetes API server 通过调用 Rancher 设置的 webhook(`kube-api-auth` 微服务)对你进行身份验证。 - -第二种方法(即直接连接到集群的 Kubernetes API server)非常重要,因为如果你无法连接到 Rancher,这种方法可以让你访问下游集群。 - -要使用授权集群端点,你需要配置 kubectl,从而使用 Rancher 在创建 RKE 集群时生成的 kubeconfig 文件中的额外 kubectl 上下文。该文件可以从 Rancher UI 的**集群**视图中下载,配置 kubectl 的说明在[此页面](use-kubectl-and-kubeconfig.md#直接使用下游集群进行身份验证)。 - -[架构介绍](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md)也详细解释了这些与下游 Kubernetes 集群通信的方法,并介绍了 Rancher 的工作原理以及 Rancher 如何与下游集群通信的详细信息。 - ## 关于 kube-api-auth 身份验证 Webhook `kube-api-auth` 微服务是为[授权集群端点](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)提供用户认证功能而部署的。当你使用 `kubectl` 访问下游集群时,集群的 Kubernetes API server 会使用 `kube-api-auth` 服务作为 webhook 对你进行身份验证。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md index 4c4f0aaee77b..19bbb8431f31 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md @@ -64,10 +64,6 @@ Rancher v2.5 简化了在 Rancher 管理的集群上安装 Longhorn 的过程。 在将数据存储在 iSCSI 卷上的 [Rancher 启动的 Kubernetes 集群](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可能会遇到 kubelet 无法自动连接 iSCSI 卷的问题。有关解决此问题的详细信息,请参阅[此页面](manage-persistent-storage/install-iscsi-volumes.md)。 -## hostPath 卷 - -在创建 hostPath 卷之前,你需要在集群配置中设置 [extra_bind](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-binds/)。这会将路径作为卷安装在你的 kubelet 中,可用于工作负载中的 hostPath 卷。 - ## 将 vSphere Cloud Provider 从树内迁移到树外 Kubernetes 正在逐渐不在树内维护云提供商。vSphere 有一个树外云提供商,可通过安装 vSphere 云提供商和云存储插件来使用。 diff --git 
a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md index 74855aac6b12..2fece02d331f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md @@ -6,130 +6,10 @@ title: 在云厂商的新节点上启动 Kubernetes -在 Rancher 中使用节点模板来创建 RKE 或 RKE2 集群时,每个生成的节点池都会显示在新的**主机池**选项卡中。你可以通过执行以下操作来查看主机池: +在 Rancher 中使用节点模板来创建 RKE2 集群时,每个生成的节点池都会显示在新的**主机池**选项卡中。你可以通过执行以下操作来查看主机池: 1. 点击**☰ > 集群管理**。 -1. 单击 RKE 或 RKE2 集群的名称。 - -## RKE 集群 - -使用 Rancher,你可以基于[节点模板](use-new-nodes-in-an-infra-provider.md#节点模板)创建节点池。此节点模板定义了要用于在基础设施提供商或云厂商中启动节点的参数。 - -在托管在云厂商的节点池上安装 Kubernetes 的一个好处是,如果一个节点与集群断开连接,Rancher 可以自动创建另一个节点并将其加入集群,从而确保节点池的数量符合要求。 - -可用于创建节点模板的云提供商是由[主机驱动](use-new-nodes-in-an-infra-provider.md#主机驱动)决定的。 - -### 节点模板 - -节点模板保存了用于在特定云提供商中配置节点时要使用的参数。这些节点可以从 UI 启动。Rancher 使用 [Docker Machine](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) 来配置这些节点。可用于创建节点模板的云提供商取决于 Rancher 中状态是 Active 的主机驱动。 - -在 Rancher 中创建节点模板后,模板会被保存,以便你可以再次使用该模板来创建节点池。节点模板绑定到你的登录名。添加模板后,你可以将其从用户配置文件中删除。 - -#### 节点标签 - -你可以为每个节点模板添加[标签](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/),这样,使用节点模板创建的节点都会自动带有这些标签。 - -无效标签会阻止升级,或阻止 Rancher 启动。有关标签语法的详细信息,请参阅 [Kubernetes 文档](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)。 - -#### 节点污点 - -你可以为每个节点模板添加[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),这样,使用节点模板创建的节点都会自动带有这些污点。 - -由于污点可以同时添加到节点模板和节点池中,因此如果添加了相同键的污点效果没有冲突,则所有污点都将添加到节点中。如果存在具有相同键但不同效果的污点,则节点池中的污点将覆盖节点模板中的污点。 - -#### 节点模板的管理员控制 - -管理员可以控制所有节点模板。现在,管理员可以维护 Rancher 中的所有节点模板。当节点模板所有者不再使用 Rancher 时,他们创建的节点模板可以由管理员管理,以便继续更新和维护集群。 - -要访问所有节点模板,管理员需要执行以下操作: - -1. 点击 **☰ > 集群管理**。 -1. 单击 **RKE1 配置 > 节点模板**。 - -**结果**:列出所有节点模板。你可以通过单击 **⋮** 来编辑或克隆模板。 - -### 节点池 - -使用 Rancher,你可以基于[节点模板](#节点模板)创建节点池。 - -节点模板定义了节点的配置,例如要使用的操作系统、CPU 数量和内存量。 - -使用节点池的好处是,如果一个节点被销毁或删除,你可以增加 Active 节点的数量来补偿丢失的节点。节点池可以帮助你确保节点池的计数符合要求。 - -每个节点池必须分配一个或多个节点角色。 - -每个节点角色(即 etcd、controlplane 和 worker)都应分配给不同的节点池。虽然你可以将多个节点角色分配给同一个节点池,但不要在生产集群中执行此操作。 - -推荐的设置: - -- 具有 etcd 角色且计数为 3 的节点池 -- 具有 controlplane 角色且计数至少为 2 的节点池 -- 具有 worker 角色且计数至少为 2 的节点池 - -**离线环境中的 RKE1 下游集群节点**: - -默认情况下,在配置 RKE1 下游集群节点时(例如在 vSphere 中),Rancher 会尝试运行 Docker 安装脚本。但是,Rancher Docker 安装脚本在离线环境中会运行失败。要解决此问题,如果 Docker 已预安装到 VM 镜像上,你可以选择在创建节点模板时跳过安装 Docker。为此,你可以在 Rancher UI **引擎选项**下的 `Docker 安装 URL` 下拉列表中选择 **无**。 - -
**引擎选项下拉列表**
- -![引擎选项下拉列表](/img/node-template-engine-options-rke1.png) - -#### 节点池污点 - -如果你没有在节点模板上定义[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),则可以为每个节点池添加污点。将污点添加到节点池的好处是你可以更改节点模板,而不需要先确保污点存在于新模板中。 - -每个污点都将自动添加到节点池中已创建的节点。因此,如果你在已有节点的节点池中添加污点,污点不会应用到已有的节点,但是添加到该节点池中的新节点都将获得该污点。 - -如果污点同时添加到节点模板和节点池中,且添加了相同键的污点效果没有冲突,则所有污点都将添加到节点中。如果存在具有相同键但不同效果的污点,则节点池中的污点将覆盖节点模板中的污点。 - -#### 节点自动替换 - -Rancher 可以自动替换节点池中无法访问的节点。如果节点在指定的时间中处于 Inactive 状态,Rancher 将使用该节点池的节点模板来重新创建节点。 - -:::caution - -自我修复节点池的功能帮助你替换无状态应用的 worker 节点。不建议在 master 节点或连接了持久卷的节点的节点池上启用节点自动替换,因为虚拟机会被临时处理。节点池中的节点与集群断开连接时,其持久卷将被破坏,从而导致有状态应用的数据丢失。 - -::: - -节点自动替换基于 Kubernetes 节点控制器工作。节点控制器定期检查所有节点的状态(可通过 `kube-controller` 的 `--node-monitor-period` 标志配置)。一个节点不可访问时,节点控制器将污染该节点。发生这种情况时,Rancher 将开始其删除倒计时。你可以配置 Rancher 等待删除节点的时间。如果在删除倒计时结束前污点没有被删除,Rancher 将继续删除该节点。Rancher 会根据节点池设置的数量来创建新的节点。 - -#### 启用节点自动替换 - -创建节点池时,你可以指定 Rancher 替换无响应节点的等待时间(以分钟为单位)。 - -1. 在创建或编辑集群的表单中,转到**节点池**。 -1. 转到要启用节点自动替换的节点池。在 **Recreate Unreachable After** 字段中,输入 Rancher 在替换节点之前应该等待节点响应的分钟数。 -1. 填写表单的其余部分以创建或编辑集群。 - -**结果** :已为节点池启用节点自动替换。 - -#### 禁用节点自动替换 - -你可以执行以下步骤从 Rancher UI 禁用节点自动替换: - -1. 点击 **☰ > 集群管理**。 -1. 在**集群**页面上,转到要禁用节点自动替换的集群,然后单击 **⋮ > 编辑配置**。 -1. 在**节点池**部分中,转到要启用节点自动替换的节点池。在 **Recreate Unreachable After** 字段中,输入 0。 -1. 单击**保存**。 - -**结果**:已禁用节点池的节点自动替换。 - -### 云凭证 - -节点模板可以使用云凭证,来存储用于在云提供商中启动节点的凭证,其优点是: - -- 凭证会存储为更安全的 Kubernetes 密文,而且你无需每次都输入凭证便可编辑节点模板。 - -- 创建云凭证后,你可以重新使用该凭证来创建其他节点模板。 - -- 多个节点模板可以使用相同的云凭证来创建节点池。如果你的密钥被泄露或过期,则可以在一个位置更新云凭证,从而一次更新所有使用该凭证的节点模板。 - -创建云凭证后,用户可以[管理创建的云凭证](../../../../reference-guides/user-settings/manage-cloud-credentials.md)。 - -### 主机驱动 - -如果你找不到想要的主机驱动,你可以在 Rancher 的[内置主机驱动](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#激活停用主机驱动)中查看并激活它,也可以[添加自定义主机驱动](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#添加自定义主机驱动)。 +1. 单击 RKE2 集群的名称。
## RKE2 集群 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md index 587d3b1fc3bc..1544188714ab 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md @@ -21,21 +21,6 @@ kubeconfig 文件及其内容特定于各个集群。你可以从 Rancher 的** 如果管理员[关闭了 kubeconfig 令牌生成](../../../../api/api-tokens.md#在生成的-kubeconfig-中禁用令牌),则 kubeconfig 文件要求 [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) 存在于你的 PATH 中。 -## RKE 集群的两种身份验证方法 - -如果集群不是 [RKE 集群](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md),kubeconfig 文件只允许你以一种方式访问​​集群,即通过 Rancher Server 进行身份验证,然后 Rancher 允许你在集群上运行 kubectl 命令。 - -对于 RKE 集群,kubeconfig 文件允许你通过两种方式进行身份验证: - -- **通过 Rancher Server 身份验证代理**:Rancher 的身份验证代理会验证你的身份,然后将你连接到要访问的下游集群。 -- **直接使用下游集群的 API Server**:RKE 集群默认启用授权集群端点。此端点允许你使用 kubectl CLI 和 kubeconfig 文件访问下游 Kubernetes 集群,且 RKE 集群默认启用该端点。在这种情况下,下游集群的 Kubernetes API server 通过调用 Rancher 设置的 webhook(`kube-api-auth` 微服务)对你进行身份验证。 - -第二种方法(即直接连接到集群的 Kubernetes API server)非常重要,因为如果你无法连接到 Rancher,这种方法可以让你访问下游集群。 - -要使用授权集群端点,你需要配置 kubectl,从而使用 Rancher 在创建 RKE 集群时生成的 kubeconfig 文件中的额外 kubectl 上下文。该文件可以从 Rancher UI 的**集群**视图中下载,配置 kubectl 的说明在[此页面](use-kubectl-and-kubeconfig.md#直接使用下游集群进行身份验证)。 - -[架构介绍](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md)也详细解释了这些与下游 Kubernetes 集群通信的方法,并介绍了 Rancher 的工作原理以及 Rancher 如何与下游集群通信的详细信息。 - ## 关于 kube-api-auth 身份验证 Webhook `kube-api-auth` 微服务是为[授权集群端点](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)提供用户认证功能而部署的。当你使用 `kubectl` 访问下游集群时,集群的 Kubernetes API server 会使用 `kube-api-auth` 服务作为 webhook 对你进行身份验证。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md index 4c4f0aaee77b..19bbb8431f31 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md @@ -64,10 +64,6 @@ Rancher v2.5 简化了在 Rancher 管理的集群上安装 Longhorn 的过程。 在将数据存储在 iSCSI 卷上的 [Rancher 启动的 Kubernetes 集群](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可能会遇到 kubelet 无法自动连接 iSCSI 卷的问题。有关解决此问题的详细信息,请参阅[此页面](manage-persistent-storage/install-iscsi-volumes.md)。 -## hostPath 卷 - -在创建 hostPath 卷之前,你需要在集群配置中设置 [extra_bind](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-binds/)。这会将路径作为卷安装在你的 kubelet 中,可用于工作负载中的 hostPath 卷。 - ## 将 vSphere Cloud Provider 从树内迁移到树外 Kubernetes 正在逐渐不在树内维护云提供商。vSphere 有一个树外云提供商,可通过安装 vSphere 云提供商和云存储插件来使用。 diff --git 
a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md index 74855aac6b12..2fece02d331f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md @@ -6,130 +6,10 @@ title: 在云厂商的新节点上启动 Kubernetes -在 Rancher 中使用节点模板来创建 RKE 或 RKE2 集群时,每个生成的节点池都会显示在新的**主机池**选项卡中。你可以通过执行以下操作来查看主机池: +在 Rancher 中使用节点模板来创建 RKE2 集群时,每个生成的节点池都会显示在新的**主机池**选项卡中。你可以通过执行以下操作来查看主机池: 1. 点击**☰ > 集群管理**。 -1. 单击 RKE 或 RKE2 集群的名称。 - -## RKE 集群 - -使用 Rancher,你可以基于[节点模板](use-new-nodes-in-an-infra-provider.md#节点模板)创建节点池。此节点模板定义了要用于在基础设施提供商或云厂商中启动节点的参数。 - -在托管在云厂商的节点池上安装 Kubernetes 的一个好处是,如果一个节点与集群断开连接,Rancher 可以自动创建另一个节点并将其加入集群,从而确保节点池的数量符合要求。 - -可用于创建节点模板的云提供商是由[主机驱动](use-new-nodes-in-an-infra-provider.md#主机驱动)决定的。 - -### 节点模板 - -节点模板保存了用于在特定云提供商中配置节点时要使用的参数。这些节点可以从 UI 启动。Rancher 使用 [Docker Machine](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) 来配置这些节点。可用于创建节点模板的云提供商取决于 Rancher 中状态是 Active 的主机驱动。 - -在 Rancher 中创建节点模板后,模板会被保存,以便你可以再次使用该模板来创建节点池。节点模板绑定到你的登录名。添加模板后,你可以将其从用户配置文件中删除。 - -#### 节点标签 - -你可以为每个节点模板添加[标签](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/),这样,使用节点模板创建的节点都会自动带有这些标签。 - -无效标签会阻止升级,或阻止 Rancher 启动。有关标签语法的详细信息,请参阅 [Kubernetes 文档](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)。 - -#### 节点污点 - -你可以为每个节点模板添加[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),这样,使用节点模板创建的节点都会自动带有这些污点。 - -由于污点可以同时添加到节点模板和节点池中,因此如果添加了相同键的污点效果没有冲突,则所有污点都将添加到节点中。如果存在具有相同键但不同效果的污点,则节点池中的污点将覆盖节点模板中的污点。 - -#### 节点模板的管理员控制 - -管理员可以控制所有节点模板。现在,管理员可以维护 Rancher 中的所有节点模板。当节点模板所有者不再使用 Rancher 时,他们创建的节点模板可以由管理员管理,以便继续更新和维护集群。 - -要访问所有节点模板,管理员需要执行以下操作: - -1. 点击 **☰ > 集群管理**。 -1. 单击 **RKE1 配置 > 节点模板**。 - -**结果**:列出所有节点模板。你可以通过单击 **⋮** 来编辑或克隆模板。 - -### 节点池 - -使用 Rancher,你可以基于[节点模板](#节点模板)创建节点池。 - -节点模板定义了节点的配置,例如要使用的操作系统、CPU 数量和内存量。 - -使用节点池的好处是,如果一个节点被销毁或删除,你可以增加 Active 节点的数量来补偿丢失的节点。节点池可以帮助你确保节点池的计数符合要求。 - -每个节点池必须分配一个或多个节点角色。 - -每个节点角色(即 etcd、controlplane 和 worker)都应分配给不同的节点池。虽然你可以将多个节点角色分配给同一个节点池,但不要在生产集群中执行此操作。 - -推荐的设置: - -- 具有 etcd 角色且计数为 3 的节点池 -- 具有 controlplane 角色且计数至少为 2 的节点池 -- 具有 worker 角色且计数至少为 2 的节点池 - -**离线环境中的 RKE1 下游集群节点**: - -默认情况下,在配置 RKE1 下游集群节点时(例如在 vSphere 中),Rancher 会尝试运行 Docker 安装脚本。但是,Rancher Docker 安装脚本在离线环境中会运行失败。要解决此问题,如果 Docker 已预安装到 VM 镜像上,你可以选择在创建节点模板时跳过安装 Docker。为此,你可以在 Rancher UI **引擎选项**下的 `Docker 安装 URL` 下拉列表中选择 **无**。 - -
**引擎选项下拉列表**
- -![引擎选项下拉列表](/img/node-template-engine-options-rke1.png) - -#### 节点池污点 - -如果你没有在节点模板上定义[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),则可以为每个节点池添加污点。将污点添加到节点池的好处是你可以更改节点模板,而不需要先确保污点存在于新模板中。 - -每个污点都将自动添加到节点池中已创建的节点。因此,如果你在已有节点的节点池中添加污点,污点不会应用到已有的节点,但是添加到该节点池中的新节点都将获得该污点。 - -如果污点同时添加到节点模板和节点池中,且添加了相同键的污点效果没有冲突,则所有污点都将添加到节点中。如果存在具有相同键但不同效果的污点,则节点池中的污点将覆盖节点模板中的污点。 - -#### 节点自动替换 - -Rancher 可以自动替换节点池中无法访问的节点。如果节点在指定的时间中处于 Inactive 状态,Rancher 将使用该节点池的节点模板来重新创建节点。 - -:::caution - -自我修复节点池的功能帮助你替换无状态应用的 worker 节点。不建议在 master 节点或连接了持久卷的节点的节点池上启用节点自动替换,因为虚拟机会被临时处理。节点池中的节点与集群断开连接时,其持久卷将被破坏,从而导致有状态应用的数据丢失。 - -::: - -节点自动替换基于 Kubernetes 节点控制器工作。节点控制器定期检查所有节点的状态(可通过 `kube-controller` 的 `--node-monitor-period` 标志配置)。一个节点不可访问时,节点控制器将污染该节点。发生这种情况时,Rancher 将开始其删除倒计时。你可以配置 Rancher 等待删除节点的时间。如果在删除倒计时结束前污点没有被删除,Rancher 将继续删除该节点。Rancher 会根据节点池设置的数量来创建新的节点。 - -#### 启用节点自动替换 - -创建节点池时,你可以指定 Rancher 替换无响应节点的等待时间(以分钟为单位)。 - -1. 在创建或编辑集群的表单中,转到**节点池**。 -1. 转到要启用节点自动替换的节点池。在 **Recreate Unreachable After** 字段中,输入 Rancher 在替换节点之前应该等待节点响应的分钟数。 -1. 填写表单的其余部分以创建或编辑集群。 - -**结果** :已为节点池启用节点自动替换。 - -#### 禁用节点自动替换 - -你可以执行以下步骤从 Rancher UI 禁用节点自动替换: - -1. 点击 **☰ > 集群管理**。 -1. 在**集群**页面上,转到要禁用节点自动替换的集群,然后单击 **⋮ > 编辑配置**。 -1. 在**节点池**部分中,转到要启用节点自动替换的节点池。在 **Recreate Unreachable After** 字段中,输入 0。 -1. 单击**保存**。 - -**结果**:已禁用节点池的节点自动替换。 - -### 云凭证 - -节点模板可以使用云凭证,来存储用于在云提供商中启动节点的凭证,其优点是: - -- 凭证会存储为更安全的 Kubernetes 密文,而且你无需每次都输入凭证便可编辑节点模板。 - -- 创建云凭证后,你可以重新使用该凭证来创建其他节点模板。 - -- 多个节点模板可以使用相同的云凭证来创建节点池。如果你的密钥被泄露或过期,则可以在一个位置更新云凭证,从而一次更新所有使用该凭证的节点模板。 - -创建云凭证后,用户可以[管理创建的云凭证](../../../../reference-guides/user-settings/manage-cloud-credentials.md)。 - -### 主机驱动 - -如果你找不到想要的主机驱动,你可以在 Rancher 的[内置主机驱动](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#激活停用主机驱动)中查看并激活它,也可以[添加自定义主机驱动](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#添加自定义主机驱动)。 +1. 单击 RKE2 集群的名称。
## RKE2 集群 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md index 587d3b1fc3bc..1544188714ab 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md @@ -21,21 +21,6 @@ kubeconfig 文件及其内容特定于各个集群。你可以从 Rancher 的** 如果管理员[关闭了 kubeconfig 令牌生成](../../../../api/api-tokens.md#在生成的-kubeconfig-中禁用令牌),则 kubeconfig 文件要求 [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) 存在于你的 PATH 中。 -## RKE 集群的两种身份验证方法 - -如果集群不是 [RKE 集群](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md),kubeconfig 文件只允许你以一种方式访问​​集群,即通过 Rancher Server 进行身份验证,然后 Rancher 允许你在集群上运行 kubectl 命令。 - -对于 RKE 集群,kubeconfig 文件允许你通过两种方式进行身份验证: - -- **通过 Rancher Server 身份验证代理**:Rancher 的身份验证代理会验证你的身份,然后将你连接到要访问的下游集群。 -- **直接使用下游集群的 API Server**:RKE 集群默认启用授权集群端点。此端点允许你使用 kubectl CLI 和 kubeconfig 文件访问下游 Kubernetes 集群,且 RKE 集群默认启用该端点。在这种情况下,下游集群的 Kubernetes API server 通过调用 Rancher 设置的 webhook(`kube-api-auth` 微服务)对你进行身份验证。 - -第二种方法(即直接连接到集群的 Kubernetes API server)非常重要,因为如果你无法连接到 Rancher,这种方法可以让你访问下游集群。 - -要使用授权集群端点,你需要配置 kubectl,从而使用 Rancher 在创建 RKE 集群时生成的 kubeconfig 文件中的额外 kubectl 上下文。该文件可以从 Rancher UI 的**集群**视图中下载,配置 kubectl 的说明在[此页面](use-kubectl-and-kubeconfig.md#直接使用下游集群进行身份验证)。 - -[架构介绍](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md)也详细解释了这些与下游 Kubernetes 集群通信的方法,并介绍了 Rancher 的工作原理以及 Rancher 如何与下游集群通信的详细信息。 - ## 关于 kube-api-auth 身份验证 Webhook `kube-api-auth` 微服务是为[授权集群端点](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)提供用户认证功能而部署的。当你使用 `kubectl` 访问下游集群时,集群的 Kubernetes API server 会使用 `kube-api-auth` 服务作为 webhook 对你进行身份验证。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md index 4c4f0aaee77b..19bbb8431f31 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md @@ -64,10 +64,6 @@ Rancher v2.5 简化了在 Rancher 管理的集群上安装 Longhorn 的过程。 在将数据存储在 iSCSI 卷上的 [Rancher 启动的 Kubernetes 集群](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可能会遇到 kubelet 无法自动连接 iSCSI 卷的问题。有关解决此问题的详细信息,请参阅[此页面](manage-persistent-storage/install-iscsi-volumes.md)。 -## hostPath 卷 - -在创建 hostPath 卷之前,你需要在集群配置中设置 [extra_bind](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-binds/)。这会将路径作为卷安装在你的 kubelet 中,可用于工作负载中的 hostPath 卷。 - ## 将 vSphere Cloud Provider 从树内迁移到树外 Kubernetes 正在逐渐不在树内维护云提供商。vSphere 有一个树外云提供商,可通过安装 vSphere 云提供商和云存储插件来使用。 diff --git 
a/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md index 375397eb5135..01aa252eb4ee 100644 --- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md +++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md @@ -27,39 +27,12 @@ The `cattle-cluster-agent` pod does not define the default CPU and memory reques To configure request values through the UI: - - - -1. When you [create](./launch-kubernetes-with-rancher.md) or edit an existing cluster, go to the **Cluster Options** section. -1. Expand the **Cluster Configuration** subsection. -1. Configure your request values using the **CPU Requests** and **Memory Requests** fields as needed. - - - - 1. When you [create](./launch-kubernetes-with-rancher.md) or edit an existing cluster, go to the **Cluster Configuration**. 1. Select the **Cluster Agent** subsection. 1. Configure your request values using the **CPU Reservation** and **Memory Reservation** fields as needed. - - - If you prefer to configure via YAML, add the following snippet to your configuration file: - - - -```yaml -cluster_agent_deployment_customization: - override_resource_requirements: - requests: - cpu: 50m - memory: 100Mi -``` - - - - ```yaml spec: clusterAgentDeploymentCustomization: @@ -69,9 +42,6 @@ spec: memory: 100Mi ``` - - - ### Scheduling rules The `cattle-cluster-agent` uses either a fixed set of tolerations, or dynamically-added tolerations based on taints applied to the control plane nodes. This structure allows [Taint based Evictions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) to work properly for `cattle-cluster-agent`. diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md index 45093ee8232a..bbe848936ede 100644 --- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md +++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md @@ -6,130 +6,10 @@ title: Launching Kubernetes on New Nodes in an Infrastructure Provider -When you create an RKE or RKE2 cluster using a node template in Rancher, each resulting node pool is shown in a new **Machine Pools** tab. You can see the machine pools by doing the following: - -1. Click **☰ > Cluster Management**. -1. Click the name of the RKE or RKE2 cluster. - -## RKE Clusters - -Using Rancher, you can create pools of nodes based on a [node template](#node-templates). This node template defines the parameters you want to use to launch nodes in your infrastructure providers or cloud providers. - -One benefit of installing Kubernetes on node pools hosted by an infrastructure provider is that if a node loses connectivity with the cluster, Rancher can automatically create another node to join the cluster to ensure that the count of the node pool is as expected. 
- -The available cloud providers to create a node template are decided based on active [node drivers](#node-drivers). - -### Node Templates - -A node template is the saved configuration for the parameters to use when provisioning nodes in a specific cloud provider. These nodes can be launched from the UI. Rancher uses [Docker Machine](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) to provision these nodes. The available cloud providers to create node templates are based on the active node drivers in Rancher. - -After you create a node template in Rancher, it's saved so that you can use this template again to create node pools. Node templates are bound to your login. After you add a template, you can remove them from your user profile. - -#### Node Labels - -You can add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) on each node template, so that any nodes created from the node template will automatically have these labels on them. - -Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) - -#### Node Taints - -You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on each node template, so that any nodes created from the node template will automatically have these taints on them. - -Since taints can be added at a node template and node pool, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template. - -#### Administrator Control of Node Templates - -Administrators can control all node templates. Admins can now maintain all the node templates within Rancher. When a node template owner is no longer using Rancher, the node templates created by them can be managed by administrators so the cluster can continue to be updated and maintained. - -To access all node templates, an administrator will need to do the following: +When you create an RKE2 cluster using a node template in Rancher, each resulting node pool is shown in a new **Machine Pools** tab. You can see the machine pools by doing the following: 1. Click **☰ > Cluster Management**. -1. Click **RKE1 Configuration > Node Templates**. - -**Result:** All node templates are listed. The templates can be edited or cloned by clicking the **⋮**. - -### Node Pools - -Using Rancher, you can create pools of nodes based on a [node template](#node-templates). - -A node template defines the configuration of a node, like what operating system to use, number of CPUs, and amount of memory. - -The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected. - -Each node pool must have one or more nodes roles assigned. - -Each node role (i.e. etcd, controlplane, and worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters. 
- -The recommended setup is to have: - -- a node pool with the etcd node role and a count of three -- a node pool with the controlplane node role and a count of at least two -- a node pool with the worker node role and a count of at least two - -**RKE1 downstream cluster nodes in an air-gapped environment:** - -By default, Rancher tries to run the Docker Install script when provisioning RKE1 downstream cluster nodes, such as in vSphere. However, the Rancher Docker installation script would fail in air-gapped environments. To work around this issue, you may choose to skip installing Docker when creating a Node Template where Docker is pre-installed onto a VM image. You can accomplish this by selecting **None** in the dropdown list for `Docker Install URL` under **Engine Options** in the Rancher UI. - -
**Engine Options Dropdown:**
- -![Engine Options Dropdown](/img/node-template-engine-options-rke1.png) - -#### Node Pool Taints - -If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints to a node pool is that you can change the node template without having to first ensure that the taint exists in the new template. - -For each taint, they will automatically be added to any created node in the node pool. Therefore, if you add taints to a node pool that have existing nodes, the taints won't apply to existing nodes in the node pool, but any new node added into the node pool will get the taint. - -When there are taints on the node pool and node template, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template. - -#### About Node Auto-replace - -If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes. - -:::caution - -Self-healing node pools are designed to help you replace worker nodes for stateless applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications. - -::: - -Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool. - -#### Enabling Node Auto-replace - -When you create the node pool, you can specify the amount of time in minutes that Rancher will wait to replace an unresponsive node. - -1. In the form for creating or editing a cluster, go to the **Node Pools** section. -1. Go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter the number of minutes that Rancher should wait for a node to respond before replacing the node. -1. Fill out the rest of the form for creating or editing the cluster. - -**Result:** Node auto-replace is enabled for the node pool. - -#### Disabling Node Auto-replace - -You can disable node auto-replace from the Rancher UI with the following steps: - -1. Click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to disable node auto-replace and click **⋮ > Edit Config**. -1. In the **Node Pools** section, go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter 0. -1. Click **Save**. - -**Result:** Node auto-replace is disabled for the node pool. 
- -### Cloud Credentials - -Node templates can use cloud credentials to store credentials for launching nodes in your cloud provider, which has some benefits: - -- Credentials are stored as a Kubernetes secret, which is not only more secure, but it also allows you to edit a node template without having to enter your credentials every time. - -- After the cloud credential is created, it can be re-used to create additional node templates. - -- Multiple node templates can share the same cloud credential to create node pools. If your key is compromised or expired, the cloud credential can be updated in a single place, which allows all node templates that are using it to be updated at once. - -After cloud credentials are created, the user can start [managing the cloud credentials that they created](../../../../reference-guides/user-settings/manage-cloud-credentials.md). - -### Node Drivers - -If you don't find the node driver that you want to use, you can see if it is available in Rancher's built-in [node drivers and activate it](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#activatingdeactivating-node-drivers), or you can [add your own custom node driver](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#adding-custom-node-drivers). +1. Click the name of the RKE2 cluster. ## RKE2 Clusters @@ -147,8 +27,6 @@ The RKE2 CLI exposes two roles, `server` and `agent`, which represent the Kubern The same functionality of using `etcd`, `controlplane` and `worker` nodes is possible in the RKE2 CLI by using flags and node tainting to control where workloads and the Kubernetes master were scheduled. The reason those roles were not implemented as first-class roles in the RKE2 CLI is that RKE2 is conceptualized as a set of raw building blocks that are best leveraged through an orchestration system such as Rancher. -The implementation of the three node roles in Rancher means that Rancher managed RKE2 clusters are able to easily leverage all of the same architectural best practices that are recommended for RKE clusters. - In our [recommended cluster architecture](../../kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/recommended-cluster-architecture.md), we outline how many nodes of each role clusters should have: - At least three nodes with the role etcd to survive losing one node diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md index 328537bb4fd7..982151628bc8 100644 --- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md +++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md @@ -25,21 +25,6 @@ After you download the kubeconfig file, you are able to use the kubeconfig file If admins have [kubeconfig token generation turned off](../../../../api/api-tokens.md#disable-tokens-in-generated-kubeconfigs), the kubeconfig file requires that the [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) to be present in your PATH. 
-### Two Authentication Methods for RKE Clusters
-
-If the cluster is not an [RKE cluster,](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) the kubeconfig file allows you to access the cluster in only one way: it lets you be authenticated with the Rancher server, then Rancher allows you to run kubectl commands on the cluster.
-
-For RKE clusters, the kubeconfig file allows you to be authenticated in two ways:
-
-- **Through the Rancher server authentication proxy:** Rancher's authentication proxy validates your identity, then connects you to the downstream cluster that you want to access.
-- **Directly with the downstream cluster's API server:** RKE clusters have an authorized cluster endpoint enabled by default. This endpoint allows you to access your downstream Kubernetes cluster with the kubectl CLI and a kubeconfig file, and it is enabled by default for RKE clusters. In this scenario, the downstream cluster's Kubernetes API server authenticates you by calling a webhook (the `kube-api-auth` microservice) that Rancher set up.
-
-This second method, the capability to connect directly to the cluster's Kubernetes API server, is important because it lets you access your downstream cluster if you can't connect to Rancher.
-
-To use the authorized cluster endpoint, you need to configure kubectl to use the extra kubectl context in the kubeconfig file that Rancher generates for you when the RKE cluster is created. This file can be downloaded from the cluster view in the Rancher UI, and the instructions for configuring kubectl are on [this page.](use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster)
-
-These methods of communicating with downstream Kubernetes clusters are also explained in the [architecture page](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md) in the larger context of explaining how Rancher works and how Rancher communicates with downstream clusters.
-
 ### About the kube-api-auth Authentication Webhook
 
 The `kube-api-auth` microservice is deployed to provide the user authentication functionality for the [authorized cluster endpoint](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint). When you access the user cluster using `kubectl`, the cluster's Kubernetes API server authenticates you by using the `kube-api-auth` service as a webhook.
diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md
index 67a8c9dffbd6..3c0803a9f53a 100644
--- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md
+++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md
@@ -64,9 +64,6 @@ In clusters that store data on GlusterFS volumes, you may experience an issue wh
 In [Rancher Launched Kubernetes clusters](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. For details on resolving this issue, refer to [this page.](manage-persistent-storage/install-iscsi-volumes.md)
 
-### hostPath Volumes
-Before you create a hostPath volume, you need to set up an [extra_bind](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-binds/) in your cluster configuration. This will mount the path as a volume in your kubelets, which can then be used for hostPath volumes in your workloads.
-
 ### Migrating VMware vSphere Cloud Provider from In-tree to Out-of-tree
 
 Kubernetes is moving away from maintaining cloud providers in-tree. vSphere has an out-of-tree cloud provider that can be used by installing the vSphere cloud provider and cloud storage plugins.
diff --git a/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md b/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md
index 375397eb5135..01aa252eb4ee 100644
--- a/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md
+++ b/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md
@@ -27,39 +27,12 @@ The `cattle-cluster-agent` pod does not define the default CPU and memory reques
 To configure request values through the UI:
 
-
-
-
-1. When you [create](./launch-kubernetes-with-rancher.md) or edit an existing cluster, go to the **Cluster Options** section.
-1. Expand the **Cluster Configuration** subsection.
-1. Configure your request values using the **CPU Requests** and **Memory Requests** fields as needed.
-
-
-
-
 1. When you [create](./launch-kubernetes-with-rancher.md) or edit an existing cluster, go to the **Cluster Configuration**.
 1. Select the **Cluster Agent** subsection.
 1. Configure your request values using the **CPU Reservation** and **Memory Reservation** fields as needed.
-
-
-
 If you prefer to configure via YAML, add the following snippet to your configuration file:
-
-
-
-```yaml
-cluster_agent_deployment_customization:
-  override_resource_requirements:
-    requests:
-      cpu: 50m
-      memory: 100Mi
-```
-
-
-
-
 ```yaml
 spec:
   clusterAgentDeploymentCustomization:
@@ -69,9 +42,6 @@ spec:
       memory: 100Mi
 ```
 
-
-
-
 ### Scheduling rules
 The `cattle-cluster-agent` uses either a fixed set of tolerations, or dynamically-added tolerations based on taints applied to the control plane nodes. This structure allows [Taint based Evictions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) to work properly for `cattle-cluster-agent`.
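As an illustration of those scheduling rules (not the literal manifest Rancher renders), a toleration mirroring a common control plane taint would look like the following; the real set is derived from whatever taints Rancher finds on your control plane nodes:

```yaml
# Sketch of a toleration of the kind cattle-cluster-agent carries so that
# taint-based eviction does not remove it from tainted control plane nodes.
# Key and effect are illustrative; Rancher derives the actual entries from
# the taints present on the control plane nodes.
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```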
diff --git a/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md b/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md
index 45093ee8232a..bbe848936ede 100644
--- a/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md
+++ b/versioned_docs/version-2.13/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md
@@ -6,130 +6,10 @@ title: Launching Kubernetes on New Nodes in an Infrastructure Provider
 
-When you create an RKE or RKE2 cluster using a node template in Rancher, each resulting node pool is shown in a new **Machine Pools** tab. You can see the machine pools by doing the following:
-
-1. Click **☰ > Cluster Management**.
-1. Click the name of the RKE or RKE2 cluster.
-
-## RKE Clusters
-
-Using Rancher, you can create pools of nodes based on a [node template](#node-templates). This node template defines the parameters you want to use to launch nodes in your infrastructure providers or cloud providers.
-
-One benefit of installing Kubernetes on node pools hosted by an infrastructure provider is that if a node loses connectivity with the cluster, Rancher can automatically create another node to join the cluster to ensure that the count of the node pool is as expected.
-
-The available cloud providers to create a node template are decided based on active [node drivers](#node-drivers).
-
-### Node Templates
-
-A node template is the saved configuration for the parameters to use when provisioning nodes in a specific cloud provider. These nodes can be launched from the UI. Rancher uses [Docker Machine](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) to provision these nodes. The available cloud providers to create node templates are based on the active node drivers in Rancher.
-
-After you create a node template in Rancher, it's saved so that you can use this template again to create node pools. Node templates are bound to your login. After you add a template, you can remove them from your user profile.
-
-#### Node Labels
-
-You can add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) on each node template, so that any nodes created from the node template will automatically have these labels on them.
-
-Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
-
-#### Node Taints
-
-You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on each node template, so that any nodes created from the node template will automatically have these taints on them.
-
-Since taints can be added at a node template and node pool, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template.
-
-#### Administrator Control of Node Templates
-
-Administrators can control all node templates. Admins can now maintain all the node templates within Rancher. When a node template owner is no longer using Rancher, the node templates created by them can be managed by administrators so the cluster can continue to be updated and maintained.
-
-To access all node templates, an administrator will need to do the following:
+When you create an RKE2 cluster using a node template in Rancher, each resulting node pool is shown in a new **Machine Pools** tab. You can see the machine pools by doing the following:
 
 1. Click **☰ > Cluster Management**.
-1. Click **RKE1 Configuration > Node Templates**.
-
-**Result:** All node templates are listed. The templates can be edited or cloned by clicking the **⋮**.
-
-### Node Pools
-
-Using Rancher, you can create pools of nodes based on a [node template](#node-templates).
-
-A node template defines the configuration of a node, like what operating system to use, number of CPUs, and amount of memory.
-
-The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected.
-
-Each node pool must have one or more nodes roles assigned.
-
-Each node role (i.e. etcd, controlplane, and worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters.
-
-The recommended setup is to have:
-
-- a node pool with the etcd node role and a count of three
-- a node pool with the controlplane node role and a count of at least two
-- a node pool with the worker node role and a count of at least two
-
-**RKE1 downstream cluster nodes in an air-gapped environment:**
-
-By default, Rancher tries to run the Docker Install script when provisioning RKE1 downstream cluster nodes, such as in vSphere. However, the Rancher Docker installation script would fail in air-gapped environments. To work around this issue, you may choose to skip installing Docker when creating a Node Template where Docker is pre-installed onto a VM image. You can accomplish this by selecting **None** in the dropdown list for `Docker Install URL` under **Engine Options** in the Rancher UI.
-
-**Engine Options Dropdown:**
-
-![Engine Options Dropdown](/img/node-template-engine-options-rke1.png)
-
-#### Node Pool Taints
-
-If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints to a node pool is that you can change the node template without having to first ensure that the taint exists in the new template.
-
-For each taint, they will automatically be added to any created node in the node pool. Therefore, if you add taints to a node pool that have existing nodes, the taints won't apply to existing nodes in the node pool, but any new node added into the node pool will get the taint.
-
-When there are taints on the node pool and node template, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template.
-
-#### About Node Auto-replace
-
-If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes.
-
-:::caution
-
-Self-healing node pools are designed to help you replace worker nodes for stateless applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications.
-
-:::
-
-Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool.
-
-#### Enabling Node Auto-replace
-
-When you create the node pool, you can specify the amount of time in minutes that Rancher will wait to replace an unresponsive node.
-
-1. In the form for creating or editing a cluster, go to the **Node Pools** section.
-1. Go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter the number of minutes that Rancher should wait for a node to respond before replacing the node.
-1. Fill out the rest of the form for creating or editing the cluster.
-
-**Result:** Node auto-replace is enabled for the node pool.
-
-#### Disabling Node Auto-replace
-
-You can disable node auto-replace from the Rancher UI with the following steps:
-
-1. Click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to disable node auto-replace and click **⋮ > Edit Config**.
-1. In the **Node Pools** section, go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter 0.
-1. Click **Save**.
-
-**Result:** Node auto-replace is disabled for the node pool.
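For completeness, the minutes entered in the UI are persisted on the node pool object itself. A rough sketch, assuming the `management.cattle.io/v3` `NodePool` type and its `deleteNotReadyAfterSecs` field (all names here are hypothetical; verify the schema against your Rancher version):

```yaml
# Hedged sketch of an RKE1 node pool with auto-replace enabled; treat the
# apiVersion/kind/field names as assumptions to verify, and the metadata
# values as placeholders.
apiVersion: management.cattle.io/v3
kind: NodePool
metadata:
  name: np-worker
  namespace: c-abcde
spec:
  clusterName: c-abcde
  hostnamePrefix: worker-
  quantity: 3
  worker: true
  deleteNotReadyAfterSecs: 600   # 10 minutes; 0 disables auto-replace
```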
-
-### Cloud Credentials
-
-Node templates can use cloud credentials to store credentials for launching nodes in your cloud provider, which has some benefits:
-
-- Credentials are stored as a Kubernetes secret, which is not only more secure, but it also allows you to edit a node template without having to enter your credentials every time.
-
-- After the cloud credential is created, it can be re-used to create additional node templates.
-
-- Multiple node templates can share the same cloud credential to create node pools. If your key is compromised or expired, the cloud credential can be updated in a single place, which allows all node templates that are using it to be updated at once.
-
-After cloud credentials are created, the user can start [managing the cloud credentials that they created](../../../../reference-guides/user-settings/manage-cloud-credentials.md).
-
-### Node Drivers
-
-If you don't find the node driver that you want to use, you can see if it is available in Rancher's built-in [node drivers and activate it](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#activatingdeactivating-node-drivers), or you can [add your own custom node driver](../../authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers.md#adding-custom-node-drivers).
+1. Click the name of the RKE2 cluster.
 
 ## RKE2 Clusters
 
@@ -147,8 +27,6 @@ The RKE2 CLI exposes two roles, `server` and `agent`, which represent the Kubern
 The same functionality of using `etcd`, `controlplane` and `worker` nodes is possible in the RKE2 CLI by using flags and node tainting to control where workloads and the Kubernetes master are scheduled. The reason those roles were not implemented as first-class roles in the RKE2 CLI is that RKE2 is conceptualized as a set of raw building blocks that are best leveraged through an orchestration system such as Rancher.
 
-The implementation of the three node roles in Rancher means that Rancher managed RKE2 clusters are able to easily leverage all of the same architectural best practices that are recommended for RKE clusters.
-
 In our [recommended cluster architecture](../../kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/recommended-cluster-architecture.md), we outline how many nodes of each role clusters should have:
 
 - At least three nodes with the role etcd to survive losing one node
diff --git a/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md b/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md
index 328537bb4fd7..982151628bc8 100644
--- a/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md
+++ b/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md
@@ -25,21 +25,6 @@ After you download the kubeconfig file, you are able to use the kubeconfig file
 
 If admins have [kubeconfig token generation turned off](../../../../api/api-tokens.md#disable-tokens-in-generated-kubeconfigs), the kubeconfig file requires the [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) to be present in your PATH.
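Connecting the three roles above to what Rancher actually provisions: each role maps onto a machine pool on the cluster object. A minimal sketch, assuming the `provisioning.cattle.io/v1` `Cluster` type (the cluster name and pool names are placeholders, and each pool would additionally need a `machineConfigRef` pointing at an infrastructure provider config):

```yaml
# Minimal sketch of machine pools following the recommended architecture:
# one pool per role. Names are placeholders; machineConfigRef entries are
# omitted for brevity.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-rke2-cluster
spec:
  rkeConfig:
    machinePools:
    - name: etcd
      quantity: 3          # survives the loss of one node
      etcdRole: true
    - name: control-plane
      quantity: 2
      controlPlaneRole: true
    - name: worker
      quantity: 2
      workerRole: true
```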
-### Two Authentication Methods for RKE Clusters
-
-If the cluster is not an [RKE cluster,](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) the kubeconfig file allows you to access the cluster in only one way: it lets you be authenticated with the Rancher server, then Rancher allows you to run kubectl commands on the cluster.
-
-For RKE clusters, the kubeconfig file allows you to be authenticated in two ways:
-
-- **Through the Rancher server authentication proxy:** Rancher's authentication proxy validates your identity, then connects you to the downstream cluster that you want to access.
-- **Directly with the downstream cluster's API server:** RKE clusters have an authorized cluster endpoint enabled by default. This endpoint allows you to access your downstream Kubernetes cluster with the kubectl CLI and a kubeconfig file, and it is enabled by default for RKE clusters. In this scenario, the downstream cluster's Kubernetes API server authenticates you by calling a webhook (the `kube-api-auth` microservice) that Rancher set up.
-
-This second method, the capability to connect directly to the cluster's Kubernetes API server, is important because it lets you access your downstream cluster if you can't connect to Rancher.
-
-To use the authorized cluster endpoint, you need to configure kubectl to use the extra kubectl context in the kubeconfig file that Rancher generates for you when the RKE cluster is created. This file can be downloaded from the cluster view in the Rancher UI, and the instructions for configuring kubectl are on [this page.](use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster)
-
-These methods of communicating with downstream Kubernetes clusters are also explained in the [architecture page](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md) in the larger context of explaining how Rancher works and how Rancher communicates with downstream clusters.
-
 ### About the kube-api-auth Authentication Webhook
 
 The `kube-api-auth` microservice is deployed to provide the user authentication functionality for the [authorized cluster endpoint](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint). When you access the user cluster using `kubectl`, the cluster's Kubernetes API server authenticates you by using the `kube-api-auth` service as a webhook.
diff --git a/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md b/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md
index 67a8c9dffbd6..3c0803a9f53a 100644
--- a/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md
+++ b/versioned_docs/version-2.13/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md
@@ -64,9 +64,6 @@ In clusters that store data on GlusterFS volumes, you may experience an issue wh
 In [Rancher Launched Kubernetes clusters](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. For details on resolving this issue, refer to [this page.](manage-persistent-storage/install-iscsi-volumes.md)
 
-### hostPath Volumes
-Before you create a hostPath volume, you need to set up an [extra_bind](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-binds/) in your cluster configuration. This will mount the path as a volume in your kubelets, which can then be used for hostPath volumes in your workloads.
-
 ### Migrating VMware vSphere Cloud Provider from In-tree to Out-of-tree
 
 Kubernetes is moving away from maintaining cloud providers in-tree. vSphere has an out-of-tree cloud provider that can be used by installing the vSphere cloud provider and cloud storage plugins.
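In practical terms, the end state of that migration is that new volumes stop being provisioned by the in-tree `kubernetes.io/vsphere-volume` provisioner and are served by the vSphere CSI driver instead. A hedged sketch of a post-migration StorageClass (the class name is a placeholder, and datastore or storage-policy parameters, which vary by environment, are omitted):

```yaml
# Illustrative StorageClass backed by the out-of-tree vSphere CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi          # placeholder name
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```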