Commit 3397eba

Merge pull request #39124 from kubernetes/dev-1.27
Official 1.27 Release Docs
2 parents 8057934 + 2e403eb · commit 3397eba

143 files changed: +18827 additions, -11450 deletions

api-ref-assets/api/swagger.json

Lines changed: 11717 additions & 8245 deletions
Large diffs are not rendered by default.

api-ref-assets/config/fields.yaml

Lines changed: 5 additions & 0 deletions
@@ -99,6 +99,7 @@
       - initContainerStatuses
       - containerStatuses
       - ephemeralContainerStatuses
+      - resize
 
 - definition: io.k8s.api.core.v1.Container
   field_categories:
@@ -127,6 +128,7 @@
     - name: Resources
      fields:
      - resources
+     - resizePolicy
    - name: Lifecycle
      fields:
      - lifecycle
@@ -219,6 +221,9 @@
      fields:
      - volumeMounts
      - volumeDevices
+   - name: Resources
+     fields:
+     - resizePolicy
    - name: Lifecycle
      fields:
      - terminationMessagePath

api-ref-assets/config/toc.yaml

Lines changed: 14 additions & 5 deletions
@@ -66,18 +66,18 @@ parts:
       - name: PriorityClass
         group: scheduling.k8s.io
         version: v1
-      - name: PodScheduling
+      - name: PodSchedulingContext
         group: resource.k8s.io
-        version: v1alpha1
+        version: v1alpha2
       - name: ResourceClaim
         group: resource.k8s.io
-        version: v1alpha1
+        version: v1alpha2
       - name: ResourceClaimTemplate
         group: resource.k8s.io
-        version: v1alpha1
+        version: v1alpha2
       - name: ResourceClass
         group: resource.k8s.io
-        version: v1alpha1
+        version: v1alpha2
   - name: Service Resources
     chapters:
       - name: Service
@@ -148,6 +148,12 @@ parts:
       - name: CertificateSigningRequest
         group: certificates.k8s.io
         version: v1
+      - name: ClusterTrustBundle
+        group: certificates.k8s.io
+        version: v1alpha1
+      - name: SelfSubjectReview
+        group: authentication.k8s.io
+        version: v1beta1
   - name: Authorization Resources
     chapters:
       - name: LocalSubjectAccessReview
@@ -191,6 +197,9 @@ parts:
      - name: PodDisruptionBudget
        group: policy
        version: v1
+     - name: IPAddress
+       group: networking.k8s.io
+       version: v1alpha1
   - name: Extend Resources
     chapters:
       - name: CustomResourceDefinition

content/en/blog/_posts/2022-05-13-grpc-probes-in-beta.md

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@ slug: grpc-probes-now-in-beta
 
 **Author**: Sergey Kanzhelev (Google)
 
+_Update: Since this article was posted, the feature graduated to GA in v1.27 and doesn't require any feature gates to be enabled._
 
 With Kubernetes 1.24 the gRPC probes functionality entered beta and is available by default.
 Now you can configure startup, liveness, and readiness probes for your gRPC app
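
As a reminder of what the now-GA feature configures, here is a minimal sketch of a gRPC liveness probe in the style of the original post; port 2379 (the etcd port used in that post) is a placeholder for whatever port your gRPC service serves health checks on:

```yaml
# Sketch of a gRPC liveness probe; the port is illustrative.
livenessProbe:
  grpc:
    port: 2379
  initialDelaySeconds: 10
```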

content/en/docs/concepts/architecture/nodes.md

Lines changed: 21 additions & 27 deletions
@@ -93,7 +93,15 @@ For self-registration, the kubelet is started with the following options:
   {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
 
   No-op if `register-node` is false.
-- `--node-ip` - IP address of the node.
+- `--node-ip` - Optional comma-separated list of the IP addresses for the node.
+  You can only specify a single address for each address family.
+  For example, in a single-stack IPv4 cluster, you set this value to be the IPv4 address that the
+  kubelet should use for the node.
+  See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)
+  for details of running a dual-stack cluster.
+
+  If you don't provide this argument, the kubelet uses the node's default IPv4 address, if any;
+  if the node has no IPv4 addresses then the kubelet uses the node's default IPv6 address.
 - `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
   in the cluster (see label restrictions enforced by the
   [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
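
For illustration, a minimal sketch of self-registration flags using the new multi-address form of `--node-ip` on a dual-stack node; the IP addresses and the zone label value are placeholders, not recommended values:

```shell
# Sketch: kubelet self-registration on a dual-stack node.
# The IP addresses and label value below are placeholders.
kubelet --register-node=true \
  --node-ip=192.0.2.10,2001:db8::10 \
  --node-labels=topology.kubernetes.io/zone=zone-a
```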
@@ -215,34 +223,20 @@ of the Node resource. For example, the following JSON structure describes a heal
 ]
 ```
 
-If the `status` of the Ready condition remains `Unknown` or `False` for longer
-than the `pod-eviction-timeout` (an argument passed to the
-{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager"
->}}), then the [node controller](#node-controller) triggers
-{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
-for all Pods assigned to that node. The default eviction timeout duration is
-**five minutes**.
-In some cases when the node is unreachable, the API server is unable to communicate
-with the kubelet on the node. The decision to delete the pods cannot be communicated to
-the kubelet until communication with the API server is re-established. In the meantime,
-the pods that are scheduled for deletion may continue to run on the partitioned node.
-
-The node controller does not force delete pods until it is confirmed that they have stopped
-running in the cluster. You can see the pods that might be running on an unreachable node as
-being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
-underlying infrastructure if a node has permanently left a cluster, the cluster administrator
-may need to delete the node object by hand. Deleting the node object from Kubernetes causes
-all the Pod objects running on the node to be deleted from the API server and frees up their
-names.
-
 When problems occur on nodes, the Kubernetes control plane automatically creates
 [taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
-affecting the node.
-The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
-Pods can also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
-them run on a Node even though it has a specific taint.
-
-See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
+affecting the node. An example of this is when the `status` of the Ready condition
+remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,
+which defaults to 40 seconds. This causes either a `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
+or a `node.kubernetes.io/not-ready` taint, for a `False` status, to be added to the Node.
+
+These taints affect pending pods, as the scheduler takes the Node's taints into consideration when
+assigning a pod to a Node. Existing pods scheduled to the node may be evicted due to the application
+of `NoExecute` taints. Pods may also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
+them schedule to and continue running on a Node even though it has a specific taint.
+
+See [Taint Based Evictions](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) and
+[Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
 for more details.
 
 ### Capacity and Allocatable {#capacity}
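
To illustrate the eviction behavior described in the new text, here is a sketch of a Pod that tolerates the `node.kubernetes.io/unreachable` taint for a bounded period before the `NoExecute` effect evicts it; the Pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
  tolerations:
  # Stay bound to an unreachable Node for up to 120 seconds
  # before the NoExecute taint evicts this Pod.
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 120
```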

content/en/docs/concepts/cluster-administration/system-logs.md

Lines changed: 47 additions & 0 deletions
@@ -231,6 +231,53 @@ Similar to the container logs, you should rotate system component logs in the `/
 In Kubernetes clusters created by the `kube-up.sh` script, log rotation is configured by the `logrotate` tool.
 The `logrotate` tool rotates logs daily, or once the log size is greater than 100MB.
 
+## Log query
+
+{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
+
+To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows viewing logs of services
+running on the node. To use the feature, ensure that the `NodeLogQuery`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled for that node, and that the
+kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. On Linux,
+the kubelet assumes that service logs are available via journald. On Windows, it assumes that service logs are
+available in the application log provider. On both operating systems, logs are also available by reading files
+within `/var/log/`.
+
+Provided you are authorized to interact with node objects, you can try out this alpha feature on all your nodes or
+just a subset. Here is an example to retrieve the kubelet service logs from a node:
+```shell
+# Fetch kubelet logs from a node named node-1.example
+kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
+```
+
+You can also fetch files, provided that the files are in a directory that the kubelet allows for log
+fetches. For example, you can fetch a log from `/var/log` on a Linux node:
+```shell
+kubectl get --raw "/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>"
+```
+
+The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is
+writing logs to the operating system's native logger like journald or to a log file in `/var/log/`. The heuristics
+first check the native logger and, if that is not available, attempt to retrieve the first logs from
+`/var/log/<servicename>`, `/var/log/<servicename>.log`, or `/var/log/<servicename>/<servicename>.log`.
+
+The complete list of options that can be used is:
+
+Option | Description
+------ | -----------
+`boot` | show messages from a specific system boot
+`pattern` | filters log entries by the provided PERL-compatible regular expression
+`query` | specifies service(s) or files from which to return logs (required)
+`sinceTime` | an [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) timestamp from which to show logs (inclusive)
+`untilTime` | an [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) timestamp until which to show logs (inclusive)
+`tailLines` | specify how many lines from the end of the log to retrieve; the default is to fetch the whole log
+
+Example of a more complex query:
+```shell
+# Fetch kubelet logs from a node named node-1.example that have the word "error"
+kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"
+```
+
 ## {{% heading "whatsnext" %}}
 
 * Read about the [Kubernetes Logging Architecture](/docs/concepts/cluster-administration/logging/)
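
The options above can also be combined in one query string; for example, a sketch that limits the fetch to the most recent kubelet log lines (the node name is a placeholder):

```shell
# Fetch only the last 100 lines of the kubelet service log
# from a node named node-1.example (placeholder name).
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&tailLines=100"
```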

content/en/docs/concepts/cluster-administration/system-traces.md

Lines changed: 21 additions & 11 deletions
@@ -9,7 +9,7 @@ weight: 90
 
 <!-- overview -->
 
-{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}
 
 System component traces record the latency of and relationships between operations in the cluster.
 
@@ -59,26 +59,24 @@ as the kube-apiserver is often a public endpoint.
 
 #### Enabling tracing in the kube-apiserver
 
-To enable tracing, enable the `APIServerTracing`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-on the kube-apiserver. Also, provide the kube-apiserver with a tracing configuration file
+To enable tracing, provide the kube-apiserver with a tracing configuration file
 with `--tracing-config-file=<path-to-config>`. This is an example config that records
 spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
 
 ```yaml
-apiVersion: apiserver.config.k8s.io/v1alpha1
+apiVersion: apiserver.config.k8s.io/v1beta1
 kind: TracingConfiguration
 # default value
 #endpoint: localhost:4317
 samplingRatePerMillion: 100
 ```
 
 For more information about the `TracingConfiguration` struct, see
-[API server config API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration).
+[API server config API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration).
 
 ### kubelet traces
 
-{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}
 
 The kubelet CRI interface and authenticated http servers are instrumented to generate
 trace spans. As with the apiserver, the endpoint and sampling rate are configurable.
@@ -88,10 +86,7 @@ Enabled without a configured endpoint, the default OpenTelemetry Collector recei
 
 #### Enabling tracing in the kubelet
 
-To enable tracing, enable the `KubeletTracing`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-on the kubelet. Also, provide the kubelet with a
-[tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.25/tracing/api/v1/types.go).
+To enable tracing, apply the [tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.27/tracing/api/v1/types.go).
 This is an example snippet of a kubelet config that records spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
 
 ```yaml
@@ -105,6 +100,21 @@ tracing:
   samplingRatePerMillion: 100
 ```
 
+If `samplingRatePerMillion` is set to one million (`1000000`), then every
+span will be sent to the exporter.
+
+The kubelet in Kubernetes v{{< skew currentVersion >}} collects spans from
+the garbage collection and pod synchronization routines, as well as from every gRPC
+method. Connected container runtimes like CRI-O and containerd can link the
+traces to their exported spans to provide additional context.
+
+Note that exporting spans always comes with a small performance overhead
+on the networking and CPU side, depending on the overall configuration of the
+system. If a cluster running with tracing enabled hits such an issue,
+mitigate it by either reducing the `samplingRatePerMillion` or disabling
+tracing completely by removing the configuration.
+
 ## Stability
 
 Tracing instrumentation is still under active development, and may change
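
Putting the kubelet snippet in context, here is a sketch of a complete `KubeletConfiguration` with tracing enabled, assuming the default OpenTelemetry collector endpoint; the values shown are illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tracing:
  # Default OTLP gRPC endpoint; shown explicitly for illustration.
  endpoint: localhost:4317
  # Sample 1 in 10,000 requests; 1000000 would export every span.
  samplingRatePerMillion: 100
```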

content/en/docs/concepts/containers/images.md

Lines changed: 42 additions & 0 deletions
@@ -157,6 +157,48 @@ that Kubernetes will keep trying to pull the image, with an increasing back-off
 Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
 which is 300 seconds (5 minutes).
 
+## Serial and parallel image pulls
+
+By default, the kubelet pulls images serially. In other words, the kubelet sends only
+one image pull request to the image service at a time. Other image pull requests
+have to wait until the one being processed is complete.
+
+Nodes make image pull decisions in isolation. Even when you use serialized image
+pulls, two different nodes can pull the same image in parallel.
+
+If you would like to enable parallel image pulls, you can set the field
+`serializeImagePulls` to false in the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/).
+With `serializeImagePulls` set to false, image pull requests are sent to the image service immediately,
+and multiple images are pulled at the same time.
+
+When enabling parallel image pulls, make sure the image service of your
+container runtime can handle parallel image pulls.
+
+The kubelet never pulls multiple images in parallel on behalf of one Pod. For example,
+if you have a Pod that has an init container and an application container, the image
+pulls for the two containers will not be parallelized. However, if you have two
+Pods that use different images, the kubelet pulls the images in parallel on
+behalf of the two different Pods, when parallel image pulls are enabled.
+
+### Maximum parallel image pulls
+
+{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
+
+When `serializeImagePulls` is set to false, the kubelet defaults to no limit on the
+maximum number of images being pulled at the same time. If you would like to
+limit the number of parallel image pulls, you can set the field `maxParallelImagePulls`
+in the kubelet configuration. With `maxParallelImagePulls` set to _n_, only _n_ images
+can be pulled at the same time, and any image pull beyond _n_ has to wait
+until at least one ongoing image pull is complete.
+
+Limiting the number of parallel image pulls prevents image pulling from consuming
+too much network bandwidth or disk I/O.
+
+You can set `maxParallelImagePulls` to a positive number that is greater than or
+equal to 1. If you set `maxParallelImagePulls` to be greater than or equal to 2, you
+must set `serializeImagePulls` to false. The kubelet will fail to start with invalid
+`maxParallelImagePulls` settings.
+
 ## Multi-architecture images with image indexes
 
 As well as providing binary images, a container registry can also serve a
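
A sketch of the kubelet configuration fields described above, enabling parallel pulls with a cap; the cap of 5 is an illustrative value, not a recommendation:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow multiple image pulls at once...
serializeImagePulls: false
# ...but keep no more than 5 in flight at the same time (illustrative).
maxParallelImagePulls: 5
```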
