<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>生产级别的容器编排系统 on Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/</link><description>Recent content in 生产级别的容器编排系统 on Kubernetes</description><generator>Hugo</generator><language>zh-cn</language><atom:link href="https://andygol-k8s.netlify.app/zh-cn/feed.xml" rel="self" type="application/rss+xml"/><item><title>APIService</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/api-service-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/api-service-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "apiregistration.k8s.io/v1"
 import: "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
 kind: "APIService"
content_type: "api_reference"
description: "APIService represents a server for a particular GroupVersion."
title: "APIService"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apiregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/kube-aggregator/pkg/apis/apiregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="APIService"&gt;APIService&lt;/h2&gt;
&lt;!--
APIService represents a server for a particular GroupVersion. Name must be "version.group".
--&gt;
&lt;p&gt;APIService 是用来表示一个特定的 GroupVersion 的服务器。名称必须为 &amp;quot;version.group&amp;quot;。&lt;/p&gt;
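&lt;p&gt;作为示意，下面给出一个最小的 APIService 清单草稿（其中的组名、版本和 Service 引用均为假设值，并非来自本页）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  # 名称必须为 “version.group” 格式
  name: v1beta1.example.io
spec:
  group: example.io
  version: v1beta1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    # 指向实现该 GroupVersion 的扩展 API 服务器
    name: example-api
    namespace: default
&lt;/code&gt;&lt;/pre&gt;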
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: apiregistration.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: APIService&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。更多信息： &lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/api-service-v1/#APIServiceSpec"&gt;APIServiceSpec&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Spec contains information for locating and communicating with a server
--&gt;
&lt;p&gt;spec 包含用于定位和与服务器通信的信息&lt;/p&gt;</description></item><item><title>Babylon Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/babylon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/babylon/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A large number of Babylon's products leverage machine learning and artificial intelligence, and in 2019, there wasn't enough computing power in-house to run a particular experiment. The company was also growing (from 100 to 1,600 in three years) and planning expansion into other countries.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Babylon had migrated its user-facing applications to a Kubernetes platform in 2018, so the infrastructure team turned to Kubeflow, a toolkit for machine learning on Kubernetes. "We tried to create a Kubernetes core server, we deployed Kubeflow, and we orchestrated the whole experiment, which ended up being a really good success," says AI Infrastructure Lead Jérémie Vallée. The team began building a self-service AI training platform on top of Kubernetes.&lt;/p&gt;</description></item><item><title>ConfigMap</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "ConfigMap"
content_type: "api_reference"
description: "ConfigMap holds configuration data for pods to consume."
title: "ConfigMap"
weight: 1
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ConfigMap"&gt;ConfigMap&lt;/h2&gt;
&lt;!--
ConfigMap holds configuration data for pods to consume.
--&gt;
&lt;p&gt;ConfigMap 包含供 Pod 使用的配置数据。&lt;/p&gt;
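&lt;p&gt;作为示意，下面是一个最小的 ConfigMap 清单（名称与数据均为假设值）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  # 键必须由字母、数字、“-”、“_” 或 “.” 组成
  log-level: info
  greeting: hello
&lt;/code&gt;&lt;/pre&gt;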
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ConfigMap&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;标准的对象元数据。
更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **binaryData** (map[string][]byte)

 BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;binaryData&lt;/strong&gt; (map[string][]byte)&lt;/p&gt;</description></item><item><title>CustomResourceDefinition</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "apiextensions.k8s.io/v1"
 import: "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
 kind: "CustomResourceDefinition"
content_type: "api_reference"
description: "CustomResourceDefinition represents a resource that should be exposed on the API server."
title: "CustomResourceDefinition"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CustomResourceDefinition"&gt;CustomResourceDefinition&lt;/h2&gt;
&lt;!--
CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format &lt;.spec.name&gt;.&lt;.spec.group&gt;.
--&gt;
&lt;p&gt;CustomResourceDefinition 表示应在 API 服务器上公开的资源。其名称必须采用
&lt;code&gt;&amp;lt;.spec.name&amp;gt;.&amp;lt;.spec.group&amp;gt;&lt;/code&gt; 格式。&lt;/p&gt;
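&lt;p&gt;作为示意，下面给出一个最小的 CustomResourceDefinition 清单草稿（组名与类别名均为假设值）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 名称必须为 “复数名.组名” 格式
  name: crontabs.example.com
spec:
  group: example.com
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
&lt;/code&gt;&lt;/pre&gt;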
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;：apiextensions.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;：CustomResourceDefinition&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据，更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;</description></item><item><title>DeleteOptions</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/delete-options/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/delete-options/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/apis/meta/v1"
 kind: "DeleteOptions"
content_type: "api_reference"
description: "DeleteOptions may be provided when deleting an API object."
title: "DeleteOptions"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
DeleteOptions may be provided when deleting an API object.
--&gt;
&lt;p&gt;删除 API 对象时可以提供 DeleteOptions。&lt;/p&gt;
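&lt;p&gt;作为示意，删除对象的 API 请求体中可以携带如下 DeleteOptions（字段取值为假设值）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;{
  &amp;#34;kind&amp;#34;: &amp;#34;DeleteOptions&amp;#34;,
  &amp;#34;apiVersion&amp;#34;: &amp;#34;v1&amp;#34;,
  &amp;#34;gracePeriodSeconds&amp;#34;: 0,
  &amp;#34;propagationPolicy&amp;#34;: &amp;#34;Foreground&amp;#34;
}
&lt;/code&gt;&lt;/pre&gt;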
&lt;hr&gt;
&lt;!--
- **apiVersion** (string)

 APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt; (string)&lt;/p&gt;</description></item><item><title>FlowSchema</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/flow-schema-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/flow-schema-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "flowcontrol.apiserver.k8s.io/v1"
 import: "k8s.io/api/flowcontrol/v1"
 kind: "FlowSchema"
content_type: "api_reference"
description: "FlowSchema defines the schema of a group of flows."
title: "FlowSchema"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: flowcontrol.apiserver.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/flowcontrol/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="FlowSchema"&gt;FlowSchema&lt;/h2&gt;
&lt;!--
FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher".
--&gt;
&lt;p&gt;FlowSchema 定义一组流的模式。请注意，一个流由属性类似的一组入站 API 请求组成，
用一对字符串进行标识：FlowSchema 的名称和一个 “流区分项”。&lt;/p&gt;</description></item><item><title>kubectl 介绍</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/introduction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/introduction/</guid><description>&lt;!--
title: "Introduction to kubectl"
content_type: concept
weight: 1
--&gt;
&lt;!--
kubectl is the Kubernetes cli version of a swiss army knife, and can do many things.

While this Book is focused on using kubectl to declaratively manage applications in Kubernetes, it
also covers other kubectl functions.
--&gt;
&lt;p&gt;kubectl 是 Kubernetes CLI 版本的瑞士军刀，可以胜任多种多样的任务。&lt;/p&gt;
&lt;p&gt;本文主要介绍如何使用 kubectl 在 Kubernetes 中声明式地管理应用，同时也涵盖其他一些 kubectl 功能。&lt;/p&gt;
&lt;!--
## Command Families

Most kubectl commands typically fall into one of a few categories:
--&gt;
&lt;h2 id="command-families"&gt;命令分类&lt;/h2&gt;
&lt;p&gt;大多数 kubectl 命令通常可以分为以下几类：&lt;/p&gt;</description></item><item><title>LocalSubjectAccessReview</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/local-subject-access-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/local-subject-access-review-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "authorization.k8s.io/v1"
 import: "k8s.io/api/authorization/v1"
 kind: "LocalSubjectAccessReview"
content_type: "api_reference"
description: "LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace."
title: "LocalSubjectAccessReview"
weight: 1
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="LocalSubjectAccessReview"&gt;LocalSubjectAccessReview&lt;/h2&gt;
&lt;!--
LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking.

&lt;hr&gt;

- **apiVersion**: authorization.k8s.io/v1

- **kind**: LocalSubjectAccessReview

- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;LocalSubjectAccessReview 检查用户或组是否可以在给定的命名空间内执行某操作。
划分命名空间范围的资源简化了命名空间范围的策略设置，例如权限检查。&lt;/p&gt;</description></item><item><title>Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Pod"
content_type: "api_reference"
description: "Pod is a collection of containers that can run on a host."
title: "Pod"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Pod"&gt;Pod&lt;/h2&gt;
&lt;!--
Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.
--&gt;
&lt;p&gt;Pod 是可以在主机上运行的容器的集合。此资源由客户端创建并调度到主机上。&lt;/p&gt;
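&lt;p&gt;作为示意，下面是一个最小的 Pod 清单（名称与镜像均为假设值）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;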
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Pod&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Service</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/service-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/service-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Service"
content_type: "api_reference"
description: "Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy."
title: "Service"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Service"&gt;Service&lt;/h2&gt;
&lt;!--
Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.
--&gt;
&lt;p&gt;Service 是软件服务（例如 mysql）的命名抽象，包含代理要侦听的本地端口（例如 3306）和一个选择算符，
选择算符用来确定哪些 Pod 将响应通过代理发送的请求。&lt;/p&gt;</description></item><item><title>ServiceAccount</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/service-account-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/service-account-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "ServiceAccount"
content_type: "api_reference"
description: "ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets."
title: "ServiceAccount"
weight: 1
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ServiceAccount"&gt;ServiceAccount&lt;/h2&gt;
&lt;!--
ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets
--&gt;
&lt;p&gt;ServiceAccount 将以下内容绑定在一起：&lt;/p&gt;</description></item><item><title>Binding</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/binding-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Binding"
content_type: "api_reference"
description: "Binding ties one object to another; for example, a pod is bound to a node by a scheduler."
title: "Binding"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Binding"&gt;Binding&lt;/h2&gt;
&lt;!--
Binding ties one object to another; for example, a pod is bound to a node by a scheduler.
--&gt;
&lt;p&gt;Binding 将一个对象与另一个对象绑定在一起；例如，调度程序将一个 Pod 绑定到一个节点。&lt;/p&gt;
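&lt;p&gt;作为示意，调度器绑定 Pod 时提交的 Binding 对象大致如下（名称均为假设值）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Binding
metadata:
  # 与要绑定的 Pod 同名
  name: example-pod
target:
  apiVersion: v1
  kind: Node
  name: node-1
&lt;/code&gt;&lt;/pre&gt;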
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Binding&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Booz Allen Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/booz-allen/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/booz-allen/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In 2017, Booz Allen Hamilton's Strategic Innovation Group worked with the federal government to relaunch the decade-old recreation.gov website, which provides information and real-time booking for more than 100,000 campsites and facilities on federal lands across the country. The infrastructure needed to be agile, reliable, and scalable—as well as repeatable for the other federal agencies that are among Booz Allen Hamilton's customers.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;"The only way that we thought we could be successful with this problem across all the different agencies is to create a microservice architecture and containers, so that we could be very dynamic and very agile to any given agency for whatever requirements that they may have," says Booz Allen Hamilton Senior Lead Technologist Martin Folkoff. To meet those requirements, Folkoff's team looked to Kubernetes for orchestration.&lt;/p&gt;</description></item><item><title>Bose Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/bose/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/bose/</guid><description>&lt;!--
title: Bose Case Study
linkTitle: Bose
case_study_styles: true
cid: caseStudies
logo: bose_featured_logo.png
featured: false
weight: 2
quote: &gt;
 The CNCF Landscape quickly explains what's going on in all the different areas from storage to cloud providers to automation and so forth. This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles.

new_case_study_styles: true
heading_background: /images/case-studies/bose/banner1.jpg
heading_title_logo: /images/bose_logo.png
subheading: &gt;
 Bose: Supporting Rapid Development for Millions of IoT Products With Kubernetes
case_study_details:
 - Company: Bose Corporation
 - Location: Framingham, Massachusetts
 - Industry: Consumer Electronics
--&gt;

&lt;h2&gt;&lt;!--Challenge--&gt;挑战&lt;/h2&gt;

&lt;p&gt;
&lt;!--
A household name in high-quality audio equipment, &lt;a href="https://www.bose.com/en_us/index.html"&gt;Bose&lt;/a&gt; has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. "We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast," says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: "To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale," says Cloud Architecture Manager Dylan O'Mahony.
--&gt;
作为一家以高品质音频设备闻名的公司，&lt;a href="https://www.bose.com/en_us/index.html"&gt;Bose&lt;/a&gt;
提供联网产品已超过五年；随着需求的增长，基础设施也必须随之演进。
“我们需要为开发者提供一种机制，能够快速完成原型设计，并将服务一路部署到生产环境”，首席云工程师 Josh West 说道。
2016 年，公司决定从头构建一个平台。其主要目标是：“要领先各个产品组一到两步，这样我们就永远不会为追赶他们的规模而手忙脚乱”，
云架构经理 Dylan O'Mahony 说道。
&lt;/p&gt;</description></item><item><title>ComponentStatus</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/component-status-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/component-status-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "ComponentStatus"
content_type: "api_reference"
description: "ComponentStatus (and ComponentStatusList) holds the cluster validation info."
title: "ComponentStatus"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ComponentStatus"&gt;ComponentStatus&lt;/h2&gt;
&lt;!--
ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+
--&gt;
&lt;p&gt;ComponentStatus（和 ComponentStatusList）保存集群验证信息。
已弃用：该 API 在 v1.19 及更高版本中被弃用。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ComponentStatus&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;</description></item><item><title>DeviceClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/device-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/device-class-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "resource.k8s.io/v1"
 import: "k8s.io/api/resource/v1"
 kind: "DeviceClass"
content_type: "api_reference"
description: "DeviceClass is a vendor- or admin-provided resource that contains device configuration and selectors."
title: "DeviceClass"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="DeviceClass"&gt;DeviceClass&lt;/h2&gt;
&lt;!--
DeviceClass is a vendor- or admin-provided resource that contains device configuration and selectors. It can be referenced in the device requests of a claim to apply these presets. Cluster scoped.

This is an alpha type and requires enabling the DynamicResourceAllocation feature gate.
--&gt;
&lt;p&gt;DeviceClass 是由供应商或管理员提供的资源，包含设备配置和选择算符。
它可以在申领的设备请求中被引用，以应用预设值。作用域为集群范围。&lt;/p&gt;</description></item><item><title>Endpoints</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/endpoints-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/endpoints-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Endpoints"
content_type: "api_reference"
description: "Endpoints is a collection of endpoints that implement the actual service."
title: "Endpoints"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Endpoints"&gt;Endpoints&lt;/h2&gt;
&lt;!--
Endpoints is a collection of endpoints that implement the actual service. Example:
--&gt;
&lt;p&gt;Endpoints 是实现实际 Service 的端点的集合。举例：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;Name: &amp;#34;mysvc&amp;#34;,
Subsets: [
 {
 Addresses: [{&amp;#34;ip&amp;#34;: &amp;#34;10.10.1.1&amp;#34;}, {&amp;#34;ip&amp;#34;: &amp;#34;10.10.2.2&amp;#34;}],
 Ports: [{&amp;#34;name&amp;#34;: &amp;#34;a&amp;#34;, &amp;#34;port&amp;#34;: 8675}, {&amp;#34;name&amp;#34;: &amp;#34;b&amp;#34;, &amp;#34;port&amp;#34;: 309}]
 },
 {
 Addresses: [{&amp;#34;ip&amp;#34;: &amp;#34;10.10.3.3&amp;#34;}],
 Ports: [{&amp;#34;name&amp;#34;: &amp;#34;a&amp;#34;, &amp;#34;port&amp;#34;: 93}, {&amp;#34;name&amp;#34;: &amp;#34;b&amp;#34;, &amp;#34;port&amp;#34;: 76}]
 },
]
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;
&lt;!--
Endpoints is a legacy API and does not contain information about all Service features. Use discoveryv1.EndpointSlice for complete information about Service endpoints.

Deprecated: This API is deprecated in v1.33+. Use discoveryv1.EndpointSlice.
--&gt;
&lt;p&gt;Endpoints 是遗留 API，不包含所有 Service 特性的信息。
已弃用：该 API 在 v1.33+ 中被弃用。使用 &lt;code&gt;discoveryv1.EndpointSlice&lt;/code&gt;
获取关于 Service 端点的完整信息。&lt;/p&gt;</description></item><item><title>LabelSelector</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/label-selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/label-selector/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/apis/meta/v1"
 kind: "LabelSelector"
content_type: "api_reference"
description: "A label selector is a label query over a set of resources."
title: "LabelSelector"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!-- 
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
--&gt;
&lt;p&gt;标签选择算符是对一组资源的标签查询。
&lt;code&gt;matchLabels&lt;/code&gt; 和 &lt;code&gt;matchExpressions&lt;/code&gt; 的结果按逻辑与的关系组合。
空的标签选择算符匹配所有对象。为 null 的标签选择算符不匹配任何对象。&lt;/p&gt;</description></item><item><title>LimitRange</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/limit-range-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/limit-range-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "LimitRange"
content_type: "api_reference"
description: "LimitRange sets resource usage limits for each kind of resource in a Namespace."
title: "LimitRange"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="LimitRange"&gt;LimitRange&lt;/h2&gt;
&lt;!--
LimitRange sets resource usage limits for each kind of resource in a Namespace.
--&gt;
&lt;p&gt;LimitRange 设置名字空间中每个资源类别的资源用量限制。&lt;/p&gt;
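&lt;p&gt;作为示意，下面的 LimitRange 为名字空间中的容器设置默认的资源请求与限制（数值均为假设值）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: LimitRange
metadata:
  name: example-limits
spec:
  limits:
    - type: Container
      default:          # 默认的资源限制
        cpu: 500m
        memory: 256Mi
      defaultRequest:   # 默认的资源请求
        cpu: 100m
        memory: 128Mi
&lt;/code&gt;&lt;/pre&gt;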
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: LimitRange&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/limit-range-v1/#LimitRangeSpec"&gt;LimitRangeSpec&lt;/a&gt;)

 Spec defines the limits enforced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Secret</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Secret"
content_type: "api_reference"
description: "Secret holds secret data of a certain type."
title: "Secret"
weight: 2
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Secret"&gt;Secret&lt;/h2&gt;
&lt;!--
Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes.
--&gt;
&lt;p&gt;Secret 包含某些类别的秘密数据。
data 字段中所有取值的总字节数必须小于 MaxSecretSize 字节。&lt;/p&gt;
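&lt;p&gt;作为示意，下面是一个最小的 Secret 清单（名称与取值均为假设值；&lt;code&gt;stringData&lt;/code&gt; 中的值在写入时会被自动进行 base64 编码）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
stringData:
  password: s3cr3t
&lt;/code&gt;&lt;/pre&gt;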
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Secret&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **data** (map[string][]byte)

 Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>SelfSubjectAccessReview</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/self-subject-access-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/self-subject-access-review-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "authorization.k8s.io/v1"
 import: "k8s.io/api/authorization/v1"
 kind: "SelfSubjectAccessReview"
content_type: "api_reference"
description: "SelfSubjectAccessReview checks whether or not the current user can perform an action."
title: "SelfSubjectAccessReview"
weight: 2
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SelfSubjectAccessReview"&gt;SelfSubjectAccessReview&lt;/h2&gt;
&lt;!--
SelfSubjectAccessReview checks whether or not the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action.
--&gt;
&lt;p&gt;SelfSubjectAccessReview 检查当前用户是否可以执行某操作。
不填写 &lt;code&gt;spec.namespace&lt;/code&gt; 表示“在所有命名空间中”。
Self 是一个特殊情况，因为用户应始终能够检查自己是否可以执行某操作。&lt;/p&gt;</description></item><item><title>TokenRequest</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "authentication.k8s.io/v1"
 import: "k8s.io/api/authentication/v1"
 kind: "TokenRequest"
content_type: "api_reference"
description: "TokenRequest requests a token for a given service account."
title: "TokenRequest"
weight: 2
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authentication/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="TokenRequest"&gt;TokenRequest&lt;/h2&gt;
&lt;!--
TokenRequest requests a token for a given service account.
--&gt;
&lt;p&gt;TokenRequest 为给定的服务账号请求一个令牌。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: authentication.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: TokenRequest&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
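TokenRequest 通常通过 ServiceAccount 的 token 子资源提交（例如使用 kubectl create token）。下面是一个最小的 TokenRequest 清单草图（受众与有效期为演示用的假设值）：

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
    - https://kubernetes.default.svc   # 假设的受众，仅作演示
  expirationSeconds: 3600
```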
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)
 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/#TokenRequestSpec"&gt;TokenRequestSpec&lt;/a&gt;), required
 Spec holds information about the request being evaluated
- **status** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/#TokenRequestStatus"&gt;TokenRequestStatus&lt;/a&gt;)

 Status is filled in by the server and indicates whether the token can be authenticated.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Booking.com Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/booking-com/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/booking-com/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In 2016, Booking.com migrated to an OpenShift platform, which gave product developers faster access to infrastructure. But because Kubernetes was abstracted away from the developers, the infrastructure team became a "knowledge bottleneck" when challenges arose. Trying to scale that support wasn't sustainable.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;After a year operating OpenShift, the platform team decided to build its own vanilla Kubernetes platform—and ask developers to learn some Kubernetes in order to use it. "This is not a magical platform," says Ben Tyler, Principal Developer, B Platform Track. "We're not claiming that you can just use it with your eyes closed. Developers need to do some learning, and we're going to do everything we can to make sure they have access to that knowledge."&lt;/p&gt;</description></item><item><title>CSIDriver</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storage.k8s.io/v1"
 import: "k8s.io/api/storage/v1"
 kind: "CSIDriver"
content_type: "api_reference"
description: "CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster."
title: "CSIDriver"
weight: 3
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CSIDriver"&gt;CSIDriver&lt;/h2&gt;
&lt;!--
CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced.
--&gt;
&lt;p&gt;CSIDriver 包含集群上所部署的容器存储接口（CSI）卷驱动的相关信息。
Kubernetes 挂接/解除挂接控制器使用此对象来决定是否需要挂接。
kubelet 使用此对象决定挂载时是否需要传递 Pod 信息。
CSIDriver 对象未划分命名空间。&lt;/p&gt;</description></item><item><title>EndpointSlice</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "discovery.k8s.io/v1"
 import: "k8s.io/api/discovery/v1"
 kind: "EndpointSlice"
content_type: "api_reference"
description: "EndpointSlice represents a set of service endpoints."
title: "EndpointSlice"
weight: 3
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: discovery.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/discovery/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="EndpointSlice"&gt;EndpointSlice&lt;/h2&gt;
&lt;!--
EndpointSlice represents a set of service endpoints. Most EndpointSlices are created by the EndpointSlice controller to represent the Pods selected by Service objects. For a given service there may be multiple EndpointSlice objects which must be joined to produce the full set of endpoints; you can find all of the slices for a given service by listing EndpointSlices in the service's namespace whose `kubernetes.io/service-name` label contains the service's name.
--&gt;
&lt;p&gt;EndpointSlice 表示一组服务端点。大多数 EndpointSlice 由 EndpointSlice
控制器创建，用于表示被 Service 对象选中的 Pod。对于一个给定的服务，可能存在多个
EndpointSlice 对象，这些对象必须被组合在一起以产生完整的端点集合；
你可以通过在服务的命名空间中列出 &lt;code&gt;kubernetes.io/service-name&lt;/code&gt;
标签包含 Service 名称的 EndpointSlices 来找到给定 Service 的所有 slices。&lt;/p&gt;</description></item><item><title>Event</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "events.k8s.io/v1"
 import: "k8s.io/api/events/v1"
 kind: "Event"
content_type: "api_reference"
description: "Event is a report of an event somewhere in the cluster."
title: "Event"
weight: 3
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: events.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/events/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Event"&gt;Event&lt;/h2&gt;
&lt;!--
Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system.
Events have a limited retention time and triggers and messages may evolve with time. 
Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. 
Events should be treated as informative, best-effort, supplemental data.
--&gt;
&lt;p&gt;Event 是集群中某个事件的报告。它一般表示系统的某些状态变化。
Event 的保留时间有限，触发器和消息可能会随着时间的推移而演变。
事件消费者不应依赖带有给定 Reason 的事件的时序来推断其背后存在一致的触发因素，也不应依赖具有该 Reason 的事件会持续存在。
Events 应被视为通知性质的、尽最大努力而提供的补充数据。&lt;/p&gt;</description></item><item><title>ListMeta</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/list-meta/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/list-meta/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/apis/meta/v1"
 kind: "ListMeta"
content_type: "api_reference"
description: "ListMeta describes metadata that synthetic resources must have, including lists and various status objects."
title: "ListMeta"
weight: 3
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.
--&gt;
&lt;p&gt;&lt;code&gt;ListMeta&lt;/code&gt; 描述了合成资源必须具有的元数据，包括列表和各种状态对象。
一个资源仅能有 &lt;code&gt;{ObjectMeta, ListMeta}&lt;/code&gt; 中的一个。&lt;/p&gt;
&lt;hr&gt;
&lt;!--
- **continue** (string)

 continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message.
--&gt;
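上面注释中描述的 continue 令牌分页语义，可以用一个最小的 Python 草图来演示：服务器返回一页数据和一个不透明的令牌，客户端用该令牌取下一页，直到令牌为空。其中 fetch_page 及其数据均为演示用的桩（stub）实现，并非真实的 Kubernetes 客户端 API：

```python
# 演示 continue 令牌分页：用桩实现模拟一个列表端点
ITEMS = ["pod-%d" % i for i in range(7)]

def fetch_page(limit, cont=None):
    """返回 (一页条目, 下一个 continue 令牌或 None)。演示用的假设实现。"""
    start = int(cont) if cont else 0
    page = ITEMS[start:start + limit]
    nxt = None if start + limit >= len(ITEMS) else str(start + limit)
    return page, nxt

collected = []
cont = None
while True:
    page, cont = fetch_page(limit=3, cont=cont)
    collected.extend(page)
    if cont is None:   # 令牌为空表示没有更多数据
        break

assert collected == ITEMS
print(len(collected))  # 7
```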
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;continue&lt;/strong&gt; (string)&lt;/p&gt;</description></item><item><title>MutatingWebhookConfiguration</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: "admissionregistration.k8s.io/v1"
 import: "k8s.io/api/admissionregistration/v1"
 kind: "MutatingWebhookConfiguration"
content_type: "api_reference"
description: "MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects and may change the object."
title: "MutatingWebhookConfiguration"
weight: 3
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="MutatingWebhookConfiguration"&gt;MutatingWebhookConfiguration&lt;/h2&gt;
&lt;!-- 
MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects and may change the object.
--&gt;
&lt;p&gt;MutatingWebhookConfiguration 描述准入 Webhook 的配置，该 Webhook 可接受或拒绝对象请求，并且可能变更对象。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;：admissionregistration.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;：MutatingWebhookConfiguration&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
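下面是一个最小的 MutatingWebhookConfiguration 清单草图（名称、Service 与路径均为演示用的假设值）：

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-webhook        # 假设的名称
webhooks:
  - name: demo.example.com           # 假设的 Webhook 名称
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        namespace: default
        name: demo-webhook-service   # 假设的 Service
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```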
&lt;!-- 
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. 
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>PodTemplate</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-template-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-template-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "PodTemplate"
content_type: "api_reference"
description: "PodTemplate describes a template for creating copies of a predefined pod."
title: "PodTemplate"
weight: 3
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PodTemplate"&gt;PodTemplate&lt;/h2&gt;
&lt;!--
PodTemplate describes a template for creating copies of a predefined pod.
--&gt;
&lt;p&gt;PodTemplate 描述一种模板，用来为预定义的 Pod 生成副本。&lt;/p&gt;
&lt;hr&gt;
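下面是一个最小的 PodTemplate 清单草图（名称与镜像均为演示用的假设值）：

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-pod-template   # 假设的名称
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
      - name: demo
        image: nginx:1.25   # 假设的镜像
```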
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PodTemplate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;template&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec"&gt;PodTemplateSpec&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Template defines the pods that will be created from this pod template. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;p&gt;template 定义将基于此 Pod 模板所创建的 Pod。
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status&lt;/a&gt;&lt;/p&gt;</description></item><item><title>ResourceQuota</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "ResourceQuota"
content_type: "api_reference"
description: "ResourceQuota sets aggregate quota restrictions enforced per namespace."
title: "ResourceQuota"
weight: 3
auto_generated: true 
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceQuota"&gt;ResourceQuota&lt;/h2&gt;
&lt;!-- 
ResourceQuota sets aggregate quota restrictions enforced per namespace 
--&gt;
&lt;p&gt;ResourceQuota 设置每个命名空间强制执行的聚合配额限制。&lt;/p&gt;
&lt;hr&gt;
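下面是一个最小的 ResourceQuota 清单草图（名称与配额数值均为演示用的假设值）：

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota    # 假设的名称
  namespace: default
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```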
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ResourceQuota&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!-- 
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。
更多信息： &lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/#ResourceQuotaSpec"&gt;ResourceQuotaSpec&lt;/a&gt;)&lt;/p&gt;
&lt;!-- 
Spec defines the desired quota. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;p&gt;&lt;code&gt;spec&lt;/code&gt; 定义所需的配额。
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status&lt;/a&gt;&lt;/p&gt;</description></item><item><title>SelfSubjectRulesReview</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/self-subject-rules-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/self-subject-rules-review-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "authorization.k8s.io/v1"
 import: "k8s.io/api/authorization/v1"
 kind: "SelfSubjectRulesReview"
content_type: "api_reference"
description: "SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace."
title: "SelfSubjectRulesReview"
weight: 3
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SelfSubjectRulesReview"&gt;SelfSubjectRulesReview&lt;/h2&gt;
&lt;!--
SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview and LocalSubjectAccessReview are the correct way to defer authorization decisions to the API server.
--&gt;
&lt;p&gt;SelfSubjectRulesReview 枚举当前用户可以在某命名空间内执行的操作集合。
返回的操作列表可能不完整，具体取决于服务器的鉴权模式以及评估过程中遇到的任何错误。
SelfSubjectRulesReview 应由 UI 用于显示/隐藏操作，或让最终用户快速了解自己的权限。
SelfSubjectRulesReview 不得被外部系统使用以驱动鉴权决策，
因为这会引起混淆代理人（Confused deputy）、缓存有效期/吊销（Cache lifetime/revocation）和正确性问题。
SubjectAccessReview 和 LocalSubjectAccessReview 才是将鉴权决策委托给 API 服务器的正确方式。&lt;/p&gt;
api_metadata:
 apiVersion: "authentication.k8s.io/v1"
 import: "k8s.io/api/authentication/v1"
 kind: "TokenReview"
content_type: "api_reference"
description: "TokenReview attempts to authenticate a token to a known user."
title: "TokenReview"
weight: 3
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authentication/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## TokenReview {#TokenReview}
TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver.
--&gt;
&lt;h2 id="TokenReview"&gt;TokenReview&lt;/h2&gt;
&lt;p&gt;TokenReview 尝试通过验证令牌来确认已知用户。
注意：TokenReview 请求可能会被 kube-apiserver 中的 Webhook 令牌验证器插件缓存。&lt;/p&gt;</description></item><item><title>AppDirect Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/appdirect/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/appdirect/</guid><description>&lt;div class="banner1" style="background-image: url('/images/case-studies/appdirect/banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/appdirect_logo.png" class="header_logo" style="margin-bottom:-2%"&gt;&lt;br&gt; &lt;div class="subhead" style="margin-top:1%;font-size:0.5em"&gt;AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetes
&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;AppDirect&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;San Francisco, California
&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Software&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1" style="width:100%""&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 &lt;a href="https://www.appdirect.com/"&gt;AppDirect&lt;/a&gt; provides an end-to-end commerce platform for cloud-based products and services. When Director of Software Development Pierre-Alexandre Lacerte began working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature, then another team picking up the change. So you had bottlenecks in the pipeline to ship a feature to production." At the same time, the engineering team was growing, and the company realized it needed a better infrastructure to both support that growth and increase velocity.
&lt;br&gt;&lt;br&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 "My idea was: Let’s create an environment where teams can deploy their services faster, and they will say, ‘Okay, I don’t want to build in the monolith anymore. I want to build a service,’" says Lacerte. They considered and prototyped several different technologies before deciding to adopt &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; in early 2016. Lacerte’s team has also integrated &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; monitoring into the platform; tracing is next. Today, AppDirect has more than 50 microservices in production and 15 Kubernetes clusters deployed on &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt; and on premise around the world.
&lt;br&gt;&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 The Kubernetes platform has helped support the engineering team’s 10x growth over the past few years. Coupled with the fact that they were continually adding new features, Lacerte says, "I think our velocity would have slowed down a lot if we didn’t have this new infrastructure." Moving to Kubernetes and services has meant that deployments have become much faster due to less dependency on custom-made, brittle shell scripts with SCP commands. Time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn’t require &lt;a href="https://www.atlassian.com/software/jira"&gt;Jira&lt;/a&gt; tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before. The company also achieved cost savings by moving its marketplace and billing monoliths to Kubernetes from legacy EC2 hosts as well as by leveraging autoscaling, as traffic is higher during business hours.
&lt;/div&gt;
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."
&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Alexandre Gervais, Staff Software Developer, AppDirect&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;h2&gt;With its end-to-end commerce platform for cloud-based products and services, &lt;a href="https://www.appdirect.com/"&gt;AppDirect&lt;/a&gt; has been helping organizations such as Comcast and GoDaddy simplify the digital supply chain since 2009. &lt;/h2&gt;
&lt;br&gt;
 When Director of Software Development Pierre-Alexandre Lacerte started working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature then creating a pull request, and a QA or another engineer validating the feature. Then it gets merged and someone else will take care of the deployment. So we had bottlenecks in the pipeline to ship a feature to production." &lt;br&gt;&lt;br&gt;
 At the same time, the engineering team of 40 was growing, and the company wanted to add an increasing number of features to its products. As a member of the platform team, Lacerte began hearing from multiple teams that wanted to deploy applications using different frameworks and languages, from &lt;a href="https://nodejs.org/"&gt;Node.js&lt;/a&gt; to &lt;a href="https://spring.io/projects/spring-boot"&gt;Spring Boot Java&lt;/a&gt;. He soon realized that in order to both support growth and increase velocity, the company needed a better infrastructure, and a system in which teams are autonomous, can do their own deploys, and be responsible for their services in production.

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3" style="background-image: url('/images/case-studies/appdirect/banner3.jpg')"&gt;
 &lt;div class="banner3text"&gt;
 "We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Alexandre Gervais, Staff Software Developer, AppDirect
&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 From the beginning, Lacerte says, "My idea was: Let’s create an environment where teams can deploy their services faster, and they will say, ‘Okay, I don’t want to build in the monolith anymore. I want to build a service.’" (Lacerte left the company in 2019.)&lt;br&gt;&lt;br&gt;
 Working with the operations team, Lacerte’s group got more control and access to the company’s &lt;a href="https://aws.amazon.com/"&gt;AWS infrastructure&lt;/a&gt;, and started prototyping several orchestration technologies. "Back then, Kubernetes was a little underground, unknown," he says. "But we looked at the community, the number of pull requests, the velocity on GitHub, and we saw it was getting traction. And we found that it was much easier for us to manage than the other technologies."
 They spun up the first few services on Kubernetes using &lt;a href="https://www.chef.io/"&gt;Chef&lt;/a&gt; and &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; provisioning, and as more services were added, more automation was, too. "We have clusters around the world—in Korea, in Australia, in Germany, and in the U.S.," says Lacerte. "Automation is critical for us." They’re now largely using &lt;a href="https://github.com/kubernetes/kops"&gt;Kops&lt;/a&gt;, and are looking at managed Kubernetes offerings from several cloud providers.&lt;br&gt;&lt;br&gt;
 Today, though the monolith still exists, there are fewer and fewer commits and features. All teams are deploying on the new infrastructure, and services are the norm. AppDirect now has more than 50 microservices in production and 15 Kubernetes clusters deployed on AWS and on premise around the world.&lt;br&gt;&lt;br&gt;
 Lacerte’s strategy ultimately worked because of the very real impact the Kubernetes platform has had to deployment time. Due to less dependency on custom-made, brittle shell scripts with SCP commands, time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn’t require &lt;a href="https://www.atlassian.com/software/jira"&gt;Jira&lt;/a&gt; tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before.
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4" style="background-image: url('/images/case-studies/appdirect/banner4.jpg');width:100%;"&gt;
 &lt;div class="banner4text"&gt;
 "I think our velocity would have slowed down a lot if we didn’t have this new infrastructure."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Pierre-Alexandre Lacerte, Director of Software Development, AppDirect&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5" style="padding:0px !important"&gt;

&lt;div class="fullcol"&gt;
 Additionally, the Kubernetes platform has helped support the engineering team’s 10x growth over the past few years. "Ownership, a core value of AppDirect, reflects in our ability to ship services independently of our monolith code base," says Staff Software Developer Alexandre Gervais, who worked with Lacerte on the initiative. "Small teams now own critical parts of our business domain model, and they operate in their decoupled domain of expertise, with limited knowledge of the entire codebase. This reduces and isolates some of the complexity." Coupled with the fact that they were continually adding new features, Lacerte says, "I think our velocity would have slowed down a lot if we didn’t have this new infrastructure."
 The company also achieved cost savings by moving its marketplace and billing monoliths to Kubernetes from legacy EC2 hosts as well as by leveraging autoscaling, as traffic is higher during business hours.&lt;br&gt;&lt;br&gt;
 AppDirect’s cloud native stack also includes &lt;a href="https://grpc.io/"&gt;gRPC&lt;/a&gt; and &lt;a href="https://www.fluentd.org/"&gt;Fluentd&lt;/a&gt;, and the team is currently working on setting up &lt;a href="https://opencensus.io/"&gt;OpenCensus&lt;/a&gt;. The platform already has &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; integrated, so "when teams deploy their service, they have their notifications, alerts and configurations," says Lacerte. "For example, in the test environment, I want to get a message on Slack, and in production, I want a &lt;a href="https://slack.com/"&gt;Slack&lt;/a&gt; message and I also want to get paged. We have an integration with PagerDuty. Teams have more ownership on their services."

&lt;/div&gt;

&lt;div class="banner5" &gt;
 &lt;div class="banner5text"&gt;
"We moved from a culture limited to ‘pushing code in a branch’ to exciting new responsibilities outside of the code base: deployment of features and configurations; monitoring of application and business metrics; and on-call support in case of outages. It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Alexandre Gervais, Staff Software Developer, AppDirect&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;div class="fullcol"&gt;
 That of course also means more responsibility. "We asked engineers to expand their horizons," says Gervais. "We moved from a culture limited to ‘pushing code in a branch’ to exciting new responsibilities outside of the code base: deployment of features and configurations; monitoring of application and business metrics; and on-call support in case of outages. It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed." &lt;br&gt;&lt;br&gt;
 As the engineering ranks continue to grow, the platform team has a new challenge: making sure that the Kubernetes platform is accessible and easily utilized by everyone. "How can we make sure that when we add more people to our team, they are efficient, productive, and know how to ramp up on the platform?" Lacerte says. "So we have the evangelists, the documentation, some project examples. We do demos, we have AMA sessions. We’re trying different strategies to get everyone’s attention."&lt;br&gt;&lt;br&gt;
 Three and a half years into their Kubernetes journey, Gervais feels AppDirect "made the right decisions at the right time." "Kubernetes and the cloud native technologies are now seen as the de facto ecosystem," he says. "We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team. Going forward, our focus will really be geared towards benefiting from the ecosystem by providing added business value in our day-to-day operations."


&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>CertificateSigningRequest</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "certificates.k8s.io/v1"
 import: "k8s.io/api/certificates/v1"
 kind: "CertificateSigningRequest"
content_type: "api_reference"
description: "CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued."
title: "CertificateSigningRequest"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/certificates/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## CertificateSigningRequest {#CertificateSigningRequest}
--&gt;
&lt;h2 id="CertificateSigningRequest"&gt;证书签名请求 CertificateSigningRequest&lt;/h2&gt;
&lt;!--
CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued.

Kubelets use this API to obtain:
 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName).
 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName).
--&gt;
&lt;p&gt;CertificateSigningRequest 对象提供了一种机制：通过提交证书签名请求，并等待其被异步批准和颁发，来获得 x509 证书。&lt;/p&gt;</description></item><item><title>CSINode</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-node-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-node-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storage.k8s.io/v1"
 import: "k8s.io/api/storage/v1"
 kind: "CSINode"
content_type: "api_reference"
description: "CSINode holds information about all CSI drivers installed on a node."
title: "CSINode"
weight: 4
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CSINode"&gt;CSINode&lt;/h2&gt;
&lt;!--
CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object.
--&gt;
&lt;p&gt;CSINode 包含节点上安装的所有 CSI 驱动有关的信息。CSI 驱动不需要直接创建 CSINode 对象。
只要这些驱动使用 node-driver-registrar 边车容器，kubelet 就会自动为 CSI 驱动填充 CSINode 对象，
作为 kubelet 插件注册操作的一部分。CSINode 的名称与节点名称相同。
如果不存在此对象，则说明该节点上没有可用的 CSI 驱动或 Kubelet 版本太低无法创建该对象。
CSINode 包含指向相应节点对象的 OwnerReference。&lt;/p&gt;</description></item><item><title>Denso Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/denso/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/denso/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;DENSO Corporation is one of the biggest automotive components suppliers in the world. With the advent of connected cars, the company launched a Digital Innovation Department to expand into software, working on vehicle edge and vehicle cloud products. But there were several technical challenges to creating an integrated vehicle edge/cloud platform: "the amount of computing resources, the occasional lack of mobile signal, and an enormous number of distributed vehicles," says R&amp;D Product Manager Seiichi Koizumi.&lt;/p&gt;</description></item><item><title>Ingress</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "networking.k8s.io/v1"
 import: "k8s.io/api/networking/v1"
 kind: "Ingress"
content_type: "api_reference"
description: "Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend."
title: "Ingress"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## Ingress {#Ingress}

Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.
--&gt;
&lt;h2 id="Ingress"&gt;Ingress&lt;/h2&gt;
&lt;p&gt;Ingress 是允许入站连接到达后端定义的端点的规则集合。
Ingress 可以配置为向服务提供外部可访问的 URL、负载均衡流量、终止 SSL、提供基于名称的虚拟主机等。&lt;/p&gt;</description></item><item><title>IPAddress</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/ip-address-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/ip-address-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "networking.k8s.io/v1"
 import: "k8s.io/api/networking/v1"
 kind: "IPAddress"
content_type: "api_reference"
description: "IPAddress represents a single IP of a single IP Family."
title: "IPAddress"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="IPAddress"&gt;IPAddress&lt;/h2&gt;
&lt;!--
 IPAddress represents a single IP of a single IP Family. The object is designed to be used by APIs that operate on IP addresses. The object is used by the Service core API for allocation of IP addresses. An IP address can be represented in different formats; to guarantee the uniqueness of the IP, the name of the object is the IP address in canonical format: four decimal numbers separated by dots, suppressing leading zeros, for IPv4, and the representation defined by RFC 5952 for IPv6. Valid: 192.168.1.5 or 2001:db8::1 or 2001:db8:aaaa:bbbb:cccc:dddd:eeee:1 Invalid: 10.01.2.3 or 2001:db8:0:0:0::1
--&gt;
&lt;p&gt;IPAddress 表示单个 IP 族的单个 IP。此对象旨在供操作 IP 地址的 API 使用。
此对象由 Service 核心 API 用于分配 IP 地址。
IP 地址可以用不同的格式表示，为了保证 IP 地址的唯一性，此对象的名称是格式规范的 IP 地址。
IPv4 地址由点分隔的四个十进制数字组成，前导零可省略；IPv6 地址按照 RFC 5952 的定义来表示。
有效值：192.168.1.5、2001:db8::1 或 2001:db8:aaaa:bbbb:cccc:dddd:eeee:1。
无效值：10.01.2.3 或 2001:db8:0:0:0::1。&lt;/p&gt;</description></item><item><title>LocalObjectReference</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/local-object-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/local-object-reference/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "LocalObjectReference"
content_type: "api_reference"
description: "LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace."
title: "LocalObjectReference"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
--&gt;
&lt;p&gt;LocalObjectReference 包含足够的信息，可以让你在同一命名空间（namespace）内找到引用的对象。&lt;/p&gt;
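&lt;p&gt;下面是一个使用 LocalObjectReference 的最小清单示例（仅作示意，其中 example-pod、regcred 等名称均为假设值）：Pod 的 &lt;code&gt;imagePullSecrets&lt;/code&gt; 字段就是一个 LocalObjectReference 列表，只需给出同一命名空间内 Secret 的名称。&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # 假设的 Pod 名称
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # 假设的私有镜像
  imagePullSecrets:
    - name: regcred              # LocalObjectReference：仅有 name，指向同一命名空间内的 Secret
```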
&lt;hr&gt;
&lt;!--
- **name** (string)

 Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt; (string)&lt;/p&gt;</description></item><item><title>NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/network-policy-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/network-policy-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "networking.k8s.io/v1"
 import: "k8s.io/api/networking/v1"
 kind: "NetworkPolicy"
content_type: "api_reference"
description: "NetworkPolicy describes what network traffic is allowed for a set of Pods."
title: "NetworkPolicy"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="NetworkPolicy"&gt;NetworkPolicy&lt;/h2&gt;
&lt;!--
NetworkPolicy describes what network traffic is allowed for a set of Pods
--&gt;
&lt;p&gt;NetworkPolicy 描述针对一组 Pod 所允许的网络流量。&lt;/p&gt;
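&lt;p&gt;下面是一个最小的 NetworkPolicy 清单示例（仅作示意，其中的名称与标签均为假设值）：它只允许带有 role: frontend 标签的 Pod 访问带有 app=db 标签的 Pod 的 TCP 6379 端口。&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend           # 假设的名称
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db                    # 此策略作用于带有 app=db 标签的 Pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend     # 仅允许来自 role=frontend Pod 的入站流量
      ports:
        - protocol: TCP
          port: 6379
```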
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: networking.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: NetworkPolicy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/network-policy-v1/#NetworkPolicySpec"&gt;NetworkPolicySpec&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Ocado Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/ocado/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/ocado/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The world's largest online-only grocery retailer, &lt;a href="http://www.ocadogroup.com/"&gt;Ocado&lt;/a&gt; developed the Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other retailers such as &lt;a href="http://fortune.com/2018/05/17/ocado-kroger-warehouse-automation-amazon-walmart/"&gt;Kroger&lt;/a&gt;. To set up the first warehouses for the platform, Ocado shifted from virtual machines and &lt;a href="https://puppet.com/"&gt;Puppet&lt;/a&gt; infrastructure to &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; containers, using CoreOS's &lt;a href="https://github.com/coreos/fleet"&gt;fleet&lt;/a&gt; scheduler to provision all the services on its &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt;-based private cloud on bare metal. As the Smart Platform grew and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."&lt;/p&gt;</description></item><item><title>ReplicaSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "apps/v1"
 import: "k8s.io/api/apps/v1"
 kind: "ReplicaSet"
content_type: "api_reference"
description: "ReplicaSet ensures that a specified number of pod replicas are running at any given time."
title: "ReplicaSet"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ReplicaSet"&gt;ReplicaSet&lt;/h2&gt;
&lt;!--
ReplicaSet ensures that a specified number of pod replicas are running at any given time.
--&gt;
&lt;p&gt;ReplicaSet 确保在任何给定的时刻都在运行指定数量的 Pod 副本。&lt;/p&gt;
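&lt;p&gt;下面是一个最小的 ReplicaSet 清单示例（仅作示意，名称与镜像均为假设值），它维持 3 个 nginx Pod 副本：&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                   # 假设的名称
spec:
  replicas: 3                    # 期望的 Pod 副本数
  selector:
    matchLabels:
      app: web                   # 必须与 template 中的标签匹配
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```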
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: apps/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ReplicaSet&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1/#ReplicaSetSpec"&gt;ReplicaSetSpec&lt;/a&gt;)

 Spec defines the specification of the desired behavior of the ReplicaSet. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>ReplicationController</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "ReplicationController"
content_type: "api_reference"
description: "ReplicationController represents the configuration of a replication controller."
title: "ReplicationController"
weight: 4
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ReplicationController"&gt;ReplicationController&lt;/h2&gt;
&lt;!--
ReplicationController represents the configuration of a replication controller.
--&gt;
&lt;p&gt;ReplicationController 表示一个副本控制器的配置。&lt;/p&gt;
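&lt;p&gt;下面是一个最小的 ReplicationController 清单示例（仅作示意，名称与镜像均为假设值）。注意与 ReplicaSet 不同，ReplicationController 的选择算符是简单的键值对等值匹配：&lt;/p&gt;

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc                   # 假设的名称
spec:
  replicas: 3
  selector:                      # 简单的键值对选择算符（等值匹配）
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```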
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ReplicationController&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1/#ReplicationControllerSpec"&gt;ReplicationControllerSpec&lt;/a&gt;)

 Spec defines the specification of the desired behavior of the replication controller. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>SubjectAccessReview</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "authorization.k8s.io/v1"
 import: "k8s.io/api/authorization/v1"
 kind: "SubjectAccessReview"
content_type: "api_reference"
description: "SubjectAccessReview checks whether or not a user or group can perform an action."
title: "SubjectAccessReview"
weight: 4
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SubjectAccessReview"&gt;SubjectAccessReview&lt;/h2&gt;
&lt;!--
SubjectAccessReview checks whether or not a user or group can perform an action.
--&gt;
&lt;p&gt;SubjectAccessReview 检查用户或组是否可以执行某操作。&lt;/p&gt;
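&lt;p&gt;下面是一个 SubjectAccessReview 请求体的最小示例（仅作示意，用户名、组名与命名空间均为假设值），用于查询用户 jane 是否可以在 dev 命名空间中读取 Deployment；status 由服务器填写：&lt;/p&gt;

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane                     # 假设的用户名
  groups:
    - developers                 # 假设的组名
  resourceAttributes:
    namespace: dev
    verb: get
    group: apps
    resource: deployments
```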
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: authorization.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: SubjectAccessReview&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的列表元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/#SubjectAccessReviewSpec"&gt;SubjectAccessReviewSpec&lt;/a&gt;), required
 Spec holds information about the request being evaluated
- **status** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/#SubjectAccessReviewStatus"&gt;SubjectAccessReviewStatus&lt;/a&gt;)
 Status is filled in by the server and indicates whether the request is allowed or not
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/#SubjectAccessReviewSpec"&gt;SubjectAccessReviewSpec&lt;/a&gt;)，必需&lt;/p&gt;</description></item><item><title>ValidatingWebhookConfiguration</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: "admissionregistration.k8s.io/v1"
 import: "k8s.io/api/admissionregistration/v1"
 kind: "ValidatingWebhookConfiguration"
content_type: "api_reference"
description: "ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it."
title: "ValidatingWebhookConfiguration"
weight: 4
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="validatingWebhookConfiguration"&gt;ValidatingWebhookConfiguration&lt;/h2&gt;
&lt;!-- 
ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it.
--&gt;
&lt;p&gt;ValidatingWebhookConfiguration 描述准入 Webhook 的配置，此 Webhook
可在不更改对象的情况下接受或拒绝对象请求。&lt;/p&gt;
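&lt;p&gt;下面是一个最小的 ValidatingWebhookConfiguration 清单示例（仅作示意，其中 Webhook 名称、Service 名称与路径均为假设值），它在创建 Pod 时调用 example-ns 命名空间中 example-service 上的 Webhook 进行校验：&lt;/p&gt;

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com   # 假设的名称
webhooks:
  - name: pod-policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]   # 仅拦截 Pod 的创建请求
        resources: ["pods"]
    clientConfig:
      service:
        namespace: example-ns    # 假设的命名空间
        name: example-service    # 假设的 Service 名称
        path: /validate
```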
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: admissionregistration.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ValidatingWebhookConfiguration&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. 
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>ClusterRole</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "rbac.authorization.k8s.io/v1"
 import: "k8s.io/api/rbac/v1"
 kind: "ClusterRole"
content_type: "api_reference"
description: "ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding."
title: "ClusterRole"
weight: 5
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!-- 
## ClusterRole {#ClusterRole}
ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.
&lt;hr&gt;
--&gt;
&lt;h2 id="ClusterRole"&gt;ClusterRole&lt;/h2&gt;
&lt;p&gt;ClusterRole 是一个集群级别的 PolicyRule 逻辑分组，
可以被 RoleBinding 或 ClusterRoleBinding 作为一个单元引用。&lt;/p&gt;</description></item><item><title>ClusterTrustBundle v1beta1</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/cluster-trust-bundle-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/cluster-trust-bundle-v1beta1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "certificates.k8s.io/v1beta1"
 import: "k8s.io/api/certificates/v1beta1"
 kind: "ClusterTrustBundle"
content_type: "api_reference"
description: "ClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates)."
title: "ClusterTrustBundle v1beta1"
weight: 5
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/certificates/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ClusterTrustBundle"&gt;ClusterTrustBundle&lt;/h2&gt;
&lt;!--
ClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates).

ClusterTrustBundle objects are considered to be readable by any authenticated user in the cluster, because they can be mounted by pods using the `clusterTrustBundle` projection. All service accounts have read access to ClusterTrustBundles by default. Users who only have namespace-level access to a cluster can read ClusterTrustBundles by impersonating a serviceaccount that they have access to.
--&gt;
&lt;p&gt;ClusterTrustBundle 是一个集群范围的容器，用于存放 X.509 信任锚（根证书）。&lt;/p&gt;</description></item><item><title>CSIStorageCapacity</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storage.k8s.io/v1"
 import: "k8s.io/api/storage/v1"
 kind: "CSIStorageCapacity"
content_type: "api_reference"
description: "CSIStorageCapacity stores the result of one CSI GetCapacity call."
title: "CSIStorageCapacity"
weight: 5
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CSIStorageCapacity"&gt;CSIStorageCapacity&lt;/h2&gt;
&lt;!--
CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes.

For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123"

The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero
--&gt;
&lt;p&gt;CSIStorageCapacity 存储一个 CSI GetCapacity 调用的结果。
对于给定的 StorageClass，此结构描述了特定拓扑段中可用的容量。
当考虑在哪里实例化新的 PersistentVolume 时可以使用此项。&lt;/p&gt;</description></item><item><title>IngressClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-class-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "networking.k8s.io/v1"
 import: "k8s.io/api/networking/v1"
 kind: "IngressClass"
content_type: "api_reference"
description: "IngressClass represents the class of the Ingress, referenced by the Ingress Spec."
title: "IngressClass"
weight: 5
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## IngressClass {#IngressClass}

IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class.
--&gt;
&lt;h2 id="IngressClass"&gt;IngressClass&lt;/h2&gt;
&lt;p&gt;IngressClass 代表 Ingress 的类，被 Ingress 的规约引用。
&lt;code&gt;ingressclass.kubernetes.io/is-default-class&lt;/code&gt;
注解可以用来标明一个 IngressClass 应该被视为默认的 Ingress 类。
当某个 IngressClass 资源将此注解设置为 true 时，
没有指定类的新 Ingress 资源将被分配到此默认类。&lt;/p&gt;</description></item><item><title>Lease</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "coordination.k8s.io/v1"
 import: "k8s.io/api/coordination/v1"
 kind: "Lease"
content_type: "api_reference"
description: "Lease defines a lease concept."
title: "Lease"
weight: 5
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: coordination.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/coordination/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Lease"&gt;Lease&lt;/h2&gt;
&lt;!--
Lease defines a lease concept.
--&gt;
&lt;p&gt;Lease 定义了租约的概念。&lt;/p&gt;
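&lt;p&gt;下面是一个最小的 Lease 清单示例（仅作示意，名称与持有者标识均为假设值），形式上类似于 kubelet 在 kube-node-lease 命名空间中维护的节点心跳租约：&lt;/p&gt;

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1                   # 假设的名称
  namespace: kube-node-lease
spec:
  holderIdentity: node-1         # 假设的持有者标识
  leaseDurationSeconds: 40       # 候选者需等待此时长才能强行获取租约
  renewTime: "2024-01-01T00:00:00.000000Z"   # MicroTime 格式的最近续约时间
```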
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: coordination.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Lease&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-v1/#LeaseSpec"&gt;LeaseSpec&lt;/a&gt;)

 spec contains the specification of the Lease. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-v1/#LeaseSpec"&gt;LeaseSpec&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;spec 包含 Lease 的规约。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status&lt;/a&gt;&lt;/p&gt;</description></item><item><title>NodeSelectorRequirement</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/</guid><description>&lt;!---
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "NodeSelectorRequirement"
content_type: "api_reference"
description: "A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values."
title: "NodeSelectorRequirement"
weight: 5
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
--&gt;
&lt;p&gt;节点选择算符需求是一个选择算符，其中包含值集、键以及一个将键和值集关联起来的操作符。&lt;/p&gt;
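&lt;p&gt;下列各操作符的匹配语义可以用一段 Python 代码来示意（这只是帮助理解的最小草图，函数名 matches 为本文假设的命名，并非 Kubernetes 源码）：&lt;/p&gt;

```python
def matches(req, labels):
    """判断节点的标签集合 labels 是否满足单个 NodeSelectorRequirement。
    req 是形如 {"key": ..., "operator": ..., "values": [...]} 的字典。
    注意：这只是示意实现，真实语义以 Kubernetes 官方实现为准。"""
    key = req["key"]
    op = req["operator"]
    values = req.get("values", [])
    value = labels.get(key)
    if op == "In":
        return value in values
    if op == "NotIn":
        return value not in values
    if op == "Exists":
        return key in labels
    if op == "DoesNotExist":
        return key not in labels
    if op == "Gt":
        # 数值比较：标签值须大于 values[0]
        return value is not None and int(value) > int(values[0])
    if op == "Lt":
        # 数值比较：标签值须小于 values[0]
        return value is not None and int(values[0]) > int(value)
    raise ValueError("未知操作符: " + op)
```

&lt;p&gt;例如，&lt;code&gt;matches({"key": "cores", "operator": "Gt", "values": ["4"]}, {"cores": "8"})&lt;/code&gt; 为真。&lt;/p&gt;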
&lt;hr&gt;
&lt;!--
- **key** (string), required

 The label key that the selector applies to.

- **operator** (string), required

 Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;key&lt;/strong&gt; (string)，必需&lt;/p&gt;</description></item><item><title>PodDisruptionBudget</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "policy/v1"
 import: "k8s.io/api/policy/v1"
 kind: "PodDisruptionBudget"
content_type: "api_reference"
description: "PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods."
title: "PodDisruptionBudget"
weight: 5
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: policy/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/policy/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PodDisruptionBudget"&gt;PodDisruptionBudget&lt;/h2&gt;
&lt;!--
PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods
--&gt;
&lt;p&gt;PodDisruptionBudget 是一个对象，用于定义可能对一组 Pod 造成的最大干扰。&lt;/p&gt;
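&lt;p&gt;“最大干扰”的核算方式可以用一个极简的 Python 草图来说明（假设仅考虑整数形式的 minAvailable；函数名为本文虚构，真实控制器还需处理 maxUnavailable 与百分比等情形）：&lt;/p&gt;

```python
def disruptions_allowed(healthy_pods, min_available):
    """按 minAvailable 语义估算当前还允许主动驱逐多少个 Pod。
    healthy_pods 为当前健康的 Pod 数，min_available 为 PDB 要求的最少可用数。
    这只是示意草图，真实实现见 Kubernetes 的 disruption 控制器。"""
    return max(0, healthy_pods - min_available)
```

&lt;p&gt;当健康 Pod 数不高于 minAvailable 时，允许的干扰数为 0，驱逐请求会被拒绝。&lt;/p&gt;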
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: policy/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PodDisruptionBudget&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>词汇表</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/</guid><description>&lt;!--
approvers:
- chenopis
- abiogenesis-now
title: Glossary
layout: glossary
noedit: true
body_class: glossary
default_active_tag: fundamental
weight: 5
card:
 name: reference
 weight: 10
 title: Glossary
--&gt;</description></item><item><title>构建一个基本的 DaemonSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/create-daemon-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/create-daemon-set/</guid><description>&lt;!--
title: Building a Basic DaemonSet 
content_type: task 
weight: 5
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page demonstrates how to build a basic &lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;
that runs a Pod on every node in a Kubernetes cluster.
It covers a simple use case of mounting a file from the host, logging its contents using
an [init container](/docs/concepts/workloads/pods/init-containers/), and utilizing a pause container.
--&gt;
&lt;p&gt;本页演示如何构建一个基本的 &lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;，
用其在 Kubernetes 集群中的每个节点上运行 Pod。
这个简单的使用场景包含了从主机挂载一个文件，使用
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/init-containers/"&gt;Init 容器&lt;/a&gt;记录文件的内容，
以及使用 &lt;code&gt;pause&lt;/code&gt; 容器。&lt;/p&gt;</description></item><item><title>你好，Minikube</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/hello-minikube/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/hello-minikube/</guid><description>&lt;!--
title: Hello Minikube
content_type: tutorial
weight: 5
card:
 name: tutorials
 weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial shows you how to run a sample app on Kubernetes using minikube.
The tutorial provides a container image that uses NGINX to echo back all the requests.
--&gt;
&lt;p&gt;本教程向你展示如何使用 Minikube 在 Kubernetes 上运行一个示例应用。
本教程提供了一个容器镜像，使用 NGINX 回显所有请求。&lt;/p&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Deploy a sample application to minikube.
* Run the app.
* View application logs.
--&gt;
&lt;ul&gt;
&lt;li&gt;将一个示例应用部署到 Minikube。&lt;/li&gt;
&lt;li&gt;运行应用程序。&lt;/li&gt;
&lt;li&gt;查看应用日志。&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
This tutorial assumes that you have already set up `minikube`.
See __Step 1__ in [minikube start](https://minikube.sigs.k8s.io/docs/start/) for installation instructions.
--&gt;
&lt;p&gt;本教程假设你已经安装了 &lt;code&gt;minikube&lt;/code&gt;。
有关安装说明，请参阅 &lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;minikube start&lt;/a&gt; 的&lt;strong&gt;步骤 1&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>ClusterRoleBinding</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/cluster-role-binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/cluster-role-binding-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "rbac.authorization.k8s.io/v1"
 import: "k8s.io/api/rbac/v1"
 kind: "ClusterRoleBinding"
content_type: "api_reference"
description: "ClusterRoleBinding references a ClusterRole, but does not contain it."
title: "ClusterRoleBinding"
weight: 6
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## ClusterRoleBinding {#ClusterRoleBinding}
ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject.
--&gt;
&lt;h2 id="ClusterRoleBinding"&gt;ClusterRoleBinding&lt;/h2&gt;
&lt;p&gt;ClusterRoleBinding 引用 ClusterRole，但不包含它。
它可以引用全局命名空间中的 ClusterRole，并通过 Subject 添加主体信息。&lt;/p&gt;
&lt;!-- 
&lt;hr&gt;
- **apiVersion**: rbac.authorization.k8s.io/v1
- **kind**: ClusterRoleBinding
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)
 Standard object's metadata.
--&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: rbac.authorization.k8s.io/v1&lt;/p&gt;</description></item><item><title>Deployment</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "apps/v1"
 import: "k8s.io/api/apps/v1"
 kind: "Deployment"
content_type: "api_reference"
description: "Deployment enables declarative updates for Pods and ReplicaSets."
title: "Deployment"
weight: 6
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Deployment"&gt;Deployment&lt;/h2&gt;
&lt;!--
Deployment enables declarative updates for Pods and ReplicaSets.
--&gt;
&lt;p&gt;Deployment 使得 Pod 和 ReplicaSet 能够进行声明式更新。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: apps/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Deployment&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec"&gt;DeploymentSpec&lt;/a&gt;)

 Specification of the desired behavior of the Deployment.

- **status** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentStatus"&gt;DeploymentStatus&lt;/a&gt;)

 Most recently observed status of the Deployment.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>LeaseCandidate v1beta1</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-candidate-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-candidate-v1beta1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "coordination.k8s.io/v1beta1"
 import: "k8s.io/api/coordination/v1beta1"
 kind: "LeaseCandidate"
content_type: "api_reference"
description: "LeaseCandidate defines a candidate for a Lease object."
title: "LeaseCandidate v1beta1"
weight: 6
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: coordination.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/coordination/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="LeaseCandidate"&gt;LeaseCandidate&lt;/h2&gt;
&lt;!--
LeaseCandidate defines a candidate for a Lease object. Candidates are created such that coordinated leader election will pick the best leader from the list of candidates.
--&gt;
&lt;p&gt;LeaseCandidate 定义一个 Lease 对象的候选者。
通过创建候选者，协同式领导者选举能够从候选者列表中选出最佳的领导者。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: coordination.k8s.io/v1beta1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: LeaseCandidate&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-candidate-v1beta1/#LeaseCandidateSpec"&gt;LeaseCandidateSpec&lt;/a&gt;)

 spec contains the specification of the LeaseCandidate. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>ObjectFieldSelector</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-field-selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-field-selector/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "ObjectFieldSelector"
content_type: "api_reference"
description: "ObjectFieldSelector selects an APIVersioned field of an object."
title: "ObjectFieldSelector"
weight: 6
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
ObjectFieldSelector selects an APIVersioned field of an object.
--&gt;
&lt;p&gt;ObjectFieldSelector 选择对象的 APIVersioned 字段。&lt;/p&gt;
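&lt;p&gt;“按字段路径选取对象字段”的含义可以用一小段 Python 示意（select_field 为本文假设的函数名；真实的 Downward API 还支持 metadata.labels['key'] 这类下标语法）：&lt;/p&gt;

```python
def select_field(obj, field_path):
    """沿点号分隔的 field_path（如 "metadata.name"）在嵌套字典中逐级取值。
    仅为示意实现，帮助理解 fieldPath 字段的作用。"""
    current = obj
    for part in field_path.split("."):
        current = current[part]
    return current
```

&lt;p&gt;例如，对一个 Pod 对象使用 &lt;code&gt;metadata.name&lt;/code&gt; 路径即可取出 Pod 名称。&lt;/p&gt;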
&lt;hr&gt;
&lt;!--
- **fieldPath** (string), required

 Path of the field to select in the specified API version.

- **apiVersion** (string)

 Version of the schema the FieldPath is written in terms of, defaults to "v1".
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fieldPath&lt;/strong&gt; (string)，必需&lt;/p&gt;</description></item><item><title>PersistentVolumeClaim</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "PersistentVolumeClaim"
content_type: "api_reference"
description: "PersistentVolumeClaim is a user's request for and claim to a persistent volume."
title: "PersistentVolumeClaim"
weight: 6
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PersistentVolumeClaim"&gt;PersistentVolumeClaim&lt;/h2&gt;
&lt;!--
PersistentVolumeClaim is a user's request for and claim to a persistent volume
--&gt;
&lt;p&gt;PersistentVolumeClaim 是用户针对一个持久卷的请求和申领。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PersistentVolumeClaim&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/#PersistentVolumeClaimSpec"&gt;PersistentVolumeClaimSpec&lt;/a&gt;)

 spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>PriorityLevelConfiguration v1</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/priority-level-configuration-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/priority-level-configuration-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "flowcontrol.apiserver.k8s.io/v1"
 import: "k8s.io/api/flowcontrol/v1"
 kind: "PriorityLevelConfiguration"
content_type: "api_reference"
description: "PriorityLevelConfiguration represents the configuration of a priority level."
title: "PriorityLevelConfiguration"
weight: 6
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: flowcontrol.apiserver.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/flowcontrol/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PriorityLevelConfiguration"&gt;PriorityLevelConfiguration&lt;/h2&gt;
&lt;!--
PriorityLevelConfiguration represents the configuration of a priority level.
--&gt;
&lt;p&gt;PriorityLevelConfiguration 表示一个优先级的配置。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: flowcontrol.apiserver.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PriorityLevelConfiguration&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 `metadata` is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/priority-level-configuration-v1/#PriorityLevelConfigurationSpec"&gt;PriorityLevelConfigurationSpec&lt;/a&gt;)

 `spec` is the specification of the desired behavior of a "request-priority". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>SelfSubjectReview</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/self-subject-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authentication-resources/self-subject-review-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "authentication.k8s.io/v1"
 import: "k8s.io/api/authentication/v1"
 kind: "SelfSubjectReview"
content_type: "api_reference"
description: "SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request."
title: "SelfSubjectReview"
weight: 6
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authentication/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SelfSubjectReview"&gt;SelfSubjectReview&lt;/h2&gt;
&lt;!--
SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated. If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase.
--&gt;
&lt;p&gt;SelfSubjectReview 包含 kube-apiserver 所拥有的与发出此请求的用户有关的用户信息。
使用伪装时，用户将收到被伪装用户的用户信息。
如果使用伪装或请求头部进行身份验证，则所有额外的键都将忽略大小写，并以小写形式返回。&lt;/p&gt;</description></item><item><title>Namespace</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/namespace-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/namespace-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Namespace"
content_type: "api_reference"
description: "Namespace provides a scope for Names."
title: "Namespace"
weight: 7
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Namespace"&gt;Namespace&lt;/h2&gt;
&lt;!--
Namespace provides a scope for Names. Use of multiple namespaces is optional.
--&gt;
&lt;p&gt;Namespace 为名字提供作用域。使用多个命名空间是可选的。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Namespace&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/namespace-v1/#NamespaceSpec"&gt;NamespaceSpec&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Spec defines the behavior of the Namespace. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;p&gt;&lt;code&gt;spec&lt;/code&gt; 定义了 Namespace 的行为。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status&lt;/a&gt;&lt;/p&gt;</description></item><item><title>ObjectMeta</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/apis/meta/v1"
 kind: "ObjectMeta"
content_type: "api_reference"
description: "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."
title: "ObjectMeta"
weight: 7
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!-- 
ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
--&gt;
&lt;p&gt;ObjectMeta 是所有持久化资源必须具有的元数据，其中包括用户必须创建的所有对象。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt; (string)&lt;/p&gt;
&lt;!-- 
Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
--&gt;
&lt;p&gt;name 在命名空间内必须是唯一的。创建资源时此字段是必需的，尽管某些资源可能允许客户端请求自动生成适当的名称。
名称主要用于保证创建操作的幂等性和配置定义。此字段不可更新。更多信息：
&lt;a href="https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names#names"&gt;https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names#names&lt;/a&gt;&lt;/p&gt;</description></item><item><title>PersistentVolume</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "PersistentVolume"
content_type: "api_reference"
description: "PersistentVolume (PV) is a storage resource provisioned by an administrator."
title: "PersistentVolume"
weight: 7
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PersistentVolume"&gt;PersistentVolume&lt;/h2&gt;
&lt;!--
PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes
--&gt;
&lt;p&gt;PersistentVolume (PV) 是管理员制备的一个存储资源。它类似于一个节点。更多信息：
&lt;a href="https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes"&gt;https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PersistentVolume&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec"&gt;PersistentVolumeSpec&lt;/a&gt;)

 spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Role</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/role-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/role-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "rbac.authorization.k8s.io/v1"
 import: "k8s.io/api/rbac/v1"
 kind: "Role"
content_type: "api_reference"
description: "Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding."
title: "Role"
weight: 7
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Role"&gt;Role&lt;/h2&gt;
&lt;!--
Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.
--&gt;
&lt;p&gt;Role 是一个按命名空间划分的 PolicyRule 逻辑分组，可以被 RoleBinding 作为一个单元引用。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: rbac.authorization.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Role&lt;/p&gt;</description></item><item><title>StatefulSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: "apps/v1"
 import: "k8s.io/api/apps/v1"
 kind: "StatefulSet"
content_type: "api_reference"
description: "StatefulSet represents a set of pods with consistent identities."
title: "StatefulSet"
weight: 7
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="StatefulSet"&gt;StatefulSet&lt;/h2&gt;
&lt;!-- 
StatefulSet represents a set of pods with consistent identities. Identities are defined as:
 - Network: A single stable DNS and hostname.
 - Storage: As many VolumeClaims as requested.

The StatefulSet guarantees that a given network identity will always map to the same storage identity. 
--&gt;
&lt;p&gt;StatefulSet 表示一组具有一致身份的 Pod。身份定义为：&lt;/p&gt;</description></item><item><title>ValidatingAdmissionPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "admissionregistration.k8s.io/v1"
 import: "k8s.io/api/admissionregistration/v1"
 kind: "ValidatingAdmissionPolicy"
content_type: "api_reference"
description: "ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it."
title: "ValidatingAdmissionPolicy"
weight: 7
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ValidatingAdmissionPolicy"&gt;ValidatingAdmissionPolicy&lt;/h2&gt;
&lt;!--
ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.
--&gt;
&lt;p&gt;ValidatingAdmissionPolicy 描述了一种准入验证策略的定义，
这种策略用于接受或拒绝一个对象，而不对其进行修改。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: admissionregistration.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ValidatingAdmissionPolicy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>ControllerRevision</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/controller-revision-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/controller-revision-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "apps/v1"
 import: "k8s.io/api/apps/v1"
 kind: "ControllerRevision"
content_type: "api_reference"
description: "ControllerRevision implements an immutable snapshot of state data."
title: "ControllerRevision"
weight: 8
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!-- 
## ControllerRevision {#ControllerRevision}
--&gt;
&lt;h2 id="ControllerRevision"&gt;ControllerRevision&lt;/h2&gt;
&lt;!--
ControllerRevision implements an immutable snapshot of state data.
Clients are responsible for serializing and deserializing the objects that contain their internal state.
Once a ControllerRevision has been successfully created, it can not be updated. 
The API Server will fail validation of all requests that attempt to mutate the Data field.
ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback,
this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability.
It is primarily for internal use by controllers.
--&gt;
&lt;p&gt;ControllerRevision 实现了状态数据的不可变快照。
客户端负责对包含其内部状态的对象进行序列化和反序列化。
成功创建 ControllerRevision 后，将无法对其进行更新。
API 服务器将使所有试图改变 Data 字段的请求验证失败。
但是，可以删除 ControllerRevisions。
请注意，由于 DaemonSet 和 StatefulSet 控制器都使用它来进行更新和回滚，所以这个对象是 Beta 版。
但是，它可能会在未来版本中更改名称和表示形式，客户端不应依赖其稳定性。
它主要供控制器内部使用。&lt;/p&gt;</description></item><item><title>Node</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/node-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/node-v1/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: "v1"
 import: "k8s.io/api/core/v1"
 kind: "Node"
content_type: "api_reference"
description: "Node is a worker node in Kubernetes."
title: "Node"
weight: 8
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Node"&gt;Node&lt;/h2&gt;
&lt;!-- 
Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). 
--&gt;
&lt;p&gt;Node 是 Kubernetes 中的工作节点。
每个节点在缓存中（即在 etcd 中）都有一个唯一的标识符。&lt;/p&gt;
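&lt;p&gt;Node 对象通常由 kubelet 自注册创建，而非手工编写。下面仅示意该对象的基本结构（节点名称与标签为假设取值）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Node
metadata:
  # 节点名称即其唯一标识符
  name: worker-1
  labels:
    kubernetes.io/hostname: worker-1
```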
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Node&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>ObjectReference</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-reference/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "ObjectReference"
content_type: "api_reference"
description: "ObjectReference contains enough information to let you inspect or modify the referred object."
title: "ObjectReference"
weight: 8
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
ObjectReference contains enough information to let you inspect or modify the referred object.
--&gt;
&lt;p&gt;ObjectReference 包含足够的信息，允许你检查或修改引用的对象。&lt;/p&gt;
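&lt;p&gt;ObjectReference 通常作为其他对象中的字段出现（例如 Event 的 involvedObject）。下面是一个假设性的引用片段示意（对象名称与容器名仅为示例）：&lt;/p&gt;

```yaml
# 引用 default 命名空间中的某个 Pod 内名为 app 的容器
apiVersion: v1
kind: Pod
namespace: default
name: example-pod
fieldPath: spec.containers{app}
```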
&lt;hr&gt;
&lt;!--
- **apiVersion** (string)

 API version of the referent.

- **fieldPath** (string)

 If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt; (string)&lt;/p&gt;</description></item><item><title>RoleBinding</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/role-binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/authorization-resources/role-binding-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "rbac.authorization.k8s.io/v1"
 import: "k8s.io/api/rbac/v1"
 kind: "RoleBinding"
content_type: "api_reference"
description: "RoleBinding references a role, but does not contain it."
title: "RoleBinding"
weight: 8
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="RoleBinding"&gt;RoleBinding&lt;/h2&gt;
&lt;!--
RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace.
--&gt;
&lt;p&gt;RoleBinding 引用一个角色，但不包含它。
RoleBinding 可以引用相同命名空间中的 Role 或全局命名空间中的 ClusterRole。
RoleBinding 通过 Subjects 添加主体（“谁”）的信息，并通过其所处的命名空间添加命名空间信息。
处于给定命名空间中的 RoleBinding 仅在该命名空间中有效。&lt;/p&gt;</description></item><item><title>StorageClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/storage-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/storage-class-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storage.k8s.io/v1"
 import: "k8s.io/api/storage/v1"
 kind: "StorageClass"
content_type: "api_reference"
description: "StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned."
title: "StorageClass"
weight: 8
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="StorageClass"&gt;StorageClass&lt;/h2&gt;
&lt;!--
StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.

StorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name.
--&gt;
&lt;p&gt;StorageClass 为可以动态制备 PersistentVolume 的存储类描述参数。&lt;/p&gt;</description></item><item><title>ValidatingAdmissionPolicyBinding</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.

To zh-cn localization team:
We need to have a plan to localize this page.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;</description></item><item><title>DaemonSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/</guid><description>&lt;!--
api_metadata:
apiVersion: "apps/v1"
import: "k8s.io/api/apps/v1"
kind: "DaemonSet"
content_type: "api_reference"
description: "DaemonSet represents the configuration of a daemon set."
title: "DaemonSet"
weight: 9
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## DaemonSet {#DaemonSet}

DaemonSet represents the configuration of a daemon set.
--&gt;
&lt;h2 id="DaemonSet"&gt;DaemonSet&lt;/h2&gt;
&lt;p&gt;DaemonSet 表示守护进程集的配置。&lt;/p&gt;
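&lt;p&gt;下面是一个假设性的 DaemonSet 清单示意（名称、标签与镜像均为示例取值），它在每个节点上运行一个代理 Pod：&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  # selector 必须与 Pod 模板的标签匹配
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: agent
        image: registry.example/log-agent:1.0
```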
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: apps/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: DaemonSet&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec"&gt;DaemonSetSpec&lt;/a&gt;)

 The desired behavior of this daemon set. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec"&gt;DaemonSetSpec&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Patch</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/patch/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/patch/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/apis/meta/v1"
 kind: "Patch"
content_type: "api_reference"
description: "Patch is provided to give a concrete name and type to the Kubernetes PATCH request body."
title: "Patch"
weight: 9
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.
--&gt;
&lt;p&gt;提供 Patch 是为了给 Kubernetes PATCH 请求正文提供一个具体的名称和类型。&lt;/p&gt;
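&lt;p&gt;例如，下面这个假设性的合并（merge）补丁正文只声明要修改的字段，其余字段保持不变：&lt;/p&gt;

```yaml
# 仅更新副本数，其他字段不受影响
spec:
  replicas: 3
```

&lt;p&gt;可以通过类似 &lt;code&gt;kubectl patch deployment demo --type merge --patch-file patch.yaml&lt;/code&gt; 的命令提交（其中的对象名称与文件名仅为示例）。&lt;/p&gt;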
&lt;hr&gt;</description></item><item><title>RuntimeClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/runtime-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/runtime-class-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "node.k8s.io/v1"
 import: "k8s.io/api/node/v1"
 kind: "RuntimeClass"
content_type: "api_reference"
description: "RuntimeClass defines a class of container runtime supported in the cluster."
title: "RuntimeClass"
weight: 9
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: node.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/node/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="RuntimeClass"&gt;RuntimeClass&lt;/h2&gt;
&lt;!--
RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://kubernetes.io/docs/concepts/containers/runtime-class/
--&gt;
&lt;p&gt;RuntimeClass 定义集群中支持的容器运行时类。
RuntimeClass 用于确定哪个容器运行时用于运行某 Pod 中的所有容器。
RuntimeClass 由用户或集群制备程序手动定义，并在 PodSpec 中引用。
kubelet 负责在运行 Pod 之前解析 RuntimeClassName 引用。
有关更多详细信息，请参阅
&lt;a href="https://kubernetes.io/zh-cn/docs/concepts/containers/runtime-class/"&gt;https://kubernetes.io/zh-cn/docs/concepts/containers/runtime-class/&lt;/a&gt;&lt;/p&gt;</description></item><item><title>StorageVersionMigration v1alpha1</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/storage-version-migration-v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/storage-version-migration-v1alpha1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storagemigration.k8s.io/v1alpha1"
 import: "k8s.io/api/storagemigration/v1alpha1"
 kind: "StorageVersionMigration"
content_type: "api_reference"
description: "StorageVersionMigration represents a migration of stored data to the latest storage version."
title: "StorageVersionMigration v1alpha1"
weight: 9
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storagemigration.k8s.io/v1alpha1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storagemigration/v1alpha1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="StorageVersionMigration"&gt;StorageVersionMigration&lt;/h2&gt;
&lt;!--
StorageVersionMigration represents a migration of stored data to the latest storage version.
--&gt;
&lt;p&gt;StorageVersionMigration 表示存储的数据向最新存储版本的一次迁移。&lt;/p&gt;
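&lt;p&gt;下面是一个假设性的 StorageVersionMigration 清单示意（对象名称为示例取值；&lt;code&gt;spec.resource&lt;/code&gt; 按 group/version/resource 指定要迁移的资源，核心组的 group 为空字符串）：&lt;/p&gt;

```yaml
apiVersion: storagemigration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: secrets-migration
spec:
  # 将核心组（group 为空）的 v1 secrets 迁移到最新存储版本
  resource:
    group: ""
    version: v1
    resource: secrets
```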
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: storagemigration.k8s.io/v1alpha1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: StorageVersionMigration&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/storage-version-migration-v1alpha1/#StorageVersionMigrationSpec"&gt;StorageVersionMigrationSpec&lt;/a&gt;)

 Specification of the migration.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Deployments</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/</guid><description>&lt;!--
reviewers:
- janetkuo
title: Deployments
api_metadata:
- apiVersion: "apps/v1"
 kind: "Deployment"
feature:
 title: Automated rollouts and rollbacks
 description: &gt;
 Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
description: &gt;-
 A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state.
content_type: concept
weight: 10
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A _Deployment_ provides declarative updates for &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and
&lt;a class='glossary-tooltip' title='ReplicaSet 确保一次运行指定数量的 Pod 副本。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/' target='_blank' aria-label='ReplicaSets'&gt;ReplicaSets&lt;/a&gt;.
--&gt;
&lt;p&gt;一个 Deployment 为 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
和 &lt;a class='glossary-tooltip' title='ReplicaSet 确保一次运行指定数量的 Pod 副本。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/' target='_blank' aria-label='ReplicaSet'&gt;ReplicaSet&lt;/a&gt;
提供声明式的更新能力。&lt;/p&gt;</description></item><item><title>Job</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/job-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/job-v1/</guid><description>&lt;!--
api_metadata:
apiVersion: "batch/v1"
import: "k8s.io/api/batch/v1"
kind: "Job"
content_type: "api_reference"
description: "Job represents the configuration of a single job."
title: "Job"
weight: 10
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: batch/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/batch/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Job"&gt;Job&lt;/h2&gt;
&lt;!--
Job represents the configuration of a single job.
--&gt;
&lt;p&gt;Job 表示单个任务的配置。&lt;/p&gt;
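&lt;p&gt;下面是一个计算圆周率的 Job 清单示意（镜像与命令取自常见示例，仅用于演示结构）：&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  # 失败重试次数上限
  backoffLimit: 4
  template:
    spec:
      # Job 的 Pod 不应在退出后自动重启
      restartPolicy: Never
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```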
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: batch/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Job&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/job-v1/#JobSpec"&gt;JobSpec&lt;/a&gt;)

 Specification of the desired behavior of a job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/job-v1/#JobSpec"&gt;JobSpec&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>kubectl 故障排查</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/</guid><description>&lt;!--
title: "Troubleshooting kubectl"
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This documentation is about investigating and diagnosing
&lt;a class='glossary-tooltip' title='kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt; related issues.
If you encounter issues accessing `kubectl` or connecting to your cluster, this
document outlines various common scenarios and potential solutions to help
identify and address the likely cause.
--&gt;
&lt;p&gt;本文讲述如何研判和诊断与 &lt;a class='glossary-tooltip' title='kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt; 相关的问题。
如果你在访问 &lt;code&gt;kubectl&lt;/code&gt; 或连接到集群时遇到问题，本文概述了各种常见的情况和可能的解决方案，
以帮助确定和解决可能的原因。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* You need to have a Kubernetes cluster.
* You also need to have `kubectl` installed - see [install tools](/docs/tasks/tools/#kubectl)
--&gt;
&lt;ul&gt;
&lt;li&gt;你需要有一个 Kubernetes 集群。&lt;/li&gt;
&lt;li&gt;你还需要安装好 &lt;code&gt;kubectl&lt;/code&gt;，参见&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/#kubectl"&gt;安装工具&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Verify kubectl setup

Make sure you have installed and configured `kubectl` correctly on your local machine.
Check the `kubectl` version to ensure it is up-to-date and compatible with your cluster.

Check kubectl version:
--&gt;
&lt;h2 id="verify-kubectl-setup"&gt;验证 kubectl 设置&lt;/h2&gt;
&lt;p&gt;确保你已在本机上正确安装和配置了 &lt;code&gt;kubectl&lt;/code&gt;。
检查 &lt;code&gt;kubectl&lt;/code&gt; 版本以确保其是最新的，并与你的集群兼容。&lt;/p&gt;</description></item><item><title>kubectl 快速参考</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/quick-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/quick-reference/</guid><description>&lt;!--
title: kubectl Quick Reference
reviewers:
- erictune
- krousey
- clove
content_type: concept
weight: 10 # highlight it
card:
 name: tasks
 weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page contains a list of commonly used `kubectl` commands and flags.
--&gt;
&lt;p&gt;本页列举常用的 &lt;code&gt;kubectl&lt;/code&gt; 命令和参数。&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
These instructions are for Kubernetes v1.35. To check the version, use the `kubectl version` command.
--&gt;
&lt;p&gt;这些指令适用于 Kubernetes v1.35。要检查版本，请使用 &lt;code&gt;kubectl version&lt;/code&gt; 命令。&lt;/p&gt;&lt;/div&gt;

&lt;!-- body --&gt;
&lt;!--
## Kubectl autocomplete

### BASH
--&gt;
&lt;h2 id="kubectl-autocomplete"&gt;kubectl 自动补全&lt;/h2&gt;
&lt;h3 id="bash"&gt;BASH&lt;/h3&gt;
&lt;!--
```bash
source &lt;(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source &lt;(kubectl completion bash)" &gt;&gt; ~/.bashrc # add autocomplete permanently to your bash shell.
```

You can also use a shorthand alias for `kubectl` that also works with completion:
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f"&gt;source&lt;/span&gt; &amp;lt;&lt;span style="color:#666"&gt;(&lt;/span&gt;kubectl completion bash&lt;span style="color:#666"&gt;)&lt;/span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 在 bash 中设置当前 shell 的自动补全，要先安装 bash-completion 包&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f"&gt;echo&lt;/span&gt; &lt;span style="color:#b44"&gt;&amp;#34;source &amp;lt;(kubectl completion bash)&amp;#34;&lt;/span&gt; &amp;gt;&amp;gt; ~/.bashrc &lt;span style="color:#080;font-style:italic"&gt;# 在你的 bash shell 中永久地添加自动补全&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;你还可以在补全时为 &lt;code&gt;kubectl&lt;/code&gt; 使用一个速记别名：&lt;/p&gt;</description></item><item><title>Kubelet Checkpoint API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kubelet-checkpoint-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kubelet-checkpoint-api/</guid><description>&lt;div class="feature-state-notice feature-beta" title="特性门控： ContainerCheckpoint"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.30 [beta]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
Checkpointing a container is the functionality to create a stateful copy of a
running container. Once you have a stateful copy of a container, you could
move it to a different computer for debugging or similar purposes.

If you move the checkpointed container data to a computer that's able to restore
it, that restored container continues to run at exactly the same
point it was checkpointed. You can also inspect the saved data, provided that you
have suitable tools for doing so.
--&gt;
&lt;p&gt;为容器生成检查点这个功能可以为一个正在运行的容器创建有状态的拷贝。
一旦容器有一个有状态的拷贝，你就可以将其移动到其他计算机进行调试或类似用途。&lt;/p&gt;</description></item><item><title>Kubernetes 调度器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/</guid><description>&lt;!--
title: Kubernetes Scheduler
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, _scheduling_ refers to making sure that &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;
are matched to &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt; so that
&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt; can run them.
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;strong&gt;调度&lt;/strong&gt;是指将 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
放置到合适的&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;上，以便对应节点上的
&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt; 能够运行这些 Pod。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Scheduling overview {#scheduling}
--&gt;
&lt;h2 id="scheduling"&gt;调度概览&lt;/h2&gt;
&lt;!--
A scheduler watches for newly created Pods that have no Node assigned. For
every Pod that the scheduler discovers, the scheduler becomes responsible
for finding the best Node for that Pod to run on. The scheduler reaches
this placement decision taking into account the scheduling principles
described below.
--&gt;
&lt;p&gt;调度器通过 Kubernetes 的监测（Watch）机制来发现集群中新创建且尚未被调度到节点上的 Pod。
调度器会将所发现的每一个未调度的 Pod 调度到一个合适的节点上来运行。
调度器会依据下文的调度原则来做出调度选择。&lt;/p&gt;</description></item><item><title>Kubernetes 文档支持的版本</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/home/supported-doc-versions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/home/supported-doc-versions/</guid><description>&lt;!--
title: Available Documentation Versions
content_type: custom
layout: supported-versions
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This website contains documentation for the current version of Kubernetes
and the four previous versions of Kubernetes.

The availability of documentation for a Kubernetes version is separate from whether
that release is currently supported.
Read [Support period](/releases/patch-releases/#support-period) to learn about
which versions of Kubernetes are officially supported, and for how long.
--&gt;
&lt;p&gt;本网站包含当前版本和之前四个版本的 Kubernetes 文档。&lt;/p&gt;</description></item><item><title>Kubernetes 问题追踪</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/issues/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/issues/</guid><description>&lt;!--
title: Kubernetes Issue Tracker
weight: 10
aliases: [/cve/,/cves/]
--&gt;
&lt;!--
To report a security issue, please follow the [Kubernetes security disclosure process](/docs/reference/issues-security/security/#report-a-vulnerability).
--&gt;
&lt;p&gt;要报告安全问题，请遵循
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/security/#report-a-vulnerability"&gt;Kubernetes 安全问题公开流程&lt;/a&gt;。&lt;/p&gt;
&lt;!--
Work on Kubernetes code and public issues are tracked using [GitHub Issues](https://github.com/kubernetes/kubernetes/issues/).
--&gt;
&lt;p&gt;Kubernetes 代码相关的工作和公开问题都通过
&lt;a href="https://github.com/kubernetes/kubernetes/issues/"&gt;GitHub Issues&lt;/a&gt; 进行跟踪。&lt;/p&gt;
&lt;!--
* Official [list of known CVEs](/docs/reference/issues-security/official-cve-feed/)
 (security vulnerabilities) that have been announced by the
 [Security Response Committee](https://github.com/kubernetes/committee-security-response)
* [CVE-related GitHub issues](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&amp;q=is%3Aissue+label%3Aarea%2Fsecurity+in%3Atitle+CVE)
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/committee-security-response"&gt;安全响应委员会（Security Response Committee，SRC）&lt;/a&gt;已公布的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/official-cve-feed/"&gt;已知 CVE 官方列表&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&amp;q=is%3Aissue+label%3Aarea%2Fsecurity+in%3Atitle+CVE"&gt;CVE 相关问题&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Security-related announcements are sent to the [kubernetes-security-announce@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-security-announce) mailing list.
--&gt;
&lt;p&gt;与安全性相关的公告将发送到
&lt;a href="https://groups.google.com/forum/#!forum/kubernetes-security-announce"&gt;kubernetes-security-announce@googlegroups.com&lt;/a&gt;
邮件列表。&lt;/p&gt;</description></item><item><title>Kubernetes 组件</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/components/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/components/</guid><description>&lt;!--
reviewers:
- lavalamp
title: Kubernetes Components
content_type: concept
description: &gt;
 An overview of the key components that make up a Kubernetes cluster.
weight: 10
card:
 title: Components of a cluster
 name: concepts
 weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
本页面概述了组成 Kubernetes 集群的基本组件。

{{&lt; figure src="https://andygol-k8s.netlify.app/images/docs/components-of-kubernetes.svg" alt="Components of Kubernetes" caption="The components of a Kubernetes cluster" class="diagram-large" clicktozoom="true" &gt;}}
--&gt;
&lt;p&gt;本文档概述了一个正常运行的 Kubernetes 集群所需的各种组件。&lt;/p&gt;


&lt;figure class="diagram-large clickable-zoom"&gt;
 &lt;img src="https://andygol-k8s.netlify.app/zh-cn/docs/images/components-of-kubernetes.svg"
 alt="Kubernetes 的组件"/&gt; &lt;figcaption&gt;
 &lt;p&gt;Kubernetes 集群的组件&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!-- body --&gt;
&lt;!--
## Core Components

A Kubernetes cluster consists of a control plane and one or more worker nodes.
Here's a brief overview of the main components:
--&gt;
&lt;h2 id="core-components"&gt;核心组件&lt;/h2&gt;
&lt;p&gt;Kubernetes 集群由控制平面和一个或多个工作节点组成。以下是主要组件的简要概述：&lt;/p&gt;</description></item><item><title>Linux 内核版本要求</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kernel-version-requirements/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kernel-version-requirements/</guid><description>&lt;!--
content_type: "reference"
title: Linux Kernel Version Requirements
weight: 10
--&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;说明：&lt;/strong&gt;&amp;puncsp;本部分链接到提供 Kubernetes 所需功能的第三方项目。Kubernetes 项目作者不负责这些项目。此页面遵循&lt;a href="https://github.com/cncf/foundation/blob/main/policies-guidance/website-guidelines.md" target="_blank"&gt;CNCF 网站指南&lt;/a&gt;，按字母顺序列出项目。要将项目添加到此列表中，请在提交更改之前阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/#third-party-content"&gt;内容指南&lt;/a&gt;。&lt;/div&gt;
&lt;!--
Many features rely on specific kernel functionalities and have minimum kernel version requirements.
However, relying solely on kernel version numbers may not be sufficient
for certain operating system distributions,
as maintainers for distributions such as RHEL, Ubuntu and SUSE often backport selected features
to older kernel releases (retaining the older kernel version).
--&gt;
&lt;p&gt;许多特性依赖于特定的内核功能，并且有最低的内核版本要求。
然而，对于某些操作系统发行版而言，仅凭内核版本号来判断可能并不足够，
因为 RHEL、Ubuntu 和 SUSE 等发行版的维护者通常会将选定的特性反向移植到较旧的内核发布中（同时保留较旧的内核版本号）。&lt;/p&gt;
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/api/resource"
 kind: "Quantity"
content_type: "api_reference"
description: "Quantity is a fixed-point representation of a number."
title: "Quantity"
weight: 10
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/api/resource&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!-- 
Quantity is a fixed-point representation of a number. 
It provides convenient marshaling/unmarshaling in JSON and YAML, 
in addition to String() and AsInt64() accessors.

The serialization format is:
--&gt;
&lt;p&gt;数量（Quantity）是数字的定点表示。
除了 String() 和 AsInt64() 的访问接口之外，
它以 JSON 和 YAML 形式提供方便的打包和解包方法。&lt;/p&gt;
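作为补充示意（并非本参考页原文），下面这个假设的 Pod 资源配置片段展示了几种带不同后缀的数量写法，其中的名字和镜像仅作演示：

```yaml
# 示意性示例：同一份配置中几种常见的 Quantity 写法
apiVersion: v1
kind: Pod
metadata:
  name: quantity-demo        # 假设的名字，仅作演示
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 100m            # 十进制 SI 后缀 m：100 毫核
        memory: 64Mi         # 二进制 SI 后缀 Mi：64 × 2^20 字节
      limits:
        cpu: "0.5"           # 无后缀的小数，等价于 500m
        memory: "129e6"      # 十进制指数形式：129,000,000 字节
```

解析时 Quantity 会记住所用的后缀类型，因此再次序列化时 64Mi 仍输出为 64Mi，而不会被改写成等值的十进制形式。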
&lt;p&gt;序列化格式如下：&lt;/p&gt;
&lt;!-- 
```
 \&lt;quantity&gt; ::= \&lt;signedNumber&gt;\&lt;suffix&gt;

 (Note that \&lt;suffix&gt; may be empty, from the "" case in \&lt;decimalSI&gt;.)

\&lt;digit&gt; ::= 0 | 1 | ... | 9 \&lt;digits&gt; ::= \&lt;digit&gt; | \&lt;digit&gt;\&lt;digits&gt; \&lt;number&gt; ::= \&lt;digits&gt; | \&lt;digits&gt;.\&lt;digits&gt; | \&lt;digits&gt;. | .\&lt;digits&gt; \&lt;sign&gt; ::= "+" | "-" \&lt;signedNumber&gt; ::= \&lt;number&gt; | \&lt;sign&gt;\&lt;number&gt; \&lt;suffix&gt; ::= \&lt;binarySI&gt; | \&lt;decimalExponent&gt; | \&lt;decimalSI&gt; \&lt;binarySI&gt; ::= Ki | Mi | Gi | Ti | Pi | Ei

 (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)

\&lt;decimalSI&gt; ::= m | "" | k | M | G | T | P | E

 (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)

\&lt;decimalExponent&gt; ::= "e" \&lt;signedNumber&gt; | "E" \&lt;signedNumber&gt; 
```
--&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;&amp;lt;quantity&amp;gt; ::= &amp;lt;signedNumber&amp;gt;&amp;lt;suffix&amp;gt;

（注意 &amp;lt;suffix&amp;gt; 可能为空，例如 &amp;lt;decimalSI&amp;gt; 的 &amp;#34;&amp;#34; 情形。）

&amp;lt;digit&amp;gt; ::= 0 | 1 | ... | 9
&amp;lt;digits&amp;gt; ::= &amp;lt;digit&amp;gt; | &amp;lt;digit&amp;gt;&amp;lt;digits&amp;gt;
&amp;lt;number&amp;gt; ::= &amp;lt;digits&amp;gt; | &amp;lt;digits&amp;gt;.&amp;lt;digits&amp;gt; | &amp;lt;digits&amp;gt;. | .&amp;lt;digits&amp;gt;
&amp;lt;sign&amp;gt; ::= &amp;#34;+&amp;#34; | &amp;#34;-&amp;#34;
&amp;lt;signedNumber&amp;gt; ::= &amp;lt;number&amp;gt; | &amp;lt;sign&amp;gt;&amp;lt;number&amp;gt;
&amp;lt;suffix&amp;gt; ::= &amp;lt;binarySI&amp;gt; | &amp;lt;decimalExponent&amp;gt; | &amp;lt;decimalSI&amp;gt;
&amp;lt;binarySI&amp;gt; ::= Ki | Mi | Gi | Ti | Pi | Ei

（国际单位制；参阅：http://physics.nist.gov/cuu/Units/binary.html）

&amp;lt;decimalSI&amp;gt; ::= m | &amp;#34;&amp;#34; | k | M | G | T | P | E

（注意，1024 = 1Ki 而 1000 = 1k；这样的大小写规则并非我所选择。）

&amp;lt;decimalExponent&amp;gt; ::= &amp;#34;e&amp;#34; &amp;lt;signedNumber&amp;gt; | &amp;#34;E&amp;#34; &amp;lt;signedNumber&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;!-- 
No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities.

When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized.
--&gt;
&lt;p&gt;无论使用三种指数形式中的哪一种，任何数量都不能表示绝对值大于 2&lt;sup&gt;63&lt;/sup&gt;-1 的数，
也不能带有超过 3 位小数。更大或更精确的数字将被限制或向上取整（例如：0.1m 将向上取整为 1m）。
如果将来我们需要更大或更小的数量，这一格式可能会被扩展。&lt;/p&gt;
&lt;p&gt;当从字符串解析 Quantity 时，它会记住原先所用的后缀类型，并在再次序列化时使用同一类型。&lt;/p&gt;</description></item><item><title>参考文档快速入门</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/quickstart/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/quickstart/</guid><description>&lt;!--
title: Reference Documentation Quickstart
linkTitle: Quickstart
content_type: task
weight: 10
hide_summary: true
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use the `update-imported-docs.py` script to generate
the Kubernetes reference documentation. The script automates
the build setup and generates the reference documentation for a release.
--&gt;
&lt;p&gt;本页讨论如何使用 &lt;code&gt;update-imported-docs.py&lt;/code&gt; 脚本来生成 Kubernetes 参考文档。
此脚本将构建的配置过程自动化，并为某个发行版本生成参考文档。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;

&lt;!--
### Requirements:

- You need a machine that is running Linux or macOS.

- You need to have these tools installed:

 - [Python](https://www.python.org/downloads/) v3.7.x+
 - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
 - [Golang](https://go.dev/dl/) version 1.13+
 - [Pip](https://pypi.org/project/pip/) used to install PyYAML
 - [PyYAML](https://pyyaml.org/) v5.1.2
 - [make](https://www.gnu.org/software/make/)
 - [gcc compiler/linker](https://gcc.gnu.org/)
 - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference)
--&gt;
&lt;h3 id="requirements"&gt;需求&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;你需要一台 Linux 或 macOS 机器。&lt;/p&gt;</description></item><item><title>Service 所用的协议</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/networking/service-protocols/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/networking/service-protocols/</guid><description>&lt;!--
title: Protocols for Services
content_type: reference
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
If you configure a &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;,
you can select from any network protocol that Kubernetes supports.

Kubernetes supports the following protocols with Services:

- [`SCTP`](#protocol-sctp)
- [`TCP`](#protocol-tcp) _(the default)_
- [`UDP`](#protocol-udp)
--&gt;
&lt;p&gt;如果你配置 &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;，
你可以从 Kubernetes 支持的任何网络协议中选择一个协议。&lt;/p&gt;
&lt;p&gt;Kubernetes 支持以下协议用于 Service：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#protocol-sctp"&gt;&lt;code&gt;SCTP&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#protocol-tcp"&gt;&lt;code&gt;TCP&lt;/code&gt;&lt;/a&gt; &lt;strong&gt;（默认值）&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#protocol-udp"&gt;&lt;code&gt;UDP&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
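下面给出一个示意性的 Service 清单（名字和标签均为演示用的假设值），通过每个端口的 protocol 字段显式选择上述协议之一：

```yaml
# 示意性示例：在 Service 端口上显式指定协议
apiVersion: v1
kind: Service
metadata:
  name: echo-udp          # 假设的名字，仅作演示
spec:
  selector:
    app: echo             # 假设的标签选择算符
  ports:
  - name: echo
    protocol: UDP         # 也可以是 TCP（默认值）或 SCTP
    port: 7
    targetPort: 7
```

若省略 protocol 字段，Kubernetes 会使用默认值 TCP。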
&lt;!--
When you define a Service, you can also specify the
[application protocol](/docs/concepts/services-networking/service/#application-protocol)
that it uses.

This document details some special cases, all of them typically using TCP
as a transport protocol:

- [HTTP](#protocol-http-special) and [HTTPS](#protocol-http-special)
- [PROXY protocol](#protocol-proxy-special)
- [TLS](#protocol-tls-special) termination at the load balancer
--&gt;
&lt;p&gt;当你定义 Service 时，
你还可以指定其使用的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/#application-protocol"&gt;应用协议&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>ServiceCIDR</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/service-cidr-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/service-cidr-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "networking.k8s.io/v1"
 import: "k8s.io/api/networking/v1"
 kind: "ServiceCIDR"
content_type: "api_reference"
description: "ServiceCIDR defines a range of IP addresses using CIDR format (e."
title: "ServiceCIDR"
weight: 10
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ServiceCIDR"&gt;ServiceCIDR&lt;/h2&gt;
&lt;!--
ServiceCIDR defines a range of IP addresses using CIDR format (e.g. 192.168.0.0/24 or 2001:db2::/64). This range is used to allocate ClusterIPs to Service objects.
--&gt;
&lt;p&gt;ServiceCIDR 使用 CIDR 格式定义 IP 地址的范围（例如 192.168.0.0/24 或 2001:db2::/64）。
此范围用于向 Service 对象分配 ClusterIP。&lt;/p&gt;</description></item><item><title>StatefulSet 基础</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/basic-stateful-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/basic-stateful-set/</guid><description>&lt;!--
reviewers:
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: StatefulSet Basics
content_type: tutorial
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial provides an introduction to managing applications with
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSets'&gt;StatefulSets&lt;/a&gt;.
It demonstrates how to create, delete, scale, and update the Pods of StatefulSets.
--&gt;
&lt;p&gt;本教程介绍了如何使用
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;
来管理应用。
演示了如何创建、删除、扩容/缩容和更新 StatefulSet 的 Pod。&lt;/p&gt;</description></item><item><title>Volume</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "Volume"
content_type: "api_reference"
description: "Volume represents a named volume in a pod that may be accessed by any container in the pod."
title: "Volume"
weight: 10
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Volume"&gt;Volume&lt;/h2&gt;
&lt;!--
Volume represents a named volume in a pod that may be accessed by any container in the pod.
--&gt;
&lt;p&gt;Volume 表示 Pod 中一个有名字的卷，可以由 Pod 中的任意容器进行访问。&lt;/p&gt;
&lt;hr&gt;
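作为示意（并非本 API 参考的一部分），下面这个假设的 Pod 清单声明了一个名为 cache 的 emptyDir 卷，Pod 中的容器通过 volumeMounts 按名字引用并挂载它：

```yaml
# 示意性示例：在 Pod 中声明并挂载一个卷
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo       # 假设的名字，仅作演示
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: cache         # 与下方 volumes 中卷的 name 对应
      mountPath: /cache
  volumes:
  - name: cache           # 必须是 DNS_LABEL 且在 Pod 内唯一
    emptyDir: {}
```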
&lt;!--
- **name** (string), required
 name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt; (string)，必需&lt;/p&gt;</description></item><item><title>安装 kubeadm</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</guid><description>&lt;!--
title: Installing kubeadm
content_type: task
weight: 10
card:
 name: setup
 weight: 20
 title: Install the kubeadm setup tool
--&gt;
&lt;!-- overview --&gt;
&lt;!--
&lt;img src="https://andygol-k8s.netlify.app/images/kubeadm-stacked-color.png" align="right" width="150px"&gt;&lt;/img&gt;
This page shows how to install the `kubeadm` toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process,
see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
--&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/kubeadm-stacked-color.png" align="right" width="150px"&gt;&lt;/img&gt;
本页面显示如何安装 &lt;code&gt;kubeadm&lt;/code&gt; 工具箱。
有关在执行此安装过程后如何使用 kubeadm 创建集群的信息，
请参见&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;使用 kubeadm 创建集群&lt;/a&gt;。&lt;/p&gt;






&lt;div class="version-list"&gt;
 &lt;p&gt;
 本安装指南适用于 Kubernetes v1.35。如果你想使用其他 Kubernetes 版本，请参考以下页面：
 &lt;/p&gt;</description></item><item><title>部署和访问 Kubernetes 仪表板（Dashboard）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/</guid><description>&lt;!--
reviewers:
- floreks
- maciaszczykm
- shu-mutou
- mikedanese
title: Deploy and Access the Kubernetes Dashboard
description: &gt;-
 Deploy the web UI (Kubernetes Dashboard) and access it.
content_type: concept
weight: 10
card:
 name: tasks
 weight: 30
 title: Use the Web UI Dashboard
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Dashboard is a web-based Kubernetes user interface.
You can use Dashboard to deploy containerized applications to a Kubernetes cluster,
troubleshoot your containerized application, and manage the cluster resources.
You can use Dashboard to get an overview of applications running on your cluster,
as well as for creating or modifying individual Kubernetes resources
(such as Deployments, Jobs, DaemonSets, etc).
For example, you can scale a Deployment, initiate a rolling update, restart a pod
or deploy new applications using a deploy wizard.

Dashboard also provides information on the state of Kubernetes resources in your cluster and on any errors that may have occurred.
--&gt;
&lt;p&gt;Dashboard 是基于网页的 Kubernetes 用户界面。
你可以使用 Dashboard 将容器应用部署到 Kubernetes 集群中，也可以对容器应用排错，还能管理集群资源。
你可以使用 Dashboard 获取运行在集群中的应用的概览信息，也可以创建或者修改 Kubernetes 资源
（如 Deployment、Job、DaemonSet 等等）。
例如，你可以对 Deployment 实现弹性伸缩、发起滚动升级、重启 Pod 或者使用向导创建新的应用。&lt;/p&gt;
&lt;p&gt;Dashboard 还提供集群中各 Kubernetes 资源的状态信息，以及可能发生的任何错误信息。&lt;/p&gt;</description></item><item><title>查看 Pod 和节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/explore/explore-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/explore/explore-intro/</guid><description>&lt;!--
title: Viewing Pods and Nodes
weight: 10
--&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Learn about Kubernetes Pods.
* Learn about Kubernetes Nodes.
* Troubleshoot deployed applications.
--&gt;
&lt;ul&gt;
&lt;li&gt;了解 Kubernetes Pod。&lt;/li&gt;
&lt;li&gt;了解 Kubernetes 节点。&lt;/li&gt;
&lt;li&gt;对已部署的应用进行故障排查。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Kubernetes Pods
--&gt;
&lt;h2 id="kubernetes-pod"&gt;Kubernetes Pod&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;!--
_A Pod is a group of one or more application containers (such as Docker) and includes
shared storage (volumes), IP address and information about how to run them._
--&gt;
&lt;p&gt;&lt;strong&gt;Pod 是一个或多个应用容器（例如 Docker）的组合，并且包含共享的存储（卷）、IP
地址和有关如何运行它们的信息。&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>大规模集群的注意事项</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/cluster-large/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/cluster-large/</guid><description>&lt;!-- 
reviewers:
- davidopp
- lavalamp
title: Considerations for large clusters
weight: 10
--&gt;
&lt;!--
A cluster is a set of &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='nodes'&gt;nodes&lt;/a&gt; (physical
or virtual machines) running Kubernetes agents, managed by the
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;.
Kubernetes v1.35 supports clusters with up to 5,000 nodes. More specifically,
Kubernetes is designed to accommodate configurations that meet *all* of the following criteria:
--&gt;
&lt;p&gt;集群包含多个运行着 Kubernetes 代理程序、
由&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;管理的一组&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;（物理机或虚拟机）。
Kubernetes v1.35 单个集群支持的最大节点数为 5,000。
更具体地说，Kubernetes 设计为满足以下&lt;strong&gt;所有&lt;/strong&gt;标准的配置：&lt;/p&gt;</description></item><item><title>调试 Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-pods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-pods/</guid><description>&lt;!-- 
reviewers:
- mikedanese
- thockin
title: Debug Pods
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This guide is to help users debug applications that are deployed into Kubernetes
and not behaving correctly. This is *not* a guide for people who want to debug their cluster.
For that you should check out [this guide](/docs/tasks/debug/debug-cluster).
--&gt;
&lt;p&gt;本指南帮助用户调试那些部署到 Kubernetes 上后没有正常运行的应用。
本指南&lt;strong&gt;并非&lt;/strong&gt;指导用户如何调试集群。
如果想调试集群的话，请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster"&gt;这里&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Diagnosing the problem

The first step in troubleshooting is triage. What is the problem?
Is it your Pods, your Replication Controller or your Service?

 * [Debugging Pods](#debugging-pods)
 * [Debugging Replication Controllers](#debugging-replication-controllers)
 * [Debugging Services](#debugging-services)
--&gt;
&lt;h2 id="diagnosing-the-problem"&gt;诊断问题&lt;/h2&gt;
&lt;p&gt;故障排查的第一步是先给问题分类。问题是什么？是关于 Pod、Replication Controller 还是 Service？&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#debugging-pods"&gt;调试 Pod&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#debugging-replication-controllers"&gt;调试 Replication Controller&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#debugging-services"&gt;调试 Service&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>定制资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/</guid><description>&lt;!--
title: Custom Resources
reviewers:
- enisoc
- deads2k
api_metadata:
- apiVersion: "apiextensions.k8s.io/v1"
 kind: "CustomResourceDefinition"
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
*Custom resources* are extensions of the Kubernetes API. This page discusses when to add a custom
resource to your Kubernetes cluster and when to use a standalone service. It describes the two
methods for adding custom resources and how to choose between them.
--&gt;
&lt;p&gt;&lt;strong&gt;定制资源（Custom Resource）&lt;/strong&gt; 是对 Kubernetes API 的扩展。
本页讨论何时向 Kubernetes 集群添加定制资源，何时使用独立的服务。
本页描述添加定制资源的两种方法以及怎样在二者之间做出抉择。&lt;/p&gt;</description></item><item><title>对 DaemonSet 执行滚动更新</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/update-daemon-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/update-daemon-set/</guid><description>&lt;!--
reviewers:
- janetkuo
title: Perform a Rolling Update on a DaemonSet
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to perform a rolling update on a DaemonSet.
--&gt;
&lt;p&gt;本文介绍了如何对 DaemonSet 执行滚动更新。&lt;/p&gt;
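滚动更新行为由 DaemonSet 的 updateStrategy 字段控制。下面是一个示意性的清单片段（名字和镜像沿用 Kubernetes 文档常见的演示值，并非本页原文）：

```yaml
# 示意性示例：为 DaemonSet 启用 RollingUpdate 更新策略
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch    # 演示用名字
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate          # 默认策略；模板变更后自动滚动替换 Pod
    rollingUpdate:
      maxUnavailable: 1          # 滚动期间最多允许 1 个 Pod 不可用
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
```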
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>发起拉取请求（PR）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/open-a-pr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/open-a-pr/</guid><description>&lt;!--
title: Opening a pull request
content_type: concept
weight: 10
card:
 name: contribute
 weight: 40
--&gt;
&lt;!-- overview --&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
**Code developers**: If you are documenting a new feature for an
upcoming Kubernetes release, see
[Document a new feature](/docs/contribute/new-content/new-features/).
--&gt;
&lt;p&gt;&lt;strong&gt;代码开发者们&lt;/strong&gt;：如果你在为下一个 Kubernetes 发行版本中的某功能特性撰写文档，
请参考&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/new-features/"&gt;为发行版本撰写功能特性文档&lt;/a&gt;。&lt;/p&gt;&lt;/div&gt;

&lt;!--
To contribute new content pages or improve existing content pages, open a pull request (PR).
Make sure you follow all the requirements in the
[Before you begin](/docs/contribute/new-content/) section.
--&gt;
&lt;p&gt;要贡献新的内容页面或者改进已有内容页面，请发起拉取请求（PR）。
请确保你满足了&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/"&gt;开始之前&lt;/a&gt;一节中所列举的所有要求。&lt;/p&gt;</description></item><item><title>服务（Service）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/</guid><description>&lt;!--
reviewers:
- bprashanth
title: Service
api_metadata:
- apiVersion: "v1"
 kind: "Service"
feature:
 title: Service discovery and load balancing
 description: &gt;
 No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
description: &gt;-
 Expose an application running in your cluster behind a single outward-facing
 endpoint, even when the workload is split across multiple backends.
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes 中 Service 是将运行在一个或一组 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 上的网络应用程序公开为网络服务的方法。&lt;/p&gt;</description></item><item><title>公开外部 IP 地址以访问集群中的应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateless-application/expose-external-ip-address/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateless-application/expose-external-ip-address/</guid><description>&lt;!--
title: Exposing an External IP Address to Access an Application in a Cluster
content_type: tutorial
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to create a Kubernetes Service object that exposes an
external IP address.
--&gt;
&lt;p&gt;此页面显示如何创建公开外部 IP 地址的 Kubernetes 服务对象。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* Install [kubectl](/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
 create a Kubernetes cluster. This tutorial creates an
 [external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
 which requires a cloud provider.
* Configure `kubectl` to communicate with your Kubernetes API server. For instructions, see the
 documentation for your cloud provider.
--&gt;
&lt;ul&gt;
&lt;li&gt;安装 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/"&gt;kubectl&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;使用 Google Kubernetes Engine 或 Amazon Web Services 等云供应商创建 Kubernetes 集群。
本教程创建了一个&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/"&gt;外部负载均衡器&lt;/a&gt;，
需要云供应商。&lt;/li&gt;
&lt;li&gt;配置 &lt;code&gt;kubectl&lt;/code&gt; 与 Kubernetes API 服务器通信。有关说明，请参阅云供应商文档。&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Run five instances of a Hello World application.
* Create a Service object that exposes an external IP address.
* Use the Service object to access the running application.
--&gt;
&lt;ul&gt;
&lt;li&gt;运行 Hello World 应用的五个实例。&lt;/li&gt;
&lt;li&gt;创建一个公开外部 IP 地址的 Service 对象。&lt;/li&gt;
&lt;li&gt;使用 Service 对象访问正在运行的应用。&lt;/li&gt;
&lt;/ul&gt;
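上述目标中"公开外部 IP 地址的 Service 对象"大体形如下面这个示意性清单（名字、标签和端口均为演示用的假设值）：

```yaml
# 示意性示例：通过 LoadBalancer 类型的 Service 公开外部 IP
apiVersion: v1
kind: Service
metadata:
  name: my-service        # 假设的名字
spec:
  type: LoadBalancer      # 请求云供应商创建外部负载均衡器
  selector:
    app: hello-world      # 假设的标签，匹配应用的 Pod
  ports:
  - port: 8080
    targetPort: 8080
```

创建后，云供应商会为该 Service 分配一个外部 IP，可通过 kubectl get services 查看。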
&lt;!-- lessoncontent --&gt;
&lt;!--
## Creating a service for an application running in five pods
--&gt;
&lt;h2 id="creating-a-service-for-an-app-running-in-five-pods"&gt;为在五个 Pod 中运行的应用创建服务&lt;/h2&gt;
&lt;!--
1. Run a Hello World application in your cluster:
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;在集群中运行 Hello World 应用：&lt;/p&gt;</description></item><item><title>将节点上的容器运行时从 Docker Engine 改为 containerd</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/</guid><description>&lt;!--
title: "Changing the Container Runtime on a Node from Docker Engine to containerd"
weight: 10
content_type: task 
--&gt;
&lt;!--
This task outlines the steps needed to update your container runtime to containerd from Docker. It
is applicable for cluster operators running Kubernetes 1.23 or earlier. This also covers an
example scenario for migrating from dockershim to containerd. Alternative container runtimes
can be picked from this [page](/docs/setup/production-environment/container-runtimes/).
--&gt;
&lt;p&gt;本任务给出将容器运行时从 Docker 改为 containerd 所需的步骤。
此任务适用于运行 1.23 或更早版本 Kubernetes 的集群操作人员。
同时，此任务也涉及从 dockershim 迁移到 containerd 的示例场景。
其他备选的容器运行时可从&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes/"&gt;此页面&lt;/a&gt;中选择。&lt;/p&gt;</description></item><item><title>角色与责任</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/roles-and-responsibilities/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/roles-and-responsibilities/</guid><description>&lt;!-- overview --&gt;
&lt;!--
Anyone can contribute to Kubernetes. As your contributions to SIG Docs grow, you can apply for different levels of membership in the community.
These roles allow you to take on more responsibility within the community.
Each role requires more time and commitment. The roles are:

- Anyone: regular contributors to the Kubernetes documentation
- Members: can assign and triage issues and provide non-binding review on pull requests
- Reviewers: can lead reviews on documentation pull requests and can vouch for a change's quality
- Approvers: can lead reviews on documentation and merge changes
--&gt;
&lt;p&gt;任何人都可以为 Kubernetes 作出贡献。随着你对 SIG Docs 贡献的增多，
你可以申请社区中不同级别的成员资格。
这些角色使你能够在社区中承担更多的责任。
每个角色都需要投入更多的时间和精力。具体角色包括：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;任何人：Kubernetes 文档的普通贡献者&lt;/li&gt;
&lt;li&gt;成员（Member）：可以分派和评判（triage）Issue，并对 PR 提供非约束性的评审意见&lt;/li&gt;
&lt;li&gt;评审人（Reviewer）：可以主导对文档 PR 的评审，并为变更的质量作保&lt;/li&gt;
&lt;li&gt;批准人（Approver）：可以主导对文档的评审并合并变更&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/</guid><description>&lt;!--
reviewers:
- caesarxuchao
- dchen1107
title: Nodes
api_metadata:
- apiVersion: "v1"
 kind: "Node"
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes runs your &lt;a class='glossary-tooltip' title='工作负载是在 Kubernetes 上运行的应用程序。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/' target='_blank' aria-label='workload'&gt;workload&lt;/a&gt;
by placing containers into Pods to run on _Nodes_.
A node may be a virtual or physical machine, depending on the cluster. Each node
is managed by the
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
and contains the services necessary to run
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.

Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have only one node.

The [components](/docs/concepts/architecture/#node-components) on a node include the
&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt;, a
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;, and the
&lt;a class='glossary-tooltip' title='kube-proxy 是集群中每个节点上运行的网络代理。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/' target='_blank' aria-label='kube-proxy'&gt;kube-proxy&lt;/a&gt;.
--&gt;
&lt;p&gt;Kubernetes 通过将容器放入在节点（Node）上运行的 Pod
中来执行你的&lt;a class='glossary-tooltip' title='工作负载是在 Kubernetes 上运行的应用程序。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/' target='_blank' aria-label='工作负载'&gt;工作负载&lt;/a&gt;。
节点可以是一个虚拟机或者物理机器，取决于所在的集群配置。
每个节点包含运行 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 所需的服务；
这些节点由&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制面'&gt;控制面&lt;/a&gt;负责管理。&lt;/p&gt;</description></item><item><title>节点关闭</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/node-shutdown/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/node-shutdown/</guid><description>&lt;!--
title: Node Shutdowns
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In a Kubernetes cluster, a &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;
can be shut down in a planned graceful way or unexpectedly because of reasons such
as a power outage or something else external. A node shutdown could lead to workload
failure if the node is not drained before the shutdown. A node shutdown can be
either **graceful** or **non-graceful**.
--&gt;
&lt;p&gt;在 Kubernetes 集群中，&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;可以按计划的体面方式关闭，
也可能因断电或其他某些外部原因被意外关闭。如果节点在关闭之前未被排空，则节点关闭可能会导致工作负载失败。
节点可以&lt;strong&gt;体面关闭&lt;/strong&gt;或&lt;strong&gt;非体面关闭&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>镜像</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/images/</guid><description>&lt;!--
reviewers:
- erictune
- thockin
title: Images
content_type: concept
weight: 10
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A container image represents binary data that encapsulates an application and all its
software dependencies. Container images are executable software bundles that can run
standalone and that make very well-defined assumptions about their runtime environment.

You typically create a container image of your application and push it to a registry
before referring to it in a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.

This page provides an outline of the container image concept.
--&gt;
&lt;p&gt;容器镜像（Image）所承载的是封装了应用程序及其所有软件依赖的二进制数据。
容器镜像是可执行的软件包，可以单独运行；该软件包对所处的运行时环境有着十分明确的假定。&lt;/p&gt;
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: Volumes
api_metadata:
- apiVersion: ""
 kind: "Volume"
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes _volumes_ provide a way for containers in a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='pod'&gt;pod&lt;/a&gt;
to access and share data via the filesystem. There are different kinds of volume that you can use for different purposes,
such as:
--&gt;
&lt;p&gt;Kubernetes &lt;strong&gt;卷&lt;/strong&gt;为 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
中的容器提供了一种通过文件系统访问和共享数据的方式。存在不同类别的卷，你可以将其用于各种用途，例如：&lt;/p&gt;</description></item><item><title>文档内容指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/</guid><description>&lt;!--
title: Documentation Content Guide
linktitle: Content guide
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page contains guidelines for Kubernetes documentation.

If you have questions about what's allowed, join the #sig-docs channel in
[Kubernetes Slack](https://slack.k8s.io/) and ask!

You can register for Kubernetes Slack at https://slack.k8s.io/.

For information on creating new content for the Kubernetes
docs, follow the [style guide](/docs/contribute/style/style-guide).
--&gt;
&lt;p&gt;本页包含 Kubernetes 文档的一些指南。&lt;/p&gt;
&lt;p&gt;如果你不清楚哪些事情是可以做的，请加入到
&lt;a href="https://slack.k8s.io/"&gt;Kubernetes Slack&lt;/a&gt; 的 &lt;code&gt;#sig-docs&lt;/code&gt; 频道提问！
你可以在 &lt;a href="https://slack.k8s.io"&gt;https://slack.k8s.io&lt;/a&gt; 注册到 Kubernetes Slack。&lt;/p&gt;</description></item><item><title>配置聚合层</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/</guid><description>&lt;!--
title: Configure the Aggregation Layer
reviewers:
- lavalamp
- cheftako
- chenopis
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
allows the Kubernetes apiserver to be extended with additional APIs, which are not
part of the core Kubernetes APIs.
--&gt;
&lt;p&gt;配置&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/"&gt;聚合层&lt;/a&gt;可以让
Kubernetes apiserver 通过额外的 API 得到扩展，这些 API 并不属于核心 Kubernetes API。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>评审 PR</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/reviewing-prs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/reviewing-prs/</guid><description>&lt;!--
title: Reviewing pull requests
content_type: concept
main_menu: true
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests.

Reviewing documentation pull requests is a
great way to introduce yourself to the Kubernetes community.
It helps you learn the code base and build trust with other contributors.

Before reviewing, it's a good idea to:

- Read the [content guide](/docs/contribute/style/content-guide/) and
 [style guide](/docs/contribute/style/style-guide/) so you can leave informed comments.
- Understand the different
 [roles and responsibilities](/docs/contribute/participate/roles-and-responsibilities/)
 in the Kubernetes documentation community.
--&gt;
&lt;p&gt;任何人均可评审文档的拉取请求。
访问 Kubernetes 网站仓库的 &lt;a href="https://github.com/kubernetes/website/pulls"&gt;pull requests&lt;/a&gt; 部分，
可以查看所有待处理的拉取请求（PR）。&lt;/p&gt;</description></item><item><title>审计注解</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/labels-annotations-taints/audit-annotations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/labels-annotations-taints/audit-annotations/</guid><description>&lt;!--
title: "Audit Annotations"
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page serves as a reference for the audit annotations of the kubernetes.io
namespace. These annotations apply to `Event` object from API group
`audit.k8s.io`.
--&gt;
&lt;p&gt;本页面作为 kubernetes.io 名字空间的审计注解的参考。这些注解适用于 API 组
&lt;code&gt;audit.k8s.io&lt;/code&gt; 中的 &lt;code&gt;Event&lt;/code&gt; 对象。&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
The following annotations are not used within the Kubernetes API. When you
[enable auditing](/docs/tasks/debug/debug-cluster/audit/) in your cluster,
audit event data is written using `Event` from API group `audit.k8s.io`.
The annotations apply to audit events. Audit events are different from objects in the
[Event API](/docs/reference/kubernetes-api/cluster-resources/event-v1/) (API group
`events.k8s.io`).
--&gt;
&lt;p&gt;Kubernetes API 中不使用以下注解。当你在集群中&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/audit/"&gt;启用审计&lt;/a&gt;时，
审计事件数据将使用 API 组 &lt;code&gt;audit.k8s.io&lt;/code&gt; 中的 &lt;code&gt;Event&lt;/code&gt; 写入。这些注解适用于审计事件。
审计事件不同于 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/"&gt;Event API&lt;/a&gt;
（API 组 &lt;code&gt;events.k8s.io&lt;/code&gt;）中的对象。&lt;/p&gt;</description></item><item><title>使用 Antrea 提供 NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/antrea-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/antrea-network-policy/</guid><description>&lt;!--
---
title: Use Antrea for NetworkPolicy
content_type: task
weight: 10
---
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to install and use Antrea CNI plugin on Kubernetes.
For background on Project Antrea, read the [Introduction to Antrea](https://antrea.io/docs/).
--&gt;
&lt;p&gt;本页展示了如何在 Kubernetes 上安装和使用 Antrea CNI 插件。
要了解 Antrea 项目的背景，请阅读 &lt;a href="https://antrea.io/docs/"&gt;Antrea 介绍&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster. Follow the
[kubeadm getting started guide](/docs/reference/setup-tools/kubeadm/) to bootstrap one.
--&gt;
&lt;p&gt;你需要拥有一个 Kubernetes 集群。
遵循 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;kubeadm 入门指南&lt;/a&gt;自行创建一个。&lt;/p&gt;</description></item><item><title>使用 CronJob 运行自动化任务</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/</guid><description>&lt;!--
title: Running Automated Tasks with a CronJob
min-kubernetes-server-version: v1.21
reviewers:
- chenopis
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to run automated tasks using Kubernetes &lt;a class='glossary-tooltip' title='周期调度的任务（作业）。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/cron-jobs/' target='_blank' aria-label='CronJob'&gt;CronJob&lt;/a&gt; object.
--&gt;
&lt;p&gt;本页演示如何使用 Kubernetes &lt;a class='glossary-tooltip' title='周期调度的任务（作业）。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/cron-jobs/' target='_blank' aria-label='CronJob'&gt;CronJob&lt;/a&gt;
对象运行自动化任务。&lt;/p&gt;
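作为示意，一个最小的 CronJob 清单大致如下（其中的名称、调度表达式与镜像均为示例假定，并非本页正文给出的内容）：

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello                  # 示例名称（假定）
spec:
  schedule: "*/1 * * * *"      # Cron 表达式：每分钟运行一次
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            command: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure
```

将其保存为文件后，可以用 kubectl（例如 kubectl apply -f 文件名）在集群中创建该 CronJob。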
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 Deployment 运行一个无状态应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This page shows how to run an application using a Kubernetes Deployment object.
--&gt;
&lt;p&gt;本文介绍如何通过 Kubernetes Deployment 对象去运行一个应用。&lt;/p&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
- Create an nginx deployment.
- Use kubectl to list information about the deployment.
- Update the deployment.
--&gt;
&lt;ul&gt;
&lt;li&gt;创建一个 nginx Deployment。&lt;/li&gt;
&lt;li&gt;使用 kubectl 列举该 Deployment 的相关信息。&lt;/li&gt;
&lt;li&gt;更新该 Deployment。&lt;/li&gt;
&lt;/ul&gt;
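作为示意，一个最小的 nginx Deployment 清单大致如下（名称与镜像标签为示例假定）：

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment       # 示例名称（假定）
spec:
  replicas: 2                  # 运行 2 个 Pod 副本
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27      # 示例镜像标签（假定）
        ports:
        - containerPort: 80
```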
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 kubectl 创建 Deployment</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/</guid><description>&lt;!--
title: Using kubectl to Create a Deployment
weight: 10
--&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Learn about application Deployments.
* Deploy your first app on Kubernetes with kubectl.
--&gt;
&lt;ul&gt;
&lt;li&gt;了解应用的 Deployment。&lt;/li&gt;
&lt;li&gt;使用 kubectl 在 Kubernetes 上部署第一个应用。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Kubernetes Deployments
--&gt;
&lt;h2 id="kubernetes-deployment"&gt;Kubernetes Deployment&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;!--
_A Deployment is responsible for creating and updating instances of your application._
--&gt;
&lt;p&gt;&lt;strong&gt;Deployment 负责创建和更新应用的实例&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
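创建 Deployment 的一种方式是使用 kubectl 命令行（以下命令需要一个可用的集群，名称与镜像为示例假定）：

```shell
# 创建一个名为 nginx 的 Deployment（名称与镜像为示例）
kubectl create deployment nginx --image=nginx:1.27

# 列出当前的 Deployment
kubectl get deployments
```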

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
This tutorial uses a container that requires the AMD64 architecture. If you are using
minikube on a computer with a different CPU architecture, you could try using minikube with
a driver that can emulate AMD64. For example, the Docker Desktop driver can do this.
--&gt;
&lt;p&gt;本教程使用了一个需要 AMD64 架构的容器。如果你的计算机采用其他
CPU 架构并在其上运行 Minikube，可以尝试使用能够模拟 AMD64
的 Minikube 驱动程序。例如，Docker Desktop 驱动程序可以实现这一点。&lt;/p&gt;</description></item><item><title>使用 kubectl 管理 Secret</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/</guid><description>&lt;!--
title: Managing Secrets using kubectl
content_type: task
weight: 10
description: Creating Secret objects using kubectl command line.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows you how to create, edit, manage, and delete Kubernetes
&lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt; using the `kubectl`
command-line tool.
--&gt;
&lt;p&gt;本页向你展示如何使用 &lt;code&gt;kubectl&lt;/code&gt; 命令行工具来创建、编辑、管理和删除
Kubernetes &lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt;。&lt;/p&gt;
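下面是一组示意性的命令，展示用 kubectl 创建、查看、编辑和删除 Secret 的基本方式（其中的 Secret 名称与键值为示例假定，需要一个可用的集群）：

```shell
# 基于字面值创建一个 Secret（名称与键值为示例）
kubectl create secret generic db-user-pass \
  --from-literal=username=admin \
  --from-literal=password='S3cr3t!'

# 查看、编辑与删除该 Secret
kubectl get secret db-user-pass
kubectl edit secret db-user-pass
kubectl delete secret db-user-pass
```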
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 Minikube 创建集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/</guid><description>&lt;!--
title: Using Minikube to Create a Cluster
weight: 10
--&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Learn what a Kubernetes cluster is.
* Learn what Minikube is.
* Start a Kubernetes cluster on your computer.
--&gt;
&lt;ul&gt;
&lt;li&gt;了解 Kubernetes 集群。&lt;/li&gt;
&lt;li&gt;了解 Minikube。&lt;/li&gt;
&lt;li&gt;在你的电脑上启动一个 Kubernetes 集群。&lt;/li&gt;
&lt;/ul&gt;
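在已安装 minikube 的前提下，启动并检查本地集群的典型命令如下（示意）：

```shell
# 在本机启动一个单节点 Kubernetes 集群
minikube start

# 查看集群状态与节点
minikube status
kubectl get nodes
```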
&lt;!--
## Kubernetes Clusters
--&gt;
&lt;h2 id="kubernetes-集群"&gt;Kubernetes 集群&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;!--
_Kubernetes is a production-grade, open-source platform that orchestrates
the placement (scheduling) and execution of application containers
within and across computer clusters._
--&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes 是一个生产级别的开源平台，
可编排在计算机集群内和跨计算机集群的应用容器的部署（调度）和执行。&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>使用 Service 公开你的应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/expose/expose-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/expose/expose-intro/</guid><description>&lt;!--
title: Using a Service to Expose Your App
weight: 10
--&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Learn about a Service in Kubernetes.
* Understand how labels and selectors relate to a Service.
* Expose an application outside a Kubernetes cluster.
--&gt;
&lt;ul&gt;
&lt;li&gt;了解 Kubernetes 中的 Service&lt;/li&gt;
&lt;li&gt;了解标签（Label）和选择算符（Selector）如何与 Service 关联&lt;/li&gt;
&lt;li&gt;用 Service 向 Kubernetes 集群外公开应用&lt;/li&gt;
&lt;/ul&gt;
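作为示意，可以使用 kubectl expose 将一个 Deployment 以 Service 的形式向集群外公开（其中的 Deployment 名称与端口为示例假定，需要一个可用的集群）：

```shell
# 将名为 hello-node 的 Deployment 以 NodePort 类型的 Service 公开（名称与端口为示例）
kubectl expose deployment hello-node --type=NodePort --port=8080

# 查看创建出的 Service
kubectl get services
```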
&lt;!--
## Overview of Kubernetes Services

Kubernetes [Pods](/docs/concepts/workloads/pods/) are mortal. Pods have a
[lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/). When a worker node dies,
the Pods running on the Node are also lost. A [Replicaset](/docs/concepts/workloads/controllers/replicaset/)
might then dynamically drive the cluster back to the desired state via the creation
of new Pods to keep your application running. As another example, consider an image-processing
backend with 3 replicas. Those replicas are exchangeable; the front-end system should
not care about backend replicas or even if a Pod is lost and recreated. That said,
each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node,
so there needs to be a way of automatically reconciling changes among Pods so that your
applications continue to function.
--&gt;
&lt;h2 id="overview-of-kubernetes-services"&gt;Kubernetes Service 概述&lt;/h2&gt;
&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/"&gt;Pod&lt;/a&gt; 并非永久存续；
每个 Pod 都有自己的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/"&gt;生命周期&lt;/a&gt;。
当一个工作节点停止工作后，在节点上运行的 Pod 也会消亡。
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/"&gt;ReplicaSet&lt;/a&gt;
会自动地通过创建新的 Pod 驱动集群回到期望状态，以保证应用正常运行。
换一个例子，考虑一个具有 3 个副本的用作图像处理的后端程序。
这些副本是彼此可替换的。前端系统不应该关心后端副本，即使某个 Pod 丢失或被重新创建。
此外，Kubernetes 集群中的每个 Pod 都有一个唯一的 IP 地址，即使是在同一个 Node 上的 Pod 也是如此，
因此需要一种方法来自动协调 Pod 集合中的变化，以便应用保持运行。&lt;/p&gt;</description></item><item><title>使用配置文件对 Kubernetes 对象进行声明式管理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config/</guid><description>&lt;!--
title: Declarative Management of Kubernetes Objects Using Configuration Files
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes objects can be created, updated, and deleted by storing multiple
object configuration files in a directory and using `kubectl apply` to
recursively create and update those objects as needed. This method
retains writes made to live objects without merging the changes
back into the object configuration files. `kubectl diff` also gives you a
preview of what changes `apply` will make.
--&gt;
&lt;p&gt;通过在一个目录中存储多个对象配置文件，并使用 &lt;code&gt;kubectl apply&lt;/code&gt;
按需递归地创建和更新这些对象，你可以创建、更新和删除 Kubernetes 对象。
这种方法会保留对现有对象已作出的修改，而不会将这些更改写回到对象配置文件中。
&lt;code&gt;kubectl diff&lt;/code&gt; 也会给你呈现 &lt;code&gt;apply&lt;/code&gt; 将作出的变更的预览。&lt;/p&gt;</description></item><item><title>特性门控</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/</guid><description>&lt;!--
title: Feature Gates
weight: 10
content_type: concept
card:
 name: reference
 weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page contains an overview of the various feature gates an administrator
can specify on different Kubernetes components.

See [feature stages](#feature-stages) for an explanation of the stages for a feature.
--&gt;
&lt;p&gt;本页概述了管理员可以在不同的 Kubernetes 组件上指定的各种特性门控。&lt;/p&gt;
&lt;p&gt;关于特性各个阶段的说明，请参见&lt;a href="#feature-stages"&gt;特性阶段&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Overview

Feature gates are a set of key=value pairs that describe Kubernetes features.
You can turn these features on or off using the `--feature-gates` command line flag
on each Kubernetes component.
--&gt;
&lt;h2 id="overview"&gt;概述&lt;/h2&gt;
&lt;p&gt;特性门控是描述 Kubernetes 特性的一组键值对。你可以在 Kubernetes 的各个组件中使用
&lt;code&gt;--feature-gates&lt;/code&gt; 标志来启用或禁用这些特性。&lt;/p&gt;</description></item><item><title>添加 Linux 工作节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/</guid><description>&lt;!--
title: Adding Linux worker nodes
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to add Linux worker nodes to a kubeadm cluster.
--&gt;
&lt;p&gt;本页介绍如何将 Linux 工作节点添加到 kubeadm 集群。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* Each joining worker node has installed the required components from
[Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/), such as,
kubeadm, the kubelet and a &lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;.
* A running kubeadm cluster created by `kubeadm init` and following the steps
in the document [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
* You need superuser access to the node.
--&gt;
&lt;ul&gt;
&lt;li&gt;每个要加入的工作节点都已按照
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/"&gt;安装 kubeadm&lt;/a&gt;
中的说明安装了所需的组件，例如 kubeadm、kubelet 和
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='容器运行时'&gt;容器运行时&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;一个正在运行的、由 &lt;code&gt;kubeadm init&lt;/code&gt; 命令所创建的 kubeadm 集群，且该集群的创建遵循
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;使用 kubeadm 创建集群&lt;/a&gt;
文档中所给的步骤。&lt;/li&gt;
&lt;li&gt;你需要对节点拥有超级用户权限。&lt;/li&gt;
&lt;/ul&gt;
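满足上述条件后，将节点加入集群时所运行的命令形式大致如下（服务器地址、令牌与哈希值均为占位示例，实际值来自 kubeadm init 的输出）：

```shell
# 在要加入的节点上以 root 身份运行；各参数值为占位示例
kubeadm join 192.0.2.10:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdef...
```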
&lt;!-- steps --&gt;
&lt;!--
## Adding Linux worker nodes

To add new Linux worker nodes to your cluster do the following for each machine:

1. Connect to the machine by using SSH or another method.
1. Run the command that was output by `kubeadm init`. For example:

### Additional information for kubeadm join
--&gt;
&lt;h2 id="additional-information-for-kubeadm-join"&gt;添加 Linux 工作节点&lt;/h2&gt;
&lt;p&gt;要将新的 Linux 工作节点添加到集群中，请对每台机器执行以下步骤：&lt;/p&gt;</description></item><item><title>网络插件</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/</guid><description>&lt;!--
reviewers:
- dcbw
- freehan
- thockin
title: Network Plugins
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
Kubernetes (version 1.3 through to the latest 1.35, and likely onwards) lets you use
[Container Network Interface](https://github.com/containernetworking/cni)
(CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your
cluster and that suits your needs. Different plugins are available (both open- and closed- source)
in the wider Kubernetes ecosystem.
--&gt;
&lt;p&gt;Kubernetes（1.3 版本至最新 1.35，并可能包括未来版本）
允许你使用&lt;a href="https://github.com/containernetworking/cni"&gt;容器网络接口&lt;/a&gt;（CNI）
插件来完成集群联网。
你必须使用和你的集群相兼容并且满足你的需求的 CNI 插件。
在更广泛的 Kubernetes 生态系统中你可以使用不同的插件（开源和闭源）。&lt;/p&gt;</description></item><item><title>为集群超配节点容量</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/node-overprovisioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/node-overprovisioning/</guid><description>&lt;!--
title: Overprovision Node Capacity For A Cluster
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page guides you through configuring &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='Node'&gt;Node&lt;/a&gt;
overprovisioning in your Kubernetes cluster. Node overprovisioning is a strategy that proactively
reserves a portion of your cluster's compute resources. This reservation helps reduce the time
required to schedule new pods during scaling events, enhancing your cluster's responsiveness
to sudden spikes in traffic or workload demands.

By maintaining some unused capacity, you ensure that resources are immediately available when
new pods are created, preventing them from entering a pending state while the cluster scales up.
--&gt;
&lt;p&gt;本页指导你在 Kubernetes 集群中配置&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;超配。
节点超配是一种主动预留部分集群计算资源的策略。这种预留有助于减少在扩缩容事件期间调度新 Pod 所需的时间，
从而增强集群对突发流量或突发工作负载需求的响应能力。&lt;/p&gt;</description></item><item><title>为命名空间配置默认的内存请求和限制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/</guid><description>&lt;!--
title: Configure Default Memory Requests and Limits for a Namespace
content_type: task
weight: 10
description: &gt;-
 Define a default memory resource limit for a namespace, so that every new Pod
 in that namespace has a memory resource limit configured.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure default memory requests and limits for a
&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.

A Kubernetes cluster can be divided into namespaces. Once you have a namespace that
has a default memory
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
and you then try to create a Pod with a container that does not specify its own memory
limit, then the
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; assigns the default
memory limit to that container.

Kubernetes assigns a default memory request under certain conditions that are explained later in this topic.
--&gt;
&lt;p&gt;本章介绍如何为&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;配置默认的内存请求和限制。&lt;/p&gt;</description></item><item><title>为容器和 Pod 分配内存资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/</guid><description>&lt;!--
title: Assign Memory Resources to Containers and Pods
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to assign a memory *request* and a memory *limit* to a
Container. A Container is guaranteed to have as much memory as it requests,
but is not allowed to use more memory than its limit.
--&gt;
&lt;p&gt;此页面展示如何将内存&lt;strong&gt;请求&lt;/strong&gt;（request）和内存&lt;strong&gt;限制&lt;/strong&gt;（limit）分配给一个容器。
容器可以保证获得它所请求数量的内存，但不允许使用超过限制数量的内存。&lt;/p&gt;
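下面是一个最小示例清单（其中的名称与镜像为演示用的假设值，并非来自原文），展示如何为容器同时设置内存请求与内存限制：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo        # 示例名称，可自行替换
spec:
  containers:
  - name: app
    image: nginx:1.25      # 示例镜像
    resources:
      requests:
        memory: "64Mi"     # 调度时为容器保证的内存量
      limits:
        memory: "128Mi"    # 容器可使用的内存上限
```

若容器尝试使用超过 limit 的内存，该容器会被终止（OOMKilled）。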
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议在至少有两个节点的集群上运行本教程，且这些节点不作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>为容器设置启动时要执行的命令和参数</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/</guid><description>&lt;!--
title: Define a Command and Arguments for a Container
content_type: task
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to define commands and arguments when you run a container
in a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.
--&gt;
&lt;p&gt;本页将展示如何为 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
中的容器设置启动时要执行的命令及其参数。&lt;/p&gt;
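一个最小示例（名称与镜像为演示用的假设值），展示 command 与 args 字段分别覆盖镜像的 ENTRYPOINT 与 CMD：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: demo
    image: busybox:1.36
    command: ["printenv"]        # 覆盖镜像的 ENTRYPOINT
    args: ["HOSTNAME", "HOME"]   # 覆盖镜像的 CMD，作为命令的参数
```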
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议在至少有两个节点的集群上运行本教程，且这些节点不作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>限制范围（LimitRange）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/limit-range/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/limit-range/</guid><description>&lt;!--
reviewers:
- nelvadas
title: Limit Ranges
api_metadata:
- apiVersion: "v1"
 kind: "LimitRange"
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
By default, containers run with unbounded
[compute resources](/docs/concepts/configuration/manage-resources-containers/) on a Kubernetes cluster.
Using Kubernetes [resource quotas](/docs/concepts/policy/resource-quotas/),
administrators (also termed _cluster operators_) can restrict consumption and creation
of cluster resources (such as CPU time, memory, and persistent storage) within a specified
&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.
Within a namespace, a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; can consume as much CPU and memory
as is allowed by the ResourceQuotas that apply to that namespace.
As a cluster operator, or as a namespace-level administrator, you might also be concerned
about making sure that a single object cannot monopolize all available resources within a namespace.

A LimitRange is a policy to constrain the resource allocations (limits and requests) that you can specify for
each applicable object kind (such as Pod or &lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PersistentVolumeClaim'&gt;PersistentVolumeClaim&lt;/a&gt;)
in a namespace.
--&gt;
&lt;p&gt;默认情况下，Kubernetes 集群上运行的容器使用的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/manage-resources-containers/"&gt;计算资源&lt;/a&gt;没有限制。
使用 Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/resource-quotas/"&gt;资源配额&lt;/a&gt;，
管理员（也称为&lt;strong&gt;集群操作者&lt;/strong&gt;）可以在一个指定的&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;内限制集群资源的使用与创建。
在命名空间中，一个 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 最多能够使用命名空间的资源配额所定义的 CPU 和内存用量。
作为集群操作者或命名空间级的管理员，你可能也会担心如何确保一个 Pod 不会垄断命名空间内所有可用的资源。&lt;/p&gt;</description></item><item><title>以独立模式运行 kubelet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/cluster-management/kubelet-standalone/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/cluster-management/kubelet-standalone/</guid><description>&lt;!--
title: Running Kubelet in Standalone Mode
content_type: tutorial
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial shows you how to run a standalone kubelet instance.

You may have different motivations for running a standalone kubelet.
This tutorial is aimed at introducing you to Kubernetes, even if you don't have
much experience with it. You can follow this tutorial and learn about node setup,
basic (static) Pods, and how Kubernetes manages containers.
--&gt;
&lt;p&gt;本教程将向你展示如何运行一个独立的 kubelet 实例。&lt;/p&gt;</description></item><item><title>用户认证</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/authentication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/authentication/</guid><description>&lt;!--
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Authenticating
content_type: concept
weight: 10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an overview of authentication in Kubernetes, with a focus on
authentication to the [Kubernetes API](/docs/concepts/overview/kubernetes-api/).
--&gt;
&lt;p&gt;本页提供 Kubernetes 中身份认证有关的概述，重点介绍与
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/"&gt;Kubernetes API&lt;/a&gt; 有关的身份认证。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Users in Kubernetes

All Kubernetes clusters have two categories of users: service accounts managed
by Kubernetes, and normal users.

It is assumed that a cluster-independent service manages normal users in the following ways:

- an administrator distributing private keys
- a user store like Keystone or Google Accounts
- a file with a list of usernames and passwords

In this regard, _Kubernetes does not have objects which represent normal user accounts._
Normal users cannot be added to a cluster through an API call.
--&gt;
&lt;h2 id="users-in-kubernetes"&gt;Kubernetes 中的用户&lt;/h2&gt;
&lt;p&gt;所有 Kubernetes 集群都有两类用户：由 Kubernetes 管理的服务账号和普通用户。&lt;/p&gt;</description></item><item><title>云原生安全和 Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/cloud-native-security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/cloud-native-security/</guid><description>&lt;!--
---
title: "Cloud Native Security and Kubernetes"
linkTitle: "Cloud Native Security"
weight: 10

# The section index lists this explicitly
hide_summary: true

description: &gt;
 Concepts for keeping your cloud native workload secure.
---
--&gt;
&lt;!-- 
Kubernetes is based on a cloud native architecture and draws on advice from the
&lt;a class='glossary-tooltip' title='云原生计算基金会（Cloud Native Computing Foundation）' data-toggle='tooltip' data-placement='top' href='https://cncf.io/' target='_blank' aria-label='CNCF'&gt;CNCF&lt;/a&gt; about good practices for
cloud native information security. 
--&gt;
&lt;p&gt;Kubernetes 基于云原生架构，并借鉴了
&lt;a class='glossary-tooltip' title='云原生计算基金会（Cloud Native Computing Foundation）' data-toggle='tooltip' data-placement='top' href='https://cncf.io/' target='_blank' aria-label='CNCF'&gt;CNCF&lt;/a&gt;
有关云原生信息安全良好实践的建议。&lt;/p&gt;</description></item><item><title>运行多实例的应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/scale/scale-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/scale/scale-intro/</guid><description>&lt;!--
title: Running Multiple Instances of Your App
weight: 10
--&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Scale an existing app manually using kubectl.
--&gt;
&lt;ul&gt;
&lt;li&gt;使用 kubectl 手动扩缩现有的应用&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Scaling an application
--&gt;
&lt;h2 id="扩缩应用"&gt;扩缩应用&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;!--
_You can create from the start a Deployment with multiple instances using the --replicas
parameter for the kubectl create deployment command._
--&gt;
&lt;p&gt;&lt;strong&gt;通过在使用 &lt;code&gt;kubectl create deployment&lt;/code&gt; 命令时设置 &lt;code&gt;--replicas&lt;/code&gt; 参数，
你可以从一开始就创建具有多个实例的 Deployment。&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
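上面的提示可以用一个（仅作演示的）Deployment 清单来说明：扩缩就是修改 spec.replicas 的取值：

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment    # 示例名称
spec:
  replicas: 3              # 期望的 Pod 实例数，修改此值即可扩缩
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.25  # 示例镜像
```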
&lt;!--
Previously we created a [Deployment](/docs/concepts/workloads/controllers/deployment/),
and then exposed it publicly via a [Service](/docs/concepts/services-networking/service/).
The Deployment created only one Pod for running our application. When traffic increases,
we will need to scale the application to keep up with user demand.

If you haven't worked through the earlier sections, start from
[Using minikube to create a cluster](/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/).

_Scaling_ is accomplished by changing the number of replicas in a Deployment.
--&gt;
&lt;p&gt;之前我们创建了一个 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/"&gt;Deployment&lt;/a&gt;，
然后通过 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt; 让其可以公开访问。
Deployment 仅创建了一个 Pod 用于运行这个应用。当流量增加时，我们需要扩容应用满足用户需求。&lt;/p&gt;</description></item><item><title>在 Linux 系统中安装并设置 kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/install-kubectl-linux/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/install-kubectl-linux/</guid><description>&lt;!-- 
reviewers:
- mikedanese
title: Install and Set Up kubectl on Linux
content_type: task
weight: 10
--&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.35 client can communicate
with v1.34, v1.35,
and v1.36 control planes.
Using the latest compatible version of kubectl helps avoid unforeseen issues.
--&gt;
&lt;p&gt;kubectl 版本和集群版本之间的差异必须在一个小版本号内。
例如：v1.35 版本的客户端能与 v1.34、
v1.35 和 v1.36 版本的控制面通信。
用最新兼容版的 kubectl 有助于避免不可预见的问题。&lt;/p&gt;</description></item><item><title>在 macOS 系统上安装和设置 kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/install-kubectl-macos/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/install-kubectl-macos/</guid><description>&lt;!-- 
reviewers:
- mikedanese
title: Install and Set Up kubectl on macOS
content_type: task
weight: 10
--&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.35 client can communicate
with v1.34, v1.35,
and v1.36 control planes.
Using the latest compatible version of kubectl helps avoid unforeseen issues.
--&gt;
&lt;p&gt;kubectl 版本和集群版本之间的差异必须在一个小版本号之内。
例如：v1.35 版本的客户端能与 v1.34、
v1.35 和 v1.36 版本的控制面通信。
用最新兼容版本的 kubectl 有助于避免不可预见的问题。&lt;/p&gt;</description></item><item><title>在 Windows 上安装 kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/install-kubectl-windows/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/install-kubectl-windows/</guid><description>&lt;!--
reviewers:
- mikedanese
title: Install and Set Up kubectl on Windows
content_type: task
weight: 10
--&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.35 client can communicate
with v1.34, v1.35,
and v1.36 control planes.
Using the latest compatible version of kubectl helps avoid unforeseen issues.
--&gt;
&lt;p&gt;kubectl 版本和集群版本之间的差异必须在一个小版本号内。
例如：v1.35 版本的客户端能与 v1.34、
v1.35 和 v1.36 版本的控制面通信。
用最新兼容版的 kubectl 有助于避免不可预见的问题。&lt;/p&gt;</description></item><item><title>在集群级别应用 Pod 安全标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/cluster-level-pss/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/cluster-level-pss/</guid><description>&lt;!--
title: Apply Pod Security Standards at the Cluster Level
content_type: tutorial
weight: 10
--&gt;
&lt;div class="alert alert-primary" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;说明：&lt;/div&gt;
&lt;!--
This tutorial applies only for new clusters.
--&gt;
&lt;p&gt;本教程仅适用于新集群。&lt;/p&gt;
&lt;/div&gt;
&lt;!--
Pod Security is an admission controller that carries out checks against the Kubernetes
[Pod Security Standards](/docs/concepts/security/pod-security-standards/) when new pods are
created. It is a feature GA'ed in v1.25.
This tutorial shows you how to enforce the `baseline` Pod Security
Standard at the cluster level which applies a standard configuration
to all namespaces in a cluster.

To apply Pod Security Standards to specific namespaces, refer to
[Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss).

If you are running a version of Kubernetes other than v1.35,
check the documentation for that version.
--&gt;
&lt;p&gt;Pod 安全是一个准入控制器，当新的 Pod 被创建时，它会根据 Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/"&gt;Pod 安全标准&lt;/a&gt;
进行检查。这是在 v1.25 中达到正式发布（GA）的功能。
本教程将向你展示如何在集群级别实施 &lt;code&gt;baseline&lt;/code&gt; Pod 安全标准，
该标准将标准配置应用于集群中的所有名字空间。&lt;/p&gt;</description></item><item><title>在集群中安装 DRA</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/</guid><description>&lt;!--
title: "Set Up DRA in a Cluster"
content_type: task
min-kubernetes-server-version: v1.34
weight: 10
--&gt;
 &lt;div class="feature-state-notice feature-stable" title="特性门控：DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;!--
This page shows you how to configure _dynamic resource allocation (DRA)_ in a
Kubernetes cluster by enabling API groups and configuring classes of devices.
These instructions are for cluster administrators.
--&gt;
&lt;p&gt;本文介绍如何在 Kubernetes 集群中通过启用 API 组并配置设备类别来设置&lt;strong&gt;动态资源分配（DRA）&lt;/strong&gt;。
本说明面向集群管理员。&lt;/p&gt;
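作为说明，下面给出一个 DeviceClass 清单的草案（驱动名 gpu.example.com 为假设值，具体字段以集群所启用的 resource.k8s.io API 版本为准）：

```yaml
apiVersion: resource.k8s.io/v1
kind: DeviceClass
metadata:
  name: example-gpu
spec:
  selectors:
  - cel:
      # 使用 CEL 表达式选择由该（假设的）驱动管理的设备
      expression: 'device.driver == "gpu.example.com"'
```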
&lt;!-- body --&gt;
&lt;!--
## About DRA {#about-dra}
--&gt;
&lt;h2 id="about-dra"&gt;关于 DRA&lt;/h2&gt;
&lt;!--
title: Dynamic Resource Allocation
id: dra
date: 2025-05-13
full_link: /docs/concepts/scheduling-eviction/dynamic-resource-allocation/
short_description: &gt;
 A Kubernetes feature for requesting and sharing resources, like hardware
 accelerators, among Pods.

aka:
- DRA
tags:
- extension
--&gt;
&lt;!--
A Kubernetes feature that lets you request and share resources among Pods.
These resources are often attached
&lt;a class='glossary-tooltip' title='直接或间接挂接到集群节点上的所有资源，例如 GPU 或电路板。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-device' target='_blank' aria-label='devices'&gt;devices&lt;/a&gt; like hardware
accelerators.
--&gt;
&lt;p&gt;Kubernetes 提供的一项特性，允许你在多个 Pod 之间请求和共享资源。
这些资源通常是挂接的&lt;a class='glossary-tooltip' title='直接或间接挂接到集群节点上的所有资源，例如 GPU 或电路板。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-device' target='_blank' aria-label='设备'&gt;设备&lt;/a&gt;，例如硬件加速器。&lt;/p&gt;</description></item><item><title>执行滚动更新</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/update/update-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/kubernetes-basics/update/update-intro/</guid><description>&lt;!--
title: Performing a Rolling Update
weight: 10
--&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
Perform a rolling update using kubectl.
--&gt;
&lt;p&gt;使用 kubectl 执行滚动更新。&lt;/p&gt;
&lt;!--
## Updating an application
--&gt;
&lt;h2 id="更新应用"&gt;更新应用&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;!--
_Rolling updates allow Deployments' update to take place with zero downtime by
incrementally updating Pods instances with new ones._
--&gt;
&lt;p&gt;&lt;strong&gt;滚动更新通过使用新的 Pod 实例逐步替换现有实例，使 Deployment
的更新可以在零停机的情况下进行。&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
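滚动更新的节奏可以通过 Deployment 的更新策略字段来控制，下面是一个示例（名称、镜像与数值均为演示用的假设值）：

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # 更新期间最多允许 1 个 Pod 不可用
      maxSurge: 1         # 最多允许超出期望副本数 1 个 Pod
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.25
```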
&lt;!--
Users expect applications to be available all the time, and developers are expected
to deploy new versions of them several times a day. In Kubernetes this is done with
rolling updates. A **rolling update** allows a Deployment update to take place with
zero downtime. It does this by incrementally replacing the current Pods with new ones.
The new Pods are scheduled on Nodes with available resources, and Kubernetes waits
for those new Pods to start before removing the old Pods.
--&gt;
&lt;p&gt;用户希望应用程序始终可用，而开发人员则需要每天多次部署它们的新版本。
在 Kubernetes 中，这些是通过滚动更新（Rolling Update）完成的。
&lt;strong&gt;滚动更新&lt;/strong&gt;允许通过使用新的实例逐步更新 Pod 实例，实现零停机的 Deployment 更新。
新的 Pod 将被调度到具有可用资源的节点上。&lt;/p&gt;</description></item><item><title>CronJob</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1/</guid><description>&lt;!--
api_metadata:
apiVersion: "batch/v1"
import: "k8s.io/api/batch/v1"
kind: "CronJob"
content_type: "api_reference"
description: "CronJob represents the configuration of a single cron job."
title: "CronJob"
weight: 11
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: batch/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/batch/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CronJob"&gt;CronJob&lt;/h2&gt;
&lt;!--
CronJob represents the configuration of a single cron job.
--&gt;
&lt;p&gt;CronJob 代表单个定时作业（Cron Job）的配置。&lt;/p&gt;
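下面是一个最小的 CronJob 清单示例（名称与镜像为演示用的假设值），展示 schedule 与 jobTemplate 的基本用法：

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"     # 标准 cron 语法：每 5 分钟运行一次
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox:1.36
            command: ["echo", "Hello from CronJob"]
```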
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: batch/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: CronJob&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;p&gt;标准的对象元数据。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1/#CronJobSpec"&gt;CronJobSpec&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
--&gt;
&lt;p&gt;定时作业的预期行为的规约，包括排期表（Schedule）。更多信息：
&lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status&lt;/a&gt;&lt;/p&gt;</description></item><item><title>ResourceFieldSelector</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/resource-field-selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/resource-field-selector/</guid><description>&lt;!-- 
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "ResourceFieldSelector"
content_type: "api_reference"
description: "ResourceFieldSelector represents container resources (cpu, memory) and their output format."
title: "ResourceFieldSelector"
weight: 11
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
ResourceFieldSelector represents container resources (cpu, memory) and their output format
--&gt;
&lt;p&gt;ResourceFieldSelector 表示容器资源（CPU，内存）及其输出格式。&lt;/p&gt;
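下面是一个使用 resourceFieldRef 的示例片段（容器名等为演示用的假设值），将容器的 CPU 限制以毫核为单位暴露为环境变量：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-field-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT; sleep 3600"]
    resources:
      limits:
        cpu: "500m"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app     # 对环境变量可选，对卷必需
          resource: limits.cpu   # 要选择的资源
          divisor: 1m            # 输出格式：以毫核（m）为单位
```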
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;resource&lt;/strong&gt; (string)，必需&lt;/p&gt;
&lt;!--
Required: resource to select
--&gt;
&lt;p&gt;必需：选择的资源。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;containerName&lt;/strong&gt; (string)&lt;/p&gt;
&lt;!--
Container name: required for volumes, optional for env vars
--&gt;
&lt;p&gt;容器名称：对卷必需，对环境变量可选。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;divisor&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/quantity/#Quantity"&gt;Quantity&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Specifies the output format of the exposed resources, defaults to "1"
--&gt;
&lt;p&gt;指定所公开的资源的输出格式，默认值为 “1”。&lt;/p&gt;</description></item><item><title>VolumeAttachment</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storage.k8s.io/v1"
 import: "k8s.io/api/storage/v1"
 kind: "VolumeAttachment"
content_type: "api_reference"
description: "VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node."
title: "VolumeAttachment"
weight: 11
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="VolumeAttachment"&gt;VolumeAttachment&lt;/h2&gt;
&lt;!--
VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node.

VolumeAttachment objects are non-namespaced.
--&gt;
&lt;p&gt;VolumeAttachment 记录将指定卷挂接到指定节点或从该节点解除挂接的意图。&lt;/p&gt;
&lt;p&gt;VolumeAttachment 对象未划分命名空间。&lt;/p&gt;
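一个 VolumeAttachment 示例（字段取值为演示用的假设值）；此类对象通常由 Kubernetes 系统而非用户直接创建：

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
  name: example-attachment       # 注意：无命名空间作用域
spec:
  attacher: csi.example.com      # 负责挂接的 CSI 驱动名（假设值）
  nodeName: node-1               # 要挂接到的节点
  source:
    persistentVolumeName: pv-example
```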
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: storage.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: VolumeAttachment&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/#VolumeAttachmentSpec"&gt;VolumeAttachmentSpec&lt;/a&gt;), required

 spec represents specification of the desired attach/detach volume behavior. Populated by the Kubernetes system.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>常用参数</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-parameters/common-parameters/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-parameters/common-parameters/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: ""
 kind: "Common Parameters"
content_type: "api_reference"
description: ""
title: "Common Parameters"
weight: 11
auto_generated: true
--&gt;
&lt;h2 id="allowWatchBookmarks"&gt;allowWatchBookmarks&lt;/h2&gt;
&lt;!--
allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
--&gt;
&lt;p&gt;allowWatchBookmarks 字段请求类型为 BOOKMARK 的监视事件。
没有实现书签的服务器可能会忽略这个标志，并根据服务器的判断发送书签。
客户端不应该假设书签会在任何特定的时间间隔返回，也不应该假设服务器会在会话期间发送任何书签事件。
如果当前请求不是 watch 请求，则忽略该字段。&lt;/p&gt;</description></item><item><title>添加 Windows 工作节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/</guid><description>&lt;!--
title: Adding Windows worker nodes
content_type: task
weight: 11
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
This page explains how to add Windows worker nodes to a kubeadm cluster.
--&gt;
&lt;p&gt;本页介绍如何将 Windows 工作节点添加到 kubeadm 集群。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* A running [Windows Server 2022](https://www.microsoft.com/cloud-platform/windows-server-pricing)
(or higher) instance with administrative access.
* A running kubeadm cluster created by `kubeadm init` and following the steps
in the document [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
--&gt;
&lt;ul&gt;
&lt;li&gt;一个正在运行的 &lt;a href="https://www.microsoft.com/cloud-platform/windows-server-pricing"&gt;Windows Server 2022&lt;/a&gt;
（或更高版本）实例，且具备管理权限。&lt;/li&gt;
&lt;li&gt;一个正在运行的、由 &lt;code&gt;kubeadm init&lt;/code&gt; 命令创建的集群，且集群的创建遵循
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;使用 kubeadm 创建集群&lt;/a&gt;
文档中所给的步骤。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## Adding Windows worker nodes
--&gt;
&lt;h2 id="adding-windows-worker-nodes"&gt;添加 Windows 工作节点&lt;/h2&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
To facilitate the addition of Windows worker nodes to a cluster, PowerShell scripts from the repository
https://sigs.k8s.io/sig-windows-tools are used.
--&gt;
&lt;p&gt;为了方便将 Windows 工作节点添加到集群，下面会用到代码仓库
&lt;a href="https://sigs.k8s.io/sig-windows-tools"&gt;https://sigs.k8s.io/sig-windows-tools&lt;/a&gt; 里的 PowerShell 脚本。&lt;/p&gt;</description></item><item><title>HorizontalPodAutoscaler</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "autoscaling/v1"
 import: "k8s.io/api/autoscaling/v1"
 kind: "HorizontalPodAutoscaler"
content_type: "api_reference"
description: "configuration of a horizontal pod autoscaler."
title: "HorizontalPodAutoscaler"
weight: 12
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: autoscaling/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/autoscaling/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## HorizontalPodAutoscaler {#HorizontalPodAutoscaler}

configuration of a horizontal pod autoscaler.
--&gt;
&lt;h2 id="HorizontalPodAutoscaler"&gt;HorizontalPodAutoscaler&lt;/h2&gt;
&lt;p&gt;水平 Pod 自动扩缩器的配置。&lt;/p&gt;
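&lt;p&gt;下面是一个最小的 HorizontalPodAutoscaler 清单示意（其中 &lt;code&gt;example-hpa&lt;/code&gt; 和 &lt;code&gt;example-deployment&lt;/code&gt; 均为假设的名称）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 1
  maxReplicas: 10
  # 当 Pod 的平均 CPU 使用率超过 80% 时进行扩容
  targetCPUUtilizationPercentage: 80
&lt;/code&gt;&lt;/pre&gt;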
&lt;hr&gt;
&lt;!--
- **apiVersion**: autoscaling/v1

- **kind**: HorizontalPodAutoscaler
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: autoscaling/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: HorizontalPodAutoscaler&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;标准的对象元数据。
更多信息： &lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Status</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/status/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/status/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/apimachinery/pkg/apis/meta/v1"
 kind: "Status"
content_type: "api_reference"
description: "Status is a return value for calls that don't return other objects."
title: "Status"
weight: 12
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
Status is a return value for calls that don't return other objects.
--&gt;
&lt;p&gt;状态（Status）是不返回其他对象的调用的返回值。&lt;/p&gt;
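&lt;p&gt;例如，当所请求的对象不存在时，API 服务器可能返回类似下面的 Status（示意，其中对象名称为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"example-pod\" not found",
  "reason": "NotFound",
  "code": 404
}
&lt;/code&gt;&lt;/pre&gt;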
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt; (string)&lt;/p&gt;
&lt;!--
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources 
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion&lt;/code&gt; 定义对象表示的版本化模式。
服务器应将已识别的模式转换为最新的内部值，并可能拒绝无法识别的值。
更多信息： &lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources&lt;/a&gt;&lt;/p&gt;</description></item><item><title>VolumeAttributesClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume-attributes-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/volume-attributes-class-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "storage.k8s.io/v1"
 import: "k8s.io/api/storage/v1"
 kind: "VolumeAttributesClass"
content_type: "api_reference"
description: "VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver."
title: "VolumeAttributesClass"
weight: 12
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="VolumeAttributesClass"&gt;VolumeAttributesClass&lt;/h2&gt;
&lt;!--
VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver. The class can be specified during dynamic provisioning of PersistentVolumeClaims, and changed in the PersistentVolumeClaim spec after provisioning.
--&gt;
&lt;p&gt;VolumeAttributesClass 表示由 CSI 驱动所定义的可变更卷属性的规约。
此类可以在动态制备 PersistentVolumeClaim 期间被指定，
并且可以在制备之后在 PersistentVolumeClaim 规约中更改。&lt;/p&gt;</description></item><item><title>HorizontalPodAutoscaler</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/</guid><description>&lt;!--
api_metadata:
 apiVersion: "autoscaling/v2"
 import: "k8s.io/api/autoscaling/v2"
 kind: "HorizontalPodAutoscaler"
content_type: "api_reference"
description: "HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified."
title: "HorizontalPodAutoscaler"
weight: 13
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: autoscaling/v2&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/autoscaling/v2&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="HorizontalPodAutoscaler"&gt;HorizontalPodAutoscaler&lt;/h2&gt;
&lt;!--
HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.
--&gt;
&lt;p&gt;HorizontalPodAutoscaler 是水平 Pod 自动扩缩器的配置，
它根据指定的指标自动管理实现 scale 子资源的任何资源的副本数。&lt;/p&gt;</description></item><item><title>TypedLocalObjectReference</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference/</guid><description>&lt;!--
api_metadata:
 apiVersion: ""
 import: "k8s.io/api/core/v1"
 kind: "TypedLocalObjectReference"
content_type: "api_reference"
description: "TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace."
title: "TypedLocalObjectReference"
weight: 13
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace.
--&gt;
&lt;p&gt;TypedLocalObjectReference 包含足够的信息，可以让你在同一个名字空间中定位特定类型的被引用对象。&lt;/p&gt;
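&lt;p&gt;例如，PersistentVolumeClaim 的 &lt;code&gt;dataSource&lt;/code&gt; 字段就是一个 TypedLocalObjectReference。下面是一个示意清单（其中各名称均为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  # dataSource 是一个 TypedLocalObjectReference：
  # 引用同一名字空间中的某个 VolumeSnapshot 对象
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: example-snapshot
&lt;/code&gt;&lt;/pre&gt;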
&lt;hr&gt;
&lt;!--
- **kind** (string), required

 Kind is the type of resource being referenced

- **name** (string), required

 Name is the name of resource being referenced

- **apiGroup** (string)

 APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt; (string)，必需&lt;/p&gt;</description></item><item><title>PriorityClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "scheduling.k8s.io/v1"
 import: "k8s.io/api/scheduling/v1"
 kind: "PriorityClass"
content_type: "api_reference"
description: "PriorityClass defines mapping from a priority class name to the priority integer value."
title: "PriorityClass"
weight: 14
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/scheduling/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PriorityClass"&gt;PriorityClass&lt;/h2&gt;
&lt;!-- 
PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer.
--&gt;
&lt;p&gt;PriorityClass 定义了从优先级类名到优先级数值的映射。
该值可以是任何有效的整数。&lt;/p&gt;
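&lt;p&gt;下面是一个最小的 PriorityClass 清单示意（名称与取值均为示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # 优先级数值，可以是任何有效的整数
globalDefault: false    # 是否作为集群范围的默认优先级类
description: "用于重要服务 Pod 的优先级类。"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pod 可以通过 &lt;code&gt;spec.priorityClassName&lt;/code&gt; 引用此优先级类。&lt;/p&gt;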
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: scheduling.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PriorityClass&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **metadata** (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)

 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>DeviceTaintRule v1alpha3</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/</guid><description>&lt;!--
api_metadata:
 apiVersion: "resource.k8s.io/v1alpha3"
 import: "k8s.io/api/resource/v1alpha3"
 kind: "DeviceTaintRule"
content_type: "api_reference"
description: "DeviceTaintRule adds one taint to all devices which match the selector."
title: "DeviceTaintRule v1alpha3"
weight: 15
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1alpha3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1alpha3&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="DeviceTaintRule"&gt;DeviceTaintRule&lt;/h2&gt;
&lt;!--
DeviceTaintRule adds one taint to all devices which match the selector. This has the same effect as if the taint was specified directly in the ResourceSlice by the DRA driver.
--&gt;
&lt;p&gt;DeviceTaintRule 添加一个污点到与选择算符匹配的所有设备上。
这与通过 DRA 驱动直接在 ResourceSlice 中指定污点具有同样的效果。&lt;/p&gt;</description></item><item><title>Node 自动扩缩容</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/node-autoscaling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/node-autoscaling/</guid><description>&lt;!--
reviewers:
- gjtempleton
- jonathan-innis
- maciekpytel
title: Node Autoscaling
linkTitle: Node Autoscaling
description: &gt;-
 Automatically provision and consolidate the Nodes in your cluster to adapt to demand and optimize cost.
content_type: concept
weight: 15
--&gt;
&lt;!--
In order to run workloads in your cluster, you need
&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt;. Nodes in your cluster can be _autoscaled_ -
dynamically [_provisioned_](#provisioning), or [_consolidated_](#consolidation) to provide needed
capacity while optimizing cost. Autoscaling is performed by Node [_autoscalers_](#autoscalers).
--&gt;
&lt;p&gt;为了在集群中运行负载，你需要 &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='Node'&gt;Node&lt;/a&gt;。
集群中的 Node 可以被&lt;strong&gt;自动扩缩容&lt;/strong&gt;：
通过动态&lt;a href="#provisioning"&gt;&lt;strong&gt;制备&lt;/strong&gt;&lt;/a&gt;或&lt;a href="#consolidation"&gt;&lt;strong&gt;整合&lt;/strong&gt;&lt;/a&gt;的方式提供所需的容量并优化成本。
自动扩缩容操作是由 Node &lt;a href="#autoscalers"&gt;&lt;strong&gt;Autoscaler&lt;/strong&gt;&lt;/a&gt; 执行的。&lt;/p&gt;</description></item><item><title>Pod 安全性标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/</guid><description>&lt;!--
reviewers:
- tallclair
title: Pod Security Standards
description: &gt;
 A detailed look at the different policy levels defined in the Pod Security Standards.
content_type: concept
weight: 15
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The Pod Security Standards define three different _policies_ to broadly cover the security
spectrum. These policies are _cumulative_ and range from highly-permissive to highly-restrictive.
This guide outlines the requirements of each policy.
--&gt;
&lt;p&gt;Pod 安全性标准定义了三种不同的&lt;strong&gt;策略（Policy）&lt;/strong&gt;，以广泛覆盖安全应用场景。
这些策略是&lt;strong&gt;叠加式的（Cumulative）&lt;/strong&gt;，安全级别从高度宽松至高度受限。
本指南概述了每个策略的要求。&lt;/p&gt;
&lt;!--
| Profile | Description |
| ------ | ----------- |
| &lt;strong style="white-space: nowrap"&gt;Privileged&lt;/strong&gt; | Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations. |
| &lt;strong style="white-space: nowrap"&gt;Baseline&lt;/strong&gt; | Minimally restrictive policy which prevents known privilege escalations. Allows the default (minimally specified) Pod configuration. |
| &lt;strong style="white-space: nowrap"&gt;Restricted&lt;/strong&gt; | Heavily restricted policy, following current Pod hardening best practices. |
--&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Profile&lt;/th&gt;
 &lt;th&gt;描述&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong style="white-space: nowrap"&gt;Privileged&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;不受限制的策略，提供最大可能范围的权限许可。此策略允许已知的特权提升。&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong style="white-space: nowrap"&gt;Baseline&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;限制性最弱的策略，禁止已知的特权提升。允许使用默认的（规定最少）Pod 配置。&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong style="white-space: nowrap"&gt;Restricted&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;限制性非常强的策略，遵循当前的保护 Pod 的最佳实践。&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
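&lt;p&gt;这些策略级别可以通过 Pod 安全性准入（Pod Security Admission）以名字空间标签的方式应用。下面是一个示意（名字空间名称为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  labels:
    # 强制执行 baseline 级别
    pod-security.kubernetes.io/enforce: baseline
    # 对不满足 restricted 级别的 Pod 发出警告
    pod-security.kubernetes.io/warn: restricted
&lt;/code&gt;&lt;/pre&gt;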
&lt;!-- body --&gt;
&lt;!--
## Profile Details
--&gt;
&lt;h2 id="profile-details"&gt;Profile 细节&lt;/h2&gt;
&lt;h3 id="privileged"&gt;Privileged&lt;/h3&gt;
&lt;!--
**The _Privileged_ policy is purposely-open, and entirely unrestricted.** This type of policy is
typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.

The Privileged policy is defined by an absence of restrictions. If you define a Pod where the Privileged
security policy applies, the Pod you define is able to bypass typical container isolation mechanisms.
For example, you can define a Pod that has access to the node's host network.
--&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Privileged&lt;/em&gt; 策略是有目的地开放且完全无限制的策略。&lt;/strong&gt;
此类策略通常针对由特权较高、受信任的用户所管理的系统级或基础设施级负载。&lt;/p&gt;</description></item><item><title>安装一个扩展的 API server</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/setup-extension-api-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/setup-extension-api-server/</guid><description>&lt;!--
title: Setup an extension API server
reviewers:
- lavalamp
- cheftako
- chenopis
content_type: task
weight: 15
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Setting up an extension API server to work the aggregation layer allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs.
--&gt;
&lt;p&gt;安装扩展的 API 服务器并使其与聚合层协同工作，
可以让 Kubernetes API 服务器通过额外的 API 进行扩展，
这些 API 不是核心 Kubernetes API 的一部分。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议在至少有两个不作为控制平面主机的节点的集群上运行本教程。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>特性门控（已移除）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates-removed/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates-removed/</guid><description>&lt;!--
title: Feature Gates (removed)
weight: 15
content_type: concept
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page contains list of feature gates that have been removed. The information on this page is for reference.
A removed feature gate is different from a GA'ed or deprecated one in that a removed one is
no longer recognized as a valid feature gate.
However, a GA'ed or a deprecated feature gate is still recognized by the corresponding Kubernetes
components although they are unable to cause any behavior differences in a cluster.
--&gt;
&lt;p&gt;本页包含了已移除的特性门控的列表。本页的信息仅供参考。
已移除的特性门控不同于正式发布（GA）或废弃的特性门控，因为已移除的特性门控将不再被视为有效的特性门控。
然而，正式发布或废弃的特性门控仍然能被对应的 Kubernetes 组件识别，这些特性门控在集群中不会造成任何行为差异。&lt;/p&gt;</description></item><item><title>资源监控工具</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring/</guid><description>&lt;!--
reviewers:
- mikedanese
content_type: concept
title: Tools for Monitoring Resources
weight: 15
--&gt;
&lt;!-- overview --&gt;
&lt;!--
To scale an application and provide a reliable service, you need to
understand how the application behaves when it is deployed. You can examine
application performance in a Kubernetes cluster by examining the containers,
[pods](/docs/concepts/workloads/pods/),
[services](/docs/concepts/services-networking/service/), and
the characteristics of the overall cluster. Kubernetes provides detailed
information about an application's resource usage at each of these levels.
This information allows you to evaluate your application's performance and
where bottlenecks can be removed to improve overall performance.
--&gt;
&lt;p&gt;要扩展应用程序并提供可靠的服务，你需要了解应用程序在部署时的行为。
你可以通过检查容器、&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods"&gt;Pod&lt;/a&gt;、
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt;
和整个集群的特征来检查 Kubernetes 集群中应用程序的性能。
Kubernetes 在每个级别上提供有关应用程序资源使用情况的详细信息。
此信息使你可以评估应用程序的性能，以及在何处可以消除瓶颈以提高整体性能。&lt;/p&gt;</description></item><item><title>资源指标管道</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/</guid><description>&lt;!--
reviewers:
- fgrzadkowski
- piosz
title: Resource metrics pipeline
content_type: concept
weight: 15
--&gt;
&lt;!-- overview --&gt;
&lt;!--
For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and
similar use cases. This API makes information available about resource usage for node and pod,
including metrics for CPU and memory. If you deploy the Metrics API into your cluster, clients of
the Kubernetes API can then query for this information, and you can use Kubernetes' access control
mechanisms to manage permissions to do so.
--&gt;
&lt;p&gt;对于 Kubernetes，&lt;strong&gt;Metrics API&lt;/strong&gt; 提供了一组基本的指标，以支持自动扩缩和类似的用例。
该 API 提供有关节点和 Pod 的资源使用情况的信息，
包括 CPU 和内存的指标。如果将 Metrics API 部署到集群中，
那么 Kubernetes API 的客户端就可以查询这些信息，并且可以使用 Kubernetes 的访问控制机制来管理权限。&lt;/p&gt;</description></item><item><title>ResourceClaim v1beta2</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-claim-v1beta2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-claim-v1beta2/</guid><description>&lt;!--
api_metadata:
 apiVersion: "resource.k8s.io/v1beta2"
 import: "k8s.io/api/resource/v1beta2"
 kind: "ResourceClaim"
content_type: "api_reference"
description: "ResourceClaim describes a request for access to resources in the cluster, for use by workloads."
title: "ResourceClaim v1beta2"
weight: 16
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1beta2&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1beta2&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceClaim"&gt;ResourceClaim&lt;/h2&gt;
&lt;!--
ResourceClaim describes a request for access to resources in the cluster, for use by workloads. For example, if a workload needs an accelerator device with specific properties, this is how that request is expressed. The status stanza tracks whether this claim has been satisfied and what specific resources have been allocated.

This is an alpha type and requires enabling the DynamicResourceAllocation feature gate.
--&gt;
&lt;p&gt;ResourceClaim 描述对集群中供工作负载使用的资源的访问请求。
例如，如果某个工作负载需要具有特定属性的加速器设备，这就是表达该请求的方式。
状态部分跟踪此申领是否已被满足，以及具体分配了哪些资源。&lt;/p&gt;</description></item><item><title>ResourceClaimTemplate v1beta2</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-claim-template-v1beta2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-claim-template-v1beta2/</guid><description>&lt;!--
api_metadata:
 apiVersion: "resource.k8s.io/v1beta2"
 import: "k8s.io/api/resource/v1beta2"
 kind: "ResourceClaimTemplate"
content_type: "api_reference"
description: "ResourceClaimTemplate is used to produce ResourceClaim objects."
title: "ResourceClaimTemplate v1beta2"
weight: 17
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1beta2&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1beta2&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceClaimTemplate"&gt;ResourceClaimTemplate&lt;/h2&gt;
&lt;!--
ResourceClaimTemplate is used to produce ResourceClaim objects.

This is an alpha type and requires enabling the DynamicResourceAllocation feature gate.
--&gt;
&lt;p&gt;ResourceClaimTemplate 用于生成 ResourceClaim 对象。&lt;/p&gt;
&lt;p&gt;这是一个 Alpha 类型的特性，需要启用 DynamicResourceAllocation 特性门控。&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: resource.k8s.io/v1beta2&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ResourceClaimTemplate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;
&lt;!--
Standard object metadata
--&gt;
&lt;p&gt;标准的对象元数据。&lt;/p&gt;</description></item><item><title>ResourceSlice v1beta1</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-slice-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-slice-v1beta1/</guid><description>&lt;!--
api_metadata:
 apiVersion: "resource.k8s.io/v1beta1"
 import: "k8s.io/api/resource/v1beta1"
 kind: "ResourceSlice"
content_type: "api_reference"
description: "ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver."
title: "ResourceSlice v1beta1"
weight: 17
auto_generated: true
--&gt;
&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;</description></item><item><title>ResourceSlice v1alpha3</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-slice-v1alpha3/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/workload-resources/resource-slice-v1alpha3/</guid><description>&lt;!--
api_metadata:
 apiVersion: "resource.k8s.io/v1alpha3"
 import: "k8s.io/api/resource/v1alpha3"
 kind: "ResourceSlice"
content_type: "api_reference"
description: "ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver."
title: "ResourceSlice v1alpha3"
weight: 18
auto_generated: true
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1alpha3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1alpha3&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;!--
## ResourceSlice {#ResourceSlice}

ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver. A pool may span more than one ResourceSlice, and exactly how many ResourceSlices comprise a pool is determined by the driver.

At the moment, the only supported resources are devices with attributes and capacities. Each device in a given pool, regardless of how many ResourceSlices, must have a unique name. The ResourceSlice in which a device gets published may change over time. The unique identifier for a device is the tuple \&lt;driver name&gt;, \&lt;pool name&gt;, \&lt;device name&gt;.
--&gt;
&lt;h2 id="ResourceSlice"&gt;ResourceSlice&lt;/h2&gt;
&lt;p&gt;ResourceSlice 表示一个或多个资源，这些资源位于同一个驱动所管理的、彼此相似的资源构成的资源池。
一个池可以包含多个 ResourceSlice，一个池包含多少个 ResourceSlice 由驱动确定。&lt;/p&gt;</description></item><item><title>ConfigMap</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/configmap/</guid><description>&lt;!--
title: ConfigMaps
api_metadata:
- apiVersion: "v1"
 kind: "ConfigMap"
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
title: ConfigMap
id: configmap
full_link: /docs/concepts/configuration/configmap/
short_description: &gt;
 An API object used to store non-confidential data in key-value pairs. Can be consumed as environment variables, command-line arguments, or configuration files in a volume.

aka: 
tags:
- core-object
--&gt;
&lt;!--
 An API object used to store non-confidential data in key-value pairs.
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; can consume ConfigMaps as
environment variables, command-line arguments, or as configuration files in a
&lt;a class='glossary-tooltip' title='包含可被 Pod 中容器访问的数据的目录。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/' target='_blank' aria-label='volume'&gt;volume&lt;/a&gt;.
--&gt;
&lt;p&gt;ConfigMap 是一种 API 对象，用来将非机密性的数据保存到键值对中。使用时，
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 可以将其用作环境变量、命令行参数或者存储卷中的配置文件。&lt;/p&gt;</description></item><item><title>Issue 管理者</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/issue-wrangler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/issue-wrangler/</guid><description>&lt;!--
title: Issue Wranglers
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers),formal approvers,
and reviewers, members of SIG Docs take week long shifts
[triaging and categorising issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
for the repository.
--&gt;
&lt;p&gt;除了承担 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/pr-wranglers"&gt;PR 管理者&lt;/a&gt;的职责外，
SIG Docs 正式的批准人（Approver）、评审人（Reviewer）和成员（Member）
按周轮流&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/for-approvers/#triage-and-categorize-issues"&gt;归类仓库的 Issue&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Duties

Each day in a week-long shift the Issue Wrangler will be responsible for:

- Triaging and tagging incoming issues daily. See
 [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
 for guidelines on how SIG Docs uses metadata.
- Keeping an eye on stale &amp; rotten issues within the kubernetes/website repository.
- Maintenance of the [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1).
--&gt;
&lt;h2 id="duties"&gt;职责&lt;/h2&gt;
&lt;p&gt;在为期一周的轮值期内，Issue 管理者每天负责：&lt;/p&gt;</description></item><item><title>kubeadm init</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This command initializes a Kubernetes control plane node.
--&gt;
&lt;p&gt;此命令初始化一个 Kubernetes 控制平面节点。&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
### Synopsis 
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run this command in order to set up the Kubernetes control plane
--&gt;
&lt;p&gt;运行此命令来搭建 Kubernetes 控制平面节点。&lt;/p&gt;
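&lt;p&gt;例如，最简单的用法是在控制平面节点上直接运行（&lt;code&gt;--pod-network-cidr&lt;/code&gt; 的取值取决于所选的 Pod 网络插件，此处仅为示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 初始化一个控制平面节点
kubeadm init --pod-network-cidr=10.244.0.0/16
&lt;/code&gt;&lt;/pre&gt;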
&lt;!--
The "init" command executes the following phases:
--&gt;
&lt;p&gt;&amp;quot;init&amp;quot; 命令执行以下阶段：&lt;/p&gt;
&lt;!--
```
preflight Run pre-flight checks
certs Certificate generation
 /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
 /apiserver Generate the certificate for serving the Kubernetes API
 /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet
 /front-proxy-ca Generate the self-signed CA to provision identities for front proxy
 /front-proxy-client Generate the certificate for the front proxy client
 /etcd-ca Generate the self-signed CA to provision identities for etcd
 /etcd-server Generate the certificate for serving etcd
 /etcd-peer Generate the certificate for etcd nodes to communicate with each other
 /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd
 /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd
 /sa Generate a private key for signing service account tokens along with its public key
kubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
 /admin Generate a kubeconfig file for the admin to use and for kubeadm itself
 /super-admin Generate a kubeconfig file for the super-admin
 /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
 /controller-manager Generate a kubeconfig file for the controller manager to use
 /scheduler Generate a kubeconfig file for the scheduler to use
etcd Generate static Pod manifest file for local etcd
 /local Generate the static Pod manifest file for a local, single-node local etcd instance
control-plane Generate all static Pod manifest files necessary to establish the control plane
 /apiserver Generates the kube-apiserver static Pod manifest
 /controller-manager Generates the kube-controller-manager static Pod manifest
 /scheduler Generates the kube-scheduler static Pod manifest
kubelet-start Write kubelet settings and (re)start the kubelet
wait-control-plane Wait for the control plane to start
upload-config Upload the kubeadm and kubelet configuration to a ConfigMap
 /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap
 /kubelet Upload the kubelet component config to a ConfigMap
upload-certs Upload certificates to kubeadm-certs
mark-control-plane Mark a node as a control-plane
bootstrap-token Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap
addon Install required addons for passing conformance tests
 /coredns Install the CoreDNS addon to a Kubernetes cluster
 /kube-proxy Install the kube-proxy addon to a Kubernetes cluster
show-join-command Show the join command for control-plane and worker node
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;preflight 预检
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;certs 生成证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /ca 生成自签名的 Kubernetes 根 CA，用于为其他 Kubernetes 组件提供身份标识
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /apiserver 生成 apiserver 的证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /apiserver-kubelet-client 生成 apiserver 连接到 kubelet 的证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /front-proxy-ca 生成前端代理自签名 CA（扩展 apiserver）
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /front-proxy-client 生成前端代理客户端的证书（扩展 apiserver）
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /etcd-ca 生成 etcd 自签名 CA
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /etcd-server 生成 etcd 服务器证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /etcd-peer 生成 etcd 节点相互通信的证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /etcd-healthcheck-client 生成 etcd 健康检查的证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /apiserver-etcd-client 生成 apiserver 访问 etcd 的证书
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /sa 生成用于签署服务帐户令牌的私钥和公钥
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeconfig 生成建立控制平面所需的所有 kubeconfig 文件以及管理员 kubeconfig 文件
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /admin 生成一个 kubeconfig 文件供管理员使用以及供 kubeadm 本身使用
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /super-admin 为超级管理员生成 kubeconfig 文件
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /kubelet 为 kubelet 生成一个 kubeconfig 文件，*仅*用于集群引导
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /controller-manager 生成 kubeconfig 文件供控制器管理器使用
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /scheduler 生成 kubeconfig 文件供调度程序使用
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;etcd 为本地 etcd 生成静态 Pod 清单文件
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /local 为本地的单节点 etcd 实例生成静态 Pod 清单文件
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;control-plane 生成建立控制平面所需的所有静态 Pod 清单文件
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /apiserver 生成 kube-apiserver 静态 Pod 清单
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /controller-manager 生成 kube-controller-manager 静态 Pod 清单
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /scheduler 生成 kube-scheduler 静态 Pod 清单
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubelet-start 写入 kubelet 设置并启动（或重启）kubelet
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;wait-control-plane 等待控制平面启动
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;upload-config 将 kubeadm 和 kubelet 配置上传到 ConfigMap
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /kubeadm 将 kubeadm ClusterConfiguration 上传到 ConfigMap
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /kubelet 将 kubelet 组件配置上传到 ConfigMap
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;upload-certs 将证书上传到 kubeadm-certs
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;mark-control-plane 将节点标记为控制平面节点
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;bootstrap-token 生成用于将节点加入集群的引导令牌
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubelet-finalize 在 TLS 引导后更新与 kubelet 相关的设置
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;addon 安装通过一致性测试所需的插件
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /coredns 将 CoreDNS 插件安装到 Kubernetes 集群
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; /kube-proxy 将 kube-proxy 插件安装到 Kubernetes 集群
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;show-join-command 显示控制平面和工作节点的加入命令
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title>kubectl 命令</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kubectl-cmds/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kubectl-cmds/</guid><description>&lt;!--
title: kubectl Commands
weight: 20
--&gt;
&lt;!--
[kubectl Command Reference](/docs/reference/kubectl/generated/kubectl/)
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl/"&gt;kubectl 命令参考&lt;/a&gt;&lt;/p&gt;</description></item><item><title>kubelet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet/</guid><description>&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
The kubelet is the primary "node agent" that runs on each node. It can
register the node with the apiserver using one of: the hostname; a flag to
override the hostname; or specific logic for a cloud provider.
--&gt;
&lt;p&gt;kubelet 是在每个节点上运行的主要“节点代理”。它可以使用以下方式之一向 API 服务器注册：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;主机名（hostname）；&lt;/li&gt;
&lt;li&gt;覆盖主机名的参数；&lt;/li&gt;
&lt;li&gt;特定于某云驱动的逻辑。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object
that describes a pod. The kubelet takes a set of PodSpecs that are provided
through various mechanisms (primarily through the apiserver) and ensures that
the containers described in those PodSpecs are running and healthy. The
kubelet doesn't manage containers which were not created by Kubernetes.
--&gt;
&lt;p&gt;kubelet 是基于 PodSpec 来工作的。每个 PodSpec 是一个描述 Pod 的 YAML 或 JSON 对象。
kubelet 接受通过各种机制（主要是通过 apiserver）提供的一组 PodSpec，并确保这些
PodSpec 中描述的容器处于运行状态且运行状况良好。
kubelet 不管理不是由 Kubernetes 创建的容器。&lt;/p&gt;</description></item><item><title>Kubernetes API 概念</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/api-concepts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/api-concepts/</guid><description>&lt;!--
title: Kubernetes API Concepts
reviewers:
- smarterclayton
- lavalamp
- liggitt
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The Kubernetes API is a resource-based (RESTful) programmatic interface
provided via HTTP. It supports retrieving, creating, updating, and deleting
primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE,
GET).

For some resources, the API includes additional subresources that allow
fine-grained authorization (such as separate views for Pod details and
log retrievals), and can accept and serve those resources in different
representations for convenience or efficiency.
--&gt;
&lt;p&gt;Kubernetes API 是通过 HTTP 提供的基于资源 (RESTful) 的编程接口。
它支持通过标准 HTTP 动词（POST、PUT、PATCH、DELETE、GET）检索、创建、更新和删除主要资源。&lt;/p&gt;</description></item><item><title>Kubernetes API 聚合层</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/</guid><description>&lt;!--
title: Kubernetes API Aggregation Layer
reviewers:
- lavalamp
- cheftako
- chenopis
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is
offered by the core Kubernetes APIs.
The additional APIs can either be ready-made solutions such as a
[metrics server](https://github.com/kubernetes-sigs/metrics-server), or APIs that you develop yourself.
--&gt;
&lt;p&gt;使用聚合层（Aggregation Layer），用户可以通过附加的 API 扩展 Kubernetes，
而不局限于 Kubernetes 核心 API 提供的功能。
这里的附加 API 可以是现成的解决方案，比如
&lt;a href="https://github.com/kubernetes-sigs/metrics-server"&gt;metrics server&lt;/a&gt;，
或者你自己开发的 API。&lt;/p&gt;</description></item><item><title>Kubernetes 安全和信息披露</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/security/</guid><description>&lt;!--
title: Kubernetes Security and Disclosure Information
aliases: [/security/]
reviewers:
- eparis
- erictune
- philips
- jessfraz
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes Kubernetes security and disclosure information.
--&gt;
&lt;p&gt;本页面介绍 Kubernetes 安全和信息披露相关的内容。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Security Announcements
--&gt;
&lt;h2 id="security-announcements"&gt;安全公告&lt;/h2&gt;
&lt;!--
Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce)
group for emails about security and major API announcements.
--&gt;
&lt;p&gt;加入 &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-security-announce"&gt;kubernetes-security-announce&lt;/a&gt;
组，以获取关于安全性和主要 API 公告的电子邮件。&lt;/p&gt;
&lt;!--
## Report a Vulnerability
--&gt;
&lt;h2 id="report-a-vulnerability"&gt;报告一个漏洞&lt;/h2&gt;
&lt;!--
We're extremely grateful for security researchers and users that report vulnerabilities to
the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.
--&gt;
&lt;p&gt;我们非常感谢向 Kubernetes 开源社区报告漏洞的安全研究人员和用户。
所有的报告都由社区志愿者进行彻底调查。&lt;/p&gt;</description></item><item><title>Kubernetes 对象管理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/object-management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/object-management/</guid><description>&lt;!-- overview --&gt;
&lt;!--
The `kubectl` command-line tool supports several different ways to create and manage
Kubernetes &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt;. This document provides an overview of the different
approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for
details of managing objects by Kubectl.
--&gt;
&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; 命令行工具支持多种不同的方式来创建和管理 Kubernetes
&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;。
本文档概述了不同的方法。
阅读 &lt;a href="https://kubectl.docs.kubernetes.io/zh/"&gt;Kubectl book&lt;/a&gt; 来了解 kubectl
管理对象的详细信息。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Management techniques
--&gt;
&lt;h2 id="管理技巧"&gt;管理技巧&lt;/h2&gt;
&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;警告：&lt;/h4&gt;&lt;!--
A Kubernetes object should be managed using only one technique. Mixing
and matching techniques for the same object results in undefined behavior.
--&gt;
&lt;p&gt;应该只使用一种技术来管理 Kubernetes 对象。混合和匹配技术作用在同一对象上将导致未定义行为。&lt;/p&gt;</description></item><item><title>Pod 安全性准入</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-admission/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-admission/</guid><description>&lt;!--
reviewers:
- tallclair
- liggitt
title: Pod Security Admission
description: &gt;
 An overview of the Pod Security Admission Controller, which can enforce the Pod Security
 Standards.
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
The Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/) define
different isolation levels for Pods. These standards let you define how you want to restrict the
behavior of pods in a clear, consistent fashion.
--&gt;
&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/"&gt;Pod 安全性标准（Security Standard）&lt;/a&gt;
为 Pod 定义不同的隔离级别。这些标准能够让你以一种清晰、一致的方式定义如何限制 Pod 行为。&lt;/p&gt;</description></item><item><title>PR 管理者</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/pr-wranglers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/pr-wranglers/</guid><description>&lt;!--
title: PR wranglers
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
SIG Docs [approvers](/docs/contribute/participate/roles-and-responsibilities/#approvers)
take week-long shifts [managing pull requests](https://github.com/kubernetes/website/wiki/PR-Wranglers)
for the repository.

This section covers the duties of a PR wrangler. For more information on giving good reviews,
see [Reviewing changes](/docs/contribute/review/).
--&gt;
&lt;p&gt;SIG Docs 的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers"&gt;批准人（Approver）&lt;/a&gt;
们每周轮流负责&lt;a href="https://github.com/kubernetes/website/wiki/PR-Wranglers"&gt;管理仓库的 PR&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;本节介绍 PR 管理者的职责。关于如何提供较好的评审意见，
可参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/"&gt;评审变更&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Duties

Each day in a week-long shift as PR Wrangler:

- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality
 and adherence to the [Style](/docs/contribute/style/style-guide/) and
 [Content](/docs/contribute/style/content-guide/) guides.
 - Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`).
 Review as many PRs as you can.
--&gt;
&lt;h2 id="duties"&gt;职责&lt;/h2&gt;
&lt;p&gt;在为期一周的轮值期内，PR 管理者要：&lt;/p&gt;</description></item><item><title>ReplicaSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/</guid><description>&lt;!--
# NOTE TO LOCALIZATION TEAMS
#
# If updating front matter for your localization because there is still
# a "feature" key in this page, then you also need to update
# content/??/docs/concepts/architecture/self-healing.md (which is where
# it moved to)
reviewers:
- Kashomon
- bprashanth
- madhusudancs
title: ReplicaSet
api_metadata:
- apiVersion: "apps/v1"
 kind: "ReplicaSet"
content_type: concept
description: &gt;-
 A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
 Usually, you define a Deployment and let that Deployment manage ReplicaSets automatically.
weight: 20
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
used to guarantee the availability of a specified number of identical Pods.
--&gt;
&lt;p&gt;ReplicaSet 的目的是维护一组在任何时候都处于运行状态的 Pod 副本的稳定集合。
因此，它通常用来保证给定数量的、完全相同的 Pod 的可用性。&lt;/p&gt;</description></item><item><title>持久卷</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/</guid><description>&lt;!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
- xing-yang
title: Persistent Volumes
api_metadata:
- apiVersion: "v1"
 kind: "PersistentVolume"
- apiVersion: "v1"
 kind: "PersistentVolumeClaim"
feature:
 title: Storage orchestration
 description: &gt;
 Automatically mount the storage system of your choice, whether from local storage, a public cloud provider, or a network storage system such as iSCSI or NFS.
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document describes _persistent volumes_ in Kubernetes. Familiarity with
[volumes](/docs/concepts/storage/volumes/), [StorageClasses](/docs/concepts/storage/storage-classes/)
and [VolumeAttributesClasses](/docs/concepts/storage/volume-attributes-classes/) is suggested.
--&gt;
&lt;p&gt;本文描述 Kubernetes 中的&lt;strong&gt;持久卷（Persistent Volumes）&lt;/strong&gt;。
建议先熟悉&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/"&gt;卷（Volume）&lt;/a&gt;、
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-classes/"&gt;存储类（StorageClass）&lt;/a&gt;和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-attributes-classes/"&gt;卷属性类（VolumeAttributesClass）&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>调度器配置</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/scheduling/config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/scheduling/config/</guid><description>&lt;!--
title: Scheduler Configuration
content_type: concept
weight: 20
--&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
You can customize the behavior of the `kube-scheduler` by writing a configuration
file and passing its path as a command line argument.
--&gt;
&lt;p&gt;你可以通过编写配置文件，并将其路径传给 &lt;code&gt;kube-scheduler&lt;/code&gt; 的命令行参数，定制 &lt;code&gt;kube-scheduler&lt;/code&gt; 的行为。&lt;/p&gt;
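作为示意，下面给出一个最小的 `KubeSchedulerConfiguration` 配置文件草案（其中的文件路径与 profile 名称仅为假设，字段以 `kubescheduler.config.k8s.io/v1` API 为准）：

```yaml
# scheduler-config.yaml —— 最小化示意，路径与取值仅作示例
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  # kube-scheduler 用来访问 API 服务器的 kubeconfig 文件
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
  # 每个 profile 对应一个调度器名称，可在其中按扩展点配置插件
  - schedulerName: default-scheduler
```

随后通过命令行参数 `kube-scheduler --config=/etc/kubernetes/scheduler-config.yaml` 指定该文件。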
&lt;!-- overview --&gt;
&lt;!-- body --&gt;
&lt;!--
A scheduling Profile allows you to configure the different stages of scheduling
in the &lt;a class='glossary-tooltip' title='控制平面组件，负责监视新创建的、未指定运行节点的 Pod，选择节点让 Pod 在上面运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='kube-scheduler'&gt;kube-scheduler&lt;/a&gt;.
Each stage is exposed in an extension point. Plugins provide scheduling behaviors
by implementing one or more of these extension points.
--&gt;
&lt;p&gt;调度模板（Profile）允许你配置 &lt;a class='glossary-tooltip' title='控制平面组件，负责监视新创建的、未指定运行节点的 Pod，选择节点让 Pod 在上面运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='kube-scheduler'&gt;kube-scheduler&lt;/a&gt;
中的不同调度阶段。每个阶段都暴露于某个扩展点中。插件通过实现一个或多个扩展点来提供调度行为。&lt;/p&gt;</description></item><item><title>调试 Service</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-service/</guid><description>&lt;!--
reviewers:
- thockin
- bowei
content_type: task
title: Debug Services
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
An issue that comes up rather frequently for new installations of Kubernetes is
that a Service is not working properly. You've run your Pods through a
Deployment (or other workload controller) and created a Service, but you
get no response when you try to access it. This document will hopefully help
you to figure out what's going wrong.
--&gt;
&lt;p&gt;对于新安装的 Kubernetes，经常出现的问题是 Service 无法正常运行。你已经通过
Deployment（或其他工作负载控制器）运行了 Pod，并创建了 Service，
但是当你尝试访问它时，没有任何响应。此文档有望对你有所帮助并找出问题所在。&lt;/p&gt;</description></item><item><title>定义相互依赖的环境变量</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-interdependent-environment-variables/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-interdependent-environment-variables/</guid><description>&lt;!-- 
title: Define Dependent Environment Variables
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page shows how to define dependent environment variables for a container
in a Kubernetes Pod.
--&gt;
&lt;p&gt;本页展示了如何为 Kubernetes Pod 中的容器定义相互依赖的环境变量。&lt;/p&gt;
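作为示意，下面是一个使用 `$(VAR_NAME)` 语法定义相互依赖的环境变量的 Pod 清单片段（镜像与变量名仅作示例）。引用只会展开列表中**先前**已定义的变量：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dependent-envars-demo   # 名称仅作示例
spec:
  containers:
    - name: demo
      image: busybox:1.36
      command: ["sh", "-c", "echo \"$SERVICE_ADDRESS\" && sleep 3600"]
      env:
        - name: SERVICE_PORT
          value: "80"
        - name: SERVICE_IP
          value: "172.17.0.1"
        - name: SERVICE_ADDRESS
          # 引用此前定义的变量，会被展开为 172.17.0.1:80
          value: "$(SERVICE_IP):$(SERVICE_PORT)"
```

若被引用的变量定义在引用之后（或未定义），`$(VAR_NAME)` 会保持字面原样而不被展开。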
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>对 DaemonSet 执行回滚</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/rollback-daemon-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/rollback-daemon-set/</guid><description>&lt;!--
reviewers:
- janetkuo
title: Perform a Rollback on a DaemonSet
content_type: task
weight: 20
min-kubernetes-server-version: 1.7
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to perform a rollback on a &lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;.
--&gt;
&lt;p&gt;本文展示了如何对 &lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt; 执行回滚。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>对 kubeadm 进行故障排查</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/</guid><description>&lt;!--
title: Troubleshooting kubeadm
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
As with any program, you might run into an error installing or running kubeadm.
This page lists some common failure scenarios and have provided steps that can help you understand and fix the problem.

If your problem is not listed below, please follow the following steps:

- If you think your problem is a bug with kubeadm:
 - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
 - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.

- If you are unsure about how kubeadm works, you can ask on [Slack](https://slack.k8s.io/) in `#kubeadm`,
 or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
 relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
--&gt;
&lt;p&gt;与任何程序一样，你可能会在安装或者运行 kubeadm 时遇到错误。
本文列举了一些常见的故障场景，并提供可帮助你理解和解决这些问题的步骤。&lt;/p&gt;</description></item><item><title>访问集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/access-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/access-cluster/</guid><description>&lt;!--
title: Accessing Clusters
weight: 20
content_type: concept
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This topic discusses multiple ways to interact with clusters.
--&gt;
&lt;p&gt;本文阐述多种与集群交互的方法。&lt;/p&gt;
&lt;nav id="TableOfContents"&gt;
 &lt;ul&gt;
 &lt;li&gt;&lt;a href="#accessing-for-the-first-time-with-kubectl"&gt;使用 kubectl 完成集群的第一次访问&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#directly-accessing-the-rest-api"&gt;直接访问 REST API&lt;/a&gt;
 &lt;ul&gt;
 &lt;li&gt;&lt;a href="#using-kubectl-proxy"&gt;使用 kubectl proxy&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#without-kubectl-proxy"&gt;不使用 kubectl proxy&lt;/a&gt;&lt;/li&gt;
 &lt;/ul&gt;
 &lt;/li&gt;
 &lt;li&gt;&lt;a href="#programmatic-access-to-the-api"&gt;以编程方式访问 API&lt;/a&gt;
 &lt;ul&gt;
 &lt;li&gt;&lt;a href="#go-client"&gt;Go 客户端&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#python-client"&gt;Python 客户端&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#other-languages"&gt;其它语言&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#accessing-the-api-from-a-pod"&gt;从 Pod 中访问 API&lt;/a&gt;&lt;/li&gt;
 &lt;/ul&gt;
 &lt;/li&gt;
 &lt;li&gt;&lt;a href="#accessing-services-running-on-the-cluster"&gt;访问集群上运行的服务&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#requesting-redirects"&gt;请求重定向&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#so-many-proxies"&gt;多种代理&lt;/a&gt;&lt;/li&gt;
 &lt;/ul&gt;
&lt;/nav&gt;
&lt;!-- body --&gt;
&lt;!--
## Accessing for the first time with kubectl

When accessing the Kubernetes API for the first time, we suggest using the
Kubernetes CLI, `kubectl`.

To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set-up when you work through
a [Getting started guide](/docs/setup/),
or someone else set up the cluster and provided you with credentials and a location.

Check the location and credentials that kubectl knows about with this command:
--&gt;
&lt;h2 id="accessing-for-the-first-time-with-kubectl"&gt;使用 kubectl 完成集群的第一次访问&lt;/h2&gt;
&lt;p&gt;当你第一次访问 Kubernetes API 的时候，我们建议你使用 Kubernetes CLI 工具 &lt;code&gt;kubectl&lt;/code&gt;。&lt;/p&gt;</description></item><item><title>Kubernetes 组件 SLI 指标</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/slis/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/slis/</guid><description>&lt;!--
reviewers:
- logicalhan
title: Kubernetes Component SLI Metrics
linkTitle: Service Level Indicator Metrics
content_type: reference
weight: 20
description: &gt;-
 High-level indicators for measuring the reliability and performance of Kubernetes components.
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="特性门控：ComponentSLIs"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.32 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;
&lt;!--
By default, Kubernetes 1.35 publishes Service Level Indicator (SLI) metrics 
for each Kubernetes component binary. This metric endpoint is exposed on the serving 
HTTPS port of each component, at the path `/metrics/slis`. The 
`ComponentSLIs` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
defaults to enabled for each Kubernetes component as of v1.27.
--&gt;
&lt;p&gt;默认情况下，Kubernetes 1.35 会为每个 Kubernetes 组件的二进制文件发布服务等级指标（SLI）。
此指标端点被暴露在每个组件提供 HTTPS 服务的端口上，路径为 &lt;code&gt;/metrics/slis&lt;/code&gt;。
从 v1.27 版本开始，对每个 Kubernetes 组件而言，
&lt;code&gt;ComponentSLIs&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/"&gt;特性门控&lt;/a&gt;都是默认启用的。&lt;/p&gt;</description></item><item><title>关于 dockershim 移除和使用兼容 CRI 运行时的文章</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/</guid><description>&lt;!-- 
title: Articles on dockershim Removal and on Using CRI-compatible Runtimes
content_type: reference
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This is a list of articles and other pages that are either
about the Kubernetes' deprecation and removal of _dockershim_,
or about using CRI-compatible container runtimes,
in connection with that removal.
--&gt;
&lt;p&gt;这是关于 Kubernetes 弃用和移除 &lt;strong&gt;dockershim&lt;/strong&gt;
或使用兼容 CRI 的容器运行时相关的文章和其他页面的列表。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!-- 
## Kubernetes project

* Kubernetes blog: [Dockershim Removal FAQ](/blog/2020/12/02/dockershim-faq/) (originally published 2020/12/02)

* Kubernetes blog: [Updated: Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) (updated published 2022/02/17)

* Kubernetes blog: [Kubernetes is Moving on From Dockershim: Commitments and Next Steps](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) (published 2022/01/07)

* Kubernetes blog: [Dockershim removal is coming. Are you ready?](/blog/2021/11/12/are-you-ready-for-dockershim-removal/) (published 2021/11/12)

* Kubernetes documentation: [Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/)

* Kubernetes documentation: [Container Runtimes](/docs/setup/production-environment/container-runtimes/)

* Kubernetes enhancement proposal: [KEP-2221: Removing dockershim from kubelet](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2221-remove-dockershim/README.md)

* Kubernetes enhancement proposal issue: [Removing dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221) (_k/enhancements#2221_)
--&gt;
&lt;h2 id="kubernetes-project"&gt;Kubernetes 项目&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes 博客：&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dockershim-faq/"&gt;Dockershim 移除常见问题解答&lt;/a&gt;（最初发表于 2020/12/02）&lt;/p&gt;</description></item><item><title>将 Pod 指派给节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/</guid><description>&lt;!--
reviewers:
- davidopp
- dom4ha
- kevin-wangzefeng
- macsko
- sanposhiho
title: Assigning Pods to Nodes
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
You can constrain a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; so that it is
_restricted_ to run on particular &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node(s)'&gt;node(s)&lt;/a&gt;,
or to _prefer_ to run on particular nodes.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Often, you do not need to set any such constraints; the
&lt;a class='glossary-tooltip' title='控制平面组件，负责监视新创建的、未指定运行节点的 Pod，选择节点让 Pod 在上面运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='scheduler'&gt;scheduler&lt;/a&gt; will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
or to co-locate Pods from two different services that communicate a lot into the same availability zone.
--&gt;
&lt;p&gt;你可以约束一个 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
以便&lt;strong&gt;限制&lt;/strong&gt;其只能在特定的&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;上运行，
或优先在特定的节点上运行。有几种方法可以实现这点，
推荐的方法都是用&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/"&gt;标签选择算符&lt;/a&gt;来进行选择。
通常这样的约束不是必须的，因为调度器将自动进行合理的放置（比如，将 Pod 分散到节点上，
而不是将 Pod 放置在可用资源不足的节点上等等）。但在某些情况下，你可能需要进一步控制
Pod 被部署到哪个节点。例如，确保 Pod 最终落在连接了 SSD 的机器上，
或者将来自两个不同的服务且有大量通信的 Pod 被放置在同一个可用区。&lt;/p&gt;</description></item><item><title>节点健康监测</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/</guid><description>&lt;!-- 
title: Monitor Node Health
content_type: task
reviewers:
- Random-Liu
- dchen1107
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
*Node Problem Detector* is a daemon for monitoring and reporting about a node's health.
You can run Node Problem Detector as a `DaemonSet` or as a standalone daemon.
Node Problem Detector collects information about node problems from various daemons
and reports these conditions to the API server as Node [Condition](/docs/concepts/architecture/nodes/#condition)s
or as [Event](/docs/reference/kubernetes-api/cluster-resources/event-v1)s.

To learn how to install and use Node Problem Detector, see
[Node Problem Detector project documentation](https://github.com/kubernetes/node-problem-detector).
--&gt;
&lt;p&gt;&lt;strong&gt;节点问题检测器（Node Problem Detector）&lt;/strong&gt; 是一个守护程序，用于监视和报告节点的健康状况。
你可以将节点问题探测器以 &lt;code&gt;DaemonSet&lt;/code&gt; 或独立守护程序运行。
节点问题检测器从各种守护进程收集节点问题，并以节点
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/#condition"&gt;Condition&lt;/a&gt; 和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1"&gt;Event&lt;/a&gt;
的形式报告给 API 服务器。&lt;/p&gt;</description></item><item><title>节点与控制面之间的通信</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/control-plane-node-communication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/control-plane-node-communication/</guid><description>&lt;!--
reviewers:
- dchen1107
- liggitt
title: Communication between Nodes and the Control Plane
content_type: concept
weight: 20
aliases:
- master-node-communication
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document catalogs the communication paths between the &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;
and the Kubernetes &lt;a class='glossary-tooltip' title='一组工作机器，称为节点，会运行容器化应用程序。每个集群至少有一个工作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cluster' target='_blank' aria-label='cluster'&gt;cluster&lt;/a&gt;.
The intent is to allow users to customize their installation to harden the network configuration
such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud
provider).
--&gt;
&lt;p&gt;本文列举控制面节点（确切地说是 &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt;）和
Kubernetes &lt;a class='glossary-tooltip' title='一组工作机器，称为节点，会运行容器化应用程序。每个集群至少有一个工作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cluster' target='_blank' aria-label='集群'&gt;集群&lt;/a&gt;之间的通信路径。
目的是为了让用户能够自定义他们的安装，以实现对网络配置的加固，
使得集群能够在不可信的网络上（或者在一个云服务商完全公开的 IP 上）运行。&lt;/p&gt;</description></item><item><title>配置多个调度器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/</guid><description>&lt;!--
reviewers:
- davidopp
- madhusudancs
title: Configure Multiple Schedulers
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes ships with a default scheduler that is described
[here](/docs/reference/command-line-tools-reference/kube-scheduler/).
If the default scheduler does not suit your needs you can implement your own scheduler.
Moreover, you can even run multiple schedulers simultaneously alongside the default
scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's
learn how to run multiple schedulers in Kubernetes with an example.
--&gt;
&lt;p&gt;Kubernetes 自带了一个默认调度器，其详细描述请查阅
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/"&gt;这里&lt;/a&gt;。
如果默认调度器不适合你的需求，你可以实现自己的调度器。
而且，你甚至可以和默认调度器一起同时运行多个调度器，并告诉 Kubernetes 为每个
Pod 使用哪个调度器。
让我们通过一个例子讲述如何在 Kubernetes 中运行多个调度器。&lt;/p&gt;</description></item><item><title>评阅人和批准人文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/for-approvers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/for-approvers/</guid><description>&lt;!--
title: Reviewing for approvers and reviewers
linktitle: For approvers and reviewers
slug: for-approvers
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
SIG Docs [Reviewers](/docs/contribute/participate/#reviewers) and
[Approvers](/docs/contribute/participate/#approvers) do a few extra things
when reviewing a change.

Every week a specific docs approver volunteers to triage and review pull requests.
This person is the "PR Wrangler" for the week. See the
[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers)
for more information. To become a PR Wrangler, attend the weekly SIG Docs meeting
and volunteer. Even if you are not on the schedule for the current week, you can
still review pull requests (PRs) that are not already under active review.

In addition to the rotation, a bot assigns reviewers and approvers
for the PR based on the owners for the affected files.
--&gt;
&lt;p&gt;SIG Docs
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/#reviewers"&gt;评阅人（Reviewers）&lt;/a&gt;
和&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/#approvers"&gt;批准人（Approvers）&lt;/a&gt;
在对变更进行评审时需要做一些额外的事情。&lt;/p&gt;</description></item><item><title>容器环境</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/container-environment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/container-environment/</guid><description>&lt;!--
reviewers:
- mikedanese
- thockin
title: Container Environment
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes the resources available to Containers in the Container environment. 
--&gt;
&lt;p&gt;本页描述了在容器环境里容器可用的资源。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Container environment

The Kubernetes Container environment provides several important resources to Containers:

* A filesystem, which is a combination of an [image](/docs/concepts/containers/images/) and one or more [volumes](/docs/concepts/storage/volumes/).
* Information about the Container itself.
* Information about other objects in the cluster.
--&gt;
&lt;h2 id="container-environment"&gt;容器环境&lt;/h2&gt;
&lt;p&gt;Kubernetes 的容器环境给容器提供了几个重要的资源：&lt;/p&gt;</description></item><item><title>容器运行时</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes/</guid><description>&lt;!--
reviewers:
- vincepri
- bart0sh
title: Container Runtimes
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout note" role="note"&gt;
 &lt;strong&gt;说明：&lt;/strong&gt; 自 1.24 版起，Dockershim 已从 Kubernetes 项目中移除。阅读 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/dockershim"&gt;Dockershim 移除的常见问题&lt;/a&gt;了解更多详情。
&lt;/div&gt;
&lt;!-- 
You need to install a
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;
into each node in the cluster so that Pods can run there. This page outlines
what is involved and describes related tasks for setting up nodes.
--&gt;
&lt;p&gt;你需要在集群内每个节点上安装一个
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='容器运行时'&gt;容器运行时&lt;/a&gt;
以使 Pod 可以运行在上面。本文概述了所涉及的内容并描述了与节点设置相关的任务。&lt;/p&gt;</description></item><item><title>设备插件</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/</guid><description>&lt;!--
title: Device Plugins
description: &gt;
 Device plugins let you configure your cluster with support for devices or resources that require
 vendor-specific setup, such as GPUs, NICs, FPGAs, or non-volatile main memory.
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes provides a device plugin framework that you can use to advertise system hardware
resources to the &lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt;.

Instead of customizing the code for Kubernetes itself, vendors can implement a
device plugin that you deploy either manually or as a &lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;.
The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters,
and other similar computing resources that may require vendor specific initialization
and setup.
--&gt;
&lt;p&gt;Kubernetes 提供了一个设备插件框架，你可以用它来将系统硬件资源发布到
&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>声明式 API 验证</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/declarative-validation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/declarative-validation/</guid><description>&lt;!--
title: Declarative API Validation
reviewers:
- aaron-prindle
- yongruilin
- jpbetz
- thockin
content_type: concept
weight: 20
--&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.33 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes 1.35 includes optional _declarative validation_ for APIs. When enabled, the Kubernetes API server can use this mechanism rather than the legacy approach that relies on hand-written Go
code (`validation.go` files) to ensure that requests against the API are valid.
Kubernetes developers, and people [extending the Kubernetes API](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/),
can define validation rules directly alongside the API type definitions (`types.go` files). Code authors define
pecial comment tags (e.g., `+k8s:minimum=0`). A code generator (`validation-gen`) then uses these tags to produce
optimized Go code for API validation.
--&gt;
&lt;p&gt;Kubernetes 1.35 包含可选用于 API的&lt;strong&gt;声明式验证&lt;/strong&gt;特性。
当启用时，Kubernetes API 服务器可以使用此机制而不是依赖手写的
Go 代码（&lt;code&gt;validation.go&lt;/code&gt; 文件）来确保针对 API 的请求是有效的。
Kubernetes 开发者和&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/"&gt;扩展 Kubernetes API&lt;/a&gt;
的人员可以直接在 API 类型定义（&lt;code&gt;types.go&lt;/code&gt; 文件）旁边定义验证规则。
代码作者定义特殊的注释标签（例如，&lt;code&gt;+k8s:minimum=0&lt;/code&gt;）。
然后，一个代码生成器（&lt;code&gt;validation-gen&lt;/code&gt;）会使用这些标签来生成用于 API 验证的优化 Go 代码。&lt;/p&gt;</description></item><item><title>使用 Calico 提供 NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/</guid><description>&lt;!--
reviewers:
- caseydavenport
title: Use Calico for NetworkPolicy
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows a couple of quick ways to create a Calico cluster on Kubernetes.
--&gt;
&lt;p&gt;本页展示了几种在 Kubernetes 上快速创建 Calico 集群的方法。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-google-kubernetes-engine-gke) or [local](#creating-a-local-calico-cluster-with-kubeadm) cluster.
--&gt;
&lt;p&gt;确定你想部署一个&lt;a href="#gke-cluster"&gt;云版本&lt;/a&gt;还是&lt;a href="#local-cluster"&gt;本地版本&lt;/a&gt;的集群。&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;!--
## Creating a Calico cluster with Google Kubernetes Engine (GKE)

**Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts).
--&gt;
&lt;h2 id="gke-cluster"&gt;在 Google Kubernetes Engine (GKE) 上创建一个 Calico 集群&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;先决条件&lt;/strong&gt;：&lt;a href="https://cloud.google.com/sdk/docs/quickstarts"&gt;gcloud&lt;/a&gt;&lt;/p&gt;</description></item><item><title>使用 CustomResourceDefinition 扩展 Kubernetes API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/</guid><description>&lt;!--
title: Extend the Kubernetes API with CustomResourceDefinitions
reviewers:
- deads2k
- jpbetz
- liggitt
- roycaihw
- sttts
content_type: task
min-kubernetes-server-version: 1.16
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to install a
[custom resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
into the Kubernetes API by creating a
[CustomResourceDefinition](/docs/reference/generated/kubernetes-api/v1.35/#customresourcedefinition-v1-apiextensions-k8s-io).
--&gt;
&lt;p&gt;本页展示如何使用
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubernetes-api/v1.35/#customresourcedefinition-v1-apiextensions-k8s-io"&gt;CustomResourceDefinition&lt;/a&gt;
将&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;定制资源（Custom Resource）&lt;/a&gt;
安装到 Kubernetes API 上。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 DRA 为工作负载分配设备</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/</guid><description>&lt;!--
title: Allocate Devices to Workloads with DRA
content_type: task
min-kubernetes-server-version: v1.34
weight: 20
--&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;!--
This page shows you how to allocate devices to your Pods by using
_dynamic resource allocation (DRA)_. These instructions are for workload
operators. Before reading this page, familiarize yourself with how DRA works and
with DRA terminology like
&lt;a class='glossary-tooltip' title='描述工作负载所需的资源，例如设备。ResourceClaim 可以针对某些 DeviceClass 请求设备。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates' target='_blank' aria-label='ResourceClaims'&gt;ResourceClaims&lt;/a&gt; and
&lt;a class='glossary-tooltip' title='定义一个模板，Kubernetes 据此创建 ResourceClaim。此模板用于为每个 Pod 提供对一些独立、相似的资源的访问权限。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates' target='_blank' aria-label='ResourceClaimTemplates'&gt;ResourceClaimTemplates&lt;/a&gt;.
For more information, see
[Dynamic Resource Allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
--&gt;
&lt;p&gt;本文介绍如何使用&lt;strong&gt;动态资源分配（DRA）&lt;/strong&gt; 为 Pod 分配设备。
这些指示说明面向工作负载运维人员。在阅读本文之前，请先了解 DRA 的工作原理以及相关术语，例如
&lt;a class='glossary-tooltip' title='描述工作负载所需的资源，例如设备。ResourceClaim 可以针对某些 DeviceClass 请求设备。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates' target='_blank' aria-label='ResourceClaim'&gt;ResourceClaim&lt;/a&gt; 和
&lt;a class='glossary-tooltip' title='定义一个模板，Kubernetes 据此创建 ResourceClaim。此模板用于为每个 Pod 提供对一些独立、相似的资源的访问权限。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates' target='_blank' aria-label='ResourceClaimTemplate'&gt;ResourceClaimTemplate&lt;/a&gt;。
更多信息参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/"&gt;动态资源分配（DRA）&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>使用 Kustomize 对 Kubernetes 对象进行声明式管理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization/</guid><description>&lt;!--
title: Declarative Management of Kubernetes Objects Using Kustomize
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
[Kustomize](https://github.com/kubernetes-sigs/kustomize) is a standalone tool
to customize Kubernetes objects
through a [kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization).
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/kustomize"&gt;Kustomize&lt;/a&gt; 是一个独立的工具，用来通过
&lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization"&gt;kustomization 文件&lt;/a&gt;
定制 Kubernetes 对象。&lt;/p&gt;
&lt;!--
Since 1.14, kubectl also
supports the management of Kubernetes objects using a kustomization file.
To view resources found in a directory containing a kustomization file, run the following command:
--&gt;
&lt;p&gt;从 1.14 版本开始，&lt;code&gt;kubectl&lt;/code&gt; 也开始支持使用 kustomization 文件来管理 Kubernetes 对象。
要查看包含 kustomization 文件的目录中的资源，执行下面的命令：&lt;/p&gt;</description></item><item><title>使用 Service 连接到应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/connect-applications-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/connect-applications-service/</guid><description>&lt;!--
reviewers:
- caesarxuchao
- lavalamp
- thockin
title: Connecting Applications with Services
content_type: tutorial
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
## The Kubernetes model for connecting containers

Now that you have a continuously running, replicated application you can expose it on a network.

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on.
Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly
create links between pods or map container ports to host ports. This means that containers within
a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other
without NAT. The rest of this document elaborates on how you can run reliable services on such a
networking model.

This tutorial uses a simple nginx web server to demonstrate the concept.
--&gt;
&lt;h2 id="the-kubernetes-model-for-connecting-containers"&gt;Kubernetes 连接容器的模型&lt;/h2&gt;
&lt;p&gt;既然有了一个持续运行、可复制的应用，我们就能够将它暴露到网络上。&lt;/p&gt;</description></item><item><title>使用工作队列进行粗粒度并行处理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/</guid><description>&lt;!--
title: Coarse Parallel Processing Using a Work Queue
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In this example, you will run a Kubernetes Job with multiple parallel
worker processes.

In this example, as each pod is created, it picks up one unit of work
from a task queue, completes it, deletes it from the queue, and exits.

Here is an overview of the steps in this example:

1. **Start a message queue service.** In this example, you use RabbitMQ, but you could use another
 one. In practice you would set up a message queue service once and reuse it for many jobs.
1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In
 this example, a message is an integer that we will do a lengthy computation on.
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
 one task from the message queue, processes it, and exits.
--&gt;
&lt;p&gt;本例中，你将会运行包含多个并行工作进程的 Kubernetes Job。&lt;/p&gt;</description></item><item><title>使用配置文件管理 Secret</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/</guid><description>&lt;!-- 
title: Managing Secrets using Configuration File
content_type: task
weight: 20
description: Creating Secret objects using resource configuration file.
--&gt;
&lt;!-- overview --&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用启动引导令牌（Bootstrap Tokens）认证</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/</guid><description>&lt;!--
reviewers:
- jbeda
title: Authenticating with Bootstrap Tokens
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Bootstrap tokens are a simple bearer token that is meant to be used when
creating new clusters or joining new nodes to an existing cluster.
It was built to support [kubeadm](/docs/reference/setup-tools/kubeadm/), but can be used in other contexts
for users that wish to start clusters without `kubeadm`. It is also built to
work, via RBAC policy, with the
[kubelet TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system.
--&gt;
&lt;p&gt;启动引导令牌是一种简单的持有者令牌（Bearer Token），这种令牌是在新建集群
或者在现有集群中添加新节点时使用的。
它被设计成能够支持 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt;，
但是也可以被用在其他的案例中以便用户在不使用 &lt;code&gt;kubeadm&lt;/code&gt; 的情况下启动集群。
它也被设计成可以通过 RBAC 策略，结合
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/"&gt;kubelet TLS 启动引导&lt;/a&gt;
系统进行工作。&lt;/p&gt;</description></item><item><title>示例：使用 Redis 部署 PHP 留言板应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateless-application/guestbook/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateless-application/guestbook/</guid><description>&lt;!--
title: "Example: Deploying PHP Guestbook application with Redis"
reviewers:
- ahmetb
- jimangel
content_type: tutorial
weight: 20
card:
 name: tutorials
 weight: 30
 title: "Stateless Example: PHP Guestbook with Redis"
min-kubernetes-server-version: v1.14
source: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial shows you how to build and deploy a simple _(not production
ready)_, multi-tier web application using Kubernetes and
[Docker](https://www.docker.com/). This example consists of the following
components:
--&gt;
&lt;p&gt;本教程向你展示如何使用 Kubernetes 和 &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;
构建和部署一个简单的 &lt;strong&gt;(非面向生产的)&lt;/strong&gt; 多层 Web 应用。本例由以下组件组成：&lt;/p&gt;</description></item><item><title>示例：使用持久卷部署 WordPress 和 MySQL</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</guid><description>&lt;!--
title: "Example: Deploying WordPress and MySQL with Persistent Volumes"
reviewers:
- ahmetb
content_type: tutorial
weight: 20
card:
 name: tutorials
 weight: 40
 title: "Stateful Example: Wordpress with Persistent Volumes"
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial shows you how to deploy a WordPress site and a MySQL database using
Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
--&gt;
&lt;p&gt;本示例描述了如何通过 Minikube 在 Kubernetes 上安装 WordPress 和 MySQL。
这两个应用都使用 PersistentVolumes 和 PersistentVolumeClaims 保存数据。&lt;/p&gt;</description></item><item><title>提出内容改进建议</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/suggesting-improvements/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/suggesting-improvements/</guid><description>&lt;!--
title: Suggesting content improvements
content_type: concept
weight: 20
card:
 name: contribute
 weight: 15
 anchors:
 - anchor: "#opening-an-issue"
 title: Suggest content improvements
--&gt;
&lt;!-- overview --&gt;
&lt;!--
If you notice an issue with Kubernetes documentation or have an idea for new content,
then open an issue. All you need is a [GitHub account](https://github.com/join) and
a web browser.

In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors
then review, categorize and tag issues as needed. Next, you or another member
of the Kubernetes community open a pull request with changes to resolve the issue.
--&gt;
&lt;p&gt;如果你发现 Kubernetes 文档中存在问题或者你有一个关于新内容的想法，
可以考虑提出一个问题（issue）。你只需要具有 &lt;a href="https://github.com/join"&gt;GitHub 账号&lt;/a&gt;和 Web
浏览器就可以完成这件事。&lt;/p&gt;</description></item><item><title>通过 ConfigMap 更新配置</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/updating-configuration-via-a-configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/updating-configuration-via-a-configmap/</guid><description>&lt;!--
title: Updating Configuration via a ConfigMap
content_type: tutorial
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides a step-by-step example of updating configuration within a Pod via a ConfigMap
and builds upon the [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) task.
At the end of this tutorial, you will understand how to change the configuration for a running application.
This tutorial uses the `alpine` and `nginx` images as examples.
--&gt;
&lt;p&gt;本页提供了通过 ConfigMap 更新 Pod 中配置信息的分步示例，
本教程的前置任务是&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;配置 Pod 以使用 ConfigMap&lt;/a&gt;。
在本教程结束时，你将了解如何变更运行中应用的配置。
本教程以 &lt;code&gt;alpine&lt;/code&gt; 和 &lt;code&gt;nginx&lt;/code&gt; 镜像为例。&lt;/p&gt;</description></item><item><title>为发行版本撰写功能特性文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/new-features/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/new-features/</guid><description>&lt;!--
title: Documenting a feature for a release
linktitle: Documenting for a release
content_type: concept
main_menu: true
weight: 20
card:
 name: contribute
 weight: 45
 title: Documenting a feature for a release
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Each major Kubernetes release introduces new features that require documentation.
New releases also bring updates to existing features and documentation
(such as upgrading a feature from alpha to beta).

Generally, the SIG responsible for a feature submits draft documentation of the
feature as a pull request to the appropriate development branch of the
`kubernetes/website` repository, and someone on the SIG Docs team provides
editorial feedback or edits the draft directly. This section covers the branching
conventions and process used during a release by both groups.
--&gt;
&lt;p&gt;Kubernetes 的每个主要版本发布都会包含一些需要文档说明的新功能。
新的发行版本也会更新已有的功能特性和文档（例如将某功能特性从 Alpha 升级为 Beta）。&lt;/p&gt;</description></item><item><title>为命名空间配置默认的 CPU 请求和限制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/</guid><description>&lt;!--
title: Configure Default CPU Requests and Limits for a Namespace
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure default CPU requests and limits for a
&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.

A Kubernetes cluster can be divided into namespaces. If you create a Pod within a
namespace that has a default CPU
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits), and any container in that Pod does not specify
its own CPU limit, then the
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; assigns the default
CPU limit to that container.

Kubernetes assigns a default CPU
[request](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
but only under certain conditions that are explained later in this page.
--&gt;
&lt;p&gt;本章介绍如何为&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;配置默认的 CPU 请求和限制。&lt;/p&gt;</description></item><item><title>为容器和 Pods 分配 CPU 资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/</guid><description>&lt;!--
title: Assign CPU Resources to Containers and Pods
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to assign a CPU *request* and a CPU *limit* to
a container. Containers cannot use more CPU than the configured limit.
Provided the system has CPU time free, a container is guaranteed to be
allocated as much CPU as it requests.
--&gt;
&lt;p&gt;本页面展示如何为容器设置 CPU &lt;strong&gt;request（请求）&lt;/strong&gt; 和 CPU &lt;strong&gt;limit（限制）&lt;/strong&gt;。
容器使用的 CPU 不能超过所配置的限制。
如果系统有空闲的 CPU 时间，则可以保证给容器分配其所请求数量的 CPU 资源。&lt;/p&gt;</description></item><item><title>为容器设置环境变量</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-environment-variable-container/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-environment-variable-container/</guid><description>&lt;!--
title: Define Environment Variables for a Container
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to define environment variables for a container
in a Kubernetes Pod. 
--&gt;
&lt;p&gt;本页将展示如何为 Kubernetes Pod 下的容器设置环境变量。&lt;/p&gt;
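这类环境变量通常直接在 Pod 规约的 env 字段中定义。下面是一个最小的示意清单（其中 envar-demo、DEMO_GREETING 等名称均为假设的示例值）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo            # 名称为假设的示例值
spec:
  containers:
  - name: envar-demo-container
    image: nginx
    env:
    # 每个环境变量都是一个 名称/取值（name/value）对
    - name: DEMO_GREETING
      value: "Hello from the environment"
```

创建 Pod 之后，可以用类似 kubectl exec envar-demo -- printenv DEMO_GREETING 的命令来验证变量已经生效。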
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>为上游 Kubernetes 代码库做出贡献</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/contribute-upstream/</guid><description>&lt;!--
title: Contributing to the Upstream Kubernetes Code
content_type: task
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to contribute to the upstream kubernetes/kubernetes project
to fix bugs found in the Kubernetes API documentation or the `kube-*`
components such as `kube-apiserver`, `kube-controller-manager`, etc.
--&gt;
&lt;p&gt;此页面描述如何为上游 &lt;code&gt;kubernetes/kubernetes&lt;/code&gt; 项目做出贡献，如修复 Kubernetes API
文档或 &lt;code&gt;kube-*&lt;/code&gt; 组件（例如 &lt;code&gt;kube-apiserver&lt;/code&gt;、&lt;code&gt;kube-controller-manager&lt;/code&gt; 等）
中发现的错误。&lt;/p&gt;
&lt;!--
If you instead want to regenerate the reference documentation for the Kubernetes
API or the `kube-*` components from the upstream code, see the following instructions:

- [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
- [Generating Reference Documentation for the Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
--&gt;
&lt;p&gt;如果你仅想从上游代码重新生成 Kubernetes API 或 &lt;code&gt;kube-*&lt;/code&gt; 组件的参考文档，请参考以下说明：&lt;/p&gt;</description></item><item><title>运行一个单实例有状态应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This page shows you how to run a single-instance stateful application
in Kubernetes using a PersistentVolume and a Deployment. The
application is MySQL.
--&gt;
&lt;p&gt;本文介绍在 Kubernetes 中如何使用 PersistentVolume 和 Deployment 运行一个单实例有状态应用。
该示例应用是 MySQL。&lt;/p&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Create a PersistentVolume referencing a disk in your environment.
* Create a MySQL Deployment.
* Expose MySQL to other pods in the cluster at a known DNS name.
--&gt;
&lt;ul&gt;
&lt;li&gt;在你的环境中创建一个引用磁盘的 PersistentVolume。&lt;/li&gt;
&lt;li&gt;创建一个 MySQL Deployment。&lt;/li&gt;
&lt;li&gt;在集群内以一个已知的 DNS 名称将 MySQL 暴露给其他 Pod。&lt;/li&gt;
&lt;/ul&gt;
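上述第一步中引用磁盘的 PersistentVolume 大致可以用如下清单来表达（仅为示意，hostPath 路径与容量均为假设值，hostPath 卷也仅适用于单节点测试环境）：

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 20Gi            # 容量为假设的示例值
  accessModes:
    - ReadWriteOnce          # 供单实例 MySQL 以读写方式挂载
  hostPath:
    path: /mnt/data          # 路径为假设的示例值，仅用于单节点测试
```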
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>运行于多可用区环境</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/multiple-zones/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/multiple-zones/</guid><description>&lt;!--
reviewers:
- jlowdermilk
- justinsb
- quinton-hoole
title: Running in multiple zones
weight: 20
content_type: concept
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes running a cluster across multiple zones.
--&gt;
&lt;p&gt;本页描述如何跨多个区（Zone）运行集群。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Background

Kubernetes is designed so that a single Kubernetes cluster can run
across multiple failure zones, typically where these zones fit within
a logical grouping called a _region_. Major cloud providers define a region
as a set of failure zones (also called _availability zones_) that provide
a consistent set of features: within a region, each zone offers the same
APIs and services.

Typical cloud architectures aim to minimize the chance that a failure in
one zone also impairs services in another zone.
--&gt;
&lt;h2 id="background"&gt;背景&lt;/h2&gt;
&lt;p&gt;Kubernetes 从设计上允许同一个 Kubernetes 集群跨多个失效区来运行，
通常这些区位于某个称作&lt;strong&gt;区域（Region）&lt;/strong&gt;的逻辑分组中。
主要的云提供商都将区域定义为一组失效区的集合（也称作&lt;strong&gt;可用区（Availability Zone）&lt;/strong&gt;），
能够提供一组一致的功能特性：每个区域内，各个可用区提供相同的 API 和服务。&lt;/p&gt;</description></item><item><title>在名字空间级别应用 Pod 安全标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/ns-level-pss/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/ns-level-pss/</guid><description>&lt;!--
title: Apply Pod Security Standards at the Namespace Level
content_type: tutorial
weight: 20
--&gt;
&lt;div class="alert alert-primary" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;Note&lt;/div&gt;
&lt;!--
This tutorial applies only for new clusters.
--&gt;
&lt;p&gt;本教程仅适用于新集群。&lt;/p&gt;
&lt;/div&gt;
&lt;!--
Pod Security Admission is an admission controller that applies
[Pod Security Standards](/docs/concepts/security/pod-security-standards/) 
when pods are created. It is a feature GA'ed in v1.25.
In this tutorial, you will enforce the `baseline` Pod Security Standard,
one namespace at a time.

You can also apply Pod Security Standards to multiple namespaces at once at the cluster
level. For instructions, refer to
[Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss/).
--&gt;
&lt;p&gt;Pod Security Admission 是一个准入控制器，在创建 Pod 时应用 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/"&gt;Pod 安全标准&lt;/a&gt;。
这是在 v1.25 中达到正式发布（GA）的功能。
在本教程中，你将应用 &lt;code&gt;baseline&lt;/code&gt; Pod 安全标准，每次一个名字空间。&lt;/p&gt;</description></item><item><title>证书</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/certificates/</guid><description>&lt;!--
title: Certificates
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
To learn how to generate certificates for your cluster, see [Certificates](/docs/tasks/administer-cluster/certificates/).
--&gt;
&lt;p&gt;要了解如何为集群生成证书，参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/certificates/"&gt;证书&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>资源配额</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/resource-quotas/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/resource-quotas/</guid><description>&lt;!--
reviewers:
- derekwaynecarr
title: Resource Quotas
api_metadata:
- apiVersion: "v1"
 kind: "ResourceQuota"
content_type: concept
weight: 20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.

_Resource quotas_ are a tool for administrators to address this concern.
--&gt;
&lt;p&gt;当多个用户或团队共享具有固定节点数目的集群时，人们会担心有人使用超过其基于公平原则所分配到的资源量。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;资源配额&lt;/strong&gt;是帮助管理员解决这一问题的工具。&lt;/p&gt;
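资源配额通过 ResourceQuota 对象来定义。下面是一个示意性的清单（其中名字空间 team-a 与各个配额数值均为假设的示例值）：

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a          # 名字空间为假设的示例值
spec:
  hard:
    requests.cpu: "4"        # 该名字空间中所有 Pod 的 CPU 请求总量上限
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # 该名字空间中可创建的 Pod 数量上限
```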
&lt;!--
A resource quota, defined by a ResourceQuota object, provides constraints that limit
aggregate resource consumption per &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;. A ResourceQuota can also
limit the [quantity of objects that can be created in a namespace](#quota-on-object-count) by API kind, as well as the total
amount of &lt;a class='glossary-tooltip' title='数量确定的可供使用的基础设施（CPU、内存等）。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-infrastructure-resource' target='_blank' aria-label='infrastructure resources'&gt;infrastructure resources&lt;/a&gt; that may be consumed by
API objects found in that namespace.
--&gt;
&lt;p&gt;资源配额，由 ResourceQuota 对象定义，
提供了限制每个&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;的资源总消耗的约束。
资源配额还可以限制在命名空间中可以创建的&lt;a href="#quota-on-object-count"&gt;对象数量&lt;/a&gt;（按 API 类型计算），
以及该命名空间中存在的 API
对象可能消耗的&lt;a class='glossary-tooltip' title='数量确定的可供使用的基础设施（CPU、内存等）。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-infrastructure-resource' target='_blank' aria-label='基础设施资源'&gt;基础设施资源&lt;/a&gt;的总量。&lt;/p&gt;</description></item><item><title>投射卷</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/projected-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/projected-volumes/</guid><description>&lt;!--
reviewers:
- marosset
- jsturtevant
- zshihang
title: Projected Volumes
content_type: concept
weight: 21 # just after persistent volumes
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document describes _projected volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
--&gt;
&lt;p&gt;本文档描述 Kubernetes 中的&lt;strong&gt;投射卷（Projected Volume）&lt;/strong&gt;。
建议先熟悉&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/"&gt;卷&lt;/a&gt;概念。&lt;/p&gt;
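举例而言，下面的示意清单将一个 Secret 和一个 ConfigMap 投射到容器中的同一个目录（其中 mysecret、myconfigmap 等名称均为假设的示例值，且需事先存在）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo       # 名称为假设的示例值
spec:
  containers:
  - name: demo
    image: busybox:1.28
    command: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:               # 多个卷源被映射到同一目录
      - secret:
          name: mysecret     # 假设已存在的 Secret
      - configMap:
          name: myconfigmap  # 假设已存在的 ConfigMap
```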
&lt;!-- body --&gt;
&lt;!--
## Introduction

A `projected` volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected:

* [`secret`](/docs/concepts/storage/volumes/#secret)
* [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
* [`configMap`](/docs/concepts/storage/volumes/#configmap)
* [`serviceAccountToken`](#serviceaccounttoken)
* [`clusterTrustBundle`](#clustertrustbundle)
* [`podCertificate`](#podcertificate)
--&gt;
&lt;h2 id="introduction"&gt;介绍&lt;/h2&gt;
&lt;p&gt;一个 &lt;code&gt;projected&lt;/code&gt; 卷可以将若干现有的卷源映射到同一个目录之上。&lt;/p&gt;</description></item><item><title>官方 CVE 订阅源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/official-cve-feed/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/official-cve-feed/</guid><description>&lt;!--
title: Official CVE Feed
linkTitle: CVE feed
weight: 25
outputs:
 - json
 - html
 - rss
layout: cve-feed
--&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.27 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This is a community maintained list of official CVEs announced by
the Kubernetes Security Response Committee. See
[Kubernetes Security and Disclosure Information](/docs/reference/issues-security/security/)
for more details.

The Kubernetes project publishes a programmatically accessible feed of published
security issues in [JSON feed](/docs/reference/issues-security/official-cve-feed/index.json)
and [RSS feed](/docs/reference/issues-security/official-cve-feed/feed.xml)
formats. You can access it by executing the following commands:
--&gt;
&lt;p&gt;这是由 Kubernetes 安全响应委员会（Security Response Committee, SRC）公布的经社区维护的官方 CVE 列表。
更多细节请参阅 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/security/"&gt;Kubernetes 安全和信息披露&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>服务器端应用（Server-Side Apply）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/server-side-apply/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/server-side-apply/</guid><description>&lt;!--
title: Server-Side Apply
reviewers:
- smarterclayton
- apelisse
- lavalamp
- liggitt
content_type: concept
weight: 25
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： ServerSideApply"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.22 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
Kubernetes supports multiple appliers collaborating to manage the fields
of a single [object](/docs/concepts/overview/working-with-objects/).

Server-Side Apply provides an optional mechanism for your cluster's control plane to track
changes to an object's fields. At the level of a specific resource, Server-Side
Apply records and tracks information about control over the fields of that object.
--&gt;
&lt;p&gt;Kubernetes 支持多个应用者（applier）协作管理同一个&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/"&gt;对象&lt;/a&gt;的字段。
服务器端应用为集群的控制平面提供了一种可选机制，用于跟踪对对象字段的更改。
在特定资源级别，服务器端应用记录并跟踪有关控制该对象字段的信息。&lt;/p&gt;</description></item><item><title>服务账号</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/service-accounts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/service-accounts/</guid><description>&lt;!--
title: Service Accounts
description: &gt;
 Learn about ServiceAccount objects in Kubernetes.
api_metadata:
- apiVersion: "v1"
 kind: "ServiceAccount"
content_type: concept
weight: 25
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page introduces the ServiceAccount object in Kubernetes, providing
information about how service accounts work, use cases, limitations,
alternatives, and links to resources for additional guidance.
--&gt;
&lt;p&gt;本页介绍 Kubernetes 中的 ServiceAccount 对象，
讲述服务账号的工作原理、使用场景、限制、替代方案，还提供了一些资源链接方便查阅更多指导信息。&lt;/p&gt;
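作为示意，下面的清单创建一个 ServiceAccount，并让一个 Pod 以该服务账号的身份运行（其中 build-robot 等名称均为假设的示例值）：

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot          # 名称为假设的示例值
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: build-robot   # Pod 将以该服务账号的身份向 API 服务器认证
  containers:
  - name: app
    image: nginx
```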
&lt;!-- body --&gt;
&lt;!--
## What are service accounts? {#what-are-service-accounts}
--&gt;
&lt;h2 id="what-are-service-accounts"&gt;什么是服务账号？&lt;/h2&gt;
&lt;!--
A service account is a type of non-human account that, in Kubernetes, provides
a distinct identity in a Kubernetes cluster. Application Pods, system
components, and entities inside and outside the cluster can use a specific
ServiceAccount's credentials to identify as that ServiceAccount. This identity
is useful in various situations, including authenticating to the API server or
implementing identity-based security policies.
--&gt;
&lt;p&gt;服务账号是 Kubernetes 中的一种非人类账号，它在 Kubernetes 集群中提供独特的身份标识。
应用 Pod、系统组件以及集群内外的实体可以使用特定 ServiceAccount 的凭据来将自己标识为该 ServiceAccount。
这种身份可用于许多场景，包括向 API 服务器进行身份认证或实现基于身份的安全策略。&lt;/p&gt;</description></item><item><title>CustomResourceDefinition 的版本</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/</guid><description>&lt;!--
title: Versions in CustomResourceDefinitions
reviewers:
- sttts
- liggitt
content_type: task
weight: 30
min-kubernetes-server-version: v1.16
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to add versioning information to
[CustomResourceDefinitions](/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/), to indicate the stability
level of your CustomResourceDefinitions or advance your API to a new version with conversion between API representations. It also describes how to upgrade an object from one version to another.
--&gt;
&lt;p&gt;本页介绍如何添加版本信息到
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/"&gt;CustomResourceDefinitions&lt;/a&gt;。
目的是标明 CustomResourceDefinition 的稳定级别，
或者通过在不同 API 表示形式之间进行转换，将你的 API 升级到新的版本。
本页还描述如何将对象从一个版本升级到另一个版本。&lt;/p&gt;</description></item><item><title>Ingress</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/ingress/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/ingress/</guid><description>&lt;!--
reviewers:
- robscott
- rikatz
title: Ingress
api_metadata:
- apiVersion: "networking.k8s.io/v1"
 kind: "Ingress"
- apiVersion: "networking.k8s.io/v1"
 kind: "IngressClass"
content_type: concept
description: &gt;-
 Make your HTTP (or HTTPS) network service available using a protocol-aware configuration
 mechanism, that understands web concepts like URIs, hostnames, paths, and more.
 The Ingress concept lets you map traffic to different backends based on rules you define
 via the Kubernetes API.
weight: 30
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
 An API object that manages external access to the services in a cluster, typically HTTP.
--&gt;
&lt;p&gt;Ingress 是对集群中服务的外部访问进行管理的 API 对象，典型的访问方式是 HTTP。&lt;/p&gt;</description></item><item><title>kube-apiserver</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/</guid><description>&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
The Kubernetes API server validates and configures data
for the api objects which include pods, services, replicationcontrollers, and
others. The API Server services REST operations and provides the frontend to the
cluster's shared state through which all other components interact.
--&gt;
&lt;p&gt;Kubernetes API 服务器验证并配置 API 对象的数据，
这些对象包括 pods、services、replicationcontrollers 等。
API 服务器为 REST 操作提供服务，并为集群的共享状态提供前端，
所有其他组件都通过该前端进行交互。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kube-apiserver [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--admission-control strings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionPolicy, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeDeclaredFeatureValidator, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PodNodeSelector, PodSecurity, PodTolerationRestriction, PodTopologyLabels, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
--&gt;
准入过程分为两个阶段。第一阶段仅运行变更型准入插件。第二阶段仅运行验证型准入插件。
以下列表中的名称可能代表验证型插件、变更型插件或两者兼有。
传递给此标志的插件顺序无关紧要。以逗号分隔的列表：
AlwaysAdmit、AlwaysDeny、AlwaysPullImages、CertificateApproval、
CertificateSigning、CertificateSubjectRestriction、
ClusterTrustBundleAttest、DefaultIngressClass、DefaultStorageClass、
DefaultTolerationSeconds、DenyServiceExternalIPs、EventRateLimit、
ExtendedResourceToleration、ImagePolicyWebhook、LimitPodHardAntiAffinityTopology、
LimitRanger、MutatingAdmissionPolicy、MutatingAdmissionWebhook、NamespaceAutoProvision、
NamespaceExists、NamespaceLifecycle、NodeDeclaredFeatureValidator、NodeRestriction、
OwnerReferencesPermissionEnforcement、PersistentVolumeClaimResize、PodNodeSelector、
PodSecurity、PodTolerationRestriction、PodTopologyLabels、Priority、ResourceQuota、
RuntimeClass、ServiceAccount、StorageObjectInUseProtection、TaintNodesByCondition、
ValidatingAdmissionPolicy、ValidatingAdmissionWebhook。
（已弃用：请改用 &lt;code&gt;--enable-admission-plugins&lt;/code&gt; 或 &lt;code&gt;--disable-admission-plugins&lt;/code&gt;。将在未来版本中移除。）
&lt;/p&gt;</description></item><item><title>kube-controller-manager</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/</guid><description>&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
The Kubernetes controller manager is a daemon that embeds
the core control loops shipped with Kubernetes. In applications of robotics and
automation, a control loop is a non-terminating loop that regulates the state of
the system. In Kubernetes, a controller is a control loop that watches the shared
state of the cluster through the apiserver and makes changes attempting to move the
current state towards the desired state. Examples of controllers that ship with
Kubernetes today are the replication controller, endpoints controller, namespace
controller, and serviceaccounts controller.
--&gt;
&lt;p&gt;Kubernetes 控制器管理器是一个守护进程，内嵌随 Kubernetes 一起发布的核心控制回路。
在机器人和自动化的应用中，控制回路是一个永不休止的循环，用于调节系统状态。
在 Kubernetes 中，每个控制器是一个控制回路，通过 API 服务器监视集群的共享状态，
并尝试进行更改以将当前状态转为期望状态。
目前，Kubernetes 自带的控制器例子包括副本控制器、端点（Endpoints）控制器、命名空间控制器和服务账号控制器等。&lt;/p&gt;</description></item><item><title>kube-proxy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/</guid><description>&lt;!-- 
title: kube-proxy
content_type: tool-reference
weight: 30
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
The Kubernetes network proxy runs on each node. This
reflects services as defined in the Kubernetes API on each node and can do simple
TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
Service cluster IPs and ports are currently found through Docker-links-compatible
environment variables specifying ports opened by the service proxy. There is an optional
addon that provides cluster DNS for these cluster IPs. The user must create a service
with the apiserver API to configure the proxy.
--&gt;
&lt;p&gt;Kubernetes 网络代理在每个节点上运行。网络代理在每个节点上反映 Kubernetes API
中定义的服务，并且可以执行简单的 TCP、UDP 和 SCTP 流转发，
或者在一组后端之间执行轮询式的 TCP、UDP 和 SCTP 转发。
当前可通过与 Docker links 兼容的环境变量找到服务集群 IP 和端口，
这些环境变量指定了服务代理打开的端口。
有一个可选的插件，可以为这些集群 IP 提供集群 DNS。
用户必须使用 apiserver API 创建服务才能配置代理。&lt;/p&gt;</description></item><item><title>kube-scheduler</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/</guid><description>&lt;!-- 
title: kube-scheduler
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!-- 
The Kubernetes scheduler is a control plane process which assigns
Pods to Nodes. The scheduler determines which Nodes are valid placements for
each Pod in the scheduling queue according to constraints and available
resources. The scheduler then ranks each valid Node and binds the Pod to a
suitable Node. Multiple different schedulers may be used within a cluster;
kube-scheduler is the reference implementation.
See [scheduling](https://kubernetes.io/docs/concepts/scheduling-eviction/)
for more information about scheduling and the kube-scheduler component.
--&gt;
&lt;p&gt;Kubernetes 调度器是一个控制面进程，负责将 Pods 指派到节点上。
调度器基于约束和可用资源为调度队列中每个 Pod 确定其可合法放置的节点。
调度器之后对所有合法的节点进行排序，将 Pod 绑定到一个合适的节点。
在同一个集群中可以使用多个不同的调度器；kube-scheduler 是其参考实现。
参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/"&gt;调度&lt;/a&gt;以获得关于调度和
kube-scheduler 组件的更多信息。&lt;/p&gt;</description></item><item><title>kubeadm join</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join/</guid><description>&lt;!--
reviewers:
- luxas
- jbeda
title: kubeadm join
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This command initializes a new Kubernetes node and joins it to the cluster.
--&gt;
&lt;p&gt;此命令用来初始化新的 Kubernetes 节点并将其加入集群。&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!-- 
Run this on any machine you wish to join an existing cluster 
--&gt;
&lt;p&gt;在你希望加入现有集群的任何机器上运行它。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
When joining a kubeadm initialized cluster, we need to establish
bidirectional trust. This is split into discovery (having the Node
trust the Kubernetes Control Plane) and TLS bootstrap (having the
Kubernetes Control Plane trust the Node).
--&gt;
&lt;p&gt;当节点加入 kubeadm 初始化的集群时，我们需要建立双向信任。
这个过程可以分解为发现（让待加入节点信任 Kubernetes 控制平面节点）和
TLS 引导（让 Kubernetes 控制平面节点信任待加入节点）两个部分。&lt;/p&gt;</description></item><item><title>kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl/</guid><description>&lt;!--
title: kubectl
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/
--&gt;
&lt;p&gt;kubectl 用于控制 Kubernetes 集群管理器。&lt;/p&gt;
&lt;p&gt;参阅更多细节：
&lt;a href="https://kubernetes.io/zh-cn/docs/reference/kubectl/"&gt;https://kubernetes.io/zh-cn/docs/reference/kubectl/&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as-group strings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--&gt;
操作所用的伪装用户组，此标志可以被重复设置以指定多个组。
&lt;/p&gt;</description></item><item><title>kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kubectl/</guid><description>&lt;!--
title: kubectl
content_type: tool-reference
weight: 30
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
kubectl controls the Kubernetes cluster manager.
--&gt;
&lt;p&gt;kubectl 用于控制 Kubernetes 集群管理器。&lt;/p&gt;
&lt;!--
Find more information in [Command line tool](/docs/reference/kubectl/) (`kubectl`).
--&gt;
&lt;p&gt;更多信息请查阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/"&gt;命令行工具&lt;/a&gt;（&lt;code&gt;kubectl&lt;/code&gt;）。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--add-dir-header&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If true, adds the file directory to the header of the log messages
 --&gt;
 设置为 true 表示添加文件目录到日志信息头中。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--alsologtostderr&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 log to standard error as well as files
 --&gt;
 表示将日志输出到文件的同时输出到 stderr。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--as string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Username to impersonate for the operation
 --&gt;
 操作所用的伪装用户名。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--as-group stringArray&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
 --&gt;
 模拟指定的组来执行操作，可以使用这个标志来指定多个组。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--azure-container-registry-config string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Path to the file containing Azure container registry configuration information.
 --&gt;
 包含 Azure 容器仓库配置信息的文件的路径。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--cache-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值："$HOME/.kube/cache"&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Default cache directory
 --&gt;
 默认缓存目录。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--certificate-authority string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Path to a cert file for the certificate authority
 --&gt;
 指向证书机构的 cert 文件路径。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--client-certificate string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Path to a client certificate file for TLS
 --&gt;
 TLS 使用的客户端证书路径。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--client-key string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Path to a client key file for TLS
 --&gt;
 TLS 使用的客户端密钥文件路径。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--cloud-provider-gce-l7lb-src-cidrs cidrs&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：130.211.0.0/22,35.191.0.0/16&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--CIDRs opened in GCE firewall for L7 LB traffic proxy &amp; health checks--&gt;
 在 GCE 防火墙中开放的 CIDR，用来进行 L7 LB 流量代理和健康检查。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--cloud-provider-gce-lb-src-cidrs cidrs&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 CIDRs opened in GCE firewall for L4 LB traffic proxy &amp; health checks
 --&gt;
 在 GCE 防火墙中开放的 CIDR，用来进行 L4 LB 流量代理和健康检查。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--cluster string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 The name of the kubeconfig cluster to use
 --&gt;
 要使用的 kubeconfig 集群的名称。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--context string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 The name of the kubeconfig context to use
 --&gt;
 要使用的 kubeconfig 上下文的名称。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--default-not-ready-toleration-seconds int&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：300&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
 --&gt;
 表示针对 notReady:NoExecute 容忍度的 tolerationSeconds。默认情况下，这种容忍度会被添加到尚未具有此容忍度的每个 Pod 上。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--default-unreachable-toleration-seconds int&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：300&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
 --&gt;
 表示针对 unreachable:NoExecute 容忍度的 tolerationSeconds。默认情况下，这种容忍度会被添加到尚未具有此容忍度的每个 Pod 上。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 help for kubectl
 --&gt;
 kubectl 的帮助信息。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--insecure-skip-tls-verify&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
 --&gt;
 设置为 true，则表示不会检查服务器证书的有效性。这样会导致你的 HTTPS 连接不安全。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--kubeconfig string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Path to the kubeconfig file to use for CLI requests.
 --&gt;
 CLI 请求使用的 kubeconfig 配置文件的路径。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--log-backtrace-at traceLocation&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：0&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 when logging hits line file:N, emit a stack trace
 --&gt;
 当日志机制运行到指定文件的指定行（file:N）时，打印调用堆栈信息
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--log-dir string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If non-empty, write log files in this directory
 --&gt;
 如果不为空，则将日志文件写入此目录
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--log-file string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If non-empty, use this log file
 --&gt;
 如果不为空，则将使用此日志文件
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--log-file-max-size uint&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：1800&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
 --&gt;
 定义日志文件的最大尺寸。单位为兆字节。如果值设置为 0，则表示日志文件大小不受限制。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--log-flush-frequency duration&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：5s&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Maximum number of seconds between log flushes
 --&gt;
 两次日志刷新操作之间的最长时间（秒）
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--logtostderr&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：true&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 log to standard error instead of files
 --&gt;
 日志输出到 stderr 而不是文件中
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--match-server-version&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Require server version to match client version
 --&gt;
 要求客户端版本和服务端版本相匹配
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;-n, --namespace string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If present, the namespace scope for this CLI request
 --&gt;
 如果存在，CLI 请求将使用此命名空间
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--one-output&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If true, only write logs to their native severity level (vs also writing to each lower severity level)
 --&gt;
 如果为 true，则只将日志写入其原生严重级别（而不是同时写入所有较低的严重级别）。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--password string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Password for basic authentication to the API server
 --&gt;
 API 服务器进行基本身份验证的密码
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--profile string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值："none"&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)
 --&gt;
 要记录的性能指标的名称。可取（none|cpu|heap|goroutine|threadcreate|block|mutex）其中之一。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--profile-output string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值："profile.pprof"&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Name of the file to write the profile to
 --&gt;
 用于转储所记录的性能信息的文件名。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--request-timeout string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值："0"&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
 --&gt;
 放弃单个服务器请求之前的等待时间，非零值需要包含相应时间单位（例如：1s、2m、3h）。
 零值则表示不做超时要求。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;-s, --server string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 The address and port of the Kubernetes API server
 --&gt;
 Kubernetes API 服务器的地址和端口。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--skip-headers&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If true, avoid header prefixes in the log messages
 --&gt;
 设置为 true 表示在日志消息中省略 header 前缀。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--skip-log-headers&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 If true, avoid headers when opening log files
 --&gt;
 设置为 true 则表示在打开日志文件时跳过 header 信息。
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--stderrthreshold severity&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值：2&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 logs at or above this threshold go to stderr
 --&gt;
 等于或高于此阈值的日志将输出到标准错误输出（stderr）
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--token string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Bearer token for authentication to the API server
 --&gt;
 用于对 API 服务器进行身份认证的持有者令牌
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--user string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 The name of the kubeconfig user to use
 --&gt;
 指定使用 kubeconfig 配置文件中的用户名
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--username string&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Username for basic authentication to the API server
 --&gt;
 用于 API 服务器的基本身份验证的用户名
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;-v, --v Level&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 number for the log level verbosity
 --&gt;
 指定输出日志的日志详细级别
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--version version[=true]&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 Print version information and quit
 --&gt;
 打印 kubectl 版本信息并退出
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td colspan="2"&gt;--vmodule moduleSpec&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
 &lt;!--
 comma-separated list of pattern=N settings for file-filtered logging
 --&gt;
 以逗号分隔的 pattern=N 设置列表，用于按文件过滤的日志记录
 &lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="环境变量"&gt;环境变量&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECONFIG&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
Path to the kubectl configuration ("kubeconfig") file. Default: "$HOME/.kube/config"
--&gt;
kubectl 的配置 ("kubeconfig") 文件的路径。默认值："$HOME/.kube/config"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_EXPLAIN_OPENAPIV3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
Toggles whether calls to `kubectl explain` use the new OpenAPIv3 data source available. OpenAPIV3 is enabled by default since Kubernetes 1.24.
--&gt;
切换对 &lt;code&gt;kubectl explain&lt;/code&gt; 的调用是否使用可用的新 OpenAPIv3 数据源。
OpenAPIV3 自 Kubernetes 1.24 起默认被启用。
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_ENABLE_CMD_SHADOW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
When set to true, external plugins can be used as subcommands for builtin commands if subcommand does not exist. In alpha stage, this feature can only be used for create command(e.g. kubectl create networkpolicy).
--&gt;
当设置为 true 时，如果子命令不存在，外部插件可以用作内置命令的子命令。
此功能处于 alpha 阶段，只能用于 create 命令（例如 &lt;code&gt;kubectl create networkpolicy&lt;/code&gt;）。
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_PORT_FORWARD_WEBSOCKETS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
When set to true, the kubectl port-forward command will attempt to stream using the websockets protocol.
If the upgrade to websockets fails, the commands will fallback to use the current SPDY protocol.
--&gt;
当设置为 true 时，&lt;code&gt;kubectl port-forward&lt;/code&gt; 命令将尝试使用 WebSocket 协议进行流式传输。
如果升级到 WebSocket 失败，命令将回退到使用当前的 SPDY 协议。
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_REMOTE_COMMAND_WEBSOCKETS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
When set to true, the kubectl exec, cp, and attach commands will attempt to stream using the websockets protocol. If the upgrade to websockets fails, the commands will fallback to use the current SPDY protocol.
--&gt;
当设置为 true 时，kubectl exec、cp 和 attach 命令将尝试使用 WebSocket 协议进行流式传输。
如果升级到 WebSocket 失败，这些命令将回退为使用当前的 SPDY 协议。
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_KUBERC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
When set to true, kuberc file is taken into account to define user specific preferences.
--&gt;
当设置为 true 时，kuberc 文件会被纳入考虑，用于定义用户特定偏好设置。
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_KYAML&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
When set to true, kubectl is capable of producing Kubernetes-specific dialect of YAML output format.
--&gt;
当设置为 true 时，kubectl 可以生成 Kubernetes 特定的 YAML 输出格式。
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="另请参见"&gt;另请参见&lt;/h2&gt;
&lt;!--
* [kubectl annotate](/docs/reference/kubectl/generated/kubectl_annotate/) - Update the annotations on a resource
* [kubectl api-resources](/docs/reference/kubectl/generated/kubectl_api-resources/) - Print the supported API resources on the server
* [kubectl api-versions](/docs/reference/kubectl/generated/kubectl_api-versions/) - Print the supported API versions on the server,
 in the form of "group/version"
* [kubectl apply](/docs/reference/kubectl/generated/kubectl_apply/) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](/docs/reference/kubectl/generated/kubectl_attach/) - Attach to a running container
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_annotate/"&gt;kubectl annotate&lt;/a&gt; - 更新资源所关联的注解&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_api-resources/"&gt;kubectl api-resources&lt;/a&gt; - 打印服务器上所支持的 API 资源&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_api-versions/"&gt;kubectl api-versions&lt;/a&gt; - 以“组/版本”的格式输出服务端所支持的 API 版本&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/"&gt;kubectl apply&lt;/a&gt; - 基于文件名或标准输入，将新的配置应用到资源上&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_attach/"&gt;kubectl attach&lt;/a&gt; - 挂接到一个正在运行的容器&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [kubectl auth](/docs/reference/kubectl/generated/kubectl_auth/) - Inspect authorization
* [kubectl autoscale](/docs/reference/kubectl/generated/kubectl_autoscale/) - Auto-scale a Deployment, ReplicaSet, or ReplicationController
* [kubectl certificate](/docs/reference/kubectl/generated/kubectl_certificate/) - Modify certificate resources.
* [kubectl cluster-info](/docs/reference/kubectl/generated/kubectl_cluster-info/) - Display cluster info
* [kubectl completion](/docs/reference/kubectl/generated/kubectl_completion/) - Output shell completion code for the specified shell (bash or zsh)
* [kubectl config](/docs/reference/kubectl/generated/kubectl_config/) - Modify kubeconfig files
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/"&gt;kubectl auth&lt;/a&gt; - 检查授权信息&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_autoscale/"&gt;kubectl autoscale&lt;/a&gt; - 对一个资源对象
（Deployment、ReplicaSet 或 ReplicationController）进行自动扩缩&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_certificate/"&gt;kubectl certificate&lt;/a&gt; - 修改证书资源&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_cluster-info/"&gt;kubectl cluster-info&lt;/a&gt; - 显示集群信息&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_completion/"&gt;kubectl completion&lt;/a&gt; - 根据已经给出的 Shell（bash 或 zsh），
输出 Shell 补全后的代码&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/"&gt;kubectl config&lt;/a&gt; - 修改 kubeconfig 配置文件&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [kubectl cordon](/docs/reference/kubectl/generated/kubectl_cordon/) - Mark node as unschedulable
* [kubectl cp](/docs/reference/kubectl/generated/kubectl_cp/) - Copy files and directories to and from containers.
* [kubectl create](/docs/reference/kubectl/generated/kubectl_create/) - Create a resource from a file or from stdin.
* [kubectl debug](/docs/reference/kubectl/generated/kubectl_debug/) - Create debugging sessions for troubleshooting workloads and nodes
* [kubectl delete](/docs/reference/kubectl/generated/kubectl_delete/) - Delete resources by filenames,
 stdin, resources and names, or by resources and label selector
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_cordon/"&gt;kubectl cordon&lt;/a&gt; - 标记节点为不可调度的&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_cp/"&gt;kubectl cp&lt;/a&gt; - 将文件和目录拷入/拷出容器&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/"&gt;kubectl create&lt;/a&gt; - 通过文件或标准输入来创建资源&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_debug/"&gt;kubectl debug&lt;/a&gt; - 创建用于排查工作负载和节点故障的调试会话&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_delete/"&gt;kubectl delete&lt;/a&gt; - 通过文件名、标准输入、资源和名字删除资源，
或者通过资源和标签选择算符来删除资源&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [kubectl describe](/docs/reference/kubectl/generated/kubectl_describe/) - Show details of a specific resource or group of resources
* [kubectl diff](/docs/reference/kubectl/generated/kubectl_diff/) - Diff live version against would-be applied version
* [kubectl drain](/docs/reference/kubectl/generated/kubectl_drain/) - Drain node in preparation for maintenance
* [kubectl edit](/docs/reference/kubectl/generated/kubectl_edit/) - Edit a resource on the server
* [kubectl events](/docs/reference/kubectl/generated/kubectl_events/) - List events
* [kubectl exec](/docs/reference/kubectl/generated/kubectl_exec/) - Execute a command in a container
* [kubectl explain](/docs/reference/kubectl/generated/kubectl_explain/) - Documentation of resources
* [kubectl expose](/docs/reference/kubectl/generated/kubectl_expose/) - Take a replication controller,
 service, deployment or pod and expose it as a new Kubernetes Service
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_describe/"&gt;kubectl describe&lt;/a&gt; - 显示某个资源或某组资源的详细信息&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_diff/"&gt;kubectl diff&lt;/a&gt; - 显示目前版本与将要应用的版本之间的差异&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_drain/"&gt;kubectl drain&lt;/a&gt; - 腾空节点，准备维护&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_edit/"&gt;kubectl edit&lt;/a&gt; - 修改服务器上的某资源&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_events/"&gt;kubectl events&lt;/a&gt; - 列举事件&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_exec/"&gt;kubectl exec&lt;/a&gt; - 在容器中执行相关命令&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_explain/"&gt;kubectl explain&lt;/a&gt; - 显示资源文档说明&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_expose/"&gt;kubectl expose&lt;/a&gt; - 给定副本控制器、服务、Deployment 或 Pod，
将其暴露为新的 Kubernetes Service&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [kubectl get](/docs/reference/kubectl/generated/kubectl_get/) - Display one or many resources
* [kubectl kustomize](/docs/reference/kubectl/generated/kubectl_kustomize/) - Build a kustomization
 target from a directory or a remote url.
* [kubectl label](/docs/reference/kubectl/generated/kubectl_label/) - Update the labels on a resource
* [kubectl logs](/docs/reference/kubectl/generated/kubectl_logs/) - Print the logs for a container in a pod
* [kubectl options](/docs/reference/kubectl/generated/kubectl_options/) - Print the list of flags inherited by all commands
* [kubectl patch](/docs/reference/kubectl/generated/kubectl_patch/) - Update field(s) of a resource
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_get/"&gt;kubectl get&lt;/a&gt; - 显示一个或者多个资源信息&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_kustomize/"&gt;kubectl kustomize&lt;/a&gt; - 从目录或远程 URL 中构建 kustomization&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_label/"&gt;kubectl label&lt;/a&gt; - 更新资源的标签&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_logs/"&gt;kubectl logs&lt;/a&gt; - 输出 Pod 中某容器的日志&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_options/"&gt;kubectl options&lt;/a&gt; - 打印所有命令都支持的共有参数列表&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_patch/"&gt;kubectl patch&lt;/a&gt; - 更新某资源中的字段&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [kubectl plugin](/docs/reference/kubectl/generated/kubectl_plugin/) - Provides utilities for interacting with plugins.
* [kubectl port-forward](/docs/reference/kubectl/generated/kubectl_port-forward/) - Forward one or more local ports to a pod
* [kubectl proxy](/docs/reference/kubectl/generated/kubectl_proxy/) - Run a proxy to the Kubernetes API server
* [kubectl replace](/docs/reference/kubectl/generated/kubectl_replace/) - Replace a resource by filename or stdin
* [kubectl rollout](/docs/reference/kubectl/generated/kubectl_rollout/) - Manage the rollout of a resource
* [kubectl run](/docs/reference/kubectl/generated/kubectl_run/) - Run a particular image on the cluster
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_plugin/"&gt;kubectl plugin&lt;/a&gt; - 运行命令行插件&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_port-forward/"&gt;kubectl port-forward&lt;/a&gt; - 将一个或者多个本地端口转发到 Pod&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_proxy/"&gt;kubectl proxy&lt;/a&gt; - 运行一个 kubernetes API 服务器代理&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_replace/"&gt;kubectl replace&lt;/a&gt; - 基于文件名或标准输入替换资源&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/"&gt;kubectl rollout&lt;/a&gt; - 管理资源的上线&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_run/"&gt;kubectl run&lt;/a&gt; - 在集群中使用指定镜像启动容器&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [kubectl scale](/docs/reference/kubectl/generated/kubectl_scale/) - Set a new size for a Deployment, ReplicaSet or Replication Controller
* [kubectl set](/docs/reference/kubectl/generated/kubectl_set/) - Set specific features on objects
* [kubectl taint](/docs/reference/kubectl/generated/kubectl_taint/) - Update the taints on one or more nodes
* [kubectl top](/docs/reference/kubectl/generated/kubectl_top/) - Display Resource (CPU/Memory/Storage) usage.
* [kubectl uncordon](/docs/reference/kubectl/generated/kubectl_uncordon/) - Mark node as schedulable
* [kubectl version](/docs/reference/kubectl/generated/kubectl_version/) - Print the client and server version information
* [kubectl wait](/docs/reference/kubectl/generated/kubectl_wait/) - Experimental: Wait for a specific condition on one or many resources.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_scale/"&gt;kubectl scale&lt;/a&gt; - 为一个 Deployment、ReplicaSet 或
ReplicationController 设置一个新的规模值&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/"&gt;kubectl set&lt;/a&gt; - 为对象设置功能特性&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_taint/"&gt;kubectl taint&lt;/a&gt; - 在一个或者多个节点上更新污点配置&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_top/"&gt;kubectl top&lt;/a&gt; - 显示资源（CPU/内存/存储）使用率&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_uncordon/"&gt;kubectl uncordon&lt;/a&gt; - 标记节点为可调度的&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_version/"&gt;kubectl version&lt;/a&gt; - 打印客户端和服务器的版本信息&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_wait/"&gt;kubectl wait&lt;/a&gt; - 实验级特性：等待一个或多个资源达到某种状态&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>kubectl apply edit-last-applied</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/</guid><description>&lt;!--
title: kubectl apply edit-last-applied
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Edit the latest last-applied-configuration annotations of resources from the default editor.

 The edit-last-applied command allows you to directly edit any API resource you can retrieve via the command-line tools. It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts file names as well as command-line arguments, although the files you point to must be previously saved versions of resources.

 The default format is YAML. To edit in JSON, specify "-o json".

 The flag --windows-line-endings can be used to force Windows line endings, otherwise the default for your operating system will be used.

 In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.
--&gt;
&lt;p&gt;使用默认编辑器编辑资源的最新的 last-applied-configuration 注解。&lt;/p&gt;</description></item><item><title>kubectl apply set-last-applied</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_set-last-applied/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_set-last-applied/</guid><description>&lt;!--
title: kubectl apply set-last-applied
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set the latest last-applied-configuration annotations by setting it to match the contents of a file. This results in the last-applied-configuration being updated as though 'kubectl apply -f &amp;lt;file&amp;gt;' was run, without updating any other parts of the object.
--&gt;
&lt;p&gt;设置 last-applied-configuration 注解使之与某文件内容相匹配。
这会导致 last-applied-configuration 被更新，就像运行了 &lt;code&gt;kubectl apply -f &amp;lt;file&amp;gt;&lt;/code&gt; 一样，
但是不会更新对象的任何其他部分。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply set-last-applied -f FILENAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Set the last-applied-configuration of a resource to match the contents of a file
kubectl apply set-last-applied -f deploy.yaml

# Execute set-last-applied against each configuration file in a directory
kubectl apply set-last-applied -f path/

# Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist
kubectl apply set-last-applied -f deploy.yaml --create-annotation=true
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 设置资源的 last-applied-configuration，使之与某文件内容相同&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply set-last-applied -f deploy.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 针对目录中的每一个配置文件执行 set-last-applied 操作&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply set-last-applied -f path/
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 设置资源的 last-applied-configuration 注解，使之与某文件内容匹配；如果该注解尚不存在，则会被创建。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply set-last-applied -f deploy.yaml --create-annotation&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#a2f"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl apply view-last-applied</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/</guid><description>&lt;!--
title: kubectl apply view-last-applied
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
View the latest last-applied-configuration annotations by type/name or file.

 The default output will be printed to stdout in YAML format. You can use the -o option to change the output format.
--&gt;
&lt;p&gt;根据所给类别/名称或文件来查看最新的 last-applied-configuration 注解。&lt;/p&gt;
&lt;p&gt;默认输出将以 YAML 格式打印到标准输出。你可以使用 -o 选项来更改输出格式。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply view-last-applied &lt;span style="color:#666"&gt;(&lt;/span&gt;TYPE &lt;span style="color:#666"&gt;[&lt;/span&gt;NAME | -l label&lt;span style="color:#666"&gt;]&lt;/span&gt; | TYPE/NAME | -f FILENAME&lt;span style="color:#666"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# View the last-applied-configuration annotations by type/name in YAML
kubectl apply view-last-applied deployment/nginx

# View the last-applied-configuration annotations by file in JSON
kubectl apply view-last-applied -f deploy.yaml -o json
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 根据所给类别/名称以 YAML 格式查看 last-applied-configuration 注解&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply view-last-applied deployment/nginx
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 根据所给文件以 JSON 格式查看 last-applied-configuration 注解&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply view-last-applied -f deploy.yaml -o json
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--all&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Select all resources in the namespace of the specified resource types
--&gt;
选择命名空间中所有指定资源类型的资源。
&lt;/p&gt;</description></item><item><title>kubectl auth can-i</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/</guid><description>&lt;!--
title: kubectl auth can-i
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Check whether an action is allowed.

 VERB is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. TYPE is a Kubernetes resource. Shortcuts and groups will be resolved. NONRESOURCEURL is a partial URL that starts with "/". NAME is the name of a particular Kubernetes resource. This command pairs nicely with impersonation. See --as global flag.
--&gt;
&lt;p&gt;检查某个操作是否被允许。&lt;/p&gt;</description></item><item><title>kubectl auth reconcile</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_reconcile/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_reconcile/</guid><description>&lt;!--
title: kubectl auth reconcile
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects.

 Missing objects are created, and the containing namespace is created for namespaced objects, if required.

 Existing roles are updated to include the permissions in the input objects, and remove extra permissions if --remove-extra-permissions is specified.

 Existing bindings are updated to include the subjects in the input objects, and remove extra subjects if --remove-extra-subjects is specified.

 This is preferred to 'apply' for RBAC resources so that semantically-aware merging of rules and subjects is done.
--&gt;
&lt;p&gt;调和 RBAC 角色、角色绑定、集群角色和集群角色绑定对象的规则。&lt;/p&gt;</description></item><item><title>kubectl auth whoami</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/</guid><description>&lt;!--
title: kubectl auth whoami
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Experimental: Check who you are and your attributes (groups, extra).

 This command is helpful to get yourself aware of the current user attributes,
 especially when dynamic authentication, e.g., token webhook, auth proxy, or OIDC provider,
 is enabled in the Kubernetes cluster.
--&gt;
&lt;p&gt;实验性功能：检查你的身份和属性（如所属的组、额外信息等）。&lt;/p&gt;
&lt;p&gt;此命令有助于让你了解当前用户属性，尤其是在 Kubernetes
集群中启用动态身份验证（例如令牌 Webhook、身份认证代理或 OIDC 提供程序）时。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl auth whoami
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Get your subject attributes
 kubectl auth whoami
 
 # Get your subject attributes in JSON format
 kubectl auth whoami -o json
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 获取你的主体属性&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl auth whoami
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 以 JSON 格式获取主体属性&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl auth whoami -o json
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl certificate approve</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_approve/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_approve/</guid><description>&lt;!--
title: kubectl certificate approve
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Approve a certificate signing request.

 kubectl certificate approve allows a cluster admin to approve a certificate signing request (CSR). This action tells a certificate signing controller to issue a certificate to the requester with the attributes requested in the CSR.

 SECURITY NOTICE: Depending on the requested attributes, the issued certificate can potentially grant a requester access to cluster resources or to authenticate as a requested identity. Before approving a CSR, ensure you understand what the signed certificate can do.
--&gt;
&lt;p&gt;批准证书签名请求。&lt;/p&gt;</description></item><item><title>kubectl certificate deny</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_deny/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_deny/</guid><description>&lt;!--
title: kubectl certificate deny
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Deny a certificate signing request.

 kubectl certificate deny allows a cluster admin to deny a certificate signing request (CSR). This action tells a certificate signing controller not to issue a certificate to the requester.
--&gt;
&lt;p&gt;拒绝证书签名请求。&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubectl certificate deny&lt;/code&gt; 允许集群管理员拒绝证书签名请求 (CSR)。
此操作通知证书签名控制器不向请求者颁发证书。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl certificate deny &lt;span style="color:#666"&gt;(&lt;/span&gt;-f FILENAME | NAME&lt;span style="color:#666"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Deny CSR 'csr-sqgzp'
 kubectl certificate deny csr-sqgzp
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 拒绝 CSR &amp;#39;csr-sqgzp&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl certificate deny csr-sqgzp
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl cluster-info dump</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/</guid><description>&lt;!--
title: kubectl cluster-info dump
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. If you specify a directory, Kubernetes will build a set of files in that directory. By default, only dumps things in the current namespace and 'kube-system' namespace, but you can switch to a different namespace with the --namespaces flag, or specify --all-namespaces to dump all namespaces.

 The command also dumps the logs of all of the pods in the cluster; these logs are dumped into different directories based on namespace and pod name.
--&gt;
&lt;p&gt;转储集群信息，适合于调试和诊断集群问题。默认情况下，将所有内容转储到 &lt;code&gt;stdout&lt;/code&gt;。你可以使用
&lt;code&gt;--output-directory&lt;/code&gt; 指定目录。如果指定目录，Kubernetes 将在该目录中构建一组文件。
默认情况下，仅转储当前命名空间和 &amp;quot;kube-system&amp;quot; 命名空间中的内容，但你也可以使用 &lt;code&gt;--namespaces&lt;/code&gt;
标志切换到其他命名空间，或指定 &lt;code&gt;--all-namespaces&lt;/code&gt; 以转储所有命名空间。&lt;/p&gt;</description></item><item><title>kubectl config current-context</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_current-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_current-context/</guid><description>&lt;!--
title: kubectl config current-context
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display the current-context.
--&gt;
&lt;p&gt;显示当前上下文。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config current-context &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Display the current-context
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示当前上下文&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config current-context
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for current-context
--&gt;
关于 current-context 的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl config delete-cluster</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-cluster/</guid><description>&lt;!--
title: kubectl config delete-cluster
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Delete the specified cluster from the kubeconfig.
--&gt;
&lt;p&gt;从 kubeconfig 中删除指定的集群。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config delete-cluster NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Delete the minikube cluster
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 删除 minikube 集群&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config delete-cluster minikube
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for delete-cluster
--&gt;
关于 delete-cluster 的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl config delete-context</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-context/</guid><description>&lt;!--
title: kubectl config delete-context
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Delete the specified context from the kubeconfig.
--&gt;
&lt;p&gt;从 kubeconfig 中删除指定的上下文。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config delete-context NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Delete the context for the minikube cluster
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 删除 minikube 集群的上下文&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config delete-context minikube
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for delete-context
--&gt;
关于 delete-context 的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl config delete-user</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-user/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-user/</guid><description>&lt;!--
title: kubectl config delete-user
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Delete the specified user from the kubeconfig.
--&gt;
&lt;p&gt;从 kubeconfig 中删除指定用户。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config delete-user NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Delete the minikube user
kubectl config delete-user minikube
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 删除 minikube 用户&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config delete-user minikube
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for delete-user
--&gt;
关于 delete-user 的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl config get-clusters</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-clusters/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-clusters/</guid><description>&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display clusters defined in the kubeconfig.
--&gt;
&lt;p&gt;显示 kubeconfig 中定义的集群。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-clusters &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# List the clusters that kubectl knows about
kubectl config get-clusters
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 列出 kubectl 所知悉的集群&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-clusters
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for get-clusters
--&gt;
关于 get-clusters 的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl config get-contexts</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/</guid><description>&lt;!--
title: kubectl config get-contexts
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display one or many contexts from the kubeconfig file.
--&gt;
&lt;p&gt;显示 kubeconfig 文件中的一个或多个上下文。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-contexts &lt;span style="color:#666"&gt;[(&lt;/span&gt;-o|--output&lt;span style="color:#666"&gt;=)&lt;/span&gt;name&lt;span style="color:#666"&gt;)]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# List all the contexts in your kubeconfig file
# Describe one context in your kubeconfig file
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 列出 kubeconfig 文件中的所有上下文&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-contexts
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 描述 kubeconfig 文件中指定上下文的详细信息&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-contexts my-context
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for get-contexts
--&gt;
关于 get-contexts 的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl config get-users</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-users/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-users/</guid><description>&lt;!--
title: kubectl config get-users
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display users defined in the kubeconfig.
--&gt;
&lt;p&gt;显示 kubeconfig 中定义的用户。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-users &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# List the users that kubectl knows about
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 列出 kubectl 知悉的用户&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config get-users
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for get-users
--&gt;
关于 get-users 的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl config rename-context</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/</guid><description>&lt;!--
title: kubectl config rename-context
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Renames a context from the kubeconfig file.

 CONTEXT_NAME is the context name that you want to change.

 NEW_NAME is the new name you want to set.

 Note: If the context being renamed is the 'current-context', this field will also be updated.
--&gt;
&lt;p&gt;重命名 kubeconfig 文件中的上下文。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CONTEXT_NAME 是要更改的上下文名称。&lt;/li&gt;
&lt;li&gt;NEW_NAME 是要设置的新名称。&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;注意：如果被重命名的上下文是 &amp;quot;current-context&amp;quot;（当前上下文），则该字段也将被更新。&lt;/p&gt;
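上文注意事项中提到的 current-context 字段保存在 kubeconfig 文件顶层。下面的示意脚本（"old-name" 为假设的上下文名称，只读写临时副本）展示该字段的位置；安装了 kubectl 时，重命名命令会同步更新它：

```shell
#!/bin/sh
# 示意：current-context 字段位于 kubeconfig 文件顶层。
# "old-name" 为假设示例；脚本只读写临时副本。
CFG="$(mktemp)"
printf '%s\n' \
  'apiVersion: v1' \
  'kind: Config' \
  'contexts:' \
  '- context:' \
  '    cluster: demo' \
  '    user: demo' \
  '  name: old-name' \
  'current-context: old-name' > "$CFG"
# 安装了 kubectl 时：
#   KUBECONFIG="$CFG" kubectl config rename-context old-name new-name
# 之后文件中的 current-context 将同步变为 new-name。此处仅读取该字段：
CURRENT="$(sed -n 's/^current-context: //p' "$CFG")"
echo "current-context is $CURRENT"
rm -f "$CFG"
```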
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config rename-context CONTEXT_NAME NEW_NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Rename the context 'old-name' to 'new-name' in your kubeconfig file
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将 kubeconfig 文件中上下文 &amp;#34;old-name&amp;#34; 重命名为 &amp;#34;new-name&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config rename-context old-name new-name
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for rename-context
--&gt;
关于 rename-context 的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl config set</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set/</guid><description>&lt;!--
title: kubectl config set
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set an individual value in a kubeconfig file.

 PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key. Map keys may not contain dots.

 PROPERTY_VALUE is the new value you want to set. Binary fields such as 'certificate-authority-data' expect a base64 encoded string unless the --set-raw-bytes flag is used.

 Specifying an attribute name that already exists will merge new fields on top of existing values.
--&gt;
&lt;p&gt;设置 kubeconfig 文件中的单个值。&lt;/p&gt;</description></item><item><title>kubectl config set-cluster</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/</guid><description>&lt;!--
title: kubectl config set-cluster
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set a cluster entry in kubeconfig.

 Specifying a name that already exists will merge new fields on top of existing values for those fields.
--&gt;
&lt;p&gt;设置 kubeconfig 中的集群条目。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;如果指定的名称已存在，新的字段将合并到这些字段的现有值之上。&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-cluster NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--server&lt;span style="color:#666"&gt;=&lt;/span&gt;server&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--certificate-authority&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/certificate/authority&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--insecure-skip-tls-verify&lt;span style="color:#666"&gt;=&lt;/span&gt;true&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--tls-server-name&lt;span style="color:#666"&gt;=&lt;/span&gt;example.com&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Set only the server field on the e2e cluster entry without touching other values
kubectl config set-cluster e2e --server=https://1.2.3.4
 
# Embed certificate authority data for the e2e cluster entry
kubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt
 
# Disable cert checking for the e2e cluster entry
kubectl config set-cluster e2e --insecure-skip-tls-verify=true
 
# Set the custom TLS server name to use for validation for the e2e cluster entry
kubectl config set-cluster e2e --tls-server-name=my-cluster-name
 
# Set the proxy URL for the e2e cluster entry
kubectl config set-cluster e2e --proxy-url=https://1.2.3.4
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 仅设置 e2e 集群条目上的 server 字段，不触及其他值&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-cluster e2e --server&lt;span style="color:#666"&gt;=&lt;/span&gt;https://1.2.3.4
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 在 e2e 集群条目中嵌入证书颁发机构的数据&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-cluster e2e --embed-certs --certificate-authority&lt;span style="color:#666"&gt;=&lt;/span&gt;~/.kube/e2e/kubernetes.ca.crt
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 禁用 e2e 集群条目中的证书检查&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-cluster e2e --insecure-skip-tls-verify&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#a2f"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 设置用于验证 e2e 集群条目的自定义 TLS 服务器名称&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-cluster e2e --tls-server-name&lt;span style="color:#666"&gt;=&lt;/span&gt;my-cluster-name
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 设置 e2e 集群条目的代理 URL&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-cluster e2e --proxy-url&lt;span style="color:#666"&gt;=&lt;/span&gt;https://1.2.3.4
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-authority string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to certificate-authority file for the cluster entry in kubeconfig
--&gt;
kubeconfig 中集群条目的证书颁发机构文件的路径。
&lt;/p&gt;</description></item><item><title>kubectl config set-context</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/</guid><description>&lt;!--
title: kubectl config set-context
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set a context entry in kubeconfig.

 Specifying a name that already exists will merge new fields on top of existing values for those fields.
--&gt;
&lt;p&gt;在 kubeconfig 中设置上下文条目。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;如果指定的名称已存在，新的字段将合并到这些字段的现有值之上。&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-context &lt;span style="color:#666"&gt;[&lt;/span&gt;NAME | --current&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--cluster&lt;span style="color:#666"&gt;=&lt;/span&gt;cluster_nickname&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--user&lt;span style="color:#666"&gt;=&lt;/span&gt;user_nickname&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--namespace&lt;span style="color:#666"&gt;=&lt;/span&gt;namespace&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Set the user field on the gce context entry without touching other values
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 在 gce 上下文条目上设置用户字段，而不影响其他值&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-context gce --user&lt;span style="color:#666"&gt;=&lt;/span&gt;cluster-admin
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cluster string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
cluster for the context entry in kubeconfig
--&gt;
kubeconfig 中上下文条目的集群。
&lt;/p&gt;</description></item><item><title>kubectl config set-credentials</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/</guid><description>&lt;!--
title: kubectl config set-credentials
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set a user entry in kubeconfig.

 Specifying a name that already exists will merge new fields on top of existing values.

 Client-certificate flags:
 --client-certificate=certfile --client-key=keyfile
 
 Bearer token flags:
 --token=bearer_token
 
 Basic auth flags:
 --username=basic_user --password=basic_password
 
 Bearer token and basic auth are mutually exclusive.
--&gt;
&lt;p&gt;在 kubeconfig 中设置用户条目。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;如果指定的名称已存在，新的字段将合并到现有值之上。
&lt;ul&gt;
&lt;li&gt;客户端证书标志：--client-certificate=certfile --client-key=keyfile&lt;/li&gt;
&lt;li&gt;持有者令牌标志：--token=bearer_token&lt;/li&gt;
&lt;li&gt;基本身份验证标志：--username=basic_user --password=basic_password&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;持有者令牌和基本身份验证是互斥的（不可同时使用）。&lt;/li&gt;
&lt;/ul&gt;
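上述互斥关系可以从 kubeconfig 中用户条目的结构看出：持有者令牌与基本身份验证写入同一个 user 映射的不同字段，二者只能取其一。下面是一个示意性的配置片段（条目名称为假设示例）：

```yaml
# 示意片段：kubeconfig 中两种用户条目（名称为假设示例）。
# token 与 username/password 互斥，不可在同一条目中同时生效。
users:
- name: token-user
  user:
    token: bearer_token        # 持有者令牌方式
- name: basic-user
  user:
    username: basic_user       # 基本身份验证方式
    password: basic_password
```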
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--client-certificate&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/certfile&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--client-key&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/keyfile&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--token&lt;span style="color:#666"&gt;=&lt;/span&gt;bearer_token&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--username&lt;span style="color:#666"&gt;=&lt;/span&gt;basic_user&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--password&lt;span style="color:#666"&gt;=&lt;/span&gt;basic_password&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--auth-provider&lt;span style="color:#666"&gt;=&lt;/span&gt;provider_name&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--auth-provider-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;key&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;value&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--exec-command&lt;span style="color:#666"&gt;=&lt;/span&gt;exec_command&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--exec-api-version&lt;span style="color:#666"&gt;=&lt;/span&gt;exec_api_version&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--exec-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;arg&lt;span style="color:#666"&gt;]&lt;/span&gt; 
&lt;span style="color:#666"&gt;[&lt;/span&gt;--exec-env&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;key&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;value&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Set only the "client-key" field on the "cluster-admin"
 # entry, without touching other values
 kubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key
 
 # Set basic auth for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
 
 # Embed client certificate data in the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true
 
 # Enable the Google Compute Platform auth provider for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --auth-provider=gcp
 
 # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments
 kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar
 
 # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-
 
 # Enable new exec auth plugin for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1
 
 # Enable new exec auth plugin for the "cluster-admin" entry with interactive mode
 kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never
 
 # Define new exec auth plugin arguments for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2
 
 # Create or update exec auth plugin environment variables for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2
 
 # Remove exec auth plugin environment variables for the "cluster-admin" entry
 kubectl config set-credentials cluster-admin --exec-env=var-to-remove-
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 仅设置 &amp;#34;cluster-admin&amp;#34; 条目上的 &amp;#34;client-key&amp;#34; 字段，不触及其他值&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --client-key&lt;span style="color:#666"&gt;=&lt;/span&gt;~/.kube/admin.key
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 &amp;#34;cluster-admin&amp;#34; 条目设置基本身份验证&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --username&lt;span style="color:#666"&gt;=&lt;/span&gt;admin --password&lt;span style="color:#666"&gt;=&lt;/span&gt;uXFGweU9l35qcif
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 在 &amp;#34;cluster-admin&amp;#34; 条目中嵌入客户端证书数据&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --client-certificate&lt;span style="color:#666"&gt;=&lt;/span&gt;~/.kube/admin.crt --embed-certs&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#a2f"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 &amp;#34;cluster-admin&amp;#34; 条目启用 Google Compute Platform 身份认证提供程序&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --auth-provider&lt;span style="color:#666"&gt;=&lt;/span&gt;gcp
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用附加参数为 &amp;#34;cluster-admin&amp;#34; 条目启用 OpenID Connect 身份认证提供程序&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --auth-provider&lt;span style="color:#666"&gt;=&lt;/span&gt;oidc --auth-provider-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;client-id&lt;span style="color:#666"&gt;=&lt;/span&gt;foo --auth-provider-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;client-secret&lt;span style="color:#666"&gt;=&lt;/span&gt;bar
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 删除 &amp;#34;cluster-admin&amp;#34; 条目的 OpenID Connect 身份验证提供程序的 &amp;#34;client-secret&amp;#34; 配置值&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --auth-provider&lt;span style="color:#666"&gt;=&lt;/span&gt;oidc --auth-provider-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;client-secret-
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 &amp;#34;cluster-admin&amp;#34; 条目启用新的 exec 认证插件&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --exec-command&lt;span style="color:#666"&gt;=&lt;/span&gt;/path/to/the/executable --exec-api-version&lt;span style="color:#666"&gt;=&lt;/span&gt;client.authentication.k8s.io/v1beta1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 &amp;#34;cluster-admin&amp;#34; 条目启用新的、带交互模式的 exec 认证插件&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --exec-command&lt;span style="color:#666"&gt;=&lt;/span&gt;/path/to/the/executable --exec-api-version&lt;span style="color:#666"&gt;=&lt;/span&gt;client.authentication.k8s.io/v1beta1 --exec-interactive-mode&lt;span style="color:#666"&gt;=&lt;/span&gt;Never
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 &amp;#34;cluster-admin&amp;#34; 条目定义新的 exec 认证插件参数&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --exec-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;arg1 --exec-arg&lt;span style="color:#666"&gt;=&lt;/span&gt;arg2
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 &amp;#34;cluster-admin&amp;#34; 条目创建或更新 exec 认证插件环境变量&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --exec-env&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;key1&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;val1 --exec-env&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;key2&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;val2
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 删除 &amp;#34;cluster-admin&amp;#34; 条目的 exec 认证插件环境变量&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config set-credentials cluster-admin --exec-env&lt;span style="color:#666"&gt;=&lt;/span&gt;var-to-remove-
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--auth-provider string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Auth provider for the user entry in kubeconfig
--&gt;
kubeconfig 中用户条目的身份验证提供程序。
&lt;/p&gt;</description></item><item><title>kubectl config unset</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/</guid><description>&lt;!--
title: kubectl config unset
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Unset an individual value in a kubeconfig file.

 PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key. Map keys may not contain dots.
--&gt;
&lt;p&gt;去除 kubeconfig 文件中的某个值的设置。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PROPERTY_NAME 是一个以点号分隔的名称，其中每个词元代表一个属性名称或一个映射键。映射键不得包含点号。&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config &lt;span style="color:#a2f"&gt;unset&lt;/span&gt; PROPERTY_NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Unset the current-context
# Unset namespace in foo context
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 去除 current-context 设置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config &lt;span style="color:#a2f"&gt;unset&lt;/span&gt; current-context
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 去掉 foo 上下文中的 namespace 设置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config &lt;span style="color:#a2f"&gt;unset&lt;/span&gt; contexts.foo.namespace
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for unset
--&gt;
关于 unset 的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl config use-context</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_use-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_use-context/</guid><description>&lt;!--
title: kubectl config use-context
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set the current-context in a kubeconfig file.
--&gt;
&lt;p&gt;在 kubeconfig 文件中设置当前上下文。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config use-context CONTEXT_NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Use the context for the minikube cluster
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用 minikube 集群的上下文&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config use-context minikube
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for use-context
--&gt;
关于 use-context 的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl config view</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/</guid><description>&lt;!--
title: kubectl config view
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display merged kubeconfig settings or a specified kubeconfig file.

 You can use --output jsonpath={...} to extract specific values using a jsonpath expression.
--&gt;
&lt;p&gt;显示合并的 kubeconfig 配置或指定的 kubeconfig 文件。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;你可以使用 &lt;code&gt;--output jsonpath={...}&lt;/code&gt; 通过 jsonpath 表达式提取特定值。&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config view &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Show merged kubeconfig settings
# Show merged kubeconfig settings, raw certificate data, and exposed secrets
# Get the password for the e2e user
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示合并的 kubeconfig 设置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config view
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示合并的 kubeconfig 设置、原始证书数据和公开的密钥&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config view --raw
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 获取 e2e 用户的密码&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl config view -o &lt;span style="color:#b8860b"&gt;jsonpath&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#39;{.users[?(@.name == &amp;#34;e2e&amp;#34;)].user.password}&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create clusterrole</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/</guid><description>&lt;!--
title: kubectl create clusterrole
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a cluster role.
--&gt;
&lt;p&gt;创建一个集群角色。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create clusterrole NAME --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;verb --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;resource.group &lt;span style="color:#666"&gt;[&lt;/span&gt;--resource-name&lt;span style="color:#666"&gt;=&lt;/span&gt;resourcename&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
 kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
 
 # Create a cluster role named "pod-reader" with ResourceName specified
 kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
 
 # Create a cluster role named "foo" with API Group specified
 kubectl create clusterrole foo --verb=get,list,watch --resource=rs.apps
 
 # Create a cluster role named "foo" with SubResource specified
 kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status
 
 # Create a cluster role named "foo" with NonResourceURL specified
 kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/*
 
 # Create a cluster role named "monitoring" with AggregationRule specified
 kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true"
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;pod-reader&amp;#34; 的集群角色，允许用户对 Pod 执行 &amp;#34;get&amp;#34;、&amp;#34;watch&amp;#34; 和 &amp;#34;list&amp;#34; 操作&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; kubectl create clusterrole pod-reader --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get,list,watch --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;pods
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;pod-reader&amp;#34; 的集群角色，并指定 ResourceName&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; kubectl create clusterrole pod-reader --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;pods --resource-name&lt;span style="color:#666"&gt;=&lt;/span&gt;readablepod --resource-name&lt;span style="color:#666"&gt;=&lt;/span&gt;anotherpod
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;foo&amp;#34; 的集群角色，并指定 API 组&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; kubectl create clusterrole foo --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get,list,watch --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;rs.apps
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;foo&amp;#34; 的集群角色，并指定 SubResource&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; kubectl create clusterrole foo --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get,list,watch --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;pods,pods/status
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;foo&amp;#34; 的集群角色，并指定 NonResourceURL&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; kubectl create clusterrole &lt;span style="color:#b44"&gt;&amp;#34;foo&amp;#34;&lt;/span&gt; --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get --non-resource-url&lt;span style="color:#666"&gt;=&lt;/span&gt;/logs/*
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;monitoring&amp;#34; 的集群角色，并指定 AggregationRule&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; kubectl create clusterrole monitoring --aggregation-rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;rbac.example.com/aggregate-to-monitoring=true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--aggregation-rule &amp;lt;&lt;!--comma-separated 'key=value' pairs--&gt;英文逗号分隔的 'key=value' 对&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
An aggregation label selector for combining ClusterRoles.
--&gt;
用于组合 ClusterRole 的聚合标签选择算符。
&lt;/p&gt;</description></item><item><title>kubectl create clusterrolebinding</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/</guid><description>&lt;!--
title: kubectl create clusterrolebinding
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a cluster role binding for a particular cluster role.
--&gt;
&lt;p&gt;为特定的集群角色创建一个集群角色绑定。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create clusterrolebinding NAME --clusterrole&lt;span style="color:#666"&gt;=&lt;/span&gt;NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--user&lt;span style="color:#666"&gt;=&lt;/span&gt;username&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--group&lt;span style="color:#666"&gt;=&lt;/span&gt;groupname&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--serviceaccount&lt;span style="color:#666"&gt;=&lt;/span&gt;namespace:serviceaccountname&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role
kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用 cluster-admin 集群角色为 user1、user2 和 group1 创建一个集群角色绑定&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create clusterrolebinding cluster-admin --clusterrole&lt;span style="color:#666"&gt;=&lt;/span&gt;cluster-admin --user&lt;span style="color:#666"&gt;=&lt;/span&gt;user1 --user&lt;span style="color:#666"&gt;=&lt;/span&gt;user2 --group&lt;span style="color:#666"&gt;=&lt;/span&gt;group1
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create configmap</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/</guid><description>&lt;!--
title: kubectl create configmap
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a config map based on a file, directory, or specified literal value.

 A single config map may package one or more key/value pairs.

 When creating a config map based on a file, the key will default to the basename of the file, and the value will default to the file content. If the basename is an invalid key, you may specify an alternate key.

 When creating a config map based on a directory, each file whose basename is a valid key in the directory will be packaged into the config map. Any directory entries except regular files are ignored (e.g. subdirectories, symlinks, devices, pipes, etc).
--&gt;
&lt;p&gt;基于文件、目录或指定的字面值创建 ConfigMap。&lt;/p&gt;
&lt;p&gt;一个 ConfigMap 可以打包一个或多个键/值对。&lt;/p&gt;
&lt;p&gt;当基于文件创建 ConfigMap 时，键名默认为文件的基本名（basename），取值默认为文件内容。
如果基本名是无效的键名，你可以指定其他键名。&lt;/p&gt;
&lt;p&gt;当基于目录创建 ConfigMap 时，目录中基本名可作为有效键名的每个文件都会被打包到 ConfigMap 中。
除常规文件之外的所有目录条目（如子目录、符号链接、设备、管道等）都会被忽略。&lt;/p&gt;</description></item><item><title>kubectl create cronjob</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/</guid><description>&lt;!--
title: kubectl create cronjob
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a cron job with the specified name.

```
kubectl create cronjob NAME --image=image --schedule='0/5 * * * ?' -- [COMMAND] [args...] [flags]
```
--&gt;
&lt;p&gt;创建具有指定名称的 CronJob。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create cronjob NAME --image&lt;span style="color:#666"&gt;=&lt;/span&gt;image --schedule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#39;0/5 * * * ?&amp;#39;&lt;/span&gt; -- &lt;span style="color:#666"&gt;[&lt;/span&gt;COMMAND&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;args...&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a cron job
kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *"

# Create a cron job with a command
kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建 CronJob&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create cronjob my-job --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox --schedule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;*/1 * * * *&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建带有命令的 CronJob&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create cronjob my-job --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox --schedule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;*/1 * * * *&amp;#34;&lt;/span&gt; -- date
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create deployment</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/</guid><description>&lt;!--
title: kubectl create deployment
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a deployment with the specified name.
--&gt;
&lt;p&gt;创建指定名称的 Deployment。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create deployment NAME --image&lt;span style="color:#666"&gt;=&lt;/span&gt;image -- &lt;span style="color:#666"&gt;[&lt;/span&gt;COMMAND&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;args...&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a deployment named my-dep that runs the busybox image
kubectl create deployment my-dep --image=busybox

# Create a deployment with a command
kubectl create deployment my-dep --image=busybox -- date

# Create a deployment named my-dep that runs the nginx image with 3 replicas
kubectl create deployment my-dep --image=nginx --replicas=3

# Create a deployment named my-dep that runs the busybox image and expose port 5701
kubectl create deployment my-dep --image=busybox --port=5701

# Create a deployment named my-dep that runs multiple containers
kubectl create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-dep 的 Deployment，它将运行 busybox 镜像&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create deployment my-dep --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个带有命令的 Deployment&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create deployment my-dep --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox -- date
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-dep 的 Deployment，它将运行 nginx 镜像并有 3 个副本&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create deployment my-dep --image&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx --replicas&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-dep 的 Deployment，它将运行 busybox 镜像并公开端口 5701&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create deployment my-dep --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox --port&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;5701&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-dep 的 Deployment，它将运行多个容器&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create deployment my-dep --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox:latest --image&lt;span style="color:#666"&gt;=&lt;/span&gt;ubuntu:latest --image&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create ingress</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/</guid><description>&lt;!--
title: kubectl create ingress
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create an ingress with the specified name.
--&gt;
&lt;p&gt;创建指定名称的 Ingress。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress NAME --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;host/path&lt;span style="color:#666"&gt;=&lt;/span&gt;service:port&lt;span style="color:#666"&gt;[&lt;/span&gt;,tls&lt;span style="color:#666"&gt;[=&lt;/span&gt;secret&lt;span style="color:#666"&gt;]]&lt;/span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Create a single ingress called 'simple' that directs requests to foo.com/bar to svc
# svc1:8080 with a TLS secret "my-cert"
kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"

# Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress"
kubectl create ingress catch-all --class=otheringress --rule="/path=svc:port"

# Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2
kubectl create ingress annotated --class=default --rule="foo.com/bar=svc:port" \
 --annotation ingress.annotation1=foo \
 --annotation ingress.annotation2=bla

# Create an ingress with the same host and multiple paths
kubectl create ingress multipath --class=default \
 --rule="foo.com/=svc:port" \
 --rule="foo.com/admin/=svcadmin:portadmin"
 
# Create an ingress with multiple hosts and the pathType as Prefix
kubectl create ingress ingress1 --class=default \
 --rule="foo.com/path*=svc:8080" \
 --rule="bar.com/admin*=svc2:http"

# Create an ingress with TLS enabled using the default ingress certificate and different path types
kubectl create ingress ingtls --class=default \
 --rule="foo.com/=svc:https,tls" \
 --rule="foo.com/path/subpath*=othersvc:8080"

# Create an ingress with TLS enabled using a specific secret and pathType as Prefix
kubectl create ingress ingsecret --class=default \
 --rule="foo.com/*=svc:8080,tls=secret1"

# Create an ingress with a default backend
kubectl create ingress ingdefault --class=default \
 --default-backend=defaultsvc:http \
 --rule="foo.com/*=svc:8080,tls=secret1"
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#39;simple&amp;#39; 的 Ingress，使用 TLS 类别 Secret &amp;#34;my-cert&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将针对 foo.com/bar 的请求重定向到 svc1:8080&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress simple --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/bar=svc1:8080,tls=my-cert&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个 Ingress，获取指向服务 svc:port 的所有 &amp;#34;/path&amp;#34; 请求，并将 Ingress Class 设置为 &amp;#34;otheringress&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress catch-all --class&lt;span style="color:#666"&gt;=&lt;/span&gt;otheringress --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;/path=svc:port&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建含两个注解 ingress.annotation1 和 ingress.annotation2 的 Ingress&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress annotated --class&lt;span style="color:#666"&gt;=&lt;/span&gt;default --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/bar=svc:port&amp;#34;&lt;/span&gt; &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --annotation ingress.annotation1&lt;span style="color:#666"&gt;=&lt;/span&gt;foo &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --annotation ingress.annotation2&lt;span style="color:#666"&gt;=&lt;/span&gt;bla
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建具有相同主机和多个路径的 Ingress&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress multipath --class&lt;span style="color:#666"&gt;=&lt;/span&gt;default &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/=svc:port&amp;#34;&lt;/span&gt; &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/admin/=svcadmin:portadmin&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建具有多个主机且 pathType 为 Prefix 的 Ingress&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress ingress1 --class&lt;span style="color:#666"&gt;=&lt;/span&gt;default &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/path*=svc:8080&amp;#34;&lt;/span&gt; &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;bar.com/admin*=svc2:http&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建使用默认 Ingress 证书来启用 TLS 且具备不同路径类型的 Ingress&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress ingtls --class&lt;span style="color:#666"&gt;=&lt;/span&gt;default &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/=svc:https,tls&amp;#34;&lt;/span&gt; &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/path/subpath*=othersvc:8080&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建使用特定密钥来启用 TLS 且 pathType 为 Prefix 的 Ingress&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress ingsecret --class&lt;span style="color:#666"&gt;=&lt;/span&gt;default &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/*=svc:8080,tls=secret1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建具有默认后端的 Ingress&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create ingress ingdefault --class&lt;span style="color:#666"&gt;=&lt;/span&gt;default &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --default-backend&lt;span style="color:#666"&gt;=&lt;/span&gt;defaultsvc:http &lt;span style="color:#b62;font-weight:bold"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; --rule&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;foo.com/*=svc:8080,tls=secret1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create job</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_job/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_job/</guid><description>&lt;!--
title: kubectl create job
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a job with the specified name.
--&gt;
&lt;p&gt;创建指定名称的 Job。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create job NAME --image&lt;span style="color:#666"&gt;=&lt;/span&gt;image &lt;span style="color:#666"&gt;[&lt;/span&gt;--from&lt;span style="color:#666"&gt;=&lt;/span&gt;cronjob/name&lt;span style="color:#666"&gt;]&lt;/span&gt; -- &lt;span style="color:#666"&gt;[&lt;/span&gt;COMMAND&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;args...&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Create a job
 kubectl create job my-job --image=busybox
 
 # Create a job with a command
 kubectl create job my-job --image=busybox -- date
 
 # Create a job from a cron job named "a-cronjob"
 kubectl create job test-job --from=cronjob/a-cronjob
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个 Job&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create job my-job --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建带一条命令的 Job&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create job my-job --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox -- date
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 从名为 &amp;#34;a-cronjob&amp;#34; 的定时任务创建一个 Job&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create job test-job --from&lt;span style="color:#666"&gt;=&lt;/span&gt;cronjob/a-cronjob
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create namespace</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/</guid><description>&lt;!--
title: kubectl create namespace
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a namespace with the specified name.
--&gt;
&lt;p&gt;用指定的名称创建命名空间。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create namespace NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Create a new namespace named my-namespace
 kubectl create namespace my-namespace
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 my-namespace 的命名空间&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create namespace my-namespace
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create poddisruptionbudget</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/</guid><description>&lt;!--
title: kubectl create poddisruptionbudget
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a pod disruption budget with the specified name, selector, and desired minimum available pods.

```
kubectl create poddisruptionbudget NAME --selector=SELECTOR --min-available=N [--dry-run=server|client|none]
```
--&gt;
&lt;p&gt;创建具有指定名称、选择算符和预期最少可用 Pod 个数的 Pod 干扰预算。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create poddisruptionbudget NAME --selector&lt;span style="color:#666"&gt;=&lt;/span&gt;SELECTOR --min-available&lt;span style="color:#666"&gt;=&lt;/span&gt;N &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label
 # and require at least one of them being available at any point in time
 kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1
 
 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label
 # and require at least half of the pods selected to be available at any point in time
 kubectl create pdb my-pdb --selector=app=nginx --min-available=50%
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-pdb 的 Pod 干扰预算，它将选择所有带有 app=rails 标签的 Pod&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 并要求至少有一个 Pod 在任何时候都是可用的&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create poddisruptionbudget my-pdb --selector&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;app&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;rails --min-available&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-pdb 的 Pod 干扰预算，它将选择所有带有 app=nginx 标签的 Pod&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 并要求在任何时候所选 Pod 中至少有一半是可用的&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create pdb my-pdb --selector&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;app&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx --min-available&lt;span style="color:#666"&gt;=&lt;/span&gt;50%
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create priorityclass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/</guid><description>&lt;!--
title: kubectl create priorityclass
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a priority class with the specified name, value, globalDefault and description.
--&gt;
&lt;p&gt;创建带有指定名称、取值、globalDefault 设置及描述的优先级类对象。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create priorityclass NAME --value&lt;span style="color:#666"&gt;=&lt;/span&gt;VALUE --global-default&lt;span style="color:#666"&gt;=&lt;/span&gt;BOOL &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a priority class named high-priority
kubectl create priorityclass high-priority --value=1000 --description="high priority"

# Create a priority class named default-priority that is considered as the global default priority
kubectl create priorityclass default-priority --value=1000 --global-default=true --description="default priority"

# Create a priority class named high-priority that cannot preempt pods with lower priority
kubectl create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never"
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 high-priority 的优先级类&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create priorityclass high-priority --value&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;1000&lt;/span&gt; --description&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;high priority&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 default-priority 的优先级类，并将其视为全局默认优先级&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create priorityclass default-priority --value&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;1000&lt;/span&gt; --global-default&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#a2f"&gt;true&lt;/span&gt; --description&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;default priority&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 high-priority 的优先级类，它不能抢占低优先级的 Pod&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create priorityclass high-priority --value&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;1000&lt;/span&gt; --description&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;high priority&amp;#34;&lt;/span&gt; --preemption-policy&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;Never&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create quota</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/</guid><description>&lt;!--
title: kubectl create quota
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a resource quota with the specified name, hard limits, and optional scopes.
--&gt;
&lt;p&gt;创建具有指定名称、硬性限制和可选范围的资源配额。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create quota NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--hard&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;key1&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;value1,key2&lt;span style="color:#666"&gt;=&lt;/span&gt;value2&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--scopes&lt;span style="color:#666"&gt;=&lt;/span&gt;Scope1,Scope2&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a new resource quota named my-quota
kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10

# Create a new resource quota named best-effort
kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 my-quota 的资源配额&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create quota my-quota --hard&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;cpu&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;1,memory&lt;span style="color:#666"&gt;=&lt;/span&gt;1G,pods&lt;span style="color:#666"&gt;=&lt;/span&gt;2,services&lt;span style="color:#666"&gt;=&lt;/span&gt;3,replicationcontrollers&lt;span style="color:#666"&gt;=&lt;/span&gt;2,resourcequotas&lt;span style="color:#666"&gt;=&lt;/span&gt;1,secrets&lt;span style="color:#666"&gt;=&lt;/span&gt;5,persistentvolumeclaims&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;10&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 best-effort 的资源配额&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create quota best-effort --hard&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;pods&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;100&lt;/span&gt; --scopes&lt;span style="color:#666"&gt;=&lt;/span&gt;BestEffort
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create role</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/</guid><description>&lt;!--
title: kubectl create role
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a role with single rule.

```
kubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run=server|client|none]
```
--&gt;
&lt;p&gt;创建单一规则的角色。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create role NAME --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;verb --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;resource.group/subresource &lt;span style="color:#666"&gt;[&lt;/span&gt;--resource-name&lt;span style="color:#666"&gt;=&lt;/span&gt;resourcename&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods

# Create a role named "pod-reader" with ResourceName specified
kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod

# Create a role named "foo" with API Group specified
kubectl create role foo --verb=get,list,watch --resource=rs.apps

# Create a role named "foo" with SubResource specified
kubectl create role foo --verb=get,list,watch --resource=pods,pods/status
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;pod-reader&amp;#34; 的角色，允许用户对 Pod 执行 &amp;#34;get&amp;#34;、&amp;#34;watch&amp;#34; 和 &amp;#34;list&amp;#34; 操作&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create role pod-reader --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;list --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;watch --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;pods
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;pod-reader&amp;#34; 的角色，并指定资源名称&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create role pod-reader --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;pods --resource-name&lt;span style="color:#666"&gt;=&lt;/span&gt;readablepod --resource-name&lt;span style="color:#666"&gt;=&lt;/span&gt;anotherpod
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;foo&amp;#34; 的角色，并指定 API 组&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create role foo --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get,list,watch --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;rs.apps
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 &amp;#34;foo&amp;#34; 的角色，并指定子资源&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create role foo --verb&lt;span style="color:#666"&gt;=&lt;/span&gt;get,list,watch --resource&lt;span style="color:#666"&gt;=&lt;/span&gt;pods,pods/status
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create rolebinding</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/</guid><description>&lt;!--
title: kubectl create rolebinding
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a role binding for a particular role or cluster role.
--&gt;
&lt;p&gt;为特定角色或集群角色创建角色绑定。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create rolebinding NAME --clusterrole&lt;span style="color:#666"&gt;=&lt;/span&gt;NAME|--role&lt;span style="color:#666"&gt;=&lt;/span&gt;NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--user&lt;span style="color:#666"&gt;=&lt;/span&gt;用户名&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--group&lt;span style="color:#666"&gt;=&lt;/span&gt;组名&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--serviceaccount&lt;span style="color:#666"&gt;=&lt;/span&gt;命名空间:服务账户名&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Create a role binding for user1, user2, and group1 using the admin cluster role
 kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1
 
 # Create a role binding for service account monitoring:sa-dev using the admin role
 kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用 admin 集群角色为 user1、user2 和 group1 创建角色绑定&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create rolebinding admin --clusterrole&lt;span style="color:#666"&gt;=&lt;/span&gt;admin --user&lt;span style="color:#666"&gt;=&lt;/span&gt;user1 --user&lt;span style="color:#666"&gt;=&lt;/span&gt;user2 --group&lt;span style="color:#666"&gt;=&lt;/span&gt;group1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用 admin 角色为服务账户 monitoring:sa-dev 创建角色绑定&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create rolebinding admin-binding --role&lt;span style="color:#666"&gt;=&lt;/span&gt;admin --serviceaccount&lt;span style="color:#666"&gt;=&lt;/span&gt;monitoring:sa-dev
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create secret</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret/</guid><description>&lt;!--
title: kubectl create secret
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a secret with specified type.

 A docker-registry type secret is for accessing a container registry.

 A generic type secret indicate an Opaque secret type.

 A tls type secret holds TLS certificate and its associated key.
--&gt;
&lt;p&gt;创建指定类型的 Secret：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;docker-registry 类型 Secret 用于访问容器镜像仓库。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;generic 类型 Secret 表示不透明 Secret 类型。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;tls 类型 Secret 包含 TLS 证书及其关联密钥。&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create secret &lt;span style="color:#666"&gt;(&lt;/span&gt;docker-registry | generic | tls&lt;span style="color:#666"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for secret
--&gt;
secret 的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl create secret docker-registry</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/</guid><description>&lt;!--
title: kubectl create secret docker-registry
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a new secret for use with Docker registries.
 
 Dockercfg secrets are used to authenticate against Docker registries.
 
 When using the Docker command line to push images, you can authenticate to a given registry by running:
 '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.
--&gt;
&lt;p&gt;新建一个 Docker 仓库所用的 Secret。&lt;/p&gt;
&lt;p&gt;Dockercfg Secret 用于向 Docker 仓库进行身份认证。&lt;/p&gt;
&lt;p&gt;当使用 Docker 命令行推送镜像时，你可以通过运行以下命令向给定的仓库进行身份认证：&lt;/p&gt;</description></item><item><title>kubectl create secret generic</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/</guid><description>&lt;!--
title: kubectl create secret generic
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a secret based on a file, directory, or specified literal value.

 A single secret may package one or more key/value pairs.
--&gt;
&lt;p&gt;基于文件、目录或指定的文字值创建 Secret。&lt;/p&gt;
&lt;p&gt;单个 Secret 可以包含一个或多个键值对。&lt;/p&gt;
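&lt;!--
For example, a single secret packaging multiple literal key/value pairs can be sketched as follows (names are illustrative only, not part of the generated reference):
--&gt;
&lt;p&gt;例如，单个 Secret 可以打包多个字面值键值对，示意如下（其中的名称仅为示意，并非自动生成内容的一部分）：&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 基于两个字面值键值对创建名为 my-secret 的 Secret（名称与取值均为示意）&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用 --dry-run=client -o yaml 可在不访问集群的情况下预览生成的对象&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create secret generic my-secret --from-literal&lt;span style="color:#666"&gt;=&lt;/span&gt;username&lt;span style="color:#666"&gt;=&lt;/span&gt;admin --from-literal&lt;span style="color:#666"&gt;=&lt;/span&gt;password&lt;span style="color:#666"&gt;=&lt;/span&gt;S3cr3t --dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;client -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;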
&lt;!--
When creating a secret based on a file, the key will default to the basename of the file, and the value will default to the file content. If the basename is an invalid key or you wish to chose your own, you may specify an alternate key.

 When creating a secret based on a directory, each file whose basename is a valid key in the directory will be packaged into the secret. Any directory entries except regular files are ignored (e.g. subdirectories, symlinks, devices, pipes, etc).
--&gt;
&lt;p&gt;当基于文件创建 Secret 时，键将默认为文件的基本名称，值将默认为文件内容。
如果基本名称是无效的键，或者你希望选择自己的键，你可以指定一个替代键。&lt;/p&gt;</description></item><item><title>kubectl create secret tls</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/</guid><description>&lt;!--
title: kubectl create secret tls
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a TLS secret from the given public/private key pair.

 The public/private key pair must exist beforehand. The public key certificate must be .PEM encoded and match the given private key.
--&gt;
&lt;p&gt;使用给定的公钥/私钥对创建 TLS Secret。&lt;/p&gt;
&lt;p&gt;公钥/私钥对必须事先存在。公钥证书必须是以 .PEM 编码的，并且与给定的私钥匹配。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create secret tls NAME --cert&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/cert/file --key&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/key/file &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a new TLS secret named tls-secret with the given key pair
kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用给定的密钥对新建一个名为 tls-secret 的 TLS Secret&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create secret tls tls-secret --cert&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/tls.crt --key&lt;span style="color:#666"&gt;=&lt;/span&gt;path/to/tls.key
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create service</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service/</guid><description>&lt;!--
title: kubectl create service
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a service using a specified subcommand.
--&gt;
&lt;p&gt;使用指定的子命令创建 Service。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for service
--&gt;
service 的帮助信息。
&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--&gt;
操作所用的伪装用户名。用户可以是常规用户或命名空间中的服务账号。
&lt;/p&gt;</description></item><item><title>kubectl create service clusterip</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/</guid><description>&lt;!--
title: kubectl create service clusterip
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a ClusterIP service with the specified name.
--&gt;
&lt;p&gt;创建指定名称的 ClusterIP Service。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service clusterip NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--tcp&lt;span style="color:#666"&gt;=&lt;/span&gt;&amp;lt;port&amp;gt;:&amp;lt;targetPort&amp;gt;&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a new ClusterIP service named my-cs
kubectl create service clusterip my-cs --tcp=5678:8080

# Create a new ClusterIP service named my-cs (in headless mode)
kubectl create service clusterip my-cs --clusterip="None"
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 my-cs 的 ClusterIP Service&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service clusterip my-cs --tcp&lt;span style="color:#666"&gt;=&lt;/span&gt;5678:8080
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 my-cs 的 ClusterIP Service（无头模式）&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service clusterip my-cs --clusterip&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;None&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create service externalname</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/</guid><description>&lt;!--
title: kubectl create service externalname
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create an ExternalName service with the specified name.

 ExternalName service references to an external DNS address instead of only pods, which will allow application authors to reference services that exist off platform, on other clusters, or locally.
--&gt;
&lt;p&gt;创建指定名称的 ExternalName Service。&lt;/p&gt;
&lt;p&gt;ExternalName Service 引用外部 DNS 地址，而不仅仅是 Pod，
这类 Service 允许应用作者引用平台外、其他集群或本地存在的服务。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service externalname NAME --external-name external.name &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Create a new ExternalName service named my-ns
 kubectl create service externalname my-ns --external-name bar.com
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 my-ns 的 ExternalName Service&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service externalname my-ns --external-name bar.com
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create service loadbalancer</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/</guid><description>&lt;!--
title: kubectl create service loadbalancer
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a LoadBalancer service with the specified name.
--&gt;
&lt;p&gt;创建指定名称的 LoadBalancer 类型 Service。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service loadbalancer NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--tcp&lt;span style="color:#666"&gt;=&lt;/span&gt;port:targetPort&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a new LoadBalancer service named my-lbs
kubectl create service loadbalancer my-lbs --tcp=5678:8080
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建名为 my-lbs 的 LoadBalancer 类型 Service&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service loadbalancer my-lbs --tcp&lt;span style="color:#666"&gt;=&lt;/span&gt;5678:8080
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create service nodeport</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_nodeport/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_nodeport/</guid><description>&lt;!--
title: kubectl create service nodeport
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a NodePort service with the specified name.
--&gt;
&lt;p&gt;创建一个指定名称的 NodePort 类型 Service。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service nodeport NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--tcp&lt;span style="color:#666"&gt;=&lt;/span&gt;port:targetPort&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# Create a new NodePort service named my-ns
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 新建一个名为 my-ns 的 NodePort 类型 Service&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create service nodeport my-ns --tcp&lt;span style="color:#666"&gt;=&lt;/span&gt;5678:8080
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create serviceaccount</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/</guid><description>&lt;!--
title: kubectl create serviceaccount
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Create a service account with the specified name.
--&gt;
&lt;p&gt;创建指定名称的服务账号。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create serviceaccount NAME &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Create a new service account named my-service-account
kubectl create serviceaccount my-service-account
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 创建一个名为 my-service-account 的服务帐号&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create serviceaccount my-service-account
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl create token</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/</guid><description>&lt;!--
title: kubectl create token
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Request a service account token.
--&gt;
&lt;p&gt;请求一个服务账号令牌。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token SERVICE_ACCOUNT_NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace
kubectl create token myapp

# Request a token for a service account in a custom namespace
kubectl create token myapp --namespace myns

# Request a token with a custom expiration
kubectl create token myapp --duration 10m

# Request a token with a custom audience
kubectl create token myapp --audience https://example.com

# Request a token bound to an instance of a Secret object
kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret

# Request a token bound to an instance of a Secret object with a specific UID
kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 请求一个令牌，以当前命名空间中的服务账号 &amp;#34;myapp&amp;#34; 向 kube-apiserver 进行身份认证&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token myapp
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为自定义命名空间中的服务账号请求一个令牌&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token myapp --namespace myns
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 请求一个含自定义过期时间的令牌&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token myapp --duration 10m
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 请求一个具有自定义受众的令牌&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token myapp --audience https://example.com
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 请求一个绑定到 Secret 对象实例的令牌&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 请求一个绑定到特定 UID 的 Secret 对象实例的令牌&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl plugin list</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/</guid><description>&lt;!--
title: kubectl plugin list
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
List all available plugin files on a user's PATH.

 Available plugin files are those that are: - executable - anywhere on the user's PATH - begin with "kubectl-"
--&gt;
&lt;p&gt;列出用户 PATH 中所有可用的插件文件。&lt;/p&gt;
&lt;p&gt;可用的插件文件需符合以下条件：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;可执行文件&lt;/li&gt;
&lt;li&gt;位于用户 PATH 中的任意位置&lt;/li&gt;
&lt;li&gt;以 &amp;quot;kubectl-&amp;quot; 开头&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl plugin list &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
# List all available plugins
# List only binary names of available plugins without paths
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 列出所有可用的插件&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl plugin list
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 仅列出可用插件的二进制名称，不包含路径&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl plugin list --name-only
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for list
--&gt;
list 命令的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl rollout history</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/</guid><description>&lt;!--
title: kubectl rollout history
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
View previous rollout revisions and configurations.
--&gt;
&lt;p&gt;查看以前上线的修订版本和配置。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout &lt;span style="color:#a2f"&gt;history&lt;/span&gt; &lt;span style="color:#666"&gt;(&lt;/span&gt;TYPE NAME | TYPE/NAME&lt;span style="color:#666"&gt;)&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # View the rollout history of a deployment
 kubectl rollout history deployment/abc
 
 # View the details of daemonset revision 3
 kubectl rollout history daemonset/abc --revision=3
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 查看 Deployment 的上线历史记录&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout &lt;span style="color:#a2f"&gt;history&lt;/span&gt; deployment/abc
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 查看 DaemonSet 修订版本 3 的详细信息&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout &lt;span style="color:#a2f"&gt;history&lt;/span&gt; daemonset/abc --revision&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl rollout pause</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/</guid><description>&lt;!--
title: kubectl rollout pause
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Mark the provided resource as paused.

 Paused resources will not be reconciled by a controller. Use "kubectl rollout resume" to resume a paused resource. Currently only deployments support being paused.
--&gt;
&lt;p&gt;将所提供的资源标记为已暂停。&lt;/p&gt;
&lt;p&gt;控制器不会调和已暂停的资源。使用 &lt;code&gt;kubectl rollout resume&lt;/code&gt; 可恢复已暂停的资源。
目前只有 Deployment 支持暂停。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout pause RESOURCE
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Mark the nginx deployment as paused
 # Any current state of the deployment will continue its function; new updates
 # to the deployment will not have an effect as long as the deployment is paused
 kubectl rollout pause deployment/nginx
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将 nginx Deployment 标记为暂停&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# Deployment 的任何当前状态都将继续发挥作用；&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 只要 Deployment 处于暂停状态，对 Deployment 的更新就不会产生影响&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout pause deployment/nginx
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl rollout restart</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/</guid><description>&lt;!--
title: kubectl rollout restart
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Restart a resource.

 Resource rollout will be restarted.
--&gt;
&lt;p&gt;重启资源。&lt;/p&gt;
&lt;p&gt;资源将重新开始上线。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout restart RESOURCE
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Restart all deployments in the test-namespace namespace
 kubectl rollout restart deployment -n test-namespace
 
 # Restart a deployment
 kubectl rollout restart deployment/nginx
 
 # Restart a daemon set
 kubectl rollout restart daemonset/abc
 
 # Restart deployments with the app=nginx label
 kubectl rollout restart deployment --selector=app=nginx
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 重启 test-namespace 命名空间下的所有 Deployment&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout restart deployment -n test-namespace
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 重启 Deployment&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout restart deployment/nginx
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 重启 DaemonSet&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout restart daemonset/abc
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 重启带有标签 app=nginx 的 Deployment&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout restart deployment --selector&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#b8860b"&gt;app&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl rollout resume</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_resume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_resume/</guid><description>&lt;!--
title: kubectl rollout resume
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Resume a paused resource.

 Paused resources will not be reconciled by a controller. By resuming a resource, we allow it to be reconciled again. Currently only deployments support being resumed.
--&gt;
&lt;p&gt;恢复暂停的资源。&lt;/p&gt;
&lt;p&gt;控制器不会调和已暂停的资源。通过恢复资源，我们可以让控制器再次调和它。
目前只有 Deployment 支持恢复。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout resume RESOURCE
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Resume an already paused deployment
kubectl rollout resume deployment/nginx
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 恢复已暂停的 Deployment&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout resume deployment/nginx
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl rollout status</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/</guid><description>&lt;!--
title: kubectl rollout status
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Show the status of the rollout.

 By default 'rollout status' will watch the status of the latest rollout until it's done. If you don't want to wait for the rollout to finish then you can use --watch=false. Note that if a new rollout starts in-between, then 'rollout status' will continue watching the latest revision. If you want to pin to a specific revision and abort if it is rolled over by another revision, use --revision=N where N is the revision you need to watch for.
--&gt;
&lt;p&gt;显示上线的状态。&lt;/p&gt;
&lt;p&gt;默认情况下，&amp;quot;rollout status&amp;quot; 会监视最新一次上线的状态，直到其完成。
如果你不想等待上线完成，可以使用 &lt;code&gt;--watch=false&lt;/code&gt;。
注意，如果在此期间开始了新的上线，&amp;quot;rollout status&amp;quot; 将继续监视最新的修订版本。
如果你想固定到特定的修订版本，并在它被其他修订版本取代时中止，
请使用 &lt;code&gt;--revision=N&lt;/code&gt;，其中 N 是你需要监视的修订版本号。&lt;/p&gt;</description></item><item><title>kubectl rollout undo</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/</guid><description>&lt;!--
title: kubectl rollout undo
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Roll back to a previous rollout.
--&gt;
&lt;p&gt;回滚到之前上线的版本。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout undo &lt;span style="color:#666"&gt;(&lt;/span&gt;TYPE NAME | TYPE/NAME&lt;span style="color:#666"&gt;)&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Roll back to the previous deployment
 kubectl rollout undo deployment/abc
 
 # Roll back to daemonset revision 3
 kubectl rollout undo daemonset/abc --to-revision=3
 
 # Roll back to the previous deployment with dry-run
 kubectl rollout undo --dry-run=server deployment/abc
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 回滚到上一个 Deployment 的上一次部署状态&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout undo deployment/abc
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 回滚到 DaemonSet 的修订版本 3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout undo daemonset/abc --to-revision&lt;span style="color:#666"&gt;=&lt;/span&gt;&lt;span style="color:#666"&gt;3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 以试运行方式回滚到 Deployment 的上一个版本&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl rollout undo --dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server deployment/abc
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，在模板中字段或映射键缺失时忽略模板中的错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title>kubectl set env</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/</guid><description>&lt;!--
title: kubectl set env
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Update environment variables on a pod template.

 List environment variable definitions in one or more pods, pod templates. Add, update, or remove container environment variable definitions in one or more pod templates (within replication controllers or deployment configurations). View or modify the environment variable definitions on all containers in the specified pods or pod templates, or just those that match a wildcard.
--&gt;
&lt;p&gt;更新 Pod 模板中的环境变量。&lt;/p&gt;
&lt;p&gt;列出一个或多个 Pod、Pod 模板中的环境变量定义。
在一个或多个 Pod 模板（在副本控制器或部署配置中）中添加、更新或移除容器的环境变量定义。
查看或修改指定 Pod 或 Pod 模板中所有容器上的环境变量定义，或仅处理与通配符匹配的那些容器。&lt;/p&gt;</description></item><item><title>kubectl set image</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/</guid><description>&lt;!--
title: kubectl set image
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Update existing container image(s) of resources.

 Possible resources include (case insensitive):

 pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)
--&gt;
&lt;p&gt;更新资源的现有容器镜像。&lt;/p&gt;
&lt;p&gt;可能的资源包括（不区分大小写）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)
&lt;/code&gt;&lt;/pre&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; image &lt;span style="color:#666"&gt;(&lt;/span&gt;-f FILENAME | TYPE NAME&lt;span style="color:#666"&gt;)&lt;/span&gt; &lt;span style="color:#b8860b"&gt;CONTAINER_NAME_1&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;CONTAINER_IMAGE_1 ... &lt;span style="color:#b8860b"&gt;CONTAINER_NAME_N&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;CONTAINER_IMAGE_N
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'
kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1

# Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'
kubectl set image deployments,rc nginx=nginx:1.9.1 --all

# Update image of all containers of daemonset abc to 'nginx:1.9.1'
kubectl set image daemonset abc *=nginx:1.9.1

# Print result (in yaml format) of updating nginx container image from local file, without hitting the server
kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将 Deployment 的 nginx 容器镜像设置为 “nginx:1.9.1”，并将其 busybox 容器镜像设置为 “busybox”&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; image deployment/nginx &lt;span style="color:#b8860b"&gt;busybox&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox &lt;span style="color:#b8860b"&gt;nginx&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx:1.9.1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 更新所有 Deployment 和副本控制器的 nginx 容器镜像为 “nginx:1.9.1”&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; image deployments,rc &lt;span style="color:#b8860b"&gt;nginx&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx:1.9.1 --all
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 更新 DaemonSet abc 的所有容器镜像为 &amp;#34;nginx:1.9.1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; image daemonset abc *&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx:1.9.1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用本地文件更新 nginx 容器镜像，并以 YAML 格式打印结果，但不向服务器发出请求&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; image -f path/to/file.yaml &lt;span style="color:#b8860b"&gt;nginx&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;nginx:1.9.1 --local -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--all&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Select all resources, in the namespace of the specified resource types
--&gt;
在指定资源类型的命名空间中，选择所有资源。
&lt;/p&gt;</description></item><item><title>kubectl set resources</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/</guid><description>&lt;!--
title: kubectl set resources
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst up to its specified limits.

 For each compute resource, if a limit is specified and a request is omitted, the request will default to the limit.

 Possible resources include (case insensitive): Use "kubectl api-resources" for a complete list of supported resources..
--&gt;
&lt;p&gt;为定义 Pod 模板的任一资源指定计算资源要求（CPU、内存）。
如果 Pod 被成功调度，将保证获得所请求的资源量，但其用量可以突增至所指定的限制值。&lt;/p&gt;
&lt;p&gt;对于每种计算资源，如果指定了限制值而省略了请求值，则请求值默认等于限制值。&lt;/p&gt;
&lt;p&gt;可能的资源包括（不区分大小写）：使用 &amp;quot;kubectl api-resources&amp;quot; 获取受支持资源的完整列表。&lt;/p&gt;</description></item><item><title>kubectl set selector</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/</guid><description>&lt;!--
title: kubectl set selector
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Set the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.

 A selector must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters. If --resource-version is specified, then updates will use this resource version, otherwise the existing resource-version will be used. Note: currently selectors can only be set on Service objects.
--&gt;
&lt;p&gt;为某个资源设置选择算符。请注意，
如果资源在 &lt;code&gt;set selector&lt;/code&gt; 调用之前已有选择算符，则新的选择算符将覆盖旧的选择算符。&lt;/p&gt;
&lt;p&gt;选择算符必须以字母或数字开头，可以包含字母、数字、连字符、句点和下划线，最长 63 个字符。
如果指定了 &lt;code&gt;--resource-version&lt;/code&gt;，则更新将使用此资源版本，否则将使用现有的资源版本。
注意：目前选择算符只能在 Service 对象上设置。&lt;/p&gt;</description></item><item><title>kubectl set serviceaccount</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/</guid><description>&lt;!--
title: kubectl set serviceaccount
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Update the service account of pod template resources.

 Possible resources (case insensitive) can be:

 replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs), statefulset
--&gt;
&lt;p&gt;更新 Pod 模板资源的服务账号。&lt;/p&gt;
&lt;p&gt;可能的资源（不区分大小写）可以是：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs), statefulset
&lt;/code&gt;&lt;/pre&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; serviceaccount &lt;span style="color:#666"&gt;(&lt;/span&gt;-f FILENAME | TYPE NAME&lt;span style="color:#666"&gt;)&lt;/span&gt; SERVICE_ACCOUNT
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Set deployment nginx-deployment's service account to serviceaccount1
kubectl set serviceaccount deployment nginx-deployment serviceaccount1

# Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server
kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将名为 nginx-deployment 的 Deployment 的服务账号设置为 serviceaccount1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; serviceaccount deployment nginx-deployment serviceaccount1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 打印使用本地文件中服务账号更新 nginx Deployment 后的结果（以 YAML 格式），不向 API 服务器发送请求&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;client -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--all&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Select all resources, in the namespace of the specified resource types
--&gt;
在指定资源类型的命名空间中，选择所有资源。
&lt;/p&gt;</description></item><item><title>kubectl set subject</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/</guid><description>&lt;!--
title: kubectl set subject
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Update the user, group, or service account in a role binding or cluster role binding.
--&gt;
&lt;p&gt;更新角色绑定或集群角色绑定中的用户、组或服务账号。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; subject &lt;span style="color:#666"&gt;(&lt;/span&gt;-f FILENAME | TYPE NAME&lt;span style="color:#666"&gt;)&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--user&lt;span style="color:#666"&gt;=&lt;/span&gt;username&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--group&lt;span style="color:#666"&gt;=&lt;/span&gt;groupname&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--serviceaccount&lt;span style="color:#666"&gt;=&lt;/span&gt;namespace:serviceaccountname&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;--dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;server|client|none&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Update a cluster role binding for serviceaccount1
kubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1

# Update a role binding for user1, user2, and group1
kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1

# Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server
kubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 更新 serviceaccount1 的集群角色绑定&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; subject clusterrolebinding admin --serviceaccount&lt;span style="color:#666"&gt;=&lt;/span&gt;namespace:serviceaccount1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 更新 user1、user2 和 group1 的角色绑定&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; subject rolebinding admin --user&lt;span style="color:#666"&gt;=&lt;/span&gt;user1 --user&lt;span style="color:#666"&gt;=&lt;/span&gt;user2 --group&lt;span style="color:#666"&gt;=&lt;/span&gt;group1
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 打印从本地更新角色绑定主体的结果（以 YAML 格式），但不向服务器发送请求&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl create rolebinding admin --role&lt;span style="color:#666"&gt;=&lt;/span&gt;admin --user&lt;span style="color:#666"&gt;=&lt;/span&gt;admin -o yaml --dry-run&lt;span style="color:#666"&gt;=&lt;/span&gt;client | kubectl &lt;span style="color:#a2f"&gt;set&lt;/span&gt; subject --local -f - --user&lt;span style="color:#666"&gt;=&lt;/span&gt;foo -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--all&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;
&lt;!--
Select all resources, in the namespace of the specified resource types
--&gt;
在指定资源类型的命名空间中，选择所有资源。
&lt;/p&gt;</description></item><item><title>kubectl top node</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/</guid><description>&lt;!--
title: kubectl top node
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display resource (CPU/memory) usage of nodes.

 The top-node command allows you to see the resource consumption of nodes.
--&gt;
&lt;p&gt;显示节点的资源（CPU/内存）使用情况。&lt;/p&gt;
&lt;p&gt;top-node 命令可以让你查看节点的资源消耗情况。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top node &lt;span style="color:#666"&gt;[&lt;/span&gt;NAME | -l label&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
 # Show metrics for all nodes
 kubectl top node
 
 # Show metrics for a given node
 kubectl top node NODE_NAME
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示所有节点的指标&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top node
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示某个指定节点的指标&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top node NODE_NAME
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for node
--&gt;
关于 node 子命令的帮助信息。
&lt;/p&gt;</description></item><item><title>kubectl top pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/</guid><description>&lt;!--
title: kubectl top pod
content_type: tool-reference
weight: 30
auto_generated: true
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;!--
Display resource (CPU/memory) usage of pods.

 The 'top pod' command allows you to see the resource consumption of pods.

 Due to the metrics pipeline delay, they may be unavailable for a few minutes since pod creation.
--&gt;
&lt;p&gt;显示 Pod 的资源（CPU/内存）使用情况。&lt;/p&gt;
&lt;p&gt;&lt;code&gt;top pod&lt;/code&gt; 命令允许你查看 Pod 的资源消耗情况。&lt;/p&gt;
&lt;p&gt;由于指标管道的延迟，Pod 创建后的几分钟内可能无法获取资源消耗数据。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top pod &lt;span style="color:#666"&gt;[&lt;/span&gt;NAME | -l label&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="示例"&gt;示例&lt;/h2&gt;
&lt;!--
```
# Show metrics for all pods in the default namespace
kubectl top pod

# Show metrics for all pods in the given namespace
kubectl top pod --namespace=NAMESPACE

# Show metrics for a given pod and its containers
kubectl top pod POD_NAME --containers

# Show metrics for the pods defined by label name=myLabel
kubectl top pod -l name=myLabel
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示 default 命名空间中所有 Pod 的指标&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top pod
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示指定命名空间中所有 Pod 的指标&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top pod --namespace&lt;span style="color:#666"&gt;=&lt;/span&gt;NAMESPACE
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示指定 Pod 及其容器的指标&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top pod POD_NAME --containers
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 显示由标签 name=myLabel 所定义的 Pod 的指标&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl top pod -l &lt;span style="color:#b8860b"&gt;name&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;myLabel
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="选项"&gt;选项&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-A, --all-namespaces&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
--&gt;
如果存在，则列举所有命名空间中请求的对象。
即使使用 &lt;code&gt;--namespace&lt;/code&gt; 指定，当前上下文中的命名空间也会被忽略。
&lt;/p&gt;</description></item><item><title>Operator 模式</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/operator/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/operator/</guid><description>&lt;!--
title: Operator pattern
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Operators are software extensions to Kubernetes that make use of
[custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to manage applications and their components. Operators follow
Kubernetes principles, notably the [control loop](/docs/concepts/architecture/controller).
--&gt;
&lt;p&gt;Operator 是 Kubernetes 的扩展软件，
它利用&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;定制资源&lt;/a&gt;管理应用及其组件。
Operator 遵循 Kubernetes 的理念，特别是在&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/controller"&gt;控制回路&lt;/a&gt;方面。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Motivation

The _operator pattern_ aims to capture the key aim of a human operator who
is managing a service or set of services. Human operators who look after
specific applications and services have deep knowledge of how the system
ought to behave, how to deploy it, and how to react if there are problems.

People who run workloads on Kubernetes often like to use automation to take
care of repeatable tasks. The operator pattern captures how you can write
code to automate a task beyond what Kubernetes itself provides.
--&gt;
&lt;h2 id="motivation"&gt;初衷&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Operator 模式&lt;/strong&gt;旨在捕捉正在管理一个或一组服务的运维人员的关键目标。
这些运维人员负责一些特定的应用和 Service，他们需要清楚地知道系统应该如何运行、如何部署以及出现问题时如何处理。&lt;/p&gt;</description></item><item><title>Pinterest Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/pinterest/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/pinterest/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;After eight years in existence, Pinterest had grown into 1,000 microservices and multiple layers of infrastructure and diverse set-up tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.&lt;/p&gt;</description></item><item><title>Pod 安全策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-policy/</guid><description>&lt;!--
title: Pod Security Policies
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;div class="alert alert-warning" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;被移除的特性&lt;/div&gt;
&lt;!--
PodSecurityPolicy was [deprecated](/blog/2021/04/08/kubernetes-1-21-release-announcement/#podsecuritypolicy-deprecation)
in Kubernetes v1.21, and removed from Kubernetes in v1.25.
--&gt;
&lt;p&gt;PodSecurityPolicy 在 Kubernetes v1.21
中&lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/08/kubernetes-1-21-release-announcement/#podsecuritypolicy-deprecation"&gt;被弃用&lt;/a&gt;，
在 Kubernetes v1.25 中被移除。&lt;/p&gt;
&lt;/div&gt;
&lt;!--
Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using
either or both:
--&gt;
&lt;p&gt;作为替代，你可以使用下面任一方式执行类似的限制，或者同时使用下面这两种方式。&lt;/p&gt;
&lt;!--
- [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
- a 3rd party admission plugin, that you deploy and configure yourself
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-admission/"&gt;Pod 安全准入&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;自行部署并配置第三方准入插件&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
For a migration guide, see [Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](/docs/tasks/configure-pod-container/migrate-from-psp/).
For more information on the removal of this API,
see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
--&gt;
&lt;p&gt;有关如何迁移，
参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/"&gt;从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器&lt;/a&gt;。
有关移除此 API 的更多信息，参阅
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/"&gt;弃用 PodSecurityPolicy：过去、现在、未来&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>Pod 的生命周期</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/</guid><description>&lt;!--
title: Pod Lifecycle
content_type: concept
weight: 30
math: true
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting
in the `Pending` [phase](#pod-phase), moving through `Running` if at least one
of its primary containers starts OK, and then through either the `Succeeded` or
`Failed` phases depending on whether any container in the Pod terminated in failure.
--&gt;
&lt;p&gt;本页面讲述 Pod 的生命周期。
Pod 遵循预定义的生命周期，起始于 &lt;code&gt;Pending&lt;/code&gt; &lt;a href="#pod-phase"&gt;阶段&lt;/a&gt;，
如果其中至少有一个主要容器正常启动，则进入 &lt;code&gt;Running&lt;/code&gt;，之后取决于 Pod
中是否有容器以失败状态结束而进入 &lt;code&gt;Succeeded&lt;/code&gt; 或者 &lt;code&gt;Failed&lt;/code&gt; 阶段。&lt;/p&gt;</description></item><item><title>Pod 开销</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/</guid><description>&lt;!--
---
reviewers:
- dchen1107
- egernst
- tallclair
title: Pod Overhead
content_type: concept
weight: 30
---
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
resources are additional to the resources needed to run the container(s) inside the Pod.
In Kubernetes, _Pod Overhead_ is a way to account for the resources consumed by the Pod
infrastructure on top of the container requests &amp; limits.
--&gt;
&lt;p&gt;在节点上运行 Pod 时，Pod 本身也会占用一定数量的系统资源。这些资源是运行 Pod 内容器所需资源之外的额外资源。
在 Kubernetes 中，&lt;em&gt;Pod 开销&lt;/em&gt; 是一种方法，用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。&lt;/p&gt;</description></item><item><title>Secret</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/</guid><description>&lt;!--
reviewers:
- mikedanese
title: Secrets
api_metadata:
- apiVersion: "v1"
 kind: "Secret"
content_type: concept
feature:
 title: Secret and configuration management
 description: &gt;
 Deploy and update Secrets and application configuration without rebuilding your image
 and without exposing Secrets in your stack configuration.
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A Secret is an object that contains a small amount of sensitive data such as
a password, a token, or a key. Such information might otherwise be put in a
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; specification or in a
&lt;a class='glossary-tooltip' title='镜像（Image）是保存的容器实例，它打包了应用运行所需的一组软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-image' target='_blank' aria-label='container image'&gt;container image&lt;/a&gt;. Using a
Secret means that you don't need to include confidential data in your
application code.
--&gt;
&lt;p&gt;Secret 是一种包含密码、令牌或密钥等少量敏感信息的对象。
这样的信息可能会被放在 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 规约中或者镜像中。
使用 Secret 意味着你不需要在应用程序代码中包含机密数据。&lt;/p&gt;</description></item><item><title>StatefulSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/</guid><description>&lt;!--
reviewers:
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: StatefulSets
api_metadata:
- apiVersion: "apps/v1"
 kind: "StatefulSet"
content_type: concept
description: &gt;-
 A StatefulSet runs a group of Pods, and maintains a sticky identity for each of those Pods. This is useful for managing
 applications that need persistent storage or a stable, unique network identity.
weight: 30
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
&lt;!--
StatefulSet is the workload API object used to manage stateful applications.
--&gt;
&lt;p&gt;StatefulSet 是用来管理有状态应用的工作负载 API 对象。&lt;/p&gt;</description></item><item><title>提交案例分析</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/case-studies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/case-studies/</guid><description>&lt;!--
title: Submitting case studies
linktitle: Case studies
slug: case-studies
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Case studies highlight how organizations are using Kubernetes to solve real-world problems. The
Kubernetes marketing team and members of the &lt;a class='glossary-tooltip' title='云原生计算基金会（Cloud Native Computing Foundation）' data-toggle='tooltip' data-placement='top' href='https://cncf.io/' target='_blank' aria-label='CNCF'&gt;CNCF&lt;/a&gt;
collaborate with you on all case studies.

Case studies require extensive review before they're approved.
--&gt;
&lt;p&gt;案例分析用来概述组织如何使用 Kubernetes 解决现实世界的问题。
Kubernetes 市场团队和 &lt;a class='glossary-tooltip' title='云原生计算基金会（Cloud Native Computing Foundation）' data-toggle='tooltip' data-placement='top' href='https://cncf.io/' target='_blank' aria-label='CNCF'&gt;CNCF&lt;/a&gt; 成员会与你一起工作，
撰写所有的案例分析。&lt;/p&gt;</description></item><item><title>查明节点上所使用的容器运行时</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/</guid><description>&lt;!--
title: Find Out What Container Runtime is Used on a Node
content_type: task
reviewers:
- SergeyKanzhelev
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page outlines steps to find out what [container runtime](/docs/setup/production-environment/container-runtimes/)
the nodes in your cluster use.
--&gt;
&lt;p&gt;本页面描述查明集群中节点所使用的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes/"&gt;容器运行时&lt;/a&gt;
的步骤。&lt;/p&gt;
&lt;!--
Depending on the way you run your cluster, the container runtime for the nodes may
have been pre-configured or you need to configure it. If you're using a managed
Kubernetes service, there might be vendor-specific ways to check what container runtime is
configured for the nodes. The method described on this page should work whenever
the execution of `kubectl` is allowed.
--&gt;
&lt;p&gt;取决于你运行集群的方式，节点所使用的容器运行时可能是事先配置好的，
也可能需要你来配置。如果你在使用托管的 Kubernetes 服务，
可能存在特定于厂商的方法来检查节点上配置的容器运行时。
本页描述的方法应该在能够执行 &lt;code&gt;kubectl&lt;/code&gt; 的场合下都可以工作。&lt;/p&gt;</description></item><item><title>带 Pod 间通信的 Job</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/</guid><description>&lt;!--
title: Job with Pod-to-Pod Communication
content_type: task
min-kubernetes-server-version: v1.21
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In this example, you will run a Job in [Indexed completion mode](/blog/2021/04/19/introducing-indexed-jobs/)
configured such that the pods created by the Job can communicate with each other using pod hostnames rather
than pod IP addresses.

Pods within a Job might need to communicate among themselves. The user workload running in each pod
could query the Kubernetes API server to learn the IPs of the other Pods, but it's much simpler to
rely on Kubernetes' built-in DNS resolution.
--&gt;
&lt;p&gt;在此例中，你将以&lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/19/introducing-indexed-jobs/"&gt;索引完成模式&lt;/a&gt;运行一个 Job，
并通过配置使得该 Job 所创建的各 Pod 之间可以使用 Pod 主机名而不是 Pod IP 地址进行通信。&lt;/p&gt;</description></item><item><title>调度策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/scheduling/policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/scheduling/policies/</guid><description>&lt;!--
title: Scheduling Policies
content_type: concept
sitemap:
 priority: 0.2 # Scheduling priorities are deprecated
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes versions before v1.23, a scheduling policy can be used to specify the *predicates* and *priorities* process. For example, you can set a scheduling policy by
running `kube-scheduler --policy-config-file &lt;filename&gt;` or `kube-scheduler --policy-configmap &lt;ConfigMap&gt;`.

This scheduling policy is not supported since Kubernetes v1.23. Associated flags `policy-config-file`, `policy-configmap`, `policy-configmap-namespace` and `use-legacy-policy-config` are also not supported. Instead, use the [Scheduler Configuration](/docs/reference/scheduling/config/) to achieve similar behavior.
--&gt;
&lt;p&gt;在 Kubernetes v1.23 版本之前，可以使用调度策略来指定 &lt;strong&gt;predicates&lt;/strong&gt; 和 &lt;strong&gt;priorities&lt;/strong&gt; 过程。
例如，可以通过运行 &lt;code&gt;kube-scheduler --policy-config-file &amp;lt;filename&amp;gt;&lt;/code&gt; 或者
&lt;code&gt;kube-scheduler --policy-configmap &amp;lt;ConfigMap&amp;gt;&lt;/code&gt; 设置调度策略。&lt;/p&gt;</description></item><item><title>调试 StatefulSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-statefulset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-statefulset/</guid><description>&lt;!-- 
reviewers:
- bprashanth
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: Debug a StatefulSet
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This task shows you how to debug a StatefulSet.
--&gt;
&lt;p&gt;此任务展示如何调试 StatefulSet。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
* You should have a StatefulSet running that you want to investigate.
--&gt;
&lt;ul&gt;
&lt;li&gt;你需要有一个 Kubernetes 集群，并且 kubectl 命令行工具已配置为可与你的集群通信。&lt;/li&gt;
&lt;li&gt;你应该有一个运行中的 StatefulSet，以便用于调试。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## Debugging a StatefulSet

In order to list all the pods which belong to a StatefulSet, which have a label `app.kubernetes.io/name=MyApp` set on them,
you can use the following:
--&gt;
&lt;h2 id="debugging-a-statefulset"&gt;调试 StatefulSet&lt;/h2&gt;
&lt;p&gt;列出属于某 StatefulSet 且带有 &lt;code&gt;app.kubernetes.io/name=MyApp&lt;/code&gt; 标签
的所有 Pod 时，可以使用以下命令：&lt;/p&gt;</description></item><item><title>调整分配给容器的 CPU 和内存资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/resize-container-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/resize-container-resources/</guid><description>&lt;!--
title: Resize CPU and Memory Resources assigned to Containers
content_type: task
weight: 30
min-kubernetes-server-version: 1.33
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： InPlacePodVerticalScaling"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page explains how to change the CPU and memory resource requests and limits
assigned to a container *without recreating the Pod*.
--&gt;
&lt;p&gt;本页面说明了如何在&lt;strong&gt;不重新创建 Pod&lt;/strong&gt; 的情况下，更改分配给容器的 CPU 和内存资源请求与限制。&lt;/p&gt;
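&lt;p&gt;下面是一个示意性的例子（Pod 名 &lt;code&gt;resize-demo&lt;/code&gt;、容器名 &lt;code&gt;app&lt;/code&gt; 与取值均为假设）：通过 Pod 的 &lt;code&gt;resize&lt;/code&gt; 子资源，可以在不重建 Pod 的情况下更新资源请求：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"}}}]}}'
&lt;/code&gt;&lt;/pre&gt;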
&lt;!--
Traditionally, changing a Pod's resource requirements necessitated deleting the existing Pod
and creating a replacement, often managed by a [workload controller](/docs/concepts/workloads/controllers/).
In-place Pod Resize allows changing the CPU/memory allocation of container(s) within a running Pod
while potentially avoiding application disruption. The process for resizing Pod resources is covered in [Resize CPU and Memory Resources assigned to Pods](/docs/tasks/configure-pod-container/resize-pod-resources).
--&gt;
&lt;p&gt;传统上，更改 Pod 的资源需求需要删除现有 Pod 并创建一个替代 Pod，
这通常由&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/"&gt;工作负载控制器&lt;/a&gt;管理。
而就地 Pod 调整功能允许在运行中的 Pod 内变更容器的 CPU 和内存分配，从而可能避免干扰应用。
Pod 资源调整的流程详见：&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/resize-pod-resources"&gt;调整分配给 Pod 的 CPU 与内存资源&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>对象名称和 ID</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/names/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/names/</guid><description>&lt;!--
reviewers:
- mikedanese
- thockin
title: Object Names and IDs
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Each &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='object'&gt;object&lt;/a&gt; in your cluster has a [_Name_](#names) that is unique for that type of resource.
Every Kubernetes object also has a [_UID_](#uids) that is unique across your whole cluster.

For example, you can only have one Pod named `myapp-1234` within the same [namespace](/docs/concepts/overview/working-with-objects/namespaces/), but you can have one Pod and one Deployment that are each named `myapp-1234`.
--&gt;
&lt;p&gt;集群中的每一个&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;都有一个&lt;a href="#names"&gt;&lt;strong&gt;名称&lt;/strong&gt;&lt;/a&gt;来标识在同类资源中的唯一性。&lt;/p&gt;</description></item><item><title>分配 Pod 级别 CPU 和内存资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-pod-level-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-pod-level-resources/</guid><description>&lt;!--
title: Assign Pod-level CPU and memory resources
content_type: task
weight: 30
min-kubernetes-server-version: 1.34
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta" title="特性门控： PodLevelResources"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.34 [beta]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page shows how to specify CPU and memory resources for a Pod at pod-level in
addition to container-level resource specifications. A Kubernetes node allocates
resources to a pod based on the pod's resource requests. These requests can be
defined at the pod level or individually for containers within the pod. When
both are present, the pod-level requests take precedence.
--&gt;
&lt;p&gt;本页介绍除了容器级别的资源规约外，如何在 Pod 级别指定 CPU 和内存资源。
Kubernetes 节点基于 Pod 的资源请求分配资源。
这些请求可以在 Pod 级别定义，也可以逐个为 Pod 内的容器定义。
当两种级别的请求都存在时，Pod 级别的请求优先。&lt;/p&gt;</description></item><item><title>给 Kubernetes 博客提交文章</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/article-submission/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/article-submission/</guid><description>&lt;!--
title: Submitting articles to Kubernetes blogs
slug: article-submission
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
There are two official Kubernetes blogs, and the CNCF has its own blog where you can cover Kubernetes too.
For the [main Kubernetes blog](/docs/contribute/blog/), we (the Kubernetes project) like to publish articles with different perspectives and special focuses, that have a link to Kubernetes.

With only a few special case exceptions, we only publish content that hasn't been submitted or published anywhere else.
--&gt;
&lt;p&gt;Kubernetes 有两个官方博客，CNCF 也有自己的博客频道，你也可以在 CNCF 博客频道上发布与
Kubernetes 相关的内容。对于 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/"&gt;Kubernetes 主博客&lt;/a&gt;，
我们（Kubernetes 项目组）希望发布与 Kubernetes 有关联的具有不同视角和独特关注点的文章。&lt;/p&gt;</description></item><item><title>鉴权</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/authorization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/authorization/</guid><description>&lt;!--
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Authorization
content_type: concept
weight: 30
description: &gt;
 Details of Kubernetes authorization mechanisms and supported authorization modes.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes authorization takes place following
[authentication](/docs/reference/access-authn-authz/authentication/).
Usually, a client making a request must be authenticated (logged in) before its
request can be allowed; however, Kubernetes also allows anonymous requests in
some circumstances.

For an overview of how authorization fits into the wider context of API access
control, read
[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/).
--&gt;
&lt;p&gt;Kubernetes 鉴权在&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/authentication/"&gt;身份认证&lt;/a&gt;之后进行。
通常，发出请求的客户端必须经过身份认证（登录），其请求才能被允许；
但是，Kubernetes 在某些情况下也允许匿名请求。&lt;/p&gt;</description></item><item><title>仅在某些节点上运行 Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/pods-some-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-daemon/pods-some-nodes/</guid><description>&lt;!--
title: Running Pods on Only Some Nodes
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page demonstrates how you can run &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;
on only some &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt; as part of a
&lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;
--&gt;
&lt;p&gt;本页演示了你如何能够仅在某些&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;上作为
&lt;a class='glossary-tooltip' title='确保 Pod 的副本在集群中的一组节点上运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;
的一部分运行&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;。&lt;/p&gt;
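&lt;p&gt;其基本做法是在 DaemonSet 的 Pod 模板中设置 &lt;code&gt;nodeSelector&lt;/code&gt;，下面是一个示意片段（名称、标签与镜像均为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      # 仅在带有 ssd=true 标签的节点上运行
      nodeSelector:
        ssd: "true"
      containers:
      - name: example
        image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;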
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>客户端库</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/client-libraries/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/client-libraries/</guid><description>&lt;!--
title: Client Libraries
reviewers:
- ahmetb
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page contains an overview of the client libraries for using the Kubernetes
API from various programming languages.
--&gt;
&lt;p&gt;本页面概要介绍了基于各种编程语言使用 Kubernetes API 的客户端库。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
To write applications using the [Kubernetes REST API](/docs/reference/using-api/),
you do not need to implement the API calls and request/response types yourself.
You can use a client library for the programming language you are using.
--&gt;
&lt;p&gt;在使用 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/"&gt;Kubernetes REST API&lt;/a&gt; 编写应用程序时，
你并不需要自己实现 API 调用和 “请求/响应” 类型。
你可以根据自己的编程语言需要选择使用合适的客户端库。&lt;/p&gt;</description></item><item><title>控制器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/controller/</guid><description>&lt;!-- 
title: Controllers
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In robotics and automation, a _control loop_ is
a non-terminating loop that regulates the state of a system.

Here is one example of a control loop: a thermostat in a room.

When you set the temperature, that's telling the thermostat
about your *desired state*. The actual room temperature is the
*current state*. The thermostat acts to bring the current state
closer to the desired state, by turning equipment on or off.
--&gt;
&lt;p&gt;在机器人技术和自动化领域，控制回路（Control Loop）是一个非终止回路，用于调节系统状态。&lt;/p&gt;</description></item><item><title>临时卷</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/ephemeral-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/ephemeral-volumes/</guid><description>&lt;!--
reviewers:
- jsafrane
- saad-ali
- msau42
- xing-yang
- pohly
title: Ephemeral Volumes
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document describes _ephemeral volumes_ in Kubernetes. Familiarity
with [volumes](/docs/concepts/storage/volumes/) is suggested, in
particular PersistentVolumeClaim and PersistentVolume.
--&gt;
&lt;p&gt;本文档描述 Kubernetes 中的 &lt;strong&gt;临时卷（Ephemeral Volume）&lt;/strong&gt;。
建议先了解&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/"&gt;卷&lt;/a&gt;，特别是 PersistentVolumeClaim 和 PersistentVolume。&lt;/p&gt;
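&lt;p&gt;作为一个最简单的临时卷示例（名称与挂载路径均为假设），下面的 Pod 使用 &lt;code&gt;emptyDir&lt;/code&gt; 卷来存放缓存数据：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
&lt;/code&gt;&lt;/pre&gt;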
&lt;!-- body --&gt;
&lt;!--
Some applications need additional storage but don't care whether that
data is stored persistently across restarts. For example, caching
services are often limited by memory size and can move infrequently
used data into storage that is slower than memory with little impact
on overall performance.
--&gt;
&lt;p&gt;有些应用程序需要额外的存储，但并不关心数据在重启后是否仍然可用。
例如，缓存服务经常受限于内存大小，而且可以将不常用的数据转移到比内存慢的存储中，对总体性能的影响并不大。&lt;/p&gt;</description></item><item><title>配置对多集群的访问</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</guid><description>&lt;!--
title: Configure Access to Multiple Clusters
content_type: task
weight: 30
card:
 name: tasks
 weight: 25
 title: Configure access to clusters
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure access to multiple clusters by using
configuration files. After your clusters, users, and contexts are defined in
one or more configuration files, you can quickly switch between clusters by using the
`kubectl config use-context` command.
--&gt;
&lt;p&gt;本文展示如何使用配置文件来配置对多个集群的访问。
在将集群、用户和上下文定义在一个或多个配置文件中之后，用户可以使用
&lt;code&gt;kubectl config use-context&lt;/code&gt; 命令快速地在集群之间进行切换。&lt;/p&gt;</description></item><item><title>配置命名空间的最小和最大内存约束</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/</guid><description>&lt;!--
title: Configure Minimum and Maximum Memory Constraints for a Namespace
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to set minimum and maximum values for memory used by containers
running in a &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;. 
You specify minimum and maximum memory values in a
[LimitRange](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
object. If a Pod does not meet the constraints imposed by the LimitRange,
it cannot be created in the namespace.
--&gt;
&lt;p&gt;本页介绍如何设置在&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='名字空间'&gt;名字空间&lt;/a&gt;
中运行的容器所使用的内存的最小值和最大值。你可以在
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/limit-range-v1/"&gt;LimitRange&lt;/a&gt;
对象中指定最小和最大内存值。如果 Pod 不满足 LimitRange 施加的约束，
则无法在名字空间中创建它。&lt;/p&gt;</description></item><item><title>确定 Pod 失败的原因</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/determine-reason-pod-failure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/determine-reason-pod-failure/</guid><description>&lt;!--
title: Determine the Reason for Pod Failure
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to write and read a Container termination message.
--&gt;
&lt;p&gt;本文介绍如何编写和读取容器的终止消息。&lt;/p&gt;
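&lt;p&gt;例如（Pod 名 &lt;code&gt;termination-demo&lt;/code&gt; 为假设），可以通过如下命令读取容器最近一次终止时写入的消息：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl get pod termination-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
&lt;/code&gt;&lt;/pre&gt;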
&lt;!--
Termination messages provide a way for containers to write
information about fatal events to a location where it can
be easily retrieved and surfaced by tools like dashboards
and monitoring software. In most cases, information that you
put in a termination message should also be written to
the general
[Kubernetes logs](/docs/concepts/cluster-administration/logging/).
--&gt;
&lt;p&gt;终止消息为容器提供了一种方法，可以将有关致命事件的信息写入某个位置，
在该位置可以通过仪表板和监控软件等工具轻松检索和显示致命事件。
在大多数情况下，你放入终止消息中的信息也应该写入
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/logging/"&gt;常规 Kubernetes 日志&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>容器运行时类（Runtime Class）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/runtime-class/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/runtime-class/</guid><description>&lt;!--
reviewers:
 - tallclair
 - dchen1107
title: Runtime Class
content_type: concept
weight: 30
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.20 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
This page describes the RuntimeClass resource and runtime selection mechanism.

RuntimeClass is a feature for selecting the container runtime configuration. The container runtime
configuration is used to run a Pod's containers.
--&gt;
&lt;p&gt;本页面描述了 RuntimeClass 资源和运行时的选择机制。&lt;/p&gt;
&lt;p&gt;RuntimeClass 是一个用于选择容器运行时配置的特性，容器运行时配置用于运行 Pod 中的容器。&lt;/p&gt;
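&lt;p&gt;作为示意（名称 &lt;code&gt;myclass&lt;/code&gt; 与处理程序 &lt;code&gt;myconfiguration&lt;/code&gt; 均为假设），一个 RuntimeClass 对象大致如下，Pod 可以通过 &lt;code&gt;runtimeClassName&lt;/code&gt; 字段引用它：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
# handler 对应 CRI 配置中相应的运行时配置名称
handler: myconfiguration
&lt;/code&gt;&lt;/pre&gt;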
&lt;!-- body --&gt;
&lt;!--
## Motivation

You can set a different RuntimeClass between different Pods to provide a balance of
performance versus security. For example, if part of your workload deserves a high
level of information security assurance, you might choose to schedule those Pods so
that they run in a container runtime that uses hardware virtualization. You'd then
benefit from the extra isolation of the alternative runtime, at the expense of some
additional overhead.
--&gt;
&lt;h2 id="motivation"&gt;动机&lt;/h2&gt;
&lt;p&gt;你可以在不同的 Pod 上设置不同的 RuntimeClass，以在性能与安全性之间取得平衡。
例如，如果你的部分工作负载需要高级别的信息安全保证，
你可以选择调度这些 Pod，使它们在使用硬件虚拟化的容器运行时中运行。
这样，你将从这些不同运行时所提供的额外隔离中获益，代价是一些额外的开销。&lt;/p&gt;</description></item><item><title>升级 kubeadm 集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Upgrading kubeadm clusters
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
1.34.x to version 1.35.x, and from version
1.35.x to 1.35.y (where `y &gt; x`). Skipping MINOR versions
when upgrading is unsupported. For more details, please visit [Version Skew Policy](/releases/version-skew-policy/).
--&gt;
&lt;p&gt;本页介绍如何将 &lt;code&gt;kubeadm&lt;/code&gt; 创建的 Kubernetes 集群从 1.34.x
版本升级到 1.35.x 版本以及从 1.35.x
升级到 1.35.y（其中 &lt;code&gt;y &amp;gt; x&lt;/code&gt;）。略过次版本号的升级是不被支持的。
更多详情请访问&lt;a href="https://andygol-k8s.netlify.app/zh-cn/releases/version-skew-policy/"&gt;版本偏差策略&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>使用 AppArmor 限制容器对资源的访问</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/apparmor/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/apparmor/</guid><description>&lt;!--
reviewers:
- stclair
title: Restrict a Container's Access to Resources with AppArmor
content_type: tutorial
weight: 30
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="特性门控： AppArmor"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.31 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page shows you how to load AppArmor profiles on your nodes and enforce
those profiles in Pods. To learn more about how Kubernetes can confine Pods using
AppArmor, see
[Linux kernel security constraints for Pods and containers](/docs/concepts/security/linux-kernel-security-constraints/#apparmor).
--&gt;
&lt;p&gt;本页面向你展示如何在节点上加载 AppArmor 配置文件并在 Pod 中强制应用这些配置文件。
要了解有关 Kubernetes 如何使用 AppArmor 限制 Pod 的更多信息，请参阅
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/linux-kernel-security-constraints/#apparmor"&gt;Pod 和容器的 Linux 内核安全约束&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>使用 Cilium 提供 NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/</guid><description>&lt;!--
reviewers:
- danwent
- aanm
title: Use Cilium for NetworkPolicy
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use Cilium for NetworkPolicy.

For background on Cilium, read the [Introduction to Cilium](https://docs.cilium.io/en/stable/overview/intro).
--&gt;
&lt;p&gt;本页展示如何使用 Cilium 提供 NetworkPolicy。&lt;/p&gt;
&lt;p&gt;关于 Cilium 的背景知识，请阅读 &lt;a href="https://docs.cilium.io/en/stable/overview/intro"&gt;Cilium 介绍&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 ConfigMap 来配置 Redis</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/configure-redis-using-configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/configure-redis-using-configmap/</guid><description>&lt;!--
reviewers:
- eparis
- pmorie
title: Configuring Redis using a ConfigMap
content_type: tutorial
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides a real world example of how to configure Redis using a ConfigMap and
builds upon the [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) task.
--&gt;
&lt;p&gt;这篇文档基于&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;配置 Pod 以使用 ConfigMap&lt;/a&gt;
这个任务，提供了一个使用 ConfigMap 来配置 Redis 的真实案例。&lt;/p&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Create a ConfigMap with Redis configuration values
* Create a Redis Pod that mounts and uses the created ConfigMap
* Verify that the configuration was correctly applied.
--&gt;
&lt;ul&gt;
&lt;li&gt;使用 Redis 配置的值创建一个 ConfigMap&lt;/li&gt;
&lt;li&gt;创建一个 Redis Pod，挂载并使用创建的 ConfigMap&lt;/li&gt;
&lt;li&gt;验证配置已经被正确应用&lt;/li&gt;
&lt;/ul&gt;
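&lt;p&gt;例如（名称与取值均为示例），可以用如下命令创建一个包含 Redis 配置的 ConfigMap：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl create configmap example-redis-config --from-literal=redis-config="maxmemory 2mb"
&lt;/code&gt;&lt;/pre&gt;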
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 crictl 对 Kubernetes 节点进行调试</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/crictl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/crictl/</guid><description>&lt;!--
reviewers:
- Random-Liu
- feiskyer
- mrunalp
title: Debugging Kubernetes nodes with crictl
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
`crictl` is a command-line interface for CRI-compatible container runtimes.
You can use it to inspect and debug container runtimes and applications on a
Kubernetes node. `crictl` and its source are hosted in the
[cri-tools](https://github.com/kubernetes-sigs/cri-tools) repository.
--&gt;
&lt;p&gt;&lt;code&gt;crictl&lt;/code&gt; 是 CRI 兼容的容器运行时命令行接口。
你可以使用它来检查和调试 Kubernetes 节点上的容器运行时和应用程序。
&lt;code&gt;crictl&lt;/code&gt; 及其源代码托管在
&lt;a href="https://github.com/kubernetes-sigs/cri-tools"&gt;cri-tools&lt;/a&gt; 代码库。&lt;/p&gt;</description></item><item><title>使用 Init 容器定义环境变量值</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-environment-variable-via-file/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/define-environment-variable-via-file/</guid><description>&lt;!--
title: Define Environment Variable Values Using An Init Container
content_type: task
min-kubernetes-server-version: v1.34
weight: 30
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta" title="特性门控： EnvFiles"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [beta]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page shows how to configure environment variables for containers in a Pod via a file.
--&gt;
&lt;p&gt;本页展示如何通过文件为 Pod 中的容器配置环境变量。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 kubeadm 创建集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Creating a cluster with kubeadm
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
&lt;img src="https://andygol-k8s.netlify.app/images/kubeadm-stacked-color.png" align="right" width="150px"&gt;&lt;/img&gt;
Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices.
In fact, you can use `kubeadm` to set up a cluster that will pass the
[Kubernetes Conformance tests](/blog/2017/10/software-conformance-certification/).
`kubeadm` also supports other cluster lifecycle functions, such as
[bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
--&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/kubeadm-stacked-color.png" align="right" width="150px"&gt;&lt;/img&gt;
使用 &lt;code&gt;kubeadm&lt;/code&gt;，你能创建一个符合最佳实践的最小化 Kubernetes 集群。
事实上，你可以使用 &lt;code&gt;kubeadm&lt;/code&gt; 配置一个通过
&lt;a href="https://andygol-k8s.netlify.app/blog/2017/10/software-conformance-certification/"&gt;Kubernetes 一致性测试&lt;/a&gt;的集群。
&lt;code&gt;kubeadm&lt;/code&gt; 还支持其他集群生命周期功能，
例如&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/"&gt;启动引导令牌&lt;/a&gt;和集群升级。&lt;/p&gt;</description></item><item><title>使用 Kustomize 管理 Secret</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/</guid><description>&lt;!-- 
title: Managing Secrets using Kustomize
content_type: task
weight: 30
description: Creating Secret objects using kustomization.yaml file.
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
`kubectl` supports using the [Kustomize object management tool](/docs/tasks/manage-kubernetes-objects/kustomization/) to manage Secrets
and ConfigMaps. You create a *resource generator* using Kustomize, which
generates a Secret that you can apply to the API server using `kubectl`.
--&gt;
&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; 支持使用 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization/"&gt;Kustomize 对象管理工具&lt;/a&gt;来管理
Secret 和 ConfigMap。你可以使用 Kustomize 创建&lt;strong&gt;资源生成器（Resource Generator）&lt;/strong&gt;，
该生成器会生成一个 Secret，让你能够通过 &lt;code&gt;kubectl&lt;/code&gt; 应用到 API 服务器。&lt;/p&gt;</description></item><item><title>使用工作队列进行精细的并行处理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/</guid><description>&lt;!--
title: Fine Parallel Processing Using a Work Queue
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In this example, you will run a Kubernetes Job that runs multiple parallel
tasks as worker processes, each running as a separate Pod.
--&gt;
&lt;p&gt;在此示例中，你将运行一个 Kubernetes Job，该 Job 将多个并行任务作为工作进程运行，
每个任务在单独的 Pod 中运行。&lt;/p&gt;
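&lt;p&gt;这种并行模式依赖 Job 的 &lt;code&gt;parallelism&lt;/code&gt; 字段，下面是一个示意片段（名称与镜像均为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq
spec:
  # 同时运行 2 个工作进程 Pod
  parallelism: 2
  template:
    spec:
      containers:
      - name: worker
        image: example.com/job-wq:1
      restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;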
&lt;!--
In this example, as each pod is created, it picks up one unit of work
from a task queue, processes it, and repeats until the end of the queue is reached.

Here is an overview of the steps in this example:
--&gt;
&lt;p&gt;在这个例子中，当每个 Pod 被创建时，它会从一个任务队列中获取一个工作单元，处理它，然后重复，直到到达队列的尾部。&lt;/p&gt;</description></item><item><title>使用索引作业完成静态工作分配下的并行处理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/indexed-parallel-processing-static/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/indexed-parallel-processing-static/</guid><description>&lt;!-- 
title: Indexed Job for Parallel Processing with Static Work Assignment
content_type: task
min-kubernetes-server-version: v1.21
weight: 30
--&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!-- overview --&gt;
&lt;!-- 
In this example, you will run a Kubernetes Job that uses multiple parallel
worker processes.
Each worker is a different container running in its own Pod. The Pods have an
_index number_ that the control plane sets automatically, which allows each Pod
to identify which part of the overall task to work on.
--&gt;
&lt;p&gt;在此示例中，你将运行一个使用多个并行工作进程的 Kubernetes Job。
每个 worker 都是在自己的 Pod 中运行的不同容器。
Pod 具有控制平面自动设置的&lt;strong&gt;索引编号（index number）&lt;/strong&gt;，
这些编号使得每个 Pod 能识别出要处理整个任务的哪个部分。&lt;/p&gt;</description></item><item><title>使用指令式命令管理 Kubernetes 对象</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-command/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-command/</guid><description>&lt;!--
title: Managing Kubernetes Objects Using Imperative Commands
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes objects can quickly be created, updated, and deleted directly using
imperative commands built into the `kubectl` command-line tool. This document
explains how those commands are organized and how to use them to manage live objects.
--&gt;
&lt;p&gt;使用构建在 &lt;code&gt;kubectl&lt;/code&gt; 命令行工具中的指令式命令可以直接快速创建、更新和删除
Kubernetes 对象。本文档解释这些命令的组织方式以及如何使用它们来管理活跃对象。&lt;/p&gt;
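&lt;!--
For example, here are a few typical imperative commands (illustrative only):
--&gt;
&lt;p&gt;例如，下面是几个典型的指令式命令（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kubectl create deployment nginx --image=nginx   # 创建一个 Deployment 对象
kubectl scale deployment/nginx --replicas=3     # 更新副本个数
kubectl delete deployment/nginx                 # 删除该对象
&lt;/code&gt;&lt;/pre&gt;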
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Install [`kubectl`](/docs/tasks/tools/).
--&gt;
&lt;p&gt;安装&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;。&lt;/p&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 集群，且必须配置 kubectl 命令行工具使其与你的集群通信。
建议在至少有两个不作为控制平面主机的节点的集群上运行本教程。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>示例：使用 StatefulSet 部署 Cassandra</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/cassandra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/cassandra/</guid><description>&lt;!--
title: "Example: Deploying Cassandra with a StatefulSet"
reviewers:
- ahmetb
content_type: tutorial
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial shows you how to run [Apache Cassandra](https://cassandra.apache.org/) on Kubernetes.
Cassandra, a database, needs persistent storage to provide data durability (application _state_).
In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster.
--&gt;
&lt;p&gt;本教程描述了如何在 Kubernetes 上运行 &lt;a href="https://cassandra.apache.org/"&gt;Apache Cassandra&lt;/a&gt;。
数据库 Cassandra 需要永久性存储来提供数据持久性（应用&lt;strong&gt;状态&lt;/strong&gt;）。
在此示例中，自定义 Cassandra seed provider 使数据库能够在新的 Cassandra 实例加入 Cassandra 集群时发现它们。&lt;/p&gt;</description></item><item><title>手动生成证书</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/certificates/</guid><description>&lt;!-- 
title: Generate Certificates Manually
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
When using client certificate authentication, you can generate certificates
manually through [`easyrsa`](https://github.com/OpenVPN/easy-rsa), [`openssl`](https://github.com/openssl/openssl) or [`cfssl`](https://github.com/cloudflare/cfssl).
--&gt;
&lt;p&gt;在使用客户端证书认证的场景下，你可以通过 &lt;a href="https://github.com/OpenVPN/easy-rsa"&gt;&lt;code&gt;easyrsa&lt;/code&gt;&lt;/a&gt;、
&lt;a href="https://github.com/openssl/openssl"&gt;&lt;code&gt;openssl&lt;/code&gt;&lt;/a&gt; 或 &lt;a href="https://github.com/cloudflare/cfssl"&gt;&lt;code&gt;cfssl&lt;/code&gt;&lt;/a&gt;
等工具以手工方式生成证书。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h3 id="easyrsa"&gt;easyrsa&lt;/h3&gt;
&lt;!-- 
**easyrsa** can manually generate certificates for your cluster.
--&gt;
&lt;p&gt;&lt;strong&gt;easyrsa&lt;/strong&gt; 支持以手工方式为你的集群生成证书。&lt;/p&gt;
&lt;!-- 
1. Download, unpack, and initialize the patched version of `easyrsa3`.
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;下载、解压、初始化打过补丁的 &lt;code&gt;easyrsa3&lt;/code&gt;。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;curl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tar xzf easy-rsa.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f"&gt;cd&lt;/span&gt; easy-rsa-master/easyrsa3
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;./easyrsa init-pki
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- 
1. Generate a new certificate authority (CA). `--batch` sets automatic mode;
 `--req-cn` specifies the Common Name (CN) for the CA's new root certificate.
--&gt;
&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;生成新的证书颁发机构（CA）。参数 &lt;code&gt;--batch&lt;/code&gt; 用于设置自动模式；
参数 &lt;code&gt;--req-cn&lt;/code&gt; 用于设置新的根证书的通用名称（CN）。&lt;/p&gt;</description></item><item><title>通过环境变量将 Pod 信息呈现给容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</guid><description>&lt;!--
title: Expose Pod Information to Containers Through Environment Variables
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how a Pod can use environment variables to expose information
about itself to containers running in the Pod, using the _downward API_.
You can use environment variables to expose Pod fields, container fields, or both.
--&gt;
&lt;p&gt;此页面展示 Pod 如何使用 &lt;strong&gt;downward API&lt;/strong&gt; 通过环境变量把自身的信息呈现给 Pod 中运行的容器。
你可以使用环境变量来呈现 Pod 的字段、容器字段或两者。&lt;/p&gt;
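&lt;!--
As a sketch, a container can read the Pod's name from an environment variable
populated through the downward API (illustrative fragment of a Pod spec):
--&gt;
&lt;p&gt;作为示意，容器可以通过 downward API 填充的环境变量读取 Pod 名称（Pod 规约片段，仅作示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
&lt;/code&gt;&lt;/pre&gt;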
&lt;!--
In Kubernetes, there are two ways to expose Pod and container fields to a running container:

* _Environment variables_, as explained in this task
* [Volume files](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)

Together, these two ways of exposing Pod and container fields are called the
downward API.

As Services are the primary mode of communication between containerized applications managed by Kubernetes, 
it is helpful to be able to discover them at runtime. 

Read more about accessing Services [here](/docs/tutorials/services/connect-applications-service/#accessing-the-service).
--&gt;
&lt;p&gt;在 Kubernetes 中有两种方式可以将 Pod 和容器字段呈现给运行中的容器：&lt;/p&gt;</description></item><item><title>为 Windows Pod 和容器配置 GMSA</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/</guid><description>&lt;!--
title: Configure GMSA for Windows Pods and containers
content_type: task
weight: 30
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page shows how to configure
[Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA)
for Pods and containers that will run on Windows nodes. Group Managed Service Accounts
are a specific type of Active Directory account that provides automatic password management,
simplified service principal name (SPN) management, and the ability to delegate the management
to other administrators across multiple servers.
--&gt;
&lt;p&gt;本页展示如何为将运行在 Windows 节点上的 Pod 和容器配置
&lt;a href="https://docs.microsoft.com/zh-cn/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview"&gt;组管理的服务账号（Group Managed Service Accounts，GMSA）&lt;/a&gt;。
组管理的服务账号是一种特殊类型的活动目录（Active Directory）账号，
提供自动化的密码管理、简化的服务主体名称（Service Principal Name，SPN）
管理以及跨多个服务器将管理操作委派给其他管理员等能力。&lt;/p&gt;</description></item><item><title>校验节点设置</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/node-conformance/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/node-conformance/</guid><description>&lt;!--
reviewers:
- Random-Liu
title: Validate node setup
weight: 30
--&gt;
&lt;nav id="TableOfContents"&gt;
 &lt;ul&gt;
 &lt;li&gt;&lt;a href="#node-conformance-test"&gt;节点一致性测试&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#node-prerequisite"&gt;节点的前提条件&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#running-node-conformance-test"&gt;运行节点一致性测试&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#running-node-conformance-test-for-other-architectures"&gt;针对其他硬件体系结构运行节点一致性测试&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#running-selected-test"&gt;运行特定的测试&lt;/a&gt;&lt;/li&gt;
 &lt;li&gt;&lt;a href="#caveats"&gt;注意事项&lt;/a&gt;&lt;/li&gt;
 &lt;/ul&gt;
&lt;/nav&gt;
&lt;!--
## Node Conformance Test
--&gt;
&lt;h2 id="node-conformance-test"&gt;节点一致性测试&lt;/h2&gt;
&lt;!--
*Node conformance test* is a containerized test framework that provides a system
verification and functionality test for a node. The test validates whether the
node meets the minimum requirements for Kubernetes; a node that passes the test
is qualified to join a Kubernetes cluster.
--&gt;
&lt;p&gt;&lt;strong&gt;节点一致性测试&lt;/strong&gt;是一个容器化的测试框架，提供了针对节点的系统验证和功能测试。
测试验证节点是否满足 Kubernetes 的最低要求；通过测试的节点有资格加入 Kubernetes 集群。&lt;/p&gt;</description></item><item><title>运行一个有状态的应用程序</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/</guid><description>&lt;!--
reviewers:
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: Run a Replicated Stateful Application
content_type: tutorial
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to run a replicated stateful application using a
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;.
This application is a replicated MySQL database. The example topology has a
single primary server and multiple replicas, using asynchronous row-based
replication.
--&gt;
&lt;p&gt;本页展示如何使用 &lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;
控制器运行一个有状态的应用程序。此例是多副本的 MySQL 数据库。
示例应用的拓扑结构有一个主服务器和多个副本，使用异步的基于行（Row-Based）
的数据复制。&lt;/p&gt;</description></item><item><title>租约（Lease）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/leases/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/leases/</guid><description>&lt;!--
title: Leases
api_metadata:
- apiVersion: "coordination.k8s.io/v1"
 kind: "Lease"
content_type: concept
weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Distributed systems often have a need for _leases_, which provide a mechanism to lock shared resources
and coordinate activity between members of a set.
In Kubernetes, the lease concept is represented by [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)
objects in the `coordination.k8s.io` &lt;a class='glossary-tooltip' title='Kubernetes API 中的一组相关路径。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API Group'&gt;API Group&lt;/a&gt;,
which are used for system-critical capabilities such as node heartbeats and component-level leader election.
--&gt;
&lt;p&gt;分布式系统通常需要&lt;strong&gt;租约（Lease）&lt;/strong&gt;；租约提供了一种机制来锁定共享资源并协调集合成员之间的活动。
在 Kubernetes 中，租约概念表示为 &lt;code&gt;coordination.k8s.io&lt;/code&gt;
&lt;a class='glossary-tooltip' title='Kubernetes API 中的一组相关路径。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API 组'&gt;API 组&lt;/a&gt;中的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-v1/"&gt;Lease&lt;/a&gt; 对象，
常用于类似节点心跳和组件级领导者选举等系统核心能力。&lt;/p&gt;</description></item><item><title>使用 RBAC 鉴权</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/rbac/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/rbac/</guid><description>&lt;!--
reviewers:
- erictune
- deads2k
- liggitt
title: Using RBAC Authorization
content_type: concept
aliases: [/rbac/]
weight: 33
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Role-based access control (RBAC) is a method of regulating access to computer or
network resources based on the roles of individual users within your organization.
--&gt;
&lt;p&gt;基于角色（Role）的访问控制（RBAC）是一种基于组织中各个用户的角色来控制其对计算机或网络资源访问的方法。&lt;/p&gt;
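&lt;!--
For example, the following Role grants read access to Pods in the "default"
namespace (illustrative manifest):
--&gt;
&lt;p&gt;例如，下面的 Role 授予对 default 名字空间中 Pod 的读权限（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
&lt;/code&gt;&lt;/pre&gt;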
&lt;!-- body --&gt;
&lt;!--
RBAC authorization uses the `rbac.authorization.k8s.io`
&lt;a class='glossary-tooltip' title='Kubernetes API 中的一组相关路径。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API group'&gt;API group&lt;/a&gt; to drive authorization
decisions, allowing you to dynamically configure policies through the Kubernetes API.
--&gt;
&lt;p&gt;RBAC 鉴权机制使用 &lt;code&gt;rbac.authorization.k8s.io&lt;/code&gt;
&lt;a class='glossary-tooltip' title='Kubernetes API 中的一组相关路径。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API 组'&gt;API 组&lt;/a&gt;来驱动鉴权决定，
允许你通过 Kubernetes API 动态配置策略。&lt;/p&gt;</description></item><item><title>使用 Node 鉴权</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/node/</guid><description>&lt;!--
reviewers:
- timstclair
- deads2k
- liggitt
title: Using Node Authorization
content_type: concept
weight: 34
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Node authorization is a special-purpose authorization mode that specifically
authorizes API requests made by kubelets.
--&gt;
&lt;p&gt;节点鉴权是一种特殊用途的鉴权模式，专门对 kubelet 发出的 API 请求进行授权。&lt;/p&gt;
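&lt;!--
To enable the Node authorizer, include `Node` in the API server's
`--authorization-mode` flag, for example (illustrative):
--&gt;
&lt;p&gt;要启用节点鉴权器，可在 API 服务器的 &lt;code&gt;--authorization-mode&lt;/code&gt; 参数中包含 &lt;code&gt;Node&lt;/code&gt;，例如（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kube-apiserver --authorization-mode=Node,RBAC ...
&lt;/code&gt;&lt;/pre&gt;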
&lt;!-- body --&gt;
&lt;!--
## Overview
--&gt;
&lt;h2 id="overview"&gt;概述&lt;/h2&gt;
&lt;!--
The Node authorizer allows a kubelet to perform API operations. This includes:
--&gt;
&lt;p&gt;节点鉴权器允许 kubelet 执行 API 操作。包括：&lt;/p&gt;
&lt;!--
Read operations:
--&gt;
&lt;p&gt;读取操作：&lt;/p&gt;</description></item><item><title>Kubernetes 中的通用表达式语言</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/cel/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/cel/</guid><description>&lt;!--
title: Common Expression Language in Kubernetes
reviewers:
- jpbetz
- cici37
content_type: concept
weight: 35
min-kubernetes-server-version: 1.25
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The [Common Expression Language (CEL)](https://github.com/google/cel-go) is used
in the Kubernetes API to declare validation rules, policy rules, and other
constraints or conditions.

CEL expressions are evaluated directly in the
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;, making CEL a
convenient alternative to out-of-process mechanisms, such as webhooks, for many
extensibility use cases. Your CEL expressions continue to execute so long as the
control plane's API server component remains available.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/google/cel-go"&gt;通用表达式语言 (Common Expression Language, CEL)&lt;/a&gt;
用于声明 Kubernetes API 的验证规则、策略规则和其他限制或条件。&lt;/p&gt;</description></item><item><title>在 Kubernetes 节点上配置交换内存</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/provision-swap-memory/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/provision-swap-memory/</guid><description>&lt;!--
reviewers:
- lmktfy
title: Configuring swap memory on Kubernetes nodes
content_type: tutorial
weight: 35
min-kubernetes-server-version: "1.33"
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an example of how to provision and configure swap memory on a Kubernetes node using kubeadm.
--&gt;
&lt;p&gt;本文演示了如何使用 kubeadm 在 Kubernetes 节点上制备和配置交换内存。&lt;/p&gt;
&lt;!-- lessoncontent --&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Provision swap memory on a Kubernetes node using kubeadm.
* Learn to configure both encrypted and unencrypted swap.
* Learn to enable swap on boot.
--&gt;
&lt;ul&gt;
&lt;li&gt;使用 kubeadm 在 Kubernetes 节点上制备交换内存。&lt;/li&gt;
&lt;li&gt;学习配置加密和未加密的交换内存。&lt;/li&gt;
&lt;li&gt;学习如何在系统启动时启用交换内存。&lt;/li&gt;
&lt;/ul&gt;
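&lt;!--
Once swap is provisioned, allowing the kubelet to use it involves a
KubeletConfiguration similar to the following (illustrative fragment):
--&gt;
&lt;p&gt;交换内存制备完成后，允许 kubelet 使用它涉及类似下面的 KubeletConfiguration 配置（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap
&lt;/code&gt;&lt;/pre&gt;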
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 集群，且必须配置 kubectl 命令行工具使其与你的集群通信。
建议在至少有两个不作为控制平面主机的节点的集群上运行本教程。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>Webhook 模式</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/webhook/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/webhook/</guid><description>&lt;!--
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Webhook Mode
content_type: concept
weight: 36
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. A web application implementing WebHooks will POST a message to a URL when certain things happen.
--&gt;
&lt;p&gt;Webhook 是一种 HTTP 回调：某些条件下触发的 HTTP POST 请求；通过 HTTP POST
发送的简单事件通知。实现 Webhook 的 Web 应用会在特定事件发生时把消息发送（POST）到特定的 URL。&lt;/p&gt;</description></item><item><title>使用 ABAC 鉴权</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/abac/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/abac/</guid><description>&lt;!--
reviewers:
- erictune
- lavalamp
- deads2k
- liggitt
title: Using ABAC Authorization
content_type: concept
weight: 39
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted
to users through the use of policies which combine attributes together.
--&gt;
&lt;p&gt;基于属性的访问控制（Attribute-based access control，ABAC）定义了一种访问控制范式，
通过使用将属性组合在一起的策略来向用户授予访问权限。&lt;/p&gt;
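&lt;!--
For example, the following policy line grants user "alice" full access to all
resources (illustrative):
--&gt;
&lt;p&gt;例如，下面的策略行授予用户 alice 对所有资源的完全访问权限（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-json"&gt;{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
&lt;/code&gt;&lt;/pre&gt;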
&lt;!-- body --&gt;
&lt;!--
## Policy File Format

To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC`
on startup.

The file format is [one JSON object per line](https://jsonlines.org/). There
should be no enclosing list or map, only one map per line.

Each line is a "policy object", where each such object is a map with the following
properties:
--&gt;
&lt;h2 id="policy-file-format"&gt;策略文件格式&lt;/h2&gt;
&lt;p&gt;要启用 &lt;code&gt;ABAC&lt;/code&gt; 模式，可以在启动时指定 &lt;code&gt;--authorization-policy-file=SOME_FILENAME&lt;/code&gt; 和 &lt;code&gt;--authorization-mode=ABAC&lt;/code&gt;。&lt;/p&gt;</description></item><item><title>DaemonSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/daemonset/</guid><description>&lt;!--
reviewers:
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
title: DaemonSet
api_metadata:
- apiVersion: "apps/v1"
 kind: "DaemonSet"
description: &gt;-
 A DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the operation of your cluster, such as a networking helper tool, or be part of an add-on.
content_type: concept
weight: 40
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A _DaemonSet_ ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the
cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage
collected. Deleting a DaemonSet will clean up the Pods it created.
--&gt;
&lt;p&gt;&lt;strong&gt;DaemonSet&lt;/strong&gt; 确保全部（或者某些）节点上运行一个 Pod 的副本。
当有节点加入集群时，也会为它们新增一个 Pod。
当有节点从集群移除时，这些 Pod 也会被回收。删除 DaemonSet 将会删除它创建的所有 Pod。&lt;/p&gt;</description></item><item><title>Init 容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/init-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/init-containers/</guid><description>&lt;!--
reviewers:
- erictune
title: Init Containers
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an overview of init containers: specialized containers that run
before app containers in a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.
Init containers can contain utilities or setup scripts not present in an app image.
--&gt;
&lt;p&gt;本页提供了 Init 容器的概览。Init 容器是一种特殊容器，在 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
内的应用容器启动之前运行。Init 容器可以包括一些应用镜像中不存在的实用工具和安装脚本。&lt;/p&gt;
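&lt;!--
For example, the following Pod runs an init container before its app container
starts (illustrative manifest):
--&gt;
&lt;p&gt;例如，下面的 Pod 会在应用容器启动之前先运行一个 Init 容器（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  - name: init-setup
    image: busybox:1.28
    command: ['sh', '-c', 'echo 正在初始化...; sleep 2']
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo 应用已启动; sleep 3600']
&lt;/code&gt;&lt;/pre&gt;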
&lt;!--
You can specify init containers in the Pod specification alongside the `containers`
array (which describes app containers).
--&gt;
&lt;p&gt;你可以在 Pod 的规约中与用来描述应用容器的 &lt;code&gt;containers&lt;/code&gt; 数组平行的位置指定
Init 容器。&lt;/p&gt;</description></item><item><title>JSONPath 支持</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/jsonpath/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/jsonpath/</guid><description>&lt;!--
title: JSONPath Support
content_type: concept
weight: 40
math: true
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The &lt;a class='glossary-tooltip' title='kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt; tool supports JSONPath templates as an output format.
--&gt;
&lt;p&gt;&lt;a class='glossary-tooltip' title='kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt;
工具支持 JSONPath 模板作为输出格式。&lt;/p&gt;
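&lt;!--
For example, this command uses a JSONPath template to print the names of all
Pods (illustrative):
--&gt;
&lt;p&gt;例如，下面的命令使用 JSONPath 模板输出所有 Pod 的名称（仅作示意）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kubectl get pods -o jsonpath='{.items[*].metadata.name}'
&lt;/code&gt;&lt;/pre&gt;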
&lt;!-- body --&gt;
&lt;!--
A _JSONPath template_ is composed of JSONPath expressions enclosed by curly braces: `{` and `}`.
Kubectl uses JSONPath expressions to filter on specific fields in the JSON object and format the output.
In addition to the original JSONPath template syntax, the following functions and syntax are valid:
--&gt;
&lt;p&gt;&lt;strong&gt;JSONPath 模板&lt;/strong&gt;由大括号 &lt;code&gt;{&lt;/code&gt; 和 &lt;code&gt;}&lt;/code&gt; 包起来的 JSONPath 表达式组成。
kubectl 使用 JSONPath 表达式来过滤 JSON 对象中的特定字段并格式化输出。
除了原始的 JSONPath 模板语法，以下函数和语法也是有效的:&lt;/p&gt;</description></item><item><title>kubeadm upgrade</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/</guid><description>&lt;!--
reviewers:
- luxas
- jbeda
title: kubeadm upgrade
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
`kubeadm upgrade` is a user-friendly command that wraps complex upgrading logic
behind one command, with support for both planning an upgrade and actually performing it.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm upgrade&lt;/code&gt; 是一个对用户友好的命令，它将复杂的升级逻辑包装在一条命令后面，支持升级的规划和实际执行。&lt;/p&gt;
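&lt;!--
A typical workflow is to plan first, then apply (the version number is a
placeholder):
--&gt;
&lt;p&gt;典型的工作流程是先规划、再执行（版本号仅为占位示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kubeadm upgrade plan            # 检查当前集群可升级到的版本
kubeadm upgrade apply v1.33.0   # 将控制平面升级到指定版本
&lt;/code&gt;&lt;/pre&gt;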
&lt;!-- body --&gt;
&lt;!--
## kubeadm upgrade guidance
--&gt;
&lt;h2 id="kubeadm-upgrade-guidance"&gt;kubeadm upgrade 指南&lt;/h2&gt;
&lt;!--
The steps for performing an upgrade using kubeadm are outlined in [this document](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
For older versions of kubeadm, please refer to older documentation sets of the Kubernetes website.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;本文档&lt;/a&gt;概述使用
kubeadm 执行升级的步骤。与 kubeadm 旧版本相关的文档，请参阅 Kubernetes 网站的旧版文档。&lt;/p&gt;</description></item><item><title>kubeadm upgrade phases</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase/</guid><description>&lt;!--
## kubeadm upgrade apply phase {#cmd-apply-phase}

Using the phases of `kubeadm upgrade apply`, you can choose to execute the separate steps of the initial upgrade
of a control plane node.
--&gt;
&lt;h2 id="cmd-apply-phase"&gt;kubeadm upgrade apply 阶段&lt;/h2&gt;
&lt;p&gt;使用 &lt;code&gt;kubeadm upgrade apply&lt;/code&gt; 的各个阶段，
你可以选择执行控制平面节点初始升级的单独步骤。&lt;/p&gt;
&lt;ul class="nav nav-tabs" id="tab-apply-phase" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-apply-phase-0" role="tab" aria-controls="tab-apply-phase-0" aria-selected="true"&gt;phase&lt;/a&gt;&lt;/li&gt;
	 
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-1" role="tab" aria-controls="tab-apply-phase-1"&gt;preflight&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-2" role="tab" aria-controls="tab-apply-phase-2"&gt;control-plane&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-3" role="tab" aria-controls="tab-apply-phase-3"&gt;upload-config&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-4" role="tab" aria-controls="tab-apply-phase-4"&gt;kubelet-config&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-5" role="tab" aria-controls="tab-apply-phase-5"&gt;bootstrap-token&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-6" role="tab" aria-controls="tab-apply-phase-6"&gt;addon&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-7" role="tab" aria-controls="tab-apply-phase-7"&gt;post-upgrade&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;div class="tab-content" id="tab-apply-phase"&gt;&lt;div id="tab-apply-phase-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-apply-phase-0"&gt;

&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Use this command to invoke single phase of the "apply" workflow
--&gt;
&lt;p&gt;使用此命令来调用 &amp;quot;apply&amp;quot; 工作流的单个阶段。&lt;/p&gt;</description></item><item><title>Kubernetes API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/</guid><description>&lt;!--
reviewers:
- chenopis
title: The Kubernetes API
content_type: concept
weight: 40
description: &gt;
 The Kubernetes API lets you query and manipulate the state of objects in Kubernetes.
 The core of Kubernetes' control plane is the API server and the HTTP API that it exposes. Users, the different parts of your cluster, and external components all communicate with one another through the API server.
card:
 name: concepts
 weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The core of Kubernetes' &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
is the &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;. The API server
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.

The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
(for example: Pods, Namespaces, ConfigMaps, and Events).
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制面'&gt;控制面&lt;/a&gt;的核心是
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt;。
API 服务器负责提供 HTTP API，以供用户、集群中的不同部分和集群外部组件相互通信。&lt;/p&gt;</description></item><item><title>Kubernetes 弃用策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/deprecation-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/deprecation-policy/</guid><description>&lt;!--
reviewers:
- bgrant0607
- lavalamp
- thockin
title: Kubernetes Deprecation Policy
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document details the deprecation policy for various facets of the system.
--&gt;
&lt;p&gt;本文档详细解释系统中各个层面的弃用策略（Deprecation Policy）。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
Kubernetes is a large system with many components and many contributors. As
with any such software, the feature set naturally evolves over time, and
sometimes a feature may need to be removed. This could include an API, a flag,
or even an entire feature. To avoid breaking existing users, Kubernetes follows
a deprecation policy for aspects of the system that are slated to be removed.
--&gt;
&lt;p&gt;Kubernetes 是一个组件众多、贡献者人数众多的大系统。
就像很多类似的软件，所提供的功能特性集合会随着时间推移而自然发生变化，
而且有时候某个功能特性可能需要被移除。被移除的可能是一个 API、
一个参数标志，甚至是某个完整的功能特性。为了避免影响到现有用户，
Kubernetes 对于其中渐次移除的各个方面规定了一种弃用策略并遵从此策略。&lt;/p&gt;</description></item><item><title>Linux 节点的安全性</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/linux-security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/linux-security/</guid><description>&lt;!--
reviewers:
- lmktfy
title: Security For Linux Nodes
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes security considerations and best practices specific to the Linux operating system.
--&gt;
&lt;p&gt;本篇介绍特定于 Linux 操作系统的安全注意事项和最佳实践。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Protection for Secret data on nodes
--&gt;
&lt;h2 id="protection-for-secret-data-on-nodes"&gt;保护节点上的 Secret 数据&lt;/h2&gt;
&lt;!--
On Linux nodes, memory-backed volumes (such as [`secret`](/docs/concepts/configuration/secret/)
volume mounts, or [`emptyDir`](/docs/concepts/storage/volumes/#emptydir) with `medium: Memory`)
are implemented with a `tmpfs` filesystem.
--&gt;
&lt;p&gt;在 Linux 节点上，由内存支持的卷（例如 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/"&gt;&lt;code&gt;secret&lt;/code&gt;&lt;/a&gt;
卷挂载，或带有 &lt;code&gt;medium: Memory&lt;/code&gt; 的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#emptydir"&gt;&lt;code&gt;emptyDir&lt;/code&gt;&lt;/a&gt;）
使用 &lt;code&gt;tmpfs&lt;/code&gt; 文件系统实现。&lt;/p&gt;</description></item><item><title>Pod 调度就绪态</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/</guid><description>&lt;!--
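作为补充示意（Pod 名称与镜像仅为示例），下面的 Pod 使用带有 `medium: Memory` 的 `emptyDir` 卷，在 Linux 节点上该卷即由 `tmpfs` 支持、数据只保存在内存中：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-volume-demo    # 示例名称
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.6
    volumeMounts:
    - name: scratch
      mountPath: /cache
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # 在 Linux 节点上由 tmpfs（内存文件系统）支持
```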
title: Pod Scheduling Readiness
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.30 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Pods were considered ready for scheduling once created. Kubernetes scheduler
does its due diligence to find nodes to place all pending Pods. However, in a
real-world case, some Pods may stay in a "miss-essential-resources" state for a long period.
These Pods actually churn the scheduler (and downstream integrators like Cluster AutoScaler)
in an unnecessary manner.

By specifying/removing a Pod's `.spec.schedulingGates`, you can control when a Pod is ready
to be considered for scheduling.
--&gt;
&lt;p&gt;Pod 一旦创建就被认为准备好进行调度。
Kubernetes 调度程序尽职尽责地寻找节点来放置所有待处理的 Pod。
然而，在实际环境中，有些 Pod 可能会长时间处于“缺少必要资源”的状态。
这些 Pod 实际上以一种不必要的方式扰乱了调度器（以及 Cluster AutoScaler 这类下游的集成方）。&lt;/p&gt;</description></item><item><title>Pod 拓扑分布约束</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/</guid><description>&lt;!--
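如上文英文原文所述，通过指定或移除 Pod 的 `.spec.schedulingGates`，可以控制 Pod 何时准备好被考虑调度。下面是一个示意清单（门控名称为虚构示例）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulingGates:
  - name: example.com/foo    # 虚构的门控名称；所有门控被移除后 Pod 才会被调度
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```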
title: Pod Topology Spread Constraints
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
You can use _topology spread constraints_ to control how
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; are spread across your cluster
among failure-domains such as regions, zones, nodes, and other user-defined topology
domains. This can help to achieve high availability as well as efficient resource
utilization.

You can set [cluster-level constraints](#cluster-level-default-constraints) as a default,
or configure topology spread constraints for individual workloads.
--&gt;
&lt;p&gt;你可以使用 &lt;strong&gt;拓扑分布约束（Topology Spread Constraints）&lt;/strong&gt; 来控制
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 在集群内故障域之间的分布，
例如区域（Region）、可用区（Zone）、节点和其他用户自定义拓扑域。
这样做有助于实现高可用并提升资源利用率。&lt;/p&gt;</description></item><item><title>Turnkey 云解决方案</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/turnkey-solutions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/turnkey-solutions/</guid><description>&lt;!-- 
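下面是一个按可用区分布 Pod 的示意清单（名称、标签与镜像仅为示例）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                  # 各拓扑域之间 Pod 数量的最大差值
    topologyKey: topology.kubernetes.io/zone    # 按可用区（Zone）分布
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```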
---
title: Turnkey Cloud Solutions
content_type: concept
weight: 40
---
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page provides a list of Kubernetes certified solution providers. From each
provider page, you can learn how to install and setup production
ready clusters.
--&gt;
&lt;p&gt;本页列示 Kubernetes 认证解决方案供应商。
在每个供应商的页面中，你可以了解如何安装和设置生产就绪的集群。&lt;/p&gt;
&lt;!-- body --&gt;





&lt;script&gt;
function updateLandscapeSource(button,shouldUpdateFragment) {
 console.log({button: button,shouldUpdateFragment: shouldUpdateFragment});
 try {
 if(shouldUpdateFragment) {
 window.location.hash = "#iframe-landscape-"+button.id;
 
 } else {
 var landscapeElements = document.querySelectorAll("#landscape");
 let categories=button.dataset.landscapeTypes;
 let link = `https://landscape.cncf.io/embed/embed.html?key=${encodeURIComponent(categories)}&amp;headers=false&amp;style=shadowed&amp;size=md&amp;bg-color=%23d95e00&amp;fg-color=%23ffffff&amp;iframe-resizer=true`
 landscapeElements[0].src = link;
 }
 }
 catch(err) {
 console.log({message: "error handling Landscape switch", error: err})
 }
}


document.addEventListener("DOMContentLoaded", function () {
 let hashChangeHandler = () =&gt; {
 if (window.location.hash) {
 let selectedTriggerElements = document.querySelectorAll(".landscape-trigger"+window.location.hash);
 if (selectedTriggerElements.length == 1) {
 landscapeSource = selectedTriggerElements[0];
 console.log("Updating Landscape source based on fragment:", window
 .location
 .hash
 .substring(1));
 updateLandscapeSource(landscapeSource,false);
 }
 }
 }
 var landscapeTriggerElements = document.querySelectorAll(".landscape-trigger");
 landscapeTriggerElements.forEach(element =&gt; {
 element.onclick = function() {
 updateLandscapeSource(element,true);
 };
 });
 var landscapeDefaultElements = document.querySelectorAll(".landscape-trigger.landscape-default");
 if (landscapeDefaultElements.length == 1) {
 let defaultLandscapeSource = landscapeDefaultElements[0];
 updateLandscapeSource(defaultLandscapeSource,false);
 }
 window.addEventListener("hashchange", hashChangeHandler, false);
 
 hashChangeHandler();
});
&lt;/script&gt;&lt;div id="frameHolder"&gt;
 
 &lt;iframe id="iframe-landscape" src="https://landscape.cncf.io/embed/embed.html?key=platform--certified-kubernetes-hosted&amp;headers=false&amp;style=shadowed&amp;size=md&amp;bg-color=%233371e3&amp;fg-color=%23ffffff&amp;iframe-resizer=true" style="width: 1px; min-width: 100%; min-height: 100px; border: 0;"&gt;&lt;/iframe&gt;
 &lt;script&gt;
 iFrameResize({ }, '#iframe-landscape');
 &lt;/script&gt;
 
&lt;/div&gt;</description></item><item><title>Windows 节点的安全性</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/windows-security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/windows-security/</guid><description>&lt;!--
reviewers:
- jayunit100
- jsturtevant
- marosset
- perithompson
title: Security For Windows Nodes
content_type: concept
weight: 75
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes security considerations and best practices specific to the Windows operating system.
--&gt;
&lt;p&gt;本篇介绍特定于 Windows 操作系统的安全注意事项和最佳实践。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Protection for Secret data on nodes
--&gt;
&lt;h2 id="protection-for-secret-data-on-nodes"&gt;保护节点上的 Secret 数据&lt;/h2&gt;
&lt;!--
On Windows, data from Secrets are written out in clear text onto the node's local
storage (as compared to using tmpfs / in-memory filesystems on Linux). As a cluster
operator, you should take both of the following additional measures:
--&gt;
&lt;p&gt;在 Windows 上，来自 Secret 的数据以明文形式写入节点的本地存储
（与在 Linux 上使用 tmpfs / 内存中文件系统不同）。
作为集群操作员，你应该采取以下两项额外措施：&lt;/p&gt;</description></item><item><title>标签和选择算符</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/</guid><description>&lt;!--
reviewers:
- mikedanese
title: Labels and Selectors
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
_Labels_ are key/value pairs that are attached to
&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; such as pods.
Labels are intended to be used to specify identifying attributes of objects
that are meaningful and relevant to users, but do not directly imply semantics
to the core system. Labels can be used to organize and to select subsets of
objects. Labels can be attached to objects at creation time and subsequently
added and modified at any time. Each object can have a set of key/value labels
defined. Each Key must be unique for a given object.
--&gt;
&lt;p&gt;&lt;strong&gt;标签（Labels）&lt;/strong&gt; 是附加到 Kubernetes
&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;（比如 Pod）上的键值对。
标签旨在用于指定对用户有意义且相关的对象的标识属性，但不直接对核心系统有语义含义。
标签可以用于组织和选择对象的子集。标签可以在创建时附加到对象，随后可以随时添加和修改。
每个对象都可以定义一组键/值标签。每个键对于给定对象必须是唯一的。&lt;/p&gt;</description></item><item><title>博客指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/guidelines/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/guidelines/</guid><description>&lt;!--
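作为示意（键值仅为示例），可以像下面这样在创建对象时附加标签：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production    # 示例键值对
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```

之后即可通过标签选择算符选取对象子集，例如 `kubectl get pods -l environment=production,app=nginx`。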
title: Blog guidelines
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
These guidelines cover the main Kubernetes blog and the Kubernetes
contributor blog.

All blog content must also adhere to the overall policy in the
[content guide](/docs/contribute/style/content-guide/).
--&gt;
&lt;p&gt;这些指南涵盖了 Kubernetes 主博客和 Kubernetes 贡献者博客。&lt;/p&gt;
&lt;p&gt;所有博客内容还必须遵循&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/"&gt;内容指南&lt;/a&gt;中的总体政策。&lt;/p&gt;
&lt;h1 id="准备开始"&gt;准备开始&lt;/h1&gt;
&lt;!--
Make sure you are familiar with the introduction sections of
[contributing to Kubernetes blogs](/docs/contribute/blog/), not just to learn about
the two official blogs and the differences between them, but also to get an overview
of the process.
--&gt;
&lt;p&gt;确保你熟悉&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/"&gt;为 Kubernetes 博客贡献内容&lt;/a&gt;的介绍部分，
不仅是为了了解两个官方博客及其之间的区别，也是为了对整个过程有一个概览。&lt;/p&gt;</description></item><item><title>存储类</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-classes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-classes/</guid><description>&lt;!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: Storage Classes
api_metadata:
- apiVersion: "storage.k8s.io/v1"
  kind: "StorageClass"
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document describes the concept of a StorageClass in Kubernetes. Familiarity
with [volumes](/docs/concepts/storage/volumes/) and
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
--&gt;
&lt;p&gt;本文描述了 Kubernetes 中 StorageClass 的概念。
建议先熟悉&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/"&gt;卷&lt;/a&gt;和&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes"&gt;持久卷&lt;/a&gt;的概念。&lt;/p&gt;
&lt;!--
A StorageClass provides a way for administrators to describe the _classes_ of
storage they offer. Different classes might map to quality-of-service levels,
or to backup policies, or to arbitrary policies determined by the cluster
administrators. Kubernetes itself is unopinionated about what classes
represent.

The Kubernetes concept of a storage class is similar to “profiles” in some other
storage system designs.
--&gt;
&lt;p&gt;StorageClass 为管理员提供了描述存储&lt;strong&gt;类&lt;/strong&gt;的方法。
不同的类可能会映射到不同的服务质量等级或备份策略，或是由集群管理员制定的任意策略。
Kubernetes 本身并不清楚各种类代表什么。Kubernetes 中存储类的概念类似于某些其他存储系统设计中的“配置文件（Profile）”。&lt;/p&gt;</description></item><item><title>存活、就绪和启动探针</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/liveness-readiness-startup-probes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/liveness-readiness-startup-probes/</guid><description>&lt;!--
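作为示意，一个最小的 StorageClass 清单可能如下（类名与制备器名称均为虚构示例，实际取值由集群管理员和所用存储驱动决定）：

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                         # 类名由管理员自行定义
provisioner: example.com/fast-ssd    # 虚构的制备器（provisioner）名称
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```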
title: Liveness, Readiness, and Startup Probes
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes has various types of probes:

- [Liveness probe](#liveness-probe)
- [Readiness probe](#readiness-probe)
- [Startup probe](#startup-probe)
--&gt;
&lt;p&gt;Kubernetes 提供了多种探针：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#liveness-probe"&gt;存活探针&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#readiness-probe"&gt;就绪探针&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#startup-probe"&gt;启动探针&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
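下面是同时配置这三种探针的一个示意（镜像、路径与端口仅为示例，假设容器在 8080 端口暴露了 `/healthz` 和 `/ready` 端点）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probes-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/e2e-test-images/agnhost:2.40    # 示例镜像
    livenessProbe:                 # 失败时重启容器
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:                # 失败时将 Pod 从 Service 端点中摘除
      httpGet:
        path: /ready
        port: 8080
    startupProbe:                  # 成功之前暂缓其他探针，保护启动慢的应用
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
```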
&lt;!-- body --&gt;
&lt;!--
## Liveness probe

Liveness probes determine when to restart a container. For example, liveness probes could catch a deadlock when an application is running, but unable to make progress.
--&gt;
&lt;h2 id="liveness-probe"&gt;存活探针&lt;/h2&gt;
&lt;p&gt;存活探针决定何时重启容器。
例如，当应用在运行但无法取得进展时，存活探针可以捕获这类死锁。&lt;/p&gt;</description></item><item><title>调试 Init 容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-init-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-init-containers/</guid><description>&lt;!--
reviewers:
- bprashanth
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: Debug Init Containers
content_type: task
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to investigate problems related to the execution of
Init Containers. The example command lines below refer to the Pod as
`&lt;pod-name&gt;` and the Init Containers as `&lt;init-container-1&gt;` and
`&lt;init-container-2&gt;`.
--&gt;
&lt;p&gt;此页显示如何核查与 Init 容器执行相关的问题。
下面的示例命令行将 Pod 称为 &lt;code&gt;&amp;lt;pod-name&amp;gt;&lt;/code&gt;，而 Init 容器称为 &lt;code&gt;&amp;lt;init-container-1&amp;gt;&lt;/code&gt; 和
&lt;code&gt;&amp;lt;init-container-2&amp;gt;&lt;/code&gt;。&lt;/p&gt;</description></item><item><title>管理工作负载</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/management/</guid><description>&lt;!--
title: Managing Workloads
content_type: concept
reviewers:
- janetkuo
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
You've deployed your application and exposed it via a Service. Now what? Kubernetes provides a
number of tools to help you manage your application deployment, including scaling and updating. 
--&gt;
&lt;p&gt;你已经部署了你的应用并且通过 Service 将其暴露出来。现在要做什么？
Kubernetes 提供了一系列的工具帮助你管理应用的部署，包括扩缩和更新。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Organizing resource configurations
--&gt;
&lt;h2 id="组织资源配置"&gt;组织资源配置&lt;/h2&gt;
&lt;!--
Many applications require multiple resources to be created, such as a Deployment along with a Service.
Management of multiple resources can be simplified by grouping them together in the same file
(separated by `---` in YAML). For example: 
--&gt;
&lt;p&gt;许多应用需要创建多个资源，例如 Deployment 和 Service。
将多个资源归入同一个文件（在 YAML 中使用 &lt;code&gt;---&lt;/code&gt; 分隔）可以简化对多个资源的管理。例如：&lt;/p&gt;</description></item><item><title>进程 ID 约束与预留</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/pid-limiting/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/pid-limiting/</guid><description>&lt;!--
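一个示意性的组合清单可能如下（应用名称与镜像仅为示例）：

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
spec:
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```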
reviewers:
- derekwaynecarr
title: Process ID Limits And Reservations
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.20 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes allows you to limit the number of process IDs (PIDs) that a
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; can use.
You can also reserve a number of allocatable PIDs for each &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;
for use by the operating system and daemons (rather than by Pods).
--&gt;
&lt;p&gt;Kubernetes 允许你限制一个 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
中可以使用的进程 ID（PID）数目。
你也可以为每个&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;预留一定数量的可分配的 PID，
供操作系统和守护进程（而非 Pod）使用。&lt;/p&gt;</description></item><item><title>卷属性类</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-attributes-classes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-attributes-classes/</guid><description>&lt;!--
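作为示意（数值仅为示例，应按节点实际负载调整），这两类设置都可以通过 kubelet 配置完成：

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024       # 每个 Pod 可使用的最大 PID 数（示例值）
systemReserved:
  pid: "1000"            # 为操作系统守护进程预留的 PID 数（示例值）
```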
reviewers:
- msau42
- xing-yang
title: Volume Attributes Classes
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： VolumeAttributesClass"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.34 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page assumes that you are familiar with [StorageClasses](/docs/concepts/storage/storage-classes/),
[volumes](/docs/concepts/storage/volumes/) and [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
in Kubernetes.
--&gt;
&lt;p&gt;本页假设你已经熟悉 Kubernetes 中的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-classes/"&gt;StorageClass&lt;/a&gt;、
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/"&gt;Volume&lt;/a&gt; 和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
A VolumeAttributesClass provides a way for administrators to describe the mutable
"classes" of storage they offer. Different classes might map to different quality-of-service levels.
Kubernetes itself is un-opinionated about what these classes represent.

This feature is generally available (GA) as of version 1.34, and users have the option to disable it.
--&gt;
&lt;p&gt;卷属性类（VolumeAttributesClass）为管理员提供了一种描述可变更的存储“类”的方法。
不同的类可以映射到不同的服务质量级别。Kubernetes 本身不关注这些类代表什么。&lt;/p&gt;</description></item><item><title>排查 CNI 插件相关的错误</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/</guid><description>&lt;!--
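作为示意，一个 VolumeAttributesClass 清单大致如下（驱动名称与参数均为示例，具体可用的参数由各 CSI 驱动自行定义）：

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: silver
driverName: pd.csi.storage.gke.io    # CSI 驱动名称，此处仅为示例
parameters:
  provisioned-iops: "3000"           # 参数因驱动而异（示例值）
```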
title: Troubleshooting CNI plugin-related errors
content_type: task
reviewers:
- mikebrow
- divya-mohan0209
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
To avoid CNI plugin-related errors, verify that you are using or upgrading to a
container runtime that has been tested to work correctly with your version of
Kubernetes.
--&gt;
&lt;p&gt;为了避免 CNI 插件相关的错误，请确认你正在使用或升级到的容器运行时已经过测试，
能够与你的 Kubernetes 版本正常协同工作。&lt;/p&gt;
&lt;!--
## About the "Incompatible CNI versions" and "Failed to destroy network for sandbox" errors
--&gt;
&lt;h2 id="about-the-incompatible-cni-versions-and-failed-to-destroy-network-for-sandbox-errors"&gt;关于 &amp;quot;Incompatible CNI versions&amp;quot; 和 &amp;quot;Failed to destroy network for sandbox&amp;quot; 错误&lt;/h2&gt;
&lt;!--
Service issues exist for pod CNI network setup and tear down in containerd
v1.6.0-v1.6.3 when the CNI plugins have not been upgraded and/or the CNI config
version is not declared in the CNI config files. The containerd team reports,
"these issues are resolved in containerd v1.6.4."

With containerd v1.6.0-v1.6.3, if you do not upgrade the CNI plugins and/or
declare the CNI config version, you might encounter the following "Incompatible
CNI versions" or "Failed to destroy network for sandbox" error conditions.
--&gt;
&lt;p&gt;在 containerd v1.6.0 到 v1.6.3 中，当配置或清除 Pod CNI 网络时，如果 CNI 插件没有升级和/或
CNI 配置文件中没有声明 CNI 配置版本，会出现服务问题。containerd 团队报告说：
“这些问题在 containerd v1.6.4 中得到了解决。”&lt;/p&gt;</description></item><item><title>强制实施 Pod 安全性标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/enforcing-pod-security-standards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/enforcing-pod-security-standards/</guid><description>&lt;!--
reviewers:
- tallclair
- liggitt
title: Enforcing Pod Security Standards
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an overview of best practices when it comes to enforcing
[Pod Security Standards](/docs/concepts/security/pod-security-standards).
--&gt;
&lt;p&gt;本页提供实施 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards"&gt;Pod 安全标准（Pod Security Standards）&lt;/a&gt;
时的一些最佳实践。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Using the built-in Pod Security Admission Controller
--&gt;
&lt;h2 id="使用内置的-pod-安全性准入控制器"&gt;使用内置的 Pod 安全性准入控制器&lt;/h2&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
The [Pod Security Admission Controller](/docs/reference/access-authn-authz/admission-controllers/#podsecurity)
intends to replace the deprecated PodSecurityPolicies. 
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/#podsecurity"&gt;Pod 安全性准入控制器&lt;/a&gt;
旨在替代已被废弃的 PodSecurityPolicies。&lt;/p&gt;</description></item><item><title>容器生命周期回调</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/container-lifecycle-hooks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/container-lifecycle-hooks/</guid><description>&lt;!--
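Pod 安全性准入通过名字空间标签来启用，例如（名字空间名称为示例；`enforce`、`warn` 等模式及 `baseline`、`restricted` 等级别见 Pod 安全标准文档）：

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace    # 示例名字空间
  labels:
    pod-security.kubernetes.io/enforce: baseline    # 强制实施 baseline 级别
    pod-security.kubernetes.io/warn: restricted     # 对不满足 restricted 级别的 Pod 发出警告
```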
reviewers:
- mikedanese
- thockin
title: Container Lifecycle Hooks
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes how kubelet managed Containers can use the Container lifecycle hook framework
to run code triggered by events during their management lifecycle.
--&gt;
&lt;p&gt;本页描述 kubelet 所管理的容器如何使用容器生命周期回调框架，
在其管理生命周期中由事件触发，运行相应的处理代码。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Overview

Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular,
Kubernetes provides Containers with lifecycle hooks.
The hooks enable Containers to be aware of events in their management lifecycle
and run code implemented in a handler when the corresponding lifecycle hook is executed.
--&gt;
&lt;h2 id="overview"&gt;概述&lt;/h2&gt;
&lt;p&gt;类似于许多具有组件生命周期回调的编程语言框架（例如 Angular），Kubernetes 为容器提供了生命周期回调。
回调使容器能够了解其管理生命周期中的事件，并在执行相应的生命周期回调时运行在处理程序中实现的代码。&lt;/p&gt;</description></item><item><title>升级 Linux 节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/</guid><description>&lt;!--
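作为示意（Pod 名称、镜像与命令仅为示例），`postStart` 与 `preStop` 回调可以这样配置：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    lifecycle:
      postStart:                 # 容器创建后立即执行
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:                   # 容器因 API 请求等原因终止之前执行
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]    # 给应用留出优雅退出的时间
```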
title: Upgrading Linux nodes
content_type: task
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to upgrade Linux worker nodes created with kubeadm.
--&gt;
&lt;p&gt;本页讲述了如何升级用 kubeadm 创建的 Linux 工作节点。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have shell access to all the nodes, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial 
on a cluster with at least two nodes that are not acting as control plane hosts.
--&gt;
&lt;p&gt;你需要能够通过 Shell 访问所有节点，并且必须配置 kubectl 命令行工具使其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。&lt;/p&gt;</description></item><item><title>使用 HTTP 代理访问 Kubernetes API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/http-proxy-access-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/http-proxy-access-api/</guid><description>&lt;!--
---
title: Use an HTTP Proxy to Access the Kubernetes API
content_type: task
weight: 40
---
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use an HTTP proxy to access the Kubernetes API.
--&gt;
&lt;p&gt;本文说明如何使用 HTTP 代理访问 Kubernetes API。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 kube-router 提供 NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/</guid><description>&lt;!--
reviewers:
- murali-reddy
title: Use Kube-router for NetworkPolicy
content_type: task
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use [Kube-router](https://github.com/cloudnativelabs/kube-router) for NetworkPolicy.
--&gt;
&lt;p&gt;本页展示如何使用 &lt;a href="https://github.com/cloudnativelabs/kube-router"&gt;Kube-router&lt;/a&gt; 提供 NetworkPolicy。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster running. If you do not already have a cluster, you can create one by using any of the cluster installers like Kops, Bootkube, Kubeadm etc.
--&gt;
&lt;p&gt;你需要拥有一个运行中的 Kubernetes 集群。如果你还没有集群，
可以使用 Kops、Bootkube、Kubeadm 等任一集群安装工具来创建一个。&lt;/p&gt;
reviewers:
- sig-cluster-lifecycle
title: Customizing components with the kubeadm API
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page covers how to customize the components that kubeadm deploys. For control plane components
you can use flags in the `ClusterConfiguration` structure or patches per-node. For the kubelet
and kube-proxy you can use `KubeletConfiguration` and `KubeProxyConfiguration`, accordingly.

All of these options are possible via the kubeadm configuration API.
For more details on each field in the configuration you can navigate to our
[API reference pages](/docs/reference/config-api/kubeadm-config.v1beta4/).
--&gt;
&lt;p&gt;本页面介绍了如何自定义 kubeadm 部署的组件。
你可以使用 &lt;code&gt;ClusterConfiguration&lt;/code&gt; 结构中定义的参数，或者在每个节点上应用补丁来定制控制平面组件。
你可以使用 &lt;code&gt;KubeletConfiguration&lt;/code&gt; 和 &lt;code&gt;KubeProxyConfiguration&lt;/code&gt; 结构分别定制 kubelet 和 kube-proxy 组件。&lt;/p&gt;</description></item><item><title>使用 seccomp 限制容器的系统调用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/seccomp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/security/seccomp/</guid><description>&lt;!-- 
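例如，通过 `ClusterConfiguration` 中的 `extraArgs` 可以向控制平面组件传递额外的命令行参数，下面是一个示意（参数取值仅为示例，v1beta4 中 `extraArgs` 采用 name/value 列表形式）：

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:                       # 向 kube-apiserver 传递额外参数
  - name: enable-admission-plugins
    value: NodeRestriction
```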
reviewers:
- hasheddan
- pjbgf
- saschagrunert
title: Restrict a Container's Syscalls with seccomp
content_type: tutorial
weight: 40
min-kubernetes-server-version: v1.22
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- 
Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a
&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt; to your Pods and containers.

Identifying the privileges required for your workloads can be difficult. In this
tutorial, you will go through how to load seccomp profiles into a local
Kubernetes cluster, how to apply them to a Pod, and how you can begin to craft
profiles that give only the necessary privileges to your container processes.
--&gt;
&lt;p&gt;Seccomp 代表安全计算（Secure Computing）模式，自 2.6.12 版本以来，一直是 Linux 内核的一个特性。
它可以用来沙箱化进程的权限，限制进程从用户态到内核态的调用。
Kubernetes 能使你自动将加载到&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;上的
seccomp 配置文件应用到你的 Pod 和容器。&lt;/p&gt;</description></item><item><title>使用边车（Sidecar）容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/pod-sidecar-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/configuration/pod-sidecar-containers/</guid><description>&lt;!--
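作为示意（Pod 名称、镜像与配置文件路径仅为示例），可以通过 `securityContext` 引用节点上已加载的 seccomp 配置文件：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: audit-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json    # 相对于 kubelet seccomp 配置目录的路径（示例）
  containers:
  - name: test
    image: registry.k8s.io/pause:3.6
```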
title: Adopting Sidecar Containers
content_type: tutorial
weight: 40
min-kubernetes-server-version: 1.29
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This section is relevant for people adopting a new built-in
[sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/) feature for their workloads.

Sidecar container is not a new concept as posted in the
[blog post](/blog/2015/06/the-distributed-system-toolkit-patterns/).
Kubernetes allows running multiple containers in a Pod to implement this concept.
However, running a sidecar container as a regular container
has a lot of limitations being fixed with the new built-in sidecar containers support.
--&gt;
&lt;p&gt;本文适用于为其工作负载采用新的内置&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/sidecar-containers/"&gt;边车容器&lt;/a&gt;特性的用户。&lt;/p&gt;
&lt;p&gt;正如&lt;a href="https://andygol-k8s.netlify.app/blog/2015/06/the-distributed-system-toolkit-patterns/"&gt;博客文章&lt;/a&gt;中所述，边车容器并不是一个新概念。
Kubernetes 允许在一个 Pod 中运行多个容器来实现这一理念。
然而，以常规容器的方式运行边车容器存在许多限制，这些限制正在通过新的内置边车容器支持得到解决。&lt;/p&gt;</description></item><item><title>使用端口转发来访问集群中的应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</guid><description>&lt;!--
title: Use Port Forwarding to Access Applications in a Cluster
content_type: task
weight: 40
min-kubernetes-server-version: v1.10
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use `kubectl port-forward` to connect to a MongoDB
server running in a Kubernetes cluster. This type of connection can be useful
for database debugging.
--&gt;
&lt;p&gt;本文展示如何使用 &lt;code&gt;kubectl port-forward&lt;/code&gt; 连接到在 Kubernetes 集群中运行的 MongoDB 服务。
这种类型的连接对数据库调试很有用。&lt;/p&gt;
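&lt;p&gt;下面是端口转发用法的一个简单示意（其中的 Service 名称 &lt;code&gt;mongo&lt;/code&gt; 与本地端口 28015 仅为示例假设）：&lt;/p&gt;

```
# 将本地端口 28015 转发到集群中 mongo Service 的 27017 端口
kubectl port-forward service/mongo 28015:27017

# 在另一个终端里，通过转发的本地端口连接数据库
mongosh --port 28015
```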
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用配置文件对 Kubernetes 对象进行命令式管理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-config/</guid><description>&lt;!--
title: Imperative Management of Kubernetes Objects Using Configuration Files
content_type: task
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes objects can be created, updated, and deleted by using the `kubectl`
command-line tool along with an object configuration file written in YAML or JSON.
This document explains how to define and manage objects using configuration files.
--&gt;
&lt;p&gt;可以使用 &lt;code&gt;kubectl&lt;/code&gt; 命令行工具以及用 YAML 或 JSON 编写的对象配置文件来创建、更新和删除 Kubernetes 对象。
本文档说明了如何使用配置文件定义和管理对象。&lt;/p&gt;
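&lt;p&gt;命令式对象配置的典型工作流程可概括为下面几条命令（文件名 &lt;code&gt;nginx.yaml&lt;/code&gt; 仅为示例假设）：&lt;/p&gt;

```
# 根据配置文件创建对象
kubectl create -f nginx.yaml

# 用配置文件中的定义替换集群中的在线对象
kubectl replace -f nginx.yaml

# 删除配置文件所描述的对象
kubectl delete -f nginx.yaml
```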
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Install [`kubectl`](/docs/tasks/tools/).
--&gt;
&lt;p&gt;安装 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>使用源 IP</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/source-ip/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/source-ip/</guid><description>&lt;!-- 
title: Using Source IP
content_type: tutorial
min-kubernetes-server-version: v1.5
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
Applications running in a Kubernetes cluster find and communicate with each
other, and the outside world, through the Service abstraction. This document
explains what happens to the source IP of packets sent to different types
of Services, and how you can toggle this behavior according to your needs.
--&gt;
&lt;p&gt;运行在 Kubernetes 集群中的应用程序通过 Service 抽象发现彼此并相互通信，它们也用 Service 与外部世界通信。
本文解释了发送到不同类型 Service 的数据包的源 IP 会发生什么情况，以及如何根据需要切换此行为。&lt;/p&gt;</description></item><item><title>通过文件将 Pod 信息呈现给容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</guid><description>&lt;!--
title: Expose Pod Information to Containers Through Files
content_type: task
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how a Pod can use a
[`downwardAPI` volume](/docs/concepts/storage/volumes/#downwardapi),
to expose information about itself to containers running in the Pod.
A `downwardAPI` volume can expose Pod fields and container fields.
--&gt;
&lt;p&gt;此页面描述 Pod 如何使用
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#downwardapi"&gt;&lt;code&gt;downwardAPI&lt;/code&gt; 卷&lt;/a&gt;
把自己的信息呈现给 Pod 中运行的容器。
&lt;code&gt;downwardAPI&lt;/code&gt; 卷可以呈现 Pod 和容器的字段。&lt;/p&gt;
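&lt;p&gt;下面是一个最小的清单示意，展示如何通过 &lt;code&gt;downwardAPI&lt;/code&gt; 卷把 Pod 标签以文件形式呈现给容器（Pod 名称、标签与挂载路径均为示例假设）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
  labels:
    zone: us-east-1
spec:
  containers:
    - name: client-container
      image: registry.k8s.io/busybox
      command: ["sh", "-c", "cat /etc/podinfo/labels; sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          # 将 Pod 的 metadata.labels 写入卷中的 labels 文件
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
```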
&lt;!--
In Kubernetes, there are two ways to expose Pod and container fields to a running container:

* [Environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)
* Volume files, as explained in this task

Together, these two ways of exposing Pod and container fields are called the
_downward API_.
--&gt;
&lt;p&gt;在 Kubernetes 中，有两种方式可以将 Pod 和容器字段呈现给运行中的容器：&lt;/p&gt;</description></item><item><title>为 Pod 和容器管理资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/manage-resources-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/manage-resources-containers/</guid><description>&lt;!--
title: Resource Management for Pods and Containers
content_type: concept
weight: 40
feature:
 title: Automatic bin packing
 description: &gt;
 Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.
 Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
When you specify a &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;, you can optionally specify how much of each resource a 
&lt;a class='glossary-tooltip' title='容器是可移植、可执行的轻量级的镜像，镜像中包含软件及其相关依赖。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/' target='_blank' aria-label='container'&gt;container&lt;/a&gt; needs. The most common resources to specify are CPU and memory 
(RAM); there are others.
--&gt;
&lt;p&gt;当你定义 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
时可以选择性地为每个&lt;a class='glossary-tooltip' title='容器是可移植、可执行的轻量级的镜像，镜像中包含软件及其相关依赖。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/' target='_blank' aria-label='容器'&gt;容器&lt;/a&gt;设定所需要的资源数量。
最常见的可设定资源是 CPU 和内存（RAM）大小；此外还有其他类型的资源。&lt;/p&gt;</description></item><item><title>为 Windows 的 Pod 和容器配置 RunAsUserName</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-runasusername/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-runasusername/</guid><description>&lt;!--
title: Configure RunAsUserName for Windows pods and containers
content_type: task
weight: 40
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page shows how to use the `runAsUserName` setting for Pods and containers that will run on Windows nodes. This is roughly equivalent of the Linux-specific `runAsUser` setting, allowing you to run applications in a container as a different username than the default.
--&gt;
&lt;p&gt;本页展示如何为将在 Windows 节点上运行的 Pod 和容器配置 &lt;code&gt;runAsUserName&lt;/code&gt;。此设置
大致相当于 Linux 上的 &lt;code&gt;runAsUser&lt;/code&gt;，允许在容器中以与默认值不同的用户名运行应用。&lt;/p&gt;</description></item><item><title>为命名空间配置 CPU 最小和最大约束</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/</guid><description>&lt;!--
title: Configure Minimum and Maximum CPU Constraints for a Namespace
content_type: task
weight: 40
description: &gt;-
 Define a range of valid CPU resource limits for a namespace, so that every new Pod
 in that namespace falls within the range you configure.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to set minimum and maximum values for the CPU resources used by containers
and Pods in a &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;. You specify minimum
and maximum CPU values in a
[LimitRange](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created
in the namespace.
--&gt;
&lt;p&gt;本页介绍如何为&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;中的容器和 Pod
设置其所使用的 CPU 资源的最小和最大值。你可以通过 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/limit-range-v1/"&gt;LimitRange&lt;/a&gt;
对象声明 CPU 的最小和最大值。
如果 Pod 不能满足 LimitRange 的限制，就无法在该命名空间中被创建。&lt;/p&gt;</description></item><item><title>文档样式指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/style-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/style-guide/</guid><description>&lt;!--
title: Documentation Style Guide
linktitle: Style guide
content_type: concept
weight: 40
math: true
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page gives writing style guidelines for the Kubernetes documentation.
These are guidelines, not rules. Use your best judgment, and feel free to
propose changes to this document in a pull request.

For additional information on creating new content for the Kubernetes
documentation, read the [Documentation Content Guide](/docs/contribute/style/content-guide/).

Changes to the style guide are made by SIG Docs as a group. To propose a change
or addition, [add it to the agenda](https://bit.ly/sig-docs-agenda) for an upcoming
SIG Docs meeting, and attend the meeting to participate in the discussion.
--&gt;
&lt;p&gt;本页讨论 Kubernetes 文档的样式指南。
这些仅仅是指南而不是规则。
你可以自行决定，且欢迎使用 PR 来为此文档提供修改意见。&lt;/p&gt;</description></item><item><title>由 kubelet 填充的节点标签</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/node-labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/node-labels/</guid><description>&lt;!--
content_type: "reference"
title: Node Labels Populated By The Kubelet
weight: 40
--&gt;
&lt;!--
Kubernetes &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='nodes'&gt;nodes&lt;/a&gt; come pre-populated
with a standard set of &lt;a class='glossary-tooltip' title='用来为对象设置可标识的属性标记；这些标记对用户而言是有意义且重要的。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/' target='_blank' aria-label='labels'&gt;labels&lt;/a&gt;.

You can also set your own labels on nodes, either through the kubelet configuration or
using the Kubernetes API.
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;预先填充了一组标准
&lt;a class='glossary-tooltip' title='用来为对象设置可标识的属性标记；这些标记对用户而言是有意义且重要的。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/' target='_blank' aria-label='标签'&gt;标签&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;你还可以通过 kubelet 配置或使用 Kubernetes API 在节点上设置自己的标签。&lt;/p&gt;
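&lt;p&gt;例如，可以用下面的命令查看节点上已填充的标签，或自行添加标签（节点名 &lt;code&gt;my-node&lt;/code&gt; 与标签键值仅为示例假设）：&lt;/p&gt;

```
# 查看某个节点上的全部标签
kubectl get node my-node --show-labels

# 自行为节点设置标签
kubectl label node my-node example.com/my-label=my-value
```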
&lt;!--
## Preset labels

The preset labels that Kubernetes sets on nodes are:
--&gt;
&lt;h2 id="preset-labels"&gt;预设标签&lt;/h2&gt;
&lt;p&gt;Kubernetes 在节点上设置的预设标签有：&lt;/p&gt;</description></item><item><title>云控制器管理器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/</guid><description>&lt;!--
title: Cloud Controller Manager
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Cloud infrastructure technologies let you run Kubernetes on public, private, and hybrid clouds.
Kubernetes believes in automated, API-driven infrastructure without tight coupling between
components.
--&gt;
&lt;p&gt;使用云基础设施技术，你可以在公有云、私有云或者混合云环境中运行 Kubernetes。
Kubernetes 的信条是基于自动化的、API 驱动的基础设施，同时避免组件间紧密耦合。&lt;/p&gt;
&lt;!--
title: Cloud Controller Manager
id: cloud-controller-manager
full_link: /docs/concepts/architecture/cloud-controller/
short_description: &gt;
 Control plane component that integrates Kubernetes with third-party cloud providers.
aka: 
tags:
- architecture
- operation
--&gt;
&lt;!--
 A Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; component
that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
--&gt;
&lt;p&gt;组件 cloud-controller-manager 是指云控制器管理器，一个 Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;组件，
嵌入了特定于云平台的控制逻辑。
云控制器管理器（Cloud Controller Manager）允许将你的集群连接到云提供商的 API 之上，
并将与该云平台交互的组件同与你的集群交互的组件分离开来。&lt;/p&gt;</description></item><item><title>运行 ZooKeeper，一个分布式协调系统</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/zookeeper/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/stateful-application/zookeeper/</guid><description>&lt;!--
reviewers:
- bprashanth
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: Running ZooKeeper, A Distributed System Coordinator
content_type: tutorial
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This tutorial demonstrates running [Apache Zookeeper](https://zookeeper.apache.org) on
Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset/),
[PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget),
and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
--&gt;
&lt;p&gt;本教程展示了在 Kubernetes 上使用
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt;、
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget"&gt;PodDisruptionBudget&lt;/a&gt; 和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity"&gt;PodAntiAffinity&lt;/a&gt;
特性运行 &lt;a href="https://zookeeper.apache.org"&gt;Apache ZooKeeper&lt;/a&gt;。&lt;/p&gt;
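&lt;p&gt;作为示意，下面的片段展示了教程中这类 StatefulSet 如何借助 &lt;code&gt;podAntiAffinity&lt;/code&gt; 把各个 ZooKeeper Pod 分散到不同节点上（标签 &lt;code&gt;app: zk&lt;/code&gt; 为示例假设）：&lt;/p&gt;

```yaml
# StatefulSet 的 Pod 模板（spec.template.spec）中的反亲和性片段
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - zk
        # 要求带有 app=zk 标签的 Pod 不能调度到同一节点
        topologyKey: "kubernetes.io/hostname"
```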
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Before starting this tutorial, you should be familiar with the following
Kubernetes concepts.
--&gt;
&lt;p&gt;在开始本教程前，你应该熟悉以下 Kubernetes 概念。&lt;/p&gt;</description></item><item><title>Kubernetes 中的准入控制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/</guid><description>&lt;!--
reviewers:
- lavalamp
- davidopp
- derekwaynecarr
- erictune
- janetkuo
- thockin
title: Admission Control in Kubernetes
linkTitle: Admission Control
content_type: concept
weight: 40
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an overview of _admission controllers_.
--&gt;
&lt;p&gt;此页面提供&lt;strong&gt;准入控制器（Admission Controller）&lt;/strong&gt;的概述。&lt;/p&gt;
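&lt;p&gt;准入控制器通过 API 服务器的命令行标志来启用或禁用。下面是一个用法示意（所列插件名称仅为示例）：&lt;/p&gt;

```
# 查看当前二进制文件默认启用的准入插件
kube-apiserver -h | grep enable-admission-plugins

# 显式启用若干准入插件
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...

# 禁用某个默认启用的准入插件
kube-apiserver --disable-admission-plugins=PodNodeSelector ...
```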
&lt;!--
An admission controller is a piece of code that intercepts requests to the
Kubernetes API server prior to persistence of the resource, but after the request
is authenticated and authorized.

Several important features of Kubernetes require an admission controller to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission controllers is an incomplete server that will not
support all the features you expect.
--&gt;
&lt;p&gt;准入控制器是一段代码，它会在请求通过认证和鉴权之后、对象被持久化之前拦截到达 API 服务器的请求。&lt;/p&gt;</description></item><item><title>升级 Windows 节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/</guid><description>&lt;!--
title: Upgrading Windows nodes
min-kubernetes-server-version: 1.17
content_type: task
weight: 41
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page explains how to upgrade a Windows node created with kubeadm.
--&gt;
&lt;p&gt;本页解释如何升级用 kubeadm 创建的 Windows 节点。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have shell access to all the nodes, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial 
on a cluster with at least two nodes that are not acting as control plane hosts.
--&gt;
&lt;p&gt;你必须有 Shell 能访问所有节点，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。&lt;/p&gt;</description></item><item><title>kubelet 所使用的本地文件和路径</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kubelet-files/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kubelet-files/</guid><description>&lt;!--
content_type: "reference"
title: Local Files And Paths Used By The Kubelet
weight: 42
--&gt;
&lt;!--
The &lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; is mostly a stateless
process running on a Kubernetes &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;.
This document outlines files that kubelet reads and writes.
--&gt;
&lt;p&gt;&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; 是一个运行在 Kubernetes
&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;上的、基本无状态的进程。本文简要介绍了 kubelet 读写的文件。&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
This document is for informational purpose and not describing any guaranteed behaviors or APIs.
It lists resources used by the kubelet, which is an implementation detail and a subject to change at any release.
--&gt;
&lt;p&gt;本文仅供参考，而非描述保证会发生的行为或 API。
本文档列举 kubelet 所使用的资源。所给的信息属于实现细节，可能会在后续版本中发生变更。&lt;/p&gt;</description></item><item><title>使用 SOCKS5 代理访问 Kubernetes API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/socks5-proxy-access-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/socks5-proxy-access-api/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page shows how to use a SOCKS5 proxy to access the API of a remote Kubernetes cluster.
This is useful when the cluster you want to access does not expose its API directly on the public internet.
--&gt;
&lt;p&gt;本文展示了如何使用 SOCKS5 代理访问远程 Kubernetes 集群的 API。
当你要访问的集群不直接在公共 Internet 上公开其 API 时，这很有用。&lt;/p&gt;
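&lt;p&gt;一个常见做法是通过 SSH 建立 SOCKS5 代理，再让 kubectl 经由该代理访问 API（远程主机名与端口 1080 均为示例假设）：&lt;/p&gt;

```
# 在能访问集群的远程主机上建立 SOCKS5 代理，监听本地 1080 端口
ssh -D 1080 -q -N username@kubernetes-remote-server.example

# 在另一个终端里，让 kubectl 通过该代理与集群通信
export HTTPS_PROXY=socks5://localhost:1080
kubectl get pods
```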
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>动态准入控制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/</guid><description>&lt;!--
reviewers:
- smarterclayton
- lavalamp
- caesarxuchao
- deads2k
- liggitt
- jpbetz
title: Dynamic Admission Control
content_type: concept
weight: 45
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In addition to [compiled-in admission plugins](/docs/reference/access-authn-authz/admission-controllers/),
admission plugins can be developed as extensions and run as webhooks configured at runtime.
This page describes how to build, configure, use, and monitor admission webhooks.
--&gt;
&lt;p&gt;除了&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/"&gt;内置的准入插件&lt;/a&gt;，
准入插件可以作为扩展独立开发，并以运行时所配置的 Webhook 的形式运行。
此页面描述了如何构建、配置、使用和监视准入 Webhook。&lt;/p&gt;
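&lt;p&gt;下面是一个注册验证性质准入 Webhook 的配置示意（名称、名字空间与服务名均为示例假设，&lt;code&gt;caBundle&lt;/code&gt; 处应填入 Base64 编码的 CA 证书包）：&lt;/p&gt;

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "pod-policy.example.com"
webhooks:
  - name: "pod-policy.example.com"
    rules:
      # 仅拦截 Pod 的 CREATE 请求
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
        scope: "Namespaced"
    clientConfig:
      service:
        namespace: "example-namespace"
        name: "example-service"
      caBundle: "CA_BUNDLE"  # 占位符：Base64 编码的 CA 证书包
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5
```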
&lt;!-- body --&gt;
&lt;!--
## What are admission webhooks?
--&gt;
&lt;h2 id="what-are-admission-webhooks"&gt;什么是准入 Webhook？&lt;/h2&gt;
&lt;!--
Admission webhooks are HTTP callbacks that receive admission requests and do
something with them. You can define two types of admission webhooks,
[validating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook)
and
[mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook).
Mutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults.
--&gt;
&lt;p&gt;准入 Webhook 是一种用于接收准入请求并对其进行处理的 HTTP 回调机制。
可以定义两种类型的准入 Webhook，
即&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook"&gt;验证性质的准入 Webhook&lt;/a&gt;
和&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook"&gt;变更性质的准入 Webhook&lt;/a&gt;。
变更性质的准入 Webhook 会先被调用。它们可以修改发送到 API
服务器的对象以执行自定义的设置默认值操作。&lt;/p&gt;</description></item><item><title>名字空间</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/</guid><description>&lt;!--
reviewers:
- derekwaynecarr
- mikedanese
- thockin
title: Namespaces
api_metadata:
- apiVersion: "v1"
 kind: "Namespace"
content_type: concept
weight: 45
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, _namespaces_ provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; _(e.g. Deployments, Services, etc)_ and not for cluster-wide objects _(e.g. StorageClass, Nodes, PersistentVolumes, etc.)_.
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;strong&gt;名字空间（Namespace）&lt;/strong&gt; 提供一种机制，将同一集群中的资源划分为相互隔离的组。
同一名字空间内的资源名称要唯一，但跨名字空间时没有这个要求。
名字空间作用域仅针对带有名字空间的&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;
（例如 Deployment、Service 等），这种作用域对集群范围的对象
（例如 StorageClass、Node、PersistentVolume 等）不适用。&lt;/p&gt;</description></item><item><title>已弃用 API 的迁移指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/deprecation-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/deprecation-guide/</guid><description>&lt;!--
reviewers:
- liggitt
- lavalamp
- thockin
- smarterclayton
title: "Deprecated API Migration Guide"
weight: 45
content_type: reference
--&gt;
&lt;!-- overview --&gt;
&lt;!--
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old API is deprecated and eventually removed.
This page contains information you need to know when migrating from
deprecated API versions to newer and more stable API versions.
--&gt;
&lt;p&gt;随着 Kubernetes API 的演化，API 会周期性地被重组或升级。
当 API 演化时，老的 API 会被弃用并被最终删除。
本页面包含你在将已弃用 API 版本迁移到新的更稳定的 API 版本时需要了解的知识。&lt;/p&gt;</description></item><item><title>Adform Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/adform/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/adform/</guid><description>&lt;div class="banner1 desktop" style="background-image: url('/images/case-studies/adform/banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/adform_logo.png" style="width:15%;margin-bottom:0%" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Improving Performance and Morale with Cloud Native

&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;AdForm&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Copenhagen, Denmark&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Adtech&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 &lt;a href="https://site.adform.com/"&gt;Adform’s&lt;/a&gt; mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt;-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company’s growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT System Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure."


&lt;br&gt;

 &lt;h2&gt;Solution&lt;/h2&gt;
 The team, which had already been using &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; for monitoring, embraced &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."


&lt;/div&gt;

&lt;div class="col2"&gt;

&lt;h2&gt;Impact&lt;/h2&gt;
 "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; dashboards provides great insight on your systems."


&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
"Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier."&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"&gt;&lt;br&gt;&lt;br&gt;— Edgaras Apšega, IT Systems Engineer, Adform&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;


&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
&lt;h2&gt;Adform made &lt;a href="https://www.wsj.com/articles/fake-ad-operation-used-to-steal-from-publishers-is-uncovered-1511290981"&gt;headlines&lt;/a&gt; last year when it detected the HyphBot ad fraud network that was costing some businesses hundreds of thousands of dollars a day.&lt;/h2&gt;With its mission to provide a secure and transparent full stack of advertising technology to enable an open internet, Adform published a &lt;a href="https://site.adform.com/media/85132/hyphbot_whitepaper_.pdf"&gt;white paper&lt;/a&gt; revealing what it did—and others could too—to limit customers’ exposure to the scam. &lt;br&gt;&lt;br&gt;
In that same spirit, Adform is sharing its cloud native journey. "When you see that everyone shares their best practices, it inspires you to contribute back to the project," says IT Systems Engineer Edgaras Apšega.&lt;br&gt;&lt;br&gt;
The company has a large infrastructure: &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt;-based private clouds running on 1,100 physical servers in their own seven data centers around the world, three of which were opened in the past year. With the company’s growth, the infrastructure team felt that "our private cloud was not really flexible enough," says Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software really takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure."


&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3" style="background-image: url('/images/case-studies/adform/banner3.jpg')"&gt;
 &lt;div class="banner3text"&gt;
 "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral. And we can see that a community really gathers around it. Everyone shares their experiences, their knowledge, and the fact that it’s open source, you can contribute."&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"&gt;&lt;br&gt;&lt;br&gt;— Edgaras Apšega, IT Systems Engineer, Adform&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;

The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."&lt;br&gt;&lt;br&gt;
A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.&lt;br&gt;&lt;br&gt;
Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they’re still doing it."
&lt;br&gt;&lt;br&gt;
The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."


&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4" style="background-image: url('/images/case-studies/adform/banner4.jpg')"&gt;
 &lt;div class="banner4text"&gt;
"Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore." &lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"&gt;&lt;br&gt;&lt;br&gt;— Andrius Cibulskis, IT Systems Engineer, Adform&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
This big push has been driven by the real impact that these new practices have had. "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes." The release process went from several hours to several minutes. Autoscaling is at least six times faster than the semi-manual VM bootstrapping and application deployment required before. &lt;br&gt;&lt;br&gt;
The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching two to three times more efficiency over virtual machines. &lt;br&gt;&lt;br&gt;
Prometheus has also had a positive impact: "It provides high availability for metrics and alerting," says Apšega. "We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on our systems."



&lt;/div&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "I think that our company just started our cloud native journey. It seems like a huge road ahead, but we’re really happy that we joined it." &lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"&gt;&lt;br&gt;&lt;br&gt;— Edgaras Apšega, IT Systems Engineer, Adform&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;div class="fullcol"&gt;
All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to re-start some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they’re happy because they can easily inspect the containers."&lt;br&gt;&lt;br&gt;
The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it’s cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we’re interested in is the &lt;a href="https://github.com/virtual-kubelet/virtual-kubelet"&gt;Virtual Kubelet&lt;/a&gt; that lets you spin up the working nodes on different clouds to do some computing."
&lt;br&gt;&lt;br&gt;
Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we’re really happy that we joined it."



&lt;/div&gt;

&lt;/section&gt;</description></item><item><title>案例研究：Ygrene</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/ygrene/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/ygrene/</guid><description>&lt;!-- 
title: Ygrene Case Study
linkTitle: Ygrene
case_study_styles: true
cid: caseStudies
logo: ygrene_featured_logo.png
featured: true
weight: 48
quote: &gt;
 We had to change some practices and code, and the way things were built, but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company.

new_case_study_styles: true
heading_background: /images/case-studies/ygrene/banner1.jpg
heading_title_logo: /images/ygrene_logo.png
subheading: &gt;
 Ygrene: Using Cloud Native to Bring Security and Scalability to the Finance Industry
case_study_details:
 - Company: Ygrene
 - Location: Petaluma, Calif.
 - Industry: Clean energy financing 
--&gt;

&lt;!--
&lt;h2&gt;Challenges&lt;/h2&gt;
--&gt;
&lt;h2&gt;挑战&lt;/h2&gt;

&lt;!-- 
&lt;p&gt;A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldn't require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely.&lt;/p&gt;</description></item><item><title>SlingTV Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/slingtv/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/slingtv/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Launched by DISH Network in 2015, Sling TV experienced great customer growth from the beginning. After just a year, "we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future," says Brad Linder, Sling TV's Cloud Native &amp; Big Data Evangelist. The company has particular challenges: "We take live TV and distribute it over the internet out to a user's device that we do not control," says Linder. "In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer's service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and good customer experience at web scale."&lt;/p&gt;</description></item><item><title>CRI Pod 和容器指标</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/cri-pod-container-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/cri-pod-container-metrics/</guid><description>&lt;!--
title: CRI Pod &amp; Container Metrics
content_type: reference
weight: 50
description: &gt;-
 Collection of Pod &amp; Container metrics via the CRI.
--&gt;
&lt;!-- overview --&gt;

 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) collects pod and
container metrics via [cAdvisor](https://github.com/google/cadvisor). As an alpha feature,
Kubernetes lets you configure the collection of pod and container
metrics via the &lt;a class='glossary-tooltip' title='在 kubelet 和本地容器运行时之间通讯的协议' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cri' target='_blank' aria-label='Container Runtime Interface'&gt;Container Runtime Interface&lt;/a&gt; (CRI). You
must enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and
use a compatible CRI implementation (containerd &gt;= 1.6.0, CRI-O &gt;= 1.23.0) to
use the CRI based collection mechanism.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt; 通过
&lt;a href="https://github.com/google/cadvisor"&gt;cAdvisor&lt;/a&gt; 收集 Pod 和容器指标。作为一个 Alpha 特性，
Kubernetes 允许你通过&lt;a class='glossary-tooltip' title='在 kubelet 和本地容器运行时之间通讯的协议' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cri' target='_blank' aria-label='容器运行时接口'&gt;容器运行时接口&lt;/a&gt;（CRI）
配置收集 Pod 和容器指标。要使用基于 CRI 的收集机制，你必须启用 &lt;code&gt;PodAndContainerStatsFromCRI&lt;/code&gt;
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/"&gt;特性门控&lt;/a&gt;
并使用兼容的 CRI 实现（containerd &amp;gt;= 1.6.0, CRI-O &amp;gt;= 1.23.0）。&lt;/p&gt;</description></item><item><title>ING Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/ing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/ing/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;After undergoing an agile transformation, &lt;a href="https://www.ing.com/"&gt;ING&lt;/a&gt; realized it needed a standardized platform to support the work their developers were doing. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, Docker Swarm, &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, &lt;a href="https://mesosphere.com/"&gt;Mesos&lt;/a&gt;. Well, it's not really useful for a company to have one hundred wheels, instead of one good wheel."&lt;/p&gt;</description></item><item><title>Ingress 控制器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/ingress-controllers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/ingress-controllers/</guid><description>&lt;!--
title: Ingress Controllers
description: &gt;-
 In order for an [Ingress](/docs/concepts/services-networking/ingress/) to work in your cluster,
 there must be an _ingress controller_ running.
 You need to select at least one ingress controller and make sure it is set up in your cluster. 
 This page lists common ingress controllers that you can deploy.
content_type: concept
weight: 50
--&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
The Kubernetes project recommends using [Gateway](https://gateway-api.sigs.k8s.io/) instead of
[Ingress](/docs/concepts/services-networking/ingress/).
The Ingress API has been frozen.
--&gt;
&lt;p&gt;Kubernetes 项目推荐使用 &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway&lt;/a&gt; 而不是
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt;。
Ingress API 已经被冻结。&lt;/p&gt;</description></item><item><title>Job</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/</guid><description>&lt;!--
reviewers:
- erictune
- mimowo
- soltysh
title: Jobs
content_type: concept
description: &gt;-
 Jobs represent one-off tasks that run to completion and then stop.
feature:
 title: Batch execution
 description: &gt;
 In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
weight: 50
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up
the Pods it created. Suspending a Job will delete its active Pods until the Job
is resumed again.
--&gt;
&lt;p&gt;Job 会创建一个或者多个 Pod，并将继续重试 Pod 的执行，直到指定数量的 Pod 成功终止。
随着 Pod 成功结束，Job 跟踪记录成功完成的 Pod 个数。
当数量达到指定的成功个数阈值时，任务（即 Job）结束。
删除 Job 的操作会清除所创建的全部 Pod。
挂起 Job 的操作会删除 Job 的所有活跃 Pod，直到 Job 被再次恢复执行。&lt;/p&gt;</description></item><item><title>kubeadm config</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-config/</guid><description>&lt;!--
reviewers:
- luxas
- jbeda
title: kubeadm config
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
During `kubeadm init`, kubeadm uploads the `ClusterConfiguration` object to your cluster
in a ConfigMap called `kubeadm-config` in the `kube-system` namespace. This configuration is then read during
`kubeadm join`, `kubeadm reset` and `kubeadm upgrade`.
--&gt;
&lt;p&gt;在 &lt;code&gt;kubeadm init&lt;/code&gt; 执行期间，kubeadm 将 &lt;code&gt;ClusterConfiguration&lt;/code&gt; 对象上传
到你的集群的 &lt;code&gt;kube-system&lt;/code&gt; 名字空间下名为 &lt;code&gt;kubeadm-config&lt;/code&gt; 的 ConfigMap 对象中。
然后在 &lt;code&gt;kubeadm join&lt;/code&gt;、&lt;code&gt;kubeadm reset&lt;/code&gt; 和 &lt;code&gt;kubeadm upgrade&lt;/code&gt; 执行期间读取此配置。&lt;/p&gt;</description></item><item><title>kubelet 配置目录合并</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kubelet-config-directory-merging/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/kubelet-config-directory-merging/</guid><description>&lt;!--
content_type: "reference"
title: Kubelet Configuration Directory Merging
weight: 50
--&gt;
&lt;!--
When using the kubelet's `--config-dir` flag to specify a drop-in directory for
configuration, there is some specific behavior on how different types are
merged.

Here are some examples of how different data types behave during configuration merging:
--&gt;
&lt;p&gt;当使用 kubelet 的 &lt;code&gt;--config-dir&lt;/code&gt; 标志来指定存放配置的目录时，不同类型的配置在合并时会有一些特定的行为。&lt;/p&gt;
&lt;p&gt;以下是在配置合并过程中不同数据类型的一些行为示例：&lt;/p&gt;
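作为示意，下面是一小段 Python（并非 kubelet 的实际实现，仅为演示假定的合并行为）：标量字段被后面的 drop-in 文件覆盖，列表被整体替换，而映射（以及嵌入式结构的字段）按键逐个合并。

```python
def merge_config(base: dict, overlay: dict) -> dict:
    """按 drop-in 文件顺序合并两份配置（示意实现，非 kubelet 实际代码）。"""
    result = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            # 映射以及嵌入式结构：按键递归合并，只覆盖后一个文件中出现的键
            result[key] = merge_config(result[key], value)
        else:
            # 标量字段和列表：后面的文件整体覆盖前面的值
            result[key] = value
    return result

# 虚构的示例配置，字段名仅作演示
base = {"port": 10250, "clusterDNS": ["10.96.0.10"], "featureGates": {"A": True}}
overlay = {"clusterDNS": ["10.96.0.53"], "featureGates": {"B": False}}
merged = merge_config(base, overlay)
# merged 中：port 保留原值，clusterDNS 被整体替换，featureGates 合并了两个键
```

真实的 kubelet 合并逻辑还涉及类型校验等细节，此处仅呈现其核心思路。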
&lt;!--
### Structure Fields

There are two types of structure fields in a YAML structure: singular (or a
scalar type) and embedded (structures that contain scalar types).
The configuration merging process handles the overriding of singular and embedded struct fields to create a resulting kubelet configuration.
--&gt;
&lt;h3 id="structure-fields"&gt;结构字段&lt;/h3&gt;
&lt;p&gt;在 YAML 结构中有两种结构字段：独立字段（标量类型）和嵌入式字段（包含标量类型的结构）。
配置合并过程会对独立结构字段和嵌入式结构字段进行覆盖处理，以生成最终的 kubelet 配置。&lt;/p&gt;</description></item><item><title>Kubelet 设备管理器 API 版本</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/device-plugin-api-versions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/device-plugin-api-versions/</guid><description>&lt;!--
content_type: "reference"
title: Kubelet Device Manager API Versions
weight: 50
--&gt;
&lt;!--
This page provides details of version compatibility between the Kubernetes
[device plugin API](https://github.com/kubernetes/kubelet/tree/master/pkg/apis/deviceplugin),
and different versions of Kubernetes itself.
--&gt;
&lt;p&gt;本页详述了 Kubernetes
&lt;a href="https://github.com/kubernetes/kubelet/tree/master/pkg/apis/deviceplugin"&gt;设备插件 API&lt;/a&gt;
与不同版本的 Kubernetes 本身之间的版本兼容性。&lt;/p&gt;
&lt;!--
## Compatibility matrix
--&gt;
&lt;h2 id="compatibility-matrix"&gt;兼容性矩阵&lt;/h2&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;th&gt;&lt;code&gt;v1alpha1&lt;/code&gt;&lt;/th&gt;
 &lt;th&gt;&lt;code&gt;v1beta1&lt;/code&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.21&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.22&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.23&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.24&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.25&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.26&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
Key:

* `✓` Exactly the same features / API objects in both device plugin API and
 the Kubernetes version.
--&gt;
&lt;p&gt;简要说明：&lt;/p&gt;</description></item><item><title>Kubernetes API 访问控制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/controlling-access/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/controlling-access/</guid><description>&lt;!--
reviewers:
- erictune
- lavalamp
title: Controlling Access to the Kubernetes API
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an overview of controlling access to the Kubernetes API.
--&gt;
&lt;p&gt;本页面概述了对 Kubernetes API 的访问控制。&lt;/p&gt;
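API 请求到达 API 服务器后，通常依次经过认证（authentication）、鉴权（authorization）和准入控制（admission control）等阶段。下面是一个简化的 Python 玩具示例（并非 Kubernetes 实际代码），仅用于示意这一处理顺序；其中的用户、字段名和规则均为虚构：

```python
def authenticate(request: dict) -> str:
    """认证：识别请求者身份；无法识别则拒绝（对应 HTTP 401）。"""
    user = request.get("token_user")  # 虚构字段，仅作示意
    if user is None:
        raise PermissionError("401 Unauthorized")
    return user

def authorize(user: str, verb: str, resource: str, rules: set) -> None:
    """鉴权：检查用户是否被允许对资源执行该操作（对应 HTTP 403）。"""
    if (user, verb, resource) not in rules:
        raise PermissionError("403 Forbidden")

def admit(request: dict) -> None:
    """准入控制：在对象持久化之前校验（或修改）请求，这里仅示意一个校验。"""
    if request.get("namespace") is None:
        raise ValueError("admission denied: namespace required")

def handle(request: dict, rules: set) -> str:
    """按认证 -> 鉴权 -> 准入控制的顺序处理一个请求。"""
    user = authenticate(request)
    authorize(user, request["verb"], request["resource"], rules)
    admit(request)
    return "200 OK"
```

例如，若规则集中只允许 alice 对 pods 执行 create 操作，则 bob 的同类请求会在鉴权阶段被拒绝。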
&lt;!-- body --&gt;
&lt;!--
Users access the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) using `kubectl`,
client libraries, or by making REST requests. Both human users and
[Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/) can be
authorized for API access.
When a request reaches the API, it goes through several stages, illustrated in the
following diagram:
--&gt;
&lt;p&gt;用户使用 &lt;code&gt;kubectl&lt;/code&gt;、客户端库或构造 REST 请求来访问 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/"&gt;Kubernetes API&lt;/a&gt;。
人类用户和 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-service-account/"&gt;Kubernetes 服务账号&lt;/a&gt;都可以被鉴权访问 API。
当请求到达 API 时，它会经历多个阶段，如下图所示：&lt;/p&gt;</description></item><item><title>Kubernetes API 健康端点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/health-checks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/health-checks/</guid><description>&lt;!-- 
title: Kubernetes API health endpoints
reviewers:
- logicalhan
content_type: concept
weight: 50
 --&gt;
&lt;!-- overview --&gt;
&lt;!-- 
The Kubernetes &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt; provides API endpoints to indicate the current status of the API server.
This page describes these API endpoints and explains how you can use them. 
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt; 提供 API 端点以指示 API 服务器的当前状态。
本文描述了这些 API 端点，并说明如何使用。&lt;/p&gt;</description></item><item><title>Kubernetes 自我修复</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/self-healing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/self-healing/</guid><description>&lt;!--
title: Kubernetes Self-Healing 
content_type: concept 
weight: 50
feature:
 title: Self-healing
 anchor: Automated recovery from damage
 description: &gt;
 Kubernetes restarts containers that crash, replaces entire Pods where needed,
 reattaches storage in response to wider failures, and can integrate with
 node autoscalers to self-heal even at the node level.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes is designed with self-healing capabilities that help maintain the health and availability of workloads. 
It automatically replaces failed containers, reschedules workloads when nodes become unavailable, and ensures that the desired state of the system is maintained.
--&gt;
&lt;p&gt;Kubernetes 旨在通过自我修复能力来维护工作负载的健康和可用性。
它能够自动替换失败的容器，在节点不可用时重新调度工作负载，
并确保系统的期望状态得以维持。&lt;/p&gt;</description></item><item><title>PKI 证书和要求</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/best-practices/certificates/</guid><description>&lt;!--
title: PKI certificates and requirements
reviewers:
- sig-cluster-lifecycle
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes requires PKI certificates for authentication over TLS.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates
that your cluster requires are automatically generated.
You can also generate your own certificates -- for example, to keep your private keys more secure
by not storing them on the API server.
This page explains the certificates that your cluster requires.
--&gt;
&lt;p&gt;Kubernetes 需要 PKI 证书才能进行基于 TLS 的身份验证。如果你是使用
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;kubeadm&lt;/a&gt; 安装的 Kubernetes，
则会自动生成集群所需的证书。
你也可以自己生成证书 --- 例如，不将私钥存储在 API 服务器上，
可以让私钥更加安全。此页面说明了集群必需的证书。&lt;/p&gt;</description></item><item><title>本地化 Kubernetes 文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/localization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/localization/</guid><description>&lt;!--
title: Localizing Kubernetes documentation
content_type: concept
approvers:
- remyleone
- rlenferink
weight: 50
card:
 name: contribute
 weight: 50
 title: Localizing the docs
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows you how to
[localize](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)
the docs for a different language.
--&gt;
&lt;p&gt;此页面描述如何为其他语言的文档提供
&lt;a href="https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/"&gt;本地化&lt;/a&gt;版本。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Contribute to an existing localization

You can help add or improve the content of an existing localization. In
[Kubernetes Slack](https://slack.k8s.io/), you can find a channel for each
localization. There is also a general
[SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations)
where you can say hello.
--&gt;
&lt;h2 id="contribute-to-an-existing-localization"&gt;为现有的本地化做出贡献&lt;/h2&gt;
&lt;p&gt;你可以帮助添加或改进现有本地化的内容。在 &lt;a href="https://slack.k8s.io/"&gt;Kubernetes Slack&lt;/a&gt;
中，你能找到每个本地化的频道。还有一个通用的
&lt;a href="https://kubernetes.slack.com/messages/sig-docs-localizations"&gt;SIG Docs Localizations Slack 频道&lt;/a&gt;，
你可以在这里打个招呼。&lt;/p&gt;</description></item><item><title>边车容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/sidecar-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/sidecar-containers/</guid><description>&lt;!--
title: Sidecar Containers
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;

 &lt;div class="feature-state-notice feature-stable" title="特性门控： SidecarContainers"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
Sidecar containers are the secondary containers that run along with the main
application container within the same &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.
These containers are used to enhance or to extend the functionality of the primary _app
container_ by providing additional services, or functionality such as logging, monitoring,
security, or data synchronization, without directly altering the primary application code.
--&gt;
&lt;p&gt;边车容器是与&lt;strong&gt;主应用容器&lt;/strong&gt;在同一个 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 中运行的辅助容器。
这些容器通过提供额外的服务或功能（如日志记录、监控、安全性或数据同步）来增强或扩展主应用容器的功能，
而无需直接修改主应用代码。&lt;/p&gt;</description></item><item><title>博客文章镜像</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/article-mirroring/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/article-mirroring/</guid><description>&lt;!--
title: Blog article mirroring
slug: article-mirroring
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
There are two official Kubernetes blogs, and the CNCF has its own blog where you can cover Kubernetes too.
For the main Kubernetes blog, we (the Kubernetes project) like to publish articles with different perspectives and special focuses, that have a link to Kubernetes.

Some articles appear on both blogs: there is a primary version of the article, and
a _mirror article_ on the other blog.

This page describes the criteria for mirroring, the motivation for mirroring, and
explains what you should do to ensure that an article publishes to both blogs.
--&gt;
&lt;p&gt;官方有两个 Kubernetes 博客，CNCF 也有自己的博客，你也可以在其中发表关于 Kubernetes 的内容。
对于主要的 Kubernetes 博客，我们（Kubernetes 项目）喜欢发表具有不同视角和特别焦点的文章，
这些文章与 Kubernetes 有一定的关联。&lt;/p&gt;</description></item><item><title>创建 Windows HostProcess Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/create-hostprocess-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/create-hostprocess-pod/</guid><description>&lt;!--
title: Create a Windows HostProcess Pod
content_type: task
weight: 50
min-kubernetes-server-version: 1.23
--&gt;
&lt;!-- overview --&gt;

 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Windows HostProcess containers enable you to run containerized
workloads on a Windows host. These containers operate as
normal processes but have access to the host network namespace,
storage, and devices when given the appropriate user privileges.
HostProcess containers can be used to deploy network plugins,
storage configurations, device plugins, kube-proxy, and other
components to Windows nodes without the need for dedicated proxies or
the direct installation of host services.
--&gt;
&lt;p&gt;Windows HostProcess 容器让你能够在 Windows 主机上运行容器化负载。
这类容器以普通的进程形式运行，但能够在具有合适用户特权的情况下，
访问主机网络名字空间、存储和设备。HostProcess 容器可用来在 Windows
节点上部署网络插件、存储配置、设备插件、kube-proxy 以及其他组件，
同时不需要配置专用的代理或者直接安装主机服务。&lt;/p&gt;</description></item><item><title>动态卷制备</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/dynamic-provisioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/dynamic-provisioning/</guid><description>&lt;!--
reviewers:
- saad-ali
- jsafrane
- thockin
- msau42
title: Dynamic Volume Provisioning
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Dynamic volume provisioning allows storage volumes to be created on-demand.
Without dynamic provisioning, cluster administrators have to manually make
calls to their cloud or storage provider to create new storage volumes, and
then create [`PersistentVolume` objects](/docs/concepts/storage/persistent-volumes/)
to represent them in Kubernetes. The dynamic provisioning feature eliminates
the need for cluster administrators to pre-provision storage. Instead, it
automatically provisions storage when users create
[`PersistentVolumeClaim` objects](/docs/concepts/storage/persistent-volumes/).
--&gt;
&lt;p&gt;动态卷制备允许按需创建存储卷。
如果没有动态制备，集群管理员必须手动调用云平台或存储提供商的接口来创建新的存储卷，
然后在 Kubernetes 集群中创建
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolume&lt;/code&gt; 对象&lt;/a&gt;来表示这些卷。
动态制备功能消除了集群管理员预先配置存储的需要。相反，它在用户创建
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolumeClaim&lt;/code&gt; 对象&lt;/a&gt;时自动制备存储。&lt;/p&gt;</description></item><item><title>端口和协议</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/networking/ports-and-protocols/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/networking/ports-and-protocols/</guid><description>&lt;!--
title: Ports and Protocols
content_type: reference
weight: 50
--&gt;
&lt;!--
When running Kubernetes in an environment with strict network boundaries, such 
as on-premises datacenter with physical network firewalls or Virtual 
Networks in Public Cloud, it is useful to be aware of the ports and protocols 
used by Kubernetes components
--&gt;
&lt;p&gt;当你在有严格网络边界的环境里（例如带有物理网络防火墙的本地数据中心，或是公有云中的虚拟网络）运行 Kubernetes 时，
了解 Kubernetes 组件使用了哪些端口和协议是非常有用的。&lt;/p&gt;
&lt;!--
## Control plane

| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|-------------------------|---------------------------|
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |

Although etcd ports are included in control plane section, you can also host your own
etcd cluster externally or on custom ports. 
--&gt;
&lt;h2 id="control-plane"&gt;控制面&lt;/h2&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;协议&lt;/th&gt;
 &lt;th&gt;方向&lt;/th&gt;
 &lt;th&gt;端口范围&lt;/th&gt;
 &lt;th&gt;目的&lt;/th&gt;
 &lt;th&gt;使用者&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;入站&lt;/td&gt;
 &lt;td&gt;6443&lt;/td&gt;
 &lt;td&gt;Kubernetes API 服务器&lt;/td&gt;
 &lt;td&gt;所有&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;入站&lt;/td&gt;
 &lt;td&gt;2379-2380&lt;/td&gt;
 &lt;td&gt;etcd 服务器客户端 API&lt;/td&gt;
 &lt;td&gt;kube-apiserver、etcd&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;入站&lt;/td&gt;
 &lt;td&gt;10250&lt;/td&gt;
 &lt;td&gt;kubelet API&lt;/td&gt;
 &lt;td&gt;自身、控制面&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;入站&lt;/td&gt;
 &lt;td&gt;10259&lt;/td&gt;
 &lt;td&gt;kube-scheduler&lt;/td&gt;
 &lt;td&gt;自身&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;入站&lt;/td&gt;
 &lt;td&gt;10257&lt;/td&gt;
 &lt;td&gt;kube-controller-manager&lt;/td&gt;
 &lt;td&gt;自身&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;尽管 etcd 的端口也列举在控制面的部分，但你也可以在外部自己托管 etcd 集群或者自定义端口。&lt;/p&gt;</description></item><item><title>高可用拓扑选项</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Options for Highly Available Topology
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains the two options for configuring the topology of your highly available (HA) Kubernetes clusters.
--&gt;
&lt;p&gt;本页面介绍了配置高可用（HA）Kubernetes 集群拓扑的两个选项。&lt;/p&gt;
&lt;!--
You can set up an HA cluster:
--&gt;
&lt;p&gt;你可以设置 HA 集群：&lt;/p&gt;
&lt;!--
- With stacked control plane nodes, where etcd nodes are colocated with control plane nodes
- With external etcd nodes, where etcd runs on separate nodes from the control plane
--&gt;
&lt;ul&gt;
&lt;li&gt;使用堆叠（stacked）控制平面节点，其中 etcd 节点与控制平面节点共存&lt;/li&gt;
&lt;li&gt;使用外部 etcd 节点，其中 etcd 在与控制平面不同的节点上运行&lt;/li&gt;
&lt;/ul&gt;
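&lt;p&gt;作为示意，外部 etcd 拓扑可以在 kubeadm 的 ClusterConfiguration 中这样声明
（其中的 etcd 端点地址与证书路径仅为占位示例，应替换为你自己环境中的实际值）：&lt;/p&gt;

```yaml
# kubeadm ClusterConfiguration 片段：使用外部 etcd 拓扑
# 注意：端点地址与证书路径均为占位示例
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.0.10:2379   # 外部 etcd 节点（示例地址）
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```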
&lt;!--
You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster.
--&gt;
&lt;p&gt;在设置 HA 集群之前，你应该仔细考虑每种拓扑的优缺点。&lt;/p&gt;</description></item><item><title>关于 CGroup v2</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cgroups/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cgroups/</guid><description>&lt;!--
title: About cgroup v2
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
On Linux, &lt;a class='glossary-tooltip' title='一组具有可选资源隔离、审计和限制的 Linux 进程。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cgroup' target='_blank' aria-label='control groups'&gt;control groups&lt;/a&gt;
constrain resources that are allocated to processes.

The &lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; and the
underlying container runtime need to interface with cgroups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/) which
includes cpu/memory requests and limits for containerized workloads.

There are two versions of cgroups in Linux: cgroup v1 and cgroup v2. cgroup v2 is
the new generation of the `cgroup` API.
--&gt;
&lt;p&gt;在 Linux 上，&lt;a class='glossary-tooltip' title='一组具有可选资源隔离、审计和限制的 Linux 进程。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cgroup' target='_blank' aria-label='控制组'&gt;控制组&lt;/a&gt;约束分配给进程的资源。&lt;/p&gt;</description></item><item><title>管理服务账号</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/</guid><description>&lt;!--
reviewers:
 - liggitt
 - enj
title: Managing Service Accounts
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A _ServiceAccount_ provides an identity for processes that run in a Pod.

A process inside a Pod can use the identity of its associated service account to
authenticate to the cluster's API server.
--&gt;
&lt;p&gt;&lt;strong&gt;ServiceAccount&lt;/strong&gt; 为 Pod 中运行的进程提供了一个身份。&lt;/p&gt;
&lt;p&gt;Pod 内的进程可以使用其关联服务账号的身份，向集群的 API 服务器进行身份认证。&lt;/p&gt;
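&lt;p&gt;下面是一个最小示例，展示如何声明一个 ServiceAccount 并让 Pod 以该身份运行
（其中 &lt;code&gt;build-robot&lt;/code&gt; 等名称与镜像仅为示例）：&lt;/p&gt;

```yaml
# 声明一个 ServiceAccount，并在 Pod 中通过 serviceAccountName 引用它
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot        # 示例名称
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: build-robot   # Pod 内进程将以此服务账号的身份向 API 服务器认证
  containers:
  - name: main
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # 示例镜像
    command: ["sleep", "3600"]
```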
&lt;!--
For an introduction to service accounts, read [configure service accounts](/docs/tasks/configure-pod-container/configure-service-account/).

This task guide explains some of the concepts behind ServiceAccounts. The
guide also explains how to obtain or revoke tokens that represent
ServiceAccounts, and how to (optionally) bind a ServiceAccount's validity to
the lifetime of an API object.
--&gt;
&lt;p&gt;有关服务账号的介绍，
请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-service-account/"&gt;配置服务账号&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>集群网络系统</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/networking/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/networking/</guid><description>&lt;!--
reviewers:
- thockin
title: Cluster Networking
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Networking is a central part of Kubernetes, but it can be challenging to
understand exactly how it is expected to work. There are 4 distinct networking
problems to address:

1. Highly-coupled container-to-container communications: this is solved by
 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by [Services](/docs/concepts/services-networking/service/).
4. External-to-Service communications: this is also covered by Services.
--&gt;
&lt;p&gt;集群网络系统是 Kubernetes 的核心部分，但是想要准确理解它的工作原理可是个不小的挑战。
下面列出的是网络系统的四个主要问题：&lt;/p&gt;</description></item><item><title>检查移除 Dockershim 是否对你有影响</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/</guid><description>&lt;!-- 
title: Check whether dockershim removal affects you
content_type: task
reviewers:
- SergeyKanzhelev
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The `dockershim` component of Kubernetes allows the use of Docker as a Kubernetes's
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;.
Kubernetes' built-in `dockershim` component was removed in release v1.24.
--&gt;
&lt;p&gt;Kubernetes 的 &lt;code&gt;dockershim&lt;/code&gt; 组件使得你可以把 Docker 用作 Kubernetes 的
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='容器运行时'&gt;容器运行时&lt;/a&gt;。
在 Kubernetes v1.24 版本中，内建组件 &lt;code&gt;dockershim&lt;/code&gt; 被移除。&lt;/p&gt;
&lt;!--
This page explains how your cluster could be using Docker as a container runtime,
provides details on the role that `dockershim` plays when in use, and shows steps
you can take to check whether any workloads could be affected by `dockershim` removal.
--&gt;
&lt;p&gt;本页解释你的集群可能以哪些方式将 Docker 用作容器运行时，
详述使用 &lt;code&gt;dockershim&lt;/code&gt; 时它所扮演的角色，
继而展示了一组操作，可用来检查移除 &lt;code&gt;dockershim&lt;/code&gt; 对你的工作负载是否有影响。&lt;/p&gt;</description></item><item><title>节点指标数据</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/node-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/node-metrics/</guid><description>&lt;!--
title: Node metrics data
content_type: reference
weight: 50
description: &gt;-
 Mechanisms for accessing metrics at node, volume, pod and container level,
 as seen by the kubelet.
--&gt;
&lt;!--
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
gathers metric statistics at the node, volume, pod and container level,
and emits this information in the
[Summary API](/docs/reference/config-api/kubelet-stats.v1alpha1/).
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;
在节点、卷、Pod 和容器级别收集统计信息，
并在 &lt;a href="zh-cn/docs/reference/config-api/kubelet-stats.v1alpha1/"&gt;Summary API&lt;/a&gt;
中输出这些信息。&lt;/p&gt;
&lt;!--
You can send a proxied request to the stats summary API via the
Kubernetes API server.

Here is an example of a Summary API request for a node named `minikube`:
--&gt;
&lt;p&gt;你可以通过 Kubernetes API 服务器将代理的请求发送到 stats Summary API。&lt;/p&gt;</description></item><item><title>节点资源管理器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/node-resource-managers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/node-resource-managers/</guid><description>&lt;!-- 
reviewers:
- derekwaynecarr
- klueska
title: Node Resource Managers 
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
In order to support latency-critical and high-throughput workloads, Kubernetes offers a suite of
Resource Managers. The managers aim to co-ordinate and optimise the alignment of node's resources for pods
configured with a specific requirement for CPUs, devices, and memory (hugepages) resources.
--&gt;
&lt;p&gt;Kubernetes 提供了一组资源管理器，用于支持延迟敏感的、高吞吐量的工作负载。
资源管理器的目标是协调和优化节点资源，以支持对 CPU、设备和内存（巨页）等资源有特殊需求的 Pod。&lt;/p&gt;
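&lt;p&gt;这些资源管理器通过 kubelet 配置启用。下面是一个示意性的 KubeletConfiguration 片段
（具体策略取值仅为示例，应结合工作负载需求选择；某些策略还需要额外的配套设置）：&lt;/p&gt;

```yaml
# KubeletConfiguration 片段：为 CPU、内存与拓扑管理器选择策略的示例
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static            # 为 Guaranteed Pod 独占分配整数个 CPU
memoryManagerPolicy: Static         # 预留 NUMA 对齐的内存与巨页（还需配合 reservedMemory 等设置）
topologyManagerPolicy: single-numa-node   # 要求各类资源在同一 NUMA 节点上对齐
```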
&lt;!-- body --&gt;
&lt;!-- 
## Hardware topology alignment policies
--&gt;
&lt;h2 id="hardware-topology-alignment-policies"&gt;硬件拓扑对齐策略&lt;/h2&gt;
&lt;!--
_Topology Manager_ is a kubelet component that aims to coordinate the set of components that are
responsible for these optimizations. The overall resource management process is governed using
the policy you specify. To learn more, read
[Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/).
--&gt;
&lt;p&gt;&lt;strong&gt;拓扑管理器（Topology Manager）&lt;/strong&gt;是一个 kubelet 组件，旨在协调负责这些优化的组件集。
整体资源管理过程通过你指定的策略进行管理。
要了解更多信息，请阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/topology-manager/"&gt;控制节点上的拓扑管理策略&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>扩缩 StatefulSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/scale-stateful-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/scale-stateful-set/</guid><description>&lt;!--
reviewers:
- bprashanth
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: Scale a StatefulSet
content_type: task
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to
increasing or decreasing the number of replicas.
--&gt;
&lt;p&gt;本文介绍如何扩缩 StatefulSet。StatefulSet 的扩缩指的是增加或者减少副本个数。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
- StatefulSets are only available in Kubernetes version 1.5 or later.
 To check your version of Kubernetes, run `kubectl version`.

- Not all stateful applications scale nicely. If you are unsure about whether
 to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/)
 or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.

- You should perform scaling only when you are confident that your stateful application
 cluster is completely healthy.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;StatefulSets 仅适用于 Kubernetes 1.5 及以上版本。
要查看你的 Kubernetes 版本，运行 &lt;code&gt;kubectl version&lt;/code&gt;。&lt;/p&gt;</description></item><item><title>配置 cgroup 驱动</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/</guid><description>&lt;!-- 
title: Configuring a cgroup driver
content_type: task
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page explains how to configure the kubelet's cgroup driver to match the container
runtime cgroup driver for kubeadm clusters.
--&gt;
&lt;p&gt;本页阐述如何配置 kubelet 的 cgroup 驱动以匹配 kubeadm 集群中的容器运行时的 cgroup 驱动。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!-- 
You should be familiar with the Kubernetes
[container runtime requirements](/docs/setup/production-environment/container-runtimes).
--&gt;
&lt;p&gt;你应该熟悉 Kubernetes 的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes"&gt;容器运行时需求&lt;/a&gt;。&lt;/p&gt;
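&lt;p&gt;kubelet 侧的 cgroup 驱动最终对应 KubeletConfiguration 中的一个字段，示意如下
（此处取值仅为示例）：&lt;/p&gt;

```yaml
# KubeletConfiguration 片段：将 kubelet 的 cgroup 驱动设置为 systemd
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # 需与容器运行时所用的 cgroup 驱动保持一致
```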
&lt;!-- steps --&gt;
&lt;!-- 
## Configuring the container runtime cgroup driver
--&gt;
&lt;h2 id="configuring-the-container-runtime-cgroup-driver"&gt;配置容器运行时 cgroup 驱动&lt;/h2&gt;
&lt;!-- 
The [Container runtimes](/docs/setup/production-environment/container-runtimes) page
explains that the `systemd` driver is recommended for kubeadm based setups instead
of the kubelet's [default](/docs/reference/config-api/kubelet-config.v1beta1) `cgroupfs` driver,
because kubeadm manages the kubelet as a
[systemd service](/docs/setup/production-environment/tools/kubeadm/kubelet-integration).
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes"&gt;容器运行时&lt;/a&gt;页面提到，
由于 kubeadm 把 kubelet 作为一个
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/kubelet-integration"&gt;systemd 服务&lt;/a&gt;来管理，
所以对基于 kubeadm 的安装，我们推荐使用 &lt;code&gt;systemd&lt;/code&gt; 驱动，
不推荐 kubelet &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1beta1"&gt;默认&lt;/a&gt;的 &lt;code&gt;cgroupfs&lt;/code&gt; 驱动。&lt;/p&gt;</description></item><item><title>使用 kubectl patch 更新 API 对象</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</guid><description>&lt;!--
title: Update API Objects in Place Using kubectl patch
description: Use kubectl patch to update Kubernetes API objects in place. Do a strategic merge patch or a JSON merge patch.
content_type: task
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This task shows how to use `kubectl patch` to update an API object in place. The exercises
in this task demonstrate a strategic merge patch and a JSON merge patch.
--&gt;
&lt;p&gt;这个任务展示如何使用 &lt;code&gt;kubectl patch&lt;/code&gt; 就地更新 API 对象。
这个任务中的练习演示了一个策略性合并 patch 和一个 JSON 合并 patch。&lt;/p&gt;</description></item><item><title>使用 Romana 提供 NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/</guid><description>&lt;!--
reviewers:
- chrismarino
title: Romana for NetworkPolicy
content_type: task
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use Romana for NetworkPolicy.
--&gt;
&lt;p&gt;本页展示如何使用 Romana 作为 NetworkPolicy。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/reference/setup-tools/kubeadm/).
--&gt;
&lt;p&gt;完成 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;kubeadm 入门指南&lt;/a&gt;中的 1、2、3 步。&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;!--
## Installing Romana with kubeadm

Follow the [containerized installation guide](https://github.com/romana/romana/tree/master/containerize) for kubeadm.

## Applying network policies

To apply network policies use one of the following:

* [Romana network policies](https://github.com/romana/romana/wiki/Romana-policies).
 * [Example of Romana network policy](https://github.com/romana/core/blob/master/doc/policy.md).
* The NetworkPolicy API.
 --&gt;
&lt;h2 id="使用-kubeadm-安装-romana"&gt;使用 kubeadm 安装 Romana&lt;/h2&gt;
&lt;p&gt;按照&lt;a href="https://github.com/romana/romana/tree/master/containerize"&gt;容器化安装指南&lt;/a&gt;，
使用 kubeadm 安装。&lt;/p&gt;</description></item><item><title>使用 Secret 安全地分发凭据</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure/</guid><description>&lt;!--
title: Distribute Credentials Securely Using Secrets
content_type: task
weight: 50
min-kubernetes-server-version: v1.6
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to securely inject sensitive data, such as passwords and
encryption keys, into Pods.
--&gt;
&lt;p&gt;本文展示如何安全地将敏感数据（如密码和加密密钥）注入到 Pod 中。&lt;/p&gt;
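&lt;p&gt;例如，可以用一个 Secret 清单来承载这类敏感数据（下面的名称、用户名和密码均为示例值，
实际场景中不要把真实凭据提交到版本库）：&lt;/p&gt;

```yaml
# Secret 示例：通过 stringData 以明文书写，API 服务器存储时会将其编码
apiVersion: v1
kind: Secret
metadata:
  name: test-secret        # 示例名称
type: Opaque
stringData:
  username: my-app         # 示例值
  password: "39528$vdg7Jb" # 示例值
```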
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用展开的方式进行并行处理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/parallel-processing-expansion/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/parallel-processing-expansion/</guid><description>&lt;!--
title: Parallel Processing using Expansions
content_type: task
min-kubernetes-server-version: v1.8
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This task demonstrates running multiple &lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Jobs'&gt;Jobs&lt;/a&gt;
based on a common template. You can use this approach to process batches of work in
parallel.

For this example there are only three items: _apple_, _banana_, and _cherry_.
The sample Jobs process each item by printing a string then pausing.

See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how
this pattern fits more realistic use cases.
--&gt;
&lt;p&gt;本任务展示如何基于一个公共的模板运行多个&lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Jobs'&gt;Job&lt;/a&gt;。
你可以用这种方法来并行执行批处理任务。&lt;/p&gt;</description></item><item><title>适用于 Docker 用户的 kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/docker-cli-to-kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/docker-cli-to-kubectl/</guid><description>&lt;!--
title: kubectl for Docker Users
content_type: concept
reviewers:
- brendandburns
- thockin
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
You can use the Kubernetes command line tool `kubectl` to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the Docker commands and the kubectl commands. The following sections show a Docker sub-command and describe the equivalent `kubectl` command.
--&gt;
&lt;p&gt;你可以使用 Kubernetes 命令行工具 &lt;code&gt;kubectl&lt;/code&gt; 与 API 服务器进行交互。如果你熟悉 Docker 命令行工具，
则使用 kubectl 非常简单。但是，Docker 命令和 kubectl 命令之间有一些区别。以下显示了 Docker 子命令，
并描述了等效的 &lt;code&gt;kubectl&lt;/code&gt; 命令。&lt;/p&gt;</description></item><item><title>为 Kubernetes API 生成参考文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubernetes-api/</guid><description>&lt;!--
title: Generating Reference Documentation for the Kubernetes API
content_type: task
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to update the Kubernetes API reference documentation.

The Kubernetes API reference documentation is built from the
[Kubernetes OpenAPI spec](https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json)
using the [kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) generation code.

If you find bugs in the generated documentation, you need to
[fix them upstream](/docs/contribute/generate-ref-docs/contribute-upstream/).

If you need only to regenerate the reference documentation from the
[OpenAPI](https://github.com/OAI/OpenAPI-Specification)
spec, continue reading this page.
--&gt;
&lt;p&gt;本页面显示了如何更新 Kubernetes API 参考文档。&lt;/p&gt;</description></item><item><title>为命名空间配置内存和 CPU 配额</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/</guid><description>&lt;!--
title: Configure Memory and CPU Quotas for a Namespace
content_type: task
weight: 50
description: &gt;-
 Define overall memory and CPU resource limits for a namespace.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to set quotas for the total amount memory and CPU that
can be used by all Pods running in a &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.
You specify quotas in a
[ResourceQuota](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
object.
--&gt;
&lt;p&gt;本文介绍如何为&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;下运行的所有
Pod 设置总的内存和 CPU 配额。你可以通过使用 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/"&gt;ResourceQuota&lt;/a&gt;
对象设置配额。&lt;/p&gt;</description></item><item><title>污点和容忍度</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/</guid><description>&lt;!--
reviewers:
- davidopp
- kevin-wangzefeng
- bsalamat
title: Taints and Tolerations
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
[_Node affinity_](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
is a property of &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; that *attracts* them to
a set of &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='nodes'&gt;nodes&lt;/a&gt; (either as a preference or a
hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity"&gt;节点亲和性&lt;/a&gt;
是 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 的一种属性，它使 Pod
被吸引到一类特定的&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;
（这可能出于一种偏好，也可能是硬性要求）。
&lt;strong&gt;污点（Taint）&lt;/strong&gt; 则相反——它使节点能够排斥一类特定的 Pod。&lt;/p&gt;</description></item><item><title>虚拟 IP 和服务代理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/networking/virtual-ips/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/networking/virtual-ips/</guid><description>&lt;!--
title: Virtual IPs and Service Proxies
content_type: reference
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Every &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt; in a Kubernetes
&lt;a class='glossary-tooltip' title='一组工作机器，称为节点，会运行容器化应用程序。每个集群至少有一个工作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cluster' target='_blank' aria-label='cluster'&gt;cluster&lt;/a&gt; runs a
[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)
(unless you have deployed your own alternative component in place of `kube-proxy`).
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='一组工作机器，称为节点，会运行容器化应用程序。每个集群至少有一个工作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cluster' target='_blank' aria-label='集群'&gt;集群&lt;/a&gt;中的每个
&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;会运行一个
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/"&gt;kube-proxy&lt;/a&gt;
（除非你已经部署了自己的替换组件来替代 &lt;code&gt;kube-proxy&lt;/code&gt;）。&lt;/p&gt;
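&lt;p&gt;kube-proxy 的行为由其配置文件决定，例如可以这样选择代理模式（取值仅为示例，
可选值取决于所在平台）：&lt;/p&gt;

```yaml
# KubeProxyConfiguration 片段：选择代理模式
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # Linux 上也可为 "iptables"（默认）等
```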
&lt;!--
The `kube-proxy` component is responsible for implementing a _virtual IP_
mechanism for &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Services'&gt;Services&lt;/a&gt;
of `type` other than
[`ExternalName`](/docs/concepts/services-networking/service/#externalname).
--&gt;
&lt;p&gt;&lt;code&gt;kube-proxy&lt;/code&gt; 组件负责除 &lt;code&gt;type&lt;/code&gt; 为
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/#externalname"&gt;&lt;code&gt;ExternalName&lt;/code&gt;&lt;/a&gt;
以外的 &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt; 实现&lt;strong&gt;虚拟 IP&lt;/strong&gt; 机制。&lt;/p&gt;</description></item><item><title>自动扩缩工作负载</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/autoscaling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/autoscaling/</guid><description>&lt;!--
title: Autoscaling Workloads
description: &gt;-
 With autoscaling, you can automatically update your workloads in one way or another. This allows your cluster to react to changes in resource demand more elastically and efficiently.
content_type: concept
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, you can _scale_ a workload depending on the current demand of resources.
This allows your cluster to react to changes in resource demand more elastically and efficiently.

When you scale a workload, you can either increase or decrease the number of replicas managed by
the workload, or adjust the resources available to the replicas in-place.

The first approach is referred to as _horizontal scaling_, while the second is referred to as
_vertical scaling_.

There are manual and automatic ways to scale your workloads, depending on your use case.
--&gt;
&lt;p&gt;在 Kubernetes 中，你可以根据当前的资源需求&lt;strong&gt;扩缩&lt;/strong&gt;工作负载。
这让你的集群可以更灵活、更高效地面对资源需求的变化。&lt;/p&gt;</description></item><item><title>Gateway API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/gateway/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/gateway/</guid><description>&lt;!-- 
title: Gateway API
content_type: concept
description: &gt;-
 Gateway API is a family of API kinds that provide dynamic infrastructure provisioning
 and advanced traffic routing.
weight: 55
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
Make network services available by using an extensible, role-oriented, protocol-aware configuration
mechanism. [Gateway API](https://gateway-api.sigs.k8s.io/) is an &lt;a class='glossary-tooltip' title='扩展 Kubernetes 功能的资源。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/addons/' target='_blank' aria-label='add-on'&gt;add-on&lt;/a&gt;
containing API [kinds](https://gateway-api.sigs.k8s.io/references/spec/) that provide dynamic infrastructure
provisioning and advanced traffic routing.
--&gt;
&lt;p&gt;&lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; 通过使用可扩展的、角色导向的、
协议感知的配置机制来提供网络服务。它是一个&lt;a class='glossary-tooltip' title='扩展 Kubernetes 功能的资源。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/addons/' target='_blank' aria-label='附加组件'&gt;附加组件&lt;/a&gt;，
包含可提供动态基础设施配置和高级流量路由的
API &lt;a href="https://gateway-api.sigs.k8s.io/references/spec/"&gt;类别&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>可观测性</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/observability/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/observability/</guid><description>&lt;!--
title: Observability
reviewers:
weight: 55
content_type: concept
description: &gt;
 Understand how to gain end-to-end visibility of a Kubernetes cluster through the collection of metrics, logs, and traces.
no_list: true
card:
 name: setup
 weight: 60
 anchors:
 - anchor: "#metrics"
 title: Metrics
 - anchor: "#logs"
 title: Logs
 - anchor: "#traces"
 title: Traces
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, observability is the process of collecting and analyzing metrics, logs, and traces—often referred to as the three pillars of observability—in order to obtain a better understanding of the internal state, performance, and health of the cluster.
--&gt;
&lt;p&gt;在 Kubernetes 中，可观测性是通过收集和分析指标、日志和链路（通常被称为可观测性的三大支柱），
以便更好地了解集群的内部状态、性能和健康情况的过程。&lt;/p&gt;</description></item><item><title>Admission Webhook 良好实践</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/admission-webhooks-good-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/admission-webhooks-good-practices/</guid><description>&lt;!--
title: Admission Webhook Good Practices
description: &gt;
 Recommendations for designing and deploying admission webhooks in Kubernetes.
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides good practices and considerations when designing
_admission webhooks_ in Kubernetes. This information is intended for
cluster operators who run admission webhook servers or third-party applications
that modify or validate your API requests.

Before reading this page, ensure that you're familiar with the following
concepts:

* [Admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
* [Admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
--&gt;
&lt;p&gt;本页面提供了在 Kubernetes 中设计 &lt;strong&gt;Admission Webhook&lt;/strong&gt; 时的良好实践和注意事项。
此信息适用于运行准入 Webhook 服务器或第三方应用程序的集群操作员，
这些程序用于修改或验证你的 API 请求。&lt;/p&gt;</description></item><item><title>EndpointSlice</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/endpoint-slices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/endpoint-slices/</guid><description>&lt;!--
reviewers:
- freehan
title: EndpointSlices
api_metadata:
- apiVersion: "discovery.k8s.io/v1"
 kind: "EndpointSlice"
content_type: concept
weight: 60
description: &gt;-
 The EndpointSlice API is the mechanism that Kubernetes uses to let your Service
 scale to handle large numbers of backends, and allows the cluster to update its
 list of healthy backends efficiently.
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;EndpointSlices 跟踪后端端点的 IP 地址。
EndpointSlices 通常与某个 &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt; 关联，
后端端点通常表示 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;。&lt;/p&gt;
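作为示意，下面给出一个假设的 EndpointSlice 清单（其中名称、IP 地址、节点名等字段均为虚构的示例值），展示一个属于名为 &lt;code&gt;example&lt;/code&gt; 的 Service 的 EndpointSlice 的大致形态：

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # 该标签将 EndpointSlice 与其所属的 Service 关联起来
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    nodeName: node-1
```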
&lt;!-- body --&gt;
&lt;!--
## EndpointSlice API {#endpointslice-resource}

In Kubernetes, an EndpointSlice contains references to a set of network
endpoints. The control plane automatically creates EndpointSlices
for any Kubernetes Service that has a &lt;a class='glossary-tooltip' title='选择算符允许用户通过标签对一组资源对象进行筛选过滤。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/' target='_blank' aria-label='selector'&gt;selector&lt;/a&gt; specified. These EndpointSlices include
references to all the Pods that match the Service selector. EndpointSlices group
network endpoints together by unique combinations of IP family, protocol,
port number, and Service name.
The name of an EndpointSlice object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

As an example, here's a sample EndpointSlice object that's owned by the `example`
Kubernetes Service.
--&gt;
&lt;h2 id="endpointslice-resource"&gt;EndpointSlice API&lt;/h2&gt;
&lt;p&gt;在 Kubernetes 中，&lt;code&gt;EndpointSlice&lt;/code&gt; 包含对一组网络端点的引用。
控制面会自动为设置了&lt;a class='glossary-tooltip' title='选择算符允许用户通过标签对一组资源对象进行筛选过滤。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/labels/' target='_blank' aria-label='选择算符'&gt;选择算符&lt;/a&gt;的
Kubernetes Service 创建 EndpointSlice。
这些 EndpointSlice 将包含对与 Service 选择算符匹配的所有 Pod 的引用。
EndpointSlice 按照 IP 地址族、协议、端口号和 Service 名称的唯一组合将网络端点分组。
EndpointSlice 的名称必须是合法的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names"&gt;DNS 子域名&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>kubeadm reset</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset/</guid><description>&lt;!--
reviewers:
- luxas
- jbeda
title: kubeadm reset
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Performs a best effort revert of changes made by `kubeadm init` or `kubeadm join`.
--&gt;
&lt;p&gt;该命令尽力还原由 &lt;code&gt;kubeadm init&lt;/code&gt; 或 &lt;code&gt;kubeadm join&lt;/code&gt; 所做的更改。&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'
--&gt;
&lt;p&gt;尽最大努力还原通过 'kubeadm init' 或者 'kubeadm join' 操作对主机所作的更改。&lt;/p&gt;</description></item><item><title>kubectl 的用法约定</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/conventions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/conventions/</guid><description>&lt;!--
title: kubectl Usage Conventions
reviewers:
- janetkuo
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Recommended usage conventions for `kubectl`.
--&gt;
&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; 的推荐用法约定。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Using `kubectl` in Reusable Scripts
--&gt;
&lt;h2 id="using-kubectl-in-reusable-scripts"&gt;在可重用脚本中使用 &lt;code&gt;kubectl&lt;/code&gt;&lt;/h2&gt;
&lt;!--
For a stable output in a script:
--&gt;
&lt;p&gt;对于脚本中的稳定输出：&lt;/p&gt;
&lt;!--
* Request one of the machine-oriented output forms, such as `-o name`, `-o json`, `-o yaml`, `-o go-template`, or `-o jsonpath`.
* Fully-qualify the version. For example, `jobs.v1.batch/myjob`. This will ensure that kubectl does not use its default version that can change over time.
* Don't rely on context, preferences, or other implicit states.
--&gt;
&lt;ul&gt;
&lt;li&gt;请求面向机器的输出格式之一，例如 &lt;code&gt;-o name&lt;/code&gt;、&lt;code&gt;-o json&lt;/code&gt;、&lt;code&gt;-o yaml&lt;/code&gt;、&lt;code&gt;-o go-template&lt;/code&gt; 或 &lt;code&gt;-o jsonpath&lt;/code&gt;。&lt;/li&gt;
&lt;li&gt;完全限定版本。例如 &lt;code&gt;jobs.v1.batch/myjob&lt;/code&gt;。这将确保 kubectl 不会使用其默认版本，该版本会随着时间的推移而更改。&lt;/li&gt;
&lt;li&gt;不要依赖上下文、首选项或其他隐式状态。&lt;/li&gt;
&lt;/ul&gt;
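上面的要点可以用一个示意性的脚本片段来说明（下面的 JSON 只是模拟 &lt;code&gt;kubectl get jobs.v1.batch/myjob -o json&lt;/code&gt; 输出的虚构示例数据，并非来自真实集群），演示为什么面向机器的输出格式适合在脚本中稳定地解析：

```shell
# 一段模拟 `kubectl get jobs.v1.batch/myjob -o json` 输出的示例 JSON（虚构数据）
json='{"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"myjob"}}'

# 从 JSON 中提取对象名称，效果上类似 -o jsonpath='{.metadata.name}'
name=$(printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["metadata"]["name"])')
echo "$name"
```

由于 JSON 结构由 API 契约保证，这种解析方式不依赖人类可读输出的列宽或排序等实现细节。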
&lt;!--
## Subresources
--&gt;
&lt;h2 id="subresources"&gt;子资源&lt;/h2&gt;
&lt;!--
* You can use the `--subresource` argument for kubectl subcommands such as `get`, `patch`,
`edit`, `apply` and `replace` to fetch and update subresources for all resources that
support them. In Kubernetes version 1.35, only the `status`, `scale`
and `resize` subresources are supported.
 * For `kubectl edit`, the `scale` subresource is not supported. If you use `--subresource` with
 `kubectl edit` and specify `scale` as the subresource, the command will error out.
* The API contract against a subresource is identical to a full resource. While updating the
`status` subresource to a new value, keep in mind that the subresource could be potentially
reconciled by a controller to a different value.
--&gt;
&lt;ul&gt;
&lt;li&gt;你可以将 &lt;code&gt;--subresource&lt;/code&gt; 参数用于 kubectl 命令，例如 &lt;code&gt;get&lt;/code&gt;、&lt;code&gt;patch&lt;/code&gt;、&lt;code&gt;edit&lt;/code&gt;、&lt;code&gt;apply&lt;/code&gt; 和 &lt;code&gt;replace&lt;/code&gt;
来获取和更新所有支持子资源的资源的子资源。Kubernetes 1.35 版本中，
仅支持 &lt;code&gt;status&lt;/code&gt;、&lt;code&gt;scale&lt;/code&gt; 和 &lt;code&gt;resize&lt;/code&gt; 子资源。
&lt;ul&gt;
&lt;li&gt;对于 &lt;code&gt;kubectl edit&lt;/code&gt;，不支持 &lt;code&gt;scale&lt;/code&gt; 子资源。如果将 &lt;code&gt;--subresource&lt;/code&gt; 与 &lt;code&gt;kubectl edit&lt;/code&gt; 一起使用，
并指定 &lt;code&gt;scale&lt;/code&gt; 作为子资源，则命令将会报错。&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;针对子资源的 API 协定与完整资源相同。在将 &lt;code&gt;status&lt;/code&gt; 子资源更新为新值时，请记住，
该子资源随后可能被控制器调和为不同的值。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Best Practices
--&gt;
&lt;h2 id="best-practices"&gt;最佳实践&lt;/h2&gt;
&lt;h3 id="kubectl-run"&gt;&lt;code&gt;kubectl run&lt;/code&gt;&lt;/h3&gt;
&lt;!--
For `kubectl run` to satisfy infrastructure as code:
--&gt;
&lt;p&gt;若希望 &lt;code&gt;kubectl run&lt;/code&gt; 满足基础设施即代码的要求：&lt;/p&gt;</description></item><item><title>Kubernetes z-pages</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/zpages/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/instrumentation/zpages/</guid><description>&lt;!--
title: Kubernetes z-pages
content_type: reference
weight: 60
reviewers:
- dashpole
description: &gt;-
 Provides runtime diagnostics for Kubernetes components, offering insights into component runtime status and configuration flags.
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.32 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes core components can expose a suite of _z-endpoints_ to make it easier for users
to debug their cluster and its components. These endpoints are strictly to be used for human
inspection to gain real time debugging information of a component binary.
Avoid automated scraping of data returned by these endpoints; in Kubernetes 1.35
these are an **alpha** feature and the response format may change in future releases.
--&gt;
&lt;p&gt;Kubernetes 的核心组件可以暴露一系列 &lt;strong&gt;z-endpoints&lt;/strong&gt;，以便用户更轻松地调试他们的集群及其组件。
这些端点仅用于人工检查，以获取组件二进制文件的实时调试信息。请不要自动抓取这些端点返回的数据；
在 Kubernetes 1.35 中，这些是 &lt;strong&gt;Alpha&lt;/strong&gt; 特性，响应格式可能会在未来版本中发生变化。&lt;/p&gt;</description></item><item><title>Pod 水平自动扩缩</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/</guid><description>&lt;!--
reviewers:
- adrianmoisey
- omerap12
title: Horizontal Pod Autoscaling
feature:
 title: Horizontal scaling
 description: &gt;
 Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
content_type: concept
weight: 60
math: true
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, a _HorizontalPodAutoscaler_ automatically updates a workload resource (such as
a &lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; or
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;), with the
aim of automatically scaling capacity to match demand.
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;strong&gt;HorizontalPodAutoscaler&lt;/strong&gt; 自动更新工作负载资源
（例如 &lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; 或者
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;），
目的是容量自动扩缩以满足需求。&lt;/p&gt;</description></item><item><title>从 dockershim 迁移遥测和安全代理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/</guid><description>&lt;!-- 
title: Migrating telemetry and security agents from dockershim
content_type: task 
reviewers:
- SergeyKanzhelev
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;说明：&lt;/strong&gt;&amp;puncsp;本部分链接到提供 Kubernetes 所需功能的第三方项目。Kubernetes 项目作者不负责这些项目。此页面遵循&lt;a href="https://github.com/cncf/foundation/blob/main/policies-guidance/website-guidelines.md" target="_blank"&gt;CNCF 网站指南&lt;/a&gt;，按字母顺序列出项目。要将项目添加到此列表中，请在提交更改之前阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/#third-party-content"&gt;内容指南&lt;/a&gt;。&lt;/div&gt;
&lt;!-- 
Kubernetes' support for direct integration with Docker Engine is deprecated and
has been removed. Most apps do not have a direct dependency on runtime hosting
containers. However, there are still a lot of telemetry and monitoring agents
that have a dependency on Docker to collect containers metadata, logs, and
metrics. This document aggregates information on how to detect these
dependencies as well as links on how to migrate these agents to use generic tools or
alternative runtimes.
--&gt;
&lt;p&gt;Kubernetes 对与 Docker Engine 直接集成的支持已被弃用且已经被删除。
大多数应用程序不直接依赖于托管容器的运行时。但是，仍然有大量的遥测和监控代理依赖
Docker 来收集容器元数据、日志和指标。
本文汇总了一些如何探查这些依赖的信息以及如何迁移这些代理去使用通用工具或其他容器运行时的参考链接。&lt;/p&gt;</description></item><item><title>调度框架</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework/</guid><description>&lt;!--
reviewers:
- ahg-g
title: Scheduling Framework
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
The _scheduling framework_ is a pluggable architecture for the Kubernetes scheduler.
It consists of a set of "plugin" APIs that are compiled directly into the scheduler.
These APIs allow most scheduling features to be implemented as plugins,
while keeping the scheduling "core" lightweight and maintainable. Refer to the
[design proposal of the scheduling framework][kep] for more technical information on
the design of the framework.
--&gt;
&lt;p&gt;&lt;strong&gt;调度框架&lt;/strong&gt;是面向 Kubernetes 调度器的一种插件架构，
它由一组直接编译到调度程序中的“插件” API 组成。
这些 API 允许大多数调度功能以插件的形式实现，同时使调度“核心”保持简单且可维护。
请参考&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md"&gt;调度框架的设计提案&lt;/a&gt;
获取框架设计的更多技术信息。&lt;/p&gt;</description></item><item><title>发布后沟通</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/release-comms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/release-comms/</guid><description>&lt;!--
title: Post-release communications
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The Kubernetes _Release Comms_ team (part of
[SIG Release](https://github.com/kubernetes/community/tree/master/sig-release))
looks after release announcements, which go onto the
[main project blog](/docs/contribute/blog/#main-blog).

After each release, the Release Comms team take over the main blog for a period
and publish a series of additional articles to explain or announce changes related to
that release. These additional articles are termed _post-release comms_.
--&gt;
&lt;p&gt;Kubernetes 的&lt;strong&gt;发布沟通（Release Comms）&lt;/strong&gt; 团队（隶属于
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-release"&gt;SIG Release&lt;/a&gt;）
负责管理发布相关的公告，这些公告会发布在&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/#main-blog"&gt;主项目博客&lt;/a&gt;上。&lt;/p&gt;</description></item><item><title>基于角色的访问控制良好实践</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/rbac-good-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/rbac-good-practices/</guid><description>&lt;!--
reviewers:
title: Role Based Access Control Good Practices
description: &gt;
 Principles and practices for good RBAC design for cluster operators.
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes &lt;a class='glossary-tooltip' title='管理授权决策，允许管理员通过 Kubernetes API 动态配置访问策略。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/rbac/' target='_blank' aria-label='RBAC'&gt;RBAC&lt;/a&gt; is a key security control
to ensure that cluster users and workloads have only the access to resources required to
execute their roles. It is important to ensure that, when designing permissions for cluster
users, the cluster administrator understands the areas where privilege escalation could occur,
to reduce the risk of excessive access leading to security incidents.

The good practices laid out here should be read in conjunction with the general
[RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='管理授权决策，允许管理员通过 Kubernetes API 动态配置访问策略。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/rbac/' target='_blank' aria-label='RBAC'&gt;RBAC&lt;/a&gt;
是一项重要的安全控制措施，用于保证集群用户和工作负载只能访问履行自身角色所需的资源。
在为集群用户设计权限时，请务必确保集群管理员知道可能发生特权提升的地方，
降低因过多权限而导致安全事件的风险。&lt;/p&gt;</description></item><item><title>集群管理员使用动态资源分配的良好实践</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/dra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/dra/</guid><description>&lt;!--
title: Good practices for Dynamic Resource Allocation as a Cluster Admin
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes good practices when configuring a Kubernetes cluster
utilizing Dynamic Resource Allocation (DRA). These instructions are for cluster
administrators.
--&gt;
&lt;p&gt;本文介绍在配置使用动态资源分配（DRA）的 Kubernetes 集群时的良好实践。这些说明适用于集群管理员。&lt;/p&gt;
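正文建议按用户角色划分对 DRA 相关 API 的访问权限。作为示意，下面是一个假设的 RBAC Role（其中名字空间 &lt;code&gt;team-a&lt;/code&gt; 和名称均为虚构示例），仅授予对名字空间作用域的 ResourceClaim 与 ResourceClaimTemplate API 的访问权限：

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: dra-claims-user
rules:
  # ResourceClaim 和 ResourceClaimTemplate 属于 resource.k8s.io API 组，
  # 且都是名字空间作用域的资源
  - apiGroups: ["resource.k8s.io"]
    resources: ["resourceclaims", "resourceclaimtemplates"]
    verbs: ["get", "list", "watch", "create", "delete"]
```

集群作用域的 DeviceClass 与 ResourceSlice 则应保留给管理员和 DRA 驱动，不包含在此类 Role 中。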
&lt;!-- body --&gt;
&lt;!--
## Separate permissions to DRA related APIs

DRA is orchestrated through a number of different APIs. Use authorization tools
(like RBAC, or another solution) to control access to the right APIs depending
on the persona of your user.

In general, DeviceClasses and ResourceSlices should be restricted to admins and
the DRA drivers. Cluster operators that will be deploying Pods with claims will
need access to ResourceClaim and ResourceClaimTemplate APIs; both of these APIs
are namespace scoped.
--&gt;
&lt;h2 id="separate-permissions-to-dra-related-apis"&gt;分离 DRA 相关 API 的权限&lt;/h2&gt;
&lt;p&gt;DRA 是通过多个不同的 API 进行编排的。使用鉴权工具（如 RBAC 或其他方案）根据用户的角色来控制对相关 API 的访问权限。&lt;/p&gt;</description></item><item><title>卷快照</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-snapshots/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-snapshots/</guid><description>&lt;!--
reviewers:
- saad-ali
- thockin
- msau42
- jingxu97
- xing-yang
- yuxiangqian
title: Volume Snapshots
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage
system. This document assumes that you are already familiar with Kubernetes
[persistent volumes](/docs/concepts/storage/persistent-volumes/).
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;strong&gt;卷快照（VolumeSnapshot）&lt;/strong&gt;表示存储系统上某个卷的快照。本文假设你已经熟悉 Kubernetes
的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;持久卷&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Introduction
--&gt;
&lt;h2 id="introduction"&gt;介绍&lt;/h2&gt;
&lt;!--
Similar to how API resources `PersistentVolume` and `PersistentVolumeClaim` are
used to provision volumes for users and administrators, `VolumeSnapshotContent`
and `VolumeSnapshot` API resources are provided to create volume snapshots for
users and administrators.
--&gt;
&lt;p&gt;与 &lt;code&gt;PersistentVolume&lt;/code&gt; 和 &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; 这两个 API 资源用于给用户和管理员制备卷类似，
&lt;code&gt;VolumeSnapshotContent&lt;/code&gt; 和 &lt;code&gt;VolumeSnapshot&lt;/code&gt; 这两个 API 资源用于给用户和管理员创建卷快照。&lt;/p&gt;</description></item><item><title>利用 kubeadm 创建高可用集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Creating Highly Available Clusters with kubeadm
content_type: task
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains two different approaches to setting up a highly available Kubernetes
cluster using kubeadm:

- With stacked control plane nodes. This approach requires less infrastructure. The etcd members
 and control plane nodes are co-located.
- With an external etcd cluster. This approach requires more infrastructure. The
 control plane nodes and etcd members are separated.
--&gt;
&lt;p&gt;本文讲述了使用 kubeadm 设置一个高可用的 Kubernetes 集群的两种不同方式：&lt;/p&gt;</description></item><item><title>临时容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/ephemeral-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/ephemeral-containers/</guid><description>&lt;!--
reviewers:
- verb
- yujuhong
title: Ephemeral Containers
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page provides an overview of ephemeral containers: a special type of container
that runs temporarily in an existing &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; to
accomplish user-initiated actions such as troubleshooting. You use ephemeral
containers to inspect services rather than to build applications.
--&gt;
&lt;p&gt;本页面概述了临时容器：一种特殊的容器，该容器在现有
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;
中临时运行，以便完成用户发起的操作，例如故障排查。
你会使用临时容器来检查服务，而不是用它来构建应用程序。&lt;/p&gt;</description></item><item><title>配置 Pod 的服务质量</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/</guid><description>&lt;!--
title: Configure Quality of Service for Pods
content_type: task
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure Pods so that they will be assigned particular
&lt;a class='glossary-tooltip' title='QoS 类（Quality of Service Class）为 Kubernetes 提供了一种将集群中的 Pod 分为几个类并做出有关调度和驱逐决策的方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='Quality of Service (QoS) classes'&gt;Quality of Service (QoS) classes&lt;/a&gt;.
Kubernetes uses QoS classes to make decisions about evicting Pods when Node resources are exceeded.
--&gt;
&lt;p&gt;本页介绍怎样配置 Pod 以让其归属于特定的
&lt;a class='glossary-tooltip' title='QoS 类（Quality of Service Class）为 Kubernetes 提供了一种将集群中的 Pod 分为几个类并做出有关调度和驱逐决策的方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='服务质量类（Quality of Service class，QoS class）'&gt;服务质量类（Quality of Service class，QoS class）&lt;/a&gt;。
Kubernetes 在 Node 资源不足时使用 QoS 类来就驱逐 Pod 作出决定。&lt;/p&gt;</description></item><item><title>配置命名空间下 Pod 配额</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/</guid><description>&lt;!--
title: Configure a Pod Quota for a Namespace
content_type: task
weight: 60
description: &gt;-
 Restrict how many Pods you can create within a namespace.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to set a quota for the total number of Pods that can run
in a &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='Namespace'&gt;Namespace&lt;/a&gt;. You specify quotas in a
[ResourceQuota](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
object.
--&gt;
&lt;p&gt;本文主要介绍如何在&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='命名空间'&gt;命名空间&lt;/a&gt;中设置可运行 Pod 总数的配额。
你可以通过使用
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/"&gt;ResourceQuota&lt;/a&gt;
对象来配置配额。&lt;/p&gt;</description></item><item><title>启用或禁用特性门控</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/configure-feature-gates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/configure-feature-gates/</guid><description>&lt;!--
title: Enable Or Disable Feature Gates
content_type: task
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to enable or disable feature gates to control specific Kubernetes
features in your cluster. Enabling feature gates allows you to test and use Alpha or
Beta features before they become generally available.
--&gt;
&lt;p&gt;本页介绍如何启用或禁用特性门控（feature gates），
以便在你的集群中控制特定的 Kubernetes 特性。
启用特性门控可以让你在特性正式发布（GA）之前，测试并使用 Alpha 或 Beta 特性。&lt;/p&gt;
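特性门控既可以通过组件的 &lt;code&gt;--feature-gates&lt;/code&gt; 命令行参数设置，也可以在组件配置文件中设置。作为示意，下面是一个假设的 kubelet 配置片段（其中的门控名称 &lt;code&gt;SomeAlphaFeature&lt;/code&gt; 只是占位符，请替换为真实存在的特性门控名称）：

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# featureGates 是一个 "门控名称 -> 布尔值" 的映射
featureGates:
  SomeAlphaFeature: true
```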

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
For some stable (GA) gates, you can also disable them, usually for one minor release
after GA; however if you do that, your cluster may not be conformant as Kubernetes.
--&gt;
&lt;p&gt;对于某些稳定（GA）的特性门控，你也可以禁用它们，通常只允许在 GA 之后的一个次要版本中这样做；
但如果你这样做，你的集群可能不再符合 Kubernetes 一致性（conformance）要求。&lt;/p&gt;</description></item><item><title>日志架构</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/logging/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/logging/</guid><description>&lt;!--
reviewers:
- piosz
- x13n
title: Logging Architecture
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Application logs can help you understand what is happening inside your application. The
logs are particularly useful for debugging problems and monitoring cluster activity. Most
modern applications have some kind of logging mechanism. Likewise, container engines
are designed to support logging. The easiest and most adopted logging method for
containerized applications is writing to standard output and standard error streams.
--&gt;
&lt;p&gt;应用日志可以让你了解应用内部的运行状况。日志对调试问题和监控集群活动非常有用。
大部分现代化应用都有某种日志记录机制。同样地，容器引擎也被设计成支持日志记录。
针对容器化应用，最简单且最广泛采用的日志记录方式就是写入标准输出和标准错误流。&lt;/p&gt;</description></item><item><title>容器运行时接口（CRI）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/cri/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/containers/cri/</guid><description>&lt;!-- 
title: Container Runtime Interface (CRI)
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
The CRI is a plugin interface which enables the kubelet to use a wide variety of
container runtimes, without having a need to recompile the cluster components.

You need a working
&lt;a class='glossary-tooltip' title='容器运行时是负责运行容器的软件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt; on
each Node in your cluster, so that the
&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; can launch
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and their containers.
--&gt;
&lt;p&gt;CRI 是一个插件接口，它使 kubelet 能够使用各种容器运行时，无需重新编译集群组件。&lt;/p&gt;</description></item><item><title>删除 StatefulSet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/delete-stateful-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/delete-stateful-set/</guid><description>&lt;!--
reviewers:
- bprashanth
- erictune
- foxish
- janetkuo
- smarterclayton
title: Delete a StatefulSet
content_type: task
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This task shows you how to delete a &lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;.
--&gt;
&lt;p&gt;本任务展示如何删除 &lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
- This task assumes you have an application running on your cluster represented by a StatefulSet.
--&gt;
&lt;ul&gt;
&lt;li&gt;本任务假设在你的集群上已经运行了由 StatefulSet 创建的应用。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## Deleting a StatefulSet

You can delete a StatefulSet in the same way you delete other resources in Kubernetes:
use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
--&gt;
&lt;h2 id="deleting-a-statefulset"&gt;删除 StatefulSet&lt;/h2&gt;
&lt;p&gt;你可以像删除 Kubernetes 中的其他资源一样删除 StatefulSet：
使用 &lt;code&gt;kubectl delete&lt;/code&gt; 命令，并按文件或者名字指定 StatefulSet。&lt;/p&gt;</description></item><item><title>使用 HostAliases 向 Pod /etc/hosts 文件添加条目</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/customize-hosts-file-for-pods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/customize-hosts-file-for-pods/</guid><description>&lt;!--
reviewers:
- rickypai
- thockin
title: Adding entries to Pod /etc/hosts with HostAliases
content_type: task
weight: 60
min-kubernetes-server-version: 1.7
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.

The Kubernetes project recommends modifying DNS configuration using the `hostAliases` field
(part of the `.spec` for a Pod), and not by using an init container or other means to edit `/etc/hosts`
directly.
Change made in other ways may be overwritten by the kubelet during Pod creation or restart.
--&gt;
&lt;p&gt;当 DNS 配置以及其他选项不适用的时候，通过向 Pod 的 &lt;code&gt;/etc/hosts&lt;/code&gt; 文件中添加条目，
可以在 Pod 级别覆盖对主机名的解析。你可以通过 PodSpec 的 HostAliases
字段来添加这些自定义条目。&lt;/p&gt;</description></item><item><title>使用 kubeconfig 文件组织集群访问</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/</guid><description>&lt;!--
title: Organizing Cluster Access Using kubeconfig Files
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Use kubeconfig files to organize information about clusters, users, namespaces, and
authentication mechanisms. The `kubectl` command-line tool uses kubeconfig files to
find the information it needs to choose a cluster and communicate with the API server
of a cluster.
--&gt;
&lt;p&gt;使用 kubeconfig 文件来组织有关集群、用户、命名空间和身份认证机制的信息。
&lt;code&gt;kubectl&lt;/code&gt; 命令行工具使用 kubeconfig 文件来查找选择集群所需的信息，并与集群的 API 服务器进行通信。&lt;/p&gt;
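&lt;p&gt;一个最小的 kubeconfig 文件大致如下（其中的集群名、用户名与服务器地址均为假设的示例值）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    server: https://example.com:6443
users:
- name: example-user
  user:
    token: REDACTED
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
    namespace: default
current-context: example-context
```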

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
A file that is used to configure access to clusters is called
a *kubeconfig file*. This is a generic way of referring to configuration files.
It does not mean that there is a file named `kubeconfig`.
--&gt;
&lt;p&gt;用于配置集群访问的文件称为 &lt;strong&gt;kubeconfig 文件&lt;/strong&gt;。
这是一种引用配置文件的通用方式，并不意味着存在名为 &lt;code&gt;kubeconfig&lt;/code&gt; 的文件。&lt;/p&gt;</description></item><item><title>使用 Kubernetes API 访问集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/access-cluster-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/access-cluster-api/</guid><description>&lt;!--
title: Access Clusters Using the Kubernetes API
content_type: task
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to access clusters using the Kubernetes API.
--&gt;
&lt;p&gt;本页展示了如何使用 Kubernetes API 访问集群。&lt;/p&gt;
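&lt;p&gt;作为概览，一种常见的访问方式是先运行 &lt;code&gt;kubectl proxy&lt;/code&gt;，再通过本地端口访问 API（下面的端口号为示例值）：&lt;/p&gt;

```shell
# 在后台启动到 API 服务器的代理
kubectl proxy --port=8080 &
# 通过代理查询 API 版本信息
curl http://localhost:8080/api/
```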
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 Pod 失效策略处理可重试和不可重试的 Pod 失效</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/pod-failure-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/job/pod-failure-policy/</guid><description>&lt;!--
title: Handling retriable and non-retriable pod failures with Pod failure policy
content_type: task
min-kubernetes-server-version: v1.25
weight: 60
--&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： JobPodFailurePolicy"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.31 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;!--
This document shows you how to use the
[Pod failure policy](/docs/concepts/workloads/controllers/job#pod-failure-policy),
in combination with the default
[Pod backoff failure policy](/docs/concepts/workloads/controllers/job#pod-backoff-failure-policy),
to improve the control over the handling of container- or Pod-level failure
within a &lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Job'&gt;Job&lt;/a&gt;.
--&gt;
&lt;p&gt;本文向你展示如何结合默认的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job#pod-backoff-failure-policy"&gt;Pod 回退失效策略&lt;/a&gt;来使用
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job#pod-failure-policy"&gt;Pod 失效策略&lt;/a&gt;，
以改善 &lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Job'&gt;Job&lt;/a&gt; 内处理容器级别或 Pod 级别的失效。&lt;/p&gt;</description></item><item><title>使用 Weave Net 提供 NetworkPolicy</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/</guid><description>&lt;!--
reviewers:
- bboreham
title: Weave Net for NetworkPolicy
content_type: task
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use Weave Net for NetworkPolicy.
--&gt;
&lt;p&gt;本页展示如何使用 Weave Net 提供 NetworkPolicy。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster. Follow the
[kubeadm getting started guide](/docs/reference/setup-tools/kubeadm/) to bootstrap one.
 --&gt;
&lt;p&gt;你需要拥有一个 Kubernetes 集群。按照
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;kubeadm 入门指南&lt;/a&gt;
来启动一个。&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;!--
## Install the Weave Net addon

Follow the [Integrating Kubernetes via the Addon](https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#-installation) guide.

The Weave Net addon for Kubernetes comes with a
[Network Policy Controller](https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#network-policy)
that automatically monitors Kubernetes for any NetworkPolicy annotations on all
namespaces and configures `iptables` rules to allow or block traffic as directed by the policies.
--&gt;
&lt;h2 id="install-the-weave-net-addon"&gt;安装 Weave Net 插件&lt;/h2&gt;
&lt;p&gt;按照&lt;a href="https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#-installation"&gt;通过插件集成 Kubernetes&lt;/a&gt;
指南执行安装。&lt;/p&gt;</description></item><item><title>使用存储版本迁移功能来迁移 Kubernetes 对象</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/storage-version-migration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-kubernetes-objects/storage-version-migration/</guid><description>&lt;!--
title: Migrate Kubernetes Objects Using Storage Version Migration
reviewers:
- deads2k
- jpbetz
- enj
- nilekhc
content_type: task
min-kubernetes-server-version: v1.30
weight: 60
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="特性门控： StorageVersionMigrator"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [beta]&lt;/code&gt;（默认禁用）&lt;/div&gt;

&lt;!--
Kubernetes relies on API data being actively re-written, to support some
maintenance activities related to at rest storage. Two prominent examples are
the versioned schema of stored resources (that is, the preferred storage schema
changing from v1 to v2 for a given resource) and encryption at rest
(that is, rewriting stale data based on a change in how the data should be encrypted).
--&gt;
&lt;p&gt;Kubernetes 依赖于对 API 数据的主动重写，以支持与静态存储相关的一些维护活动。
两个典型的例子是已存储资源的版本化模式（即针对给定资源的首选存储模式从 v1 更改为 v2）
和静态加密（即基于数据加密方式的变化来重写过时的数据）。&lt;/p&gt;</description></item><item><title>使用服务来访问集群中的应用</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/</guid><description>&lt;!--
title: Use a Service to Access an Application in a Cluster
content_type: tutorial
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to create a Kubernetes Service object that external
clients can use to access an application running in a cluster. The Service
provides load balancing for an application that has two running instances.
--&gt;
&lt;p&gt;本文展示如何创建一个 Kubernetes 服务（Service）对象，让外部客户端能够访问集群中运行的应用。
该服务为应用的两个运行实例提供负载均衡。&lt;/p&gt;
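&lt;p&gt;这类 Service 的清单大致如下（其中的名称、标签与端口均为假设的示例值）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort        # 通过节点端口向集群外部暴露应用
  selector:
    app: example        # 选择带有该标签的 Pod 作为后端
  ports:
  - port: 8080
    targetPort: 8080
```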
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>探索 Pod 及其端点的终止行为</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/pods-and-endpoint-termination-flow/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/pods-and-endpoint-termination-flow/</guid><description>&lt;!--
title: Explore Termination Behavior for Pods And Their Endpoints
content_type: tutorial
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Once you connected your Application with Service following steps
like those outlined in [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/),
you have a continuously running, replicated application, that is exposed on a network.
This tutorial helps you look at the termination flow for Pods and to explore ways to implement
graceful connection draining.
--&gt;
&lt;p&gt;一旦你按照&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/services/connect-applications-service/"&gt;使用 Service 连接到应用&lt;/a&gt;中概述的步骤，使用
Service 连接好了你的应用，你就拥有了一个持续运行、多副本、已暴露在网络上的应用。
本教程帮助你了解 Pod 的终止流程，探索实现连接排空的几种方式。&lt;/p&gt;</description></item><item><title>图表指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/diagram-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/diagram-guide/</guid><description>&lt;!--
title: Diagram Guide
linktitle: Diagram guide
content_type: concept
weight: 60
--&gt;
&lt;!--Overview--&gt;
&lt;!--
This guide shows you how to create, edit and share diagrams using the Mermaid
JavaScript library. Mermaid.js allows you to generate diagrams using a simple
markdown-like syntax inside Markdown files. You can also use Mermaid to
generate `.svg` or `.png` image files that you can add to your documentation.

The target audience for this guide is anybody wishing to learn about Mermaid
and/or how to create and add diagrams to Kubernetes documentation.

Figure 1 outlines the topics covered in this section. 
--&gt;
&lt;p&gt;本指南为你展示如何创建、编辑和分享基于 Mermaid JavaScript 库的图表。
Mermaid.js 允许你使用简单的、类似于 Markdown 的语法来在 Markdown 文件中生成图表。
你也可以使用 Mermaid 来创建 &lt;code&gt;.svg&lt;/code&gt; 或 &lt;code&gt;.png&lt;/code&gt; 图片文件，将其添加到你的文档中。&lt;/p&gt;</description></item><item><title>证书和证书签名请求</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/</guid><description>&lt;!--
reviewers:
- liggitt
- mikedanese
- munnerz
- enj
title: Certificates and Certificate Signing Requests
api_metadata:
- apiVersion: "certificates.k8s.io/v1"
 kind: "CertificateSigningRequest"
 override_link_text: "CSR v1"
- apiVersion: "certificates.k8s.io/v1alpha1"
 kind: "ClusterTrustBundle" 
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes certificate and trust bundle APIs enable automation of
[X.509](https://www.itu.int/rec/T-REC-X.509) credential provisioning by providing
a programmatic interface for clients of the Kubernetes API to request and obtain
X.509 &lt;a class='glossary-tooltip' title='证书是个安全加密文件，用来确认对 Kubernetes 集群访问的合法性。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/' target='_blank' aria-label='certificates'&gt;certificates&lt;/a&gt; from a Certificate Authority (CA).

There is also experimental (alpha) support for distributing [trust bundles](#cluster-trust-bundles).
--&gt;
&lt;p&gt;Kubernetes 证书和信任包（trust bundle）API 可以通过为 Kubernetes API 的客户端提供编程接口，
实现 &lt;a href="https://www.itu.int/rec/T-REC-X.509"&gt;X.509&lt;/a&gt; 凭据的自动化制备，
从而请求并获取证书颁发机构（CA）发布的 X.509 &lt;a class='glossary-tooltip' title='证书是个安全加密文件，用来确认对 Kubernetes 集群访问的合法性。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/' target='_blank' aria-label='证书'&gt;证书&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>注解</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/annotations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/annotations/</guid><description>&lt;!--
title: Annotations
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
You can use Kubernetes annotations to attach arbitrary non-identifying metadata
to &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt;.
Clients such as tools and libraries can retrieve this metadata.
--&gt;
&lt;p&gt;你可以使用 Kubernetes 注解为&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;附加任意的非标识的元数据。
客户端程序（例如工具和库）能够获取这些元数据信息。&lt;/p&gt;
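&lt;p&gt;例如，可以在对象的 &lt;code&gt;metadata.annotations&lt;/code&gt; 中记录这类元数据（下面的注解键值均为假设的示例）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```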
&lt;!-- body --&gt;
&lt;!--
## Attaching metadata to objects

You can use either labels or annotations to attach metadata to Kubernetes
objects. Labels can be used to select objects and to find
collections of objects that satisfy certain conditions. In contrast, annotations
are not used to identify and select objects. The metadata
in an annotation can be small or large, structured or unstructured, and can
include characters not permitted by labels. It is possible to use labels as 
well as annotations in the metadata of the same object.

Annotations, like labels, are key/value maps:
--&gt;
&lt;h2 id="为对象附加元数据"&gt;为对象附加元数据&lt;/h2&gt;
&lt;p&gt;你可以使用标签或注解将元数据附加到 Kubernetes 对象。
标签可以用来选择对象和查找满足某些条件的对象集合。相反，注解不用于标识和选择对象。
注解中的元数据，可以很小，也可以很大，可以是结构化的，也可以是非结构化的，能够包含标签不允许的字符。
可以在同一对象的元数据中同时使用标签和注解。&lt;/p&gt;</description></item><item><title>卷快照类</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-snapshot-classes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-snapshot-classes/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This document describes the concept of VolumeSnapshotClass in Kubernetes. Familiarity
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.
--&gt;
&lt;p&gt;本文档描述了 Kubernetes 中 VolumeSnapshotClass 的概念。建议熟悉
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-snapshots/"&gt;卷快照（Volume Snapshots）&lt;/a&gt;和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-classes"&gt;存储类（Storage Class）&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Introduction

Just like StorageClass provides a way for administrators to describe the "classes"
of storage they offer when provisioning a volume, VolumeSnapshotClass provides a
way to describe the "classes" of storage when provisioning a volume snapshot.
--&gt;
&lt;h2 id="introduction"&gt;介绍&lt;/h2&gt;
&lt;p&gt;就像 StorageClass 为管理员提供了一种在配置卷时描述存储“类”的方法，
VolumeSnapshotClass 提供了一种在配置卷快照时描述存储“类”的方法。&lt;/p&gt;</description></item><item><title>Kubernetes 中的 Windows 容器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/windows/intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/windows/intro/</guid><description>&lt;!--
reviewers:
- jayunit100
- jsturtevant
- marosset
- perithompson
title: Windows containers in Kubernetes
content_type: concept
weight: 65
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Windows applications constitute a large portion of the services and applications that
run in many organizations. [Windows containers](https://aka.ms/windowscontainers)
provide a way to encapsulate processes and package dependencies, making it easier
to use DevOps practices and follow cloud native patterns for Windows applications.

Organizations with investments in Windows-based applications and Linux-based
applications don't have to look for separate orchestrators to manage their workloads,
leading to increased operational efficiencies across their deployments, regardless
of operating system.
--&gt;
&lt;p&gt;在许多组织中，所运行的很大一部分服务和应用是 Windows 应用。
&lt;a href="https://aka.ms/windowscontainers"&gt;Windows 容器&lt;/a&gt;提供了一种封装进程和包依赖项的方式，
从而简化了 DevOps 实践，令 Windows 应用同样遵从云原生模式。&lt;/p&gt;</description></item><item><title>动态资源分配</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/</guid><description>&lt;!--
reviewers:
- klueska
- pohly
title: Dynamic Resource Allocation
content_type: concept
weight: 65
api_metadata:
- apiVersion: "resource.k8s.io/v1alpha3"
 kind: "DeviceTaintRule"
- apiVersion: "resource.k8s.io/v1beta1"
 kind: "ResourceClaim"
- apiVersion: "resource.k8s.io/v1beta1"
 kind: "ResourceClaimTemplate"
- apiVersion: "resource.k8s.io/v1beta1"
 kind: "DeviceClass"
- apiVersion: "resource.k8s.io/v1beta1"
 kind: "ResourceSlice"
- apiVersion: "resource.k8s.io/v1beta2"
 kind: "ResourceClaim"
- apiVersion: "resource.k8s.io/v1beta2"
 kind: "ResourceClaimTemplate"
- apiVersion: "resource.k8s.io/v1beta2"
 kind: "DeviceClass"
- apiVersion: "resource.k8s.io/v1beta2"
 kind: "ResourceSlice"
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page describes _dynamic resource allocation (DRA)_ in Kubernetes.
--&gt;
&lt;p&gt;本页描述 Kubernetes 中的 &lt;strong&gt;动态资源分配（DRA）&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>CSI 卷克隆</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-pvc-datasource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-pvc-datasource/</guid><description>&lt;!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: CSI Volume Cloning
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document describes the concept of cloning existing CSI Volumes in Kubernetes. 
Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
--&gt;
&lt;p&gt;本文档介绍 Kubernetes 中克隆现有 CSI 卷的概念。阅读前建议先熟悉
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes"&gt;卷&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Introduction

The &lt;a class='glossary-tooltip' title='容器存储接口 （CSI）定义了存储系统暴露给容器的标准接口。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#csi' target='_blank' aria-label='CSI'&gt;CSI&lt;/a&gt; Volume Cloning feature adds 
support for specifying existing &lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PVC'&gt;PVC&lt;/a&gt;s 
in the `dataSource` field to indicate a user would like to clone a &lt;a class='glossary-tooltip' title='包含可被 Pod 中容器访问的数据的目录。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/' target='_blank' aria-label='卷（Volume）'&gt;卷（Volume）&lt;/a&gt;.
--&gt;
&lt;h2 id="介绍"&gt;介绍&lt;/h2&gt;
&lt;p&gt;&lt;a class='glossary-tooltip' title='容器存储接口 （CSI）定义了存储系统暴露给容器的标准接口。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#csi' target='_blank' aria-label='CSI'&gt;CSI&lt;/a&gt; 卷克隆功能增加了这样的支持：允许在
&lt;code&gt;dataSource&lt;/code&gt; 字段中指定现有的
&lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PVC'&gt;PVC&lt;/a&gt;，
来表示用户想要克隆的 &lt;a class='glossary-tooltip' title='包含可被 Pod 中容器访问的数据的目录。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/' target='_blank' aria-label='卷（Volume）'&gt;卷（Volume）&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>kubeadm token</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-token/</guid><description>&lt;!--
reviewers:
- luxas
- jbeda
title: kubeadm token
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Bootstrap tokens are used for establishing bidirectional trust between a node joining
the cluster and a control-plane node, as described in [authenticating with bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
--&gt;
&lt;p&gt;如&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/"&gt;使用引导令牌进行身份验证&lt;/a&gt;所述，
引导令牌用于在即将加入集群的节点和控制平面节点间建立双向认证。&lt;/p&gt;
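&lt;p&gt;常用的令牌管理操作大致如下（其中的令牌值为假设的示例）：&lt;/p&gt;

```shell
# 列出当前所有引导令牌
kubeadm token list
# 创建一个新的引导令牌（默认 TTL 为 24 小时）
kubeadm token create
# 删除指定令牌（此处的令牌为假设值）
kubeadm token delete abcdef.0123456789abcdef
```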
&lt;!--
`kubeadm init` creates an initial token with a 24-hour TTL. The following commands allow you to manage
such a token and also to create and manage new ones.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; 创建了一个有效期为 24 小时的令牌，下面的命令允许你管理令牌，也可以创建和管理新的令牌。&lt;/p&gt;</description></item><item><title>kubectl 用户偏好设置（kuberc）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kuberc/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kuberc/</guid><description>&lt;!--
title: Kubectl user preferences (kuberc)
content_type: concept
weight: 70
--&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes 1.34 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
A Kubernetes `kuberc` configuration file allows you to define preferences for
&lt;a class='glossary-tooltip' title='kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt;,
such as default options and command aliases. Unlike the kubeconfig file, a `kuberc`
configuration file does **not** contain cluster details, usernames or passwords.
--&gt;
&lt;p&gt;Kubernetes &lt;code&gt;kuberc&lt;/code&gt; 配置文件允许你定义 &lt;a class='glossary-tooltip' title='kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt;
的偏好设置，例如默认选项和命令别名。
与 kubeconfig 文件不同，&lt;code&gt;kuberc&lt;/code&gt; 配置文件&lt;strong&gt;不&lt;/strong&gt;包含集群详情、用户名或密码。&lt;/p&gt;</description></item><item><title>Kubernetes Secret 良好实践</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/secrets-good-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/secrets-good-practices/</guid><description>&lt;!--
title: Good practices for Kubernetes Secrets
description: &gt;
 Principles and practices for good Secret management for cluster administrators and application developers.
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
---
title: Secret
id: secret
date: 2018-04-12
full_link: /docs/concepts/configuration/secret/
short_description: &gt;
 Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.

aka: 
tags:
- core-object
- security
---
--&gt;
&lt;!--
 Stores sensitive information, such as passwords, OAuth tokens, and SSH keys.
--&gt;
&lt;p&gt;在 Kubernetes 中，Secret 是用于存储敏感信息（如密码、OAuth 令牌和 SSH 密钥）的对象。&lt;/p&gt;</description></item><item><title>Kubernetes 控制平面组件的兼容版本</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/compatibility-version/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/compatibility-version/</guid><description>&lt;!--
title: Compatibility Version For Kubernetes Control Plane Components
reviewers:
- jpbetz
- siyuanfoundation
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Since release v1.32, we introduced configurable version compatibility and
emulation options to Kubernetes control plane components to make upgrades
safer by providing more control and increasing the granularity of steps
available to cluster administrators.
--&gt;
&lt;p&gt;自 v1.32 版本起，我们为 Kubernetes 控制平面组件引入了可配置的版本兼容性和仿真选项，
通过为集群管理员提供更多控制手段并细化可执行的升级步骤，使升级更加安全。&lt;/p&gt;
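&lt;p&gt;例如，可以按如下方式设置仿真版本（示意片段；具体参数取值请以对应版本的组件文档为准）：&lt;/p&gt;

```shell
# 假设二进制版本为 v1.35，让 kube-apiserver 仿真 v1.34 的行为
kube-apiserver --emulated-version=1.34
```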
&lt;!-- body --&gt;
&lt;!--
## Emulated Version

The emulation option is set by the `--emulated-version` flag of control plane components.
It allows the component to emulate the behavior (APIs, features, ...) of an earlier version
of Kubernetes.
--&gt;
&lt;h2 id="emulated-version"&gt;仿真版本&lt;/h2&gt;
&lt;p&gt;仿真选项通过控制平面组件的 &lt;code&gt;--emulated-version&lt;/code&gt; 参数来设置。
此选项允许控制平面组件仿真 Kubernetes 早期版本的行为（API、特性等）。&lt;/p&gt;</description></item><item><title>Kubernetes 系统组件指标</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/system-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/system-metrics/</guid><description>&lt;!--
title: Metrics For Kubernetes System Components
reviewers:
- brancz
- logicalhan
- RainbowMango
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
System component metrics can give a better look into what is happening inside them. Metrics are
particularly useful for building dashboards and alerts.

Kubernetes components emit metrics in [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/).
This format is structured plain text, designed so that people and machines can both read it.
--&gt;
&lt;p&gt;通过系统组件指标可以更好地了解系统组件内部发生的情况。系统组件指标对于构建仪表板和告警特别有用。&lt;/p&gt;</description></item><item><title>编组调度（Gang Scheduling）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/gang-scheduling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/gang-scheduling/</guid><description>&lt;!--
title: Gang Scheduling
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha" title="特性门控：GangScheduling"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.35 [alpha]&lt;/code&gt;（默认禁用）
 &lt;/div&gt;

&lt;!--
Gang scheduling ensures that a group of Pods are scheduled on an "all-or-nothing" basis.
If the cluster cannot accommodate the entire group (or a defined minimum number of Pods),
none of the Pods are bound to a node.

This feature depends on the [Workload API](/docs/concepts/workloads/workload-api/).
Ensure the [`GenericWorkload`](/docs/reference/command-line-tools-reference/feature-gates/#GenericWorkload)
feature gate and the `scheduling.k8s.io/v1alpha1`
&lt;a class='glossary-tooltip' title='Kubernetes API 中的一组相关路径。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API group'&gt;API group&lt;/a&gt; are enabled in the cluster.
--&gt;
&lt;p&gt;编组调度（Gang Scheduling）确保一组 Pod 以 &lt;strong&gt;全有或全无（all-or-nothing）&lt;/strong&gt; 的方式进行调度。
如果集群无法容纳整个组（或某确定的最小 Pod 数量），则不会将任何 Pod 绑定到节点上。&lt;/p&gt;</description></item><item><title>调度器性能调优</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/</guid><description>&lt;!--
---
reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_type: concept
weight: 70
---
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.14 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
[kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler)
is the Kubernetes default scheduler. It is responsible for placement of Pods
on Nodes in a cluster.
--&gt;
&lt;p&gt;作为 Kubernetes 集群的默认调度器，
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler"&gt;kube-scheduler&lt;/a&gt;
主要负责将 Pod 调度到集群的 Node 上。&lt;/p&gt;
&lt;!--
Nodes in a cluster that meet the scheduling requirements of a Pod are
called _feasible_ Nodes for the Pod. The scheduler finds feasible Nodes
for a Pod and then runs a set of functions to score the feasible Nodes,
picking a Node with the highest score among the feasible ones to run
the Pod. The scheduler then notifies the API server about this decision
in a process called _Binding_.
--&gt;
&lt;p&gt;在一个集群中，满足一个 Pod 调度请求的所有 Node 称之为&lt;strong&gt;可调度&lt;/strong&gt; Node。
调度器先在集群中找到一个 Pod 的可调度 Node，然后根据一系列函数对这些可调度 Node 打分，
之后选出其中得分最高的 Node 来运行 Pod。
最后，调度器将这个调度决定告知 kube-apiserver，这个过程叫做&lt;strong&gt;绑定（Binding）&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>干扰（Disruptions）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/disruptions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/disruptions/</guid><description>&lt;!--
reviewers:
- erictune
- foxish
- davidopp
title: Disruptions
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This guide is for application owners who want to build
highly available applications, and thus need to understand
what types of disruptions can happen to Pods.
--&gt;
&lt;p&gt;本指南针对的是希望构建高可用性应用的应用所有者，他们有必要了解可能发生在 Pod 上的干扰类型。&lt;/p&gt;
&lt;!--
It is also for cluster administrators who want to perform automated
cluster actions, like upgrading and autoscaling clusters.
--&gt;
&lt;p&gt;文档同样适用于想要执行自动化集群操作（例如升级和自动扩展集群）的集群管理员。&lt;/p&gt;</description></item><item><title>垃圾收集</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/garbage-collection/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/garbage-collection/</guid><description>&lt;!--
title: Garbage Collection
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up cluster resources. This
allows the clean up of resources like the following:
--&gt;
&lt;p&gt;垃圾收集（Garbage Collection）是 Kubernetes 用于清理集群资源的各种机制的统称。
垃圾收集允许系统清理如下资源：&lt;/p&gt;
&lt;!--
* [Terminated pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)
* [Completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
* [Objects without owner references](#owners-dependents)
* [Unused containers and container images](#containers-images)
* [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete)
* [Stale or expired CertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
* &lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt; deleted in the following scenarios:
 * On a cloud when the cluster uses a [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
 * On-premises when the cluster uses an addon similar to a cloud controller
 manager
* [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats)
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection"&gt;终止的 Pod&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/"&gt;已完成的 Job&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#owners-dependents"&gt;不再存在属主引用的对象&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#containers-images"&gt;未使用的容器和容器镜像&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#delete"&gt;动态制备的、StorageClass 回收策略为 Delete 的 PV 卷&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process"&gt;过时或已过期的 CertificateSigningRequest (CSR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;在以下情形中删除了的&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;对象：
&lt;ul&gt;
&lt;li&gt;当集群使用&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/"&gt;云控制器管理器&lt;/a&gt;运行于云端时；&lt;/li&gt;
&lt;li&gt;当集群使用类似于云控制器管理器的插件运行在本地环境中时。&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/#heartbeats"&gt;节点租约对象&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
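&lt;p&gt;作为示意，下面是一个由 ReplicaSet 拥有的 Pod 的元数据片段（名称和 UID 均为虚构）。
垃圾收集器正是依据这类属主引用来判断对象是否应被清理：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: default
  ownerReferences:
  # 属主对象：删除该 ReplicaSet 时，此 Pod 默认也会被级联删除
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: example-replicaset
    uid: d9607e19-f88f-11e6-a518-42010a800195
    controller: true
    blockOwnerDeletion: true
```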
&lt;!--
## Owners and dependents {#owners-dependents}

Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
Owner references tell the control plane which objects are dependent on others.
Kubernetes uses owner references to give the control plane, and other API
clients, the opportunity to clean up related resources before deleting an
object. In most cases, Kubernetes manages owner references automatically.
--&gt;
&lt;h2 id="owners-dependents"&gt;属主与依赖&lt;/h2&gt;
&lt;p&gt;Kubernetes 中很多对象通过&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/"&gt;&lt;strong&gt;属主引用&lt;/strong&gt;&lt;/a&gt;
链接到彼此。属主引用（Owner Reference）可以告诉控制面哪些对象依赖于其他对象。
Kubernetes 使用属主引用来为控制面以及其他 API 客户端在删除某对象时提供一个清理关联资源的机会。
在大多数场合，Kubernetes 都是自动管理属主引用的。&lt;/p&gt;</description></item><item><title>强制删除 StatefulSet 中的 Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/</guid><description>&lt;!--
reviewers:
- bprashanth
- erictune
- foxish
- smarterclayton
title: Force Delete StatefulSet Pods
content_type: task
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to delete Pods which are part of a
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='stateful set'&gt;stateful set&lt;/a&gt;,
and explains the considerations to keep in mind when doing so.
--&gt;
&lt;p&gt;本文介绍如何删除 &lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;
管理的 Pod，并解释这样操作时需要记住的一些注意事项。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
- This is a fairly advanced task and has the potential to violate some of the properties
 inherent to StatefulSet.
- Before proceeding, make yourself familiar with the considerations enumerated below.
--&gt;
&lt;ul&gt;
&lt;li&gt;这是一项相当高级的任务，并且可能会违反 StatefulSet 固有的某些属性。&lt;/li&gt;
&lt;li&gt;请先熟悉下面列举的注意事项再开始操作。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## StatefulSet considerations

In normal operation of a StatefulSet, there is **never** a need to force delete a StatefulSet Pod.
The [StatefulSet controller](/docs/concepts/workloads/controllers/statefulset/) is responsible for
creating, scaling and deleting members of the StatefulSet. It tries to ensure that the specified
number of Pods from ordinal 0 through N-1 are alive and ready. StatefulSet ensures that, at any time,
there is at most one Pod with a given identity running in a cluster. This is referred to as
*at most one* semantics provided by a StatefulSet.
--&gt;
&lt;h2 id="statefulset-considerations"&gt;StatefulSet 注意事项&lt;/h2&gt;
&lt;p&gt;在正常操作 StatefulSet 时，&lt;strong&gt;永远不&lt;/strong&gt;需要强制删除 StatefulSet 管理的 Pod。
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet 控制器&lt;/a&gt;负责创建、
扩缩和删除 StatefulSet 管理的 Pod。此控制器尽力确保指定数量的从序数 0 到 N-1 的 Pod
处于活跃状态并准备就绪。StatefulSet 确保在任何时候，集群中最多只有一个具有给定标识的 Pod。
这就是所谓的由 StatefulSet 提供的&lt;strong&gt;最多一个（At Most One）&lt;/strong&gt; Pod 的语义。&lt;/p&gt;</description></item><item><title>设置 Konnectivity 服务</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/setup-konnectivity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubernetes/setup-konnectivity/</guid><description>&lt;!-- overview --&gt;
&lt;!--
The Konnectivity service provides a TCP level proxy for the control plane to cluster
communication.
--&gt;
&lt;p&gt;Konnectivity 服务为控制平面提供集群通信的 TCP 级别代理。&lt;/p&gt;
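&lt;p&gt;作为一个示意性的配置草图（实际的套接字路径和代理协议取决于你的部署方式），
kube-apiserver 可以通过类似下面的 EgressSelectorConfiguration，
将发往集群的流量交由 Konnectivity 服务器代理：&lt;/p&gt;

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# 将发往集群（节点、Pod、服务）的出站流量经由 Konnectivity 代理转发
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```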
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this
tutorial on a cluster with at least two nodes that are not acting as control
plane hosts. If you do not already have a cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/).
--&gt;
&lt;p&gt;你需要有一个 Kubernetes 集群，并且 kubectl 命令可以与集群通信。
建议在至少有两个不充当控制平面主机的节点的集群上运行本教程。
如果你还没有集群，可以使用
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt; 创建一个集群。&lt;/p&gt;</description></item><item><title>使用 kubeadm 创建一个高可用 etcd 集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Set up a High Availability etcd Cluster with kubeadm
content_type: task
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision
etcd instances on separate hosts. The differences between the two approaches are covered in the
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology) page.
--&gt;
&lt;p&gt;默认情况下，kubeadm 在每个控制平面节点上运行一个本地 etcd 实例。也可以使用外部的 etcd 集群，并在不同的主机上提供 etcd 实例。
这两种方法的区别在&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology"&gt;高可用拓扑的选项&lt;/a&gt;页面中阐述。&lt;/p&gt;</description></item><item><title>使用 Service 把前端连接到后端</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/connecting-frontend-backend/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/connecting-frontend-backend/</guid><description>&lt;!--
title: Connect a Frontend to a Backend Using Services
content_type: tutorial
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This task shows how to create a _frontend_ and a _backend_ microservice. The backend 
microservice is a hello greeter. The frontend exposes the backend using nginx and a 
Kubernetes &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='服务（Service）'&gt;服务（Service）&lt;/a&gt; object.
--&gt;
&lt;p&gt;本任务会描述如何创建前端（Frontend）微服务和后端（Backend）微服务。后端微服务是一个 hello 欢迎程序。
前端通过 nginx 和一个 Kubernetes &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='服务'&gt;服务&lt;/a&gt;
暴露后端所提供的服务。&lt;/p&gt;
&lt;h2 id="教程目标"&gt;教程目标&lt;/h2&gt;
&lt;!--
* Create and run a sample `hello` backend microservice using a
 &lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; object.
* Use a Service object to send traffic to the backend microservice's multiple replicas.
* Create and run a `nginx` frontend microservice, also using a Deployment object.
* Configure the frontend microservice to send traffic to the backend microservice.
* Use a Service object of `type=LoadBalancer` to expose the frontend microservice
 outside the cluster.
--&gt;
&lt;ul&gt;
&lt;li&gt;使用 Deployment 对象创建并运行一个 &lt;code&gt;hello&lt;/code&gt; 后端微服务&lt;/li&gt;
&lt;li&gt;使用一个 Service 对象将请求流量发送到后端微服务的多个副本&lt;/li&gt;
&lt;li&gt;同样使用一个 Deployment 对象创建并运行一个 &lt;code&gt;nginx&lt;/code&gt; 前端微服务&lt;/li&gt;
&lt;li&gt;配置前端微服务将请求流量发送到后端微服务&lt;/li&gt;
&lt;li&gt;使用 &lt;code&gt;type=LoadBalancer&lt;/code&gt; 的 Service 对象将前端微服务暴露到集群外部&lt;/li&gt;
&lt;/ul&gt;
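&lt;p&gt;上述目标中用于将流量发送到后端微服务多个副本的 Service 大致形如下面的草图，
其中的名称和标签选择算符仅为假设，需与后端 Deployment 的 Pod 标签保持一致：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  # 选择算符决定将流量转发给哪些后端 Pod
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
```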
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>网络策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/network-policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/network-policies/</guid><description>&lt;!--
reviewers:
- thockin
- caseydavenport
- danwinship
title: Network Policies
content_type: concept
api_metadata:
- apiVersion: "networking.k8s.io/v1"
 kind: "NetworkPolicy"
weight: 70
description: &gt;-
 If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4),
 NetworkPolicies allow you to specify rules for traffic flow within your cluster, and
 also between Pods and the outside world.
 Your cluster must use a network plugin that supports NetworkPolicy enforcement.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='pod'&gt;pod&lt;/a&gt; is allowed to communicate with various network
"entities" (we use the word "entity" here to avoid overloading the more common terms such as
"endpoints" and "services", which have specific Kubernetes connotations) over the network.
NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to
other connections.
--&gt;
&lt;p&gt;如果你希望针对 TCP、UDP 和 SCTP 协议在 IP 地址或端口层面控制网络流量，
则你可以考虑为集群中特定应用使用 Kubernetes 网络策略（NetworkPolicy）。
NetworkPolicy 是一种以应用为中心的结构，允许你设置如何允许
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 与网络上的各类网络“实体”
（我们这里使用实体以避免过度使用诸如“端点”和“服务”这类常用术语，
这些术语在 Kubernetes 中有特定含义）通信。
NetworkPolicy 适用于一端或两端与 Pod 的连接，与其他连接无关。&lt;/p&gt;</description></item><item><title>为节点发布扩展资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/extended-resource-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/extended-resource-node/</guid><description>&lt;!--
title: Advertise Extended Resources for a Node
content_type: task
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to specify extended resources for a Node.
Extended resources allow cluster administrators to advertise node-level
resources that would otherwise be unknown to Kubernetes.
--&gt;
&lt;p&gt;本文展示了如何为节点指定扩展资源（Extended Resource）。
扩展资源允许集群管理员发布节点级别的资源，这些资源在不进行发布的情况下无法被 Kubernetes 感知。&lt;/p&gt;
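&lt;p&gt;发布扩展资源的常见方式，是向节点的 &lt;code&gt;status.capacity&lt;/code&gt; 提交一个 HTTP PATCH 请求。
下面是一个示意性的 JSON Patch 请求体（资源名 &lt;code&gt;example.com/dongle&lt;/code&gt; 仅为假设；
路径中的 &lt;code&gt;~1&lt;/code&gt; 是 &lt;code&gt;/&lt;/code&gt; 的转义）：&lt;/p&gt;

```json
[
  {
    "op": "add",
    "path": "/status/capacity/example.com~1dongle",
    "value": "4"
  }
]
```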
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>为容器分派扩展资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/extended-resource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/extended-resource/</guid><description>&lt;!--
title: Assign Extended Resources to a Container
content_type: task
weight: 70
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page shows how to assign extended resources to a Container.
--&gt;
&lt;p&gt;本文介绍如何为容器指定扩展资源。&lt;/p&gt;
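&lt;p&gt;扩展资源与 CPU、内存一样在容器的 &lt;code&gt;resources&lt;/code&gt; 中指定，但取值必须是整数，
且不能超售（request 必须等于 limit）。下面是一个示意（&lt;code&gt;example.com/dongle&lt;/code&gt; 为假设的资源名）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: ext-container
    image: nginx
    resources:
      # 扩展资源的 request 与 limit 必须相等
      requests:
        example.com/dongle: 3
      limits:
        example.com/dongle: 3
```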
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>已完成 Job 的自动清理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/</guid><description>&lt;!--
reviewers:
- janetkuo
title: Automatic Cleanup for Finished Jobs
content_type: concept
weight: 70
description: &gt;-
 A time-to-live mechanism to clean up old Jobs that have finished execution.
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
When your Job has finished, it's useful to keep that Job in the API (and not immediately delete the Job)
so that you can tell whether the Job succeeded or failed.

Kubernetes' TTL-after-finished &lt;a class='glossary-tooltip' title='控制器通过 API 服务器监控集群的公共状态，并致力于将当前状态转变为期望的状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/controller/' target='_blank' aria-label='controller'&gt;controller&lt;/a&gt; provides a
TTL (time to live) mechanism to limit the lifetime of Job objects that
have finished execution.
--&gt;
&lt;p&gt;当你的 Job 已结束时，将 Job 保留在 API 中（而不是立即删除 Job）很有用，
这样你就可以判断 Job 是成功还是失败。&lt;/p&gt;</description></item><item><title>撰写新主题</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/write-new-topic/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/write-new-topic/</guid><description>&lt;!--
title: Writing a new topic
content_type: task
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to create a new topic for the Kubernetes docs.
--&gt;
&lt;p&gt;本页面展示如何为 Kubernetes 文档库创建新主题。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Create a fork of the Kubernetes documentation repository as described in
[Open a PR](/docs/contribute/new-content/open-a-pr/).
--&gt;
&lt;p&gt;如&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/open-a-pr/"&gt;发起 PR&lt;/a&gt;中所述，创建 Kubernetes 文档库的派生副本。&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;!--
## Choosing a page type

As you prepare to write a new topic, think about the page type that would fit your content the best:
--&gt;
&lt;h2 id="选择页面类型"&gt;选择页面类型&lt;/h2&gt;
&lt;p&gt;当你准备编写一个新的主题时，考虑一下最适合你的内容的页面类型：&lt;/p&gt;</description></item><item><title>字段选择算符</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/field-selectors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/field-selectors/</guid><description>&lt;!--
title: Field Selectors
content_type: concept
weight: 70
--&gt;
&lt;!--
_Field selectors_ let you select Kubernetes &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; based on the
value of one or more resource fields. Here are some examples of field selector queries:
--&gt;
&lt;p&gt;“字段选择算符（Field selectors）”允许你根据一个或多个资源字段的值筛选
Kubernetes &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;。
下面是一些使用字段选择算符查询的例子：&lt;/p&gt;</description></item><item><title>作为博客写作伙伴提供帮助</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/writing-buddy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/writing-buddy/</guid><description>&lt;!--
title: Helping as a blog writing buddy
slug: writing-buddy
content_type: concept
weight: 70
--&gt;
&lt;!-- overview --&gt;
&lt;!--
There are two official Kubernetes blogs, and the CNCF has its own blog where you can cover Kubernetes too.
Read [contributing to Kubernetes blogs](/docs/contribute/blog/) to learn about these two blogs.

When people contribute to either blog as an author, the Kubernetes project pairs up authors
as _writing buddies_. This page explains how to fulfil the buddy role.

You should make sure that you have at least read an outline of [article submission](/docs/contribute/blog/submission/)
before you read on within this page.
--&gt;
&lt;p&gt;Kubernetes 有两个官方博客，同时 CNCF 也有自己的博客，你也可以在其中撰写与 Kubernetes 相关的内容。
阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/blog/"&gt;为 Kubernetes 博客贡献内容&lt;/a&gt;以了解这两个博客的详细信息。&lt;/p&gt;</description></item><item><title>Kubernetes 对象状态的指标</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/kube-state-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/kube-state-metrics/</guid><description>&lt;!--
title: Metrics for Kubernetes Object States
content_type: concept
weight: 75
description: &gt;-
 kube-state-metrics, an add-on agent to generate and expose cluster-level metrics.
--&gt;
&lt;!--
The state of Kubernetes objects in the Kubernetes API can be exposed as metrics.
An add-on agent called [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) can connect to the Kubernetes API server and expose a HTTP endpoint with metrics generated from the state of individual objects in the cluster.
It exposes various information about the state of objects like labels and annotations, startup and termination times, status or the phase the object currently is in.
For example, containers running in pods create a `kube_pod_container_info` metric.
This includes the name of the container, the name of the pod it is part of, the &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt; the pod is running in, the name of the container image, the ID of the image, the image name from the spec of the container, the ID of the running container and the ID of the pod as labels.
--&gt;
&lt;p&gt;Kubernetes API 中 Kubernetes 对象的状态可以被公开为指标。
一个名为 &lt;a href="https://github.com/kubernetes/kube-state-metrics"&gt;kube-state-metrics&lt;/a&gt;
的插件代理可以连接到 Kubernetes API 服务器并公开一个 HTTP 端点，提供集群中各个对象的状态所生成的指标。
此代理公开了关于对象状态的各种信息，如标签和注解、启动和终止时间、对象当前所处的状态或阶段。
例如，针对运行在 Pod 中的容器会创建一个 &lt;code&gt;kube_pod_container_info&lt;/code&gt; 指标。
其中包括容器的名称、所属的 Pod 的名称、Pod 所在的&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='名字空间'&gt;名字空间&lt;/a&gt;、
容器镜像的名称、镜像的 ID、容器规约中的镜像名称、运行中容器的 ID 和用作标签的 Pod ID。&lt;/p&gt;</description></item><item><title>Windows 节点的资源管理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/windows-resource-management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/windows-resource-management/</guid><description>&lt;!--
reviewers:
- jayunit100
- jsturtevant
- marosset
- perithompson
title: Resource Management for Windows nodes
content_type: concept
weight: 75
--&gt;
&lt;!-- overview 
This page outlines the differences in how resources are managed between Linux and Windows.
--&gt;
&lt;p&gt;本页概述了 Linux 和 Windows 在资源管理方式上的区别。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
On Linux nodes, &lt;a class='glossary-tooltip' title='一组具有可选资源隔离、审计和限制的 Linux 进程。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cgroup' target='_blank' aria-label='cgroups'&gt;cgroups&lt;/a&gt; are used
as a pod boundary for resource control. Containers are created within that boundary for network, process and file system isolation. The Linux cgroup APIs can be used to gather CPU, I/O, and memory use statistics.

In contrast, Windows uses a [_job object_](https://docs.microsoft.com/windows/win32/procthread/job-objects) per container with a system namespace filter
to contain all processes in a container and provide logical isolation from the
host. (Job objects are a Windows process isolation mechanism and are different from what Kubernetes refers to as a &lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Job'&gt;Job&lt;/a&gt;).

There is no way to run a Windows container without the namespace filtering in
place. This means that system privileges cannot be asserted in the context of the
host, and thus privileged containers are not available on Windows.
Containers cannot assume an identity from the host because the Security Account Manager (SAM) is separate.
--&gt;
&lt;p&gt;在 Linux 节点上，&lt;a class='glossary-tooltip' title='一组具有可选资源隔离、审计和限制的 Linux 进程。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-cgroup' target='_blank' aria-label='cgroup'&gt;cgroup&lt;/a&gt; 用作资源控制的 Pod 边界。
在这个边界内创建容器以便于隔离网络、进程和文件系统。
Linux cgroup API 可用于收集 CPU、I/O 和内存使用统计数据。&lt;/p&gt;</description></item><item><title>在 Kubernetes 中运行 Windows 容器的指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/windows/user-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/windows/user-guide/</guid><description>&lt;!-- 
reviewers:
- jayunit100
- jsturtevant
- marosset
title: Guide for Running Windows Containers in Kubernetes
content_type: tutorial
weight: 75
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page provides a walkthrough for some steps you can follow to run
Windows containers using Kubernetes.
The page also highlights some Windows specific functionality within Kubernetes.

It is important to note that creating and deploying services and workloads on Kubernetes
behaves in much the same way for Linux and Windows containers.
The [kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical.
The examples in this page are provided to jumpstart your experience with Windows containers.
--&gt;
&lt;p&gt;本文提供了一些参考演示步骤，方便你使用 Kubernetes 运行 Windows 容器。
本文还重点介绍了 Kubernetes 中专为 Windows 设计的一些特有功能。&lt;/p&gt;</description></item><item><title>CronJob</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/cron-jobs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/cron-jobs/</guid><description>&lt;!--
reviewers:
- erictune
- soltysh
- janetkuo
title: CronJob
api_metadata:
- apiVersion: "batch/v1"
 kind: "CronJob"
content_type: concept
description: &gt;-
 A CronJob starts one-time Jobs on a repeating schedule.
weight: 80
hide_summary: true # Listed separately in section index
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
A _CronJob_ creates &lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Jobs'&gt;Jobs&lt;/a&gt; on a repeating schedule.

CronJob is meant for performing regular scheduled actions such as backups, report generation,
and so on. One CronJob object is like one line of a _crontab_ (cron table) file on a
Unix system. It runs a Job periodically on a given schedule, written in
[Cron](https://en.wikipedia.org/wiki/Cron) format.
--&gt;
&lt;p&gt;&lt;strong&gt;CronJob&lt;/strong&gt; 创建基于时隔重复调度的 &lt;a class='glossary-tooltip' title='Job 是需要运行完成的确定性的或批量的任务。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Job'&gt;Job&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>Finalizers</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/finalizers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/finalizers/</guid><description>&lt;!-- overview --&gt;
&lt;!--
title: Finalizer
id: finalizer
date: 2021-07-07
full_link: /zh-cn/docs/concepts/overview/working-with-objects/finalizers/
short_description: &gt;
 A namespaced key that tells Kubernetes to wait until specific conditions are met
 before it fully deletes an object marked for deletion.
aka: 
tags:
- fundamental
- operation
--&gt;
&lt;!--
Finalizers are namespaced keys that tell Kubernetes to wait until specific
conditions are met before it fully deletes &lt;a class='glossary-tooltip' title='Kubernetes 实体，表示 Kubernetes API 服务器上的端点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/api-concepts/#standard-api-terminology' target='_blank' aria-label='resources'&gt;resources&lt;/a&gt;
that are marked for deletion.
Finalizers alert &lt;a class='glossary-tooltip' title='控制器通过 API 服务器监控集群的公共状态，并致力于将当前状态转变为期望的状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/controller/' target='_blank' aria-label='controllers'&gt;controllers&lt;/a&gt;
to clean up resources the deleted object owned.
--&gt;
&lt;p&gt;Finalizer 是带有命名空间的键，告诉 Kubernetes 等到特定的条件被满足后，
再完全删除被标记为删除的&lt;a class='glossary-tooltip' title='Kubernetes 实体，表示 Kubernetes API 服务器上的端点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/api-concepts/#standard-api-terminology' target='_blank' aria-label='资源'&gt;资源&lt;/a&gt;。
Finalizer 提醒&lt;a class='glossary-tooltip' title='控制器通过 API 服务器监控集群的公共状态，并致力于将当前状态转变为期望的状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/controller/' target='_blank' aria-label='控制器'&gt;控制器&lt;/a&gt;清理被删除的对象拥有的资源。&lt;/p&gt;</description></item><item><title>kubeadm version</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-version/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-version/</guid><description>&lt;!--
reviewers:
- luxas
- jbeda
title: kubeadm version
content_type: concept
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This command prints the version of kubeadm.
--&gt;
&lt;p&gt;此命令用来输出 kubeadm 的版本。&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Print the version of kubeadm
--&gt;
&lt;p&gt;打印 kubeadm 的版本。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm version &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for version
--&gt;
version 子命令的帮助信息。
&lt;/p&gt;</description></item><item><title>kubelet systemd 看门狗</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/systemd-watchdog/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/systemd-watchdog/</guid><description>&lt;!--
content_type: "reference"
title: Kubelet Systemd Watchdog
weight: 80
math: true # for a division by 2
--&gt;







 &lt;div class="feature-state-notice feature-beta" title="特性门控： SystemdWatchdog"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.32 [beta]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
On Linux nodes, Kubernetes 1.35 supports integrating with
[systemd](https://systemd.io/) to allow the operating system supervisor to recover
a failed kubelet. This integration is not enabled by default.
It can be used as an alternative to periodically requesting
the kubelet's `/healthz` endpoint for health checks. If the kubelet
does not respond to the watchdog within the timeout period, the watchdog
will kill the kubelet.
--&gt;
&lt;p&gt;在 Linux 节点上，Kubernetes 1.35 支持与
&lt;a href="https://systemd.io/"&gt;systemd&lt;/a&gt; 集成，以允许操作系统监视程序恢复失败的 kubelet。
这种集成默认并未被启用。它可以替代通过定期请求 kubelet 的 &lt;code&gt;/healthz&lt;/code&gt; 端点进行健康检查的做法。
如果 kubelet 在设定的超时时限内未对看门狗做出响应，看门狗将杀死 kubelet。&lt;/p&gt;</description></item><item><title>Seccomp 和 Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/seccomp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/seccomp/</guid><description>&lt;!--
content_type: reference
title: Seccomp and Kubernetes
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a
&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt; to your Pods and containers.
--&gt;
&lt;p&gt;Seccomp 表示安全计算（Secure Computing）模式，自 2.6.12 版本以来，一直是 Linux 内核的一个特性。
它可以用来沙箱化进程的权限，限制进程从用户态到内核态的调用。
Kubernetes 能使你自动将加载到&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;上的
seccomp 配置文件应用到你的 Pod 和容器。&lt;/p&gt;</description></item><item><title>Service 与 Pod 的 DNS</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dns-pod-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dns-pod-service/</guid><description>&lt;!--
reviewers:
- jbelamaric
- bowei
- thockin
title: DNS for Services and Pods
content_type: concept
weight: 80
description: &gt;-
 Your workload can discover Services within your cluster using DNS;
 this page explains how that works.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes creates DNS records for Services and Pods. You can contact
Services with consistent DNS names instead of IP addresses.
--&gt;
&lt;p&gt;Kubernetes 为 Service 和 Pod 创建 DNS 记录。
你可以使用稳定的 DNS 名称而非 IP 地址访问 Service。&lt;/p&gt;</description></item><item><title>创建外部负载均衡器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/</guid><description>&lt;!--
title: Create an External Load Balancer
content_type: task
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to create an external load balancer.
--&gt;
&lt;p&gt;本文展示如何创建一个外部负载均衡器。&lt;/p&gt;
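创建这类负载均衡器的方式是将 Service 的 type 字段设置为 LoadBalancer。下面是一个最小的示意清单（Service 名称、选择算符与端口号均为示例）：

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service      # 示例名称
spec:
  selector:
    app: example             # 示例选择算符，匹配工作负载的标签
  ports:
  - port: 8765               # Service 对外暴露的端口（示例）
    targetPort: 9376         # 后端 Pod 监听的端口（示例）
  type: LoadBalancer         # 请求云驱动创建外部负载均衡器
```

在受支持的云环境中应用该清单后，云驱动会为此 Service 分配一个外部可访问的 IP 地址。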
&lt;!--
When creating a &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;, you have
the option of automatically creating a cloud load balancer. This provides an
externally-accessible IP address that sends traffic to the correct port on your cluster
nodes,
_provided your cluster runs in a supported environment and is configured with
the correct cloud load balancer provider package_.
--&gt;
&lt;p&gt;创建&lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='服务'&gt;服务&lt;/a&gt;时，你可以选择自动创建云网络负载均衡器。
负载均衡器提供一个外部可访问的 IP 地址，将流量发送到集群节点上的正确端口
（&lt;strong&gt;假设集群在支持的环境中运行，并配置了正确的云负载均衡器驱动包&lt;/strong&gt;）。&lt;/p&gt;</description></item><item><title>存储容量</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-capacity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-capacity/</guid><description>&lt;!--
reviewers:
- jsafrane
- saad-ali
- msau42
- xing-yang
- pohly
title: Storage Capacity
content_type: concept
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Storage capacity is limited and may vary depending on the node on
which a pod runs: network-attached storage might not be accessible by
all nodes, or storage is local to a node to begin with.
--&gt;
&lt;p&gt;存储容量是有限的，并且会因为运行 Pod 的节点不同而变化：
网络存储可能并非所有节点都能够访问，或者存储本身就是某个节点的本地资源。&lt;/p&gt;
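存储容量跟踪通常与 WaitForFirstConsumer 卷绑定模式配合使用，使调度器在为 Pod 选定节点后再制备卷。下面是一个示意的 StorageClass 片段（名称与 CSI 驱动名均为假设）：

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storageclass      # 假设的 StorageClass 名称
provisioner: example.csi.k8s.io   # 假设的 CSI 驱动名称
volumeBindingMode: WaitForFirstConsumer  # 延迟卷绑定，直到 Pod 被调度
```

使用这种绑定模式时，调度器可以在选择节点时考虑各节点可访问的存储容量。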







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page describes how Kubernetes keeps track of storage capacity and
how the scheduler uses that information to [schedule Pods](/docs/concepts/scheduling-eviction/) onto nodes
that have access to enough storage capacity for the remaining missing
volumes. Without storage capacity tracking, the scheduler may choose a
node that doesn't have enough capacity to provision a volume and
multiple scheduling retries will be needed.
--&gt;
&lt;p&gt;本页面描述了 Kubernetes 如何跟踪存储容量，以及调度程序如何利用该信息，为剩余的尚未制备的卷将
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/"&gt;Pod 调度&lt;/a&gt;到能够访问到足够存储容量的节点上。
如果没有跟踪存储容量，调度程序可能会选择一个没有足够容量来提供卷的节点，并且需要多次调度重试。&lt;/p&gt;</description></item><item><title>多租户</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/multi-tenancy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/multi-tenancy/</guid><description>&lt;!--
title: Multi-tenancy
content_type: concept
weight: 80
--&gt;
&lt;!--
This page provides an overview of available configuration options and best practices for cluster
multi-tenancy.
--&gt;
&lt;p&gt;此页面概述了集群多租户的可用配置选项和最佳实践。&lt;/p&gt;
&lt;!--
Sharing clusters saves costs and simplifies administration. However, sharing clusters also
presents challenges such as security, fairness, and managing _noisy neighbors_.
--&gt;
&lt;p&gt;共享集群可以节省成本并简化管理。
然而，共享集群也带来了诸如安全性、公平性和管理&lt;strong&gt;嘈杂邻居&lt;/strong&gt;等挑战。&lt;/p&gt;
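缓解公平性与嘈杂邻居问题的常见做法之一，是为每个租户的命名空间设置资源配额。下面是一个示意清单（名称与数值均为假设）：

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota       # 假设的配额名称
  namespace: tenant-a        # 假设的租户命名空间
spec:
  hard:
    requests.cpu: "10"       # 该命名空间内所有 Pod 的 CPU 请求总量上限
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```

这样，单个租户的工作负载即便失控，也无法耗尽整个集群的资源。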
&lt;!--
Clusters can be shared in many ways. In some cases, different applications may run in the same
cluster. In other cases, multiple instances of the same application may run in the same cluster,
one for each end user. All these types of sharing are frequently described using the umbrella term
_multi-tenancy_.
--&gt;
&lt;p&gt;集群可以通过多种方式共享。在某些情况下，不同的应用可能会在同一个集群中运行。
在其他情况下，同一应用的多个实例可能在同一个集群中运行，每个实例对应一个最终用户。
所有这些类型的共享经常使用一个总括术语 &lt;strong&gt;多租户（Multi-Tenancy）&lt;/strong&gt; 来表述。&lt;/p&gt;</description></item><item><title>节点状态</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/node-status/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/node-status/</guid><description>&lt;!--
content_type: reference
title: Node Status
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The status of a [node](/docs/concepts/architecture/nodes/) in Kubernetes is a critical
aspect of managing a Kubernetes cluster. In this article, we'll cover the basics of
monitoring and maintaining node status to ensure a healthy and stable cluster.
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/"&gt;节点&lt;/a&gt;的状态是管理 Kubernetes
集群的一个关键方面。在本文中，我们将简要介绍如何监控和维护节点状态以确保集群的健康和稳定。&lt;/p&gt;
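节点状态可以通过 kubectl get node 节点名称 -o yaml 查看。下面是输出中 status 部分的一个示意节选（地址与数值均为示例）：

```yaml
# "kubectl get node -o yaml" 输出中 status 部分的示意节选（数值均为示例）
status:
  addresses:
  - address: 192.0.2.10      # 节点的内部 IP（示例地址）
    type: InternalIP
  - address: node-1          # 节点主机名（示例）
    type: Hostname
  capacity:                  # 节点的资源总量
    cpu: "4"
    memory: 8148692Ki
    pods: "110"
  conditions:
  - type: Ready              # 节点是否健康并准备好接受 Pod
    status: "True"
```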
&lt;!--
## Node status fields

A Node's status contains the following information:

* [Addresses](#addresses)
* [Conditions](#condition)
* [Capacity and Allocatable](#capacity)
* [Info](#info)
* [Declared Features](#declaredfeatures)
--&gt;
&lt;h2 id="node-status-fields"&gt;节点状态字段&lt;/h2&gt;
&lt;p&gt;一个节点的状态包含以下信息:&lt;/p&gt;</description></item><item><title>配置 Pod 以使用卷进行存储</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage/</guid><description>&lt;!--
title: Configure a Pod to Use a Volume for Storage
content_type: task
weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure a Pod to use a Volume for storage.

A Container's file system lives only as long as the Container does. So when a
Container terminates and restarts, filesystem changes are lost. For more
consistent storage that is independent of the Container, you can use a
[Volume](/docs/concepts/storage/volumes/). This is especially important for stateful
applications, such as key-value stores (such as Redis) and databases.
--&gt;
&lt;p&gt;此页面展示了如何配置 Pod 以使用卷进行存储。&lt;/p&gt;</description></item><item><title>使用 CertificateSigningRequest 为 Kubernetes API 客户端颁发证书</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/certificate-issue-client-csr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/certificate-issue-client-csr/</guid><description>&lt;!--
title: Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest
api_metadata:
- apiVersion: "certificates.k8s.io/v1"
 kind: "CertificateSigningRequest"
 override_link_text: "CSR v1"
weight: 80

# Docs maintenance note
#
# If there is a future page /docs/tasks/tls/certificate-issue-client-manually/ then this page
# should link there, and the new page should link back to this one.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes lets you use a public key infrastructure (PKI) to authenticate to your cluster
as a client.

A few steps are required in order to get a normal user to be able to
authenticate and invoke an API. First, this user must have an [X.509](https://www.itu.int/rec/T-REC-X.509) certificate
issued by an authority that your Kubernetes cluster trusts. The client must then present that certificate to the Kubernetes API.
--&gt;
&lt;p&gt;Kubernetes 允许你使用公钥基础设施 (PKI) 以客户端身份向你的集群进行身份认证。&lt;/p&gt;</description></item><item><title>使用 kubeadm 进行证书管理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Certificate Management with kubeadm
content_type: task
weight: 80
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.15 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Client certificates generated by [kubeadm](/docs/reference/setup-tools/kubeadm/) expire after 1 year.
This page explains how to manage certificate renewals with kubeadm. It also covers other tasks related
to kubeadm certificate management.
--&gt;
&lt;p&gt;由 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;kubeadm&lt;/a&gt; 生成的客户端证书在 1 年后到期。
本页说明如何使用 kubeadm 管理证书续订，同时也涵盖其他与 kubeadm 证书管理相关的说明。&lt;/p&gt;
&lt;!--
The Kubernetes project recommends upgrading to the latest patch releases promptly, and
to ensure that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.
--&gt;
&lt;p&gt;Kubernetes 项目建议及时升级到最新的补丁版本，并确保你正在运行受支持的 Kubernetes 次要版本。
遵循这一建议有助于你确保安全。&lt;/p&gt;</description></item><item><title>使用 kubeadm 配置集群中的每个 kubelet</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/kubelet-integration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/kubelet-integration/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Configuring each kubelet in your cluster using kubeadm
content_type: concept
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout note" role="note"&gt;
 &lt;strong&gt;说明：&lt;/strong&gt; 自 1.24 版起，Dockershim 已从 Kubernetes 项目中移除。阅读 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/dockershim"&gt;Dockershim 移除的常见问题&lt;/a&gt;了解更多详情。
&lt;/div&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
The lifecycle of the kubeadm CLI tool is decoupled from the
[kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs
on each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is
initialized or upgraded, whereas the kubelet is always running in the background.

Since the kubelet is a daemon, it needs to be maintained by some kind of an init
system or service manager. When the kubelet is installed using DEBs or RPMs,
systemd is configured to manage the kubelet. You can use a different service
manager instead, but you need to configure it manually.

Some kubelet configuration details need to be the same across all kubelets involved in the cluster, while
other configuration aspects need to be set on a per-kubelet basis to accommodate the different
characteristics of a given machine (such as OS, storage, and networking). You can manage the configuration
of your kubelets manually, but kubeadm now provides a `KubeletConfiguration` API type for
[managing your kubelet configurations centrally](#configure-kubelets-using-kubeadm).
--&gt;
&lt;p&gt;kubeadm CLI 工具的生命周期与 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet"&gt;kubelet&lt;/a&gt;
解耦；kubelet 是一个守护程序，在 Kubernetes 集群中的每个节点上运行。
当 Kubernetes 初始化或升级时，kubeadm CLI 工具由用户执行，而 kubelet 始终在后台运行。&lt;/p&gt;</description></item><item><title>系统日志</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/system-logs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/system-logs/</guid><description>&lt;!-- 
reviewers:
- dims
- 44past4
title: System Logs
content_type: concept
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
System component logs record events happening in cluster, which can be very useful for debugging.
You can configure log verbosity to see more or less detail.
Logs can be as coarse-grained as showing errors within a component, or as fine-grained as showing
step-by-step traces of events (like HTTP access logs, pod state changes, controller actions, or
scheduler decisions).
--&gt;
&lt;p&gt;系统组件的日志记录集群中发生的事件，这对于调试非常有用。
你可以配置日志的精细度，以展示更多或更少的细节。
日志可以是粗粒度的，如只显示组件内的错误，
也可以是细粒度的，如显示事件的每一个跟踪步骤（比如 HTTP 访问日志、pod 状态更新、控制器动作或调度器决策）。&lt;/p&gt;</description></item><item><title>页面内容类型</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/page-content-types/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/page-content-types/</guid><description>&lt;!--
title: Page content types
content_type: concept
weight: 80
card:
 name: contribute
 weight: 30
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The Kubernetes documentation follows several types of page content:

- Concept
- Task
- Tutorial
- Reference
--&gt;
&lt;p&gt;Kubernetes 文档包含以下几种页面内容类型：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;概念（Concept）&lt;/li&gt;
&lt;li&gt;任务（Task）&lt;/li&gt;
&lt;li&gt;教程（Tutorial）&lt;/li&gt;
&lt;li&gt;参考（Reference）&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- body --&gt;
&lt;!--
## Content sections

Each page content type contains a number of sections defined by
Markdown comments and HTML headings. You can add content headings to
your page with the `heading` shortcode. The comments and headings help
maintain the structure of the page content types.

Examples of Markdown comments defining page content sections:
--&gt;
&lt;h2 id="content-sections"&gt;内容章节&lt;/h2&gt;
&lt;p&gt;每种页面内容类型都有一些使用 Markdown 注释和 HTML 标题定义的章节。
你可以使用 &lt;code&gt;heading&lt;/code&gt; 短代码将内容标题添加到你的页面中。
注释和标题有助于维护对应页面内容类型的结构组织。&lt;/p&gt;</description></item><item><title>资源装箱</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing/</guid><description>&lt;!--
reviewers:
- bsalamat
- k82cn
- ahg-g
title: Resource Bin Packing
content_type: concept
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In the [scheduling-plugin](/docs/reference/scheduling/config/#scheduling-plugins) `NodeResourcesFit` of kube-scheduler, there are two
scoring strategies that support the bin packing of resources: `MostAllocated` and `RequestedToCapacityRatio`.
--&gt;
&lt;p&gt;在 kube-scheduler 的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/scheduling/config/#scheduling-plugins"&gt;调度插件&lt;/a&gt;
&lt;code&gt;NodeResourcesFit&lt;/code&gt; 中存在两种支持资源装箱（bin packing）的策略：&lt;code&gt;MostAllocated&lt;/code&gt; 和
&lt;code&gt;RequestedToCapacityRatio&lt;/code&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Enabling bin packing using MostAllocated strategy

The `MostAllocated` strategy scores the nodes based on the utilization of resources, favoring the ones with higher allocation.
For each resource type, you can set a weight to modify its influence in the node score.

To set the `MostAllocated` strategy for the `NodeResourcesFit` plugin, use a
[scheduler configuration](/docs/reference/scheduling/config) similar to the following:
--&gt;
&lt;h2 id="enabling-bin-packing-using-mostallocated-strategy"&gt;使用 MostAllocated 策略启用资源装箱&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;MostAllocated&lt;/code&gt; 策略基于资源的利用率来为节点计分，优选分配比率较高的节点。
针对每种资源类型，你可以设置一个权重值以改变其对节点得分的影响。&lt;/p&gt;</description></item><item><title>自动扩缩集群 DNS 服务</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/</guid><description>&lt;!--
title: Autoscale the DNS Service in a Cluster
content_type: task
weight: 80
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to enable and configure autoscaling of the DNS service in
your Kubernetes cluster.
--&gt;
&lt;p&gt;本页展示了如何在你的 Kubernetes 集群中启用和配置 DNS 服务的自动扩缩功能。&lt;/p&gt;
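该自动扩缩功能通常由 cluster-proportional-autoscaler 实现，其扩缩参数保存在一个 ConfigMap 中。下面是一个示意片段（参数取值仅为示例，具体以实际部署的自动扩缩器配置为准）：

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler       # 自动扩缩器读取的参数 ConfigMap
  namespace: kube-system
data:
  # linear 模式：副本数随集群核数和节点数线性增长（数值为示例）
  linear: |-
    {"coresPerReplica":256,"nodesPerReplica":16,"min":1,"preventSinglePointFailure":true}
```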
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>Pod QoS 类</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/</guid><description>&lt;!--
title: Pod Quality of Service Classes
content_type: concept
weight: 85
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page introduces _Quality of Service (QoS) classes_ in Kubernetes, and explains
how Kubernetes assigns a QoS class to each Pod as a consequence of the resource
constraints that you specify for the containers in that Pod. Kubernetes relies on this
classification to make decisions about which Pods to evict when there are not enough
available resources on a Node.
--&gt;
&lt;p&gt;本页介绍 Kubernetes 中的 &lt;strong&gt;服务质量（Quality of Service，QoS）&lt;/strong&gt; 类，
阐述 Kubernetes 如何根据为 Pod 中的容器指定的资源约束为每个 Pod 设置 QoS 类。
Kubernetes 依赖这种分类来决定当 Node 上没有足够可用资源时要驱逐哪些 Pod。&lt;/p&gt;</description></item><item><title>Pod 主机名</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-hostname/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-hostname/</guid><description>&lt;!--
title: Pod Hostname
content_type: concept
weight: 85
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to set a Pod's hostname, 
potential side effects after configuration, and the underlying mechanics.
--&gt;
&lt;p&gt;本文讲述如何设置 Pod 的主机名、配置主机名后的潜在副作用以及底层机制。&lt;/p&gt;
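作为示意，下面的 Pod 清单展示了默认主机名的来源，以及通过 spec.hostname 字段显式覆盖的方式（名称均为假设）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1            # 未设置 spec.hostname 时，Pod 内部的主机名即为该值
spec:
  hostname: custom-host      # 可选：显式覆盖默认主机名（示例值）
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
```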
&lt;!-- body --&gt;
&lt;!--
## Default Pod hostname

When a Pod is created, its hostname (as observed from within the Pod) 
is derived from the Pod's metadata.name value. 
Both the hostname and its corresponding fully qualified domain name (FQDN) 
are set to the metadata.name value (from the Pod's perspective)
--&gt;
&lt;h2 id="default-pod-hostname"&gt;默认 Pod 主机名&lt;/h2&gt;
&lt;p&gt;当 Pod 被创建时，其主机名（从 Pod 内部观察）来源于 Pod 的 &lt;code&gt;metadata.name&lt;/code&gt; 值。
主机名和其对应的完全限定域名（FQDN）都会被设置为 &lt;code&gt;metadata.name&lt;/code&gt; 值（从 Pod 的角度）。&lt;/p&gt;</description></item><item><title>IPv4/IPv6 双协议栈</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dual-stack/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dual-stack/</guid><description>&lt;!--
title: IPv4/IPv6 dual-stack
description: &gt;-
 Kubernetes lets you configure single-stack IPv4 networking,
 single-stack IPv6 networking, or dual stack networking with
 both network families active. This page explains how.
feature:
 title: IPv4/IPv6 dual-stack
 description: &gt;
 Allocation of IPv4 and IPv6 addresses to Pods and Services
content_type: concept
reviewers:
 - lachie83
 - khenidak
 - aramase
 - bridgetkromhout
weight: 90
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Services'&gt;Services&lt;/a&gt;.
--&gt;
&lt;p&gt;IPv4/IPv6 双协议栈网络能够将 IPv4 和 IPv6 地址分配给
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 和
&lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>kubeadm alpha</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-alpha/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-alpha/</guid><description>&lt;!--
title: kubeadm alpha
content_type: concept
weight: 90
--&gt;
&lt;div class="alert alert-caution" role="note"&gt;&lt;h4 class="alert-heading"&gt;注意：&lt;/h4&gt;&lt;!--
`kubeadm alpha` provides a preview of a set of features made available for gathering feedback
 from the community. Please try it out and give us feedback!
 --&gt;
&lt;p&gt;&lt;code&gt;kubeadm alpha&lt;/code&gt; 提供了一组可用于收集社区反馈的预览性质功能。
请试用这些功能并给我们提供反馈！&lt;/p&gt;&lt;/div&gt;

&lt;!--
Currently there are no experimental commands under `kubeadm alpha`.
--&gt;
&lt;p&gt;目前在 &lt;code&gt;kubeadm alpha&lt;/code&gt; 之下没有试验性质的命令。&lt;/p&gt;
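可以通过帮助信息确认当前 kubeadm 版本中 `kubeadm alpha` 之下的可用命令（示意性示例；实际输出取决于所安装的 kubeadm 版本）：

```shell
# 列出 kubeadm alpha 下当前可用的子命令（如有）
kubeadm alpha --help
```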
&lt;h2 id="接下来"&gt;接下来&lt;/h2&gt;
&lt;!--
* [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node
* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to connect a node to the cluster
* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join`
--&gt;
&lt;ul&gt;
&lt;li&gt;用来启动引导 Kubernetes 控制平面节点的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/"&gt;kubeadm init&lt;/a&gt;
命令&lt;/li&gt;
&lt;li&gt;用来将节点连接到集群的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join/"&gt;kubeadm join&lt;/a&gt;
命令&lt;/li&gt;
&lt;li&gt;用来还原 &lt;code&gt;kubeadm init&lt;/code&gt; 或 &lt;code&gt;kubeadm join&lt;/code&gt; 操作对主机所做的任何更改的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset/"&gt;kubeadm reset&lt;/a&gt;
命令&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>kubeadm certs</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-certs/</guid><description>&lt;!--
`kubeadm certs` provides utilities for managing certificates.
For more details on how these commands can be used, see
[Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/).
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm certs&lt;/code&gt; 提供管理证书的工具。关于如何使用这些命令的细节，
可参见&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/"&gt;使用 kubeadm 管理证书&lt;/a&gt;。&lt;/p&gt;
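例如，可以用下面的命令查看并续订由 kubeadm 管理的证书（示意性示例；`check-expiration` 与 `renew` 是 `kubeadm certs` 的子命令，需在控制平面节点上以 root 身份运行，具体输出取决于 kubeadm 版本）：

```shell
# 查看各证书的到期时间
kubeadm certs check-expiration

# 续订全部由 kubeadm 管理的证书
kubeadm certs renew all
```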
&lt;h2 id="cmd-certs"&gt;kubeadm certs&lt;/h2&gt;
&lt;!--
A collection of operations for operating Kubernetes certificates.
--&gt;
&lt;p&gt;用来操作 Kubernetes 证书的一组命令。&lt;/p&gt;
&lt;ul class="nav nav-tabs" id="tab-certs" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-certs-0" role="tab" aria-controls="tab-certs-0" aria-selected="true"&gt;概览&lt;/a&gt;&lt;/li&gt;
	 &lt;/ul&gt;
&lt;div class="tab-content" id="tab-certs"&gt;&lt;div id="tab-certs-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-certs-0"&gt;

&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Commands related to handling Kubernetes certificates
--&gt;
&lt;p&gt;处理 Kubernetes 证书相关的命令。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm certs &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for certs
--&gt;
certs 操作的帮助命令。
&lt;/p&gt;</description></item><item><title>kubeadm init phase</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/</guid><description>&lt;!--
title: kubeadm init phase
weight: 90
content_type: concept
--&gt;
&lt;!--
`kubeadm init phase` enables you to invoke atomic steps of the bootstrap process.
Hence, you can let kubeadm do some of the work and you can fill in the gaps
if you wish to apply customization.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm init phase&lt;/code&gt; 使你能够调用引导过程的基本原子步骤。
因此，如果希望执行自定义操作，可以让 kubeadm 做一些工作，然后由用户来补足剩余操作。&lt;/p&gt;
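例如，下面的命令只执行引导流程中的个别阶段（示意性示例；可用的阶段名称可通过 `kubeadm init phase --help` 查看，随 kubeadm 版本而变化）：

```shell
# 仅生成 CA 与各组件证书，而不执行完整的 kubeadm init 流程
kubeadm init phase certs all

# 之后可以继续执行其他阶段，例如生成 kubeconfig 文件
kubeadm init phase kubeconfig all
```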
&lt;!--
`kubeadm init phase` is consistent with the [kubeadm init workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow),
and behind the scene both use the same code.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm init phase&lt;/code&gt; 与 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow"&gt;kubeadm init 工作流&lt;/a&gt;
一致，后台都使用相同的代码。&lt;/p&gt;</description></item><item><title>kubeadm join phase</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/</guid><description>&lt;!--
title: kubeadm join phase
weight: 90
content_type: concept
--&gt;
&lt;!--
`kubeadm join phase` enables you to invoke atomic steps of the join process.
Hence, you can let kubeadm do some of the work and you can fill in the gaps
if you wish to apply customization.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm join phase&lt;/code&gt; 使你能够调用 &lt;code&gt;join&lt;/code&gt; 过程的基本原子步骤。
因此，如果希望执行自定义操作，可以让 kubeadm 做一些工作，然后由用户来补足剩余操作。&lt;/p&gt;
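例如，下面的命令只执行 join 流程中的个别阶段（示意性示例；API 服务器地址与令牌均为假设值，可用的阶段名称可通过 `kubeadm join phase --help` 查看）：

```shell
# 列出 join 流程的全部阶段
kubeadm join phase --help

# 仅执行 kubelet 启动阶段（地址与令牌为假设值）
kubeadm join phase kubelet-start 192.168.0.10:6443 --token abcdef.0123456789abcdef
```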
&lt;!--
`kubeadm join phase` is consistent with the [kubeadm join workflow](/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow),
and behind the scene both use the same code.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm join phase&lt;/code&gt; 与
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow"&gt;kubeadm join 工作流程&lt;/a&gt;一致，
后台都使用相同的代码。&lt;/p&gt;</description></item><item><title>kubeadm kubeconfig</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/</guid><description>&lt;!--
`kubeadm kubeconfig` provides utilities for managing kubeconfig files.

For examples on how to use `kubeadm kubeconfig user` see
[Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users).
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm kubeconfig&lt;/code&gt; 提供用来管理 kubeconfig 文件的工具。&lt;/p&gt;
&lt;p&gt;如果希望查看如何使用 &lt;code&gt;kubeadm kubeconfig user&lt;/code&gt; 的示例，
请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users"&gt;为其他用户生成 kubeconfig 文件&lt;/a&gt;。&lt;/p&gt;
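例如，可以为某个额外用户生成 kubeconfig 内容（示意性示例；用户名 `johndoe` 与配置文件名 `kubeadm.yaml` 均为假设值，生成的 kubeconfig 会打印到标准输出）：

```shell
# 基于集群配置为用户 johndoe 生成 kubeconfig 内容
kubeadm kubeconfig user --client-name johndoe --config kubeadm.yaml
```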
&lt;h2 id="cmd-kubeconfig"&gt;kubeadm kubeconfig&lt;/h2&gt;
&lt;ul class="nav nav-tabs" id="tab-kubeconfig" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-kubeconfig-0" role="tab" aria-controls="tab-kubeconfig-0" aria-selected="true"&gt;概述&lt;/a&gt;&lt;/li&gt;
	 &lt;/ul&gt;
&lt;div class="tab-content" id="tab-kubeconfig"&gt;&lt;div id="tab-kubeconfig-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-kubeconfig-0"&gt;

&lt;!--
### Synopsis

Kubeconfig file utilities.

### Options
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;kubeconfig 文件工具。&lt;/p&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for kubeconfig
--&gt;
kubeconfig 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
到“真实”主机根文件系统的路径。设置此标志将导致 kubeadm 以 chroot 方式切换到所提供的路径。
&lt;/p&gt;</description></item><item><title>kubeadm reset phase</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset-phase/</guid><description>&lt;!--
title: kubeadm reset phase
weight: 90
content_type: concept
--&gt;
&lt;!--
`kubeadm reset phase` enables you to invoke atomic steps of the node reset process.
Hence, you can let kubeadm do some of the work and you can fill in the gaps
if you wish to apply customization.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm reset phase&lt;/code&gt; 使你能够调用 &lt;code&gt;reset&lt;/code&gt; 过程的基本原子步骤。
因此，如果希望执行自定义操作，你可以让 kubeadm 做一些工作，然后由用户来补足剩余操作。&lt;/p&gt;
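例如，下面的命令只执行 reset 流程中的个别阶段（示意性示例；可用的阶段名称可通过 `kubeadm reset phase --help` 查看）：

```shell
# 列出 reset 流程的全部阶段
kubeadm reset phase --help

# 仅执行清理本节点的阶段，而不运行其他 reset 步骤
kubeadm reset phase cleanup-node
```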
&lt;!--
`kubeadm reset phase` is consistent with the [kubeadm reset workflow](/docs/reference/setup-tools/kubeadm/kubeadm-reset/#reset-workflow),
and behind the scene both use the same code.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm reset phase&lt;/code&gt; 与
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset/#reset-workflow"&gt;kubeadm reset 工作流程&lt;/a&gt;一致，
后台都使用相同的代码。&lt;/p&gt;</description></item><item><title>Kubernetes API 服务器旁路风险</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/api-server-bypass-risks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/api-server-bypass-risks/</guid><description>&lt;!--
title: Kubernetes API Server Bypass Risks
description: &gt;
 Security architecture information relating to the API server and other components
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The Kubernetes API server is the main point of entry to a cluster for external parties
(users and services) interacting with it.
 --&gt;
&lt;p&gt;Kubernetes API 服务器是外部（用户和服务）与集群交互的主要入口。&lt;/p&gt;
&lt;!--
As part of this role, the API server has several key built-in security controls, such as
audit logging and &lt;a class='glossary-tooltip' title='在对象持久化之前拦截 Kubernetes API 服务器请求的一段代码。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='admission controllers'&gt;admission controllers&lt;/a&gt;.
However, there are ways to modify the configuration
or content of the cluster that bypass these controls.
--&gt;
&lt;p&gt;API 服务器作为交互的主要入口，还提供了几种关键的内置安全控制，
例如审计日志和&lt;a class='glossary-tooltip' title='在对象持久化之前拦截 Kubernetes API 服务器请求的一段代码。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='准入控制器'&gt;准入控制器&lt;/a&gt;。
但有一些方式可以绕过这些安全控制从而修改集群的配置或内容。&lt;/p&gt;</description></item><item><title>Pod 优先级和抢占</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/</guid><description>&lt;!--
title: Pod Priority and Preemption
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.14 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
[Pods](/docs/concepts/workloads/pods/) can have _priority_. Priority indicates the
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
pending Pod possible.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/"&gt;Pod&lt;/a&gt; 可以有&lt;strong&gt;优先级&lt;/strong&gt;。
优先级表示一个 Pod 相对于其他 Pod 的重要性。
如果一个 Pod 无法被调度，调度程序会尝试抢占（驱逐）较低优先级的 Pod，
以使悬决 Pod 可以被调度。&lt;/p&gt;</description></item><item><title>ReplicationController</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/</guid><description>&lt;!--
reviewers:
- bprashanth
- janetkuo
title: ReplicationController
api_metadata:
- apiVersion: "v1"
 kind: "ReplicationController"
content_type: concept
weight: 90
description: &gt;-
 Legacy API for managing workloads that can scale horizontally.
 Superseded by the Deployment and ReplicaSet APIs.
--&gt;
&lt;!-- overview --&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
--&gt;
&lt;p&gt;现在推荐使用配置 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/"&gt;&lt;code&gt;ReplicaSet&lt;/code&gt;&lt;/a&gt; 的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/"&gt;&lt;code&gt;Deployment&lt;/code&gt;&lt;/a&gt; 来建立副本管理机制。&lt;/p&gt;&lt;/div&gt;

&lt;!--
A _ReplicationController_ ensures that a specified number of pod replicas are running at any one
time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
always up and available.
--&gt;
&lt;p&gt;&lt;strong&gt;ReplicationController&lt;/strong&gt; 确保在任何时候都有特定数量的 Pod 副本处于运行状态。
换句话说，ReplicationController 确保一个 Pod 或一组同类的 Pod 总是可用的。&lt;/p&gt;</description></item><item><title>从轮询切换为基于 CRI 事件的更新来获取容器状态</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/switch-to-evented-pleg/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/switch-to-evented-pleg/</guid><description>&lt;!--
title: Switching from Polling to CRI Event-based Updates to Container Status
min-kubernetes-server-version: 1.26
content_type: task
weight: 90
--&gt;







 &lt;div class="feature-state-notice feature-alpha" title="特性门控： EventedPLEG"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.26 [alpha]&lt;/code&gt;（默认禁用）&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;!--
This page shows how to migrate nodes to use event based updates for container status. The event-based
implementation reduces node resource consumption by the kubelet, compared to the legacy approach
that relies on polling.
You may know this feature as _evented Pod lifecycle event generator (PLEG)_. That's the name used
internally within the Kubernetes project for a key implementation detail.

The polling based approach is referred to as _generic PLEG_.
--&gt;
&lt;p&gt;本页展示了如何迁移节点以使用基于事件的更新来获取容器状态。
与依赖轮询的传统方法相比，基于事件的实现可以减少 kubelet 对节点资源的消耗。
你可能知道这个特性的名称是&lt;strong&gt;事件驱动的 Pod 生命周期事件生成器（PLEG）&lt;/strong&gt;。
这是在 Kubernetes 项目内部针对关键实现细节所用的名称。&lt;/p&gt;</description></item><item><title>改变默认 StorageClass</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This page shows how to change the default Storage Class that is used to
provision volumes for PersistentVolumeClaims that have no special requirements.
--&gt;
&lt;p&gt;本文展示了如何更改默认的 StorageClass，默认 StorageClass 用于为没有特殊需求的 PersistentVolumeClaim 制备卷。&lt;/p&gt;
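更改默认 StorageClass 的核心操作是修改 `storageclass.kubernetes.io/is-default-class` 注解（示意性示例；`standard` 与 `gold` 均为假设的 StorageClass 名称）：

```shell
# 查看当前的 StorageClass，默认类会标注 (default)
kubectl get storageclass

# 取消原默认类的默认标记
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# 将另一个 StorageClass 设为默认
kubectl patch storageclass gold -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```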
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>加固指南 - 调度器配置</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/hardening-guide/scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/hardening-guide/scheduler/</guid><description>&lt;!--
title: "Hardening Guide - Scheduler Configuration"
description: &gt;
 Information about how to make the Kubernetes scheduler more secure.
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The Kubernetes &lt;a class='glossary-tooltip' title='控制平面组件，负责监视新创建的、未指定运行节点的 Pod，选择节点让 Pod 在上面运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='scheduler'&gt;scheduler&lt;/a&gt; is
one of the critical components of the
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;.

This document covers how to improve the security posture of the Scheduler.

A misconfigured scheduler can have security implications. 
Such a scheduler can target specific nodes and evict the workloads or applications that are sharing the node and its resources. 
This can aid an attacker with a [Yo-Yo attack](https://arxiv.org/abs/2105.00542): an attack on a vulnerable autoscaler.
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='控制平面组件，负责监视新创建的、未指定运行节点的 Pod，选择节点让 Pod 在上面运行。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='调度器'&gt;调度器&lt;/a&gt;是&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;的关键组件之一。&lt;/p&gt;</description></item><item><title>加固指南 - 身份认证机制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/hardening-guide/authentication-mechanisms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/hardening-guide/authentication-mechanisms/</guid><description>&lt;!--
---
title: Hardening Guide - Authentication Mechanisms
description: &gt;
 Information on authentication options in Kubernetes and their security properties.
content_type: concept
weight: 90
---
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Selecting the appropriate authentication mechanism(s) is a crucial aspect of securing your cluster.
Kubernetes provides several built-in mechanisms, each with its own strengths and weaknesses that
should be carefully considered when choosing the best authentication mechanism for your cluster.
--&gt;
&lt;p&gt;选择合适的身份认证机制是确保集群安全的一个重要方面。
Kubernetes 提供了多种内置机制，
当为你的集群选择最好的身份认证机制时需要谨慎考虑每种机制的优缺点。&lt;/p&gt;</description></item><item><title>将 PersistentVolume 的访问模式更改为 ReadWriteOncePod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/</guid><description>&lt;!--
title: Change the Access Mode of a PersistentVolume to ReadWriteOncePod
content_type: task
weight: 90
min-kubernetes-server-version: v1.22
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to change the access mode on an existing PersistentVolume to
use `ReadWriteOncePod`.
--&gt;
&lt;p&gt;本文演示了如何将现有 PersistentVolume 的访问模式更改为使用 &lt;code&gt;ReadWriteOncePod&lt;/code&gt;。&lt;/p&gt;
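更改访问模式可以通过 `kubectl patch` 完成（示意性示例；`my-pv` 为假设的 PersistentVolume 名称，实际操作前请先确认该卷只被一个 Pod 使用）：

```shell
# 将现有 PersistentVolume 的访问模式改为 ReadWriteOncePod
kubectl patch pv my-pv -p '{"spec":{"accessModes":["ReadWriteOncePod"]}}'
```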
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>内容组织</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-organization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-organization/</guid><description>&lt;!--
title: Content organization
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This site uses Hugo. In Hugo, [content
organization](https://gohugo.io/content-management/organization/) is a core
concept.
--&gt;
&lt;p&gt;本网站使用了 Hugo。在 Hugo 中，&lt;a href="https://gohugo.io/content-management/organization/"&gt;内容组织&lt;/a&gt;是一个核心概念。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
**Hugo Tip:** Start Hugo with `hugo server --navigateToChanged` for content edit-sessions.
--&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;strong&gt;Hugo 提示：&lt;/strong&gt; 用 &lt;code&gt;hugo server --navigateToChanged&lt;/code&gt; 命令启动 Hugo 以进行内容编辑会话。&lt;/div&gt;

&lt;!--
## Page Lists

### Page Order

The documentation side menu, the documentation page browser etc. are listed using
Hugo's default sort order, which sorts by weight (from 1), date (newest first),
and finally by the link title.

Given that, if you want to move a page or a section up, set a weight in the page's front matter:
--&gt;
&lt;h2 id="页面列表"&gt;页面列表&lt;/h2&gt;
&lt;h3 id="页面顺序"&gt;页面顺序&lt;/h3&gt;
&lt;p&gt;文档侧边菜单、文档页面浏览器等均按 Hugo 的默认排序顺序列出。Hugo 会按照权重（从 1 开始）、
日期（最新的排最前面）排序，最后按链接标题排序。&lt;/p&gt;</description></item><item><title>配置 Pod 以使用 PersistentVolume 作为存储</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</guid><description>&lt;!--
title: Configure a Pod to Use a PersistentVolume for Storage
content_type: task
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows you how to configure a Pod to use a
&lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PersistentVolumeClaim'&gt;PersistentVolumeClaim&lt;/a&gt;
for storage.
Here is a summary of the process:

1. You, as cluster administrator, create a PersistentVolume backed by physical
 storage. You do not associate the volume with any Pod.

1. You, now taking the role of a developer / cluster user, create a
 PersistentVolumeClaim that is automatically bound to a suitable
 PersistentVolume.

1. You create a Pod that uses the above PersistentVolumeClaim for storage.
--&gt;
&lt;p&gt;本文将向你介绍如何配置 Pod 使用
&lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PersistentVolumeClaim'&gt;PersistentVolumeClaim&lt;/a&gt;
作为存储。
以下是该过程的总结：&lt;/p&gt;</description></item><item><title>属主与附属</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/</guid><description>&lt;!-- 
title: Owners and Dependents
content_type: concept
weight: 60
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, some &lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; are
*owners* of other objects. For example, a
&lt;a class='glossary-tooltip' title='ReplicaSet 确保一次运行指定数量的 Pod 副本。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/' target='_blank' aria-label='ReplicaSet'&gt;ReplicaSet&lt;/a&gt; is the owner
of a set of Pods. These owned objects are *dependents* of their owner.
--&gt;
&lt;p&gt;在 Kubernetes 中，一些&lt;a class='glossary-tooltip' title='Kubernetes 系统中的实体，代表了集群的部分状态。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='对象'&gt;对象&lt;/a&gt;是其他对象的“属主（Owner）”。
例如，&lt;a class='glossary-tooltip' title='ReplicaSet 确保一次运行指定数量的 Pod 副本。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/replicaset/' target='_blank' aria-label='ReplicaSet'&gt;ReplicaSet&lt;/a&gt; 是一组 Pod 的属主。
具有属主的对象是属主的“附属（Dependent）”。&lt;/p&gt;</description></item><item><title>特定于节点的卷数限制</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-limits/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-limits/</guid><description>&lt;!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: Node-specific Volume Limits
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.
--&gt;
&lt;p&gt;此页面描述了各个云供应商可挂接至一个节点的最大卷数。&lt;/p&gt;
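对于使用 CSI 驱动的节点，可以通过 CSINode 对象查看各驱动报告的可挂接卷数上限（示意性示例；`my-node` 为假设的节点名称）：

```shell
# 查看节点上各 CSI 驱动允许挂接的最大卷数（spec.drivers[].allocatable.count）
kubectl get csinode my-node -o yaml
```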
&lt;!--
Cloud providers like Google, Amazon, and Microsoft typically have a limit on
how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.
--&gt;
&lt;p&gt;谷歌、亚马逊和微软等云供应商通常对可以挂接到节点的卷数量进行限制。
Kubernetes 需要尊重这些限制。否则，在节点上调度的 Pod 可能会卡住去等待卷的挂接。&lt;/p&gt;</description></item><item><title>为 kubectl 命令集生成参考文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubectl/</guid><description>&lt;!--
title: Generating Reference Documentation for kubectl Commands
content_type: task
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to generate the `kubectl` command reference.
--&gt;
&lt;p&gt;本页面描述了如何生成 &lt;code&gt;kubectl&lt;/code&gt; 命令参考。&lt;/p&gt;
&lt;!--
This topic shows how to generate reference documentation for
[kubectl commands](/docs/reference/generated/kubectl/kubectl-commands) like
[kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) and
[kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint).
This topic does not show how to generate the
[kubectl](/docs/reference/generated/kubectl/kubectl/)
options reference page. For instructions on how to generate the kubectl options
reference page, see
[Generating Reference Pages for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/).
--&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;本主题描述了如何为 &lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands"&gt;kubectl 命令&lt;/a&gt;
生成参考文档，如 &lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands#apply"&gt;kubectl apply&lt;/a&gt; 和
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands#taint"&gt;kubectl taint&lt;/a&gt;。
本主题没有讨论如何生成 &lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl/"&gt;kubectl&lt;/a&gt; 组件选项的参考页面。
相关说明请参见&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/"&gt;为 Kubernetes 组件和工具生成参考页面&lt;/a&gt;。&lt;/div&gt;

&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;

	&lt;!--
### Requirements:

- You need a machine that is running Linux or macOS.

- You need to have these tools installed:

 - [Python](https://www.python.org/downloads/) v3.7.x+
 - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
 - [Golang](https://go.dev/dl/) version 1.13+
 - [Pip](https://pypi.org/project/pip/) used to install PyYAML
 - [PyYAML](https://pyyaml.org/) v5.1.2
 - [make](https://www.gnu.org/software/make/)
 - [gcc compiler/linker](https://gcc.gnu.org/)
 - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference)
--&gt;
&lt;h3 id="requirements"&gt;需求&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;你需要一台 Linux 或 macOS 机器。&lt;/p&gt;</description></item><item><title>重新配置 kubeadm 集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/</guid><description>&lt;!--
reviewers:
- sig-cluster-lifecycle
title: Reconfiguring a kubeadm cluster
content_type: task
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
kubeadm does not support automated ways of reconfiguring components that
were deployed on managed nodes. One way of automating this would be
by using a custom [operator](/docs/concepts/extend-kubernetes/operator/).
--&gt;
&lt;p&gt;kubeadm 不支持以自动化方式重新配置部署在托管节点上的组件。
实现这种自动化的一种途径是使用自定义的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/operator/"&gt;operator&lt;/a&gt;。&lt;/p&gt;
&lt;!--
To modify the components configuration you must manually edit associated cluster
objects and files on disk.

This guide shows the correct sequence of steps that need to be performed
to achieve kubeadm cluster reconfiguration.
--&gt;
&lt;p&gt;要修改组件配置，你必须手动编辑磁盘上关联的集群对象和文件。
本指南展示了实现 kubeadm 集群重新配置所需执行的正确步骤顺序。&lt;/p&gt;</description></item><item><title>追踪 Kubernetes 系统组件</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/system-traces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/system-traces/</guid><description>&lt;!-- 
title: Traces For Kubernetes System Components
reviewers:
- logicalhan
- lilic
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;div class="feature-state-notice feature-beta"&gt;
&lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
&lt;code&gt;Kubernetes v1.27 [beta]&lt;/code&gt;
&lt;/div&gt;
&lt;!-- 
System component traces record the latency of and relationships between operations in the cluster.
--&gt;
&lt;p&gt;系统组件追踪功能记录各个集群操作的时延信息和这些操作之间的关系。&lt;/p&gt;
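作为参考，可以通过 kube-apiserver 的 `--tracing-config-file` 标志所指向的配置文件启用追踪。下面是一个示意性的配置片段（端点地址与采样率均为假设的示例取值，`apiVersion` 请以你所用 Kubernetes 版本的参考文档为准）：

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: TracingConfiguration
# OpenTelemetry Collector 的 OTLP gRPC 端点（假设其监听本机 4317 端口）
endpoint: localhost:4317
# 每一百万个 span 中被采样的数量（示例取值）
samplingRatePerMillion: 100
```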
&lt;!-- 
Kubernetes components emit traces using the
[OpenTelemetry Protocol](https://opentelemetry.io/docs/specs/otlp/)
with the gRPC exporter and can be collected and routed to tracing backends using an
[OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector).
--&gt;
&lt;p&gt;Kubernetes 组件基于 gRPC 导出器的
&lt;a href="https://opentelemetry.io/docs/specs/otlp/"&gt;OpenTelemetry 协议&lt;/a&gt;
发送追踪信息，并用
&lt;a href="https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector"&gt;OpenTelemetry Collector&lt;/a&gt;
收集追踪信息，再将其转交给追踪系统的后台。&lt;/p&gt;</description></item><item><title>从 PodSecurityPolicy 映射到 Pod 安全性标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/psp-to-pod-security-standards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/psp-to-pod-security-standards/</guid><description>&lt;!--
reviewers:
- tallclair
- liggitt
title: Mapping PodSecurityPolicies to Pod Security Standards
content_type: concept
weight: 95
--&gt;
&lt;!-- overview --&gt;
&lt;!--
The tables below enumerate the configuration parameters on
`PodSecurityPolicy` objects, whether the field mutates
and/or validates pods, and how the configuration values map to the
[Pod Security Standards](/docs/concepts/security/pod-security-standards/).
--&gt;
&lt;p&gt;下面的表格列举了 &lt;code&gt;PodSecurityPolicy&lt;/code&gt;
对象上的配置参数，这些字段是否会变更或检查 Pod 配置，以及这些配置值如何映射到
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/"&gt;Pod 安全性标准（Pod Security Standards）&lt;/a&gt;
之上。&lt;/p&gt;
&lt;!--
For each applicable parameter, the allowed values for the
[Baseline](/docs/concepts/security/pod-security-standards/#baseline) and
[Restricted](/docs/concepts/security/pod-security-standards/#restricted) profiles are listed.
Anything outside the allowed values for those profiles would fall under the
[Privileged](/docs/concepts/security/pod-security-standards/#privileged) profile. "No opinion"
means all values are allowed under all Pod Security Standards.
--&gt;
&lt;p&gt;对于每个可应用的参数，表格中给出了
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/#baseline"&gt;Baseline&lt;/a&gt; 和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/#restricted"&gt;Restricted&lt;/a&gt;
配置下可接受的取值。
对这两种配置而言不可接受的取值均归入
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/#privileged"&gt;Privileged&lt;/a&gt;
配置下。“无意见”意味着对所有 Pod 安全性标准而言所有取值都可接受。&lt;/p&gt;</description></item><item><title>HorizontalPodAutoscaler 演练</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</guid><description>&lt;!--
reviewers:
- fgrzadkowski
- jszczepkowski
- justinsb
- directxman12
title: Horizontal Pod Autoscaler Walkthrough
content_type: task
weight: 100
min-kubernetes-server-version: 1.23
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A [HorizontalPodAutoscaler](/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/)
(HPA for short)
automatically updates a workload resource (such as
a &lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; or
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;), with the
aim of automatically scaling the workload to match demand.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/"&gt;HorizontalPodAutoscaler&lt;/a&gt;（简称 HPA ）
自动更新工作负载资源（例如 &lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; 或者
&lt;a class='glossary-tooltip' title='StatefulSet 用来管理某 Pod 集合的部署和扩缩，并为这些 Pod 提供持久存储和持久标识符。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;），
目的是自动扩缩工作负载以满足需求。&lt;/p&gt;</description></item><item><title>Kubernetes 中的代理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/proxies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/proxies/</guid><description>&lt;!--
title: Proxies in Kubernetes
content_type: concept
weight: 90
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains proxies used with Kubernetes.
--&gt;
&lt;p&gt;本文讲述了 Kubernetes 中所使用的代理。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Proxies

There are several different proxies you may encounter when using Kubernetes:
--&gt;
&lt;h2 id="proxies"&gt;代理&lt;/h2&gt;
&lt;p&gt;用户在使用 Kubernetes 的过程中可能遇到几种不同的代理（proxy）：&lt;/p&gt;
&lt;!--
1. The [kubectl proxy](/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api):

 - runs on a user's desktop or in a pod
 - proxies from a localhost address to the Kubernetes apiserver
 - client to proxy uses HTTP
 - proxy to apiserver uses HTTPS
 - locates apiserver
 - adds authentication headers
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api"&gt;kubectl proxy&lt;/a&gt;：&lt;/p&gt;</description></item><item><title>安全检查清单</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/security-checklist/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/security-checklist/</guid><description>&lt;!--
title: Security Checklist
description: &gt;
 Baseline checklist for ensuring security in Kubernetes clusters.
content_type: concept
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This checklist aims at providing a basic list of guidance with links to more
comprehensive documentation on each topic. It does not claim to be exhaustive
and is meant to evolve.

On how to read and use this document:

- The order of topics does not reflect an order of priority.
- Some checklist items are detailed in the paragraph below the list of each section.
--&gt;
&lt;p&gt;本清单旨在提供一个基本的指导列表，其中包含链接，指向各个主题的更为全面的文档。
此清单不求详尽无遗，并且预计会不断演化。&lt;/p&gt;</description></item><item><title>更改 PersistentVolume 的回收策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/change-pv-reclaim-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/change-pv-reclaim-policy/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This page shows how to change the reclaim policy of a Kubernetes
PersistentVolume.
--&gt;
&lt;p&gt;本文展示了如何更改 Kubernetes PersistentVolume 的回收策略。&lt;/p&gt;
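作为参考，回收策略由 PersistentVolume 的 `spec.persistentVolumeReclaimPolicy` 字段控制。下面是一个示意性的清单片段（名称、容量与路径均为假设的示例）：

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                # 假设的示例名称
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # 可选值：Retain、Delete、Recycle（已废弃）
  hostPath:
    path: /tmp/pv-demo         # 仅用于演示的本地路径
```

对于已有的 PV，也可以用类似 `kubectl patch pv pv-demo -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'` 的命令直接修改该字段。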
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>节点压力驱逐</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/</guid><description>&lt;!--
title: Node-pressure Eviction
content_type: concept
weight: 100
--&gt;
&lt;p&gt;节点压力驱逐是 &lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; 主动终止 Pod 以回收节点上资源的过程。&lt;/p&gt;
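驱逐阈值在 kubelet 的配置中设置。下面是一个示意性的 KubeletConfiguration 片段（阈值取值均为假设的示例）：

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"    # 可用内存低于此值时触发硬性驱逐
  nodefs.available: "10%"      # 节点文件系统可用空间阈值
  imagefs.available: "15%"     # 镜像文件系统可用空间阈值
```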
&lt;!--
The &lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; monitors resources
like memory, disk space, and filesystem inodes on your cluster's nodes.
When one or more of these resources reach specific consumption levels, the
kubelet can proactively fail one or more pods on the node to reclaim resources
and prevent starvation.
--&gt;
&lt;p&gt;&lt;a class='glossary-tooltip' title='一个在集群中每个节点上运行的代理。它保证容器都运行在 Pod 中。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt;
监控集群节点的内存、磁盘空间和文件系统的 inode 等资源。
当这些资源中的一个或者多个达到特定的消耗水平，
kubelet 可以主动地使节点上一个或者多个 Pod 失效，以回收资源防止饥饿。&lt;/p&gt;</description></item><item><title>进阶贡献</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/advanced/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/advanced/</guid><description>&lt;!--
title: Advanced contributing
slug: advanced
content_type: concept
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page assumes that you understand how to
[contribute to new content](/docs/contribute/new-content/) and
[review others' work](/docs/contribute/review/reviewing-prs/), and are ready
to learn about more ways to contribute. You need to use the Git command line
client and other tools for some of these tasks.
--&gt;
&lt;p&gt;如果你已经了解如何&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/new-content/"&gt;贡献新内容&lt;/a&gt;和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/review/reviewing-prs/"&gt;评阅他人工作&lt;/a&gt;，并准备了解更多贡献的途径，
请阅读此文。你需要使用 Git 命令行工具和其他工具做这些工作。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Propose improvements

SIG Docs [members](/docs/contribute/participate/roles-and-responsibilities/#members)
can propose improvements.
--&gt;
&lt;h2 id="propose-improvements"&gt;提出改进建议&lt;/h2&gt;
&lt;p&gt;SIG Docs 的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members"&gt;成员&lt;/a&gt;可以提出改进建议。&lt;/p&gt;</description></item><item><title>卷健康监测</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-health-monitoring/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-health-monitoring/</guid><description>&lt;!-- 
reviewers:
- jsafrane
- saad-ali
- msau42
- xing-yang
title: Volume Health Monitoring
content_type: concept
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;div class="feature-state-notice feature-alpha"&gt;
&lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
&lt;code&gt;Kubernetes v1.21 [alpha]&lt;/code&gt;
&lt;/div&gt;
&lt;!--
&lt;a class='glossary-tooltip' title='容器存储接口 （CSI）定义了存储系统暴露给容器的标准接口。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#csi' target='_blank' aria-label='CSI'&gt;CSI&lt;/a&gt; volume health monitoring allows
CSI Drivers to detect abnormal volume conditions from the underlying storage systems
and report them as events on &lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PVCs'&gt;PVCs&lt;/a&gt;
or &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.
--&gt;
&lt;p&gt;&lt;a class='glossary-tooltip' title='容器存储接口 （CSI）定义了存储系统暴露给容器的标准接口。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#csi' target='_blank' aria-label='CSI'&gt;CSI&lt;/a&gt; 卷健康监测支持 CSI 驱动检测底层存储系统中出现的异常卷状况，
并以事件的形式上报到 &lt;a class='glossary-tooltip' title='声明在持久卷中定义的存储资源，以便可以将其挂载为容器中的卷。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PVC'&gt;PVC&lt;/a&gt;
或 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>列出集群中所有运行容器的镜像</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images/</guid><description>&lt;!--
title: List All Container Images Running in a Cluster
content_type: task
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use kubectl to list all of the Container images
for Pods running in a cluster.
--&gt;
&lt;p&gt;本文展示如何使用 kubectl 列出集群中所有运行中的 Pod 所使用的容器镜像。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>配置 Pod 使用投射卷作存储</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-projected-volume-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-projected-volume-storage/</guid><description>&lt;!--
reviewers:
- jpeeler
- pmorie
title: Configure a Pod to Use a Projected Volume for Storage
content_type: task
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use a [`projected`](/docs/concepts/storage/volumes/#projected) Volume to mount
several existing volume sources into the same directory. Currently, `secret`, `configMap`, `downwardAPI`,
and `serviceAccountToken` volumes can be projected.
--&gt;
&lt;p&gt;本文介绍怎样通过 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volumes/#projected"&gt;&lt;code&gt;projected&lt;/code&gt;&lt;/a&gt; 卷将现有的多个卷来源挂载到同一目录。
当前，&lt;code&gt;secret&lt;/code&gt;、&lt;code&gt;configMap&lt;/code&gt;、&lt;code&gt;downwardAPI&lt;/code&gt; 和 &lt;code&gt;serviceAccountToken&lt;/code&gt; 卷可以被投射。&lt;/p&gt;
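下面是一个示意性的 Pod 清单片段（其中 Secret、ConfigMap 等名称均为假设的示例），演示如何在同一个 `projected` 卷中组合多种卷来源：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox
    args: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret             # 假设已存在的 Secret
      - configMap:
          name: myconfigmap          # 假设已存在的 ConfigMap
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600
```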
&lt;!--
`serviceAccountToken` is not a volume type.
--&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;code&gt;serviceAccountToken&lt;/code&gt; 不是一种卷类型。&lt;/div&gt;

&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>实现细节</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/</guid><description>&lt;!-- 
reviewers:
- luxas
- jbeda
title: Implementation details
content_type: concept
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;div class="feature-state-notice feature-stable"&gt;
&lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
&lt;code&gt;Kubernetes v1.10 [stable]&lt;/code&gt;
&lt;/div&gt;
&lt;!-- 
`kubeadm init` and `kubeadm join` together provide a nice user experience for creating a
bare Kubernetes cluster from scratch, that aligns with the best-practices.
However, it might not be obvious _how_ kubeadm does that.
--&gt;
&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; 和 &lt;code&gt;kubeadm join&lt;/code&gt; 结合在一起为从头开始创建最基本的 Kubernetes
集群提供了良好的用户体验，这与最佳实践一致。
但是，kubeadm &lt;strong&gt;如何&lt;/strong&gt;做到这一点可能并不明显。&lt;/p&gt;
&lt;!-- 
This document provides additional details on what happens under the hood, with the aim of sharing
knowledge on the best practices for a Kubernetes cluster.
--&gt;
&lt;p&gt;本文档提供了更多幕后的详细信息，旨在分享有关 Kubernetes 集群最佳实践的知识。&lt;/p&gt;</description></item><item><title>使用 kubeadm 支持双协议栈</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support/</guid><description>&lt;!--
title: Dual-stack support with kubeadm
content_type: task
weight: 100
min-kubernetes-server-version: 1.21
--&gt;
&lt;!-- overview --&gt;
&lt;div class="feature-state-notice feature-stable"&gt;
&lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
&lt;code&gt;Kubernetes v1.23 [stable]&lt;/code&gt;
&lt;/div&gt;
&lt;!--
Your Kubernetes cluster includes [dual-stack](/docs/concepts/services-networking/dual-stack/)
networking, which means that cluster networking lets you use either address family.
In a cluster, the control plane can assign both an IPv4 address and an IPv6 address to a single
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; or a &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;.
--&gt;
&lt;p&gt;你的集群包含&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dual-stack/"&gt;双协议栈&lt;/a&gt;组网支持，
这意味着集群网络允许你在两种地址族间任选其一。在集群中，控制面可以为同一个
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 或者
&lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;
同时赋予 IPv4 和 IPv6 地址。&lt;/p&gt;</description></item><item><title>拓扑感知路由</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/topology-aware-routing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/topology-aware-routing/</guid><description>&lt;!--
reviewers:
- robscott
title: Topology Aware Routing
content_type: concept
weight: 100
description: &gt;-
 _Topology Aware Routing_ provides a mechanism to help keep network traffic within the zone
 where it originated. Preferring same-zone traffic between Pods in your cluster can help
 with reliability, performance (network latency and throughput), or cost.
--&gt;
&lt;!-- overview --&gt;
&lt;div class="feature-state-notice feature-beta"&gt;
&lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
&lt;code&gt;Kubernetes v1.23 [beta]&lt;/code&gt;
&lt;/div&gt;
&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
Prior to Kubernetes 1.27, this feature was known as _Topology Aware Hints_.
--&gt;
&lt;p&gt;在 Kubernetes 1.27 之前，此特性称为&lt;strong&gt;拓扑感知提示（Topology Aware Hint）&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>推荐使用的标签</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/common-labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/common-labels/</guid><description>&lt;!--
---
title: Recommended Labels
content_type: concept
weight: 100
---
--&gt;
&lt;!-- overview --&gt;
&lt;!--
You can visualize and manage Kubernetes objects with more tools than kubectl and
the dashboard. A common set of labels allows tools to work interoperably, describing
objects in a common manner that all tools can understand.
--&gt;
&lt;p&gt;除了 kubectl 和 dashboard 之外，你还可以使用其他工具来可视化和管理 Kubernetes 对象。
一组通用的标签可以让多个工具之间相互操作，用所有工具都能理解的通用方式描述对象。&lt;/p&gt;
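作为示意，下面的元数据片段展示了这组推荐标签的典型用法（取值均为假设的示例）：

```yaml
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxyz
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: Helm
```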
&lt;!--
In addition to supporting tooling, the recommended labels describe applications
in a way that can be queried.
--&gt;
&lt;p&gt;除了支持工具外，推荐的标签还以一种可以查询的方式描述了应用程序。&lt;/p&gt;</description></item><item><title>为指标生成参考文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/metrics-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/metrics-reference/</guid><description>&lt;!--
title: Generating reference documentation for metrics
content_type: task
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page demonstrates the generation of metrics reference documentation.
--&gt;
&lt;p&gt;本页演示如何生成指标参考文档。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;

	&lt;!--
### Requirements:

- You need a machine that is running Linux or macOS.

- You need to have these tools installed:

 - [Python](https://www.python.org/downloads/) v3.7.x+
 - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
 - [Golang](https://go.dev/dl/) version 1.13+
 - [Pip](https://pypi.org/project/pip/) used to install PyYAML
 - [PyYAML](https://pyyaml.org/) v5.1.2
 - [make](https://www.gnu.org/software/make/)
 - [gcc compiler/linker](https://gcc.gnu.org/)
 - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference)
--&gt;
&lt;h3 id="requirements"&gt;需求&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;你需要一台 Linux 或 macOS 机器。&lt;/p&gt;</description></item><item><title>针对 Pod 和容器的 Linux 内核安全约束</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/linux-kernel-security-constraints/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/linux-kernel-security-constraints/</guid><description>&lt;!--
title: Linux kernel security constraints for Pods and containers
description: &gt;
 Overview of Linux kernel security modules and constraints that you can use to
 harden your Pods and containers.
content_type: concept
weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes some of the security features that are built into the Linux
kernel that you can use in your Kubernetes workloads. To learn how to apply
these features to your Pods and containers, refer to
[Configure a SecurityContext for a Pod or Container](/docs/tasks/configure-pod-container/security-context/).
You should already be familiar with Linux and with the basics of Kubernetes
workloads.
--&gt;
&lt;p&gt;本页描述了一些 Linux 内核中内置的、你可以在 Kubernetes 工作负载中使用的安全特性。
要了解如何将这些特性应用到你的 Pod 和容器，
请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/security-context/"&gt;为 Pod 或容器配置 SecurityContext&lt;/a&gt;。
你须熟悉 Linux 和 Kubernetes 工作负载的基础知识。&lt;/p&gt;</description></item><item><title>API 发起的驱逐</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/api-eviction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/api-eviction/</guid><description>&lt;!--
title: API-initiated Eviction
content_type: concept
weight: 110
--&gt;
&lt;p&gt;API 发起的驱逐是一个先调用
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubernetes-api/v1.35/#create-eviction-pod-v1-core"&gt;Eviction API&lt;/a&gt;
创建 &lt;code&gt;Eviction&lt;/code&gt; 对象，再由该对象体面地中止 Pod 的过程。&lt;/p&gt;
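提交给 Eviction API 的对象形如下面的示意片段（Pod 名称与名字空间均为假设的示例）：

```yaml
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-pod          # 要驱逐的 Pod 的名称
  namespace: default    # 该 Pod 所在的名字空间
```

该对象通过目标 Pod 的 `eviction` 子资源提交给 API 服务器。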
&lt;!--
You can request eviction by calling the Eviction API directly, or programmatically
using a client of the &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;, like the `kubectl drain` command. This
creates an `Eviction` object, which causes the API server to terminate the Pod.

API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).

Using the API to create an Eviction object for a Pod is like performing a
policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
on the Pod.
--&gt;
&lt;p&gt;你可以通过直接调用 Eviction API 发起驱逐，也可以通过编程的方式使用
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt;的客户端来发起驱逐，
比如 &lt;code&gt;kubectl drain&lt;/code&gt; 命令。
此操作创建一个 &lt;code&gt;Eviction&lt;/code&gt; 对象，该对象再驱动 API 服务器终止选定的 Pod。&lt;/p&gt;</description></item><item><title>API 优先级和公平性</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/flow-control/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/flow-control/</guid><description>&lt;!--
title: API Priority and Fairness
content_type: concept
min-kubernetes-server-version: v1.18
weight: 110
--&gt;
&lt;!-- overview --&gt;
&lt;div class="feature-state-notice feature-stable"&gt;
&lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
&lt;code&gt;Kubernetes v1.29 [stable]&lt;/code&gt;
&lt;/div&gt;
&lt;!--
Controlling the behavior of the Kubernetes API server in an overload situation
is a key task for cluster administrators. The &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='kube-apiserver'&gt;kube-apiserver&lt;/a&gt; has some controls available
(i.e. the `--max-requests-inflight` and `--max-mutating-requests-inflight`
command-line flags) to limit the amount of outstanding work that will be
accepted, preventing a flood of inbound requests from overloading and
potentially crashing the API server, but these flags are not enough to ensure
that the most important requests get through in a period of high traffic.
--&gt;
&lt;p&gt;对于集群管理员来说，控制 Kubernetes API 服务器在过载情况下的行为是一项关键任务。
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='kube-apiserver'&gt;kube-apiserver&lt;/a&gt;
有一些控件（例如：命令行标志 &lt;code&gt;--max-requests-inflight&lt;/code&gt; 和 &lt;code&gt;--max-mutating-requests-inflight&lt;/code&gt;），
可以限制所接受的待处理请求的数量，防止入站请求的洪峰使 API 服务器过载甚至崩溃。
但是仅靠这些标志还不足以保证在高流量期间最重要的请求仍能被服务器接受。&lt;/p&gt;</description></item><item><title>kubelet 认证/鉴权</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/</guid><description>&lt;!--
reviewers:
- liggitt
title: Kubelet authentication/authorization
weight: 110
--&gt;
&lt;!--
## Overview
--&gt;
&lt;h2 id="overview"&gt;概述&lt;/h2&gt;
&lt;!--
A kubelet's HTTPS endpoint exposes APIs which give access to data of varying sensitivity,
and allow you to perform operations with varying levels of power on the node and within containers.
--&gt;
&lt;p&gt;kubelet 的 HTTPS 端点公开了一些 API，这些 API 可以访问敏感度不同的数据，
并允许你在节点上和容器内以不同级别的权限执行操作。&lt;/p&gt;
&lt;!--
This document describes how to authenticate and authorize access to the kubelet's HTTPS endpoint.
--&gt;
&lt;p&gt;本文档介绍了如何对 kubelet 的 HTTPS 端点的访问进行认证和鉴权。&lt;/p&gt;</description></item><item><title>Kubernetes 云管理控制器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/running-cloud-controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/running-cloud-controller/</guid><description>&lt;!--
reviewers:
- luxas
- thockin
- wlan0
title: Kubernetes Cloud Controller Manager
content_type: concept
weight: 110
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
Since cloud providers develop and release at a different pace compared to the
Kubernetes project, abstracting the provider-specific code to the
`&lt;a class='glossary-tooltip' title='将 Kubernetes 与第三方云提供商进行集成的控制平面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/' target='_blank' aria-label='cloud-controller-manager'&gt;cloud-controller-manager&lt;/a&gt;`
binary allows cloud vendors to evolve independently from the core Kubernetes code.
--&gt;
&lt;p&gt;由于云驱动的开发和发布的步调与 Kubernetes 项目不同，将服务提供商专用代码抽象到
&lt;code&gt;&lt;a class='glossary-tooltip' title='将 Kubernetes 与第三方云提供商进行集成的控制平面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/' target='_blank' aria-label='cloud-controller-manager'&gt;cloud-controller-manager&lt;/a&gt;&lt;/code&gt;
二进制中有助于云服务厂商在 Kubernetes 核心代码之外独立进行开发。&lt;/p&gt;</description></item><item><title>Linux 节点的交换（Swap）行为</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/swap-behavior/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/node/swap-behavior/</guid><description>&lt;!--
content_type: "reference"
title: Linux Node Swap Behaviors
weight: 110
--&gt;
&lt;!--
To allow Kubernetes workloads to use swap, on a Linux node,
you must disable the kubelet's default behavior of failing when swap is detected,
and specify memory-swap behavior as `LimitedSwap`:
--&gt;
&lt;p&gt;要允许 Kubernetes 工作负载在 Linux 节点上使用交换分区，
你必须禁用 kubelet 在检测到交换分区时失败的默认行为，
并指定内存交换行为为 &lt;code&gt;LimitedSwap&lt;/code&gt;：&lt;/p&gt;
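&lt;p&gt;一个示意性的 kubelet 配置（KubeletConfiguration）片段如下（字段名基于 kubelet 配置 API 的当前版本，具体请以官方参考文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;# kubelet 配置片段示意：
# 禁用"检测到交换分区即失败"的默认行为，并启用 LimitedSwap
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap
&lt;/code&gt;&lt;/pre&gt;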
&lt;!--
The available choices for swap behavior are:
--&gt;
&lt;p&gt;可用的交换行为选项有：&lt;/p&gt;
&lt;!--
`NoSwap`
: (default) Workloads running as Pods on this node do not and cannot use swap. However, processes
 outside of Kubernetes' scope, such as system daemons (including the kubelet itself!) **can** utilize swap.
 This behavior is beneficial for protecting the node from system-level memory spikes,
 but it does not safeguard the workloads themselves from such spikes.
--&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;NoSwap&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;（默认）在此节点上作为 Pod 运行的工作负载不会也不能使用交换分区。
然而，系统守护进程（包括 kubelet 本身！）等这类 Kubernetes 范围之外的进程&lt;strong&gt;可以&lt;/strong&gt;利用交换分区。
这种行为有助于保护节点免受系统级别的内存峰值影响，
但这不能保护工作负载本身不受此类峰值的影响。&lt;/dd&gt;
&lt;/dl&gt;
&lt;!--
`LimitedSwap`
: Kubernetes workloads can utilize swap memory. The amount of swap available to a Pod is determined automatically.

To learn more, read [swap memory management](/docs/concepts/cluster-administration/swap-memory-management/).
--&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;LimitedSwap&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Kubernetes 工作负载可以使用交换内存，Pod 可用的交换量是自动确定的。&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;要了解更多，请阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/swap-memory-management/"&gt;交换内存管理&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>Windows 存储</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/windows-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/windows-storage/</guid><description>&lt;!--
reviewers:
- jingxu97
- mauriciopoppe
- jayunit100
- jsturtevant
- marosset
- aravindhp
title: Windows Storage
content_type: concept
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides a storage overview specific to the Windows operating system.
--&gt;
&lt;p&gt;此页面提供特定于 Windows 操作系统的存储概述。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Persistent storage {#storage}

Windows has a layered filesystem driver to mount container layers and create a copy
filesystem based on NTFS. All file paths in the container are resolved only within
the context of that container.
--&gt;
&lt;h2 id="storage"&gt;持久存储&lt;/h2&gt;
&lt;p&gt;Windows 有一个分层文件系统驱动程序用来挂载容器层和创建基于 NTFS 的文件系统拷贝。
容器中的所有文件路径仅在该容器的上下文中解析。&lt;/p&gt;</description></item><item><title>Windows 网络</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/windows-networking/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/windows-networking/</guid><description>&lt;!--
reviewers:
- aravindhp
- jayunit100
- jsturtevant
- marosset
title: Networking on Windows
content_type: concept
weight: 110
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes supports running nodes on either Linux or Windows. You can mix both kinds of node within a single cluster.
This page provides an overview to networking specific to the Windows operating system.
--&gt;
&lt;p&gt;Kubernetes 支持运行 Linux 或 Windows 节点。
你可以在同一集群中混合使用这两种节点。
本页提供了特定于 Windows 操作系统的网络概述。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Container networking on Windows {#networking}

Networking for Windows containers is exposed through
[CNI plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
Windows containers function similarly to virtual machines in regards to
networking. Each container has a virtual network adapter (vNIC) which is connected
to a Hyper-V virtual switch (vSwitch). The Host Networking Service (HNS) and the
Host Compute Service (HCS) work together to create containers and attach container
vNICs to networks. HCS is responsible for the management of containers whereas HNS
is responsible for the management of networking resources such as:

* Virtual networks (including creation of vSwitches)
* Endpoints / vNICs
* Namespaces
* Policies including packet encapsulations, load-balancing rules, ACLs, and NAT rules.
--&gt;
&lt;h2 id="networking"&gt;Windows 容器网络&lt;/h2&gt;
&lt;p&gt;Windows 容器网络通过 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/"&gt;CNI 插件&lt;/a&gt;暴露。
在联网方面，Windows 容器的工作方式与虚拟机类似。
每个容器都有一个连接到 Hyper-V 虚拟交换机（vSwitch）的虚拟网络适配器（vNIC）。
主机网络服务（Host Networking Service，HNS）和主机计算服务（Host Compute Service，HCS）
协同创建容器并将容器 vNIC 挂接到网络。
HCS 负责管理容器，而 HNS 负责管理以下网络资源：&lt;/p&gt;</description></item><item><title>为 Pod 或容器配置安全上下文</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/security-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/security-context/</guid><description>&lt;!--
reviewers:
- erictune
- mikedanese
- thockin
title: Configure a Security Context for a Pod or Container
content_type: task
weight: 110
--&gt;
&lt;!-- overview --&gt;
&lt;!--
A security context defines privilege and access control settings for
a Pod or Container. Security context settings include, but are not limited to:

* Discretionary Access Control: Permission to access an object, like a file, is based on
 [user ID (UID) and group ID (GID)](https://wiki.archlinux.org/index.php/users_and_groups).

* [Security Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux):
 Objects are assigned security labels.

* Running as privileged or unprivileged.

* [Linux Capabilities](https://linux-audit.com/linux-capabilities-hardening-linux-binaries-by-removing-setuid/):
 Give a process some privileges, but not all the privileges of the root user.
--&gt;
&lt;p&gt;安全上下文（Security Context）定义 Pod 或 Container 的特权与访问控制设置。
安全上下文包括但不限于：&lt;/p&gt;</description></item><item><title>为应用程序设置干扰预算（Disruption Budget）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/configure-pdb/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/configure-pdb/</guid><description>&lt;!--
title: Specifying a Disruption Budget for your Application
content_type: task
weight: 110
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
This page shows how to limit the number of concurrent disruptions
that your application experiences, allowing for higher availability
while permitting the cluster administrator to manage the cluster's
nodes.
--&gt;
&lt;p&gt;本文展示如何限制应用程序的并发干扰数量，在允许集群管理员管理集群节点的同时保证高可用。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;


&lt;p&gt;你的 Kubernetes 服务器版本必须不低于 v1.21。
要获知版本信息，请输入 &lt;code&gt;kubectl version&lt;/code&gt;。&lt;/p&gt;

&lt;!--
- You are the owner of an application running on a Kubernetes cluster that requires
 high availability.
- You should know how to deploy [Replicated Stateless Applications](/docs/tasks/run-application/run-stateless-application-deployment/)
 and/or [Replicated Stateful Applications](/docs/tasks/run-application/run-replicated-stateful-application/).
- You should have read about [Pod Disruptions](/docs/concepts/workloads/pods/disruptions/).
- You should confirm with your cluster owner or service provider that they respect
 Pod Disruption Budgets.
--&gt;
&lt;ul&gt;
&lt;li&gt;你是 Kubernetes 集群中某应用的所有者，该应用有高可用要求。&lt;/li&gt;
&lt;li&gt;你应了解如何部署&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/"&gt;无状态应用&lt;/a&gt;
和/或&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/"&gt;有状态应用&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;你应当已经阅读过关于 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/disruptions/"&gt;Pod 干扰&lt;/a&gt;的文档。&lt;/li&gt;
&lt;li&gt;你应当与集群所有者或服务提供者确认其遵从 Pod 干扰预算（Pod Disruption Budget）的规则。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## Protecting an Application with a PodDisruptionBudget

1. Identify what application you want to protect with a PodDisruptionBudget (PDB).
1. Think about how your application reacts to disruptions.
1. Create a PDB definition as a YAML file.
1. Create the PDB object from the YAML file.
--&gt;
&lt;h2 id="protecting-app-with-pdb"&gt;用 PodDisruptionBudget 来保护应用&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;确定想要使用 PodDisruptionBudget（PDB）来保护的应用。&lt;/li&gt;
&lt;li&gt;考虑应用对干扰的反应。&lt;/li&gt;
&lt;li&gt;以 YAML 文件形式定义 PDB。&lt;/li&gt;
&lt;li&gt;通过 YAML 文件创建 PDB 对象。&lt;/li&gt;
&lt;/ol&gt;
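&lt;p&gt;步骤 3 中的 PDB 定义可以写成如下示意（名称 &lt;code&gt;zk-pdb&lt;/code&gt; 与标签 &lt;code&gt;app: zookeeper&lt;/code&gt; 仅为示例假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  # 任何时刻至少保持 2 个被选定的 Pod 可用
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;随后可以用 &lt;code&gt;kubectl apply -f&lt;/code&gt; 从该 YAML 文件创建 PDB 对象（步骤 4）。&lt;/p&gt;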
&lt;!-- discussion --&gt;
&lt;!--
## Identify an Application to Protect

The most common use case when you want to protect an application
specified by one of the built-in Kubernetes controllers:
--&gt;
&lt;h2 id="identify-app-to-protect"&gt;确定要保护的应用&lt;/h2&gt;
&lt;p&gt;最常见的使用场景是保护通过某个 Kubernetes 内置控制器指定的应用：&lt;/p&gt;</description></item><item><title>应用安全检查清单</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/application-security-checklist/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/application-security-checklist/</guid><description>&lt;!--
title: Application Security Checklist
description: &gt;
 Baseline guidelines around ensuring application security on Kubernetes, aimed at application developers
content_type: concept
weight: 110
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This checklist aims to provide basic guidelines on securing applications
running in Kubernetes from a developer's perspective.
This list is not meant to be exhaustive and is intended to evolve over time.
--&gt;
&lt;p&gt;本检查清单旨在为开发者提供在 Kubernetes 上安全地运行应用的基本指南。
此列表并不打算详尽无遗，会随着时间的推移而不断演变。&lt;/p&gt;
&lt;!-- The following is taken from the existing checklist created for Kubernetes admins. https://kubernetes.io/docs/concepts/security/security-checklist/
--&gt;
&lt;!--
On how to read and use this document:

- The order of topics does not reflect an order of priority.
- Some checklist items are detailed in the paragraph below the list of each section.
- This checklist assumes that a `developer` is a Kubernetes cluster user who
 interacts with namespaced scope objects.
--&gt;
&lt;p&gt;关于如何阅读和使用本文档：&lt;/p&gt;</description></item><item><title>Service ClusterIP 分配</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/cluster-ip-allocation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/cluster-ip-allocation/</guid><description>&lt;!--
reviewers:
- sftim
- thockin
title: Service ClusterIP allocation
content_type: concept
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In Kubernetes, [Services](/docs/concepts/services-networking/service/) are an abstract way to expose
an application running on a set of Pods. Services
can have a cluster-scoped virtual IP address (using a Service of `type: ClusterIP`).
Clients can connect using that virtual IP address, and Kubernetes then load-balances traffic to that
Service across the different backing Pods.
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt; 是一种抽象的方式，
用于公开在一组 Pod 上运行的应用。
Service 可以具有集群作用域的虚拟 IP 地址（使用 &lt;code&gt;type: ClusterIP&lt;/code&gt; 的 Service）。
客户端可以使用该虚拟 IP 地址进行连接，Kubernetes 通过不同的后台 Pod 对该 Service 的流量进行负载均衡。&lt;/p&gt;</description></item><item><title>TLS 启动引导</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/</guid><description>&lt;!--
reviewers:
- mikedanese
- liggitt
- smarterclayton
- awly
title: TLS bootstrapping
content_type: concept
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need
to communicate with Kubernetes control plane components, specifically kube-apiserver.
In order to ensure that communication is kept private, not interfered with, and ensure that
each component of the cluster is talking to another trusted component, we strongly
recommend using client TLS certificates on nodes.
--&gt;
&lt;p&gt;在一个 Kubernetes 集群中，工作节点上的组件（kubelet 和 kube-proxy）需要与
Kubernetes 控制平面组件通信，尤其是 kube-apiserver。
为了确保通信本身是私密的、不被干扰，并且确保集群的每个组件都在与另一个可信的组件通信，
我们强烈建议使用节点上的客户端 TLS 证书。&lt;/p&gt;</description></item><item><title>安装扩展（Addon）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/addons/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/addons/</guid><description>&lt;!--
title: Installing Addons
content_type: concept
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;说明：&lt;/strong&gt;&amp;puncsp;本部分链接到提供 Kubernetes 所需功能的第三方项目。Kubernetes 项目作者不负责这些项目。此页面遵循&lt;a href="https://github.com/cncf/foundation/blob/main/policies-guidance/website-guidelines.md" target="_blank"&gt;CNCF 网站指南&lt;/a&gt;，按字母顺序列出项目。要将项目添加到此列表中，请在提交更改之前阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/#third-party-content"&gt;内容指南&lt;/a&gt;。&lt;/div&gt;
&lt;!--
Add-ons extend the functionality of Kubernetes.

This page lists some of the available add-ons and links to their respective
installation instructions. The list does not try to be exhaustive.
--&gt;
&lt;p&gt;Add-on 扩展了 Kubernetes 的功能。&lt;/p&gt;
&lt;p&gt;本文列举了一些可用的 add-on 以及到它们各自安装说明的链接。该列表并不试图详尽无遗。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Networking and Network Policy

* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated
 container networking and network security with Cisco ACI.
* [Antrea](https://antrea.io/) operates at Layer 3/4 to provide networking and
 security services for Kubernetes, leveraging Open vSwitch as the networking
 data plane. Antrea is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/antrea/).
* [Calico](https://www.tigera.io/project-calico/) is a networking and network
 policy provider. Calico supports a flexible set of networking options so you
 can choose the most efficient option for your situation, including non-overlay
 and overlay networks, with or without BGP. Calico uses the same engine to
 enforce network policy for hosts, pods, and (if using Istio &amp; Envoy)
 applications at the service mesh layer.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel)
 unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a networking, observability,
 and security solution with an eBPF-based data plane. Cilium provides a
 simple flat Layer 3 network with the ability to span multiple clusters
 in either a native routing or overlay/encapsulation mode, and can enforce
 network policies on L3-L7 using an identity-based security model that is
 decoupled from network addressing. Cilium can act as a replacement for
 kube-proxy; it also offers additional, opt-in observability and security features.
 Cilium is a [CNCF project at the Graduated level](https://www.cncf.io/projects/cilium/).
--&gt;
&lt;h2 id="networking-and-network-policy"&gt;联网和网络策略&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.github.com/noironetworks/aci-containers"&gt;ACI&lt;/a&gt; 通过 Cisco ACI 提供集成的容器网络和安全网络。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://antrea.io/"&gt;Antrea&lt;/a&gt; 在第 3/4 层执行操作，为 Kubernetes
提供网络连接和安全服务。Antrea 利用 Open vSwitch 作为网络的数据面。
Antrea 是一个&lt;a href="https://www.cncf.io/projects/antrea/"&gt;沙箱级的 CNCF 项目&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tigera.io/project-calico/"&gt;Calico&lt;/a&gt; 是一个联网和网络策略供应商。
Calico 支持一套灵活的网络选项，因此你可以根据自己的情况选择最有效的选项，包括非覆盖和覆盖网络，带或不带 BGP。
Calico 使用相同的引擎为主机、Pod 和（如果使用 Istio 和 Envoy）应用程序在服务网格层执行网络策略。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel"&gt;Canal&lt;/a&gt;
结合 Flannel 和 Calico，提供联网和网络策略。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cilium/cilium"&gt;Cilium&lt;/a&gt; 是一种网络、可观察性和安全解决方案，具有基于 eBPF 的数据平面。
Cilium 提供了简单的 3 层扁平网络，
能够以原生路由（routing）和覆盖/封装（overlay/encapsulation）模式跨越多个集群，
并且可以使用与网络寻址分离的基于身份的安全模型在 L3 至 L7 上实施网络策略。
Cilium 可以作为 kube-proxy 的替代品；它还提供额外的、可选的可观察性和安全功能。
Cilium 是一个&lt;a href="https://www.cncf.io/projects/cilium/"&gt;毕业级别的 CNCF 项目&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
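&lt;p&gt;上述多款插件都实现了 Kubernetes 的 NetworkPolicy API。下面是一个最小的 NetworkPolicy 示意（其中的名称与标签仅为示例假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  # 此策略作用于带有 app: db 标签的 Pod
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  # 仅允许来自带有 app: frontend 标签的 Pod 的入站流量
  - from:
    - podSelector:
        matchLabels:
          app: frontend
&lt;/code&gt;&lt;/pre&gt;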
&lt;!--
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly
 connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
 CNI-Genie is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/cni-genie/).
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP,
 overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich
 policy framework. Contiv project is fully [open sourced](https://github.com/contiv).
 The [installer](https://github.com/contiv/install) provides both kubeadm and
 non-kubeadm based installation options.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/),
 based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud
 network virtualization and policy management platform. Contrail and Tungsten
 Fabric are integrated with orchestration systems such as Kubernetes, OpenShift,
 OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods
 and bare metal workloads.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/cni-genie/CNI-Genie"&gt;CNI-Genie&lt;/a&gt; 使 Kubernetes 无缝连接到
Calico、Canal、Flannel 或 Weave 等其中一种 CNI 插件。
CNI-Genie 是一个&lt;a href="https://www.cncf.io/projects/cni-genie/"&gt;沙箱级的 CNCF 项目&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://contivpp.io/"&gt;Contiv&lt;/a&gt; 为各种用例和丰富的策略框架提供可配置的网络
（带 BGP 的原生 L3、带 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI）。
Contiv 项目完全&lt;a href="https://github.com/contiv"&gt;开源&lt;/a&gt;。
其&lt;a href="https://github.com/contiv/install"&gt;安装程序&lt;/a&gt; 提供了基于 kubeadm 和非 kubeadm 的安装选项。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/"&gt;Contrail&lt;/a&gt; 基于
&lt;a href="https://tungsten.io"&gt;Tungsten Fabric&lt;/a&gt;，是一个开源的多云网络虚拟化和策略管理平台。
Contrail 和 Tungsten Fabric 与业务流程系统（例如 Kubernetes、OpenShift、OpenStack 和 Mesos）集成在一起，
为虚拟机、容器或 Pod 以及裸机工作负载提供了隔离模式。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is
 an overlay network provider that can be used with Kubernetes.
* [Gateway API](/docs/concepts/services-networking/gateway/) is an open source project managed by
 the [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) community and
 provides an expressive, extensible, and role-oriented API for modeling service networking.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network
 interfaces in a Kubernetes pod.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi plugin for
 multiple network support in Kubernetes to support all CNI plugins
 (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and
 VPP based workloads in Kubernetes.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/flannel-io/flannel#deploying-flannel-manually"&gt;Flannel&lt;/a&gt;
是一个可以用于 Kubernetes 的 overlay 网络提供者。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/gateway/"&gt;Gateway API&lt;/a&gt; 是一个由
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-network"&gt;SIG Network&lt;/a&gt; 社区管理的开源项目，
为服务网络建模提供一种富有表达力、可扩展和面向角色的 API。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ZTE/Knitter/"&gt;Knitter&lt;/a&gt; 是在一个 Kubernetes Pod 中支持多个网络接口的插件。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/k8snetworkplumbingwg/multus-cni"&gt;Multus&lt;/a&gt; 是一个多插件，
可在 Kubernetes 中提供多种网络支持，以支持所有 CNI 插件（例如 Calico、Cilium、Contiv、Flannel），
而且包含了在 Kubernetes 中基于 SRIOV、DPDK、OVS-DPDK 和 VPP 的工作负载。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking
 provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/),
 a virtual networking implementation that came out of the Open vSwitch (OVS) project.
 OVN-Kubernetes provides an overlay based networking implementation for Kubernetes,
 including an OVS based implementation of load balancing and network policy.
* [Nodus](https://github.com/akraino-edge-stack/icn-nodus) is an OVN based CNI
 controller plugin to provide cloud native based Service function chaining(SFC).
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ovn-org/ovn-kubernetes/"&gt;OVN-Kubernetes&lt;/a&gt; 是一个 Kubernetes 网络驱动，
基于 &lt;a href="https://github.com/ovn-org/ovn/"&gt;OVN（Open Virtual Network）&lt;/a&gt;实现，是从 Open vSwitch (OVS)
项目衍生出来的虚拟网络实现。OVN-Kubernetes 为 Kubernetes 提供基于覆盖网络的网络实现，
包括基于 OVS 的负载均衡和网络策略实现。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/akraino-edge-stack/icn-nodus"&gt;Nodus&lt;/a&gt; 是一个基于 OVN 的 CNI 控制器插件，
提供基于云原生的服务功能链 (SFC)。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP)
 provides integration between VMware NSX-T and container orchestrators such as
 Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS
 platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)
 is an SDN platform that provides policy-based networking between Kubernetes
 Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod
 networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
* [Spiderpool](https://github.com/spidernet-io/spiderpool) is an underlay and RDMA
 networking solution for Kubernetes. Spiderpool is supported on bare metal, virtual machines,
 and public cloud environments.
* [Terway](https://github.com/AliyunContainerService/terway/) is a suite of CNI plugins
 based on AlibabaCloud's VPC and ECS network products. It provides native VPC networking
 and network policies in AlibabaCloud environments.
* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes)
 provides networking and network policy, will carry on working on both sides
 of a network partition, and does not require an external database.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html"&gt;NSX-T&lt;/a&gt; 容器插件（NCP）
提供了 VMware NSX-T 与容器协调器（例如 Kubernetes）之间的集成，以及 NSX-T 与基于容器的
CaaS / PaaS 平台（例如 Pivotal Container Service（PKS）和 OpenShift）之间的集成。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst"&gt;Nuage&lt;/a&gt;
是一个 SDN 平台，可在 Kubernetes Pods 和非 Kubernetes 环境之间提供基于策略的联网，并具有可视化和安全监控。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/romana"&gt;Romana&lt;/a&gt; 是一个 Pod 网络的第三层解决方案，并支持
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/network-policies/"&gt;NetworkPolicy&lt;/a&gt; API。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/spidernet-io/spiderpool"&gt;Spiderpool&lt;/a&gt; 为 Kubernetes
提供了下层网络和 RDMA 高速网络解决方案，兼容裸金属、虚拟机和公有云等运行环境。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/AliyunContainerService/terway/"&gt;Terway&lt;/a&gt;
是一套基于阿里云 VPC 和 ECS 网络产品的 CNI 插件，能够在阿里云环境中提供原生的 VPC 网络和网络策略支持。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rajch/weave#using-weave-on-kubernetes"&gt;Weave Net&lt;/a&gt;
提供联网和网络策略，在网络分区的两侧都能继续工作，并且不需要外部数据库。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Service Discovery

* [CoreDNS](https://coredns.io) is a flexible, extensible DNS server which can
 be [installed](https://github.com/coredns/helm)
 as the in-cluster DNS for pods.
--&gt;
&lt;h2 id="service-discovery"&gt;服务发现&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://coredns.io"&gt;CoreDNS&lt;/a&gt; 是一种灵活的，可扩展的 DNS 服务器，可以
&lt;a href="https://github.com/coredns/helm"&gt;安装&lt;/a&gt;为集群内的 Pod 提供 DNS 服务。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Visualization &amp;amp; Control

* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard)
 is a dashboard web interface for Kubernetes.
* [Headlamp](https://headlamp.dev/) is an extensible Kubernetes web UI that can be
 deployed in-cluster or used as a desktop application.
--&gt;
&lt;h2 id="visualization-and-control"&gt;可视化管理&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/dashboard#kubernetes-dashboard"&gt;Dashboard&lt;/a&gt; 是一个 Kubernetes 的 Web 控制台界面。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://headlamp.dev/"&gt;Headlamp&lt;/a&gt; 是一个&lt;strong&gt;可扩展的 Kubernetes 用户界面（UI）&lt;/strong&gt;，
既可以&lt;strong&gt;以集群内方式部署&lt;/strong&gt;，也可以&lt;strong&gt;作为桌面应用程序使用&lt;/strong&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Infrastructure

* [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) is an add-on
 to run virtual machines on Kubernetes. Usually run on bare-metal clusters.
* The
 [node problem detector](https://github.com/kubernetes/node-problem-detector)
 runs on Linux nodes and reports system issues as either
 [Events](/docs/reference/kubernetes-api/cluster-resources/event-v1/) or
 [Node conditions](/docs/concepts/architecture/nodes/#condition).
--&gt;
&lt;h2 id="infrastructure"&gt;基础设施&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubevirt.io/user-guide/#/installation/installation"&gt;KubeVirt&lt;/a&gt; 是可以让 Kubernetes
运行虚拟机的 add-on。通常运行在裸机集群上。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/node-problem-detector"&gt;节点问题检测器&lt;/a&gt; 在 Linux 节点上运行，
并将系统问题报告为&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/"&gt;事件&lt;/a&gt;
或&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/#condition"&gt;节点状况&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Instrumentation

* [kube-state-metrics](/docs/concepts/cluster-administration/kube-state-metrics)
--&gt;
&lt;h2 id="instrumentation"&gt;插桩&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/kube-state-metrics"&gt;kube-state-metrics&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Legacy Add-ons

There are several other add-ons documented in the deprecated
[cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.

Well-maintained ones should be linked to here. PRs welcome!
--&gt;
&lt;h2 id="legacy-addons"&gt;遗留 Add-on&lt;/h2&gt;
&lt;p&gt;还有一些其它 add-on 归档在已废弃的 &lt;a href="https://git.k8s.io/kubernetes/cluster/addons"&gt;cluster/addons&lt;/a&gt; 路径中。&lt;/p&gt;</description></item><item><title>查看站点分析</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/analytics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/analytics/</guid><description>&lt;!-- 
title: Viewing Site Analytics
content_type: concept
weight: 120
card:
 name: contribute
 weight: 100
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page contains information about the kubernetes.io analytics dashboard.
--&gt;
&lt;p&gt;此页面包含有关 kubernetes.io 分析仪表板的信息。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!-- 
[View the dashboard](https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/).

This dashboard is built using [Google Looker Studio](https://lookerstudio.google.com/overview) and shows information collected on kubernetes.io using Google Analytics 4 since August 2022. 
--&gt;
&lt;p&gt;&lt;a href="https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/"&gt;查看仪表板&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;此仪表板使用 &lt;a href="https://lookerstudio.google.com/overview"&gt;Google Looker Studio&lt;/a&gt; 构建，并显示自 2022 年 8 月以来使用 Google Analytics 4 在 kubernetes.io 上收集的信息。&lt;/p&gt;</description></item><item><title>从 Pod 中访问 Kubernetes API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/access-api-from-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/access-api-from-pod/</guid><description>&lt;!--
title: Accessing the Kubernetes API from a Pod
content_type: task
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This guide demonstrates how to access the Kubernetes API from within a pod.
--&gt;
&lt;p&gt;本指南演示了如何从 Pod 中访问 Kubernetes API。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议在至少有两个节点的集群上运行本教程，且这些节点不充当控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>定制 Hugo 短代码</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/hugo-shortcodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/hugo-shortcodes/</guid><description>&lt;!--
title: Custom Hugo Shortcodes
content_type: concept
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains the custom Hugo shortcodes that can be used in Kubernetes Markdown documentation.
--&gt;
&lt;p&gt;本页面介绍可在 Kubernetes Markdown 文档中使用的自定义 Hugo 短代码。&lt;/p&gt;
&lt;!--
Read more about shortcodes in the [Hugo documentation](https://gohugo.io/content-management/shortcodes).
--&gt;
&lt;p&gt;关于短代码的更多信息可参见
&lt;a href="https://gohugo.io/content-management/shortcodes"&gt;Hugo 文档&lt;/a&gt;。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Feature state

In a Markdown page (`.md` file) on this site, you can add a shortcode to
display version and state of the documented feature.
--&gt;
&lt;h2 id="feature-state"&gt;功能状态&lt;/h2&gt;
&lt;p&gt;在本站的 Markdown 页面（&lt;code&gt;.md&lt;/code&gt; 文件）中，你可以加入短代码来展示所描述的功能特性的版本和状态。&lt;/p&gt;</description></item><item><title>服务内部流量策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service-traffic-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service-traffic-policy/</guid><description>&lt;!-- 
---
reviewers:
- maplain
title: Service Internal Traffic Policy
content_type: concept
weight: 120
description: &gt;-
 If two Pods in your cluster want to communicate, and both Pods are actually running on
 the same node, use _Service Internal Traffic Policy_ to keep network traffic within that node.
 Avoiding a round trip via the cluster network can help with reliability, performance
 (network latency and throughput), or cost.
---
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- 
_Service Internal Traffic Policy_ enables internal traffic restrictions to only route
internal traffic to endpoints within the node the traffic originated from. The
"internal" traffic here refers to traffic originated from Pods in the current
cluster. This can help to reduce costs and improve performance.
--&gt;
&lt;p&gt;&lt;strong&gt;服务内部流量策略&lt;/strong&gt;开启了内部流量限制，将内部流量只路由到流量发起方所在节点内的服务端点。
这里的“内部”流量指当前集群中的 Pod 所发起的流量。
这种机制有助于节省开销，提升效率。&lt;/p&gt;</description></item><item><title>配置 kubelet 镜像凭据提供程序</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider/</guid><description>&lt;!-- 
title: Configure a kubelet image credential provider
reviewers:
- liggitt
- cheftako
content_type: task
min-kubernetes-server-version: v1.26
weight: 120
--&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- overview --&gt;
&lt;!-- 
Starting from Kubernetes v1.20, the kubelet can dynamically retrieve credentials for a container image registry
using exec plugins. The kubelet and the exec plugin communicate through stdio (stdin, stdout, and stderr) using
Kubernetes versioned APIs. These plugins allow the kubelet to request credentials for a container registry dynamically
as opposed to storing static credentials on disk. For example, the plugin may talk to a local metadata server to retrieve
short-lived credentials for an image that is being pulled by the kubelet.
--&gt;
&lt;p&gt;从 Kubernetes v1.20 开始，kubelet 可以使用 exec 插件动态获取针对某容器镜像仓库的凭据。
kubelet 使用 Kubernetes 版本化 API，通过标准输入输出（stdin、stdout 和 stderr）与
exec 插件通信。这些插件允许 kubelet 动态请求容器仓库的凭据，而不是将静态凭据存储在磁盘上。
例如，插件可能会与本地元数据服务器通信，以获得 kubelet 正在拉取的镜像的短期凭据。&lt;/p&gt;</description></item><item><title>同 Pod 内的容器使用共享卷通信</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</guid><description>&lt;!--
title: Communicate Between Containers in the Same Pod Using a Shared Volume
content_type: task
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use a Volume to communicate between two Containers running
in the same Pod. See also how to allow processes to communicate by
[sharing process namespace](/docs/tasks/configure-pod-container/share-process-namespace/)
between containers.
--&gt;
&lt;p&gt;本文旨在说明如何让一个 Pod 内的两个容器使用一个卷（Volume）进行通信。
另请参阅如何让容器间的进程通过
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/share-process-namespace/"&gt;共享进程名字空间&lt;/a&gt;的方式进行通信。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>为 Kubernetes 组件和工具生成参考文档</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/kubernetes-components/</guid><description>&lt;!--
title: Generating Reference Pages for Kubernetes Components and Tools
content_type: task
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to build the Kubernetes component and tool reference pages.
--&gt;
&lt;p&gt;本页面描述如何构建 Kubernetes 组件和工具的参考文档。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Start with the [Prerequisites section](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin)
in the Reference Documentation Quickstart guide.
--&gt;
&lt;p&gt;阅读参考文档快速入门指南中的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/quickstart/#before-you-begin"&gt;准备工作&lt;/a&gt;节。&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;!--
Follow the [Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
to generate the Kubernetes component and tool reference pages.
--&gt;
&lt;p&gt;按照&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/quickstart/"&gt;参考文档快速入门&lt;/a&gt;
指引，生成 Kubernetes 组件和工具的参考文档。&lt;/p&gt;</description></item><item><title>为 Pod 配置服务账号</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-service-account/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-service-account/</guid><description>&lt;!--
reviewers:
- enj
- liggitt
- thockin
title: Configure Service Accounts for Pods
content_type: task
weight: 120
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes offers two distinct ways for clients that run within your
cluster, or that otherwise have a relationship to your cluster's
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
to authenticate to the
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;.
--&gt;
&lt;p&gt;Kubernetes 提供两种完全不同的方式来为客户端提供支持，这些客户端可能运行在你的集群中，
也可能与你的集群的&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制面'&gt;控制面&lt;/a&gt;相关，
需要向 &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt;完成身份认证。&lt;/p&gt;</description></item><item><title>从私有仓库拉取镜像</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry/</guid><description>&lt;!--
title: Pull an Image from a Private Registry
content_type: task
weight: 130
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to create a Pod that uses a
&lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secret'&gt;Secret&lt;/a&gt; to pull an image
from a private container image registry or repository. There are many private
registries in use. This task uses [Docker Hub](https://www.docker.com/products/docker-hub)
as an example registry.
--&gt;
&lt;p&gt;本文介绍如何使用 &lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secret'&gt;Secret&lt;/a&gt;
从私有的容器镜像仓库（registry 或 repository）拉取镜像来创建 Pod。
现在有许多私有镜像仓库在使用中。本任务使用的示例镜像仓库是
&lt;a href="https://www.docker.com/products/docker-hub"&gt;Docker Hub&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>流控</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/debug-cluster/flow-control/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/debug-cluster/flow-control/</guid><description>&lt;!--
title: Flow control
weight: 130
--&gt;
&lt;!-- overview --&gt;
&lt;!--
API Priority and Fairness controls the behavior of the Kubernetes API server in
an overload situation. You can find more information about it in the
[API Priority and Fairness](/docs/concepts/cluster-administration/flow-control/)
documentation.
--&gt;
&lt;p&gt;API 优先级和公平性控制着 Kubernetes API 服务器在负载过高的情况下的行为。你可以在
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/flow-control/"&gt;API 优先级和公平性&lt;/a&gt;文档中找到更多信息。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## Diagnostics

Every HTTP response from an API server with the priority and fairness feature
enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
and the priority level to which it was assigned, respectively. The API objects'
names are not included in these headers (to avoid revealing details in case the
requesting user does not have permission to view them). When debugging, you
can use a command such as:
--&gt;
&lt;h2 id="diagnostics"&gt;问题诊断&lt;/h2&gt;
&lt;p&gt;对于启用了 APF 的 API 服务器，每个 HTTP 响应都有两个额外的 HTTP 头：
&lt;code&gt;X-Kubernetes-PF-FlowSchema-UID&lt;/code&gt; 和 &lt;code&gt;X-Kubernetes-PF-PriorityLevel-UID&lt;/code&gt;，
给出与请求匹配的 FlowSchema 和已分配的优先级级别。
这些 HTTP 头中不包含 API 对象的名称（以防请求用户无权查看这些对象），
因此在调试时，你可以使用类似如下的命令：&lt;/p&gt;</description></item><item><title>配置 API 对象配额</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/quota-api-object/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/quota-api-object/</guid><description>&lt;!--
title: Configure Quotas for API Objects
content_type: task
weight: 130
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure quotas for API objects, including
PersistentVolumeClaims and Services. A quota restricts the number of
objects, of a particular type, that can be created in a namespace.
You specify quotas in a
[ResourceQuota](/docs/reference/generated/kubernetes-api/v1.35/#resourcequota-v1-core)
object.
--&gt;
&lt;p&gt;本文讨论如何为 API 对象配置配额，包括 PersistentVolumeClaim 和 Service。
配额限制了可以在命名空间中创建的特定类型对象的数量。
你可以在 &lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubernetes-api/v1.35/#resourcequota-v1-core"&gt;ResourceQuota&lt;/a&gt;
对象中指定配额。&lt;/p&gt;
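&lt;p&gt;下面是一个最小的 ResourceQuota 示意清单（名称与数值均为示例），限制一个命名空间中可创建的 PersistentVolumeClaim 与 Service 数量：&lt;/p&gt;

```yaml
# 示意：基于对象个数的配额
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota-demo        # 示例名称
spec:
  hard:
    persistentvolumeclaims: "2"  # 最多 2 个 PVC
    services.loadbalancers: "2"  # 最多 2 个 LoadBalancer 类型的 Service
    services.nodeports: "0"      # 禁止创建 NodePort 类型的 Service
```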
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>为集群配置 DNS</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/configure-dns-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/configure-dns-cluster/</guid><description>&lt;!--
---
title: Configure DNS for a Cluster
weight: 130
content_type: concept
---
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes offers a DNS cluster addon, which most of the supported environments enable by default. In Kubernetes version 1.11 and later, CoreDNS is recommended and is installed by default with kubeadm.
--&gt;
&lt;p&gt;Kubernetes 提供 DNS 集群插件，大多数支持的环境默认情况下都会启用。
在 Kubernetes 1.11 及其以后版本中，推荐使用 CoreDNS，
kubeadm 默认会安装 CoreDNS。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
For more information on how to configure CoreDNS for a Kubernetes cluster, see the [Customizing DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/). 
--&gt;
&lt;p&gt;要了解关于如何为 Kubernetes 集群配置 CoreDNS 的更多信息，参阅
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers/"&gt;定制 DNS 服务&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>访问集群上运行的服务</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/access-cluster-services/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/access-application-cluster/access-cluster-services/</guid><description>&lt;!--
title: Access Services Running on Clusters
content_type: task
weight: 140
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to connect to services running on the Kubernetes cluster.
--&gt;
&lt;p&gt;本文展示了如何连接 Kubernetes 集群上运行的服务。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>控制节点上的 CPU 管理策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/</guid><description>&lt;!--
title: Control CPU Management Policies on the Node
reviewers:
- sjenning
- ConnorDoyle
- balajismaniam

content_type: task
min-kubernetes-server-version: v1.26
weight: 140
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes keeps many aspects of how pods execute on nodes abstracted
from the user. This is by design.  However, some workloads require
stronger guarantees in terms of latency and/or performance in order to operate
acceptably. The kubelet provides methods to enable more complex workload
placement policies while keeping the abstraction free from explicit placement
directives.
--&gt;
&lt;p&gt;按照设计，Kubernetes 对 Pod 执行相关的很多方面进行了抽象，使得用户不必关心。
然而，为了正常运行，有些工作负载要求在延迟和/或性能方面有更强的保证。
为此，kubelet 提供方法来实现更复杂的负载放置策略，同时保持抽象，避免显式的放置指令。&lt;/p&gt;</description></item><item><title>配置存活、就绪和启动探针</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</guid><description>&lt;!--
title: Configure Liveness, Readiness and Startup Probes
content_type: task
weight: 140
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure liveness, readiness and startup probes for containers.

For more information about probes, see [Liveness, Readiness and Startup Probes](/docs/concepts/configuration/liveness-readiness-startup-probes)

The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) uses
liveness probes to know when to restart a container. For example, liveness
probes could catch a deadlock, where an application is running, but unable to
make progress. Restarting a container in such a state can help to make the
application more available despite bugs.
--&gt;
&lt;p&gt;这篇文章介绍如何给容器配置存活（Liveness）、就绪（Readiness）和启动（Startup）探针。&lt;/p&gt;</description></item><item><title>更改 Kubernetes 软件包仓库</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/change-package-repository/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/change-package-repository/</guid><description>&lt;!--
title: Changing The Kubernetes Package Repository
content_type: task
weight: 150
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to enable a package repository for the desired
Kubernetes minor release upon upgrading a cluster. This is only needed
for users of the community-owned package repositories hosted at `pkgs.k8s.io`.
Unlike the legacy package repositories, the community-owned package
repositories are structured in a way that there's a dedicated package
repository for each Kubernetes minor version.
--&gt;
&lt;p&gt;本页介绍了如何在升级集群时为所需的 Kubernetes 小版本启用软件包仓库。
这仅适用于使用托管在 &lt;code&gt;pkgs.k8s.io&lt;/code&gt; 上的社区自治软件包仓库的用户。
与传统的软件包仓库不同，
社区自治的软件包仓库所采用的结构为每个 Kubernetes 小版本都有一个专门的软件包仓库。&lt;/p&gt;</description></item><item><title>将 Pod 分配给节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/</guid><description>&lt;!--
title: Assign Pods to Nodes
content_type: task
weight: 150
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to assign a Kubernetes Pod to a particular node in a
Kubernetes cluster.
--&gt;
&lt;p&gt;此页面显示如何将 Kubernetes Pod 指派给 Kubernetes 集群中的特定节点。&lt;/p&gt;
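&lt;p&gt;一种常见做法是先为节点打标签，再在 Pod 规约中使用 &lt;code&gt;nodeSelector&lt;/code&gt;。下面是一个示意清单（标签键值为示例）：&lt;/p&gt;

```yaml
# 示意：nodeSelector 要求节点带有 disktype=ssd 标签
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd    # 需要事先通过 kubectl label nodes 为节点打上此标签
```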
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>控制节点上的拓扑管理策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/topology-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/topology-manager/</guid><description>&lt;!--
title: Control Topology Management Policies on a node
reviewers:
- ConnorDoyle
- klueska
- lmdaly
- nolancon
- bg-chun
content_type: task
min-kubernetes-server-version: v1.18
weight: 150
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.27 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
An increasing number of systems leverage a combination of CPUs and hardware accelerators to
support latency-critical execution and high-throughput parallel computation. These include
workloads in fields such as telecommunications, scientific computing, machine learning, financial
services and data analytics. Such hybrid systems comprise a high performance environment.
--&gt;
&lt;p&gt;越来越多的系统利用 CPU 和硬件加速器的组合来支持要求低延迟的任务和高吞吐量的并行计算。
这类负载包括电信、科学计算、机器学习、金融服务和数据分析等。
此类混合系统构成了一个高性能环境。&lt;/p&gt;</description></item><item><title>用户命名空间</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/user-namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/user-namespaces/</guid><description>&lt;!--
title: User Namespaces
reviewers:
content_type: concept
weight: 160
min-kubernetes-server-version: v1.25
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.30 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page explains how user namespaces are used in Kubernetes pods. A user
namespace isolates the user running inside the container from the one
in the host.

A process running as root in a container can run as a different (non-root) user
in the host; in other words, the process has full privileges for operations
inside the user namespace, but is unprivileged for operations outside the
namespace.
--&gt;
&lt;p&gt;本页解释了在 Kubernetes Pod 中如何使用用户命名空间。
用户命名空间将容器内运行的用户与主机中的用户隔离开来。&lt;/p&gt;
&lt;p&gt;在容器中以 root 身份运行的进程可以在主机上以另一个（非 root）用户身份运行；换言之，该进程在用户命名空间内的操作具有完全的特权，但在该命名空间之外的操作则没有特权。&lt;/p&gt;</description></item><item><title>用节点亲和性把 Pod 分配到节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/</guid><description>&lt;!--
title: Assign Pods to Nodes using Node Affinity
min-kubernetes-server-version: v1.10
content_type: task
weight: 160
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to assign a Kubernetes Pod to a particular node using Node Affinity in a
Kubernetes cluster.
--&gt;
&lt;p&gt;本页展示在 Kubernetes 集群中，如何使用节点亲和性把 Kubernetes Pod 分配到特定节点。&lt;/p&gt;
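&lt;p&gt;作为示意（标签键值为示例），下面的 Pod 使用 &lt;code&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/code&gt; 类型的节点亲和性，要求节点带有 &lt;code&gt;disktype=ssd&lt;/code&gt; 标签：&lt;/p&gt;

```yaml
# 示意：调度时必须满足的节点亲和性规则
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```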
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>自定义 DNS 服务</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers/</guid><description>&lt;!-- 
reviewers:
- bowei
- zihongz
title: Customizing DNS Service
content_type: task
min-kubernetes-server-version: v1.12
weight: 160
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page explains how to configure your DNS
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod(s)'&gt;Pod(s)&lt;/a&gt; and customize the
DNS resolution process in your cluster.
--&gt;
&lt;p&gt;本页说明如何配置 DNS &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;，以及定制集群中 DNS 解析过程。&lt;/p&gt;
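&lt;p&gt;CoreDNS 的行为由 kube-system 命名空间中名为 &lt;code&gt;coredns&lt;/code&gt; 的 ConfigMap 中的 Corefile 控制。下面是一个经过简化的示意（实际默认配置包含更多插件）：&lt;/p&gt;

```yaml
# 简化示意：CoreDNS 的 Corefile 配置
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```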
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>Downward API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/downward-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/downward-api/</guid><description>&lt;!--
title: Downward API
content_type: concept
weight: 170
description: &gt;
 There are two ways to expose Pod and container fields to a running container:
 environment variables, and as files that are populated by a special volume type.
 Together, these two ways of exposing Pod and container fields are called the downward API.
--&gt;
&lt;!-- overview --&gt;
&lt;!--
It is sometimes useful for a container to have information about itself, without
being overly coupled to Kubernetes. The _downward API_ allows containers to consume
information about themselves or the cluster without using the Kubernetes client
or API server.
--&gt;
&lt;p&gt;对于容器来说，在不与 Kubernetes 过度耦合的情况下，拥有关于自身的信息有时是很有用的。
&lt;strong&gt;Downward API&lt;/strong&gt; 允许容器在不使用 Kubernetes 客户端或 API 服务器的情况下获得自己或集群的信息。&lt;/p&gt;</description></item><item><title>调试 DNS 问题</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-debugging-resolution/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/dns-debugging-resolution/</guid><description>&lt;!--
reviewers:
- bowei
- zihongz
title: Debugging DNS Resolution
content_type: task
min-kubernetes-server-version: v1.6
weight: 170
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides hints on diagnosing DNS problems.
--&gt;
&lt;p&gt;这篇文章提供了一些关于 DNS 问题诊断的方法。&lt;/p&gt;
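&lt;p&gt;排查时通常先部署一个带有 DNS 工具的 Pod（镜像仅为示例，可替换为任何包含 &lt;code&gt;nslookup&lt;/code&gt; 的镜像），再在其中执行查询：&lt;/p&gt;

```yaml
# 示意：用于 DNS 排错的工具 Pod
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3   # 示例镜像
    command:
    - sleep
    - "infinity"
```

&lt;p&gt;随后可以运行 &lt;code&gt;kubectl exec -i -t dnsutils -- nslookup kubernetes.default&lt;/code&gt; 来验证集群内解析是否正常。&lt;/p&gt;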
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>配置 Pod 初始化</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-pod-initialization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-pod-initialization/</guid><description>&lt;!--
title: Configure Pod Initialization
content_type: task
weight: 170
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to use an Init Container to initialize a Pod before an
application Container runs.
--&gt;
&lt;p&gt;本文介绍在应用容器运行前，怎样利用 Init 容器初始化 Pod。&lt;/p&gt;
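&lt;p&gt;下面是一个最小的示意清单（名称、镜像与命令均为示例）：Init 容器必须先运行完成，应用容器才会启动：&lt;/p&gt;

```yaml
# 示意：使用 Init 容器在应用容器运行前完成初始化
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: init-workdir
    image: busybox:1.28
    command: ["sh", "-c", "echo 初始化完成"]
  containers:
  - name: app
    image: nginx
```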
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>声明网络策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/declare-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/declare-network-policy/</guid><description>&lt;!--
reviewers:
- caseydavenport
- danwinship
title: Declare Network Policy
min-kubernetes-server-version: v1.8
content_type: task
weight: 180
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other.
--&gt;
&lt;p&gt;本文可以帮助你开始使用 Kubernetes 的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/network-policies/"&gt;NetworkPolicy API&lt;/a&gt;
声明网络策略，以管理 Pod 之间的通信方式。&lt;/p&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;说明：&lt;/strong&gt;&amp;puncsp;本部分链接到提供 Kubernetes 所需功能的第三方项目。Kubernetes 项目作者不负责这些项目。此页面遵循&lt;a href="https://github.com/cncf/foundation/blob/main/policies-guidance/website-guidelines.md" target="_blank"&gt;CNCF 网站指南&lt;/a&gt;，按字母顺序列出项目。要将项目添加到此列表中，请在提交更改之前阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/#third-party-content"&gt;内容指南&lt;/a&gt;。&lt;/div&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>为容器的生命周期事件设置处理函数</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</guid><description>&lt;!--
title: Attach Handlers to Container Lifecycle Events
content_type: task
weight: 180
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to attach handlers to Container lifecycle events. Kubernetes supports
the postStart and preStop events. Kubernetes sends the postStart event immediately
after a Container is started, and it sends the preStop event immediately before the
Container is terminated. A Container may specify one handler per event.
--&gt;
&lt;p&gt;这个页面将演示如何为容器的生命周期事件挂接处理函数。Kubernetes 支持 postStart 和 preStop 事件。
当一个容器启动后，Kubernetes 将立即发送 postStart 事件；在容器被终结之前，
Kubernetes 将发送一个 preStop 事件。容器可以为每个事件指定一个处理程序。&lt;/p&gt;</description></item><item><title>开发云控制器管理器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/developing-cloud-controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/developing-cloud-controller-manager/</guid><description>&lt;!--
reviewers:
- luxas
- thockin
- wlan0
title: Developing Cloud Controller Manager
content_type: concept
weight: 190
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
title: Cloud Controller Manager
id: cloud-controller-manager
full_link: /docs/concepts/architecture/cloud-controller/
short_description: &gt;
 Control plane component that integrates Kubernetes with third-party cloud providers.
aka: 
tags:
- architecture
- operation
--&gt;
&lt;!--
 A Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; component
that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
--&gt;
&lt;p&gt;一个 Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;组件，
嵌入了特定于云平台的控制逻辑。
云控制器管理器（Cloud Controller Manager）允许将你的集群连接到云提供商的 API 之上，
并将与该云平台交互的组件同与你的集群交互的组件分离开来。&lt;/p&gt;</description></item><item><title>配置 Pod 使用 ConfigMap</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/</guid><description>&lt;!--
title: Configure a Pod to Use a ConfigMap
content_type: task
weight: 190
card:
 name: tasks
 weight: 50
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Many applications rely on configuration which is used during either application initialization or runtime.
Most times, there is a requirement to adjust values assigned to configuration parameters.
ConfigMaps are a Kubernetes mechanism that let you inject configuration data into application
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='pods'&gt;pods&lt;/a&gt;.
--&gt;
&lt;p&gt;很多应用在其初始化或运行期间要依赖一些配置信息。
大多数时候，都需要对配置参数所设置的数值进行调整。
ConfigMap 是 Kubernetes 的一种机制，可让你将配置数据注入到应用的
&lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 内部。&lt;/p&gt;</description></item><item><title>启用/禁用 Kubernetes API</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/enable-disable-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/enable-disable-api/</guid><description>&lt;!-- 
---
title: Enable Or Disable A Kubernetes API
content_type: task
weight: 200
---
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page shows how to enable or disable an API version from your cluster's
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;.
--&gt;
&lt;p&gt;本页展示如何在集群的
&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;中
启用或禁用某个 API 版本。&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;!-- 
Specific API versions can be turned on or off by passing `--runtime-config=api/&lt;version&gt;` as a
command line argument to the API server. The values for this argument are a comma-separated
list of API versions. Later values override earlier values.

The `runtime-config` command line argument also supports 2 special keys:
--&gt;
&lt;p&gt;通过向 API 服务器传递命令行参数 &lt;code&gt;--runtime-config=api/&amp;lt;version&amp;gt;&lt;/code&gt;，
可以启用或禁用某个指定的 API 版本。
此参数的值是一个逗号分隔的 API 版本列表。
此列表中，后面的值可以覆盖前面的值。&lt;/p&gt;</description></item><item><title>协调领导者选举</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/coordinated-leader-election/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/coordinated-leader-election/</guid><description>&lt;!--
reviewers:
- jpbetz
title: Coordinated Leader Election
content_type: concept
weight: 200
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta" title="特性门控： CoordinatedLeaderElection"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [beta]&lt;/code&gt;（默认禁用）&lt;/div&gt;

&lt;!--
Kubernetes 1.35 includes a beta feature that allows &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; components to
deterministically select a leader via _coordinated leader election_.
This is useful to satisfy Kubernetes version skew constraints during cluster upgrades.
Currently, the only builtin selection strategy is `OldestEmulationVersion`,
preferring the leader with the lowest emulation version, followed by binary
version, followed by creation timestamp.
--&gt;
&lt;p&gt;Kubernetes 1.35 包含一个 Beta 特性，
允许&lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;组件通过&lt;strong&gt;协调领导者选举&lt;/strong&gt;确定性地选择一个领导者。
这对于在集群升级期间满足 Kubernetes 版本偏差约束非常有用。
目前，唯一内置的选择策略是 &lt;code&gt;OldestEmulationVersion&lt;/code&gt;，
此策略会优先选择最低仿真版本作为领导者，其次按二进制版本选择领导者，最后会按创建时间戳选择领导者。&lt;/p&gt;</description></item><item><title>在 Pod 中的容器之间共享进程命名空间</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/share-process-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/share-process-namespace/</guid><description>&lt;!--
---
title: Share Process Namespace between Containers in a Pod
reviewers:
- verb
- yujuhong
- dchen1107
content_type: task
weight: 200
---
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure process namespace sharing for a pod. When
process namespace sharing is enabled, processes in a container are visible
to all other containers in the same pod.
--&gt;
&lt;p&gt;此页面展示如何为 Pod 配置进程命名空间共享。
当启用进程命名空间共享时，容器中的进程对同一 Pod 中的所有其他容器都是可见的。&lt;/p&gt;
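进程命名空间共享是通过 Pod 规约中的 shareProcessNamespace 字段启用的。下面是一个最小的示意清单（Pod 与容器名称均为假设）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: share-demo              # 假设的 Pod 名称
spec:
  shareProcessNamespace: true   # 在 Pod 内的所有容器间共享进程命名空间
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox:1.28
    command: ["sleep", "3600"]
```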
&lt;!--
You can use this feature to configure cooperating containers, such as a log
handler sidecar container, or to troubleshoot container images that don't
include debugging utilities like a shell.
--&gt;
&lt;p&gt;你可以使用此功能来配置协作容器，比如日志处理 sidecar 容器，
或者对那些不包含诸如 shell 等调试实用工具的镜像进行故障排查。&lt;/p&gt;</description></item><item><title>Pod 使用镜像卷</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/image-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/image-volumes/</guid><description>&lt;!--
title: Use an Image Volume With a Pod
reviewers:
content_type: task
weight: 210
min-kubernetes-server-version: v1.31
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta" title="特性门控： ImageVolume"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [beta]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page shows how to configure a pod using image volumes. This allows you to
mount content from OCI registries inside containers.
--&gt;
&lt;p&gt;本页展示了如何使用镜像卷配置 Pod。此特性允许你在容器内挂载来自 OCI 镜像仓库的内容。&lt;/p&gt;
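作为示意，下面的清单草案（Pod 名称与镜像引用均为假设）在 Pod 规约的 volumes 中使用 image 卷类型，将一个 OCI 制品挂载到容器内：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo     # 假设的 Pod 名称
spec:
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: volume
      mountPath: /volume      # OCI 制品内容将出现在此路径下
  volumes:
  - name: volume
    image:
      reference: quay.io/crio/artifact:v2   # 假设的 OCI 制品引用
      pullPolicy: IfNotPresent
```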
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>静态加密机密数据</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/encrypt-data/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/encrypt-data/</guid><description>&lt;!--
title: Encrypting Confidential Data at Rest
reviewers:
- aramase
- enj
content_type: task
weight: 210
--&gt;
&lt;!-- overview --&gt;
&lt;!--
All of the APIs in Kubernetes that let you write persistent API resource data support
at-rest encryption. For example, you can enable at-rest encryption for
&lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt;.
This at-rest encryption is additional to any system-level encryption for the
etcd cluster or for the filesystem(s) on hosts where you are running the
kube-apiserver.

This page shows how to enable and configure encryption of API data at rest.
--&gt;
&lt;p&gt;Kubernetes 中允许你写入持久化 API 资源数据的所有 API 都支持静态加密。
例如，你可以为 &lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secret'&gt;Secret&lt;/a&gt; 启用静态加密。
此静态加密是对 etcd 集群或运行 kube-apiserver 的主机上的文件系统的任何系统级加密的补充。&lt;/p&gt;</description></item><item><title>为 Pod 配置 user 名字空间</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/user-namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/user-namespaces/</guid><description>&lt;!--
title: Use a User Namespace With a Pod
reviewers:
content_type: task
weight: 210
min-kubernetes-server-version: v1.25
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta" title="特性门控： UserNamespacesSupport"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [beta]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This page shows how to configure a user namespace for pods. This allows you to
isolate the user running inside the container from the one in the host.
--&gt;
&lt;p&gt;本页展示如何为 Pod 配置 user 名字空间。此特性可以将容器内运行的用户与主机上的用户隔离开来。&lt;/p&gt;
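user 名字空间是通过将 Pod 规约中的 hostUsers 字段设置为 false 来启用的。下面是一个最小的示意清单（Pod 名称为假设）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo    # 假设的 Pod 名称
spec:
  hostUsers: false     # 为该 Pod 启用独立的 user 名字空间
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
```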
&lt;!--
A process running as root in a container can run as a different (non-root) user
in the host; in other words, the process has full privileges for operations
inside the user namespace, but is unprivileged for operations outside the
namespace.
--&gt;
&lt;p&gt;在容器中以 root 用户运行的进程可以以不同的（非 root）用户在宿主机上运行；换句话说，
进程在 user 名字空间内部拥有执行操作的全部特权，但在 user 名字空间外部并没有执行操作的特权。&lt;/p&gt;</description></item><item><title>解密已静态加密的机密数据</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/decrypt-data/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/decrypt-data/</guid><description>&lt;!--
title: Decrypt Confidential Data that is Already Encrypted at Rest
content_type: task
weight: 215
--&gt;
&lt;!-- overview --&gt;
&lt;!--
All of the APIs in Kubernetes that let you write persistent API resource data support
at-rest encryption. For example, you can enable at-rest encryption for
&lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt;.
This at-rest encryption is additional to any system-level encryption for the
etcd cluster or for the filesystem(s) on hosts where you are running the
kube-apiserver.
--&gt;
&lt;p&gt;Kubernetes 中允许你写入持久性 API 资源数据的所有 API 都支持静态加密。
例如，你可以为 &lt;a class='glossary-tooltip' title='Secret 用于存储敏感信息，如密码、 OAuth 令牌和 SSH 密钥。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/secret/' target='_blank' aria-label='Secret'&gt;Secret&lt;/a&gt; 启用静态加密。
此静态加密是对 etcd 集群或运行 kube-apiserver 的主机上的文件系统的所有系统级加密的补充。&lt;/p&gt;</description></item><item><title>创建静态 Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/static-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/static-pod/</guid><description>&lt;!--
reviewers:
- jsafrane
title: Create static Pods
weight: 220
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
*Static Pods* are managed directly by the kubelet daemon on a specific node,
without the &lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;
observing them.
Unlike Pods that are managed by the control plane (for example, a
&lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt;);
instead, the kubelet watches each static Pod (and restarts it if it fails).
--&gt;
&lt;p&gt;&lt;strong&gt;静态 Pod&lt;/strong&gt; 在指定的节点上由 kubelet 守护进程直接管理，不需要
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt;监管。
与由控制面管理的 Pod（例如，&lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt;）
不同；kubelet 监视每个静态 Pod（在它失败之后重新启动）。&lt;/p&gt;</description></item><item><title>关键插件 Pod 的调度保证</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/</guid><description>&lt;!--
title: Guaranteed Scheduling For Critical Add-On Pods
content_type: concept
weight: 220
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node. However, add-ons must run on a regular cluster node.
Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI.
A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade)
and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space
vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason).
--&gt;
&lt;p&gt;Kubernetes 核心组件（如 API 服务器、调度器、控制器管理器）在控制平面节点上运行。
但是插件必须在常规集群节点上运行。
其中一些插件对于功能完备的集群至关重要，例如 metrics-server、DNS 和 UI。
如果关键插件被逐出（手动或作为升级等其他操作的副作用）或者进入悬决状态，集群可能会停止正常工作。
关键插件进入悬决状态的例子有：集群利用率过高；被逐出的关键插件 Pod 释放了空间，但该空间被其他悬决的
Pod 占用；由于其它原因导致节点上可用资源的总量发生变化。&lt;/p&gt;</description></item><item><title>混合版本代理</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/mixed-version-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/mixed-version-proxy/</guid><description>&lt;!--
reviewers:
- jpbetz
title: Mixed Version Proxy
content_type: concept
weight: 220
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-alpha" title="特性门控： UnknownVersionInteroperabilityProxy"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.28 [alpha]&lt;/code&gt;（默认禁用）&lt;/div&gt;

&lt;!--
Kubernetes 1.35 includes an alpha feature that lets an
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API Server'&gt;API Server&lt;/a&gt;
proxy resource requests to other _peer_ API servers. It also lets clients get 
a holistic view of resources served across the entire cluster through discovery.
This is useful when there are multiple
API servers running different versions of Kubernetes in one cluster
(for example, during a long-lived rollout to a new release of Kubernetes).
--&gt;
&lt;p&gt;Kubernetes 1.35 包含了一个 Alpha 特性，可以让
&lt;a class='glossary-tooltip' title='提供 Kubernetes API 服务的控制面组件。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API 服务器'&gt;API 服务器&lt;/a&gt;代理指向其他&lt;strong&gt;对等&lt;/strong&gt;
API 服务器的资源请求。
它还允许客户端通过发现机制全面了解整个集群所提供的资源。
当一个集群中运行着多个 API 服务器，且各服务器的 Kubernetes 版本不同时
（例如在上线 Kubernetes 新版本的时间跨度较长时），这一特性非常有用。&lt;/p&gt;</description></item><item><title>IP Masquerade Agent 用户指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/ip-masq-agent/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/ip-masq-agent/</guid><description>&lt;!--
title: IP Masquerade Agent User Guide
content_type: task
weight: 230
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure and enable the `ip-masq-agent`.
--&gt;
&lt;p&gt;此页面展示如何配置和启用 &lt;code&gt;ip-masq-agent&lt;/code&gt;。&lt;/p&gt;
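ip-masq-agent 从其配置文件中读取不做 IP 伪装（masquerade）处理的 CIDR 列表。下面是一份配置内容的草案示例（CIDR 取值仅为示意）：

```yaml
nonMasqueradeCIDRs:
  - 10.0.0.0/8          # 发往这些 CIDR 的流量不做 IP 伪装
  - 192.168.0.0/16
masqLinkLocal: false    # 不对链路本地地址段做伪装
resyncInterval: 60s     # 重新加载配置文件的间隔
```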
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>将 Docker Compose 文件转换为 Kubernetes 资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/translate-compose-kubernetes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/translate-compose-kubernetes/</guid><description>&lt;!--
reviewers:
- cdrage
title: Translate a Docker Compose File to Kubernetes Resources
content_type: task
weight: 230
--&gt;
&lt;!-- overview --&gt;
&lt;!--
What's Kompose? It's a conversion tool for all things compose (namely Docker Compose) to container orchestrators (Kubernetes or OpenShift).
--&gt;
&lt;p&gt;Kompose 是什么？它是一个转换工具，可将 Compose
（即 Docker Compose）所组装的所有内容转换成容器编排器（Kubernetes 或 OpenShift）可识别的形式。&lt;/p&gt;
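例如，假设有下面这个最小的 Docker Compose 文件（内容仅为示意），对其运行 kompose convert 即可生成对应的 Kubernetes Deployment 与 Service 清单：

```yaml
services:
  web:
    image: nginx
    ports:
      - "80:80"    # kompose 会据此生成一个 Service
```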
&lt;!--
More information can be found on the Kompose website at [https://kompose.io/](https://kompose.io/).
--&gt;
&lt;p&gt;更多信息请参考 Kompose 官网 &lt;a href="https://kompose.io/"&gt;https://kompose.io/&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;p&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>通过配置内置准入控制器实施 Pod 安全标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/</guid><description>&lt;!--
title: Enforce Pod Security Standards by Configuring the Built-in Admission Controller
reviewers:
- tallclair
- liggitt
content_type: task
weight: 240
--&gt;
&lt;!--
Kubernetes provides a built-in [admission controller](/docs/reference/access-authn-authz/admission-controllers/#podsecurity)
to enforce the [Pod Security Standards](/docs/concepts/security/pod-security-standards).
You can configure this admission controller to set cluster-wide defaults and [exemptions](/docs/concepts/security/pod-security-admission/#exemptions).
--&gt;
&lt;p&gt;Kubernetes 提供一种内置的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/#podsecurity"&gt;准入控制器&lt;/a&gt;
用来强制实施 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards"&gt;Pod 安全性标准&lt;/a&gt;。
你可以配置此准入控制器来设置集群范围的默认值和&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-admission/#exemptions"&gt;豁免选项&lt;/a&gt;。&lt;/p&gt;
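集群范围的默认值与豁免是通过 API 服务器的准入配置文件设置的。下面是一个示意性的配置草案（其中的级别取值与豁免名单仅为假设）：

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"           # 集群范围的默认强制级别（假设取值）
      enforce-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: ["kube-system"]   # 假设豁免的名字空间
```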
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Following an alpha release in Kubernetes v1.22,
Pod Security Admission became available by default in Kubernetes v1.23, as
a beta. From version 1.25 onwards, Pod Security Admission is generally
available.
--&gt;
&lt;p&gt;Pod 安全性准入（Pod Security Admission）在 Kubernetes v1.22 作为 Alpha 特性发布，
在 Kubernetes v1.23 中作为 Beta 特性默认可用。从 1.25 版本起，
此特性进阶至正式发布（Generally Available）。&lt;/p&gt;</description></item><item><title>限制存储使用量</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/limit-storage-consumption/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/limit-storage-consumption/</guid><description>&lt;!--
title: Limit Storage Consumption
content_type: task
weight: 240
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This example demonstrates how to limit the amount of storage consumed in a namespace
--&gt;
&lt;p&gt;此示例演示如何限制一个名字空间中的存储使用量。&lt;/p&gt;
&lt;!--
The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[LimitRange](/docs/tasks/administer-cluster/memory-default-namespace/),
and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
--&gt;
&lt;p&gt;演示中用到了以下资源：&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/policy/resource-quotas/"&gt;ResourceQuota&lt;/a&gt;、
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/"&gt;LimitRange&lt;/a&gt; 和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumeClaim&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>迁移多副本的控制面以使用云控制器管理器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/</guid><description>&lt;!--
reviewers:
- jpbetz
- cheftako
title: Migrate Replicated Control Plane To Use Cloud Controller Manager
linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
content_type: task
weight: 250
--&gt;
&lt;!-- overview --&gt;
&lt;!--
title: Cloud Controller Manager
id: cloud-controller-manager
full_link: /docs/concepts/architecture/cloud-controller/
short_description: &gt;
 Control plane component that integrates Kubernetes with third-party cloud providers.
aka: 
tags:
- architecture
- operation
--&gt;
&lt;!--
 A Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; component
that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
--&gt;
&lt;p&gt;一个 Kubernetes &lt;a class='glossary-tooltip' title='控制平面是指容器编排层，它暴露 API 和接口来定义、部署容器和管理容器的生命周期。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='控制平面'&gt;控制平面&lt;/a&gt;组件，
嵌入了特定于云平台的控制逻辑。
云控制器管理器（Cloud Controller Manager）允许将你的集群连接到云提供商的 API 之上，
并将与该云平台交互的组件同与你的集群交互的组件分离开来。&lt;/p&gt;</description></item><item><title>使用名字空间标签来实施 Pod 安全性标准</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/</guid><description>&lt;!--
title: Enforce Pod Security Standards with Namespace Labels
reviewers:
- tallclair
- liggitt
content_type: task
weight: 250
--&gt;
&lt;!--
Namespaces can be labeled to enforce the [Pod Security Standards](/docs/concepts/security/pod-security-standards). The three policies
[privileged](/docs/concepts/security/pod-security-standards/#privileged), [baseline](/docs/concepts/security/pod-security-standards/#baseline)
and [restricted](/docs/concepts/security/pod-security-standards/#restricted) broadly cover the security spectrum
and are implemented by the [Pod Security](/docs/concepts/security/pod-security-admission/) &lt;a class='glossary-tooltip' title='在对象持久化之前拦截 Kubernetes API 服务器请求的一段代码。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='admission controller'&gt;admission controller&lt;/a&gt;.
--&gt;
&lt;p&gt;名字空间可以打上标签以强制执行 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards"&gt;Pod 安全性标准&lt;/a&gt;。
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/#privileged"&gt;特权（privileged）&lt;/a&gt;、
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/#baseline"&gt;基线（baseline）&lt;/a&gt;和
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-standards/#restricted"&gt;受限（restricted）&lt;/a&gt;
这三种策略涵盖了广泛的安全范围，并由
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/security/pod-security-admission/"&gt;Pod 安全&lt;/a&gt;&lt;a class='glossary-tooltip' title='在对象持久化之前拦截 Kubernetes API 服务器请求的一段代码。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='准入控制器'&gt;准入控制器&lt;/a&gt;实现。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Pod Security Admission was available by default in Kubernetes v1.23, as
a beta. From version 1.25 onwards, Pod Security Admission is generally
available.
--&gt;
&lt;p&gt;Pod 安全性准入（Pod Security Admission）在 Kubernetes v1.23 中作为 Beta 特性默认可用。
从 1.25 版本起，此特性进阶至正式发布（Generally Available）。&lt;/p&gt;</description></item><item><title>从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/</guid><description>&lt;!--
title: Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller
reviewers:
- tallclair
- liggitt
content_type: task
min-kubernetes-server-version: v1.22
weight: 260
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes the process of migrating from PodSecurityPolicies to the built-in PodSecurity
admission controller. This can be done effectively using a combination of dry-run and `audit` and
`warn` modes, although this becomes harder if mutating PSPs are used.
--&gt;
&lt;p&gt;本页面描述从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器的过程。
这一迁移过程可以通过综合使用试运行、&lt;code&gt;audit&lt;/code&gt; 和 &lt;code&gt;warn&lt;/code&gt; 模式等来实现，
尽管在使用了变更式 PSP 时会变得有些困难。&lt;/p&gt;</description></item><item><title>名字空间演练</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/cluster-management/namespaces-walkthrough/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tutorials/cluster-management/namespaces-walkthrough/</guid><description>&lt;!--
reviewers:
- derekwaynecarr
- janetkuo
title: Namespaces Walkthrough
content_type: task
weight: 260
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespaces'&gt;namespaces&lt;/a&gt;
help different projects, teams, or customers to share a Kubernetes cluster.
--&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='名字空间'&gt;名字空间&lt;/a&gt;有助于不同的项目、团队或客户去共享
Kubernetes 集群。&lt;/p&gt;
&lt;!--
It does this by providing the following:

1. A scope for [Names](/docs/concepts/overview/working-with-objects/names/).
2. A mechanism to attach authorization and policy to a subsection of the cluster.
--&gt;
&lt;p&gt;名字空间通过以下方式实现这点：&lt;/p&gt;</description></item><item><title>操作 Kubernetes 中的 etcd 集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/</guid><description>&lt;!--
reviewers:
- mml
- wojtek-t
- jpbetz
title: Operating etcd clusters for Kubernetes
content_type: task
weight: 270
--&gt;
&lt;!-- overview --&gt;
&lt;!--
title: etcd
id: etcd
date: 2018-04-12
full_link: /docs/tasks/administer-cluster/configure-upgrade-etcd/
short_description: &gt;
 Consistent and highly-available key value store used as backing store of Kubernetes for all cluster data.

aka: 
tags:
- architecture
- storage
--&gt;
&lt;!--
 Consistent and highly-available key value store used as backing store of Kubernetes for all cluster data.
--&gt;
&lt;p&gt;etcd 是一致且高可用的键值存储，用作 Kubernetes 所有集群数据的后台数据库。&lt;/p&gt;</description></item><item><title>为系统守护进程预留计算资源</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/</guid><description>&lt;!--
reviewers:
- vishh
- derekwaynecarr
- dashpole
title: Reserve Compute Resources for System Daemons
content_type: task
weight: 290
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes nodes can be scheduled to `Capacity`. Pods can consume all the
available capacity on a node by default. This is an issue because nodes
typically run quite a few system daemons that power the OS and Kubernetes
itself. Unless resources are set aside for these system daemons, pods and system
daemons compete for resources and lead to resource starvation issues on the
node.

The `kubelet` exposes a feature named 'Node Allocatable' that helps to reserve
compute resources for system daemons. Kubernetes recommends cluster
administrators to configure 'Node Allocatable' based on their workload density
on each node.
--&gt;
&lt;p&gt;Kubernetes 的节点可以按照 &lt;code&gt;Capacity&lt;/code&gt; 调度。默认情况下，Pod 能够使用节点上的全部可用容量。
这是个问题，因为节点自己通常运行了不少驱动 OS 和 Kubernetes 的系统守护进程。
除非为这些系统守护进程留出资源，否则它们将与 Pod 争夺资源并导致节点资源短缺问题。&lt;/p&gt;</description></item><item><title>以非 root 用户身份运行 Kubernetes 节点组件</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubelet-in-userns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubelet-in-userns/</guid><description>&lt;!--
title: Running Kubernetes Node Components as a Non-root User
content_type: task
min-kubernetes-server-version: 1.22
weight: 300
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.22 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This document describes how to run Kubernetes Node components such as kubelet, CRI, OCI, and CNI
without root privileges, by using a &lt;a class='glossary-tooltip' title='一种为非特权用户模拟超级用户特权的 Linux 内核功能特性。' data-toggle='tooltip' data-placement='top' href='https://man7.org/linux/man-pages/man7/user_namespaces.7.html' target='_blank' aria-label='user namespace'&gt;user namespace&lt;/a&gt;.

This technique is also known as _rootless mode_.


&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;p&gt;This document describes how to run Kubernetes Node components (and hence pods) as a non-root user.&lt;/p&gt;</description></item><item><title>安全地清空一个节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/safely-drain-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/safely-drain-node/</guid><description>&lt;!--
reviewers:
- davidopp
- mml
- foxish
- kow3ns
title: Safely Drain a Node
content_type: task
weight: 310
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This page shows how to safely drain a node, respecting the PodDisruptionBudget you have defined.
--&gt;
&lt;p&gt;本页展示了如何在确保 PodDisruptionBudget 的前提下，
安全地清空一个&lt;a class='glossary-tooltip' title='Kubernetes 中的工作机器称作节点。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/' target='_blank' aria-label='节点'&gt;节点&lt;/a&gt;。&lt;/p&gt;
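最小的操作流程示意如下（节点名称 `node-1` 为假设值；`--ignore-daemonsets` 用于跳过 DaemonSet 管理的 Pod）：

```shell
# 标记节点为不可调度，并驱逐其上的 Pod；
# 驱逐操作会遵守已定义的 PodDisruptionBudget
kubectl drain node-1 --ignore-daemonsets

# 维护完成后，恢复节点的可调度状态
kubectl uncordon node-1
```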
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!-- 
This task assumes that you have met the following prerequisites:

1. You do not require your applications to be highly available during the
 node drain, or
1. You have read about the [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) concept,
 and have [configured PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/) for
 applications that need them.
--&gt;
&lt;p&gt;此任务假定你已经满足了以下先决条件：&lt;/p&gt;</description></item><item><title>保护集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/securing-a-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/securing-a-cluster/</guid><description>&lt;!--
reviewers:
- smarterclayton
- liggitt
- enj
title: Securing a Cluster
content_type: task
weight: 320
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document covers topics related to protecting a cluster from accidental or malicious access
and provides recommendations on overall security.
--&gt;
&lt;p&gt;本文档涉及与保护集群免受意外或恶意访问有关的主题，并对总体安全性提出建议。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>通过配置文件设置 kubelet 参数</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/</guid><description>&lt;!--
reviewers:
- mtaufen
- dawnchen
title: Set Kubelet Parameters Via A Configuration File
content_type: task
weight: 330
--&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Some steps in this page use the `jq` tool. If you don't have `jq`, you can
install it via your operating system's software sources, or fetch it from
[https://jqlang.github.io/jq/](https://jqlang.github.io/jq/).

Some steps also involve installing `curl`, which can be installed via your
operating system's software sources.
--&gt;
&lt;p&gt;此页面中的某些步骤使用 &lt;code&gt;jq&lt;/code&gt; 工具。如果你没有 &lt;code&gt;jq&lt;/code&gt;，你可以通过操作系统的软件源安装它，或者从
&lt;a href="https://jqlang.github.io/jq/"&gt;https://jqlang.github.io/jq/&lt;/a&gt; 中获取它。&lt;/p&gt;</description></item><item><title>通过名字空间共享集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/namespaces/</guid><description>&lt;!--
reviewers:
- derekwaynecarr
- janetkuo
title: Share a Cluster with Namespaces
content_type: task
weight: 340
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to view, work in, and delete &lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='namespaces'&gt;namespaces&lt;/a&gt;.
The page also shows how to use Kubernetes namespaces to subdivide your cluster.
--&gt;
&lt;p&gt;本页展示如何查看、使用和删除&lt;a class='glossary-tooltip' title='名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/' target='_blank' aria-label='名字空间'&gt;名字空间&lt;/a&gt;。
本页同时展示如何使用 Kubernetes 名字空间来划分集群。&lt;/p&gt;
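下面是几条相关的基础命令示意（名字空间名称 `demo-ns` 为假设值）：

```shell
# 列出集群中现有的名字空间
kubectl get namespaces

# 创建一个新的名字空间
kubectl create namespace demo-ns

# 查看指定名字空间中的 Pod
kubectl get pods --namespace=demo-ns
```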
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* Have an [existing Kubernetes cluster](/docs/setup/).
* You have a basic understanding of Kubernetes &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;,
 &lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Services'&gt;Services&lt;/a&gt;, and
 &lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployments'&gt;Deployments&lt;/a&gt;.
--&gt;
&lt;ul&gt;
&lt;li&gt;你已拥有一个&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/"&gt;配置好的 Kubernetes 集群&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;你已对 Kubernetes 的 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;、
&lt;a class='glossary-tooltip' title='将运行在一组 Pods 上的应用程序公开为网络服务的抽象方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt; 和
&lt;a class='glossary-tooltip' title='管理集群上的多副本应用。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; 有基本理解。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## Viewing namespaces
--&gt;
&lt;h2 id="查看名字空间"&gt;查看名字空间&lt;/h2&gt;
&lt;!--
List the current namespaces in a cluster using:
--&gt;
&lt;p&gt;列出集群中现有的名字空间：&lt;/p&gt;</description></item><item><title>升级集群</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cluster-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cluster-upgrade/</guid><description>&lt;!--
title: Upgrade A Cluster
content_type: task
weight: 350
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page provides an overview of the steps you should follow to upgrade a
Kubernetes cluster.

The Kubernetes project recommends upgrading to the latest patch releases promptly, and
to ensure that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.

The way that you upgrade a cluster depends on how you initially deployed it
and on any subsequent changes.

At a high level, the steps you perform are:
--&gt;
&lt;p&gt;本页概述升级 Kubernetes 集群的步骤。&lt;/p&gt;</description></item><item><title>在集群中使用级联删除</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/use-cascading-deletion/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/use-cascading-deletion/</guid><description>&lt;!--
title: Use Cascading Deletion in a Cluster
content_type: task
weight: 360
--&gt;
&lt;!--overview--&gt;
&lt;!--
This page shows you how to specify the type of
[cascading deletion](/docs/concepts/architecture/garbage-collection/#cascading-deletion)
to use in your cluster during &lt;a class='glossary-tooltip' title='Kubernetes 用于清理集群资源的各种机制的统称。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/garbage-collection/' target='_blank' aria-label='garbage collection'&gt;garbage collection&lt;/a&gt;.
--&gt;
&lt;p&gt;本页面向你展示如何设置在你的集群中执行&lt;a class='glossary-tooltip' title='Kubernetes 用于清理集群资源的各种机制的统称。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/garbage-collection/' target='_blank' aria-label='垃圾收集'&gt;垃圾收集&lt;/a&gt;
时要使用的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/garbage-collection/#cascading-deletion"&gt;级联删除&lt;/a&gt;
类型。&lt;/p&gt;
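作为示意，删除对象时可以通过 `kubectl delete` 的 `--cascade` 标志选择级联删除类型（Deployment 名称 `nginx-deployment` 为假设值）：

```shell
# 前台级联删除：先删除全部依赖对象，再删除属主对象
kubectl delete deployment nginx-deployment --cascade=foreground

# 孤立删除：只删除属主对象，保留其依赖对象
kubectl delete deployment nginx-deployment --cascade=orphan
```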
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>使用 KMS 驱动进行数据加密</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kms-provider/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/kms-provider/</guid><description>&lt;!--
reviewers:
- aramase
- enj
title: Using a KMS provider for data encryption
content_type: task
weight: 370
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption.
In Kubernetes 1.35 there are two versions of KMS at-rest encryption.
You should use KMS v2 if feasible because KMS v1 is deprecated (since Kubernetes v1.28) and disabled by default (since Kubernetes v1.29).
KMS v2 offers significantly better performance characteristics than KMS v1.
--&gt;
&lt;p&gt;本页展示了如何配置密钥管理服务（Key Management Service，KMS）驱动和插件以启用 Secret 数据加密。
在 Kubernetes 1.35 中，存在两个版本的 KMS 静态加密方式。
如果可行的话，建议使用 KMS v2，因为（自 Kubernetes v1.28 起）KMS v1 已经被弃用并
（自 Kubernetes v1.29 起）默认被禁用。
KMS v2 提供了比 KMS v1 明显更好的性能特征。&lt;/p&gt;</description></item><item><title>使用 CoreDNS 进行服务发现</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/coredns/</guid><description>&lt;!--
reviewers:
- johnbelamaric
title: Using CoreDNS for Service Discovery
min-kubernetes-server-version: v1.9
content_type: task
weight: 380
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page describes the CoreDNS upgrade process and how to install CoreDNS instead of kube-dns.
--&gt;
&lt;p&gt;此页面介绍了 CoreDNS 升级过程以及如何安装 CoreDNS 而不是 kube-dns。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>在 Kubernetes 集群中使用 NodeLocal DNSCache</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/nodelocaldns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/nodelocaldns/</guid><description>&lt;!--
reviewers:
- bowei
- zihongz
- sftim
title: Using NodeLocal DNSCache in Kubernetes Clusters
content_type: task
weight: 390
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This page provides an overview of NodeLocal DNSCache feature in Kubernetes.
--&gt;
&lt;p&gt;本页概述了 Kubernetes 中的 NodeLocal DNSCache 功能。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
 You need to have a Kubernetes cluster, and the kubectl command-line tool must
 be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
 cluster, you can create one by using
 [minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
 or you can use one of these Kubernetes playgrounds:
 --&gt;
 &lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
 建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
 如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
 构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>在 Kubernetes 集群中使用 sysctl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/</guid><description>&lt;!--
title: Using sysctls in a Kubernetes Cluster
reviewers:
- sttts
content_type: task
weight: 400
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
This document describes how to configure and use kernel parameters within a
Kubernetes cluster using the &lt;a class='glossary-tooltip' title='用于获取和设置 Unix 内核参数的接口' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/' target='_blank' aria-label='sysctl'&gt;sysctl&lt;/a&gt;
interface.
--&gt;
&lt;p&gt;本文档介绍如何通过 &lt;a class='glossary-tooltip' title='用于获取和设置 Unix 内核参数的接口' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/' target='_blank' aria-label='sysctl'&gt;sysctl&lt;/a&gt;
接口在 Kubernetes 集群中配置和使用内核参数。&lt;/p&gt;
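作为示意，安全的（即名字空间化的）sysctl 可以通过 Pod 的 `securityContext.sysctls` 字段设置（Pod 名称与镜像仅作示意）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  # 假设的 Pod 名称
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    # 为该 Pod 设置一个安全的 sysctl 参数
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```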

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
Starting from Kubernetes version 1.23, the kubelet supports the use of either `/` or `.`
as separators for sysctl names.
Starting from Kubernetes version 1.25, setting Sysctls for a Pod supports setting sysctls with slashes.
For example, you can represent the same sysctl name as `kernel.shm_rmid_forced` using a
period as the separator, or as `kernel/shm_rmid_forced` using a slash as a separator.
For more sysctl parameter conversion method details, please refer to
the page [sysctl.d(5)](https://man7.org/linux/man-pages/man5/sysctl.d.5.html) from
the Linux man-pages project.
--&gt;
&lt;p&gt;从 Kubernetes 1.23 版本开始，kubelet 支持使用 &lt;code&gt;/&lt;/code&gt; 或 &lt;code&gt;.&lt;/code&gt; 作为 sysctl 参数的分隔符。
从 Kubernetes 1.25 版本开始，为 Pod 设置 sysctl 时也支持使用名字中带有斜线的 sysctl。
例如，同一个 sysctl 参数可以用点或者斜线作为分隔符来表示：以点作为分隔符表示为 &lt;code&gt;kernel.shm_rmid_forced&lt;/code&gt;，
或者以斜线作为分隔符表示为：&lt;code&gt;kernel/shm_rmid_forced&lt;/code&gt;。
更多 sysctl 参数转换方法详情请参考 Linux man-pages
&lt;a href="https://man7.org/linux/man-pages/man5/sysctl.d.5.html"&gt;sysctl.d(5)&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>使用 NUMA 感知的内存管理器</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/memory-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/memory-manager/</guid><description>&lt;!--
title: Utilizing the NUMA-aware Memory Manager

reviewers:
- klueska
- derekwaynecarr

content_type: task
min-kubernetes-server-version: v1.32
weight: 410
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： MemoryManager"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.32 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
The Kubernetes *Memory Manager* enables the feature of guaranteed memory (and hugepages)
allocation for pods in the `Guaranteed` &lt;a class='glossary-tooltip' title='QoS 类（Quality of Service Class）为 Kubernetes 提供了一种将集群中的 Pod 分为几个类并做出有关调度和驱逐决策的方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='QoS class'&gt;QoS class&lt;/a&gt;.

The Memory Manager employs hint generation protocol to yield the most suitable NUMA affinity for a pod.
The Memory Manager feeds the central manager (*Topology Manager*) with these affinity hints.
Based on both the hints and Topology Manager policy, the pod is rejected or admitted to the node.
--&gt;
&lt;p&gt;Kubernetes 内存管理器（Memory Manager）为 &lt;code&gt;Guaranteed&lt;/code&gt;
&lt;a class='glossary-tooltip' title='QoS 类（Quality of Service Class）为 Kubernetes 提供了一种将集群中的 Pod 分为几个类并做出有关调度和驱逐决策的方法。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='QoS 类'&gt;QoS 类&lt;/a&gt;
的 Pods 提供可保证的内存（及大页面）分配能力。&lt;/p&gt;</description></item><item><title>验证已签名容器镜像</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/verify-signed-artifacts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/verify-signed-artifacts/</guid><description>&lt;!--
title: Verify Signed Container Images
content_type: task
min-kubernetes-server-version: v1.24
weight: 420
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You will need to have the following tools installed:

- `cosign` ([install guide](https://docs.sigstore.dev/cosign/system_config/installation/))
- `curl` (often provided by your operating system)
- `jq` ([download jq](https://jqlang.github.io/jq/download/))
--&gt;
&lt;p&gt;你需要安装以下工具：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cosign&lt;/code&gt;（&lt;a href="https://docs.sigstore.dev/cosign/system_config/installation/"&gt;安装指南&lt;/a&gt;）&lt;/li&gt;
&lt;li&gt;&lt;code&gt;curl&lt;/code&gt;（通常由你的操作系统提供）&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jq&lt;/code&gt;（&lt;a href="https://jqlang.github.io/jq/download/"&gt;下载 jq&lt;/a&gt;）&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
## Verifying binary signatures

The Kubernetes release process signs all binary artifacts (tarballs, SPDX files,
standalone binaries) by using cosign's keyless signing. To verify a particular
binary, retrieve it together with its signature and certificate: 
--&gt;
&lt;h2 id="verifying-binary-signatures"&gt;验证二进制签名&lt;/h2&gt;
&lt;p&gt;Kubernetes 发布过程使用 cosign 的无密钥签名对所有二进制工件（压缩包、
SPDX 文件、独立的二进制文件）签名。要验证一个特定的二进制文件，
获取组件时要包含其签名和证书：&lt;/p&gt;</description></item><item><title>Headlamp 2025 年度项目亮点</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/22/headlamp-in-2025-project-highlights/</link><pubDate>Thu, 22 Jan 2026 10:00:00 +0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/22/headlamp-in-2025-project-highlights/</guid><description>&lt;!--
title: "Headlamp in 2025: Project Highlights"
date: 2026-01-22T10:00:00+08:00
slug: headlamp-in-2025-project-highlights
author: &gt;
 Evangelos Skopelitis (Microsoft)
--&gt;
&lt;!--
_This announcement is a recap from a post originally [published](https://headlamp.dev/blog/2025/11/13/headlamp-in-2025) on the Headlamp blog._
--&gt;
&lt;p&gt;&lt;strong&gt;本公告是对最初在 Headlamp 博客上&lt;a href="https://headlamp.dev/blog/2025/11/13/headlamp-in-2025"&gt;发布&lt;/a&gt;的帖子的回顾。&lt;/strong&gt;&lt;/p&gt;
&lt;!--
[Headlamp](https://headlamp.dev/) has come a long way in 2025. The project has continued to grow – reaching more teams across platforms, powering new workflows and integrations through plugins, and seeing increased collaboration from the broader community.
--&gt;
&lt;p&gt;&lt;a href="https://headlamp.dev/"&gt;Headlamp&lt;/a&gt; 在 2025 年取得了长足的发展。该项目持续成长，覆盖了更多平台和团队；
通过插件机制支持了新的工作流和集成方式；同时也看到了来自更广泛社区的协作不断增强。&lt;/p&gt;</description></item><item><title>Kubernetes v1.35：云控制器管理器中的基于监视的路由协调</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/08/kubernetes-v1-35-watch-based-route-reconciliation-in-ccm/</link><pubDate>Thu, 08 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/08/kubernetes-v1-35-watch-based-route-reconciliation-in-ccm/</guid><description>&lt;!--
---
layout: blog
title: "Kubernetes v1.35: Watch Based Route Reconciliation in the Cloud Controller Manager"
date: 2026-01-08T10:30:00-08:00
slug: kubernetes-v1-35-watch-based-route-reconciliation-in-ccm
author: &gt;
 [Lukas Metzner](https://github.com/lukasmetzner) (Hetzner)
---
--&gt;
&lt;!--
Up to and including Kubernetes v1.34,
the route controller in Cloud Controller Manager (CCM)
implementations built using the
[k8s.io/cloud-provider](https://github.com/kubernetes/cloud-provider)
library reconciles routes at a fixed interval.
This causes unnecessary API requests to the cloud provider when
there are no changes to routes. Other controllers implemented
through the same library already use watch-based mechanisms,
leveraging informers to avoid unnecessary API calls.
A new feature gate is being introduced in v1.35 to allow
changing the behavior of the route controller to use watch-based informers.
--&gt;
&lt;p&gt;在 Kubernetes v1.34 及更早版本中，使用
&lt;a href="https://github.com/kubernetes/cloud-provider"&gt;k8s.io/cloud-provider&lt;/a&gt;
库构建的云控制器管理器（CCM）实现中的路由控制器会以固定的时间间隔进行路由协调。
这会导致在路由没有变化的情况下，向云提供商发出不必要的 API 请求。
其他使用同一库实现的控制器已经使用基于监听的机制，
利用 informer 来避免不必要的 API 调用。
v1.35 版本引入了一个新的特性门控，允许更改路由控制器的行为，
使其使用基于监听的 informer。&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: 通过就地重启 Pod 实现更高的效率</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/05/kubernetes-v1-35-restart-all-containers/</link><pubDate>Mon, 05 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/05/kubernetes-v1-35-restart-all-containers/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.35: New level of efficiency with in-place Pod restart"
date: 2026-01-05T10:30:00-08:00
slug: kubernetes-v1-35-restart-all-containers
author: &gt;
 [Yuan Wang](https://github.com/yuanwang04)
 [Giuseppe Tinti Tomio](https://github.com/GiuseppeTT)
 [Sergey Kanzhelev](https://github.com/SergeyKanzhelev)
translator: &gt;
 [Xin Li](https://github.com/my-git9)
--&gt;
&lt;!--
The release of Kubernetes 1.35 introduces a powerful new feature that provides a much-requested capability: the ability to trigger a full, in-place restart of the Pod. This feature, *Restart All Containers* (alpha in 1.35), allows for an efficient way to reset a Pod's state compared to resource-intensive approach of deleting and recreating the entire Pod. This feature is especially useful for AI/ML workloads allowing application developers to concentrate on their core training logic while offloading complex failure-handling and recovery mechanisms to sidecars and declarative Kubernetes configuration. With `RestartAllContainers` and other planned enhancements, Kubernetes continues to add building blocks for creating the most flexible, robust, and efficient platforms for AI/ML workloads.

This new functionality is available by enabling the `RestartAllContainersOnContainerExits` feature gate. This alpha feature extends the [*Container Restart Rules* feature](/docs/concepts/workloads/pods/pod-lifecycle/#container-restart-rules), which graduated to beta in Kubernetes 1.35.
--&gt;
&lt;p&gt;Kubernetes 1.35 版本引入了一项强大的新特性，满足了用户对 Pod 就地重启的迫切需求。
这项名为“重启所有容器”（Restart All Containers，1.35 版本为 Alpha 版）的特性，
相比于资源用量较高的删除并重建整个 Pod 的方式，能够更高效地重置 Pod 的状态。
该特性对于 AI/ML 工作负载尤为实用，使应用程序开发人员能够专注于核心训练逻辑，
同时将复杂的故障处理和恢复机制交给边车容器和声明式 Kubernetes 配置来处理。
凭借 &lt;code&gt;RestartAllContainers&lt;/code&gt; 和其他计划中的增强特性，
Kubernetes 将继续构建更灵活、更健壮、更高效的 AI/ML 工作负载平台。&lt;/p&gt;</description></item><item><title>Kubernetes v1.35：扩展容忍度运算符以支持数值比较（Alpha）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/05/kubernetes-v1-35-numeric-toleration-operators/</link><pubDate>Mon, 05 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2026/01/05/kubernetes-v1-35-numeric-toleration-operators/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.35: Extended Toleration Operators to Support Numeric Comparisons (Alpha)"
date: 2026-01-05T10:30:00-08:00
slug: kubernetes-v1-35-numeric-toleration-operators
author: &gt;
 Heba Elayoty (Microsoft)
--&gt;
&lt;!--
Many production Kubernetes clusters blend on-demand (higher-SLA) and spot/preemptible (lower-SLA) nodes to optimize costs while maintaining reliability for critical workloads. Platform teams need a safe default that keeps most workloads away from risky capacity, while allowing specific workloads to opt-in with explicit thresholds like "I can tolerate nodes with failure probability up to 5%".
--&gt;
&lt;p&gt;许多生产级 Kubernetes 集群会混合使用按需（on-demand，高 SLA）节点与 spot/可抢占（preemptible，低 SLA）节点，
以在保证关键工作负载可靠性的同时优化成本。平台团队需要一个“安全默认值”，让大多数工作负载远离风险容量，
同时又允许特定工作负载用明确阈值显式选择接受（opt-in），例如“我可以容忍失败概率最高 5% 的节点”。&lt;/p&gt;</description></item><item><title>Kubernetes v1.35：Job Managed By 特性正式发布（GA）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/12/18/kubernetes-v1-35-job-managedby-for-jobs-goes-ga/</link><pubDate>Thu, 18 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/12/18/kubernetes-v1-35-job-managedby-for-jobs-goes-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.35: Job Managed By Goes GA"
date: 2025-12-18T10:30:00-08:00
slug: kubernetes-v1-35-job-managedby-for-jobs-goes-ga
author: &gt;
 [Dejan Zele Pejchev](https://github.com/dejanzele) (G-Research),
 [Michał Woźniak](https://github.com/mimowo) (Google)
--&gt;
&lt;!--
In Kubernetes v1.35, the ability to specify an external Job controller (through `.spec.managedBy`) graduates to General Availability.
--&gt;
&lt;p&gt;在 Kubernetes v1.35 中，通过 &lt;code&gt;.spec.managedBy&lt;/code&gt; 指定外部 Job 控制器的能力升级为正式可用（GA）。&lt;/p&gt;
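&lt;p&gt;下面是一个示意性的清单片段，展示如何在 Job 上设置 &lt;code&gt;.spec.managedBy&lt;/code&gt;
（其中控制器名称取值取决于你实际使用的外部控制器，此处以 MultiKueue 为例，仅作说明）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
spec:
  # 指定由外部控制器负责此 Job 的调谐；
  # 内置 Job 控制器将跳过此 Job。该字段创建后不可变更。
  managedBy: kueue.x-k8s.io/multikueue
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo done"]
&lt;/code&gt;&lt;/pre&gt;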
&lt;!--
This feature allows external controllers to take full responsibility for Job reconciliation, unlocking powerful scheduling patterns like multi-cluster dispatching with [MultiKueue](https://kueue.sigs.k8s.io/docs/concepts/multikueue/).
--&gt;
&lt;p&gt;该特性允许外部控制器对 Job 的调谐（reconciliation）承担完全责任，从而解锁更强大的调度模式，
例如借助 &lt;a href="https://kueue.sigs.k8s.io/docs/concepts/multikueue/"&gt;MultiKueue&lt;/a&gt; 进行跨多集群派发。&lt;/p&gt;</description></item><item><title>Kubernetes v1.35：Timbernetes（世界树版本）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/12/17/kubernetes-v1-35-release/</link><pubDate>Wed, 17 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/12/17/kubernetes-v1-35-release/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.35: Timbernetes (The World Tree Release)"
date: 2025-12-17T10:30:00-08:00
evergreen: true
slug: kubernetes-v1-35-release
author: &gt;
 [Kubernetes v1.35 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.35/release-team.md)
--&gt;
&lt;!--
**Editors**: Aakanksha Bhende, Arujjwal Negi, Chad M. Crowell, Graziano Casto, Swathi Rao
--&gt;
&lt;p&gt;&lt;strong&gt;编辑&lt;/strong&gt;：Aakanksha Bhende、Arujjwal Negi、Chad M. Crowell、Graziano Casto、Swathi Rao&lt;/p&gt;
&lt;!--
Similar to previous releases, the release of Kubernetes v1.35 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
--&gt;
&lt;p&gt;与之前版本类似，Kubernetes v1.35 的发布引入了新的稳定（GA）、Beta 和 Alpha 特性。
持续交付高质量版本，体现了我们开发周期的韧性，也离不开社区的热情支持。&lt;/p&gt;</description></item><item><title>Kubernetes v1.35 抢先一览</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/26/kubernetes-v1-35-sneak-peek/</link><pubDate>Wed, 26 Nov 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/26/kubernetes-v1-35-sneak-peek/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.35 Sneak Peek'
date: 2025-11-26
slug: kubernetes-v1-35-sneak-peek
author: &gt;
 Aakanksha Bhende,
 Arujjwal Negi,
 Chad M. Crowell,
 Graziano Casto,
 Swathi Rao
--&gt;
&lt;!--
As the release of Kubernetes v1.35 approaches, the Kubernetes project continues to evolve.
Features may be deprecated, removed, or replaced to improve the project's overall health.
This blog post outlines planned changes for the v1.35 release that the release team believes
you should be aware of to ensure the continued smooth operation of your Kubernetes cluster(s),
and to keep you up to date with the latest developments.
The information below is based on the current status of the v1.35 release
and is subject to change before the final release date.
--&gt;
&lt;p&gt;随着 Kubernetes v1.35 发布的临近，Kubernetes 项目持续演进。
为了改善项目的整体健康状况，某些功能可能会被弃用、移除或替换。
本博客文章概述了 v1.35 版本的计划变更，
发布团队认为你应该了解这些变更，以确保 Kubernetes 集群的持续平稳运行，
并让你了解最新进展。
以下信息基于 v1.35 版本的当前状态，在最终发布日期之前可能会发生变化。&lt;/p&gt;</description></item><item><title>Kubernetes 配置最佳实践</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/25/configuration-good-practices/</link><pubDate>Tue, 25 Nov 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/25/configuration-good-practices/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Configuration Good Practices"
date: 2025-11-25T00:00:00+00:00
slug: configuration-good-practices
evergreen: true
author: Kirti Goyal
--&gt;
&lt;!--
Configuration is one of those things in Kubernetes that seems small until it's not.
Configuration is at the heart of every Kubernetes workload.
A missing quote, a wrong API version or a misplaced YAML indent can ruin your entire deploy.
--&gt;
&lt;p&gt;配置是 Kubernetes 中看似微不足道，实则关键的事情之一。
配置是每个 Kubernetes 工作负载的核心。
一个缺失的引号、错误的 API 版本或错位的 YAML 缩进都可能毁掉你的整个部署。&lt;/p&gt;</description></item><item><title>Ingress NGINX 退役：你需要了解的内容</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/11/ingress-nginx-retirement/</link><pubDate>Tue, 11 Nov 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/11/ingress-nginx-retirement/</guid><description>&lt;!--
layout: blog
title: "Ingress NGINX Retirement: What You Need to Know"
slug: ingress-nginx-retirement
canonicalUrl: https://www.kubernetes.dev/blog/2025/11/12/ingress-nginx-retirement
date: 2025-11-11T10:30:00-08:00
author: &gt;
 Tabitha Sable (Kubernetes SRC)
--&gt;
&lt;!--
To prioritize the safety and security of the ecosystem, Kubernetes SIG Network and the Security Response Committee are announcing the upcoming retirement of [Ingress NGINX](https://github.com/kubernetes/ingress-nginx/). Best-effort maintenance will continue until March 2026. Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. **Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available.**

We recommend migrating to one of the many alternatives. Consider [migrating to Gateway API](https://gateway-api.sigs.k8s.io/guides/), the modern replacement for Ingress. If you must continue using Ingress, many alternative Ingress controllers are [listed in the Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/). Continue reading for further information about the history and current state of Ingress NGINX, as well as next steps.
--&gt;
&lt;p&gt;为了优先考虑生态系统的安全，Kubernetes SIG Network 和安全响应委员会宣布
&lt;a href="https://github.com/kubernetes/ingress-nginx/"&gt;Ingress NGINX&lt;/a&gt; 即将退役，
维护工作将以尽力而为的方式持续到 2026 年 3 月。
之后，将不再有进一步的版本发布、错误修复和更新来解决可能发现的任何安全漏洞。
&lt;strong&gt;现有的 Ingress NGINX 部署将继续运行，并且安装工件仍将可用。&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>公布 2025 年指导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/09/steering-committee-results-2025/</link><pubDate>Sun, 09 Nov 2025 15:10:00 -0500</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/11/09/steering-committee-results-2025/</guid><description>&lt;!--
layout: blog
title: "Announcing the 2025 Steering Committee Election Results"
slug: steering-committee-results-2025
canonicalUrl: https://www.kubernetes.dev/blog/2025/11/09/steering-committee-results-2025
date: 2025-11-09T15:10:00-05:00
author: &gt;
 Arujjwal Negi
--&gt;
&lt;!--
The [2025 Steering Committee Election](https://github.com/kubernetes/community/tree/master/elections/steering/2025) is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2025. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.

The Steering Committee oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).

Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2025"&gt;2025 指导委员会选举&lt;/a&gt;现已结束。
Kubernetes 指导委员会由 7 个席位组成，其中 4 个席位在 2025 年进行了选举。
新当选的委员会成员将任职 2 年，所有成员均由 Kubernetes 社区选举产生。&lt;/p&gt;</description></item><item><title>7 个常见的 Kubernetes 坑（以及我是如何避开的）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/10/20/seven-kubernetes-pitfalls-and-how-to-avoid/</link><pubDate>Mon, 20 Oct 2025 08:30:00 -0700</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/10/20/seven-kubernetes-pitfalls-and-how-to-avoid/</guid><description>&lt;!--
layout: blog
title: "7 Common Kubernetes Pitfalls (and How I Learned to Avoid Them)"
date: 2025-10-20T08:30:00-07:00
slug: seven-kubernetes-pitfalls-and-how-to-avoid
author: &gt;
 Abdelkoddous Lhajouji
--&gt;
&lt;!--
It's no secret that Kubernetes can be both powerful and frustrating at times. When I first started dabbling with container orchestration, I made more than my fair share of mistakes, enough to compile a whole list of pitfalls. In this post, I want to walk through seven big gotchas I've encountered (or seen others run into) and share some tips on how to avoid them. Whether you're just kicking the tires on Kubernetes or already managing production clusters, I hope these insights help you steer clear of a little extra stress.
--&gt;
&lt;p&gt;Kubernetes 功能强大，但有时也会令人沮丧，这已不是什么秘密。
当我刚开始接触容器编排时，我犯了不少错误，足以列出一整张误区清单。
在这篇文章中，我想分享我遇到的（或看到其他人遇到的）七个常见误区，
以及如何避免它们的建议。
无论你只是刚开始尝试 Kubernetes，还是已经在管理生产集群，
我希望这些见解能帮助你避免一些额外的麻烦。&lt;/p&gt;</description></item><item><title>Kubernetes v1.34：从存储卷扩展失效中恢复（GA）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/19/kubernetes-v1-34-recover-expansion-failure/</link><pubDate>Fri, 19 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/19/kubernetes-v1-34-recover-expansion-failure/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.34: Recovery From Volume Expansion Failure (GA)"
date: 2025-09-19T10:30:00-08:00
slug: kubernetes-v1-34-recover-expansion-failure
author: &gt;
 [Hemant Kumar](https://github.com/gnufied) (Red Hat)
--&gt;
&lt;!--
Have you ever made a typo when expanding your persistent volumes in Kubernetes? Meant to specify `2TB`
but specified `20TiB`? This seemingly innocuous problem was kinda hard to fix, and took the project almost 5 years to address.
[Automated recovery from storage expansion](/docs/concepts/storage/persistent-volumes/#recovering-from-failure-when-expanding-volumes) has been around for a while in beta; however, with the v1.34 release, we have graduated this to
**general availability**.
--&gt;
&lt;p&gt;你是否曾经在扩展 Kubernetes 中的持久卷时犯过拼写错误？本来想指定 &lt;code&gt;2TB&lt;/code&gt; 却写成了 &lt;code&gt;20TiB&lt;/code&gt;？
这个看似无害的问题实际上很难修复——项目花了将近 5 年时间才解决。
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#recovering-from-failure-when-expanding-volumes"&gt;存储扩展的自动恢复&lt;/a&gt;
此特性在一段时间内一直处于 Beta 状态；不过，随着 v1.34 版本的发布，我们已经将其提升到&lt;strong&gt;正式发布&lt;/strong&gt;状态。&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: 将卷组快照推进至 v1beta2 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/16/kubernetes-v1-34-volume-group-snapshot-beta-2/</link><pubDate>Tue, 16 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/16/kubernetes-v1-34-volume-group-snapshot-beta-2/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.34: Moving Volume Group Snapshots to v1beta2"
date: 2025-09-16T10:30:00-08:00
slug: kubernetes-v1-34-volume-group-snapshot-beta-2
author: &gt;
 Xing Yang (VMware by Broadcom)
--&gt;
&lt;!--
Volume group snapshots were [introduced](/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/)
as an Alpha feature with the Kubernetes 1.27 release and moved to [Beta](/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/) in the Kubernetes 1.32 release.
The recent release of Kubernetes v1.34 moved that support to a second beta.
The support for volume group snapshots relies on a set of
[extension APIs for group snapshots](https://kubernetes-csi.github.io/docs/group-snapshot-restore-feature.html#volume-group-snapshot-apis).
These APIs allow users to take crash consistent snapshots for a set of volumes.
Behind the scenes, Kubernetes uses a label selector to group multiple PersistentVolumeClaims
for snapshotting.
A key aim is to allow you restore that set of snapshots to new volumes and
recover your workload based on a crash consistent recovery point.

This new feature is only supported for [CSI](https://kubernetes-csi.github.io/docs/) volume drivers.
--&gt;
&lt;p&gt;卷组快照在 Kubernetes 1.27 版本中作为 Alpha 特性被&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/"&gt;引入&lt;/a&gt;，
并在 Kubernetes 1.32 版本中移至 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/"&gt;Beta&lt;/a&gt; 阶段。
Kubernetes v1.34 的最近一次发布将该支持移至第二个 Beta 阶段。
对卷组快照的支持依赖于一组&lt;a href="https://kubernetes-csi.github.io/docs/group-snapshot-restore-feature.html#volume-group-snapshot-apis"&gt;用于组快照的扩展 API&lt;/a&gt;。
这些 API 允许用户为一组卷获取崩溃一致性快照。在后台，Kubernetes 根据标签选择器对多个
PersistentVolumeClaim 分组，并进行快照操作。关键目标是允许你将这组快照恢复到新卷上，
并基于崩溃一致性恢复点恢复工作负载。&lt;/p&gt;</description></item><item><title>Kubernetes v1.34：可变 CSI 节点可分配数进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/11/kubernetes-v1-34-mutable-csi-node-allocatable-count/</link><pubDate>Thu, 11 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/11/kubernetes-v1-34-mutable-csi-node-allocatable-count/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.34: Mutable CSI Node Allocatable Graduates to Beta"
date: 2025-09-11T10:30:00-08:00
slug: kubernetes-v1-34-mutable-csi-node-allocatable-count
author: Eddie Torres (Amazon Web Services)
--&gt;
&lt;!--
The [functionality for CSI drivers to update information about attachable volume count on the nodes](https://kep.k8s.io/4876), first introduced as Alpha in Kubernetes v1.33, has graduated to **Beta** in the Kubernetes v1.34 release! This marks a significant milestone in enhancing the accuracy of stateful pod scheduling by reducing failures due to outdated attachable volume capacity information.
--&gt;
&lt;p&gt;&lt;a href="https://kep.k8s.io/4876"&gt;CSI 驱动更新节点上可挂接卷数量信息的这一功能&lt;/a&gt;在 Kubernetes v1.33
中首次以 Alpha 引入，如今在 Kubernetes v1.34 中进阶为 &lt;strong&gt;Beta&lt;/strong&gt;！
这是提升有状态 Pod 调度准确性的重要里程碑，可减少因可挂接卷容量信息过时所导致的调度失败问题。&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: 使用 Init 容器定义应用环境变量</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/10/kubernetes-v1-34-env-files/</link><pubDate>Wed, 10 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/10/kubernetes-v1-34-env-files/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.34: Use An Init Container To Define App Environment Variables"
date: 2025-09-10T10:30:00-08:00
draft: true
slug: kubernetes-v1-34-env-files
author: &gt;
 HirazawaUi
--&gt;
&lt;!--
Kubernetes typically uses ConfigMaps and Secrets to set environment variables,
which introduces additional API calls and complexity.
For example, you need to separately manage the Pods of your workloads 
and their configurations, while ensuring orderly 
updates for both the configurations and the workload Pods.

Alternatively, you might be using a vendor-supplied container 
that requires environment variables (such as a license key or a one-time token),
but you don’t want to hard-code them or mount volumes just to get the job done.
--&gt;
&lt;p&gt;Kubernetes 通常使用 ConfigMap 和 Secret 来设置环境变量，
这会引入额外的 API 调用和复杂性。例如，你需要分别管理工作负载的 Pod 和它们的配置，
同时还要确保配置和工作负载 Pod 的有序更新。&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Kubernetes 中的 PSI 指标进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/04/kubernetes-v1-34-introducing-psi-metrics-beta/</link><pubDate>Thu, 04 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/09/04/kubernetes-v1-34-introducing-psi-metrics-beta/</guid><description>&lt;!--
layout: blog
title: "PSI Metrics for Kubernetes Graduates to Beta"
date: 2025-09-04T10:30:00-08:00
slug: kubernetes-v1-34-introducing-psi-metrics-beta
author: "Haowei Cai (Google)"
--&gt;
&lt;!--
As Kubernetes clusters grow in size and complexity, understanding the health and performance of individual nodes becomes increasingly critical. We are excited to announce that as of Kubernetes v1.34, **Pressure Stall Information (PSI) Metrics** has graduated to Beta.
--&gt;
&lt;p&gt;随着 Kubernetes 集群规模和复杂性的增长，了解各个节点的健康状况和性能变得越来越关键。
我们很高兴地宣布，从 Kubernetes v1.34 开始，&lt;strong&gt;压力停滞信息 (PSI) 指标&lt;/strong&gt;已升级到 Beta 版本。&lt;/p&gt;</description></item><item><title>Headlamp AI 助手简介</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/08/07/introducing-headlamp-ai-assistant/</link><pubDate>Thu, 07 Aug 2025 20:00:00 +0100</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/08/07/introducing-headlamp-ai-assistant/</guid><description>&lt;!--
layout: blog
title: "Introducing Headlamp AI Assistant"
date: 2025-08-07T20:00:00+01:00
slug: introducing-headlamp-ai-assistant
author: &gt;
 Joaquim Rocha (Microsoft)
canonicalUrl: "https://headlamp.dev/blog/2025/08/07/introducing-the-headlamp-ai-assistant"
--&gt;
&lt;!--
_This announcement originally [appeared](https://headlamp.dev/blog/2025/08/07/introducing-the-headlamp-ai-assistant) on the Headlamp blog._

To simplify Kubernetes management and troubleshooting, we're thrilled to
introduce [Headlamp AI Assistant](https://github.com/headlamp-k8s/plugins/tree/main/ai-assistant#readme): a powerful new plugin for Headlamp that helps
you understand and operate your Kubernetes clusters and applications with
greater clarity and ease.
--&gt;
&lt;p&gt;&lt;strong&gt;本文是 &lt;a href="https://headlamp.dev/blog/2025/08/07/introducing-the-headlamp-ai-assistant"&gt;Headlamp AI 助手介绍&lt;/a&gt;这篇博客的中文译稿。&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;为了简化 Kubernetes 的管理和故障排除，我们非常高兴地推出
&lt;a href="https://github.com/headlamp-k8s/plugins/tree/main/ai-assistant#readme"&gt;Headlamp AI 助手&lt;/a&gt;：
这是 Headlamp 的一个强大的新插件，可以帮助你更清晰、更轻松地理解和操作你的 Kubernetes 集群和应用程序。&lt;/p&gt;</description></item><item><title>Kubernetes v1.34 抢先一览</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/07/28/kubernetes-v1-34-sneak-peek/</link><pubDate>Mon, 28 Jul 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/07/28/kubernetes-v1-34-sneak-peek/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.34 Sneak Peek'
date: 2025-07-28
slug: kubernetes-v1-34-sneak-peek
author: &gt;
 Agustina Barbetta,
 Alejandro Josue Leon Bellido,
 Graziano Casto,
 Melony Qin,
 Dipesh Rawat
--&gt;
&lt;!--
Kubernetes v1.34 is coming at the end of August 2025. 
This release will not include any removal or deprecation, but it is packed with an impressive number of enhancements. 
Here are some of the features we are most excited about in this cycle! 

Please note that this information reflects the current state of v1.34 development and may change before release.
--&gt;
&lt;p&gt;Kubernetes v1.34 将于 2025 年 8 月底发布。
本次发版不会移除或弃用任何特性，但包含了数量惊人的增强特性。
以下列出一些本次发版最令人兴奋的特性！&lt;/p&gt;</description></item><item><title>云原生环境中的镜像兼容性</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/25/image-compatibility-in-cloud-native-environments/</link><pubDate>Wed, 25 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/25/image-compatibility-in-cloud-native-environments/</guid><description>&lt;!--
layout: blog
title: "Image Compatibility In Cloud Native Environments"
date: 2025-06-25
draft: false
slug: image-compatibility-in-cloud-native-environments
author: &gt;
 Chaoyi Huang (Huawei),
 Marcin Franczyk (Huawei),
 Vanessa Sochat (Lawrence Livermore National Laboratory)
--&gt;
&lt;!--
In industries where systems must run very reliably and meet strict performance criteria such as telecommunication, high-performance or AI computing, containerized applications often need specific operating system configuration or hardware presence.
It is common practice to require the use of specific versions of the kernel, its configuration, device drivers, or system components.
Despite the existence of the [Open Container Initiative (OCI)](https://opencontainers.org/), a governing community to define standards and specifications for container images, there has been a gap in expression of such compatibility requirements.
The need to address this issue has led to different proposals and, ultimately, an implementation in Kubernetes' [Node Feature Discovery (NFD)](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.html).
--&gt;
&lt;p&gt;在电信、高性能或 AI 计算等必须高度可靠且满足严格性能标准的行业中，容器化应用通常需要特定的操作系统配置或硬件支持。
通常的做法是要求使用特定版本的内核、其配置、设备驱动程序或系统组件。
尽管存在&lt;a href="https://opencontainers.org/"&gt;开放容器倡议 (OCI)&lt;/a&gt; 这样一个定义容器镜像标准和规范的治理社区，
但在表达这种兼容性需求方面仍存在空白。为了解决这一问题，业界提出了多个提案，并最终在 Kubernetes
的&lt;a href="https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.html"&gt;节点特性发现 (NFD)&lt;/a&gt; 项目中实现了相关功能。&lt;/p&gt;</description></item><item><title>Kubernetes Slack 变更公告</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/16/changes-to-kubernetes-slack/</link><pubDate>Mon, 16 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/16/changes-to-kubernetes-slack/</guid><description>&lt;!--
layout: blog
title: "Changes to Kubernetes Slack"
date: 2025-06-16
canonicalUrl: https://www.kubernetes.dev/blog/2025/06/16/changes-to-kubernetes-slack-2025/
slug: changes-to-kubernetes-slack
Author: &gt;
 [Josh Berkus](https://github.com/jberkus)
--&gt;
&lt;!--
**UPDATE**: We've received notice from Salesforce that our Slack workspace **WILL NOT BE DOWNGRADED** on June 20th. Stand by for more details, but for now, there is no urgency to back up private channels or direct messages.
--&gt;
&lt;p&gt;&lt;strong&gt;更新&lt;/strong&gt;：我们已收到 Salesforce 的通知，我们的 Slack 工作区在 6 月 20 日&lt;strong&gt;不会被降级&lt;/strong&gt;。
请等待更多细节更新，目前&lt;strong&gt;无需紧急备份&lt;/strong&gt;私有频道或私信。&lt;/p&gt;
&lt;!--
~~Kubernetes Slack will lose its special status and will be changing into a standard free Slack on June 20, 2025~~. Sometime later this year, our community may move to a new platform. If you are responsible for a channel or private channel, or a member of a User Group, you will need to take some actions as soon as you can.
--&gt;
&lt;p&gt;&lt;del&gt;Kubernetes Slack 将在 6 月 20 日失去原有的专属支持，并转变为标准免费版 Slack&lt;/del&gt;。
今年晚些时候，我们的社区可能会迁移到新平台。
如果你是频道或私有频道的负责人，又或是用户组的成员，你需要尽快采取一些行动。&lt;/p&gt;</description></item><item><title>通过自定义聚合增强 Kubernetes Event 管理</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/10/enhancing-kubernetes-event-management-custom-aggregation/</link><pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/10/enhancing-kubernetes-event-management-custom-aggregation/</guid><description>&lt;!--
layout: blog
title: "Enhancing Kubernetes Event Management with Custom Aggregation"
date: 2025-06-10
draft: false
slug: enhancing-kubernetes-event-management-custom-aggregation
Author: &gt;
 [Rez Moss](https://github.com/rezmoss)
--&gt;
&lt;!--
Kubernetes [Events](/docs/reference/kubernetes-api/cluster-resources/event-v1/) provide crucial insights into cluster operations, but as clusters grow, managing and analyzing these events becomes increasingly challenging. This blog post explores how to build custom event aggregation systems that help engineering teams better understand cluster behavior and troubleshoot issues more effectively.
--&gt;
&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/"&gt;Event&lt;/a&gt;
提供了集群操作的关键洞察信息，但随着集群的增长，管理和分析这些 Event 变得越来越具有挑战性。
这篇博客文章探讨了如何构建自定义 Event 聚合系统，以帮助工程团队更好地理解集群行为并更有效地解决问题。&lt;/p&gt;</description></item><item><title>介绍 Gateway API 推理扩展</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/05/introducing-gateway-api-inference-extension/</link><pubDate>Thu, 05 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/05/introducing-gateway-api-inference-extension/</guid><description>&lt;!--
layout: blog
title: "Introducing Gateway API Inference Extension"
date: 2025-06-05
slug: introducing-gateway-api-inference-extension
draft: false
author: &gt;
 Daneyon Hansen (Solo.io),
 Kaushik Mitra (Google),
 Jiaxin Shan (Bytedance),
 Kellen Swain (Google)
--&gt;
&lt;!--
Modern generative AI and large language model (LLM) services create unique traffic-routing challenges
on Kubernetes. Unlike typical short-lived, stateless web requests, LLM inference sessions are often
long-running, resource-intensive, and partially stateful. For example, a single GPU-backed model server
may keep multiple inference sessions active and maintain in-memory token caches.

Traditional load balancers focused on HTTP path or round-robin lack the specialized capabilities needed
for these workloads. They also don’t account for model identity or request criticality (e.g., interactive
chat vs. batch jobs). Organizations often patch together ad-hoc solutions, but a standardized approach
is missing.
--&gt;
&lt;p&gt;现代生成式 AI 和大语言模型（LLM）服务在 Kubernetes 上带来独特的流量路由挑战。
与典型的短生命期的无状态 Web 请求不同，LLM 推理会话通常是长时间运行的、资源密集型的，并且具有一定的状态性。
例如，单个由 GPU 支撑的模型服务器可能会保持多个推理会话处于活跃状态，并保留内存中的令牌缓存。&lt;/p&gt;</description></item><item><title>先启动边车：如何避免障碍</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/03/start-sidecar-first/</link><pubDate>Tue, 03 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/03/start-sidecar-first/</guid><description>&lt;!--
layout: blog
title: "Start Sidecar First: How To Avoid Snags"
date: 2025-06-03
draft: false
slug: start-sidecar-first
author: Agata Skorupka (The Scale Factory)
--&gt;
&lt;!--
From the [Kubernetes Multicontainer Pods: An Overview blog post](/blog/2025/04/22/multi-container-pods-overview/) you know what their job is, what are the main architectural patterns, and how they are implemented in Kubernetes. The main thing I’ll cover in this article is how to ensure that your sidecar containers start before the main app. It’s more complicated than you might think!
--&gt;
&lt;p&gt;从 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/22/multi-container-pods-overview/"&gt;&amp;quot;Kubernetes 多容器 Pod：概述&amp;quot;博客&lt;/a&gt;中，
你了解了多容器 Pod 的作用、主要的架构模式，以及它们在 Kubernetes 中是如何实现的。
本文主要介绍的是如何确保你的边车容器在主应用之前启动。这比你想象的要复杂得多！&lt;/p&gt;</description></item><item><title>Gateway API v1.3.0：流量复制、CORS、Gateway 合并和重试预算的改进</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/02/gateway-api-v1-3/</link><pubDate>Mon, 02 Jun 2025 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/02/gateway-api-v1-3/</guid><description>&lt;!--
layout: blog
title: "Gateway API v1.3.0: Advancements in Request Mirroring, CORS, Gateway Merging, and Retry Budgets"
date: 2025-06-02T09:00:00-08:00
draft: false
slug: gateway-api-v1-3
author: &gt;
 [Candace Holman](https://github.com/candita) (Red Hat)
--&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/zh-cn/blog/2025/06/02/gateway-api-v1-3/gateway-api-logo.svg" alt="Gateway API logo"&gt;&lt;/p&gt;
&lt;!--
Join us in the Kubernetes SIG Network community in celebrating the general
availability of [Gateway API](https://gateway-api.sigs.k8s.io/) v1.3.0! We are
also pleased to announce that there are already a number of conformant
implementations to try, made possible by postponing this blog
announcement. Version 1.3.0 of the API was released about a month ago on
April 24, 2025.
--&gt;
&lt;p&gt;加入 Kubernetes SIG Network 社区，共同庆祝 &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; v1.3.0 正式发布！
我们很高兴地宣布，通过推迟这篇博客的发布，现在已经有了多个符合规范的实现可供试用。
API 1.3.0 版本已于 2025 年 4 月 24 日发布。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：原地调整 Pod 资源特性升级为 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/16/kubernetes-v1-33-in-place-pod-resize-beta/</link><pubDate>Fri, 16 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/16/kubernetes-v1-33-in-place-pod-resize-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: In-Place Pod Resize Graduated to Beta"
slug: kubernetes-v1-33-in-place-pod-resize-beta
date: 2025-05-16T10:30:00-08:00
author: "Tim Allclair (Google)"
--&gt;
&lt;!--
On behalf of the Kubernetes project, I am excited to announce that the **in-place Pod resize** feature (also known as In-Place Pod Vertical Scaling), first introduced as alpha in Kubernetes v1.27, has graduated to **Beta** and will be enabled by default in the Kubernetes v1.33 release! This marks a significant milestone in making resource management for Kubernetes workloads more flexible and less disruptive.
--&gt;
&lt;p&gt;我谨代表 Kubernetes 项目，很高兴地宣布：&lt;strong&gt;原地 Pod 调整大小&lt;/strong&gt;特性（也称为原地 Pod 垂直缩放），
在 Kubernetes v1.27 中首次引入为 Alpha 版本，现在已升级为 &lt;strong&gt;Beta&lt;/strong&gt; 版本，
并将在 Kubernetes v1.33 发行版中默认启用！
这是使 Kubernetes 工作负载的资源管理更加灵活、对工作负载干扰更小的一个重要里程碑。&lt;/p&gt;</description></item><item><title>Kubernetes 1.33：Job 的 SuccessPolicy 进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/15/kubernetes-1-33-jobs-success-policy-goes-ga/</link><pubDate>Thu, 15 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/15/kubernetes-1-33-jobs-success-policy-goes-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.33: Job's SuccessPolicy Goes GA"
date: 2025-05-15T10:30:00-08:00
slug: kubernetes-1-33-jobs-success-policy-goes-ga
author: &gt;
 [Yuki Iwai](https://github.com/tenzen-y) (CyberAgent, Inc)
--&gt;
&lt;!--
On behalf of the Kubernetes project, I'm pleased to announce that Job _success policy_ has graduated to General Availability (GA) as part of the v1.33 release.
--&gt;
&lt;p&gt;我代表 Kubernetes 项目组，很高兴地宣布在 v1.33 版本中，Job 的&lt;strong&gt;成功策略&lt;/strong&gt;已进阶至 GA（正式发布）。&lt;/p&gt;
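&lt;p&gt;一个示意性的成功策略示例（假设索引 0 为领导者，仅领导者成功即可将整个 Job 判定为成功；数值仅作说明）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: leader-follower-job
spec:
  completionMode: Indexed   # 成功策略要求使用 Indexed 完成模式
  completions: 4
  parallelism: 4
  successPolicy:
    rules:
    # 当索引 0（领导者）成功时，整个 Job 即被标记为成功
    - succeededIndexes: "0"
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]
&lt;/code&gt;&lt;/pre&gt;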
&lt;!--
## About Job's Success Policy

In batch workloads, you might want to use leader-follower patterns like [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface),
in which the leader controls the execution, including the followers' lifecycle.
--&gt;
&lt;h2 id="about-jobs-success-policy"&gt;关于 Job 的成功策略&lt;/h2&gt;
&lt;p&gt;在批处理工作负载中，你可能希望使用类似
&lt;a href="https://zh.wikipedia.org/zh-cn/%E8%A8%8A%E6%81%AF%E5%82%B3%E9%81%9E%E4%BB%8B%E9%9D%A2"&gt;MPI（消息传递接口）&lt;/a&gt;
的领导者跟随者（leader-follower）模式，其中领导者控制执行过程，包括跟随者的生命周期。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：容器生命周期更新</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/14/kubernetes-v1-33-updates-to-container-lifecycle/</link><pubDate>Wed, 14 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/14/kubernetes-v1-33-updates-to-container-lifecycle/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Updates to Container Lifecycle"
date: 2025-05-14T10:30:00-08:00
slug: kubernetes-v1-33-updates-to-container-lifecycle
author: &gt;
 Sreeram Venkitesh (DigitalOcean)
--&gt;
&lt;!--
Kubernetes v1.33 introduces a few updates to the lifecycle of containers. The Sleep action for container lifecycle hooks now supports a zero sleep duration (feature enabled by default).
There is also alpha support for customizing the stop signal sent to containers when they are being terminated.
--&gt;
&lt;p&gt;Kubernetes v1.33 引入了对容器生命周期的一些更新。
容器生命周期回调的 Sleep 动作现在支持零睡眠时长（特性默认启用）。
同时还为定制发送给终止中的容器的停止信号提供了 Alpha 级别支持。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：Job 逐索引的回退限制进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/13/kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga/</link><pubDate>Tue, 13 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/13/kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Job's Backoff Limit Per Index Goes GA"
date: 2025-05-13T10:30:00-08:00
slug: kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga
author: &gt;
 [Michał Woźniak](https://github.com/mimowo) (Google)
--&gt;
&lt;!--
In Kubernetes v1.33, the _Backoff Limit Per Index_ feature reaches general
availability (GA). This blog describes the Backoff Limit Per Index feature and
its benefits.
--&gt;
&lt;p&gt;在 Kubernetes v1.33 中，&lt;strong&gt;逐索引的回退限制&lt;/strong&gt;特性进阶至 GA（正式发布）。本文介绍此特性及其优势。&lt;/p&gt;
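&lt;p&gt;下面是一个示意性的清单片段，展示如何使用 &lt;code&gt;backoffLimitPerIndex&lt;/code&gt;
为每个索引单独设置失败容忍次数（其中的名称与镜像均为假设，仅作说明）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: per-index-demo      # 假设的名称
spec:
  completionMode: Indexed   # 该特性仅适用于 Indexed Job
  completions: 8
  parallelism: 8
  backoffLimitPerIndex: 1   # 每个索引最多容忍 1 次失败
  maxFailedIndexes: 3       # 失败索引超过 3 个时终止整个 Job
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example/worker:latest   # 假设的镜像
&lt;/code&gt;&lt;/pre&gt;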
&lt;!--
## About backoff limit per index

When you run workloads on Kubernetes, you must consider scenarios where Pod
failures can affect the completion of your workloads. Ideally, your workload
should tolerate transient failures and continue running.

To achieve failure tolerance in a Kubernetes Job, you can set the
`spec.backoffLimit` field. This field specifies the total number of tolerated
failures.
--&gt;
&lt;h2 id="about-backoff-limit-per-index"&gt;关于逐索引的回退限制&lt;/h2&gt;
&lt;p&gt;当你在 Kubernetes 上运行工作负载时，必须考虑 Pod 失效可能影响工作负载完成的场景。
理想情况下，你的工作负载应该能够容忍短暂的失效并继续运行。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：镜像拉取策略终于按你的预期工作了！</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/12/kubernetes-v1-33-ensure-secret-pulled-images-alpha/</link><pubDate>Mon, 12 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/12/kubernetes-v1-33-ensure-secret-pulled-images-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Image Pull Policy the way you always thought it worked!"
date: 2025-05-12T10:30:00-08:00
slug: kubernetes-v1-33-ensure-secret-pulled-images-alpha
author: &gt;
 [Ben Petersen](https://github.com/benjaminapetersen) (Microsoft),
 [Stanislav Láznička](https://github.com/stlaz) (Microsoft)
--&gt;
&lt;!--
## Image Pull Policy the way you always thought it worked!

Some things in Kubernetes are surprising, and the way `imagePullPolicy` behaves might
be one of them. Given Kubernetes is all about running pods, it may be peculiar
to learn that there has been a caveat to restricting pod access to authenticated images for
over 10 years in the form of [issue 18787](https://github.com/kubernetes/kubernetes/issues/18787)!
It is an exciting release when you can resolve a ten-year-old issue.
--&gt;
&lt;h2 id="镜像拉取策略终于按你的预期工作了"&gt;镜像拉取策略终于按你的预期工作了！&lt;/h2&gt;
&lt;p&gt;Kubernetes 中有些东西让人感到奇怪，&lt;code&gt;imagePullPolicy&lt;/code&gt; 的行为就是其中之一。
Kubernetes 作为一个专注于运行 Pod 的平台，居然在限制 Pod 访问经认证的镜像方面，存在一个长达十余年的问题，
详见 &lt;a href="https://github.com/kubernetes/kubernetes/issues/18787"&gt;Issue 18787&lt;/a&gt;！
v1.33 解决了这个十年前的老问题，这真是一个有纪念意义的版本。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：流式 List 响应</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/09/kubernetes-v1-33-streaming-list-responses/</link><pubDate>Fri, 09 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/09/kubernetes-v1-33-streaming-list-responses/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Streaming List responses"
date: 2025-05-09T10:30:00-08:00
slug: kubernetes-v1-33-streaming-list-responses
author: &gt;
 Marek Siarkowicz (Google),
 Wei Fu (Microsoft)
--&gt;
&lt;!--
Managing Kubernetes cluster stability becomes increasingly critical as your infrastructure grows. One of the most challenging aspects of operating large-scale clusters has been handling List requests that fetch substantial datasets - a common operation that could unexpectedly impact your cluster's stability.

Today, the Kubernetes community is excited to announce a significant architectural improvement: streaming encoding for List responses.
--&gt;
&lt;p&gt;随着基础设施的增长，管理 Kubernetes 集群的稳定性变得愈发重要。
在大规模集群的运维中，最具挑战性的操作之一就是处理获取大量数据集的 List 请求。
List 请求是一种常见的操作，却可能意外影响集群的稳定性。&lt;/p&gt;</description></item><item><title>Kubernetes 1.33：卷填充器进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/08/kubernetes-v1-33-volume-populators-ga/</link><pubDate>Thu, 08 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/08/kubernetes-v1-33-volume-populators-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.33: Volume Populators Graduate to GA"
date: 2025-05-08T10:30:00-08:00
slug: kubernetes-v1-33-volume-populators-ga
author: &gt;
 Danna Wang (Google),
 Sunny Song (Google)
--&gt;
&lt;!--
Kubernetes _volume populators_ are now generally available (GA)! The `AnyVolumeDataSource` feature
gate is treated as always enabled for Kubernetes v1.33, which means that users can specify any appropriate
[custom resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources)
as the data source of a PersistentVolumeClaim (PVC).

An example of how to use dataSourceRef in PVC:
--&gt;
&lt;p&gt;Kubernetes 的&lt;strong&gt;卷填充器&lt;/strong&gt;现已进阶至 GA（正式发布）！
&lt;code&gt;AnyVolumeDataSource&lt;/code&gt; 特性门控在 Kubernetes v1.33 中设为始终启用，
这意味着用户可以将任何合适的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources"&gt;自定义资源&lt;/a&gt;作为
PersistentVolumeClaim（PVC）的数据源。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：防止无序删除时 PersistentVolume 泄漏特性进阶到 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/05/kubernetes-v1-33-prevent-persistentvolume-leaks-when-deleting-out-of-order-graduate-to-ga/</link><pubDate>Mon, 05 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/05/kubernetes-v1-33-prevent-persistentvolume-leaks-when-deleting-out-of-order-graduate-to-ga/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.33: Prevent PersistentVolume Leaks When Deleting out of Order graduates to GA'
date: 2025-05-05T10:30:00-08:00
slug: kubernetes-v1-33-prevent-persistentvolume-leaks-when-deleting-out-of-order-graduate-to-ga
author: &gt;
 Deepak Kinni (Broadcom)
--&gt;
&lt;!--
I am thrilled to announce that the feature to prevent
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) (or PVs for short)
leaks when deleting out of order has graduated to General Availability (GA) in
Kubernetes v1.33! This improvement, initially introduced as a beta
feature in Kubernetes v1.31, ensures that your storage resources are properly
reclaimed, preventing unwanted leaks.
--&gt;
&lt;p&gt;我很高兴地宣布，当无序删除时防止
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt;（简称 PV）
泄漏的特性已经在 Kubernetes v1.33 中进阶为正式版（GA）！这项改进最初在
Kubernetes v1.31 中作为 Beta 特性引入，
确保你的存储资源能够被正确回收，防止不必要的泄漏。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：可变的 CSI 节点可分配数</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/02/kubernetes-1-33-mutable-csi-node-allocatable-count/</link><pubDate>Fri, 02 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/05/02/kubernetes-1-33-mutable-csi-node-allocatable-count/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Mutable CSI Node Allocatable Count"
date: 2025-05-02T10:30:00-08:00
slug: kubernetes-1-33-mutable-csi-node-allocatable-count
author: Eddie Torres (Amazon Web Services)
--&gt;
&lt;!--
Scheduling stateful applications reliably depends heavily on accurate information about resource availability on nodes.
Kubernetes v1.33 introduces an alpha feature called *mutable CSI node allocatable count*, allowing Container Storage Interface (CSI) drivers to dynamically update the reported maximum number of volumes that a node can handle.
This capability significantly enhances the accuracy of pod scheduling decisions and reduces scheduling failures caused by outdated volume capacity information.
--&gt;
&lt;p&gt;可靠地调度有状态应用，高度依赖节点上资源可用性的准确信息。
Kubernetes v1.33 引入一个名为&lt;strong&gt;可变的 CSI 节点可分配数&lt;/strong&gt;的 Alpha 特性，允许
CSI（容器存储接口）驱动动态更新节点所能处理的最大卷数量。
这一能力显著提升 Pod 调度决策的准确性，并减少因卷容量信息过时而导致的调度失败。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：存储动态制备模式下的节点存储容量评分（Alpha 版）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/30/kubernetes-v1-33-storage-capacity-scoring-feature/</link><pubDate>Wed, 30 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/30/kubernetes-v1-33-storage-capacity-scoring-feature/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Storage Capacity Scoring of Nodes for Dynamic Provisioning (alpha)"
date: 2025-04-30T10:30:00-08:00
slug: kubernetes-v1-33-storage-capacity-scoring-feature
author: &gt;
 Yuma Ogami (Cybozu)
--&gt;
&lt;!--
Kubernetes v1.33 introduces a new alpha feature called `StorageCapacityScoring`. This feature adds a scoring method for pod scheduling
with [the topology-aware volume provisioning](/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/).
This feature eases to schedule pods on nodes with either the most or least available storage capacity.
--&gt;
&lt;p&gt;Kubernetes v1.33 引入了一个名为 &lt;code&gt;StorageCapacityScoring&lt;/code&gt; 的新 Alpha 特性。
此特性为使用&lt;a href="https://andygol-k8s.netlify.app/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/"&gt;拓扑感知卷制备&lt;/a&gt;的
Pod 调度添加了一种评分方法。
此特性便于将 Pod 调度到可用存储容量最多或最少的节点上。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：镜像卷进阶至 Beta！</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/29/kubernetes-v1-33-image-volume-beta/</link><pubDate>Tue, 29 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/29/kubernetes-v1-33-image-volume-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: Image Volumes graduate to beta!"
date: 2025-04-29T10:30:00-08:00
slug: kubernetes-v1-33-image-volume-beta
author: Sascha Grunert (Red Hat)
--&gt;
&lt;!--
[Image Volumes](/blog/2024/08/16/kubernetes-1-31-image-volume-source) were
introduced as an Alpha feature with the Kubernetes v1.31 release as part of
[KEP-4639](https://github.com/kubernetes/enhancements/issues/4639). In Kubernetes v1.33, this feature graduates to **beta**.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/kubernetes-1-31-image-volume-source"&gt;镜像卷&lt;/a&gt;作为
Alpha 特性首次引入 Kubernetes v1.31 版本，并作为
&lt;a href="https://github.com/kubernetes/enhancements/issues/4639"&gt;KEP-4639&lt;/a&gt;
的一部分发布。在 Kubernetes v1.33 中，此特性进阶至 &lt;strong&gt;Beta&lt;/strong&gt;。&lt;/p&gt;
&lt;!--
Please note that the feature is still _disabled_ by default, because not all
[container runtimes](/docs/setup/production-environment/container-runtimes) have
full support for it. [CRI-O](https://cri-o.io) supports the initial feature since version v1.31 and
will add support for Image Volumes as beta in v1.33.
[containerd merged](https://github.com/containerd/containerd/pull/10579) support
for the alpha feature which will be part of the v2.1.0 release and is working on
beta support as part of [PR #11578](https://github.com/containerd/containerd/pull/11578).
--&gt;
&lt;p&gt;请注意，此特性目前仍默认&lt;strong&gt;禁用&lt;/strong&gt;，
因为并非所有的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes"&gt;容器运行时&lt;/a&gt;都完全支持此特性。
&lt;a href="https://cri-o.io"&gt;CRI-O&lt;/a&gt; 自 v1.31 起就支持此初始特性，并将在 v1.33 中添加对镜像卷的 Beta 支持。
&lt;a href="https://github.com/containerd/containerd/pull/10579"&gt;containerd 已合并&lt;/a&gt;对 Alpha 特性的支持，
此特性将包含在 containerd v2.1.0 版本中，并正通过
&lt;a href="https://github.com/containerd/containerd/pull/11578"&gt;PR #11578&lt;/a&gt; 实现对 Beta 的支持。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33：HorizontalPodAutoscaler 可配置容差</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/28/kubernetes-v1-33-hpa-configurable-tolerance/</link><pubDate>Mon, 28 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/28/kubernetes-v1-33-hpa-configurable-tolerance/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.33: HorizontalPodAutoscaler Configurable Tolerance"
slug: kubernetes-v1-33-hpa-configurable-tolerance
math: true # for formulae
date: 2025-04-28T10:30:00-08:00
author: "Jean-Marc François (Google)"
--&gt;
&lt;!--
This post describes _configurable tolerance for horizontal Pod autoscaling_,
a new alpha feature first available in Kubernetes 1.33.
--&gt;
&lt;p&gt;这篇文章描述了&lt;strong&gt;水平 Pod 自动扩缩的可配置容差&lt;/strong&gt;，
这是在 Kubernetes 1.33 中首次出现的一个新的 Alpha 特性。&lt;/p&gt;
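&lt;p&gt;下面是一个示意性的 HPA 片段，展示此 Alpha 特性在 &lt;code&gt;behavior&lt;/code&gt;
中按扩缩方向配置容差的方式（需启用相应的特性门控，其中的名称与数值均为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa        # 假设的名称
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app      # 假设的扩缩目标
  minReplicas: 2
  maxReplicas: 20
  behavior:
    scaleUp:
      tolerance: 0      # 扩容不设容差，快速响应负载上升
    scaleDown:
      tolerance: 0.05   # 缩容保留 5% 的容差，避免抖动
&lt;/code&gt;&lt;/pre&gt;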
&lt;!--
## What is it?

[Horizontal Pod Autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale/)
is a well-known Kubernetes feature that allows your workload to
automatically resize by adding or removing replicas based on resource
utilization.
--&gt;
&lt;h2 id="它是什么"&gt;它是什么？&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;水平 Pod 自动扩缩&lt;/a&gt;
是 Kubernetes 中一个众所周知的特性，它允许你的工作负载根据资源利用率自动增减副本数量。&lt;/p&gt;</description></item><item><title>Kubernetes 多容器 Pod：概述</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/22/multi-container-pods-overview/</link><pubDate>Tue, 22 Apr 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/22/multi-container-pods-overview/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Multicontainer Pods: An Overview"
date: 2025-04-22
draft: false
slug: multi-container-pods-overview
author: Agata Skorupka (The Scale Factory)
--&gt;
&lt;!--
As cloud-native architectures continue to evolve, Kubernetes has become the go-to platform for deploying complex, distributed systems. One of the most powerful yet nuanced design patterns in this ecosystem is the sidecar pattern—a technique that allows developers to extend application functionality without diving deep into source code.
--&gt;
&lt;p&gt;随着云原生架构的不断演进，Kubernetes 已成为部署复杂分布式系统的首选平台。
在这个生态系统中，最强大却又微妙的设计模式之一是边车（Sidecar）
模式 —— 一种允许开发者扩展应用功能而不深入源代码的技术。&lt;/p&gt;</description></item><item><title>kube-scheduler-simulator 介绍</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/07/introducing-kube-scheduler-simulator/</link><pubDate>Mon, 07 Apr 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/04/07/introducing-kube-scheduler-simulator/</guid><description>&lt;!--
layout: blog
title: "Introducing kube-scheduler-simulator"
date: 2025-04-07
draft: false 
slug: introducing-kube-scheduler-simulator
author: Kensei Nakada (Tetrate)
--&gt;
&lt;!--
The Kubernetes Scheduler is a crucial control plane component that determines which node a Pod will run on. 
Thus, anyone utilizing Kubernetes relies on a scheduler.

[kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator) is a _simulator_ for the Kubernetes scheduler, that started as a [Google Summer of Code 2021](https://summerofcode.withgoogle.com/) project developed by me (Kensei Nakada) and later received a lot of contributions.
This tool allows users to closely examine the scheduler’s behavior and decisions. 
--&gt;
&lt;p&gt;Kubernetes 调度器（Scheduler）是一个关键的控制平面组件，负责决定 Pod 将运行在哪个节点上。
因此，任何使用 Kubernetes 的人都依赖于调度器。&lt;/p&gt;</description></item><item><title>Kubernetes v1.33 预览</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/26/kubernetes-v1-33-upcoming-changes/</link><pubDate>Wed, 26 Mar 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/26/kubernetes-v1-33-upcoming-changes/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.33 sneak peek'
date: 2025-03-26T10:30:00-08:00
slug: kubernetes-v1-33-upcoming-changes
author: &gt;
 Agustina Barbetta,
 Aakanksha Bhende,
 Udi Hofesh,
 Ryota Sawada,
 Sneha Yadav
--&gt;
&lt;!--
As the release of Kubernetes v1.33 approaches, the Kubernetes project continues to evolve. Features may be deprecated, removed, or replaced to improve the overall health of the project. This blog post outlines some planned changes for the v1.33 release, which the release team believes you should be aware of to ensure the continued smooth operation of your Kubernetes environment and to keep you up-to-date with the latest developments. The information below is based on the current status of the v1.33 release and is subject to change before the final release date.
--&gt;
&lt;p&gt;随着 Kubernetes v1.33 版本的发布临近，Kubernetes 项目仍在不断发展。
为了提升项目的整体健康状况，某些特性可能会被弃用、移除或替换。
这篇博客文章概述了 v1.33 版本的一些计划变更，发布团队认为你有必要了解这些内容，
以确保 Kubernetes 环境的持续平稳运行，并让你掌握最新的发展动态。
以下信息基于 v1.33 版本的当前状态，在最终发布日期之前可能会有所变化。&lt;/p&gt;</description></item><item><title>ingress-nginx CVE-2025-1974 须知</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/24/ingress-nginx-cve-2025-1974/</link><pubDate>Mon, 24 Mar 2025 12:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/24/ingress-nginx-cve-2025-1974/</guid><description>&lt;!--
layout: blog
title: "Ingress-nginx CVE-2025-1974: What You Need to Know"
date: 2025-03-24T12:00:00-08:00
slug: ingress-nginx-CVE-2025-1974
author: &gt;
 Tabitha Sable (Kubernetes Security Response Committee)
--&gt;
&lt;!--
Today, the ingress-nginx maintainers have [released patches for a batch of critical vulnerabilities](https://github.com/kubernetes/ingress-nginx/releases) that could make it easy for attackers to take over your Kubernetes cluster. If you are among the over 40% of Kubernetes administrators using [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), you should take action immediately to protect your users and data.
--&gt;
&lt;p&gt;今天，ingress-nginx 项目的维护者们&lt;a href="https://github.com/kubernetes/ingress-nginx/releases"&gt;发布了一批关键漏洞的修复补丁&lt;/a&gt;，
这些漏洞可能让攻击者轻易接管你的 Kubernetes 集群。目前有 40% 以上的 Kubernetes 管理员正在使用
&lt;a href="https://github.com/kubernetes/ingress-nginx/"&gt;ingress-nginx&lt;/a&gt;，
如果你是其中之一，请立即采取行动，保护你的用户和数据。&lt;/p&gt;</description></item><item><title>JobSet 介绍</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/23/introducing-jobset/</link><pubDate>Sun, 23 Mar 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/23/introducing-jobset/</guid><description>&lt;!--
layout: blog
title: "Introducing JobSet"
date: 2025-03-23
slug: introducing-jobset

author: &gt;
 Daniel Vega-Myhre (Google),
 Abdullah Gharaibeh (Google),
 Kevin Hannon (Red Hat)
--&gt;
&lt;!--
In this article, we introduce [JobSet](https://jobset.sigs.k8s.io/), an open source API for
representing distributed jobs. The goal of JobSet is to provide a unified API for distributed ML
training and HPC workloads on Kubernetes.
--&gt;
&lt;p&gt;在本文中，我们介绍 &lt;a href="https://jobset.sigs.k8s.io/"&gt;JobSet&lt;/a&gt;，这是一个用于表示分布式任务的开源 API。
JobSet 的目标是为 Kubernetes 上的分布式机器学习训练和高性能计算（HPC）工作负载提供统一的 API。&lt;/p&gt;
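&lt;p&gt;下面是一个示意性的 JobSet 清单片段，展示其通过 &lt;code&gt;replicatedJobs&lt;/code&gt;
组合多组 Job 的基本形态（API 版本请以 JobSet 项目文档为准，名称与镜像均为假设）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: train-demo   # 假设的名称
spec:
  replicatedJobs:
    - name: workers
      replicas: 2    # 两组结构相同的 Job
      template:
        spec:
          completions: 4
          parallelism: 4
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: trainer
                  image: registry.example/trainer:latest   # 假设的镜像
&lt;/code&gt;&lt;/pre&gt;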
&lt;!--
## Why JobSet?

The Kubernetes community’s recent enhancements to the batch ecosystem on Kubernetes has attracted ML
engineers who have found it to be a natural fit for the requirements of running distributed training
workloads. 

Large ML models (particularly LLMs) which cannot fit into the memory of the GPU or TPU chips on a
single host are often distributed across tens of thousands of accelerator chips, which in turn may
span thousands of hosts.
--&gt;
&lt;h2 id="why-jobset"&gt;为什么需要 JobSet？&lt;/h2&gt;
&lt;p&gt;Kubernetes 社区近期对 Kubernetes 批处理生态系统的增强，吸引了许多机器学习工程师，
他们发现这非常符合运行分布式训练工作负载的需求。&lt;/p&gt;</description></item><item><title>聚焦 SIG Apps</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/12/sig-apps-spotlight-2025/</link><pubDate>Wed, 12 Mar 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/03/12/sig-apps-spotlight-2025/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Apps"
slug: sig-apps-spotlight-2025
canonicalUrl: https://www.kubernetes.dev/blog/2025/03/12/sig-apps-spotlight-2025
date: 2025-03-12
author: "Sandipan Panda (DevZero)"
--&gt;
&lt;!--
In our ongoing SIG Spotlight series, we dive into the heart of the Kubernetes project by talking to
the leaders of its various Special Interest Groups (SIGs). This time, we focus on 
**[SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps#apps-special-interest-group)**,
the group responsible for everything related to developing, deploying, and operating applications on
Kubernetes. [Sandipan Panda](https://www.linkedin.com/in/sandipanpanda)
([DevZero](https://www.devzero.io/)) had the opportunity to interview [Maciej
Szulik](https://github.com/soltysh) ([Defense Unicorns](https://defenseunicorns.com/)) and [Janet
Kuo](https://github.com/janetkuo) ([Google](https://about.google/)), the chairs and tech leads of
SIG Apps. They shared their experiences, challenges, and visions for the future of application
management within the Kubernetes ecosystem.
--&gt;
&lt;p&gt;在我们正在进行的 SIG 聚焦系列中，我们通过与 Kubernetes 项目各个特别兴趣小组（SIG）的领导者对话，
深入探讨 Kubernetes 项目的核心。这一次，我们聚焦于
&lt;strong&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps#apps-special-interest-group"&gt;SIG Apps&lt;/a&gt;&lt;/strong&gt;，
这个小组负责 Kubernetes 上与应用程序开发、部署和操作相关的所有内容。
&lt;a href="https://www.linkedin.com/in/sandipanpanda"&gt;Sandipan Panda&lt;/a&gt;（[DevZero](&lt;a href="https://www.devzero.io/"&gt;https://www.devzero.io/&lt;/a&gt;））
有机会采访了 SIG Apps 的主席和技术负责人
&lt;a href="https://github.com/soltysh"&gt;Maciej Szulik&lt;/a&gt;（&lt;a href="https://defenseunicorns.com/"&gt;Defense Unicorns&lt;/a&gt;）
以及 &lt;a href="https://github.com/janetkuo"&gt;Janet Kuo&lt;/a&gt;（&lt;a href="https://about.google/"&gt;Google&lt;/a&gt;）。
他们分享了在 Kubernetes 生态系统中关于应用管理的经验、挑战以及未来愿景。&lt;/p&gt;</description></item><item><title>kube-proxy 的 NFTables 模式</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/02/28/nftables-kube-proxy/</link><pubDate>Fri, 28 Feb 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/02/28/nftables-kube-proxy/</guid><description>&lt;!--
layout: blog
title: "NFTables mode for kube-proxy"
date: 2025-02-28
slug: nftables-kube-proxy
author: &gt;
 Dan Winship (Red Hat)
--&gt;
&lt;!--
A new nftables mode for kube-proxy was introduced as an alpha feature
in Kubernetes 1.29. Currently in beta, it is expected to be GA as of
1.33. The new mode fixes long-standing performance problems with the
iptables mode and all users running on systems with reasonably-recent
kernels are encouraged to try it out. (For compatibility reasons, even
once nftables becomes GA, iptables will still be the _default_.)
--&gt;
&lt;p&gt;Kubernetes 1.29 引入了一种新的 Alpha 特性：kube-proxy 的 nftables 模式。
目前该模式处于 Beta 阶段，预计将在 1.33 版本中进阶至正式发布（GA）。
新模式解决了 iptables 模式长期存在的性能问题，建议所有运行在较新内核版本系统上的用户尝试使用。
出于兼容性原因，即使 nftables 成为 GA 功能，iptables 仍将是&lt;strong&gt;默认&lt;/strong&gt;模式。&lt;/p&gt;</description></item><item><title>云控制器管理器（Cloud Controller Manager）'鸡与蛋'的问题</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/02/14/cloud-controller-manager-chicken-egg-problem/</link><pubDate>Fri, 14 Feb 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/02/14/cloud-controller-manager-chicken-egg-problem/</guid><description>&lt;!--
layout: blog
title: "The Cloud Controller Manager Chicken and Egg Problem"
date: 2025-02-14
slug: cloud-controller-manager-chicken-egg-problem
author: &gt;
 Antonio Ojea,
 Michael McCune
--&gt;
&lt;!--
Kubernetes 1.31
[completed the largest migration in Kubernetes history][migration-blog], removing the in-tree
cloud provider. While the component migration is now done, this leaves some additional
complexity for users and installer projects (for example, kOps or Cluster API) . We will go
over those additional steps and failure points and make recommendations for cluster owners.
This migration was complex and some logic had to be extracted from the core components,
building four new subsystems.
--&gt;
&lt;p&gt;Kubernetes 1.31
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2024/05/20/completing-cloud-provider-migration/"&gt;完成了 Kubernetes 历史上最大的迁移&lt;/a&gt;，移除了树内云驱动（in-tree cloud provider）。
虽然组件迁移已经完成，但这为用户和安装项目（例如 kOps 或 Cluster API）带来了一些额外的复杂性。
我们将回顾这些额外的步骤和可能的故障点，并为集群所有者提供改进建议。
此次迁移非常复杂，必须从核心组件中提取部分逻辑，构建四个新的子系统。&lt;/p&gt;</description></item><item><title>聚焦 SIG Architecture: Enhancements</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2025/01/21/sig-architecture-enhancements/</link><pubDate>Tue, 21 Jan 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2025/01/21/sig-architecture-enhancements/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Architecture: Enhancements"
slug: sig-architecture-enhancements
canonicalUrl: https://www.kubernetes.dev/blog/2025/01/21/sig-architecture-enhancements
date: 2025-01-21
author: "Frederico Muñoz (SAS Institute)"
--&gt;
&lt;!--
_This is the fourth interview of a SIG Architecture Spotlight series that will cover the different
subprojects, and we will be covering [SIG Architecture:
Enhancements](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#enhancements)._

In this SIG Architecture spotlight we talked with [Kirsten
Garrison](https://github.com/kikisdeliveryservice), lead of the Enhancements subproject.
--&gt;
&lt;p&gt;&lt;strong&gt;这是 SIG Architecture 聚焦系列的第四篇采访。该系列涵盖其各个子项目，本文介绍的是
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#enhancements"&gt;SIG Architecture: Enhancements&lt;/a&gt;。&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;在本次 SIG Architecture 专题采访中，我们访谈了 Enhancements
子项目的负责人 &lt;a href="https://github.com/kikisdeliveryservice"&gt;Kirsten Garrison&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>使用 API 流式传输来增强 Kubernetes API 服务器效率</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/17/kube-apiserver-api-streaming/</link><pubDate>Tue, 17 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/17/kube-apiserver-api-streaming/</guid><description>&lt;!--
layout: blog
title: 'Enhancing Kubernetes API Server Efficiency with API Streaming'
date: 2024-12-17
slug: kube-apiserver-api-streaming
author: &gt;
 Stefan Schimanski (Upbound),
 Wojciech Tyczynski (Google),
 Lukasz Szaszkiewicz (Red Hat)
--&gt;
&lt;!--
Managing Kubernetes clusters efficiently is critical, especially as their size is growing. 
A significant challenge with large clusters is the memory overhead caused by **list** requests.
--&gt;
&lt;p&gt;高效管理 Kubernetes 集群至关重要，特别是在集群规模不断增长的情况下更是如此。
大型集群面临的一个重大挑战是 &lt;strong&gt;list&lt;/strong&gt; 请求所造成的内存开销。&lt;/p&gt;
&lt;!--
In the existing implementation, the kube-apiserver processes **list** requests by assembling the entire response in-memory before transmitting any data to the client. 
But what if the response body is substantial, say hundreds of megabytes? Additionally, imagine a scenario where multiple **list** requests flood in simultaneously, perhaps after a brief network outage. 
While [API Priority and Fairness](/docs/concepts/cluster-administration/flow-control) has proven to reasonably protect kube-apiserver from CPU overload, its impact is visibly smaller for memory protection. 
This can be explained by the differing nature of resource consumption by a single API request - the CPU usage at any given time is capped by a constant, whereas memory, being uncompressible, can grow proportionally with the number of processed objects and is unbounded.
This situation poses a genuine risk, potentially overwhelming and crashing any kube-apiserver within seconds due to out-of-memory (OOM) conditions. To better visualize the issue, let's consider the below graph.
--&gt;
&lt;p&gt;在现有的实现中，kube-apiserver 在处理 &lt;strong&gt;list&lt;/strong&gt; 请求时，先在内存中组装整个响应，再将所有数据传输给客户端。
但如果响应体非常庞大，比如数百兆字节呢？另外再想象这样一种场景，有多个 &lt;strong&gt;list&lt;/strong&gt; 请求同时涌入，可能是在短暂的网络中断后涌入。
虽然 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/cluster-administration/flow-control"&gt;API 优先级和公平性&lt;/a&gt;已经证明可以合理地保护
kube-apiserver 免受 CPU 过载，但其对内存保护的影响却明显较弱。这可以解释为各个 API 请求的资源消耗性质有所不同。
在任何给定时间，CPU 使用量都会受到某个常量的限制，而内存由于不可压缩，会随着处理对象数量的增加而成比例增长，且没有上限。
这种情况会带来真正的风险：kube-apiserver 可能会在几秒钟内因内存不足（OOM）而不堪重负并崩溃。
为了更直观地查验这个问题，我们看看下面的图表。&lt;/p&gt;</description></item><item><title>Kubernetes v1.32 增加了新的 CPU Manager 静态策略选项用于严格 CPU 预留</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/16/cpumanager-strict-cpu-reservation/</link><pubDate>Mon, 16 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/16/cpumanager-strict-cpu-reservation/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.32 Adds A New CPU Manager Static Policy Option For Strict CPU Reservation'
date: 2024-12-16
slug: cpumanager-strict-cpu-reservation
author: &gt;
 [Jing Zhang](https://github.com/jingczhang) (Nokia)
--&gt;
&lt;!--
In Kubernetes v1.32, after years of community discussion, we are excited to introduce a
`strict-cpu-reservation` option for the [CPU Manager static policy](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options).
This feature is currently in alpha, with the associated policy hidden by default. You can only use the
policy if you explicitly enable the alpha behavior in your cluster.
--&gt;
&lt;p&gt;在 Kubernetes v1.32 中，经过社区多年的讨论，我们很高兴地引入了
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options"&gt;CPU Manager 静态策略&lt;/a&gt;的
&lt;code&gt;strict-cpu-reservation&lt;/code&gt; 选项。此特性当前处于 Alpha 阶段，默认情况下关联的策略是隐藏的。
只有在你的集群中明确启用了此 Alpha 行为后，才能使用此策略。&lt;/p&gt;</description></item><item><title>Kubernetes v1.32：内存管理器进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/13/memory-manager-goes-ga/</link><pubDate>Fri, 13 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/13/memory-manager-goes-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.32: Memory Manager Goes GA"
date: 2024-12-13
slug: memory-manager-goes-ga
author: &gt;
 [Talor Itzhak](https://github.com/Tal-or) (Red Hat)
--&gt;
&lt;!--
With Kubernetes 1.32, the memory manager has officially graduated to General Availability (GA),
marking a significant milestone in the journey toward efficient and predictable memory allocation for containerized applications.
Since Kubernetes v1.22, where it graduated to beta, the memory manager has proved itself reliable, stable and a good complementary feature for the
[CPU Manager](/docs/tasks/administer-cluster/cpu-management-policies/).
--&gt;
&lt;p&gt;随着 Kubernetes 1.32 的发布，内存管理器已进阶至正式发布（GA），
这标志着在为容器化应用实现高效和可预测的内存分配的旅程中迈出了重要的一步。
内存管理器自 Kubernetes v1.22 进阶至 Beta 后，其可靠性、稳定性已得到证实，
是 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/"&gt;CPU 管理器&lt;/a&gt;的一个良好补充特性。&lt;/p&gt;</description></item><item><title>Kubernetes v1.32：QueueingHint 为优化 Pod 调度带来了新的可能</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/12/scheduler-queueinghint/</link><pubDate>Thu, 12 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/12/12/scheduler-queueinghint/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.32: QueueingHint Brings a New Possibility to Optimize Pod Scheduling"
date: 2024-12-12
slug: scheduler-queueinghint
author: &gt;
 [Kensei Nakada](https://github.com/sanposhiho) (Tetrate.io)
--&gt;
&lt;!--
The Kubernetes [scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) is the core
component that selects the nodes on which new Pods run. The scheduler processes
these new Pods **one by one**. Therefore, the larger your clusters, the more important
the throughput of the scheduler becomes.

Over the years, Kubernetes SIG Scheduling has improved the throughput
of the scheduler in multiple enhancements. This blog post describes a major improvement to the
scheduler in Kubernetes v1.32: a 
[scheduling context element](/docs/concepts/scheduling-eviction/scheduling-framework/#extension-points)
named _QueueingHint_. This page provides background knowledge of the scheduler and explains how
QueueingHint improves scheduling throughput.
--&gt;
&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/"&gt;调度器&lt;/a&gt;是为新
Pod 选择运行节点的核心组件，调度器会&lt;strong&gt;逐一&lt;/strong&gt;处理这些新 Pod。
因此，集群规模越大，调度器的吞吐量就越重要。&lt;/p&gt;</description></item><item><title>Kubernetes v1.32 预览</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/11/08/kubernetes-1-32-upcoming-changes/</link><pubDate>Fri, 08 Nov 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/11/08/kubernetes-1-32-upcoming-changes/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.32 sneak peek'
date: 2024-11-08
slug: kubernetes-1-32-upcoming-changes
author: &gt;
 Matteo Bianchi,
 Edith Puclla,
 William Rizzo,
 Ryota Sawada,
 Rashan Smith
--&gt;
&lt;!--
As we get closer to the release date for Kubernetes v1.32, the project develops and matures.
Features may be deprecated, removed, or replaced with better ones for the project's overall health. 

This blog outlines some of the planned changes for the Kubernetes v1.32 release,
that the release team feels you should be aware of, for the continued maintenance
of your Kubernetes environment and keeping up to date with the latest changes.
Information listed below is based on the current status of the v1.32 release
and may change before the actual release date. 
--&gt;
&lt;p&gt;随着 Kubernetes v1.32 发布日期的临近，Kubernetes 项目继续发展和成熟。
在这个过程中，某些特性可能会被弃用、移除或被更好的特性取代，以确保项目的整体健康与发展。&lt;/p&gt;</description></item><item><title>关于日本的 Kubernetes 上游培训的特别报道</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/10/28/k8s-upstream-training-japan-spotlight/</link><pubDate>Mon, 28 Oct 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/10/28/k8s-upstream-training-japan-spotlight/</guid><description>&lt;!--
layout: blog
title: "Spotlight on Kubernetes Upstream Training in Japan"
slug: k8s-upstream-training-japan-spotlight
date: 2024-10-28
canonicalUrl: https://www.k8s.dev/blog/2024/10/28/k8s-upstream-training-japan-spotlight/
author: &gt;
 [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba) / 
 Organizing team of Kubernetes Upstream Training in Japan
--&gt;
&lt;!--
We are organizers of [Kubernetes Upstream Training in Japan](https://github.com/kubernetes-sigs/contributor-playground/tree/master/japan).
Our team is composed of members who actively contribute to Kubernetes, including individuals who hold roles such as member, reviewer, approver, and chair.
--&gt;
&lt;p&gt;我们是&lt;a href="https://github.com/kubernetes-sigs/contributor-playground/tree/master/japan"&gt;日本 Kubernetes 上游培训&lt;/a&gt;的组织者。
我们的团队由积极向 Kubernetes 做贡献的成员组成，他们在社区中担任了 Member、Reviewer、Approver 和 Chair 等角色。&lt;/p&gt;</description></item><item><title>公布 2024 年指导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/10/02/steering-committee-results-2024/</link><pubDate>Wed, 02 Oct 2024 15:10:00 -0500</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/10/02/steering-committee-results-2024/</guid><description>&lt;!--
layout: blog
title: "Announcing the 2024 Steering Committee Election Results"
slug: steering-committee-results-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/10/02/steering-committee-results-2024
date: 2024-10-02T15:10:00-05:00
author: &gt;
 Bridget Kromhout
--&gt;
&lt;!--
The [2024 Steering Committee Election](https://github.com/kubernetes/community/tree/master/elections/steering/2024) is now complete. The Kubernetes Steering Committee consists of 7 seats, 3 of which were up for election in 2024. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.

This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).

Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2024"&gt;2024 年指导委员会选举&lt;/a&gt;现已完成。
Kubernetes 指导委员会由 7 个席位组成，其中 3 个席位于 2024 年进行选举。
新任委员会成员的任期为 2 年，所有成员均由 Kubernetes 社区选举产生。&lt;/p&gt;</description></item><item><title>SIG Scheduling 访谈</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/09/24/sig-scheduling-spotlight-2024/</link><pubDate>Tue, 24 Sep 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/09/24/sig-scheduling-spotlight-2024/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Scheduling"
slug: sig-scheduling-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/09/24/sig-scheduling-spotlight-2024
date: 2024-09-24
author: "Arvind Parekh"
--&gt;
&lt;!--
In this SIG Scheduling spotlight we talked with [Kensei Nakada](https://github.com/sanposhiho/), an
approver in SIG Scheduling.

## Introductions

**Arvind:** **Hello, thank you for the opportunity to learn more about SIG Scheduling! Would you
like to introduce yourself and tell us a bit about your role, and how you got involved with
Kubernetes?**
--&gt;
&lt;p&gt;在本次 SIG Scheduling 的访谈中，我们与 &lt;a href="https://github.com/sanposhiho/"&gt;Kensei Nakada&lt;/a&gt;
进行了交流，他是 SIG Scheduling 的一名 Approver。&lt;/p&gt;</description></item><item><title>Kubernetes v1.31：kubeadm v1beta4</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</link><pubDate>Fri, 23 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.31: kubeadm v1beta4'
date: 2024-08-23
slug: kubernetes-1-31-kubeadm-v1beta4
author: &gt;
 Paco Xu (DaoCloud)
--&gt;
&lt;!--
As part of the Kubernetes v1.31 release, [`kubeadm`](/docs/reference/setup-tools/kubeadm/) is
adopting a new ([v1beta4](/docs/reference/config-api/kubeadm-config.v1beta4/)) version of
its configuration file format. Configuration in the previous v1beta3 format is now formally
deprecated, which means it's supported but you should migrate to v1beta4 and stop using
the deprecated format.
Support for v1beta3 configuration will be removed after a minimum of 3 Kubernetes minor releases.
--&gt;
&lt;p&gt;作为 Kubernetes v1.31 发布的一部分，&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt;
采用了全新版本（&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/"&gt;v1beta4&lt;/a&gt;）的配置文件格式。
之前 v1beta3 格式的配置现已正式弃用，这意味着尽管之前的格式仍然受支持，但你应迁移到 v1beta4 并停止使用已弃用的格式。
对 v1beta3 配置的支持将在至少 3 次 Kubernetes 次要版本发布后被移除。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：kubectl debug 中的自定义模板化配置特性已进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/</link><pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta"
date: 2024-08-22
slug: kubernetes-1-31-custom-profiling-kubectl-debug
author: &gt;
 Arda Güçlü (Red Hat)
--&gt;
&lt;!--
There are many ways of troubleshooting the pods and nodes in the cluster. However, `kubectl debug` is one of the easiest, highly used and most prominent ones. It
provides a set of static profiles and each profile serves for a different kind of role. For instance, from the network administrator's point of view, 
debugging the node should be as easy as this:
--&gt;
&lt;p&gt;有很多方法可以对集群中的 Pod 和节点进行故障排查，而 &lt;code&gt;kubectl debug&lt;/code&gt; 是最简单、使用最广泛、最突出的方法之一。
它提供了一组静态的调试配置（profile），每种配置服务于不同类型的角色。
例如，从网络管理员的视角来看，调试节点应该像这样简单：&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：细粒度的 SupplementalGroups 控制</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/22/fine-grained-supplementalgroups-control/</link><pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/22/fine-grained-supplementalgroups-control/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.31: Fine-grained SupplementalGroups control'
date: 2024-08-22
slug: fine-grained-supplementalgroups-control
author: &gt;
 Shingo Omura (Woven By Toyota)
--&gt;
&lt;!--
This blog discusses a new feature in Kubernetes 1.31 to improve the handling of supplementary groups in containers within Pods.
--&gt;
&lt;p&gt;本博客讨论了 Kubernetes 1.31 中的一项新特性，目的是改进对 Pod 中容器内附加组（supplementary groups）的处理。&lt;/p&gt;
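&lt;p&gt;下面的示意性清单对应正文描述的安全上下文设置（&lt;code&gt;runAsUser=1000&lt;/code&gt;、&lt;code&gt;runAsGroup=3000&lt;/code&gt;、&lt;code&gt;supplementalGroups=4000&lt;/code&gt;；Pod 名称与镜像仅为占位示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: implicit-groups    # 仅为示例名称
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
  containers:
  - name: ctr
    image: example.com/demo:latest    # 占位镜像
    command: ["sh", "-c", "sleep 1h"]
&lt;/code&gt;&lt;/pre&gt;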
&lt;!--
## Motivation: Implicit group memberships defined in `/etc/group` in the container image

Although this behavior may not be popular with many Kubernetes cluster users/admins, kubernetes, by default, _merges_ group information from the Pod with information defined in `/etc/group` in the container image.

Let's see an example, below Pod specifies `runAsUser=1000`, `runAsGroup=3000` and `supplementalGroups=4000` in the Pod's security context.
--&gt;
&lt;h2 id="动机-容器镜像中-etc-group-中定义的隐式组成员关系"&gt;动机：容器镜像中 &lt;code&gt;/etc/group&lt;/code&gt; 中定义的隐式组成员关系&lt;/h2&gt;
&lt;p&gt;尽管这种行为可能并不受许多 Kubernetes 集群用户/管理员的欢迎，
但 Kubernetes 默认情况下会将 Pod 中的组信息与容器镜像中 &lt;code&gt;/etc/group&lt;/code&gt; 中定义的信息进行&lt;strong&gt;合并&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>Kubernetes v1.31：全新的 Kubernetes CPUManager 静态策略：跨核分发 CPU</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/</link><pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.31: New Kubernetes CPUManager Static Policy: Distribute CPUs Across Cores'
date: 2024-08-22
slug: cpumanager-static-policy-distributed-cpu-across-cores
author: &gt;
 [Jiaxin Shan](https://github.com/Jeffwan) (Bytedance)
--&gt;
&lt;!--
In Kubernetes v1.31, we are excited to introduce a significant enhancement to CPU management capabilities: the `distribute-cpus-across-cores` option for the [CPUManager static policy](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options). This feature is currently in alpha and hidden by default, marking a strategic shift aimed at optimizing CPU utilization and improving system performance across multi-core processors.
--&gt;
&lt;p&gt;在 Kubernetes v1.31 中，我们很高兴引入了对 CPU 管理能力的重大增强：针对
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options"&gt;CPUManager 静态策略&lt;/a&gt;的
&lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; 选项。此特性目前处于 Alpha 阶段，
默认被隐藏，标志着旨在优化 CPU 利用率和改善多核处理器系统性能的战略转变。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: 节点 Cgroup 驱动程序的自动配置 (beta)</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/21/cri-cgroup-driver-lookup-now-beta/</link><pubDate>Wed, 21 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/21/cri-cgroup-driver-lookup-now-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.31: Autoconfiguration For Node Cgroup Driver (beta)"
date: 2024-08-21
slug: cri-cgroup-driver-lookup-now-beta
author: &gt;
 Peter Hunt (Red Hat)
--&gt;
&lt;!--
Historically, configuring the correct cgroup driver has been a pain point for users running new
Kubernetes clusters. On Linux systems, there are two different cgroup drivers:
`cgroupfs` and `systemd`. In the past, both the [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
and CRI implementation (like CRI-O or containerd) needed to be configured to use
the same cgroup driver, or else the kubelet would exit with an error. This was a
source of headaches for many cluster admins. However, there is light at the end of the tunnel!
--&gt;
&lt;p&gt;一直以来，为新运行的 Kubernetes 集群配置正确的 cgroup 驱动程序是用户的一个痛点。
在 Linux 系统中，存在两种不同的 cgroup 驱动程序：&lt;code&gt;cgroupfs&lt;/code&gt; 和 &lt;code&gt;systemd&lt;/code&gt;。
过去，&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt; 和 CRI
实现（如 CRI-O 或 containerd）需要配置为使用相同的 cgroup 驱动程序，否则 kubelet 会报错并退出。
这让许多集群管理员头疼不已。不过，现在曙光乍现！&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：流式传输从 SPDY 转换为 WebSocket</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/20/websockets-transition/</link><pubDate>Tue, 20 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/20/websockets-transition/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.31: Streaming Transitions from SPDY to WebSockets'
date: 2024-08-20
slug: websockets-transition
author: &gt;
 [Sean Sullivan](https://github.com/seans3) (Google),
 [Shannon Kularathna](https://github.com/shannonxtreme) (Google)
--&gt;
&lt;!--
In Kubernetes 1.31, by default kubectl now uses the WebSocket protocol
instead of SPDY for streaming.

This post describes what these changes mean for you and why these streaming APIs
matter.
--&gt;
&lt;p&gt;在 Kubernetes 1.31 中，kubectl 现在默认使用 WebSocket 协议而不是 SPDY 进行流式传输。&lt;/p&gt;
&lt;p&gt;这篇文章介绍了这些变化对你意味着什么以及这些流式传输 API 的重要性。&lt;/p&gt;
&lt;!--
## Streaming APIs in Kubernetes

In Kubernetes, specific endpoints that are exposed as an HTTP or RESTful
interface are upgraded to streaming connections, which require a streaming
protocol. Unlike HTTP, which is a request-response protocol, a streaming
protocol provides a persistent connection that's bi-directional, low-latency,
and lets you interact in real-time. Streaming protocols support reading and
writing data between your client and the server, in both directions, over the
same connection. This type of connection is useful, for example, when you create
a shell in a running container from your local workstation and run commands in
the container.
--&gt;
&lt;h2 id="kubernetes-中的流式-api"&gt;Kubernetes 中的流式 API&lt;/h2&gt;
&lt;p&gt;在 Kubernetes 中，某些以 HTTP 或 RESTful 接口公开的端点会被升级为流式连接，因而需要使用流式协议。
与 HTTP 这种请求-响应协议不同，流式协议提供了一种持久的双向连接，具有低延迟的特点，并允许实时交互。
流式协议支持在客户端与服务器之间通过同一个连接进行双向的数据读写。
这种类型的连接非常有用，例如，当你从本地工作站在某个运行中的容器内创建 shell 并在该容器中运行命令时。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：针对 Job 的 Pod 失效策略进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/</link><pubDate>Mon, 19 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.31: Pod Failure Policy for Jobs Goes GA"
date: 2024-08-19
slug: kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga
author: &gt;
 [Michał Woźniak](https://github.com/mimowo) (Google),
 [Shannon Kularathna](https://github.com/shannonxtreme) (Google)
--&gt;
&lt;!--
This post describes _Pod failure policy_, which graduates to stable in Kubernetes
1.31, and how to use it in your Jobs.
--&gt;
&lt;p&gt;这篇博文阐述在 Kubernetes 1.31 中进阶至 Stable 的 &lt;strong&gt;Pod 失效策略&lt;/strong&gt;，还介绍如何在你的 Job 中使用此策略。&lt;/p&gt;
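&lt;p&gt;下面是一个使用 Pod 失效策略的示意性 Job 清单（字段结构取自 &lt;code&gt;batch/v1&lt;/code&gt; Job API 中的 &lt;code&gt;podFailurePolicy&lt;/code&gt;；镜像与退出码取值仅为占位示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-pod-failure-policy
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: example.com/my-app:latest    # 占位镜像
  backoffLimit: 6
  podFailurePolicy:
    rules:
    # 退出码表明软件缺陷时立即让 Job 失败，不再重试
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    # 因节点干扰（如驱逐）导致的失效不计入 backoffLimit
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
&lt;/code&gt;&lt;/pre&gt;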
&lt;!--
## About Pod failure policy

When you run workloads on Kubernetes, Pods might fail for a variety of reasons.
Ideally, workloads like Jobs should be able to ignore transient, retriable
failures and continue running to completion.
--&gt;
&lt;h2 id="关于-pod-失效策略"&gt;关于 Pod 失效策略&lt;/h2&gt;
&lt;p&gt;当你在 Kubernetes 上运行工作负载时，Pod 可能因各种原因而失效。
理想情况下，像 Job 这样的工作负载应该能够忽略瞬时的、可重试的失效，并继续运行直到完成。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：podAffinity 中的 matchLabelKeys 进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/matchlabelkeys-podaffinity/</link><pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/matchlabelkeys-podaffinity/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.31: MatchLabelKeys in PodAffinity graduates to beta'
date: 2024-08-16
slug: matchlabelkeys-podaffinity
author: &gt;
 Kensei Nakada (Tetrate)
--&gt;
&lt;!--
Kubernetes 1.29 introduced new fields `matchLabelKeys` and `mismatchLabelKeys` in `podAffinity` and `podAntiAffinity`.

In Kubernetes 1.31, this feature moves to beta and the corresponding feature gate (`MatchLabelKeysInPodAffinity`) gets enabled by default.
--&gt;
&lt;p&gt;Kubernetes 1.29 在 &lt;code&gt;podAffinity&lt;/code&gt; 和 &lt;code&gt;podAntiAffinity&lt;/code&gt; 中引入了新的字段 &lt;code&gt;matchLabelKeys&lt;/code&gt; 和 &lt;code&gt;mismatchLabelKeys&lt;/code&gt;。&lt;/p&gt;
&lt;p&gt;在 Kubernetes 1.31 中，此特性进阶至 Beta，并且相应的特性门控（&lt;code&gt;MatchLabelKeysInPodAffinity&lt;/code&gt;）默认启用。&lt;/p&gt;
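&lt;p&gt;一个示意性的用法片段（假设工作负载带有 &lt;code&gt;app: database&lt;/code&gt; 标签；&lt;code&gt;pod-template-hash&lt;/code&gt; 是 Deployment 为每个版本的 ReplicaSet 自动添加的标签）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - database
      topologyKey: topology.kubernetes.io/zone
      # 调度器会将 matchLabelKeys 中列出的标签键连同新 Pod 上的对应取值
      # 合并进 labelSelector，从而只匹配同一版本（同一 pod-template-hash）的 Pod
      matchLabelKeys:
      - pod-template-hash
&lt;/code&gt;&lt;/pre&gt;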
&lt;!--
## `matchLabelKeys` - Enhanced scheduling for versatile rolling updates

During a workload's (e.g., Deployment) rolling update, a cluster may have Pods from multiple versions at the same time.
However, the scheduler cannot distinguish between old and new versions based on the `labelSelector` specified in `podAffinity` or `podAntiAffinity`. As a result, it will co-locate or disperse Pods regardless of their versions.
--&gt;
&lt;h2 id="matchlabelkeys-为多样化滚动更新增强了调度"&gt;&lt;code&gt;matchLabelKeys&lt;/code&gt; - 为多样化滚动更新增强了调度&lt;/h2&gt;
&lt;p&gt;在工作负载（例如 Deployment）的滚动更新期间，集群中可能同时存在多个版本的 Pod。
然而，调度器无法基于 &lt;code&gt;podAffinity&lt;/code&gt; 或 &lt;code&gt;podAntiAffinity&lt;/code&gt; 中指定的 &lt;code&gt;labelSelector&lt;/code&gt; 区分新旧版本。
结果，调度器将并置或分散调度 Pod，不会考虑这些 Pod 的版本。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：防止无序删除时 PersistentVolume 泄漏</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order/</link><pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.31: Prevent PersistentVolume Leaks When Deleting out of Order'
date: 2024-08-16
slug: kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order
author: &gt;
 Deepak Kinni (Broadcom)
--&gt;
&lt;!--
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) (or PVs for short) are
associated with [Reclaim Policy](/docs/concepts/storage/persistent-volumes/#reclaim-policy).
The reclaim policy is used to determine the actions that need to be taken by the storage
backend on deletion of the PVC Bound to a PV.
When the reclaim policy is `Delete`, the expectation is that the storage backend
releases the storage resource allocated for the PV. In essence, the reclaim
policy needs to be honored on PV deletion.

With the recent Kubernetes v1.31 release, a beta feature lets you configure your
cluster to behave that way and honor the configured reclaim policy.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt;（简称 PV）
具有与之关联的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#reclaim-policy"&gt;回收策略&lt;/a&gt;。
回收策略用于确定在删除绑定到 PV 的 PVC 时存储后端需要采取的操作。当回收策略为 &lt;code&gt;Delete&lt;/code&gt; 时，
期望存储后端释放为 PV 所分配的存储资源。实际上，在 PV 被删除时就需要执行此回收策略。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：基于 OCI 工件的只读卷 (Alpha)</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/kubernetes-1-31-image-volume-source/</link><pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/16/kubernetes-1-31-image-volume-source/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.31: Read Only Volumes Based On OCI Artifacts (alpha)"
date: 2024-08-16
slug: kubernetes-1-31-image-volume-source
author: Sascha Grunert
--&gt;
&lt;!--
The Kubernetes community is moving towards fulfilling more Artificial
Intelligence (AI) and Machine Learning (ML) use cases in the future. While the
project has been designed to fulfill microservice architectures in the past,
it’s now time to listen to the end users and introduce features which have a
stronger focus on AI/ML.
--&gt;
&lt;p&gt;Kubernetes 社区正朝着在未来满足更多人工智能（AI）和机器学习（ML）使用场景的方向发展。
虽然此项目在过去设计为满足微服务架构，但现在是时候听听最终用户的声音并引入更侧重于 AI/ML 的特性了。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：通过 VolumeAttributesClass 修改卷进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</link><pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta"
date: 2024-08-15
slug: kubernetes-1-31-volume-attributes-class
author: &gt;
 Sunny Song (Google),
 Matthew Cary (Google)
--&gt;
&lt;!--
Volumes in Kubernetes have been described by two attributes: their storage class, and
their capacity. The storage class is an immutable property of the volume, while the
capacity can be changed dynamically with [volume
resize](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).

This complicates vertical scaling of workloads with volumes. While cloud providers and
storage vendors often offer volumes which allow specifying IO quality of service
(Performance) parameters like IOPS or throughput and tuning them as workloads operate,
Kubernetes has no API which allows changing them.
--&gt;
&lt;p&gt;在 Kubernetes 中，卷由两个属性描述：存储类和容量。存储类是卷的不可变属性，
而容量可以通过&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims"&gt;卷调整大小&lt;/a&gt;进行动态变更。&lt;/p&gt;</description></item><item><title>Kubernetes v1.31：通过基于缓存的一致性读加速集群性能</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/15/consistent-read-from-cache-beta/</link><pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/15/consistent-read-from-cache-beta/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache'
date: 2024-08-15
slug: consistent-read-from-cache-beta
author: &gt;
 Marek Siarkowicz (Google)
--&gt;

&lt;!--
Kubernetes is renowned for its robust orchestration of containerized applications,
but as clusters grow, the demands on the control plane can become a bottleneck.
A key challenge has been ensuring strongly consistent reads from the etcd datastore,
requiring resource-intensive quorum reads.
--&gt;
&lt;p&gt;Kubernetes 以其强大的容器化应用编排能力而闻名，但随着集群规模扩大，
对控制平面的需求可能成为性能瓶颈。其中一个主要挑战是确保从
etcd 数据存储进行强一致性读，这通常需要资源密集型仲裁读取操作。&lt;/p&gt;</description></item><item><title>Kubernetes 1.31：对 cgroup v1 的支持转为维护模式</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/</link><pubDate>Wed, 14 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode"
date: 2024-08-14
slug: kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode
author: Harshal Patil
--&gt;
&lt;!--
As Kubernetes continues to evolve and adapt to the changing landscape of
container orchestration, the community has decided to move cgroup v1 support
into [maintenance mode](#what-does-maintenance-mode-mean) in v1.31.
This shift aligns with the broader industry's move towards cgroup v2, offering
improved functionalities: including scalability and a more consistent interface.
Before we dive into the consequences for Kubernetes, let's take a step back to
understand what cgroups are and their significance in Linux.
--&gt;
&lt;p&gt;随着 Kubernetes 不断发展，为了适应容器编排领域的变化，社区决定在 v1.31 中将对 cgroup v1
的支持转为&lt;a href="#what-does-maintenance-mode-mean"&gt;维护模式&lt;/a&gt;。
这一转变与行业更广泛地向 cgroup v2 的迁移保持一致，后者的功能更强，
包括可扩展性和更加一致的接口。在我们深入探讨对 Kubernetes 的影响之前，
先回顾一下 cgroup 的概念及其在 Linux 中的重要意义。&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: PersistentVolume 的最后阶段转换时间进阶到 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/14/last-phase-transition-time-ga/</link><pubDate>Wed, 14 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/14/last-phase-transition-time-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA"
date: 2024-08-14
slug: last-phase-transition-time-ga
author: &gt;
 Roman Bednář (Red Hat)
--&gt;
&lt;!--
Announcing the graduation to General Availability (GA) of the PersistentVolume `lastTransitionTime` status
field, in Kubernetes v1.31!

The Kubernetes SIG Storage team is excited to announce that the "PersistentVolumeLastPhaseTransitionTime" feature, introduced
as an alpha in Kubernetes v1.28, has now reached GA status and is officially part of the Kubernetes v1.31 release. This enhancement
helps Kubernetes users understand when a [PersistentVolume](/docs/concepts/storage/persistent-volumes/) transitions between 
different phases, allowing for more efficient and informed resource management.
--&gt;
&lt;p&gt;现在宣布 PersistentVolume 的 &lt;code&gt;lastTransitionTime&lt;/code&gt; 状态字段在 Kubernetes v1.31
版本进阶至正式发布（GA）！&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: Elli</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/13/kubernetes-v1-31-release/</link><pubDate>Tue, 13 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/13/kubernetes-v1-31-release/</guid><description>&lt;!--
---
layout: blog
title: 'Kubernetes v1.31: Elli'
date: 2024-08-13
slug: kubernetes-v1-31-release
author: &gt;
 [Kubernetes v1.31 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.31/release-team.md)
---
--&gt;
&lt;!--
**Editors:** Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith

Announcing the release of Kubernetes v1.31: Elli!

Similar to previous releases, the release of Kubernetes v1.31 introduces new
stable, beta, and alpha features. 
The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 45 enhancements.
Of those enhancements, 11 have graduated to Stable, 22 are entering Beta, 
and 12 have graduated to Alpha.
--&gt;
&lt;p&gt;&lt;strong&gt;编辑:&lt;/strong&gt; Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith&lt;/p&gt;</description></item><item><title>向 Client-Go 引入特性门控：增强灵活性和控制力</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/12/feature-gates-in-client-go/</link><pubDate>Mon, 12 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/12/feature-gates-in-client-go/</guid><description>&lt;!--
layout: blog
title: 'Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control'
date: 2024-08-12
slug: feature-gates-in-client-go
author: &gt;
 Ben Luddy (Red Hat),
 Lukasz Szaszkiewicz (Red Hat)
--&gt;
&lt;!--
Kubernetes components use on-off switches called _feature gates_ to manage the risk of adding a new feature.
The feature gate mechanism is what enables incremental graduation of a feature through the stages Alpha, Beta, and GA.
--&gt;
&lt;p&gt;Kubernetes 组件使用称为“特性门控（Feature Gates）”的开关来管理添加新特性的风险，
特性门控机制使特性能够通过 Alpha、Beta 和 GA 阶段逐步升级。&lt;/p&gt;</description></item><item><title>聚焦 SIG API Machinery</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/07/sig-api-machinery-spotlight-2024/</link><pubDate>Wed, 07 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/08/07/sig-api-machinery-spotlight-2024/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG API Machinery"
slug: sig-api-machinery-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/08/07/sig-api-machinery-spotlight-2024
date: 2024-08-07
author: "Frederico Muñoz (SAS Institute)"
--&gt;
&lt;!--
We recently talked with [Federico Bongiovanni](https://github.com/fedebongio) (Google) and [David
Eads](https://github.com/deads2k) (Red Hat), Chairs of SIG API Machinery, to know a bit more about
this Kubernetes Special Interest Group.
--&gt;
&lt;p&gt;我们最近与 SIG API Machinery 的主席
&lt;a href="https://github.com/fedebongio"&gt;Federico Bongiovanni&lt;/a&gt;（Google）和
&lt;a href="https://github.com/deads2k"&gt;David Eads&lt;/a&gt;（Red Hat）进行了访谈，
了解一些有关这个 Kubernetes 特别兴趣小组的信息。&lt;/p&gt;
&lt;!--
## Introductions

**Frederico (FSM): Hello, and thank your for your time. To start with, could you tell us about
yourselves and how you got involved in Kubernetes?**
--&gt;
&lt;h2 id="introductions"&gt;介绍&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)：你好，感谢你抽时间参与访谈。首先，你能做个自我介绍以及你是如何参与到 Kubernetes 的？&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Kubernetes v1.31 中的移除和主要变更</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</link><pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes Removals and Major Changes In v1.31'
date: 2024-07-19
slug: kubernetes-1-31-upcoming-changes
author: &gt;
 Abigail McCarthy,
 Edith Puclla,
 Matteo Bianchi,
 Rashan Smith,
 Yigit Demirbas 
--&gt;
&lt;!--
As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project's overall health. 
This article outlines some planned changes for the Kubernetes v1.31 release that the release team feels you should be aware of for the continued maintenance of your Kubernetes environment. 
The information listed below is based on the current status of the v1.31 release. 
It may change before the actual release date. 
--&gt;
&lt;p&gt;随着 Kubernetes 的发展和成熟，为了项目的整体健康，某些特性可能会被弃用、删除或替换为更好的特性。
本文阐述了 Kubernetes v1.31 版本的一些更改计划，发行团队认为你应当了解这些更改，
以便持续维护 Kubernetes 环境。
下面列出的信息基于 v1.31 版本的当前状态；这些状态可能会在实际发布日期之前发生变化。&lt;/p&gt;</description></item><item><title>Kubernetes 的十年</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/</link><pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/</guid><description>&lt;!--
layout: blog
title: "10 Years of Kubernetes"
date: 2024-06-06
slug: 10-years-of-kubernetes
author: &gt;
 [Bob Killen](https://github.com/mrbobbytables) (CNCF),
 [Chris Short](https://github.com/chris-short) (AWS),
 [Frederico Muñoz](https://github.com/fsmunoz) (SAS),
 [Kaslin Fields](https://github.com/kaslin) (Google),
 [Tim Bannister](https://github.com/sftim) (The Scale Factory),
 and every contributor across the globe
--&gt;
&lt;!--
![KCSEU 2024 group photo](kcseu2024.jpg)

Ten (10) years ago, on June 6th, 2014, the
[first commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56)
of Kubernetes was pushed to GitHub. That first commit with 250 files and 47,501 lines of go, bash
and markdown kicked off the project we have today. Who could have predicted that 10 years later,
Kubernetes would grow to become one of the largest Open Source projects to date with over
[88,000 contributors](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1) from
more than [8,000 companies](https://www.cncf.io/reports/kubernetes-project-journey-report/), across
44 countries.
--&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/kcseu2024.jpg" alt="KCSEU 2024 团体照片"&gt;&lt;/p&gt;</description></item><item><title>完成 Kubernetes 史上最大规模迁移</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/05/20/completing-cloud-provider-migration/</link><pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/05/20/completing-cloud-provider-migration/</guid><description>&lt;!--
layout: blog
title: 'Completing the largest migration in Kubernetes history'
date: 2024-05-20
slug: completing-cloud-provider-migration
author: &gt;
 Andrew Sy Kim (Google),
 Michelle Au (Google),
 Walter Fender (Google),
 Michael McCune (Red Hat)
--&gt;
&lt;!--
Since as early as Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of removing built-in cloud provider integrations ([KEP-2395](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md)).
While these integrations were instrumental in Kubernetes' early development and growth, their removal was driven by two key factors:
the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish
Kubernetes as a truly vendor-neutral platform.
--&gt;
&lt;p&gt;早自 Kubernetes v1.7 起，Kubernetes 项目就开始追求移除内置云驱动集成这一宏伟目标
（&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md"&gt;KEP-2395&lt;/a&gt;）。
虽然这些集成对于 Kubernetes 的早期发展和增长发挥了重要作用，但它们的移除是由两个关键因素驱动的：
为各云驱动维护数百万行 Go 代码的原生支持所带来的日趋增长的复杂度，以及将 Kubernetes 打造为真正的供应商中立平台的愿景。&lt;/p&gt;</description></item><item><title>Gateway API v1.1：服务网格、GRPCRoute 和更多变化</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/05/09/gateway-api-v1-1/</link><pubDate>Thu, 09 May 2024 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/05/09/gateway-api-v1-1/</guid><description>&lt;!--
layout: blog
title: "Gateway API v1.1: Service mesh, GRPCRoute, and a whole lot more"
date: 2024-05-09T09:00:00-08:00
slug: gateway-api-v1-1
author: &gt;
 [Richard Belleville](https://github.com/gnossen) (Google),
 [Frank Budinsky](https://github.com/frankbu) (IBM),
 [Arko Dasgupta](https://github.com/arkodg) (Tetrate),
 [Flynn](https://github.com/kflynn) (Buoyant),
 [Candace Holman](https://github.com/candita) (Red Hat),
 [John Howard](https://github.com/howardjohn) (Solo.io),
 [Christine Kim](https://github.com/xtineskim) (Isovalent),
 [Mattia Lavacca](https://github.com/mlavacca) (Kong),
 [Keith Mattix](https://github.com/keithmattix) (Microsoft),
 [Mike Morris](https://github.com/mikemorris) (Microsoft),
 [Rob Scott](https://github.com/robscott) (Google),
 [Grant Spence](https://github.com/gcs278) (Red Hat),
 [Shane Utt](https://github.com/shaneutt) (Kong),
 [Gina Yeh](https://github.com/ginayeh) (Google),
 and other review and release note contributors
--&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/zh-cn/blog/2024/05/09/gateway-api-v1-1/gateway-api-logo.svg" alt="Gateway API logo"&gt;&lt;/p&gt;</description></item><item><title>Kubernetes 1.30：防止未经授权的卷模式转换进阶到 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</link><pubDate>Tue, 30 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA"
date: 2024-04-30
slug: prevent-unauthorized-volume-mode-conversion-ga
author: &gt;
 Raunak Pradip Shah (Mirantis)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Raunak Pradip Shah (Mirantis)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
With the release of Kubernetes 1.30, the feature to prevent the modification of the volume mode
of a [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/) that was created from
an existing VolumeSnapshot in a Kubernetes cluster, has moved to GA!
--&gt;
&lt;p&gt;随着 Kubernetes 1.30 的发布，防止修改从 Kubernetes 集群中现有
VolumeSnapshot 创建的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumeClaim&lt;/a&gt;
的卷模式的特性已被升级至 GA！&lt;/p&gt;</description></item><item><title>Kubernetes 1.30：结构化身份认证配置进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/25/structured-authentication-moves-to-beta/</link><pubDate>Thu, 25 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/25/structured-authentication-moves-to-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.30: Structured Authentication Configuration Moves to Beta"
date: 2024-04-25
slug: structured-authentication-moves-to-beta
author: &gt;
 [Anish Ramasekar](https://github.com/aramase) (Microsoft)
--&gt;
&lt;!--
With Kubernetes 1.30, we (SIG Auth) are moving Structured Authentication Configuration to beta.

Today's article is about _authentication_: finding out who's performing a task, and checking
that they are who they say they are. Check back in tomorrow to find about what's new in
Kubernetes v1.30 around _authorization_ (deciding what someone can and can't access).
--&gt;
&lt;p&gt;在 Kubernetes 1.30 中，我们（SIG Auth）将结构化身份认证配置（Structured Authentication Configuration）进阶至 Beta。&lt;/p&gt;</description></item><item><title>Kubernetes 1.30：验证准入策略 ValidatingAdmissionPolicy 正式发布</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/24/validating-admission-policy-ga/</link><pubDate>Wed, 24 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/24/validating-admission-policy-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.30: Validating Admission Policy Is Generally Available"
slug: validating-admission-policy-ga
date: 2024-04-24
author: &gt;
 Jiahui Feng (Google)
--&gt;
&lt;!--
On behalf of the Kubernetes project, I am excited to announce that ValidatingAdmissionPolicy has reached
**general availability**
as part of Kubernetes 1.30 release. If you have not yet read about this new declarative alternative to
validating admission webhooks, it may be interesting to read our
[previous post](/blog/2022/12/20/validating-admission-policies-alpha/) about the new feature.
If you have already heard about ValidatingAdmissionPolicies and you are eager to try them out,
there is no better time to do it than now.

Let's have a taste of a ValidatingAdmissionPolicy, by replacing a simple webhook.
--&gt;
&lt;p&gt;我代表 Kubernetes 项目，很高兴地宣布 ValidatingAdmissionPolicy 已经作为 Kubernetes 1.30 发布的一部分&lt;strong&gt;正式发布&lt;/strong&gt;。
如果你还不了解这个用于替代验证准入 Webhook 的全新声明式方案，
请参阅有关这个新特性的&lt;a href="https://andygol-k8s.netlify.app/blog/2022/12/20/validating-admission-policies-alpha/"&gt;上一篇博文&lt;/a&gt;。
如果你已经对 ValidatingAdmissionPolicy 有所了解并且想要尝试一下，那么现在是最好的时机。&lt;/p&gt;</description></item><item><title>Kubernetes 1.30：只读卷挂载终于可以真正实现只读了</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/23/recursive-read-only-mounts/</link><pubDate>Tue, 23 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/23/recursive-read-only-mounts/</guid><description>&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Akihiro Suda (NTT)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
layout: blog
title: 'Kubernetes 1.30: Read-only volume mounts can be finally literally read-only'
date: 2024-04-23
slug: recursive-read-only-mounts
author: &gt;
 Akihiro Suda (NTT)
--&gt;
&lt;!--
Read-only volume mounts have been a feature of Kubernetes since the beginning.
Surprisingly, read-only mounts are not completely read-only under certain conditions on Linux.
As of the v1.30 release, they can be made completely read-only,
with alpha support for _recursive read-only mounts_.
--&gt;
&lt;p&gt;只读卷挂载从一开始就是 Kubernetes 的一个特性。
令人惊讶的是，在 Linux 上的某些条件下，只读挂载并不是完全只读的。
从 v1.30 版本开始，这类卷挂载可以被处理为完全只读；v1.30 为&lt;strong&gt;递归只读挂载&lt;/strong&gt;提供 Alpha 支持。&lt;/p&gt;</description></item><item><title>Kubernetes 1.30：对 Pod 使用用户命名空间的支持进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/22/userns-beta/</link><pubDate>Mon, 22 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/22/userns-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.30: Beta Support For Pods With User Namespaces"
date: 2024-04-22
slug: userns-beta
author: &gt;
 Rodrigo Campos Catelin (Microsoft),
 Giuseppe Scrivano (Red Hat),
 Sascha Grunert (Red Hat)
--&gt;
&lt;!--
Linux provides different namespaces to isolate processes from each other. For
example, a typical Kubernetes pod runs within a network namespace to isolate the
network identity and a PID namespace to isolate the processes.

One Linux namespace that was left behind is the [user
namespace](https://man7.org/linux/man-pages/man7/user_namespaces.7.html). This
namespace allows us to isolate the user and group identifiers (UIDs and GIDs) we
use inside the container from the ones on the host.
--&gt;
&lt;p&gt;Linux 提供了不同的命名空间来将进程彼此隔离。
例如，一个典型的 Kubernetes Pod 运行在一个网络命名空间中以隔离网络身份，并运行在一个 PID 命名空间中以隔离进程。&lt;/p&gt;</description></item><item><title>SIG Architecture 特别报道：代码组织</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/11/sig-architecture-code-spotlight-2024/</link><pubDate>Thu, 11 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/11/sig-architecture-code-spotlight-2024/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Architecture: Code Organization"
slug: sig-architecture-code-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/04/11/sig-architecture-code-spotlight-2024
date: 2024-04-11
author: &gt;
 Frederico Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
_This is the third interview of a SIG Architecture Spotlight series that will cover the different
subprojects. We will cover [SIG Architecture: Code Organization](https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization)._

In this SIG Architecture spotlight I talked with [Madhav Jivrajani](https://github.com/MadhavJivrajani)
(VMware), a member of the Code Organization subproject.
--&gt;
&lt;p&gt;&lt;strong&gt;这是 SIG Architecture Spotlight 系列的第三次采访，该系列将涵盖不同的子项目。
我们将介绍 &lt;a href="https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization"&gt;SIG Architecture：代码组织&lt;/a&gt;。&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Windows 操作就绪规范简介</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/03/intro-windows-ops-readiness/</link><pubDate>Wed, 03 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/04/03/intro-windows-ops-readiness/</guid><description>&lt;!--
layout: blog
title: "Introducing the Windows Operational Readiness Specification"
date: 2024-04-03
slug: intro-windows-ops-readiness
author: &gt;
 Jay Vyas (Tesla),
 Amim Knabben (Broadcom),
 Tatenda Zifudzi (AWS)
--&gt;
&lt;!--
Since Windows support [graduated to stable](/blog/2019/03/25/kubernetes-1-14-release-announcement/)
with Kubernetes 1.14 in 2019, the capability to run Windows workloads has been much
appreciated by the end user community. The level of and availability of Windows workload
support has consistently been a major differentiator for Kubernetes distributions used by
large enterprises. However, with more Windows workloads being migrated to Kubernetes
and new Windows features being continuously released, it became challenging to test
Windows worker nodes in an effective and standardized way.
--&gt;
&lt;p&gt;自从 2019 年 Kubernetes 1.14 将对 Windows
的支持&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2019/03/25/kubernetes-1-14-release-announcement/"&gt;升级为稳定版&lt;/a&gt;以来，
运行 Windows 工作负载的能力一直深受最终用户社区的认可。
对 Windows 工作负载支持的水平和可用性一直是大型企业选择 Kubernetes 发行版的重要差异化因素。
然而，随着越来越多的 Windows 工作负载迁移到 Kubernetes，以及新的 Windows 特性不断发布，
要高效且标准化地测试 Windows 工作节点变得越来越具有挑战性。&lt;/p&gt;</description></item><item><title>Kubernetes v1.30 初探</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</link><pubDate>Tue, 12 Mar 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</guid><description>&lt;!--
layout: blog
title: 'A Peek at Kubernetes v1.30'
date: 2024-03-12
slug: kubernetes-1-30-upcoming-changes
--&gt;
&lt;!-- 
**Authors:** Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Paco Xu (DaoCloud)&lt;/p&gt;
&lt;!--
## A quick look: exciting changes in Kubernetes v1.30

It's a new year and a new Kubernetes release. We're halfway through the release cycle and
have quite a few interesting and exciting enhancements coming in v1.30. From brand new features
in alpha, to established features graduating to stable, to long-awaited improvements, this release
has something for everyone to pay attention to!

To tide you over until the official release, here's a sneak peek of the enhancements we're most
excited about in this cycle!
--&gt;
&lt;h2 id="快速预览-kubernetes-v1-30-中令人兴奋的变化"&gt;快速预览：Kubernetes v1.30 中令人兴奋的变化&lt;/h2&gt;
&lt;p&gt;新年新版本，v1.30 发布周期已过半，我们将迎来一系列有趣且令人兴奋的增强功能。
从全新的 Alpha 特性，到已有的特性升级为稳定版，再到期待已久的改进，这个版本对每个人都有值得关注的内容！&lt;/p&gt;</description></item><item><title>走进 Kubernetes 读书会（Book Club）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/02/22/k8s-book-club/</link><pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/02/22/k8s-book-club/</guid><description>&lt;!--
layout: blog
title: "A look into the Kubernetes Book Club"
slug: k8s-book-club
date: 2024-02-22
canonicalUrl: https://www.k8s.dev/blog/2024/02/22/k8s-book-club/
author: &gt;
 Frederico Muñoz (SAS Institute)
--&gt;
&lt;!--
Learning Kubernetes and the entire ecosystem of technologies around it is not without its
challenges. In this interview, we will talk with [Carlos Santana
(AWS)](https://www.linkedin.com/in/csantanapr/) to learn a bit more about how he created the
[Kubernetes Book Club](https://community.cncf.io/kubernetes-virtual-book-club/), how it works, and
how anyone can join in to take advantage of a community-based learning experience.
--&gt;
&lt;p&gt;学习 Kubernetes 及其整个生态的技术并非易事。在本次采访中，我们的访谈对象是
&lt;a href="https://www.linkedin.com/in/csantanapr/"&gt;Carlos Santana (AWS)&lt;/a&gt;，
了解他是如何创办 &lt;a href="https://community.cncf.io/kubernetes-virtual-book-club/"&gt;Kubernetes 读书会（Book Club）&lt;/a&gt;的，
整个读书会是如何运作的，以及大家如何加入其中，进而更好地利用社区学习体验。&lt;/p&gt;</description></item><item><title>镜像文件系统：配置 Kubernetes 将容器存储在独立的文件系统上</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2024/01/23/kubernetes-separate-image-filesystem/</link><pubDate>Tue, 23 Jan 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2024/01/23/kubernetes-separate-image-filesystem/</guid><description>&lt;!--
layout: blog
title: 'Image Filesystem: Configuring Kubernetes to store containers on a separate filesystem'
date: 2024-01-23
slug: kubernetes-separate-image-filesystem
--&gt;
&lt;!--
**Author:** Kevin Hannon (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Kevin Hannon (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt;&lt;/p&gt;
&lt;!--
A common issue in running/operating Kubernetes clusters is running out of disk space.
When the node is provisioned, you should aim to have a good amount of storage space for your container images and running containers.
The [container runtime](/docs/setup/production-environment/container-runtimes/) usually writes to `/var`. 
This can be located as a separate partition or on the root filesystem.
CRI-O, by default, writes its containers and images to `/var/lib/containers`, while containerd writes its containers and images to `/var/lib/containerd`.
--&gt;
&lt;p&gt;磁盘空间不足是运行或操作 Kubernetes 集群时的一个常见问题。
在制备节点时，你应该为容器镜像和正在运行的容器预留足够的存储空间。
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes/"&gt;容器运行时&lt;/a&gt;通常会向 &lt;code&gt;/var&lt;/code&gt; 目录写入数据。
此目录可以位于单独的分区或根文件系统上。CRI-O 默认将其容器和镜像写入 &lt;code&gt;/var/lib/containers&lt;/code&gt;，
而 containerd 将其容器和镜像写入 &lt;code&gt;/var/lib/containerd&lt;/code&gt;。&lt;/p&gt;</description></item><item><title>Kubernetes 1.29 中的上下文日志生成：更好的故障排除和增强的日志记录</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</link><pubDate>Wed, 20 Dec 2023 09:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</guid><description>&lt;!--
layout: blog
title: "Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging"
slug: contextual-logging-in-kubernetes-1-29
date: 2023-12-20T09:30:00-08:00
canonicalUrl: https://www.kubernetes.dev/blog/2023/12/20/contextual-logging/
--&gt;
&lt;!--
**Authors**: [Mengjiao Liu](https://github.com/mengjiao-liu/) (DaoCloud), [Patrick Ohly](https://github.com/pohly) (Intel)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：&lt;a href="https://github.com/mengjiao-liu/"&gt;Mengjiao Liu&lt;/a&gt; (DaoCloud), &lt;a href="https://github.com/pohly"&gt;Patrick Ohly&lt;/a&gt; (Intel)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/mengjiao-liu/"&gt;Mengjiao Liu&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
On behalf of the [Structured Logging Working Group](https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md) 
and [SIG Instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme), 
we are pleased to announce that the contextual logging feature
introduced in Kubernetes v1.24 has now been successfully migrated to
two components (kube-scheduler and kube-controller-manager)
as well as some directories. This feature aims to provide more useful logs 
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.
--&gt;
&lt;p&gt;代表&lt;a href="https://github.com/kubernetes/community/blob/master/wg-structed-logging/README.md"&gt;结构化日志工作组&lt;/a&gt;和
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme"&gt;SIG Instrumentation&lt;/a&gt;，
我们很高兴地宣布在 Kubernetes v1.24 中引入的上下文日志记录功能现已成功迁移到两个组件（kube-scheduler 和 kube-controller-manager）
以及一些目录。该功能旨在为 Kubernetes 提供更多有用的日志以更好地进行故障排除，并帮助开发人员增强 Kubernetes。&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: 解耦污点管理器与节点生命周期控制器</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</link><pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.29: Decoupling taint-manager from node-lifecycle-controller"
date: 2023-12-19
slug: kubernetes-1-29-taint-eviction-controller
--&gt;
&lt;!-- 
**Authors:** Yuan Chen (Apple), Andrea Tosatto (Apple) 
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Yuan Chen (Apple), Andrea Tosatto (Apple)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Allen Zhang&lt;/p&gt;
&lt;!-- 
This blog discusses a new feature in Kubernetes 1.29 to improve the handling of taint-based pod eviction. 
--&gt;
&lt;p&gt;这篇博客讨论在 Kubernetes 1.29 中基于污点的 Pod 驱逐处理的新特性。&lt;/p&gt;
&lt;!-- 
## Background 
--&gt;
&lt;h2 id="背景"&gt;背景&lt;/h2&gt;
&lt;!-- 
In Kubernetes 1.29, an improvement has been introduced to enhance the taint-based pod eviction handling on nodes.
This blog discusses the changes made to node-lifecycle-controller
to separate its responsibilities and improve overall code maintainability. 
--&gt;
&lt;p&gt;在 Kubernetes 1.29 中引入了一项改进，以加强节点上基于污点的 Pod 驱逐处理。
本文将讨论对节点生命周期控制器（node-lifecycle-controller）所做的更改，以分离职责并提高代码的整体可维护性。&lt;/p&gt;</description></item><item><title>Kubernetes 1.29：PodReadyToStartContainers 状况进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</link><pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.29: PodReadyToStartContainers Condition Moves to Beta"
date: 2023-12-19
slug: pod-ready-to-start-containers-condition-now-in-beta
--&gt;
&lt;!--
**Authors**: Zefeng Chen (independent), Kevin Hannon (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Zefeng Chen (independent), Kevin Hannon (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt;&lt;/p&gt;
&lt;!--
With the recent release of Kubernetes 1.29, the `PodReadyToStartContainers`
[condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) is 
available by default.
The kubelet manages the value for that condition throughout a Pod's lifecycle, 
in the status field of a Pod. The kubelet will use the `PodReadyToStartContainers`
condition to accurately surface the initialization state of a Pod,
from the perspective of Pod sandbox creation and network configuration by a container runtime.
--&gt;
&lt;p&gt;随着最近发布的 Kubernetes 1.29，&lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions"&gt;状况&lt;/a&gt;默认可用。
kubelet 在 Pod 的整个生命周期中管理该状况的值，将其存储在 Pod 的状态字段中。
kubelet 将使用 &lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
状况，从容器运行时创建 Pod 沙箱和配置网络的角度，准确地展示 Pod 的初始化状态。&lt;/p&gt;</description></item><item><title>Kubernetes 1.29 新的 Alpha 特性：Service 的负载均衡器 IP 模式</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</link><pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</guid><description>&lt;!-- 
layout: blog
title: "Kubernetes 1.29: New (alpha) Feature, Load Balancer IP Mode for Services"
date: 2023-12-18
slug: kubernetes-1-29-feature-loadbalancer-ip-mode-alpha
--&gt;
&lt;!-- **Author:** [Aohan Yang](https://github.com/RyanAoh) --&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; &lt;a href="https://github.com/RyanAoh"&gt;Aohan Yang&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Allen Zhang&lt;/p&gt;
&lt;!-- 
This blog introduces a new alpha feature in Kubernetes 1.29. 
It provides a configurable approach to define how Service implementations, 
exemplified in this blog by kube-proxy, 
handle traffic from pods to the Service, within the cluster. 
--&gt;
&lt;p&gt;本文介绍 Kubernetes 1.29 中一个新的 Alpha 特性。
此特性提供了一种可配置的方式，用于定义 Service 的实现（本文以
kube-proxy 为例）如何处理集群内从 Pod 到 Service 的流量。&lt;/p&gt;</description></item><item><title>Kubernetes 1.29：修改卷之 VolumeAttributesClass</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</link><pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.29: VolumeAttributesClass for Volume Modification"
date: 2023-12-15
slug: kubernetes-1-29-volume-attributes-class
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/carlory"&gt;Baofa Fan&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The v1.29 release of Kubernetes introduced an alpha feature to support modifying a volume
by changing the `volumeAttributesClassName` that was specified for a PersistentVolumeClaim (PVC).
With the feature enabled, Kubernetes can handle updates of volume attributes other than capacity.
Allowing volume attributes to be changed without managing it through different
provider's APIs directly simplifies the current flow.

You can read about VolumeAttributesClass usage details in the Kubernetes documentation 
or you can read on to learn about why the Kubernetes project is supporting this feature.
--&gt;
&lt;p&gt;Kubernetes v1.29 版本引入了一个 Alpha 功能，支持通过变更 PersistentVolumeClaim（PVC）的
&lt;code&gt;volumeAttributesClassName&lt;/code&gt; 字段来修改卷。启用该功能后，Kubernetes 可以处理除容量以外的卷属性的更新。
允许更改卷属性，而无需通过不同提供商的 API 对其进行管理，这直接简化了当前流程。&lt;/p&gt;</description></item><item><title>聚焦 SIG Testing</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/11/24/sig-testing-spotlight-2023/</link><pubDate>Fri, 24 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/11/24/sig-testing-spotlight-2023/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Testing"
slug: sig-testing-spotlight-2023
date: 2023-11-24
canonicalUrl: https://www.kubernetes.dev/blog/2023/11/24/sig-testing-spotlight-2023/
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Sandipan Panda&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt;&lt;/p&gt;
&lt;!--
Welcome to another edition of the _SIG spotlight_ blog series, where we
highlight the incredible work being done by various Special Interest
Groups (SIGs) within the Kubernetes project. In this edition, we turn
our attention to [SIG Testing](https://github.com/kubernetes/community/tree/master/sig-testing#readme),
a group interested in effective testing of Kubernetes and automating
away project toil. SIG Testing focus on creating and running tools and
infrastructure that make it easier for the community to write and run
tests, and to contribute, analyze and act upon test results.
--&gt;
&lt;p&gt;欢迎阅读又一期的 “SIG 聚光灯” 系列博客，这些博客重点介绍 Kubernetes
项目中各个特别兴趣小组（SIG）所从事的令人赞叹的工作。这篇博客将聚焦
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-testing#readme"&gt;SIG Testing&lt;/a&gt;，
这是一个致力于有效测试 Kubernetes，让此项目的繁琐工作实现自动化的兴趣小组。
SIG Testing 专注于创建和运行各类工具和基础设施，使社区更容易编写和运行测试，并更容易贡献、分析测试结果并据此采取行动。&lt;/p&gt;</description></item><item><title>Kubernetes 1.29 中的移除、弃用和主要变更</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/11/16/kubernetes-1-29-upcoming-changes/</link><pubDate>Thu, 16 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/11/16/kubernetes-1-29-upcoming-changes/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes Removals, Deprecations, and Major Changes in Kubernetes 1.29'
date: 2023-11-16
slug: kubernetes-1-29-upcoming-changes
--&gt;
&lt;!--
**Authors:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
As with every release, Kubernetes v1.29 will introduce feature deprecations and removals. Our continued ability to produce high-quality releases is a testament to our robust development cycle and healthy community. The following are some of the deprecations and removals coming in the Kubernetes 1.29 release.
--&gt;
&lt;p&gt;与以往每次发布一样，Kubernetes v1.29 将弃用和移除一些特性。
我们能够持续产出高质量的发布版本，证明了我们稳健的开发周期和健康的社区。
下文列举即将发布的 Kubernetes 1.29 中的一些弃用和移除事项。&lt;/p&gt;</description></item><item><title>介绍 SIG etcd</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/11/07/introducing-sig-etcd/</link><pubDate>Tue, 07 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/11/07/introducing-sig-etcd/</guid><description>&lt;!--
layout: blog
title: "Introducing SIG etcd"
slug: introducing-sig-etcd
date: 2023-11-07
canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
--&gt;
&lt;!--
**Authors**: Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Special Interest Groups (SIGs) are a fundamental part of the Kubernetes project,
with a substantial share of the community activity happening within them.
When the need arises, [new SIGs can be created](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md),
and that was precisely what happened recently.
--&gt;
&lt;p&gt;特殊兴趣小组（SIG）是 Kubernetes 项目的基本组成部分，很大一部分的 Kubernetes 社区活动都在其中进行。
当有需要时，可以创建&lt;a href="https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md"&gt;新的 SIG&lt;/a&gt;，
而这正是最近发生的事情。&lt;/p&gt;</description></item><item><title>Gateway API v1.0：正式发布（GA）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/31/gateway-api-ga/</link><pubDate>Tue, 31 Oct 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/31/gateway-api-ga/</guid><description>&lt;!--
layout: blog
title: "Gateway API v1.0: GA Release"
date: 2023-10-31T10:00:00-08:00
slug: gateway-api-ga
--&gt;
&lt;!--
**Authors:** Shane Utt (Kong), Nick Young (Isovalent), Rob Scott (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Shane Utt (Kong), Nick Young (Isovalent), Rob Scott (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
On behalf of Kubernetes SIG Network, we are pleased to announce the v1.0 release of [Gateway
API](https://gateway-api.sigs.k8s.io/)! This release marks a huge milestone for
this project. Several key APIs are graduating to GA (generally available), while
other significant features have been added to the Experimental channel.
--&gt;
&lt;p&gt;我们代表 Kubernetes SIG Network 很高兴地宣布 &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt;
v1.0 版本发布！此版本是该项目的一个重要里程碑。几个关键的 API 已进阶至 GA（正式发布）阶段，
同时其他重要特性已添加到实验（Experimental）通道中。&lt;/p&gt;</description></item><item><title>Kubernetes 中 PersistentVolume 的最后阶段转换时间</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/23/persistent-volume-last-phase-transition-time/</link><pubDate>Mon, 23 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/23/persistent-volume-last-phase-transition-time/</guid><description>&lt;!--
layout: blog
title: PersistentVolume Last Phase Transition Time in Kubernetes
date: 2023-10-23
slug: persistent-volume-last-phase-transition-time
--&gt;
&lt;!--
**Author:** Roman Bednář (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Roman Bednář (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
In the recent Kubernetes v1.28 release, we (SIG Storage) introduced a new alpha feature that aims to improve PersistentVolume (PV)
storage management and help cluster administrators gain better insights into the lifecycle of PVs.
With the addition of the `lastPhaseTransitionTime` field into the status of a PV,
cluster administrators are now able to track the last time a PV transitioned to a different
[phase](/docs/concepts/storage/persistent-volumes/#phase), allowing for more efficient
and informed resource management.
--&gt;
&lt;p&gt;在最近的 Kubernetes v1.28 版本中，我们（SIG Storage）引入了一项新的 Alpha 级别特性，
旨在改进 PersistentVolume（PV）存储管理并帮助集群管理员更好地了解 PV 的生命周期。
通过将 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; 字段添加到 PV 的状态中，集群管理员现在可以跟踪
PV 上次转换到不同&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#phase"&gt;阶段&lt;/a&gt;的时间，
从而实现更高效、更明智的资源管理。&lt;/p&gt;</description></item><item><title>2023 中国 Kubernetes 贡献者峰会简要回顾</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/20/kcs-shanghai/</link><pubDate>Fri, 20 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/20/kcs-shanghai/</guid><description>&lt;!--
layout: blog
title: "A Quick Recap of 2023 China Kubernetes Contributor Summit"
slug: kcs-shanghai
date: 2023-10-20
canonicalUrl: https://www.kubernetes.dev/blog/2023/10/20/kcs-shanghai/
--&gt;
&lt;!--
**Author:** Paco Xu and Michael Yao (DaoCloud)

On September 26, 2023, the first day of
[KubeCon + CloudNativeCon + Open Source Summit China 2023](https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/),
nearly 50 contributors gathered in Shanghai for the Kubernetes Contributor Summit.
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Paco Xu 和 Michael Yao (DaoCloud)&lt;/p&gt;
&lt;p&gt;2023 年 9 月 26 日，即
&lt;a href="https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/"&gt;KubeCon + CloudNativeCon + Open Source Summit China 2023&lt;/a&gt;
第一天，近 50 位社区贡献者济济一堂，在上海聚首 Kubernetes 贡献者峰会。&lt;/p&gt;</description></item><item><title>CRI-O 正迁移至 pkgs.k8s.io</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/10/cri-o-community-package-infrastructure/</link><pubDate>Tue, 10 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/10/cri-o-community-package-infrastructure/</guid><description>&lt;!--
layout: blog
title: "CRI-O is moving towards pkgs.k8s.io"
date: 2023-10-10
slug: cri-o-community-package-infrastructure
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Sascha Grunert&lt;/p&gt;
&lt;!--
**Author:** Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes community [recently announced](/blog/2023/08/31/legacy-package-repository-deprecation/) that their legacy package repositories are frozen, and now they moved to [introduced community-owned package repositories](/blog/2023/08/15/pkgs-k8s-io-introduction) powered by the [OpenBuildService (OBS)](https://build.opensuse.org/project/subprojects/isv:kubernetes). CRI-O has a long history of utilizing [OBS for their package builds](https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o), but all of the packaging efforts have been done manually so far.
--&gt;
&lt;p&gt;Kubernetes 社区&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/"&gt;最近宣布&lt;/a&gt;旧的软件包仓库已被冻结，
现在这些软件包将被迁移到由 &lt;a href="https://build.opensuse.org/project/subprojects/isv:kubernetes"&gt;OpenBuildService（OBS）&lt;/a&gt;
提供支持的&lt;a href="https://andygol-k8s.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction"&gt;社区自治软件包仓库&lt;/a&gt;中。
很久以来，CRI-O 一直在利用 &lt;a href="https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o"&gt;OBS 进行软件包构建&lt;/a&gt;，
但到目前为止，所有打包工作都是手动完成的。&lt;/p&gt;</description></item><item><title>聚焦 SIG Architecture: Conformance</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</link><pubDate>Thu, 05 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Architecture: Conformance"
slug: sig-architecture-conformance-spotlight-2023
date: 2023-10-05
canonicalUrl: https://www.k8s.dev/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/
--&gt;
&lt;!--
**Author**: Frederico Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
_This is the first interview of a SIG Architecture Spotlight series
that will cover the different subprojects. We start with the SIG
Architecture: Conformance subproject_

In this [SIG
Architecture](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md)
spotlight, we talked with [Riaan
Kleinhans](https://github.com/Riaankl) (ii.nz), Lead for the
[Conformance
sub-project](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1).
--&gt;
&lt;p&gt;&lt;strong&gt;这是 SIG Architecture 焦点访谈系列的首次采访，这一系列访谈将涵盖多个子项目。
我们从 SIG Architecture：Conformance 子项目开始。&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>公布 2023 年指导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/02/steering-committee-results-2023/</link><pubDate>Mon, 02 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/10/02/steering-committee-results-2023/</guid><description>&lt;!--
layout: blog
title: "Announcing the 2023 Steering Committee Election Results"
date: 2023-10-02
slug: steering-committee-results-2023
--&gt;
&lt;!--
**Author**: Kaslin Fields
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Kaslin Fields&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
The [2023 Steering Committee Election](https://github.com/kubernetes/community/tree/master/events/elections/2023) is now complete.
The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2023.
Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/events/elections/2023"&gt;2023 年指导委员会选举&lt;/a&gt;现已完成。
Kubernetes 指导委员会由 7 个席位组成，其中 4 个席位于 2023 年进行选举。
新任委员会成员的任期为 2 年，所有成员均由 Kubernetes 社区选举产生。&lt;/p&gt;</description></item><item><title>kubeadm 七周年生日快乐！</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/26/happy-7th-birthday-kubeadm/</link><pubDate>Tue, 26 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/26/happy-7th-birthday-kubeadm/</guid><description>&lt;!--
layout: blog
title: 'Happy 7th Birthday kubeadm!'
date: 2023-09-26
slug: happy-7th-birthday-kubeadm
--&gt;
&lt;!--
**Author:** Fabrizio Pandini (VMware)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Fabrizio Pandini (VMware)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
What a journey so far!

Starting from the initial blog post [“How we made Kubernetes insanely easy to install”](/blog/2016/09/how-we-made-kubernetes-easy-to-install/) in September 2016, followed by an exciting growth that lead to general availability / [“Production-Ready Kubernetes Cluster Creation with kubeadm”](/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/) two years later.

And later on a continuous, steady and reliable flow of small improvements that is still going on as of today.
--&gt;
&lt;p&gt;回首向来萧瑟处，七年光阴风雨路！&lt;/p&gt;</description></item><item><title>kubeadm：使用 etcd Learner 安全地接入控制平面节点</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/25/kubeadm-use-etcd-learner-mode/</link><pubDate>Mon, 25 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/25/kubeadm-use-etcd-learner-mode/</guid><description>&lt;!--
layout: blog
title: 'kubeadm: Use etcd Learner to Join a Control Plane Node Safely'
date: 2023-09-25
slug: kubeadm-use-etcd-learner-mode
--&gt;
&lt;!--
**Author:** Paco Xu (DaoCloud)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Paco Xu (DaoCloud)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The [`kubeadm`](/docs/reference/setup-tools/kubeadm/) tool now supports etcd learner mode, which
allows you to enhance the resilience and stability
of your Kubernetes clusters by leveraging the [learner mode](https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34)
feature introduced in etcd version 3.4.
This guide will walk you through using etcd learner mode with kubeadm. By default, kubeadm runs
a local etcd instance on each control plane node.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/"&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt; 工具现在支持 etcd learner 模式，
借助 etcd 3.4 版本引入的
&lt;a href="https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34"&gt;learner 模式&lt;/a&gt;特性，
可以提高 Kubernetes 集群的弹性和稳定性。本文将介绍如何在 kubeadm 中使用 etcd learner 模式。
默认情况下，kubeadm 在每个控制平面节点上运行一个本地 etcd 实例。&lt;/p&gt;</description></item><item><title>用户命名空间：对运行有状态 Pod 的支持进入 Alpha 阶段!</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/13/userns-alpha/</link><pubDate>Wed, 13 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/13/userns-alpha/</guid><description>&lt;!--
layout: blog
title: "User Namespaces: Now Supports Running Stateful Pods in Alpha!"
date: 2023-09-13
slug: userns-alpha
--&gt;
&lt;!--
**Authors:** Rodrigo Campos Catelin (Microsoft), Giuseppe Scrivano (Red Hat), Sascha Grunert (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Rodrigo Campos Catelin (Microsoft), Giuseppe Scrivano (Red Hat), Sascha Grunert (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes v1.25 introduced support for user namespaces for only stateless
pods. Kubernetes 1.28 lifted that restriction, after some design changes were
done in 1.27.
--&gt;
&lt;p&gt;Kubernetes v1.25 引入用户命名空间（User Namespace）特性，仅支持无状态（Stateless）Pod。
Kubernetes 1.28 在 1.27 中完成一些设计变更之后，取消了这一限制。&lt;/p&gt;</description></item><item><title>比较本地 Kubernetes 开发工具：Telepresence、Gefyra 和 mirrord</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/12/local-k8s-development-tools/</link><pubDate>Tue, 12 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/09/12/local-k8s-development-tools/</guid><description>&lt;!--
layout: blog
title: 'Comparing Local Kubernetes Development Tools: Telepresence, Gefyra, and mirrord'
date: 2023-09-12
slug: local-k8s-development-tools
--&gt;
&lt;!--
**Author:** Eyal Bukchin (MetalBear)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Eyal Bukchin (MetalBear)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes development cycle is an evolving landscape with a myriad of tools seeking to streamline the process. Each tool has its unique approach, and the choice often comes down to individual project requirements, the team's expertise, and the preferred workflow.
--&gt;
&lt;p&gt;Kubernetes 的开发周期是一个不断演化的领域，有许多工具在寻求简化这个过程。
每个工具都有其独特的方法，具体选择通常取决于各个项目的要求、团队的专业知识以及所偏好的工作流。&lt;/p&gt;</description></item><item><title>Kubernetes 旧版软件包仓库将于 2023 年 9 月 13 日被冻结</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/</link><pubDate>Thu, 31 Aug 2023 15:30:00 -0700</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Legacy Package Repositories Will Be Frozen On September 13, 2023"
date: 2023-08-31T15:30:00-07:00
slug: legacy-package-repository-deprecation
evergreen: true
author: &gt;
 Bob Killen (Google),
 Chris Short (AWS),
 Jeremy Rickard (Microsoft),
 Marko Mudrinić (Kubermatic),
 Tim Bannister (The Scale Factory)
--&gt;
&lt;!--
On August 15, 2023, the Kubernetes project announced the general availability of
the community-owned package repositories for Debian and RPM packages available
at `pkgs.k8s.io`. The new package repositories are replacement for the legacy
Google-hosted package repositories: `apt.kubernetes.io` and `yum.kubernetes.io`.
The
[announcement blog post for `pkgs.k8s.io`](/blog/2023/08/15/pkgs-k8s-io-introduction/)
highlighted that we will stop publishing packages to the legacy repositories in
the future.
--&gt;
&lt;p&gt;2023 年 8 月 15 日，Kubernetes 项目宣布社区拥有的 Debian 和 RPM
软件包仓库在 &lt;code&gt;pkgs.k8s.io&lt;/code&gt; 上正式提供。新的软件包仓库将取代旧的由
Google 托管的软件包仓库：&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;。
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/"&gt;&lt;code&gt;pkgs.k8s.io&lt;/code&gt; 的公告博客文章&lt;/a&gt;强调我们未来将停止将软件包发布到旧仓库。&lt;/p&gt;</description></item><item><title>Gateway API v0.8.0：引入服务网格支持</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/29/gateway-api-v0-8/</link><pubDate>Tue, 29 Aug 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/29/gateway-api-v0-8/</guid><description>&lt;!--
layout: blog
title: "Gateway API v0.8.0: Introducing Service Mesh Support"
date: 2023-08-29T10:00:00-08:00
slug: gateway-api-v0-8
--&gt;
&lt;!--
***Authors:*** Flynn (Buoyant), John Howard (Google), Keith Mattix (Microsoft), Michael Beaumont (Kong), Mike Morris (independent), Rob Scott (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Flynn (Buoyant), John Howard (Google), Keith Mattix (Microsoft), Michael Beaumont (Kong), Mike Morris (independent), Rob Scott (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
We are thrilled to announce the v0.8.0 release of Gateway API! With this
release, Gateway API support for service mesh has reached [Experimental
status][status]. We look forward to your feedback!

We're especially delighted to announce that Kuma 2.3+, Linkerd 2.14+, and Istio
1.16+ are all fully-conformant implementations of Gateway API service mesh
support.
--&gt;
&lt;p&gt;我们很高兴地宣布 Gateway API 的 v0.8.0 版本发布了！
通过此版本，Gateway API 对服务网格的支持已达到&lt;a href="https://gateway-api.sigs.k8s.io/geps/overview/#status"&gt;实验性（Experimental）状态&lt;/a&gt;。
我们期待你的反馈！&lt;/p&gt;</description></item><item><title>Kubernetes 1.28：用于改进集群安全升级的新（Alpha）机制</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/28/kubernetes-1-28-feature-mixed-version-proxy-alpha/</link><pubDate>Mon, 28 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/28/kubernetes-1-28-feature-mixed-version-proxy-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.28: A New (alpha) Mechanism For Safer Cluster Upgrades"
date: 2023-08-28
slug: kubernetes-1-28-feature-mixed-version-proxy-alpha
--&gt;
&lt;!--
**Author:** Richa Banker (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Richa Banker (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
This blog describes the _mixed version proxy_, a new alpha feature in Kubernetes 1.28. The
mixed version proxy enables an HTTP request for a resource to be served by the correct API server
in cases where there are multiple API servers at varied versions in a cluster. For example,
this is useful during a cluster upgrade, or when you're rolling out the runtime configuration of
the cluster's control plane.
--&gt;
&lt;p&gt;本博客介绍了&lt;strong&gt;混合版本代理（Mixed Version Proxy）&lt;/strong&gt;，这是 Kubernetes 1.28 中的一个新的
Alpha 级别特性。当集群中存在多个不同版本的 API 服务器时，混合版本代理使对资源的 HTTP 请求能够被正确的
API 服务器处理。例如，在集群升级期间或当发布集群控制平面的运行时配置时此特性非常有用。&lt;/p&gt;</description></item><item><title>Kubernetes v1.28：介绍原生边车容器</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/25/native-sidecar-containers/</link><pubDate>Fri, 25 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/25/native-sidecar-containers/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.28: Introducing native sidecar containers"
date: 2023-08-25
slug: native-sidecar-containers
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Todd Neal (AWS), Matthias Bertschy (ARMO), Sergey Kanzhelev (Google), Gunju Kim (NAVER), Shannon Kularathna (Google)&lt;/p&gt;
&lt;!--
***Authors:*** Todd Neal (AWS), Matthias Bertschy (ARMO), Sergey Kanzhelev (Google), Gunju Kim (NAVER), Shannon Kularathna (Google)
--&gt;
&lt;!--
This post explains how to use the new sidecar feature, which enables restartable init containers and is available in alpha in Kubernetes 1.28. We want your feedback so that we can graduate this feature as soon as possible.
--&gt;
&lt;p&gt;本文介绍了如何使用新的边车（Sidecar）功能，该功能支持可重新启动的 Init 容器，
并且在 Kubernetes 1.28 以 Alpha 版本发布。我们希望得到你的反馈，以便我们尽快完成此功能。&lt;/p&gt;</description></item><item><title>Kubernetes 1.28：在 Linux 上使用交换内存的 Beta 支持</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/24/swap-linux-beta/</link><pubDate>Thu, 24 Aug 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/24/swap-linux-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.28: Beta support for using swap on Linux"
date: 2023-08-24T10:00:00-08:00
slug: swap-linux-beta
--&gt;
&lt;!--
**Author:** Itamar Holder (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Itamar Holder (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
The 1.22 release [introduced Alpha support](/blog/2021/08/09/run-nodes-with-swap-alpha/) for configuring swap memory usage for Kubernetes workloads running on Linux on a per-node basis. Now, in release 1.28, support for swap on Linux nodes has graduated to Beta, along with many new improvements.
--&gt;
&lt;p&gt;Kubernetes 1.22 版本&lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/09/run-nodes-with-swap-alpha/"&gt;引入了 Alpha 级别的支持&lt;/a&gt;，
允许按节点为运行在 Linux 上的 Kubernetes 工作负载配置交换内存的使用。
现在，在 1.28 版中，对 Linux 节点上的交换内存的支持已升级为 Beta 版，并有许多新的改进。&lt;/p&gt;</description></item><item><title>Kubernetes 1.28：节点 podresources API 正式发布</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/23/kubelet-podresources-api-ga/</link><pubDate>Wed, 23 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/23/kubelet-podresources-api-ga/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.28: Node podresources API Graduates to GA'
date: 2023-08-23
slug: kubelet-podresources-api-GA
--&gt;
&lt;!--
**Author:**
Francesco Romani (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Francesco Romani (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
The podresources API is an API served by the kubelet locally on the node, which exposes the compute resources exclusively allocated to containers. With the release of Kubernetes 1.28, that API is now Generally Available.
--&gt;
&lt;p&gt;podresources API 是由 kubelet 提供的节点本地 API，它用于公开专门分配给容器的计算资源。
随着 Kubernetes 1.28 的发布，该 API 现已正式发布。&lt;/p&gt;</description></item><item><title>Kubernetes 1.28：Job 失效处理的改进</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/21/kubernetes-1-28-jobapi-update/</link><pubDate>Mon, 21 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/21/kubernetes-1-28-jobapi-update/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.28: Improved failure handling for Jobs"
date: 2023-08-21
slug: kubernetes-1-28-jobapi-update
--&gt;
&lt;!--
**Authors:** Kevin Hannon (G-Research), Michał Woźniak (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Kevin Hannon (G-Research), Michał Woźniak (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
This blog discusses two new features in Kubernetes 1.28 to improve Jobs for batch
users: [Pod replacement policy](/docs/concepts/workloads/controllers/job/#pod-replacement-policy)
and [Backoff limit per index](/docs/concepts/workloads/controllers/job/#backoff-limit-per-index).
--&gt;
&lt;p&gt;本博客讨论 Kubernetes 1.28 中的两个新特性，用于为批处理用户改进 Job：
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-replacement-policy"&gt;Pod 更换策略&lt;/a&gt;
和&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#backoff-limit-per-index"&gt;基于索引的回退限制&lt;/a&gt;。&lt;/p&gt;
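&lt;p&gt;一个最小的示意清单（假设性示例，Job 名称、镜像等取值仅为演示）：&lt;code&gt;backoffLimitPerIndex&lt;/code&gt; 要求 Job 以 &lt;code&gt;Indexed&lt;/code&gt; 完成模式运行，而 &lt;code&gt;podReplacementPolicy: Failed&lt;/code&gt; 表示仅在 Pod 进入失败状态后才创建替换 Pod：&lt;/p&gt;

```shell
# 生成一个使用这两个新字段的 Job 清单草图（假设性示例，
# 需在启用相应特性门控的 Kubernetes v1.28+ 集群上才可实际创建）
printf '%s\n' \
  'apiVersion: batch/v1' \
  'kind: Job' \
  'metadata:' \
  '  name: demo-job            # 名称为演示用假设' \
  'spec:' \
  '  completions: 4' \
  '  completionMode: Indexed   # backoffLimitPerIndex 要求使用 Indexed 模式' \
  '  backoffLimitPerIndex: 1   # 每个索引各自独立的重试上限' \
  '  podReplacementPolicy: Failed   # 仅在 Pod 真正失败后才创建替换 Pod' \
  '  template:' \
  '    spec:' \
  '      restartPolicy: Never' \
  '      containers:' \
  '      - name: main' \
  '        image: busybox' \
  > job-sketch.yaml
```

随后可以在满足条件的集群上执行 kubectl apply -f job-sketch.yaml 来创建该 Job。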
&lt;!--
These features continue the effort started by the
[Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy)
to improve the handling of Pod failures in a Job.
--&gt;
&lt;p&gt;这些特性延续了以 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-failure-policy"&gt;Pod 失效策略&lt;/a&gt;
为开端的工作，用来改进对 Job 中 Pod 失效的处理。&lt;/p&gt;</description></item><item><title>Kubernetes v1.28：可追溯的默认 StorageClass 进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/18/retroactive-default-storage-class-ga/</link><pubDate>Fri, 18 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/18/retroactive-default-storage-class-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.28: Retroactive Default StorageClass move to GA"
date: 2023-08-18
slug: retroactive-default-storage-class-ga
--&gt;
&lt;!--
**Author:** Roman Bednář (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Roman Bednář (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
Announcing graduation to General Availability (GA) - Retroactive Default StorageClass Assignment
in Kubernetes v1.28!
--&gt;
&lt;p&gt;可追溯的默认 StorageClass 赋值（Retroactive Default StorageClass Assignment）在
Kubernetes v1.28 中宣布进阶至正式发布（GA）！&lt;/p&gt;
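&lt;p&gt;一个最小示意（假设性示例，StorageClass 名称与制备器均为虚构）：通过 &lt;code&gt;storageclass.kubernetes.io/is-default-class&lt;/code&gt; 注解将某个 StorageClass 标记为默认；启用该特性后，先前未指定 &lt;code&gt;storageClassName&lt;/code&gt; 的 PVC 也会被追溯地赋予这一默认类：&lt;/p&gt;

```shell
# 生成一个标记为默认的 StorageClass 清单草图（名称与制备器为虚构示例）
printf '%s\n' \
  'apiVersion: storage.k8s.io/v1' \
  'kind: StorageClass' \
  'metadata:' \
  '  name: my-default-sc' \
  '  annotations:' \
  '    storageclass.kubernetes.io/is-default-class: "true"' \
  'provisioner: example.com/provisioner' \
  > default-sc-sketch.yaml
```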
&lt;!--
Kubernetes SIG Storage team is thrilled to announce that the
"Retroactive Default StorageClass Assignment" feature,
introduced as an alpha in Kubernetes v1.25, has now graduated to GA
and is officially part of the Kubernetes v1.28 release.
This enhancement brings a significant improvement to how default
[StorageClasses](/docs/concepts/storage/storage-classes/) are assigned
to PersistentVolumeClaims (PVCs).
--&gt;
&lt;p&gt;Kubernetes SIG Storage 团队非常高兴地宣布，在 Kubernetes v1.25 中作为
Alpha 特性引入的“可追溯默认 StorageClass 赋值”现已进阶至 GA，
并正式成为 Kubernetes v1.28 发行版的一部分。
这项增强特性极大地改进了默认的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-classes/"&gt;StorageClasses&lt;/a&gt;
为 PersistentVolumeClaim (PVC) 赋值的方式。&lt;/p&gt;</description></item><item><title>Kubernetes 1.28: 节点非体面关闭进入 GA 阶段（正式发布）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/16/kubernetes-1-28-non-graceful-node-shutdown-ga/</link><pubDate>Wed, 16 Aug 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/16/kubernetes-1-28-non-graceful-node-shutdown-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA"
date: 2023-08-16T10:00:00-08:00
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
--&gt;
&lt;!--
**Authors:** Xing Yang (VMware) and Ashutosh Kumar (Elastic)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Xing Yang (VMware) 和 Ashutosh Kumar (Elastic)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
It was introduced as
[alpha](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown)
in Kubernetes v1.24, and promoted to
[beta](https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/)
in Kubernetes v1.26.
This feature allows stateful workloads to restart on a different node if the
original node is shutdown unexpectedly or ends up in a non-recoverable state
such as the hardware failure or unresponsive OS.
--&gt;
&lt;p&gt;Kubernetes 节点非体面关闭特性现已在 Kubernetes v1.28 中正式发布。&lt;/p&gt;</description></item><item><title>pkgs.k8s.io：介绍 Kubernetes 社区自有的包仓库</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/</link><pubDate>Tue, 15 Aug 2023 20:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/</guid><description>&lt;!--
layout: blog
title: "pkgs.k8s.io: Introducing Kubernetes Community-Owned Package Repositories"
date: 2023-08-15T20:00:00+0000
slug: pkgs-k8s-io-introduction
author: &gt;
 Marko Mudrinić (Kubermatic)
--&gt;
&lt;!--
On behalf of Kubernetes SIG Release, I am very excited to introduce the
Kubernetes community-owned software
repositories for Debian and RPM packages: `pkgs.k8s.io`! The new package
repositories are replacement for the Google-hosted package repositories
(`apt.kubernetes.io` and `yum.kubernetes.io`) that we've been using since
Kubernetes v1.5.
--&gt;
&lt;p&gt;我很高兴代表 Kubernetes SIG Release 介绍 Kubernetes
社区自有的 Debian 和 RPM 软件包仓库：&lt;code&gt;pkgs.k8s.io&lt;/code&gt;！
这些全新的仓库取代了我们自 Kubernetes v1.5 以来一直使用的托管在
Google 的仓库（&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;）。&lt;/p&gt;</description></item><item><title>聚焦 SIG CLI</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/07/20/sig-cli-spotlight-2023/</link><pubDate>Thu, 20 Jul 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/07/20/sig-cli-spotlight-2023/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG CLI"
date: 2023-07-20
slug: sig-cli-spotlight-2023
canonicalUrl: https://www.kubernetes.dev/blog/2023/07/13/sig-cli-spotlight-2023/
--&gt;
&lt;!--
**Author**: Arpit Agrawal
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Arpit Agrawal&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
In the world of Kubernetes, managing containerized applications at
scale requires powerful and efficient tools. The command-line
interface (CLI) is an integral part of any developer or operator’s
toolkit, offering a convenient and flexible way to interact with a
Kubernetes cluster.
--&gt;
&lt;p&gt;在 Kubernetes 的世界中，大规模管理容器化应用程序需要强大而高效的工具。
命令行界面（CLI）是任何开发人员或运维人员的工具包中不可或缺的一部分，
其提供了一种方便灵活的方式与 Kubernetes 集群交互。&lt;/p&gt;</description></item><item><title>Kubernetes 机密：使用机密虚拟机和安全区来增强你的集群安全性</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/07/06/confidential-kubernetes/</link><pubDate>Thu, 06 Jul 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/07/06/confidential-kubernetes/</guid><description>&lt;!--
layout: blog
title: "Confidential Kubernetes: Use Confidential Virtual Machines and Enclaves to improve your cluster security"
date: 2023-07-06
slug: "confidential-kubernetes"
--&gt;
&lt;!--
**Authors:** Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/asa3311"&gt;顾欣&lt;/a&gt;&lt;/p&gt;
&lt;!--
In this blog post, we will introduce the concept of Confidential Computing (CC) to improve any computing environment's security and privacy properties. Further, we will show how
the Cloud-Native ecosystem, particularly Kubernetes, can benefit from the new compute paradigm.

Confidential Computing is a concept that has been introduced previously in the cloud-native world. The
[Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) is a project community in the Linux Foundation
that already worked on
[Defining and Enabling Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/85/2019/12/CCC_Overview.pdf).
In the [Whitepaper](https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf),
they provide a great motivation for the use of Confidential Computing:

 &gt; Data exists in three states: in transit, at rest, and in use. …Protecting sensitive data
 &gt; in all of its states is more critical than ever. Cryptography is now commonly deployed
 &gt; to provide both data confidentiality (stopping unauthorized viewing) and data integrity
 &gt; (preventing or detecting unauthorized changes). While techniques to protect data in transit
 &gt; and at rest are now commonly deployed, the third state - protecting data in use - is the new frontier.

Confidential Computing aims to primarily solve the problem of **protecting data in use**
by introducing a hardware-enforced Trusted Execution Environment (TEE).
--&gt;
&lt;p&gt;在这篇博客文章中，我们将介绍机密计算（Confidential Computing，简称 CC）的概念，
它可用于增强任何计算环境的安全和隐私属性。此外，我们将展示云原生生态系统，
特别是 Kubernetes，如何从新的计算范式中受益。&lt;/p&gt;</description></item><item><title>在 CRI 运行时内验证容器镜像签名</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/06/29/container-image-signature-verification/</link><pubDate>Thu, 29 Jun 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/06/29/container-image-signature-verification/</guid><description>&lt;!--
layout: blog
title: "Verifying Container Image Signatures Within CRI Runtimes"
date: 2023-06-29
slug: container-image-signature-verification
--&gt;
&lt;!--
**Author**: Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Sascha Grunert&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes community has been signing their container image-based artifacts
since release v1.24. While the graduation of the [corresponding enhancement][kep]
from `alpha` to `beta` in v1.26 introduced signatures for the binary artifacts,
other projects followed the approach by providing image signatures for their
releases, too. This means that they either create the signatures within their
own CI/CD pipelines, for example by using GitHub actions, or rely on the
Kubernetes [image promotion][promo] process to automatically sign the images by
proposing pull requests to the [k/k8s.io][k8s.io] repository. A requirement for
using this process is that the project is part of the `kubernetes` or
`kubernetes-sigs` GitHub organization, so that they can utilize the community
infrastructure for pushing images into staging buckets.
--&gt;
&lt;p&gt;Kubernetes 社区自 v1.24 版本开始对基于容器镜像的工件进行签名。在 v1.26 中，
&lt;a href="https://github.com/kubernetes/enhancements/issues/3031"&gt;相应的增强特性&lt;/a&gt;从 &lt;code&gt;alpha&lt;/code&gt; 进阶至 &lt;code&gt;beta&lt;/code&gt;，引入了针对二进制工件的签名。
其他项目也采用了类似的方法，为其发布版本提供镜像签名。这意味着这些项目要么使用 GitHub actions
在自己的 CI/CD 流程中创建签名，要么依赖于 Kubernetes 的&lt;a href="https://github.com/kubernetes-sigs/promo-tools/blob/e2b96dd/docs/image-promotion.md"&gt;镜像推广&lt;/a&gt;流程，
通过向 &lt;a href="https://github.com/kubernetes/k8s.io/tree/4b95cc2/k8s.gcr.io"&gt;k/k8s.io&lt;/a&gt; 仓库提交 PR 来自动签名镜像。
使用此流程的前提要求是项目必须属于 &lt;code&gt;kubernetes&lt;/code&gt; 或 &lt;code&gt;kubernetes-sigs&lt;/code&gt; GitHub 组织，
这样能够利用社区基础设施将镜像推送到暂存桶中。&lt;/p&gt;</description></item><item><title>dl.k8s.io 采用内容分发网络（CDN）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/06/09/dl-adopt-cdn/</link><pubDate>Fri, 09 Jun 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/06/09/dl-adopt-cdn/</guid><description>&lt;!--
layout: blog
title: "dl.k8s.io to adopt a Content Delivery Network"
date: 2023-06-09
slug: dl-adopt-cdn
--&gt;
&lt;!--
**Authors**: Arnaud Meukam (VMware), Hannah Aubry (Fast Forward), Frederico
Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Arnaud Meukam (VMware), Hannah Aubry (Fast Forward), Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/my-git9"&gt;Xin Li&lt;/a&gt; (Daocloud)&lt;/p&gt;
&lt;!--
We're happy to announce that dl.k8s.io, home of the official Kubernetes
binaries, will soon be powered by [Fastly](https://www.fastly.com).

Fastly is known for its high-performance content delivery network (CDN) designed
to deliver content quickly and reliably around the world. With its powerful
network, Fastly will help us deliver official Kubernetes binaries to users
faster and more reliably than ever before.
--&gt;
&lt;p&gt;我们很高兴地宣布，官方 Kubernetes 二进制文件的主页 dl.k8s.io 很快将由
&lt;a href="https://www.fastly.com"&gt;Fastly&lt;/a&gt; 提供支持。&lt;/p&gt;</description></item><item><title>使用 OCI 工件为 seccomp、SELinux 和 AppArmor 分发安全配置文件</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/24/oci-security-profiles/</link><pubDate>Wed, 24 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/24/oci-security-profiles/</guid><description>&lt;!--
layout: blog
title: "Using OCI artifacts to distribute security profiles for seccomp, SELinux and AppArmor"
date: 2023-05-24
slug: oci-security-profiles
--&gt;
&lt;!--
**Author**: Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Sascha Grunert&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;: &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The [Security Profiles Operator (SPO)][spo] makes managing seccomp, SELinux and
AppArmor profiles within Kubernetes easier than ever. It allows cluster
administrators to define the profiles in a predefined custom resource YAML,
which then gets distributed by the SPO into the whole cluster. Modification and
removal of the security profiles are managed by the operator in the same way,
but that’s a small subset of its capabilities.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator"&gt;Security Profiles Operator (SPO)&lt;/a&gt; 使得在 Kubernetes 中管理
seccomp、SELinux 和 AppArmor 配置文件变得更加容易。
它允许集群管理员在预定义的自定义资源 YAML 中定义配置文件，然后由 SPO 分发到整个集群中。
安全配置文件的修改和移除也由 Operator 以同样的方式进行管理，但这只是其能力的一小部分。&lt;/p&gt;</description></item><item><title>在边缘上玩转 seccomp 配置文件</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/18/seccomp-profiles-edge/</link><pubDate>Thu, 18 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/18/seccomp-profiles-edge/</guid><description>&lt;!--
layout: blog
title: "Having fun with seccomp profiles on the edge"
date: 2023-05-18
slug: seccomp-profiles-edge
--&gt;
&lt;!--
**Author**: Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Sascha Grunert&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;: &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The [Security Profiles Operator (SPO)][spo] is a feature-rich
[operator][operator] for Kubernetes to make managing seccomp, SELinux and
AppArmor profiles easier than ever. Recording those profiles from scratch is one
of the key features of this operator, which usually involves the integration
into large CI/CD systems. Being able to test the recording capabilities of the
operator in edge cases is one of the recent development efforts of the SPO and
makes it excitingly easy to play around with seccomp profiles.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator"&gt;Security Profiles Operator (SPO)&lt;/a&gt; 是一个功能丰富的 Kubernetes
&lt;a href="https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/operator/"&gt;Operator&lt;/a&gt;，可以让 seccomp、SELinux 和 AppArmor 配置文件的管理变得前所未有地容易。
从头开始记录这些配置文件是该 Operator 的关键特性之一，这通常涉及与大型 CI/CD 系统集成。
在边缘场景中测试 Operator 的记录能力是 SPO 的最新开发工作之一，
非常有助于轻松玩转 seccomp 配置文件。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: KMS V2 进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/16/kms-v2-moves-to-beta/</link><pubDate>Tue, 16 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/16/kms-v2-moves-to-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: KMS V2 Moves to Beta"
date: 2023-05-16
slug: kms-v2-moves-to-beta
--&gt;
&lt;!--
**Authors:** Anish Ramasekar, Mo Khan, and Rita Zhang (Microsoft)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Anish Ramasekar, Mo Khan, and Rita Zhang (Microsoft)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
With Kubernetes 1.27, we (SIG Auth) are moving Key Management Service (KMS) v2 API to beta.
--&gt;
&lt;p&gt;在 Kubernetes 1.27 中，我们（SIG Auth）将密钥管理服务（KMS）v2 API 带入 Beta 阶段。&lt;/p&gt;
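&lt;p&gt;作为示意，下面是一个启用 KMS v2 驱动的 &lt;code&gt;EncryptionConfiguration&lt;/code&gt; 配置草例（其中插件名称与套接字路径均为假设值，仅用于说明结构）：&lt;/p&gt;

```yaml
# 示意性的 EncryptionConfiguration 片段；my-kms-plugin 与套接字路径为假设值
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2                 # 使用 KMS v2 API
          name: my-kms-plugin            # 假设的 KMS 插件名称
          endpoint: unix:///tmp/kms.sock # 假设的插件套接字路径
      - identity: {}                     # 回退项：读取未加密的旧数据
```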
&lt;!--
## What is KMS?
One of the first things to consider when securing a Kubernetes cluster is encrypting etcd data at
rest. KMS provides an interface for a provider to utilize a key stored in an external key service to
perform this encryption.
--&gt;
&lt;h2 id="kms-是什么"&gt;KMS 是什么？&lt;/h2&gt;
&lt;p&gt;保护 Kubernetes 集群时首先要考虑的事情之一是加密静态的 etcd 数据。
KMS 为供应商提供了一个接口，以便利用存储在外部密钥服务中的密钥来执行此加密。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：关于加快 Pod 启动的进展</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/15/speed-up-pod-startup/</link><pubDate>Mon, 15 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/15/speed-up-pod-startup/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: updates on speeding up Pod startup"
date: 2023-05-15T00:00:00+0000
slug: speed-up-pod-startup
--&gt;
&lt;!--
**Authors**: Paco Xu (DaoCloud), Sergey Kanzhelev (Google), Ruiwen Zhao (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Paco Xu (DaoCloud), Sergey Kanzhelev (Google), Ruiwen Zhao (Google)
&lt;strong&gt;译者&lt;/strong&gt;：Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
How can Pod start-up be accelerated on nodes in large clusters? This is a common issue that
cluster administrators may face.

This blog post focuses on methods to speed up pod start-up from the kubelet side. It does not
involve the creation time of pods by controller-manager through kube-apiserver, nor does it
include scheduling time for pods or webhooks executed on it.
--&gt;
&lt;p&gt;如何在大型集群中加快节点上的 Pod 启动？这是集群管理员可能面临的常见问题。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: 原地调整 Pod 资源 (alpha)</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/12/in-place-pod-resize-alpha/</link><pubDate>Fri, 12 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/12/in-place-pod-resize-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: In-place Resource Resize for Kubernetes Pods (alpha)"
date: 2023-05-12
slug: in-place-pod-resize-alpha
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Vinay Kulkarni (Kubescaler Labs)&lt;/p&gt;
&lt;!--
**Author:** [Vinay Kulkarni](https://github.com/vinaykul) (Kubescaler Labs)
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href="https://github.com/pacoxu"&gt;Paco Xu&lt;/a&gt; (Daocloud)&lt;/p&gt;
&lt;!--
If you have deployed Kubernetes pods with CPU and/or memory resources
specified, you may have noticed that changing the resource values involves
restarting the pod. This has been a disruptive operation for running
workloads... until now.
--&gt;
&lt;p&gt;如果你部署的 Pod 设置了 CPU 或内存资源，你就可能已经注意到更改资源值会导致 Pod 重新启动。
以前，这对于运行的负载来说是一个破坏性的操作。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：为 NodePort Service 分配端口时避免冲突</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/11/nodeport-dynamic-and-static-allocation/</link><pubDate>Thu, 11 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/11/nodeport-dynamic-and-static-allocation/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Avoid Collisions Assigning Ports to NodePort Services"
date: 2023-05-11
slug: nodeport-dynamic-and-static-allocation
--&gt;
&lt;!--
**Author:** Xu Zhenglun (Alibaba)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Xu Zhenglun (Alibaba)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
In Kubernetes, a Service can be used to provide a unified traffic endpoint for 
applications running on a set of Pods. Clients can use the virtual IP address (or _VIP_) provided
by the Service for access, and Kubernetes provides load balancing for traffic accessing
different back-end Pods, but a ClusterIP type of Service is limited to providing access to
nodes within the cluster, while traffic from outside the cluster cannot be routed.
One way to solve this problem is to use a `type: NodePort` Service, which sets up a mapping
to a specific port of all nodes in the cluster, thus redirecting traffic from the
outside to the inside of the cluster.
--&gt;
&lt;p&gt;在 Kubernetes 中，对于以一组 Pod 运行的应用，Service 可以为其提供统一的流量端点。
客户端可以使用 Service 提供的虚拟 IP 地址（或 &lt;strong&gt;VIP&lt;/strong&gt;）进行访问，
Kubernetes 为访问不同的后端 Pod 的流量提供负载均衡能力，
但 ClusterIP 类型的 Service 仅限于供集群内的节点来访问，
而来自集群外的流量无法被路由。解决这个难题的一种方式是使用 &lt;code&gt;type: NodePort&lt;/code&gt; Service，
这种服务会在集群所有节点上为特定端口建立映射关系，从而将来自集群外的流量重定向到集群内。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：kubectl apply 裁剪更安全、更高效</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/09/introducing-kubectl-applyset-pruning/</link><pubDate>Tue, 09 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/09/introducing-kubectl-applyset-pruning/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Safer, More Performant Pruning in kubectl apply"
date: 2023-05-09
slug: introducing-kubectl-applyset-pruning
--&gt;
&lt;!--
**Authors:** Katrina Verey (independent) and Justin Santa Barbara (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Katrina Verey（独立个人）和 Justin Santa Barbara (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
Declarative configuration management with the `kubectl apply` command is the gold standard approach
to creating or modifying Kubernetes resources. However, one challenge it presents is the deletion
of resources that are no longer needed. In Kubernetes version 1.5, the `--prune` flag was
introduced to address this issue, allowing kubectl apply to automatically clean up previously
applied resources removed from the current configuration.
--&gt;
&lt;p&gt;通过 &lt;code&gt;kubectl apply&lt;/code&gt; 命令执行声明式配置管理是创建或修改 Kubernetes 资源的黄金标准方法。
但这种方法也带来了一个挑战，那就是删除不再需要的资源。
在 Kubernetes 1.5 版本中，引入了 &lt;code&gt;--prune&lt;/code&gt; 标志来解决此问题，
允许 &lt;code&gt;kubectl apply&lt;/code&gt; 自动清理从当前配置中删除的先前应用的资源。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：介绍用于磁盘卷组快照的新 API</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/</link><pubDate>Mon, 08 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Introducing An API For Volume Group Snapshots"
date: 2023-05-08
slug: kubernetes-1-27-volume-group-snapshot-alpha
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt;Xing Yang (VMware)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;: &lt;a href="https://github.com/asa3311"&gt;顾欣&lt;/a&gt;&lt;/p&gt;
&lt;!--
Volume group snapshot is introduced as an Alpha feature in Kubernetes v1.27.
This feature introduces a Kubernetes API that allows users to take crash consistent
snapshots for multiple volumes together. It uses a label selector to group multiple
`PersistentVolumeClaims` for snapshotting.
This new feature is only supported for [CSI](https://kubernetes-csi.github.io/docs/) volume drivers.
--&gt;
&lt;p&gt;磁盘卷组快照在 Kubernetes v1.27 中作为 Alpha 特性被引入。
此特性引入了一个 Kubernetes API，允许用户为多个卷一起生成崩溃一致（crash consistent）的快照。
它使用标签选择器将多个 &lt;code&gt;PersistentVolumeClaims&lt;/code&gt;（持久卷申领）分组以进行快照。
这个新特性仅支持 CSI 卷驱动器。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：内存资源的服务质量（QoS）Alpha</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/05/qos-memory-resources/</link><pubDate>Fri, 05 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/05/qos-memory-resources/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.27: Quality-of-Service for Memory Resources (alpha)'
date: 2023-05-05
slug: qos-memory-resources
--&gt;
&lt;!--
**Authors:** Dixita Narang (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Dixita Narang (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes v1.27, released in April 2023, introduced changes to Memory QoS (alpha) to improve memory management capabilities in Linux nodes.
--&gt;
&lt;p&gt;Kubernetes v1.27 于 2023 年 4 月发布，引入了对内存 QoS（Alpha）的更改，用于提高 Linux 节点中的内存管理功能。&lt;/p&gt;
&lt;!--
Support for Memory QoS was initially added in Kubernetes v1.22, and later some
[limitations](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2570-memory-qos#reasons-for-changing-the-formula-of-memoryhigh-calculation-in-alpha-v127)
around the formula for calculating `memory.high` were identified. These limitations are addressed in Kubernetes v1.27.
--&gt;
&lt;p&gt;对内存 QoS 的支持最初是在 Kubernetes v1.22 中添加的，后来人们发现 &lt;code&gt;memory.high&lt;/code&gt;
的计算公式存在一些&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2570-memory-qos#reasons-for-changing-the-formula-of-memoryhigh-calculation-in-alpha-v127"&gt;不足&lt;/a&gt;。
这些不足在 Kubernetes v1.27 中得到解决。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: StatefulSet PVC 自动删除(beta)</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/04/kubernetes-1-27-statefulset-pvc-auto-deletion-beta/</link><pubDate>Thu, 04 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/04/kubernetes-1-27-statefulset-pvc-auto-deletion-beta/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.27: StatefulSet PVC Auto-Deletion (beta)'
date: 2023-05-04
slug: kubernetes-1-27-statefulset-pvc-auto-deletion-beta
--&gt;
&lt;!--
**Author:** Matthew Cary (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Matthew Cary (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：顾欣 (ICBC)&lt;/p&gt;
&lt;!--
Kubernetes v1.27 graduated to beta a new policy mechanism for
[`StatefulSets`](/docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
their [`PersistentVolumeClaims`](/docs/concepts/storage/persistent-volumes/) (PVCs). The new PVC
retention policy lets users specify if the PVCs generated from the `StatefulSet` spec template should
be automatically deleted or retained when the `StatefulSet` is deleted or replicas in the `StatefulSet`
are scaled down.
--&gt;
&lt;p&gt;Kubernetes v1.27 将一种新的策略机制升级到 Beta 阶段，这一策略用于控制
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/"&gt;&lt;code&gt;StatefulSets&lt;/code&gt;&lt;/a&gt;
的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolumeClaims&lt;/code&gt;&lt;/a&gt;（PVCs）的生命周期。
这种新的 PVC 保留策略允许用户指定当删除 &lt;code&gt;StatefulSet&lt;/code&gt; 或者缩减 &lt;code&gt;StatefulSet&lt;/code&gt; 中的副本时，
是自动删除还是保留从 &lt;code&gt;StatefulSet&lt;/code&gt; 规约模板生成的 PVC。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：HorizontalPodAutoscaler ContainerResource 类型指标进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/02/hpa-container-resource-metric/</link><pubDate>Tue, 02 May 2023 12:00:00 +0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/05/02/hpa-container-resource-metric/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: HorizontalPodAutoscaler ContainerResource type metric moves to beta"
date: 2023-05-02T12:00:00+0800
slug: hpa-container-resource-metric
--&gt;
&lt;!--
**Author:** [Kensei Nakada](https://github.com/sanposhiho) (Mercari)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; &lt;a href="https://github.com/sanposhiho"&gt;Kensei Nakada&lt;/a&gt; (Mercari)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes 1.20 introduced the [`ContainerResource` type metric](/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics)
in HorizontalPodAutoscaler (HPA).

In Kubernetes 1.27, this feature moves to beta and the corresponding feature gate (`HPAContainerMetrics`) gets enabled by default.
--&gt;
&lt;p&gt;Kubernetes 1.20 在 HorizontalPodAutoscaler (HPA) 中引入了
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics"&gt;&lt;code&gt;ContainerResource&lt;/code&gt; 类型指标&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;在 Kubernetes 1.27 中，此特性进阶至 Beta，相应的特性门控 (&lt;code&gt;HPAContainerMetrics&lt;/code&gt;) 默认被启用。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: StatefulSet 启动序号简化了迁移</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/28/statefulset-start-ordinal/</link><pubDate>Fri, 28 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/28/statefulset-start-ordinal/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: StatefulSet Start Ordinal Simplifies Migration"
date: 2023-04-28
slug: statefulset-start-ordinal
--&gt;
&lt;!--
**Author**: Peter Schuurman (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Peter Schuurman (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes v1.26 introduced a new, alpha-level feature for
[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) that controls
the ordinal numbering of Pod replicas. As of Kubernetes v1.27, this feature is
now beta. Ordinals can start from arbitrary
non-negative numbers. This blog post will discuss how this feature can be
used.
--&gt;
&lt;p&gt;Kubernetes v1.26 为 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt;
引入了一个新的 Alpha 级别特性，可以控制 Pod 副本的序号。
从 Kubernetes v1.27 开始，此特性进级到 Beta 阶段。序数可以从任意非负数开始，
这篇博文将讨论如何使用此功能。&lt;/p&gt;</description></item><item><title>官方自动刷新 CVE 订阅源的更新</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/25/k8s-cve-feed-beta/</link><pubDate>Tue, 25 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/25/k8s-cve-feed-beta/</guid><description>&lt;!--
layout: blog
title: Updates to the Auto-refreshing Official CVE Feed
date: 2023-04-25
slug: k8s-cve-feed-beta
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Cailyn Edwards (Shopify), Mahé Tardy (Isovalent), Pushkar Joglekar&lt;/p&gt;
&lt;!--
**Authors**: Cailyn Edwards (Shopify), Mahé Tardy (Isovalent), Pushkar Joglekar
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
Since launching the [Auto-refreshing Official CVE feed](/docs/reference/issues-security/official-cve-feed/) as an alpha
feature in the 1.25 release, we have made significant improvements and updates. We are excited to announce the release of the
beta version of the feed. This blog post will outline the feedback received, the changes made, and talk about how you can help 
as we prepare to make this a stable feature in a future Kubernetes Release.
--&gt;
&lt;p&gt;自从在 1.25 版本中将&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/issues-security/official-cve-feed/"&gt;官方自动刷新 CVE 订阅源&lt;/a&gt;作为 Alpha
特性推出以来，我们已经做了一些重大改进和更新。我们很高兴地宣布该订阅源的 Beta 版现已发布。这篇博文将列举收到的反馈、所做的更改，
还讨论了在未来 Kubernetes 版本中准备使其进阶成为一个稳定功能时你可以如何提供帮助。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：服务器端字段校验和 OpenAPI V3 进阶至 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/24/openapi-v3-field-validation-ga/</link><pubDate>Mon, 24 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/24/openapi-v3-field-validation-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Server Side Field Validation and OpenAPI V3 move to GA"
date: 2023-04-24
slug: openapi-v3-field-validation-ga
--&gt;
&lt;!--
**Author**: Jeffrey Ying (Google), Antoine Pelisse (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Jeffrey Ying (Google), Antoine Pelisse (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
Before Kubernetes v1.8 (!), typos, mis-indentations or minor errors in
YAMLs could have catastrophic consequences (e.g. a typo like
forgetting the trailing s in `replica: 1000` could cause an outage,
because the value would be ignored and missing, forcing a reset of
replicas back to 1). This was solved back then by fetching the OpenAPI
v2 in kubectl and using it to verify that fields were correct and
present before applying. Unfortunately, at that time, Custom Resource
Definitions didn’t exist, and the code was written under that
assumption. When CRDs were later introduced, the lack of flexibility
in the validation code forced some hard decisions in the way CRDs
exposed their schema, leaving us in a cycle of bad validation causing
bad OpenAPI and vice-versa. With the new OpenAPI v3 and Server Field
Validation being GA in 1.27, we’ve now solved both of these problems.
--&gt;
&lt;p&gt;在 Kubernetes v1.8 之前，YAML 文件中的拼写错误、缩进错误或其他小错误可能会产生灾难性后果
（例如在 &lt;code&gt;replica: 1000&lt;/code&gt; 中忘记结尾字母 “s” 这样的拼写错误就可能导致服务中断，
因为该值会被忽略而丢失，副本数被强制重置回 1）。当时解决这个问题的办法是：
在 kubectl 中获取 OpenAPI v2 并在应用之前使用 OpenAPI v2 来校验字段是否正确且存在。
不过当时没有自定义资源定义 (CRD)，相关代码是在当时那样的假设下编写的。
之后引入了 CRD，校验代码缺乏灵活性的问题迫使 CRD 在公开其模式定义的方式上做出了一些艰难的决策，
使我们陷入了不良校验造成不良 OpenAPI、不良 OpenAPI 又造成不良校验的恶性循环。
随着新的 OpenAPI v3 和服务器端字段校验在 1.27 中进阶至 GA，我们现在已经解决了这两个问题。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: 使用 Kubelet API 查询节点日志</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/21/node-log-query-alpha/</link><pubDate>Fri, 21 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/21/node-log-query-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Query Node Logs Using The Kubelet API"
date: 2023-04-21
slug: node-log-query-alpha
--&gt;
&lt;!--
**Author:** Aravindh Puthiyaparambil (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Aravindh Puthiyaparambil (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes 1.27 introduced a new feature called _Node log query_ that allows
viewing logs of services running on the node.
--&gt;
&lt;p&gt;Kubernetes 1.27 引入了一个名为&lt;strong&gt;节点日志查询&lt;/strong&gt;的新功能，
可以查看节点上运行的服务的日志。&lt;/p&gt;
&lt;!--
## What problem does it solve?
Cluster administrators face issues when debugging malfunctioning services
running on the node. They usually have to SSH or RDP into the node to view the
logs of the service to debug the issue. The _Node log query_ feature helps with
this scenario by allowing the cluster administrator to view the logs using
_kubectl_. This is especially useful with Windows nodes where you run into the
issue of the node going to the ready state but containers not coming up due to
CNI misconfigurations and other issues that are not easily identifiable by
looking at the Pod status.
--&gt;
&lt;h2 id="它解决了什么问题"&gt;它解决了什么问题？&lt;/h2&gt;
&lt;p&gt;集群管理员在调试节点上运行异常的服务时会遇到麻烦：
他们通常必须通过 SSH 或 RDP 进入节点，查看服务日志来定位问题。
&lt;strong&gt;节点日志查询&lt;/strong&gt;功能允许集群管理员直接使用 &lt;strong&gt;kubectl&lt;/strong&gt;
查看这些日志，从而帮助应对这种情况。这对 Windows 节点尤其有用：
在这类节点上，你会遇到节点已进入就绪状态、但由于 CNI
错误配置和其他不易通过查看 Pod 状态来辨别的问题而导致容器无法启动的情况。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：持久卷的单个 Pod 访问模式升级到 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/20/read-write-once-pod-access-mode-beta/</link><pubDate>Thu, 20 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/20/read-write-once-pod-access-mode-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Single Pod Access Mode for PersistentVolumes Graduates to Beta"
date: 2023-04-20
slug: read-write-once-pod-access-mode-beta
--&gt;
&lt;!--
**Author:** Chris Henzie (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Chris Henzie (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：顾欣 (ICBC)&lt;/p&gt;
&lt;!--
With the release of Kubernetes v1.27 the ReadWriteOncePod feature has graduated
to beta. In this blog post, we'll take a closer look at this feature, what it
does, and how it has evolved in the beta release.
--&gt;
&lt;p&gt;随着 Kubernetes v1.27 的发布，ReadWriteOncePod 功能已经升级为 Beta 版。
在这篇博客文章中，我们将更详细地介绍此特性：它的作用以及它在 Beta 版本中的演进。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：高效的 SELinux 卷重新标记（Beta 版）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/18/kubernetes-1-27-efficient-selinux-relabeling-beta/</link><pubDate>Tue, 18 Apr 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/18/kubernetes-1-27-efficient-selinux-relabeling-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: Efficient SELinux volume relabeling (Beta)"
date: 2023-04-18T10:00:00-08:00
slug: kubernetes-1-27-efficient-selinux-relabeling-beta
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Jan Šafránek (Red Hat)&lt;/p&gt;
&lt;!--
**Author:** Jan Šafránek (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
## The problem
--&gt;
&lt;h2 id="the-problem"&gt;问题&lt;/h2&gt;
&lt;!--
On Linux with Security-Enhanced Linux (SELinux) enabled, it's traditionally the container runtime that applies SELinux labels to a Pod and all its volumes. Kubernetes only passes the SELinux label from a Pod's `securityContext` fields to the container runtime.
--&gt;
&lt;p&gt;在启用了安全增强式 Linux（Security-Enhanced Linux，SELinux）的系统上，传统做法是由容器运行时为
Pod 及其所有卷应用 SELinux 标签。
Kubernetes 仅将 SELinux 标签从 Pod 的 &lt;code&gt;securityContext&lt;/code&gt; 字段传递给容器运行时。&lt;/p&gt;</description></item><item><title>Kubernetes 1.27：更多精细粒度的 Pod 拓扑分布策略进阶至 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/</link><pubDate>Mon, 17 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.27: More fine-grained pod topology spread policies reached beta"
date: 2023-04-17
slug: fine-grained-pod-topology-spread-features-beta
--&gt;
&lt;!--
**Authors:** [Alex Wang](https://github.com/denkensk) (Shopee), [Kante Yin](https://github.com/kerthcet) (DaoCloud), [Kensei Nakada](https://github.com/sanposhiho) (Mercari)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; &lt;a href="https://github.com/denkensk"&gt;Alex Wang&lt;/a&gt; (Shopee),
&lt;a href="https://github.com/kerthcet"&gt;Kante Yin&lt;/a&gt; (DaoCloud),
&lt;a href="https://github.com/sanposhiho"&gt;Kensei Nakada&lt;/a&gt; (Mercari)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href="https://github.com/windsonsea"&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
In Kubernetes v1.19, [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
went to general availability (GA).

As time passed, we - SIG Scheduling - received feedback from users,
and, as a result, we're actively working on improving the Topology Spread feature via three KEPs.
All of these features have reached beta in Kubernetes v1.27 and are enabled by default.

This blog post introduces each feature and the use case behind each of them.
--&gt;
&lt;p&gt;在 Kubernetes v1.19 中，
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/"&gt;Pod 拓扑分布约束&lt;/a&gt;进阶至正式发布 (GA)。&lt;/p&gt;</description></item><item><title>“使用更新后的 Go 版本保持 Kubernetes 安全”</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/06/keeping-kubernetes-secure-with-updated-go-versions/</link><pubDate>Thu, 06 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/04/06/keeping-kubernetes-secure-with-updated-go-versions/</guid><description>&lt;!--
layout: blog
title: "Keeping Kubernetes Secure with Updated Go Versions"
date: 2023-04-06
slug: keeping-kubernetes-secure-with-updated-go-versions
--&gt;
&lt;!--
**Author**: [Jordan Liggitt](https://github.com/liggitt) (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：&lt;a href="https://github.com/liggitt"&gt;Jordan Liggitt&lt;/a&gt; (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：顾欣 (ICBC)&lt;/p&gt;
&lt;h3 id="the-problem"&gt;问题&lt;/h3&gt;
&lt;!--
Since v1.19 (released in 2020), the Kubernetes project provides 12-14 months of patch releases for each minor version.
This enables users to qualify and adopt Kubernetes versions in an annual upgrade cycle and receive security fixes for a year.
--&gt;
&lt;p&gt;从 2020 年发布的 v1.19 版本以来，Kubernetes 项目为每个次要版本提供 12-14 个月的补丁维护期。
这使得用户可以按照年度升级周期来评估和选用 Kubernetes 版本，并持续一年获得安全修复。&lt;/p&gt;</description></item><item><title>Kubernetes 验证准入策略：一个真实示例</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/30/kubescape-validating-admission-policy-library/</link><pubDate>Thu, 30 Mar 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/30/kubescape-validating-admission-policy-library/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Validating Admission Policies: A Practical Example"
date: 2023-03-30T00:00:00+0000
slug: kubescape-validating-admission-policy-library
--&gt;
&lt;!--
**Authors**: Craig Box (ARMO), Ben Hirschberg (ARMO)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Craig Box (ARMO), Ben Hirschberg (ARMO)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xiaoyang Zhang (Huawei)&lt;/p&gt;
&lt;!--
Admission control is an important part of the Kubernetes control plane, with several internal
features depending on the ability to approve or change an API object as it is submitted to the
server. It is also useful for an administrator to be able to define business logic, or policies,
regarding what objects can be admitted into a cluster. To better support that use case, [Kubernetes
introduced external admission control in
v1.7](/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/).
--&gt;
&lt;p&gt;准入控制是 Kubernetes 控制平面的重要组成部分，其中多项内部功能依赖于在 API 对象提交到服务器时对其进行批准或更改的能力。
对于管理员来说，能够针对哪些对象可以进入集群定义业务逻辑（即策略）也很有用。为了更好地支持该场景，
&lt;a href="https://andygol-k8s.netlify.app/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/"&gt;Kubernetes 在 v1.7 中引入了外部准入控制&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>Kubernetes 在 v1.27 中移除的特性和主要变更</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/17/upcoming-changes-in-kubernetes-v1-27/</link><pubDate>Fri, 17 Mar 2023 14:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/17/upcoming-changes-in-kubernetes-v1-27/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Removals and Major Changes In v1.27"
date: 2023-03-17T14:00:00+0000
slug: upcoming-changes-in-kubernetes-v1-27
--&gt;
&lt;!--
**Author**: Harshita Sao
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Harshita Sao&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
As Kubernetes develops and matures, features may be deprecated, removed, or replaced
with better ones for the project's overall health. Based on the information available
at this point in the v1.27 release process, which is still ongoing and can introduce
additional changes, this article identifies and describes some of the planned changes
for the Kubernetes v1.27 release.
--&gt;
&lt;p&gt;随着 Kubernetes 的发展和成熟，为了项目的整体健康，某些特性可能会被弃用、移除或被更好的特性取代。
本文基于 v1.27 发布流程中目前可获得的信息，列举并描述一些计划在 Kubernetes v1.27 版本中进行的变更，
发布工作目前仍在进行中，可能会引入更多变更。&lt;/p&gt;</description></item><item><title>k8s.gcr.io 重定向到 registry.k8s.io - 用户须知</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/10/image-registry-redirect/</link><pubDate>Fri, 10 Mar 2023 17:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/10/image-registry-redirect/</guid><description>&lt;!--
layout: blog
title: "k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know"
date: 2023-03-10T17:00:00.000Z
slug: image-registry-redirect
--&gt;
&lt;!--
**Authors**: Bob Killen (Google), Davanum Srinivas (AWS), Chris Short (AWS), Frederico Muñoz (SAS
Institute), Tim Bannister (The Scale Factory), Ricky Sadowski (AWS), Grace Nguyen (Expo), Mahamed
Ali (Rackspace Technology), Mars Toktonaliev (independent), Laura Santamaria (Dell), Kat Cosgrove
(Dell)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Bob Killen (Google)、Davanum Srinivas (AWS)、Chris Short (AWS)、Frederico Muñoz (SAS
Institute)、Tim Bannister (The Scale Factory)、Ricky Sadowski (AWS)、Grace Nguyen (Expo)、Mahamed
Ali (Rackspace Technology)、Mars Toktonaliev（独立个人）、Laura Santamaria (Dell)、Kat Cosgrove
(Dell)&lt;/p&gt;</description></item><item><title>Kubernetes 的容器检查点分析</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/10/forensic-container-analysis/</link><pubDate>Fri, 10 Mar 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/10/forensic-container-analysis/</guid><description>&lt;!--
layout: blog
title: "Forensic container analysis"
date: 2023-03-10
slug: forensic-container-analysis
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt;&lt;a href="https://github.com/adrianreber"&gt;Adrian Reber&lt;/a&gt; (Red Hat)&lt;/p&gt;
&lt;!--
**Authors:** Adrian Reber (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt;&lt;a href="https://github.com/pacoxu"&gt;Paco Xu&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!-- 
In my previous article, [Forensic container checkpointing in
Kubernetes][forensic-blog], I introduced checkpointing in Kubernetes
and how it has to be setup and how it can be used. The name of the
feature is Forensic container checkpointing, but I did not go into
any details how to do the actual analysis of the checkpoint created by
Kubernetes. In this article I want to provide details how the
checkpoint can be analyzed.
--&gt;
&lt;p&gt;我之前在 &lt;a href="https://kubernetes.io/zh-cn/blog/2022/12/05/forensic-container-checkpointing-alpha/"&gt;Kubernetes 中的取证容器检查点&lt;/a&gt;一文中介绍了 Kubernetes 中的检查点机制，以及如何配置和使用它。
该特性的名称是取证容器检查点，但我没有详细介绍如何对 Kubernetes 创建的检查点进行实际分析。
在本文中，我想提供如何分析检查点的详细信息。&lt;/p&gt;</description></item><item><title>介绍 KWOK（Kubernetes WithOut Kubelet，没有 Kubelet 的 Kubernetes）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/01/introducing-kwok/</link><pubDate>Wed, 01 Mar 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/03/01/introducing-kwok/</guid><description>&lt;!--
layout: blog
title: "Introducing KWOK: Kubernetes WithOut Kubelet"
date: 2023-03-01
slug: introducing-kwok
canonicalUrl: https://kubernetes.dev/blog/2023/03/01/introducing-kwok/
--&gt;
&lt;!--
**Author:** Shiming Zhang (DaoCloud), Wei Huang (Apple), Yibo Zhuang (Apple)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Shiming Zhang (DaoCloud), Wei Huang (Apple), Yibo Zhuang (Apple)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Michael Yao (DaoCloud)&lt;/p&gt;
&lt;img style="float: right; display: inline-block; margin-left: 2em; max-width: 15em;" src="https://andygol-k8s.netlify.app/blog/2023/03/01/introducing-kwok/kwok.svg" alt="KWOK logo" /&gt;
&lt;!--
Have you ever wondered how to set up a cluster of thousands of nodes just in seconds, how to simulate real nodes with a low resource footprint, and how to test your Kubernetes controller at scale without spending much on infrastructure?

If you answered "yes" to any of these questions, then you might be interested in KWOK, a toolkit that enables you to create a cluster of thousands of nodes in seconds.
--&gt;
&lt;p&gt;你是否曾想过如何在几秒钟内搭建一个由数千个节点构成的集群，如何用少量资源模拟真实的节点，
如何不耗费太多基础设施就能大规模地测试你的 Kubernetes 控制器？&lt;/p&gt;</description></item><item><title>免费的 Katacoda Kubernetes 教程即将关闭</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/02/14/kubernetes-katacoda-tutorials-stop-from-2023-03-31/</link><pubDate>Tue, 14 Feb 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/02/14/kubernetes-katacoda-tutorials-stop-from-2023-03-31/</guid><description>&lt;!--
layout: blog
title: "Free Katacoda Kubernetes Tutorials Are Shutting Down"
date: 2023-02-14
slug: kubernetes-katacoda-tutorials-stop-from-2023-03-31
evergreen: true
--&gt;
&lt;!--
**Author**: Natali Vlatko, SIG Docs Co-Chair for Kubernetes
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Natali Vlatko，Kubernetes SIG Docs 联合主席&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
[Katacoda](https://katacoda.com/kubernetes), the popular learning platform from O’Reilly that has been helping people learn all about 
Java, Docker, Kubernetes, Python, Go, C++, and more, [shut down for public use in June 2022](https://www.oreilly.com/online-learning/leveraging-katacoda-technology.html). 
However, tutorials specifically for Kubernetes, linked from the Kubernetes website for our project’s 
users and contributors, remained available and active after this change. Unfortunately, this will no 
longer be the case, and Katacoda tutorials for learning Kubernetes will cease working after March 31st, 2023.
--&gt;
&lt;p&gt;&lt;a href="https://katacoda.com/kubernetes"&gt;Katacoda&lt;/a&gt; 是 O’Reilly 开设的热门学习平台，
帮助人们学习 Java、Docker、Kubernetes、Python、Go、C++ 和其他更多内容，
这个学习平台于 &lt;a href="https://www.oreilly.com/online-learning/leveraging-katacoda-technology.html"&gt;2022 年 6 月停止对公众开放&lt;/a&gt;。
但是，Kubernetes 网站上为本项目用户和贡献者链接的 Kubernetes 专门教程在那次变更后仍然可用并处于活跃状态。
遗憾的是，接下来情况将发生变化，Katacoda 上有关学习 Kubernetes 的教程将在 2023 年 3 月 31 日之后彻底关闭。&lt;/p&gt;</description></item><item><title>k8s.gcr.io 镜像仓库将从 2023 年 4 月 3 日起被冻结</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/02/06/k8s-gcr-io-freeze-announcement/</link><pubDate>Mon, 06 Feb 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/02/06/k8s-gcr-io-freeze-announcement/</guid><description>&lt;!--
layout: blog
title: "k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023"
date: 2023-02-06
slug: k8s-gcr-io-freeze-announcement
--&gt;
&lt;!--
**Authors**: Mahamed Ali (Rackspace Technology)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Mahamed Ali (Rackspace Technology)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes project runs a community-owned image registry called `registry.k8s.io`
to host its container images. On the 3rd of April 2023, the old registry `k8s.gcr.io`
will be frozen and no further images for Kubernetes and related subprojects will be
pushed to the old registry.

This registry `registry.k8s.io` replaced the old one and has been generally available
for several months. We have published a [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/)
about its benefits to the community and the Kubernetes project. This post also
announced that future versions of Kubernetes will not be available in the old
registry. Now that time has come.
--&gt;
&lt;p&gt;Kubernetes 项目运行一个名为 &lt;code&gt;registry.k8s.io&lt;/code&gt;、由社区管理的镜像仓库来托管其容器镜像。
2023 年 4 月 3 日，旧仓库 &lt;code&gt;k8s.gcr.io&lt;/code&gt; 将被冻结，Kubernetes 及其相关子项目的镜像将不再推送到这个旧仓库。&lt;/p&gt;</description></item><item><title>聚光灯下的 SIG Instrumentation</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/02/03/sig-instrumentation-spotlight-2023/</link><pubDate>Fri, 03 Feb 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/02/03/sig-instrumentation-spotlight-2023/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Instrumentation"
slug: sig-instrumentation-spotlight-2023
date: 2023-02-03
canonicalUrl: https://www.kubernetes.dev/blog/2023/02/03/sig-instrumentation-spotlight-2023/
--&gt;
&lt;!--
**Author:** Imran Noor Mohamed (Delivery Hero)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Imran Noor Mohamed (Delivery Hero)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;: &lt;a href="https://github.com/kevin1689-cloud"&gt;Kevin Yang&lt;/a&gt;&lt;/p&gt;
&lt;!--
Observability requires the right data at the right time for the right consumer
(human or piece of software) to make the right decision. In the context of Kubernetes,
having best practices for cluster observability across all Kubernetes components is crucial.
--&gt;
&lt;p&gt;可观测性需要在合适的时间提供合适的数据，以便合适的消费者（人员或软件）做出正确的决策。
在 Kubernetes 的环境中，拥有跨所有 Kubernetes 组件的集群可观测性最佳实践是至关重要的。&lt;/p&gt;</description></item><item><title>考虑所有微服务的脆弱性并对其行为进行监控</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/20/security-behavior-analysis/</link><pubDate>Fri, 20 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/20/security-behavior-analysis/</guid><description>&lt;!-- 
layout: blog
title: Consider All Microservices Vulnerable — And Monitor Their Behavior
date: 2023-01-20
slug: security-behavior-analysis
--&gt;
&lt;!--
**Author:**
David Hadas (IBM Research Labs)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：David Hadas (IBM Research Labs)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
_This post warns Devops from a false sense of security. Following security best practices when developing and configuring microservices do not result in non-vulnerable microservices. The post shows that although all deployed microservices are vulnerable, there is much that can be done to ensure microservices are not exploited. It explains how analyzing the behavior of clients and services from a security standpoint, named here **"Security-Behavior Analytics"**, can protect the deployed vulnerable microservices. It points to [Guard](http://knative.dev/security-guard), an open source project offering security-behavior monitoring and control of Kubernetes microservices presumed vulnerable._
--&gt;
&lt;p&gt;&lt;em&gt;本文提醒 DevOps 人员不要抱有虚假的安全感。开发和配置微服务时遵循安全最佳实践，并不能让微服务变得不易被攻击。
本文说明，即使所有已部署的微服务都容易被攻击，仍然可以采取很多措施来确保微服务不被利用。
本文解释了如何从安全角度对客户端和服务的行为进行分析（此处称为&lt;strong&gt;“安全行为分析”&lt;/strong&gt;），
以此保护已部署的易被攻击的微服务。本文还会引用 &lt;a href="http://knative.dev/security-guard"&gt;Guard&lt;/a&gt;，
一个开源项目，对假定易被攻击的 Kubernetes 微服务的安全行为提供监测与控制。&lt;/em&gt;&lt;/p&gt;</description></item><item><title>使用 PriorityClass 确保你的关键任务 Pod 免遭驱逐</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/12/protect-mission-critical-pods-priorityclass/</link><pubDate>Thu, 12 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/12/protect-mission-critical-pods-priorityclass/</guid><description>&lt;!--
layout: blog
title: "Protect Your Mission-Critical Pods From Eviction With PriorityClass"
date: 2023-01-12
slug: protect-mission-critical-pods-priorityclass
description: "Pod priority and preemption help to make sure that mission-critical pods are up in the event of a resource crunch by deciding order of scheduling and eviction."
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Sunny Bhambhani (InfraCloud Technologies)&lt;/p&gt;
&lt;!--
**Author:** Sunny Bhambhani (InfraCloud Technologies)
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes has been widely adopted, and many organizations use it as their de-facto orchestration engine for running workloads that need to be created and deleted frequently.
--&gt;
&lt;p&gt;Kubernetes 已被广泛使用，许多组织将其用作事实上的编排引擎，用于运行需要频繁被创建和删除的工作负载。&lt;/p&gt;</description></item><item><title>Kubernetes 1.26：PodDisruptionBudget 守护的不健康 Pod 所用的驱逐策略</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/06/unhealthy-pod-eviction-policy-for-pdbs/</link><pubDate>Fri, 06 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/06/unhealthy-pod-eviction-policy-for-pdbs/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.26: Eviction policy for unhealthy pods guarded by PodDisruptionBudgets"
date: 2023-01-06
slug: "unhealthy-pod-eviction-policy-for-pdbs"
--&gt;
&lt;!--
**Authors:** Filip Křepinský (Red Hat), Morten Torkildsen (Google), Ravi Gudimetla (Apple)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Filip Křepinský (Red Hat), Morten Torkildsen (Google), Ravi Gudimetla (Apple)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
Ensuring the disruptions to your applications do not affect its availability isn't a simple
task. Last month's release of Kubernetes v1.26 lets you specify an _unhealthy pod eviction policy_
for [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) (PDBs)
to help you maintain that availability during node management operations.
In this article, we will dive deeper into what modifications were introduced for PDBs to
give application owners greater flexibility in managing disruptions.
--&gt;
&lt;p&gt;确保对应用的干扰不影响其可用性并非易事。
上个月发布的 Kubernetes v1.26 允许针对
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets"&gt;PodDisruptionBudget&lt;/a&gt; (PDB)
指定&lt;strong&gt;不健康 Pod 驱逐策略&lt;/strong&gt;，这有助于在节点执行管理操作期间保持可用性。&lt;/p&gt;</description></item><item><title>Kubernetes v1.26：可追溯的默认 StorageClass</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/05/retroactive-default-storage-class/</link><pubDate>Thu, 05 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/05/retroactive-default-storage-class/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.26: Retroactive Default StorageClass"
date: 2023-01-05
slug: retroactive-default-storage-class
--&gt;
&lt;!--
**Author:** Roman Bednář (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Roman Bednář (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
The v1.25 release of Kubernetes introduced an alpha feature to change how a default
StorageClass was assigned to a PersistentVolumeClaim (PVC). With the feature enabled,
you no longer need to create a default StorageClass first and PVC second to assign the
class. Additionally, any PVCs without a StorageClass assigned can be updated later.
This feature was graduated to beta in Kubernetes v1.26.
--&gt;
&lt;p&gt;Kubernetes v1.25 引入了一个 Alpha 特性来更改默认 StorageClass 被分配到 PersistentVolumeClaim (PVC) 的方式。
启用此特性后，你不再需要先创建默认 StorageClass，再创建 PVC 来分配类。
此外，任何未分配 StorageClass 的 PVC 都可以在后续被更新。此特性在 Kubernetes v1.26 中已进阶至 Beta。&lt;/p&gt;</description></item><item><title>Kubernetes v1.26：对跨名字空间存储数据源的 Alpha 支持</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/02/cross-namespace-data-sources-alpha/</link><pubDate>Mon, 02 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2023/01/02/cross-namespace-data-sources-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.26: Alpha support for cross-namespace storage data sources"
date: 2023-01-02
slug: cross-namespace-data-sources-alpha
--&gt;
&lt;!--
**Author:** Takafumi Takahashi (Hitachi Vantara)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Takafumi Takahashi (Hitachi Vantara)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes v1.26, released last month, introduced an alpha feature that
lets you specify a data source for a PersistentVolumeClaim, even where the source
data belong to a different namespace.
With the new feature enabled, you specify a namespace in the `dataSourceRef` field of
a new PersistentVolumeClaim. Once Kubernetes checks that access is OK, the new
PersistentVolume can populate its data from the storage source specified in that other
namespace.
Before Kubernetes v1.26, provided your cluster had the `AnyVolumeDataSource` feature enabled,
you could already provision new volumes from a data source in the **same**
namespace.
However, that only worked for the data source in the same namespace,
therefore users couldn't provision a PersistentVolume with a claim
in one namespace from a data source in other namespace.
To solve this problem, Kubernetes v1.26 added a new alpha `namespace` field
to the `dataSourceRef` field in the PersistentVolumeClaim API.
--&gt;
&lt;p&gt;上个月发布的 Kubernetes v1.26 引入了一个 Alpha 特性，允许你在源数据属于不同的名字空间时为
PersistentVolumeClaim 指定数据源。启用这个新特性后，你在新 PersistentVolumeClaim 的
&lt;code&gt;dataSourceRef&lt;/code&gt; 字段中指定名字空间。一旦 Kubernetes 发现访问权限正常，新的 PersistentVolume
就可以从其他名字空间中指定的存储源填充其数据。在 Kubernetes v1.26 之前，如果集群已启用了
&lt;code&gt;AnyVolumeDataSource&lt;/code&gt; 特性，你已经可以从&lt;strong&gt;相同的&lt;/strong&gt;名字空间中的数据源制备新卷。
但这仅适用于同一名字空间中的数据源，因此用户无法基于一个名字空间中的数据源使用另一个名字空间中的声明来制备
PersistentVolume。为了解决这个问题，Kubernetes v1.26 在 PersistentVolumeClaim API 的
&lt;code&gt;dataSourceRef&lt;/code&gt; 字段中添加了一个新的 Alpha &lt;code&gt;namespace&lt;/code&gt; 字段。&lt;/p&gt;</description></item><item><title>Kubernetes v1.26：Kubernetes 中流量工程的进步</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/</link><pubDate>Fri, 30 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.26: Advancements in Kubernetes Traffic Engineering"
date: 2022-12-30
slug: advancements-in-kubernetes-traffic-engineering
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Andrew Sy Kim (Google)&lt;/p&gt;
&lt;!--
**Authors:** Andrew Sy Kim (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of two features (Service internal traffic policy support, and EndpointSlice terminating conditions) to GA, and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims to address short-comings in traffic engineering that people face today, and unlock new capabilities for the future.
--&gt;
&lt;p&gt;Kubernetes v1.26 在网络流量工程方面取得了重大进展，
两项功能（服务内部流量策略支持和 EndpointSlice 终止状况）升级为正式发布版本，
第三项功能（代理终止端点）升级为 Beta。这些增强功能的结合旨在解决人们目前所面临的流量工程短板，并在未来解锁新的功能。&lt;/p&gt;</description></item><item><title>Kubernetes v1.26：CPUManager 正式发布</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/27/cpumanager-ga/</link><pubDate>Tue, 27 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/27/cpumanager-ga/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.26: CPUManager goes GA'
date: 2022-12-27
slug: cpumanager-ga
--&gt;
&lt;!--
**Author:**
Francesco Romani (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Francesco Romani (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!--
The CPU Manager is a part of the kubelet, the Kubernetes node agent, which enables the user to allocate exclusive CPUs to containers.
Since Kubernetes v1.10, where it [graduated to Beta](/blog/2018/07/24/feature-highlight-cpu-manager/), the CPU Manager proved itself reliable and
fulfilled its role of allocating exclusive CPUs to containers, so adoption has steadily grown making it a staple component of performance-critical
and low-latency setups. Over time, most changes were about bugfixes or internal refactoring, with the following noteworthy user-visible changes:
--&gt;
&lt;p&gt;CPU 管理器是 kubelet 的一部分；kubelet 是 Kubernetes 的节点代理，能够让用户给容器分配独占 CPU。
CPU 管理器自从 Kubernetes v1.10 &lt;a href="https://andygol-k8s.netlify.app/blog/2018/07/24/feature-highlight-cpu-manager/"&gt;进阶至 Beta&lt;/a&gt;，
已证明了自身的可靠性，能够充分胜任为容器分配独占 CPU 的职责，因此采用率稳步增长，
使其成为性能关键型和低延迟场景的基本组件。随着时间的推移，大多数变更均与错误修复或内部重构有关，
以下列出了几个值得关注、用户可见的变更：&lt;/p&gt;</description></item><item><title>Kubernetes 1.26：Pod 调度就绪态</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/26/pod-scheduling-readiness-alpha/</link><pubDate>Mon, 26 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/26/pod-scheduling-readiness-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.26: Pod Scheduling Readiness"
date: 2022-12-26
slug: pod-scheduling-readiness-alpha
--&gt;
&lt;!--
**Author:** Wei Huang (Apple), Abdullah Gharaibeh (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Wei Huang (Apple), Abdullah Gharaibeh (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; XiaoYang Zhang (HuaWei)&lt;/p&gt;
&lt;!--
Kubernetes 1.26 introduced a new Pod feature: _scheduling gates_. In Kubernetes, scheduling gates
are keys that tell the scheduler when a Pod is ready to be considered for scheduling.
--&gt;
&lt;p&gt;Kubernetes 1.26 引入了一个新的 Pod 特性：&lt;strong&gt;调度门控&lt;/strong&gt;。
在 Kubernetes 中，调度门控是一些键，用于告知调度器何时可以考虑对某个 Pod 进行调度。&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: 支持在挂载时将 Pod fsGroup 传递给 CSI 驱动程序</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/23/kubernetes-12-06-fsgroup-on-mount/</link><pubDate>Fri, 23 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/23/kubernetes-12-06-fsgroup-on-mount/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time"
date: 2022-12-23
slug: kubernetes-12-06-fsgroup-on-mount
--&gt;
&lt;!--
**Authors:** Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Delegation of `fsGroup` to CSI drivers was first introduced as alpha in Kubernetes 1.22,
and graduated to beta in Kubernetes 1.25.
For Kubernetes 1.26, we are happy to announce that this feature has graduated to
General Availability (GA). 

In this release, if you specify a `fsGroup` in the
[security context](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod),
for a (Linux) Pod, all processes in the pod's containers are part of the additional group
that you specified.
--&gt;
&lt;p&gt;将 &lt;code&gt;fsGroup&lt;/code&gt; 委托给 CSI 驱动程序管理首先在 Kubernetes 1.22 中作为 Alpha 特性引入，
并在 Kubernetes 1.25 中进阶至 Beta 状态。
对于 Kubernetes 1.26，我们很高兴地宣布此特性已进阶至正式发布（GA）状态。&lt;/p&gt;</description></item><item><title>Kubernetes 1.26：设备管理器正式发布</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/19/devicemanager-ga/</link><pubDate>Mon, 19 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/19/devicemanager-ga/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.26: Device Manager graduates to GA'
date: 2022-12-19
slug: devicemanager-ga
--&gt;
&lt;!--
**Author:** Swati Sehgal (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Swati Sehgal (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Jin Li (UOS)&lt;/p&gt;
&lt;!--
The Device Plugin framework was introduced in the Kubernetes v1.8 release as a vendor
independent framework to enable discovery, advertisement and allocation of external
devices without modifying core Kubernetes. The feature graduated to Beta in v1.10.
With the recent release of Kubernetes v1.26, Device Manager is now generally
available (GA).
--&gt;
&lt;p&gt;设备插件框架是在 Kubernetes v1.8 版本中引入的，它是一个与供应商无关的框架，
旨在实现对外部设备的发现、公布和分配，而无需修改核心 Kubernetes。
该功能在 v1.10 版本中升级为 Beta 版本。随着 Kubernetes v1.26 的最新发布，
设备管理器现已正式发布（GA）。&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: 节点非体面关闭进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/</link><pubDate>Fri, 16 Dec 2022 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.26: Non-Graceful Node Shutdown Moves to Beta"
date: 2022-12-16T10:00:00-08:00
slug: kubernetes-1-26-non-graceful-node-shutdown-beta
--&gt;
&lt;!--
**Author:** Xing Yang (VMware), Ashutosh Kumar (VMware)

Kubernetes v1.24 [introduced](https://kubernetes.io/blog/2022/05/20/kubernetes-1-24-non-graceful-node-shutdown-alpha/) an alpha quality implementation of improvements
for handling a [non-graceful node shutdown](/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).
In Kubernetes v1.26, this feature moves to beta. This feature allows stateful workloads to failover to a different node after the original node is shut down or in a non-recoverable state, such as the hardware failure or broken OS.
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Xing Yang (VMware), Ashutosh Kumar (VMware)&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: 动态资源分配 Alpha API</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/15/dynamic-resource-allocation/</link><pubDate>Thu, 15 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/15/dynamic-resource-allocation/</guid><description>&lt;!-- 
layout: blog
title: "Kubernetes 1.26: Alpha API For Dynamic Resource Allocation"
date: 2022-12-15
slug: dynamic-resource-allocation
--&gt;
&lt;!-- 
 **Authors:** Patrick Ohly (Intel), Kevin Klues (NVIDIA)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Patrick Ohly (Intel)、Kevin Klues (NVIDIA)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; 空桐&lt;/p&gt;
&lt;!-- 
Dynamic resource allocation is a new API for requesting resources. It is a
generalization of the persistent volumes API for generic resources, making it possible to:

- access the same resource instance in different pods and containers,
- attach arbitrary constraints to a resource request to get the exact resource
 you are looking for,
- initialize a resource according to parameters provided by the user.
--&gt;
&lt;p&gt;动态资源分配是一个用于请求资源的新 API。
它是持久卷 API 针对通用资源的泛化。它可以：&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: 我们现在正在对二进制发布工件进行签名!</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/12/kubernetes-release-artifact-signing/</link><pubDate>Mon, 12 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/12/kubernetes-release-artifact-signing/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.26: We're now signing our binary release artifacts!"
date: 2022-12-12
slug: kubernetes-release-artifact-signing
--&gt;
&lt;!--
**Author:** Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Sascha Grunert&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; XiaoYang Zhang (HUAWEI)&lt;/p&gt;
&lt;!--
The Kubernetes Special Interest Group (SIG) Release is proud to announce that we
are digitally signing all release artifacts, and that this aspect of Kubernetes
has now reached _beta_.
--&gt;
&lt;p&gt;Kubernetes 特别兴趣小组 SIG Release 自豪地宣布，我们正在对所有发布工件进行数字签名，并且
Kubernetes 在这一方面现已达到 &lt;strong&gt;Beta&lt;/strong&gt;。&lt;/p&gt;
&lt;!--
Signing artifacts provides end users a chance to verify the integrity of the
downloaded resource. It allows to mitigate man-in-the-middle attacks directly on
the client side and therefore ensures the trustfulness of the remote serving the
artifacts. The overall goal of our past work was to define the tooling used for
signing all Kubernetes related artifacts as well as providing a standard signing
process for related projects (for example for those in [kubernetes-sigs][k-sigs]).
--&gt;
&lt;p&gt;签名工件为终端用户提供了验证下载资源完整性的机会。
它可以直接在客户端减轻中间人攻击，从而确保提供工件的远程服务方的可信度。
过去工作的总体目标是定义用于对所有 Kubernetes 相关工件进行签名的工具，
以及为相关项目（例如 &lt;a href="https://github.com/kubernetes-sigs"&gt;kubernetes-sigs&lt;/a&gt; 中的项目）提供标准签名流程。&lt;/p&gt;</description></item><item><title>Kubernetes 的取证容器检查点</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/05/forensic-container-checkpointing-alpha/</link><pubDate>Mon, 05 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/12/05/forensic-container-checkpointing-alpha/</guid><description>&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; &lt;a href="https://github.com/adrianreber"&gt;Adrian Reber&lt;/a&gt; (Red Hat)&lt;/p&gt;
&lt;!-- 
**Authors:** Adrian Reber (Red Hat)
--&gt;
&lt;!-- 
Forensic container checkpointing is based on [Checkpoint/Restore In
Userspace](https://criu.org/) (CRIU) and allows the creation of stateful copies
of a running container without the container knowing that it is being
checkpointed. The copy of the container can be analyzed and restored in a
sandbox environment multiple times without the original container being aware
of it. Forensic container checkpointing was introduced as an alpha feature in
Kubernetes v1.25.
--&gt;
&lt;p&gt;取证容器检查点（Forensic container checkpointing）基于 &lt;a href="https://criu.org/"&gt;CRIU&lt;/a&gt;（Checkpoint/Restore In Userspace，用户空间的检查点/恢复），
允许在容器不知情的情况下创建正在运行的容器的有状态副本。该副本可以在沙箱环境中被多次分析和恢复，而原始容器对此毫无感知。
取证容器检查点是作为一个 alpha 特性在 Kubernetes v1.25 中引入的。&lt;/p&gt;</description></item><item><title>Kubernetes 1.26 中的移除、弃用和主要变更</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/</link><pubDate>Fri, 18 Nov 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Removals, Deprecations, and Major Changes in 1.26"
date: 2022-11-18
slug: upcoming-changes-in-kubernetes-1-26
--&gt;
&lt;!--
**Author**: Frederico Muñoz (SAS)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Frederico Muñoz (SAS)&lt;/p&gt;
&lt;!--
Change is an integral part of the Kubernetes life-cycle: as Kubernetes grows and matures, features may be deprecated, removed, or replaced with improvements for the health of the project. For Kubernetes v1.26 there are several planned: this article identifies and describes some of them, based on the information available at this mid-cycle point in the v1.26 release process, which is still ongoing and can introduce additional changes.
--&gt;
&lt;p&gt;变化是 Kubernetes 生命周期不可分割的一部分：随着 Kubernetes 成长和日趋成熟，
为了此项目的健康发展，某些功能特性可能会被弃用、移除或替换为优化过的功能特性。
Kubernetes v1.26 也做了若干规划：根据 v1.26 发布流程中期获得的信息，
本文将列举并描述其中一些变更；该发布流程目前仍在进行中，可能会引入更多变更。&lt;/p&gt;</description></item><item><title>Kueue 介绍</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/10/04/introducing-kueue/</link><pubDate>Tue, 04 Oct 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/10/04/introducing-kueue/</guid><description>&lt;!--
layout: blog
title: "Introducing Kueue"
date: 2022-10-04
slug: introducing-kueue
--&gt;
&lt;!--
**Authors:** Abdullah Gharaibeh (Google), Aldo Culquicondor (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Abdullah Gharaibeh (Google)、Aldo Culquicondor (Google)&lt;/p&gt;
&lt;!--
Whether on-premises or in the cloud, clusters face real constraints for resource usage, quota, and cost management reasons. 
Regardless of the autoscaling capabilities, clusters have finite capacity. As a result, users want an easy way to fairly and 
efficiently share resources. 
--&gt;
&lt;p&gt;无论是在本地还是在云端，集群都面临着资源使用、配额和成本管理方面的实际限制。
无论自动扩缩容能力如何，集群的容量都是有限的。
因此，用户需要一种简单的方法来公平有效地共享资源。&lt;/p&gt;</description></item><item><title>“Kubernetes 1.25：对使用用户名字空间运行 Pod 提供 Alpha 支持”</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/10/03/userns-alpha/</link><pubDate>Mon, 03 Oct 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/10/03/userns-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.25: alpha support for running Pods with user namespaces"
date: 2022-10-03
slug: userns-alpha
--&gt;
&lt;!--
**Authors:** Rodrigo Campos (Microsoft), Giuseppe Scrivano (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Rodrigo Campos（Microsoft）、Giuseppe Scrivano（Red Hat）&lt;/p&gt;
&lt;!--
Kubernetes v1.25 introduces the support for user namespaces.
--&gt;
&lt;p&gt;Kubernetes v1.25 引入了对用户名字空间的支持。&lt;/p&gt;
&lt;!--
This is a major improvement for running secure workloads in
Kubernetes. Each pod will have access only to a limited subset of the
available UIDs and GIDs on the system, thus adding a new security
layer to protect from other pods running on the same system.
--&gt;
&lt;p&gt;这是在 Kubernetes 中运行安全工作负载的一项重大改进。
每个 Pod 只能访问系统上可用 UID 和 GID 的有限子集，
因此添加了一个新的安全层来保护 Pod 免受运行在同一系统上的其他 Pod 的影响。&lt;/p&gt;</description></item><item><title>Kubernetes 1.25：应用滚动上线所用的两个特性进入稳定阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/15/app-rollout-features-reach-stable/</link><pubDate>Thu, 15 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/15/app-rollout-features-reach-stable/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.25: Two Features for Apps Rollouts Graduate to Stable"
date: 2022-09-15
slug: "app-rollout-features-reach-stable"
--&gt;
&lt;!--
**Authors:** Ravi Gudimetla (Apple), Filip Křepinský (Red Hat), Maciej Szulik (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Ravi Gudimetla (Apple)、Filip Křepinský (Red Hat)、Maciej Szulik (Red Hat)&lt;/p&gt;
&lt;!--
This blog describes the two features namely `minReadySeconds` for StatefulSets and `maxSurge` for DaemonSets that SIG Apps is happy to graduate to stable in Kubernetes 1.25.

Specifying `minReadySeconds` slows down a rollout of a StatefulSet, when using a `RollingUpdate` value in `.spec.updateStrategy` field, by waiting for each pod for a desired time.
This time can be used for initializing the pod (e.g. warming up the cache) or as a delay before acknowledging the pod.
--&gt;
&lt;p&gt;这篇博客描述了两个特性，即用于 StatefulSet 的 &lt;code&gt;minReadySeconds&lt;/code&gt; 以及用于 DaemonSet 的 &lt;code&gt;maxSurge&lt;/code&gt;，
SIG Apps 很高兴宣布这两个特性在 Kubernetes 1.25 进入稳定阶段。&lt;/p&gt;</description></item><item><title>Kubernetes 1.25：Pod 新增 PodHasNetwork 状况</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/14/pod-has-network-condition/</link><pubDate>Wed, 14 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/14/pod-has-network-condition/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.25: PodHasNetwork Condition for Pods'
date: 2022-09-14
slug: pod-has-network-condition
author: &gt;
 Deep Debroy (Apple)
--&gt;
&lt;!--
Kubernetes 1.25 introduces Alpha support for a new kubelet-managed pod condition
in the status field of a pod: `PodHasNetwork`. The kubelet, for a worker node,
will use the `PodHasNetwork` condition to accurately surface the initialization
state of a pod from the perspective of pod sandbox creation and network
configuration by a container runtime (typically in coordination with CNI
plugins). The kubelet starts to pull container images and start individual
containers (including init containers) after the status of the `PodHasNetwork`
condition is set to `"True"`. Metrics collection services that report latency of
pod initialization from a cluster infrastructural perspective (i.e. agnostic of
per container characteristics like image size or payload) can utilize the
`PodHasNetwork` condition to accurately generate Service Level Indicators
(SLIs). Certain operators or controllers that manage underlying pods may utilize
the `PodHasNetwork` condition to optimize the set of actions performed when pods
repeatedly fail to come up.
--&gt;
&lt;p&gt;Kubernetes 1.25 引入了对 kubelet 所管理的新的 Pod 状况 &lt;code&gt;PodHasNetwork&lt;/code&gt; 的 Alpha 支持，
该状况位于 Pod 的 status 字段中。对于工作节点，kubelet 将使用 &lt;code&gt;PodHasNetwork&lt;/code&gt; 状况，
从容器运行时（通常与 CNI 插件协作）创建 Pod 沙箱和配置网络的角度，准确地反映 Pod 的初始化状态。
在 &lt;code&gt;PodHasNetwork&lt;/code&gt; 状况的 status 被设置为 &lt;code&gt;&amp;quot;True&amp;quot;&lt;/code&gt; 后，kubelet 才开始拉取容器镜像并逐个启动容器
（包括 Init 容器）。从集群基础设施的角度报告 Pod 初始化延迟的指标采集服务
（即与镜像大小或有效负载等各容器的特征无关），就可以利用 &lt;code&gt;PodHasNetwork&lt;/code&gt;
状况来准确生成服务水平指标（Service Level Indicator，SLI）。
某些管理底层 Pod 的 Operator 或控制器可以利用 &lt;code&gt;PodHasNetwork&lt;/code&gt; 状况来优化 Pod 反复出现失败时要执行的操作。&lt;/p&gt;</description></item><item><title>宣布自动刷新官方 Kubernetes CVE 订阅源</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/12/k8s-cve-feed-alpha/</link><pubDate>Mon, 12 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/12/k8s-cve-feed-alpha/</guid><description>&lt;!--
layout: blog 
title: Announcing the Auto-refreshing Official Kubernetes CVE Feed
date: 2022-09-12 
slug: k8s-cve-feed-alpha
--&gt;
&lt;!-- 
**Author**: Pushkar Joglekar (VMware)

A long-standing request from the Kubernetes community has been to have a
programmatic way for end users to keep track of Kubernetes security issues
(also called "CVEs", after the database that tracks public security issues across
different products and vendors). Accompanying the release of Kubernetes v1.25,
we are excited to announce availability of such
a [feed](/docs/reference/issues-security/official-cve-feed/) as an `alpha`
feature. This blog will cover the background and scope of this new service. 
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Pushkar Joglekar (VMware)&lt;/p&gt;</description></item><item><title>Kubernetes 的 iptables 链不是 API</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/07/iptables-chains-not-api/</link><pubDate>Wed, 07 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/07/iptables-chains-not-api/</guid><description>&lt;!--
layout: blog
title: "Kubernetes’s IPTables Chains Are Not API"
date: 2022-09-07
slug: iptables-chains-not-api
--&gt;
&lt;!--
**Author:** Dan Winship (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Dan Winship (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Some Kubernetes components (such as kubelet and kube-proxy) create
iptables chains and rules as part of their operation. These chains
were never intended to be part of any Kubernetes API/ABI guarantees,
but some external components nonetheless make use of some of them (in
particular, using `KUBE-MARK-MASQ` to mark packets as needing to be
masqueraded).
--&gt;
&lt;p&gt;一些 Kubernetes 组件（例如 kubelet 和 kube-proxy）在执行操作时，会创建特定的 iptables 链和规则。
这些链从未打算成为任何 Kubernetes API/ABI 保证的一部分，
但一些外部组件仍然使用其中的一些链（特别是使用 &lt;code&gt;KUBE-MARK-MASQ&lt;/code&gt; 将数据包标记为需要伪装）。&lt;/p&gt;</description></item><item><title>COSI 简介：使用 Kubernetes API 管理对象存储</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/02/cosi-kubernetes-object-storage-management/</link><pubDate>Fri, 02 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/09/02/cosi-kubernetes-object-storage-management/</guid><description>&lt;!--
layout: blog
title: "Introducing COSI: Object Storage Management using Kubernetes APIs"
date: 2022-09-02
slug: cosi-kubernetes-object-storage-management
--&gt;
&lt;!--
**Authors:** Sidhartha Mani ([Minio, Inc](https://min.io))
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Sidhartha Mani (&lt;a href="https://min.io"&gt;Minio, Inc&lt;/a&gt;)&lt;/p&gt;
&lt;!--
This article introduces the Container Object Storage Interface (COSI), a standard for provisioning and consuming object storage in Kubernetes. It is an alpha feature in Kubernetes v1.25.

File and block storage are treated as first class citizens in the Kubernetes ecosystem via [Container Storage Interface](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/) (CSI). Workloads using CSI volumes enjoy the benefits of portability across vendors and across Kubernetes clusters without the need to change application manifests. An equivalent standard does not exist for Object storage.

Object storage has been rising in popularity in recent years as an alternative form of storage to filesystems and block devices. Object storage paradigm promotes disaggregation of compute and storage. This is done by making data available over the network, rather than locally. Disaggregated architectures allow compute workloads to be stateless, which consequently makes them easier to manage, scale and automate.
--&gt;
&lt;p&gt;本文介绍了容器对象存储接口 (COSI)，它是在 Kubernetes 中制备和使用对象存储的一个标准。
它是 Kubernetes v1.25 中的一个 Alpha 功能。&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: cgroup v2 升级到 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/31/cgroupv2-ga-1-25/</link><pubDate>Wed, 31 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/31/cgroupv2-ga-1-25/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.25: cgroup v2 graduates to GA"
date: 2022-08-31
slug: cgroupv2-ga-1-25
--&gt;
&lt;!--
**Authors:** David Porter (Google), Mrunal Patel (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; David Porter (Google)、Mrunal Patel (Red Hat)&lt;/p&gt;
&lt;!--
Kubernetes 1.25 brings cgroup v2 to GA (general availability), letting the
[kubelet](/docs/concepts/overview/components/#kubelet) use the latest container resource
management capabilities.
--&gt;
&lt;p&gt;Kubernetes 1.25 中 cgroup v2 正式发布（GA），
使 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/overview/components/#kubelet"&gt;kubelet&lt;/a&gt; 能够使用最新的容器资源管理能力。&lt;/p&gt;
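&lt;p&gt;可以通过查看 &lt;code&gt;/sys/fs/cgroup/&lt;/code&gt; 的文件系统类型来确认节点当前使用的 cgroup 版本（以下命令仅为常见检查方法的示意，输出因发行版而异）：&lt;/p&gt;

```shell
# 输出 cgroup2fs 表示节点使用 cgroup v2；
# 输出 tmpfs 通常表示仍在使用 cgroup v1
stat -fc %T /sys/fs/cgroup/
```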
&lt;!--
## What are cgroups?
--&gt;
&lt;h2 id="什么是-cgroup"&gt;什么是 cgroup？&lt;/h2&gt;
&lt;!--
Effective [resource management](/docs/concepts/configuration/manage-resources-containers/) is a
critical aspect of Kubernetes. This involves managing the finite resources in
your nodes, such as CPU, memory, and storage.

*cgroups* are a Linux kernel capability that establish resource management
functionality like limiting CPU usage or setting memory limits for running
processes.
--&gt;
&lt;p&gt;有效的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/configuration/manage-resources-containers/"&gt;资源管理&lt;/a&gt;是 Kubernetes 的一个关键方面。
这涉及管理节点中的有限资源，例如 CPU、内存和存储。&lt;/p&gt;</description></item><item><title>Kubernetes 1.25：CSI 内联存储卷正式发布</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/29/csi-inline-volumes-ga/</link><pubDate>Mon, 29 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/29/csi-inline-volumes-ga/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.25: CSI Inline Volumes have graduated to GA"
date: 2022-08-29
slug: csi-inline-volumes-ga
--&gt;
&lt;!--
**Author:** Jonathan Dobson (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Jonathan Dobson (Red Hat)&lt;/p&gt;
&lt;!--
CSI Inline Volumes were introduced as an alpha feature in Kubernetes 1.15 and have been beta since 1.16. We are happy to announce that this feature has graduated to General Availability (GA) status in Kubernetes 1.25.

CSI Inline Volumes are similar to other ephemeral volume types, such as `configMap`, `downwardAPI` and `secret`. The important difference is that the storage is provided by a CSI driver, which allows the use of ephemeral storage provided by third-party vendors. The volume is defined as part of the pod spec and follows the lifecycle of the pod, meaning the volume is created once the pod is scheduled and destroyed when the pod is destroyed.
--&gt;
&lt;p&gt;CSI 内联存储卷是在 Kubernetes 1.15 中作为 Alpha 功能推出的，并从 1.16 开始成为 Beta 版本。
我们很高兴地宣布，这项功能在 Kubernetes 1.25 版本中正式发布（GA）。&lt;/p&gt;</description></item><item><title>PodSecurityPolicy：历史背景</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/23/podsecuritypolicy-the-historical-context/</link><pubDate>Tue, 23 Aug 2022 15:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/23/podsecuritypolicy-the-historical-context/</guid><description>&lt;!--
layout: blog
title: "PodSecurityPolicy: The Historical Context"
date: 2022-08-23T15:00:00-0800
slug: podsecuritypolicy-the-historical-context
evergreen: true
--&gt;
&lt;!--
**Author:** Mahé Tardy (Quarkslab)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Mahé Tardy (Quarkslab)&lt;/p&gt;
&lt;!--
The PodSecurityPolicy (PSP) admission controller has been removed, as of
Kubernetes v1.25. Its deprecation was announced and detailed in the blog post
[PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
published for the Kubernetes v1.21 release.
--&gt;
&lt;p&gt;从 Kubernetes v1.25 开始，PodSecurityPolicy (PSP) 准入控制器已被移除。
在为 Kubernetes v1.21 发布的博文 &lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/"&gt;PodSecurityPolicy 弃用：过去、现在和未来&lt;/a&gt;
中，已经宣布并详细说明了它的弃用情况。&lt;/p&gt;</description></item><item><title>Kubernetes v1.25: Combiner</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/23/kubernetes-v1-25-release/</link><pubDate>Tue, 23 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/23/kubernetes-v1-25-release/</guid><description>&lt;!--
layout: blog
title: "Kubernetes v1.25: Combiner"
date: 2022-08-23
slug: kubernetes-v1-25-release
--&gt;
&lt;!--
**Authors**: [Kubernetes 1.25 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.25/release-team.md)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：&lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.25/release-team.md"&gt;Kubernetes 1.25 发布团队&lt;/a&gt;&lt;/p&gt;
&lt;!--
Announcing the release of Kubernetes v1.25!
--&gt;
&lt;p&gt;我们宣布 Kubernetes v1.25 正式发布！&lt;/p&gt;
&lt;!--
This release includes a total of 40 enhancements. Fifteen of those enhancements are entering Alpha, ten are graduating to Beta, and thirteen are graduating to Stable. We also have two features being deprecated or removed.
--&gt;
&lt;p&gt;这个版本总共包括 40 项增强功能。
其中 15 项增强功能进入 Alpha，10 项进入 Beta，13 项进入 Stable。
我们也废弃/移除了两个功能。&lt;/p&gt;</description></item><item><title>聚焦 SIG Storage</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/22/sig-storage-spotlight/</link><pubDate>Mon, 22 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/22/sig-storage-spotlight/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Storage"
slug: sig-storage-spotlight
date: 2022-08-22
canonicalUrl: https://www.kubernetes.dev/blog/2022/08/22/sig-storage-spotlight-2022/
--&gt;
&lt;!--
**Author**: Frederico Muñoz (SAS)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Frederico Muñoz (SAS)&lt;/p&gt;
&lt;!--
Since the very beginning of Kubernetes, the topic of persistent data and how to address the requirement of stateful applications has been an important topic. Support for stateless deployments was natural, present from the start, and garnered attention, becoming very well-known. Work on better support for stateful applications was also present from early on, with each release increasing the scope of what could be run on Kubernetes.
--&gt;
&lt;p&gt;自 Kubernetes 诞生之初，持久数据以及如何解决有状态应用程序的需求一直是一个重要的话题。
对无状态部署的支持顺理成章、从一开始就存在，并因广受关注而众所周知。
从早期开始，我们也致力于更好地支持有状态应用程序，每个版本都扩大了可以在 Kubernetes 上运行的应用范围。&lt;/p&gt;</description></item><item><title>认识我们的贡献者 - 亚太地区（中国地区）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/15/meet-our-contributors-china-ep-03/</link><pubDate>Mon, 15 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/15/meet-our-contributors-china-ep-03/</guid><description>&lt;!--
layout: blog
title: "Meet Our Contributors - APAC (China region)"
date: 2022-08-15
slug: meet-our-contributors-china-ep-03
canonicalUrl: https://www.kubernetes.dev/blog/2022/08/15/meet-our-contributors-chn-ep-03/
--&gt;
&lt;!--
**Authors &amp; Interviewers:** [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Jayesh Srivastava](https://github.com/jayesh-srivastava), [Priyanka Saggu](https://github.com/Priyankasaggu11929/), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
--&gt;
&lt;p&gt;&lt;strong&gt;作者和受访者：&lt;/strong&gt; &lt;a href="https://github.com/AvineshTripathi"&gt;Avinesh Tripathi&lt;/a&gt;、
&lt;a href="https://github.com/Debanitrkl"&gt;Debabrata Panigrahi&lt;/a&gt;、
&lt;a href="https://github.com/jayesh-srivastava"&gt;Jayesh Srivastava&lt;/a&gt;、
&lt;a href="https://github.com/Priyankasaggu11929/"&gt;Priyanka Saggu&lt;/a&gt;、
&lt;a href="https://github.com/PurneswarPrasad"&gt;Purneswar Prasad&lt;/a&gt;、
&lt;a href="https://github.com/vedant-kakde"&gt;Vedant Kakde&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;!--
Hello, everyone 👋

Welcome back to the third edition of the "Meet Our Contributors" blog post series for APAC.

This post features four outstanding contributors from China, who have played diverse leadership and community roles in the upstream Kubernetes project.

So, without further ado, let's get straight to the article.
--&gt;
&lt;p&gt;大家好 👋&lt;/p&gt;</description></item><item><title>逐个 KEP 地增强 Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/</link><pubDate>Thu, 11 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/</guid><description>&lt;!--
layout: blog
title: "Enhancing Kubernetes one KEP at a Time"
date: 2022-08-11
slug: enhancing-kubernetes-one-kep-at-a-time
canonicalUrl: https://www.k8s.dev/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/
--&gt;
&lt;!--
**Author:** Ryler Hockenbury (Mastercard)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Ryler Hockenbury（Mastercard）&lt;/p&gt;
&lt;!--
Did you know that Kubernetes v1.24 has [46 enhancements](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/)? That's a lot of new functionality packed into a 4-month release cycle. The Kubernetes release team coordinates the logistics of the release, from remediating test flakes to publishing updated docs. It's a ton of work, but they always deliver.

The release team comprises around 30 people across six subteams - Bug Triage, CI Signal, Enhancements, Release Notes, Communications, and Docs.  Each of these subteams manages a component of the release. This post will focus on the role of the enhancements subteam and how you can get involved.
--&gt;
&lt;p&gt;你是否知道 Kubernetes v1.24 有
&lt;a href="https://kubernetes.io/zh-cn/blog/2022/05/03/kubernetes-1-24-release-announcement/"&gt;46 个增强特性&lt;/a&gt;？
这相当于在为期 4 个月的发布周期中包含了大量新功能。
Kubernetes 发布团队协调发布的各项后勤工作，从修复不稳定的测试到发布更新的文档。这是大量的工作，但他们总是能按期交付。&lt;/p&gt;</description></item><item><title>Kubernetes 1.25 的移除说明和主要变更</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/04/upcoming-changes-in-kubernetes-1-25/</link><pubDate>Thu, 04 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/04/upcoming-changes-in-kubernetes-1-25/</guid><description>&lt;!-- 
layout: blog
title: "Kubernetes Removals and Major Changes In 1.25"
date: 2022-08-04
slug: upcoming-changes-in-kubernetes-1-25
--&gt;
&lt;!--
**Authors**: Kat Cosgrove, Frederico Muñoz, Debabrata Panigrahi
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Kat Cosgrove、Frederico Muñoz、Debabrata Panigrahi&lt;/p&gt;
&lt;!--
As Kubernetes grows and matures, features may be deprecated, removed, or replaced with improvements
for the health of the project. Kubernetes v1.25 includes several major changes and one major removal.
--&gt;
&lt;p&gt;随着 Kubernetes 成长和日趋成熟，为了此项目的健康发展，某些功能特性可能会被弃用、移除或替换为优化过的功能特性。
Kubernetes v1.25 包括几个主要变更和一个主要移除。&lt;/p&gt;
&lt;!--
## The Kubernetes API Removal and Deprecation process

The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
--&gt;
&lt;h2 id="the-kubernetes-api-removal-and-deprecation-process"&gt;Kubernetes API 移除和弃用流程&lt;/h2&gt;
&lt;p&gt;Kubernetes 项目对功能特性有一个文档完备的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/deprecation-policy/"&gt;弃用策略&lt;/a&gt;。
该策略规定，仅当同一 API 有较新的稳定版本可用时，原有的稳定 API 才可以被弃用，并且每个稳定级别的 API 都有最短的生命周期。
弃用的 API 指的是已标记为将在后续发行某个 Kubernetes 版本时移除的 API；
移除之前该 API 将继续发挥作用（从弃用起至少一年时间），但使用时会显示一条警告。
移除的 API 将在当前版本中不再可用，此时你必须迁移以使用替换的 API。&lt;/p&gt;</description></item><item><title>聚光灯下的 SIG Docs</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/02/sig-docs-spotlight-2022/</link><pubDate>Tue, 02 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/02/sig-docs-spotlight-2022/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Docs"
date: 2022-08-02
slug: sig-docs-spotlight-2022
canonicalUrl: https://kubernetes.dev/blog/2022/08/02/sig-docs-spotlight-2022/
--&gt;
&lt;!--
**Author:** Purneswar Prasad
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Purneswar Prasad&lt;/p&gt;
&lt;!--
## Introduction

The official documentation is the go-to source for any open source project. For Kubernetes, 
it's an ever-evolving Special Interest Group (SIG) with people constantly putting in their efforts
to make details about the project easier to consume for new contributors and users. SIG Docs publishes 
the official documentation on [kubernetes.io](https://kubernetes.io) which includes, 
but is not limited to, documentation of the core APIs, core architectural details, and CLI tools 
shipped with the Kubernetes release.

To learn more about the work of SIG Docs and its future ahead in shaping the community, I have summarised 
my conversation with the co-chairs, [Divya Mohan](https://twitter.com/Divya_Mohan02) (DM), 
[Rey Lejano](https://twitter.com/reylejano) (RL) and Natali Vlatko (NV), who ran through the
SIG's goals and how fellow contributors can help.
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;p&gt;官方文档是所有开源项目的首选资料源。对于 Kubernetes，它是一个持续演进的特别兴趣小组 (SIG)，
人们持续不断努力制作详实的项目资料，让新贡献者和用户更容易取用这些文档。
SIG Docs 在 &lt;a href="https://kubernetes.io"&gt;kubernetes.io&lt;/a&gt; 上发布官方文档，
包括但不限于 Kubernetes 版本发布时附带的核心 API 文档、核心架构细节和 CLI 工具文档。&lt;/p&gt;</description></item><item><title>Kubernetes Gateway API 进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/07/13/gateway-api-graduates-to-beta/</link><pubDate>Wed, 13 Jul 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/07/13/gateway-api-graduates-to-beta/</guid><description>&lt;!-- 
layout: blog
title: Kubernetes Gateway API Graduates to Beta
date: 2022-07-13
slug: gateway-api-graduates-to-beta
canonicalUrl: https://gateway-api.sigs.k8s.io/blog/2022/graduating-to-beta/ 
--&gt;
&lt;!-- 
**Authors:** Shane Utt (Kong), Rob Scott (Google), Nick Young (VMware), Jeff Apple (HashiCorp) 
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Shane Utt (Kong)、Rob Scott (Google)、Nick Young (VMware)、Jeff Apple (HashiCorp)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Michael Yao (DaoCloud)&lt;/p&gt;
&lt;!-- 
We are excited to announce the v0.5.0 release of Gateway API. For the first
time, several of our most important Gateway API resources are graduating to
beta. Additionally, we are starting a new initiative to explore how Gateway API
can be used for mesh and introducing new experimental concepts such as URL
rewrites. We'll cover all of this and more below. 
--&gt;
&lt;p&gt;我们很高兴地宣布 Gateway API 的 v0.5.0 版本发布。
我们最重要的几个 Gateway API 资源首次进入 Beta 阶段。
此外，我们正在启动一项新的倡议，探索如何将 Gateway API 用于网格，还引入了 URL 重写等新的实验性概念。
下文涵盖了这部分内容和更多说明。&lt;/p&gt;</description></item><item><title>2021 年度总结报告</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/06/01/annual-report-summary-2021/</link><pubDate>Wed, 01 Jun 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/06/01/annual-report-summary-2021/</guid><description>&lt;!--
layout: blog
title: "Annual Report Summary 2021"
date: 2022-06-01
slug: annual-report-summary-2021
--&gt;
&lt;!--
**Author:** Paris Pittman (Steering Committee)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Paris Pittman（指导委员会）&lt;/p&gt;
&lt;!--
Last year, we published our first [Annual Report Summary](/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/) for 2020 and it's already time for our second edition!

[2021 Annual Report Summary](https://www.cncf.io/reports/kubernetes-annual-report-2021/)
--&gt;
&lt;p&gt;去年，我们发布了第一期
&lt;a href="https://andygol-k8s.netlify.app/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/"&gt;2020 年度总结报告&lt;/a&gt;，
现在已经是时候发布第二期了！&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.cncf.io/reports/kubernetes-annual-report-2021/"&gt;2021 年度总结报告&lt;/a&gt;&lt;/p&gt;
&lt;!--
This summary reflects the work that has been done in 2021 and the initiatives on deck for the rest of 2022. Please forward to organizations and individuals participating in upstream activities, planning cloud native strategies, and/or those looking to help out. To find a specific community group's complete report, go to the [kubernetes/community repo](https://github.com/kubernetes/community) under the groups folder. Example: [sig-api-machinery/annual-report-2021.md](https://github.com/kubernetes/community/blob/master/sig-api-machinery/annual-report-2021.md)
--&gt;
&lt;p&gt;这份总结反映了 2021 年已完成的工作以及 2022 年余下时间计划开展的倡议。
请将这份总结转发给正在参与上游活动、规划云原生战略以及希望提供帮助的组织和个人。
若要查阅特定社区小组的完整报告，请访问
&lt;a href="https://github.com/kubernetes/community"&gt;kubernetes/community 仓库&lt;/a&gt;查找各小组的文件夹。例如：
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-api-machinery/annual-report-2021.md"&gt;sig-api-machinery/annual-report-2021.md&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: StatefulSet 的最大不可用副本数</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/27/maxunavailable-for-statefulset/</link><pubDate>Fri, 27 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/27/maxunavailable-for-statefulset/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.24: Maximum Unavailable Replicas for StatefulSet'
date: 2022-05-27
slug: maxunavailable-for-statefulset
--&gt;
&lt;!--
**Author:** Mayank Kumar (Salesforce)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Mayank Kumar (Salesforce)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xiaoyang Zhang（Huawei）&lt;/p&gt;
&lt;!--
Kubernetes [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), since their introduction in 
1.5 and becoming stable in 1.9, have been widely used to run stateful applications. They provide stable pod identity, persistent
per pod storage and ordered graceful deployment, scaling and rolling updates. You can think of StatefulSet as the atomic building
block for running complex stateful applications. As the use of Kubernetes has grown, so has the number of scenarios requiring
StatefulSets. Many of these scenarios, require faster rolling updates than the currently supported one-pod-at-a-time updates, in the 
case where you're using the `OrderedReady` Pod management policy for a StatefulSet.
--&gt;
&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt;，
自 1.5 版本中引入并在 1.9 版本中变得稳定以来，已被广泛用于运行有状态应用。它提供固定的 Pod 身份标识、
每个 Pod 的持久存储以及 Pod 的有序部署、扩缩容和滚动更新功能。你可以将 StatefulSet
视为运行复杂有状态应用程序的原子构建块。随着 Kubernetes 的使用增多，需要 StatefulSet 的场景也越来越多。
当 StatefulSet 的 Pod 管理策略为 &lt;code&gt;OrderedReady&lt;/code&gt; 时，其中许多场景需要比当前所支持的一次一个 Pod
的更新更快的滚动更新。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24 中的上下文日志记录</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/25/contextual-logging/</link><pubDate>Wed, 25 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/25/contextual-logging/</guid><description>&lt;!--
layout: blog
title: "Contextual Logging in Kubernetes 1.24"
date: 2022-05-25
slug: contextual-logging
canonicalUrl: https://kubernetes.dev/blog/2022/05/25/contextual-logging/
--&gt;
&lt;!--
 **Authors:** Patrick Ohly (Intel)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Patrick Ohly (Intel)&lt;/p&gt;
&lt;!--
The [Structured Logging Working
Group](https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md)
has added new capabilities to the logging infrastructure in Kubernetes
1.24. This blog post explains how developers can take advantage of those to
make log output more useful and how they can get involved with improving Kubernetes.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md"&gt;结构化日志工作组&lt;/a&gt;
在 Kubernetes 1.24 中为日志基础设施添加了新功能。这篇博文解释了开发者如何利用这些功能使日志输出更有用，
以及他们如何参与改进 Kubernetes。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: 避免为 Services 分配 IP 地址时发生冲突</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/23/service-ip-dynamic-and-static-allocation/</link><pubDate>Mon, 23 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/23/service-ip-dynamic-and-static-allocation/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.24: Avoid Collisions Assigning IP Addresses to Services"
date: 2022-05-23
slug: service-ip-dynamic-and-static-allocation
--&gt;
&lt;!--
**Author:** Antonio Ojea (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Antonio Ojea (Red Hat)&lt;/p&gt;
&lt;!--
In Kubernetes, [Services](/docs/concepts/services-networking/service/) are an abstract way to expose
an application running on a set of Pods. Services
can have a cluster-scoped virtual IP address (using a Service of `type: ClusterIP`).
Clients can connect using that virtual IP address, and Kubernetes then load-balances traffic to that
Service across the different backing Pods.
--&gt;
&lt;p&gt;在 Kubernetes 中，&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/service/"&gt;Services&lt;/a&gt;
是一种抽象，用来暴露运行在一组 Pod 上的应用。
Service 可以有一个集群范围的虚拟 IP 地址（使用 &lt;code&gt;type: ClusterIP&lt;/code&gt; 的 Service）。
客户端可以使用该虚拟 IP 地址进行连接，Kubernetes 随后将访问该 Service 的流量负载均衡到不同的后端 Pod。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: 节点非体面关闭特性进入 Alpha 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/20/kubernetes-1-24-non-graceful-node-shutdown-alpha/</link><pubDate>Fri, 20 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/20/kubernetes-1-24-non-graceful-node-shutdown-alpha/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.24: Introducing Non-Graceful Node Shutdown Alpha"
date: 2022-05-20
slug: kubernetes-1-24-non-graceful-node-shutdown-alpha
--&gt;
&lt;!--
**Authors** Xing Yang and Yassine Tijani (VMware)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Xing Yang 和 Yassine Tijani (VMware)&lt;/p&gt;
&lt;!--
Kubernetes v1.24 introduces alpha support for [Non-Graceful Node Shutdown](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown). 
This feature allows stateful workloads to failover to a different node after the original node is shutdown or in a non-recoverable state such as hardware failure or broken OS.
--&gt;
&lt;p&gt;Kubernetes v1.24 引入了对&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown"&gt;节点非体面关闭&lt;/a&gt;
（Non-Graceful Node Shutdown）的 Alpha 支持。
此特性允许有状态工作负载在原节点关闭或处于不可恢复状态（如硬件故障或操作系统损坏）后，故障转移到不同的节点。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: 防止未经授权的卷模式转换</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/18/prevent-unauthorised-volume-mode-conversion-alpha/</link><pubDate>Wed, 18 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/18/prevent-unauthorised-volume-mode-conversion-alpha/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.24: Prevent unauthorised volume mode conversion'
date: 2022-05-18
slug: prevent-unauthorised-volume-mode-conversion-alpha
--&gt;
&lt;!--
**Author:** Raunak Pradip Shah (Mirantis)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Raunak Pradip Shah (Mirantis)&lt;/p&gt;
&lt;!--
Kubernetes v1.24 introduces a new alpha-level feature that prevents unauthorised users 
from modifying the volume mode of a [`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/) created from an 
existing [`VolumeSnapshot`](/docs/concepts/storage/volume-snapshots/) in the Kubernetes cluster. 
--&gt;
&lt;p&gt;Kubernetes v1.24 引入了一个新的 alpha 级特性，可以防止未经授权的用户修改基于 Kubernetes
集群中已有的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/volume-snapshots/"&gt;&lt;code&gt;VolumeSnapshot&lt;/code&gt;&lt;/a&gt;
创建的 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolumeClaim&lt;/code&gt;&lt;/a&gt; 的卷模式。&lt;/p&gt;
&lt;!--
### The problem
--&gt;
&lt;h3 id="问题"&gt;问题&lt;/h3&gt;
&lt;!--
The [Volume Mode](/docs/concepts/storage/persistent-volumes/#volume-mode) determines whether a volume 
is formatted into a filesystem or presented as a raw block device. 
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#volume-mode"&gt;卷模式&lt;/a&gt;确定卷是格式化为文件系统还是显示为原始块设备。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: 卷填充器功能进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/16/volume-populators-beta/</link><pubDate>Mon, 16 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/16/volume-populators-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.24: Volume Populators Graduate to Beta"
date: 2022-05-16
slug: volume-populators-beta
--&gt;
&lt;!--
**Author:**
Ben Swartzlander (NetApp)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt;
Ben Swartzlander (NetApp)&lt;/p&gt;
&lt;!--
The volume populators feature is now two releases old and entering beta! The `AnyVolumeDataSource` feature
gate defaults to enabled in Kubernetes v1.24, which means that users can specify any custom resource
as the data source of a PVC.
--&gt;
&lt;p&gt;卷填充器功能现在已经经历两个发行版本并进入 Beta 阶段！
在 Kubernetes v1.24 中 &lt;code&gt;AnyVolumeDataSource&lt;/code&gt; 特性门控默认被启用。
这意味着用户可以指定任何自定义资源作为 PVC 的数据源。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24：gRPC 容器探针功能进入 Beta 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/13/grpc-probes-now-in-beta/</link><pubDate>Fri, 13 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/13/grpc-probes-now-in-beta/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.24: gRPC container probes in beta"
date: 2022-05-13
slug: grpc-probes-now-in-beta
--&gt;
&lt;!--
**Author**: Sergey Kanzhelev (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Sergey Kanzhelev (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xiaoyang Zhang（Huawei）&lt;/p&gt;
&lt;!--
With Kubernetes 1.24 the gRPC probes functionality entered beta and is available by default.
Now you can configure startup, liveness, and readiness probes for your gRPC app
without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can natively connect to your workload via gRPC and query its status.
--&gt;
&lt;p&gt;在 Kubernetes 1.24 中，gRPC 探针（probe）功能进入了 beta 阶段，默认情况下可用。
现在，你可以为 gRPC 应用程序配置启动（startup）、存活（liveness）和就绪（readiness）探针，而无需公开任何 HTTP 端点，
也不需要可执行文件。Kubernetes 可以通过 gRPC 直接连接到你的工作负载并查询其状态。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24 版本中存储容量跟踪特性进入 GA 阶段</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/06/storage-capacity-ga/</link><pubDate>Fri, 06 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/06/storage-capacity-ga/</guid><description>&lt;!--
layout: blog
title: "Storage Capacity Tracking reaches GA in Kubernetes 1.24"
date: 2022-05-06
slug: storage-capacity-ga
--&gt;
&lt;!--
 **Authors:** Patrick Ohly (Intel)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Patrick Ohly（Intel）&lt;/p&gt;
&lt;!--
The v1.24 release of Kubernetes brings [storage capacity](/docs/concepts/storage/storage-capacity/)
tracking as a generally available feature.
--&gt;
&lt;p&gt;在 Kubernetes v1.24 版本中，&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/storage-capacity/"&gt;存储容量&lt;/a&gt;跟踪已经成为一项正式发布的功能。&lt;/p&gt;
&lt;!--
## Problems we have solved
--&gt;
&lt;h2 id="已经解决的问题"&gt;已经解决的问题&lt;/h2&gt;
&lt;!--
As explained in more detail in the [previous blog post about this
feature](/blog/2021/04/14/local-storage-features-go-beta/), storage capacity
tracking allows a CSI driver to publish information about remaining
capacity. The kube-scheduler then uses that information to pick suitable nodes
for a Pod when that Pod has volumes that still need to be provisioned.
--&gt;
&lt;p&gt;如&lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/14/local-storage-features-go-beta/"&gt;上一篇关于此功能的博文&lt;/a&gt;中所详细介绍的，
存储容量跟踪允许 CSI 驱动程序发布有关剩余容量的信息。当 Pod 还有卷尚待制备（provision）时，
kube-scheduler 会使用该信息为 Pod 选择合适的节点。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24：卷扩充现在成为稳定功能</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/05/volume-expansion-ga/</link><pubDate>Thu, 05 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/05/volume-expansion-ga/</guid><description>&lt;!-- 
---
layout: blog
title: "Kubernetes 1.24: Volume Expansion Now A Stable Feature"
date: 2022-05-05
slug: volume-expansion-ga
---
--&gt;
&lt;!-- 
**Author:** Hemant Kumar (Red Hat)

Volume expansion was introduced as an alpha feature in Kubernetes 1.8, went beta in 1.11, and with Kubernetes 1.24 we are excited to announce general availability (GA)
of volume expansion.

This feature allows Kubernetes users to simply edit their `PersistentVolumeClaim` objects and specify new size in PVC Spec and Kubernetes will automatically expand the volume
using storage backend and also expand the underlying file system in-use by the Pod without requiring any downtime at all if possible.
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Hemant Kumar (Red Hat)&lt;/p&gt;</description></item><item><title>Dockershim：历史背景</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/03/dockershim-historical-context/</link><pubDate>Tue, 03 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/03/dockershim-historical-context/</guid><description>&lt;!--
layout: blog
title: "Dockershim: The Historical Context"
date: 2022-05-03
slug: dockershim-historical-context
--&gt;
&lt;!--
**Author:** Kat Cosgrove

Dockershim has been removed as of Kubernetes v1.24, and this is a positive move for the project. However, context is important for fully understanding something, be it socially or in software development, and this deserves a more in-depth review. Alongside the dockershim removal in Kubernetes v1.24, we’ve seen some confusion (sometimes at a panic level) and dissatisfaction with this decision in the community, largely due to a lack of context around this removal. The decision to deprecate and eventually remove dockershim from Kubernetes was not made quickly or lightly. Still, it’s been in the works for so long that many of today’s users are newer than that decision, and certainly newer than the choices that led to the dockershim being necessary in the first place.

So what is the dockershim, and why is it going away?
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Kat Cosgrove&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: 观星者</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/03/kubernetes-1-24-release-announcement/</link><pubDate>Tue, 03 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/05/03/kubernetes-1-24-release-announcement/</guid><description>&lt;!--
layout: blog
title: "Kubernetes 1.24: Stargazer"
date: 2022-05-03
slug: kubernetes-1-24-release-announcement
--&gt;
&lt;!--
**Authors**: [Kubernetes 1.24 Release Team](https://git.k8s.io/sig-release/releases/release-1.24/release-team.md)

We are excited to announce the release of Kubernetes 1.24, the first release of 2022!

This release consists of 46 enhancements: fourteen enhancements have graduated to stable,
fifteen enhancements are moving to beta, and thirteen enhancements are entering alpha.
Also, two features have been deprecated, and two features have been removed.
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: &lt;a href="https://git.k8s.io/sig-release/releases/release-1.24/release-team.md"&gt;Kubernetes 1.24 发布团队&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Frontiers, fsGroups and frogs: Kubernetes 1.23 发布采访</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/04/29/frontiers-fsgroups-and-frogs-kubernetes-1.23-%E5%8F%91%E5%B8%83%E9%87%87%E8%AE%BF/</link><pubDate>Fri, 29 Apr 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/04/29/frontiers-fsgroups-and-frogs-kubernetes-1.23-%E5%8F%91%E5%B8%83%E9%87%87%E8%AE%BF/</guid><description>&lt;!--
layout: blog
title: "Frontiers, fsGroups and frogs: the Kubernetes 1.23 release interview"
date: 2022-04-29
--&gt;
&lt;!--
**Author**: Craig Box (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Craig Box (Google)&lt;/p&gt;
&lt;!--
One of the highlights of hosting the weekly [Kubernetes Podcast from Google](https://kubernetespodcast.com/) is talking to the release managers for each new Kubernetes version. The release team is constantly refreshing. Many working their way from small documentation fixes, step up to shadow roles, and then eventually lead a release.
--&gt;
&lt;p&gt;主持每周一次的&lt;a href="https://kubernetespodcast.com/"&gt;来自 Google 的 Kubernetes 播客&lt;/a&gt;
的亮点之一，是与每个新 Kubernetes 版本的发布经理交谈。发布团队的成员在不断更替：许多人从小的文档修复做起，逐步承担影子（shadow）角色，最终领导一个版本的发布。&lt;/p&gt;</description></item><item><title>在 Ingress-NGINX v1.2.0 中提高安全标准</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/04/28/ingress-nginx-1-2-0/</link><pubDate>Thu, 28 Apr 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/04/28/ingress-nginx-1-2-0/</guid><description>&lt;!--
layout: blog
title: 'Increasing the security bar in Ingress-NGINX v1.2.0'
date: 2022-04-28
slug: ingress-nginx-1-2-0
--&gt;
&lt;!--
**Authors:** Ricardo Katz (VMware), James Strong (Chainguard)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Ricardo Katz (VMware), James Strong (Chainguard)&lt;/p&gt;
&lt;!--
The [Ingress](/docs/concepts/services-networking/ingress/) may be one of the most targeted components
of Kubernetes. An Ingress typically defines an HTTP reverse proxy, exposed to the Internet, containing
multiple websites, and with some privileged access to Kubernetes API (such as to read Secrets relating to
TLS certificates and their private keys).
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt; 可能是 Kubernetes 最容易受攻击的组件之一。
Ingress 通常定义一个 HTTP 反向代理，暴露在互联网上，包含多个网站，并具有对 Kubernetes API
的一些特权访问（例如读取与 TLS 证书及其私钥相关的 Secret）。&lt;/p&gt;</description></item><item><title>Kubernetes 1.24 中的移除和弃用</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/04/07/upcoming-changes-in-kubernetes-1-24/</link><pubDate>Thu, 07 Apr 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/04/07/upcoming-changes-in-kubernetes-1-24/</guid><description>&lt;!--
layout: blog
title: "Kubernetes Removals and Deprecations In 1.24"
date: 2022-04-07
slug: upcoming-changes-in-kubernetes-1-24
--&gt;
&lt;!--
**Author**: Mickey Boxell (Oracle)

As Kubernetes evolves, features and APIs are regularly revisited and removed. New features may offer
an alternative or improved approach to solving existing problems, motivating the team to remove the
old approach. 
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Mickey Boxell (Oracle)&lt;/p&gt;
&lt;p&gt;随着 Kubernetes 的发展，特性和 API 会被定期重新审视并移除。
新特性可能会为解决现有问题提供替代或改进的方法，从而促使团队移除旧的方法。&lt;/p&gt;
&lt;!--
We want to make sure you are aware of the changes coming in the Kubernetes 1.24 release. The release will 
**deprecate** several (beta) APIs in favor of stable versions of the same APIs. The major change coming 
in the Kubernetes 1.24 release is the 
[removal of Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim). 
This is discussed below and will be explored in more depth at release time. For an early look at the 
changes coming in Kubernetes 1.24, take a look at the in-progress 
[CHANGELOG](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md).
--&gt;
&lt;p&gt;我们希望确保你了解 Kubernetes 1.24 版本的变化。该版本将 &lt;strong&gt;弃用&lt;/strong&gt; 一些（测试版/beta）API，
转而支持相同 API 的稳定版本。Kubernetes 1.24
版本的主要变化是&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim"&gt;移除 Dockershim&lt;/a&gt;。
这将在下面讨论，并将在发布时更深入地探讨。
要提前了解 Kubernetes 1.24 中的更改，请查看正在更新中的
&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md"&gt;CHANGELOG&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>你的集群准备好使用 v1.24 版本了吗？</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/</link><pubDate>Thu, 31 Mar 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/</guid><description>&lt;!--
layout: blog
title: "Is Your Cluster Ready for v1.24?"
date: 2022-03-31
slug: ready-for-dockershim-removal
--&gt;
&lt;!--
**Author:** Kat Cosgrove
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Kat Cosgrove&lt;/p&gt;
&lt;!--
Way back in December of 2020, Kubernetes announced the [deprecation of Dockershim](/blog/2020/12/02/dont-panic-kubernetes-and-docker/). In Kubernetes, dockershim is a software shim that allows you to use the entire Docker engine as your container runtime within Kubernetes. In the upcoming v1.24 release, we are removing Dockershim - the delay between deprecation and removal in line with the [project’s policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) of supporting features for at least one year after deprecation. If you are a cluster operator, this guide includes the practical realities of what you need to know going into this release. Also, what do you need to do to ensure your cluster doesn’t fall over!
--&gt;
&lt;p&gt;早在 2020 年 12 月，Kubernetes 就宣布&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/"&gt;弃用 Dockershim&lt;/a&gt;。
在 Kubernetes 中，dockershim 是一个软件 shim，
它允许你将整个 Docker 引擎用作 Kubernetes 中的容器运行时。
在即将发布的 v1.24 版本中，我们将移除 Dockershim。
从宣布弃用到彻底移除，我们至少预留了一年的时间继续支持此功能，
这符合项目&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/using-api/deprecation-policy/"&gt;在弃用后至少支持特性一年的策略&lt;/a&gt;。
如果你是集群操作员，本指南将介绍你在此版本中需要了解的实际情况，
以及你需要做些什么来确保你的集群不会崩溃！&lt;/p&gt;</description></item><item><title>认识我们的贡献者 - 亚太地区（澳大利亚-新西兰地区）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/03/16/meet-our-contributors-au-nz-ep-02/</link><pubDate>Wed, 16 Mar 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/03/16/meet-our-contributors-au-nz-ep-02/</guid><description>&lt;!--
layout: blog
title: "Meet Our Contributors - APAC (Aus-NZ region)"
date: 2022-03-16
slug: meet-our-contributors-au-nz-ep-02
canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-au-nz-ep-02/
--&gt;
&lt;!--
**Authors &amp; Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Brad McCoy](https://github.com/bradmccoydev), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Jayesh Srivastava](https://github.com/jayesh-srivastava), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Priyanka Saggu](github.com/Priyankasaggu11929/), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
--&gt;
&lt;p&gt;&lt;strong&gt;作者和采访者：&lt;/strong&gt;
&lt;a href="https://github.com/anubha-v-ardhan"&gt;Anubhav Vardhan&lt;/a&gt;、
&lt;a href="https://github.com/Atharva-Shinde"&gt;Atharva Shinde&lt;/a&gt;、
&lt;a href="https://github.com/AvineshTripathi"&gt;Avinesh Tripathi&lt;/a&gt;、
&lt;a href="https://github.com/bradmccoydev"&gt;Brad McCoy&lt;/a&gt;、
&lt;a href="https://github.com/Debanitrkl"&gt;Debabrata Panigrahi&lt;/a&gt;、
&lt;a href="https://github.com/jayesh-srivastava"&gt;Jayesh Srivastava&lt;/a&gt;、
&lt;a href="https://github.com/verma-kunal"&gt;Kunal Verma&lt;/a&gt;、
&lt;a href="https://github.com/PranshuSrivastava"&gt;Pranshu Srivastava&lt;/a&gt;、
&lt;a href="github.com/Priyankasaggu11929/"&gt;Priyanka Saggu&lt;/a&gt;、
&lt;a href="https://github.com/PurneswarPrasad"&gt;Purneswar Prasad&lt;/a&gt;、
&lt;a href="https://github.com/vedant-kakde"&gt;Vedant Kakde&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;!--
Good day, everyone 👋
--&gt;
&lt;p&gt;大家好👋&lt;/p&gt;
&lt;!--
Welcome back to the second episode of the "Meet Our Contributors" blog post series for APAC.
--&gt;
&lt;p&gt;欢迎来到亚太地区的“认识我们的贡献者”博文系列第二期。&lt;/p&gt;</description></item><item><title>更新：移除 Dockershim 的常见问题</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/17/dockershim-faq/</link><pubDate>Thu, 17 Feb 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/17/dockershim-faq/</guid><description>&lt;!-- 
layout: blog
title: "Updated: Dockershim Removal FAQ"
linkTitle: "Dockershim Removal FAQ"
date: 2022-02-17
slug: dockershim-faq
aliases: [ '/dockershim' ]
--&gt;
&lt;!--
**This supersedes the original
[Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/) article,
published in late 2020. The article includes updates from the v1.24
release of Kubernetes.**
--&gt;
&lt;p&gt;&lt;strong&gt;本文是针对 2020 年末发布的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dockershim-faq/"&gt;弃用 Dockershim 的常见问题&lt;/a&gt;的博客更新。
本文包括 Kubernetes v1.24 版本的更新。&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;!--
This document goes over some frequently asked questions regarding the
removal of _dockershim_ from Kubernetes. The removal was originally
[announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/)
as a part of the Kubernetes v1.20 release. The Kubernetes
[v1.24 release](/releases/#release-v1-24) actually removed the dockershim
from Kubernetes.
--&gt;
&lt;p&gt;本文介绍了一些关于从 Kubernetes 中移除 &lt;em&gt;dockershim&lt;/em&gt; 的常见问题。
该移除最初是作为 Kubernetes v1.20
版本的一部分&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/08/kubernetes-1-20-release-announcement/"&gt;宣布&lt;/a&gt;的。
Kubernetes 在 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/releases/#release-v1-24"&gt;v1.24 版&lt;/a&gt;移除了 dockershim。&lt;/p&gt;</description></item><item><title>SIG Node CI 子项目庆祝测试改进两周年</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/16/sig-node-ci-subproject-celebrates/</link><pubDate>Wed, 16 Feb 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/16/sig-node-ci-subproject-celebrates/</guid><description>&lt;!--
---
layout: blog
title: 'SIG Node CI Subproject Celebrates Two Years of Test Improvements'
date: 2022-02-16
slug: sig-node-ci-subproject-celebrates
canonicalUrl: https://www.kubernetes.dev/blog/2022/02/16/sig-node-ci-subproject-celebrates-two-years-of-test-improvements/
url: /zh-cn/blog/2022/02/sig-node-ci-subproject-celebrates
---
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Sergey Kanzhelev (Google), Elana Hashman (Red Hat)&lt;/p&gt;
&lt;!--**Authors:** Sergey Kanzhelev (Google), Elana Hashman (Red Hat)--&gt;
&lt;!--Ensuring the reliability of SIG Node upstream code is a continuous effort
that takes a lot of behind-the-scenes effort from many contributors.
There are frequent releases of Kubernetes, base operating systems,
container runtimes, and test infrastructure that result in a complex matrix that
requires attention and steady investment to "keep the lights on."
In May 2020, the Kubernetes node special interest group ("SIG Node") organized a new
subproject for continuous integration (CI) for node-related code and tests. Since its
inauguration, the SIG Node CI subproject has run a weekly meeting, and even the full hour
is often not enough to complete triage of all bugs, test-related PRs and issues, and discuss all
related ongoing work within the subgroup.--&gt;
&lt;p&gt;保证 SIG Node 上游代码的可靠性是一项持续的工作，需要许多贡献者在幕后付出大量努力。
Kubernetes、基础操作系统、容器运行时和测试基础架构的频繁发布，导致了一个复杂的矩阵，
需要关注和稳定的投资来“保持灯火通明”。2020 年 5 月，Kubernetes Node 特殊兴趣小组
（“SIG Node”）为节点相关代码和测试组织了一个新的持续集成（CI）子项目。自成立以来，SIG Node CI
子项目每周举行一次会议，即使一整个小时通常也不足以完成对所有缺陷、测试相关的 PR 和问题的分类，
并讨论组内所有相关的正在进行的工作。&lt;/p&gt;</description></item><item><title>关注 SIG Multicluster</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/07/sig-multicluster-spotlight-2022/</link><pubDate>Mon, 07 Feb 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/07/sig-multicluster-spotlight-2022/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Multicluster"
date: 2022-02-07
slug: sig-multicluster-spotlight-2022
canonicalUrl: https://www.kubernetes.dev/blog/2022/02/04/sig-multicluster-spotlight-2022/
--&gt;
&lt;!--
**Authors:** Dewan Ahmed (Aiven) and Chris Short (AWS) 
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Dewan Ahmed (Aiven) 和 Chris Short (AWS)&lt;/p&gt;
&lt;!-- 
## Introduction

[SIG Multicluster](https://github.com/kubernetes/community/tree/master/sig-multicluster) is the SIG focused on how Kubernetes concepts are expanded and used beyond the cluster boundary. Historically, Kubernetes resources only interacted within that boundary - KRU or Kubernetes Resource Universe (not an actual Kubernetes concept). Kubernetes clusters, even now, don't really know anything about themselves or, about other clusters. Absence of cluster identifiers is a case in point. With the growing adoption of multicloud and multicluster deployments, the work SIG Multicluster doing is gaining a lot of attention. In this blog, [Jeremy Olmsted-Thompson, Google](https://twitter.com/jeremyot) and [Chris Short, AWS](https://twitter.com/ChrisShort) discuss the interesting problems SIG Multicluster is solving and how you can get involved. Their initials **JOT** and **CS** will be used for brevity. 
--&gt;
&lt;h2 id="简介"&gt;简介&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/sig-multicluster"&gt;SIG Multicluster&lt;/a&gt;
是专注于如何拓展 Kubernetes 的概念并将其用于集群边界之外的 SIG。
以往 Kubernetes 资源仅在 Kubernetes Resource Universe (KRU) 这个边界内进行交互，其中 KRU 不是一个实际的 Kubernetes 概念。
即使是现在，Kubernetes 集群对自身或其他集群并不真正了解。集群标识符的缺失就是一个例子。
随着多云和多集群部署日益普及，SIG Multicluster 所做的工作越来越受到关注。
在这篇博客中，&lt;a href="https://twitter.com/jeremyot"&gt;来自 Google 的 Jeremy Olmsted-Thompson&lt;/a&gt; 和
&lt;a href="https://twitter.com/ChrisShort"&gt;来自 AWS 的 Chris Short&lt;/a&gt; 讨论了 SIG Multicluster
正在解决的一些有趣的问题和以及大家如何参与其中。
为简洁起见，下文将使用他们两位的首字母 &lt;strong&gt;JOT&lt;/strong&gt; 和 &lt;strong&gt;CS&lt;/strong&gt;。&lt;/p&gt;</description></item><item><title>确保准入控制器的安全</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/</link><pubDate>Wed, 19 Jan 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/</guid><description>&lt;!--
layout: blog
title: "Securing Admission Controllers"
date: 2022-01-19
slug: secure-your-admission-controllers-and-webhooks
--&gt;
&lt;!--
**Author:** Rory McCune (Aqua Security)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Rory McCune (Aqua Security)&lt;/p&gt;
&lt;!--
[Admission control](/docs/reference/access-authn-authz/admission-controllers/) is a key part of Kubernetes security, alongside authentication and authorization. 
Webhook admission controllers are extensively used to help improve the security of Kubernetes clusters in a variety of ways including restricting the privileges of workloads and ensuring that images deployed to the cluster meet organization’s security requirements.
--&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers/"&gt;准入控制&lt;/a&gt;和认证、授权都是 Kubernetes 安全性的关键部分。
Webhook 准入控制器被广泛用于以多种方式帮助提高 Kubernetes 集群的安全性，
包括限制工作负载权限和确保部署到集群的镜像满足组织安全要求。&lt;/p&gt;</description></item><item><title>认识我们的贡献者 - 亚太地区（印度地区）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/01/10/meet-our-contributors-india-ep-01/</link><pubDate>Mon, 10 Jan 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/01/10/meet-our-contributors-india-ep-01/</guid><description>&lt;!--
layout: blog
title: "Meet Our Contributors - APAC (India region)"
date: 2022-01-10
slug: meet-our-contributors-india-ep-01
canonicalUrl: https://www.kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
--&gt;
&lt;!--
**Authors &amp; Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
--&gt;
&lt;p&gt;&lt;strong&gt;作者和采访者：&lt;/strong&gt; &lt;a href="https://github.com/anubha-v-ardhan"&gt;Anubhav Vardhan&lt;/a&gt;、
&lt;a href="https://github.com/Atharva-Shinde"&gt;Atharva Shinde&lt;/a&gt;、
&lt;a href="https://github.com/AvineshTripathi"&gt;Avinesh Tripathi&lt;/a&gt;、
&lt;a href="https://github.com/Debanitrkl"&gt;Debabrata Panigrahi&lt;/a&gt;、
&lt;a href="https://github.com/verma-kunal"&gt;Kunal Verma&lt;/a&gt;、
&lt;a href="https://github.com/PranshuSrivastava"&gt;Pranshu Srivastava&lt;/a&gt;、
&lt;a href="https://github.com/CIPHERTron"&gt;Pritish Samal&lt;/a&gt;、
&lt;a href="https://github.com/PurneswarPrasad"&gt;Purneswar Prasad&lt;/a&gt;、
&lt;a href="https://github.com/vedant-kakde"&gt;Vedant Kakde&lt;/a&gt;&lt;/p&gt;
&lt;!--
**Editor:** [Priyanka Saggu](https://psaggu.com)
--&gt;
&lt;p&gt;&lt;strong&gt;编辑：&lt;/strong&gt; &lt;a href="https://psaggu.com"&gt;Priyanka Saggu&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;!--
Good day, everyone 👋
--&gt;
&lt;p&gt;大家好 👋&lt;/p&gt;
&lt;!--
Welcome to the first episode of the APAC edition of the "Meet Our Contributors" blog post series.
--&gt;
&lt;p&gt;欢迎来到亚太地区的“认识我们的贡献者”博文系列第一期。&lt;/p&gt;</description></item><item><title>Kubernetes 即将移除 Dockershim：承诺和下一步</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/</link><pubDate>Fri, 07 Jan 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/</guid><description>&lt;!--
layout: blog
title: "Kubernetes is Moving on From Dockershim: Commitments and Next Steps"
date: 2022-01-07
slug: kubernetes-is-moving-on-from-dockershim
--&gt;
&lt;!--
**Authors:** Sergey Kanzhelev (Google), Jim Angel (Google), Davanum Srinivas (VMware), Shannon Kularathna (Google), Chris Short (AWS), Dawn Chen (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Sergey Kanzhelev (Google), Jim Angel (Google), Davanum Srinivas (VMware), Shannon Kularathna (Google), Chris Short (AWS), Dawn Chen (Google)&lt;/p&gt;
&lt;!--
Kubernetes is removing dockershim in the upcoming v1.24 release. We're excited
to reaffirm our community values by supporting open source container runtimes,
enabling a smaller kubelet, and increasing engineering velocity for teams using
Kubernetes. If you [use Docker Engine as a container runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)
for your Kubernetes cluster, get ready to migrate in 1.24! To check if you're
affected, refer to [Check whether dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/).
--&gt;
&lt;p&gt;Kubernetes 将在即将发布的 1.24 版本中移除 dockershim。我们很高兴能够通过支持开源容器运行时、支持更小的
kubelet 以及为使用 Kubernetes 的团队提高工程速度来重申我们的社区价值。
如果你&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/"&gt;使用 Docker Engine 作为 Kubernetes 集群的容器运行时&lt;/a&gt;，
请准备好在 1.24 中迁移！要检查你是否受到影响，
请参考&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/"&gt;检查移除 Dockershim 对你的影响&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>Security Profiles Operator v0.4.0 中的新功能</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/17/security-profiles-operator/</link><pubDate>Fri, 17 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/17/security-profiles-operator/</guid><description>&lt;!--
layout: blog
title: "What's new in Security Profiles Operator v0.4.0"
date: 2021-12-17
slug: security-profiles-operator
--&gt;
&lt;!--
**Authors:** Jakub Hrozek, Juan Antonio Osorio, Paulo Gomes, Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Jakub Hrozek, Juan Antonio Osorio, Paulo Gomes, Sascha Grunert&lt;/p&gt;
&lt;hr&gt;
&lt;!--
The [Security Profiles Operator (SPO)](https://sigs.k8s.io/security-profiles-operator)
is an out-of-tree Kubernetes enhancement to make the management of
[seccomp](https://en.wikipedia.org/wiki/Seccomp),
[SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) and
[AppArmor](https://en.wikipedia.org/wiki/AppArmor) profiles easier and more
convenient. We're happy to announce that we recently [released
v0.4.0](https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.4.0)
of the operator, which contains a ton of new features, fixes and usability
improvements.
--&gt;
&lt;p&gt;&lt;a href="https://sigs.k8s.io/security-profiles-operator"&gt;Security Profiles Operator (SPO)&lt;/a&gt;
是一种树外 Kubernetes 增强功能，用于更方便、更便捷地管理 &lt;a href="https://en.wikipedia.org/wiki/Seccomp"&gt;seccomp&lt;/a&gt;、
&lt;a href="https://zh.wikipedia.org/wiki/%E5%AE%89%E5%85%A8%E5%A2%9E%E5%BC%BA%E5%BC%8FLinux"&gt;SELinux&lt;/a&gt; 和
&lt;a href="https://zh.wikipedia.org/wiki/AppArmor"&gt;AppArmor&lt;/a&gt; 配置文件。
我们很高兴地宣布，我们最近&lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.4.0"&gt;发布了 v0.4.0&lt;/a&gt;
的 Operator，其中包含了大量的新功能、缺陷修复和可用性改进。&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: StatefulSet PVC 自动删除 (alpha)</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/</link><pubDate>Thu, 16 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.23: StatefulSet PVC Auto-Deletion (alpha)'
date: 2021-12-16
slug: kubernetes-1-23-statefulset-pvc-auto-deletion
--&gt;
&lt;!--
**Author:** Matthew Cary (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Matthew Cary (谷歌)&lt;/p&gt;
&lt;!--
Kubernetes v1.23 introduced a new, alpha-level policy for
[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
[PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet
is deleted or pods in the StatefulSet are scaled down.
--&gt;
&lt;p&gt;Kubernetes v1.23 为 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt;
引入了一个新的 alpha 级策略，用来控制由 StatefulSet 规约模板生成的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumeClaims&lt;/a&gt; (PVCs) 的生命周期，
适用于在删除 StatefulSet 或缩减 StatefulSet 中 Pod 数量时应自动删除 PVC 的场景。&lt;/p&gt;</description></item><item><title>Kubernetes 1.23：树内存储向 CSI 卷迁移工作的进展更新</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/10/storage-in-tree-to-csi-migration-status-update/</link><pubDate>Fri, 10 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/10/storage-in-tree-to-csi-migration-status-update/</guid><description>&lt;!---
layout: blog
title: "Kubernetes 1.23: Kubernetes In-Tree to CSI Volume Migration Status Update"
date: 2021-12-10
slug: storage-in-tree-to-csi-migration-status-update
--&gt;
&lt;!---
**Author:** Jiawei Wang (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Jiawei Wang（谷歌）&lt;/p&gt;
&lt;!---
The Kubernetes in-tree storage plugin to [Container Storage Interface (CSI)](/blog/2019/01/15/container-storage-interface-ga/) migration infrastructure has already been [beta](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/) since v1.17. CSI migration was introduced as alpha in Kubernetes v1.14.
--&gt;
&lt;p&gt;Kubernetes 树内存储插件（in-tree storage plugin）向&lt;a href="https://andygol-k8s.netlify.app/blog/2019/01/15/container-storage-interface-ga/"&gt;容器存储接口（Container Storage Interface, CSI）&lt;/a&gt;迁移的基础设施自 v1.17 起已进入 &lt;a href="https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/"&gt;beta 阶段&lt;/a&gt;。CSI 迁移在 Kubernetes v1.14 中作为 alpha 特性引入。&lt;/p&gt;</description></item><item><title>Kubernetes 1.23：IPv4/IPv6 双协议栈网络达到 GA</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/08/dual-stack-networking-ga/</link><pubDate>Wed, 08 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/12/08/dual-stack-networking-ga/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.23: Dual-stack IPv4/IPv6 Networking Reaches GA'
date: 2021-12-08
slug: dual-stack-networking-ga
--&gt;
&lt;!--
**Author:** Bridget Kromhout (Microsoft)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Bridget Kromhout (微软)&lt;/p&gt;
&lt;!--
"When will Kubernetes have IPv6?" This question has been asked with increasing frequency ever since alpha support for IPv6 was first added in k8s v1.9. While Kubernetes has supported IPv6-only clusters since v1.18, migration from IPv4 to IPv6 was not yet possible at that point. At long last, [dual-stack IPv4/IPv6 networking](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/563-dual-stack/) has reached general availability (GA) in Kubernetes v1.23.

What does dual-stack networking mean for you? Let’s take a look…
--&gt;
&lt;p&gt;“Kubernetes 何时支持 IPv6？” 自从 k8s v1.9 版本中首次添加对 IPv6 的 alpha 支持以来，这个问题的讨论越来越频繁。
虽然 Kubernetes 从 v1.18 版本开始就支持纯 IPv6 集群，但当时还无法支持 IPv4 迁移到 IPv6。
&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/563-dual-stack/"&gt;IPv4/IPv6 双协议栈网络&lt;/a&gt;
在 Kubernetes v1.23 版本中进入正式发布（GA）阶段。&lt;/p&gt;</description></item><item><title>公布 2021 年指导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/11/08/steering-committee-results-2021/</link><pubDate>Mon, 08 Nov 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/11/08/steering-committee-results-2021/</guid><description>&lt;!--
layout: blog
title: "Announcing the 2021 Steering Committee Election Results"
date: 2021-11-08
slug: steering-committee-results-2021
--&gt;
&lt;!--
**Author**: Kaslin Fields
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Kaslin Fields&lt;/p&gt;
&lt;!--
The [2021 Steering Committee Election](https://github.com/kubernetes/community/tree/master/events/elections/2021) is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2021. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/events/elections/2021"&gt;2021 年指导委员会选举&lt;/a&gt;现已完成。
Kubernetes 指导委员会由 7 个席位组成，其中 4 个席位在 2021 年进行了选举。
新任委员会成员任期 2 年，所有成员均由 Kubernetes 社区选举产生。&lt;/p&gt;</description></item><item><title>关注 SIG Node</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/09/27/sig-node-spotlight-2021/</link><pubDate>Mon, 27 Sep 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/09/27/sig-node-spotlight-2021/</guid><description>&lt;!--
---
layout: blog
title: "Spotlight on SIG Node"
date: 2021-09-27
slug: sig-node-spotlight-2021
--- 
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Dewan Ahmed, Red Hat&lt;/p&gt;
&lt;!--
**Author:** Dewan Ahmed, Red Hat
--&gt;
&lt;!--
## Introduction

In Kubernetes, a _Node_ is a representation of a single machine in your cluster. [SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) owns that very important Node component and supports various subprojects such as Kubelet, Container Runtime Interface (CRI) and more to support how the pods and host resources interact. In this blog, we have summarized our conversation with [Elana Hashman (EH)](https://twitter.com/ehashdn) &amp; [Sergey Kanzhelev (SK)](https://twitter.com/SergeyKanzhelev), who walk us through the various aspects of being a part of the SIG and share some insights about how others can get involved.
--&gt;
&lt;h2 id="介绍"&gt;介绍&lt;/h2&gt;
&lt;p&gt;在 Kubernetes 中，&lt;em&gt;Node&lt;/em&gt; 是集群中单台机器的表示。
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-node"&gt;SIG Node&lt;/a&gt; 负责这一非常重要的 Node 组件并支持各种子项目，
如 Kubelet, Container Runtime Interface (CRI) 以及其他支持 Pod 和主机资源间交互的子项目。
在这篇文章中，我们总结了与 &lt;a href="https://twitter.com/ehashdn"&gt;Elana Hashman (EH)&lt;/a&gt; 和 &lt;a href="https://twitter.com/SergeyKanzhelev"&gt;Sergey Kanzhelev (SK)&lt;/a&gt; 的对话，他们带我们了解了作为此 SIG 一员的方方面面，并分享了其他人如何参与其中的一些见解。&lt;/p&gt;</description></item><item><title>更新 NGINX-Ingress 以使用稳定的 Ingress API</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/07/26/update-with-ingress-nginx/</link><pubDate>Mon, 26 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/07/26/update-with-ingress-nginx/</guid><description>&lt;!--
layout: blog
title: 'Updating NGINX-Ingress to use the stable Ingress API'
date: 2021-07-26
slug: update-with-ingress-nginx
--&gt;
&lt;!--
**Authors:** James Strong, Ricardo Katz
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; James Strong, Ricardo Katz&lt;/p&gt;
&lt;!--
With all Kubernetes APIs, there is a process to creating, maintaining, and
ultimately deprecating them once they become GA. The networking.k8s.io API group is no
different. The upcoming Kubernetes 1.22 release will remove several deprecated APIs
that are relevant to networking:
--&gt;
&lt;p&gt;对于所有 Kubernetes API，一旦它们被正式发布（GA），就有一个创建、维护和最终弃用它们的过程。
networking.k8s.io API 组也不例外。
即将发布的 Kubernetes 1.22 版本将移除几个与网络相关的已弃用 API：&lt;/p&gt;</description></item><item><title>聚焦 SIG Usability</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/07/15/sig-usability-spotlight-2021/</link><pubDate>Thu, 15 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/07/15/sig-usability-spotlight-2021/</guid><description>&lt;!--
layout: blog
title: "Spotlight on SIG Usability"
date: 2021-07-15
slug: sig-usability-spotlight-2021
author: &gt;
 Kunal Kushwaha (Civo)
--&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;说明：&lt;/h4&gt;&lt;!--
SIG Usability, which is featured in this Spotlight blog, has been deprecated and is no longer active.
As a result, the links and information provided in this blog post may no longer be valid or relevant.
Should there be renewed interest and increased participation in the future, the SIG may be revived.
However, as of August 2023 the SIG is inactive per the Kubernetes community policy.
The Kubernetes project encourages you to explore other
[SIGs](https://github.com/kubernetes/community/blob/master/sig-list.md#special-interest-groups)
and resources available on the Kubernetes website to stay up-to-date with the latest developments
and enhancements in Kubernetes.
--&gt;
&lt;p&gt;本篇聚焦博客提到的 SIG Usability 已被弃用并且不再活跃。因此，
本文中提供的链接和信息可能已失效或不再适用。
如果未来社区对该小组重新产生兴趣并有更多成员参与，
它有可能会被重新启用。但根据 Kubernetes 社区的政策，
截至 2023 年 8 月，该 SIG 已处于非活跃状态。
Kubernetes 项目鼓励你去探索其他
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-list.md#special-interest-groups"&gt;SIG&lt;/a&gt;
以及 Kubernetes 官网提供的各类资源，以便及时了解 Kubernetes
的最新进展和功能增强。&lt;/p&gt;</description></item><item><title>卷健康监控的 Alpha 更新</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/04/16/volume-health-monitoring-alpha-update/</link><pubDate>Fri, 16 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/04/16/volume-health-monitoring-alpha-update/</guid><description>&lt;!--
layout: blog
title: "Volume Health Monitoring Alpha Update"
date: 2021-04-16
slug: volume-health-monitoring-alpha-update
--&gt;
&lt;!--
**Author:** Xing Yang (VMware)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Xing Yang (VMware)&lt;/p&gt;
&lt;!--
The CSI Volume Health Monitoring feature, originally introduced in 1.19 has undergone a large update for the 1.21 release.
--&gt;
&lt;p&gt;最初在 1.19 中引入的 CSI 卷健康监控功能在 1.21 版本中进行了大规模更新。&lt;/p&gt;
&lt;!--
## Why add Volume Health Monitoring to Kubernetes?
--&gt;
&lt;h2 id="为什么要向-kubernetes-添加卷健康监控"&gt;为什么要向 Kubernetes 添加卷健康监控？&lt;/h2&gt;
&lt;!--
Without Volume Health Monitoring, Kubernetes has no knowledge of the state of the underlying volumes of a storage system after a PVC is provisioned and used by a Pod. Many things could happen to the underlying storage system after a volume is provisioned in Kubernetes. For example, the volume could be deleted by accident outside of Kubernetes, the disk that the volume resides on could fail, it could be out of capacity, the disk may be degraded which affects its performance, and so on. Even when the volume is mounted on a pod and used by an application, there could be problems later on such as read/write I/O errors, file system corruption, accidental unmounting of the volume outside of Kubernetes, etc. It is very hard to debug and detect root causes when something happened like this.
--&gt;
&lt;p&gt;如果没有卷健康监控，在 PVC 被配置并由 Pod 使用后，Kubernetes 将无法了解存储系统底层卷的状态。
在 Kubernetes 中配置卷后，底层存储系统可能会发生很多事情。
例如，卷可能在 Kubernetes 之外被意外删除、卷所在的磁盘可能发生故障、容量不足、磁盘可能被降级而影响其性能等等。
即使卷被挂载到 Pod 上并被应用程序使用，以后也可能会出现诸如读/写 I/O 错误、文件系统损坏、在 Kubernetes 之外被意外卸载卷等问题。
当发生这样的事情时，调试和检测根本原因是非常困难的。&lt;/p&gt;</description></item><item><title>弃用 PodSecurityPolicy：过去、现在、未来</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/</link><pubDate>Tue, 06 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/</guid><description>&lt;!--
title: "PodSecurityPolicy Deprecation: Past, Present, and Future"
--&gt;
&lt;!--
**Author:** Tabitha Sable (Kubernetes SIG Security)
--&gt;
&lt;p&gt;作者：Tabitha Sable（Kubernetes SIG Security）&lt;/p&gt;
&lt;div class="pageinfo pageinfo-primary"&gt;
&lt;!--
**Update:** *With the release of Kubernetes v1.25, PodSecurityPolicy has been removed.*
--&gt;
&lt;p&gt;&lt;strong&gt;更新：随着 Kubernetes v1.25 的发布，PodSecurityPolicy 已被删除。&lt;/strong&gt;&lt;/p&gt;
&lt;!--
*You can read more information about the removal of PodSecurityPolicy in the
[Kubernetes 1.25 release notes](/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes).*
--&gt;
&lt;p&gt;&lt;strong&gt;你可以在 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes"&gt;Kubernetes 1.25 发布说明&lt;/a&gt;
中阅读有关删除 PodSecurityPolicy 的更多信息。&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;!--
PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week.
This starts the countdown to its removal, but doesn’t change anything else.
PodSecurityPolicy will continue to be fully functional for several more releases before being removed completely.
In the meantime, we are developing a replacement for PSP that covers key use cases more easily and sustainably.
--&gt;
&lt;p&gt;PodSecurityPolicy (PSP) 在 Kubernetes 1.21 中被弃用。&lt;!--to be released later this week这句感觉没必要翻译，非漏译--&gt;
PSP 日后会被移除，但目前不会改变任何其他内容。在移除之前，PSP 将继续在后续多个版本中完全正常运行。
与此同时，我们正在开发 PSP 的替代品，希望可以更轻松、更可持续地覆盖关键用例。&lt;/p&gt;</description></item><item><title>一个编排高可用应用的 Kubernetes 自定义调度器</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/21/writing-crl-scheduler/</link><pubDate>Mon, 21 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/21/writing-crl-scheduler/</guid><description>&lt;!--
---
layout: blog
title: "A Custom Kubernetes Scheduler to Orchestrate Highly Available Applications"
date: 2020-12-21
slug: writing-crl-scheduler
---
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Chris Seto (Cockroach Labs)&lt;/p&gt;
&lt;!--
**Author**: Chris Seto (Cockroach Labs)
--&gt;
&lt;!--
As long as you're willing to follow the rules, deploying on Kubernetes and air travel can be quite pleasant. More often than not, things will "just work". However, if one is interested in travelling with an alligator that must remain alive or scaling a database that must remain available, the situation is likely to become a bit more complicated. It may even be easier to build one's own plane or database for that matter. Travelling with reptiles aside, scaling a highly available stateful system is no trivial task.
--&gt;
&lt;p&gt;只要你愿意遵守规则，在 Kubernetes 上部署应用和乘飞机旅行都可以是相当愉快的。大多数时候，事情都会 &amp;quot;顺利进行&amp;quot;。
然而，如果有人想带着一条必须保持存活的鳄鱼去旅行，或者要扩展一个必须保持可用的数据库，
情况就可能变得复杂一些。
就此而言，自己造一架飞机或一个数据库甚至可能更容易。撇开与鳄鱼同行不谈，扩展一个高可用的有状态系统绝非易事。&lt;/p&gt;</description></item><item><title>Kubernetes 1.20：CSI 驱动程序中的 Pod 身份假扮和短时卷</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/</link><pubDate>Fri, 18 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes 1.20: Pod Impersonation and Short-lived Volumes in CSI Drivers'
date: 2020-12-18
slug: kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi
--&gt;
&lt;!--
**Author**: Shihang Zhang (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Shihang Zhang（谷歌）&lt;/p&gt;
&lt;!--
Typically when a [CSI](https://github.com/container-storage-interface/spec/blob/baa71a34651e5ee6cb983b39c03097d7aa384278/spec.md) driver mounts credentials such as secrets and certificates, it has to authenticate against storage providers to access the credentials. However, the access to those credentials are controlled on the basis of the pods' identities rather than the CSI driver's identity. CSI drivers, therefore, need some way to retrieve pod's service account token. 
--&gt;
&lt;p&gt;通常，当 &lt;a href="https://github.com/container-storage-interface/spec/blob/baa71a34651e5ee6cb983b39c03097d7aa384278/spec.md"&gt;CSI&lt;/a&gt; 驱动程序挂载
诸如 Secret 和证书之类的凭据时，它必须通过存储提供者的身份认证才能访问这些凭据。
然而，对这些凭据的访问是根据 Pod 的身份而不是 CSI 驱动程序的身份来控制的。
因此，CSI 驱动程序需要某种方法来取得 Pod 的服务帐户令牌。&lt;/p&gt;</description></item><item><title>Kubernetes 1.20: 最新版本</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/08/kubernetes-1-20-release-announcement/</link><pubDate>Tue, 08 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/08/kubernetes-1-20-release-announcement/</guid><description>&lt;!-- ---
layout: blog
title: 'Kubernetes 1.20: The Raddest Release'
date: 2020-12-08
slug: kubernetes-1-20-release-announcement
evergreen: true
--- --&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.20/release_team.md"&gt;Kubernetes 1.20 发布团队&lt;/a&gt;&lt;/p&gt;
&lt;!-- **Authors:** [Kubernetes 1.20 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.20/release_team.md) --&gt;
&lt;p&gt;我们很高兴地宣布 Kubernetes 1.20 的发布，这是我们 2020 年的第三个也是最后一个版本！此版本包含 42 项增强功能：11 项增强功能已升级到稳定版，15 项增强功能正在进入测试版，16 项增强功能正在进入 Alpha 版。&lt;/p&gt;
&lt;!-- We’re pleased to announce the release of Kubernetes 1.20, our third and final release of 2020! This release consists of 42 enhancements: 11 enhancements have graduated to stable, 15 enhancements are moving to beta, and 16 enhancements are entering alpha. --&gt;
&lt;p&gt;1.20 发布周期在上一个延长的发布周期之后恢复到 11 周的正常节奏。这是一段时间以来功能最密集的版本之一：Kubernetes 创新周期仍呈上升趋势。此版本具有更多的 Alpha 而非稳定的增强功能，表明云原生生态系统仍有许多需要探索的地方。&lt;/p&gt;</description></item><item><title>别慌: Kubernetes 和 Docker</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dont-panic-kubernetes-and-docker/</guid><description>&lt;!-- 
layout: blog
title: "Don't Panic: Kubernetes and Docker"
date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
evergreen: true
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas&lt;/p&gt;
&lt;!--
**Update:** _Kubernetes support for Docker via `dockershim` is now removed.
For more information, read the [removal FAQ](/dockershim).
You can also discuss the deprecation via a dedicated [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)._
--&gt;
&lt;p&gt;&lt;strong&gt;更新&lt;/strong&gt;：Kubernetes 通过 &lt;code&gt;dockershim&lt;/code&gt; 对 Docker 的支持现已移除。
有关更多信息，请阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/dockershim"&gt;移除 FAQ&lt;/a&gt;。
你还可以通过专门的 &lt;a href="https://github.com/kubernetes/kubernetes/issues/106917"&gt;GitHub issue&lt;/a&gt; 讨论弃用。&lt;/p&gt;</description></item><item><title>弃用 Dockershim 的常见问题</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dockershim-faq/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/12/02/dockershim-faq/</guid><description>&lt;!-- 
layout: blog
title: "Dockershim Deprecation FAQ"
date: 2020-12-02
slug: dockershim-faq
--&gt;
&lt;!--
_**Update**: There is a [newer version](/blog/2022/02/17/dockershim-faq/) of this article available._
--&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;更新&lt;/strong&gt;：本文有&lt;a href="https://andygol-k8s.netlify.app/zh-cn/blog/2022/02/17/dockershim-faq/"&gt;较新版本&lt;/a&gt;。&lt;/em&gt;&lt;/p&gt;
&lt;!-- 
This document goes over some frequently asked questions regarding the Dockershim
deprecation announced as a part of the Kubernetes v1.20 release. For more detail
on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
what that means, check out the blog post
[Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).

Also, you can read [check whether Dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) to check whether it does.
--&gt;
&lt;p&gt;本文回顾了自 Kubernetes v1.20 版宣布弃用 Dockershim 以来所引发的一些常见问题。
关于弃用 Docker 作为 Kubernetes kubelet 的容器运行时的更多细节及其含义，请参考博文
&lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dont-panic-kubernetes-and-docker/"&gt;别慌: Kubernetes 和 Docker&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>为开发指南做贡献</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/10/01/%E4%B8%BA%E5%BC%80%E5%8F%91%E6%8C%87%E5%8D%97%E5%81%9A%E8%B4%A1%E7%8C%AE/</link><pubDate>Thu, 01 Oct 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/10/01/%E4%B8%BA%E5%BC%80%E5%8F%91%E6%8C%87%E5%8D%97%E5%81%9A%E8%B4%A1%E7%8C%AE/</guid><description>&lt;!-- 
---
title: "Contributing to the Development Guide"
linkTitle: "Contributing to the Development Guide"
Author: Erik L. Arneson
Description: "A new contributor describes the experience of writing and submitting changes to the Kubernetes Development Guide."
date: 2020-10-01
canonicalUrl: https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/
resources:
- src: "jorge-castro-code-of-conduct.jpg"
 title: "Jorge Castro announcing the Kubernetes Code of Conduct during a weekly SIG ContribEx meeting."
--- 
--&gt;
&lt;!-- 
When most people think of contributing to an open source project, I suspect they probably think of
contributing code changes, new features, and bug fixes. As a software engineer and a long-time open
source user and contributor, that's certainly what I thought. Although I have written a good quantity
of documentation in different workflows, the massive size of the Kubernetes community was a new kind 
of "client." I just didn't know what to expect when Google asked my compatriots and me at
[Lion's Way](https://lionswaycontent.com/) to make much-needed updates to the Kubernetes Development Guide.

*This article originally appeared on the [Kubernetes Contributor Community blog](https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/).*
--&gt;
&lt;p&gt;当大多数人想到为一个开源项目做贡献时，我猜想他们可能想到的是贡献代码修改、新功能和错误修复。作为一个软件工程师和一个长期的开源用户和贡献者，这也正是我的想法。
虽然我已经在不同的工作流中写了不少文档，但规模庞大的 Kubernetes 社区是一种新型 &amp;quot;客户&amp;quot;。我只是不知道当 Google 要求我和 &lt;a href="https://lionswaycontent.com/"&gt;Lion's Way&lt;/a&gt; 的同胞们对 Kubernetes 开发指南进行必要更新时会发生什么。&lt;/p&gt;</description></item><item><title>结构化日志介绍</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/</link><pubDate>Fri, 04 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/</guid><description>&lt;!--
layout: blog
title: 'Introducing Structured Logs'
date: 2020-09-04
slug: kubernetes-1-19-Introducing-Structured-Logs
--&gt;
&lt;!--
**Authors:** Marek Siarkowicz (Google), Nathan Beach (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Marek Siarkowicz（谷歌），Nathan Beach（谷歌）&lt;/p&gt;
&lt;!--
Logs are an essential aspect of observability and a critical tool for debugging. But Kubernetes logs have traditionally been unstructured strings, making any automated parsing difficult and any downstream processing, analysis, or querying challenging to do reliably.
--&gt;
&lt;p&gt;日志是可观察性的一个重要方面，也是调试的关键工具。但 Kubernetes 的日志在传统上是非结构化的字符串，这使得自动解析十分困难，任何可靠的后续处理、分析或查询也都难以进行。&lt;/p&gt;
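&lt;p&gt;下面用一段简短的 Go 代码示意结构化日志与传统日志调用的差别（基于 k8s.io/klog/v2 的 InfoS/ErrorS 接口，仅作示意，并非本文原文内容）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// 传统的非结构化写法：整条消息是一个自由格式字符串
klog.Infof(&amp;quot;Updated pod %s/%s status to ready&amp;quot;, pod.Namespace, pod.Name)

// 结构化写法：消息之后跟随 (键, 值) 对，便于自动解析与查询
klog.InfoS(&amp;quot;Updated pod status&amp;quot;, &amp;quot;pod&amp;quot;, klog.KObj(pod), &amp;quot;status&amp;quot;, &amp;quot;ready&amp;quot;)
klog.ErrorS(err, &amp;quot;Failed to update pod status&amp;quot;, &amp;quot;pod&amp;quot;, klog.KObj(pod))
&lt;/code&gt;&lt;/pre&gt;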
&lt;!--
In Kubernetes 1.19, we are adding support for structured logs, which natively support (key, value) pairs and object references. We have also updated many logging calls such that over 99% of logging volume in a typical deployment are now migrated to the structured format.
--&gt;
&lt;p&gt;在 Kubernetes 1.19 中，我们添加了对结构化日志的支持，它原生支持（键，值）对和对象引用。我们还更新了许多日志记录调用，如今典型部署中超过 99% 的日志量已迁移为结构化格式。&lt;/p&gt;</description></item><item><title>警告: 有用的预警</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/09/03/warnings/</link><pubDate>Thu, 03 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/09/03/warnings/</guid><description>&lt;!--
layout: blog
title: "Warning: Helpful Warnings Ahead"
date: 2020-09-03
slug: warnings
evergreen: true
--&gt;
&lt;!--
**Author**: [Jordan Liggitt](https://github.com/liggitt) (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: &lt;a href="https://github.com/liggitt"&gt;Jordan Liggitt&lt;/a&gt; (Google)&lt;/p&gt;
&lt;!--
As Kubernetes maintainers, we're always looking for ways to improve usability while preserving compatibility.
As we develop features, triage bugs, and answer support questions, we accumulate information that would be helpful for Kubernetes users to know.
In the past, sharing that information was limited to out-of-band methods like release notes, announcement emails, documentation, and blog posts.
Unless someone knew to seek out that information and managed to find it, they would not benefit from it.
--&gt;
&lt;p&gt;作为 Kubernetes 维护者，我们一直在寻找在保持兼容性的同时提高可用性的方法。
在开发功能、分类 Bug 和回答支持问题的过程中，我们积累了对 Kubernetes 用户很有帮助的信息。
过去，共享这些信息仅限于发布说明、公告电子邮件、文档和博客文章等带外方法。
除非有人知道需要寻找这些信息并成功找到它们，否则他们不会从中受益。&lt;/p&gt;</description></item><item><title>Docsy 带来更好的 Docs UX</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/06/better-docs-ux-with-docsy/</link><pubDate>Mon, 15 Jun 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/06/better-docs-ux-with-docsy/</guid><description>&lt;!--
layout: blog
title: A Better Docs UX With Docsy
date: 2020-06-15
slug: better-docs-ux-with-docsy
url: /blog/2020/06/better-docs-ux-with-docsy
--&gt;
&lt;!--
**Author:** Zach Corleissen, Cloud Native Computing Foundation
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Zach Corleissen，Cloud Native Computing Foundation&lt;/p&gt;
&lt;!--
*Editor's note: Zach is one of the chairs for the Kubernetes documentation special interest group (SIG Docs).*
--&gt;
&lt;p&gt;&lt;strong&gt;编者注：Zach 是 Kubernetes 文档特别兴趣小组（SIG Docs）的主席之一。&lt;/strong&gt;&lt;/p&gt;
&lt;!--
I'm pleased to announce that the [Kubernetes website](https://kubernetes.io) now features the [Docsy Hugo theme](https://docsy.dev).
--&gt;
&lt;p&gt;我很高兴地宣布 &lt;a href="https://kubernetes.io"&gt;Kubernetes 网站&lt;/a&gt;现在采用了 &lt;a href="https://docsy.dev"&gt;Docsy Hugo 主题&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>Kubernetes 1.18: Fit &amp; Finish</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/03/25/kubernetes-1-18-release-announcement/</link><pubDate>Wed, 25 Mar 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/03/25/kubernetes-1-18-release-announcement/</guid><description>&lt;!--
**Authors:** [Kubernetes 1.18 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.18/release_team.md)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.18/release_team.md"&gt;Kubernetes 1.18 发布团队&lt;/a&gt;&lt;/p&gt;
&lt;!--
We're pleased to announce the delivery of Kubernetes 1.18, our first release of 2020! Kubernetes 1.18 consists of 38 enhancements: 15 enhancements are moving to stable, 11 enhancements in beta, and 12 enhancements in alpha.
--&gt;
&lt;p&gt;我们很高兴宣布 Kubernetes 1.18 版本的交付，这是我们 2020 年的第一版！Kubernetes
1.18 包含 38 个增强功能：15 项增强功能已转为稳定版，11 项增强功能处于 beta
阶段，12 项增强功能处于 alpha 阶段。&lt;/p&gt;
&lt;!--
Kubernetes 1.18 is a "fit and finish" release. Significant work has gone into improving beta and stable features to ensure users have a better experience. An equal effort has gone into adding new developments and exciting new features that promise to enhance the user experience even more.
--&gt;
&lt;p&gt;Kubernetes 1.18 是一个近乎 “完美” 的版本。为了改善 beta 和稳定的特性，已进行了大量工作，
以确保用户获得更好的体验。我们在增强现有功能的同时也增加了令人兴奋的新特性，这些有望进一步增强用户体验。&lt;/p&gt;</description></item><item><title>基于 MIPS 架构的 Kubernetes 方案</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2020/01/15/kubernetes-on-mips/</link><pubDate>Wed, 15 Jan 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2020/01/15/kubernetes-on-mips/</guid><description>&lt;!--
layout: blog
title: "Kubernetes on MIPS"
date: 2020-01-15
slug: Kubernetes-on-MIPS
--&gt;
&lt;!-- 
**Authors:** TimYin Shi, Dominic Yin, Wang Zhan, Jessica Jiang, Will Cai, Jeffrey Gao, Simon Sun (Inspur)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; 石光银，尹东超，展望，江燕，蔡卫卫，高传集，孙思清（浪潮）&lt;/p&gt;
&lt;!-- 
## Background
--&gt;
&lt;h2 id="背景"&gt;背景&lt;/h2&gt;
&lt;!-- 
[MIPS](https://en.wikipedia.org/wiki/MIPS_architecture) (Microprocessor without Interlocked Pipelined Stages) is a reduced instruction set computer (RISC) instruction set architecture (ISA), appeared in 1981 and developed by MIPS Technologies. Now MIPS architecture is widely used in many electronic products.
--&gt;
&lt;p&gt;&lt;a href="https://zh.wikipedia.org/wiki/MIPS%E6%9E%B6%E6%A7%8B"&gt;MIPS&lt;/a&gt; (Microprocessor without Interlocked Pipelined Stages) 是一种采取精简指令集（RISC）的处理器架构 (ISA)，出现于 1981 年，由 MIPS 科技公司开发。如今 MIPS 架构被广泛应用于许多电子产品上。&lt;/p&gt;</description></item><item><title>Kubernetes 1.17：稳定</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/12/09/kubernetes-1-17-release-announcement/</link><pubDate>Mon, 09 Dec 2019 13:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/12/09/kubernetes-1-17-release-announcement/</guid><description>&lt;!-- ---
layout: blog
title: "Kubernetes 1.17: Stability"
date: 2019-12-09T13:00:00-08:00
slug: kubernetes-1-17-release-announcement
evergreen: true
--- --&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.17/release_team.md"&gt;Kubernetes 1.17 发布团队&lt;/a&gt;&lt;/p&gt;
&lt;!--
**Authors:** [Kubernetes 1.17 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.17/release_team.md)
--&gt;
&lt;p&gt;我们很高兴地宣布 Kubernetes 1.17 版本的交付，这是我们 2019 年的第四个也是最后一个版本。Kubernetes v1.17 包含 22 项增强功能：14 项已逐步稳定（stable），4 项进入公开测试版（beta），4 项刚刚进入内部测试版（alpha）。&lt;/p&gt;
&lt;!--
We’re pleased to announce the delivery of Kubernetes 1.17, our fourth and final release of 2019! Kubernetes v1.17 consists of 22 enhancements: 14 enhancements have graduated to stable, 4 enhancements are moving to beta, and 4 enhancements are entering alpha.
--&gt;
&lt;h2 id="主要的主题"&gt;主要的主题&lt;/h2&gt;
&lt;!--
## Major Themes
--&gt;
&lt;h3 id="云服务提供商标签基本可用"&gt;云服务提供商标签基本可用&lt;/h3&gt;
&lt;!--
### Cloud Provider Labels reach General Availability
--&gt;
&lt;p&gt;云提供商标签早在 v1.2 中就作为公开测试版特性加入，在 v1.17 中达到基本可用。&lt;/p&gt;</description></item><item><title>使用 Java 开发一个 Kubernetes controller</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/11/26/develop-a-kubernetes-controller-in-java/</link><pubDate>Tue, 26 Nov 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/11/26/develop-a-kubernetes-controller-in-java/</guid><description>&lt;!--
---
layout: blog
title: "Develop a Kubernetes controller in Java"
date: 2019-11-26
slug: Develop-A-Kubernetes-Controller-in-Java
---
--&gt;
&lt;!--
**Authors:** Min Kim (Ant Financial), Tony Ado (Ant Financial)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Min Kim (蚂蚁金服), Tony Ado (蚂蚁金服)&lt;/p&gt;
&lt;!--
The official [Kubernetes Java SDK](https://github.com/kubernetes-client/java) project
recently released their latest work on providing the Java Kubernetes developers
a handy Kubernetes controller-builder SDK which is helpful for easily developing
advanced workloads or systems. 
--&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes-client/java"&gt;Kubernetes Java SDK&lt;/a&gt; 官方项目最近发布了他们的最新工作，为 Java Kubernetes 开发人员提供一个便捷的 Kubernetes 控制器-构建器 SDK，它有助于轻松开发高级工作负载或系统。&lt;/p&gt;</description></item><item><title>使用 Microk8s 在 Linux 上本地运行 Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/11/26/running-kubernetes-locally-on-linux-with-microk8s/</link><pubDate>Tue, 26 Nov 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/11/26/running-kubernetes-locally-on-linux-with-microk8s/</guid><description>&lt;!--
title: 'Running Kubernetes locally on Linux with Microk8s' 
date: 2019-11-26
--&gt;
&lt;!--
**Authors**: [Ihor Dvoretskyi](https://twitter.com/idvoretskyi), Developer Advocate, Cloud Native Computing Foundation; [Carmine Rimi](https://twitter.com/carminerimi)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: &lt;a href="https://twitter.com/idvoretskyi"&gt;Ihor Dvoretskyi&lt;/a&gt;，开发支持者，云原生计算基金会；&lt;a href="https://twitter.com/carminerimi"&gt;Carmine Rimi&lt;/a&gt;&lt;/p&gt;
&lt;!--
This article, the second in a [series](/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/) about local deployment options on Linux, and covers [MicroK8s](https://microk8s.io/). Microk8s is the click-and-run solution for deploying a Kubernetes cluster locally, originally developed by Canonical, the publisher of Ubuntu.
--&gt;
&lt;p&gt;本文是关于 Linux 上本地部署选项&lt;a href="https://andygol-k8s.netlify.app/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/"&gt;系列&lt;/a&gt;的第二篇，涵盖了 &lt;a href="https://microk8s.io/"&gt;MicroK8s&lt;/a&gt;。Microk8s 是本地部署 Kubernetes 集群的 &amp;quot;click-and-run&amp;quot; 方案，最初由 Ubuntu 的发布者 Canonical 开发。&lt;/p&gt;</description></item><item><title>Kubernetes 文档最终用户调研</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/10/29/kubernetes-documentation-end-user-survey/</link><pubDate>Tue, 29 Oct 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/10/29/kubernetes-documentation-end-user-survey/</guid><description>&lt;!--
---
layout: blog
title: "Kubernetes Documentation Survey"
date: 2019-10-29
slug: kubernetes-documentation-end-user-survey
---
--&gt;
&lt;p&gt;&lt;strong&gt;Author:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/aimee-ukasick/"&gt;Aimee Ukasick&lt;/a&gt; and SIG Docs&lt;/p&gt;
&lt;!--
In September, SIG Docs conducted its first survey about the [Kubernetes
documentation](https://kubernetes.io/docs/). We'd like to thank the CNCF's Kim
McMahon for helping us create the survey and access the results.
--&gt;
&lt;p&gt;9月，SIG Docs 进行了第一次关于 &lt;a href="https://kubernetes.io/docs/"&gt;Kubernetes 文档&lt;/a&gt;的用户调研。我们要感谢 CNCF
的 Kim McMahon 帮助我们创建调查并获取结果。&lt;/p&gt;
&lt;!--
# Key takeaways
--&gt;
&lt;h1 id="主要收获"&gt;主要收获&lt;/h1&gt;
&lt;!--
Respondents would like more example code, more detailed content, and more
diagrams in the Concepts, Tasks, and Reference sections.
--&gt;
&lt;p&gt;受访者希望能在概念、任务和参考部分得到更多示例代码、更详细的内容和更多图表。&lt;/p&gt;</description></item><item><title>圣迭戈贡献者峰会日程公布！</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/10/10/contributor-summit-san-diego-schedule/</link><pubDate>Thu, 10 Oct 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/10/10/contributor-summit-san-diego-schedule/</guid><description>&lt;!--
layout: blog
title: "Contributor Summit San Diego Schedule Announced!"
date: 2019-10-10
slug: contributor-summit-san-diego-schedule
--&gt;
&lt;!--
**Authors:** Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware)&lt;/p&gt;
&lt;!--
There are many great sessions planned for the Contributor Summit, spread across
five rooms of current contributor content in addition to the new contributor
workshops. Since this is an upstream contributor summit and we don't often meet,
being a globally distributed team, most of these sessions are discussions or
hands-on labs, not just presentations. We want folks to learn and have a
good time meeting their OSS teammates.
--&gt;
&lt;p&gt;除了新贡献者研讨会之外，贡献者峰会还安排了许多精彩的会议，面向现有贡献者的内容分布在五个会议室中。由于这是一个上游贡献者峰会，而我们这个全球分布的团队又不常见面，所以这些会议大多是讨论或动手实践，而不仅仅是演讲。我们希望大家在学习的同时，与他们的开源队友相聚甚欢。&lt;/p&gt;</description></item><item><title>2019 指导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/10/03/2019-steering-committee-election-results/</link><pubDate>Thu, 03 Oct 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/10/03/2019-steering-committee-election-results/</guid><description>&lt;!--
---
layout: blog
title: "2019 Steering Committee Election Results"
date: 2019-10-03
slug: 2019-steering-committee-election-results
---
--&gt;
&lt;!--
**Authors**: Bob Killen (University of Michigan), Jorge Castro (VMware),
Brian Grant (Google), and Ihor Dvoretskyi (CNCF)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Bob Killen (University of Michigan), Jorge Castro (VMware),
Brian Grant (Google), and Ihor Dvoretskyi (CNCF)&lt;/p&gt;
&lt;!--
The [2019 Steering Committee Election] is a landmark milestone for the
Kubernetes project. The initial bootstrap committee is graduating to emeritus
and the committee has now shrunk to its final allocation of seven seats. All
members of the Steering Committee are now fully elected by the Kubernetes
Community.
--&gt;
&lt;p&gt;&lt;a href="https://git.k8s.io/community/events/elections/2019"&gt;2019 指导委员会选举&lt;/a&gt;是 Kubernetes 项目的重要里程碑。最初的引导（bootstrap）委员会成员正在荣休，该委员会现已缩减到最终设定的 7 个席位。指导委员会的所有成员现在都完全由 Kubernetes 社区选举产生。&lt;/p&gt;</description></item><item><title>San Diego 贡献者峰会开放注册！</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/09/24/san-diego-contributor-summit/</link><pubDate>Tue, 24 Sep 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/09/24/san-diego-contributor-summit/</guid><description>&lt;!--
---
layout: blog
title: "Contributor Summit San Diego Registration Open!"
date: 2019-09-24
slug: san-diego-contributor-summit
---
---&gt;
&lt;!--
**Authors:** Paris Pittman (Google), Jeffrey Sica (Red Hat), Jonas Rosland (VMware)
---&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Paris Pittman (Google), Jeffrey Sica (Red Hat), Jonas Rosland (VMware)&lt;/p&gt;
&lt;!--
[Contributor Summit San Diego 2019 Event Page] 
In record time, we’ve hit capacity for the *new contributor workshop* session of
the event!
---&gt;
&lt;p&gt;&lt;a href="https://events.linuxfoundation.org/events/kubernetes-contributor-summit-north-america-2019/"&gt;2019 San Diego 贡献者峰会活动页面&lt;/a&gt;
在创纪录的时间内，&lt;em&gt;新贡献者研讨会&lt;/em&gt; 活动已满员！&lt;/p&gt;</description></item><item><title>机器可以完成这项工作，一个关于 kubernetes 测试、CI 和自动化贡献者体验的故事</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/08/29/the-machines-can-do-the-work-a-story-of-kubernetes-testing-ci-and-automating-the-contributor-experience/</link><pubDate>Thu, 29 Aug 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/08/29/the-machines-can-do-the-work-a-story-of-kubernetes-testing-ci-and-automating-the-contributor-experience/</guid><description>&lt;!--
layout: blog
title: 'The Machines Can Do the Work, a Story of Kubernetes Testing, CI, and Automating the Contributor Experience'
date: 2019-08-29
--&gt;
&lt;!--
**Author**: Aaron Crickenberger (Google) and Benjamin Elder (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Aaron Crickenberger（谷歌）和 Benjamin Elder（谷歌）&lt;/p&gt;
&lt;!--
_“Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.”_ - [Kubernetes Community Values](https://git.k8s.io/community/values.md#automation-over-process)
--&gt;
&lt;p&gt;&lt;em&gt;“大型项目有很多不那么令人兴奋，但却很辛苦的工作。比起辛苦劳作，我们更重视把时间花在自动化重复性工作上。如果这项工作无法实现自动化，我们的文化就是承认并奖励所有类型的贡献。然而，英雄主义是不可持续的。”&lt;/em&gt; - &lt;a href="https://git.k8s.io/community/values.md#automation-over-process"&gt;Kubernetes Community Values&lt;/a&gt;&lt;/p&gt;</description></item><item><title>OPA Gatekeeper：Kubernetes 的策略和管理</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/</link><pubDate>Tue, 06 Aug 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/</guid><description>&lt;!--
---
layout: blog
title: "OPA Gatekeeper: Policy and Governance for Kubernetes"
date: 2019-08-06
slug: OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes 
---
---&gt;
&lt;!--
**Authors:** Rita Zhang (Microsoft), Max Smythe (Google), Craig Hooper (Commonwealth Bank AU), Tim Hinrichs (Styra), Lachie Evenson (Microsoft), Torin Sandall (Styra)
---&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Rita Zhang (Microsoft), Max Smythe (Google), Craig Hooper (Commonwealth Bank AU), Tim Hinrichs (Styra), Lachie Evenson (Microsoft), Torin Sandall (Styra)&lt;/p&gt;
&lt;!--
The [Open Policy Agent Gatekeeper](https://github.com/open-policy-agent/gatekeeper) project can be leveraged to help enforce policies and strengthen governance in your Kubernetes environment. In this post, we will walk through the goals, history, and current state of the project.
---&gt;
&lt;p&gt;可以借助 &lt;a href="https://github.com/open-policy-agent/gatekeeper"&gt;Open Policy Agent Gatekeeper&lt;/a&gt; 项目在 Kubernetes 环境中实施策略并加强治理。在本文中，我们将逐步介绍该项目的目标、历史和当前状态。&lt;/p&gt;</description></item><item><title>欢迎参加在上海举行的贡献者峰会</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/06/11/join-us-at-the-contributor-summit-in-shanghai/</link><pubDate>Tue, 11 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/06/11/join-us-at-the-contributor-summit-in-shanghai/</guid><description>&lt;!-- ---
layout: blog
title: 'Join us at the Contributor Summit in Shanghai'
date: 2019-06-11
--- --&gt;
&lt;p&gt;&lt;strong&gt;Author&lt;/strong&gt;: Josh Berkus (Red Hat)&lt;/p&gt;
&lt;!-- ![Picture of contributor panel at 2018 Shanghai contributor summit. Photo by Josh Berkus, licensed CC-BY 4.0](/images/blog/2019-
06-11-contributor-summit-shanghai/panel.png) --&gt;
&lt;p&gt;&lt;img src="/images/blog/2019-06-11-contributor-summit-shanghai/panel.png" alt="贡献者小组讨论掠影，摄于 2018 年上海贡献者峰会，作者 Josh Berkus，许可证 CC-BY 4.0"&gt;&lt;/p&gt;
&lt;!-- For the second year, we will have [a Contributor Summit event](https://www.lfasiallc.com/events/contributors-summit-china-2019/) the day before [KubeCon China](https://events.linuxfoundation.cn/events/kubecon-cloudnativecon-china-2019/) in Shanghai. If you already contribute to Kubernetes or would like to contribute, please consider attending and [register](https://www.lfasiallc.com/events/contributors-summit-china-2019/register/). The Summit will be held June 24th, at the Shanghai Expo Center (the same location where KubeCon will take place), and will include a Current Contributor Day as well as the New Contributor Workshop and the Documentation Sprints. --&gt;
&lt;p&gt;连续第二年，我们将在上海 &lt;a href="https://events.linuxfoundation.cn/events/kubecon-cloudnativecon-china-2019/"&gt;KubeCon China&lt;/a&gt; 开幕前一天举办&lt;a href="https://www.lfasiallc.com/events/contributors-summit-china-2019/"&gt;贡献者峰会&lt;/a&gt;。
无论您已经是一名 Kubernetes 贡献者，还是想要加入社区贡献一份力量，都请考虑&lt;a href="https://www.lfasiallc.com/events/contributors-summit-china-2019/register/"&gt;注册&lt;/a&gt;参加这次活动。
这次峰会将于 6 月 24 日在上海世博中心（与 KubeCon 的举办地点相同）举行，
一天的活动将包含“现有贡献者活动”，以及“新贡献者工作坊”和“文档小组活动”。&lt;/p&gt;</description></item><item><title>壮大我们的贡献者研讨会</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/05/14/expanding-our-contributor-workshops/</link><pubDate>Tue, 14 May 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/05/14/expanding-our-contributor-workshops/</guid><description>&lt;!--
---
layout: blog
title: "Expanding our Contributor Workshops"
date: 2019-05-14
slug: expanding-our-contributor-workshops
---
--&gt;
&lt;!--
**Authors:** Guinevere Saenger (GitHub) and Paris Pittman (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Guinevere Saenger (GitHub) 和 Paris Pittman (Google)&lt;/p&gt;
&lt;!--
**tl;dr** - learn about the contributor community with us and land your first PR! We have spots available in [Barcelona][eu] (registration **closes** on Wednesday May 15, so grab your spot!) and the upcoming [Shanghai][cn] Summit.
--&gt;
&lt;p&gt;&lt;strong&gt;tl;dr&lt;/strong&gt; - 与我们一起了解贡献者社区，并提交你的第一个 PR！我们在[巴塞罗那][eu]还有空位（注册将于 5 月 15 日星期三&lt;strong&gt;截止&lt;/strong&gt;，抓紧时间！），此外还有即将举行的[上海][cn]峰会。&lt;/p&gt;</description></item><item><title>如何参与 Kubernetes 文档的本地化工作</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/04/26/how-you-can-help-localize-kubernetes-docs/</link><pubDate>Fri, 26 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/04/26/how-you-can-help-localize-kubernetes-docs/</guid><description>&lt;!--
layout: blog
title: 'How You Can Help Localize Kubernetes Docs'
date: 2019-04-26
--&gt;
&lt;!-- 
**Author: Zach Corleissen (Linux Foundation)**

Last year we optimized the Kubernetes website for [hosting multilingual content](/blog/2018/11/08/kubernetes-docs-updates-international-edition/). Contributors responded by adding multiple new localizations: as of April 2019, Kubernetes docs are partially available in nine different languages, with six added in 2019 alone. You can see a list of available languages in the language selector at the top of each page.

By _partially available_, I mean that localizations are ongoing projects. They range from mostly complete ([Chinese docs for 1.12](https://v1-12.docs.kubernetes.io/zh/)) to brand new (1.14 docs in [Portuguese](https://kubernetes.io/pt/)). If you're interested in helping an existing localization, read on!
--&gt;
&lt;p&gt;&lt;strong&gt;作者: Zach Corleissen（Linux 基金会）&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>使用 Kubernetes 设备插件和 RuntimeClass 在 Ingress 控制器中实现硬件加速 SSL/TLS 终止</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/04/24/%E4%BD%BF%E7%94%A8-kubernetes-%E8%AE%BE%E5%A4%87%E6%8F%92%E4%BB%B6%E5%92%8C-runtimeclass-%E5%9C%A8-ingress-%E6%8E%A7%E5%88%B6%E5%99%A8%E4%B8%AD%E5%AE%9E%E7%8E%B0%E7%A1%AC%E4%BB%B6%E5%8A%A0%E9%80%9F-ssl/tls-%E7%BB%88%E6%AD%A2/</link><pubDate>Wed, 24 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/04/24/%E4%BD%BF%E7%94%A8-kubernetes-%E8%AE%BE%E5%A4%87%E6%8F%92%E4%BB%B6%E5%92%8C-runtimeclass-%E5%9C%A8-ingress-%E6%8E%A7%E5%88%B6%E5%99%A8%E4%B8%AD%E5%AE%9E%E7%8E%B0%E7%A1%AC%E4%BB%B6%E5%8A%A0%E9%80%9F-ssl/tls-%E7%BB%88%E6%AD%A2/</guid><description>&lt;!--
layout: blog
title: 'Hardware Accelerated SSL/TLS Termination in Ingress Controllers using Kubernetes Device Plugins and RuntimeClass'
date: 2019-04-24
--&gt;
&lt;!--
**Authors:** Mikko Ylinen (Intel)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Mikko Ylinen (Intel)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; &lt;a href="https://github.com/pegasas"&gt;pegasas&lt;/a&gt;&lt;/p&gt;
&lt;!--
## Abstract
--&gt;
&lt;h2 id="摘要"&gt;摘要&lt;/h2&gt;
&lt;!--
A Kubernetes Ingress is a way to connect cluster services to the world outside the cluster. In order
to correctly route the traffic to service backends, the cluster needs an Ingress controller. The
Ingress controller is responsible for setting the right destinations to backends based on the
Ingress API objects’ information. The actual traffic is routed through a proxy server that
is responsible for tasks such as load balancing and SSL/TLS (later “SSL” refers to both SSL
or TLS ) termination. The SSL termination is a CPU heavy operation due to the crypto operations
involved. To offload some of the CPU intensive work away from the CPU, OpenSSL based proxy
servers can take the benefit of OpenSSL Engine API and dedicated crypto hardware. This frees
CPU cycles for other things and improves the overall throughput of the proxy server.
--&gt;
&lt;p&gt;Kubernetes Ingress 是将集群内的服务连接到集群外部世界的一种方法。为了将流量正确地路由到服务后端，集群需要一个
Ingress 控制器。Ingress 控制器负责根据 Ingress API 对象的信息将流量定向到正确的后端。实际流量通过代理服务器路由，
代理服务器负责负载均衡和 SSL/TLS（下文中“SSL”同时指 SSL 和 TLS）终止等任务。由于涉及加密运算，SSL 终止是一种
CPU 密集型操作。为了从 CPU 上卸载部分密集型工作，基于 OpenSSL 的代理服务器可以利用 OpenSSL Engine API 和专用加密硬件。
这将为其他事情释放 CPU 周期，并提高代理服务器的总体吞吐量。&lt;/p&gt;</description></item><item><title>Kubernetes 1.14 稳定性改进中的进程ID限制</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/</link><pubDate>Mon, 15 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/</guid><description>&lt;!--
title: 'Process ID Limiting for Stability Improvements in Kubernetes 1.14'
date: 2019-04-15
--&gt;
&lt;!--
**Author: Derek Carr**

Have you ever seen someone take more than their fair share of the cookies? The one person who reaches in and grabs a half dozen fresh baked chocolate chip chunk morsels and skitters off like Cookie Monster exclaiming “Om nom nom nom.”

In some rare workloads, a similar occurrence was taking place inside Kubernetes clusters. With each Pod and Node, there comes a finite number of possible process IDs (PIDs) for all applications to share. While it is rare for any one process or pod to reach in and grab all the PIDs, some users were experiencing resource starvation due to this type of behavior. So in Kubernetes 1.14, we introduced an enhancement to mitigate the risk of a single pod monopolizing all of the PIDs available.
--&gt;
&lt;p&gt;&lt;strong&gt;作者: Derek Carr&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Raw Block Volume 支持进入 Beta</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2019/03/07/raw-block-volume-support-to-beta/</link><pubDate>Thu, 07 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2019/03/07/raw-block-volume-support-to-beta/</guid><description>&lt;!--
title: Raw Block Volume support to Beta
date: 2019-03-07
---&gt;
&lt;!--
**Authors:**
Ben Swartzlander (NetApp), Saad Ali (Google)

Kubernetes v1.13 moves raw block volume support to beta. This feature allows persistent volumes to be exposed inside containers as a block device instead of as a mounted file system.
---&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt;
Ben Swartzlander (NetApp), Saad Ali (Google)&lt;/p&gt;
&lt;p&gt;Kubernetes v1.13 中对原生数据块卷（Raw Block Volume）的支持进入 Beta 阶段。此功能允许将持久卷作为块设备而不是作为已挂载的文件系统暴露在容器内部。&lt;/p&gt;
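&lt;p&gt;例如，要申请一个以块设备形式暴露的持久卷，可以在 PVC 中将 &lt;code&gt;volumeMode&lt;/code&gt; 设置为 &lt;code&gt;Block&lt;/code&gt;（以下清单仅为示意，其中的名称与容量均为假设值）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-block-pvc
spec:
  volumeMode: Block        # 以原始块设备而非文件系统的形式使用该卷
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;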
&lt;!--
## What are block devices?

Block devices enable random access to data in fixed-size blocks. Hard drives, SSDs, and CD-ROMs drives are all examples of block devices.

Typically persistent storage is implemented in a layered manner with a file system (like ext4) on top of a block device (like a spinning disk or SSD). Applications then read and write files instead of operating on blocks. The operating systems take care of reading and writing files, using the specified filesystem, to the underlying device as blocks.

It's worth noting that while whole disks are block devices, so are disk partitions, and so are LUNs from a storage area network (SAN) device.
---&gt;
&lt;h2 id="什么是块设备"&gt;什么是块设备？&lt;/h2&gt;
&lt;p&gt;块设备允许对固定大小的块中的数据进行随机访问。硬盘驱动器、SSD 和 CD-ROM 驱动器都是块设备的例子。&lt;/p&gt;</description></item><item><title>新贡献者工作坊上海站</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/12/05/new-contributor-workshop-shanghai/</link><pubDate>Wed, 05 Dec 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/12/05/new-contributor-workshop-shanghai/</guid><description>&lt;!-- 
layout: blog
title: 'New Contributor Workshop Shanghai'
date: 2018-12-05
 --&gt;
&lt;!--
**Authors**: Josh Berkus (Red Hat), Yang Li (The Plant), Puja Abbassi (Giant Swarm), XiangPeng Zhao (ZTE)
 --&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Josh Berkus (红帽), Yang Li (The Plant), Puja Abbassi (Giant Swarm), XiangPeng Zhao (中兴通讯)&lt;/p&gt;
&lt;!--


&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-12-05-new-contributor-shanghai/attendees.png"
 alt="KubeCon Shanghai New Contributor Summit attendees. Photo by Jerry Zhang"/&gt; &lt;figcaption&gt;
 &lt;p&gt;KubeCon Shanghai New Contributor Summit attendees. Photo by Jerry Zhang&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
 --&gt;


&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-12-05-new-contributor-shanghai/attendees.png"
 alt="KubeCon 上海站新贡献者峰会与会者，摄影：Jerry Zhang"/&gt; &lt;figcaption&gt;
 &lt;p&gt;KubeCon 上海站新贡献者峰会与会者，摄影：Jerry Zhang&lt;/p&gt;&lt;/figcaption&gt;&lt;/figure&gt;</description></item><item><title>Kubernetes 文档更新，国际版</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/11/08/kubernetes-docs-updates-international-edition/</link><pubDate>Thu, 08 Nov 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/11/08/kubernetes-docs-updates-international-edition/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes Docs Updates, International Edition'
date: 2018-11-08
--&gt;
&lt;!-- **Author**: Zach Corleissen (Linux Foundation) --&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Zach Corleissen （Linux 基金会）&lt;/p&gt;
&lt;!-- As a co-chair of SIG Docs, I'm excited to share that Kubernetes docs have a fully mature workflow for localization (l10n). --&gt;
&lt;p&gt;作为文档特别兴趣小组（SIG Docs）的联合主席，我很高兴能与大家分享 Kubernetes 文档在本地化（l10n）方面所拥有的一个完全成熟的工作流。&lt;/p&gt;
&lt;!-- ## Abbreviations galore --&gt;
&lt;h2 id="丰富的缩写"&gt;丰富的缩写&lt;/h2&gt;
&lt;!-- L10n is an abbreviation for _localization_. --&gt;
&lt;p&gt;L10n 是 &lt;em&gt;localization&lt;/em&gt; 的缩写。&lt;/p&gt;
&lt;!-- I18n is an abbreviation for _internationalization_. --&gt;
&lt;p&gt;I18n 是 &lt;em&gt;internationalization&lt;/em&gt; 的缩写。&lt;/p&gt;</description></item><item><title>Kubernetes 2018 年北美贡献者峰会</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/16/kubernetes-2018-north-american-contributor-summit/</link><pubDate>Tue, 16 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/16/kubernetes-2018-north-american-contributor-summit/</guid><description>&lt;!--
layout: "Blog"
title: "Kubernetes 2018 North American Contributor Summit"
date: 2018-10-16 
--&gt;
&lt;!--
**Authors:**
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt;&lt;/p&gt;
&lt;!--
[Bob Killen][bob] (University of Michigan)
[Sahdev Zala][sahdev] (IBM),
[Ihor Dvoretskyi][ihor] (CNCF) 
--&gt;
&lt;p&gt;&lt;a href="https://twitter.com/mrbobbytables"&gt;Bob Killen&lt;/a&gt;（密歇根大学）
&lt;a href="https://twitter.com/sp_zala"&gt;Sahdev Zala&lt;/a&gt;（IBM），
&lt;a href="https://twitter.com/idvoretskyi"&gt;Ihor Dvoretskyi&lt;/a&gt;（CNCF）&lt;/p&gt;
&lt;!--
The 2018 North American Kubernetes Contributor Summit to be hosted right before
[KubeCon + CloudNativeCon][kubecon] Seattle is shaping up to be the largest yet.
--&gt;
&lt;p&gt;2018 年北美 Kubernetes 贡献者峰会将在西雅图 &lt;a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/"&gt;KubeCon + CloudNativeCon&lt;/a&gt; 会议之前举办，这将是迄今为止规模最大的一次盛会。&lt;/p&gt;
&lt;!--
It is an event that brings together new and current contributors alike to
connect and share face-to-face; and serves as an opportunity for existing
contributors to help shape the future of community development. For new
community members, it offers a welcoming space to learn, explore and put the
contributor workflow to practice.
--&gt;
&lt;p&gt;这是一个将新老贡献者聚集在一起，面对面交流和分享的活动；并为现有的贡献者提供一个机会，帮助塑造社区发展的未来。它为新的社区成员提供了一个学习、探索和实践贡献工作流程的良好空间。&lt;/p&gt;</description></item><item><title>2018 年督导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/15/2018-steering-committee-election-results/</link><pubDate>Mon, 15 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/15/2018-steering-committee-election-results/</guid><description>&lt;!--
layout: blog
title: '2018 Steering Committee Election Results'
date: 2018-10-15
--&gt;
&lt;!-- **Authors**: Jorge Castro (Heptio), Ihor Dvoretskyi (CNCF), Paris Pittman (Google) --&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Jorge Castro (Heptio), Ihor Dvoretskyi (CNCF), Paris Pittman (Google)&lt;/p&gt;
&lt;!--
## Results
--&gt;
&lt;h2 id="结果"&gt;结果&lt;/h2&gt;
&lt;!--
The [Kubernetes Steering Committee Election](https://kubernetes.io/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/) is now complete and the following candidates came ahead to secure two year terms that start immediately:
--&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/"&gt;Kubernetes 督导委员会选举&lt;/a&gt;现已完成，以下候选人获得了立即开始的两年任期：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aaron Crickenberger, Google, &lt;a href="https://github.com/spiffxp"&gt;@spiffxp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Davanum Srinivas, Huawei, &lt;a href="https://github.com/dims"&gt;@dims&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tim St. Clair, Heptio, &lt;a href="https://github.com/timothysc"&gt;@timothysc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Big Thanks!
--&gt;
&lt;h2 id="十分感谢"&gt;十分感谢！&lt;/h2&gt;
&lt;!-- 
* Steering Committee Member Emeritus [Quinton Hoole](https://github.com/quinton-hoole) for his service to the community over the past year. We look forward to
* The candidates that came forward to run for election. May we always have a strong set of people who want to push community forward like yours in every election.
* All 307 voters who cast a ballot.
* And last but not least...Cornell University for hosting [CIVS](https://civs.cs.cornell.edu/)! 
--&gt;
&lt;ul&gt;
&lt;li&gt;督导委员会荣誉退休成员 &lt;a href="https://github.com/quinton-hoole"&gt;Quinton Hoole&lt;/a&gt;，表扬他在过去一年为社区所作的贡献。我们期待着&lt;/li&gt;
&lt;li&gt;参加竞选的候选人。愿我们永远拥有一群强大的人，他们希望在每一次选举中都能像你们一样推动社区向前发展。&lt;/li&gt;
&lt;li&gt;共计 307 名选民参与投票。&lt;/li&gt;
&lt;li&gt;最后同样重要的是……感谢康奈尔大学托管 &lt;a href="https://civs.cs.cornell.edu/"&gt;CIVS&lt;/a&gt;！&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Get Involved with the Steering Committee
--&gt;
&lt;h2 id="加入督导委员会"&gt;加入督导委员会&lt;/h2&gt;
&lt;!--
You can follow along to Steering Committee [backlog items](https://git.k8s.io/steering/backlog.md) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They meet bi-weekly on [Wednesdays at 8pm UTC](https://github.com/kubernetes/steering) and regularly attend Meet Our Contributors.
--&gt;
&lt;p&gt;你可以关注督导委员会的&lt;a href="https://git.k8s.io/steering/backlog.md"&gt;任务清单&lt;/a&gt;，并通过向他们的&lt;a href="https://github.com/kubernetes/steering"&gt;代码仓库&lt;/a&gt;提交 issue 或 PR 的方式来参与。他们每两周在&lt;a href="https://github.com/kubernetes/steering"&gt;UTC 时间周三晚 8 点&lt;/a&gt;举行一次会议，并定期参加 Meet Our Contributors 活动。&lt;/p&gt;</description></item><item><title>Kubernetes 中的拓扑感知数据卷供应</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/</link><pubDate>Thu, 11 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/</guid><description>&lt;!--
layout: blog
title: 'Topology-Aware Volume Provisioning in Kubernetes'
date: 2018-10-11
--&gt;
&lt;!--
**Author**: Michelle Au (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Michelle Au（谷歌）&lt;/p&gt;
&lt;!--
The multi-zone cluster experience with persistent volumes is improving in Kubernetes 1.12 with the topology-aware dynamic provisioning beta feature. This feature allows Kubernetes to make intelligent decisions when dynamically provisioning volumes by getting scheduler input on the best place to provision a volume for a pod. In multi-zone clusters, this means that volumes will get provisioned in an appropriate zone that can run your pod, allowing you to easily deploy and scale your stateful workloads across failure domains to provide high availability and fault tolerance.
--&gt;
&lt;p&gt;通过提供拓扑感知动态卷供应功能，具有持久卷的多区域集群体验在 Kubernetes 1.12
中得到了改进。此功能使得 Kubernetes 在动态供应卷时能做出明智的决策，方法是从调度器获得为
Pod 提供数据卷的最佳位置。在多区域集群环境，这意味着数据卷能够在满足你的 Pod
运行需要的合适的区域被供应，使你能够跨故障域轻松部署和扩展有状态工作负载，从而获得高可用性和容错能力。&lt;/p&gt;</description></item><item><title>Kubernetes v1.12: RuntimeClass 简介</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/10/kubernetes-v1.12-introducing-runtimeclass/</link><pubDate>Wed, 10 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/10/kubernetes-v1.12-introducing-runtimeclass/</guid><description>&lt;!--
layout: blog
title: 'Kubernetes v1.12: Introducing RuntimeClass'
date: 2018-10-10
--&gt;
&lt;!--
**Author**: Tim Allclair (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Tim Allclair (Google)&lt;/p&gt;
&lt;!--
Kubernetes originally launched with support for Docker containers running native applications on a Linux host. Starting with [rkt](https://kubernetes.io/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/) in Kubernetes 1.3 more runtimes were coming, which lead to the development of the [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) (CRI). Since then, the set of alternative runtimes has only expanded: projects like [Kata Containers](https://katacontainers.io/) and [gVisor](https://github.com/google/gvisor) were announced for stronger workload isolation, and Kubernetes' Windows support has been [steadily progressing](https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/).
--&gt;
&lt;p&gt;Kubernetes 最初是为了支持在 Linux 主机上运行原生应用程序的 Docker 容器而创建的。
从 Kubernetes 1.3 中的 &lt;a href="https://kubernetes.io/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/"&gt;rkt&lt;/a&gt; 开始，更多的运行时不断涌现，
这推动了&lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/"&gt;容器运行时接口（Container Runtime Interface）&lt;/a&gt;（CRI）的开发。
从那时起，可供选择的运行时不断扩充：
为了加强工作负载隔离，&lt;a href="https://katacontainers.io/"&gt;Kata Containers&lt;/a&gt; 和 &lt;a href="https://github.com/google/gvisor"&gt;gVisor&lt;/a&gt; 等项目被发起，
并且 Kubernetes 对 Windows 的支持正在&lt;a href="https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/"&gt;稳步发展&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>KubeDirector：在 Kubernetes 上运行复杂状态应用程序的简单方法</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/03/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubernetes/</link><pubDate>Wed, 03 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/03/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubernetes/</guid><description>&lt;!--
layout: blog
title: 'KubeDirector: The easy way to run complex stateful applications on Kubernetes'
date: 2018-10-03
--&gt;
&lt;!--
**Author**: Thomas Phelan (BlueData)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Thomas Phelan（BlueData）&lt;/p&gt;
&lt;!--
KubeDirector is an open source project designed to make it easy to run complex stateful scale-out application clusters on Kubernetes. KubeDirector is built using the custom resource definition (CRD) framework and leverages the native Kubernetes API extensions and design philosophy. This enables transparent integration with Kubernetes user/resource management as well as existing clients and tools.
--&gt;
&lt;p&gt;KubeDirector 是一个开源项目，旨在让用户能够轻松地在 Kubernetes 上运行复杂的有状态横向扩展应用集群。KubeDirector 基于自定义资源定义（CRD）
框架构建，利用了 Kubernetes 原生的 API 扩展机制和设计哲学。这使它能够与 Kubernetes 的用户/资源管理以及现有客户端和工具透明集成。&lt;/p&gt;</description></item><item><title>在 Kubernetes 上对 gRPC 服务器进行健康检查</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/</link><pubDate>Mon, 01 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/</guid><description>&lt;!--
layout: blog
title: 'Health checking gRPC servers on Kubernetes'
date: 2018-10-01
---&gt;
&lt;!--
**Author**: [Ahmet Alp Balkan](https://twitter.com/ahmetb) (Google)
---&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;： &lt;a href="https://twitter.com/ahmetb"&gt;Ahmet Alp Balkan&lt;/a&gt; (Google)&lt;/p&gt;
&lt;!-- 
**Update (December 2021):** _Kubernetes now has built-in gRPC health probes starting in v1.23.
To learn more, see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
This article was originally written about an external tool to achieve the same task._
--&gt;
&lt;p&gt;&lt;strong&gt;更新（2021 年 12 月）：&lt;/strong&gt;&lt;em&gt;Kubernetes 从 v1.23 开始内置 gRPC 健康探测。
要了解更多信息，请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe"&gt;配置存活探针、就绪探针和启动探针&lt;/a&gt;。
本文最初介绍的是用于完成相同任务的一个外部工具。&lt;/em&gt;&lt;/p&gt;</description></item><item><title>使用 CSI 和 Kubernetes 实现卷的动态扩容</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/08/02/dynamically-expand-volume-with-csi-and-kubernetes/</link><pubDate>Thu, 02 Aug 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/08/02/dynamically-expand-volume-with-csi-and-kubernetes/</guid><description>&lt;!--
layout: blog
title: 'Dynamically Expand Volume with CSI and Kubernetes'
date: 2018-08-02
--&gt;
&lt;!--
**Author**: Orain Xiong (Co-Founder, WoquTech)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Orain Xiong（联合创始人, WoquTech）&lt;/p&gt;
&lt;!--
_There is a very powerful storage subsystem within Kubernetes itself, covering a fairly broad spectrum of use cases. Whereas, when planning to build a product-grade relational database platform with Kubernetes, we face a big challenge: coming up with storage. This article describes how to extend latest Container Storage Interface 0.2.0 and integrate with Kubernetes, and demonstrates the essential facet of dynamically expanding volume capacity._
--&gt;
&lt;p&gt;&lt;em&gt;Kubernetes 本身有一个非常强大的存储子系统，涵盖了相当广泛的用例。而当我们计划使用 Kubernetes 构建产品级关系型数据库平台时，我们面临一个巨大的挑战：存储的供给。本文介绍了如何扩展最新的 Container Storage Interface 0.2.0 并与 Kubernetes 集成，同时演示了动态扩容卷容量的基本方法。&lt;/em&gt;&lt;/p&gt;</description></item><item><title>使用 Kubernetes 调整 PersistentVolume 的大小</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/12/resize-pv-using-k8s/</link><pubDate>Thu, 12 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/12/resize-pv-using-k8s/</guid><description>&lt;!--
layout: blog
title: 'Resizing Persistent Volumes using Kubernetes'
date: 2018-07-12
--&gt;
&lt;!--
**Author**: Hemant Kumar (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Hemant Kumar (Red Hat)&lt;/p&gt;
&lt;!--
**Editor’s note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/) on what’s new in Kubernetes 1.11**

In Kubernetes v1.11 the persistent volume expansion feature is being promoted to beta. This feature allows users to easily resize an existing volume by editing the `PersistentVolumeClaim` (PVC) object. Users no longer have to manually interact with the storage backend or delete and recreate PV and PVC objects to increase the size of a volume. Shrinking persistent volumes is not supported.

Volume expansion was introduced in v1.8 as an Alpha feature, and versions prior to v1.11 required enabling the feature gate, `ExpandPersistentVolumes`, as well as the admission controller, `PersistentVolumeClaimResize` (which prevents expansion of PVCs whose underlying storage provider does not support resizing). In Kubernetes v1.11+, both the feature gate and admission controller are enabled by default.

Although the feature is enabled by default, a cluster admin must opt-in to allow users to resize their volumes. Kubernetes v1.11 ships with volume expansion support for the following in-tree volume plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD. Once the admin has determined that volume expansion is supported for the underlying provider, they can make the feature available to users by setting the `allowVolumeExpansion` field to `true` in their `StorageClass` object(s). Only PVCs created from that `StorageClass` will be allowed to trigger volume expansion.
--&gt;
&lt;p&gt;&lt;strong&gt;编者注：这篇博客是&lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;深度文章系列&lt;/a&gt;的一部分，这个系列介绍了 Kubernetes 1.11 中的新增特性&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>动态 Kubelet 配置</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/11/dynamic-kubelet-configuration/</link><pubDate>Wed, 11 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/11/dynamic-kubelet-configuration/</guid><description>&lt;!--
layout: blog
title: 'Dynamic Kubelet Configuration'
date: 2018-07-11
--&gt;
&lt;!--
**Author**: Michael Taufen (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Michael Taufen (Google)&lt;/p&gt;
&lt;!--
**Editor’s note: The feature has been removed in the version 1.24 after deprecation in 1.22.**
--&gt;
&lt;p&gt;&lt;strong&gt;编者注：在 1.22 版本弃用后，该功能已在 1.24 版本中删除。&lt;/strong&gt;&lt;/p&gt;
&lt;!--
**Editor’s note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/) on what’s new in Kubernetes 1.11**
--&gt;
&lt;p&gt;&lt;strong&gt;编者注：这篇文章是&lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;一系列深度文章&lt;/a&gt; 的一部分，这个系列介绍了 Kubernetes 1.11 中的新增功能&lt;/strong&gt;&lt;/p&gt;
&lt;!--
## Why Dynamic Kubelet Configuration?
--&gt;
&lt;h2 id="为什么要进行动态-kubelet-配置"&gt;为什么要进行动态 Kubelet 配置？&lt;/h2&gt;
&lt;!--
Kubernetes provides API-centric tooling that significantly improves workflows for managing applications and infrastructure. Most Kubernetes installations, however, run the Kubelet as a native process on each host, outside the scope of standard Kubernetes APIs.
--&gt;
&lt;p&gt;Kubernetes 提供了以 API 为中心的工具，可显著改善用于管理应用程序和基础架构的工作流程。
但是，在大多数的 Kubernetes 安装中，kubelet 作为原生进程运行在每台主机上，
不在标准 Kubernetes API 的覆盖范围之内。&lt;/p&gt;</description></item><item><title>用于 Kubernetes 集群 DNS 的 CoreDNS GA 正式发布</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/</link><pubDate>Tue, 10 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/</guid><description>&lt;!--
layout: blog
title: "CoreDNS GA for Kubernetes Cluster DNS"
date: 2018-07-10
---&gt;
&lt;!--
**Author**: John Belamaric (Infoblox)

**Editor’s note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/) on what’s new in Kubernetes 1.11**
---&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：John Belamaric (Infoblox)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;编者注：这篇文章是&lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;系列深度文章&lt;/a&gt;中的一篇，介绍了 Kubernetes 1.11 新增的功能&lt;/strong&gt;&lt;/p&gt;
&lt;!--
## Introduction

In Kubernetes 1.11, [CoreDNS](https://coredns.io) has reached General Availability (GA) for DNS-based service discovery, as an alternative to the kube-dns addon. This means that CoreDNS will be offered as an option in upcoming versions of the various installation tools. In fact, the kubeadm team chose to make it the default option starting with Kubernetes 1.11.
---&gt;
&lt;h2 id="介绍"&gt;介绍&lt;/h2&gt;
&lt;p&gt;在 Kubernetes 1.11 中，&lt;a href="https://coredns.io"&gt;CoreDNS&lt;/a&gt; 作为基于 DNS 的服务发现方案已达到正式发布（GA）阶段，可以替代 kube-dns 插件。这意味着在各种安装工具即将发布的版本中，CoreDNS 将作为一个选项提供。实际上，kubeadm 团队选择从 Kubernetes 1.11 开始将它设为默认选项。&lt;/p&gt;</description></item><item><title>基于 IPVS 的集群内部负载均衡</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/</link><pubDate>Mon, 09 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/</guid><description>&lt;!-- 
layout: blog
title: 'IPVS-Based In-Cluster Load Balancing Deep Dive'
date: 2018-07-09
--&gt;
&lt;!--

Author: Jun Du(Huawei), Haibin Xie(Huawei), Wei Liang(Huawei)

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.11

--&gt;
&lt;p&gt;作者：Jun Du（华为）、Haibin Xie（华为）、Wei Liang（华为）&lt;/p&gt;
&lt;p&gt;编者注：这篇文章是系列深度文章中的一篇，介绍了 Kubernetes 1.11 的新特性&lt;/p&gt;
&lt;!--

Introduction

Per the Kubernetes 1.11 release blog post , we announced that IPVS-Based In-Cluster Service Load Balancing graduates to General Availability. In this blog, we will take you through a deep dive of the feature.

--&gt;
&lt;p&gt;介绍&lt;/p&gt;</description></item><item><title>Airflow 在 Kubernetes 中的使用（第一部分）：一种不同的操作器</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/</link><pubDate>Thu, 28 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/</guid><description>&lt;!-- 
layout: blog
title: 'Airflow on Kubernetes (Part 1): A Different Kind of Operator'
date: 2018-06-28
--&gt;
&lt;!--
Author: Daniel Imberman (Bloomberg LP)
--&gt;
&lt;p&gt;作者: Daniel Imberman (Bloomberg LP)&lt;/p&gt;
&lt;!--
## Introduction

As part of Bloomberg's [continued commitment to developing the Kubernetes ecosystem](https://www.techatbloomberg.com/blog/bloomberg-awarded-first-cncf-end-user-award-contributions-kubernetes/), we are excited to announce the Kubernetes Airflow Operator; a mechanism for [Apache Airflow](https://airflow.apache.org/), a popular workflow orchestration framework to natively launch arbitrary Kubernetes Pods using the Kubernetes API.
--&gt;
&lt;h2 id="介绍"&gt;介绍&lt;/h2&gt;
&lt;p&gt;作为 Bloomberg &lt;a href="https://www.techatbloomberg.com/blog/bloomberg-awarded-first-cncf-end-user-award-contributions-kubernetes/"&gt;持续致力于发展 Kubernetes 生态系统&lt;/a&gt;的一部分，
我们很高兴地宣布推出 Kubernetes Airflow Operator：这是为流行的工作流编排框架
&lt;a href="https://airflow.apache.org/"&gt;Apache Airflow&lt;/a&gt; 提供的一种机制，
可使用 Kubernetes API 原生启动任意的 Kubernetes Pod。&lt;/p&gt;</description></item><item><title>Kubernetes 的动态 Ingress</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/06/07/dynamic-ingress-in-kubernetes/</link><pubDate>Thu, 07 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/06/07/dynamic-ingress-in-kubernetes/</guid><description>&lt;!--
title: Dynamic Ingress in Kubernetes
date: 2018-06-07
Author: Richard Li (Datawire)
--&gt;
&lt;!--
Kubernetes makes it easy to deploy applications that consist of many microservices, but one of the key challenges with this type of architecture is dynamically routing ingress traffic to each of these services. One approach is Ambassador, a Kubernetes-native open source API Gateway built on the Envoy Proxy. Ambassador is designed for dynamic environment where services may come and go frequently.
Ambassador is configured using Kubernetes annotations. Annotations are used to configure specific mappings from a given Kubernetes service to a particular URL. A mapping can include a number of annotations for configuring a route. Examples include rate limiting, protocol, cross-origin request sharing, traffic shadowing, and routing rules.
--&gt;
&lt;p&gt;Kubernetes 可以轻松部署由许多微服务组成的应用程序，但这种架构的关键挑战之一是动态地将流量路由到这些服务中的每一个。
一种方法是使用 &lt;a href="https://www.getambassador.io"&gt;Ambassador&lt;/a&gt;，
一个基于 &lt;a href="https://www.envoyproxy.io"&gt;Envoy Proxy&lt;/a&gt; 构建的 Kubernetes 原生开源 API 网关。
Ambassador 专为动态环境而设计，这类环境中的服务可能被频繁添加或删除。&lt;/p&gt;</description></item><item><title>Kubernetes 这四年</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/06/06/4-years-of-k8s/</link><pubDate>Wed, 06 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/06/06/4-years-of-k8s/</guid><description>&lt;!-- 
layout: blog
title: 4 Years of K8s
date: 2018-06-06
--&gt;
&lt;!--
**Author**: Joe Beda (CTO and Founder, Heptio)

On June 6, 2014 I checked in the [first commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56) of what would become the public repository for Kubernetes. Many would assume that is where the story starts. It is the beginning of history, right? But that really doesn’t tell the whole story.
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Joe Beda（Heptio 首席技术官兼创始人）&lt;/p&gt;
&lt;p&gt;2014 年 6 月 6 日，我签入了后来成为 Kubernetes 公共代码库的&lt;a href="https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56"&gt;第一次 commit&lt;/a&gt;。许多人会认为故事就是从这里开始的。这是历史的起点，对吧？但这并不能说明全部。&lt;/p&gt;</description></item><item><title>向 Discuss Kubernetes 问好</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/05/30/say-hello-to-discuss-kubernetes/</link><pubDate>Wed, 30 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/05/30/say-hello-to-discuss-kubernetes/</guid><description>&lt;!--
layout: blog
title: Say Hello to Discuss Kubernetes
date: 2018-05-30
--&gt;
&lt;!-- 

Author: Jorge Castro (Heptio)

--&gt;
&lt;p&gt;作者: Jorge Castro (Heptio)&lt;/p&gt;
&lt;!-- 

Communication is key when it comes to engaging a community of over 35,000 people in a global and remote environment. Keeping track of everything in the Kubernetes community can be an overwhelming task. On one hand we have our official resources, like Stack Overflow, GitHub, and the mailing lists, and on the other we have more ephemeral resources like Slack, where you can hop in, chat with someone, and then go on your merry way. 

--&gt;
&lt;p&gt;要让一个超过 35,000 人的社区在全球化的远程环境中保持参与，沟通是关键。跟踪 Kubernetes 社区中的所有内容可能是一项艰巨的任务。一方面，我们有 Stack Overflow、GitHub 和邮件列表等官方资源；另一方面，我们有 Slack 这样更即时的资源，你可以加入进去、与人聊天，然后各走各路。&lt;/p&gt;</description></item><item><title>在 Kubernetes 上开发</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/05/01/developing-on-kubernetes/</link><pubDate>Tue, 01 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/05/01/developing-on-kubernetes/</guid><description>&lt;!--
---
title: Developing on Kubernetes
date: 2018-05-01
slug: developing-on-kubernetes
---
--&gt;
&lt;!--
**Authors**: [Michael Hausenblas](https://twitter.com/mhausenblas) (Red Hat), [Ilya Dmitrichenko](https://twitter.com/errordeveloper) (Weaveworks)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;： &lt;a href="https://twitter.com/mhausenblas"&gt;Michael Hausenblas&lt;/a&gt; (Red Hat), &lt;a href="https://twitter.com/errordeveloper"&gt;Ilya Dmitrichenko&lt;/a&gt; (Weaveworks)&lt;/p&gt;
&lt;!-- 
How do you develop a Kubernetes app? That is, how do you write and test an app that is supposed to run on Kubernetes? This article focuses on the challenges, tools and methods you might want to be aware of to successfully write Kubernetes apps alone or in a team setting.
--&gt;
&lt;p&gt;您将如何开发一个 Kubernetes 应用？也就是说，您如何编写并测试一个要在 Kubernetes 上运行的应用程序？本文将重点介绍您在独自开发或团队协作时，为成功编写 Kubernetes 应用程序而可能需要了解的挑战、工具和方法。&lt;/p&gt;</description></item><item><title>Kubernetes 社区 - 2017 年开源排行榜榜首</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/04/25/open-source-charts-2017/</link><pubDate>Wed, 25 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/04/25/open-source-charts-2017/</guid><description>&lt;!--
---
title: Kubernetes Community - Top of the Open Source Charts in 2017
date: 2018-04-25
slug: open-source-charts-2017
---
---&gt;
&lt;!--
2017 was a huge year for Kubernetes, and GitHub’s latest [Octoverse report](https://octoverse.github.com) illustrates just how much attention this project has been getting.

Kubernetes, an [open source platform for running application containers](/docs/concepts/overview/what-is-kubernetes/), provides a consistent interface that enables developers and ops teams to automate the deployment, management, and scaling of a wide variety of applications on just about any infrastructure.
---&gt;
&lt;p&gt;对于 Kubernetes 来说，2017 年是丰收的一年，GitHub的最新 &lt;a href="https://octoverse.github.com"&gt;Octoverse 报告&lt;/a&gt; 说明了该项目获得了多少关注。&lt;/p&gt;</description></item><item><title>“基于容器的应用程序设计原理”</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/03/15/principles-of-container-app-design/</link><pubDate>Thu, 15 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/03/15/principles-of-container-app-design/</guid><description>&lt;!--
title: "Principles of Container-based Application Design"
date: 2018-03-15
slug: principles-of-container-app-design
url: /blog/2018/03/Principles-Of-Container-App-Design
--&gt;
&lt;!--
It's possible nowadays to put almost any application in a container and run it. Creating cloud-native applications, however—containerized applications that are automated and orchestrated effectively by a cloud-native platform such as Kubernetes—requires additional effort. Cloud-native applications anticipate failure; they run and scale reliably even when their infrastructure experiences outages. To offer such capabilities, cloud-native platforms like Kubernetes impose a set of contracts and constraints on applications. These contracts ensure that applications they run conform to certain constraints and allow the platform to automate application management.
--&gt;
&lt;p&gt;如今，几乎任何应用程序都可以放进容器中运行。
但是，要创建云原生应用程序，即由 Kubernetes 等云原生平台自动高效编排的容器化应用程序，还需要付出额外的努力。
云原生应用程序会预见到故障的发生；
即使底层基础架构出现故障，它们也能可靠地运行和扩展。
为了提供这样的能力，像 Kubernetes 这样的云原生平台会对应用程序施加一系列契约和约束。
这些契约确保其运行的应用程序符合某些约束条件，从而使平台能够自动化应用程序管理。&lt;/p&gt;</description></item><item><title>Kubernetes 1.9 对 Windows Server 容器提供 Beta 版本支持</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2018/01/09/kubernetes-v19-beta-windows-support/</link><pubDate>Tue, 09 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2018/01/09/kubernetes-v19-beta-windows-support/</guid><description>&lt;!--
title: Kubernetes v1.9 releases beta support for Windows Server Containers
date: 2018-01-09
slug: kubernetes-v19-beta-windows-support
url: /blog/2018/01/Kubernetes-V19-Beta-Windows-Support
---&gt;
&lt;!--
With the release of Kubernetes v1.9, our mission of ensuring Kubernetes works well everywhere and for everyone takes a great step forward. We’ve advanced support for Windows Server to beta along with continued feature and functional advancements on both the Kubernetes and Windows platforms. SIG-Windows has been working since March of 2016 to open the door for many Windows-specific applications and workloads to run on Kubernetes, significantly expanding the implementation scenarios and the enterprise reach of Kubernetes. 
---&gt;
&lt;p&gt;随着 Kubernetes v1.9 的发布，我们“确保 Kubernetes 在任何地方、为任何人都能良好运行”的使命向前迈进了一大步。我们已将对 Windows Server 的支持提升至 Beta 版，同时在 Kubernetes 和 Windows 平台上持续推进特性和功能改进。自 2016 年 3 月以来，SIG-Windows 一直在努力让许多特定于 Windows 的应用程序和工作负载能够运行在 Kubernetes 上，从而显著扩展了 Kubernetes 的实现场景和企业适用范围。&lt;/p&gt;</description></item><item><title> Kubernetes 中自动缩放</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2017/11/17/autoscaling-in-kubernetes/</link><pubDate>Fri, 17 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2017/11/17/autoscaling-in-kubernetes/</guid><description>&lt;!--
title: " Autoscaling in Kubernetes "
date: 2017-11-17
slug: autoscaling-in-kubernetes
url: /blog/2017/11/Autoscaling-In-Kubernetes
--&gt;
&lt;!--
Kubernetes allows developers to automatically adjust cluster sizes and the number of pod replicas based on current traffic and load. These adjustments reduce the amount of unused nodes, saving money and resources. In this talk, Marcin Wielgus of Google walks you through the current state of pod and node autoscaling in Kubernetes: .how it works, and how to use it, including best practices for deployments in production applications.
--&gt;
&lt;p&gt;Kubernetes 允许开发人员根据当前的流量和负载自动调整集群大小和 Pod 副本的数量。这些调整减少了未使用节点的数量，节省了资金和资源。
在这次演讲中，谷歌的 Marcin Wielgus 将带领您了解 Kubernetes 中 Pod 和节点自动扩缩的当前状态：它的工作原理、使用方法，以及在生产应用程序中部署的最佳实践。&lt;/p&gt;</description></item><item><title> Kubernetes 1.8 的五天</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2017/10/24/five-days-of-kubernetes-18/</link><pubDate>Tue, 24 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2017/10/24/five-days-of-kubernetes-18/</guid><description>&lt;!--
title: " Five Days of Kubernetes 1.8 "
date: 2017-10-24
slug: five-days-of-kubernetes-18
url: /blog/2017/10/Five-Days-Of-Kubernetes-18
--&gt;
&lt;!--
Kubernetes 1.8 is live, made possible by hundreds of contributors pushing thousands of commits in this latest releases.
--&gt;
&lt;p&gt;Kubernetes 1.8 已经发布，这要归功于数百名贡献者为这一最新版本推送的数千次提交。&lt;/p&gt;
&lt;!--
The community has tallied more than 66,000 commits in the main repo and continues rapid growth outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 120,000 commits across all repos and 17,839 commits across all repos for v1.7.0 to v1.8.0 alone.
--&gt;
&lt;p&gt;社区在主仓库中已累计超过 66,000 次提交，并且在主仓库之外仍保持快速增长，这标志着该项目日益成熟和稳定。仅从 v1.7.0 到 v1.8.0，社区就在所有仓库中记录了 17,839 次提交；所有仓库的提交总数已超过 120,000 次。&lt;/p&gt;</description></item><item><title> Kubernetes 社区指导委员会选举结果</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2017/10/05/kubernetes-community-steering-committee-election-results/</link><pubDate>Thu, 05 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2017/10/05/kubernetes-community-steering-committee-election-results/</guid><description>&lt;!--
title: " Kubernetes Community Steering Committee Election Results "
date: 2017-10-05
slug: kubernetes-community-steering-committee-election-results
url: /blog/2017/10/Kubernetes-Community-Steering-Committee-Election-Results
--&gt;
&lt;!--
Beginning with the announcement of Kubernetes 1.0 at OSCON in 2015, there has been a concerted effort to share the power and burden of leadership across the Kubernetes community. 
--&gt;
&lt;p&gt;自 2015 年在 OSCON 上发布 Kubernetes 1.0 以来，社区一直在齐心协力，共同分担 Kubernetes 社区的领导权与责任。&lt;/p&gt;
&lt;!--
With the work of the Bootstrap Governance Committee, consisting of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny and Tim Hockin - a cross section of long-time leaders representing 5 different companies with major investments of talent and effort in the Kubernetes Ecosystem - we wrote an initial [Steering Committee Charter](https://github.com/kubernetes/steering/blob/master/charter.md) and launched a community wide election to seat a Kubernetes Steering Committee. 
--&gt;
&lt;p&gt;自举治理委员会由 Brandon Philips、Brendan Burns、Brian Grant、Clayton Coleman、Joe Beda、Sarah Novotny 和 Tim Hockin 组成，他们是来自 5 家不同公司的长期领导者，在 Kubernetes 生态系统中投入了大量人才和精力。在该委员会的努力下，我们编写了最初的&lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;指导委员会章程&lt;/a&gt;，并发起了一次全社区选举，以选出 Kubernetes 指导委员会成员。&lt;/p&gt;</description></item><item><title> 使用 Kubernetes Pet Sets 和 Datera Elastic Data Fabric 的 FlexVolume 扩展有状态的应用程序</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/08/29/stateful-applications-using-kubernetes-datera/</link><pubDate>Mon, 29 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/08/29/stateful-applications-using-kubernetes-datera/</guid><description>&lt;!--
title: " Scaling Stateful Applications using Kubernetes Pet Sets and FlexVolumes with Datera Elastic Data Fabric "
date: 2016-08-29
slug: stateful-applications-using-kubernetes-datera
url: /blog/2016/08/Stateful-Applications-Using-Kubernetes-Datera
---&gt;
&lt;!--
_Editor’s note: today’s guest post is by Shailesh Mittal, Software Architect and Ashok Rajagopalan, Sr Director Product at Datera Inc, talking about Stateful Application provisioning with Kubernetes on Datera Elastic Data Fabric._ 
---&gt;
&lt;p&gt;&lt;em&gt;编者注：今天的邀请帖子来自 Datera 公司的软件架构师 Shailesh Mittal 和高级产品总监 Ashok Rajagopalan，介绍如何在 Datera Elastic Data Fabric 上使用 Kubernetes 制备有状态应用程序。&lt;/em&gt;&lt;/p&gt;</description></item><item><title>SIG Apps: 为 Kubernetes 构建应用并在 Kubernetes 中进行运维</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/08/16/sig-apps-running-apps-in-kubernetes/</link><pubDate>Tue, 16 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/08/16/sig-apps-running-apps-in-kubernetes/</guid><description>&lt;!--
title: " SIG Apps: build apps for and operate them in Kubernetes "
date: 2016-08-16
slug: sig-apps-running-apps-in-kubernetes
canonicalUrl: https://kubernetes.io/blog/2016/08/sig-apps-running-apps-in-kubernetes/
url: /blog/2016/08/Sig-Apps-Running-Apps-In-Kubernetes
--&gt;
&lt;!--
_Editor’s note: This post is by the Kubernetes SIG-Apps team sharing how they focus on the developer and devops experience of running applications in Kubernetes._ 

Kubernetes is an incredible manager for containerized applications. Because of this, [numerous](https://kubernetes.io/blog/2016/02/sharethis-kubernetes-in-production) [companies](https://blog.box.com/blog/kubernetes-box-microservices-maximum-velocity/) [have](http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/) [started](http://www.nextplatform.com/2015/11/12/inside-ebays-shift-to-kubernetes-and-containers-atop-openstack/) to run their applications in Kubernetes. 

Kubernetes Special Interest Groups ([SIGs](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig)) have been around to support the community of developers and operators since around the 1.0 release. People organized around networking, storage, scaling and other operational areas. 

As Kubernetes took off, so did the need for tools, best practices, and discussions around building and operating cloud native applications. To fill that need the Kubernetes [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) came into existence. 

SIG Apps is a place where companies and individuals can:
--&gt; 
&lt;p&gt;&lt;strong&gt;编者注&lt;/strong&gt;：这篇文章由 Kubernetes SIG-Apps 团队撰写，分享他们如何关注在 Kubernetes
中运行应用程序的开发者体验和 DevOps 体验。&lt;/p&gt;</description></item><item><title> Kubernetes 生日快乐。哦，这是你要去的地方！</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/21/oh-the-places-you-will-go/</link><pubDate>Thu, 21 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/21/oh-the-places-you-will-go/</guid><description>&lt;!--
title: " Happy Birthday Kubernetes. Oh, the places you’ll go! "
date: 2016-07-21
slug: oh-the-places-you-will-go
url: /blog/2016/07/Oh-The-Places-You-Will-Go
--&gt;
&lt;!--
_Editor’s note, Today’s guest post is from an independent Kubernetes contributor, Justin Santa Barbara, sharing his reflection on growth of the project from inception to its future._

**Dear K8s,**

_It’s hard to believe you’re only one - you’ve grown up so fast. On the occasion of your first birthday, I thought I would write a little note about why I was so excited when you were born, why I feel fortunate to be part of the group that is raising you, and why I’m eager to watch you continue to grow up!_
--&gt;
&lt;p&gt;&lt;em&gt;编者按：今天的嘉宾帖子来自独立 Kubernetes 贡献者 Justin Santa Barbara，分享了他对项目从创立之初到未来发展的思考。&lt;/em&gt;&lt;/p&gt;</description></item><item><title> 将端到端的 Kubernetes 测试引入 Azure （第二部分）</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/18/bringing-end-to-end-kubernetes-testing-to-azure-2/</link><pubDate>Mon, 18 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/18/bringing-end-to-end-kubernetes-testing-to-azure-2/</guid><description>&lt;!--
title: " Bringing End-to-End Kubernetes Testing to Azure (Part 2) "
date: 2016-07-18
slug: bringing-end-to-end-kubernetes-testing-to-azure-2
url: /blog/2016/07/Bringing-End-To-End-Kubernetes-Testing-To-Azure-2
--&gt;
&lt;!--
_Editor’s Note: Today’s guest post is Part II from a [series](https://kubernetes.io/blog/2016/06/bringing-end-to-end-testing-to-azure) by Travis Newhouse, Chief Architect at AppFormix, writing about their contributions to Kubernetes._ 
--&gt;
&lt;p&gt;&lt;em&gt;编者注：今天的邀请帖子是 AppFormix 首席架构师 Travis Newhouse 所写&lt;a href="https://kubernetes.io/blog/2016/06/bringing-end-to-end-testing-to-azure"&gt;系列文章&lt;/a&gt;的第二部分，介绍了他们对 Kubernetes 的贡献。&lt;/em&gt;&lt;/p&gt;
&lt;!--
Historically, Kubernetes testing has been hosted by Google, running e2e tests on [Google Compute Engine](https://cloud.google.com/compute/) (GCE) and [Google Container Engine](https://cloud.google.com/container-engine/) (GKE). In fact, the gating checks for the submit-queue are a subset of tests executed on these test platforms. Federated testing aims to expand test coverage by enabling organizations to host test jobs for a variety of platforms and contribute test results to benefit the Kubernetes project. Members of the Kubernetes test team at Google and SIG-Testing have created a [Kubernetes test history dashboard](http://storage.googleapis.com/kubernetes-test-history/static/index.html) that publishes the results from all federated test jobs (including those hosted by Google). 

In this blog post, we describe extending the e2e test jobs for Azure, and show how to contribute a federated test to the Kubernetes project. 
--&gt;
&lt;p&gt;历史上，Kubernetes 测试一直由谷歌托管，在&lt;a href="https://cloud.google.com/compute/"&gt;谷歌计算引擎&lt;/a&gt;（GCE）和&lt;a href="https://cloud.google.com/container-engine/"&gt;谷歌容器引擎&lt;/a&gt;（GKE）上运行端到端测试。实际上，提交队列的门控检查只是在这些测试平台上所执行测试的一个子集。联邦测试旨在让各组织能够为各种平台托管测试作业并贡献测试结果，从而扩大测试覆盖范围，使 Kubernetes 项目受益。谷歌的 Kubernetes 测试团队和 SIG-Testing 的成员已经创建了一个&lt;a href="http://storage.googleapis.com/kubernetes-test-history/static/index.html"&gt;Kubernetes 测试历史记录仪表板&lt;/a&gt;，用于发布所有联邦测试作业（包括谷歌托管的作业）的结果。&lt;/p&gt;</description></item><item><title> Dashboard - Kubernetes 的全功能 Web 界面</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/15/dashboard-web-interface-for-kubernetes/</link><pubDate>Fri, 15 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/15/dashboard-web-interface-for-kubernetes/</guid><description>&lt;!--
title: " Dashboard - Full Featured Web Interface for Kubernetes "
date: 2016-07-15
slug: dashboard-web-interface-for-kubernetes
url: /blog/2016/07/Dashboard-Web-Interface-For-Kubernetes
--&gt;
&lt;!--
_Editor’s note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2016/07/five-days-of-kubernetes-1-3) on what's new in Kubernetes 1.3_

[Kubernetes Dashboard](http://github.com/kubernetes/dashboard) is a project that aims to bring a general purpose monitoring and operational web interface to the Kubernetes world.&amp;nbsp;Three months ago we [released](https://kubernetes.io/blog/2016/04/building-awesome-user-interfaces-for-kubernetes) the first production ready version, and since then the dashboard has made massive improvements. In a single UI, you’re able to perform majority of possible interactions with your Kubernetes clusters without ever leaving your browser. This blog post breaks down new features introduced in the latest release and outlines the roadmap for the future.&amp;nbsp;
--&gt;
&lt;p&gt;&lt;em&gt;编者按：这篇文章是&lt;a href="https://kubernetes.io/blog/2016/07/five-days-of-kubernetes-1-3"&gt;一系列深入的文章&lt;/a&gt;中的一篇，介绍 Kubernetes 1.3 的新内容。&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://github.com/kubernetes/dashboard"&gt;Kubernetes Dashboard&lt;/a&gt; 是一个旨在为 Kubernetes 世界带来通用监控和运维 Web 界面的项目。三个月前，我们&lt;a href="https://kubernetes.io/blog/2016/04/building-awesome-user-interfaces-for-kubernetes"&gt;发布&lt;/a&gt;了第一个面向生产的版本；从那时起，Dashboard 已经有了大量改进。在单个 UI 中，您无需离开浏览器即可与 Kubernetes 集群执行大多数可能的交互。这篇博客文章将介绍最新版本中引入的新功能，并概述未来的路线图。&lt;/p&gt;</description></item><item><title> Citrix + Kubernetes = 全垒打</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/14/citrix-netscaler-and-kubernetes/</link><pubDate>Thu, 14 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/14/citrix-netscaler-and-kubernetes/</guid><description>&lt;!--
title: " Citrix + Kubernetes = A Home Run "
date: 2016-07-14
slug: citrix-netscaler-and-kubernetes
url: /blog/2016/07/Citrix-Netscaler-And-Kubernetes
--&gt;
&lt;!--
_Editor’s note: today’s guest post is by Mikko Disini, a Director of Product Management at Citrix Systems, sharing their collaboration experience on a Kubernetes integration.&amp;nbsp;_
--&gt;
&lt;p&gt;&lt;em&gt;编者按：今天的客座文章来自 Citrix Systems 的产品管理总监 Mikko Disini，他分享了他们在 Kubernetes 集成方面的合作经验。&lt;/em&gt;&lt;/p&gt;
&lt;!--
Technical collaboration is like sports. If you work together as a team, you can go down the homestretch and pull through for a win. That’s our experience with the Google Cloud Platform team.
--&gt;
&lt;p&gt;技术合作就像体育运动。如果大家像一个团队一样协作，就能在最后冲刺阶段坚持到底并赢得胜利。这就是我们与谷歌云平台团队合作的体验。&lt;/p&gt;</description></item><item><title>容器中运行有状态的应用！？ Kubernetes 1.3 说 “是！”</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/13/stateful-applications-in-containers-kubernetes/</link><pubDate>Wed, 13 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/07/13/stateful-applications-in-containers-kubernetes/</guid><description>&lt;!--
title: " Stateful Applications in Containers!? Kubernetes 1.3 Says “Yes!” "
date: 2016-07-13
slug: stateful-applications-in-containers-kubernetes
url: /blog/2016/07/stateful-applications-in-containers-kubernetes
--&gt;
&lt;!--
_Editor's note: today’s guest post is from Mark Balch, VP of Products at Diamanti, who’ll share more about the contributions they’ve made to Kubernetes._ 
--&gt;
&lt;p&gt;&lt;em&gt;编者注： 今天的来宾帖子来自 Diamanti 产品副总裁 Mark Balch，他将分享有关他们对 Kubernetes 所做的贡献的更多信息。&lt;/em&gt;&lt;/p&gt;
&lt;!--

Congratulations to the Kubernetes community on another [value-packed release](https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/). A focus on stateful applications and federated clusters are two reasons why I’m so excited about 1.3. Kubernetes support for stateful apps such as Cassandra, Kafka, and MongoDB is critical. Important services rely on databases, key value stores, message queues, and more. Additionally, relying on one data center or container cluster simply won’t work as apps grow to serve millions of users around the world. Cluster federation allows users to deploy apps across multiple clusters and data centers for scale and resiliency.

--&gt;
&lt;p&gt;祝贺 Kubernetes 社区发布了另一个&lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;有价值的版本&lt;/a&gt;。
专注于有状态应用程序和联邦集群是我对 1.3 如此兴奋的两个原因。
Kubernetes 对有状态应用程序（例如 Cassandra、Kafka 和 MongoDB）的支持至关重要。
重要服务依赖于数据库、键值存储、消息队列等。
此外，随着应用程序发展到为全球数百万用户提供服务，仅依靠单个数据中心或容器集群是行不通的。
联邦集群允许用户跨多个集群和数据中心部署应用程序，以实现规模和弹性。&lt;/p&gt;</description></item><item><title> CoreOS Fest 2016: CoreOS 和 Kubernetes 在柏林（和旧金山）社区见面会</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/05/03/coreosfest2016-kubernetes-community/</link><pubDate>Tue, 03 May 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/05/03/coreosfest2016-kubernetes-community/</guid><description>&lt;!--
title: " CoreOS Fest 2016: CoreOS and Kubernetes Community meet in Berlin (&amp; San Francisco) "
date: 2016-05-03
slug: coreosfest2016-kubernetes-community
url: /blog/2016/05/Coreosfest2016-Kubernetes-Community
--&gt;
&lt;!--
[CoreOS Fest 2016](https://coreos.com/fest/) will bring together the container and open source distributed systems community, including many thought leaders in the Kubernetes space. It is the second annual CoreOS community conference, held for the first time in Berlin on May 9th and 10th. CoreOS believes Kubernetes is the container orchestration component to deliver GIFEE (Google’s Infrastructure for Everyone Else). 
--&gt;
&lt;p&gt;&lt;a href="https://coreos.com/fest/"&gt;CoreOS Fest 2016&lt;/a&gt; 将汇集容器和开源分布式系统社区，其中包括 Kubernetes 领域的许多思想领袖。
这是第二届年度 CoreOS 社区会议，于 5 月 9 日至 10 日在柏林首次举行。
CoreOS 相信 Kubernetes 是交付 GIFEE（适用于其他所有人的 Google 基础架构）的容器编排组件。&lt;/p&gt;</description></item><item><title> SIG-ClusterOps: 提升 Kubernetes 集群的可操作性和互操作性</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/04/19/sig-clusterops-promote-operability-and-interoperability-of-k8s-clusters/</link><pubDate>Tue, 19 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/04/19/sig-clusterops-promote-operability-and-interoperability-of-k8s-clusters/</guid><description>&lt;!--
title: " SIG-ClusterOps: Promote operability and interoperability of Kubernetes clusters "
date: 2016-04-19
slug: sig-clusterops-promote-operability-and-interoperability-of-k8s-clusters
url: /blog/2016/04/Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters
--&gt;
&lt;!--
_Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the SIG-ClusterOps team whose mission is to promote operability and interoperability of Kubernetes clusters -- to listen, help &amp; escalate._ 
--&gt;
&lt;p&gt;&lt;em&gt;编者注：本周我们将推出 &lt;a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)"&gt;Kubernetes 特殊兴趣小组&lt;/a&gt;；今天的帖子由 SIG-ClusterOps 团队撰写，其使命是促进 Kubernetes 集群的可操作性和互操作性：倾听、帮助与上报。&lt;/em&gt;&lt;/p&gt;
&lt;!--
We think Kubernetes is an awesome way to run applications at scale! Unfortunately, there's a bootstrapping problem: we need good ways to build secure &amp; reliable scale environments around Kubernetes. While some parts of the platform administration leverage the platform (cool!), there are fundamental operational topics that need to be addressed and questions (like upgrade and conformance) that need to be answered. 
--&gt;
&lt;p&gt;我们认为 Kubernetes 是大规模运行应用程序的绝佳方法！
不幸的是，存在一个引导问题：我们需要良好的方法来围绕 Kubernetes 构建安全可靠的扩展环境。
虽然平台管理的某些部分可以利用平台自身（很酷！），但仍有一些基本的运维主题需要解决，还有一些问题（例如升级和一致性）需要回答。&lt;/p&gt;</description></item><item><title>“SIG-Networking：1.3 版本引入 Kubernetes 网络策略 API”</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/04/18/kubernetes-network-policy-apis/</link><pubDate>Mon, 18 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/04/18/kubernetes-network-policy-apis/</guid><description>&lt;!--
title: " SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
date: 2016-04-18
slug: kubernetes-network-policy-apis
url: /blog/2016/04/Kubernetes-Network-Policy-APIs
--&gt;
&lt;!--
_Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the Network-SIG team describing network policy APIs coming in 1.3 - policies for security, isolation and multi-tenancy._ 
--&gt;
&lt;p&gt;&lt;em&gt;编者注：本周我们将推出 &lt;a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)"&gt;Kubernetes 特殊兴趣小组&lt;/a&gt;；
今天的帖子来自 Network-SIG 小组，介绍 1.3 版中即将推出的网络策略 API，即用于安全、隔离和多租户的策略。&lt;/em&gt;&lt;/p&gt;
&lt;!--
The [Kubernetes network SIG](https://kubernetes.slack.com/messages/sig-network/) has been meeting regularly since late last year to work on bringing network policy to Kubernetes and we’re starting to see the results of this effort. 
--&gt;
&lt;p&gt;自去年下半年以来，&lt;a href="https://kubernetes.slack.com/messages/sig-network/"&gt;Kubernetes SIG-Networking&lt;/a&gt; 一直在定期开会，致力于将网络策略引入 Kubernetes，我们开始看到这个努力的结果。&lt;/p&gt;</description></item><item><title> 在 Rancher 中添加对 Kuernetes 的支持</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/04/08/adding-support-for-kubernetes-in-rancher/</link><pubDate>Fri, 08 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/04/08/adding-support-for-kubernetes-in-rancher/</guid><description>&lt;!--
title: " Adding Support for Kubernetes in Rancher "
date: 2016-04-08
slug: adding-support-for-kubernetes-in-rancher
url: /blog/2016/04/Adding-Support-For-Kubernetes-In-Rancher
--&gt;
&lt;!--
_Today’s guest post is written by Darren Shepherd, Chief Architect at Rancher Labs, an open-source software platform for managing containers._ --&gt;
&lt;p&gt;&lt;em&gt;今天的来宾帖子由 Rancher Labs（用于管理容器的开源软件平台）的首席架构师 Darren Shepherd 撰写。&lt;/em&gt;&lt;/p&gt;
&lt;!--
Over the last year, we’ve seen a tremendous increase in the number of companies looking to leverage containers in their software development and IT organizations. To achieve this, organizations have been looking at how to build a centralized container management capability that will make it simple for users to get access to containers, while centralizing visibility and control with the IT organization. In 2014 we started the open-source Rancher project to address this by building a management platform for containers. 
--&gt;
&lt;p&gt;在过去的一年中，我们看到希望在其软件开发和 IT 组织中利用容器的公司数量激增。
为了实现这一目标，各组织一直在研究如何构建集中式的容器管理能力，让用户可以轻松使用容器，同时将可见性和控制权集中到 IT 组织。
2014 年，我们启动了开源 Rancher 项目，通过构建容器管理平台来解决此问题。&lt;/p&gt;</description></item><item><title> KubeCon EU 2016：伦敦 Kubernetes 社区</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/24/kubecon-eu-2016-kubernetes-community-in/</link><pubDate>Wed, 24 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/24/kubecon-eu-2016-kubernetes-community-in/</guid><description>&lt;!--
title: " KubeCon EU 2016: Kubernetes Community in London "
date: 2016-02-24
slug: kubecon-eu-2016-kubernetes-community-in
url: /blog/2016/02/Kubecon-Eu-2016-Kubernetes-Community-In
--&gt;
&lt;!--
KubeCon EU 2016 is the inaugural [European Kubernetes](http://kubernetes.io/) community conference that follows on the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for[Kubernetes](http://kubernetes.io/) enthusiasts, production users and the surrounding ecosystem.
--&gt;
&lt;p&gt;KubeCon EU 2016 是首届&lt;a href="http://kubernetes.io/"&gt;欧洲 Kubernetes&lt;/a&gt; 社区会议，紧随 2015 年 11 月在美国的首次举办。KubeCon 致力于为 &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; 爱好者、生产用户和周边生态系统提供教育和社区参与的机会。&lt;/p&gt;
&lt;!--
Come join us in London and hang out with hundreds from the Kubernetes community and experience a wide variety of deep technical expert talks and use cases.
--&gt;
&lt;p&gt;快来伦敦加入我们，与数百位 Kubernetes 社区成员相聚，体验各种深入的技术专家演讲和用例。&lt;/p&gt;</description></item><item><title>Kubernetes 社区会议记录 - 20160218</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/23/kubernetes-community-meeting-notes_23/</link><pubDate>Tue, 23 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/23/kubernetes-community-meeting-notes_23/</guid><description>&lt;!--
---
title: " Kubernetes Community Meeting Notes - 20160218 "
date: 2016-02-23
slug: kubernetes-community-meeting-notes_23
url: /blog/2016/02/Kubernetes-community-meeting-notes-20160128
---
--&gt;
&lt;!--
##### February 18th - kmachine demo, clusterops SIG formed, new k8s.io website preview, 1.2 update and planning 1.3
The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.
--&gt;
&lt;h5 id="2月18号-kmachine-演示-sig-clusterops-成立-新的-k8s-io-网站预览-1-2-版本更新和-1-3-版本计划"&gt;2月18号 - kmachine 演示、SIG clusterops 成立、新的 k8s.io 网站预览、1.2 版本更新和 1.3 版本计划&lt;/h5&gt;
&lt;p&gt;Kubernetes 贡献社区会议大多在星期四太平洋时间 10:00 通过视频会议召开，讨论项目的现状。以下是最近一次会议的笔记。&lt;/p&gt;</description></item><item><title> Kubernetes 社区会议记录 - 20160204</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/09/kubernetes-community-meeting-notes/</link><pubDate>Tue, 09 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/09/kubernetes-community-meeting-notes/</guid><description>&lt;!--
title: " Kubernetes community meeting notes - 20160204 "
date: 2016-02-09
slug: kubernetes-community-meeting-notes
url: /blog/2016/02/Kubernetes-Community-Meeting-Notes
--&gt;
&lt;!--
#### February 4th - rkt demo (congratulations on the 1.0, CoreOS!), eBay puts k8s on Openstack and considers Openstack on k8s, SIGs, and flaky test surge makes progress.
--&gt;
&lt;h4 id="2-月-4-日-rkt-演示-祝贺-1-0-版本-coreos-ebay-将-k8s-放在-openstack-上并认为-openstack-在-k8s-sig-和片状测试激增方面取得了进展"&gt;2 月 4 日 - rkt 演示（祝贺 1.0 版本，CoreOS！），eBay 将 k8s 放在 Openstack 上并认为 Openstack 在 k8s，SIG 和片状测试激增方面取得了进展。&lt;/h4&gt;
&lt;!--
The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via a videoconference. Here are the notes from the latest meeting.
--&gt;
&lt;p&gt;Kubernetes 贡献社区在每周四太平洋时间 10:00 开会，通过视频会议讨论项目状态。以下是最近一次会议的笔记。&lt;/p&gt;</description></item><item><title> 容器世界现状，2016 年 1 月</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/01/state-of-container-world-january-2016/</link><pubDate>Mon, 01 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/02/01/state-of-container-world-january-2016/</guid><description>&lt;!--
title: " State of the Container World, January 2016 "
date: 2016-02-01
slug: state-of-container-world-january-2016
url: /blog/2016/02/State-Of-Container-World-January-2016
--&gt;
&lt;!--
At the start of the new year, we sent out a survey to gauge the state of the container world. We’re ready to send the [February edition](https://docs.google.com/forms/d/13yxxBqb5igUhwrrnDExLzZPjREiCnSs-AH-y4SSZ-5c/viewform), but before we do, let’s take a look at the January data from the 119 responses (thank you for participating!). 
--&gt;
&lt;p&gt;新年伊始，我们进行了一项调查，以评估容器世界的现状。
我们已经准备好发送&lt;a href="https://docs.google.com/forms/d/13yxxBqb5igUhwrrnDExLzZPjREiCnSs-AH-y4SSZ-5c/viewform"&gt;2 月版&lt;/a&gt;，但在此之前，让我们先看一下来自 119 份回复的 1 月数据（感谢您的参与！）。&lt;/p&gt;</description></item><item><title> Kubernetes 社区会议记录 - 20160114</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/01/28/kubernetes-community-meeting-notes/</link><pubDate>Thu, 28 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/01/28/kubernetes-community-meeting-notes/</guid><description>&lt;!--
---
title: " Kubernetes Community Meeting Notes - 20160114 "
date: 2016-01-28
slug: kubernetes-community-meeting-notes
url: /zh/blog/2016/01/Kubernetes-Community-Meeting-Notes
---
--&gt;
&lt;!--
##### January 14 - RackN demo, testing woes, and KubeCon EU CFP.
---
 Note taker: Joe Beda
---
--&gt;
&lt;h5 id="1-月-14-日-rackn-演示-测试问题和-kubecon-eu-cfp"&gt;1 月 14 日 - RackN 演示、测试问题和 KubeCon EU CFP。&lt;/h5&gt;
&lt;hr&gt;
&lt;h2 id="记录者-joe-beda"&gt;记录者：Joe Beda&lt;/h2&gt;
&lt;!--
* Demonstration: Automated Deploy on Metal, AWS and others w/ Digital Rebar, Rob Hirschfeld and Greg Althaus from RackN

 * Greg Althaus. CTO. Digital Rebar is the product. Bare metal provisioning tool.

 * Detect hardware, bring it up, configure raid, OS and get workload deployed.

 * Been working on Kubernetes workload.

 * Seeing trend to start in cloud and then move back to bare metal.

 * New provider model to use provisioning system on both cloud and bare metal.

 * UI, REST API, CLI

 * Demo: Packet -- bare metal as a service

 * 4 nodes running grouped into a "deployment"

 * Functional roles/operations selected per node.

 * Decomposed the kubernetes bring up into units that can be ordered and synchronized. Dependency tree -- things like wait for etcd to be up before starting k8s master.

 * Using the Ansible playbook under the covers.

 * Demo brings up 5 more nodes -- packet will build those nodes

 * Pulled out basic parameters from the ansible playbook. Things like the network config, dns set up, etc.

 * Hierarchy of roles pulls in other components -- making a node a master brings in a bunch of other roles that are necessary for that.

 * Has all of this combined into a command line tool with a simple config file.

 * Forward: extending across multiple clouds for test deployments. Also looking to create split/replicated across bare metal and cloud.

 * Q: secrets? 
A: using ansible playbooks. Builds own certs and then distributes them. Wants to abstract them out and push that stuff upstream.

 * Q: Do you support bringing up from real bare metal with PXE boot? 
A: yes -- will discover bare metal systems and install OS, install ssh keys, build networking, etc.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;演示：在裸机、AWS 和其他平台上使用 Digital Rebar 进行自动化部署，演示者为来自 RackN 的 Rob Hirschfeld 和 Greg Althaus。&lt;/p&gt;</description></item><item><title> 为什么 Kubernetes 不用 libnetwork</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/01/14/why-kubernetes-doesnt-use-libnetwork/</link><pubDate>Thu, 14 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/01/14/why-kubernetes-doesnt-use-libnetwork/</guid><description>&lt;!--
title: " Why Kubernetes doesn’t use libnetwork "
date: 2016-01-14
slug: why-kubernetes-doesnt-use-libnetwork
url: /blog/2016/01/Why-Kubernetes-Doesnt-Use-Libnetwork
--&gt;
&lt;!-- Kubernetes has had a very basic form of network plugins since before version 1.0 was released — around the same time as Docker's [libnetwork](https://github.com/docker/libnetwork) and Container Network Model ([CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md)) was introduced. Unlike libnetwork, the Kubernetes plugin system still retains its "alpha" designation. Now that Docker's network plugin support is released and supported, an obvious question we get is why Kubernetes has not adopted it yet. After all, vendors will almost certainly be writing plugins for Docker — we would all be better off using the same drivers, right? --&gt;
&lt;p&gt;早在 1.0 版本发布之前，Kubernetes 就已经有了一种非常基础的网络插件形式，大约与 Docker 的 &lt;a href="https://github.com/docker/libnetwork"&gt;libnetwork&lt;/a&gt; 和容器网络模型（&lt;a href="https://github.com/docker/libnetwork/blob/master/docs/design.md"&gt;CNM&lt;/a&gt;）推出的时间相当。与 libnetwork 不同，Kubernetes 的插件系统至今仍保留着 “alpha” 标记。既然 Docker 的网络插件支持已经发布并得到支持，我们经常被问到的一个明显问题是：为什么 Kubernetes 还没有采用它。毕竟，供应商几乎肯定会为 Docker 编写插件，我们大家都使用相同的驱动程序不是更好吗？&lt;/p&gt;</description></item><item><title>Kubernetes 和 Docker 简单的 leader election</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2016/01/11/simple-leader-election-with-kubernetes/</link><pubDate>Mon, 11 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2016/01/11/simple-leader-election-with-kubernetes/</guid><description>&lt;!--
title: " Simple leader election with Kubernetes and Docker "
date: 2016-01-11
slug: simple-leader-election-with-kubernetes
--&gt;
&lt;!--
#### Overview

Kubernetes simplifies the deployment and operational management of services running on clusters. However, it also simplifies the development of these services. In this post we'll see how you can use Kubernetes to easily perform leader election in your distributed application. Distributed applications usually replicate the tasks of a service for reliability and scalability, but often it is necessary to designate one of the replicas as the leader who is responsible for coordination among all of the replicas.
--&gt;
&lt;h4 id="概述"&gt;概述&lt;/h4&gt;
&lt;p&gt;Kubernetes 简化了集群上所运行服务的部署和运维管理。不过，它同样也简化了这些服务的开发。在本文中，我们将看到如何使用 Kubernetes 在分布式应用程序中轻松地执行 leader election（领导者选举）。分布式应用程序通常会为了可靠性和可伸缩性而复制服务的任务，但往往需要指定其中一个副本作为 leader，负责协调所有副本。&lt;/p&gt;</description></item><item><title>使用 Puppet 管理 Kubernetes Pod、Service 和 Replication Controller</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/12/17/managing-kubernetes-pods-services-and-replication-controllers-with-puppet/</link><pubDate>Thu, 17 Dec 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/12/17/managing-kubernetes-pods-services-and-replication-controllers-with-puppet/</guid><description>&lt;!--
title: " Managing Kubernetes Pods, Services and Replication Controllers with Puppet "
date: 2015-12-17
slug: managing-kubernetes-pods-services-and-replication-controllers-with-puppet
url: /blog/2015/12/Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet
--&gt;
&lt;!--
_Today’s guest post is written by Gareth Rushgrove, Senior Software Engineer at Puppet Labs, a leader in IT automation. Gareth tells us about a new Puppet module that helps manage resources in Kubernetes.&amp;nbsp;_

People familiar with [Puppet](https://github.com/puppetlabs/puppet)&amp;nbsp;might have used it for managing files, packages and users on host computers. But Puppet is first and foremost a configuration management tool, and config management is a much broader discipline than just managing host-level resources. A good definition of configuration management is that it aims to solve four related problems: identification, control, status accounting and verification and audit. These problems exist in the operation of any complex system, and with the new [Puppet Kubernetes module](https://forge.puppetlabs.com/garethr/kubernetes)&amp;nbsp;we’re starting to look at how we can solve those problems for Kubernetes.
--&gt;
&lt;p&gt;&lt;em&gt;今天的嘉宾帖子由 IT 自动化领域领导者 Puppet Labs 的高级软件工程师 Gareth Rushgrove 撰写。Gareth 向我们介绍了一个新的 Puppet 模块，它可以帮助管理 Kubernetes 中的资源。&lt;/em&gt;&lt;/p&gt;</description></item><item><title> Kubernetes 1.1 性能升级，工具改进和社区不断壮大</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/11/09/kubernetes-1-1-performance-upgrades-improved-tooling-and-a-growing-community/</link><pubDate>Mon, 09 Nov 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/11/09/kubernetes-1-1-performance-upgrades-improved-tooling-and-a-growing-community/</guid><description>&lt;!--
title: " Kubernetes 1.1 Performance upgrades, improved tooling and a growing community "
date: 2015-11-09
slug: kubernetes-1-1-performance-upgrades-improved-tooling-and-a-growing-community
url: /blog/2015/11/Kubernetes-1-1-Performance-Upgrades-Improved-Tooling-And-A-Growing-Community
--&gt;
&lt;!--
Since the Kubernetes 1.0 release in July, we’ve seen tremendous adoption by companies building distributed systems to manage their container clusters. We’re also been humbled by the rapid growth of the community who help make Kubernetes better everyday. We have seen commercial offerings such as Tectonic by CoreOS and RedHat Atomic Host emerge to deliver deployment and support of Kubernetes. And a growing ecosystem has added Kubernetes support including tool vendors such as Sysdig and Project Calico. 
--&gt;
&lt;p&gt;自从 Kubernetes 1.0 在七月发布以来，我们看到大量构建分布式系统来管理容器集群的公司采用了它。
快速成长的社区每天都在帮助 Kubernetes 变得更好，这也让我们深受鼓舞。
我们已经看到诸如 CoreOS 的 Tectonic 和 RedHat Atomic Host 之类的商业产品应运而生，用以提供 Kubernetes 的部署和支持。
不断发展的生态系统也增加了对 Kubernetes 的支持，其中包括 Sysdig 和 Project Calico 等工具供应商。&lt;/p&gt;</description></item><item><title> Kubernetes 社区每周环聊笔记——2015 年 7 月 31 日</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/08/04/weekly-kubernetes-community-hangout/</link><pubDate>Tue, 04 Aug 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/08/04/weekly-kubernetes-community-hangout/</guid><description>&lt;!--
title: " Weekly Kubernetes Community Hangout Notes - July 31 2015 "
date: 2015-08-04
slug: weekly-kubernetes-community-hangout
url: /blog/2015/08/Weekly-Kubernetes-Community-Hangout
--&gt;
&lt;!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum. 

Here are the notes from today's meeting: 
--&gt;
&lt;p&gt;每周，Kubernetes 贡献社区都会通过 Google 环聊线上开会。我们希望任何有兴趣的人都能了解此论坛讨论的内容。&lt;/p&gt;
&lt;p&gt;这是今天会议的笔记：&lt;/p&gt;
&lt;!--
* Private Registry Demo - Muhammed

 * Run docker-registry as an RC/Pod/Service

 * Run a proxy on every node

 * Access as localhost:5000

 * Discussion:

 * Should we back it by GCS or S3 when possible?

 * Run real registry backed by $object_store on each node

 * DNS instead of localhost?

 * disassemble image strings?

 * more like DNS policy?
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;私有镜像仓库演示 - Muhammed&lt;/p&gt;</description></item><item><title>宣布首个Kubernetes企业培训课程</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/07/08/announcing-first-kubernetes-enterprise/</link><pubDate>Wed, 08 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/07/08/announcing-first-kubernetes-enterprise/</guid><description>&lt;!--
title: " Announcing the First Kubernetes Enterprise Training Course "
date: 2015-07-08
slug: announcing-first-kubernetes-enterprise
url: /blog/2015/07/Announcing-First-Kubernetes-Enterprise
--&gt;
&lt;!-- At Google we rely on Linux application containers to run our core infrastructure. Everything from Search to Gmail runs in containers. &amp;nbsp;In fact, we like containers so much that even our Google Compute Engine VMs run in containers! &amp;nbsp;Because containers are critical to our business, we have been working with the community on many of the basic container technologies (from cgroups to Docker’s LibContainer) and even decided to build the next generation of Google’s container scheduling technology, Kubernetes, in the open. --&gt;
&lt;p&gt;在谷歌，我们依赖 Linux 应用容器来运行我们的核心基础架构。从搜索到 Gmail 的所有服务都运行在容器中。事实上，我们非常喜欢容器，甚至我们的 Google Compute Engine 虚拟机也运行在容器中！由于容器对我们的业务至关重要，我们一直与社区合作开发许多基础容器技术（从 cgroups 到 Docker 的 LibContainer），甚至决定以开放的方式构建谷歌的下一代容器调度技术 Kubernetes。&lt;/p&gt;</description></item><item><title>幻灯片：Kubernetes 集群管理，爱丁堡大学演讲</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/06/26/slides-cluster-management-with/</link><pubDate>Fri, 26 Jun 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/06/26/slides-cluster-management-with/</guid><description>&lt;!--
title: " Slides: Cluster Management with Kubernetes, talk given at the University of Edinburgh "
date: 2015-06-26
slug: slides-cluster-management-with
url: /blog/2015/06/Slides-Cluster-Management-With
--&gt;
&lt;!--
On Friday 5 June 2015 I gave a talk called [Cluster Management with Kubernetes](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&amp;loop=false&amp;delayms=3000) to a general audience at the University of Edinburgh. The talk includes an example of a music store system with a Kibana front end UI and an Elasticsearch based back end which helps to make concrete concepts like pods, replication controllers and services.

[Cluster Management with Kubernetes](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&amp;loop=false&amp;delayms=3000).
--&gt;
&lt;p&gt;2015 年 6 月 5 日星期五，我在爱丁堡大学面向普通听众做了一场题为&lt;a href="https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&amp;loop=false&amp;delayms=3000"&gt;使用 Kubernetes 进行集群管理&lt;/a&gt;的演讲。演讲包含一个音乐商店系统的示例，它使用 Kibana 作为前端 UI，后端基于 Elasticsearch，有助于将 Pod、副本控制器和服务等概念具体化。&lt;/p&gt;</description></item><item><title> OpenStack 上的 Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/05/19/kubernetes-on-openstack/</link><pubDate>Tue, 19 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/05/19/kubernetes-on-openstack/</guid><description>&lt;!--
title: " Kubernetes on OpenStack "
date: 2015-05-19
slug: kubernetes-on-openstack
url: /blog/2015/05/Kubernetes-On-Openstack
--&gt;
&lt;p&gt;&lt;a href="https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s1600/Untitled%2Bdrawing.jpg"&gt;&lt;img src="https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s400/Untitled%2Bdrawing.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--
Today, the [OpenStack foundation](https://www.openstack.org/foundation/) made it even easier for you deploy and manage clusters of Docker containers on OpenStack clouds by including Kubernetes in its [Community App Catalog](http://apps.openstack.org/). &amp;nbsp;At a keynote today at the OpenStack Summit in Vancouver, Mark Collier, COO of the OpenStack Foundation, and Craig Peters, &amp;nbsp;[Mirantis](https://www.mirantis.com/) product line manager, demonstrated the Community App Catalog workflow by launching a Kubernetes cluster in a matter of seconds by leveraging the compute, storage, networking and identity systems already present in an OpenStack cloud.
--&gt;
&lt;p&gt;今天，&lt;a href="https://www.openstack.org/foundation/"&gt;OpenStack 基金会&lt;/a&gt;将 Kubernetes 纳入其&lt;a href="http://apps.openstack.org/"&gt;社区应用程序目录&lt;/a&gt;，使你可以更轻松地在 OpenStack 云上部署和管理 Docker 容器集群。
在今天温哥华 OpenStack 峰会的主题演讲中，OpenStack 基金会首席运营官 Mark Collier 和 &lt;a href="https://www.mirantis.com/"&gt;Mirantis&lt;/a&gt; 产品线经理 Craig Peters 利用 OpenStack 云中已有的计算、存储、网络和身份系统，在几秒钟内启动了一个 Kubernetes 集群，演示了社区应用程序目录的工作流。&lt;/p&gt;</description></item><item><title> Kubernetes 社区每周聚会笔记- 2015年5月1日</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/05/11/weekly-kubernetes-community-hangout/</link><pubDate>Mon, 11 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/05/11/weekly-kubernetes-community-hangout/</guid><description>&lt;!--
title: " Weekly Kubernetes Community Hangout Notes - May 1 2015 "
date: 2015-05-11
slug: weekly-kubernetes-community-hangout
url: /blog/2015/05/Weekly-Kubernetes-Community-Hangout
--&gt;
&lt;!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
--&gt;
&lt;p&gt;每个星期，Kubernetes 贡献者社区都会通过 Google Hangouts 线上会面。我们希望任何对此感兴趣的人都能了解这个论坛的讨论内容。&lt;/p&gt;
&lt;!--

* Simple rolling update - Brendan

 * Rolling update = nice example of why RCs and Pods are good.

 * ...pause… (Brendan needs demo recovery tips from Kelsey)

 * Rolling update has recovery: Cancel update and restart, update continues from where it stopped.

 * New controller gets name of old controller, so appearance is pure update.

 * Can also name versions in update (won't do rename at the end).

--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;简单的滚动更新 - Brendan&lt;/p&gt;</description></item><item><title> 通过 RKT 对 Kubernetes 的 AppC 支持</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/05/04/appc-support-for-kubernetes-through-rkt/</link><pubDate>Mon, 04 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/05/04/appc-support-for-kubernetes-through-rkt/</guid><description>&lt;!--
title: " AppC Support for Kubernetes through RKT "
date: 2015-05-04
slug: appc-support-for-kubernetes-through-rkt
url: /blog/2015/05/Appc-Support-For-Kubernetes-Through-Rkt
--&gt;
&lt;!--
We very recently accepted a pull request to the Kubernetes project to add appc support for the Kubernetes community. &amp;nbsp;Appc is a new open container specification that was initiated by CoreOS, and is supported through CoreOS rkt container runtime.
--&gt;
&lt;p&gt;我们最近接受了一个针对 Kubernetes 项目的拉取请求，为 Kubernetes 社区增加 appc 支持。appc 是由 CoreOS 发起的新的开放容器规范，并通过 CoreOS 的 rkt 容器运行时得到支持。&lt;/p&gt;</description></item><item><title> Kubernetes 社区每周聚会笔记- 2015年4月24日</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/30/weekly-kubernetes-community-hangout_29/</link><pubDate>Thu, 30 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/30/weekly-kubernetes-community-hangout_29/</guid><description>&lt;!--
---
title: " Weekly Kubernetes Community Hangout Notes - April 24 2015 "
date: 2015-04-30
slug: weekly-kubernetes-community-hangout_29
url: /zh/blog/2015/04/Weekly-Kubernetes-Community-Hangout_29
---

--&gt;
&lt;!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
--&gt;
&lt;p&gt;每个星期，Kubernetes 贡献者社区都会通过 Google Hangouts 线上会面。我们希望任何对此感兴趣的人都能了解这个论坛的讨论内容。&lt;/p&gt;
&lt;!--
Agenda:

* Flocker and Kubernetes integration demo

--&gt;
&lt;p&gt;日程安排：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flocker 和 Kubernetes 集成演示&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Notes:

* flocker and kubernetes integration demo
* * Flocker Q/A

 * Does the file still exists on node1 after migration?

 * Brendan: Any plan this to make it a volume? So we don't need powerstrip?

 * Luke: Need to figure out interest to decide if we want to make it a first-class persistent disk provider in kube.

 * Brendan: Removing need for powerstrip would make it simple to use. Totally go for it.

 * Tim: Should take no more than 45 minutes to add it to kubernetes:)

--&gt;
&lt;p&gt;笔记：&lt;/p&gt;</description></item><item><title>Borg: Kubernetes 的前身</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/23/borg-predecessor-to-kubernetes/</link><pubDate>Thu, 23 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/23/borg-predecessor-to-kubernetes/</guid><description>&lt;!--
title: " Borg: The Predecessor to Kubernetes "
date: 2015-04-23
slug: borg-predecessor-to-kubernetes
url: /blog/2015/04/Borg-Predecessor-To-Kubernetes
--&gt;
&lt;!--
Google has been running containerized workloads in production for more than a decade. Whether it's service jobs like web front-ends and stateful servers, infrastructure systems like [Bigtable](http://research.google.com/archive/bigtable.html) and [Spanner](http://research.google.com/archive/spanner.html), or batch frameworks like [MapReduce](http://research.google.com/archive/mapreduce.html) and [Millwheel](http://research.google.com/pubs/pub41378.html), virtually everything at Google runs as a container. Today, we took the wraps off of Borg, Google’s long-rumored internal container-oriented cluster-management system, publishing details at the academic computer systems conference [Eurosys](http://eurosys2015.labri.fr/). You can find the paper [here](https://research.google.com/pubs/pub43438.html).
--&gt;
&lt;p&gt;十多年来，谷歌一直在生产环境中运行容器化工作负载。
无论是 Web 前端和有状态服务器之类的服务作业，像 &lt;a href="http://research.google.com/archive/bigtable.html"&gt;Bigtable&lt;/a&gt; 和
&lt;a href="http://research.google.com/archive/spanner.html"&gt;Spanner&lt;/a&gt; 一样的基础架构系统，还是像
&lt;a href="http://research.google.com/archive/mapreduce.html"&gt;MapReduce&lt;/a&gt; 和 &lt;a href="http://research.google.com/pubs/pub41378.html"&gt;Millwheel&lt;/a&gt; 一样的批处理框架，
Google 的几乎一切都以容器的方式运行。今天，我们揭开了 Borg 的面纱。Borg 是 Google 传闻已久的面向容器的内部集群管理系统，我们在学术计算机系统会议 &lt;a href="http://eurosys2015.labri.fr/"&gt;Eurosys&lt;/a&gt; 上发布了详细信息。你可以在&lt;a href="https://research.google.com/pubs/pub43438.html"&gt;此处&lt;/a&gt;找到论文。&lt;/p&gt;</description></item><item><title> Kubernetes 社区每周聚会笔记- 2015年4月17日</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/17/weekly-kubernetes-community-hangout_17/</link><pubDate>Fri, 17 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/17/weekly-kubernetes-community-hangout_17/</guid><description>&lt;!--
---
title: " Weekly Kubernetes Community Hangout Notes - April 17 2015 "
date: 2015-04-17
slug: weekly-kubernetes-community-hangout_17
url: /zh/blog/2015/04/Weekly-Kubernetes-Community-Hangout_17
---
--&gt;
&lt;!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
--&gt;
&lt;p&gt;每个星期，Kubernetes 贡献者社区都会通过 Google Hangouts 线上聚会。我们希望任何对此感兴趣的人都能了解这个论坛的讨论内容。&lt;/p&gt;
&lt;!--
Agenda

* Mesos Integration
* High Availability (HA)
* Adding performance and profiling details to e2e to track regressions
* Versioned clients

--&gt;
&lt;p&gt;议程&lt;/p&gt;</description></item><item><title> Kubernetes Release: 0.15.0</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/16/kubernetes-release-0150/</link><pubDate>Thu, 16 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/16/kubernetes-release-0150/</guid><description>&lt;!--
Release Notes:
--&gt;
&lt;p&gt;版本说明：&lt;/p&gt;
&lt;!--

* Enables v1beta3 API and sets it to the default API version ([#6098][1])
* Added multi-port Services ([#6182][2])
 * New Getting Started Guides
 * Multi-node local startup guide ([#6505][3])
 * Mesos on Google Cloud Platform ([#5442][4])
 * Ansible Setup instructions ([#6237][5])
* Added a controller framework ([#5270][6], [#5473][7])
* The Kubelet now listens on a secure HTTPS port ([#6380][8])
* Made kubectl errors more user-friendly ([#6338][9])
* The apiserver now supports client cert authentication ([#6190][10])
* The apiserver now limits the number of concurrent requests it processes ([#6207][11])
* Added rate limiting to pod deleting ([#6355][12])
* Implement Balanced Resource Allocation algorithm as a PriorityFunction in scheduler package ([#6150][13])
* Enabled log collection from master ([#6396][14])
* Added an api endpoint to pull logs from Pods ([#6497][15])
* Added latency metrics to scheduler ([#6368][16])
* Added latency metrics to REST client ([#6409][17])

--&gt;
&lt;ul&gt;
&lt;li&gt;启用 v1beta3 API 并将其设置为默认 API 版本 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6098" title="在 master 中默认启用 v1beta3 api 版本"&gt;#6098&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;增加了多端口服务 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6182" title="实现多端口服务"&gt;#6182&lt;/a&gt;)
&lt;ul&gt;
&lt;li&gt;新入门指南&lt;/li&gt;
&lt;li&gt;多节点本地启动指南 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6505" title="Docker 多节点"&gt;#6505&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Google 云平台上的 Mesos (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/5442" title="谷歌云平台上 Mesos 入门指南"&gt;#5442&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ansible 安装说明 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6237" title="示例 ansible 设置仓库"&gt;#6237&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;添加了一个控制器框架 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/5270" title="控制器框架"&gt;#5270&lt;/a&gt;, &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/5473" title="添加 DeltaFIFO（控制器框架块）"&gt;#5473&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kubelet 现在监听一个安全的 HTTPS 端口 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6380" title="将 kubelet 配置为使用 HTTPS (获得 2)"&gt;#6380&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;使 kubectl 错误更加友好 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6338" title="返回用于配置验证的类型化错误，并简化错误"&gt;#6338&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;apiserver 现在支持客户端 cert 身份验证 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6190" title="添加客户端证书认证"&gt;#6190&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;apiserver 现在限制了它处理的并发请求的数量 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6207" title="为服务器处理的正在运行的请求数量添加一个限制。"&gt;#6207&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;为删除 Pod 的操作添加了速率限制 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6355" title="添加速度限制删除 pod"&gt;#6355&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;将平衡资源分配算法作为优先级函数实现在调度程序包中 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6150" title="将均衡资源分配算法作为优先级函数实现在调度程序包中。"&gt;#6150&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;从主服务器启用日志收集功能 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6396" title="启用主服务器收集日志。"&gt;#6396&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;添加了一个 API 端点用于从 Pod 中提取日志 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6497" title="pod 子日志资源"&gt;#6497&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;为调度程序添加了延迟指标 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6368" title="将基本延迟指标添加到调度程序。"&gt;#6368&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;为 REST 客户端添加了延迟指标 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6409" title="向 REST 客户端添加延迟指标"&gt;#6409&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;!--

* etcd now runs in a pod on the master ([#6221][18])
* nginx now runs in a container on the master ([#6334][19])
* Began creating Docker images for master components ([#6326][20])
* Updated GCE provider to work with gcloud 0.9.54 ([#6270][21])
* Updated AWS provider to fix Region vs Zone semantics ([#6011][22])
* Record event when image GC fails ([#6091][23])
* Add a QPS limiter to the kubernetes client ([#6203][24])
* Decrease the time it takes to run make release ([#6196][25])
* New volume support
 * Added iscsi volume plugin ([#5506][26])
 * Added glusterfs volume plugin ([#6174][27])
 * AWS EBS volume support ([#5138][28])
* Updated to heapster version to v0.10.0 ([#6331][29])
* Updated to etcd 2.0.9 ([#6544][30])
* Updated to Kibana to v1.2 ([#6426][31])
* Bug Fixes
 * Kube-proxy now updates iptables rules if a service's public IPs change ([#6123][32])
 * Retry kube-addons creation if the initial creation fails ([#6200][33])
 * Make kube-proxy more resiliant to running out of file descriptors ([#6727][34])

--&gt;
&lt;ul&gt;
&lt;li&gt;etcd 现在在 master 上的一个 pod 中运行 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6221" title="在 pod 中运行 etcd 2.0.5"&gt;#6221&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;nginx 现在在 master 上的容器中运行 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6334" title="添加一个 nginx docker 镜像用于主程序。"&gt;#6334&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;开始为 master 组件构建 Docker 镜像 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6326" title="为主组件创建 Docker 镜像"&gt;#6326&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;更新了 GCE 驱动以适配 gcloud 0.9.54 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6270" title="gcloud 0.9.54 的更新"&gt;#6270&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;更新了 AWS 驱动以修正 Region 与 Zone 的语义 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6011" title="修复 AWS 区域 与 zone"&gt;#6011&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;记录镜像 GC 失败时的事件 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6091" title="记录镜像 GC 失败时的事件。"&gt;#6091&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;为 kubernetes 客户端添加 QPS 限制器 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6203" title="向 kubernetes 客户端添加 QPS 限制器。"&gt;#6203&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;减少运行 make release 所需的时间 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6196" title="在 `make release` 的构建和打包阶段并行化架构"&gt;#6196&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;新卷的支持
&lt;ul&gt;
&lt;li&gt;添加 iscsi 卷插件 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/5506" title="添加 iscsi 卷插件"&gt;#5506&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;添加 glusterfs 卷插件 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6174" title="实现 glusterfs 卷插件"&gt;#6174&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;AWS EBS 卷支持 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/5138" title="AWS EBS 卷支持"&gt;#5138&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;将 heapster 更新到 v0.10.0 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6331" title="将 heapster 版本更新到 v0.10.0"&gt;#6331&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;将 etcd 更新到 2.0.9 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6544" title="构建 etcd 镜像(版本 2.0.9)，并将 kubernetes 集群升级到新版本"&gt;#6544&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;将 Kibana 更新到 v1.2 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6426" title="更新 Kibana 到 v1.2，它对 Elasticsearch 的位置进行了参数化"&gt;#6426&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;缺陷修复
&lt;ul&gt;
&lt;li&gt;如果服务的公共 IP 发生变化，kube-proxy 现在会更新 iptables 规则 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6123" title="修复了 kube-proxy 中的一个错误，如果一个服务的公共 ip 发生变化，它不会更新 iptables 规则"&gt;#6123&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;如果初始创建失败，则重试 kube-addons 创建 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6200" title="如果 kube-addons 创建失败，请重试 kube-addons 创建。"&gt;#6200&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;使 kube-proxy 在文件描述符耗尽时更具弹性 (&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6727" title="pkg/proxy: fd 用完后引起恐慌"&gt;#6727&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
To download, please visit https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0
--&gt;
&lt;p&gt;要下载，请访问 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0"&gt;https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0&lt;/a&gt;&lt;/p&gt;</description></item><item><title> 每周 Kubernetes 社区例会笔记 - 2015 年 4 月 3 日</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/04/weekly-kubernetes-community-hangout/</link><pubDate>Sat, 04 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/04/04/weekly-kubernetes-community-hangout/</guid><description>&lt;!--
title: " Weekly Kubernetes Community Hangout Notes - April 3 2015 "
date: 2015-04-04
slug: weekly-kubernetes-community-hangout
url: /blog/2015/04/Weekly-Kubernetes-Community-Hangout
--&gt;
&lt;!--
# Kubernetes: Weekly Kubernetes Community Hangout Notes
--&gt;
&lt;h1 id="kubernetes-每周-kubernetes-社区聚会笔记"&gt;Kubernetes: 每周 Kubernetes 社区聚会笔记&lt;/h1&gt;
&lt;!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
--&gt;
&lt;p&gt;每周，Kubernetes 贡献者社区都会通过 Google Hangouts 线上开会。
我们希望任何有兴趣的人都能了解本论坛讨论的内容。&lt;/p&gt;
&lt;!--
Agenda:
--&gt;
&lt;p&gt;议程：&lt;/p&gt;
&lt;!--
* Quinton - Cluster federation
* Satnam - Performance benchmarking update
--&gt;
&lt;ul&gt;
&lt;li&gt;Quinton - 集群联邦&lt;/li&gt;
&lt;li&gt;Satnam - 性能基准测试更新&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
*Notes from meeting:*
--&gt;
&lt;p&gt;&lt;em&gt;会议记录：&lt;/em&gt;&lt;/p&gt;</description></item><item><title>Kubernetes 社区每周聚会笔记 - 2015 年 3 月 27 日</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/03/28/weekly-kubernetes-community-hangout/</link><pubDate>Sat, 28 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/03/28/weekly-kubernetes-community-hangout/</guid><description>&lt;!--
title: " Weekly Kubernetes Community Hangout Notes - March 27 2015 "
date: 2015-03-28
slug: weekly-kubernetes-community-hangout
url: /blog/2015/03/Weekly-Kubernetes-Community-Hangout
--&gt;
&lt;!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
--&gt;
&lt;p&gt;每个星期，Kubernetes 贡献者社区都会通过 Google Hangouts 线上聚会。我们希望任何对此感兴趣的人都能了解这个论坛的讨论内容。&lt;/p&gt;
&lt;!--
Agenda:
--&gt;
&lt;p&gt;日程安排：&lt;/p&gt;
&lt;!--

\- Andy - demo remote execution and port forwarding

\- Quinton - Cluster federation - Postponed

\- Clayton - UI code sharing and collaboration around Kubernetes

--&gt;
&lt;p&gt;- Andy - 演示远程执行和端口转发&lt;/p&gt;</description></item><item><title> Kubernetes 聚会视频</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/03/23/kubernetes-gathering-videos/</link><pubDate>Mon, 23 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/03/23/kubernetes-gathering-videos/</guid><description>&lt;!--
title: " Kubernetes Gathering Videos "
date: 2015-03-23
slug: kubernetes-gathering-videos
url: /blog/2015/03/Kubernetes-Gathering-Videos
--&gt;
&lt;!--
If you missed the Kubernetes Gathering in SF last month, fear not! Here are the videos from the evening presentations organized into a playlist on YouTube

[![Kubernetes Gathering](https://img.youtube.com/vi/q8lGZCKktYo/0.jpg)](https://www.youtube.com/playlist?list=PL69nYSiGNLP2FBVvSLHpJE8_6hRHW8Kxe)
--&gt;
&lt;p&gt;如果你错过了上个月在旧金山举行的 Kubernetes Gathering，也不必遗憾！以下是当晚各场演讲的视频，已在 YouTube 上整理成一个播放列表。&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP2FBVvSLHpJE8_6hRHW8Kxe"&gt;&lt;img src="https://img.youtube.com/vi/q8lGZCKktYo/0.jpg" alt="Kubernetes Gathering"&gt;&lt;/a&gt;&lt;/p&gt;</description></item><item><title>欢迎来到 Kubernetes 博客!</title><link>https://andygol-k8s.netlify.app/zh-cn/blog/2015/03/20/welcome-to-kubernetes-blog/</link><pubDate>Fri, 20 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/blog/2015/03/20/welcome-to-kubernetes-blog/</guid><description>&lt;!--
title: Welcome to the Kubernetes Blog!
date: 2015-03-20
slug: welcome-to-kubernetes-blog
url: /blog/2015/03/Welcome-To-Kubernetes-Blog
--&gt;
&lt;!--
Welcome to the new Kubernetes Blog. Follow this blog to learn about the Kubernetes Open Source project. We plan to post release notes, how-to articles, events, and maybe even some off topic fun here from time to time.
--&gt;
&lt;p&gt;欢迎来到新的 Kubernetes 博客。关注此博客即可了解 Kubernetes 开源项目的动态。我们计划不定期发布版本说明、操作指南、活动信息，偶尔甚至还有一些题外趣闻。&lt;/p&gt;
&lt;!--
If you are using Kubernetes or contributing to the project and would like to do a guest post, [please let me know](mailto:kitm@google.com).
--&gt;
&lt;p&gt;如果你正在使用 Kubernetes 或为该项目做贡献，并希望发表客座文章，&lt;a href="mailto:kitm@google.com"&gt;请告诉我&lt;/a&gt;。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/community/static/cncf-code-of-conduct/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/community/static/cncf-code-of-conduct/</guid><description>&lt;!--
Do not edit this file directly. Get the latest from
https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/zh.md
--&gt;
&lt;!--
## CNCF Community Code of Conduct v1.3

### Community Code of Conduct
--&gt;
&lt;h2 id="cncf-community-code-of-conduct-v13"&gt;云原生计算基金会（CNCF）社区行为准则 1.3 版本&lt;/h2&gt;
&lt;h3 id="community-code-of-conduct"&gt;社区行为准则&lt;/h3&gt;
&lt;!--
As contributors, maintainers, and participants in the CNCF community, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who participate or contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, attending conferences or events, or engaging in other community or project activities.

We are committed to making participation in the CNCF community a harassment-free experience for everyone, regardless of age, body size, caste, disability, ethnicity, level of experience, family status, gender, gender identity and expression, marital status, military or veteran status, nationality, personal appearance, race, religion, sexual orientation, socieconomic status, tribe, or any other dimension of diversity.
--&gt;
&lt;p&gt;作为 CNCF 社区的贡献者、维护者和参与者，我们努力建设一个开放和受欢迎的社区，我们承诺尊重所有上报
Issue、发布功能需求、更新文档、提交 PR 或补丁、参加会议活动以及其他社区和项目活动的贡献者和参与者。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/community/static/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/community/static/readme/</guid><description>&lt;!-- The files in this directory have been imported from other sources. Do not
edit them directly, except by replacing them with new versions.
Localization note: you do not need to create localized versions of any of
 the files in this directory.
--&gt;
&lt;p&gt;本路径下的文件从其它地方导入。
除了版本更新，不要直接修改。
本地化说明：你无需为此目录中的任何文件创建本地化版本。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/prerequisites-ref-docs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/generate-ref-docs/prerequisites-ref-docs/</guid><description>&lt;!--
### Requirements:

- You need a machine that is running Linux or macOS.

- You need to have these tools installed:

 - [Python](https://www.python.org/downloads/) v3.7.x+
 - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
 - [Golang](https://go.dev/dl/) version 1.13+
 - [Pip](https://pypi.org/project/pip/) used to install PyYAML
 - [PyYAML](https://pyyaml.org/) v5.1.2
 - [make](https://www.gnu.org/software/make/)
 - [gcc compiler/linker](https://gcc.gnu.org/)
 - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference)
--&gt;
&lt;h3 id="requirements"&gt;需求&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;你需要一台运行 Linux 或 macOS 的机器。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;你需要安装以下工具：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.python.org/downloads/"&gt;Python&lt;/a&gt; v3.7.x+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://go.dev/dl/"&gt;Golang&lt;/a&gt; 1.13+ 版本&lt;/li&gt;
&lt;li&gt;用来安装 PyYAML 的 &lt;a href="https://pypi.org/project/pip/"&gt;Pip&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pyyaml.org/"&gt;PyYAML&lt;/a&gt; v5.1.2&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gnu.org/software/make/"&gt;make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gcc.gnu.org/"&gt;gcc compiler/linker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/installation/"&gt;Docker&lt;/a&gt; （仅用于 &lt;code&gt;kubectl&lt;/code&gt; 命令参考）&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- Your `PATH` environment variable must include the required build tools, such as the `Go` binary and `python`.

- You need to know how to create a pull request to a GitHub repository.
 This involves creating your own fork of the repository. For more
 information, see [Work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo).
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;你的 &lt;code&gt;PATH&lt;/code&gt; 环境变量必须包含所需要的构建工具，例如 &lt;code&gt;Go&lt;/code&gt; 程序和 &lt;code&gt;python&lt;/code&gt;。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm/</guid><description>&lt;!-- 
kubeadm: easily bootstrap a secure Kubernetes cluster

### Synopsis 
--&gt;
&lt;p&gt;kubeadm：轻松创建一个安全的 Kubernetes 集群&lt;/p&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
 ┌──────────────────────────────────────────────────────────┐
 │ KUBEADM │
 │ Easily bootstrap a secure Kubernetes cluster │
 │ │
 │ Please give us feedback at: │
 │ https://github.com/kubernetes/kubeadm/issues │
 └──────────────────────────────────────────────────────────┘
--&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;┌──────────────────────────────────────────────────────────┐
│ KUBEADM                                                  │
│ 轻松创建一个安全的 Kubernetes 集群                       │
│                                                          │
│ 给我们反馈意见的地址：                                   │
│ https://github.com/kubernetes/kubeadm/issues             │
└──────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;&lt;!-- 
Example usage: 
--&gt;
&lt;p&gt;用途示例：&lt;/p&gt;
&lt;!-- 
 Create a two-machine cluster with one control-plane node
 (which controls the cluster), and one worker node
 (where your workloads, like Pods and Deployments run).
--&gt;
&lt;p&gt;创建一个有两台机器的集群，包含一个控制平面节点（用来控制集群）
和一个工作节点（运行你的 Pod 和 Deployment 等工作负载）。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_certificate-key/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_certificate-key/</guid><description>&lt;!--
Generate certificate keys
--&gt;
&lt;p&gt;生成证书密钥。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command will print out a secure randomly-generated certificate key that can be used with
the "init" command.
--&gt;
&lt;p&gt;此命令将打印出一个安全的随机生成的证书密钥，可与 &amp;quot;init&amp;quot; 命令一起使用。&lt;/p&gt;
&lt;!--
You can also use "kubeadm init --upload-certs" without specifying a certificate key and it will generate and print one for you.
--&gt;
&lt;p&gt;你也可以使用 &lt;code&gt;kubeadm init --upload-certs&lt;/code&gt; 而无需指定证书密钥；
此命令将为你生成并打印一个证书密钥。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs certificate-key [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
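&lt;p&gt;上述密钥的一个典型用法示意（仅为草图，完整的 join 命令与实际输出取决于你的环境）：&lt;/p&gt;

```shell
# 生成一个安全的随机证书密钥并保存到变量中
KEY=$(kubeadm certs certificate-key)

# 在第一个控制平面节点上初始化集群时上传证书，并复用该密钥
kubeadm init --upload-certs --certificate-key "$KEY"

# 其他控制平面节点之后可以携带同一密钥加入集群
# （join 命令和 token 需从 init 的输出中获取，此处从略）
```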
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for certificate-key
--&gt;
certificate-key 子命令的帮助信息。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_check-expiration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_check-expiration/</guid><description>&lt;!--
Check certificates expiration for a Kubernetes cluster 
--&gt;
&lt;p&gt;检查 Kubernetes 集群证书的到期时间。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Checks expiration for the certificates in the local PKI managed by kubeadm.
--&gt;
&lt;p&gt;检查 kubeadm 管理的本地 PKI 中证书的到期时间。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm certs check-expiration &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
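&lt;p&gt;上述命令的一个典型用法示意（假设证书位于默认目录，路径仅为示例）：&lt;/p&gt;

```shell
# 检查默认 PKI 目录中各证书的剩余有效期
kubeadm certs check-expiration

# 如果证书存放在非默认位置，可通过 --cert-dir 显式指定
kubeadm certs check-expiration --cert-dir /etc/kubernetes/pki
```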
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true
--&gt;
--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：true
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果为 true，忽略模板中缺少某字段或映射键的错误。仅适用于 golang 和
jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_generate-csr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_generate-csr/</guid><description>&lt;!--
Generate keys and certificate signing requests
--&gt;
&lt;p&gt;生成密钥和证书签名请求。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!-- 
Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the "users &amp;gt; user &amp;gt; client-key-data" field, and for each kubeconfig file an accompanying ".csr" file is created.
--&gt;
&lt;p&gt;为运行控制平面所需的所有证书生成密钥和证书签名请求（CSR）。该命令会生成部分 kubeconfig 文件，
其中 &amp;quot;users &amp;gt; user &amp;gt; client-key-data&amp;quot; 字段包含私钥数据，并为每个 kubeconfig
文件创建一个随附的 &amp;quot;.csr&amp;quot; 文件。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew certificates for a Kubernetes cluster
--&gt;
&lt;p&gt;为 Kubernetes 集群续订证书。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm certs renew &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for renew
--&gt;
renew 子命令的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验性] 指向 '真实' 主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_admin.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_admin.conf/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself.
--&gt;
&lt;p&gt;续订 kubeconfig 文件中嵌入的证书，供管理员和 kubeadm 自身使用。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都是无条件进行的；SAN
等额外属性将基于现有文件/证书，因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用由 kubeadm 管理的本地 PKI 中的证书机构；
作为替代方案，也可以使用 K8s 证书 API 进行证书续订，
或者（作为最后一种选择）生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_all/</guid><description>&lt;!-- 
Renew all available certificates 
--&gt;
&lt;p&gt;续订所有可用证书。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew all known certificates necessary to run the control plane. Renewals are run unconditionally, regardless of expiration date. Renewals can also be run individually for more control.
--&gt;
&lt;p&gt;续订运行控制平面所需的所有已知证书。续订是无条件进行的，与到期日期无关。续订也可以单独运行以进行更多控制。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
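&lt;p&gt;上述命令的一个简单用法示意（续订之后通常还需重启控制平面组件才能生效，具体方式取决于你的部署）：&lt;/p&gt;

```shell
# 无条件续订运行控制平面所需的所有已知证书
kubeadm certs renew all

# 也可以单独续订某个证书以获得更细粒度的控制，例如：
kubeadm certs renew apiserver
```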
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
&lt;p&gt;The path where to save and store the certificates.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-etcd-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-etcd-client/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate the apiserver uses to access etcd.
--&gt;
&lt;p&gt;续订 apiserver 用于访问 etcd 的证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都会无条件地进行；SAN 等额外属性将基于现有文件/证书，
因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订尝试使用在 kubeadm 所管理的本地 PKI 中的证书颁发机构；
作为替代方案，可以使用 K8s 证书 API 进行证书更新，或者（作为最后一个选项）生成
CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-kubelet-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-kubelet-client/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate for the API server to connect to kubelet.
--&gt;
&lt;p&gt;续订 apiserver 用于连接 kubelet 的证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都会无条件地进行；SAN 等额外属性将基于现有文件/证书，
因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用 kubeadm 所管理的本地 PKI 中的证书颁发机构；作为替代方案，
也可以使用 K8s 证书 API 进行证书续订；或者，作为最后一种选择，生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate for serving the Kubernetes API.
--&gt;
&lt;p&gt;续订用于提供 Kubernetes API 的证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都会无条件地进行；SAN
等额外属性将基于现有文件/证书，因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订尝试在 kubeadm 管理的本地 PKI 中使用证书颁发机构；
作为替代方案，可以使用 K8s 证书 API 进行证书更新，
或者作为最后一个选择来生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_controller-manager.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_controller-manager.conf/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate embedded in the kubeconfig file for the controller manager to use.
--&gt;
&lt;p&gt;续订 kubeconfig 文件中嵌入的证书，以供控制器管理器（Controller Manager）使用。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;续订无条件地进行，与证书的到期日期无关；SAN 等额外属性将基于现有的文件/证书，
因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用 kubeadm 管理的本地 PKI 中的证书颁发机构；作为替代方案，
可以使用 K8s 证书 API 进行证书续订；亦或者，作为最后一种选择，生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-healthcheck-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-healthcheck-client/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate for liveness probes to healthcheck etcd.
--&gt;
&lt;p&gt;续订存活态探针的证书，用于对 etcd 执行健康检查。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都是无条件进行的；SAN
等额外属性将基于现有文件/证书，因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用由 kubeadm 管理的本地 PKI 中的证书机构；
作为替代方案，也可以使用 K8s certificate API 进行证书续订，
或者（作为最后一种选择）生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-peer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-peer/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate for etcd nodes to communicate with each other.
--&gt;
&lt;p&gt;续订 etcd 节点间用来相互通信的证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都是无条件进行的；SAN
等额外属性将基于现有文件/证书，因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用由 kubeadm 管理的本地 PKI 中的证书机构；
作为替代方案，也可以使用 K8s certificate API 进行证书续订，
或者（作为最后一种选择）生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-server/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate for serving etcd.
--&gt;
&lt;p&gt;续订用于提供 etcd 服务的证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;续订无条件地进行，与证书的到期日期无关；SAN
等额外属性将基于现有的文件/证书，因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试在 kubeadm 管理的本地 PKI 中使用证书颁发机构；
作为替代方案，可以使用 K8s 证书 API 进行证书续订，
或者作为最后一种选择来生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_front-proxy-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_front-proxy-client/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate for the front proxy client.
--&gt;
&lt;p&gt;为前端代理客户端续订证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;无论证书的到期日期如何，续订都会无条件地进行；SAN 等额外属性将基于现有文件/证书，
因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订尝试使用位于 kubeadm 所管理的本地 PKI 中的证书颁发机构；作为替代方案，
也可以使用 K8s certificate API 进行证书续订；亦或者，作为最后一种方案，生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_scheduler.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_scheduler.conf/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate embedded in the kubeconfig file for the scheduler manager to use.
--&gt;
&lt;p&gt;续订 kubeconfig 文件中嵌入的证书，以供调度管理器使用。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;续订无条件地进行，与证书的到期日期无关；SAN 等额外属性将基于现有的文件/证书，
因此无需重新提供它们。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用在 kubeadm 所管理的本地 PKI 中的证书颁发机构；作为替代方案，
也可以使用 K8s certificate API 进行证书续订；亦或者，作为最后一种选择，生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_super-admin.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_super-admin.conf/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Renew the certificate embedded in the kubeconfig file for the super-admin.
--&gt;
&lt;p&gt;续订嵌入在 kubeconfig 文件中、供 super-admin 使用的证书。&lt;/p&gt;
&lt;!--
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
--&gt;
&lt;p&gt;续期操作将无条件进行，不论证书的到期日期是何时；诸如 SAN
之类的额外属性将基于现有文件/证书，无需重新提供。&lt;/p&gt;
&lt;!--
Renewal by default tries to use the certificate authority in the local PKI managed by kubeadm; as alternative it is possible to use K8s certificate API for certificate renewal, or as a last option, to generate a CSR request.
--&gt;
&lt;p&gt;默认情况下，续订会尝试使用由 kubeadm 管理的本地 PKI 中的证书机构；
作为替代方案，也可以使用 K8s certificate API 进行证书续订，
或者（作为最后一种选择）生成 CSR 请求。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Interact with container images used by kubeadm.
--&gt;
&lt;p&gt;与 kubeadm 使用的容器镜像交互。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm config images &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for images
--&gt;
images 子命令的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
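&lt;p&gt;此命令本身只是入口，实际操作由其子命令完成。下面是一个简单示意（具体子命令与标志以所用版本的命令帮助为准）：&lt;/p&gt;

```shell
# 列出 kubeadm 将使用的镜像
kubeadm config images list

# 预先拉取这些镜像（例如在离线安装之前）
kubeadm config images pull
```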
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--kubeconfig string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/admin.conf"
--&gt;
--kubeconfig string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/admin.conf"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--&gt;
用于和集群通信的 kubeconfig 文件。如果未设置此标志，
则会在一组标准位置中搜索已有的 kubeconfig 文件。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_list/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Print a list of images kubeadm will use. The configuration file is used in case any images or image repositories are customized
--&gt;
&lt;p&gt;打印 kubeadm 将使用的镜像列表。如果镜像或镜像仓库被自定义，则使用配置文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm config images list &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!-- --allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true --&gt;
--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：true
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果设置为 true，则在模板中缺少字段或哈希表的键时忽略模板中的任何错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_pull/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_pull/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Pull images used by kubeadm.
--&gt;
&lt;p&gt;拉取 kubeadm 使用的镜像。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm config images pull &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cri-socket string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--&gt;
要连接的 CRI 套接字的路径。如果为空，则 kubeadm 将尝试自动检测此值；仅当安装了多个 CRI 或使用非标准 CRI 套接字时，才使用此选项。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_migrate/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_migrate/</guid><description>&lt;!-- 
Read an older version of the kubeadm configuration API types from a file, and output the similar config object for the newer version 
--&gt;
&lt;p&gt;从文件中读取旧版本的 kubeadm 配置的 API 类型，并为新版本输出类似的配置对象&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command lets you convert configuration objects of older versions to the latest supported version,
locally in the CLI tool without ever touching anything in the cluster.
In this version of kubeadm, the following API versions are supported:
- kubeadm.k8s.io/v1beta3
--&gt;
&lt;p&gt;此命令允许你在 CLI 工具中将本地旧版本的配置对象转换为最新支持的版本，而无需变更集群中的任何内容。
在此版本的 kubeadm 中，支持以下 API 版本：&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print/</guid><description>&lt;!--
Print configuration
--&gt;
&lt;p&gt;打印配置&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command prints configurations for subcommands provided.
For details, see: https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories
--&gt;
&lt;p&gt;此命令打印子命令所提供的配置信息。相关细节可参阅：
&lt;a href="https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories"&gt;https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories&lt;/a&gt;&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm config print [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;&lt;!--help for print--&gt;print 命令的帮助信息。&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
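&lt;p&gt;一个简单的使用示意（子命令以所用版本的帮助输出为准）：&lt;/p&gt;

```shell
# 打印 'kubeadm init' 所用的默认配置
kubeadm config print init-defaults

# 打印 'kubeadm join' 所用的默认配置
kubeadm config print join-defaults
```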
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承而来的选项"&gt;从父命令继承而来的选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--kubeconfig string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值："/etc/kubernetes/admin.conf"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
&lt;p&gt;The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_init-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_init-defaults/</guid><description>&lt;!-- 
Print default init configuration, that can be used for 'kubeadm init' 
--&gt;
&lt;p&gt;打印用于 'kubeadm init' 的默认 init 配置。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command prints objects such as the default init configuration that is used for 'kubeadm init'.
--&gt;
&lt;p&gt;此命令打印诸如用于 'kubeadm init' 的默认 init 配置这类对象。&lt;/p&gt;
&lt;!--
Note that sensitive values like the Bootstrap Token fields are replaced with placeholder values like "abcdef.0123456789abcdef" in order to pass validation but
not perform the real computation for creating a token.
--&gt;
&lt;p&gt;请注意，Bootstrap Token 字段之类的敏感值已替换为 &amp;quot;abcdef.0123456789abcdef&amp;quot;
之类的占位符值以通过验证，但不执行创建令牌的实际计算。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_join-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_join-defaults/</guid><description>&lt;!--
Print default join configuration, that can be used for 'kubeadm join'
--&gt;
&lt;p&gt;打印默认的节点添加配置，该配置可用于 'kubeadm join' 命令。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command prints objects such as the default join configuration that is used for 'kubeadm join'.
--&gt;
&lt;p&gt;此命令打印诸如用于 'kubeadm join' 的默认 join 配置这类对象。&lt;/p&gt;
&lt;!--
Note that sensitive values like the Bootstrap Token fields are replaced with placeholder values like "abcdef.0123456789abcdef" in order to pass validation but
not perform the real computation for creating a token.
--&gt;
&lt;p&gt;请注意，诸如启动引导令牌字段之类的敏感值已替换为 &amp;quot;abcdef.0123456789abcdef&amp;quot; 之类的占位符值以通过验证，
但不执行创建令牌的实际计算。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_reset-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_reset-defaults/</guid><description>&lt;!--
Print default reset configuration, that can be used for 'kubeadm reset'
--&gt;
&lt;p&gt;打印默认的 reset 配置，该配置可用于 'kubeadm reset' 命令。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command prints objects such as the default reset configuration that is used for 'kubeadm reset'.
--&gt;
&lt;p&gt;此命令打印 'kubeadm reset' 所用的默认 reset 配置等这类对象。&lt;/p&gt;
&lt;!--
Note that sensitive values like the Bootstrap Token fields are replaced with placeholder values like "abcdef.0123456789abcdef" in order to pass validation but
not perform the real computation for creating a token.
--&gt;
&lt;p&gt;请注意，诸如启动引导令牌（Bootstrap Token）字段这类敏感值已替换为 &amp;quot;abcdef.0123456789abcdef&amp;quot;
这类占位符值用来通过合法性检查，但不执行创建令牌的实际计算。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_upgrade-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_upgrade-defaults/</guid><description>&lt;!--
Print default upgrade configuration, that can be used for 'kubeadm upgrade'
--&gt;
&lt;p&gt;打印可用于 &lt;code&gt;kubeadm upgrade&lt;/code&gt; 的默认升级配置。&lt;/p&gt;
&lt;!--
### Synopsis

This command prints objects such as the default upgrade configuration that is used for 'kubeadm upgrade'.

Note that sensitive values like the Bootstrap Token fields are replaced with placeholder values like "abcdef.0123456789abcdef" in order to pass validation but
not perform the real computation for creating a token.
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;此命令打印 &lt;code&gt;kubeadm upgrade&lt;/code&gt; 所用的默认升级配置等这类对象。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_validate/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_validate/</guid><description>&lt;!--
Read a file containing the kubeadm configuration API and report any validation problems
--&gt;
&lt;p&gt;读取包含 kubeadm 配置 API 的文件，并报告所有验证问题。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command lets you validate a kubeadm configuration API file and report any warnings and errors.
If there are no errors the exit status will be zero, otherwise it will be non-zero.
Any unmarshaling problems such as unknown API fields will trigger errors. Unknown API versions and
fields with invalid values will also trigger errors. Any other errors or warnings may be reported
depending on contents of the input file.
--&gt;
&lt;p&gt;这个命令允许你验证 kubeadm 配置 API 文件并报告所有警告和错误。
如果没有错误，退出状态将为零；否则，将为非零。
诸如未知 API 字段等任何解包问题都会触发错误。
未知的 API 版本和具有无效值的字段也会触发错误。
根据输入文件的内容，可能会报告任何其他错误或警告。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Use this command to invoke single phase of the "init" workflow
--&gt;
&lt;p&gt;使用此命令可以调用 &amp;quot;init&amp;quot; 工作流程的单个阶段：&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for phase
--&gt;
phase 子命令的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
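&lt;p&gt;例如，可以单独执行 &quot;init&quot; 工作流程中的某个阶段（以下仅为示意）：&lt;/p&gt;

```shell
# 仅执行生成证书的阶段
kubeadm init phase certs all

# 仅执行安装插件的阶段
kubeadm init phase addon all
```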
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="继承于父命令的选择项"&gt;继承于父命令的选择项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验性] '真实' 主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Install required addons for passing conformance tests
--&gt;
&lt;p&gt;安装必要的插件以通过一致性测试。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase addon &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for addon
--&gt;
addon 子命令的帮助信息。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验性] '真实' 主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Install all the addons
--&gt;
&lt;p&gt;安装所有插件（addon）。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase addon all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则将使用默认网络接口。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 6443
--&gt;
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：6443
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Port for the API Server to bind to.
--&gt;
API 服务器绑定的端口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_coredns/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Install the CoreDNS addon components via the API server. Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.
--&gt;
&lt;p&gt;通过 API 服务器安装 CoreDNS 附加组件。请注意，即使 DNS 服务器已部署，在安装 CNI
之前 DNS 服务器不会被调度执行。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase addon coredns [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_kube-proxy/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Install the kube-proxy addon components via the API server.
--&gt;
&lt;p&gt;通过 API 服务器安装 kube-proxy 附加组件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase addon kube-proxy &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则将使用默认网络接口。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 6443
--&gt;
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值: 6443
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
&lt;p&gt;Port for the API Server to bind to.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_bootstrap-token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_bootstrap-token/</guid><description>&lt;!-- 
Generates bootstrap tokens used to join a node to a cluster 
--&gt;
&lt;p&gt;生成用于将节点加入集群的引导令牌&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Bootstrap tokens are used for establishing bidirectional trust between a node joining the cluster and a control-plane node.
--&gt;
&lt;p&gt;启动引导令牌（bootstrap token）用于在即将加入集群的节点和控制平面节点之间建立双向信任。&lt;/p&gt;
&lt;!--
This command makes all the configurations required to make bootstrap tokens works and then creates an initial token.
--&gt;
&lt;p&gt;该命令使启动引导令牌（bootstrap token）所需的所有配置生效，然后创建初始令牌。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase bootstrap-token [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
# Make all the bootstrap token configurations and create an initial token, functionally equivalent to what generated by kubeadm init.
--&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# 进行所有引导令牌配置，并创建一个初始令牌，功能上与 kubeadm init 生成的令牌等效。
kubeadm init phase bootstrap-token
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Certificate generation
--&gt;
&lt;p&gt;证书生成：&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for certs
--&gt;
certs 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父指令中继承的选项"&gt;从父指令中继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] 指向 '真实' 主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate all certificates
--&gt;
&lt;p&gt;生成所有证书。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，将使用默认网络接口。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-cert-extra-sans strings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
--&gt;
用于 API 服务器服务证书的可选额外替代名称（SAN）。可以同时使用 IP 地址和 DNS 名称。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-etcd-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-etcd-client/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate the apiserver uses to access etcd, and save them into apiserver-etcd-client.crt and apiserver-etcd-client.key files.
--&gt;
&lt;p&gt;生成 apiserver 用于访问 etcd 的证书，并将其保存到 &lt;code&gt;apiserver-etcd-client.crt&lt;/code&gt;
和 &lt;code&gt;apiserver-etcd-client.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
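作为示意（以下目录路径为假设，并非文档内容），可以通过下文记录的 --cert-dir 选项将证书写入非默认目录；若系统中没有安装 kubeadm，则跳过：

```shell
# 假设性示例：使用 --cert-dir 指定非默认的 PKI 目录来生成
# apiserver-etcd-client 证书。/tmp/kubeadm-pki 仅作示意路径。
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm init phase certs apiserver-etcd-client --cert-dir /tmp/kubeadm-pki
else
  echo "kubeadm 未安装，跳过示例"
fi
```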
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs apiserver-etcd-client &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
The path where to save and store the certificates.
--&gt;
证书的存储路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-kubelet-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-kubelet-client/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate for the API server to connect to kubelet, and save them into apiserver-kubelet-client.crt and apiserver-kubelet-client.key files.
--&gt;
&lt;p&gt;生成供 API 服务器连接 kubelet 的证书，并将其保存到 &lt;code&gt;apiserver-kubelet-client.crt&lt;/code&gt;
和 &lt;code&gt;apiserver-kubelet-client.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs apiserver-kubelet-client &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
存储证书的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate for serving the Kubernetes API, and save them into apiserver.crt and apiserver.key files.
--&gt;
&lt;p&gt;生成用于服务 Kubernetes API 的证书，并将其保存到 &lt;code&gt;apiserver.crt&lt;/code&gt; 和
&lt;code&gt;apiserver.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs apiserver &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则使用默认的网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_ca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_ca/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components, and save them into ca.crt and ca.key files.
--&gt;
&lt;p&gt;生成自签名的 Kubernetes CA 以便为其他 Kubernetes 组件提供身份标识，
并将其保存到 &lt;code&gt;ca.crt&lt;/code&gt; 和 &lt;code&gt;ca.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs ca &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
证书的存储路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-ca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-ca/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the self-signed CA to provision identities for etcd, and save them into etcd/ca.crt and etcd/ca.key files.
--&gt;
&lt;p&gt;生成用于为 etcd 设置身份的自签名 CA，并将其保存到 &lt;code&gt;etcd/ca.crt&lt;/code&gt; 和 &lt;code&gt;etcd/ca.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs etcd-ca &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
证书的存储路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-healthcheck-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-healthcheck-client/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate for liveness probes to healthcheck etcd, and save them into etcd/healthcheck-client.crt and etcd/healthcheck-client.key files
--&gt;
&lt;p&gt;生成用于 etcd 健康检查的存活探针（liveness probe）的证书，并将其保存到 &lt;code&gt;etcd/healthcheck-client.crt&lt;/code&gt;
和 &lt;code&gt;etcd/healthcheck-client.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs etcd-healthcheck-client &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
证书存储的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-peer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-peer/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate for etcd nodes to communicate with each other, and save them into etcd/peer.crt and etcd/peer.key files.
--&gt;
&lt;p&gt;生成 etcd 节点相互通信的证书，并将其保存到 &lt;code&gt;etcd/peer.crt&lt;/code&gt; 和
&lt;code&gt;etcd/peer.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
Default SANs are localhost, 127.0.0.1, 127.0.0.1, ::1
--&gt;
&lt;p&gt;默认 SAN 为 localhost、127.0.0.1、::1&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs etcd-peer &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
The path where to save and store the certificates.
--&gt;
保存和存储证书的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-server/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate for serving etcd, and save them into etcd/server.crt and etcd/server.key files.
--&gt;
&lt;p&gt;生成用于提供 etcd 服务的证书，并将其保存到 &lt;code&gt;etcd/server.crt&lt;/code&gt; 和
&lt;code&gt;etcd/server.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
Default SANs are localhost, 127.0.0.1, 127.0.0.1, ::1
--&gt;
&lt;p&gt;默认 SAN 为 localhost、127.0.0.1、::1&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 将跳过生成步骤，使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs etcd-server &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
保存和存储证书的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Don't apply any changes; just output what would be done.
--&gt;
不做任何更改；只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for etcd-server
--&gt;
etcd-server 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--kubernetes-version string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "stable-1"
--&gt;
--kubernetes-version string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："stable-1"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Choose a specific Kubernetes version for the control plane.
--&gt;
为控制平面指定特定的 Kubernetes 版本。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
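结合上表中记录的选项，下面是一个假设性的组合调用示例（目录路径仅作示意），利用 --dry-run 预览将要执行的操作而不写入任何文件；若系统中没有安装 kubeadm，则跳过：

```shell
# 假设性示例：组合 --cert-dir 与 --dry-run，预览 etcd-server
# 证书的生成过程而不做任何更改。/tmp/kubeadm-pki 仅作示意路径。
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm init phase certs etcd-server --cert-dir /tmp/kubeadm-pki --dry-run
else
  echo "kubeadm 未安装，跳过示例"
fi
```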
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] 指向 '真实' 主机根文件系统的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-ca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-ca/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the self-signed CA to provision identities for front proxy, and save them into front-proxy-ca.cert and front-proxy-ca.key files.
--&gt;
&lt;p&gt;生成自签名 CA 来提供前端代理的身份，并将其保存到 &lt;code&gt;front-proxy-ca.cert&lt;/code&gt; 和
&lt;code&gt;front-proxy-ca.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，kubeadm 将跳过生成步骤并将使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs front-proxy-ca &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
存储证书的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-client/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificate for the front proxy client, and save them into front-proxy-client.crt and front-proxy-client.key files.

If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;为前端代理客户端生成证书，并将其保存到 &lt;code&gt;front-proxy-client.crt&lt;/code&gt; 和
&lt;code&gt;front-proxy-client.key&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;p&gt;如果两个文件都已存在，kubeadm 将跳过生成步骤并将使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs front-proxy-client &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
存储证书的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_sa/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_sa/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the private key for signing service account tokens along with its public key, and save them into sa.key and sa.pub files.
--&gt;
&lt;p&gt;生成用来签署服务账号令牌的私钥及其公钥，并将其保存到 &lt;code&gt;sa.key&lt;/code&gt; 和
&lt;code&gt;sa.pub&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
If both files already exist, kubeadm skips the generation step and existing files will be used.
--&gt;
&lt;p&gt;如果两个文件都已存在，则 kubeadm 会跳过生成步骤，而将使用现有文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase certs sa &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
保存和存储证书的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate all static Pod manifest files necessary to establish the control plane
--&gt;
&lt;p&gt;生成建立控制平面所需的所有静态 Pod 清单文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
help for control-plane
--&gt;
control-plane 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验性] 指向'真实'主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate all static Pod manifest files
--&gt;
&lt;p&gt;生成所有的静态 Pod 清单文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples

```
# Generates all static Pod manifest files for control plane components,
# functionally equivalent to what is generated by kubeadm init.
kubeadm init phase control-plane all

# Generates all static Pod manifest files using options read from a configuration file.
kubeadm init phase control-plane all --config config.yaml
```
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为控制平面组件生成静态 Pod 清单文件，其功能等效于 kubeadm init 生成的文件。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane all
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用从某配置文件中读取的选项生成静态 Pod 清单文件。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane all --config config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，将使用默认的网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_apiserver/</guid><description>&lt;!--
### Synopsis 
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generates the kube-apiserver static Pod manifest 
--&gt;
&lt;p&gt;生成 kube-apiserver 静态 Pod 清单。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane apiserver &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，将使用默认网络接口。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 6443
--&gt;
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：6443
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Port for the API Server to bind to.
--&gt;
API 服务器要绑定的端口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_controller-manager/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generates the kube-controller-manager static Pod manifest
--&gt;
&lt;p&gt;生成 kube-controller-manager 静态 Pod 清单。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane controller-manager &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: "/etc/kubernetes/pki"--&gt;默认值："/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
存储证书的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--controller-manager-extra-args &amp;lt;comma-separated 'key=value' pairs&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
A set of extra flags to pass to the Controller Manager or override default ones in form of &amp;lt;flagname&amp;gt;=&amp;lt;value&amp;gt;
--&gt;
一组 &amp;lt;flagname&amp;gt;=&amp;lt;value&amp;gt; 形式的额外参数，传递给控制器管理器（Controller Manager）
或者覆盖其默认配置值。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_scheduler/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generates the kube-scheduler static Pod manifest
--&gt;
&lt;p&gt;生成 kube-scheduler 静态 Pod 清单。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase control-plane scheduler &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
存储证书的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate static Pod manifest file for local etcd
--&gt;
&lt;p&gt;为本地 etcd 生成静态 Pod 清单文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase etcd &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for etcd
--&gt;
etcd 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="继承于父命令的选项"&gt;继承于父命令的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验性] 指向'真实'主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd_local/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd_local/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the static Pod manifest file for a local, single-node local etcd instance
--&gt;
&lt;p&gt;为本地单节点 etcd 实例生成静态 Pod 清单文件。&lt;/p&gt;
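&lt;!--
The generated manifest is written as a static Pod file that the kubelet picks up (illustrative only; assumes the default manifests directory):
--&gt;
&lt;p&gt;所生成的清单会被写为静态 Pod 文件，由 kubelet 负责拉起（仅作示意，假设使用默认的清单目录）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;cat /etc/kubernetes/manifests/etcd.yaml
&lt;/code&gt;&lt;/pre&gt;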
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase etcd &lt;span style="color:#a2f"&gt;local&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Generates the static Pod manifest file for etcd, functionally
# equivalent to what is generated by kubeadm init.
kubeadm init phase etcd local

# Generates the static Pod manifest file for etcd using options
# read from a configuration file.
kubeadm init phase etcd local --config config.yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为 etcd 生成静态 Pod 清单文件，其功能等效于 kubeadm init 生成的文件。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase etcd &lt;span style="color:#a2f"&gt;local&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用从配置文件读取的选项为 etcd 生成静态 Pod 清单文件。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase etcd &lt;span style="color:#a2f"&gt;local&lt;/span&gt; --config config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
存储证书的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!-- 
Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
--&gt;
&lt;p&gt;生成建立控制平面所需的所有 kubeconfig 文件以及管理员 kubeconfig 文件。&lt;/p&gt;
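&lt;!--
After this phase completes, the generated kubeconfig files can be listed (illustrative only; assumes the default Kubernetes directory):
--&gt;
&lt;p&gt;此阶段完成后，可以列出所生成的 kubeconfig 文件（仅作示意，假设使用默认的 Kubernetes 目录）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;ls /etc/kubernetes/*.conf
&lt;/code&gt;&lt;/pre&gt;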
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubeconfig &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for kubeconfig
--&gt;
kubeconfig 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!-- 
### Options inherited from parent commands 
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验性] 指向'真实'主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_admin/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_admin/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the kubeconfig file for the admin and for kubeadm itself, and save it to admin.conf file.
--&gt;
&lt;p&gt;为管理员和 kubeadm 本身生成 kubeconfig 文件，并将其保存到 &lt;code&gt;admin.conf&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubeconfig admin &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则使用默认的网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_all/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate all kubeconfig files
--&gt;
&lt;p&gt;生成所有 kubeconfig 文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubeconfig all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果没有设置，将使用默认的网络接口。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;default: 6443
--&gt;
--apiserver-bind-port int32&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：6443
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Port for the API Server to bind to.
--&gt;
API 服务器要绑定的端口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_controller-manager/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the kubeconfig file for the controller manager to use and save it to controller-manager.conf file
--&gt;
&lt;p&gt;生成控制器管理器要使用的 kubeconfig 文件，并保存到 controller-manager.conf 文件中。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubeconfig controller-manager &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则使用默认的网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_kubelet/</guid><description>&lt;!-- 
Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
--&gt;
&lt;p&gt;为 kubelet 生成一个 kubeconfig 文件，&lt;strong&gt;仅仅&lt;/strong&gt;用于集群引导目的。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the kubeconfig file for the kubelet to use and save it to kubelet.conf file.
--&gt;
&lt;p&gt;生成 kubelet 要使用的 kubeconfig 文件，并将其保存到 kubelet.conf 文件。&lt;/p&gt;
&lt;!--
Please note that this should *only* be used for cluster bootstrapping purposes. After your control plane is up, you should request all kubelet credentials from the CSR API.
--&gt;
&lt;p&gt;请注意，该操作目的是&lt;strong&gt;仅&lt;/strong&gt;用于引导集群。在控制平面启动之后，应该从 CSR API 请求所有 kubelet 凭据。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_scheduler/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the kubeconfig file for the scheduler to use and save it to scheduler.conf file.
--&gt;
&lt;p&gt;生成调度器（scheduler）要使用的 kubeconfig 文件，并保存到
&lt;code&gt;scheduler.conf&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubeconfig scheduler &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则使用默认的网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_super-admin/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_super-admin/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate a kubeconfig file for the super-admin, and save it to super-admin.conf file.
--&gt;
&lt;p&gt;为 super-admin 生成一个 kubeconfig 文件，并将其保存到
&lt;code&gt;super-admin.conf&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubeconfig super-admin &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
API 服务器所公布的其正在监听的 IP 地址。如果未设置，则使用默认的网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Updates settings relevant to the kubelet after TLS bootstrap
--&gt;
&lt;p&gt;TLS 引导后更新与 kubelet 相关的设置。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubelet-finalize &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!-- 
```
# Updates settings relevant to the kubelet after TLS bootstrap
kubeadm init phase kubelet-finalize all --config
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 在 TLS 引导后更新与 kubelet 相关的设置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubelet-finalize all --config
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for kubelet-finalize
--&gt;
kubelet-finalize 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run all kubelet-finalize phases
--&gt;
&lt;p&gt;运行 kubelet-finalize 的所有阶段。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubelet-finalize all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!-- 
```
 # Updates settings relevant to the kubelet after TLS bootstrap
 kubeadm init phase kubelet-finalize all --config
```
--&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# 在 TLS 引导后更新与 kubelet 相关的设置
kubeadm init phase kubelet-finalize all --config
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;!-- &lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt; --&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!-- &lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;The path where to save and store the certificates.&lt;/td&gt; --&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;保存和存储证书的路径。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_enable-client-cert-rotation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_enable-client-cert-rotation/</guid><description>&lt;!--
### Synopsis

Enable kubelet client certificate rotation
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;启用 kubelet 客户端证书轮换&lt;/p&gt;
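&lt;p&gt;例如（示意性用法；证书目录此处显式写出默认值，可通过本页的 &lt;code&gt;--cert-dir&lt;/code&gt; 选项修改）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# 启用 kubelet 客户端证书轮换
kubeadm init phase kubelet-finalize enable-client-cert-rotation --cert-dir /etc/kubernetes/pki
&lt;/code&gt;&lt;/pre&gt;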
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubelet-finalize enable-client-cert-rotation &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!-- Default: --&gt;默认值："/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path where to save and store the certificates.
--&gt;
保存和存储证书的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Don't apply any changes; just output what would be done.
--&gt;
不做任何更改；只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-start/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-start/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Write a file with KubeletConfiguration and an environment file with node specific kubelet settings, and then (re)start kubelet.
--&gt;
&lt;p&gt;写入一个包含 KubeletConfiguration 的文件和一个包含节点特定 kubelet
设置的环境文件，然后（重新）启动 kubelet。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubelet-start &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Writes a dynamic environment file with kubelet flags from a InitConfiguration file.
kubeadm init phase kubelet-start --config config.yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将来自 InitConfiguration 文件中的 kubelet 参数写入一个动态环境文件。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase kubelet-start --config config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_mark-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_mark-control-plane/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Mark a node as a control-plane
--&gt;
&lt;p&gt;标记节点为控制平面节点。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase mark-control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples

```
# Applies control-plane label and taint to the current node, functionally equivalent to what executed by kubeadm init.
kubeadm init phase mark-control-plane --config config.yaml

# Applies control-plane label and taint to a specific node
kubeadm init phase mark-control-plane --node-name myNode
```
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将控制平面标签和污点应用于当前节点，其功能等效于 kubeadm init 执行的操作&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase mark-control-plane --config config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将控制平面标签和污点应用于特定节点&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase mark-control-plane --node-name myNode
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_preflight/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run pre-flight checks for kubeadm init.
--&gt;
&lt;p&gt;运行 kubeadm init 前的预检。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase preflight &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Run pre-flight checks for kubeadm init using a config file.
kubeadm init phase preflight --config kubeadm-config.yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用配置文件对 kubeadm init 进行预检&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase preflight --config kubeadm-config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!-- 
### Options 
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_show-join-command/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_show-join-command/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Show the join command for control-plane and worker node
--&gt;
&lt;p&gt;显示针对控制平面和工作节点的 join 命令。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase show-join-command &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for show-join-command
--&gt;
show-join-command 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] 指向“真实”主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-certs/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload control plane certificates to the kubeadm-certs Secret
--&gt;
&lt;p&gt;将控制平面证书上传到 kubeadm-certs Secret。&lt;/p&gt;
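&lt;p&gt;例如（示意性用法；&lt;code&gt;--upload-certs&lt;/code&gt; 为上游 kubeadm 文档中的标志，此处仅作示意。如未通过本页的 &lt;code&gt;--certificate-key&lt;/code&gt; 指定密钥，kubeadm 通常会自动生成一个新的证书密钥并输出）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# 将控制平面证书上传到 kubeadm-certs Secret
kubeadm init phase upload-certs --upload-certs
&lt;/code&gt;&lt;/pre&gt;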
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-certs &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
The certificate key is a hex encoded string that is an AES key of size 32 bytes.
--&gt;
用于加密 kubeadm-certs Secret 中的控制平面证书的密钥。
证书密钥是十六进制编码的字符串，是大小为 32 字节的 AES 密钥。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!-- 
Upload the kubeadm and kubelet configuration to a ConfigMap
--&gt;
&lt;p&gt;上传 kubeadm 和 kubelet 配置到 ConfigMap 中。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-config &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for upload-config
--&gt;
upload-config 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令中继承的选项"&gt;从父命令中继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] 指向“真实”主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_all/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload all configuration to a config map
--&gt;
&lt;p&gt;将所有配置上传到 ConfigMap&lt;/p&gt;
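&lt;p&gt;例如（示意性用法，其中 &lt;code&gt;kubeadm.yaml&lt;/code&gt; 为假设的配置文件名）：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# 使用指定的 kubeadm 配置文件上传全部配置
kubeadm init phase upload-config all --config kubeadm.yaml
&lt;/code&gt;&lt;/pre&gt;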
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-config all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cri-socket string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--&gt;
要连接的 CRI 套接字的路径。如果该值为空，kubeadm 将尝试自动检测；
仅当你安装了多个 CRI 或使用非标准的 CRI 套接字时才应使用此选项。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubeadm/</guid><description>&lt;!-- 
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload the kubeadm ClusterConfiguration to a ConfigMap called kubeadm-config in the kube-system namespace. This enables correct configuration of system components and a seamless user experience when upgrading.
--&gt;
&lt;p&gt;将 kubeadm ClusterConfiguration 上传到 kube-system 命名空间中名为 kubeadm-config 的 ConfigMap 中。
这样就可以正确配置系统组件，并在升级时提供无缝的用户体验。&lt;/p&gt;
&lt;!--
Alternatively, you can use kubeadm config.
--&gt;
&lt;p&gt;或者，你也可以使用 &lt;code&gt;kubeadm config&lt;/code&gt; 命令。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-config kubeadm &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
# upload the configuration of your cluster
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 上传集群配置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-config kubeadm --config&lt;span style="color:#666"&gt;=&lt;/span&gt;myConfig.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubelet/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload the kubelet configuration extracted from the kubeadm InitConfiguration object to a kubelet-config ConfigMap in the cluster
--&gt;
&lt;p&gt;将从 kubeadm InitConfiguration 对象提取的 kubelet 配置上传到集群中的
&lt;code&gt;kubelet-config&lt;/code&gt; ConfigMap。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-config kubelet &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Upload the kubelet configuration from the kubeadm Config file to a ConfigMap in the cluster.
kubeadm init phase upload-config kubelet --config kubeadm.yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将 kubelet 配置从 kubeadm 配置文件上传到集群中的 ConfigMap。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase upload-config kubelet --config kubeadm.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_wait-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_wait-control-plane/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Wait for the control plane to start

```
kubeadm init phase wait-control-plane [flags]
```
--&gt;
&lt;p&gt;等待控制平面启动。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm init phase wait-control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for wait-control-plane
--&gt;
wait-control-plane 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
指向“真实”主机根文件系统的路径。设置此参数将导致 kubeadm 对所提供的路径执行 chroot 操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Use this command to invoke single phase of the "join" workflow
--&gt;
&lt;p&gt;使用此命令来调用 &lt;code&gt;join&lt;/code&gt; 工作流程的某个阶段。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
help for phase
--&gt;
phase 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令中继承的选项"&gt;从父命令中继承的选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] 指向“真实”宿主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Join a machine as a control plane instance
--&gt;
&lt;p&gt;添加作为控制平面实例的机器。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-join &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Joins a machine as a control plane instance
kubeadm join phase control-plane-join all
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 将机器作为控制平面实例加入&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-join all
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for control-plane-join
--&gt;
control-plane-join 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Join a machine as a control plane instance
--&gt;
&lt;p&gt;添加作为控制平面实例的机器。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-join all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
对于将要托管新的控制平面实例的节点，指定 API 服务器将公布的其正在侦听的 IP 地址。
如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_etcd/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Add a new local etcd member
--&gt;
&lt;p&gt;添加新的本地 etcd 成员。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-join etcd &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
对于将要托管新的控制平面实例的节点，指定 API 服务器将公布的其正在侦听的 IP 地址。
如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_mark-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_mark-control-plane/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Mark a node as a control-plane
--&gt;
&lt;p&gt;将节点标记为控制平面节点。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-join mark-control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--control-plane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Create a new control plane instance on this node
--&gt;
在此节点上创建一个新的控制平面实例。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Don't apply any changes; just output what would be done.
--&gt;
不做任何更改；只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Prepare the machine for serving a control plane
--&gt;
&lt;p&gt;准备为控制平面服务的机器。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Prepares the machine for serving a control plane
kubeadm join phase control-plane-prepare all
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 准备为控制平面服务的机器&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare all
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for control-plane-prepare
--&gt;
control-plane-prepare 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Prepare the machine for serving a control plane
--&gt;
&lt;p&gt;准备为控制平面服务的机器。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare all &lt;span style="color:#666"&gt;[&lt;/span&gt;api-server-endpoint&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
&lt;p&gt;
对于将要托管新的控制平面实例的节点，指定 API 服务器将公布的其正在侦听的 IP 地址。
如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_certs/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the certificates for the new control plane components
--&gt;
&lt;p&gt;为新的控制平面组件生成证书。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare certs &lt;span style="color:#666"&gt;[&lt;/span&gt;api-server-endpoint&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
对于将要托管新的控制平面实例的节点，指定 API 服务器将公布的其正在侦听的 IP 地址。
如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_control-plane/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the manifests for the new control plane components
--&gt;
&lt;p&gt;为新的控制平面组件生成清单（manifest）。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
对于将要托管新的控制平面实例的节点，指定 API 服务器将公布的其正在侦听的 IP 地址。如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_download-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_download-certs/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Download certificates shared among control-plane nodes from the kubeadm-certs Secret
--&gt;
&lt;p&gt;从 kubeadm-certs Secret 下载控制平面节点之间共享的证书。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare download-certs &lt;span style="color:#666"&gt;[&lt;/span&gt;api-server-endpoint&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.
--&gt;
使用此密钥可以解密由 init 上传的证书 Secret。
证书密钥为一个十六进制编码的字符串，是大小为 32 字节的 AES 密钥。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_kubeconfig/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Generate the kubeconfig for the new control plane components
--&gt;
&lt;p&gt;为新的控制平面组件生成 kubeconfig。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase control-plane-prepare kubeconfig &lt;span style="color:#666"&gt;[&lt;/span&gt;api-server-endpoint&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.
--&gt;
使用此密钥可以解密由 init 上传的证书 Secret。
证书密钥为一个十六进制编码的字符串，它是大小为 32 字节的 AES 密钥。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-start/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-start/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Write a file with KubeletConfiguration and an environment file with node specific kubelet settings, and then (re)start kubelet.
--&gt;
&lt;p&gt;生成一个包含 KubeletConfiguration 的文件和一个包含特定于节点的 kubelet
配置的环境文件，然后（重新）启动 kubelet。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase kubelet-start &lt;span style="color:#666"&gt;[&lt;/span&gt;api-server-endpoint&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cri-socket string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--&gt;
要连接的 CRI 套接字的路径。如果为空，则 kubeadm 将尝试自动检测此值；
仅当安装了多个 CRI 或存在非标准 CRI 套接字时，才使用此选项。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-wait-bootstrap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-wait-bootstrap/</guid><description>&lt;!--
### Synopsis

Wait for the kubelet to bootstrap itself

```
kubeadm join phase kubelet-wait-bootstrap [flags]
```
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;等待 kubelet 完成自身的引导初始化。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase kubelet-wait-bootstrap &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cri-socket string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;
&lt;!--
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--&gt;
要连接的 CRI 套接字的路径。如果为空，kubeadm 将尝试自动检测此值；
仅当安装了多个 CRI 或具有非标准 CRI 套接字时，才使用此选项。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_preflight/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run pre-flight checks for kubeadm join.
--&gt;
&lt;p&gt;运行 kubeadm join 的预检（pre-flight checks）。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase preflight &lt;span style="color:#666"&gt;[&lt;/span&gt;api-server-endpoint&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;!--
```
# Run join pre-flight checks using a config file.
kubeadm join phase preflight --config kubeadm-config.yaml
```
--&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用配置文件运行 kubeadm join 命令添加节点前检查。&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase preflight --config kubeadm-config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--&gt;
对于将要托管新的控制平面实例的节点，指定 API 服务器将公布的其正在侦听的 IP 地址。
如果未设置，则使用默认网络接口。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_wait-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_wait-control-plane/</guid><description>&lt;!--
### Synopsis

Wait for the control plane to start
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;等待控制平面启动。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm join phase wait-control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for wait-control-plane
--&gt;
wait-control-plane 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] 指向“真实”主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_kubeconfig/kubeadm_kubeconfig_user/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_kubeconfig/kubeadm_kubeconfig_user/</guid><description>&lt;!--
Output a kubeconfig file for an additional user

### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;为其他用户输出一个 kubeconfig 文件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm kubeconfig user &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Examples

```
# Output a kubeconfig file for an additional user named foo
kubeadm kubeconfig user --client-name=foo

# Output a kubeconfig file for an additional user named foo using a kubeadm config file bar
kubeadm kubeconfig user --client-name=foo --config=bar
```
--&gt;
&lt;h3 id="示例"&gt;示例&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 为一个名为 foo 的其他用户输出 kubeconfig 文件&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm kubeconfig user --client-name&lt;span style="color:#666"&gt;=&lt;/span&gt;foo
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 使用 kubeadm 配置文件 bar 为另一个名为 foo 的用户输出 kubeconfig 文件&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm kubeconfig user --client-name&lt;span style="color:#666"&gt;=&lt;/span&gt;foo --config&lt;span style="color:#666"&gt;=&lt;/span&gt;bar
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--client-name string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The name of user. It will be used as the CN if client certificates are created
--&gt;
用户名。如果生成客户端证书，则用作其 CN。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Use this command to invoke single phase of the "reset" workflow
--&gt;
&lt;p&gt;使用此命令来调用 &lt;code&gt;reset&lt;/code&gt; 工作流程的某个阶段：&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm reset phase &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for phase
--&gt;
phase 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
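&lt;!--
As an illustration, the reset phases documented on this page can be invoked individually. The phase names below are those listed in this reference; run them on the node being reset.
--&gt;
&lt;p&gt;作为示意，本页所列的 &lt;code&gt;reset&lt;/code&gt; 各个阶段可以单独调用（以下阶段名称均出自本参考页，需在被重置的节点上执行）：&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 先运行重置前的预检&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm reset phase preflight
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# 再单独执行清理节点阶段&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm reset phase cleanup-node
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;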
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
指向“真实”主机根文件系统的路径。这会使 kubeadm 对所提供的路径执行 chroot。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_cleanup-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_cleanup-node/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run cleanup node.
--&gt;
&lt;p&gt;执行 cleanup node（清理节点）操作。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm reset phase cleanup-node &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"
--&gt;
--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/pki"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the directory where the certificates are stored. If specified, clean this directory.
--&gt;
存储证书的目录路径。如果指定了此目录，则会清理该目录。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cleanup-tmp-dir&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Cleanup the &amp;quot;/etc/kubernetes/tmp&amp;quot; directory
--&gt;
清理 &amp;quot;/etc/kubernetes/tmp&amp;quot; 目录。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cri-socket string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--&gt;
要连接的 CRI 套接字的路径。如果为空，则 kubeadm 将尝试自动检测此值；
仅当安装了多个 CRI 或具有非标准 CRI 套接字时，才使用此选项。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_preflight/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run pre-flight checks for kubeadm reset.
--&gt;
&lt;p&gt;为 kubeadm reset（重置）运行启动前检查。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm reset phase preflight &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Don't apply any changes; just output what would be done.
--&gt;
不做任何更改；只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-f, --force&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Reset the node without prompting for confirmation.
--&gt;
在不提示确认的情况下重置节点。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for preflight
--&gt;
preflight 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_remove-etcd-member/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_remove-etcd-member/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Remove a local etcd member for a control plane node.
--&gt;
&lt;p&gt;移除控制平面节点的本地 etcd 成员。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm reset phase remove-etcd-member &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Don't apply any changes; just output what would be done.
--&gt;
不做任何更改；只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for remove-etcd-member
--&gt;
remove-etcd-member 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--kubeconfig string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default:--&gt;默认值："/etc/kubernetes/admin.conf"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--&gt;
与集群通信时使用的 kubeconfig 文件。如果未设置该标志，
则可以在默认位置中查找现有的 kubeconfig 文件。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_create/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_create/</guid><description>&lt;!--
Create bootstrap tokens on the server
--&gt;
&lt;p&gt;在服务器上创建引导令牌。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command will create a bootstrap token for you.
You can specify the usages for this token, the "time to live" and an optional human friendly description.

The [token] is the actual token to write.
This should be a securely generated random token of the form "[a-z0-9]{6}.[a-z0-9]{16}".
If no [token] is given, kubeadm will generate a random token instead.
--&gt;
&lt;p&gt;这个命令将为你创建一个引导令牌。
你可以设置此令牌的用途，&amp;quot;有效时间&amp;quot; 和可选的人性化的描述。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_delete/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_delete/</guid><description>&lt;!--
Delete bootstrap tokens on the server
--&gt;
&lt;p&gt;删除服务器上的引导令牌。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command will delete a list of bootstrap tokens for you.

The [token-value] is the full Token of the form "[a-z0-9]{6}.[a-z0-9]{16}" or the
Token ID of the form "[a-z0-9]{6}" to delete.
--&gt;
&lt;p&gt;这个命令将为你删除指定的引导令牌列表。&lt;/p&gt;
&lt;p&gt;&lt;code&gt;[token-value]&lt;/code&gt; 是要删除的 &amp;quot;[a-z0-9]{6}.[a-z0-9]{16}&amp;quot; 形式的完整令牌或者是 &amp;quot;[a-z0-9]{6}&amp;quot; 形式的令牌 ID。&lt;/p&gt;
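下面是一个仅作格式说明的假设性示例（其中的令牌取值为虚构），演示传给删除命令的两种合法形式：

```shell
# 假设性示例：校验要删除的令牌取值格式（令牌本身是虚构的）。
FULL_TOKEN="abcdef.0123456789abcdef"   # 完整令牌，形如 "[a-z0-9]{6}.[a-z0-9]{16}"
TOKEN_ID="abcdef"                      # 仅令牌 ID，形如 "[a-z0-9]{6}"
echo "$FULL_TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "full token ok"
echo "$TOKEN_ID" | grep -Eq '^[a-z0-9]{6}$' && echo "token id ok"
# 实际删除时（示意）：kubeadm token delete "$FULL_TOKEN"
```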
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm token delete [token-value] ...
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for delete
--&gt;
delete 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_generate/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_generate/</guid><description>&lt;!-- 
Generate and print a bootstrap token, but do not create it on the server
--&gt;
&lt;p&gt;生成并打印一个引导令牌，但不要在服务器上创建它。&lt;/p&gt;
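作为示意，下面这个假设性脚本（依赖 /dev/urandom、tr 和 head，并非 kubeadm 的实现）自行生成一个符合 "[a-z0-9]{6}.[a-z0-9]{16}" 格式的令牌，其格式与本命令的输出相同：

```shell
# 假设性示意脚本：按 "[a-z0-9]{6}.[a-z0-9]{16}" 格式生成随机令牌。
# 从 /dev/urandom 中筛出小写字母与数字，截取所需长度。
rand_part() { LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }
TOKEN="$(rand_part 6).$(rand_part 16)"
echo "$TOKEN"
```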
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command will print out a randomly-generated bootstrap token that can be used with
the "init" and "join" commands.

You don't have to use this command in order to generate a token. You can do so
yourself as long as it is in the format "[a-z0-9]{6}.[a-z0-9]{16}". This
command is provided for convenience to generate tokens in the given format.

You can also use "kubeadm init" without specifying a token and it will
generate and print one for you.
--&gt;
&lt;p&gt;此命令将打印一个随机生成的可以被 &amp;quot;init&amp;quot; 和 &amp;quot;join&amp;quot; 命令使用的引导令牌。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_list/</guid><description>&lt;!--
List bootstrap tokens on the server
--&gt;
&lt;p&gt;列出服务器上的引导令牌。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
This command will list all bootstrap tokens for you.
--&gt;
&lt;p&gt;此命令将为你列出所有的引导令牌。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm token list [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!-- --allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true --&gt;
--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：true
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!-- 
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--&gt;
如果设置为 true，则在模板中缺少字段或哈希表的键时忽略模板中的任何错误。
仅适用于 golang 和 jsonpath 输出格式。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade your Kubernetes cluster to the specified version
--&gt;
&lt;p&gt;将 Kubernetes 集群升级到指定版本。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply &lt;span style="color:#666"&gt;[&lt;/span&gt;version&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The "apply [version]" command executes the following phases:
```
preflight Run preflight checks before upgrade
control-plane Upgrade the control plane
upload-config Upload the kubeadm and kubelet configurations to ConfigMaps
 /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap
 /kubelet Upload the kubelet configuration to a ConfigMap
kubelet-config Upgrade the kubelet configuration for this node
bootstrap-token Configures bootstrap token and cluster-info RBAC rules
addon Upgrade the default kubeadm addons
 /coredns Upgrade the CoreDNS addon
 /kube-proxy Upgrade the kube-proxy addon
post-upgrade Run post upgrade tasks
```
--&gt;
&lt;p&gt;&lt;code&gt;apply [version]&lt;/code&gt; 命令执行以下阶段：&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Use this command to invoke single phase of the "apply" workflow
--&gt;
&lt;p&gt;使用此命令来调用 &amp;quot;apply&amp;quot; 工作流的单个阶段。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for phase
--&gt;
phase 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
“真实”主机根文件系统的路径。设置此参数将导致 kubeadm 切换到所提供的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the default kubeadm addons
--&gt;
&lt;p&gt;升级默认的 kubeadm 插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase addon &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for addon
--&gt;
addon 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
到“真实”主机根文件系统的路径。设置此参数将导致 kubeadm 切换到所提供的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade all the addons
--&gt;
&lt;p&gt;升级所有插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase addon all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output what actions would be performed.
--&gt;
不更改任何状态，只输出要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for all
--&gt;
all 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_coredns/</guid><description>&lt;!--
### Synopsis

Upgrade the CoreDNS addon
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;升级 CoreDNS 插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase addon coredns &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output what actions would be performed.
--&gt;
不更改任何状态，只输出要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for coredns
--&gt;
coredns 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_kube-proxy/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the kube-proxy addon
--&gt;
&lt;p&gt;升级 kube-proxy 插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase addon kube-proxy &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for kube-proxy
--&gt;
kube-proxy 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_bootstrap-token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_bootstrap-token/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Configures bootstrap token and cluster-info RBAC rules
--&gt;
&lt;p&gt;配置启动引导令牌和 cluster-info 的 RBAC 规则。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase bootstrap-token &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output what actions would be performed.
--&gt;
不更改任何状态，只输出要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for bootstrap-token
--&gt;
bootstrap-token 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_control-plane/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the control plane
--&gt;
&lt;p&gt;升级控制平面。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--certificate-renewal&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true
--&gt;
--certificate-renewal&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值：true
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Perform the renewal of certificates used by component changed during upgrades.
--&gt;
对升级期间发生变更的组件所使用的证书执行更新。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output what actions would be performed.
--&gt;
不更改任何状态，只输出要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_kubelet-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_kubelet-config/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the kubelet configuration for this node by downloading it from the kubelet-config ConfigMap stored in the cluster
--&gt;
&lt;p&gt;通过从集群中存储的 &lt;code&gt;kubelet-config&lt;/code&gt; ConfigMap 下载配置，升级本节点的 kubelet 配置。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase kubelet-config &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_post-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_post-upgrade/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run post upgrade tasks
--&gt;
&lt;p&gt;运行升级后的任务。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase post-upgrade &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for post-upgrade
--&gt;
post-upgrade 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_preflight/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run preflight checks before upgrade
--&gt;
&lt;p&gt;执行升级前的预检。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase preflight &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-experimental-upgrades&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to an alpha/beta/release candidate versions of Kubernetes.
--&gt;
显示 Kubernetes 的不稳定版本作为升级替代方案，并允许升级到 Kubernetes
的 Alpha、Beta 或 RC 版本。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-release-candidate-upgrades&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Show release candidate versions of Kubernetes as an upgrade alternative and allow upgrading to a release candidate versions of Kubernetes.
--&gt;
显示 Kubernetes 的发行候选版本作为升级选择，并允许升级到 Kubernetes 的 RC 版本。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config/</guid><description>&lt;!--
### Synopsis

Upload the kubeadm and kubelet configurations to ConfigMaps
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;将 kubeadm 和 kubelet 配置上传到 ConfigMap。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase upload-config &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for upload-config
--&gt;
upload-config 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
“真实”主机根文件系统的路径。设置此参数将导致 kubeadm 切换到所提供的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload all the configurations to ConfigMaps
--&gt;
&lt;p&gt;将所有配置上传到 ConfigMap。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase upload-config all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for all
--&gt;
all 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubeadm/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload the kubeadm ClusterConfiguration to a ConfigMap
--&gt;
&lt;p&gt;将 kubeadm ClusterConfiguration 上传到 ConfigMap。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase upload-config kubeadm &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubelet/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upload the kubelet configuration to a ConfigMap
--&gt;
&lt;p&gt;将 kubelet 配置上传到 ConfigMap。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade apply phase upload-config kubelet &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_diff/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_diff/</guid><description>&lt;!--
### Synopsis

Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;p&gt;显示哪些差异将被应用于现有的静态 Pod 资源清单。另请参考：kubeadm upgrade apply --dry-run&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade diff &lt;span style="color:#666"&gt;[&lt;/span&gt;version&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options

 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;

&lt;tr&gt;
&lt;td colspan="2"&gt;--api-server-manifest string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/manifests/kube-apiserver.yaml"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;path to API server manifest&lt;/td&gt;
&lt;/tr&gt;
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--api-server-manifest string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/manifests/kube-apiserver.yaml"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;API 服务器清单的路径。&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;!--
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to a kubeadm configuration file.&lt;/td&gt;
&lt;/tr&gt;
--&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;kubeadm 配置文件的路径。&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade commands for a node in the cluster
--&gt;
&lt;p&gt;升级集群中某个节点的命令。&lt;/p&gt;
&lt;!--
The "node" command executes the following phases:
--&gt;
&lt;p&gt;&amp;quot;node&amp;quot; 命令执行以下阶段：&lt;/p&gt;
&lt;!--
```
preflight Run upgrade node pre-flight checks
control-plane Upgrade the control plane instance deployed on this node, if any
kubelet-config Upgrade the kubelet configuration for this node
addon Upgrade the default kubeadm addons
 /coredns Upgrade the CoreDNS addon
 /kube-proxy Upgrade the kube-proxy addon
post-upgrade Run post upgrade tasks
```
--&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;preflight 执行节点升级前检查
control-plane 如果存在的话，升级部署在该节点上的控制平面实例
kubelet-config 升级该节点上的 kubelet 配置
addon 升级默认的 kubeadm 插件
 /coredns 升级 CoreDNS 插件
 /kube-proxy 升级 kube-proxy 插件
post-upgrade 运行升级后的任务
&lt;/code&gt;&lt;/pre&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-renewal&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;!--Default: true--&gt;默认值：true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Perform the renewal of certificates used by component changed during upgrades.
--&gt;
对升级期间变化的组件所使用的证书执行续订。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Use this command to invoke single phase of the "node" workflow
--&gt;
&lt;p&gt;使用此命令调用 &amp;quot;node&amp;quot; 工作流的某个阶段。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for phase
--&gt;
phase 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
[EXPERIMENTAL] The path to the 'real' host root filesystem.
--&gt;
[实验] “真实”主机根文件系统的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the default kubeadm addons
--&gt;
&lt;p&gt;升级默认的 kubeadm 插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase addon &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for addon
--&gt;
addon 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
### Options inherited from parent commands
--&gt;
&lt;h3 id="从父命令继承的选项"&gt;从父命令继承的选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--rootfs string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.
--&gt;
“真实”主机根文件系统的路径。设置此参数将导致 kubeadm 切换到所提供的路径。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_all/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade all the addons
--&gt;
&lt;p&gt;升级所有插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase addon all &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for all
--&gt;
all 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_coredns/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the CoreDNS addon
--&gt;
&lt;p&gt;升级 CoreDNS 插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase addon coredns &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
&lt;p&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;!--
help for coredns
--&gt;
&lt;p&gt;
coredns 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_kube-proxy/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the kube-proxy addon
--&gt;
&lt;p&gt;升级 kube-proxy 插件。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase addon kube-proxy &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for kube-proxy
--&gt;
kube-proxy 操作的帮助命令。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_control-plane/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the control plane instance deployed on this node, if any
--&gt;
&lt;p&gt;升级部署在此节点上的控制平面实例，如果有的话。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase control-plane &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-renewal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Perform the renewal of certificates used by component changed during upgrades.
--&gt;
续订在升级期间变更的组件所使用的证书。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_kubelet-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_kubelet-config/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Upgrade the kubelet configuration for this node by downloading it from the kubelet-config ConfigMap stored in the cluster
--&gt;
&lt;p&gt;通过从集群中存储的 &lt;code&gt;kubelet-config&lt;/code&gt; ConfigMap 下载配置，升级本节点的 kubelet 配置。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase kubelet-config &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_post-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_post-upgrade/</guid><description>&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run post upgrade tasks
--&gt;
&lt;p&gt;运行升级后的任务。&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade node phase post-upgrade &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Do not change any state, just output the actions that would be performed.
--&gt;
不改变任何状态，只输出将要执行的操作。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for post-upgrade
--&gt;
post-upgrade 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;!--
--kubeconfig string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/admin.conf"
--&gt;
--kubeconfig string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;默认值："/etc/kubernetes/admin.conf"
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--&gt;
用于与集群通信的 kubeconfig 文件。如果未设置此标志，则会在一组标准位置中搜索现有的 kubeconfig 文件。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_preflight/</guid><description>&lt;!--
Run upgrade node pre-flight checks
--&gt;
&lt;p&gt;执行升级节点的预检。&lt;/p&gt;
&lt;!--
### Synopsis
--&gt;
&lt;h3 id="概要"&gt;概要&lt;/h3&gt;
&lt;!--
Run pre-flight checks for kubeadm upgrade node.
--&gt;
&lt;p&gt;执行 kubeadm 升级节点的预检。&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase preflight [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Path to a kubeadm configuration file.
--&gt;
kubeadm 配置文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
help for preflight
--&gt;
preflight 操作的帮助命令。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--ignore-preflight-errors strings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--&gt;
一个检查项列表，其错误将被显示为警告。示例：'IsPrivilegedUser,Swap'。取值为 'all' 时将忽略所有检查的错误。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_plan/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_plan/</guid><description>&lt;!--
### Synopsis

Check which versions are available to upgrade to and validate whether your current cluster is upgradeable.
This command can only run on the control plane nodes where the kubeconfig file "admin.conf" exists.
To skip the internet check, pass in the optional [version] parameter.
--&gt;
&lt;h3 id="概述"&gt;概述&lt;/h3&gt;
&lt;p&gt;检查可升级到哪些版本，并验证你当前的集群是否可升级。
该命令只能在存在 kubeconfig 文件 &lt;code&gt;admin.conf&lt;/code&gt; 的控制平面节点上运行。
要跳过互联网检查，请传入可选参数 [version]。&lt;/p&gt;
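作为用法示意（其中版本号 v1.28.0 仅为假设性示例，实际可用版本取决于你的集群与所配置的软件源）：

```shell
# 联网检查当前集群可升级到的版本
kubeadm upgrade plan

# 传入可选的 [version] 参数以跳过互联网检查
kubeadm upgrade plan v1.28.0
```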
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubeadm upgrade plan &lt;span style="color:#666"&gt;[&lt;/span&gt;version&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Options
--&gt;
&lt;h3 id="选项"&gt;选项&lt;/h3&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-experimental-upgrades&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;
&lt;p&gt;
&lt;!--
Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to an alpha/beta/release candidate versions of Kubernetes.
--&gt;
显示不稳定版本的 Kubernetes 作为升级替代方案，并允许升级到 Kubernetes
的 Alpha、Beta 或 RC 版本。
&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/generated/readme/</guid><description>&lt;p&gt;此目录下的所有文件都是从其他仓库自动生成的。 &lt;strong&gt;不要人工编辑它们。 你必须在上游仓库中编辑它们&lt;/strong&gt;&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/examples/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/examples/readme/</guid><description>&lt;!--
To run the tests for a localization, use the following command:
--&gt;
&lt;p&gt;要运行某个本地化版本的测试，请使用以下命令：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;go test k8s.io/website/content/&amp;lt;lang&amp;gt;/examples
&lt;/code&gt;&lt;/pre&gt;&lt;!--
where `&lt;lang&gt;` is the two character representation of a language. For example:
--&gt;
&lt;p&gt;其中 &lt;code&gt;&amp;lt;lang&amp;gt;&lt;/code&gt; 是语言的两字符代码。例如：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;go test k8s.io/website/content/en/examples
&lt;/code&gt;&lt;/pre&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/zh-cn/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/readme/</guid><description>&lt;h1 id="kubernetes-文档"&gt;Kubernetes 文档&lt;/h1&gt;
&lt;!--
# The Kubernetes documentation
--&gt;
&lt;p&gt;&lt;a href="https://app.netlify.com/sites/kubernetes-io-main-staging/deploys"&gt;&lt;img src="https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status" alt="Netlify Status"&gt;&lt;/a&gt; &lt;a href="https://github.com/kubernetes/website/releases/latest"&gt;&lt;img src="https://img.shields.io/github/release/kubernetes/website.svg" alt="GitHub release"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
--&gt;
&lt;p&gt;本仓库包含了所有用于构建 &lt;a href="https://kubernetes.io/"&gt;Kubernetes 网站和文档&lt;/a&gt;的软件资产。
我们非常高兴你想要参与贡献！&lt;/p&gt;
&lt;!--
- [Contributing to the docs](#contributing-to-the-docs)
- [Localization READMEs](#localization-readmemds)
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#%E4%B8%BA%E6%96%87%E6%A1%A3%E5%81%9A%E8%B4%A1%E7%8C%AE"&gt;为文档做贡献&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#readme-%E6%9C%AC%E5%9C%B0%E5%8C%96"&gt;README 本地化&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Using this repository

You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
--&gt;
&lt;h2 id="使用这个仓库"&gt;使用这个仓库&lt;/h2&gt;
&lt;p&gt;可以使用 &lt;a href="https://gohugo.io/"&gt;Hugo（扩展版）&lt;/a&gt;在本地运行网站，也可以在容器中运行它。
强烈建议使用容器，因为这样可以和在线网站的部署保持一致。&lt;/p&gt;</description></item><item><title>Amadeus Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/amadeus/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/amadeus/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/amadeus_logo.png" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Another Technical Evolution for a 30-Year-Old Company
&lt;/div&gt;&lt;/h1&gt;
&lt;/div&gt;
&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Amadeus IT Group&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Madrid, Spain&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Travel Technology&lt;/b&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company’s goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.
 &lt;/div&gt;
&lt;div class="col2"&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 Mountain has been overseeing the company’s migration to &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, using &lt;a href="https://www.openshift.org/"&gt;OpenShift&lt;/a&gt; Container Platform, &lt;a href="https://www.redhat.com/en"&gt;Red Hat&lt;/a&gt;’s enterprise container platform.
&lt;br&gt;&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It’s now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
&lt;/div&gt;
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "We want multi-data center capabilities, and we want them for our mainstream system as well. We didn’t think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;

&lt;div class="fullcol"&gt;
 &lt;h2&gt;In his two decades at Amadeus, Eric Mountain has been the migrations guy. &lt;/h2&gt;
 Back in the day, he worked on the company’s move from Unix to Linux, and now he’s overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone’s travel experience, without interrupting workflows for the customers who depend on our technology."&lt;br&gt;&lt;br&gt;
 That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.&lt;br&gt;&lt;br&gt;
 The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company’s main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response time to the order of a few hundred milliseconds, which we couldn’t achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."&lt;br&gt;&lt;br&gt;
 More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It’s wasteful on many levels. For instance, an application doesn’t necessarily use the machine very optimally. Virtualization can help a bit, but it’s not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can’t simply say, ‘Well, I’ll bring in another machine and give it that role.’ It’s not fast. It’s not efficient. So we wanted the next level of automation."&lt;br&gt;&lt;br&gt;
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; and databases like &lt;a href="https://www.couchbase.com/"&gt;Couchbase&lt;/a&gt;, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent.
 &lt;br&gt;&lt;br&gt;
 All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; whatever happens to be missing from our point of view, or go with &lt;a href="https://www.openshift.com/"&gt;OpenShift&lt;/a&gt; and build whatever remains there."
&lt;br&gt;&lt;br&gt;
 The team decided against building everything themselves—though they’d done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain.
&lt;br&gt;&lt;br&gt;
 Ultimately, they went with OpenShift Container Platform, &lt;a href="https://www.redhat.com/en"&gt;Red Hat&lt;/a&gt;’s Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security."
&lt;br&gt;&lt;br&gt;
 The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there’s always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project’s needs, "We couldn’t rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn’t offered in the Kubernetes or OpenShift ecosystem. Now that &lt;a href="https://www.prometheus.io/"&gt;Prometheus&lt;/a&gt; and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer."
&lt;br&gt;&lt;br&gt;
 The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
&lt;br&gt;&lt;br&gt;
 Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That’s one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can’t simply do absolutely everything from one day to the next. And we mustn’t sell it that way."
&lt;br&gt;&lt;br&gt;
 The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain’s team selected a smaller application that was representative of all the company’s other applications in its complexity: "We just made sure we picked something that’s complex enough, and we showed that it can be done."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
 Next comes convincing people. "On the operations side and on the R&amp;D side, there will be people who say quite rightly, ‘There is a system, and it works, so why change?’" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company’s existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?"
&lt;br&gt;&lt;br&gt;
 "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
&lt;br&gt;&lt;br&gt;
 So how do you get everyone on board? "Make sure you have good links between your R&amp;D and your operations," he says. "Also make sure you’re going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company."
&lt;br&gt;&lt;br&gt;
 His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there’s no complicated license key for the evaluation period and you’re not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You’ve got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you’ll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment."
&lt;br&gt;&lt;br&gt;
 And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it’s important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It’s the only real way that you’ll see that you might be able to do things."
&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Ancestry Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/ancestry/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/ancestry/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/ancestry_logo.png" width="22%" style="margin-bottom:-12px;margin-left:3px;"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Digging Into the Past With New Technology&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Ancestry&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Lehi, Utah&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Internet Company, Online Services&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
&lt;div class="col1"&gt;

&lt;h2&gt;Challenge&lt;/h2&gt;
Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that, at their core, result in real and emotional human responses. &lt;a href="https://www.ancestry.com"&gt;Ancestry&lt;/a&gt; currently serves more than 2.6 million paying subscribers and holds 20 billion historical records and 90 million family trees, and more than four million people are in its AncestryDNA network, making it the largest consumer genomics DNA network in the world. The company's popular website, &lt;a href="https://www.ancestry.com"&gt;ancestry.com&lt;/a&gt;, was working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but had become quite cumbersome in its processing and is time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our&amp;nbsp;products."

&lt;br&gt;

&lt;/div&gt;

&lt;div class="col2"&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;

 The company is transitioning to cloud native infrastructure, using &lt;a href="https://www.docker.com"&gt;Docker&lt;/a&gt; containerization, &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; orchestration and &lt;a href="https://prometheus.io"&gt;Prometheus&lt;/a&gt; for cluster monitoring.&lt;br&gt;
&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 "Every single product, every decision we make at Ancestry, focuses on delighting our customers with intimate, sometimes life-changing discoveries about themselves and their families," says MacKay. "As the company continues to grow, the increased productivity gains from using Kubernetes have helped Ancestry make customer discoveries faster. With the move to Dockerization for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. We’ve truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."
&lt;/div&gt;
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "At a certain point, you have to step back if you're going to push a new technology and get key thought leaders with engineers within the organization to become your champions for new technology adoption. At training sessions, the development teams were always the ones that were saying, 'Kubernetes saved our time tremendously; it's an enabler. It really is incredible.'"&lt;br&gt;&lt;br&gt;&lt;span style="font-size:16px"&gt;- PAUL MACKAY, SOFTWARE ENGINEER AND ARCHITECT AT ANCESTRY&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
&lt;h2&gt;It started with a Shaky Leaf.&lt;/h2&gt;

 Since its introduction a decade ago, the Shaky Leaf icon has become one of Ancestry's signature features, which signals to users that there's a helpful hint you can use to find out more about your family tree.&lt;br&gt;&lt;br&gt;
 So when the company decided to begin moving its infrastructure to cloud native technology, the first service that was launched on &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt;, the open source platform for managing application containers across clusters of hosts, was this hint system. Think of it as Amazon's recommended products, but instead of recommending products the company recommends records, stories, or familial connections. "It was a very important part of the site," says Ancestry software engineer and architect Paul MacKay, "but also small enough for a pilot project that we knew we could handle in a very appropriate, secure way."&lt;br&gt;&lt;br&gt;
 And when it went live smoothly in early 2016, "our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes," MacKay adds. "The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation."&lt;br&gt;&lt;br&gt;
 The stability of that Shaky Leaf was a signal for MacKay and his team that their decision to embrace cloud native technologies was the right one for the company. With a private data center, Ancestry built its website (which launched in 1996) on hundreds of services and technologies and a traditional deployment methodology. "It worked well for us in the past, but the sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
"And when it [Kubernetes] went live smoothly in early 2016, 'our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes,' MacKay adds. 'The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation.'"
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 That need led them in 2015 to explore containerization. Ancestry engineers had already been using technology like &lt;a href="https://www.java.com/en/"&gt;Java&lt;/a&gt; and &lt;a href="https://www.python.org"&gt;Python&lt;/a&gt; on Linux, so part of the decision was about making the infrastructure more Linux-friendly. They quickly decided that they wanted to go with Docker for containerization, "but it always comes down to the orchestration part of it to make it really work," says MacKay.&lt;br&gt;&lt;br&gt;
 His team looked at orchestration platforms offered by &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt;, &lt;a href="https://mesos.apache.org"&gt;Mesos&lt;/a&gt; and &lt;a href="https://www.openstack.org/software/"&gt;OpenStack&lt;/a&gt;, and even started to prototype some homegrown solutions. And then they started hearing rumblings of the imminent release of Kubernetes v1.0. "At the forefront, we were looking at the secret store, so we didn't have to manage that all ourselves, the config maps, the methodology of seamless deployment strategy," he says. "We found that how Kubernetes had done their resources, their types, their labels and just their interface was so much further advanced than the other things we had seen. It was a feature fit."&lt;br&gt;&lt;br&gt;
 &lt;div class="quote"&gt;
 Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."&lt;/div&gt;&lt;br&gt;
 Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."&lt;br&gt;&lt;br&gt;
 Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."&lt;br&gt;&lt;br&gt;
 Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
&lt;br&gt;&lt;br&gt;
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
&lt;div class="banner4text"&gt;

"The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology."

&lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as workloads occur."&lt;br&gt;&lt;br&gt;
 The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."&lt;br&gt;&lt;br&gt;
 A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."&lt;br&gt;&lt;br&gt;
 The company continues to weigh which services it will move forward to Kubernetes, which ones will be kept as is, and which will be replaced in the future and thus don't have to be moved over. MacKay estimates that the company is "approaching halfway on those features that are going forward. We don't have to do a lot of convincing anymore. It's more of an issue of timing with getting product management and engineering staff the knowledge and information that they need."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "... 'I believe in Kubernetes. I believe in containerization. I think
 if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about,
 and it'll&amp;nbsp;go&amp;nbsp;forward.'"
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;


Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."&lt;br&gt;&lt;br&gt;
That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."&lt;br&gt;&lt;br&gt;
As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending &lt;a href="https://www.meetup.com/Utah-Kubernetes-Meetup/"&gt;meetups&lt;/a&gt; to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."
&lt;br&gt;&lt;br&gt;When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."&lt;br&gt;&lt;br&gt;
With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.&lt;br&gt;&lt;br&gt;
"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."&lt;br&gt;&lt;br&gt;
He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!"


&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Ant Financial Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/ant-financial/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/ant-financial/</guid><description>&lt;!--
title: Ant Financial Case Study
linkTitle: ant-financial
case_study_styles: true
cid: caseStudies
featured: false

new_case_study_styles: true
heading_background: /images/case-studies/antfinancial/banner1.jpg
heading_title_logo: /images/antfinancial_logo.png
subheading: &gt;
 Ant Financial's Hypergrowth Strategy Using Kubernetes
case_study_details:
 - Company: Ant Financial
 - Location: Hangzhou, China
 - Industry: Financial Services
--&gt;

&lt;!--
&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Officially founded in October 2014, &lt;a href="https://www.antfin.com/index.htm?locale=en_us"&gt;Ant Financial&lt;/a&gt; originated from &lt;a href="https://global.alipay.com/"&gt;Alipay&lt;/a&gt;, the world's largest online payment platform that launched in 2004. The company also offers numerous other services leveraging technology innovation. With the volume of transactions Alipay handles for its 900+ million users worldwide (through its local and global partners)—256,000 transactions per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018—not to mention that of its other services, Ant Financial faces "data processing challenge in a whole new way," says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. "We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and then we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level." In order to provide reliable and consistent services to its customers, Ant Financial embraced containers in early 2014, and soon needed an orchestration solution for the tens-of-thousands-of-node clusters in its data centers.&lt;/p&gt;</description></item><item><title>BlaBlaCar Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/blablacar/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/blablacar/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/blablacar_logo.png" class="header_logo"&gt;&lt;br /&gt; &lt;div class="subhead"&gt;Turning to Containerization to Support Millions of Rideshares&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;BlaBlaCar&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Paris, France&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Ridesharing Company&lt;/b&gt;
&lt;/div&gt;

&lt;hr /&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 The world’s largest long-distance carpooling community, &lt;a href="https://www.blablacar.com/"&gt;BlaBlaCar&lt;/a&gt;, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.
 &lt;br /&gt;
 &lt;br /&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOS runtime &lt;a href="https://coreos.com/rkt"&gt;rkt&lt;/a&gt;, initially deployed using the &lt;a href="https://coreos.com/fleet/docs/latest/launching-containers-fleet.html"&gt;fleet&lt;/a&gt; cluster manager. Last year, the company switched to &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; orchestration, and now also uses &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; for monitoring.
 &lt;/div&gt;

&lt;div class="col2"&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 "Before using containers, it would take sometimes a day, sometimes two, just to create a new service," says Lallemand. "With all the tooling that we made around the containers, copying a new service now is a matter of minutes. It’s really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that they’re developing, and not on the infrastructure."
&lt;/div&gt;
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "When you’re switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot without any downtime and without losing traffic. [With Kubernetes] our infrastructure is much more resilient and we have better availability than before."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br /&gt;- Simon Lallemand, Infrastructure Engineer at BlaBlaCar&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;h2&gt;For the 40 million users of &lt;a href="https://www.blablacar.com/"&gt;BlaBlaCar&lt;/a&gt;, it’s easy to find strangers headed in the same direction to share rides and costs. You can even choose how much "bla bla" chatter you want from a long-distance ride mate.&lt;/h2&gt;
 Behind the scenes, though, the infrastructure was falling woefully behind the rider community’s exponential growth. Founded in 2006, the company hit its current stride around 2012. "Our infrastructure was very traditional," says Infrastructure Engineer Simon Lallemand, who began working at the company in 2014. "In the beginning, it was a bit chaotic because we had to [grow] fast. But then comes the time when you have to design things to make it manageable."&lt;br /&gt;&lt;br /&gt;
 By 2015, the company had about 50 bare metal servers. The team was using a &lt;a href="https://www.mysql.com/"&gt;MySQL&lt;/a&gt; database and &lt;a href="http://php.net/"&gt;PHP&lt;/a&gt;, but, Lallemand says, "it was a very static way." They also utilized the configuration management system, &lt;a href="https://www.chef.io/chef/"&gt;Chef&lt;/a&gt;, but had little automation in its process. "When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’" says Lallemand. "The answer is not to hire more and more people just to deal with the servers and installation."&lt;br /&gt;&lt;br /&gt;
 Instead, BlaBlaCar began its cloud-native journey but wasn’t sure which route to take. "We could either decide to go into cloud virtualization or even use a private cloud on our own servers," says Lallemand. "But going into the cloud meant we had to make a lot of changes in our application work, and we were just not ready to make the switch from on premise to the cloud." They wanted to keep the great performance they got on bare metal, so they didn’t want to go to virtualization on premise.&lt;br /&gt;&lt;br /&gt;
 The solution: containerization. This was early 2015 and containers were still relatively new. "It was a bold move at the time," says Lallemand. "We decided that the next servers that we would buy in the new data center would all be the same model, so we could outsource the maintenance of the servers. And we decided to go with containers and with &lt;a href="https://coreos.com/"&gt;CoreOS&lt;/a&gt; Container Linux as an abstraction for this hardware. It seemed future-proof to go with containers because we could see what companies were already doing with containers."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "With all the tooling that we made around the containers, copying a new service is a matter of minutes. It’s a huge gain. For the developers, it means they can focus only on the features that they’re developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 Next, they needed to choose a runtime for the containers, but "there were very few deployments in production at that time," says Lallemand. They experimented with &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; but decided to go with &lt;a href="https://coreos.com/rkt"&gt;rkt&lt;/a&gt;. Lallemand explains that for BlaBlaCar, it was "much simpler to integrate things that are on rkt." At the time, the project was still pre-v1.0, so "we could speak with the developers of rkt and give them feedback. It was an advantage." Plus, he notes, rkt was very stable, even at this early stage.&lt;br /&gt;&lt;br /&gt;
 Once those decisions were made that summer, the company came up with a plan for implementation. First, they formed a task force to create a workflow that would be tested by three of the 10 members on Lallemand’s team. But they took care to run regular workshops with all 10 members to make sure everyone was on board. "When you’re focused on your product sometimes you forget if it’s really user friendly, whether other people can manage to create containers too," Lallemand says. "So we did a lot of iterations to find a good workflow."&lt;br /&gt;&lt;br /&gt;
 After establishing the workflow, Lallemand says with a smile that "we had this strange idea that we should try the most difficult thing first. Because if it works, it will work for everything." So the first project the team decided to containerize was the database. "Nobody did that at the time, and there were really no existing tools for what we wanted to do, including building container images," he says. So the team created their own tools, such as &lt;a href="https://github.com/blablacar/dgr"&gt;dgr&lt;/a&gt;, which builds container images so that the whole team has a common framework to build on the same images with the same standards. They also revamped the service-discovery tools &lt;a href="https://github.com/airbnb/nerve"&gt;Nerve&lt;/a&gt; and &lt;a href="http://airbnb.io/projects/synapse/"&gt;Synapse&lt;/a&gt;; their versions, &lt;a href="https://github.com/blablacar/go-nerve"&gt;Go-Nerve&lt;/a&gt; and &lt;a href="https://github.com/blablacar/go-synapse"&gt;Go-Synapse&lt;/a&gt;, were written in Go and built to be more efficient and include new features. All of these tools were open-sourced.&lt;br /&gt;&lt;br /&gt;
 At the same time, the company was working to migrate its entire platform to containers with a deadline set for Christmas 2015. With all the work being done in parallel, BlaBlaCar was able to get about 80 percent of its production into containers by its deadline with live traffic running on containers during December. (It’s now at 100 percent.) "It’s a really busy time for traffic," says Lallemand. "We knew that by using those new servers with containers, it would help us handle the traffic."&lt;br /&gt;&lt;br /&gt;
 In the middle of that peak season for carpooling, everything worked well. "The biggest impact that we had was for the deployment of new services," says Lallemand. "Before using containers, we had to first deploy a new server and create configurations with Chef. It would take sometimes a day, sometimes two, just to create a new service. And with all the tooling that we made around the containers, copying a new service is a matter of minutes. So it’s really a huge gain. For the developers, it means they can focus only on the features that they’re developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "We realized that there was a really strong community around it [Kubernetes], which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 In order to meet their self-imposed deadline, one of the decisions they made was to not do any "orchestration magic" for containers in the first production alignment. Instead, they used the basic &lt;a href="https://coreos.com/fleet/docs/latest/launching-containers-fleet.html"&gt;fleet&lt;/a&gt; tool from CoreOS to deploy their containers. (They did build a tool called &lt;a href="https://github.com/blablacar/ggn"&gt;GGN&lt;/a&gt;, which they’ve open-sourced, to make it more manageable for their system engineers to use.)&lt;br /&gt;&lt;br /&gt;
 Still, the team knew that they’d want more orchestration. "Our tool was doing a pretty good job, but at some point you want to give more autonomy to the developer team," Lallemand says. "We also realized that we don’t want to be the single point of contact for developers when they want to launch new services." By the summer of 2016, they found their answer in &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, which had just begun supporting rkt implementation.&lt;br /&gt;&lt;br /&gt;
 After discussing their needs with their contacts at CoreOS and Google, they were convinced that Kubernetes would work for BlaBlaCar. "We realized that there was a really strong community around it, which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes." They also started using &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;, as they were looking for "service-oriented monitoring that could be updated nightly." Production on Kubernetes began in December 2016. "We like to do crazy stuff around Christmas," he adds with a laugh.&lt;br /&gt;&lt;br /&gt;
 BlaBlaCar now has about 3,000 pods, with 1,200 of them running on Kubernetes. Lallemand leads a "foundations team" of 25 members who take care of the networks, databases and systems for about 100 developers. There have been some challenges getting to this point. "The rkt implementation is still not 100 percent finished," Lallemand points out. "It’s really good, but there are some features still missing. We have questions about how we do things with stateful services, like databases. We know how we will be migrating some of the services; some of the others are a bit more complicated to deal with. But the Kubernetes community is making a lot of progress on that part."&lt;br /&gt;&lt;br /&gt;
 The team is particularly happy that they’re now able to plan capacity better in the company’s data center. "We have fewer constraints since we have this abstraction between the services and the hardware we run on," says Lallemand. "If we lose a server because there’s a hardware problem on it, we just move the containers onto another server. It’s much more efficient. We do that by just changing a line in the configuration file. And with Kubernetes, it should be automatic, so we would have nothing to do."
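The hands-off failover Lallemand describes is what a Kubernetes Deployment provides: you declare a desired replica count, and when a node dies the control plane reschedules the missing pods onto healthy nodes with no configuration edits. A minimal illustrative manifest (a sketch only, not BlaBlaCar's actual configuration; the names and image are hypothetical) looks like this:

```yaml
# Illustrative sketch -- not BlaBlaCar's actual configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # hypothetical service name
spec:
  replicas: 3                  # desired count; Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: app
        image: example/app:1.0 # hypothetical image
        ports:
        - containerPort: 8080
```

If a server hosting one of these pods fails, the Deployment controller detects the shortfall against `replicas: 3` and schedules replacements elsewhere in the cluster automatically.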
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "If we lose a server because there’s a hardware problem on it, we just move the containers onto another server. It’s much more efficient. We do that by just changing a line in the configuration file. With Kubernetes, it should be automatic, so we would have nothing to do."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
 And these advances ultimately trickle down to BlaBlaCar’s users. "We have improved availability overall on our website," says Lallemand. "When you’re switching to this cloud-native model with running everything in containers, you have to make sure that you can at any moment reboot a server or a data container without any downtime, without losing traffic. So now our infrastructure is much more resilient and we have better availability than before."&lt;br /&gt;&lt;br /&gt;
 Within BlaBlaCar’s technology department, the cloud-native journey has created some profound changes. Lallemand thinks that the regular meetings during the conception stage and the training sessions during implementation helped. "After that everybody took part in the migration process," he says. "Then we split the organization into different ‘tribes’—teams that gather developers, product managers, data analysts, all the different jobs, to work on a specific part of the product. Before, they were organized by function. The idea is to give all these tribes access to the infrastructure directly in a self-service way without having to ask. These people are really autonomous. They have responsibility of that part of the product, and they can make decisions faster." &lt;br /&gt;&lt;br /&gt;
 This DevOps transformation turned out to be a positive one for the company’s staffers. "The team was very excited about the DevOps transformation because it was new, and we were working to make things more reliable, more future-proof," says Lallemand. "We like doing things that very few people are doing, other than the internet giants." &lt;br /&gt;&lt;br /&gt;
 With these changes already making an impact, BlaBlaCar is looking to split up more and more of its application into services. "I don’t say microservices because they’re not so micro," Lallemand says. "If we can split the responsibilities between the development teams, it would be easier to manage and more reliable, because we can easily add and remove services if one fails. You can handle it easily, instead of adding a big monolith that we still have." &lt;br /&gt;&lt;br /&gt;
 When Lallemand speaks to other European companies curious about what BlaBlaCar has done with its infrastructure, he tells them to come along for the ride. "I tell them that it’s such a pleasure to deal with the infrastructure that we have today compared to what we had before," he says. "They just need to keep in mind their real motive, whether it’s flexibility in development or reliability or so on, and then go step by step towards reaching those objectives. That’s what we’ve done. It’s important not to do technology for the sake of technology. Do it for a purpose. Our focus was on helping the developers."
&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>BlackRock Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/blackrock/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/blackrock/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt; CASE STUDY: &lt;img src="https://andygol-k8s.netlify.app/images/blackrock_logo.png" class="header_logo"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;Rolling Out Kubernetes in Production in 100 Days&lt;/div&gt;
 &lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;BlackRock&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;New York, NY&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Financial Services&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;

&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 The world’s largest asset manager, &lt;a href="https://www.blackrock.com/investing"&gt;BlackRock&lt;/a&gt; operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning &lt;a href="https://www.python.org"&gt;Python&lt;/a&gt; notebooks, or even something much more advanced, like a MapReduce engine based on &lt;a href="https://spark.apache.org"&gt;Spark&lt;/a&gt;," says Michael Francis, a Managing Director in BlackRock’s Product Group, which runs the company’s investment management platform. "Managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?"
 &lt;/div&gt;

&lt;div class="col2"&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 Drawing from what they learned during a pilot done last year using &lt;a href="https://www.docker.com"&gt;Docker&lt;/a&gt; environments, Francis put together a cross-sectional team of 20 to build an investor research web app using &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; with the goal of getting it into production within one quarter.
&lt;br&gt;&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 "Our goal was: How do you give people tools rapidly without having to install them on their desktop?" says Francis. And the team hit the goal within 100 days. Francis is pleased with the results and says, "We’re going to use this infrastructure for lots of other application workloads as time goes on. It’s not just data science; it’s this style of application that needs the dynamism. But I think we’re 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. What’s interesting is that just having this technology there is changing the way our developers are starting to think about their future development."

&lt;/div&gt;
&lt;/div&gt;

&lt;/section&gt;

&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don’t have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Michael Francis, Managing Director, BlackRock&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;

&lt;div class="fullcol"&gt;
 One of the management objectives for BlackRock’s Product Group employees in 2017 was to "build cool stuff." Led by Managing Director Michael Francis, a cross-sectional group of 20 did just that: They rolled out a full production Kubernetes environment and released a new investor research web app on it. In 100 days.&lt;br&gt;&lt;br&gt;
 For a company that’s the world’s largest asset manager, "just equipment procurement can take 100 days sometimes, let alone from inception to delivery," says Karl Wieman, a Senior System Administrator. "It was an aggressive schedule. But it moved the dial."&lt;br&gt;&lt;br&gt;
 In fact, the project achieved two goals: It solved a business problem (creating the needed web app) as well as provided real-world, in-production experience with Kubernetes, a cloud-native technology that the company was eager to explore. "It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?" says Francis. The ultimate success of this project, beyond delivering the app, lies in the fact that "we’ve managed to integrate a radically new thought process into a controlled infrastructure that we didn’t want to change."&lt;br&gt;&lt;br&gt;
 After all, in its three decades of existence, BlackRock has "a very well-established environment for managing our compute resources," says Francis. "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that’s very cloudish in concept. We’re able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."&lt;br&gt;&lt;br&gt;
 Though that works well for the core production, the company has found that some data science workloads require more dynamic access to resources. "It’s a very bursty process," says Francis, who is head of data for the company’s Aladdin investment management platform division.&lt;br&gt;&lt;br&gt;
 Aladdin, which connects the people, information and technology needed for money management in real time, is used internally and is also sold as a platform to other asset managers and insurance companies. "We want to be able to give every investor access to data science, meaning &lt;a href="https://www.python.org"&gt;Python&lt;/a&gt; notebooks, or even something much more advanced, like a MapReduce engine based on &lt;a href="https://spark.apache.org"&gt;Spark&lt;/a&gt;," says Francis. But "managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. Docker allows us to flatten that environment."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that’s very cloudish in concept. We’re able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you’d have to build an infrastructure to define limits for our processes, and the Python notebooks weren’t really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."&lt;br&gt;&lt;br&gt;
 Made up of managers from technology, infrastructure, production operations, development and information security, Francis’s team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using &lt;a href="https://www.ansible.com"&gt;Ansible&lt;/a&gt; and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don’t understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn’t build anywhere near the amount we thought we were going to end up building."&lt;br&gt;&lt;br&gt;
 In search of a solution in which they could manage usage on a user-by-user level, Francis’s team gravitated to Red Hat’s &lt;a href="https://www.openshift.com"&gt;OpenShift&lt;/a&gt; Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that’s an indicator of the momentum."&lt;br&gt;&lt;br&gt;
 Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock’s existing framework. "It’s about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"&lt;br&gt;&lt;br&gt;
 The first (anticipated) speed bump was working around issues behind BlackRock’s corporate firewalls. "One of our challenges is there are no firewalls in most open source software," says Francis. "So almost all install scripts fail in some bizarre way, and pulling down packages doesn’t necessarily work." The team ran into these types of problems using &lt;a href="https://andygol-k8s.netlify.app/docs/getting-started-guides/minikube/"&gt;Minikube&lt;/a&gt; and did a few small pushes back to the open source project.


&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 There were also questions about service discovery. "You can think of Aladdin as a cloud of services with APIs between them that allows us to build applications rapidly," says Francis. "It’s all on a proprietary message bus, which gives us all sorts of advantages but at the same time, how does that play in a third party [platform]?"&lt;br&gt;&lt;br&gt;
 Another issue they had to navigate was that in BlackRock’s existing system, the messaging protocol has different instances in the different development, test and production environments. While Kubernetes enables a more DevOps-style model, it didn’t make sense for BlackRock. "I think what we are very proud of is that the ability for us to push into production is still incredibly rapid in this [new] infrastructure, but we have the control points in place, and we didn’t have to disrupt everything," says Francis. "A lot of the cost of this development was thinking how best to leverage our internal tools. So it was less costly than we actually thought it was going to be."&lt;br&gt;&lt;br&gt;
 The project leveraged tools associated with the messaging bus, for example. "The way that the Kubernetes cluster will talk to our internal messaging platform is through a gateway program, and this gateway program already has built-in checks and throttles," says Morris. "We can use them to control and potentially throttle the requests coming in from Kubernetes’s very elastic infrastructure to the production infrastructure. We’ll continue to go in that direction. It enables us to scale as we need to from the operational perspective."&lt;br&gt;&lt;br&gt;
 The solution also had to be complementary with BlackRock’s centralized operational support team structure. "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools," Morris explains. "That means that I don’t need to hire more people."&lt;br&gt;&lt;br&gt;
 With those points established, the team created a procedure for the project: "We rolled this out first to a development environment, then moved on to a testing environment and then eventually to two production environments, in that sequential order," says Maskallis. "That drove a lot of our learning curve. We have all these moving parts, the software components on the infrastructure side, the software components with Kubernetes directly, the interconnectivity with the rest of the environment that we operate here at BlackRock, and how we connect all these pieces. If we came across issues, we fixed them, and then moved on to the different environments to replicate that until we eventually ended up in our production environment where this particular cluster is supposed to live."&lt;br&gt;&lt;br&gt;
 The team had weekly one-hour working sessions with all the members (who are located around the world) participating, and smaller breakout or deep-dive meetings focusing on specific technical details. Possible solutions would be reported back to the group and debated the following week. "I think what made it a successful experiment was people had to work to learn, and they shared their experiences with others," says Vice President and Software Developer Fouad Semaan. Then, Francis says, "We gave our engineers the space to do what they’re good at. This hasn’t been top-down."


&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools. That means that I don’t need to hire more people."

 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
 They were led by one key axiom: To stay focused and avoid scope creep. This meant that they wouldn’t use features that weren’t in the core of Kubernetes and Docker. But if there was a real need, they’d build the features themselves. Luckily, Francis says, "Because of the rapidity of the development, a lot of things we thought we would have to build ourselves have been rolled into the core product. [The package manager&lt;a href="https://helm.sh"&gt; Helm&lt;/a&gt; is one example]. People have similar problems."&lt;br&gt;&lt;br&gt;
 By the end of the 100 days, the app was up and running for internal BlackRock users. The initial capacity of 30 users was hit within hours, and quickly increased to 150. "People were immediately all over it," says Francis. In the next phase of this project, they are planning to scale up the cluster to have more capacity.&lt;br&gt;&lt;br&gt;
 Even more importantly, they now have in-production experience with Kubernetes that they can continue to build on—and a complete framework for rolling out new applications. "We’re going to use this infrastructure for lots of other application workloads as time goes on. It’s not just data science; it’s this style of application that needs the dynamism," says Francis. "Is it the right place to move our core production processes onto? It might be. We’re not at a point where we can say yes or no, but we felt that having real production experience with something like Kubernetes at some form and scale would allow us to understand that. I think we’re 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues."&lt;br&gt;&lt;br&gt;
 For other big companies considering a project like this, Francis says commitment and dedication are key: "We got the signoff from [senior management] from day one, with the commitment that we were able to get the right people. If I had to isolate what makes something complex like this succeed, I would say senior hands-on people who can actually drive it make a huge difference." With that in place, he adds, "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don’t have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."

&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Box Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/box/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/box/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt;CASE STUDY: &lt;img src="https://andygol-k8s.netlify.app/images/box_logo.png" width="10%" style="margin-bottom:-6px"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;An Early Adopter Envisions
 a New Cloud Platform&lt;/div&gt;
 &lt;/h1&gt;
&lt;/div&gt;


&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Box&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Redwood City, California&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Technology&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;

&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;

 &lt;h2&gt;Challenge&lt;/h2&gt;
 Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. &lt;a href="https://www.box.com/home"&gt;Box&lt;/a&gt; was built primarily with bare metal inside the company’s own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It’s been a huge challenge because different clouds, especially bare metal, have very different interfaces."
 &lt;br&gt;
 &lt;/div&gt;

 &lt;div class="col2"&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; container orchestration. Kubernetes, Ghods says, has allowed Box’s developers to "target a universal set of concepts that are portable across all clouds."&lt;br&gt;&lt;br&gt;

 &lt;h2&gt;Impact&lt;/h2&gt;
 "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we’re working on getting it to an hour."
 &lt;/div&gt;
&lt;/div&gt;

&lt;/section&gt;

&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "We looked at a lot of different options, but Kubernetes really stood out....the fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as&amp;nbsp;well."&lt;br&gt;&lt;br&gt;&lt;span style="font-size:15px;letter-spacing:0.08em"&gt;- SAM GHODS, CO-FOUNDER AND SERVICES ARCHITECT OF BOX&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;

 &lt;div class="fullcol"&gt;
 &lt;h2&gt;In the summer of 2014, Box was feeling the pain of a decade’s worth of hardware and software infrastructure that wasn’t keeping up with the company’s needs.&lt;/h2&gt;

 A platform that allows its more than 50 million users (including governments and big businesses like &lt;a href="https://www.ge.com/"&gt;General Electric&lt;/a&gt;) to manage and share content in the cloud, Box was originally a &lt;a href="http://php.net/"&gt;PHP&lt;/a&gt; monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we’ve been expanding into regions around the globe, and as the public cloud wars have been heating up, we’ve been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It’s been a huge challenge thus far because all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."&lt;br&gt;&lt;br&gt;
 Box’s cloud native journey accelerated that June, when Ghods attended &lt;a href="https://www.docker.com/events/dockercon"&gt;DockerCon&lt;/a&gt;. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.&lt;br&gt;&lt;br&gt;
 At that conference, Google announced the release of its Kubernetes container management system, and Ghods was won over. "We looked at a lot of different options, but Kubernetes really stood out, especially because of the incredibly strong team of &lt;a href="https://research.google/pubs/large-scale-cluster-management-at-google-with-borg/"&gt;Borg&lt;/a&gt; veterans and the vision of having a completely infrastructure-agnostic way of being able to run cloud software," he says, referencing Google’s internal container orchestrator Borg. "The fact that on day one it was designed to run on bare metal just as well as &lt;a href="https://cloud.google.com/"&gt;Google Cloud&lt;/a&gt; meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."&lt;br&gt;&lt;br&gt;
 Another plus: Ghods liked that &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; has a universal set of API objects like pod, service, replica set and deployment object, which created a consistent surface to build tooling against. "Even PaaS layers like &lt;a href="https://www.openshift.com/"&gt;OpenShift&lt;/a&gt; or &lt;a href="http://deis.io/"&gt;Deis&lt;/a&gt; that build on top of Kubernetes still treat those objects as first-class principles," he says. "We were excited about having these abstractions shared across the entire ecosystem, which would result in a lot more momentum than we saw in other potential solutions."&lt;br&gt;&lt;br&gt;
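 The universal API objects Ghods mentions all share the same top-level shape, which is what makes a consistent tooling surface possible. As an illustrative sketch only (not from the case study; the app name and image tag below are hypothetical), a minimal Deployment can be written out as a plain Python dict:

```python
# Illustrative sketch: a minimal Kubernetes Deployment manifest expressed as
# a plain Python dict. The apiVersion/kind/metadata/spec shape is shared by
# the core objects mentioned above (pod, service, replica set, deployment),
# which is what lets generic tooling treat them uniformly.
# "example-app" and the image tag are hypothetical names.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [
                    {"name": "example-app", "image": "example-app:1.0"}
                ]
            },
        },
    },
}

# Every Kubernetes API object exposes the same four top-level fields.
for field in ("apiVersion", "kind", "metadata", "spec"):
    assert field in deployment
```

 Tooling that assumes only this common shape (a kind, metadata, a spec) can operate on any object, which is the shared abstraction the PaaS layers built on Kubernetes rely on.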
 Box deployed Kubernetes in a cluster in a production data center just six months later. Kubernetes was then still pre-beta, on version 0.11. They started small: The very first thing Ghods’s team ran on Kubernetes was a Box API checker that confirms Box is up. "That was just to write and deploy some software to get the whole pipeline functioning," he says. Next came some daemons that process jobs, which was "nice and safe because if they experienced any interruptions, we wouldn’t fail synchronous incoming requests from customers."

 &lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "As we’ve been expanding into regions around the globe, and as the public cloud wars have been heating up, we’ve been focusing a lot more on figuring out how we [can have Kubernetes help] run our workload across many different environments and many different cloud infrastructure providers."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
 &lt;div class="fullcol"&gt;
 The first live service, which the team could route to and ask for information, was launched a few months later. At that point, Ghods says, "We were comfortable with the stability of the Kubernetes cluster. We started to port some services over, then we would increase the cluster size and port a few more, and that’s ended up to about 100 servers in each data center that are dedicated purely to Kubernetes. And that’s going to be expanding a lot over the next 12 months, probably to many hundreds if not thousands."&lt;br&gt;&lt;br&gt;
 While observing teams who began to use Kubernetes for their microservices, "we immediately saw an uptick in the number of microservices being released," Ghods&amp;nbsp;notes. "There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
 &lt;br&gt;&lt;br&gt;&lt;div class="quote"&gt;"There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."&lt;/div&gt;&lt;br&gt;
 Ghods reflects that as early adopters, Box had a different journey from what companies experience now. "We were definitely lock step with waiting for certain things to stabilize or features to get released," he says. "In the early days we were doing a lot of contributions [to components such as kubectl apply] and waiting for Kubernetes to release each of them, and then we’d upgrade, contribute more, and go back and forth several times. The entire project took about 18 months from our first real deployment on Kubernetes to having general availability. If we did that exact same thing today, it would probably be no more than six."&lt;br&gt;&lt;br&gt;
 In any case, Box didn’t have to make too many modifications to Kubernetes for it to work for the company. "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing (and often legacy) infrastructure," says Ghods, "such as upgrading our base operating system from RHEL6 to RHEL7 or integrating it into &lt;a href="https://www.nagios.org/"&gt;Nagios&lt;/a&gt;, our monitoring infrastructure. But overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we’ve been running it very successfully on our bare metal infrastructure."&lt;br&gt;&lt;br&gt;
 Perhaps the bigger challenge for Box was a cultural one. "Kubernetes, and cloud native in general, represents a pretty big paradigm shift, and it’s not very incremental," Ghods says. "We’re essentially making this pitch that Kubernetes is going to solve everything because it does things the right way and everything is just suddenly better. But it’s important to keep in mind that it’s not nearly as proven as many other solutions out there. You can’t say how long this or that company took to do it because there just aren’t that many yet. Our team had to really fight for resources because our project was a bit of a moonshot."
 &lt;/div&gt;
&lt;/section&gt;

 &lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing [and often legacy] infrastructure....overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we’ve been running it very successfully on our bare metal infrastructure."
 &lt;/div&gt;
 &lt;/div&gt;

&lt;section class="section4"&gt;
 &lt;div class="fullcol"&gt;
 Having learned from experience, Ghods offers these two pieces of advice for companies going through similar challenges:
 &lt;h2&gt;1. Deliver early and often.&lt;/h2&gt; Service discovery was a huge problem for Box, and the team had to decide whether to build an interim solution or wait for Kubernetes to natively satisfy Box’s unique requirements. After much debate, "we just started focusing on delivering something that works, and then dealing with potentially migrating to a more native solution later," Ghods says. "The above-all-else target for the team should always be to serve real production use cases on the infrastructure, no matter how trivial. This helps keep the momentum going both for the team itself and for the organizational perception of the project." &lt;br&gt;&lt;br&gt;
 &lt;h2&gt;2. Keep an open mind about what your company has to abstract away from developers and what it&amp;nbsp;doesn’t.&lt;/h2&gt; Early on, the team built an abstraction on top of Docker files to help ensure that images had the right security updates.
 This turned out to be superfluous work, since container images are considered immutable and you can easily scan them post-build to ensure they do not contain vulnerabilities. Because managing infrastructure through containerization is such a discontinuous leap, it’s better to start by interacting directly with the native tools and learning their unique advantages and caveats. An abstraction should be built only after a practical need for it arises.&lt;br&gt;&lt;br&gt;
 In the end, the impact has been powerful. "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Now a new microservice takes less than five days to deploy. And we’re working on getting it to an hour. Granted, much of that six months was due to how broken our systems were, but bare metal is intrinsically a difficult platform to support unless you have a system like Kubernetes to help manage&amp;nbsp;it."&lt;br&gt;&lt;br&gt;
 By Ghods’s estimate, Box is still several years away from his goal of being a 90-plus percent Kubernetes shop. "We’re very far along on having a mission-critical, stable Kubernetes deployment that provides a lot of value," he says. "Right now about five percent of all of our compute runs on Kubernetes, and I think in the next six months we’ll likely be between 20 to 50 percent. We’re working hard on enabling all stateless service use cases, and will shift our focus to stateful services after&amp;nbsp;that."
 &lt;/div&gt;
&lt;/section&gt;

 &lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. '...because it’s a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure.'"
 &lt;/div&gt;
 &lt;/div&gt;

&lt;section class="section5"&gt;
 &lt;div class="fullcol"&gt;
 In fact, that’s what he envisions across the industry: Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. Kubernetes provides an API consistent across different cloud platforms including bare metal, and "I don’t think people have seen the full potential of what’s possible when you can program against one single interface," he says. "The same way &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt; changed infrastructure so that you don’t have to think about servers or cabinets or networking equipment anymore, Kubernetes enables you to focus exclusively on the containers that you’re running, which is pretty exciting. That’s the vision."&lt;br&gt;&lt;br&gt;
 Ghods points to projects that are already in development or recently released for Kubernetes as a cloud platform: cluster federation, the Dashboard UI, and &lt;a href="https://coreos.com/"&gt;CoreOS&lt;/a&gt;’s etcd operator. "I honestly believe it’s the most exciting thing I’ve seen in cloud infrastructure," he says, "because it’s a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure."&lt;br&gt;&lt;br&gt;
 Box, with its early decision to use bare metal, embarked on its Kubernetes journey out of necessity. But Ghods says that even if companies don’t have to be agnostic about cloud providers today, Kubernetes may soon become the industry standard, as more and more tooling and extensions are built around the API.&lt;br&gt;&lt;br&gt;
 "The same way it doesn’t make sense to deviate from Linux because it’s such a standard," Ghods says, "I think Kubernetes is going down the same path. It is still early days—the documentation still needs work and the user experience for writing and publishing specs to the Kubernetes clusters is still rough. When you’re on the cutting edge you can expect to bleed a little. But the bottom line is, this is where the industry is going. Three to five years from now it’s really going to be shocking if you run your infrastructure any other way."
 &lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Capital One Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/capital-one/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/capital-one/</guid><description>&lt;div class="banner1 desktop" style="background-image: url('/images/case-studies/capitalone/banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/capitalone-logo.png" style="margin-bottom:-2%" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Supporting Fast Decisioning Applications with Kubernetes

&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Capital One&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;McLean, Virginia&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Retail banking&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 The team set out to build a provisioning platform for &lt;a href="https://www.capitalone.com/"&gt;Capital One&lt;/a&gt; applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.

&lt;br&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 The decision to run &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. There’s a degree of affinity in our product development."
&lt;/div&gt;

&lt;div class="col2"&gt;

&lt;h2&gt;Impact&lt;/h2&gt;
 "Kubernetes is a significant productivity multiplier," says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would "easily see our costs triple, quadruple what they are now for the amount of pure AWS expense." Time to market has been improved as well: "Now, a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer." Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple hours with Kubernetes automation and declarative configuration.


&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/UHVW01ksg-s" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen&gt;&lt;/iframe&gt;&lt;br&gt;&lt;br&gt;
"With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before." &lt;span style="font-size:16px;text-transform:uppercase"&gt;— Jamil Jadallah, Scrum Master&lt;/span&gt;
&lt;/div&gt;
 &lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;h2&gt;&lt;/h2&gt;
 As a top 10 U.S. retail bank, Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team led by Senior Director Software Engineering John Swift embraced Kubernetes for its provisioning platform. "Kubernetes and its entire ecosystem are very strategic for us," says Swift. "We use Kubernetes as a substrate or an operating system, if you will. There’s a degree of affinity in our product development."&lt;br&gt;&lt;br&gt;
 Almost two years ago, the team embarked on this journey by first working with Docker. Then came Kubernetes. "We wanted to put streaming services into Kubernetes as one feature of the workloads for fast decisioning, and to be able to do batch alongside it," says Lead Software Engineer Keith Gasser. "Once the data is streamed and batched, there are so many tool sets in &lt;a href="https://flink.apache.org/"&gt;Flink&lt;/a&gt; that we use for decisioning. We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."



&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3" style="background-image: url('/images/case-studies/capitalone/banner3.jpg')"&gt;
 &lt;div class="banner3text"&gt;
 "We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."


 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 In this first year, the impact has already been great. "Time to market is really huge for us," says Gasser. "Especially with fraud, you have to be very nimble in the way you respond to threats in the marketplace—being able to add and push new rules, detect new patterns of behavior, detect anomalies in account and transaction flows." With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."&lt;br&gt;&lt;br&gt;
 Teams now have the tools to be autonomous in their deployments, and as a result, deployments have increased by two orders of magnitude. "And that was with just seven dedicated resources, without needing a whole group sitting there watching everything," says Scrum Master Jamil Jadallah. "That’s a huge cost savings. With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4" style="background-image: url('/images/case-studies/capitalone/banner4.jpg')"&gt;
 &lt;div class="banner4text"&gt;
 With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5" style="padding:0px !important"&gt;
&lt;div class="fullcol"&gt;
 Kubernetes has also been a great time-saver for Capital One’s required periodic "rehydration" of clusters from base AMIs. To minimize the attack vulnerability profile for applications in the cloud, "Our entire clusters get rebuilt from scratch periodically, with new fresh instances and virtual server images that are patched with the latest and greatest security patches," says Gasser. This process used to take the better part of a day, and personnel, to do manually. It’s now a quick Kubernetes job.&lt;br&gt;&lt;br&gt;
 Savings extend to both capital and operating expenses. "It takes very little to get into Kubernetes because it’s all open source," Gasser points out. "We went the DIY route for building our cluster, and we definitely like the flexibility of being able to embrace the latest from the community immediately without waiting for a downstream company to do it. There’s capex related to those licenses that we don’t have to pay for. Moreover, there’s capex savings for us from some of the proprietary software that we get to sunset in our particular domain. So that goes onto our ledger in a positive way as well." (Some of those open source technologies include Prometheus, Fluentd, gRPC, Istio, CNI, and Envoy.)

&lt;/div&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn’t account for personnel to deploy and maintain all the additional infrastructure."
 &lt;/div&gt;
&lt;/div&gt;

&lt;div class="fullcol"&gt;
 And on the opex side, Gasser says, the savings are high. "We run dozens of services, we have scores of pods, many daemon sets, and since we’re data-driven, we take advantage of EBS-backed volume claims for all of our stateful services. If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn’t account for personnel to deploy and maintain all the additional infrastructure."&lt;br&gt;&lt;br&gt;
 The team is confident that the benefits will continue to multiply—without a steep learning curve for the engineers being exposed to the new technology. "As we onboard additional tenants in this ecosystem, I think the need for folks to understand Kubernetes may not necessarily go up. In fact, I think it goes down, and that’s good," says Gasser. "Because that really demonstrates the scalability of the technology. You start to reap the benefits, and they can concentrate on all the features they need to build for great decisioning in the business—fraud decisions, credit decisions—and not have to worry about, ‘Is my AWS server broken? Is my pod not running?’"
&lt;/div&gt;

&lt;/section&gt;</description></item><item><title>CERN Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/cern/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/cern/</guid><description>&lt;div class="banner1" style="background-image: url('/images/case-studies/cern/banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY: CERN&lt;br&gt; &lt;div class="subhead" style="margin-top:1%"&gt;CERN: Processing Petabytes of Data More Efficiently with Kubernetes

&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;CERN&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Geneva, Switzerland
&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Particle physics research&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1" style="width:100%""&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 At CERN, the European Organization for Nuclear Research, physicists conduct experiments to learn about fundamental science. In its particle accelerators, "we accelerate protons to very high energy, close to the speed of light, and we make the two beams of protons collide," says CERN Software Engineer Ricardo Rocha. "The end result is a lot of data that we have to process." CERN currently stores 330 petabytes of data in its data centers, and an upgrade of its accelerators expected in the next few years will drive that number up by 10x. Additionally, the organization experiences extreme peaks in its workloads during periods prior to big conferences, and needs its infrastructure to scale to those peaks. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up," says Rocha. "We’ve been looking to new technologies that can help improve our efficiency in our infrastructure so that we can dedicate more of our resources to the actual processing of the data."
&lt;br&gt;&lt;br&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 CERN’s technology team embraced containerization and cloud native practices, choosing Kubernetes for orchestration, Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution inside the clusters. Kubernetes federation has allowed the organization to run some production workloads both on premise and in public clouds.
&lt;br&gt;&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 "Kubernetes gives us the full automation of the application," says Rocha. "It comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments." The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes. Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes. Initially, virtualization gave 20% overhead, but with tuning this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is expected to also get 10% of memory capacity back.

&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "Kubernetes is something we can relate to very much because it’s naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure."
&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Ricardo Rocha, Software Engineer, CERN&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;h2&gt;With a mission of researching fundamental science, and a stable of extremely large machines, the European Organization for Nuclear Research (CERN) operates at what can only be described as hyperscale. &lt;/h2&gt;
 Experiments are conducted in particle accelerators, the biggest of which is 27 kilometers in circumference. "We accelerate protons to very high energy, to close to the speed of light, and we make the two beams of protons collide in well-defined places," says CERN Software Engineer Ricardo Rocha. "We build experiments around these places where we do the collisions. The end result is a lot of data that we have to process."&lt;br&gt;&lt;br&gt;
 And he does mean a lot: CERN currently stores and processes 330 petabytes of data—gathered from 4,300 projects and 3,300 users—using 10,000 hypervisors and 320,000 cores in its data centers. &lt;br&gt;&lt;br&gt;
 Over the years, the CERN technology department has built a large computing infrastructure, based on OpenStack private clouds, to help the organization’s physicists analyze and treat all this data. The organization experiences extreme peaks in its workloads. "Very often, just before conferences, physicists want to do an enormous amount of extra analysis to publish their papers, and we have to scale to these peaks, which means overcommitting resources in some cases," says Rocha. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up."&lt;br&gt;&lt;br&gt;
 Additionally, a few years ago, CERN announced that it would be doing a big upgrade of its accelerators, which will mean a ten-fold increase in the amount of data that can be collected. "So we’ve been looking to new technologies that can help improve our efficiency in our infrastructure, so that we can dedicate more of our resources to the actual processing of the data," says Rocha.

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3" style="background-image: url('/images/case-studies/cern/banner3.jpg')"&gt;
 &lt;div class="banner3text"&gt;
 "Before, the tendency was always: ‘I need this, I get a couple of developers, and I implement it.’ Right now it’s ‘I need this, I’m sure other people also need this, so I’ll go and ask around.’ The CNCF is a good source because there’s a very large catalog of applications available. It’s very hard right now to justify developing a new product in-house. There is really no real reason to keep doing that. It’s much easier for us to try it out, and if we see it’s a good solution, we try to reach out to the community and start working with that community." &lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Ricardo Rocha, Software Engineer, CERN&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 Rocha’s team started looking at Kubernetes and containerization in the second half of 2015. "We’ve been using distributed infrastructures for decades now," says Rocha. "Kubernetes is something we can relate to very much because it’s naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure."&lt;br&gt;&lt;br&gt;
 The team created a prototype system for users to deploy their own Kubernetes cluster in CERN’s infrastructure, and spent six months validating the use cases and making sure that Kubernetes integrated with CERN’s internal systems. The main use case is batch workloads, which represent more than 80% of resource usage at CERN. (One single project that does most of the physics data processing and analysis alone consumes 250,000 cores.) "This is something where the investment in simplification of the deployment, logging, and monitoring pays off very quickly," says Rocha. Other use cases include Spark-based data analysis and machine learning to improve physics analysis. "The fact that most of these technologies integrate very well with Kubernetes makes our lives easier," he adds.&lt;br&gt;&lt;br&gt;
 The system went into production in October 2016, also using Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution within the cluster. "One thing that Kubernetes gives us is the full automation of the application," says Rocha. "So it comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments." The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes.&lt;br&gt;&lt;br&gt; Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes.

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4" style="background-image: url('/images/case-studies/cern/banner4.jpg')"&gt;
 &lt;div class="banner4text"&gt;
 "With Kubernetes, there’s a well-established technology and a big community that we can contribute to. It allows us to do our physics analysis without having to focus so much on the lower level software. This is just exciting. We are looking forward to keep contributing to the community and collaborating with everyone."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Ricardo Rocha, Software Engineer, CERN&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5" style="padding:0px !important"&gt;
&lt;div class="fullcol"&gt;
 Rocha points out that the metric used in the particle accelerators may be events per second, but in reality "it’s how fast and how much of the data we can process that actually counts." And efficiency has certainly been improved with Kubernetes. Initially, virtualization gave 20% overhead, but with tuning this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is expected to also get 10% of memory capacity back.&lt;br&gt;&lt;br&gt;
 Kubernetes federation, which CERN has been using for a portion of its production workloads since February 2018, has allowed the organization to adopt a hybrid cloud strategy. And it was remarkably simple to do. "We had a summer intern working on federation," says Rocha. "For many years, I’ve been developing distributed computing software, which took like a decade and a lot of effort from a lot of people to stabilize and make sure it works. And for our intern, in a couple of days he was able to demo to me and my team that we had a cluster at CERN and a few clusters outside in public clouds that were federated together and that we could submit workloads to. This was shocking for us. It really shows the power of using this kind of well-established technologies." &lt;br&gt;&lt;br&gt;
 With such results, adoption of Kubernetes has made rapid gains at CERN, and the team is eager to give back to the community. "If we look back into the ’90s and early 2000s, there were not a lot of companies focusing on systems that have to scale to this kind of size, storing petabytes of data, analyzing petabytes of data," says Rocha. "The fact that Kubernetes is supported by such a wide community and different backgrounds, it motivates us to contribute back."

&lt;/div&gt;

&lt;div class="banner5" &gt;
 &lt;div class="banner5text"&gt;
"This means that the physicist can build his or her analysis and publish it in a repository, share it with colleagues, and in 10 years redo the same analysis with new data. If we looked back even 10 years, this was just a dream."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Ricardo Rocha, Software Engineer, CERN&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;div class="fullcol"&gt;
 These new technologies aren’t just enabling infrastructure improvements. CERN also uses the Kubernetes-based &lt;a href="https://github.com/recast-hep"&gt;Reana/Recast&lt;/a&gt; platform for reusable analysis, which is "the ability to define physics analysis as a set of workflows that are fully containerized in one single entry point," says Rocha. "This means that the physicist can build his or her analysis and publish it in a repository, share it with colleagues, and in 10 years redo the same analysis with new data. If we looked back even 10 years, this was just a dream."&lt;br&gt;&lt;br&gt;
 All of these things have changed the culture at CERN considerably. A decade ago, "The tendency was always: ‘I need this, I get a couple of developers, and I implement it,’" says Rocha. "Right now it’s ‘I need this, I’m sure other people also need this, so I’ll go and ask around.’ The CNCF is a good source because there’s a very large catalog of applications available. It’s very hard right now to justify developing a new product in-house. There is really no real reason to keep doing that. It’s much easier for us to try it out, and if we see it’s a good solution, we try to reach out to the community and start working with that community."

&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>中国联通案例研究</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/chinaunicom/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/chinaunicom/</guid><description>&lt;!--
 title: China Unicom Case Study
linkTitle: chinaunicom
case_study_styles: true
cid: caseStudies
featured: false

new_case_study_styles: true
heading_background: /images/case-studies/chinaunicom/banner1.jpg
heading_title_logo: /images/chinaunicom_logo.png
subheading: &gt;
 China Unicom: How China Unicom Leveraged Kubernetes to Boost Efficiency and Lower IT Costs
case_study_details:
 - Company: China Unicom
 - Location: Beijing, China
 - Industry: Telecom
--&gt;

&lt;!--
&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;China Unicom is one of the top three telecom operators in China, and to serve its 300 million users, the company runs several data centers with thousands of servers in each, using &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; containerization and &lt;a href="https://www.vmware.com/"&gt;VMWare&lt;/a&gt; and &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt; infrastructure since 2016. Unfortunately, "the resource utilization rate was relatively low," says Chengyu Zhang, Group Leader of Platform Technology R&amp;D, "and we didn't have a cloud platform to accommodate our hundreds of applications." Formerly an entirely state-owned company, China Unicom has in recent years taken private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and is now focusing on internal development using open source technology, rather than commercial products. As such, Zhang's China Unicom Lab team began looking for open source orchestration for its cloud infrastructure.&lt;/p&gt;</description></item><item><title>City of Montreal Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/city-of-montreal/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/city-of-montreal/</guid><description>&lt;div class="banner1" style="background-image: url('/images/case-studies/montreal/banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/montreal_logo.png" class="header_logo" style="width:20%;margin-bottom:-1.2%"&gt;&lt;br&gt; &lt;div class="subhead" style="margin-top:1%"&gt;City of Montréal - How the City of Montréal Is Modernizing Its 30-Year-Old, Siloed&amp;nbsp;Architecture&amp;nbsp;with&amp;nbsp;Kubernetes

&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;City of Montréal&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Montréal, Québec, Canada&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Government&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1" style="width:100%""&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 Like many governments, Montréal has a number of legacy systems, and “we have systems that are older than some developers working here,” says the city’s CTO, Jean-Martin Thibault. “We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Like all big corporations, some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.” There are over 1,000 applications in all, and most of them were running on different ecosystems. In 2015, a new management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance for the city. They needed to figure out how to modernize the architecture.

 &lt;h2&gt;Solution&lt;/h2&gt;
 The first step was containerization. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins to deploy. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. They soon realized they needed orchestration as well, and opted for Kubernetes. Says Enterprise Architect Morgan Martinet: “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what’s required to run the infrastructure. It was becoming a de facto standard.”
&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 The time to market has improved drastically, from many months to a few weeks. Deployments went from months to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks, easily,” says Thibault. “Now you don’t even have to ask for anything. You just create your project and it gets deployed.” Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run on Kubernetes would have required hundreds of virtual machines, and now, if we’re talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And it’s all done with a small team of just 5 people operating the Kubernetes clusters.
&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "We realized the limitations of having a non-orchestrated Docker environment. Kubernetes came to the rescue, bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users."
&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- JEAN-MARTIN THIBAULT, CTO, CITY OF MONTRÉAL&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;h2&gt;The second biggest municipality in Canada, Montréal has a large number of legacy systems keeping the government running. And while they don’t quite date back to the city’s founding in 1642, “we have systems that are older than some developers working here,” jokes the city’s CTO, Jean-Martin Thibault.&lt;/h2&gt;
 “We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.”
&lt;br&gt;&lt;br&gt;
 In recent years, that fact became a big pain point. There are over 1,000 applications in all, running on almost as many different ecosystems. In 2015, a new city management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance. “The organization was siloed, so as a result the architecture was siloed,” says Thibault. “Once we got integrated into one IT team, we decided to redo an overall enterprise architecture.”
&lt;br&gt;&lt;br&gt;
 The first step to modernize the architecture was containerization. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins for deployment.
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3" style="background-image: url('/images/case-studies/montreal/banner3.jpg')"&gt;
 &lt;div class="banner3text"&gt;
 "Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It’s no longer dependent on deployment. Deployment is so fast that it’s negligible."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- MARC KHOUZAM, SOLUTIONS ARCHITECT, CITY OF MONTRÉAL&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 But this Docker farm setup had some limitations, including the lack of self-healing and dynamic scaling based on traffic, and the effort required to optimize server resources and scale to multiple instances of the same container. The team soon realized they needed orchestration as well. “Kubernetes came to the rescue,” says Thibault, “bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users.”
&lt;br&gt;&lt;br&gt;
 The team had evaluated several orchestration solutions, but Kubernetes stood out because it addressed all of the pain points. (They were also inspired by Yahoo! Japan’s use case, which the team members felt came close to their vision.) “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what’s required to run the infrastructure,” says Enterprise Architect Morgan Martinet. “It was becoming a de facto standard. It also promised portability across cloud providers. The choice of Kubernetes now gives us many options such as running clusters in-house or in any IaaS provider, or even using Kubernetes-as-a-service in any of the major cloud providers.”
&lt;br&gt;&lt;br&gt;
 Another important factor in the decision was vendor neutrality. “As a government entity, it is essential for us to be neutral in our selection of products and providers,” says Thibault. “The independence of the Cloud Native Computing Foundation from any company provides this.”
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4" style="background-image: url('/images/case-studies/montreal/banner4.jpg')"&gt;
 &lt;div class="banner4text"&gt;
 "Kubernetes has been great. It’s been stable, and it provides us with elasticity, resilience, and robustness. While re-architecting for Kubernetes, we also benefited from the monitoring and logging aspects, with centralized logging, Prometheus logging, and Grafana dashboards. We have enhanced visibility of what’s being deployed." &lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5" style="padding:0px !important"&gt;
&lt;div class="fullcol"&gt;
 The Kubernetes implementation began with the deployment of a small cluster using an internal Ansible playbook, which was soon replaced by the Kismatic distribution. Given the complexity they saw in operating a Kubernetes platform, they decided to provide development groups with an automated CI/CD solution based on Helm. “An integrated CI/CD solution on Kubernetes standardized how the various development teams designed and deployed their solutions, but allowed them to remain independent,” says Khouzam.
&lt;br&gt;&lt;br&gt;
 During the re-architecting process, the team also added Prometheus for monitoring and alerting, Fluentd for logging, and Grafana for visualization. “We have enhanced visibility of what’s being deployed,” says Martinet. Adds Khouzam: “The big benefit is we can track anything, even things that don’t run inside the Kubernetes cluster. It’s our way to unify our monitoring effort.”
&lt;br&gt;&lt;br&gt;
 All together, the cloud native solution has had a positive impact on velocity as well as administrative overhead. With standardization, code generation, automatic deployments into Kubernetes, and standardized monitoring through Prometheus, the time to market has improved drastically, from many months to a few weeks. Deployments went from months and weeks of planning down to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks to properly provision,” says Thibault. Plus, for dedicated systems, experts often had to be brought in to install them with their own recipes, which could take weeks and months.
&lt;br&gt;&lt;br&gt;
 Now, says Khouzam, “we can deploy pretty much any application that’s been Dockerized without any help from anybody. Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It’s no longer dependent on deployment. Deployment is so fast that it’s negligible.”

&lt;/div&gt;

&lt;div class="banner5" &gt;
 &lt;div class="banner5text"&gt;
"We’re working with the market when possible, to put pressure on our vendors to support Kubernetes, because it’s a much easier solution to manage."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;div class="fullcol"&gt;
 Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run in Kubernetes would have required hundreds of virtual machines, and now, if we’re talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And it’s all done with a small team of just five people operating the Kubernetes clusters. Adds Martinet: “It’s a dramatic improvement no matter what you measure.”
&lt;br&gt;&lt;br&gt;
 So it should come as no surprise that the team’s strategy going forward is to target Kubernetes as much as they can. “If something can’t run inside Kubernetes, we’ll wait for it,” says Thibault. That means they haven’t moved any of the city’s Windows systems onto Kubernetes, though it’s something they would like to do. “We’re working with the market when possible, to put pressure on our vendors to support Kubernetes, because it’s a much easier solution to manage,” says Martinet.
&lt;br&gt;&lt;br&gt;
 Thibault sees a near future where 60% of the city’s workloads are running on a Kubernetes platform—basically any and all of the use cases that they can get to work there. “It’s so much more efficient than the way we used to do things,” he says. “There’s no looking back.”

&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Crowdfire Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/crowdfire/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/crowdfire/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/crowdfire_logo.png" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;How to Keep Iterating a Fast-Growing App With a Cloud-Native Approach&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Crowdfire&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Mumbai, India&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Social Media Software&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 &lt;a href="https://www.crowdfireapp.com/"&gt;Crowdfire&lt;/a&gt; helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on &lt;a href="https://cloud.google.com/appengine/"&gt;Google App Engine&lt;/a&gt;, and in 2015, the company began a transformation to microservices running on Amazon Web Services &lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;Elastic Beanstalk&lt;/a&gt;. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.&lt;br&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 "We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; and &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;.
 &lt;br&gt;
 &lt;/div&gt;

&lt;div class="col2"&gt;

&lt;h2&gt;Impact&lt;/h2&gt;
 "Kubernetes has helped us reduce the deployment time from 15 minutes to less than a minute," says Singh. "Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure." Plus, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it’s finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines."
&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "In the 15 months that we’ve been using Kubernetes, it has been amazing for us. It enabled us to iterate quickly, increase development speed, and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Amanpreet Singh, Software Engineer at Crowdfire&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;h2&gt;"If you build it, they will come."&lt;/h2&gt;
 For most content creators, only half of that movie quote may ring true. Sure, platforms like Wordpress, YouTube and Shopify have made it simple for almost anyone to start publishing new content online, but attracting an audience isn’t as easy. Crowdfire "helps users publish their content to all possible places where their audience exists," says Amanpreet Singh, a Software Engineer at the company based in Mumbai, India. Crowdfire has gained more than 16 million users—from bloggers and artists to makers and small businesses—since its launch in 2010.&lt;br&gt;&lt;br&gt;
 With that kind of growth—and a high demand from users for new features and continuous improvements—the Crowdfire team struggled to keep up behind the scenes. In 2015, they moved their monolith Java application to Amazon Web Services &lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;Elastic Beanstalk&lt;/a&gt; and started breaking it down into microservices.&lt;br&gt;&lt;br&gt;
 It was a good first step, but the team soon realized they needed to go further down the cloud-native path, which would lead them to Kubernetes. "It was okay for our use cases initially, but as the number of services and development teams increased and we scaled further, deploy times, self-healing capabilities and resource utilization started to become problematic," says Singh, who leads the infrastructure team at Crowdfire. "We realized that we needed a more cloud-native approach to deal with these issues."&lt;br&gt;&lt;br&gt;
 As he looked around for solutions, Singh had a checklist of what Crowdfire needed. "We wanted to keep some things separate so they could be shipped independent of other things; this would help remove blockers and let different teams work at their own pace," he says. "We also make a lot of data-driven decisions, so shipping a feature and its iterations quickly was a must."&lt;br&gt;&lt;br&gt;
 Kubernetes checked all the boxes and then some. "One of the best things was the built-in service discovery," he says. "When you have a bunch of microservices that need to call each other, having internal DNS readily available and service IPs and ports automatically set as environment variables help a lot." Plus, he adds, "Kubernetes’s opinionated approach made it easier to get started."
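The built-in service discovery Singh praises can be sketched with a minimal Service manifest; the service name and ports below are hypothetical, not from Crowdfire's setup:

```yaml
# Hypothetical Service exposing a "billing" microservice inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: billing
spec:
  selector:
    app: billing        # routes traffic to Pods labeled app=billing
  ports:
    - port: 80          # cluster-internal Service port
      targetPort: 8080  # the container's listening port
```

Other Pods can then call the service via the cluster DNS name (e.g. `billing.default.svc.cluster.local`), and Pods created afterwards in the same namespace automatically receive environment variables such as `BILLING_SERVICE_HOST` and `BILLING_SERVICE_PORT`—the two mechanisms Singh describes.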

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible.
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 There was another compelling business reason for the cloud-native approach. "In today’s world of ever-changing business requirements, using cloud native technology provides a variety of options to choose from—even the ability to run services in a hybrid cloud environment," says Singh. "Businesses can keep services in a region closest to the users, and thus benefit from high-availability and resiliency."&lt;br&gt;&lt;br&gt;
 So in February 2016, Singh set up a test Kubernetes cluster using the kube-up scripts provided. "I explored the features and was able to deploy an application pretty easily," he says. "However, it seemed like a black box since I didn’t understand the components completely, and had no idea what the kube-up script did under the hood. So when it broke, it was hard to find the issue and fix it." &lt;br&gt;&lt;br&gt;
 To get a better understanding, Singh dove into the internals of Kubernetes, reading the docs and even some of the code. And he looked to the Kubernetes community for more insight. "I used to stay up a little late every night (a lot of users were active only when it’s night here in India) and would try to answer questions on the Kubernetes community Slack from users who were getting started," he says. "I would also follow other conversations closely. I must admit I was able to avoid a lot of issues in our setup because I knew others had faced the same issues."&lt;br&gt;&lt;br&gt;
 Based on the knowledge he gained, Singh decided to implement a custom setup of Kubernetes based on &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; and &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;. "I wrote Terraform to launch Kubernetes master and nodes (Auto Scaling Groups) and an Ansible playbook to install the required components," he says. (The company recently switched to using prebaked &lt;a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html"&gt;AMIs&lt;/a&gt; to make the node bringup faster, and is planning to change its networking layer.) &lt;br&gt;&lt;br&gt;
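The case study doesn't publish Crowdfire's playbooks, but the Ansible half of a setup like the one Singh describes might look roughly like this sketch (hosts, versions, and tasks are all illustrative assumptions):

```yaml
# Hypothetical Ansible playbook sketch: prepare newly launched Auto Scaling
# Group instances as Kubernetes nodes. Assumes a systemd unit for kubelet
# is installed elsewhere in the playbook.
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Install the Docker container runtime
      package:
        name: docker
        state: present

    - name: Download the kubelet binary
      get_url:
        url: "https://dl.k8s.io/release/v1.2.0/bin/linux/amd64/kubelet"
        dest: /usr/local/bin/kubelet
        mode: "0755"

    - name: Ensure kubelet is running
      service:
        name: kubelet
        state: started
        enabled: true
```

Terraform would provision the master and the Auto Scaling Groups themselves; baking these steps into a prebaked AMI instead, as Crowdfire later did, removes this per-node install time from node bringup.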

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 First, the team migrated a few staging services from Elastic Beanstalk to the new Kubernetes staging cluster, and then set up a production cluster a month later to deploy some services. The results were convincing. "By the end of March 2016, we established that all the new services must be deployed on Kubernetes," says Singh. "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure." On top of that, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it’s finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines. This brings more visibility into the changes being made, and keeps an audit trail."&lt;br&gt;&lt;br&gt;
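The workflow Singh describes—committing config changes that pipelines then replicate to production—typically means keeping manifests like this one in the service's repository; the names and image here are hypothetical:

```yaml
# Hypothetical Deployment manifest checked into the service's repo.
# A CI/CD pipeline applies it to the cluster on each merge, which is
# what makes every change visible and leaves an audit trail in git.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendations
spec:
  replicas: 3
  selector:
    matchLabels:
      app: recommendations
  template:
    metadata:
      labels:
        app: recommendations
    spec:
      containers:
        - name: app
          image: registry.example.com/recommendations:1.4.2  # bumping this tag is the committed "config change"
          ports:
            - containerPort: 8080
```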
 Over the next six months, the team worked on migrating all the services from Elastic Beanstalk to Kubernetes, except for the few that were deprecated and would soon be terminated anyway. The services were moved one at a time, and their performance was monitored for two to three days each. Today, "We’re completely migrated and we run all new services on Kubernetes," says Singh. &lt;br&gt;&lt;br&gt;
 The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have been decreased by as much as 50%.&lt;br&gt;&lt;br&gt;
 All 30 engineers at Crowdfire were onboarded at once. "I gave an internal talk where I shared the basic components and demoed the usage of kubectl," says Singh. "Everyone was excited and happy about using Kubernetes. Developers have more control and visibility into their applications running in production now. Most of all, they’re happy with the low deploy times and self-healing services." &lt;br&gt;&lt;br&gt;
 And they’re much more productive, too. "Where we used to do about 5 deployments per day," says Singh, "now we’re doing 30+ production and 50+ staging deployments almost every day."


&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have been decreased by as much as 50%.

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;

 Singh notes that almost all of the engineers interact with the staging cluster on a daily basis, and that has created a cultural change at Crowdfire. "Developers are more aware of the cloud infrastructure now," he says. "They’ve started following cloud best practices like better health checks, structured logs to stdout [standard output], and config via files or environment variables."&lt;br&gt;&lt;br&gt;
 With Crowdfire’s commitment to Kubernetes, Singh is looking to expand the company’s cloud-native stack. The team already uses &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; for monitoring, and he says he is evaluating &lt;a href="https://linkerd.io/"&gt;Linkerd&lt;/a&gt; and &lt;a href="https://envoyproxy.github.io/"&gt;Envoy Proxy&lt;/a&gt; as a way to "get more metrics about request latencies and failures, and handle them better." Other CNCF projects, including &lt;a href="http://opentracing.io/"&gt;OpenTracing&lt;/a&gt; and &lt;a href="https://grpc.io/"&gt;gRPC&lt;/a&gt; are also on his radar.&lt;br&gt;&lt;br&gt;
 Singh has found that the cloud-native community is growing in India, too, particularly in Bangalore. "A lot of startups and new companies are starting to run their infrastructure on Kubernetes," he says. &lt;br&gt;&lt;br&gt;
 And when people ask him about Crowdfire’s experience, he has this advice to offer: "Kubernetes is a great piece of technology, but it might not be right for you, especially if you have just one or two services or your app isn’t easy to run in a containerized environment," he says. "Assess your situation and the value that Kubernetes provides before going all in. If you do decide to use Kubernetes, make sure you understand the components that run under the hood and what role they play in smoothly running the cluster. Another thing to consider is if your apps are ‘Kubernetes-ready,’ meaning if they have proper health checks and handle termination signals to shut down gracefully."&lt;br&gt;&lt;br&gt;
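The "Kubernetes-ready" checklist Singh mentions—proper health checks and graceful handling of termination signals—maps onto probe and grace-period settings like these; this is a minimal sketch with hypothetical names, not Crowdfire's actual spec:

```yaml
# Hypothetical Pod spec illustrating "Kubernetes-ready" settings:
# probes for health checking, and a grace period during which the
# container receives SIGTERM and should shut down cleanly.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  terminationGracePeriodSeconds: 30   # SIGTERM first; SIGKILL only after 30s
  containers:
    - name: app
      image: registry.example.com/web:latest
      readinessProbe:                 # gate traffic until the app reports ready
        httpGet:
          path: /healthz
          port: 8080
      livenessProbe:                  # restart the container if it stops responding
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
```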
 And if your company fits that profile, go for it. Crowdfire clearly did—and is now reaping the benefits. "In the 15 months that we’ve been using Kubernetes, it has been amazing for us," says Singh. "It enabled us to iterate quickly, increase development speed and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."


&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>DaoCloud 案例分析</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/daocloud/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/daocloud/</guid><description>&lt;!--
title: DaoCloud Case Study
linkTitle: DaoCloud
case_study_styles: true
cid: caseStudies
logo: daocloud_featured_logo.svg

css: /css/style_daocloud.css
new_case_study_styles: true
heading_background: /images/case-studies/daocloud/banner1.jpg
heading_title_logo: /images/daocloud-light.svg
subheading: &gt;
 Seek Global Optimal Solutions for Digital World
case_study_details:
 - Company: DaoCloud
 - Location: Shanghai, China
 - Industry: Cloud Native
--&gt;

&lt;h2&gt;Challenges&lt;/h2&gt;

&lt;!--
&lt;p&gt;&lt;a href="https://www.daocloud.io/en/"&gt;DaoCloud&lt;/a&gt;, founded in 2014, is an innovation leader in the field of cloud native. It boasts independent intellectual property rights of core technologies for crafting an open cloud platform to empower the digital transformation of enterprises.&lt;/p&gt;</description></item><item><title>Event Rate Limit Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/</guid><description>&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#eventratelimit-admission-k8s-io-v1alpha1-Configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="eventratelimit-admission-k8s-io-v1alpha1-Configuration"&gt;&lt;code&gt;Configuration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
Configuration provides configuration for the EventRateLimit admission controller.
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;eventratelimit.admission.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Configuration&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;limits&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#eventratelimit-admission-k8s-io-v1alpha1-Limit"&gt;&lt;code&gt;[]Limit&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;code&gt;limits&lt;/code&gt; are the limits to place on event queries received.
Limits can be placed on events received server-wide, per namespace,
per user, and per source+object. At least one limit is required.
 &lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
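Putting the fields above together, a minimal configuration file for this admission controller might look like the following; the limit values are purely illustrative:

```yaml
# Illustrative EventRateLimit configuration: one server-wide limit and
# one per-namespace limit. At least one entry under `limits` is required.
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Server       # a single bucket shared by all event requests
    qps: 5000
    burst: 20000
  - type: Namespace    # a separate bucket per namespace
    qps: 50
    burst: 100
    cacheSize: 2000    # how many namespace buckets to keep cached
```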
&lt;h2 id="eventratelimit-admission-k8s-io-v1alpha1-Limit"&gt;&lt;code&gt;Limit&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
 &lt;h1&gt;CASE STUDY: &lt;img src="https://andygol-k8s.netlify.app/images/golfnow_logo.png" width="20%" style="margin-bottom:-6px"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;Saving Time and Money with Cloud Native Infrastructure&lt;/div&gt;
 &lt;/h1&gt;
&lt;/div&gt;

&lt;div class="details"&gt;
 Company&amp;nbsp;&lt;b&gt;GolfNow&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location&amp;nbsp;&lt;b&gt;Orlando, Florida&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry&amp;nbsp;&lt;b&gt;Golf Industry Technology and Services Provider&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;
 &lt;div class="cols"&gt;
 &lt;div class="col1"&gt;

 &lt;h2&gt;Challenge&lt;/h2&gt;
 A member of the &lt;a href="http://www.nbcunicareers.com/our-businesses/nbc-sports-group"&gt;NBC Sports Group&lt;/a&gt;, &lt;a href="https://www.golfnow.com/"&gt;GolfNow&lt;/a&gt; is the golf industry’s technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow’s monolithic application became problematic. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow’s Director, Architecture. "We wanted the ability to more easily expand globally."
 &lt;br&gt;
 &lt;/div&gt;

 &lt;div class="col2"&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 Turning to microservices and containerization, GolfNow began moving its applications and databases from third-party services to its own clusters running on &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; and &lt;a href="https://kubernetes.io/"&gt;Kubernetes.&lt;/a&gt;&lt;br&gt;&lt;br&gt;

 &lt;h2&gt;Impact&lt;/h2&gt;
 The results were immediate. While maintaining the same capacity—and beyond, during peak periods—GolfNow saw its infrastructure costs for the first application virtually cut in half.
 &lt;/div&gt;
 &lt;/div&gt;
&lt;/section&gt;


&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally. We were basically wasting money and doubling the cost of our infrastructure."&lt;br&gt;&lt;br&gt;&lt;span style="font-size:15px;letter-spacing:0.08em"&gt;- SHERIFF MOHAMED, DIRECTOR, ARCHITECTURE AT GOLFNOW&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;
 &lt;div class="fullcol"&gt;
 &lt;h2&gt;It’s not every day that you can say you’ve slashed an operating expense by half.&lt;/h2&gt;

 But Sheriff Mohamed and Josh Chandler did just that when they helped lead their company, &lt;a href="https://www.golfnow.com/"&gt;GolfNow&lt;/a&gt;, on a journey from a monolithic to a containerized, cloud native infrastructure managed by Kubernetes.
 &lt;br&gt; &lt;br&gt;
 A top-performing business within the NBC Sports Group, GolfNow is a technology and services company with the largest tee time marketplace in the world. GolfNow serves 5 million active golfers across 10 different products. In recent years, the business had grown so fast that the infrastructure supporting their giant monolithic application (written in C#.NET and backed by SQL Server database management system) could not keep up. "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally," says Sheriff, GolfNow’s Director, Architecture. "Our costs were growing exponentially. And on top of that, we had to build a Disaster Recovery (DR) environment, which then meant we’d have to copy exactly what we had in our original data center to another data center that was just the standby. We were basically wasting money and doubling the cost of our infrastructure."
 &lt;br&gt; &lt;br&gt;
 In moving just the first of GolfNow’s important applications—a booking engine for golf courses and B2B marketing platform—from third-party services to their own Kubernetes environment, "our bill went down drastically," says Sheriff.
 &lt;br&gt; &lt;br&gt;
 The path to those stellar results began in late 2014. In order to support GolfNow’s global growth, the team decided that the company needed to have multiple data centers and the ability to quickly and easily re-route traffic as needed. "From there we knew that we needed to go in a direction of breaking things apart, microservices, and containerization," says Sheriff. "At the time we were trying to get away from &lt;a href="https://www.microsoft.com/net"&gt;C#.NET&lt;/a&gt; and &lt;a href="https://www.microsoft.com/en-cy/sql-server/sql-server-2016"&gt;SQL Server&lt;/a&gt; since it didn’t run very well on Linux, where everything container-related was running smoothly."
 &lt;br&gt; &lt;br&gt;
 To that end, the team shifted to working with &lt;a href="https://nodejs.org/"&gt;Node.js&lt;/a&gt;, the open-source, cross-platform JavaScript runtime environment for developing tools and applications, and &lt;a href="https://www.mongodb.com/"&gt;MongoDB&lt;/a&gt;, the open-source database program. At the time, &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, the platform for deploying applications in containers, was still new. But once the team began experimenting with it, Sheriff says, "we realized that was the way we wanted to go, especially since that’s the way the industry is heading."
 &lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "The team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, 'Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn’t have to pay extra money at all.'"
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
 &lt;div class="fullcol"&gt;
 GolfNow’s dev team ran an "internal, low-key" proof of concept and were won over. "We really liked how easy it was to be able to pass containers around to each other and have them up and running in no time, exactly the way it was running on my machine," says Sheriff. "Because that is always the biggest gripe that Ops has with developers, right? ‘It worked on my machine!’ But then we started getting to the point of, ‘How do we make sure that these things stay up and running?’" &lt;br&gt;&lt;br&gt;
 That led the team on a quest to find the right orchestration system for the company’s needs. Sheriff says the first few options they tried were either too heavy or "didn’t feel quite right." In late summer 2015, they discovered the just-released &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, which Sheriff immediately liked for its ease of use. "We did another proof of concept," he says, "and Kubernetes won because of the fact that the community backing was there, built on top of what Google had already done."
 &lt;br&gt;&lt;br&gt;
 But before they could go with Kubernetes, &lt;a href="http://www.nbc.com/"&gt;NBC&lt;/a&gt;, GolfNow’s parent company, also asked them to comparison shop with another company. Sheriff and his team liked the competing company’s platform user interface, but didn’t like that its platform would not allow containers to run natively on Docker. With no clear decision in sight, Sheriff’s VP at GolfNow, Steve McElwee, set up a three-month trial during which a GolfNow team (consisting of Sheriff and Josh, who’s now Lead Architect, Open Platforms) would build out a Kubernetes environment, and a large NBC team would build out one with the other company’s platform.
 &lt;br&gt;&lt;br&gt;
 "We spun up the cluster and we tried to get everything to run the way we wanted it to run," Sheriff says. "The biggest thing that we took away from it is that not only did we want our applications to run within Kubernetes and Docker, we also wanted our databases to run there. We literally wanted our entire infrastructure to run within Kubernetes."
 &lt;br&gt;&lt;br&gt;
 At the time there was nothing in the community to help them get Kafka and MongoDB clusters running within a Kubernetes and Docker environment, so Sheriff and Josh figured it out on their own, taking a full month to get it right. "Everything started rolling from there," Sheriff says. "We were able to get all our applications connected, and we finished our side of the proof of concept a month in advance. My VP was like, ‘Alright, it’s over. Kubernetes wins.’"
 &lt;br&gt;&lt;br&gt;
 The next step, beginning in January 2016, was getting everything working in production. The team focused first on one application that was already written in Node.js and MongoDB. A booking engine for golf courses and B2B marketing platform, the application was already going in the microservice direction but wasn’t quite finished yet. At the time, it was running in &lt;a href="https://devcenter.heroku.com/articles/mongohq"&gt;Heroku and Compose&lt;/a&gt; and other third-party services—resulting in a large monthly bill.

 &lt;/div&gt;
&lt;/section&gt;

 &lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "'The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven’t come from the Kubernetes world you wouldn’t believe me.' Sheriff puts it in these terms: 'Before Kubernetes I wasn’t sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I’ve been sleeping at night.'"
 &lt;/div&gt;
 &lt;/div&gt;

&lt;section class="section4"&gt;
 &lt;div class="fullcol"&gt;
 "The goal was to take all of that out and put it within this new platform we’ve created with Kubernetes on &lt;a href="https://cloud.google.com/compute/"&gt;Google Compute Engine (GCE)&lt;/a&gt;," says Sheriff. "So we ended up building piece by piece, in parallel, what was out in Heroku and Compose, in our Kubernetes cluster. Then, literally, just switched configs in the background. So in Heroku we had the app running hitting a Compose database. We’d take the config, change it and make it hit the database that was running in our cluster."
 &lt;br&gt;&lt;br&gt;
 Using this procedure, they were able to migrate piecemeal, without any downtime. The first migration was done during off hours, but to test the limits, the team migrated the second database in the middle of the day, when lots of users were running the application. "We did it," Sheriff says, "and again it was successful. Nobody noticed."
 &lt;br&gt;&lt;br&gt;
 After three weeks of monitoring to make sure everything was running stably, the team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, "Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn’t have to pay extra money at all."
 &lt;br&gt;&lt;br&gt;
 Not only were they saving money, but they were also saving time. "I had a meeting this morning about migrating some applications from one cluster to another," says Josh. "I spent about 2 hours explaining the process. The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven’t come from the Kubernetes world you wouldn’t believe me." Sheriff puts it in these terms: "Before Kubernetes I wasn’t sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I’ve been sleeping at night."
 &lt;br&gt;&lt;br&gt;
 A small percentage of the applications on GolfNow have been migrated over to the Kubernetes environment. "Our Core Team is rewriting a lot of the .NET applications into &lt;a href="https://www.microsoft.com/net/core"&gt;.NET Core&lt;/a&gt; [which is compatible with Linux and Docker] so that we can run them within containers," says Sheriff.
 &lt;br&gt;&lt;br&gt;
 Looking ahead, Sheriff and his team want to spend 2017 continuing to build a whole platform around Kubernetes with &lt;a href="https://github.com/drone/drone"&gt;Drone&lt;/a&gt;, an open-source continuous delivery platform, to make it more developer-centric. "Now they’re able to manage configuration, they’re able to manage their deployments and things like that, making all these subteams that are now creating all these microservices, be self sufficient," he says. "So it can pull us away from applications and allow us to just make sure the cluster is running and healthy, and then actually migrate that over to our Ops team."

 &lt;/div&gt;
&lt;/section&gt;

 &lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. 'This is The Six Million Dollar Man of the cloud right now,' adds Josh. 'Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They’re faster, they’re more resilient.'"
 &lt;/div&gt;
 &lt;/div&gt;

&lt;section class="section5"&gt;
 &lt;div class="fullcol"&gt;
 And long-term, Sheriff has an even bigger goal for getting more people into the Kubernetes fold. "We’re actually trying to make this platform generic enough so that any of our sister companies can use it if they wish," he says. "Most definitely I think it can be used as a model. I think the way we migrated into it, the way we built it out, are all ways that I think other companies can learn from, and should not be afraid of."
 &lt;br&gt;&lt;br&gt;
 The GolfNow team is also giving back to the Kubernetes community by open-sourcing a bot framework that Josh built. "We noticed that the dashboard user interface is actually moving a lot faster than when we started," says Sheriff. "However we realized what we needed was something that’s more of a bot that really helps us administer Kubernetes as a whole through Slack." Josh explains: "With the Kubernetes-Slack integration, you can essentially hook into a cluster and then issue commands and edit configurations. We’ve tried to simplify the security configuration as much as possible. We hope this will be our major thank you to Kubernetes, for everything you’ve given us."
 &lt;br&gt;&lt;br&gt;
 Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. The lessons they’ve learned: "You’ve got to have buy-in from your boss," says Sheriff. "Another big deal is having two to three people dedicated to this type of endeavor. You can’t have people who are half in, half out." And if you don’t have buy-in from the get go, proving it out will get you there.
 &lt;br&gt;&lt;br&gt;
 "This is The Six Million Dollar Man of the cloud right now," adds Josh. "Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They’re faster, they’re more resilient."

 &lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Haufe Group Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/haufegroup/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/haufegroup/</guid><description>&lt;div class="banner1"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/haufegroup_logo.png" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Paving the Way for Cloud Native for Midsize Companies&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Haufe Group&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Freiburg, Germany&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Media and Software&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;

&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;Challenge&lt;/h2&gt;
 Founded in 1930 as a traditional publisher, Haufe Group has grown into a media and software company with 95 percent of its sales from digital products. Over the years, the company has gone from having "hardware in the basement" to outsourcing its infrastructure operations and IT. More recently, the development of new products, from Internet portals for tax experts to personnel training software, has created demands for increased speed, reliability and scalability. "We need to be able to move faster," says Solution Architect Martin Danielsson. "Adapting workloads is something that we really want to be able to do."
 &lt;br&gt;
 &lt;br&gt;
 &lt;h2&gt;Solution&lt;/h2&gt;
 Haufe Group began its cloud-native journey when &lt;a href="https://azure.microsoft.com/"&gt;Microsoft Azure&lt;/a&gt; became available in Europe; the company needed cloud deployments for its desktop apps with bandwidth-heavy download services. "After that, it has been different projects trying out different things," says Danielsson. Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy.
 &lt;/div&gt;
&lt;div class="col2"&gt;
 A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker. The company is now getting ready to go live with two services in production using &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; orchestration on &lt;a href="https://azure.microsoft.com/"&gt;Microsoft Azure&lt;/a&gt; and &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services&lt;/a&gt;. The team is also working on breaking up one of their core Java Enterprise desktop products into microservices to allow for better evolvability and dynamic scaling in the cloud.
&lt;br&gt;
&lt;br&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
 With the ability to adapt workloads, Danielsson says, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." Plus, shorter release times have had a major impact. "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," he says. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."

&lt;/div&gt;
&lt;/div&gt;

&lt;/section&gt;

&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 "Over the next couple of years, people won’t even think that much about it when they want to run containers. Kubernetes is going to be the go-to solution."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Martin Danielsson, Solution Architect, Haufe Group&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;

&lt;div class="fullcol"&gt;
 &lt;h2&gt;More than 80 years ago, Haufe Group was founded as a traditional publishing company, printing books and commentary on paper.&lt;/h2&gt; By the 1990s, though, the company’s leaders recognized that the future was digital, and to their credit, were able to transform Haufe Group into a media and software business that now gets 95 percent of its sales from digital products. "Among the German companies doing this, we were one of the early adopters," says Martin Danielsson, Solution Architect for Haufe Group.&lt;br&gt;&lt;br&gt;
 And now they’re leading the way for midsize companies embracing cloud-native technology like Kubernetes. "The really big companies like Ticketmaster and Google get it right, and the startups get it right because they’re faster," says Danielsson. "We’re in this big lump of companies in the middle with a lot of legacy, a lot of structure, a lot of culture that does not easily fit the cloud technologies. We’re just 1,500 people, but we have hundreds of customer-facing applications. So we’re doing things that will be relevant for many companies of our size or even smaller."&lt;br&gt;&lt;br&gt;
 Many of those legacy challenges stemmed from simply following the technology trends of the times. "We used to do full DevOps," he says. In the 1990s and 2000s, "that meant that you had your hardware in the basement. And then 10 years ago, the hype of the moment was to outsource application operations, outsource everything, and strip down your IT department to take away the distraction of all these hardware things. That’s not our area of expertise. We didn’t want to be an infrastructure provider. And now comes the backlash of that."&lt;br&gt;&lt;br&gt;
 Haufe Group began feeling the pain as they were developing more new products, from Internet portals for tax experts to personnel training software, that have created demands for increased speed, reliability and scalability. "Right now, we have this break in workflows, where we go from writing concepts to developing, handing it over to production and then handing that over to your host provider," he says. "And then when things go bad we have no clue what went wrong. We definitely want to take back control, and we want to move a lot faster. Adapting workloads is something that we really want to be able to do."&lt;br&gt;&lt;br&gt;
 Those needs led them to explore cloud-native technology. Their first foray into the cloud was doing deployments in &lt;a href="https://azure.microsoft.com/"&gt;Microsoft Azure&lt;/a&gt;, once it became available in Europe, for desktop products that had built-in download services. Hosting expenses for such bandwidth-heavy services were too high, so the company turned to the cloud. "After that, it has been different projects trying out different things," says Danielsson.
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn’t fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;

 Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy. A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker.
 Some experiments went further than others; German regulations about sensitive data proved to be a road block in moving some workloads to Azure and Amazon Web Services. "Due to our history, Germany is really strict with things like personally identifiable data," Danielsson says.&lt;br&gt;&lt;br&gt;
 These experiments took on new life with the arrival of the Azure Sovereign Cloud for Germany (an Azure clone run by the German T-Systems provider). With the availability of Azure.de—which conforms to Germany’s privacy regulations—teams started to seriously consider deploying production loads in Docker into the cloud. "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn’t fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."&lt;br&gt;&lt;br&gt;
 In parallel, Danielsson had built an API management system with the aim of supporting CI/CD scenarios, aspects of which were missing in off-the-shelf API management products. With a foundation based on &lt;a href="https://getkong.org/"&gt;Mashape’s Kong&lt;/a&gt; gateway, it is open-sourced as &lt;a href="http://wicked.haufe.io/"&gt;wicked.haufe.io&lt;/a&gt;. He put wicked.haufe.io to use with his product team.&lt;br&gt;&lt;br&gt; Otherwise, Danielsson says his philosophy was "don’t try to reinvent the wheel all the time. Go for what’s there and 99 percent of the time it will be enough. And if you think you really need something custom or additional, think perhaps once or twice again. One of the things that I find so amazing with this cloud-native framework is that everything ties in."&lt;br&gt;&lt;br&gt;
 Currently, Haufe Group is working on two projects using Kubernetes in production. One is a new mobile application for researching legislation and tax laws. "We needed a way to take out functionality from a legacy core and put an application on top of that with an API gateway—a lot of moving parts that screams containers," says Danielsson. So the team moved the build pipeline away from "deploying to some old, huge machine that you could deploy anything to" and onto a Kubernetes cluster where there would be automatic CI/CD "with feature branches and all these things that were a bit tedious in the past."
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 It was a proof of concept effort, and the proof was in the pudding. "Everyone was really impressed at what we accomplished in a week," says Danielsson. "We did these kinds of integrations just to make sure that we got a handle on how Kubernetes works. If you can create optimism and buzz around something, it’s half won. And if the developers and project managers know this is working, you’re more or less done." Adds Reinhardt: "You need to create some very visible, quick wins in order to overcome the status quo."&lt;br&gt;&lt;br&gt;
 The impact on the speed of deployment was clear: "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days." &lt;br&gt;&lt;br&gt;
 The potential impact on cost was another bonus. "Hosting applications is quite expensive, so moving to the cloud is something that we really want to be able to do," says Danielsson. With the ability to adapt workloads, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." &lt;br&gt;&lt;br&gt;
 Just as importantly, Danielsson says, there’s added flexibility: "When we try to move or rework applications that are really crucial, it’s often tricky to validate whether the path we want to take is going to work out well. In order to validate that, we would need to reproduce the environment and really do testing, and that’s prohibitively expensive and simply not doable with traditional host providers. Cloud native gives us the ability to do risky changes and validate them in a cost-effective way."&lt;br&gt;&lt;br&gt;
 As word of the two successful test projects spread throughout the company, interest in Kubernetes has grown. "We want to be able to support our developers in running Kubernetes clusters but we’re not there yet, so we allow them to do it as long as they’re aware that they are on their own," says Danielsson. "So that’s why we are also looking at things like [the managed Kubernetes platform] &lt;a href="https://coreos.com/tectonic/"&gt;CoreOS Tectonic&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/services/container-service/"&gt;Azure Container Service&lt;/a&gt;, &lt;a href="https://aws.amazon.com/ecs/"&gt;ECS&lt;/a&gt;, etc. These kinds of services will be a lot more relevant to midsize companies that want to leverage cloud native but don’t have the IT departments or the structure around that."&lt;br&gt;&lt;br&gt;
 In the next year and a half, Danielsson says the company will be working on moving one of their legacy desktop products, a web app for researching legislation and tax laws originally built in Java Enterprise, onto cloud-native technology. "We’re doing a microservice split out right now so that we can independently deploy the different parts," he says. The main website, which provides free content for customers, is also moving to cloud native.

&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."

 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
 But with these goals, Danielsson believes there are bigger cultural challenges that need to be constantly addressed. The move to new technology, not to mention a shift toward DevOps, means a lot of change for employees. "The roles were rather fixed in the past," he says. "You had developers, you had project leads, you had testers. And now you get into these really, really important things like test automation. Testers aren’t actually doing click testing anymore, and they have to write automated testing. And if you really want to go full-blown CI/CD, all these little pieces have to work together so that you get the confidence to do a check in, and know this check in is going to land in production, because if I messed up, some test is going to break. This is a really powerful thing because whatever you do, whenever you merge something into the trunk or to the master, this is going live. And that’s where you either get the people or they run away screaming."
 Danielsson understands that it may take some people much longer to get used to the new ways.&lt;br&gt;&lt;br&gt;
 "Culture is nothing that you can force on people," he says. "You have to live it for yourself. You have to evangelize. You have to show the advantages time and time again: This is how you can do it, this is what you get from it." To that end, his team has scheduled daylong workshops for the staff, bringing in outside experts to talk about everything from API to Devops to cloud. &lt;br&gt;&lt;br&gt;
 For every person who runs away screaming, many others get drawn in. "Get that foot in the door and make them really interested in this stuff," says Danielsson. "Usually it catches on. We have people you never would have expected chanting, ‘Docker Docker Docker’ now. It’s cool to see them realize that there is a world outside of their Python libraries. It’s awesome to see them really work with Kubernetes."&lt;br&gt;&lt;br&gt;
 Ultimately, Reinhardt says, "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."

&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>Image Policy API (v1alpha1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/imagepolicy.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/imagepolicy.v1alpha1/</guid><description>&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#imagepolicy-k8s-io-v1alpha1-ImageReview"&gt;ImageReview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="imagepolicy-k8s-io-v1alpha1-ImageReview"&gt;&lt;code&gt;ImageReview&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
ImageReview checks if the set of images in a pod are allowed.
--&gt;
&lt;code&gt;ImageReview&lt;/code&gt; 检查某个 Pod 中是否可以使用某些镜像。
&lt;/p&gt;
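&lt;p&gt;
&lt;!--
The following hand-written sketch (not part of the auto-generated reference) shows what a minimal ImageReview object sent to a policy backend might look like; the image name, annotation and namespace are illustrative values.
--&gt;
下面是一个手写的示意性示例（并非自动生成的参考内容），展示发送给策略后端的一个最小
ImageReview 对象可能的样子；其中的镜像名、注解和名字空间均为示意值。
&lt;/p&gt;

```yaml
apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
spec:
  # 待评估的 Pod 中的镜像列表
  containers:
    - image: myrepo/myimage:v1
  # 与镜像策略相关（*.image-policy.k8s.io/* 前缀）的 Pod 注解会被一并传递
  annotations:
    mycluster.image-policy.k8s.io/ticket-1234: break-glass
  # Pod 所在的名字空间
  namespace: mynamespace
```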
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;imagepolicy.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ImageReview&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;br/&gt;
&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#objectmeta-v1-meta"&gt;&lt;code&gt;meta/v1.ObjectMeta&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
 &lt;!--
 Standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
 --&gt;
 标准的对象元数据。更多信息：https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
&lt;/p&gt;
 &lt;!--
 Refer to the Kubernetes API documentation for the fields of the &lt;code&gt;metadata&lt;/code&gt; field.
 --&gt;
&lt;p&gt;参阅 Kubernetes API 文档了解 &lt;code&gt;metadata&lt;/code&gt; 字段的内容。&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;spec&lt;/code&gt; &lt;B&gt;&lt;!--Required--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec"&gt;&lt;code&gt;ImageReviewSpec&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 Spec holds information about the pod being evaluated
 --&gt;
 &lt;code&gt;spec&lt;/code&gt; 中包含与被评估的 Pod 相关的信息。
 &lt;/p&gt;</description></item><item><title>JD.com Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/jd-com/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/jd-com/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;With more than 300 million active users and total 2017 revenue of more than $55 billion, &lt;a href="https://corporate.JD.com/home"&gt;JD.com&lt;/a&gt; is China's largest retailer, and its operations are the epitome of hyperscale. For example, there are more than a trillion images in JD.com's product databases—with 100 million being added daily—and this enormous amount of data needs to be instantly accessible. In 2014, JD.com moved its applications to containers running on bare metal machines using OpenStack and Docker to "speed up the delivery of our computing resources and make the operations much simpler," says Haifeng Liu, JD.com's Chief Architect. But by the end of 2015, with tens of thousands of nodes running in multiple data centers, "we encountered a lot of problems because our platform was not strong enough, and we suffered from bottlenecks and scalability issues," says Liu. "We needed infrastructure for the next five years of development, now."&lt;/p&gt;</description></item><item><title>kube-apiserver Admission (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-admission.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-admission.v1/</guid><description>&lt;!--
title: kube-apiserver Admission (v1)
content_type: tool-reference
package: admission.k8s.io/v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="资源类型"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#admission-k8s-io-v1-AdmissionReview"&gt;AdmissionReview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="admission-k8s-io-v1-AdmissionReview"&gt;&lt;code&gt;AdmissionReview&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
AdmissionReview describes an admission review request/response.
--&gt;
&lt;code&gt;AdmissionReview&lt;/code&gt; 描述准入评审请求/响应。
&lt;/p&gt;
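&lt;p&gt;
&lt;!--
As a hand-written illustration (not part of the auto-generated reference), a minimal AdmissionReview response from a webhook might look like the following; the uid must echo the request's uid.
--&gt;
作为手写的示意（并非自动生成的参考内容），Webhook 返回的一个最小 AdmissionReview
响应大致如下；其中 uid 必须原样回显请求中的 uid。
&lt;/p&gt;

```yaml
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  # uid 取自对应请求的 request.uid（此处为示例值）
  uid: "705ab4f5-6393-11e8-b7cc-42010a800002"
  # 是否准许该请求
  allowed: true
```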
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;admission.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;AdmissionReview&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;request&lt;/code&gt;&lt;br/&gt;
&lt;a href="#admission-k8s-io-v1-AdmissionRequest"&gt;&lt;code&gt;AdmissionRequest&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
request describes the attributes for the admission request.
--&gt;
&lt;code&gt;request&lt;/code&gt; 描述准入请求的属性。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;response&lt;/code&gt;&lt;br/&gt;
&lt;a href="#admission-k8s-io-v1-AdmissionResponse"&gt;&lt;code&gt;AdmissionResponse&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
response describes the attributes for the admission response.
--&gt;
&lt;code&gt;response&lt;/code&gt; 描述准入响应的属性。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="admission-k8s-io-v1-AdmissionRequest"&gt;&lt;code&gt;AdmissionRequest&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#admission-k8s-io-v1-AdmissionReview"&gt;AdmissionReview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
AdmissionRequest describes the admission.Attributes for the admission request.
--&gt;
&lt;code&gt;AdmissionRequest&lt;/code&gt; 描述准入请求的 admission.Attributes。
&lt;/p&gt;</description></item><item><title>kube-apiserver Audit 配置（v1）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-audit.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-audit.v1/</guid><description>&lt;!---
title: kube-apiserver Audit Configuration (v1)
content_type: tool-reference
package: audit.k8s.io/v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types 
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-Event"&gt;Event&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-EventList"&gt;EventList&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-Policy"&gt;Policy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-PolicyList"&gt;PolicyList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
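&lt;p&gt;
&lt;!--
As a hand-written illustration (not part of the auto-generated reference), a minimal audit Policy using the types listed above might look like the following.
--&gt;
作为手写的示意（并非自动生成的参考内容），使用上面所列类型的一个最小审计 Policy
大致如下。
&lt;/p&gt;

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # 对所有请求按 Metadata 级别记录审计事件
  - level: Metadata
```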
&lt;h2 id="audit-k8s-io-v1-Event"&gt;&lt;code&gt;Event&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-EventList"&gt;EventList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
Event captures all the information that can be included in an API audit log.
--&gt;
Event 结构包含可出现在 API 审计日志中的所有信息。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;audit.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Event&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;level&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#audit-k8s-io-v1-Level"&gt;&lt;code&gt;Level&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 AuditLevel at which event was generated
 --&gt;
 生成事件所对应的审计级别。
 &lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;auditID&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/types#UID"&gt;&lt;code&gt;k8s.io/apimachinery/pkg/types.UID&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 Unique audit ID, generated for each request.
 --&gt;
 为每个请求所生成的唯一审计 ID。
 &lt;/p&gt;</description></item><item><title>kube-apiserver 配置 (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-config.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-config.v1/</guid><description>&lt;!--
title: kube-apiserver Configuration (v1)
content_type: tool-reference
package: apiserver.config.k8s.io/v1
auto_generated: true
--&gt;
&lt;p&gt;
&lt;!--
Package v1 is the v1 version of the API.
--&gt;
v1 包中包含 API 的 v1 版本。
&lt;/p&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="资源类型"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-AdmissionConfiguration"&gt;AdmissionConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-AuthorizationConfiguration"&gt;AuthorizationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-EncryptionConfiguration"&gt;EncryptionConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="apiserver-config-k8s-io-v1-AdmissionConfiguration"&gt;&lt;code&gt;AdmissionConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
AdmissionConfiguration provides versioned configuration for admission controllers.
--&gt;
AdmissionConfiguration 为准入控制器提供版本化的配置。
&lt;/p&gt;
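&lt;p&gt;
&lt;!--
As a hand-written illustration (not part of the auto-generated reference), an AdmissionConfiguration that points one admission plugin at its own configuration file might look like the following; the plugin name and path are illustrative values.
--&gt;
作为手写的示意（并非自动生成的参考内容），一个为某准入插件指定其配置文件的
AdmissionConfiguration 大致如下；其中的插件名和路径均为示意值。
&lt;/p&gt;

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  # 为 ImagePolicyWebhook 准入插件指定配置文件（示例路径）
  - name: ImagePolicyWebhook
    path: imagepolicyconfig.yaml
```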
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;apiserver.config.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;AdmissionConfiguration&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;plugins&lt;/code&gt;&lt;br/&gt;
&lt;a href="#apiserver-config-k8s-io-v1-AdmissionPluginConfiguration"&gt;&lt;code&gt;[]AdmissionPluginConfiguration&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 Plugins allows specifying a configuration per admission control plugin.
 --&gt;
 &lt;code&gt;plugins&lt;/code&gt; 字段允许为每个准入控制插件设置配置选项。
 &lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="apiserver-config-k8s-io-v1-AuthorizationConfiguration"&gt;&lt;code&gt;AuthorizationConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;apiserver.config.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;AuthorizationConfiguration&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;authorizers&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#apiserver-config-k8s-io-v1-AuthorizerConfiguration"&gt;&lt;code&gt;[]AuthorizerConfiguration&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
Authorizers is an ordered list of authorizers to
authorize requests against.
This is similar to the --authorization-modes kube-apiserver flag
Must be at least one.
--&gt;
&lt;code&gt;authorizers&lt;/code&gt; 是用于针对请求进行鉴权的鉴权组件的有序列表。
这类似于 &lt;code&gt;--authorization-modes&lt;/code&gt; kube-apiserver 标志。
必须至少包含一个元素。
&lt;/p&gt;</description></item><item><title>kube-apiserver 配置 (v1alpha1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1/</guid><description>&lt;!--
title: kube-apiserver Configuration (v1alpha1)
content_type: tool-reference
package: apiserver.k8s.io/v1alpha1
auto_generated: true
--&gt;
&lt;p&gt;
&lt;!--
Package v1alpha1 is the v1alpha1 version of the API.
--&gt;
v1alpha1 包中包含 API 的 v1alpha1 版本。
&lt;/p&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-AdmissionConfiguration"&gt;AdmissionConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-AuthenticationConfiguration"&gt;AuthenticationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-AuthorizationConfiguration"&gt;AuthorizationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration"&gt;EgressSelectorConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="TracingConfiguration"&gt;&lt;code&gt;TracingConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
--&gt;
TracingConfiguration 为 OpenTelemetry 跟踪客户端提供版本化的配置。
&lt;/p&gt;
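&lt;p&gt;
&lt;!--
As a hand-written illustration (not part of the auto-generated reference), a tracing configuration file passed to the kube-apiserver might look like the following; the apiVersion shown follows the kube-apiserver tracing documentation and may differ by release, and the endpoint value is the default.
--&gt;
作为手写的示意（并非自动生成的参考内容），传递给 kube-apiserver 的一个跟踪配置文件
大致如下；所示 apiVersion 取自 kube-apiserver 跟踪功能的文档，可能随版本不同而变化，
endpoint 为默认值。
&lt;/p&gt;

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# 连接不安全（不支持 TLS）；不设置时使用 OTLP gRPC 默认端点
endpoint: localhost:4317
# 每百万请求中采样的数量（示例值）
samplingRatePerMillion: 100
```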
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;endpoint&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 Endpoint of the collector this component will report traces to.
 The connection is insecure, and does not currently support TLS.
 Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.
 --&gt;
 采集器的端点，此组件将向其报告跟踪信息。
 连接不安全，目前不支持 TLS。
 建议不设置此字段；此时端点为 OTLP gRPC 的默认值 localhost:4317。
&lt;/p&gt;</description></item><item><title>kube-apiserver 配置 (v1beta1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-config.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-config.v1beta1/</guid><description>&lt;!-- 
title: kube-apiserver Configuration (v1beta1)
content_type: tool-reference
package: apiserver.k8s.io/v1beta1
auto_generated: true
--&gt;
&lt;p&gt;
&lt;!-- 
Package v1beta1 is the v1beta1 version of the API.
--&gt;
v1beta1 包是 v1beta1 版本的 API。
&lt;/p&gt;
&lt;!-- 
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-AuthenticationConfiguration"&gt;AuthenticationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-AuthorizationConfiguration"&gt;AuthorizationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration"&gt;EgressSelectorConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="TracingConfiguration"&gt;&lt;code&gt;TracingConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1beta1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
--&gt;
TracingConfiguration 为 OpenTelemetry 跟踪客户端提供版本化的配置。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;endpoint&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 Endpoint of the collector this component will report traces to.
 The connection is insecure, and does not currently support TLS.
 Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.
 --&gt;
 采集器的端点，此组件将向其报告跟踪信息。
 连接不安全，目前不支持 TLS。
 建议不设置此字段；此时端点为 OTLP gRPC 的默认值 localhost:4317。
 &lt;/p&gt;</description></item><item><title>kube-controller-manager Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kube-controller-manager-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kube-controller-manager-config.v1alpha1/</guid><description>&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration"&gt;CloudControllerManagerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration"&gt;LeaderMigrationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration"&gt;KubeControllerManagerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="ClientConnectionConfiguration"&gt;&lt;code&gt;ClientConnectionConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration"&gt;KubeSchedulerConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"&gt;GenericControllerManagerConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
ClientConnectionConfiguration contains details for constructing a client.
--&gt;
ClientConnectionConfiguration 包含构建客户端的详细信息。
&lt;/p&gt;
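&lt;p&gt;
&lt;!--
As a hand-written illustration (not part of the auto-generated reference), a clientConnection fragment as it might appear inside a component configuration looks like the following; the kubeconfig path and the qps/burst numbers are illustrative values.
--&gt;
作为手写的示意（并非自动生成的参考内容），组件配置中的 clientConnection 片段大致如下；
其中 kubeconfig 路径以及 qps/burst 数值均为示意值。
&lt;/p&gt;

```yaml
# 嵌入在组件配置中的 clientConnection 片段（示意）
clientConnection:
  kubeconfig: /etc/kubernetes/controller-manager.conf
  # 留空表示使用默认的 Accept 请求头（application/json）
  acceptContentTypes: ""
  contentType: application/vnd.kubernetes.protobuf
  qps: 50
  burst: 100
```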
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kubeconfig&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
kubeconfig is the path to a KubeConfig file.
--&gt;
kubeconfig 是指向 KubeConfig 文件的路径。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;acceptContentTypes&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.
--&gt;
acceptContentTypes 定义了客户端在连接服务器时发送的 Accept 请求头，
覆盖默认值 application/json。此字段将控制特定客户端与服务器之间的所有连接。
&lt;/p&gt;</description></item><item><title>kube-proxy 配置 (v1alpha1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/</guid><description>&lt;!--
title: kube-proxy Configuration (v1alpha1)
content_type: tool-reference
package: kubeproxy.config.k8s.io/v1alpha1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration"&gt;KubeProxyConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="FormatOptions"&gt;&lt;code&gt;FormatOptions&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#LoggingConfiguration"&gt;LoggingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
FormatOptions contains options for the different logging formats.
--&gt;
&lt;code&gt;FormatOptions&lt;/code&gt; 包含不同日志格式的选项。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;text&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#TextOptions"&gt;&lt;code&gt;TextOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
[Alpha] Text contains options for logging format &amp;quot;text&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.
--&gt;
[Alpha] &lt;code&gt;text&lt;/code&gt; 包含日志格式 &amp;quot;text&amp;quot; 的选项。
仅在启用了 &lt;code&gt;LoggingAlphaOptions&lt;/code&gt; 特性门控时可用。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;json&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#JSONOptions"&gt;&lt;code&gt;JSONOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
[Alpha] JSON contains options for logging format &amp;quot;json&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.
--&gt;
[Alpha] &lt;code&gt;json&lt;/code&gt; 包含日志格式 &amp;quot;json&amp;quot; 的选项。
仅在启用了 &lt;code&gt;LoggingAlphaOptions&lt;/code&gt; 特性门控时可用。
&lt;/p&gt;</description></item><item><title>kube-scheduler 配置 (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kube-scheduler-config.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kube-scheduler-config.v1/</guid><description>&lt;!--
title: kube-scheduler Configuration (v1)
content_type: tool-reference
package: kubescheduler.config.k8s.io/v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs"&gt;DefaultPreemptionArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-DynamicResourcesArgs"&gt;DynamicResourcesArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-InterPodAffinityArgs"&gt;InterPodAffinityArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration"&gt;KubeSchedulerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-NodeAffinityArgs"&gt;NodeAffinityArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-NodeResourcesBalancedAllocationArgs"&gt;NodeResourcesBalancedAllocationArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs"&gt;NodeResourcesFitArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-PodTopologySpreadArgs"&gt;PodTopologySpreadArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-VolumeBindingArgs"&gt;VolumeBindingArgs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="ClientConnectionConfiguration"&gt;&lt;code&gt;ClientConnectionConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration"&gt;KubeSchedulerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
ClientConnectionConfiguration contains details for constructing a client.
--&gt;
&lt;p&gt;ClientConnectionConfiguration 中包含用来构造客户端所需的细节。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kubeconfig&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 kubeconfig is the path to a KubeConfig file.
 --&gt;
 &lt;p&gt;&lt;code&gt;kubeconfig&lt;/code&gt; 字段为指向 KubeConfig 文件的路径。&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;acceptContentTypes&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;
 &lt;!--
 acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.
 --&gt;
 &lt;code&gt;acceptContentTypes&lt;/code&gt; 定义的是客户端与服务器建立连接时要发送的 Accept 头部，
 这里的设置值会覆盖默认值 "application/json"。此字段会影响某特定客户端与服务器的所有连接。
 &lt;/p&gt;</description></item><item><title>kubeadm 配置 (v1beta4)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/</guid><description>&lt;!--
title: kubeadm Configuration (v1beta4)
content_type: tool-reference
package: kubeadm.k8s.io/v1beta4
auto_generated: true
--&gt;
&lt;!--
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;Package v1beta4 defines the v1beta4 version of the kubeadm configuration file format.
This version improves on the v1beta3 format by fixing some minor issues and adding a few new fields.&lt;/p&gt;
&lt;p&gt;A list of changes since v1beta3:&lt;/p&gt;
--&gt;
&lt;h2&gt;概述&lt;/h2&gt;
&lt;p&gt;v1beta4 包定义 v1beta4 版本的 kubeadm 配置文件格式。
此版本改进了 v1beta3 的格式，修复了一些小问题并添加了一些新的字段。&lt;/p&gt;
&lt;p&gt;从 v1beta3 版本以来的变更列表：&lt;/p&gt;
&lt;!--
&lt;ul&gt;
&lt;li&gt;TODO https://github.com/kubernetes/kubeadm/issues/2890&lt;/li&gt;
&lt;li&gt;Support custom environment variables in control plane components under
&lt;code&gt;ClusterConfiguration&lt;/code&gt;.
Use &lt;code&gt;APIServer.ExtraEnvs&lt;/code&gt;, &lt;code&gt;ControllerManager.ExtraEnvs&lt;/code&gt;, &lt;code&gt;Scheduler.ExtraEnvs&lt;/code&gt;,
&lt;code&gt;Etcd.Local.ExtraEnvs&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;ResetConfiguration&lt;/code&gt; API type is now supported in v1beta4.
Users are able to reset a node by passing a &lt;code&gt;--config&lt;/code&gt; file to &lt;code&gt;kubeadm reset&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
--&gt;
&lt;ul&gt;
&lt;li&gt;TODO https://github.com/kubernetes/kubeadm/issues/2890&lt;/li&gt;
&lt;li&gt;支持在 &lt;code&gt;ClusterConfiguration&lt;/code&gt; 下为控制平面组件设置定制环境变量。
使用 &lt;code&gt;APIServer.ExtraEnvs&lt;/code&gt;、&lt;code&gt;ControllerManager.ExtraEnvs&lt;/code&gt;、
&lt;code&gt;Scheduler.ExtraEnvs&lt;/code&gt;、&lt;code&gt;Etcd.Local.ExtraEnvs&lt;/code&gt;。&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ResetConfiguration&lt;/code&gt; API 类型在 v1beta4 中已得到支持。
用户可以为 &lt;code&gt;kubeadm reset&lt;/code&gt; 指定 &lt;code&gt;--config&lt;/code&gt; 文件来重置节点。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
&lt;h1&gt;Migration from old kubeadm config versions&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;kubeadm v1.15.x and newer can be used to migrate from v1beta1 to v1beta2.&lt;/li&gt;
&lt;li&gt;kubeadm v1.22.x and newer no longer support v1beta1 and older APIs, but can be used to migrate v1beta2 to v1beta3.&lt;/li&gt;
&lt;li&gt;kubeadm v1.27.x and newer no longer support v1beta2 and older APIs.&lt;/li&gt;
&lt;li&gt;TODO: https://github.com/kubernetes/kubeadm/issues/2890
add version that can be used to convert to v1beta4&lt;/li&gt;
&lt;/ul&gt;
--&gt;
&lt;h1&gt;kubeadm 配置版本迁移&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;kubeadm v1.15.x 及更高版本可用于从 v1beta1 迁移到 v1beta2。&lt;/li&gt;
&lt;li&gt;kubeadm v1.22.x 及更高版本不再支持 v1beta1 和更早的 API，但可用于从 v1beta2 迁移到 v1beta3。&lt;/li&gt;
&lt;li&gt;kubeadm v1.27.x 及更高版本不再支持 v1beta2 和更早的 API。&lt;/li&gt;
&lt;li&gt;TODO: https://github.com/kubernetes/kubeadm/issues/2890
添加可用于转换到 v1beta4 的版本&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
&lt;h2&gt;Basics&lt;/h2&gt;
&lt;p&gt;The preferred way to configure kubeadm is to pass an YAML configuration file with
the `--config“ option. Some of the configuration options defined in the kubeadm
config file are also available as command line flags, but only the most
common/simple use case are supported with this approach.&lt;/p&gt;</description></item><item><title>kubeadm 配置（v1beta3）</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/</guid><description>&lt;!--
title: kubeadm Configuration (v1beta3)
content_type: tool-reference
package: kubeadm.k8s.io/v1beta3
auto_generated: true
--&gt;
&lt;!--
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;Package v1beta3 defines the v1beta3 version of the kubeadm configuration file format.
This version improves on the v1beta2 format by fixing some minor issues and adding a few new fields.&lt;/p&gt;
&lt;p&gt;A list of changes since v1beta2:&lt;/p&gt;
--&gt;
&lt;h2&gt;概述&lt;/h2&gt;
&lt;p&gt;v1beta3 包定义 v1beta3 版本的 kubeadm 配置文件格式。
此版本改进了 v1beta2 的格式，修复了一些小问题并添加了一些新的字段。&lt;/p&gt;
&lt;p&gt;从 v1beta2 版本以来的变更列表：&lt;/p&gt;
&lt;!--
&lt;ul&gt;
&lt;li&gt;The deprecated &amp;quot;ClusterConfiguration.useHyperKubeImage&amp;quot; field has been removed.
Kubeadm no longer supports the hyperkube image.&lt;/li&gt;
&lt;li&gt;The &amp;quot;ClusterConfiguration.dns.Type&amp;quot; field has been removed since CoreDNS is the only supported
DNS server type by kubeadm.&lt;/li&gt;
&lt;li&gt;Include &amp;quot;datapolicy&amp;quot; tags on the fields that hold secrets.
This would result in the field values to be omitted when API structures are printed with klog.&lt;/li&gt;
&lt;li&gt;Add &amp;quot;InitConfiguration.skipPhases&amp;quot;, &amp;quot;JoinConfiguration.skipPhases&amp;quot; to allow skipping
a list of phases during kubeadm init/join command execution.&lt;/li&gt;
--&gt;
&lt;ul&gt;
&lt;li&gt;已弃用的字段 &amp;quot;ClusterConfiguration.useHyperKubeImage&amp;quot; 已被移除。
kubeadm 不再支持 hyperkube 镜像。&lt;/li&gt;
&lt;li&gt;&amp;quot;ClusterConfiguration.dns.Type&amp;quot; 字段已经被移除，因为 CoreDNS 是
kubeadm 所支持的唯一 DNS 服务器类型。&lt;/li&gt;
&lt;li&gt;保存 Secret 信息的字段现在包含了 &amp;quot;datapolicy&amp;quot; 标记（tag）。
这一标记会导致 API 结构通过 klog 打印输出时，会忽略这些字段的值。&lt;/li&gt;
&lt;li&gt;添加了 &amp;quot;InitConfiguration.skipPhases&amp;quot;、&amp;quot;JoinConfiguration.skipPhases&amp;quot;，
以允许在执行 &lt;code&gt;kubeadm init/join&lt;/code&gt; 命令时略过某些阶段。&lt;/li&gt;
&lt;!--
&lt;li&gt;Add &amp;quot;InitConfiguration.nodeRegistration.imagePullPolicy&amp;quot; and &amp;quot;JoinConfiguration.nodeRegistration.imagePullPolicy&amp;quot;
to allow specifying the images pull policy during kubeadm &amp;quot;init&amp;quot; and &amp;quot;join&amp;quot;.
The value must be one of &amp;quot;Always&amp;quot;, &amp;quot;Never&amp;quot; or &amp;quot;IfNotPresent&amp;quot;.
&amp;quot;IfNotPresent&amp;quot; is the default, which has been the existing behavior prior to this addition.&lt;/li&gt;
&lt;li&gt;Add &amp;quot;InitConfiguration.patches.directory&amp;quot;, &amp;quot;JoinConfiguration.patches.directory&amp;quot; to allow
the user to configure a directory from which to take patches for components deployed by kubeadm.&lt;/li&gt;
&lt;li&gt;Move the BootstrapToken* API and related utilities out of the &amp;quot;kubeadm&amp;quot; API group to a new group
&amp;quot;bootstraptoken&amp;quot;. The kubeadm API version v1beta3 no longer contains the BootstrapToken* structures.&lt;/li&gt;
--&gt;
&lt;li&gt;添加了 &amp;quot;InitConfiguration.nodeRegistration.imagePullPolicy&amp;quot; 和
&amp;quot;JoinConfiguration.nodeRegistration.imagePullPolicy&amp;quot;
以允许在 &lt;code&gt;kubeadm init&lt;/code&gt; 和 &lt;code&gt;kubeadm join&lt;/code&gt; 期间指定镜像拉取策略。
这两个字段的值必须是 &amp;quot;Always&amp;quot;、&amp;quot;Never&amp;quot; 或 &amp;quot;IfNotPresent&amp;quot; 之一。
默认值是 &amp;quot;IfNotPresent&amp;quot;，也是添加此字段之前的默认行为。&lt;/li&gt;
&lt;li&gt;添加了 &amp;quot;InitConfiguration.patches.directory&amp;quot; 和
&amp;quot;JoinConfiguration.patches.directory&amp;quot; 以允许用户配置一个目录，
kubeadm 将从该目录中为其所部署的组件提取补丁。&lt;/li&gt;
&lt;li&gt;BootstrapToken* API 和相关的工具被从 &amp;quot;kubeadm&amp;quot; API 组中移出，
放到一个新的 &amp;quot;bootstraptoken&amp;quot; 组中。kubeadm API 版本 v1beta3 不再包含
BootstrapToken* 结构。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
&lt;p&gt;Migration from old kubeadm config versions&lt;/p&gt;</description></item><item><title>kubeconfig (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeconfig.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubeconfig.v1/</guid><description>&lt;!--
title: kubeconfig (v1)
content_type: tool-reference
package: v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="资源类型"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#Config"&gt;Config&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="Config"&gt;&lt;code&gt;Config&lt;/code&gt;&lt;/h2&gt;
&lt;!--
Config holds the information needed to build connect to remote kubernetes clusters as a given user
--&gt;
&lt;p&gt;Config 保存以给定用户身份连接到远程 Kubernetes 集群所需的信息。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Config&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
Legacy field from pkg/api/types.go TypeMeta.TODO(jlowdermilk): remove this after eliminating downstream dependencies.
--&gt;
来自 pkg/api/types.go TypeMeta 的遗留字段。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
Legacy field from pkg/api/types.go TypeMeta. TODO(jlowdermilk): remove this after eliminating downstream dependencies.
--&gt;
来自 pkg/api/types.go TypeMeta 的遗留字段。
&lt;/p&gt;</description></item><item><title>Kubelet CredentialProvider (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1/</guid><description>&lt;!--
title: Kubelet CredentialProvider (v1)
content_type: tool-reference
package: credentialprovider.kubelet.k8s.io/v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest"&gt;CredentialProviderRequest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse"&gt;CredentialProviderResponse&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest"&gt;&lt;code&gt;CredentialProviderRequest&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
CredentialProviderRequest includes the image that the kubelet requires authentication for.
Kubelet will pass this request object to the plugin via stdin. In general, plugins should
prefer responding with the same apiVersion they were sent.
--&gt;
&lt;code&gt;CredentialProviderRequest&lt;/code&gt; 包含 kubelet 需要通过身份验证才能访问的镜像。
kubelet 将此请求对象通过 stdin 传递到插件。
通常，插件应优先使用所收到的 &lt;code&gt;apiVersion&lt;/code&gt; 作出响应。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;credentialprovider.kubelet.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;CredentialProviderRequest&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;image&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
image is the container image that is being pulled as part of the
credential provider plugin request. Plugins may optionally parse the image
to extract any information required to fetch credentials.
--&gt;
&lt;code&gt;image&lt;/code&gt; 是作为凭据提供程序插件请求的一部分所拉取的容器镜像。
这些插件可以选择解析镜像以提取获取凭据所需的任何信息。
&lt;/p&gt;</description></item><item><title>kubelet 配置 (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1/</guid><description>&lt;!--
title: Kubelet Configuration (v1)
content_type: tool-reference
package: kubelet.config.k8s.io/v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="资源类型"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1-CredentialProviderConfig"&gt;CredentialProviderConfig&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubelet-config-k8s-io-v1-CredentialProviderConfig"&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/h2&gt;
&lt;!--
CredentialProviderConfig is the configuration containing information about
each exec credential provider. Kubelet reads this configuration from disk and enables
each provider as specified by the CredentialProvider type.
--&gt;
&lt;p&gt;CredentialProviderConfig 包含有关每个 exec 凭据提供程序的配置信息。
kubelet 从磁盘上读取这些配置信息，并根据 CredentialProvider 类型启用各个提供程序。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubelet.config.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;providers&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubelet-config-k8s-io-v1-CredentialProvider"&gt;&lt;code&gt;[]CredentialProvider&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
providers is a list of credential provider plugins that will be enabled by the kubelet.
Multiple providers may match against a single image, in which case credentials
from all providers will be returned to the kubelet. If multiple providers are called
for a single image, the results are combined. If providers return overlapping
auth keys, the value from the provider earlier in this list is attempted first.
--&gt;
&lt;code&gt;providers&lt;/code&gt; 是一组凭据提供程序插件，这些插件会被 kubelet 启用。
多个提供程序可以匹配到同一镜像上，这时，来自所有提供程序的凭据信息都会返回给 kubelet。
如果针对同一镜像调用了多个提供程序，则结果会被组合起来。如果各提供程序返回的认证键有重叠，
列表中靠前的提供程序所返回的值会被首先尝试。
&lt;/p&gt;</description></item><item><title>kubelet 配置 (v1alpha1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1/</guid><description>&lt;!--
title: Kubelet Configuration (v1alpha1)
content_type: tool-reference
package: kubelet.config.k8s.io/v1alpha1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="资源类型"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig"&gt;CredentialProviderConfig&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig"&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/h2&gt;
&lt;!--
CredentialProviderConfig is the configuration containing information about
each exec credential provider. Kubelet reads this configuration from disk and enables
each provider as specified by the CredentialProvider type.
--&gt;
&lt;p&gt;CredentialProviderConfig 包含有关每个 exec 凭据提供者的配置信息。
kubelet 从磁盘上读取这些配置信息，并根据 CredentialProvider 类型启用各个提供者。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubelet.config.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;providers&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubelet-config-k8s-io-v1alpha1-CredentialProvider"&gt;&lt;code&gt;[]CredentialProvider&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;!--
providers is a list of credential provider plugins that will be enabled by the kubelet.
Multiple providers may match against a single image, in which case credentials
from all providers will be returned to the kubelet. If multiple providers are called
for a single image, the results are combined. If providers return overlapping
auth keys, the value from the provider earlier in this list is attempted first.
--&gt;
&lt;code&gt;providers&lt;/code&gt; 是一组凭据提供者插件，这些插件会被 kubelet 启用。
多个提供者可以匹配到同一镜像上，这时，来自所有提供者的凭据信息都会返回给 kubelet。
如果针对同一镜像调用了多个提供者，则结果会被组合起来。如果各提供者返回的认证键有重叠，
列表中靠前的提供者所返回的值会被首先尝试。
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="kubelet-config-k8s-io-v1alpha1-ImagePullIntent"&gt;&lt;code&gt;ImagePullIntent&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
ImagePullIntent is a record of the kubelet attempting to pull an image.
--&gt;
&lt;code&gt;ImagePullIntent&lt;/code&gt; 是 kubelet 尝试拉取镜像的记录。
&lt;/p&gt;</description></item><item><title>Kubelet 配置 (v1beta1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/</guid><description>&lt;!--
title: Kubelet Configuration (v1beta1)
content_type: tool-reference
package: kubelet.config.k8s.io/v1beta1
auto_generated: true
--&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig"&gt;CredentialProviderConfig&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource"&gt;SerializedNodeConfigSource&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="FormatOptions"&gt;&lt;code&gt;FormatOptions&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#LoggingConfiguration"&gt;LoggingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
FormatOptions contains options for the different logging formats.
--&gt;
FormatOptions 包含为不同日志格式提供的选项。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;text&lt;/code&gt; &lt;B&gt;&lt;!-- [Required] --&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#TextOptions"&gt;&lt;code&gt;TextOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;!--
 &lt;p&gt;[Alpha] Text contains options for logging format &amp;quot;text&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.&lt;/p&gt;
--&gt;
 &lt;p&gt;[Alpha] &lt;code&gt;text&lt;/code&gt; 包含日志格式 &amp;quot;text&amp;quot; 的选项。
仅当 LoggingAlphaOptions 特性门控被启用时可用。&lt;/p&gt;
title: kuberc (v1alpha1)
content_type: tool-reference
package: kubectl.config.k8s.io/v1alpha1
auto_generated: true
--&gt;
&lt;!--
## Resource Types 
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubectl-config-k8s-io-v1alpha1-Preference"&gt;Preference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubectl-config-k8s-io-v1alpha1-Preference"&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
Preference stores elements of KubeRC configuration file
--&gt;
&lt;code&gt;Preference&lt;/code&gt; 存储 KubeRC 配置文件的元素。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubectl.config.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;overrides&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1alpha1-CommandOverride"&gt;&lt;code&gt;[]CommandOverride&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
overrides allows changing default flag values of commands.
This is especially useful, when user doesn't want to explicitly
set flags each time.
--&gt;
&lt;code&gt;overrides&lt;/code&gt; 允许更改命令的默认标志值。
当用户不希望每次都显式设置标志时，这尤其有用。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;aliases&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1alpha1-AliasOverride"&gt;&lt;code&gt;[]AliasOverride&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
aliases allow defining command aliases for existing kubectl commands, with optional default flag values.
If the alias name collides with a built-in command, built-in command always takes precedence.
Flag overrides defined in the overrides section do NOT apply to aliases for the same command.
kubectl [ALIAS NAME] [USER_FLAGS] [USER_EXPLICIT_ARGS] expands to
kubectl [COMMAND] # built-in command alias points to
[KUBERC_PREPEND_ARGS]
[USER_FLAGS]
[KUBERC_FLAGS] # rest of the flags that are not passed by user in [USER_FLAGS]
[USER_EXPLICIT_ARGS]
[KUBERC_APPEND_ARGS]
e.g.
--&gt;
&lt;code&gt;aliases&lt;/code&gt; 允许为现有的 kubectl 命令定义命令别名，并可选择设置默认标志值。
如果别名与内置命令冲突，内置命令始终优先。
在 &lt;code&gt;overrides&lt;/code&gt; 部分定义的标志覆盖不适用于同一命令的别名。
&lt;code&gt;kubectl [ALIAS NAME] [USER_FLAGS] [USER_EXPLICIT_ARGS]&lt;/code&gt; 展开为：
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#666"&gt;[&lt;/span&gt;COMMAND&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 别名指向的内置命令&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;KUBERC_PREPEND_ARGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;USER_FLAGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;KUBERC_FLAGS&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 其余未由用户在 [USER_FLAGS] 中传递的标志&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;USER_EXPLICIT_ARGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;KUBERC_APPEND_ARGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;例如：&lt;/p&gt;</description></item><item><title>kuberc (v1beta1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kuberc.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/kuberc.v1beta1/</guid><description>&lt;!--
title: kuberc (v1beta1)
content_type: tool-reference
package: kubectl.config.k8s.io/v1beta1
auto_generated: true
--&gt;
&lt;!--
## Resource Types 
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubectl-config-k8s-io-v1beta1-Preference"&gt;Preference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubectl-config-k8s-io-v1beta1-Preference"&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
Preference stores elements of KubeRC configuration file
--&gt;
&lt;code&gt;Preference&lt;/code&gt; 存储 KubeRC 配置文件的元素。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubectl.config.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;defaults&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1beta1-CommandDefaults"&gt;&lt;code&gt;[]CommandDefaults&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
defaults allow changing default option values of commands.
This is especially useful, when user doesn't want to explicitly
set options each time.
--&gt;
&lt;code&gt;defaults&lt;/code&gt; 允许更改命令的默认选项值。
当用户不希望每次都显式设置选项时，这尤其有用。
&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;aliases&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1beta1-AliasOverride"&gt;&lt;code&gt;[]AliasOverride&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
aliases allow defining command aliases for existing kubectl commands, with optional default option values.
If the alias name collides with a built-in command, built-in command always takes precedence.
Option overrides defined in the defaults section do NOT apply to aliases for the same command.
kubectl [ALIAS NAME] [USER_OPTIONS] [USER_EXPLICIT_ARGS] expands to
kubectl [COMMAND] # built-in command alias points to
[KUBERC_PREPEND_ARGS]
[USER_OPTIONS]
[KUBERC_OPTIONS] # rest of the options that are not passed by user in [USER_OPTIONS]
[USER_EXPLICIT_ARGS]
[KUBERC_APPEND_ARGS]
e.g.
--&gt;
&lt;code&gt;aliases&lt;/code&gt; 允许为现有的 kubectl 命令定义命令别名，并可选择设置默认选项值。
如果别名与内置命令冲突，内置命令始终优先。
在 &lt;code&gt;defaults&lt;/code&gt; 部分定义的选项覆盖不适用于同一命令的别名。
&lt;code&gt;kubectl [ALIAS NAME] [USER_OPTIONS] [USER_EXPLICIT_ARGS]&lt;/code&gt; 展开为：
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#666"&gt;[&lt;/span&gt;COMMAND&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 别名指向的内置命令&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;KUBERC_PREPEND_ARGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;USER_OPTIONS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;KUBERC_OPTIONS&lt;span style="color:#666"&gt;]&lt;/span&gt; &lt;span style="color:#080;font-style:italic"&gt;# 其余未由用户在 [USER_OPTIONS] 中传递的选项&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;USER_EXPLICIT_ARGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#666"&gt;[&lt;/span&gt;KUBERC_APPEND_ARGS&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;例如：&lt;/p&gt;</description></item><item><title>Kubernetes 发布周期</title><link>https://andygol-k8s.netlify.app/zh-cn/releases/release/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/releases/release/</guid><description>&lt;!-- 
title: Kubernetes Release Cycle
type: docs
auto_generated: true
--&gt;
&lt;!-- THIS CONTENT IS AUTO-GENERATED via https://github.com/kubernetes/website/blob/main/scripts/releng/update-release-info.sh --&gt;
&lt;div class="pageinfo pageinfo-light"&gt;
&lt;!-- 
This content is auto-generated and links may not function. The source of the document is located
[here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/release.md).
--&gt;
&lt;p&gt;此内容为自动生成，其中的链接可能无法正常访问。
文档的来源在&lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/release.md"&gt;这里&lt;/a&gt;。&lt;/p&gt;
&lt;/div&gt;
&lt;!-- Localization note: omit the pageinfo block when localizing --&gt;
&lt;!-- 
## Targeting enhancements, Issues and PRs to Release Milestones

This document is focused on Kubernetes developers and contributors who need to
create an enhancement, issue, or pull request which targets a specific release
milestone.
--&gt;
&lt;h2 id="targeting-enhancements-issues-and-prs-to-release-milestones"&gt;针对发布里程碑的特性增强、Issue 和 PR&lt;/h2&gt;
&lt;p&gt;本文档面向需要创建针对特定发布里程碑的特性增强、Issue 或 PR 的 Kubernetes 开发人员和贡献者。&lt;/p&gt;</description></item><item><title>Kubernetes 社区行为准则</title><link>https://andygol-k8s.netlify.app/zh-cn/community/code-of-conduct/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/community/code-of-conduct/</guid><description>&lt;!--
title: Kubernetes Community Code of Conduct
body_class: code-of-conduct
cid: code-of-conduct
--&gt;
&lt;!--
_Kubernetes follows the
[CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
The text of the CNCF CoC is replicated below, as of
[commit 71412bb02](https://github.com/cncf/foundation/blob/71412bb029090d42ecbeadb39374a337bfb48a9c/code-of-conduct.md)._
--&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes 遵循
&lt;a href="https://github.com/cncf/foundation/blob/main/code-of-conduct.md"&gt;CNCF 行为准则&lt;/a&gt;。
下文转载了 CNCF 行为准则的文本，其版本截至
&lt;a href="https://github.com/cncf/foundation/blob/71412bb029090d42ecbeadb39374a337bfb48a9c/code-of-conduct.md"&gt;commit 71412bb02&lt;/a&gt;。&lt;/strong&gt;&lt;/p&gt;
&lt;div id="cncf-code-of-conduct"&gt;

	&lt;!--
Do not edit this file directly. Get the latest from
https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/zh.md
--&gt;
&lt;!--
## CNCF Community Code of Conduct v1.3

### Community Code of Conduct
--&gt;
&lt;h2 id="cncf-community-code-of-conduct-v13"&gt;云原生计算基金会（CNCF）社区行为准则 1.3 版本&lt;/h2&gt;
&lt;h3 id="community-code-of-conduct"&gt;社区行为准则&lt;/h3&gt;
&lt;!--
As contributors, maintainers, and participants in the CNCF community, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who participate or contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, attending conferences or events, or engaging in other community or project activities.

We are committed to making participation in the CNCF community a harassment-free experience for everyone, regardless of age, body size, caste, disability, ethnicity, level of experience, family status, gender, gender identity and expression, marital status, military or veteran status, nationality, personal appearance, race, religion, sexual orientation, socioeconomic status, tribe, or any other dimension of diversity.
--&gt;
&lt;p&gt;作为 CNCF 社区的贡献者、维护者和参与者，我们努力建设一个开放和受欢迎的社区，我们承诺尊重所有上报
Issue、发布功能需求、更新文档、提交 PR 或补丁、参加会议活动以及其他社区和项目活动的贡献者和参与者。&lt;/p&gt;</description></item><item><title>Kubernetes 外部指标 (v1beta1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/external-api/external-metrics.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/external-api/external-metrics.v1beta1/</guid><description>&lt;!--
title: Kubernetes External Metrics (v1beta1)
content_type: tool-reference
package: external.metrics.k8s.io/v1beta1
auto_generated: true
--&gt;
&lt;p&gt;
&lt;!--
Package v1beta1 is the v1beta1 version of the external metrics API.
--&gt;
v1beta1 包是 v1beta1 版本的外部指标 API。
&lt;/p&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#external-metrics-k8s-io-v1beta1-ExternalMetricValue"&gt;ExternalMetricValue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#external-metrics-k8s-io-v1beta1-ExternalMetricValueList"&gt;ExternalMetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="external-metrics-k8s-io-v1beta1-ExternalMetricValue"&gt;&lt;code&gt;ExternalMetricValue&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#external-metrics-k8s-io-v1beta1-ExternalMetricValueList"&gt;ExternalMetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;!--
ExternalMetricValue is a metric value for external metric
A single metric value is identified by metric name and a set of string labels.
For one metric there can be multiple values with different sets of labels.
--&gt;
ExternalMetricValue 是外部指标的一个度量值。
单个度量值由指标名称和一组字符串标签标识。
对于一个指标，可以有多个具有不同标签集的值。
&lt;/p&gt;</description></item><item><title>Kubernetes 指标 (v1beta1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/external-api/metrics.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/external-api/metrics.v1beta1/</guid><description>&lt;!--
title: Kubernetes Metrics (v1beta1)
content_type: tool-reference
package: metrics.k8s.io/v1beta1
auto_generated: true
--&gt;
&lt;!--
&lt;p&gt;Package v1beta1 is the v1beta1 version of the metrics API.&lt;/p&gt;
--&gt;
&lt;p&gt;v1beta1 包是 v1beta1 版本的指标 API。&lt;/p&gt;
&lt;!--
## Resource Types
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-NodeMetrics"&gt;NodeMetrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-NodeMetricsList"&gt;NodeMetricsList&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-PodMetrics"&gt;PodMetrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-PodMetricsList"&gt;PodMetricsList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="metrics-k8s-io-v1beta1-NodeMetrics"&gt;&lt;code&gt;NodeMetrics&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-NodeMetricsList"&gt;NodeMetricsList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
&lt;p&gt;NodeMetrics sets resource usage metrics of a node.&lt;/p&gt;
--&gt;
&lt;p&gt;NodeMetrics 设置节点的资源用量指标。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;metrics.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;NodeMetrics&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;br/&gt;
&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#objectmeta-v1-meta"&gt;&lt;code&gt;meta/v1.ObjectMeta&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 &lt;p&gt;Standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/p&gt;
Refer to the Kubernetes API documentation for the fields of the &lt;code&gt;metadata&lt;/code&gt; field.
 --&gt;
 &lt;p&gt;标准的对象元数据。更多信息：
 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/p&gt;</description></item><item><title>Kubernetes 自定义指标 (v1beta2)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/external-api/custom-metrics.v1beta2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/external-api/custom-metrics.v1beta2/</guid><description>&lt;!--
title: Kubernetes Custom Metrics (v1beta2)
content_type: tool-reference
package: custom.metrics.k8s.io/v1beta2
auto_generated: true
--&gt;
&lt;!--
&lt;p&gt;Package v1beta2 is the v1beta2 version of the custom_metrics API.&lt;/p&gt;
--&gt;
&lt;p&gt;v1beta2 包是 v1beta2 版本的 custom_metrics API。&lt;/p&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricListOptions"&gt;MetricListOptions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricValue"&gt;MetricValue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricValueList"&gt;MetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="custom-metrics-k8s-io-v1beta2-MetricListOptions"&gt;&lt;code&gt;MetricListOptions&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
&lt;p&gt;MetricListOptions is used to select metrics by their label selectors&lt;/p&gt;
--&gt;
MetricListOptions 用于按其标签选择算符来选择指标。
&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;custom.metrics.k8s.io/v1beta2&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;MetricListOptions&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;labelSelector&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 &lt;p&gt;A selector to restrict the list of returned objects by their labels.
Defaults to everything.&lt;/p&gt;</description></item><item><title>NAIC Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/naic/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/naic/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The &lt;a href="http://www.naic.org/"&gt;National Association of Insurance Commissioners (NAIC)&lt;/a&gt;, the U.S. standard-setting and regulatory support organization, was looking for a way to deliver new services faster to provide more value for members and staff. It also needed greater agility to improve productivity internally.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Beginning in 2016, they started using &lt;a href="https://www.cncf.io/"&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt; tools such as &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;. NAIC began hosting internal systems and development systems on &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; at the beginning of 2018, as part of a broad move toward the public cloud. "Our culture and technology transition is a strategy embraced by our top leaders," says Dan Barker, Chief Enterprise Architect. "It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies."&lt;/p&gt;</description></item><item><title>Nav Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/nav/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/nav/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 2012, &lt;a href="https://www.nav.com/"&gt;Nav&lt;/a&gt; provides small business owners with access to their business credit scores from all three major commercial credit bureaus—Equifax, Experian and Dun &amp; Bradstreet—and financing options that best fit their needs. Five years in, the startup was growing rapidly, and "our cloud environments were getting very large, and our usage of those environments was extremely low, like under 1%," says Director of Engineering Travis Jeppson. "We wanted our usage of cloud environments to be more tightly coupled with what we actually needed, so we started looking at containerization and orchestration to help us be able to run workloads that were distinct from one another but could share a similar resource pool."&lt;/p&gt;</description></item><item><title>案例研究：NetEase</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/netease/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/netease/</guid><description>&lt;!-- 
title: NetEase Case Study
linkTitle: NetEase
case_study_styles: true
cid: caseStudies
logo: netease_featured_logo.png
featured: false

new_case_study_styles: true
heading_background: /images/case-studies/netease/banner1.jpg
heading_title_logo: /images/netease_logo.png
subheading: &gt;
 How NetEase Leverages Kubernetes to Support Internet Business Worldwide
case_study_details:
 - Company: NetEase
 - Location: Hangzhou, China
 - Industry: Internet technology
--&gt;

&lt;!-- 
&lt;h2&gt;Challenge&lt;/h2&gt;
--&gt;
&lt;h2&gt;挑战&lt;/h2&gt;

&lt;!-- 
&lt;p&gt;Its gaming business is one of the largest in the world, but that's not all that &lt;a href="https://netease-na.com/"&gt;NetEase&lt;/a&gt; provides to Chinese consumers. The company also operates e-commerce, advertising, music streaming, online education, and email platforms; the last of which serves almost a billion users with free email services through sites like &lt;a href="https://www.163.com/"&gt;163.com&lt;/a&gt;. In 2015, the NetEase Cloud team providing the infrastructure for all of these systems realized that their R&amp;D process was slowing down developers. "Our users needed to prepare all of the infrastructure by themselves," says Feng Changjian, Architect for NetEase Cloud and Container Service. "We were eager to provide the infrastructure and tools for our users automatically via serverless container service."&lt;/p&gt;</description></item><item><title>New York Times Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/newyorktimes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/newyorktimes/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center," says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would "design for the abstractions that cloud providers offer us."&lt;/p&gt;</description></item><item><title>Nokia Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/nokia/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/nokia/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.nokia.com/en_int"&gt;Nokia&lt;/a&gt;'s core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. "As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure," says Gergely Csatari, Senior Open Source Engineer. "There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on &lt;a href="https://cloud.vmware.com/"&gt;VMware Cloud&lt;/a&gt; and &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt; Cloud. We want to run the same product on all of these different infrastructures without changing the product itself."&lt;/p&gt;</description></item><item><title>Northwestern Mutual Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/northwestern-mutual/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/northwestern-mutual/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual's leading products and services and meld it with LearnVest's digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company's existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.&lt;/p&gt;</description></item><item><title>OpenAI 案例研究</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/openai/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/openai/</guid><description>&lt;!--
title: OpenAI Case Study
case_study_styles: true
cid: caseStudies

new_case_study_styles: true
heading_background: /images/case-studies/openAI/banner1.jpg
heading_title_logo: /images/openAI_logo.png
subheading: &gt;
 Launching and Scaling Up Experiments, Made Simple
case_study_details:
 - Company: OpenAI
 - Location: San Francisco, California
 - Industry: Artificial Intelligence Research
--&gt;

&lt;!--
&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.&lt;/p&gt;</description></item><item><title>Pear Deck Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/peardeck/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/peardeck/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The three-year-old startup provides a web app for teachers to interact with their students in the classroom. The JavaScript app was built on Google's web app development platform &lt;a href="https://firebase.google.com/"&gt;Firebase&lt;/a&gt;, using &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt;. As the user base steadily grew, so did the development team. "We outgrew Heroku when we started wanting to have multiple services, and the deploying story got pretty horrendous. We were frustrated that we couldn't have the developers quickly stage a version," says CEO Riley Eynon-Lynch. "Tracing and monitoring became basically impossible." On top of that, many of Pear Deck's customers are behind government firewalls and connect through Firebase, not Pear Deck's servers, making troubleshooting even more difficult.&lt;/p&gt;</description></item><item><title>Pearson Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/pearson/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/pearson/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A global education company serving 75 million learners, Pearson set a goal to more than double that number, to 200 million, by 2025. A key part of this growth is in digital learning experiences, and Pearson was having difficulty in scaling and adapting to its growing online audience. They needed an infrastructure platform that would be able to scale quickly and deliver products to market faster.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;"To transform our infrastructure, we had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms &amp; SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way." The team chose Docker container technology and Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers' productivity."&lt;/p&gt;</description></item><item><title>pingcap Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/pingcap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/pingcap/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;PingCAP is the company leading the development of the popular open source NewSQL database &lt;a href="https://github.com/pingcap/tidb"&gt;TiDB&lt;/a&gt;, which is MySQL-compatible, can handle hybrid transactional and analytical processing (HTAP) workloads, and has a cloud native architectural design. "Having a hybrid multi-cloud product is an important part of our global go-to-market strategy," says Kevin Xu, General Manager of Global Strategy and Operations. In order to achieve that, the team had to address two challenges: "how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world," Xu says, and "how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that's one cloud provider or a combination of different cloud environments." Knowing that using a distributed system isn't easy, they began looking for the right orchestration layer to help reduce some of that complexity for end users.&lt;/p&gt;</description></item><item><title>Prowise Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/nerdalize/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/nerdalize/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Nerdalize offers affordable cloud hosting for customers—and free heat and hot water for people who sign up to house the heating devices that contain the company's servers. The savings Nerdalize realizes by not running data centers are passed on to its customers. When the team began using Docker to make its software more portable, it realized it also needed a container orchestration solution. "As a cloud provider, we have internal services for hosting our backends and billing our customers, but we also need to offer our compute to our end users," says Digital Product Engineer Ad van der Veer. "Since we have these heating devices spread across the Netherlands, we need some way of tying that all together."&lt;/p&gt;</description></item><item><title>Prowise Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/prowise/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/prowise/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A Dutch company that produces educational devices and software used around the world, &lt;a href="https://www.prowise.com/en/"&gt;Prowise&lt;/a&gt; had an infrastructure based on Linux services with multiple availability zones in Europe, Australia, and the U.S. "We've grown a lot in the past couple of years, and we started to encounter problems with versioning and flexible scaling," says Senior DevOps Engineer Victor van den Bosch, "not only scaling in demands, but also in being able to deploy multiple products which all have their own versions, their own development teams, and their own problems that they're trying to solve. To be able to put that all on the same platform without much resistance is what we were looking for. We wanted to future proof our infrastructure, and also solve some of the problems that are associated with just running a normal Linux service."&lt;/p&gt;</description></item><item><title>Slamtec Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/slamtec/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/slamtec/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 2013, SLAMTEC provides service robot autonomous localization and navigation solutions. The company's strength lies in its R&amp;D team's ability to quickly introduce, and continually iterate on, its core products. In the past few years, the company, which had a legacy infrastructure based on Alibaba Cloud and VMware vSphere, began looking to build its own stable and reliable container cloud platform to host its Internet of Things applications. "Our needs for the cloud platform included high availability, scalability and security; multi-granularity monitoring alarm capability; friendliness to containers and microservices; and perfect CI/CD support," says Benniu Ji, Director of Cloud Computing Business Division.&lt;/p&gt;</description></item><item><title>SOS International Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/sos/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/sos/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;For the past six decades, SOS International has been providing reliable medical and travel assistance in the Nordic region. In recent years, the company's business strategy has required increasingly intense development in the digital space, but when it came to its IT systems, "SOS has a very fragmented legacy," with three traditional monoliths (Java, .NET, and IBM's AS/400) and a waterfall approach, says Martin Ahrentsen, Head of Enterprise Architecture. "We have been forced to institute both new technology and new ways of working, so we could be more efficient with a shorter time to market. It was a much more agile approach, and we needed to have a platform that can help us deliver that to the business."&lt;/p&gt;</description></item><item><title>Spotify Case Study</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/spotify/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/spotify/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. "Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today—and hopefully the consumers we'll have in the future," says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called &lt;a href="https://github.com/spotify/helios"&gt;Helios&lt;/a&gt;. By late 2017, it became clear that "having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community," he says.&lt;/p&gt;</description></item><item><title>Squarespace 案例分析</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/squarespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/squarespace/</guid><description>&lt;!--
---
title: Squarespace Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
---
--&gt;

&lt;!-- &lt;div class="banner1 desktop" style="background-image: url('/images/case-studies/squarespace/banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/squarespace_logo.png" class="header_logo"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;Squarespace: Gaining Productivity and Resilience with Kubernetes&lt;/div&gt;
 &lt;/h1&gt;
&lt;/div&gt; --&gt;

&lt;div class="banner1 desktop" style="background-image: url('/images/case-studies/squarespace/banner1.jpg')"&gt;
 &lt;h1&gt; 案例分析：&lt;img src="https://andygol-k8s.netlify.app/images/squarespace_logo.png" class="header_logo"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;Squarespace: 借力 Kubernetes 提升效率和可靠性&lt;/div&gt;
 &lt;/h1&gt;
&lt;/div&gt;

&lt;!-- &lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Squarespace&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;New York, N.Y.&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Software as a Service, Website-Building Platform&lt;/b&gt;
&lt;/div&gt; --&gt;

&lt;div class="details"&gt;
 公司名 &amp;nbsp;&lt;b&gt;Squarespace&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;地址 &amp;nbsp;&lt;b&gt;纽约市，纽约州&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;行业 &amp;nbsp;&lt;b&gt;软件服务，网站构建平台&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;!-- &lt;h2&gt;Challenge&lt;/h2&gt;
 Moving from a monolith to microservices in 2014 "solved a problem on the development side, but it pushed that problem to the infrastructure team," says Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace. "The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down." --&gt;
 &lt;h2&gt;挑战&lt;/h2&gt;
 自从 2014 年从单体（monolith）架构迁移到微服务架构，
 “虽然解决了开发端的问题，但却把问题推给了基础架构组”，Squarespace 网站可靠性组的主任工程师 Kevin Lynch 说道。
 “在 5000 台虚拟机主机上的基础设施部署流程拖慢了每个人的节奏。”

 &lt;br&gt;
 &lt;!-- &lt;h2&gt;Solution&lt;/h2&gt;
 The team experimented with container orchestration platforms, and found that Kubernetes "answered all the questions that we had," says Lynch. The company began running Kubernetes in its data centers in&amp;nbsp;2016. --&gt;
 &lt;h2&gt;解决方案&lt;/h2&gt;
 网站可靠性组尝试了多种容器编排平台，发现 Kubernetes “回答了我们所有的问题”，Lynch 说道。于是公司从 2016 年开始在自己的数据中心里运行 Kubernetes。
 &lt;/div&gt;

 &lt;div class="col2"&gt;

&lt;!-- &lt;h2&gt;Impact&lt;/h2&gt;
Since Squarespace moved to Kubernetes, in conjunction with modernizing its networking stack, deployment time has been reduced by almost 85%.
Before, their VM deployment would take half an hour; now, says Lynch, "someone can generate a templated application, deploy it within five minutes,
and have actual instances containerized, running in our staging environment at that point." Because of that, "productivity time is the big cost saver,"
he adds. "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on."
Resilience has also been improved with Kubernetes: "If a node goes down, it’s rescheduled immediately and there’s no performance&amp;nbsp;impact." --&gt;

&lt;h2&gt;影响&lt;/h2&gt;
自从 Squarespace 迁移到 Kubernetes，加上网络技术栈的现代化改造，部署时间减少了近 85%。
以前，他们的虚拟机部署要耗费半个小时；现在，Lynch 说，“任何人都可以生成一个模板化应用，在五分钟内完成部署，此时实际的实例已经容器化，运行在我们的预发布（staging）环境中。”正因为如此，“节省下来的生产力时间是最大的成本节约。”
他补充道：“当我们启动 Kubernetes 项目时，我们大概只有十几个微服务。如今流水线中正在积极开发的微服务数量已经翻了一倍。”
Kubernetes 也提升了系统的韧性：“如果一个节点宕机，上面的工作负载会立即被重新调度，没有任何性能影响。”

&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/feQkzJkW-SA" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen&gt;&lt;/iframe&gt;
 &lt;br&gt;&lt;br&gt;“一旦你验证了 Kubernetes 可以解决一个问题，每个人都会立即着手解决其它的问题，无需你的布道。”
&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;— Kevin Lynch，Squarespace 网站可靠性组的主任工程师&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- &lt;h2&gt;Since it was started in a dorm room in 2003, Squarespace has made it simple for millions of people to create their own websites.&lt;/h2&gt; --&gt;
 &lt;h2&gt;自 2003 年在宿舍起步以来，Squarespace 已经让数百万人能够轻松创建自己的网站。&lt;/h2&gt;

 &lt;!-- Behind the scenes, though, the company’s monolithic Java application was making things not so simple for its developers to keep improving the platform.
 So in 2014, the company decided to "go down the microservices path," says Kevin Lynch, staff engineer on Squarespace’s Site Reliability team.
 "But we were always deploying our applications in vCenter VMware VMs [in our own data centers]. Microservices solved a problem on the development side,
 but it pushed that problem to the Infrastructure team. The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."&lt;br&gt;&lt;br&gt; --&gt;

 但在幕后，公司的单体 Java 应用却让开发人员难以持续改进平台。于是在 2014 年，公司决定“走微服务之路”，Squarespace 网站可靠性组的主任工程师 Kevin Lynch 说道。
 “但我们一直是在自己数据中心的 vCenter VMware 虚拟机上部署应用。微服务解决了开发端的问题，却把问题推给了基础架构组。在 5000 台虚拟机主机上的基础设施部署流程拖慢了每个人的节奏。”

 &lt;!-- After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says,
 the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
 Deploying it in the data center rather than the public cloud was their biggest challenge, and at the time, not a lot of other companies were doing that.
 "We had to figure out how to deploy this in our infrastructure for ourselves, and we had to integrate it with our other applications," says Lynch.&lt;br&gt;&lt;br&gt; --&gt;

 在尝试过另一个容器编排平台并“以非常痛苦的方式把它弄坏”之后，Lynch 说，团队于 2016 年年中开始尝试 Kubernetes，发现它“回答了我们所有的问题”。
 把 Kubernetes 部署在数据中心而非公有云上，是他们面临的最大挑战，而在当时，这样做的公司并不多。
 “我们必须自己摸索出如何把它部署到我们的基础架构中，还必须把它与我们的其他应用集成起来，”Lynch 补充道。&lt;br&gt;&lt;br&gt;

 &lt;!-- At the same time, Squarespace’s Network Engineering team was modernizing its networking stack, switching from a traditional layer-two network to a layer-three spine-and-leaf network.
 "It mapped beautifully with what we wanted to do with Kubernetes," says Lynch. "It gives us the ability to have our servers communicate directly with the top-of-rack switches. We use Calico for
 &lt;a href="https://github.com/containernetworking/cnihttps://github.com/containernetworking/cni"&gt;CNI networking for Kubernetes&lt;/a&gt;,
 so we can announce all these individual Kubernetes pod IP addresses and have them integrate seamlessly with our other services that are still provisioned in the VMs." --&gt;

 与此同时，Squarespace 的网络工程组也在对其网络技术栈进行现代化改造，从传统的二层网络切换到三层 spine-and-leaf（脊叶）网络架构。
 “这与我们想用 Kubernetes 做的事情完美契合，”Lynch 说道，“它让我们的服务器能够直接与架顶（top-of-rack）交换机通信。我们使用 Calico 作为
 &lt;a href="https://github.com/containernetworking/cni"&gt;Kubernetes 的 CNI 网络插件&lt;/a&gt;，
 这样我们就可以通告每一个 Kubernetes Pod 的 IP 地址，让它们与仍然部署在虚拟机中的其他服务无缝集成。”
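作为补充说明：在 Calico 中，“让服务器直接与架顶交换机通信”通常是通过 BGP 对等体实现的。下面是一个纯示意性的 BGPPeer 清单，其中交换机地址、AS 号和机架标签均为假设值，并非 Squarespace 的真实配置：

```yaml
# 示意：让 rack1 机架上的节点与其架顶交换机建立 BGP 对等
# （peerIP、asNumber、机架标签均为假设值）
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack1-tor
spec:
  nodeSelector: rack == 'rack1'   # 只对 rack1 上的节点生效
  peerIP: 10.0.1.1                # 架顶交换机地址（假设）
  asNumber: 64512                 # 交换机的 AS 号（假设）
```

这样节点就能把本机 Pod 的路由通告给交换机，虚拟机侧的服务无需任何隧道即可直接访问 Pod IP。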

&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3" style="background-image: url('/images/case-studies/squarespace/banner3.jpg')"&gt;
 &lt;!-- &lt;div class="banner3text"&gt;
 After experimenting with another container orchestration platform and "breaking it in very painful ways,"
 Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
 &lt;/div&gt; --&gt;
 &lt;div class="banner3text"&gt;
 在尝试过另一个容器编排平台并“以非常痛苦的方式把它弄坏”之后，Lynch 说，团队于 2016 年年中开始尝试 Kubernetes，发现它“回答了我们所有的问题”。
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- Within a couple months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production.
 They also added Zipkin and CNCF projects &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; and &lt;a href="https://www.fluentd.org/"&gt;fluentd&lt;/a&gt; to their cloud native stack.
 "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process,
 so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Docker file,
 and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds.
 "Now there is little configuration variation." --&gt;

 短短几个月，他们就有了一个稳定的集群供内部使用，并开始把 Kubernetes 推向生产环境。
 他们还在云原生技术栈中加入了 Zipkin 以及 CNCF 项目 &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; 和 &lt;a href="https://www.fluentd.org/"&gt;fluentd&lt;/a&gt;。
 “我们切换到了 Kubernetes 这个新世界，同时也革新了我们所有其他的工具，”Lynch 说道。“它让我们简化了流程，现在可以轻松地从模板创建整个微服务项目，为其生成代码和部署流水线，生成 Dockerfile，
 然后立即把一个可用、可部署的项目发布到 Kubernetes 上。”在 Dev/QA/Stage/Prod 各环境间的部署也“大幅简化了”，Lynch 补充道，
 “现在环境之间几乎没有配置差异。”
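文中“从模板创建整个微服务项目”的思路，可以用一小段 Python 草图来示意：根据少量参数渲染出一个最小的 Deployment 清单。函数名、默认副本数等均为假设，仅用于说明模板化部署的形态，并非 Squarespace 的真实工具：

```python
def render_deployment(name: str, image: str,
                      replicas: int = 2, env: str = "staging") -> dict:
    """根据模板参数渲染一个最小的 Deployment 清单（dict 形式，
    可用 yaml.safe_dump 序列化后交给 kubectl apply）。"""
    labels = {"app": name, "env": env}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# 用法示例：为一个假设的服务生成预发布环境的清单
manifest = render_deployment("file-storage", "example.com/file-storage:1.0")
```

环境间的差异（副本数、环境标签等）都收敛到几个参数上，这正是“配置差异很小”的由来。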

&lt;br&gt;&lt;br&gt;
 &lt;!-- And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment.
 "From end to end that probably took half an hour, and that’s not accounting for the fact that an infrastructure engineer would be responsible for doing that,
 so there’s some business delay in there as well." --&gt;

 而且整个部署过程只需要五分钟，与虚拟机部署相比，时间减少了近 85%。
 “以前端到端大概要花半个小时，而且这还没算上必须由基础架构工程师来执行部署，所以中间还存在一些业务上的延迟。”
&lt;br&gt;&lt;br&gt;
 &lt;!-- With faster deployments, "productivity time is the big cost saver," says Lynch. "We had a team that was implementing a new file storage service,
 and they just started integrating that with our storage back end without our involvement"—which wouldn’t have been possible before Kubernetes.
 He adds: "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on." --&gt;

 部署变快之后，“节省下来的生产力时间是最大的成本节约，”Lynch 说，“我们有个团队要实现一个新的文件存储服务，他们径直开始与我们的存储后端做集成，完全不需要我们参与”，这在采用 Kubernetes 之前是不可能的。
 他补充道：“当我们启动 Kubernetes 项目时，我们大概只有十几个微服务。如今流水线中正在积极开发的微服务数量已经翻了一倍。”



&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4" style="background-image: url('/images/case-studies/squarespace/banner4.jpg')"&gt;
 &lt;div class="banner4text"&gt;
 &lt;!-- "We switched to Kubernetes, a new world....It allowed us to streamline our process, so we can now easily create an entire microservice project from templates,"
 Lynch says. And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment. --&gt;

 “我们切换到了 Kubernetes 这个新世界……它让我们简化了流程，现在可以轻松地从模板创建整个微服务项目，”
 Lynch 说道。整个流程只需要五分钟，与虚拟机部署相比，时间减少了近 85%。
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5" style="padding:0px !important"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- There’s also been a positive impact on the application’s resilience. "When we’re deploying VMs, we have to build tooling to ensure that a service is
 spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down,
 it’s rescheduled immediately and there’s no performance impact." --&gt;

 这对应用的韧性也产生了积极影响。“部署虚拟机时，我们必须自己构建工具，来确保服务合理地分布在各个机架上并能承受故障，”他说，“而 Kubernetes 天生就能做到。如果一个节点宕机，上面的工作负载会立即被重新调度，没有任何性能影响。”
&lt;br&gt;&lt;br&gt;
 &lt;!-- Another big benefit is autoscaling. "It wasn’t really possible with the way we’ve been using VMware," says Lynch, "but now we can just
 add the appropriate autoscaling features via Kubernetes directly, and boom, it’s scaling up as demand increases. And it worked out of the box." --&gt;

 另一个很大的好处是自动扩缩容。“按照我们以往使用 VMware 的方式，这几乎不可能实现，”Lynch 说，“但现在，我们可以直接通过 Kubernetes 添加合适的自动扩缩容功能，然后，砰，它就随着需求的增长自动扩容了，而且开箱即用。”
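文中提到的“直接通过 Kubernetes 添加自动扩缩容功能”，通常对应一个 HorizontalPodAutoscaler 对象。下面是一个最小示意（目标 Deployment 的名称与阈值均为假设值）：

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: image-service          # 假设的服务名
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: image-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # 平均 CPU 利用率超过 70% 时扩容
```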
&lt;br&gt;&lt;br&gt;
 &lt;!-- For others starting out with Kubernetes, Lynch says his best advice is to "fail fast": "Once you’ve planned things out, just execute.
 Kubernetes has been really great for trying something out quickly and seeing if it works or not." --&gt;

 对于刚开始使用 Kubernetes 的人，Lynch 说他最好的建议就是“快速失败”：“一旦做好规划，就放手执行。Kubernetes 非常适合快速尝试一个想法，看看它行不行得通。”

&lt;/div&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 &lt;!-- "When we’re deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure,"
 he says. "Kubernetes just does it. If a node goes down, it’s rescheduled immediately and there’s no performance impact." --&gt;

 “部署虚拟机时，我们必须自己构建工具，来确保服务合理地分布在各个机架上并能承受故障，”他说，“而 Kubernetes 天生就能做到。如果一个节点宕机，上面的工作负载会立即被重新调度，没有任何性能影响。”

 &lt;/div&gt;
&lt;/div&gt;

&lt;div class="fullcol"&gt;
 &lt;!-- Lynch and his team are planning to open source some of the tools they’ve developed to extend Kubernetes and use it as an API itself.
 The first tool injects dependent applications as containers in a pod.
 "When you ship an application, usually it comes along with a whole bunch of dependent applications that need to be shipped with that,
 for example, fluentd for logging," he explains. With this tool, the developer doesn’t need to worry about the configurations. --&gt;

 Lynch 和他的团队正计划开源他们开发的一些工具，这些工具扩展了 Kubernetes，并把 Kubernetes 本身当作 API 来使用。
 第一个工具把依赖的应用作为容器注入到 Pod 中。
 “发布一个应用时，通常还得随之发布一大堆依赖应用，例如用于日志收集的 fluentd，”他解释道。
 有了这个工具，开发人员就不需要操心这些配置了。

&lt;br&gt;&lt;br&gt;
 &lt;!-- Going forward, all new services at Squarespace are going into Kubernetes, and the end goal is to convert everything it can. About a quarter of
 existing services have been migrated. "Our monolithic application is going to be the last one, just because it’s so big and complex," says Lynch.
 "But now I’m seeing other services get moved over, like the file storage service. Someone just did it and it worked—painlessly. So I believe if we tackle it,
 it’s probably going to be a lot easier than we fear. Maybe I should just take my own advice and fail fast!" --&gt;

 今后，Squarespace 所有新的服务都将直接运行在 Kubernetes 上，最终目标是把所有能迁移的都迁移过去。
 现有服务中已有约四分之一完成了迁移。“我们的单体应用将是最后一个被迁移的，仅仅是因为它太大、太复杂，”Lynch 说道。
 “但现在我已经看到其他服务陆续迁移到 Kubernetes 上，比如文件存储服务。有人直接动手做了，而且毫不费力就成功了。
 所以我相信，如果我们着手去做，很可能会比我们担心的要轻松许多。也许我该听从自己的建议，快速失败！”

&lt;/div&gt;

&lt;/section&gt;</description></item><item><title>WebhookAdmission 配置 (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-webhookadmission.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/apiserver-webhookadmission.v1/</guid><description>&lt;!--
title: WebhookAdmission Configuration (v1)
content_type: tool-reference
package: apiserver.config.k8s.io/v1
auto_generated: true
--&gt;
&lt;p&gt;
&lt;!--
Package v1 is the v1 version of the API.
--&gt;
此 API 的版本是 v1。
&lt;/p&gt;
&lt;!--
## Resource Types 
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-WebhookAdmission"&gt;WebhookAdmission&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="apiserver-config-k8s-io-v1-WebhookAdmission"&gt;&lt;code&gt;WebhookAdmission&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;!--
WebhookAdmission provides configuration for the webhook admission controller.
--&gt;
WebhookAdmission 为 Webhook 准入控制器提供配置信息。
&lt;/p&gt;
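结合下表中的字段，一个最小的 WebhookAdmission 配置文件形如下例（文件路径为示例值）：

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: WebhookAdmission
# 指向用于访问 Webhook 的 kubeconfig 文件（路径为示例值）
kubeConfigFile: /etc/kubernetes/admission/webhook.kubeconfig
```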
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;apiserver.config.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;WebhookAdmission&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kubeConfigFile&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;
&lt;!--
KubeConfigFile is the path to the kubeconfig file.
--&gt;
字段 kubeConfigFile 包含指向 kubeconfig 文件的路径。
&lt;/p&gt;</description></item><item><title>Windows 调试技巧</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/windows/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/windows/</guid><description>&lt;!--
reviewers:
- aravindhp
- jayunit100
- jsturtevant
- marosset
title: Windows debugging tips
content_type: concept
--&gt;
&lt;!-- overview --&gt;
&lt;!-- body --&gt;
&lt;!-- 
## Node-level troubleshooting {#troubleshooting-node}

1. My Pods are stuck at "Container Creating" or restarting over and over

 Ensure that your pause image is compatible with your Windows OS version.
 See [Pause container](/docs/concepts/windows/intro/#pause-container)
 to see the latest / recommended pause image and/or get more information.
--&gt;
&lt;h2 id="troubleshooting-node"&gt;工作节点级别排障&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;我的 Pod 都卡在 “Container Creating” 或者不断重启&lt;/p&gt;</description></item><item><title>Yahoo! Japan</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/yahoo-japan/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/yahoo-japan/</guid><description/></item><item><title>阿迪达斯案例研究</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/adidas/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/adidas/</guid><description>&lt;!--
title: adidas Case Study
linkTitle: adidas
case_study_styles: true
cid: caseStudies
featured: false

new_case_study_styles: true
heading_background: /images/case-studies/adidas/banner1.png
heading_title_text: adidas
use_gradient_overlay: true
subheading: &gt;
 Staying True to Its Culture, adidas Got 40% of Its Most Impactful Systems Running on Kubernetes in a Year
case_study_details:
 - Company: adidas
 - Location: Herzogenaurach, Germany
 - Industry: Fashion
--&gt;

&lt;!-- 
&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In recent years, the adidas team was happy with its software choices from a technology perspective—but accessing all of the tools was a problem. For instance, "just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who's responsible, give the internal cost center a call so that they can do recharges," says Daniel Eichten, Senior Director of Platform Engineering. "The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week."&lt;/p&gt;</description></item><item><title>案例研究：Buffer</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/buffer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/buffer/</guid><description>&lt;!-- &lt;div class="banner1"&gt;
 &lt;h1&gt;CASE STUDY: &lt;img src="https://andygol-k8s.netlify.app/images/buffer.png" width="18%" style="margin-bottom:-5px;margin-left:10px;"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;Making Deployments Easy for a Small, Distributed Team&lt;/div&gt;
&lt;/h1&gt;
&lt;/div&gt; --&gt;
&lt;div class="banner1"&gt;
 &lt;h1&gt;案例研究: &lt;img src="https://andygol-k8s.netlify.app/images/buffer.png" width="18%" style="margin-bottom:-5px;margin-left:10px;"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;使小型分布式团队轻松部署&lt;/div&gt;
&lt;/h1&gt;
&lt;/div&gt;

&lt;!-- &lt;div class="details"&gt;
 Company&amp;nbsp;&lt;b&gt;Buffer&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Around the World&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Social Media Technology&lt;/b&gt;
&lt;/div&gt; --&gt;
&lt;div class="details"&gt;
 公司&amp;nbsp;&lt;b&gt;Buffer&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;位置 &amp;nbsp;&lt;b&gt;全球&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;行业 &amp;nbsp;&lt;b&gt;社交媒体技术公司&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;
 &lt;div class="cols"&gt;
 &lt;div class="col1"&gt;

&lt;!-- &lt;h2&gt;Challenge&lt;/h2&gt;
With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as&amp;nbsp;necessary." --&gt;
&lt;h2&gt;挑战&lt;/h2&gt;
 Buffer 拥有一支 80 人的完全分布式团队，成员遍布全球近十几个时区。这家为代理商和营销人员提供社交媒体管理服务的公司，希望解决其“典型的单体代码库问题”，架构师 Dan Farrelly 说。“我们希望拥有一种流动性的基础架构：开发人员可以创建一个应用，部署它，并按需对其进行水平扩展。”
&lt;/div&gt;

&lt;div class="col2"&gt;
 &lt;!-- &lt;h2&gt;Solution&lt;/h2&gt;
 Embracing containerization, Buffer moved its infrastructure from Amazon Web Services’ Elastic Beanstalk to Docker on AWS, orchestrated with&amp;nbsp;Kubernetes. --&gt;
&lt;h2&gt;解决方案&lt;/h2&gt;
拥抱容器化，Buffer 将其基础设施从 AWS 上的 Elastic Beanstalk 迁移到由 Kubernetes 负责编排的 Docker 上。
 &lt;br&gt;
 &lt;br&gt;
&lt;!-- &lt;h2&gt;Impact&lt;/h2&gt;
 The new system "leveled up our ability with deployment and rolling out new changes," says Farrelly. "Building something on your computer and knowing that it’s going to work has shortened things up a lot. Our feedback cycles are a lot faster now&amp;nbsp;too." --&gt;
&lt;h2&gt;影响&lt;/h2&gt;
Farrelly 说，新系统“提高了我们将新变化进行部署和上线的能力”。“在自己的计算机上构建一些东西，并且知道它是可用的，这已经让事情简单了很多；而且我们的反馈周期现在也快了很多。”
&lt;/div&gt;
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 &lt;!-- "It’s amazing that we can use the Kubernetes solution off the shelf with our team. And it just keeps getting better. Before we even know that we need something, it’s there in the next release or it’s coming in the next few months."&lt;br&gt;&lt;br&gt;&lt;span style="font-size:16px;letter-spacing:2px;"&gt;- DAN FARRELLY, BUFFER ARCHITECT&lt;/span&gt; --&gt;
“我们的团队能够直接使用现成的 Kubernetes 解决方案，这太棒了，而且它还在不断改进。往往在我们意识到需要某个功能之前，它就已经出现在下一个版本里，或者会在未来几个月内推出。”&lt;br&gt;&lt;br&gt;&lt;span style="font-size:16px;letter-spacing:2px;"&gt;- DAN FARRELLY，BUFFER 架构师&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
&lt;!-- &lt;h2&gt;Dan Farrelly uses a carpentry analogy to explain the problem his company, &lt;a href="https://buffer.com"&gt;Buffer&lt;/a&gt;, began having as its team of developers grew over the past few years.&lt;/h2&gt; --&gt;
&lt;h2&gt;Dan Farrelly 用木工做类比，来解释他的公司 &lt;a href="https://buffer.com"&gt;Buffer&lt;/a&gt; 随着开发团队在过去几年不断壮大而开始遇到的问题。&lt;/h2&gt;

&lt;!-- "If you’re building a table by yourself, it’s fine," the company’s architect says. "If you bring in a second person to work on the table, maybe that person can start sanding the legs while you’re sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table." Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes. --&gt;
“如果你一个人做一张桌子，那没问题，”公司的架构师说。“如果你再找一个人一起做，也许他可以在你打磨桌面时打磨桌腿。但当你带进来第三个、第四个人时，恐怕就得有人去做另一张桌子了。”需要在越来越多“不同的桌子”上工作，让 Buffer 走上了微服务与容器化之路，而这一切由 Kubernetes 变为可能。
&lt;br&gt;&lt;br&gt;
&lt;!-- Since around 2012, Buffer had already been using &lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;Elastic Beanstalk&lt;/a&gt;, the orchestration service for deploying infrastructure offered by &lt;a href="https://aws.amazon.com"&gt;Amazon Web Services&lt;/a&gt;. "We were deploying a single monolithic &lt;a href="http://php.net/manual/en/intro-whatis.php"&gt;PHP&lt;/a&gt; application, and it was the same application across five or six environments," says Farrelly. "We were very much a product-driven company. --&gt;
大约从 2012 年起，Buffer 就一直在使用 &lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;Elastic Beanstalk&lt;/a&gt;，这是 &lt;a href="https://aws.amazon.com"&gt;Amazon Web Services&lt;/a&gt; 提供的用于部署基础设施的编排服务。“我们部署的是一个单体的 &lt;a href="http://php.net/manual/en/intro-whatis.php"&gt;PHP&lt;/a&gt; 应用，而且在五六个环境里跑的都是同一个应用，”Farrelly 说。“我们在很大程度上是一家产品驱动型公司。”
&lt;!-- It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didn’t spend too much time on it. If things were getting a little bit slow, we’d maybe use a faster server or just scale up one instance, and it would be good enough. We’d move on." --&gt;
“一切都围绕着快速交付新功能、尽快把东西推出去；只要没出问题，我们就不会在上面花太多时间。如果系统变得有点慢，我们可能换一台更快的服务器，或者把某个实例升配一下，这就足够了，然后继续前进。”
&lt;br&gt;&lt;br&gt;
&lt;!-- But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffer’s then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. "It was a classic monolithic code base problem," says Farrelly.&lt;br&gt;&lt;br&gt;Some of the company’s team was already successfully using &lt;a href="https://www.docker.com"&gt;Docker&lt;/a&gt; in their development environment, but the only application running on Docker in production was a marketing website that didn’t see real user traffic. They wanted to go further with Docker, and the next step was looking at options for&amp;nbsp;orchestration. --&gt;
但到了 2016 年，问题达到了临界点。随着团队中提交代码的开发人员越来越多，Farrelly 和 Buffer 当时的首席技术官 Sunil Sadasivan 决定，是时候重新思考并重构基础架构了。“这是一个典型的单体代码库问题，”Farrelly 说。公司的一些团队已经在开发环境中成功使用了 &lt;a href="https://www.docker.com"&gt;Docker&lt;/a&gt;，但生产环境中唯一跑在 Docker 上的应用，是一个没有真实用户流量的营销网站。他们希望在 Docker 上更进一步，下一步就是考察编排方案。
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
&lt;!-- And all the things Kubernetes did well suited Buffer’s needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it&amp;nbsp;[Kubernetes]." --&gt;
Kubernetes 的种种能力正好契合 Buffer 的需求。Farrelly 说：“我们希望拥有一种流动性的基础架构，开发人员可以创建一个应用，部署它，并按需进行水平扩展。我们很快用一些脚本搭起了几个测试集群，在容器里构建了一些小型概念验证应用，一个小时内就把东西部署了起来。我们在生产环境运行容器方面经验很少，但令人惊讶的是，我们这么快就掌握了它（Kubernetes）。”
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- First they considered &lt;a href="https://mesosphere.com"&gt;Mesosphere&lt;/a&gt;, &lt;a href="https://dcos.io"&gt;DC/OS&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ecs/"&gt;Amazon Elastic Container Service&lt;/a&gt; (which their data systems team was already using for some data pipeline jobs). While they were impressed by these offerings, they ultimately went with Kubernetes. --&gt;
首先，他们考察了 &lt;a href="https://mesosphere.com"&gt;Mesosphere&lt;/a&gt;、&lt;a href="https://dcos.io"&gt;DC/OS&lt;/a&gt; 和 &lt;a href="https://aws.amazon.com/ecs/"&gt;Amazon Elastic Container Service&lt;/a&gt;（他们的数据系统团队已经在某些数据管道作业中使用后者）。虽然这些产品令他们印象深刻，但他们最终还是选择了 Kubernetes。
 &lt;!-- "We run on AWS still, so spinning up, creating services and creating load balancers on demand for us without having to configure them manually was a great way for our team to get into this," says Farrelly. "We didn’t need to figure out how to configure this or that, especially coming from a former Elastic Beanstalk environment that gave us an automatically-configured load balancer. I really liked Kubernetes’ controls of the command line. It just took care of ports. It was a lot more flexible. Kubernetes was designed for doing what it does, so it does it very well." --&gt;
Farrelly 说：“我们仍然跑在 AWS 上，所以能够按需启动实例、创建服务、创建负载均衡器，而无需手动配置它们，这对我们团队来说是入门的绝佳方式。我们不需要琢磨怎么配置这个或那个，尤其是我们来自以前的 Elastic Beanstalk 环境，它给我们的是自动配置好的负载均衡器。我非常喜欢 Kubernetes 的命令行控制方式。端口它都自动处理好了，灵活得多。Kubernetes 就是为它所做的事情而设计的，所以它做得非常好。”
&lt;br&gt;&lt;br&gt;
 &lt;!-- And all the things Kubernetes did well suited Buffer’s needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]." --&gt;
Kubernetes 的种种能力正好契合 Buffer 的需求。Farrelly 说：“我们希望拥有一种流动性的基础架构，开发人员可以创建一个应用，部署它，并按需进行水平扩展。我们很快用一些脚本搭起了几个测试集群，在容器里构建了一些小型概念验证应用，一个小时内就把东西部署了起来。我们在生产环境运行容器方面经验很少，但令人惊讶的是，我们这么快就掌握了它（Kubernetes）。”
&lt;br&gt;&lt;br&gt;
 &lt;!-- Above all, it provided a powerful solution for one of the company’s most distinguishing characteristics: their remote team that’s spread across a dozen different time zones. "The people with deep knowledge of our infrastructure live in time zones different from our peak traffic time zones, and most of our product engineers live in other places," says Farrelly. "So we really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep. Otherwise people would sit around for 12 to 24 hours for something. It’s been really cool to see people moving much faster." --&gt;
最重要的是，它为公司最鲜明的特点之一提供了强有力的解决方案：他们的远程团队分布在十几个不同的时区。Farrelly 说：“对我们基础设施有深入了解的人，所在时区与我们的流量高峰时区不同，而我们的大多数产品工程师也住在其他地方。所以我们非常希望有一套任何人都能尽早上手并用起来的系统，而不必担心负责部署的工程师正在睡觉。否则，大家可能要为一件事干等 12 到 24 个小时。看到大家的节奏快了这么多，真的很酷。”
 &lt;br&gt;&lt;br&gt;
 &lt;!-- With a relatively small engineering team—just 25 people, and only a handful working on infrastructure, with the majority front-end developers—Buffer needed "something robust for them to deploy whatever they wanted," says Farrelly. Before, "it was only a couple of people who knew how to set up everything in the old way. With this system, it was easy to review documentation and get something out extremely quickly. It lowers the bar for us to get everything in production. We don't have the big team to build all these tools or manage the infrastructure like other larger companies&amp;nbsp;might." --&gt;
Buffer 的工程团队相对较小，只有 25 人，其中仅有少数人负责基础设施，大多数是前端开发人员，因此 Buffer 需要“一套足够健壮、能让他们想部署什么就部署什么的系统”，Farrelly 说。以前，“只有几个人知道怎么按老办法把一切搭起来。有了这套系统，查阅文档并极快地交付成果变得很容易。它降低了我们把一切投入生产的门槛。我们没有大团队去构建所有这些工具，也没法像其他更大的公司那样去管理基础设施。”
&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 &lt;!-- "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it’s out the&amp;nbsp;door." --&gt;
Farrelly 说：“在我们以往的工作方式中，反馈循环要长得多，而且很脆弱，因为一旦你部署了某个东西，就有很高的风险破坏其他东西。借助我们围绕 Kubernetes 构建的部署方式，我们能够发现 Bug、修复它们，并超快地把修复部署出去。有人刚修好一个 Bug，下一秒它就发布出去了。”
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- To help with this, Buffer developers wrote a deploy bot that wraps the Kubernetes deploy process and can be used by every team. "Before, our data analysts would update, say, a &lt;a href="https://www.python.org"&gt;Python&lt;/a&gt; analysis script and have to wait for the lead on that team to click the button and deploy it," Farrelly explains. "Now our data analysts can make a change, enter a &lt;a href="https://slack.com"&gt;Slack&lt;/a&gt; command, ‘/deploy,’ and it goes out instantly. They don’t need to wait on these slow turnaround times. They don’t even know where it’s running; it doesn’t matter." --&gt;
为了帮助解决这一问题，Buffer 开发人员编写了一个部署机器人，该机器人包装了 Kubernetes 部署过程，并且每个团队都可以使用。”“以前,我们的数据分析师会更新&lt;a href="https://www.python.org"&gt; Python &lt;/a&gt;分析脚本，并且必须等待该团队的主管单击该按钮并部署它，”Farrelly 解释道。“现在，我们的数据分析师可以进行更改，输入&lt;a href="https://slack.com"&gt; Slack &lt;/a&gt;命令，‘/deploy’,它会立即进行部署。他们不需要等待这些缓慢的周转时间，他们甚至不知道它在哪里运行。这些都不重要了。
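作为示意，这类 “/deploy” 机器人的核心不过是把命令文本翻译成一次 kubectl 调用。下面的 Python 片段仅为假设性的草图（镜像仓库地址、命令格式均为虚构，并非 Buffer 的真实实现）：

```python
import shlex

REGISTRY = "registry.example.com"  # 假设的镜像仓库地址

def deploy_args(text: str, namespace: str = "default") -> list:
    """把假设的 Slack '/deploy <service> <version>' 命令文本
    翻译成机器人将要执行的 kubectl 参数列表。"""
    parts = shlex.split(text)
    if len(parts) != 2:
        raise ValueError("用法：/deploy <service> <version>")
    service, version = parts
    return [
        "kubectl", "set", "image",
        "deployment/" + service,
        f"{service}={REGISTRY}/{service}:{version}",
        "-n", namespace,
    ]

# 用法示例：数据分析师输入 “/deploy analytics v1.4.2”
args = deploy_args("analytics v1.4.2")
```

机器人只需把 args 交给 subprocess 执行；提交者既不需要权限配置，也不需要知道工作负载跑在哪里。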
 &lt;br&gt;&lt;br&gt;
 &lt;!-- One of the first applications the team built from scratch using Kubernetes was a new image resizing service. As a social media management tool that allows marketing teams to collaborate on posts and send updates across multiple social media profiles and networks, Buffer has to be able to resize photographs as needed to meet the varying limitations of size and format posed by different social networks. "We always had these hacked together solutions," says Farrelly. --&gt;
团队用 Kubernetes 从零构建的第一个应用是一个新的图片缩放服务。作为一款让营销团队协作撰写帖子、并向多个社交媒体账号和平台发送更新的社交媒体管理工具，Buffer 必须能够按需缩放照片，以满足不同社交网络对尺寸和格式的各种限制。“我们以前用的一直是东拼西凑的方案，”Farrelly 说。
 &lt;br&gt;&lt;br&gt;
 &lt;!-- To create this new service, one of the senior product engineers was assigned to learn Docker and Kubernetes, then build the service, test it, deploy it and monitor it—which he was able to do relatively quickly. "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it’s out the door." --&gt;
为了创建这项新服务，一位高级产品工程师被指派去学习 Docker 和 Kubernetes，然后构建、测试、部署并监控该服务，而他相对很快就完成了这一切。Farrelly 说：“在我们以往的工作方式中，反馈循环要长得多，而且很脆弱，因为一旦你部署了某个东西，就有很高的风险破坏其他东西。借助我们围绕 Kubernetes 构建的部署方式，我们能够发现 Bug、修复它们，并超快地把修复部署出去。有人刚修好一个 Bug，下一秒它就发布出去了。”
 &lt;br&gt;&lt;br&gt;
 &lt;!-- Plus, unlike with their old system, they could scale things horizontally with one command. "As we rolled it out," Farrelly says, "we could anticipate and just click a button. This allowed us to deal with the demand that our users were placing on the system and easily scale it to handle it." --&gt;
此外，与旧系统不同，他们只需一条命令就可以进行水平扩展。“在我们推出它时，”Farrelly 说，“我们可以预见到需求，只需点击一个按钮。这使我们能够应对用户给系统带来的负载，并轻松扩展来处理它。”
 &lt;br&gt;&lt;br&gt;
 &lt;!-- Another thing they weren’t able to do before was a canary deploy. This new capability "made us so much more confident in deploying big changes," says Farrelly. "Before, it took a lot of testing, which is still good, but it was also a lot of ‘fingers crossed.’ And this is something that gets run 800,000 times a day, the core of our business. If it doesn’t work, our business doesn’t work. In a Kubernetes world, I can do a canary deploy to test it for 1 percent and I can shut it down very quickly if it isn’t working. This has leveled up our ability to deploy and roll out new changes quickly while reducing&amp;nbsp;risk." --&gt;
他们以前做不到的另一件事是金丝雀部署。Farrelly 说，这种新能力“使我们在部署重大变更时更有信心”。“以前，我们要做大量测试，这当然仍然有用，但也有很多‘听天由命’的成分。而这个东西每天要运行 80 万次，是我们业务的核心。如果它不工作，我们的业务也无法运转。在 Kubernetes 的世界里，我可以做金丝雀部署，先用 1% 的流量进行测试，如果不正常，我可以很快把它关掉。这让我们在快速部署和推出新变更的同时降低了风险。”
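Farrelly 所描述的这种金丝雀发布，在 Kubernetes 中的一种常见做法是让两个 Deployment 共享同一个 Service 选择算符，用副本数比例近似控制流量占比。下面是一个假设性的示意（其中 web、web-canary、镜像名等名称均为虚构，并非 Buffer 的实际配置）：

```yaml
# 假设性示例：金丝雀 Deployment 与稳定版 Deployment 共享 app=web 标签。
# 若稳定版运行 99 个副本、金丝雀运行 1 个副本，则金丝雀约承接 1% 流量。
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary            # 虚构名称
spec:
  replicas: 1                 # 副本比例决定流量占比
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web              # Service 按 app=web 选择，同时命中两个版本
        track: canary
    spec:
      containers:
      - name: web
        image: example.com/web:new   # 待验证的新版本镜像（虚构）
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # 不含 track 标签，因此同时路由到稳定版与金丝雀
  ports:
  - port: 80
```

若金丝雀验证失败，将其副本数缩容到 0（或删除该 Deployment）即可在数秒内撤回变更，这正对应引文中“如果不正常，我可以很快把它关掉”。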

&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 &lt;!-- "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We’re a relatively small team that’s actually running Kubernetes, and we’ve never run anything like it before. So it’s more approachable than you might think. That’s the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this&amp;nbsp;way." --&gt;
Farrelly 说：“如果你想在生产环境中运行容器，获得接近谷歌内部使用的那种能力，那么 Kubernetes 就是一个很好的途径。”“我们是一个相对较小的团队，却真正在运行 Kubernetes，而且我们以前从来没有做过这样的事情。因此，它比你想象的更容易上手。这是我想告诉正在尝试它的人们的一件大事：挑几个应用，把它们部署上去，运行几个月，看看它能处理得怎么样。通过这种方式你会学到很多东西。”
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- By October 2016, 54 percent of Buffer’s traffic was going through their Kubernetes cluster. "There’s a lot of our legacy functionality that still runs alright, and those parts might move to Kubernetes or stay in our old setup forever," says Farrelly. But the company made the commitment at that time that going forward, "all new development, all new features, will be running on Kubernetes." --&gt;
到 2016 年 10 月，Buffer 54% 的流量都经过其 Kubernetes 集群。Farrelly 说：“我们有很多遗留功能仍然运行良好，这些部分可能会迁移到 Kubernetes，也可能永远留在旧环境中。”但该公司当时承诺，今后“所有新的开发、所有新功能，都将运行在 Kubernetes 上。”
 &lt;br&gt;&lt;br&gt;
 &lt;!-- The plan for 2017 is to move all the legacy applications to a new Kubernetes cluster, and run everything they’ve pulled out of their old infrastructure, plus the new services they’re developing in Kubernetes, on another cluster. "I want to bring all the benefits that we’ve seen on our early services to everyone on the team," says Farrelly. --&gt;
2017 年的计划是把所有遗留应用程序迁移到一个新的 Kubernetes 集群，并把从旧基础架构中迁出的所有内容，连同正在开发的新服务，运行在另一个集群上。Farrelly 说：“我想把我们在早期服务上看到的所有好处带给团队中的每个人。”
 &lt;br&gt;&lt;br&gt;
 &lt;h2&gt;
&lt;!-- For Buffer’s engineers, it’s an exciting process. "Every time we’re deploying a new service, we need to figure out: OK, what’s the architecture? How do these services communicate? What’s the best way to build this service?" Farrelly says. "And then we use the different features that Kubernetes has to glue all the pieces together. It’s enabling us to experiment as we’re learning how to design a service-oriented architecture. Before, we just wouldn’t have been able to do it. This is actually giving us a blank white board so we can do whatever we want on it." --&gt;
对于 Buffer 的工程师来说，这是一个令人兴奋的过程。“每次部署新服务时，我们都需要弄清楚：好的，架构是什么？这些服务如何通信？构建此服务的最佳方式是什么？”Farrelly 说。“然后，我们使用 Kubernetes 的各种特性把所有部分粘合在一起。在我们学习如何设计面向服务的架构时，它让我们能够进行试验。以前，我们根本做不到这一点。这实际上给了我们一块空白的白板，我们可以在上面做任何想做的事情。”
 &lt;/h2&gt;
 &lt;!-- Part of that blank slate is the flexibility that Kubernetes offers should the time come when Buffer may want or need to change its cloud. "It’s cloud agnostic so maybe one day we could switch to Google or somewhere else," Farrelly says. "We’re very deep in Amazon but it’s nice to know we could move away if we need to." --&gt;
这块“白板”的一部分，正是当 Buffer 某天想要或需要更换云服务商时 Kubernetes 所提供的灵活性。“它与云无关，所以也许有一天我们可以切换到谷歌或其他地方，”Farrelly 说。“我们在亚马逊上投入很深，但很高兴知道，如果需要的话，我们可以迁走。”
 &lt;br&gt;&lt;br&gt;
 &lt;!-- At this point, the team at Buffer can’t imagine running their infrastructure any other way—and they’re happy to spread the word. "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We’re a relatively small team that’s actually running Kubernetes, and we’ve never run anything like it before. So it’s more approachable than you might think. That’s the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this&amp;nbsp;way." --&gt;
至此，Buffer 团队已经无法想象以任何其他方式运行其基础设施了，而且他们乐于传播这一经验。Farrelly 说：“如果你想在生产环境中运行容器，获得接近谷歌内部使用的那种能力，那么 Kubernetes 就是一个很好的途径。”“我们是一个相对较小的团队，却真正在运行 Kubernetes，而且我们以前从来没有做过这样的事情。因此，它比你想象的更容易上手。这是我想告诉正在尝试它的人们的一件大事：挑几个应用，把它们部署上去，运行几个月，看看它能处理得怎么样。通过这种方式你会学到很多东西。”
 &lt;br&gt;&lt;br&gt;
&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>案例研究：IBM</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/ibm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/ibm/</guid><description>&lt;!-- &lt;div class="banner1" style="background-image: url('/images/CaseStudy_ibm_banner1.jpg')"&gt;
 &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/ibm_logo.png" class="header_logo" style="width:10%"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Building an Image Trust Service on Kubernetes with Notary and TUF&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt; --&gt;

&lt;div class="banner1"&gt;
 &lt;h1&gt; 案例研究：&lt;img src="https://andygol-k8s.netlify.app/images/ibm_logo.png" width="18%" style="margin-bottom:-5px;margin-left:10px;"&gt;&lt;br&gt; &lt;div class="subhead"&gt;在 Kubernetes 上使用 Notary 和 TUF 建立镜像信任服务&lt;/div&gt;&lt;/h1&gt;

&lt;/div&gt;

&lt;div class="details"&gt;
 公司 &amp;nbsp;&lt;b&gt;IBM&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;位置 &amp;nbsp;&lt;b&gt;纽约州阿蒙克&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;行业 &amp;nbsp;&lt;b&gt;云计算&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;挑战&lt;/h2&gt;
 &lt;!-- &lt;a href="https://www.ibm.com/cloud/"&gt;IBM Cloud&lt;/a&gt; offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; and containers, to &lt;a href="https://www.cloudfoundry.org"&gt;Cloud Foundry&lt;/a&gt; platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM’s Weather Company API and data services. In the later part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service. --&gt;
&lt;a href="https://www.ibm.com/cloud/"&gt;IBM Cloud&lt;/a&gt; 提供公共、私有和混合云功能，包括基于 OpenWhisk 的服务 （FaaS）、托管于 &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; 和容器，以及 &lt;a href="https://www.cloudfoundry.org"&gt;Cloud Foundry&lt;/a&gt; 服务 （PaaS） 的各种运行时。这些运行时与公司企业技术（如 MQ 和 DB2、其现代人工智能 （AI） Watson 和数据分析服务）的强大功能相结合。IBM Cloud 用户可以使用其目录中 170 多个不同云原生服务的功能，包括 IBM 的气象公司 API 和数据服务等功能。在 2017 年后期，IBM 云容器托管团队希望构建镜像信任服务。&lt;br&gt;&lt;br&gt;
 &lt;h2&gt;解决方案&lt;/h2&gt;
 &lt;!-- The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the &lt;a href="https://www.cncf.io"&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt; open source project &lt;a href="https://github.com/theupdateframework/notary"&gt;Notary&lt;/a&gt;, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IKS clusters. The offering is that Notary server runs in IBM’s cloud, and then Portieris runs inside the IKS cluster. This enables users to be able to have their IKS cluster verify that the image they're loading containers from contains exactly what they expect it to, and Portieris is what allows an IKS cluster to apply that verification. --&gt;
这项新服务的开发工作最终于 2018 年 2 月在 IBM Cloud 中公开发布。IBM Cloud Container Registry 团队的软件开发者 Michael Hough 介绍，这个名为 Portieris 的镜像信任服务完全基于 &lt;a href="https://www.cncf.io"&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt; 的开源项目 &lt;a href="https://github.com/theupdateframework/notary"&gt;Notary&lt;/a&gt;。Portieris 是一个用于强制执行内容信任的 Kubernetes 准入控制器。用户可以为每个 Kubernetes 命名空间或在集群级别创建镜像安全策略，并为不同的镜像强制实施不同级别的信任。Portieris 是 IBM 信任方案的关键一环，因为它让用户能够在自己的 IKS 集群中使用该公司的 Notary 服务。具体形式是：Notary 服务器运行在 IBM 的云中，而 Portieris 运行在 IKS 集群内。这样，用户的 IKS 集群就能验证其加载容器所用的镜像内容是否与预期完全一致，而 Portieris 正是让 IKS 集群得以执行这种验证的组件。
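文中所述“按命名空间创建镜像安全策略”的形态，可以用一个假设性的策略片段来示意。请注意：API 组、字段名和仓库路径等细节以 Portieris 项目的实际文档为准，下面只是一个带虚构名称的草图，并非 IBM 产品文档中的权威示例：

```yaml
# 假设性示意：一个命名空间级别的 Portieris 镜像策略，
# 要求来自某私有仓库路径的镜像必须通过 Notary 签名校验（内容信任）。
# API 版本与字段以 Portieris 实际发行版为准。
apiVersion: portieris.cloud.ibm.com/v1
kind: ImagePolicy
metadata:
  name: signed-images-only
  namespace: production        # 策略只作用于该命名空间
spec:
  repositories:
  - name: "icr.io/my-team/*"   # 虚构的镜像仓库路径
    policy:
      trust:
        enabled: true          # 准入时强制校验镜像的 Notary 签名
```

在这种模式下，不满足策略的 Pod 会在准入阶段被拒绝创建；若要在集群级别统一生效，Portieris 还提供了对应的集群范围策略资源。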

 &lt;/div&gt;
&lt;div class="col2"&gt;
&lt;h2&gt;影响&lt;/h2&gt;
 &lt;!-- IBM's intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. "Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem," Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. "We had a multi-tenant Docker Registry with private image hosting," Hough says. "The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose." --&gt;
IBM 提供托管 Kubernetes 容器服务和镜像仓库的目的，是为其企业客户提供完全安全的端到端平台。Hough 说：“镜像签名是该产品的关键部分之一，我们的镜像仓库团队将 Notary 视为在当前 Docker 和容器生态系统中实现该能力的事实标准方式。”该公司此前并未提供镜像签名，而 Notary 正是它用来实现这一能力的工具。“我们有一个支持私有镜像托管的多租户 Docker Registry，”Hough 说。“Docker Registry 使用哈希值来确保镜像内容正确，并且数据在传输中和静态存储时都经过加密。但它无法保证镜像是由谁推送的。我们使用 Notary，让用户可以选择在其私有镜像仓库命名空间中为镜像签名。”
&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 &lt;!-- "We see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project."&lt;br style="height:25px"&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;- Michael Hough, a software developer with the IBM Container Registry team&lt;/span&gt; --&gt;
 “我们将 CNCF 视为云原生开源项目的安全港，无论项目最初来自哪个供应商，都为成员项目提供稳定性、长久的生命力和可预期的维护。”&lt;br&gt;&lt;br&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;- Michael Hough, IBM 容器镜像仓库团队软件开发人员&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- &lt;h2&gt;Docker had already created the Notary project as an implementation of &lt;a href="https://github.com/theupdateframework/specification" style="text-decoration:underline"&gt;The Update Framework (TUF)&lt;/a&gt;, and this implementation of TUF provided the capabilities for Docker Content Trust.&lt;/h2&gt; "After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem", says Michael Hough, a software developer with the IBM Cloud Container Registry team. --&gt;
 &lt;h2&gt;Docker 已经创建了 Notary 项目，作为 &lt;a href="https://github.com/theupdateframework/specification" style="text-decoration:underline"&gt;The Update Framework (TUF)&lt;/a&gt; 的一个实现，这一 TUF 实现为 Docker Content Trust 提供了能力支撑。&lt;/h2&gt; IBM Cloud Container Registry 团队的软件开发者 Michael Hough 说：“在 TUF 和 Notary 都被贡献给 CNCF 之后，我们意识到它正在成为容器生态系统中镜像签名的事实标准。”&lt;br&gt;&lt;br&gt;
&lt;!-- The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBM’s container registry was using. So was the design of TUF, which does not require the registry team to have to enter the business of key management. Both of these were "attractive design decisions that confirmed our choice of Notary," he says. --&gt;
选择 Notary 的关键原因，是它已经与 IBM 镜像仓库现有的身份验证栈兼容。TUF 的设计也是如此，它不要求镜像仓库团队涉足密钥管理业务。他说，这两点都是“有吸引力的设计决定，印证了我们选择 Notary 是正确的”。&lt;br&gt;&lt;br&gt;
&lt;!-- The introduction of Notary to implement image signing capability in IBM Cloud encourages increased security across IBM's cloud platform, "where we expect it will include both the signing of official IBM images as well as expected use by security-conscious enterprise customers," Hough says. "When combined with security policy implementations, we expect an increased use of deployment policies in CI/CD pipelines that allow for fine-grained control of service deployment based on image signers." --&gt;
在 IBM Cloud 中引入 Notary 来实现镜像签名能力，有助于提升 IBM 整个云平台的安全性。Hough 说：“我们预计它既会用于 IBM 官方镜像的签名，也会被注重安全的企业客户使用。与安全策略实现相结合时，我们预计 CI/CD 流水线中会更多地使用部署策略，从而基于镜像签名者对服务部署进行细粒度控制。”
&lt;!-- The availability of image signing "is a huge benefit to security-conscious customers who require this level of image provenance and security," Hough says. "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment." --&gt;
Hough 说，镜像签名能力的推出“对于需要这种级别的镜像来源保证和安全性的、注重安全的客户来说，是一个巨大的好处”。“借助我们的 IBM Cloud Kubernetes 即服务产品以及我们提供的准入控制器，IBM 自身的服务和 IBM 公有云的客户都可以使用安全策略来控制服务部署。”
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 &lt;!-- "Image signing is one key part of our Kubernetes container service offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem"&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;&lt;br&gt;- Michael Hough, a software developer with the IBM Cloud Container Registry team&lt;/span&gt; --&gt;
 “镜像签名是我们 Kubernetes 容器服务的关键部分之一，我们的镜像仓库团队将 Notary 视为在当前 Docker 和容器生态系统中实现该能力的事实标准方式。”&lt;br&gt;&lt;br&gt;&lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;- Michael Hough, IBM 容器镜像仓库团队软件开发人员&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- Now that the Notary-implemented service is generally available in IBM’s public cloud as a component of its existing IBM Cloud Container Registry, it is deployed as a highly available service across five IBM Cloud regions. This high-availability deployment has three instances across two zones in each of the five regions, load balanced with failover support. "We have also deployed it with end-to-end TLS support through to our back-end IBM Cloudant persistence storage service," Hough says. --&gt;
现在，这项基于 Notary 实现的服务已作为现有 IBM Cloud Container Registry 的一个组件在 IBM 公有云中正式可用，并以高可用服务的形式部署在五个 IBM Cloud 区域。该高可用部署在五个区域中的每一个都有分布在两个可用区的三个实例，通过负载均衡提供故障转移支持。Hough 说：“我们还为其部署了直达后端 IBM Cloudant 持久存储服务的端到端 TLS 支持。”&lt;br&gt;&lt;br&gt;
 &lt;!-- The IBM team has created and open sourced a Kubernetes admission controller called Portieris, which uses Notary signing information combined with customer-defined security policies to control image deployment into their cluster. "We are hoping to drive adoption of Portieris through its use of our Notary offering," Hough says. --&gt;
IBM 团队创建并开源了名为 Portieris 的 Kubernetes 准入控制器，它将 Notary 签名信息与客户定义的安全策略相结合，控制镜像向集群中的部署。Hough 说：“我们希望借助 Portieris 对我们 Notary 服务的使用来推动它的采用。”&lt;br&gt;&lt;br&gt;
 &lt;!-- IBM has been a key player in the creation and support of open source foundations, including CNCF. Todd Moore, IBM's vice president of Open Technology, is the current CNCF governing board chair and a number of IBMers are active across many of the CNCF member projects. --&gt;
IBM 在开源基金会（包括 CNCF）的创建和支持方面一直是关键参与者。IBM 开放技术副总裁 Todd Moore 是现任 CNCF 理事会主席，许多 IBM 员工活跃在众多 CNCF 成员项目中。
&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 &lt;!-- "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage." &lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;&lt;br&gt;&lt;br&gt;- Michael Hough, a software developer with the IBM Cloud Container Registry team&lt;/span&gt; --&gt;
 “有一些新项目正在应对这些挑战，包括 CNCF 内部的项目。我们一定会饶有兴趣地关注这些进展。我们发现 Notary 社区是一个积极友好、对变更持开放态度的社区，例如接纳了我们为持久存储添加的 CouchDB 后端。”&lt;br&gt;&lt;br&gt; &lt;span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"&gt;- Michael Hough, IBM 容器镜像仓库团队软件开发人员&lt;/span&gt;
 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section4"&gt;
 &lt;div class="fullcol"&gt;
&lt;!-- The company has used other CNCF projects &lt;a href="https://containerd.io"&gt;containerd&lt;/a&gt;, &lt;a href="https://www.envoyproxy.io"&gt;Envoy&lt;/a&gt;, &lt;a href="https://prometheus.io"&gt;Prometheus&lt;/a&gt;, &lt;a href="https://grpc.io"&gt;gRPC&lt;/a&gt;, and &lt;a href="https://github.com/containernetworking"&gt;CNI&lt;/a&gt;, and is looking into &lt;a href="https://github.com/spiffe"&gt;SPIFFE&lt;/a&gt; and &lt;a href="https://github.com/spiffe/spire"&gt;SPIRE&lt;/a&gt; as well for potential future use. --&gt;
该公司还使用了其他 CNCF 项目：&lt;a href="https://containerd.io"&gt;containerd&lt;/a&gt;、&lt;a href="https://www.envoyproxy.io"&gt;Envoy&lt;/a&gt;、&lt;a href="https://prometheus.io"&gt;Prometheus&lt;/a&gt;、&lt;a href="https://grpc.io"&gt;gRPC&lt;/a&gt; 和 &lt;a href="https://github.com/containernetworking"&gt;CNI&lt;/a&gt;，并且正在研究 &lt;a href="https://github.com/spiffe"&gt;SPIFFE&lt;/a&gt; 和 &lt;a href="https://github.com/spiffe/spire"&gt;SPIRE&lt;/a&gt; 未来的潜在用途。&lt;br&gt;&lt;br&gt;
&lt;!-- What advice does Hough have for other companies that are looking to deploy Notary or a cloud native infrastructure? --&gt;
对于希望部署 Notary 或云原生基础架构的其他公司，Hough 有何建议？&lt;br&gt;&lt;br&gt;
&lt;!-- "While this is true for many areas of cloud native infrastructure software, we found that a high-availability, multi-region deployment of Notary requires a solid implementation to handle certificate management and rotation," he says. "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage." --&gt;
“虽然这一点对云原生基础设施软件的许多领域都适用，但我们发现，高可用、多区域的 Notary 部署需要一个扎实的实现来处理证书管理和轮换，”他说。“有一些新项目正在应对这些挑战，包括 CNCF 内部的项目。我们一定会饶有兴趣地关注这些进展。我们发现 Notary 社区是一个积极友好、对变更持开放态度的社区，例如接纳了我们为持久存储添加的 CouchDB 后端。”
 &lt;/div&gt;
&lt;/section&gt;</description></item><item><title>案例研究：Nordstrom</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/nordstrom/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/nordstrom/</guid><description>&lt;!-- 
&lt;h2&gt;Challenge&lt;/h2&gt;
--&gt;
&lt;h2&gt;挑战&lt;/h2&gt;

&lt;!--
&lt;p&gt;Nordstrom wanted to increase the efficiency and speed of its technology operations, which includes the Nordstrom.com e-commerce site. At the same time, Nordstrom Technology was looking for ways to tighten its technology operational costs.&lt;/p&gt;
--&gt;
&lt;p&gt;Nordstrom 希望提高其技术运营的效率和速度，其中包括 Nordstrom.com 电子商务网站。与此同时，Nordstrom 的技术部门也在寻找压缩技术运营成本的方法。&lt;/p&gt;

&lt;!-- 
&lt;h2&gt;Solution&lt;/h2&gt;
--&gt;
&lt;h2&gt;解决方案&lt;/h2&gt;

&lt;!-- 
&lt;p&gt;After embracing a DevOps transformation and launching a continuous integration/continuous deployment (CI/CD) project four years ago, the company reduced its deployment time from three months to 30 minutes. But they wanted to go even faster across environments, so they began their cloud native journey, adopting Docker containers orchestrated with &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>案例研究：Wikimedia</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/wikimedia/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/wikimedia/</guid><description>&lt;!--
title: Wikimedia Case Study
case_study_styles: true
cid: caseStudies

new_case_study_styles: true
heading_title_text: Wikimedia
use_gradient_overlay: true
subheading: &gt;
 Using Kubernetes to Build Tools to Improve the World's Wikis
case_study_details:
 - Company: Wikimedia
 - Location: San Francisco, CA
--&gt;

&lt;!--
&lt;p&gt;The non-profit Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia. To help users maintain and use wikis, it runs Wikimedia Tool Labs, a hosting environment for community developers working on tools and bots to help editors and other volunteers do their work, including reducing vandalism. The community around Wikimedia Tool Labs began forming nearly 10 years ago.&lt;/p&gt;</description></item><item><title>案例研究：Wink</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/wink/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/wink/</guid><description>&lt;div class="banner1"&gt;
 &lt;!-- &lt;h1&gt;CASE STUDY: &lt;img src="https://andygol-k8s.netlify.app/images/wink_logo.png" width="13%" style="margin-bottom:-4px"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;Cloud-Native Infrastructure Keeps Your Smart Home Connected&lt;/div&gt;
 &lt;/h1&gt; --&gt;
 &lt;h1&gt;案例研究： &lt;img src="https://andygol-k8s.netlify.app/images/wink_logo.png" width="13%" style="margin-bottom:-4px"&gt;&lt;br&gt;
 &lt;div class="subhead"&gt;云原生基础设施让你的智能家居互联&lt;/div&gt;
 &lt;/h1&gt;

&lt;/div&gt;


&lt;!-- &lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Wink&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;New York, N.Y.&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Internet of Things Platform&lt;/b&gt;
&lt;/div&gt; --&gt;
&lt;div class="details"&gt;
 公司 &amp;nbsp;&lt;b&gt;Wink&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;位置 &amp;nbsp;&lt;b&gt;纽约，纽约州&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;行业 &amp;nbsp;&lt;b&gt;物联网平台&lt;/b&gt;
&lt;/div&gt;

&lt;hr&gt;

&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;

 &lt;h2&gt;挑战&lt;/h2&gt;
 &lt;!-- Building a low-latency, highly reliable infrastructure to serve communications between millions of connected smart-home devices and the company’s consumer hubs and mobile app, with an emphasis on horizontal scalability, the ability to encrypt everything quickly and connections that could be easily brought back up if anything went wrong. --&gt;
构建低延迟、高可靠的基础设施，为数百万联网智能家居设备与公司的消费者中枢及移动应用之间的通信提供服务，重点是水平可扩展性、快速加密所有内容的能力，以及在出现问题时能够轻松恢复的连接。
 &lt;br&gt;&lt;br&gt;
 &lt;h2&gt;解决方案&lt;/h2&gt;
 &lt;!-- Across-the-board use of a Kubernetes-Docker-CoreOS Container Linux stack.&lt;br&gt;&lt;br&gt; --&gt;
全面使用 Kubernetes-Docker-CoreOS Container Linux 技术栈。&lt;br&gt;&lt;br&gt;

 &lt;/div&gt;

 &lt;div class="col2"&gt;
 &lt;h2&gt;影响&lt;/h2&gt;
 &lt;!-- "Two of the biggest American retailers [Home Depot and Walmart] are carrying and promoting the brand and the hardware,” Wink Head of Engineering Kit Klein says proudly – though he adds that "it really comes with a lot of pressure. It’s not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses.” And that’s further testament to how much faith Klein has in the infrastructure that the Wink team has built. With 80 percent of Wink’s workload running on a unified stack of Kubernetes-Docker-CoreOS, the company has put itself in a position to continually innovate and improve its products and services. Committing to this technology, says Klein, "makes building on top of the infrastructure relatively&amp;nbsp;easy.” --&gt;
“美国最大的两家零售商 [Home Depot 和 Walmart] 正在销售和推广 Wink 的品牌和硬件，”Wink 工程主管 Kit Klein 自豪地说，不过他补充道，“这确实带来了很大的压力。这不是一个满是技术爱好者的零售场景。这些是普通人，他们想要的是能正常工作的东西，不会容忍技术上的借口。”这进一步证明了 Klein 对 Wink 团队所构建的基础设施有多大的信心。由于 Wink 80% 的工作负载都运行在 Kubernetes-Docker-CoreOS 统一技术栈上，公司得以不断创新并改进其产品和服务。Klein 说，押注这项技术“使得在这套基础设施之上进行构建相对容易”。
 &lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;


&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
 &lt;!-- "It’s not proprietary, it’s totally open, it’s really portable. You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That’s the benefit of having everything unified on one open source Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro/machine image to validate. The benefits are enormous because you save money, and you save time.”&lt;br&gt;&lt;br&gt;&lt;span style="font-size:15px;letter-spacing:0.08em"&gt;- KIT KLEIN, HEAD OF ENGINEERING, WINK&lt;/span&gt; --&gt;
“它不是专有的，而是完全开放、高度可移植的。你可以在不同的云提供商上运行所有工作负载，可以轻松运行混合 AWS，甚至接入你自己的数据中心。这就是把一切统一在一个开源 Kubernetes-Docker-CoreOS Container Linux 技术栈上的好处。如果你只需要验证一个 Linux 发行版/机器镜像，安全上的收益是巨大的。好处是巨大的，因为既省钱又省时间。”&lt;br&gt;&lt;br&gt;&lt;span style="font-size:15px;letter-spacing:0.08em"&gt;- KIT KLEIN, WINK 工程主管&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;


&lt;section class="section2"&gt;
 &lt;div class="fullcol"&gt;
 &lt;!-- &lt;h2&gt;How many people does it take to turn on a light bulb?&lt;/h2&gt; --&gt;
&lt;h2&gt;打开一个灯泡需要多少人？&lt;/h2&gt;

 &lt;!-- Kit Klein whips out his phone to demonstrate. With a few swipes, the head of engineering at Wink pulls up the smart-home app created by the New York City-based company and taps the light button. "Honestly when you’re holding the phone and you’re hitting the light,” he says, "by the time you feel the pressure of your finger on the screen, it’s on. It takes as long as the signal to travel to your brain.”&lt;br&gt;&lt;br&gt; --&gt;
Kit Klein 掏出手机进行演示。只需轻扫几下，这位 Wink（一家总部位于纽约市的公司）的工程主管就调出了该公司开发的智能家居应用，并轻触灯光按钮。“老实说，当你拿着手机去点亮那盏灯时，”他说，“在你感觉到手指按在屏幕上的那一刻，灯已经亮了。这个过程只相当于信号传到你大脑所需的时间。”&lt;br&gt;&lt;br&gt;
 &lt;!-- Sure, it takes just one finger and less than 200 milliseconds to turn on the light – or lock a door or change a thermostat. But what allows Wink to help consumers manage their connected smart-home products with such speed and ease is a sophisticated, cloud native infrastructure that Klein and his team built and continue to develop using a unified stack of CoreOS, the open-source operating system designed for clustered deployments, and Kubernetes, an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. "When you have a big, complex network of interdependent microservices that need to be able to discover each other, and need to be horizontally scalable and tolerant to failure, that’s what this is really optimized for,” says Klein. "A lot of people end up relying on proprietary services [offered by some big cloud providers] to do some of this stuff, but what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate.”&lt;br&gt;&lt;br&gt; --&gt;
当然，打开一盏灯，或者锁门、调节恒温器，只需一根手指、不到 200 毫秒。但让 Wink 能够帮助消费者如此快速轻松地管理其联网智能家居产品的，是 Klein 和他的团队构建并持续开发的一套复杂的云原生基础设施。这套基础设施使用统一的技术栈：CoreOS，一个专为集群部署设计的开源操作系统；以及 Kubernetes，一个用于在主机集群间自动部署、扩展和运维应用容器、提供以容器为中心的基础设施的开源平台。Klein 说：“当你有一个庞大而复杂、相互依赖的微服务网络，这些服务需要能够发现彼此，并且需要可水平扩展和容错时，这正是这套体系真正为之优化的场景。”“很多人最终依赖 [一些大型云提供商提供的] 专有服务来做其中的一些事情，但采用 CoreOS/Kubernetes 给你带来的是可移植性，不会被任何人锁定。你真的可以掌握自己的命运。”&lt;br&gt;&lt;br&gt;
 &lt;!-- Indeed, Wink did. The company’s mission statement is to make the connected home accessible – that is, user-friendly for non-technical owners, affordable and perhaps most importantly, reliable. "If you can’t trust that when you hit the switch, you know a light is going to go on, or if you’re remote and you’re checking on your house and that information isn’t accurate, then the convenience of the system is lost,” says Klein. "So that’s where the infrastructure comes in.”&lt;br&gt;&lt;br&gt; --&gt;
事实上，Wink 做到了。公司的使命是让互联家居触手可及：对非技术用户友好、价格合理，而且也许最重要的是，可靠。Klein 说：“如果你不能确信按下开关灯就会亮，或者你在远程查看你的房子时得到的信息不准确，那么这个系统的便利性就荡然无存了。”“这就是基础设施发挥作用的地方。”&lt;br&gt;&lt;br&gt;
 &lt;!-- Wink was incubated within Quirky, a company that developed crowd-sourced inventions. The Wink app was first introduced in 2013, and at the time, it controlled only a few consumer products such as the PivotPower Strip that Quirky produced in collaboration with GE. As smart-home products proliferated, Wink was launched in 2014 in Home Depot stores nationwide. Its first project: a hub that could integrate with smart products from about a dozen brands like Honeywell and Chamberlain. The biggest challenge would be to build the infrastructure to serve all those communications between the hub and the products, with a focus on maximizing reliability and minimizing latency.&lt;br&gt;&lt;br&gt; --&gt;
Wink 孵化自 Quirky，一家开发众包发明的公司。Wink 应用于 2013 年首次推出，当时它只能控制少数消费产品，例如 Quirky 与 GE 合作生产的 PivotPower 插线板。随着智能家居产品的激增，Wink 于 2014 年进入全美的 Home Depot 门店。它的第一个项目：一个能与 Honeywell 和 Chamberlain 等十几个品牌的智能产品集成的中枢（Hub）。最大的挑战是构建基础设施来承载中枢与产品之间的所有通信，重点是最大限度地提高可靠性并最小化延迟。&lt;br&gt;&lt;br&gt;
 &lt;!-- "When we originally started out, we were moving very fast trying to get the first product to market, the minimum viable product,” says Klein. "Lots of times you go down a path and end up having to backtrack and try different things. But in this particular case, we did a lot of the work up front, which led to us making a really sound decision to deploy it on CoreOS Container Linux. And that was very early in the life of it.” --&gt;
Klein 说：“我们最初起步时，行动非常快，想把第一个产品、也就是最小可行产品推向市场。”“很多时候，你走上一条路，最终不得不回头尝试别的方案。但在这件事上，我们预先做了大量工作，从而做出了一个非常明智的决定：把它部署在 CoreOS Container Linux 上。而且这是在项目很早的阶段。”

 &lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 &lt;!-- "...what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate.” --&gt;
“...通过 CoreOS/Kubernetes 带来的可移植性，你可以不依赖于任何人。你真的可以决定自己的命运。”
 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section3"&gt;
 &lt;div class="fullcol"&gt;
 &lt;!-- Concern number one: Wink’s products need to connect to consumer devices in people’s homes, behind a firewall. "You don’t have an end point like a URL, and you don’t even know what ports are open behind that firewall,” Klein explains. "So you essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it’s really, really important that it’s persistent because you want to decrease as much as possible the overhead of sending a message – you never know when someone is going to turn on the lights.”&lt;br&gt;&lt;br&gt; --&gt;
关注点一：Wink 的产品需要连接到人们家中防火墙后面的消费设备。Klein 解释道：“你没有 URL 这样的端点，甚至不知道防火墙后面打开了哪些端口。”“因此，你基本上需要让这个设备醒来并与你的系统通信，然后在云和设备之间打开实时、双向的通信。连接的持久性真的非常重要，因为你希望尽可能减少发送消息的开销，你永远不知道什么时候有人会开灯。”&lt;br&gt;&lt;br&gt;
 &lt;!-- With the earliest version of the Wink Hub, when you decided to turn your lights on or off, the request would be sent to the cloud and then executed. Subsequent updates to Wink’s software enabled local control, cutting latency down to about 10 milliseconds for many devices. But with the need for cloud-enabled integrations of an ever-growing ecosystem of smart home products, low-latency internet connectivity is still a critical consideration. --&gt;
使用 Wink Hub 的最早版本，当你决定打开或关闭灯光时，请求将发送到云，然后执行。Wink 软件的后续更新启用了本地控制，将许多设备的延迟缩短到大约 10 毫秒。但是，由于需要对不断增长的智能家居产品生态系统进行云端集成，低延迟的互联网连接仍然是一个关键的考虑因素。
 &lt;br&gt;&lt;br&gt;
 &lt;!-- &lt;h2&gt;"You essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it’s really, really important that it’s persistent...you never know when someone is going to turn on the&amp;nbsp;lights.”&lt;/h2&gt; --&gt;
&lt;h2&gt;“你基本上需要唤醒此设备，然后与你的系统通信，然后在云和设备之间打开实时、双向通信。持续的连接真的非常重要，你永远不知道什么时候有人会开灯。”&lt;/h2&gt;
 &lt;!-- In addition, Wink had other requirements: horizontal scalability, the ability to encrypt everything quickly, connections that could be easily brought back up if something went wrong. "Looking at this whole structure we started, we decided to make a secure socket-based service,” says Klein. "We’ve always used, I would say, some sort of clustering technology to deploy our services and so the decision we came to was, this thing is going to be containerized, running on Docker.”&lt;br&gt;&lt;br&gt; --&gt;
此外，Wink 还有其他要求：水平可扩展性、快速加密所有内容的能力、在出现问题时可以轻松恢复的连接。Klein 说：“纵观我们着手构建的整个架构，我们决定打造一个安全的、基于套接字的服务。”“我们一直使用某种集群技术来部署我们的服务，因此我们最终的决定是：这个东西将被容器化，运行在 Docker 上。”&lt;br&gt;&lt;br&gt;
 &lt;!-- At the time – just over two years ago – Docker wasn’t yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn’t really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads.”&lt;br&gt;&lt;br&gt; --&gt;
当时，就在两年多前，Docker 还没有被广泛使用，但是正如 Klein 指出的，“技术前沿的人们当然理解它。我们开始研究现有的潜在技术。限制因素之一是我们需要部署多端口非 http/https 服务。它并不真正适合某些早期的集群技术。我们非常喜欢这个项目，最后在其它东西上使用了一段时间，但最初它过于针对 http 工作负载。”&lt;br&gt;&lt;br&gt;
 &lt;!-- Once Wink’s backend engineering team decided on a Dockerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can’t just start the containers and hope everything goes well,” Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure.” --&gt;
Wink 的后端工程团队决定采用 Docker 化的工作负载后，就必须就操作系统和容器编排平台做出决定。Klein 笑着说：“很明显，你不能只是把容器启动起来，然后指望一切顺利。”“你需要一个有用的系统来管理工作负载被分发到哪里。当容器不可避免地挂掉或出现类似情况时，需要重新启动它；你还得有一个负载均衡器。要拥有健壮的基础设施，需要做各种日常维护工作。”

 &lt;/div&gt;
&lt;/section&gt;

 &lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
 &lt;!-- "Obviously you can’t just start the containers and hope everything goes well,” Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure.” --&gt;
Klein 笑着说：“很明显，你不能只是把容器启动起来，然后指望一切顺利。”“你需要一个有用的系统来管理工作负载被分发到哪里。当容器不可避免地挂掉或出现类似情况时，需要重新启动它；你还得有一个负载均衡器。要拥有健壮的基础设施，需要做各种日常维护工作。”
 &lt;/div&gt;
 &lt;/div&gt;

&lt;section class="section4"&gt;
 &lt;div class="fullcol"&gt;
 &lt;!-- Wink considered building directly on a general purpose Linux distro like Ubuntu (which would have required installing tools to run a containerized workload) and cluster management systems like Mesos (which was targeted toward enterprises with larger teams/workloads), but ultimately set their sights on CoreOS Container Linux. "A container-optimized Linux distribution system was exactly what we needed,” he says. "We didn’t have to futz around with trying to take something like a Linux distro and install everything. It’s got a built-in container orchestration system, which is Fleet, and an easy-to-use API. It’s not as feature-rich as some of the heavier solutions, but we realized that, at that moment, it was exactly what we needed.”&lt;br&gt;&lt;br&gt; --&gt;
Wink 考虑过直接基于 Ubuntu 这样的通用 Linux 发行版（这需要安装工具来运行容器化工作负载）以及 Mesos 这样的集群管理系统（面向拥有较大团队/工作负载的企业）来构建，但最终将目光投向了 CoreOS Container Linux。“一个容器优化的 Linux 发行版正是我们需要的，”他说。“我们不必为了安装所有内容而去折腾某个普通的 Linux 发行版。它有一个内置的容器编排系统 Fleet，和一个易于使用的 API。它不像一些较重的解决方案那样功能丰富，但我们意识到，在那一刻，这正是我们需要的。”&lt;br&gt;&lt;br&gt;
 &lt;!-- Wink’s hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the Dockerized CoreOS deployment. Since then, they’ve moved almost every other piece of their infrastructure – from third-party cloud-to-cloud integrations to their customer service and payment portals – onto CoreOS Container Linux clusters. &lt;br&gt;&lt;br&gt; --&gt;
Wink 的中枢（以及经过改进的应用程序）于 2014 年 7 月推出，起初只做短期部署；在第一个月内，他们就将服务迁移到了 Docker 化的 CoreOS 部署上。自那时以来，他们几乎将基础设施的其他所有部分（从第三方云到云集成，到客户服务和支付门户）都迁移到了 CoreOS Container Linux 集群上。&lt;br&gt;&lt;br&gt;
 &lt;!-- Using this setup did require some customization. "Fleet is really nice as a basic container orchestration system, but it doesn’t take care of routing, sharing configurations, secrets, et cetera, among instances of a service,” Klein says. "All of those layers of functionality can be implemented, of course, but if you don’t want to spend a lot of time writing unit files manually – which of course nobody does – you need to create a tool to automate some of that, which we did.”&lt;br&gt;&lt;br&gt; --&gt;
使用此设置确实需要一些定制。Klein 说：“Fleet 作为一个基本的容器编排系统确实很好，但它不负责服务实例之间的路由、配置共享、Secret 等。当然，所有这些功能层都可以自行实现，但如果你不想花费大量时间手动编写单元文件（当然没有人想这样做），就需要创建一个工具来自动完成其中部分工作，我们就是这样做的。”&lt;br&gt;&lt;br&gt;
 &lt;!-- Wink quickly embraced the Kubernetes container cluster manager when it was launched in 2015 and integrated with CoreOS core technology, and as promised, it ended up providing the features Wink wanted and had planned to build. "If not for Kubernetes, we likely would have taken the logic and library we implemented for the automation tool that we created, and would have used it in a higher level abstraction and tool that could be used by non-DevOps engineers from the command line to create and manage clusters,” Klein says. "But Kubernetes made that totally unnecessary – and is written and maintained by people with a lot more experience in cluster management than us, so all the better.” Now, an estimated 80 percent of Wink’s workload is run on Kubernetes on top of CoreOS Container Linux. --&gt;
当 Kubernetes 容器集群管理器于 2015 年推出并与 CoreOS 核心技术集成时，Wink 很快就接受了它。正如所承诺的那样，它最终提供了 Wink 想要且原本计划自行构建的功能。“如果不是 Kubernetes，我们很可能会把为自研自动化工具实现的逻辑和库拿过来，用在一个更高级别的抽象和工具中，让非 DevOps 工程师也可以通过命令行来创建和管理集群，”Klein 说。“但 Kubernetes 使这一切变得完全没有必要，而且它由在集群管理方面比我们经验丰富得多的人编写和维护，那就更好了。”现在，Wink 估计有 80% 的工作负载运行在 CoreOS Container Linux 之上的 Kubernetes 上。

 &lt;/div&gt;
&lt;/section&gt;

 &lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 &lt;!-- "Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it.” --&gt;
“同发展保持同步。了解决策原因。如果你了解项目背后的意图，从技术意图到某种哲学意图，那么它可以帮助你了解如何与这些系统和谐地构建系统，而不是试图与之对抗。”
 &lt;/div&gt;
 &lt;/div&gt;

&lt;section class="section5"&gt;
 &lt;div class="fullcol"&gt;
 &lt;!-- Wink’s reasons for going all in are clear: "It’s not proprietary, it’s totally open, it’s really portable,” Klein says. "You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That’s the benefit of having everything unified on one Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro to try to validate. The benefits are enormous because you save money, you save time.”&lt;br&gt;&lt;br&gt; --&gt;
Wink 全面投入的理由很明确：“它不是专有的，它是完全开放的，它真的很可移植，”Klein 说。“你可以跨不同的云提供商运行所有工作负载。你可以轻松地运行混合 AWS，甚至引入你自己的数据中心。这就是将所有内容统一在一个 Kubernetes-Docker-CoreOS Container Linux 技术栈上的好处。如果你只需要验证一个 Linux 发行版，就会有巨大的安全优势。好处是巨大的，因为既省钱又省时间。”
 &lt;!-- Klein concedes that there are tradeoffs in every technology decision. "Cutting-edge technology is going to be scary for some people,” he says. "In order to take advantage of this, you really have to keep up with the technology. You can’t treat it like it’s a black box. Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it.”&lt;br&gt;&lt;br&gt; --&gt;
Klein 承认，每个技术决策都有权衡。他表示：“对一些人来说，尖端技术将非常可怕。”“为了利用这一优势，你确实必须跟上技术。你不能把它当作一个黑盒子。同发展保持同步。了解决策原因。如果你了解项目背后的意图，从技术意图到某种哲学意图，那么它可以帮助你了解如何与这些系统和谐地构建系统，而不是试图与之对抗。”
 &lt;!-- Wink, which was acquired by Flex in 2015, now controls 2.3 million connected devices in households all over the country. What’s next for the company? A new version of the hub - Wink Hub 2 - hit shelves last November – and is being offered for the first time at Walmart stores in addition to Home Depot. "Two of the biggest American retailers are carrying and promoting the brand and the hardware,” Klein says proudly – though he adds that "it really comes with a lot of pressure. It’s not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses.” And that’s further testament to how much faith Klein has in the infrastructure that the Wink team has have built.&lt;br&gt;&lt;br&gt; --&gt;
Wink 于 2015 年被 Flex 收购，目前控制着全国 230 万台联网设备。公司下一步怎么办？新版中枢 Wink Hub 2 已于去年 11 月上架，除 Home Depot 外，还首次在 Walmart 商店发售。Klein 自豪地说：“美国最大的两家零售商都在销售和推广这个品牌和硬件。”不过他补充道：“这确实带来了很大的压力。这不是那种面对大量技术爱好者的零售场景。这些是普通消费者，他们想要能用的产品，不会容忍技术上的借口。”这进一步证明了 Klein 对 Wink 团队所构建的基础设施有多大的信心。&lt;br&gt;&lt;br&gt;
 &lt;!-- Wink’s engineering team has grown exponentially since its early days, and behind the scenes, Klein is most excited about the machine learning Wink is using. "We built [a system of] containerized small sections of the data pipeline that feed each other and can have multiple outputs,” he says. "It’s like data pipelines as microservices.” Again, Klein points to having a unified stack running on CoreOS Container Linux and Kubernetes as the primary driver for the innovations to come. "You’re not reinventing the wheel every time,” he says. "You can just get down to work.” --&gt;
Wink 的工程团队自早期以来呈指数级增长，而在幕后，Klein 最兴奋的是 Wink 正在使用的机器学习。“我们构建了一个由数据管道的容器化小段组成的系统，这些小段相互馈送数据，并且可以有多个输出，”他说。“这就像把数据管道做成微服务。”同样，Klein 指出，在 CoreOS Container Linux 和 Kubernetes 上运行统一的技术栈是未来创新的主要驱动力。“你不用每次都重新发明轮子，”他说。“你可以直接开始干活。”
&lt;/div&gt;
&lt;/section&gt;</description></item><item><title>案例研究：Workiva</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/workiva/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/workiva/</guid><description>&lt;div class="banner1"&gt;
 &lt;!-- &lt;h1&gt; CASE STUDY:&lt;img src="https://andygol-k8s.netlify.app/images/workiva_logo.png" style="margin-bottom:0%" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;Using OpenTracing to Help Pinpoint the Bottlenecks

&lt;/div&gt;&lt;/h1&gt; --&gt;
 &lt;h1&gt; 案例研究：&lt;img src="https://andygol-k8s.netlify.app/images/workiva_logo.png" style="margin-bottom:0%" class="header_logo"&gt;&lt;br&gt; &lt;div class="subhead"&gt;使用 OpenTracing 帮助查找瓶颈

&lt;/div&gt;&lt;/h1&gt;


&lt;/div&gt;

&lt;!-- &lt;div class="details"&gt;
 Company &amp;nbsp;&lt;b&gt;Workiva&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Location &amp;nbsp;&lt;b&gt;Ames, Iowa&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Industry &amp;nbsp;&lt;b&gt;Enterprise Software&lt;/b&gt;
&lt;/div&gt; --&gt;
&lt;div class="details"&gt;
 公司 &amp;nbsp;&lt;b&gt;Workiva&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;地点 &amp;nbsp;&lt;b&gt;艾姆斯，爱荷华州&lt;/b&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;行业 &amp;nbsp;&lt;b&gt;企业软件&lt;/b&gt;
&lt;/div&gt;


&lt;hr&gt;
&lt;section class="section1"&gt;
&lt;div class="cols"&gt;
 &lt;div class="col1"&gt;
 &lt;h2&gt;挑战&lt;/h2&gt;
 &lt;!-- &lt;a href="https://www.workiva.com/"&gt;Workiva&lt;/a&gt; offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company’s first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva’s existing system, Broad’s team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn’t impact overall speed. --&gt;
&lt;a href="https://www.workiva.com/"&gt;Workiva&lt;/a&gt;提供了一个基于云的平台，用于管理和报告业务数据。此 SaaS 产品 Wdesk 被 70% 以上的财富 500 强企业使用。随着公司从单体系统向更分散的基于微服务的系统转变，“我们有许多人在研究这个问题，他们都在不同的团队中，所以我们需要确定问题是什么，瓶颈在哪里，”高级软件架构师 MacLeod Broad 说。随着后端代码在 Google App Engine，Google Compute Engine，以及 Amazon Web Services 上运行，Workiva 需要一个还未被开发的跟踪系统。在准备公司首批使用 AWS 的产品时，Broad 的团队将新应用程序中构建的电子表格数据与 Workiva 现有系统上的旧应用程序中创建的文档相关联，该功能涉及“同步和链接”功能，从而发现了一个理想的跟踪用例：存在循环依赖关系，而优化通常证明是不影响整体速度的微优化。

&lt;br&gt;

&lt;/div&gt;

&lt;div class="col2"&gt;
 &lt;h2&gt;解决方案&lt;/h2&gt;
 &lt;!-- Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks. --&gt;
Broad 的团队推出了与平台无关的分布式跟踪系统 OpenTracing，帮助他们找出瓶颈。
&lt;br&gt;
&lt;h2&gt;影响&lt;/h2&gt;
 &lt;!-- Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." --&gt;
OpenTracing 现已在整个公司使用，并产生了立竿见影的效果。软件工程师 Michael Davis 报告说：“跟踪让我们立即获得了可据以行动的洞察，知道如何改进我们的服务。通过查看每个调用把时间花在哪里，以及哪些调用最常被使用，我们在一次修复中就将平均响应时间减少了 95%（从 600ms 降到 30ms）。”

&lt;/div&gt;

&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner2"&gt;
 &lt;div class="banner2text"&gt;
&lt;!-- "With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code." &lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— MacLeod Broad, Senior Software Architect at Workiva&lt;/span&gt; --&gt;
“使用 OpenTracing，我的团队能够查看一条跟踪，并向另一个团队提出优化建议，而无需查看他们的代码。”&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— MacLeod Broad, Workiva 高级软件架构师&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section2"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- &lt;h2&gt;Last fall, MacLeod Broad’s platform team at Workiva was prepping one of the company’s first products utilizing &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services&lt;/a&gt; when they ran into a roadblock.&lt;/h2&gt; --&gt;
&lt;h2&gt;去年秋天，MacLeod Broad 在 Workiva 的平台团队在准备公司首批使用 &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services&lt;/a&gt; 的产品之一时遇到了障碍。&lt;/h2&gt;
 &lt;!-- Early on, Workiva’s backend had run mostly on &lt;a href="https://cloud.google.com/appengine/"&gt;Google App Engine&lt;/a&gt;. But things changed along the way as Workiva’s SaaS offering, &lt;a href="https://www.workiva.com/wdesk"&gt;Wdesk&lt;/a&gt;, a cloud-based platform for managing and reporting business data, grew its customer base to more than 70 percent of the Fortune 500 companies. "As customer needs grew and the product offering expanded, we started to leverage a wider offering of services such as Amazon Web Services as well as other Google Cloud Platform services, creating a multi-vendor environment."&lt;br&gt;&lt;br&gt; --&gt;
早期，Workiva 的后端主要运行在&lt;a href="https://cloud.google.com/appengine/"&gt;Google App Engine&lt;/a&gt;上。但随着 Workiva 的 SaaS 产品，&lt;a href="https://www.workiva.com/wdesk"&gt;Wdesk&lt;/a&gt;，一种基于云的管理和报告业务数据平台，将客户群增长到财富 500 强公司的 70% 以上，情况发生了变化。“随着客户需求的增长和产品供应的扩大，我们开始利用更广泛的服务，如 Amazon Web Services 以及其他 Google Cloud Platform 服务，从而创建一个多供应商环境。”

&lt;!-- With this new product, there was a "sync and link" feature by which data "went through a whole host of services starting with the new spreadsheet system [&lt;a href="https://aws.amazon.com/rds/aurora/"&gt;Amazon Aurora&lt;/a&gt;] into what we called our linking system, and then pushed through http to our existing system, and then a number of calculations would go on, and the results would be transmitted back into the new system," says Broad. "We were trying to optimize that for speed. We thought we had made this great optimization and then it would turn out to be a micro optimization, which didn’t really affect the overall speed of things." &lt;br&gt;&lt;br&gt; --&gt;
有了这个新产品，就有了一个“同步和链接”功能，通过它，数据“经历了一大堆服务，从新的电子表格系统[&lt;a href="https://aws.amazon.com/rds/aurora/"&gt;Amazon Aurora&lt;/a&gt;]进入我们所谓的链接系统，然后通过 http 推送到我们现有的系统，接着进行一系列计算，结果再传回新系统，”Broad 说。“我们试图针对速度进行优化。我们以为做了一个很棒的优化，结果却发现只是一个微观优化，并没有真正影响系统的整体速度。”&lt;br&gt;&lt;br&gt;
&lt;!-- The challenges faced by Broad’s team may sound familiar to other companies that have also made the shift from monoliths to more distributed, microservice-based systems. "We had a number of people working on this, all on different teams, so it was difficult to get our head around what the issues were and where the bottlenecks were," says Broad.&lt;br&gt;&lt;br&gt; --&gt;
Broad 团队面临的挑战听起来可能为其他公司所熟悉，这些公司也从单体系统转向更分散的基于微服务的系统。Broad 说：“我们有许多人从事这方面的工作，他们都在不同的团队中，因此很难了解问题是什么以及瓶颈在哪里。”&lt;br&gt;&lt;br&gt;
 &lt;!-- "Each service team was going through different iterations of their architecture and it was very hard to follow what was actually going on in each teams’ system," he adds. "We had circular dependencies where we’d have three or four different service teams unsure of where the issues really were, requiring a lot of back and forth communication. So we wasted a lot of time saying, ‘What part of this is slow? Which part of this is sometimes slow depending on the use case? Which part is degrading over time? Which part of this process is asynchronous so it doesn’t really matter if it’s long-running or not? What are we doing that’s redundant, and which part of this is buggy?’" --&gt;
“每个服务团队都在经历各自的架构迭代，很难跟上每个团队的系统中实际发生的事情，”他补充道。“我们有循环依赖关系，三四个不同的服务团队都不确定问题到底出在哪里，需要大量的来回沟通。因此，我们浪费了很多时间去问：‘这里面哪一部分慢？哪一部分在某些使用场景下会变慢？哪一部分会随着时间推移而退化？这个流程的哪一部分是异步的，因此运行多久其实无关紧要？我们在做的哪些事情是多余的？其中哪一部分有缺陷？’”


&lt;/div&gt;
&lt;/section&gt;
&lt;div class="banner3"&gt;
 &lt;div class="banner3text"&gt;
 &lt;!-- "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level. Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA&lt;/span&gt; --&gt;
 “跟踪系统可以一目了然地解释体系结构，缩小性能瓶颈的范围并精确定位，总体上能在较高层面上为调查指明方向。能够一眼做到这一点，比开会或花三天调试要快得多，也比始终找不出问题、得过且过要快得多。”&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— MACLEOD BROAD, WORKIVA 高级软件架构师&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;
&lt;section class="section3"&gt;
&lt;div class="fullcol"&gt;

&lt;!-- Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."&lt;br&gt;&lt;br&gt; --&gt;
简而言之，这是跟踪的理想用例。Broad 说：“跟踪系统可以一目了然地解释体系结构，缩小性能瓶颈的范围并精确定位，总体上能在较高层面上为调查指明方向。”“能够一眼做到这一点，比开会或花三天调试要快得多，也比始终找不出问题、得过且过要快得多。”&lt;br&gt;&lt;br&gt;
&lt;!-- With Workiva’s back-end code running on &lt;a href="https://cloud.google.com/compute/"&gt;Google Compute Engine&lt;/a&gt; as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."&lt;br&gt;&lt;br&gt; --&gt;
Workiva 的后端代码运行在 &lt;a href="https://cloud.google.com/compute/"&gt;Google Compute Engine&lt;/a&gt;、App Engine 和 AWS 上，Broad 知道他需要一个与平台无关的跟踪系统。“我们研究了不同的跟踪解决方案，”他说，“我们认为这个市场似乎还在快速演进，我们不想被某一家供应商套牢。因此，OpenTracing 似乎是在我们实际要用的后端上避免供应商锁定的最干净的方式。”&lt;br&gt;&lt;br&gt;
&lt;!-- Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva’s existing code that was slowing things down, that wasn’t exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, ‘Why is it doing all this work again?’"&lt;br&gt;&lt;br&gt; --&gt;
Broad 说，一旦他们在第一个用例中引入了 OpenTracing，“跟踪就让瓶颈所在的位置变得非常明显。”尽管每个人都以为是 Workiva 的现有代码拖慢了速度，但事实并非完全如此。Broad 说：“看起来现有代码慢，只是因为它在调用我们的下一代服务，而这些服务需要很长时间才能处理完所有这些请求。”“在瀑布图上，你可以看到每次回调时每个请求都在做完全相同的工作。因此，对于每个被分页返回的响应，每个服务请求看起来都完全相同。接下来的问题就显而易见了：‘为什么它要再做一遍所有这些工作？’”&lt;br&gt;&lt;br&gt;
&lt;!-- Using the insight OpenTracing gave them, "My team was able to look at a trace and make optimization suggestions to another team without ever looking at their code," says Broad. "The way we named our traces gave us insight whether it’s doing a SQL call or it’s making an RPC. And so it was really easy to say, ‘OK, we know that it’s going to page through all these requests. Do the work once and stuff it in cache.’ And we were done basically. All those calls became sub-second calls immediately."&lt;br&gt;&lt;br&gt; --&gt;
利用 OpenTracing 提供的洞察，“我的团队能够查看一条跟踪，并向另一个团队提出优化建议，而无需查看他们的代码，”Broad 说。“我们命名跟踪的方式让我们能看出它是在执行 SQL 调用还是在发起 RPC。因此可以非常简单地说：‘好吧，我们知道它会逐页处理所有这些请求。只做一次工作，然后放进缓存。’这样基本上就搞定了。所有这些调用立即变成了亚秒级的调用。”&lt;br&gt;&lt;br&gt;

&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner4"&gt;
 &lt;div class="banner4text"&gt;
&lt;!-- "We were looking at different tracing solutions and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to&amp;nbsp;use." &lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA&lt;/span&gt; --&gt;
“我们研究了不同的跟踪解决方案，我们认为这个市场似乎还在快速演进，不想被某一家供应商套牢。因此，OpenTracing 似乎是在我们实际要用的后端上避免供应商锁定的最干净的方式。”&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— MACLEOD BROAD, WORKIVA 高级软件架构师&lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section4"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- After the success of the first use case, everyone involved in the trial went back and fully instrumented their products. Tracing was added to a few more use cases. "We wanted to get through the initial implementation pains early without bringing the whole department along for the ride," says Broad. "Now, a lot of teams add it when they’re starting up a new service. We’re really pushing adoption now more than we were before." &lt;br&gt;&lt;br&gt; --&gt;
在第一个用例成功后，参与试用的每个人都回去为他们的产品全面添加了跟踪插桩。随后又在另外几个用例中加入了跟踪。Broad 说：“我们希望尽早熬过最初的实施阵痛，而不是让整个部门都跟着折腾。”“现在，许多团队在启动新服务时就会加上它。我们现在比以前更积极地推动采用。”&lt;br&gt;&lt;br&gt;
&lt;!-- Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." &lt;br&gt;&lt;br&gt; --&gt;
一些团队很快就被说服了。软件工程师 Michael Davis 说：“跟踪让我们立即获得了可据以行动的洞察，知道如何改进我们的 [Workspaces] 服务。通过查看每个调用把时间花在哪里，以及哪些调用最常被使用，我们在一次修复中就将平均响应时间减少了 95%（从 600ms 降到 30ms）。”&lt;br&gt;&lt;br&gt;
&lt;!-- Most of Workiva’s major products are now traced using OpenTracing, with data pushed into &lt;a href="https://cloud.google.com/stackdriver/"&gt;Google StackDriver&lt;/a&gt;. Even the products that aren’t fully traced have some components and libraries that are. &lt;br&gt;&lt;br&gt; --&gt;
Workiva 的大部分主要产品现在都使用 OpenTracing 进行跟踪，数据被推送到 &lt;a href="https://cloud.google.com/stackdriver/"&gt;Google StackDriver&lt;/a&gt;。即使是未被完整跟踪的产品，其中的一些组件和库也带有跟踪。&lt;br&gt;&lt;br&gt;
&lt;!-- Broad points out that because some of the engineers were working on App Engine and already had experience with the platform’s Appstats library for profiling performance, it didn’t take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they’re working on. Questions about passing the context around and how that’s done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they’d used other systems that didn’t require that."&lt;br&gt;&lt;br&gt; --&gt;
Broad 指出，由于一些工程师在使用 App Engine 时已经有了用该平台的 Appstats 库分析性能的经验，因此让他们习惯使用 OpenTracing 并不费劲。但其他人则有些不情愿。“我认为采用的最大障碍，是担心引入跟踪（和 StackDriver）会带来多少延迟开销，”他说。“人们也非常担心在他们正在开发的东西里添加中间件。关于如何传递上下文、具体怎么做的问题很常见。我们的许多 Go 开发人员对此没什么意见，因为他们已经在以某种形式这样做了。我们的 Java 开发人员则不太热衷，因为他们用过的其他系统不需要这样做。”&lt;br&gt;&lt;br&gt;
&lt;!-- But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing." --&gt;
但好处显然超过了这些担忧，如今，Workiva 的官方政策就是使用跟踪。
&lt;!-- In fact, Broad believes that tracing naturally fits in with Workiva’s existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it’s being created and when it’s being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing." --&gt;
事实上，Broad 认为跟踪天然契合 Workiva 现有的日志和指标系统。“这是我们在内部介绍它的方式，也是我们设计其用法的方式，”他说。“我们的跟踪数据与应用指标和日志数据通过完全相同的机制记录，并以完全相同的方式推送。因此，无论是在创建还是在记录时，我们对所有这些数据的处理方式都完全一样。我们有一个内部库，统一用于日志记录、遥测、分析和跟踪。”


&lt;/div&gt;
&lt;/section&gt;

&lt;div class="banner5"&gt;
 &lt;div class="banner5text"&gt;
 &lt;!-- "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in&amp;nbsp;a&amp;nbsp;single&amp;nbsp;fix." &lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— Michael Davis, Software Engineer, Workiva &lt;/span&gt; --&gt;
“跟踪让我们立即获得了可据以行动的洞察，知道如何改进我们的 [Workspaces] 服务。通过查看每个调用把时间花在哪里，以及哪些调用最常被使用，我们在一次修复中就将平均响应时间减少了 95%（从 600ms 降到 30ms）。”&lt;span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"&gt;&lt;br&gt;— Michael Davis, 软件工程师, Workiva &lt;/span&gt;

 &lt;/div&gt;
&lt;/div&gt;

&lt;section class="section5"&gt;
&lt;div class="fullcol"&gt;
 &lt;!-- For Workiva, OpenTracing has become an essential tool for zeroing in on optimizations and determining what’s actually a micro-optimization by observing usage patterns. "On some projects we often assume what the customer is doing, and we optimize for these crazy scale cases that we hit 1 percent of the time," says Broad. "It’s been really helpful to be able to say, ‘OK, we’re adding 100 milliseconds on every request that does X, and we only need to add that 100 milliseconds if it’s the worst of the worst case, which only happens one out of a thousand requests or one out of a million requests."&lt;br&gt;&lt;br&gt; --&gt;
对于 Workiva，OpenTracing 已成为一种重要工具，用于通过观察使用模式来精确定位优化点，并判断哪些其实只是微观优化。Broad 说：“在某些项目中，我们经常臆测客户在做什么，然后针对那些只有 1% 的时间才会遇到的疯狂规模场景进行优化。”“能够这样说非常有帮助：‘好吧，我们在每个执行 X 的请求上增加了 100 毫秒，而我们只需要在最坏情况中的最坏情况下才增加这 100 毫秒，这种情况一千个请求或一百万个请求里才出现一次。’”&lt;br&gt;&lt;br&gt;
&lt;!-- Unlike many other companies, Workiva also traces the client side. "For us, the user experience is important—it doesn’t matter if the RPC takes 100 milliseconds if it still takes 5 seconds to do the rendering to show it in the browser," says Broad. "So for us, those client times are important. We trace it to see what parts of loading take a long time. We’re in the middle of working on a definition of what is ‘loaded.’ Is it when you have it, or when it’s rendered, or when you can interact with it? Those are things we’re planning to use tracing for to keep an eye on and to better understand."&lt;br&gt;&lt;br&gt; --&gt;
与许多其他公司不同，Workiva 还跟踪客户端。Broad 说：“对我们来说，用户体验很重要。如果渲染后在浏览器中显示还需要 5 秒，那么 RPC 只用 100 毫秒也无济于事。”“因此，对我们而言，这些客户端耗时非常重要。我们跟踪它，看看加载的哪些部分需要很长时间。我们正在讨论‘加载完成’到底该如何定义：是拿到数据的时候，还是渲染出来的时候，还是可以与之交互的时候？这些都是我们计划用跟踪来持续关注和更深入理解的内容。”&lt;br&gt;&lt;br&gt;
&lt;!-- That also requires adjusting for differences in external and internal clocks. "Before time correcting, it was horrible; our traces were more misleading than anything," says Broad. "So we decided that we would return a timestamp on the response headers, and then have the client reorient its time based on that—not change its internal clock but just calculate the offset on the response time to when the client got it. And if you end up in an impossible situation where a client RPC spans 210 milliseconds but the time on the response time is outside of that window, then we have to reorient that."&lt;br&gt;&lt;br&gt; --&gt;
这还需要针对外部时钟和内部时钟的差异进行调整。Broad 说：“在做时间校正之前，情况非常糟糕；我们的跟踪数据比任何东西都更具误导性。”“因此，我们决定在响应头中返回一个时间戳，让客户端据此重新校准自己的时间：不是更改其内部时钟，而只是根据客户端收到响应的时刻计算响应时间的偏移量。如果最终出现不可能的情况，比如客户端的 RPC 调用跨度为 210 毫秒，而响应上的时间却落在这个窗口之外，我们就必须重新校正。”&lt;br&gt;&lt;br&gt;
&lt;!-- Broad is excited about the impact OpenTracing has already had on the company, and is also looking ahead to what else the technology can enable. One possibility is using tracing to update documentation in real time. "Keeping documentation up to date with reality is a big challenge," he says. "Say, we just ran a trace simulation or we just ran a smoke test on this new deploy, and the architecture doesn’t match the documentation. We can find whose responsibility it is and let them know and have them update it. That’s one of the places I’d like to get in the future with tracing." --&gt;
Broad 对 OpenTracing 已经给公司带来的影响感到兴奋，同时也在展望这项技术还能实现什么。一种可能性是利用跟踪实时更新文档。他表示：“让文档与现实保持同步是一项重大挑战。比如，我们刚刚运行了一次跟踪模拟，或者刚刚对这次新部署做了冒烟测试，结果发现架构与文档不匹配。我们可以找到这是谁的职责，通知他们去更新。这是我希望将来利用跟踪实现的方向之一。”

&lt;/div&gt;

&lt;/section&gt;</description></item><item><title>案例研究：Zalando</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/zalando/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/zalando/</guid><description>&lt;!-- 
title: Zalando Case Study
case_study_styles: true
cid: caseStudies

new_case_study_styles: true
heading_background: /images/case-studies/zalando/banner1.jpg
heading_title_logo: /images/zalando_logo.png
subheading: &gt;
 Europe's Leading Online Fashion Platform Gets Radical with Cloud Native
case_study_details:
 - Company: Zalando
 - Location: Berlin, Germany
 - Industry: Online Fashion
--&gt;
&lt;!-- 
&lt;h2&gt;Challenge&lt;/h2&gt;
--&gt;
&lt;h2&gt;挑战&lt;/h2&gt;

&lt;!-- 
&lt;p&gt;Zalando, Europe's leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a &lt;a href="https://jobs.zalando.com/tech/blog/radical-agility-study-notes/"&gt;radical transformation&lt;/a&gt; resulting in autonomous self-organizing teams. This change requires an infrastructure that could scale with the growth of the engineering organization. Zalando's technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasn't immediately considered, as teams migrated to &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services&lt;/a&gt; (AWS): "We saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Henning Jacobs, Head of Developer Productivity. "There's still too much operational overhead for the teams and compliance. " To provide better support, cluster management was brought into play.&lt;/p&gt;</description></item><item><title>版本偏差策略</title><link>https://andygol-k8s.netlify.app/zh-cn/releases/version-skew-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/releases/version-skew-policy/</guid><description>&lt;!-- 
reviewers:
- sig-api-machinery
- sig-architecture
- sig-cli
- sig-cluster-lifecycle
- sig-node
- sig-release
title: Version Skew Policy
type: docs
description: &gt;
 The maximum version skew supported between various Kubernetes components.
--&gt;
&lt;!-- overview --&gt;
&lt;!-- 
This document describes the maximum version skew supported between various Kubernetes components.
Specific cluster deployment tools may place additional restrictions on version skew.
--&gt;
&lt;p&gt;本文档描述了 Kubernetes 各个组件之间所支持的最大版本偏差。
特定的集群部署工具可能会对版本偏差添加额外的限制。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!-- 
## Supported versions

Kubernetes versions are expressed as **x.y.z**, where **x** is the major version,
**y** is the minor version, and **z** is the patch version, following
[Semantic Versioning](https://semver.org/) terminology. For more information, see
[Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning).

The Kubernetes project maintains release branches for the most recent three minor releases
(1.35, 1.34, 1.33).
Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period).
Kubernetes 1.18 and older received approximately 9 months of patch support.
--&gt;
&lt;h2 id="supported-versions"&gt;支持的版本&lt;/h2&gt;
&lt;p&gt;Kubernetes 版本以 &lt;strong&gt;x.y.z&lt;/strong&gt; 表示，其中 &lt;strong&gt;x&lt;/strong&gt; 是主要版本，
&lt;strong&gt;y&lt;/strong&gt; 是次要版本，&lt;strong&gt;z&lt;/strong&gt; 是补丁版本，遵循&lt;a href="https://semver.org/"&gt;语义版本控制&lt;/a&gt;术语。
更多信息请参见
&lt;a href="https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning"&gt;Kubernetes 版本发布控制&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>变更性准入策略</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/mutating-admission-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/mutating-admission-policy/</guid><description>&lt;!--
reviewers:
- deads2k
- sttts
- cici37
title: Mutating Admission Policy
content_type: concept
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.34 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- due to feature gate history, use manual version specification here --&gt;
&lt;!--
This page provides an overview of _MutatingAdmissionPolicies_.
--&gt;
&lt;p&gt;本页概要介绍 &lt;strong&gt;MutatingAdmissionPolicy（变更性准入策略）&lt;/strong&gt;。&lt;/p&gt;
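作为示意，下面给出一个最小的 MutatingAdmissionPolicy 草案（仅为假设性示例，具体字段名与表达式语法请以正式 API 参考为准）：

```yaml
# 假设性示例：在创建 Pod 时通过 ApplyConfiguration 补丁添加一个标签
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: "add-env-label.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"environment": "demo"}
          }
        }
```

注意，策略本身并不会自动生效：通常还需要创建一个对应的 MutatingAdmissionPolicyBinding 将其绑定到集群。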
&lt;!--
MutatingAdmissionPolicies allow you to change what happens when someone writes a change to the Kubernetes API.
If you want to use declarative policies just to prevent a particular kind of change to resources (for example: protecting platform namespaces from deletion),
[ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/)
is
a simpler and more effective alternative.

To use the feature, enable the `MutatingAdmissionPolicy` feature gate (which is off by default) and set `--runtime-config=admissionregistration.k8s.io/v1beta1=true` on the kube-apiserver.
--&gt;
&lt;p&gt;MutatingAdmissionPolicies 允许你在有人向 Kubernetes API 写入变更时修改发生的操作。
如果你只想使用声明式策略来阻止对资源的某种更改（例如：保护平台命名空间不被删除），
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/"&gt;ValidatingAdmissionPolicy&lt;/a&gt;
是更简单且更有效的替代方案。&lt;/p&gt;</description></item><item><title>补丁版本</title><link>https://andygol-k8s.netlify.app/zh-cn/releases/patch-releases/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/releases/patch-releases/</guid><description>&lt;!--
title: Patch Releases
type: docs
--&gt;
&lt;!--
Schedule and team contact information for Kubernetes patch releases.

For general information about Kubernetes release cycle, see the
[release process description].
--&gt;
&lt;p&gt;Kubernetes 补丁版本的发布时间表和团队联系信息。&lt;/p&gt;
&lt;p&gt;有关 Kubernetes 发布周期的常规信息，请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/releases/release"&gt;发布流程说明&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Cadence

Our typical patch release cadence is monthly. It is
commonly a bit faster (1 to 2 weeks) for the earliest patch releases
after a 1.X minor release. Critical bug fixes may cause a more
immediate release outside of the normal cadence. We also aim to not make
releases during major holiday periods.
--&gt;
&lt;h2 id="cadence"&gt;节奏&lt;/h2&gt;
&lt;p&gt;我们的补丁发布节奏通常是每月一次。
在 1.X 次要版本发布之后，最早的几个补丁版本的发布间隔通常会更短一些（1 到 2 周）。
严重错误修复可能会导致在正常节奏之外更快地发布。
我们也尽量避免在重要的节假日期间发布。&lt;/p&gt;
title: Docs smoke test page
main_menu: false
--&gt;
&lt;!--
This page serves two purposes:

- Demonstrate how the Kubernetes documentation uses Markdown
- Provide a "smoke test" document we can use to test HTML, CSS, and template
 changes that affect the overall documentation.
--&gt;
&lt;p&gt;本页面服务于两个目的：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;展示 Kubernetes 中文版文档中应如何使用 Markdown&lt;/li&gt;
&lt;li&gt;提供一个测试用文档，用来测试可能影响所有文档的 HTML、CSS 和模板变更&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Heading levels

The above heading is an H2. The page title renders as an H1. The following
sections show H3 - H6.
--&gt;
&lt;h2 id="heading-levels"&gt;标题级别&lt;/h2&gt;
&lt;p&gt;上面的标题是 H2 级别。页面标题（Title）会渲染为 H1。以下各节分别展示 H3-H6
的渲染结果。&lt;/p&gt;</description></item><item><title>调度 GPU</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-gpus/scheduling-gpus/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/manage-gpus/scheduling-gpus/</guid><description>&lt;!--
reviewers:
- vishh
content_type: concept
title: Schedule GPUs
description: Configure and schedule GPUs for use as a resource by nodes in a cluster.
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!--
Kubernetes includes **stable** support for managing AMD and NVIDIA GPUs
(graphical processing units) across different nodes in your cluster, using
&lt;a class='glossary-tooltip' title='一种软件扩展，可以使 Pod 访问由特定厂商初始化或者安装的设备。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/' target='_blank' aria-label='device plugins'&gt;device plugins&lt;/a&gt;.

This page describes how users can consume GPUs, and outlines
some of the limitations in the implementation.
--&gt;
&lt;p&gt;Kubernetes 支持使用&lt;a class='glossary-tooltip' title='一种软件扩展，可以使 Pod 访问由特定厂商初始化或者安装的设备。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/' target='_blank' aria-label='设备插件'&gt;设备插件&lt;/a&gt;来跨集群中的不同节点管理
AMD 和 NVIDIA GPU（图形处理单元），目前处于&lt;strong&gt;稳定&lt;/strong&gt;状态。&lt;/p&gt;</description></item><item><title>调试运行中的 Pod</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/</guid><description>&lt;!-- 
reviewers:
- verb
- soltysh
title: Debug Running Pods
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page explains how to debug Pods running (or crashing) on a Node.
--&gt;
&lt;p&gt;本页解释如何在节点上调试运行中（或崩溃）的 Pod。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* Your &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; should already be
 scheduled and running. If your Pod is not yet running, start with [Debugging
 Pods](/docs/tasks/debug/debug-application/).
* For some of the advanced debugging steps you need to know on which Node the
 Pod is running and have shell access to run commands on that Node. You don't
 need that access to run the standard debug steps that use `kubectl`.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;你的 &lt;a class='glossary-tooltip' title='Pod 表示你的集群上一组正在运行的容器。' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/zh-cn/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; 应该已经被调度并正在运行中，
如果你的 Pod 还没有运行，请参阅&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/"&gt;调试 Pod&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>发布管理员</title><link>https://andygol-k8s.netlify.app/zh-cn/releases/release-managers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/releases/release-managers/</guid><description>&lt;!-- 
title: Release Managers
type: docs
--&gt;
&lt;!-- 
"Release Managers" is an umbrella term that encompasses the set of Kubernetes
contributors responsible for maintaining release branches and creating releases
by using the tools SIG Release provides.

The responsibilities of each role are described below.
--&gt;
&lt;p&gt;“发布管理员（Release Managers）”是一个总称，涵盖负责使用 SIG Release
提供的工具来维护发布分支和创建发行版本的一组 Kubernetes 贡献者。&lt;/p&gt;
&lt;p&gt;每个角色的职责如下所述。&lt;/p&gt;
&lt;!-- 
- [Contact](#contact)
 - [Security Embargo Policy](#security-embargo-policy)
- [Handbooks](#handbooks)
- [Release Managers](#release-managers)
 - [Becoming a Release Manager](#becoming-a-release-manager)
- [Release Manager Associates](#release-manager-associates)
 - [Becoming a Release Manager Associate](#becoming-a-release-manager-associate)
- [SIG Release Leads](#sig-release-leads)
 - [Chairs](#chairs)
 - [Technical Leads](#technical-leads)
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#contact"&gt;联系方式&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#security-embargo-policy"&gt;安全禁运政策&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#handbooks"&gt;手册&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#release-managers"&gt;发布管理员&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#becoming-a-release-manager"&gt;成为发布管理员&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#release-manager-associates"&gt;发布管理员助理&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#becoming-a-release-manager-associate"&gt;成为发布管理员助理&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#sig-release-leads"&gt;SIG 发布负责人&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#chairs"&gt;首席&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#technical-leads"&gt;技术负责人&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
## Contact

| Mailing List | Slack | Visibility | Usage | Membership |
| --- | --- | --- | --- | --- |
| [release-managers@kubernetes.io](mailto:release-managers@kubernetes.io) | [#release-management](https://kubernetes.slack.com/messages/CJH2GBF7Y) (channel) / @release-managers (user group) | Public | Public discussion for Release Managers | All Release Managers (including Associates, and SIG Chairs) |
| [release-managers-private@kubernetes.io](mailto:release-managers-private@kubernetes.io) | N/A | Private | Private discussion for privileged Release Managers | Release Managers, SIG Release leadership |
| [security-release-team@kubernetes.io](mailto:security-release-team@kubernetes.io) | [#security-release-team](https://kubernetes.slack.com/archives/G0162T1RYHG) (channel) / @security-rel-team (user group) | Private | Security release coordination with the Security Response Committee | [security-discuss-private@kubernetes.io](mailto:security-discuss-private@kubernetes.io), [release-managers-private@kubernetes.io](mailto:release-managers-private@kubernetes.io) |
--&gt;
&lt;h2 id="contact"&gt;联系方式&lt;/h2&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;邮件列表&lt;/th&gt;
 &lt;th&gt;Slack&lt;/th&gt;
 &lt;th&gt;可见范围&lt;/th&gt;
 &lt;th&gt;用法&lt;/th&gt;
 &lt;th&gt;会员资格&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="mailto:release-managers@kubernetes.io"&gt;release-managers@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.slack.com/messages/CJH2GBF7Y"&gt;#release-management&lt;/a&gt;（频道）/@release-managers（用户组）&lt;/td&gt;
 &lt;td&gt;公共&lt;/td&gt;
 &lt;td&gt;发布管理员公开讨论&lt;/td&gt;
 &lt;td&gt;所有发布管理员（包括助理和 SIG 主席）&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="mailto:release-managers-private@kubernetes.io"&gt;release-managers-private@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;不适用&lt;/td&gt;
 &lt;td&gt;私人&lt;/td&gt;
 &lt;td&gt;拥有特权的发布管理员私人讨论&lt;/td&gt;
 &lt;td&gt;发布管理员，SIG Release 负责人&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="mailto:security-release-team@kubernetes.io"&gt;security-release-team@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.slack.com/archives/G0162T1RYHG"&gt;#security-release-team&lt;/a&gt;（频道）/@security-rel-team（用户组）&lt;/td&gt;
 &lt;td&gt;私人&lt;/td&gt;
 &lt;td&gt;与安全响应委员会协调安全发布&lt;/td&gt;
 &lt;td&gt;&lt;a href="mailto:security-discuss-private@kubernetes.io"&gt;security-discuss-private@kubernetes.io&lt;/a&gt;, &lt;a href="mailto:release-managers-private@kubernetes.io"&gt;release-managers-private@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;!-- 
### Security Embargo Policy

Some information about releases is subject to embargo and we have defined policy about
how those embargoes are set. Please refer to the
[Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy)
for more information.
--&gt;
&lt;h3 id="security-embargo-policy"&gt;安全禁运政策&lt;/h3&gt;
&lt;p&gt;发布的某些相关信息受到禁运限制，我们已经定义了有关如何设置这些禁运的政策。
更多信息请参考&lt;a href="https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy"&gt;安全禁运政策&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>说明</title><link>https://andygol-k8s.netlify.app/zh-cn/releases/notes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/releases/notes/</guid><description>&lt;!--
linktitle: Release Notes
title: Notes
type: docs
description: &gt;
 Kubernetes release notes.
sitemap:
 priority: 0.5
--&gt;
&lt;!-- 
Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG)
that matches your Kubernetes version. View the changelog for 1.35 on
[GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.35.md).

Alternately, release notes can be searched and filtered online at: [relnotes.k8s.io](https://relnotes.k8s.io).
View filtered release notes for 1.35 on
[relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions=1.35.0).
--&gt;
&lt;p&gt;可以通过阅读与你的 Kubernetes 版本对应的
&lt;a href="https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG"&gt;Changelog&lt;/a&gt;
找到发行版本说明。
在 &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.35.md"&gt;GitHub&lt;/a&gt;
上查看 1.35 的变更日志。&lt;/p&gt;
&lt;p&gt;或者，可以在以下位置在线搜索和筛选发行版本说明：&lt;a href="https://relnotes.k8s.io"&gt;relnotes.k8s.io&lt;/a&gt;。
在 &lt;a href="https://relnotes.k8s.io/?releaseVersions=1.35.0"&gt;relnotes.k8s.io&lt;/a&gt;
上查看 1.35 的筛选后的版本说明。&lt;/p&gt;</description></item><item><title>管理集群中的 TLS 认证</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/</guid><description>&lt;!--
title: Manage TLS Certificates in a Cluster
content_type: task
reviewers:
- mikedanese
- beacham
- liggit
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes provides a `certificates.k8s.io` API, which lets you provision TLS
certificates signed by a Certificate Authority (CA) that you control. These CA
and certificates can be used by your workloads to establish trust.

`certificates.k8s.io` API uses a protocol that is similar to the [ACME
draft](https://github.com/ietf-wg-acme/acme/).
--&gt;
&lt;p&gt;Kubernetes 提供 &lt;code&gt;certificates.k8s.io&lt;/code&gt; API，可让你配置由你控制的证书颁发机构（CA）
签名的 TLS 证书。你的工作负载可以使用这些 CA 和证书来建立信任。&lt;/p&gt;
reviewers:
- derekwaynecarr
title: Manage HugePages
content_type: task
description: Configure and manage huge pages as a schedulable resource in a cluster.
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： HugePages"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.14 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
Kubernetes supports the allocation and consumption of pre-allocated huge pages
by applications in a Pod. This page describes how users can consume huge pages.
--&gt;
&lt;p&gt;Kubernetes 支持在 Pod 应用中使用预先分配的巨页。本文描述了用户如何使用巨页，以及当前的限制。&lt;/p&gt;
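作为示意，下面是一个请求预分配的 2 MiB 巨页的 Pod 清单草案（镜像名称与数值仅为示例）：

```yaml
# 示例：容器通过 hugepages-2Mi 资源和 HugePages 介质的 emptyDir 卷使用巨页
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # 仅为占位示例镜像
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
      requests:
        memory: 100Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```

注意：巨页资源的请求值必须等于限制值，且节点必须已经预先分配了对应规格的巨页。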
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
Kubernetes nodes must
[pre-allocate huge pages](https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html)
in order for the node to report its huge page capacity.

A node can pre-allocate huge pages for multiple sizes, for instance,
the following line in `/etc/default/grub` allocates `2*1GiB` of 1 GiB
and `512*2 MiB` of 2 MiB pages:
--&gt;
&lt;p&gt;为了使节点能够上报巨页容量，Kubernetes
节点必须&lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html"&gt;预先分配巨页&lt;/a&gt;。&lt;/p&gt;</description></item><item><title>华为案例分析</title><link>https://andygol-k8s.netlify.app/zh-cn/case-studies/huawei/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/case-studies/huawei/</guid><description>&lt;!--
title: Huawei Case Study
case_study_styles: true
cid: caseStudies

new_case_study_styles: true
heading_background: /images/case-studies/huawei/banner1.jpg
heading_title_logo: /images/huawei_logo.png
subheading: &gt;
 Embracing Cloud Native as a User – and a Vendor
case_study_details:
 - Company: Huawei
 - Location: Shenzhen, China
 - Industry: Telecommunications Equipment
--&gt;

&lt;!--
&lt;h2&gt;Challenge&lt;/h2&gt;
--&gt;
&lt;h2&gt;挑战&lt;/h2&gt;

&lt;p&gt;
&lt;!--
A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, &lt;a href="https://www.huawei.com/"&gt;Huawei&lt;/a&gt; has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."
--&gt;
华为作为一个跨国企业，是世界上最大的电信设备制造商，拥有超过 18 万名员工。
为了支持华为在全球的快速业务发展，&lt;a href="https://www.huawei.com/"&gt;华为&lt;/a&gt;内部 IT 部门有 8 个数据中心，
这些数据中心在 10 万多台虚拟机上运行了 800 多个应用程序，为内部 18 万用户提供服务。
随着新应用程序的快速增长，基于虚拟机的应用程序管理和部署的成本和效率都成为业务敏捷性的关键挑战。
该公司首席软件架构师、开源社区总监侯培新表示：
“这在很大程度上是一个分布式系统，因此我们发现，以更一致的方式管理所有任务始终是一个挑战。
我们希望转向一种更敏捷、更合理的实践。”
&lt;/p&gt;</description></item><item><title>获取正在运行容器的 Shell</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/get-shell-running-container/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/get-shell-running-container/</guid><description>&lt;!-- overview --&gt;
&lt;!--
This page shows how to use `kubectl exec` to get a shell to a
running container.
--&gt;
&lt;p&gt;本文介绍怎样使用 &lt;code&gt;kubectl exec&lt;/code&gt; 命令获取正在运行容器的 Shell。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>客户端身份认证（Client Authentication） (v1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/client-authentication.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/client-authentication.v1/</guid><description>&lt;!--
title: Client Authentication (v1)
content_type: tool-reference
package: client.authentication.k8s.io/v1
auto_generated: true
--&gt;
&lt;!--
## Resource Types 
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#client-authentication-k8s-io-v1-ExecCredential"&gt;ExecCredential&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="client-authentication-k8s-io-v1-ExecCredential"&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/h2&gt;
&lt;!--
ExecCredential is used by exec-based plugins to communicate credentials to
HTTP transports.
--&gt;
&lt;p&gt;ExecCredential 由基于 exec 的插件使用，与 HTTP 传输组件沟通凭据信息。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;client.authentication.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;spec&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1-ExecCredentialSpec"&gt;&lt;code&gt;ExecCredentialSpec&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 Spec holds information passed to the plugin by the transport.
 --&gt;
 字段 spec 包含由 HTTP 传输组件传递给插件的信息。
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;status&lt;/code&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1-ExecCredentialStatus"&gt;&lt;code&gt;ExecCredentialStatus&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 Status is filled in by the plugin and holds the credentials that the transport
 should use to contact the API.
 --&gt;
 字段 status 由插件填充，包含传输组件与 API 服务器连接时需要提供的凭据。
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
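作为示意，一个基于 exec 的插件可以在其标准输出上打印类似下面的 ExecCredential（其中的 token 值为假设示例）：

```yaml
# 插件输出示例：status 由插件填充，供传输组件与 API 服务器通信时使用
apiVersion: client.authentication.k8s.io/v1
kind: ExecCredential
status:
  token: "my-bearer-token"   # 假设的示例令牌
```

kubectl 等客户端会读取这一输出，并使用 status 中的凭据向 API 服务器发起请求。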
&lt;h2 id="client-authentication-k8s-io-v1-Cluster"&gt;&lt;code&gt;Cluster&lt;/code&gt;&lt;/h2&gt;
&lt;!--
**Appears in:**
--&gt;
&lt;p&gt;&lt;strong&gt;出现在：&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>客户端身份认证（Client Authentication）(v1beta1)</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/client-authentication.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/reference/config-api/client-authentication.v1beta1/</guid><description>&lt;!-- 
title: Client Authentication (v1beta1)
content_type: tool-reference
package: client.authentication.k8s.io/v1beta1
auto_generated: true
--&gt;
&lt;!--
## Resource Types 
--&gt;
&lt;h2 id="resource-types"&gt;资源类型&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredential"&gt;ExecCredential&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="client-authentication-k8s-io-v1beta1-ExecCredential"&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/h2&gt;
&lt;!--
ExecCredential is used by exec-based plugins to communicate credentials to
HTTP transports.
--&gt;
&lt;p&gt;ExecCredential 由基于 exec 的插件使用，与 HTTP 传输组件沟通凭据信息。&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;&lt;!--Field--&gt;字段&lt;/th&gt;&lt;th&gt;&lt;!--Description--&gt;描述&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;client.authentication.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;spec&lt;/code&gt; &lt;B&gt;&lt;!--[Required]--&gt;[必需]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredentialSpec"&gt;&lt;code&gt;ExecCredentialSpec&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 Spec holds information passed to the plugin by the transport.
 --&gt;
&lt;p&gt;字段 &lt;code&gt;spec&lt;/code&gt; 包含由 HTTP 传输组件传递给插件的信息。&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;status&lt;/code&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredentialStatus"&gt;&lt;code&gt;ExecCredentialStatus&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;!--
 Status is filled in by the plugin and holds the credentials that the transport
 should use to contact the API.
 --&gt;
&lt;p&gt;字段 &lt;code&gt;status&lt;/code&gt; 由插件填充，包含传输组件与 API 服务器连接时需要提供的凭据。&lt;/p&gt;</description></item><item><title>扩展 Service IP 范围</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/extend-service-ip-ranges/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/extend-service-ip-ranges/</guid><description>&lt;!--
reviewers:
- thockin
- dwinship
min-kubernetes-server-version: v1.29
title: Extend Service IP Ranges
content_type: task
--&gt;
&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="特性门控： MultiCIDRServiceAllocator"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt;（默认启用）&lt;/div&gt;

&lt;!--
This document shares how to extend the existing Service IP range assigned to a cluster.
--&gt;
&lt;p&gt;本文将介绍如何扩展分配给集群的现有 Service IP 范围。&lt;/p&gt;
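作为示意，下面是一个用于新增 Service IP 范围的 ServiceCIDR 清单草案（名称与 CIDR 仅为示例）：

```yaml
# 示例：向集群添加一个额外的 Service CIDR
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: newcidr1
spec:
  cidrs:
  - 10.96.0.0/24
```

创建该对象之后，集群通常就可以从这一新范围为 Service 分配 ClusterIP。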
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>审计</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/audit/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/audit/</guid><description>&lt;!--
reviewers:
- soltysh
- sttts
content_type: concept
title: Auditing
--&gt;
&lt;!-- overview --&gt;
&lt;!--
Kubernetes _auditing_ provides a security-relevant, chronological set of records documenting
the sequence of actions in a cluster. The cluster audits the activities generated by users,
by applications that use the Kubernetes API, and by the control plane itself.

Auditing allows cluster administrators to answer the following questions:
--&gt;
&lt;p&gt;Kubernetes &lt;strong&gt;审计（Auditing）&lt;/strong&gt; 功能提供了与安全相关的、按时间顺序排列的记录集，
记录每个用户、使用 Kubernetes API 的应用以及控制面自身引发的活动。&lt;/p&gt;</description></item><item><title>使用 telepresence 在本地开发和调试服务</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/local-debugging/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/local-debugging/</guid><description>&lt;!--
title: Developing and debugging services locally using telepresence
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;说明：&lt;/strong&gt;&amp;puncsp;本部分链接到提供 Kubernetes 所需功能的第三方项目。Kubernetes 项目作者不负责这些项目。此页面遵循&lt;a href="https://github.com/cncf/foundation/blob/main/policies-guidance/website-guidelines.md" target="_blank"&gt;CNCF 网站指南&lt;/a&gt;，按字母顺序列出项目。要将项目添加到此列表中，请在提交更改之前阅读&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/style/content-guide/#third-party-content"&gt;内容指南&lt;/a&gt;。&lt;/div&gt;
&lt;!--
Kubernetes applications usually consist of multiple, separate services,
each running in its own container. Developing and debugging these services
on a remote Kubernetes cluster can be cumbersome, requiring you to
[get a shell on a running container](/docs/tasks/debug/debug-application/get-shell-running-container/)
in order to run debugging tools.
--&gt;
&lt;p&gt;Kubernetes 应用程序通常由多个独立的服务组成，每个服务都在自己的容器中运行。
在远端的 Kubernetes 集群上开发和调试这些服务可能很麻烦，
需要&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-application/get-shell-running-container/"&gt;在运行的容器上打开 Shell&lt;/a&gt;，
以运行调试工具。&lt;/p&gt;</description></item><item><title>手动轮换 CA 证书</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/manual-rotation-of-ca-certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/manual-rotation-of-ca-certificates/</guid><description>&lt;!--
title: Manual Rotation of CA Certificates
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to manually rotate the certificate authority (CA) certificates.
--&gt;
&lt;p&gt;本页展示如何手动轮换证书机构（CA）证书。&lt;/p&gt;
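&lt;p&gt;手动轮换之前，通常需要知道当前 CA 证书距离过期还有多久。下面是一个用 Python 标准库估算剩余天数的小示意（&lt;code&gt;notAfter&lt;/code&gt; 的取值是演示用假设，实际应从证书本身读取）：&lt;/p&gt;

```python
import ssl
import time

# 示意：根据证书的 notAfter 字段估算距离过期的天数，
# 以便提前安排 CA 证书的手动轮换（日期为演示用假设值）。
not_after = "Jun  1 12:00:00 2030 GMT"
expiry_ts = ssl.cert_time_to_seconds(not_after)  # 解析为 Unix 时间戳（UTC）
days_left = (expiry_ts - time.time()) / 86400
print(f"距离 CA 证书过期还有约 {days_left:.0f} 天")
```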
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>搜索结果</title><link>https://andygol-k8s.netlify.app/zh-cn/search/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/search/</guid><description>&lt;!--
layout: search
title: Search Results
--&gt;</description></item><item><title>为 kubelet 配置证书轮换</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/certificate-rotation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/tls/certificate-rotation/</guid><description>&lt;!--
reviewers:
- jcbsmpsn
- mikedanese
title: Configure Certificate Rotation for the Kubelet
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to enable and configure certificate rotation for the kubelet.
--&gt;
&lt;p&gt;本文展示如何在 kubelet 中启用并配置证书轮换。&lt;/p&gt;
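&lt;p&gt;证书轮换的基本思路是在有效期耗尽之前就申请新证书。下面用 Python 演示"按有效期消耗比例确定轮换时间点"的逻辑（80% 这一阈值是本示例的假设，kubelet 实际的触发点带有随机抖动，以其实现为准）：&lt;/p&gt;

```python
from datetime import datetime, timedelta

# 示意：在证书有效期消耗到给定比例（此处假设为 80%）时触发轮换。
def rotation_time(not_before: datetime, not_after: datetime,
                  ratio: float = 0.8) -> datetime:
    lifetime = not_after - not_before
    return not_before + lifetime * ratio

issued = datetime(2030, 1, 1)
expires = issued + timedelta(days=365)  # 签发期限一年
print(rotation_time(issued, expires))   # 2030-10-20 00:00:00
```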
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* Kubernetes version 1.8.0 or later is required
--&gt;
&lt;ul&gt;
&lt;li&gt;要求 Kubernetes 1.8.0 或更高的版本&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;!--
## Overview

The kubelet uses certificates for authenticating to the Kubernetes API. By
default, these certificates are issued with one year expiration so that they do
not need to be renewed too frequently.
--&gt;
&lt;h2 id="概述"&gt;概述&lt;/h2&gt;
&lt;p&gt;kubelet 使用证书向 Kubernetes API 进行身份认证。
默认情况下，这些证书的签发期限为一年，所以不需要太频繁地进行更新。&lt;/p&gt;</description></item><item><title>为 Kubernetes 文档出一份力</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/docs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/docs/</guid><description>&lt;!--
content_type: concept
title: Contribute to Kubernetes Documentation
weight: 09
card:
 name: contribute
 weight: 11
 title: Contribute to documentation
--&gt;
&lt;!--
This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).
The Kubernetes project welcomes help from all contributors, new or experienced!

Kubernetes documentation contributors:

- Improve existing content
- Create new content
- Translate the documentation
- Manage and publish the documentation parts of the Kubernetes release cycle

The blog team, part of SIG Docs, helps manage the official blogs. Read
[contributing to Kubernetes blogs](/docs/contribute/blog/) to learn more.
--&gt;
&lt;p&gt;本网站由 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/contribute/#get-involved-with-sig-docs"&gt;Kubernetes SIG Docs&lt;/a&gt;（文档特别兴趣小组）维护。
Kubernetes 项目欢迎所有贡献者（无论是新手还是经验丰富的贡献者）提供帮助！&lt;/p&gt;</description></item><item><title>下载 Kubernetes</title><link>https://andygol-k8s.netlify.app/zh-cn/releases/download/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/releases/download/</guid><description>&lt;!--
title: Download Kubernetes
type: docs
--&gt;
&lt;!--
Kubernetes ships binaries for each component as well as a standard set of client
applications to bootstrap or interact with a cluster. Components like the
API server are capable of running within container images inside of a
cluster. Those components are also shipped in container images as part of the
official release process. All binaries as well as container images are available
for multiple operating systems as well as hardware architectures.
--&gt;
&lt;p&gt;Kubernetes 为每个组件提供二进制文件以及一组标准的客户端应用来引导集群或与集群交互。
像 API 服务器这样的组件能够以容器镜像的形式在集群内运行。
这些组件作为官方发布过程的一部分，也以容器镜像的形式提供。
所有二进制文件和容器镜像都可用于多种操作系统和硬件架构。&lt;/p&gt;</description></item><item><title>验证 IPv4/IPv6 双协议栈</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/validate-dual-stack/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/validate-dual-stack/</guid><description>&lt;!--
reviewers:
- lachie83
- khenidak
- bridgetkromhout
min-kubernetes-server-version: v1.23
title: Validate IPv4/IPv6 dual-stack
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This document shares how to validate IPv4/IPv6 dual-stack enabled Kubernetes clusters.
--&gt;
&lt;p&gt;本文分享了如何验证 IPv4/IPv6 双协议栈的 Kubernetes 集群。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
* Provider support for dual-stack networking (Cloud provider or otherwise must be able to
 provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
* A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
 that supports dual-stack networking.
* [Dual-stack enabled](/docs/concepts/services-networking/dual-stack/) cluster
--&gt;
&lt;ul&gt;
&lt;li&gt;提供商支持双协议栈网络（云提供商或其他方式必须能够为 Kubernetes 节点提供可路由的 IPv4/IPv6 网络接口）&lt;/li&gt;
&lt;li&gt;一个能够支持&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dual-stack/"&gt;双协议栈&lt;/a&gt;网络的
&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/"&gt;网络插件&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/services-networking/dual-stack/"&gt;启用双协议栈&lt;/a&gt;集群&lt;/li&gt;
&lt;/ul&gt;
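&lt;p&gt;"启用双协议栈"意味着集群的地址段配置中同时包含 IPv4 和 IPv6 两个协议族。下面的 Python 小示意演示如何检查一组 CIDR 是否恰好覆盖这两个协议族（地址段取值仅为演示用假设）：&lt;/p&gt;

```python
import ipaddress

# 示意：双协议栈配置应当同时包含 IPv4 和 IPv6 两个协议族的地址段
# （地址段取值为演示用假设）。
def is_dual_stack(cidrs):
    families = {ipaddress.ip_network(c).version for c in cidrs}
    return families == {4, 6}

print(is_dual_stack(["10.96.0.0/16", "fd00:10:96::/112"]))  # True
print(is_dual_stack(["10.96.0.0/16"]))                      # False：仅 IPv4
```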


&lt;p&gt;你的 Kubernetes 服务器版本必须不低于 v1.23。
要获知版本信息，请输入 &lt;code&gt;kubectl version&lt;/code&gt;。&lt;/p&gt;
reviewers:
- liggitt
- jpbetz
- cici37
title: Validating Admission Policy
content_type: concept
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.30 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;!--
This page provides an overview of Validating Admission Policy.
--&gt;
&lt;p&gt;本页面提供验证准入策略（Validating Admission Policy）的概述。&lt;/p&gt;
&lt;!-- body --&gt;
&lt;!--
## What is Validating Admission Policy?

Validating admission policies offer a declarative, in-process alternative to validating admission webhooks.

Validating admission policies use the Common Expression Language (CEL) to declare the validation
rules of a policy.
Validation admission policies are highly configurable, enabling policy authors to define policies
that can be parameterized and scoped to resources as needed by cluster administrators.
--&gt;
&lt;h2 id="what-is-validating-admission-policy"&gt;什么是验证准入策略？&lt;/h2&gt;
&lt;p&gt;验证准入策略提供了一种声明式的、进程内运行的方案，可用来替代验证准入 Webhook。&lt;/p&gt;</description></item><item><title>用 Kubectl 调试 Kubernetes 节点</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/kubectl-node-debug/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/debug/debug-cluster/kubectl-node-debug/</guid><description>&lt;!--
title: Debugging Kubernetes Nodes With Kubectl
content_type: task
min-kubernetes-server-version: 1.20
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This page shows how to debug a [node](/docs/concepts/architecture/nodes/)
running on the Kubernetes cluster using `kubectl debug` command.
--&gt;
&lt;p&gt;本页演示如何使用 &lt;code&gt;kubectl debug&lt;/code&gt; 命令调试在 Kubernetes
集群上运行的&lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/concepts/architecture/nodes/"&gt;节点&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item><item><title>用插件扩展 kubectl</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/</guid><description>&lt;!--
title: Extend kubectl with plugins
reviewers:
- juanvallejo
- soltysh
description: Extend kubectl by creating and installing kubectl plugins.
content_type: task
--&gt;
&lt;!-- overview --&gt;
&lt;!--
This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/).
By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster,
a cluster administrator can think of plugins as a means of utilizing these building blocks to create more complex behavior.
Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`.
--&gt;
&lt;p&gt;本指南演示了如何为 &lt;a href="https://andygol-k8s.netlify.app/zh-cn/docs/reference/kubectl/kubectl/"&gt;kubectl&lt;/a&gt; 安装和编写扩展。
通过将核心 &lt;code&gt;kubectl&lt;/code&gt; 命令看作与 Kubernetes 集群交互的基本构建块，
集群管理员可以将插件视为一种利用这些构建块创建更复杂行为的方法。
插件通过新的子命令扩展 &lt;code&gt;kubectl&lt;/code&gt;，支持 &lt;code&gt;kubectl&lt;/code&gt; 主发行版中未包含的新的自定义特性。&lt;/p&gt;</description></item><item><title>中文本地化样式指南</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/localization_zh/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/contribute/localization_zh/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;本节详述文档中文本地化过程中须注意的事项。
这里列举的内容包含了&lt;strong&gt;中文本地化小组&lt;/strong&gt;早期给出的指导性建议和后续实践过程中积累的经验。
在阅读、贡献、评阅中文本地化文档的过程中，如果对本文的指南有任何改进建议，
都请直接提出 PR。我们欢迎任何形式的补充和更正！&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="general"&gt;一般规定&lt;/h2&gt;
&lt;p&gt;本节列举一些译文中常见问题和约定。&lt;/p&gt;
&lt;h3 id="commented-en-text"&gt;英文原文的保留&lt;/h3&gt;
&lt;p&gt;为便于译文审查和变更追踪，所有中文本地化 Markdown 文件中都应使用 HTML 注释
&lt;code&gt;&amp;lt;!--&lt;/code&gt; 和 &lt;code&gt;--&amp;gt;&lt;/code&gt; 将英文原文逐段注释起来，后跟对应中文译文。例如：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;&amp;lt;!--
This is English text ... 
--&amp;gt;
中文译文对应 ...
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;不建议采用下面的方式注释英文段落，除非英文段落非常非常短：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;&amp;lt;!-- This is English text ... --&amp;gt;
中文译文对应 ...
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;无论英文原文或者中文译文中，都不要保留过多的、不必要的空白行。&lt;/p&gt;
&lt;h4 id="paras"&gt;段落划分&lt;/h4&gt;
&lt;p&gt;请避免大段大段地注释和翻译。一般而言，每段翻译可对应两三个自然段。
段落过长会导致译文很难评阅。但也不必每个段落都单独翻译。例如：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;&amp;lt;!--
## Overview

### Concept

First paragraph, not very long.
--&amp;gt;
## 概述 {#overview}

### 概念 {#concept}

第一段落，不太长。
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;以下风格是不必要的：&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;&amp;lt;!--
## Overview
--&amp;gt;
## 概述 {#overview}

&amp;lt;!--
### Concept
--&amp;gt;
### 概念 {#concept}

&amp;lt;!--
First paragraph, not very long.
--&amp;gt;
第一段落，不太长。
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id="list"&gt;编号列表的处理&lt;/h4&gt;
&lt;p&gt;编号列表要求各条目的编号保持连续，处理不当可能导致输出结果错误。
由于有些列表可能很长，一次性将整个列表注释掉再翻译也不现实。
推荐采用下面的方式。&lt;/p&gt;</description></item><item><title>重新配置 Kubernetes 默认的 ServiceCIDR</title><link>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/reconfigure-default-service-ip-ranges/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/zh-cn/docs/tasks/network/reconfigure-default-service-ip-ranges/</guid><description>&lt;!--
reviewers:
- thockin
- dwinship
min-kubernetes-server-version: v1.33
title: Kubernetes Default ServiceCIDR Reconfiguration
content_type: task
--&gt;
&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="特性门控：MultiCIDRServiceAllocator"&gt;
 &lt;span class="feature-state-name"&gt;特性状态：&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt;（默认启用）
 &lt;/div&gt;

&lt;!--
This document shares how to reconfigure the default Service IP range(s) assigned
to a cluster.
--&gt;
&lt;p&gt;本文介绍如何重新配置集群中分配的默认 Service IP 范围。&lt;/p&gt;
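&lt;p&gt;重新配置默认范围时，需要留意已分配的关键 ClusterIP（例如 &lt;code&gt;kubernetes.default&lt;/code&gt; 服务的 IP）是否仍落在新范围内。下面用 Python 演示这一检查（地址取值仅为演示用假设）：&lt;/p&gt;

```python
import ipaddress

# 示意：检查已分配的 ClusterIP 是否位于计划使用的新默认范围内
# （地址均为演示用假设值）。
new_cidr = ipaddress.ip_network("10.16.0.0/16")        # 计划的新默认范围
kubernetes_svc_ip = ipaddress.ip_address("10.96.0.1")  # 旧范围中已分配的 IP

if kubernetes_svc_ip not in new_cidr:
    print("该 ClusterIP 不在新范围内，重新配置时需要一并迁移对应的 Service")
```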
&lt;h2 id="准备开始"&gt;准备开始&lt;/h2&gt;
&lt;!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
--&gt;
&lt;p&gt;你必须拥有一个 Kubernetes 的集群，且必须配置 kubectl 命令行工具让其与你的集群通信。
建议运行本教程的集群至少有两个节点，且这两个节点不能作为控制平面主机。
如果你还没有集群，你可以通过 &lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;Minikube&lt;/a&gt;
构建一个你自己的集群，或者你可以使用下面的 Kubernetes 练习环境之一：&lt;/p&gt;</description></item></channel></rss>