<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Production-Grade Container Orchestration on Kubernetes</title><link>https://andygol-k8s.netlify.app/</link><description>Recent content in Production-Grade Container Orchestration on Kubernetes</description><generator>Hugo</generator><language>en</language><atom:link href="https://andygol-k8s.netlify.app/feed.xml" rel="self" type="application/rss+xml"/><item><title>APIService</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/api-service-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/api-service-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apiregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/kube-aggregator/pkg/apis/apiregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="APIService"&gt;APIService&lt;/h2&gt;
&lt;p&gt;APIService represents a server for a particular GroupVersion. Name must be &amp;quot;version.group&amp;quot;.&lt;/p&gt;</description></item><item><title>Babylon Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/babylon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/babylon/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A large number of Babylon's products leverage machine learning and artificial intelligence, and in 2019, there wasn't enough computing power in-house to run a particular experiment. The company was also growing (from 100 to 1,600 in three years) and planning expansion into other countries.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Babylon had migrated its user-facing applications to a Kubernetes platform in 2018, so the infrastructure team turned to Kubeflow, a toolkit for machine learning on Kubernetes. "We tried to create a Kubernetes core server, we deployed Kubeflow, and we orchestrated the whole experiment, which ended up being a really good success," says AI Infrastructure Lead Jérémie Vallée. The team began building a self-service AI training platform on top of Kubernetes.&lt;/p&gt;</description></item><item><title>ConfigMap</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ConfigMap"&gt;ConfigMap&lt;/h2&gt;
&lt;p&gt;ConfigMap holds configuration data for pods to consume.&lt;/p&gt;
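&lt;p&gt;As an illustration (the ConfigMap name &lt;code&gt;game-config&lt;/code&gt; and its keys are hypothetical), a minimal manifest might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
data:
  # plain key/value pairs consumed by pods as env vars or mounted files
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"&lt;/code&gt;&lt;/pre&gt;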
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ConfigMap&lt;/p&gt;</description></item><item><title>CustomResourceDefinition</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CustomResourceDefinition"&gt;CustomResourceDefinition&lt;/h2&gt;
&lt;p&gt;CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format &amp;lt;.spec.name&amp;gt;.&amp;lt;.spec.group&amp;gt;.&lt;/p&gt;</description></item><item><title>DeleteOptions</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/delete-options/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/delete-options/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;DeleteOptions may be provided when deleting an API object.&lt;/p&gt;
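&lt;p&gt;For example (the field values are illustrative), DeleteOptions can be sent as the body of a DELETE request to control grace period and cascading behavior:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  "apiVersion": "v1",
  "kind": "DeleteOptions",
  "gracePeriodSeconds": 30,
  "propagationPolicy": "Foreground"
}&lt;/code&gt;&lt;/pre&gt;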
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt; (string)&lt;/p&gt;
&lt;p&gt;APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: &lt;a href="https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources"&gt;https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources&lt;/a&gt;&lt;/p&gt;</description></item><item><title>FlowSchema</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/flow-schema-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/flow-schema-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: flowcontrol.apiserver.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/flowcontrol/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="FlowSchema"&gt;FlowSchema&lt;/h2&gt;
&lt;p&gt;FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a &amp;quot;flow distinguisher&amp;quot;.&lt;/p&gt;</description></item><item><title>Introduction to kubectl</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/introduction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/introduction/</guid><description>&lt;p&gt;kubectl is the Swiss Army knife of Kubernetes command-line tools: it can do many things.&lt;/p&gt;
&lt;p&gt;While this book focuses on using kubectl to declaratively manage applications in Kubernetes, it also covers other kubectl functions.&lt;/p&gt;
&lt;h2 id="command-families"&gt;Command Families&lt;/h2&gt;
&lt;p&gt;kubectl commands generally fall into one of a few categories:&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Type&lt;/th&gt;
 &lt;th&gt;Used For&lt;/th&gt;
 &lt;th&gt;Description&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Declarative Resource Management&lt;/td&gt;
 &lt;td&gt;Deployment and operations (e.g. GitOps)&lt;/td&gt;
 &lt;td&gt;Declaratively manage Kubernetes workloads using resource configuration&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Imperative Resource Management&lt;/td&gt;
 &lt;td&gt;Development Only&lt;/td&gt;
 &lt;td&gt;Run commands to manage Kubernetes workloads using Command Line arguments and flags&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Printing Workload State&lt;/td&gt;
 &lt;td&gt;Debugging&lt;/td&gt;
 &lt;td&gt;Print information about workloads&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Interacting with Containers&lt;/td&gt;
 &lt;td&gt;Debugging&lt;/td&gt;
 &lt;td&gt;Exec, attach, cp, logs&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cluster Management&lt;/td&gt;
 &lt;td&gt;Cluster operations&lt;/td&gt;
 &lt;td&gt;Drain and cordon Nodes&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
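&lt;p&gt;One representative command per family (resource names such as &lt;code&gt;deployment.yaml&lt;/code&gt;, &lt;code&gt;my-pod&lt;/code&gt;, and &lt;code&gt;my-node&lt;/code&gt; are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl apply -f deployment.yaml                # declarative resource management
kubectl create deployment nginx --image=nginx   # imperative resource management
kubectl get pods                                # printing workload state
kubectl logs my-pod                             # interacting with containers
kubectl drain my-node                           # cluster management&lt;/code&gt;&lt;/pre&gt;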
&lt;h2 id="declarative-application-management"&gt;Declarative Application Management&lt;/h2&gt;
&lt;p&gt;The preferred approach for managing resources is declarative: files known as resource configuration are used with the kubectl &lt;em&gt;apply&lt;/em&gt; command. This command reads a local (or remote) file structure and modifies cluster state to
reflect the declared intent.&lt;/p&gt;</description></item><item><title>LocalSubjectAccessReview</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/local-subject-access-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/local-subject-access-review-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="LocalSubjectAccessReview"&gt;LocalSubjectAccessReview&lt;/h2&gt;
&lt;p&gt;LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking.&lt;/p&gt;</description></item><item><title>MutatingAdmissionPolicyBindingList v1beta1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/other-resources/mutating-admission-policy-binding-list-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/other-resources/mutating-admission-policy-binding-list-v1beta1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;</description></item><item><title>Pod</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/pod-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/pod-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Pod"&gt;Pod&lt;/h2&gt;
&lt;p&gt;Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.&lt;/p&gt;</description></item><item><title>Service</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/service-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/service-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Service"&gt;Service&lt;/h2&gt;
&lt;p&gt;Service is a named abstraction of a software service (for example, mysql), consisting of a local port (for example, 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.&lt;/p&gt;</description></item><item><title>ServiceAccount</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/service-account-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/service-account-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ServiceAccount"&gt;ServiceAccount&lt;/h2&gt;
&lt;p&gt;ServiceAccount binds together:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;a name, understood by users, and perhaps by peripheral systems, for an identity&lt;/li&gt;&lt;li&gt;a principal that can be authenticated and authorized&lt;/li&gt;&lt;li&gt;a set of secrets&lt;/li&gt;&lt;/ul&gt;</description></item><item><title>Binding</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/binding-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Binding"&gt;Binding&lt;/h2&gt;
&lt;p&gt;Binding ties one object to another; for example, a pod is bound to a node by a scheduler.&lt;/p&gt;</description></item><item><title>Booz Allen Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/booz-allen/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/booz-allen/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In 2017, Booz Allen Hamilton's Strategic Innovation Group worked with the federal government to relaunch the decade-old recreation.gov website, which provides information and real-time booking for more than 100,000 campsites and facilities on federal lands across the country. The infrastructure needed to be agile, reliable, and scalable—as well as repeatable for the other federal agencies that are among Booz Allen Hamilton's customers.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;"The only way that we thought we could be successful with this problem across all the different agencies is to create a microservice architecture and containers, so that we could be very dynamic and very agile to any given agency for whatever requirements that they may have," says Booz Allen Hamilton Senior Lead Technologist Martin Folkoff. To meet those requirements, Folkoff's team looked to Kubernetes for orchestration.&lt;/p&gt;</description></item><item><title>Bose Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/bose/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/bose/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A household name in high-quality audio equipment, &lt;a href="https://www.bose.com/en_us/index.html"&gt;Bose&lt;/a&gt; has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. "We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast," says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: "To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale," says Cloud Architecture Manager Dylan O'Mahony.&lt;/p&gt;</description></item><item><title>ComponentStatus</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/component-status-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/component-status-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ComponentStatus"&gt;ComponentStatus&lt;/h2&gt;
&lt;p&gt;ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+&lt;/p&gt;</description></item><item><title>DeviceClass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/device-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/device-class-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="DeviceClass"&gt;DeviceClass&lt;/h2&gt;
&lt;p&gt;DeviceClass is a vendor- or admin-provided resource that contains device configuration and selectors. It can be referenced in the device requests of a claim to apply these presets. Cluster scoped.&lt;/p&gt;</description></item><item><title>Endpoints</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/endpoints-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/endpoints-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Endpoints"&gt;Endpoints&lt;/h2&gt;
&lt;p&gt;Endpoints is a collection of endpoints that implement the actual service. Example:&lt;/p&gt;</description></item><item><title>LabelSelector</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/label-selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/label-selector/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.&lt;/p&gt;</description></item><item><title>LimitRange</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/limit-range-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/limit-range-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="LimitRange"&gt;LimitRange&lt;/h2&gt;
&lt;p&gt;LimitRange sets resource usage limits for each kind of resource in a Namespace.&lt;/p&gt;</description></item><item><title>Secret</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Secret"&gt;Secret&lt;/h2&gt;
&lt;p&gt;Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes.&lt;/p&gt;</description></item><item><title>SelfSubjectAccessReview</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/self-subject-access-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/self-subject-access-review-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SelfSubjectAccessReview"&gt;SelfSubjectAccessReview&lt;/h2&gt;
&lt;p&gt;SelfSubjectAccessReview checks whether or not the current user can perform an action. Not filling in a spec.namespace means &amp;quot;in all namespaces&amp;quot;. Self is a special case, because users should always be able to check whether they can perform an action.&lt;/p&gt;</description></item><item><title>TokenRequest</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/token-request-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/token-request-v1/</guid><description>&lt;!--
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authentication/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="TokenRequest"&gt;TokenRequest&lt;/h2&gt;
&lt;p&gt;TokenRequest requests a token for a given service account.&lt;/p&gt;
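&lt;p&gt;As a sketch (the audience and expiration values are illustrative), a TokenRequest submitted to a ServiceAccount's &lt;code&gt;token&lt;/code&gt; subresource might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  # the intended audiences of the issued token
  audiences:
  - https://kubernetes.default.svc
  # requested lifetime of the token, in seconds
  expirationSeconds: 3600&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the command line, &lt;code&gt;kubectl create token&lt;/code&gt; issues such a token for a named ServiceAccount.&lt;/p&gt;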
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: authentication.k8s.io/v1&lt;/p&gt;</description></item><item><title>Booking.com Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/booking-com/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/booking-com/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In 2016, Booking.com migrated to an OpenShift platform, which gave product developers faster access to infrastructure. But because Kubernetes was abstracted away from the developers, the infrastructure team became a "knowledge bottleneck" when challenges arose. Trying to scale that support wasn't sustainable.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;After a year operating OpenShift, the platform team decided to build its own vanilla Kubernetes platform—and ask developers to learn some Kubernetes in order to use it. "This is not a magical platform," says Ben Tyler, Principal Developer, B Platform Track. "We're not claiming that you can just use it with your eyes closed. Developers need to do some learning, and we're going to do everything we can to make sure they have access to that knowledge."&lt;/p&gt;</description></item><item><title>CSIDriver</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CSIDriver"&gt;CSIDriver&lt;/h2&gt;
&lt;p&gt;CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. The Kubernetes attach/detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced.&lt;/p&gt;</description></item><item><title>EndpointSlice</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: discovery.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/discovery/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="EndpointSlice"&gt;EndpointSlice&lt;/h2&gt;
&lt;p&gt;EndpointSlice represents a set of service endpoints. Most EndpointSlices are created by the EndpointSlice controller to represent the Pods selected by Service objects. For a given service there may be multiple EndpointSlice objects which must be joined to produce the full set of endpoints; you can find all of the slices for a given service by listing EndpointSlices in the service's namespace whose &lt;code&gt;kubernetes.io/service-name&lt;/code&gt; label contains the service's name.&lt;/p&gt;</description></item><item><title>Event</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/event-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/event-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: events.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/events/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Event"&gt;Event&lt;/h2&gt;
&lt;p&gt;Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time, and their triggers and messages may evolve over time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data.&lt;/p&gt;</description></item><item><title>ListMeta</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/list-meta/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/list-meta/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.&lt;/p&gt;</description></item><item><title>MutatingWebhookConfiguration</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="MutatingWebhookConfiguration"&gt;MutatingWebhookConfiguration&lt;/h2&gt;
&lt;p&gt;MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the object.&lt;/p&gt;</description></item><item><title>PodTemplate</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/pod-template-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/pod-template-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PodTemplate"&gt;PodTemplate&lt;/h2&gt;
&lt;p&gt;PodTemplate describes a template for creating copies of a predefined pod.&lt;/p&gt;</description></item><item><title>ResourceQuota</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceQuota"&gt;ResourceQuota&lt;/h2&gt;
&lt;p&gt;ResourceQuota sets aggregate quota restrictions enforced per namespace.&lt;/p&gt;
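&lt;p&gt;As an illustrative sketch (the name and namespace below are hypothetical), a quota capping compute requests and Pod count in one namespace could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota      # hypothetical name
  namespace: dev           # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requests across the namespace
    requests.memory: 8Gi   # total memory requests across the namespace
    pods: "10"             # maximum number of Pods
&lt;/code&gt;&lt;/pre&gt;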
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ResourceQuota&lt;/p&gt;</description></item><item><title>SelfSubjectRulesReview</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/self-subject-rules-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/self-subject-rules-review-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SelfSubjectRulesReview"&gt;SelfSubjectRulesReview&lt;/h2&gt;
&lt;p&gt;SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode and any errors experienced during evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT be used by external systems to drive authorization decisions, as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview and LocalAccessReview are the correct way to defer authorization decisions to the API server.&lt;/p&gt;</description></item><item><title>TokenReview</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/token-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/token-review-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authentication/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="TokenReview"&gt;TokenReview&lt;/h2&gt;
&lt;p&gt;TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver.&lt;/p&gt;</description></item><item><title>AppDirect Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/appdirect/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/appdirect/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.appdirect.com/"&gt;AppDirect&lt;/a&gt; provides an end-to-end commerce platform for cloud-based products and services. When Director of Software Development Pierre-Alexandre Lacerte began working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature, then another team picking up the change. So you had bottlenecks in the pipeline to ship a feature to production." At the same time, the engineering team was growing, and the company realized it needed a better infrastructure to both support that growth and increase velocity.&lt;/p&gt;</description></item><item><title>CertificateSigningRequest</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/certificates/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CertificateSigningRequest"&gt;CertificateSigningRequest&lt;/h2&gt;
&lt;p&gt;CertificateSigningRequest objects provide a mechanism to obtain X.509 certificates by submitting a certificate signing request and having it asynchronously approved and issued.&lt;/p&gt;</description></item><item><title>CSINode</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/csi-node-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/csi-node-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CSINode"&gt;CSINode&lt;/h2&gt;
&lt;p&gt;CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI drivers available on the node, or the kubelet version is too old to create this object. CSINode has an OwnerReference that points to the corresponding node object.&lt;/p&gt;</description></item><item><title>Denso Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/denso/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/denso/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;DENSO Corporation is one of the biggest automotive components suppliers in the world. With the advent of connected cars, the company launched a Digital Innovation Department to expand into software, working on vehicle edge and vehicle cloud products. But there were several technical challenges to creating an integrated vehicle edge/cloud platform: "the amount of computing resources, the occasional lack of mobile signal, and an enormous number of distributed vehicles," says R&amp;D Product Manager Seiichi Koizumi.&lt;/p&gt;</description></item><item><title>Ingress</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/ingress-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/ingress-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Ingress"&gt;Ingress&lt;/h2&gt;
&lt;p&gt;Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL/TLS, offer name-based virtual hosting, and so on.&lt;/p&gt;</description></item><item><title>IPAddress</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/ip-address-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/ip-address-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="IPAddress"&gt;IPAddress&lt;/h2&gt;
&lt;p&gt;IPAddress represents a single IP of a single IP family. The object is designed to be used by APIs that operate on IP addresses; the Service core API uses it for the allocation of IP addresses. An IP address can be written in different formats, so to guarantee uniqueness the name of the object is the IP address in canonical format: four decimal octets separated by dots with no leading zeros for IPv4, and the representation defined by RFC 5952 for IPv6. Valid: 192.168.1.5, 2001:db8::1, or 2001:db8:aaaa:bbbb:cccc:dddd:eeee:1. Invalid: 10.01.2.3 or 2001:db8:0:0:0::1.&lt;/p&gt;</description></item><item><title>LocalObjectReference</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/local-object-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/local-object-reference/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.&lt;/p&gt;</description></item><item><title>NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/network-policy-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/network-policy-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="NetworkPolicy"&gt;NetworkPolicy&lt;/h2&gt;
&lt;p&gt;NetworkPolicy describes what network traffic is allowed for a set of Pods.&lt;/p&gt;</description></item><item><title>Ocado Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ocado/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ocado/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The world's largest online-only grocery retailer, &lt;a href="http://www.ocadogroup.com/"&gt;Ocado&lt;/a&gt; developed the Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other retailers such as &lt;a href="http://fortune.com/2018/05/17/ocado-kroger-warehouse-automation-amazon-walmart/"&gt;Kroger&lt;/a&gt;. To set up the first warehouses for the platform, Ocado shifted from virtual machines and &lt;a href="https://puppet.com/"&gt;Puppet&lt;/a&gt; infrastructure to &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; containers, using CoreOS's &lt;a href="https://github.com/coreos/fleet"&gt;fleet&lt;/a&gt; scheduler to provision all the services on its &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt;-based private cloud on bare metal. As the Smart Platform grew and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."&lt;/p&gt;</description></item><item><title>ReplicationController</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/replication-controller-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/replication-controller-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ReplicationController"&gt;ReplicationController&lt;/h2&gt;
&lt;p&gt;ReplicationController represents the configuration of a replication controller.&lt;/p&gt;
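&lt;p&gt;For illustration only (the name and image below are hypothetical), a ReplicationController that keeps three replicas of a pod running could be sketched as follows; note that Deployments are generally preferred for new workloads:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc           # hypothetical name
spec:
  replicas: 3
  selector:
    app: nginx             # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;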
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: ReplicationController&lt;/p&gt;</description></item><item><title>SubjectAccessReview</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/subject-access-review-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authorization/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SubjectAccessReview"&gt;SubjectAccessReview&lt;/h2&gt;
&lt;p&gt;SubjectAccessReview checks whether or not a user or group can perform an action.&lt;/p&gt;</description></item><item><title>ValidatingWebhookConfiguration</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ValidatingWebhookConfiguration"&gt;ValidatingWebhookConfiguration&lt;/h2&gt;
&lt;p&gt;ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it.&lt;/p&gt;</description></item><item><title>Building a Basic DaemonSet</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/create-daemon-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/create-daemon-set/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page demonstrates how to build a basic &lt;a class='glossary-tooltip' title='Ensures a copy of a Pod is running across a set of nodes in a cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;
that runs a Pod on every node in a Kubernetes cluster.
It covers a simple use case of mounting a file from the host, logging its contents using
an &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/init-containers/"&gt;init container&lt;/a&gt;, and utilizing a pause container.&lt;/p&gt;
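&lt;p&gt;As a rough sketch of the kind of manifest this page builds up (all names are illustrative, and &lt;code&gt;/etc/machine-id&lt;/code&gt; is assumed to exist on the node), the DaemonSet mounts a host file, logs it from an init container, and then runs a pause container:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset  # illustrative name
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      initContainers:
        # logs the host file's contents, then exits
        - name: log-machine-id
          image: busybox:1.36
          command: ["sh", "-c", "cat /etc/machine-id"]
          volumeMounts:
            - name: machine-id
              mountPath: /etc/machine-id
              readOnly: true
      containers:
        # minimal container that keeps the Pod running
        - name: pause
          image: registry.k8s.io/pause:3.9
      volumes:
        - name: machine-id
          hostPath:
            path: /etc/machine-id
            type: File
&lt;/code&gt;&lt;/pre&gt;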
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>ClusterRole</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ClusterRole"&gt;ClusterRole&lt;/h2&gt;
&lt;p&gt;ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.&lt;/p&gt;</description></item><item><title>ClusterTrustBundle v1beta1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/cluster-trust-bundle-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/cluster-trust-bundle-v1beta1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/certificates/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ClusterTrustBundle"&gt;ClusterTrustBundle&lt;/h2&gt;
&lt;p&gt;ClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates).&lt;/p&gt;</description></item><item><title>CSIStorageCapacity</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CSIStorageCapacity"&gt;CSIStorageCapacity&lt;/h2&gt;
&lt;p&gt;CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes.&lt;/p&gt;</description></item><item><title>Glossary</title><link>https://andygol-k8s.netlify.app/docs/reference/glossary/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/glossary/</guid><description/></item><item><title>Hello Minikube</title><link>https://andygol-k8s.netlify.app/docs/tutorials/hello-minikube/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/hello-minikube/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial shows you how to run a sample app on Kubernetes using minikube.
The tutorial provides a container image that uses NGINX to echo back all the requests.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Deploy a sample application to minikube.&lt;/li&gt;
&lt;li&gt;Run the app.&lt;/li&gt;
&lt;li&gt;View application logs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;This tutorial assumes that you have already set up &lt;code&gt;minikube&lt;/code&gt;.
See &lt;strong&gt;Step 1&lt;/strong&gt; in &lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;minikube start&lt;/a&gt; for installation instructions.&lt;/p&gt;
&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;Only execute the instructions in &lt;strong&gt;Step 1, Installation&lt;/strong&gt;. The rest is covered on this page.&lt;/div&gt;</description></item><item><title>IngressClass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/ingress-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/service-resources/ingress-class-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="IngressClass"&gt;IngressClass&lt;/h2&gt;
&lt;p&gt;IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The &lt;code&gt;ingressclass.kubernetes.io/is-default-class&lt;/code&gt; annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class.&lt;/p&gt;</description></item><item><title>Lease</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/lease-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/lease-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: coordination.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/coordination/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Lease"&gt;Lease&lt;/h2&gt;
&lt;p&gt;Lease defines a lease concept.&lt;/p&gt;
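&lt;p&gt;Leases back kubelet node heartbeats and component leader election. As a minimal sketch (the holder identity, name, and timings below are hypothetical examples, not values from this reference), a coordination.k8s.io/v1 Lease manifest has this shape:&lt;/p&gt;

```python
import json

# Sketch of a coordination.k8s.io/v1 Lease manifest as a plain dict.
# The metadata and spec values are illustrative placeholders.
lease = {
    "apiVersion": "coordination.k8s.io/v1",
    "kind": "Lease",
    "metadata": {"name": "example-controller", "namespace": "kube-system"},
    "spec": {
        "holderIdentity": "example-controller-7c5f8",  # identity of the current holder
        "leaseDurationSeconds": 15,  # how long the lease is valid after the last renewal
        "leaseTransitions": 0,       # how many times the lease has changed hands
    },
}
print(json.dumps(lease, indent=2))
```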
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: coordination.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Lease&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>NodeSelectorRequirement</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.&lt;/p&gt;</description></item><item><title>PodDisruptionBudget</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: policy/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/policy/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PodDisruptionBudget"&gt;PodDisruptionBudget&lt;/h2&gt;
&lt;p&gt;PodDisruptionBudget is an object to define the maximum disruption that can be caused to a collection of pods.&lt;/p&gt;</description></item><item><title>ReplicaSet</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/replica-set-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/replica-set-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ReplicaSet"&gt;ReplicaSet&lt;/h2&gt;
&lt;p&gt;ReplicaSet ensures that a specified number of pod replicas are running at any given time.&lt;/p&gt;</description></item><item><title>ClusterRoleBinding</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/cluster-role-binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/cluster-role-binding-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ClusterRoleBinding"&gt;ClusterRoleBinding&lt;/h2&gt;
&lt;p&gt;ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject.&lt;/p&gt;</description></item><item><title>Deployment</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/deployment-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/deployment-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Deployment"&gt;Deployment&lt;/h2&gt;
&lt;p&gt;Deployment enables declarative updates for Pods and ReplicaSets.&lt;/p&gt;
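&lt;p&gt;A sketch of the manifest shape, using hypothetical names and a plain Python dict rather than the Go types: the one structural rule worth noting is that spec.selector must match the labels on the pod template, or the API server rejects the object.&lt;/p&gt;

```python
import json

def make_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """Build an apps/v1 Deployment manifest as a plain dict (illustrative sketch).

    spec.selector.matchLabels must match spec.template.metadata.labels.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical example values.
deploy = make_deployment("web", "nginx:1.27", replicas=2)
# The selector/template label invariant holds by construction:
assert deploy["spec"]["selector"]["matchLabels"] == deploy["spec"]["template"]["metadata"]["labels"]
print(json.dumps(deploy, indent=2))
```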
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: apps/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Deployment&lt;/p&gt;</description></item><item><title>LeaseCandidate v1beta1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/lease-candidate-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/lease-candidate-v1beta1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: coordination.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/coordination/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="LeaseCandidate"&gt;LeaseCandidate&lt;/h2&gt;
&lt;p&gt;LeaseCandidate defines a candidate for a Lease object. Candidates are created such that coordinated leader election will pick the best leader from the list of candidates.&lt;/p&gt;</description></item><item><title>ObjectFieldSelector</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-field-selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-field-selector/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;ObjectFieldSelector selects an APIVersioned field of an object.&lt;/p&gt;
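&lt;p&gt;In practice an ObjectFieldSelector appears as the fieldRef of a downward-API environment variable or volume. A sketch, using a hypothetical variable name; metadata.name, metadata.namespace, and status.podIP are among the supported fieldPath values:&lt;/p&gt;

```python
# Sketch: an ObjectFieldSelector embedded as the fieldRef of a
# downward-API container environment variable (illustrative only).
env_var = {
    "name": "POD_NAME",  # hypothetical env var name
    "valueFrom": {
        "fieldRef": {
            "apiVersion": "v1",           # defaults to "v1" if omitted
            "fieldPath": "metadata.name"  # required: path of the field to select
        }
    },
}
```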
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fieldPath&lt;/strong&gt; (string), required&lt;/p&gt;
&lt;p&gt;Path of the field to select in the specified API version.&lt;/p&gt;</description></item><item><title>PersistentVolumeClaim</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PersistentVolumeClaim"&gt;PersistentVolumeClaim&lt;/h2&gt;
&lt;p&gt;PersistentVolumeClaim is a user's request for and claim to a persistent volume.&lt;/p&gt;</description></item><item><title>PriorityLevelConfiguration</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/priority-level-configuration-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/priority-level-configuration-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: flowcontrol.apiserver.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/flowcontrol/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PriorityLevelConfiguration"&gt;PriorityLevelConfiguration&lt;/h2&gt;
&lt;p&gt;PriorityLevelConfiguration represents the configuration of a priority level.&lt;/p&gt;
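&lt;p&gt;As a rough sketch of the manifest shape (the name, shares, and queuing numbers are hypothetical, not recommended values), a Limited priority level with queuing looks like:&lt;/p&gt;

```python
# Sketch of a flowcontrol.apiserver.k8s.io/v1 PriorityLevelConfiguration
# as a plain dict; all concrete values are illustrative placeholders.
plc = {
    "apiVersion": "flowcontrol.apiserver.k8s.io/v1",
    "kind": "PriorityLevelConfiguration",
    "metadata": {"name": "example-level"},
    "spec": {
        "type": "Limited",  # "Exempt" levels bypass flow control entirely
        "limited": {
            "nominalConcurrencyShares": 30,  # relative share of server concurrency
            "limitResponse": {
                "type": "Queue",  # queue excess requests instead of rejecting
                "queuing": {"queues": 64, "handSize": 6, "queueLengthLimit": 50},
            },
        },
    },
}
```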
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: flowcontrol.apiserver.k8s.io/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: PriorityLevelConfiguration&lt;/p&gt;</description></item><item><title>SelfSubjectReview</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/self-subject-review-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/self-subject-review-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: authentication.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/authentication/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="SelfSubjectReview"&gt;SelfSubjectReview&lt;/h2&gt;
&lt;p&gt;SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated. If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase.&lt;/p&gt;</description></item><item><title>Namespace</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/namespace-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/namespace-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Namespace"&gt;Namespace&lt;/h2&gt;
&lt;p&gt;Namespace provides a scope for Names. Use of multiple namespaces is optional.&lt;/p&gt;</description></item><item><title>ObjectMeta</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-meta/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-meta/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.&lt;/p&gt;</description></item><item><title>PersistentVolume</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PersistentVolume"&gt;PersistentVolume&lt;/h2&gt;
&lt;p&gt;PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes"&gt;https://kubernetes.io/docs/concepts/storage/persistent-volumes&lt;/a&gt;&lt;/p&gt;</description></item><item><title>PodCertificateRequest v1beta1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/pod-certificate-request-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/pod-certificate-request-v1beta1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/certificates/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PodCertificateRequest"&gt;PodCertificateRequest&lt;/h2&gt;
&lt;p&gt;PodCertificateRequest encodes a pod requesting a certificate from a given signer.&lt;/p&gt;</description></item><item><title>Role</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/role-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/role-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Role"&gt;Role&lt;/h2&gt;
&lt;p&gt;Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.&lt;/p&gt;</description></item><item><title>StatefulSet</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="StatefulSet"&gt;StatefulSet&lt;/h2&gt;
&lt;p&gt;StatefulSet represents a set of pods with consistent identities. Identities are defined as:&lt;/p&gt;</description></item><item><title>ValidatingAdmissionPolicy</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ValidatingAdmissionPolicy"&gt;ValidatingAdmissionPolicy&lt;/h2&gt;
&lt;p&gt;ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.&lt;/p&gt;</description></item><item><title>ControllerRevision</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/controller-revision-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/controller-revision-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ControllerRevision"&gt;ControllerRevision&lt;/h2&gt;
&lt;p&gt;ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers.&lt;/p&gt;</description></item><item><title>Node</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/node-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/node-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Node"&gt;Node&lt;/h2&gt;
&lt;p&gt;Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd).&lt;/p&gt;</description></item><item><title>ObjectReference</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-reference/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;ObjectReference contains enough information to let you inspect or modify the referred object.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt; (string)&lt;/p&gt;</description></item><item><title>RoleBinding</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/role-binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authorization-resources/role-binding-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/rbac/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="RoleBinding"&gt;RoleBinding&lt;/h2&gt;
&lt;p&gt;RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace.&lt;/p&gt;</description></item><item><title>StorageClass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/storage-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/storage-class-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="StorageClass"&gt;StorageClass&lt;/h2&gt;
&lt;p&gt;StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.&lt;/p&gt;</description></item><item><title>ValidatingAdmissionPolicyBinding</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ValidatingAdmissionPolicyBinding"&gt;ValidatingAdmissionPolicyBinding&lt;/h2&gt;
&lt;p&gt;ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.&lt;/p&gt;</description></item><item><title>DaemonSet</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/apps/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="DaemonSet"&gt;DaemonSet&lt;/h2&gt;
&lt;p&gt;DaemonSet represents the configuration of a daemon set.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: apps/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: DaemonSet&lt;/p&gt;</description></item><item><title>MutatingAdmissionPolicy v1beta1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="MutatingAdmissionPolicy"&gt;MutatingAdmissionPolicy&lt;/h2&gt;
&lt;p&gt;MutatingAdmissionPolicy describes the definition of an admission mutation policy that mutates objects coming into the admission chain.&lt;/p&gt;</description></item><item><title>Patch</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/patch/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/patch/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.&lt;/p&gt;</description></item><item><title>RuntimeClass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/runtime-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/runtime-class-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: node.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/node/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="RuntimeClass"&gt;RuntimeClass&lt;/h2&gt;
&lt;p&gt;RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see &lt;a href="https://kubernetes.io/docs/concepts/containers/runtime-class/"&gt;https://kubernetes.io/docs/concepts/containers/runtime-class/&lt;/a&gt;&lt;/p&gt;</description></item><item><title>StorageVersionMigration v1beta1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/storage-version-migration-v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/storage-version-migration-v1beta1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storagemigration.k8s.io/v1beta1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storagemigration/v1beta1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="StorageVersionMigration"&gt;StorageVersionMigration&lt;/h2&gt;
&lt;p&gt;StorageVersionMigration represents a migration of stored data to the latest storage version.&lt;/p&gt;</description></item><item><title>Adding Linux worker nodes</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to add Linux worker nodes to a kubeadm cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Each joining worker node has the required components from
&lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/"&gt;Installing kubeadm&lt;/a&gt; installed, such as
kubeadm, the kubelet, and a &lt;a class='glossary-tooltip' title='The container runtime is the software that is responsible for running containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A running kubeadm cluster created with &lt;code&gt;kubeadm init&lt;/code&gt;, following the steps
in &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;Creating a cluster with kubeadm&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You need superuser access to the node.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="adding-linux-worker-nodes"&gt;Adding Linux worker nodes&lt;/h2&gt;
&lt;p&gt;To add new Linux worker nodes to your cluster do the following for each machine:&lt;/p&gt;</description></item><item><title>Apply Pod Security Standards at the Cluster Level</title><link>https://andygol-k8s.netlify.app/docs/tutorials/security/cluster-level-pss/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/security/cluster-level-pss/</guid><description>&lt;div class="alert alert-primary" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;Note&lt;/div&gt;
&lt;p&gt;This tutorial applies only for new clusters.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Pod Security is an admission controller that carries out checks against the Kubernetes
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt; when new pods are
created. The feature reached general availability (GA) in v1.25.
This tutorial shows you how to enforce the &lt;code&gt;baseline&lt;/code&gt; Pod Security
Standard at the cluster level, which applies a standard configuration
to all namespaces in a cluster.&lt;/p&gt;
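&lt;p&gt;As a minimal sketch, cluster-wide enforcement is typically configured by passing the kube-apiserver an &lt;code&gt;AdmissionConfiguration&lt;/code&gt; file along these lines (the field values here are illustrative, not a definitive setup):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    # Apply the baseline standard in every namespace
    # unless a namespace sets its own labels.
    defaults:
      enforce: &amp;quot;baseline&amp;quot;
      enforce-version: &amp;quot;latest&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The file is referenced via the kube-apiserver &lt;code&gt;--admission-control-config-file&lt;/code&gt; flag.&lt;/p&gt;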
&lt;p&gt;To apply Pod Security Standards to specific namespaces, refer to
&lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/security/ns-level-pss/"&gt;Apply Pod Security Standards at the namespace level&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Assign Memory Resources to Containers and Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-memory-resource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-memory-resource/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to assign a memory &lt;em&gt;request&lt;/em&gt; and a memory &lt;em&gt;limit&lt;/em&gt; to a
Container. A Container is guaranteed to have as much memory as it requests,
but is not allowed to use more memory than its limit.&lt;/p&gt;
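&lt;p&gt;For example, a container declares these values under its &lt;code&gt;resources&lt;/code&gt; field; the names and amounts below are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: &amp;quot;64Mi&amp;quot;   # guaranteed minimum for scheduling
      limits:
        memory: &amp;quot;128Mi&amp;quot;  # hard ceiling; exceeding it can get the container OOM-killed
&lt;/code&gt;&lt;/pre&gt;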
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Audit Annotations</title><link>https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/audit-annotations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/audit-annotations/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page serves as a reference for the audit annotations of the kubernetes.io
namespace. These annotations apply to the &lt;code&gt;Event&lt;/code&gt; object from the API group
&lt;code&gt;audit.k8s.io&lt;/code&gt;.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;The following annotations are not used within the Kubernetes API. When you
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/audit/"&gt;enable auditing&lt;/a&gt; in your cluster,
audit event data is written using &lt;code&gt;Event&lt;/code&gt; from API group &lt;code&gt;audit.k8s.io&lt;/code&gt;.
The annotations apply to audit events. Audit events are different from objects in the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/event-v1/"&gt;Event API&lt;/a&gt; (API group
&lt;code&gt;events.k8s.io&lt;/code&gt;).&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="k8s-io-deprecated"&gt;k8s.io/deprecated&lt;/h2&gt;
&lt;p&gt;Example: &lt;code&gt;k8s.io/deprecated: &amp;quot;true&amp;quot;&lt;/code&gt;&lt;/p&gt;</description></item><item><title>Authenticating</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/authentication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/authentication/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of authentication in Kubernetes, with a focus on
authentication to the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/"&gt;Kubernetes API&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="users-in-kubernetes"&gt;Users in Kubernetes&lt;/h2&gt;
&lt;p&gt;All Kubernetes clusters have two categories of users: service accounts managed
by Kubernetes, and normal users.&lt;/p&gt;
&lt;p&gt;It is assumed that a cluster-independent service manages normal users in the following ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;an administrator distributing private keys&lt;/li&gt;
&lt;li&gt;a user store like Keystone or Google Accounts&lt;/li&gt;
&lt;li&gt;a file with a list of usernames and passwords&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this regard, &lt;em&gt;Kubernetes does not have objects which represent normal user accounts.&lt;/em&gt;
Normal users cannot be added to a cluster through an API call.&lt;/p&gt;</description></item><item><title>Available Documentation Versions</title><link>https://andygol-k8s.netlify.app/docs/home/supported-doc-versions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/home/supported-doc-versions/</guid><description>&lt;p&gt;This website contains documentation for the current version of Kubernetes
and the four previous versions of Kubernetes.&lt;/p&gt;
&lt;p&gt;The availability of documentation for a Kubernetes version is separate from whether
that release is currently supported.
Read &lt;a href="https://andygol-k8s.netlify.app/releases/patch-releases/#support-period"&gt;Support period&lt;/a&gt; to learn about
which versions of Kubernetes are officially supported, and for how long.&lt;/p&gt;</description></item><item><title>Changing the Container Runtime on a Node from Docker Engine to containerd</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/</guid><description>&lt;p&gt;This task outlines the steps needed to update your container runtime to containerd from Docker. It
applies to cluster operators running Kubernetes 1.23 or earlier. It also covers an
example scenario for migrating from dockershim to containerd. Alternative container runtimes
are listed on this &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;page&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;Install containerd. For more information see
&lt;a href="https://containerd.io/docs/getting-started/"&gt;containerd's installation documentation&lt;/a&gt;
and, for specific prerequisites, follow
&lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/#containerd"&gt;the containerd guide&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Cloud Native Security and Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/cloud-native-security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/cloud-native-security/</guid><description>&lt;p&gt;Kubernetes is based on a cloud native architecture and draws on advice from the
&lt;a class='glossary-tooltip' title='Cloud Native Computing Foundation' data-toggle='tooltip' data-placement='top' href='https://cncf.io/' target='_blank' aria-label='CNCF'&gt;CNCF&lt;/a&gt; about good practices for
cloud native information security.&lt;/p&gt;
&lt;p&gt;Read on for an overview of how Kubernetes is designed to help you deploy a
secure cloud native platform.&lt;/p&gt;
&lt;h2 id="cloud-native-information-security"&gt;Cloud native information security&lt;/h2&gt;

&lt;p&gt;The CNCF &lt;a href="https://github.com/cncf/tag-security/blob/main/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf"&gt;white paper&lt;/a&gt;
on cloud native security defines security controls and practices that are
appropriate to different &lt;em&gt;lifecycle phases&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Configure Default Memory Requests and Limits for a Namespace</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure default memory requests and limits for a
&lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A Kubernetes cluster can be divided into namespaces. If a namespace
has a default memory
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/#requests-and-limits"&gt;limit&lt;/a&gt;
and you create a Pod with a container that does not specify its own memory
limit, the
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; assigns the default
memory limit to that container.&lt;/p&gt;</description></item><item><title>Configure the Aggregation Layer</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/configure-aggregation-layer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/configure-aggregation-layer/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Configuring the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/"&gt;aggregation layer&lt;/a&gt;
allows the Kubernetes apiserver to be extended with additional APIs, which are not
part of the core Kubernetes APIs.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Considerations for large clusters</title><link>https://andygol-k8s.netlify.app/docs/setup/best-practices/cluster-large/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/best-practices/cluster-large/</guid><description>&lt;p&gt;A cluster is a set of &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='nodes'&gt;nodes&lt;/a&gt; (physical
or virtual machines) running Kubernetes agents, managed by the
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;.
Kubernetes v1.35 supports clusters with up to 5,000 nodes. More specifically,
Kubernetes is designed to accommodate configurations that meet &lt;em&gt;all&lt;/em&gt; of the following criteria:&lt;/p&gt;</description></item><item><title>Documentation Content Guide</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page contains guidelines for Kubernetes documentation.&lt;/p&gt;
&lt;p&gt;If you have questions about what's allowed, join the #sig-docs channel in
&lt;a href="https://slack.k8s.io/"&gt;Kubernetes Slack&lt;/a&gt; and ask!&lt;/p&gt;
&lt;p&gt;You can register for Kubernetes Slack at &lt;a href="https://slack.k8s.io/"&gt;https://slack.k8s.io/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For information on creating new content for the Kubernetes
docs, follow the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/"&gt;style guide&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Source for the Kubernetes website, including the docs, resides in the
&lt;a href="https://github.com/kubernetes/website"&gt;kubernetes/website&lt;/a&gt; repository.&lt;/p&gt;
&lt;p&gt;Located in the &lt;code&gt;kubernetes/website/content/&amp;lt;language_code&amp;gt;/docs&lt;/code&gt; folder, the
majority of Kubernetes documentation is specific to the &lt;a href="https://github.com/kubernetes/kubernetes"&gt;Kubernetes
project&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Custom Resources</title><link>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;em&gt;Custom resources&lt;/em&gt; are extensions of the Kubernetes API. This page discusses when to add a custom
resource to your Kubernetes cluster and when to use a standalone service. It describes the two
methods for adding custom resources and how to choose between them.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="custom-resources"&gt;Custom resources&lt;/h2&gt;
&lt;p&gt;A &lt;em&gt;resource&lt;/em&gt; is an endpoint in the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/"&gt;Kubernetes API&lt;/a&gt; that
stores a collection of &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='API objects'&gt;API objects&lt;/a&gt;
of a certain kind; for example, the built-in &lt;em&gt;pods&lt;/em&gt; resource contains a collection of Pod objects.&lt;/p&gt;</description></item><item><title>Debug Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-pods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-pods/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This guide is to help users debug applications that are deployed into Kubernetes
and not behaving correctly. This is &lt;em&gt;not&lt;/em&gt; a guide for people who want to debug their cluster.
For that you should check out &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/"&gt;this guide&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="diagnosing-the-problem"&gt;Diagnosing the problem&lt;/h2&gt;
&lt;p&gt;The first step in troubleshooting is triage. What is the problem?
Is it your Pods, your Replication Controller or your Service?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#debugging-pods"&gt;Debugging Pods&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#debugging-replication-controllers"&gt;Debugging Replication Controllers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#debugging-services"&gt;Debugging Services&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="debugging-pods"&gt;Debugging Pods&lt;/h3&gt;
&lt;p&gt;The first step in debugging a Pod is taking a look at it. Check the current
state of the Pod and recent events with the following command:&lt;/p&gt;</description></item><item><title>Declarative Management of Kubernetes Objects Using Configuration Files</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/declarative-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/declarative-config/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes objects can be created, updated, and deleted by storing multiple
object configuration files in a directory and using &lt;code&gt;kubectl apply&lt;/code&gt; to
recursively create and update those objects as needed. This method
retains writes made to live objects without merging the changes
back into the object configuration files. &lt;code&gt;kubectl diff&lt;/code&gt; also gives you a
preview of what changes &lt;code&gt;apply&lt;/code&gt; will make.&lt;/p&gt;
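The workflow above can be sketched with two commands (the configs/ directory path is illustrative):

```shell
# Preview what apply would change, then apply the whole directory.
# -R recurses into subdirectories; configs/ is an illustrative path.
kubectl diff -f configs/ -R
kubectl apply -f configs/ -R
```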
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Install &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Define a Command and Arguments for a Container</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-command-argument-container/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-command-argument-container/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to define commands and arguments when you run a container
in a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.&lt;/p&gt;
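As a minimal sketch of what this page covers (the image and names are illustrative): in a Pod spec, the container's command field overrides the image's default entrypoint, and args supplies its arguments.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo        # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: command-demo-container
    image: debian           # illustrative image
    command: ["printenv"]   # overrides the image entrypoint
    args: ["HOSTNAME", "KUBERNETES_PORT"]
```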
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Deploy and Access the Kubernetes Dashboard</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/web-ui-dashboard/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/web-ui-dashboard/</guid><description>&lt;div class="pageinfo pageinfo-primary"&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes Dashboard is deprecated and unmaintained.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes Dashboard project has been archived and is no longer actively maintained.
For new installations, consider using &lt;a href="https://headlamp.dev/"&gt;Headlamp&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;For in-cluster deployments similar to Kubernetes Dashboard, see the
&lt;a href="https://headlamp.dev/docs/latest/installation/in-cluster/"&gt;Headlamp in-cluster installation guide&lt;/a&gt;.&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;p&gt;Dashboard is a web-based Kubernetes user interface.
You can use Dashboard to deploy containerized applications to a Kubernetes cluster,
troubleshoot your containerized application, and manage the cluster resources.
You can use Dashboard to get an overview of applications running on your cluster,
as well as for creating or modifying individual Kubernetes resources
(such as Deployments, Jobs, and DaemonSets).
For example, you can scale a Deployment, initiate a rolling update, restart a Pod,
or deploy new applications using a deploy wizard.&lt;/p&gt;</description></item><item><title>Deployments</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A &lt;em&gt;Deployment&lt;/em&gt; provides declarative updates for &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and
&lt;a class='glossary-tooltip' title='ReplicaSet ensures that a specified number of Pod replicas are running at one time' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/' target='_blank' aria-label='ReplicaSets'&gt;ReplicaSets&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You describe a &lt;em&gt;desired state&lt;/em&gt; in a Deployment, and the Deployment &lt;a class='glossary-tooltip' title='A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/' target='_blank' aria-label='Controller'&gt;Controller&lt;/a&gt; changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.&lt;/p&gt;</description></item><item><title>Exposing an External IP Address to Access an Application in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tutorials/stateless-application/expose-external-ip-address/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/stateless-application/expose-external-ip-address/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to create a Kubernetes Service object that exposes an
external IP address.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/tools/"&gt;kubectl&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/create-external-load-balancer/"&gt;external load balancer&lt;/a&gt;,
which requires a cloud provider.&lt;/li&gt;
&lt;li&gt;Configure &lt;code&gt;kubectl&lt;/code&gt; to communicate with your Kubernetes API server. For instructions, see the
documentation for your cloud provider.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Run five instances of a Hello World application.&lt;/li&gt;
&lt;li&gt;Create a Service object that exposes an external IP address.&lt;/li&gt;
&lt;li&gt;Use the Service object to access the running application.&lt;/li&gt;
&lt;/ul&gt;
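The objectives above can be sketched as follows (the deployment, image, and Service names are illustrative, and a cloud provider must provision the load balancer):

```shell
# Run five replicas of a sample app, expose it via a cloud load
# balancer, and watch for the external IP (names are illustrative).
kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0 --replicas=5
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service   # EXTERNAL-IP is filled in once provisioned
```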
&lt;!-- lessoncontent --&gt;
&lt;h2 id="creating-a-service-for-an-application-running-in-five-pods"&gt;Creating a service for an application running in five pods&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run a Hello World application in your cluster:&lt;/p&gt;</description></item><item><title>Feature Gates</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page contains an overview of the various feature gates an administrator
can specify on different Kubernetes components.&lt;/p&gt;
&lt;p&gt;See &lt;a href="#feature-stages"&gt;feature stages&lt;/a&gt; for an explanation of the stages for a feature.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Feature gates are a set of key=value pairs that describe Kubernetes features.
You can turn these features on or off using the &lt;code&gt;--feature-gates&lt;/code&gt; command line flag
on each Kubernetes component.&lt;/p&gt;
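For example, a component might be started with one gate enabled and another disabled; the flag takes a comma-separated list of key=value pairs (the second gate name here is illustrative):

```shell
# Enable ContainerCheckpoint and disable an illustrative gate on the
# API server; the same flag format applies to other components.
kube-apiserver --feature-gates=ContainerCheckpoint=true,SomeBetaFeature=false
```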
&lt;h2 id="how-to-enable-feature-gates"&gt;How to enable Feature Gates&lt;/h2&gt;
&lt;p&gt;To enable or disable a feature gate for a particular Kubernetes component, use the
&lt;code&gt;--feature-gates&lt;/code&gt; flag.&lt;/p&gt;</description></item><item><title>Images</title><link>https://andygol-k8s.netlify.app/docs/concepts/containers/images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/containers/images/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A container image represents binary data that encapsulates an application and all its
software dependencies. Container images are executable software bundles that can run
standalone and that make very well-defined assumptions about their runtime environment.&lt;/p&gt;
&lt;p&gt;You typically create a container image of your application and push it to a registry
before referring to it in a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Install and Set Up kubectl on Linux</title><link>https://andygol-k8s.netlify.app/docs/tasks/tools/install-kubectl-linux/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tools/install-kubectl-linux/</guid><description>&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.35 client can communicate
with v1.34, v1.35,
and v1.36 control planes.
Using the latest compatible version of kubectl helps avoid unforeseen issues.&lt;/p&gt;
&lt;h2 id="install-kubectl-on-linux"&gt;Install kubectl on Linux&lt;/h2&gt;
&lt;p&gt;The following methods exist for installing kubectl on Linux:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-kubectl-binary-with-curl-on-linux"&gt;Install kubectl binary with curl on Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-using-native-package-management"&gt;Install using native package management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-using-other-package-management"&gt;Install using other package management&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="install-kubectl-binary-with-curl-on-linux"&gt;Install kubectl binary with curl on Linux&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download the latest release with the command:&lt;/p&gt;</description></item><item><title>Install and Set Up kubectl on macOS</title><link>https://andygol-k8s.netlify.app/docs/tasks/tools/install-kubectl-macos/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tools/install-kubectl-macos/</guid><description>&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.35 client can communicate
with v1.34, v1.35,
and v1.36 control planes.
Using the latest compatible version of kubectl helps avoid unforeseen issues.&lt;/p&gt;
&lt;h2 id="install-kubectl-on-macos"&gt;Install kubectl on macOS&lt;/h2&gt;
&lt;p&gt;The following methods exist for installing kubectl on macOS:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-kubectl-on-macos"&gt;Install kubectl on macOS&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-kubectl-binary-with-curl-on-macos"&gt;Install kubectl binary with curl on macOS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-with-homebrew-on-macos"&gt;Install with Homebrew on macOS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-with-macports-on-macos"&gt;Install with Macports on macOS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#verify-kubectl-configuration"&gt;Verify kubectl configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#optional-kubectl-configurations-and-plugins"&gt;Optional kubectl configurations and plugins&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#enable-shell-autocompletion"&gt;Enable shell autocompletion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-kubectl-convert-plugin"&gt;Install &lt;code&gt;kubectl convert&lt;/code&gt; plugin&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="install-kubectl-binary-with-curl-on-macos"&gt;Install kubectl binary with curl on macOS&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download the latest release:&lt;/p&gt;</description></item><item><title>Install and Set Up kubectl on Windows</title><link>https://andygol-k8s.netlify.app/docs/tasks/tools/install-kubectl-windows/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tools/install-kubectl-windows/</guid><description>&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.35 client can communicate
with v1.34, v1.35,
and v1.36 control planes.
Using the latest compatible version of kubectl helps avoid unforeseen issues.&lt;/p&gt;
&lt;h2 id="install-kubectl-on-windows"&gt;Install kubectl on Windows&lt;/h2&gt;
&lt;p&gt;The following methods exist for installing kubectl on Windows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-kubectl-binary-on-windows-via-direct-download-or-curl"&gt;Install kubectl binary on Windows (via direct download or curl)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-nonstandard-package-tools"&gt;Install on Windows using Chocolatey, Scoop, or winget&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="install-kubectl-binary-on-windows-via-direct-download-or-curl"&gt;Install kubectl binary on Windows (via direct download or curl)&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You have two options for installing kubectl on your Windows device&lt;/p&gt;</description></item><item><title>Installing kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/kubeadm-stacked-color.png" align="right" width="150px" /&gt;
This page shows how to install the &lt;code&gt;kubeadm&lt;/code&gt; toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process,
see the &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;Creating a cluster with kubeadm&lt;/a&gt; page.&lt;/p&gt;

&lt;div class="version-list"&gt;
 &lt;p&gt;
 This installation guide is for Kubernetes v1.35. If you want to use a different Kubernetes version, please refer to the following pages instead:
 &lt;/p&gt;
 &lt;ul&gt;
 &lt;li&gt;
 &lt;a href="https://v1-34.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/"&gt;Installing kubeadm (Kubernetes v1.34)&lt;/a&gt;
 &lt;/li&gt;
 &lt;li&gt;
 &lt;a href="https://v1-33.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/"&gt;Installing kubeadm (Kubernetes v1.33)&lt;/a&gt;
 &lt;/li&gt;
 &lt;li&gt;
 &lt;a href="https://v1-32.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/"&gt;Installing kubeadm (Kubernetes v1.32)&lt;/a&gt;
 &lt;/li&gt;
 &lt;li&gt;
 &lt;a href="https://v1-31.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/"&gt;Installing kubeadm (Kubernetes v1.31)&lt;/a&gt;
 &lt;/li&gt;
 &lt;/ul&gt;
&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions
based on Debian and Red Hat, and those distributions without a package manager.&lt;/li&gt;
&lt;li&gt;2 GB or more of RAM per machine (any less will leave little room for your apps).&lt;/li&gt;
&lt;li&gt;2 CPUs or more for control plane machines.&lt;/li&gt;
&lt;li&gt;Full network connectivity between all machines in the cluster (public or private network is fine).&lt;/li&gt;
&lt;li&gt;Unique hostname, MAC address, and product_uuid for every node. See &lt;a href="#verify-mac-address"&gt;here&lt;/a&gt; for more details.&lt;/li&gt;
&lt;li&gt;Certain ports are open on your machines. See &lt;a href="#check-required-ports"&gt;here&lt;/a&gt; for more details.&lt;/li&gt;
&lt;/ul&gt;
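The hostname/MAC/product_uuid uniqueness check above can be performed with standard tools; a sketch:

```shell
# List network interfaces to compare MAC addresses across nodes,
# and print this machine's product_uuid.
ip link
sudo cat /sys/class/dmi/id/product_uuid
```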

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;The &lt;code&gt;kubeadm&lt;/code&gt; installation is done via binaries that use dynamic linking and assumes that your target system provides &lt;code&gt;glibc&lt;/code&gt;.
This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.)
but it is not always the case with custom and lightweight distributions which don't include &lt;code&gt;glibc&lt;/code&gt; by default, such as Alpine Linux.
The expectation is that the distribution either includes &lt;code&gt;glibc&lt;/code&gt; or a
&lt;a href="https://wiki.alpinelinux.org/wiki/Running_glibc_programs"&gt;compatibility layer&lt;/a&gt;
that provides the expected symbols.&lt;/div&gt;

&lt;!-- steps --&gt;
&lt;h2 id="check-your-os-version"&gt;Check your OS version&lt;/h2&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;ul class="nav nav-tabs" id="operating-system-version-check" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#operating-system-version-check-0" role="tab" aria-controls="operating-system-version-check-0" aria-selected="true"&gt;Linux&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#operating-system-version-check-1" role="tab" aria-controls="operating-system-version-check-1"&gt;Windows&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;div class="tab-content" id="operating-system-version-check"&gt;&lt;div id="operating-system-version-check-0" class="tab-pane show active" role="tabpanel" aria-labelledby="operating-system-version-check-0"&gt;

&lt;ul&gt;
&lt;li&gt;The kubeadm project supports LTS kernels. See &lt;a href="https://www.kernel.org/category/releases.html"&gt;List of LTS kernels&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You can get the kernel version using the command &lt;code&gt;uname -r&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
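A quick way to check the running kernel against a minimum release is a version-aware comparison with sort -V; a sketch (the 5.4 threshold is illustrative):

```shell
# Compare the running kernel release against a minimum version.
required="5.4"                       # illustrative minimum
current="$(uname -r | cut -d- -f1)"  # strip the distro suffix, e.g. -generic
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "kernel $current meets the $required minimum"
else
  echo "kernel $current is older than $required"
fi
```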
&lt;p&gt;For more information, see &lt;a href="https://andygol-k8s.netlify.app/docs/reference/node/kernel-version-requirements/"&gt;Linux Kernel Requirements&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Job</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/job-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/job-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: batch/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/batch/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Job"&gt;Job&lt;/h2&gt;
&lt;p&gt;Job represents the configuration of a single job.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: batch/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Job&lt;/p&gt;</description></item><item><title>kubectl Quick Reference</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/quick-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/quick-reference/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page contains a list of commonly used &lt;code&gt;kubectl&lt;/code&gt; commands and flags.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;These instructions are for Kubernetes v1.35. To check the version, use the &lt;code&gt;kubectl version&lt;/code&gt; command.&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="kubectl-autocomplete"&gt;Kubectl autocomplete&lt;/h2&gt;
&lt;h3 id="bash"&gt;BASH&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f"&gt;source&lt;/span&gt; &amp;lt;&lt;span style="color:#666"&gt;(&lt;/span&gt;kubectl completion bash&lt;span style="color:#666"&gt;)&lt;/span&gt; &lt;span style="color:#080;font-style:italic"&gt;# set up autocomplete in bash into the current shell, bash-completion package should be installed first.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f"&gt;echo&lt;/span&gt; &lt;span style="color:#b44"&gt;&amp;#34;source &amp;lt;(kubectl completion bash)&amp;#34;&lt;/span&gt; &amp;gt;&amp;gt; ~/.bashrc &lt;span style="color:#080;font-style:italic"&gt;# add autocomplete permanently to your bash shell.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You can also use a shorthand alias for &lt;code&gt;kubectl&lt;/code&gt; that also works with completion:&lt;/p&gt;</description></item><item><title>Kubelet Checkpoint API</title><link>https://andygol-k8s.netlify.app/docs/reference/node/kubelet-checkpoint-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/kubelet-checkpoint-api/</guid><description>&lt;div class="feature-state-notice feature-beta" title="Feature Gate: ContainerCheckpoint"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.30 [beta]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;Checkpointing a container means creating a stateful copy of a
running container. Once you have such a copy, you could
move it to a different computer for debugging or similar purposes.&lt;/p&gt;
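The checkpoint is requested through the kubelet's API on the node; a sketch (the namespace, pod, container, and certificate paths are illustrative):

```shell
# POST to the kubelet checkpoint endpoint for one container.
# The kubelet serves on port 10250 and requires client authentication.
curl -X POST "https://localhost:10250/checkpoint/default/my-pod/my-container" \
  --cacert /etc/kubernetes/pki/ca.crt \
  --cert admin.crt --key admin.key
```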
&lt;p&gt;If you move the checkpointed container data to a computer that's able to restore
it, the restored container continues to run from exactly the
point at which it was checkpointed. You can also inspect the saved data, provided that you
have suitable tools for doing so.&lt;/p&gt;</description></item><item><title>Kubernetes Components</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/components/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/components/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides a high-level overview of the essential components that make up a Kubernetes cluster.&lt;/p&gt;


&lt;figure class="diagram-large clickable-zoom"&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/docs/components-of-kubernetes.svg"
 alt="Components of Kubernetes"/&gt; &lt;figcaption&gt;
 &lt;p&gt;The components of a Kubernetes cluster&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!-- body --&gt;
&lt;h2 id="core-components"&gt;Core Components&lt;/h2&gt;
&lt;p&gt;A Kubernetes cluster consists of a control plane and one or more worker nodes.
Here's a brief overview of the main components:&lt;/p&gt;
&lt;h3 id="control-plane-components"&gt;Control Plane Components&lt;/h3&gt;
&lt;p&gt;Manage the overall state of the cluster:&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver"&gt;kube-apiserver&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;The core component server that exposes the Kubernetes HTTP API.&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/#etcd"&gt;etcd&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;Consistent and highly-available key value store for all API server data.&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-scheduler"&gt;kube-scheduler&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;Looks for Pods not yet bound to a node, and assigns each Pod to a suitable node.&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-controller-manager"&gt;kube-controller-manager&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;Runs &lt;a class='glossary-tooltip' title='A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/' target='_blank' aria-label='controllers'&gt;controllers&lt;/a&gt; to implement Kubernetes API behavior.&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/#cloud-controller-manager"&gt;cloud-controller-manager&lt;/a&gt; (optional)&lt;/dt&gt;
&lt;dd&gt;Integrates with underlying cloud provider(s).&lt;/dd&gt;
&lt;/dl&gt;
&lt;h3 id="node-components"&gt;Node Components&lt;/h3&gt;
&lt;p&gt;Run on every node, maintaining running pods and providing the Kubernetes runtime environment:&lt;/p&gt;</description></item><item><title>Kubernetes Issue Tracker</title><link>https://andygol-k8s.netlify.app/docs/reference/issues-security/issues/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/issues-security/issues/</guid><description>&lt;p&gt;To report a security issue, please follow the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/security/#report-a-vulnerability"&gt;Kubernetes security disclosure process&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Work on Kubernetes code and public issues are tracked using &lt;a href="https://github.com/kubernetes/kubernetes/issues/"&gt;GitHub Issues&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Official &lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/"&gt;list of known CVEs&lt;/a&gt;
(security vulnerabilities) that have been announced by the
&lt;a href="https://github.com/kubernetes/committee-security-response"&gt;Security Response Committee&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&amp;q=is%3Aissue+label%3Aarea%2Fsecurity+in%3Atitle+CVE"&gt;CVE-related GitHub issues&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Security-related announcements are sent to the &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-security-announce"&gt;kubernetes-security-announce@googlegroups.com&lt;/a&gt; mailing list.&lt;/p&gt;</description></item><item><title>Kubernetes Scheduler</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/kube-scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/kube-scheduler/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, &lt;em&gt;scheduling&lt;/em&gt; refers to making sure that &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;
are matched to &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt; so that
&lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt; can run them.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="scheduling"&gt;Scheduling overview&lt;/h2&gt;
&lt;p&gt;A scheduler watches for newly created Pods that have no Node assigned. For
every Pod that the scheduler discovers, the scheduler becomes responsible
for finding the best Node for that Pod to run on. The scheduler reaches
this placement decision taking into account the scheduling principles
described below.&lt;/p&gt;</description></item><item><title>Limit Ranges</title><link>https://andygol-k8s.netlify.app/docs/concepts/policy/limit-range/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/policy/limit-range/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;By default, containers run with unbounded
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/"&gt;compute resources&lt;/a&gt; on a Kubernetes cluster.
Using Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/policy/resource-quotas/"&gt;resource quotas&lt;/a&gt;,
administrators (also termed &lt;em&gt;cluster operators&lt;/em&gt;) can restrict consumption and creation
of cluster resources (such as CPU time, memory, and persistent storage) within a specified
&lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.
Within a namespace, a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; can consume as much CPU and memory as is allowed by the ResourceQuotas that apply to that namespace.
As a cluster operator, or as a namespace-level administrator, you might also be concerned
about making sure that a single object cannot monopolize all available resources within a namespace.&lt;/p&gt;</description></item><item><title>Linux Kernel Version Requirements</title><link>https://andygol-k8s.netlify.app/docs/reference/node/kernel-version-requirements/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/kernel-version-requirements/</guid><description>&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;Many features rely on specific kernel functionalities and have minimum kernel version requirements.
However, relying solely on kernel version numbers may not be sufficient
for certain operating system distributions,
as maintainers for distributions such as RHEL, Ubuntu and SUSE often backport selected features
to older kernel releases (retaining the older kernel version).&lt;/p&gt;</description></item><item><title>Managing Secrets using kubectl</title><link>https://andygol-k8s.netlify.app/docs/tasks/configmap-secret/managing-secret-using-kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configmap-secret/managing-secret-using-kubectl/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows you how to create, edit, manage, and delete Kubernetes
&lt;a class='glossary-tooltip' title='Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt; using the &lt;code&gt;kubectl&lt;/code&gt;
command-line tool.&lt;/p&gt;
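&lt;p&gt;For example, you can create a Secret directly from literal values; the Secret name and the key-value pairs below are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl create secret generic db-user-pass \
  --from-literal=username=admin \
  --from-literal=password='s3cret'

# Verify that the Secret was created
kubectl get secret db-user-pass&lt;/code&gt;&lt;/pre&gt;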
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>MutatingAdmissionPolicyBinding v1alpha1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1alpha1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/admissionregistration/v1alpha1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="MutatingAdmissionPolicyBinding"&gt;MutatingAdmissionPolicyBinding&lt;/h2&gt;
&lt;p&gt;MutatingAdmissionPolicyBinding binds the MutatingAdmissionPolicy with parametrized resources. MutatingAdmissionPolicyBinding and the optional parameter resource together define how cluster administrators configure policies for clusters.&lt;/p&gt;</description></item><item><title>Network Plugins</title><link>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes (version 1.3 through to the latest 1.35, and likely onwards) lets you use
&lt;a href="https://github.com/containernetworking/cni"&gt;Container Network Interface&lt;/a&gt;
(CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your
cluster and that suits your needs. Different plugins are available (both open- and closed-source)
in the wider Kubernetes ecosystem.&lt;/p&gt;
&lt;p&gt;A CNI plugin is required to implement the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/#the-kubernetes-network-model"&gt;Kubernetes network model&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You must use a CNI plugin that is compatible with the
&lt;a href="https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md"&gt;v0.4.0&lt;/a&gt; or later
releases of the CNI specification. The Kubernetes project recommends using a plugin that is
compatible with the &lt;a href="https://github.com/containernetworking/cni/blob/spec-v1.0.0/SPEC.md"&gt;v1.0.0&lt;/a&gt;
CNI specification (plugins can be compatible with multiple spec versions).&lt;/p&gt;</description></item><item><title>Node Shutdowns</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/node-shutdown/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/node-shutdown/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In a Kubernetes cluster, a &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;
can be shut down in a planned, graceful way, or unexpectedly because of reasons such
as a power outage or another external cause. A node shutdown could lead to workload
failure if the node is not drained before the shutdown. A node shutdown can be
either &lt;strong&gt;graceful&lt;/strong&gt; or &lt;strong&gt;non-graceful&lt;/strong&gt;.&lt;/p&gt;
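&lt;p&gt;Graceful shutdown handling is configured on the kubelet. A minimal sketch of the relevant &lt;code&gt;KubeletConfiguration&lt;/code&gt; fields; the durations shown are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node reserves for pod termination during shutdown
shutdownGracePeriod: 30s
# Portion of that time reserved for critical pods
shutdownGracePeriodCriticalPods: 10s&lt;/code&gt;&lt;/pre&gt;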
&lt;!-- body --&gt;
&lt;h2 id="graceful-node-shutdown"&gt;Graceful node shutdown&lt;/h2&gt;
&lt;p&gt;The kubelet attempts to detect node system shutdown and terminates pods running on the node.&lt;/p&gt;</description></item><item><title>Nodes</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes runs your &lt;a class='glossary-tooltip' title='A workload is an application running on Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/' target='_blank' aria-label='workload'&gt;workload&lt;/a&gt;
by placing containers into Pods to run on &lt;em&gt;Nodes&lt;/em&gt;.
A node may be a virtual or physical machine, depending on the cluster. Each node
is managed by the
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
and contains the services necessary to run
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Opening a pull request</title><link>https://andygol-k8s.netlify.app/docs/contribute/new-content/open-a-pr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/new-content/open-a-pr/</guid><description>&lt;!-- overview --&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;strong&gt;Code developers&lt;/strong&gt;: If you are documenting a new feature for an
upcoming Kubernetes release, see
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/new-content/new-features/"&gt;Document a new feature&lt;/a&gt;.&lt;/div&gt;

&lt;p&gt;To contribute new content pages or improve existing content pages, open a pull request (PR).
Make sure you follow all the requirements in the
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/new-content/"&gt;Before you begin&lt;/a&gt; section.&lt;/p&gt;
&lt;p&gt;If your change is small, or you're unfamiliar with git, read
&lt;a href="#changes-using-github"&gt;Changes using GitHub&lt;/a&gt; to learn how to edit a page.&lt;/p&gt;
&lt;p&gt;If your changes are large, read &lt;a href="#fork-the-repo"&gt;Work from a local fork&lt;/a&gt; to learn how to make
changes locally on your computer.&lt;/p&gt;</description></item><item><title>Overprovision Node Capacity For A Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/node-overprovisioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/node-overprovisioning/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page guides you through configuring &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Node'&gt;Node&lt;/a&gt;
overprovisioning in your Kubernetes cluster. Node overprovisioning is a strategy that proactively
reserves a portion of your cluster's compute resources. This reservation helps reduce the time
required to schedule new pods during scaling events, enhancing your cluster's responsiveness
to sudden spikes in traffic or workload demands.&lt;/p&gt;
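&lt;p&gt;One common approach, sketched below, is a low-priority placeholder Deployment running pause containers; the scheduler preempts these placeholders whenever real workloads need the capacity. The names, replica count, and resource sizes are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: placeholder
value: -1000           # lower than any real workload
globalDefault: false
description: "Priority class for overprovisioning placeholder pods."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: placeholder
  template:
    metadata:
      labels:
        app: placeholder
    spec:
      priorityClassName: placeholder
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "500m"
            memory: 512Mi&lt;/code&gt;&lt;/pre&gt;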
&lt;p&gt;By maintaining some unused capacity, you ensure that resources are immediately available when
new pods are created, preventing them from entering a pending state while the cluster scales up.&lt;/p&gt;</description></item><item><title>Perform a Rolling Update on a DaemonSet</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/update-daemon-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/update-daemon-set/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to perform a rolling update on a DaemonSet.&lt;/p&gt;
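&lt;p&gt;A DaemonSet performs rolling updates when its update strategy is set to &lt;code&gt;RollingUpdate&lt;/code&gt; (the default). A minimal fragment showing the relevant fields; the DaemonSet name is illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch   # illustrative
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1&lt;/code&gt;&lt;/pre&gt;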
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Performing a Rolling Update</title><link>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/update/update-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/update/update-intro/</guid><description>&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;p&gt;Perform a rolling update using kubectl.&lt;/p&gt;
&lt;h2 id="updating-an-application"&gt;Updating an application&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;p&gt;&lt;em&gt;Rolling updates allow a Deployment's update to take place with zero downtime by
incrementally replacing existing Pod instances with new ones.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
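&lt;p&gt;In practice, a rolling update is typically triggered by changing a Deployment's container image; for example (the Deployment and image names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl set image deployment/nginx-deployment nginx=nginx:1.27
# Watch the rollout until it completes
kubectl rollout status deployment/nginx-deployment&lt;/code&gt;&lt;/pre&gt;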
&lt;p&gt;Users expect applications to be available all the time, and developers are expected
to deploy new versions of them several times a day. In Kubernetes this is done with
rolling updates. A &lt;strong&gt;rolling update&lt;/strong&gt; allows a Deployment update to take place with
zero downtime. It does this by incrementally replacing the current Pods with new ones.
The new Pods are scheduled on Nodes with available resources, and Kubernetes waits
for those new Pods to start before removing the old Pods.&lt;/p&gt;</description></item><item><title>Pod Group Policies</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/workload-api/policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/workload-api/policies/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha" title="Feature Gate: GenericWorkload"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;Every pod group defined in a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/workload-api/"&gt;Workload&lt;/a&gt;
must declare a scheduling policy. This policy dictates how the scheduler treats the collection of Pods.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="policy-types"&gt;Policy types&lt;/h2&gt;
&lt;p&gt;The API currently supports two policy types: &lt;code&gt;basic&lt;/code&gt; and &lt;code&gt;gang&lt;/code&gt;.
You must specify exactly one policy for each group.&lt;/p&gt;
&lt;h3 id="basic-policy"&gt;Basic policy&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;basic&lt;/code&gt; policy instructs the scheduler to treat all Pods in the group as independent entities,
scheduling them using the standard Kubernetes behavior.&lt;/p&gt;</description></item><item><title>Protocols for Services</title><link>https://andygol-k8s.netlify.app/docs/reference/networking/service-protocols/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/networking/service-protocols/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;If you configure a &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;,
you can select from any network protocol that Kubernetes supports.&lt;/p&gt;
&lt;p&gt;Kubernetes supports the following protocols with Services:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#protocol-sctp"&gt;&lt;code&gt;SCTP&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#protocol-tcp"&gt;&lt;code&gt;TCP&lt;/code&gt;&lt;/a&gt; &lt;em&gt;(the default)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#protocol-udp"&gt;&lt;code&gt;UDP&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When you define a Service, you can also specify the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#application-protocol"&gt;application protocol&lt;/a&gt;
that it uses.&lt;/p&gt;
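&lt;p&gt;Each Service port selects its transport with the &lt;code&gt;protocol&lt;/code&gt; field; for example, a UDP Service (names and ports are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-udp-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: UDP    # TCP is the default if omitted
    port: 53
    targetPort: 53&lt;/code&gt;&lt;/pre&gt;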
&lt;p&gt;This document details some special cases, all of them typically using TCP
as a transport protocol:&lt;/p&gt;</description></item><item><title>Quantity</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/quantity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/quantity/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/api/resource&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors.&lt;/p&gt;</description></item><item><title>Reference Documentation Quickstart</title><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/quickstart/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/quickstart/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use the &lt;code&gt;update-imported-docs.py&lt;/code&gt; script to generate
the Kubernetes reference documentation. The script automates
the build setup and generates the reference documentation for a release.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;

	&lt;h3 id="requirements"&gt;Requirements:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need a machine that is running Linux or macOS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need to have these tools installed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.python.org/downloads/"&gt;Python&lt;/a&gt; v3.7.x+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://go.dev/dl/"&gt;Golang&lt;/a&gt; version 1.13+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/pip/"&gt;Pip&lt;/a&gt; used to install PyYAML&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pyyaml.org/"&gt;PyYAML&lt;/a&gt; v5.1.2&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gnu.org/software/make/"&gt;make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gcc.gnu.org/"&gt;gcc compiler/linker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/installation/"&gt;Docker&lt;/a&gt; (Required only for &lt;code&gt;kubectl&lt;/code&gt; command reference)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Your &lt;code&gt;PATH&lt;/code&gt; environment variable must include the required build tools, such as the &lt;code&gt;Go&lt;/code&gt; binary and &lt;code&gt;python&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Reviewing pull requests</title><link>https://andygol-k8s.netlify.app/docs/contribute/review/reviewing-prs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/review/reviewing-prs/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Anyone can review a documentation pull request. Visit the &lt;a href="https://github.com/kubernetes/website/pulls"&gt;pull requests&lt;/a&gt;
section in the Kubernetes website repository to see open pull requests.&lt;/p&gt;
&lt;p&gt;Reviewing documentation pull requests is a great way to introduce yourself to the Kubernetes
community. It helps you learn the code base and build trust with other contributors.&lt;/p&gt;
&lt;p&gt;Before reviewing, it's a good idea to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/"&gt;content guide&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/"&gt;style guide&lt;/a&gt; so you can leave informed comments.&lt;/li&gt;
&lt;li&gt;Understand the different
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/roles-and-responsibilities/"&gt;roles and responsibilities&lt;/a&gt;
in the Kubernetes documentation community.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- body --&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Before you start a review:&lt;/p&gt;</description></item><item><title>Roles and responsibilities</title><link>https://andygol-k8s.netlify.app/docs/contribute/participate/roles-and-responsibilities/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/participate/roles-and-responsibilities/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Anyone can contribute to Kubernetes. As your contributions to SIG Docs grow,
you can apply for different levels of membership in the community.
These roles allow you to take on more responsibility within the community.
Each role requires more time and commitment. The roles are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Anyone: regular contributors to the Kubernetes documentation&lt;/li&gt;
&lt;li&gt;Members: can assign and triage issues and provide non-binding review on pull requests&lt;/li&gt;
&lt;li&gt;Reviewers: can lead reviews on documentation pull requests and can vouch for a change's quality&lt;/li&gt;
&lt;li&gt;Approvers: can lead reviews on documentation and merge changes&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- body --&gt;
&lt;h2 id="anyone"&gt;Anyone&lt;/h2&gt;
&lt;p&gt;Anyone with a GitHub account can contribute to Kubernetes. SIG Docs welcomes all new contributors!&lt;/p&gt;</description></item><item><title>Run a Stateless Application Using a Deployment</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/run-stateless-application-deployment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/run-stateless-application-deployment/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to run an application using a Kubernetes Deployment object.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Create an nginx deployment.&lt;/li&gt;
&lt;li&gt;Use kubectl to list information about the deployment.&lt;/li&gt;
&lt;li&gt;Update the deployment.&lt;/li&gt;
&lt;/ul&gt;
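&lt;p&gt;The kind of Deployment used in this task can be sketched as follows; the name, replica count, and image tag are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80&lt;/code&gt;&lt;/pre&gt;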
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Running Automated Tasks with a CronJob</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/automated-tasks-with-cron-jobs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/automated-tasks-with-cron-jobs/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to run automated tasks using Kubernetes &lt;a class='glossary-tooltip' title='A repeating task (a Job) that runs on a regular schedule.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/cron-jobs/' target='_blank' aria-label='CronJob'&gt;CronJob&lt;/a&gt; object.&lt;/p&gt;
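&lt;p&gt;A minimal CronJob that runs once a minute might look like this; the name, image, and command are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"   # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure&lt;/code&gt;&lt;/pre&gt;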
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Running Kubelet in Standalone Mode</title><link>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/kubelet-standalone/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/kubelet-standalone/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial shows you how to run a standalone kubelet instance.&lt;/p&gt;
&lt;p&gt;You may have different motivations for running a standalone kubelet.
This tutorial is aimed at introducing you to Kubernetes, even if you don't have
much experience with it. You can follow this tutorial and learn about node setup,
basic (static) Pods, and how Kubernetes manages containers.&lt;/p&gt;
&lt;p&gt;Once you have followed this tutorial, you could try using a cluster that has a
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; to manage pods
and nodes, and other types of objects. For example,
&lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/hello-minikube/"&gt;Hello, minikube&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Running Multiple Instances of Your App</title><link>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/scale/scale-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/scale/scale-intro/</guid><description>&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Scale an existing app manually using kubectl.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="scaling-an-application"&gt;Scaling an application&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;p&gt;&lt;em&gt;You can create a Deployment with multiple instances from the start by using the
&lt;code&gt;--replicas&lt;/code&gt; parameter of the &lt;code&gt;kubectl create deployment&lt;/code&gt; command.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
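&lt;p&gt;For example, you can set the replica count at creation time and adjust it later; the Deployment name and image are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl create deployment nginx --image=nginx:1.27 --replicas=3
# Scale up to five replicas
kubectl scale deployment/nginx --replicas=5
# Check the current and desired replica counts
kubectl get deployment nginx&lt;/code&gt;&lt;/pre&gt;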
&lt;p&gt;Previously we created a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/"&gt;Deployment&lt;/a&gt;,
and then exposed it publicly via a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt;.
The Deployment created only one Pod for running our application. When traffic increases,
we will need to scale the application to keep up with user demand.&lt;/p&gt;</description></item><item><title>Service</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, a Service is a method for exposing a network application that is running as one or more
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; in your cluster.&lt;/p&gt;
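&lt;p&gt;A minimal Service that selects Pods by label and forwards traffic to them might look like this; the names and ports are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app   # matches Pods with this label
  ports:
  - protocol: TCP
    port: 80          # port the Service exposes
    targetPort: 8080  # port the Pods listen on&lt;/code&gt;&lt;/pre&gt;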
&lt;p&gt;A key aim of Services in Kubernetes is that you don't need to modify your existing
application to use an unfamiliar service discovery mechanism.
You can run code in Pods, whether this is code designed for a cloud-native world, or
an older app you've containerized. You use a Service to make that set of Pods available
on the network so that clients can interact with it.&lt;/p&gt;</description></item><item><title>ServiceCIDR</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/service-cidr-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/service-cidr-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/networking/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ServiceCIDR"&gt;ServiceCIDR&lt;/h2&gt;
&lt;p&gt;ServiceCIDR defines a range of IP addresses using CIDR format (e.g. 192.168.0.0/24 or 2001:db8::/64). This range is used to allocate ClusterIPs to Service objects.&lt;/p&gt;</description></item><item><title>Set Up DRA in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/</guid><description>&lt;div class="feature-state-notice feature-stable" title="Feature Gate: DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;p&gt;This page shows you how to configure &lt;em&gt;dynamic resource allocation (DRA)&lt;/em&gt; in a
Kubernetes cluster by enabling API groups and configuring classes of devices.
These instructions are for cluster administrators.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="about-dra"&gt;About DRA&lt;/h2&gt;
&lt;p&gt;Dynamic resource allocation (DRA) is a Kubernetes feature that lets you request and share resources among Pods.
These resources are often attached
&lt;a class='glossary-tooltip' title='Any resource that&amp;#39;s directly or indirectly attached to your cluster&amp;#39;s nodes, like GPUs or circuit boards.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-device' target='_blank' aria-label='devices'&gt;devices&lt;/a&gt; like hardware
accelerators.&lt;/p&gt;</description></item><item><title>StatefulSet Basics</title><link>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/basic-stateful-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/basic-stateful-set/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial provides an introduction to managing applications with
&lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSets'&gt;StatefulSets&lt;/a&gt;.
It demonstrates how to create, delete, scale, and update the Pods of StatefulSets.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Before you begin this tutorial, you should familiarize yourself with the
following Kubernetes concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/"&gt;Pods&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/dns-pod-service/"&gt;Cluster DNS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#headless-services"&gt;Headless Services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/dynamic-provisioning/"&gt;PersistentVolumes Provisioning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl/"&gt;kubectl&lt;/a&gt; command line tool&lt;/li&gt;
&lt;/ul&gt;
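&lt;p&gt;As a quick illustration of how these concepts fit together, a StatefulSet is typically paired with a headless Service that controls its network domain. This is only a sketch; the &lt;code&gt;nginx&lt;/code&gt; names are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None  # headless: no cluster IP; DNS records are created per Pod
  selector:
    app: nginx
&lt;/code&gt;&lt;/pre&gt;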
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Swap memory management</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/swap-memory-management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/swap-memory-management/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes can be configured to use swap memory on a &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;,
allowing the kernel to free up physical memory by swapping out pages to backing storage.
This is useful for several use cases.
For example, nodes can run workloads that benefit from using swap,
such as those that have large memory footprints but only access a portion of that memory at any given time.
Swap also helps prevent Pods from being terminated during memory pressure spikes,
shields nodes from system-level memory spikes that might compromise their stability,
and allows for more flexible memory management on the node.&lt;/p&gt;
&lt;p&gt;This documentation is about investigating and diagnosing
&lt;a class='glossary-tooltip' title='A command line tool for communicating with a Kubernetes cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt; related issues.
If you encounter issues accessing &lt;code&gt;kubectl&lt;/code&gt; or connecting to your cluster, this
document outlines various common scenarios and potential solutions to help
identify and address the likely cause.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You need to have a Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;You also need to have &lt;code&gt;kubectl&lt;/code&gt; installed; see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/tools/#kubectl"&gt;Install tools&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="verify-kubectl-setup"&gt;Verify kubectl setup&lt;/h2&gt;
&lt;p&gt;Make sure you have installed and configured &lt;code&gt;kubectl&lt;/code&gt; correctly on your local machine.
Check the &lt;code&gt;kubectl&lt;/code&gt; version to ensure it is up-to-date and compatible with your cluster.&lt;/p&gt;</description></item><item><title>Use Antrea for NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/antrea-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/antrea-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to install and use the Antrea CNI plugin on Kubernetes.
For background on Project Antrea, read the &lt;a href="https://antrea.io/docs/"&gt;Introduction to Antrea&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster. Follow the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;kubeadm getting started guide&lt;/a&gt; to bootstrap one.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="deploying-antrea-with-kubeadm"&gt;Deploying Antrea with kubeadm&lt;/h2&gt;
&lt;p&gt;Follow the &lt;a href="https://github.com/vmware-tanzu/antrea/blob/main/docs/getting-started.md"&gt;Getting Started&lt;/a&gt; guide to deploy Antrea for kubeadm.&lt;/p&gt;
&lt;h2 id="what-s-next"&gt;What's next&lt;/h2&gt;
&lt;p&gt;Once your cluster is running, you can follow the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/declare-network-policy/"&gt;Declare Network Policy&lt;/a&gt; task to try out Kubernetes NetworkPolicy.&lt;/p&gt;</description></item><item><title>Using a Service to Expose Your App</title><link>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/expose/expose-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/expose/expose-intro/</guid><description>&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Learn about a Service in Kubernetes.&lt;/li&gt;
&lt;li&gt;Understand how labels and selectors relate to a Service.&lt;/li&gt;
&lt;li&gt;Expose an application outside a Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="overview-of-kubernetes-services"&gt;Overview of Kubernetes Services&lt;/h2&gt;
&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/"&gt;Pods&lt;/a&gt; are mortal. Pods have a
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/"&gt;lifecycle&lt;/a&gt;. When a worker node dies,
the Pods running on the Node are also lost. A &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/"&gt;ReplicaSet&lt;/a&gt;
might then dynamically drive the cluster back to the desired state via the creation
of new Pods to keep your application running. As another example, consider an image-processing
backend with 3 replicas. Those replicas are exchangeable; the front-end system should
not care about backend replicas or even if a Pod is lost and recreated. That said,
each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node,
so there needs to be a way of automatically reconciling changes among Pods so that your
applications continue to function.&lt;/p&gt;</description></item><item><title>Using kubectl to Create a Deployment</title><link>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/</guid><description>&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Learn about application Deployments.&lt;/li&gt;
&lt;li&gt;Deploy your first app on Kubernetes with kubectl.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubernetes-deployments"&gt;Kubernetes Deployments&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;p&gt;&lt;em&gt;A Deployment is responsible for creating and updating instances of your application.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;This tutorial uses a container that requires the AMD64 architecture. If you are using
minikube on a computer with a different CPU architecture, you could try using minikube with
a driver that can emulate AMD64. For example, the Docker Desktop driver can do this.&lt;/div&gt;

&lt;p&gt;Once you have a &lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"&gt;running Kubernetes cluster&lt;/a&gt;,
you can deploy your containerized applications on top of it. To do so, you create a
Kubernetes &lt;strong&gt;Deployment&lt;/strong&gt;. The Deployment instructs Kubernetes how to create and
update instances of your application. Once you've created a Deployment, the Kubernetes
control plane schedules the application instances included in that Deployment to run
on individual Nodes in the cluster.&lt;/p&gt;</description></item><item><title>Using Minikube to Create a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/</guid><description>&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Learn what a Kubernetes cluster is.&lt;/li&gt;
&lt;li&gt;Learn what Minikube is.&lt;/li&gt;
&lt;li&gt;Start a Kubernetes cluster on your computer.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubernetes-clusters"&gt;Kubernetes Clusters&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;p&gt;&lt;em&gt;Kubernetes is a production-grade, open-source platform that orchestrates
the placement (scheduling) and execution of application containers
within and across computer clusters.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes coordinates a highly available cluster of computers that are connected
to work as a single unit.&lt;/strong&gt; The abstractions in Kubernetes allow you to deploy
containerized applications to a cluster without tying them specifically to individual
machines. To make use of this new model of deployment, applications need to be packaged
in a way that decouples them from individual hosts: they need to be containerized.
Containerized applications are more flexible and available than in past deployment models,
where applications were installed directly onto specific machines as packages deeply
integrated into the host. &lt;strong&gt;Kubernetes automates the distribution and scheduling of
application containers across a cluster in a more efficient way.&lt;/strong&gt; Kubernetes is an
open-source platform and is production-ready.&lt;/p&gt;</description></item><item><title>Viewing Pods and Nodes</title><link>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/explore/explore-intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/explore/explore-intro/</guid><description>&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Learn about Kubernetes Pods.&lt;/li&gt;
&lt;li&gt;Learn about Kubernetes Nodes.&lt;/li&gt;
&lt;li&gt;Troubleshoot deployed applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubernetes-pods"&gt;Kubernetes Pods&lt;/h2&gt;
&lt;div class="alert alert-primary" role="alert"&gt;
&lt;p&gt;&lt;em&gt;A Pod is a group of one or more application containers (such as Docker) and includes
shared storage (volumes), an IP address, and information about how to run them.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;When you created a Deployment in &lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"&gt;Module 2&lt;/a&gt;,
Kubernetes created a &lt;strong&gt;Pod&lt;/strong&gt; to host your application instance. A Pod is a Kubernetes
abstraction that represents a group of one or more application containers (such as Docker),
and some shared resources for those containers. Those resources include:&lt;/p&gt;</description></item><item><title>Volume</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/volume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/volume/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Volume"&gt;Volume&lt;/h2&gt;
&lt;p&gt;Volume represents a named volume in a pod that may be accessed by any container in the pod.&lt;/p&gt;</description></item><item><title>Volumes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes &lt;em&gt;volumes&lt;/em&gt; provide a way for containers in a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='pod'&gt;pod&lt;/a&gt;
to access and share data via the filesystem. There are different kinds of volume that you can use for different purposes,
such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;populating a configuration file based on a &lt;a class='glossary-tooltip' title='An API object used to store non-confidential data in key-value pairs. Can be consumed as environment variables, command-line arguments, or configuration files in a volume.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/configuration/configmap/' target='_blank' aria-label='ConfigMap'&gt;ConfigMap&lt;/a&gt;
or a &lt;a class='glossary-tooltip' title='Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/' target='_blank' aria-label='Secret'&gt;Secret&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;providing some temporary scratch space for a pod&lt;/li&gt;
&lt;li&gt;sharing a filesystem between two different containers in the same pod&lt;/li&gt;
&lt;li&gt;sharing a filesystem between two different pods (even if those Pods run on different nodes)&lt;/li&gt;
&lt;li&gt;durably storing data so that it stays available even if the Pod restarts or is replaced&lt;/li&gt;
&lt;li&gt;passing configuration information to an app running in a container, based on details of the Pod
the container is in
(for example: telling a &lt;a class='glossary-tooltip' title='An auxiliary container that stays running throughout the lifecycle of a Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/sidecar-containers/' target='_blank' aria-label='sidecar container'&gt;sidecar container&lt;/a&gt;
what namespace the Pod is running in)&lt;/li&gt;
&lt;li&gt;providing read-only access to data in a different container image&lt;/li&gt;
&lt;/ul&gt;
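&lt;p&gt;For example, the first use case above (populating a configuration file from a ConfigMap) looks roughly like the following sketch. The Pod, volume, and ConfigMap names are illustrative, and a ConfigMap named &lt;code&gt;app-config&lt;/code&gt; is assumed to already exist:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox
    command:
    - sleep
    - &amp;quot;3600&amp;quot;
    volumeMounts:
    - name: config
      mountPath: /etc/config  # each ConfigMap key appears as a file here
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config  # assumed to exist in the same namespace
&lt;/code&gt;&lt;/pre&gt;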
&lt;p&gt;Data sharing can be between different local processes within a container, or between different containers,
or between Pods.&lt;/p&gt;</description></item><item><title>Adding Windows worker nodes</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;This page explains how to add Windows worker nodes to a kubeadm cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;A running &lt;a href="https://www.microsoft.com/cloud-platform/windows-server-pricing"&gt;Windows Server 2022&lt;/a&gt;
(or higher) instance with administrative access.&lt;/li&gt;
&lt;li&gt;A running kubeadm cluster created by &lt;code&gt;kubeadm init&lt;/code&gt; and following the steps
in the document &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;Creating a cluster with kubeadm&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="adding-windows-worker-nodes"&gt;Adding Windows worker nodes&lt;/h2&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;To facilitate the addition of Windows worker nodes to a cluster, PowerShell scripts from the repository
&lt;a href="https://sigs.k8s.io/sig-windows-tools"&gt;https://sigs.k8s.io/sig-windows-tools&lt;/a&gt; are used.&lt;/div&gt;

&lt;p&gt;Do the following for each machine:&lt;/p&gt;</description></item><item><title>Common Parameters</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-parameters/common-parameters/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-parameters/common-parameters/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="allowWatchBookmarks"&gt;allowWatchBookmarks&lt;/h2&gt;
&lt;p&gt;allowWatchBookmarks requests watch events with type &amp;quot;BOOKMARK&amp;quot;. Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.&lt;/p&gt;</description></item><item><title>CronJob</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/cron-job-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/cron-job-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: batch/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/batch/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="CronJob"&gt;CronJob&lt;/h2&gt;
&lt;p&gt;CronJob represents the configuration of a single cron job.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: batch/v1&lt;/p&gt;</description></item><item><title>Previewing locally</title><link>https://andygol-k8s.netlify.app/docs/contribute/new-content/preview-locally/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/new-content/preview-locally/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Before you &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/new-content/open-a-pr/"&gt;open a new PR&lt;/a&gt;,
it is a good idea to preview your changes. A preview lets you catch build
errors or Markdown formatting problems.&lt;/p&gt;
&lt;h2 id="preview-locally"&gt;Preview your changes locally&lt;/h2&gt;
&lt;p&gt;You can either build the website's container image or run Hugo locally. Building the container
image is slower but displays &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/hugo-shortcodes/"&gt;Hugo shortcodes&lt;/a&gt;, which can
be useful for debugging.&lt;/p&gt;
&lt;ul class="nav nav-tabs" id="tab-with-hugo" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-with-hugo-0" role="tab" aria-controls="tab-with-hugo-0" aria-selected="true"&gt;Hugo in a container&lt;/a&gt;&lt;/li&gt;
	 
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-with-hugo-1" role="tab" aria-controls="tab-with-hugo-1"&gt;Hugo on the command line&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;div class="tab-content" id="tab-with-hugo"&gt;&lt;div id="tab-with-hugo-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-with-hugo-0"&gt;

&lt;p&gt;&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;The commands below use Docker as default container engine. Set the &lt;code&gt;CONTAINER_ENGINE&lt;/code&gt; environment
variable to override this behaviour.&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Build the container image locally&lt;br&gt;
&lt;em&gt;You only need this step if you are testing a change to the Hugo tool itself&lt;/em&gt;&lt;/p&gt;</description></item><item><title>ResourceFieldSelector</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/resource-field-selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/resource-field-selector/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;ResourceFieldSelector represents container resources (cpu, memory) and their output format&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;resource&lt;/strong&gt; (string), required&lt;/p&gt;
&lt;p&gt;Required: resource to select&lt;/p&gt;</description></item><item><title>VolumeAttachment</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="VolumeAttachment"&gt;VolumeAttachment&lt;/h2&gt;
&lt;p&gt;VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node.&lt;/p&gt;</description></item><item><title>HorizontalPodAutoscaler</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: autoscaling/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/autoscaling/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="HorizontalPodAutoscaler"&gt;HorizontalPodAutoscaler&lt;/h2&gt;
&lt;p&gt;configuration of a horizontal pod autoscaler.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: autoscaling/v1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: HorizontalPodAutoscaler&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt; (&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta"&gt;ObjectMeta&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Status</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/status/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/status/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/apimachinery/pkg/apis/meta/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Status is a return value for calls that don't return other objects.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt; (string)&lt;/p&gt;</description></item><item><title>VolumeAttributesClass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/volume-attributes-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/config-and-storage-resources/volume-attributes-class-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/storage/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="VolumeAttributesClass"&gt;VolumeAttributesClass&lt;/h2&gt;
&lt;p&gt;VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver. The class can be specified during dynamic provisioning of PersistentVolumeClaims, and changed in the PersistentVolumeClaim spec after provisioning.&lt;/p&gt;</description></item><item><title>HorizontalPodAutoscaler</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: autoscaling/v2&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/autoscaling/v2&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="HorizontalPodAutoscaler"&gt;HorizontalPodAutoscaler&lt;/h2&gt;
&lt;p&gt;HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.&lt;/p&gt;</description></item><item><title>TypedLocalObjectReference</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/core/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace.&lt;/p&gt;</description></item><item><title>PriorityClass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/priority-class-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/priority-class-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/scheduling/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="PriorityClass"&gt;PriorityClass&lt;/h2&gt;
&lt;p&gt;PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer.&lt;/p&gt;</description></item><item><title>DeviceTaintRule v1alpha3</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1alpha3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1alpha3&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="DeviceTaintRule"&gt;DeviceTaintRule&lt;/h2&gt;
&lt;p&gt;DeviceTaintRule adds one taint to all devices which match the selector. This has the same effect as if the taint was specified directly in the ResourceSlice by the DRA driver.&lt;/p&gt;</description></item><item><title>Feature Gates (removed)</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates-removed/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates-removed/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page contains a list of feature gates that have been removed. The information on this page is for reference.
A removed feature gate differs from a GA'ed or deprecated one in that a removed gate is
no longer recognized as a valid feature gate.
A GA'ed or deprecated feature gate is still recognized by the corresponding Kubernetes
components, although it can no longer cause any behavior differences in a cluster.&lt;/p&gt;
&lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt;. Nodes in your cluster can be &lt;em&gt;autoscaled&lt;/em&gt;:
dynamically &lt;a href="#provisioning"&gt;&lt;em&gt;provisioned&lt;/em&gt;&lt;/a&gt; or &lt;a href="#consolidation"&gt;&lt;em&gt;consolidated&lt;/em&gt;&lt;/a&gt; to provide the needed
capacity while optimizing cost. Autoscaling is performed by Node &lt;a href="#autoscalers"&gt;&lt;em&gt;autoscalers&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="provisioning"&gt;Node provisioning&lt;/h2&gt;
&lt;p&gt;If there are Pods in a cluster that can't be scheduled on existing Nodes, new Nodes can be
automatically added to the cluster—&lt;em&gt;provisioned&lt;/em&gt;—to accommodate the Pods. This is
especially useful if the number of Pods changes over time, for example as a result of
&lt;a href="#horizontal-workload-autoscaling"&gt;combining horizontal workload with Node autoscaling&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Pod Security Standards</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Pod Security Standards define three different &lt;em&gt;policies&lt;/em&gt; to broadly cover the security
spectrum. These policies are &lt;em&gt;cumulative&lt;/em&gt; and range from highly-permissive to highly-restrictive.
This guide outlines the requirements of each policy.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Profile&lt;/th&gt;
 &lt;th&gt;Description&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong style="white-space: nowrap"&gt;Privileged&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong style="white-space: nowrap"&gt;Baseline&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Minimally restrictive policy which prevents known privilege escalations. Allows the default (minimally specified) Pod configuration.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong style="white-space: nowrap"&gt;Restricted&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Heavily restricted policy, following current Pod hardening best practices.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
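As a sketch, a Pod that satisfies the main requirements of the Restricted profile (name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo              # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true               # container must not run as root
    seccompProfile:
      type: RuntimeDefault           # default seccomp profile required
  containers:
  - name: app
    image: registry.example/app:1.0  # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                # drop all Linux capabilities
```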
&lt;!-- body --&gt;
&lt;h2 id="profile-details"&gt;Profile Details&lt;/h2&gt;
&lt;h3 id="privileged"&gt;Privileged&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;The &lt;em&gt;Privileged&lt;/em&gt; policy is purposely open and entirely unrestricted.&lt;/strong&gt; This type of policy is
typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.&lt;/p&gt;</description></item><item><title>Resource metrics pipeline</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;For Kubernetes, the &lt;em&gt;Metrics API&lt;/em&gt; offers a basic set of metrics to support automatic scaling and
similar use cases. This API makes information available about resource usage for nodes and pods,
including metrics for CPU and memory. If you deploy the Metrics API into your cluster, clients of
the Kubernetes API can then query for this information, and you can use Kubernetes' access control
mechanisms to manage permissions to do so.&lt;/p&gt;</description></item><item><title>Set up an Extension API Server</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/setup-extension-api-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/setup-extension-api-server/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Setting up an extension API server to work with the aggregation layer allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs.&lt;/p&gt;
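Once the extension API server is running, it is registered with the aggregation layer through an APIService object; a sketch (the group, version, and Service names are hypothetical):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.example.dev      # must be "version.group"; hypothetical
spec:
  group: example.dev
  version: v1alpha1
  service:                        # the extension API server's in-cluster Service
    name: example-api
    namespace: example-system
    port: 443
  caBundle: REPLACE_WITH_BASE64_CA   # placeholder: CA used to validate the service
  groupPriorityMinimum: 1000
  versionPriority: 15
```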
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Tools for Monitoring Resources</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/resource-usage-monitoring/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/resource-usage-monitoring/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;To scale an application and provide a reliable service, you need to
understand how the application behaves when it is deployed. You can examine
application performance in a Kubernetes cluster by examining the containers,
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/"&gt;pods&lt;/a&gt;,
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;services&lt;/a&gt;, and
the characteristics of the overall cluster. Kubernetes provides detailed
information about an application's resource usage at each of these levels.
This information allows you to evaluate your application's performance and identify
where bottlenecks can be removed to improve overall performance.&lt;/p&gt;</description></item><item><title>ResourceClaim</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceClaim"&gt;ResourceClaim&lt;/h2&gt;
&lt;p&gt;ResourceClaim describes a request for access to resources in the cluster, for use by workloads. For example, if a workload needs an accelerator device with specific properties, this is how that request is expressed. The status stanza tracks whether this claim has been satisfied and what specific resources have been allocated.&lt;/p&gt;</description></item><item><title>ResourceClaimTemplate</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/resource-claim-template-v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/resource-claim-template-v1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceClaimTemplate"&gt;ResourceClaimTemplate&lt;/h2&gt;
&lt;p&gt;ResourceClaimTemplate is used to produce ResourceClaim objects.&lt;/p&gt;
&lt;p&gt;This type requires the DynamicResourceAllocation feature gate, which is stable and enabled by default.&lt;/p&gt;
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: resource.k8s.io/v1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/resource/v1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="ResourceSlice"&gt;ResourceSlice&lt;/h2&gt;
&lt;p&gt;ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver. A pool may span more than one ResourceSlice, and exactly how many ResourceSlices comprise a pool is determined by the driver.&lt;/p&gt;</description></item><item><title>Workload v1alpha1</title><link>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/workload-v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/workload-resources/workload-v1alpha1/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the 
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1alpha1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;import &amp;quot;k8s.io/api/scheduling/v1alpha1&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="Workload"&gt;Workload&lt;/h2&gt;
&lt;p&gt;Workload allows for expressing scheduling constraints that should be used when managing lifecycle of workloads from scheduling perspective, including scheduling, preemption, eviction and other phases.&lt;/p&gt;</description></item><item><title>Accessing Clusters</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/access-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/access-cluster/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This topic discusses multiple ways to interact with clusters.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="accessing-for-the-first-time-with-kubectl"&gt;Accessing for the first time with kubectl&lt;/h2&gt;
&lt;p&gt;When accessing the Kubernetes API for the first time, we suggest using the
Kubernetes CLI, &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
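kubectl reads the cluster's location and credentials from a kubeconfig file; a minimal sketch (the server address, names, and token are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: demo                     # hypothetical cluster entry
  cluster:
    server: https://cluster.example:6443   # the cluster's location
users:
- name: demo-user                # hypothetical credentials entry
  user:
    token: REPLACE_WITH_BEARER_TOKEN       # placeholder credential
contexts:
- name: demo
  context:
    cluster: demo
    user: demo-user
current-context: demo
```

By default, kubectl uses the file named by `$KUBECONFIG`, or `~/.kube/config`.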
&lt;p&gt;To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set up when you work through
a &lt;a href="https://andygol-k8s.netlify.app/docs/setup/"&gt;Getting started guide&lt;/a&gt;,
or someone else set up the cluster and provided you with credentials and a location.&lt;/p&gt;</description></item><item><title>Allocate Devices to Workloads with DRA</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/</guid><description>&lt;div class="feature-state-notice feature-stable" title="Feature Gate: DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;p&gt;This page shows you how to allocate devices to your Pods by using
&lt;em&gt;dynamic resource allocation (DRA)&lt;/em&gt;. These instructions are for workload
operators. Before reading this page, familiarize yourself with how DRA works and
with DRA terminology like
&lt;a class='glossary-tooltip' title='Describes the resources that a workload needs, such as devices. ResourceClaims can request devices from DeviceClasses.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates' target='_blank' aria-label='ResourceClaims'&gt;ResourceClaims&lt;/a&gt; and
&lt;a class='glossary-tooltip' title='Defines a template for Kubernetes to create ResourceClaims. Used to provide per-Pod access to separate, similar resources.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates' target='_blank' aria-label='ResourceClaimTemplates'&gt;ResourceClaimTemplates&lt;/a&gt;.
For more information, see
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/"&gt;Dynamic Resource Allocation (DRA)&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Apply Pod Security Standards at the Namespace Level</title><link>https://andygol-k8s.netlify.app/docs/tutorials/security/ns-level-pss/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/security/ns-level-pss/</guid><description>&lt;div class="alert alert-primary" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;Note&lt;/div&gt;
&lt;p&gt;This tutorial applies only to new clusters.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Pod Security Admission is an admission controller that applies
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt;
when pods are created. This feature graduated to general availability (GA) in v1.25.
In this tutorial, you will enforce the &lt;code&gt;baseline&lt;/code&gt; Pod Security Standard,
one namespace at a time.&lt;/p&gt;
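Enforcement is configured with labels on the namespace; a sketch (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace                              # hypothetical name
  labels:
    # enforce the baseline Pod Security Standard in this namespace
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
```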
&lt;p&gt;You can also apply Pod Security Standards to multiple namespaces at once at the cluster
level. For instructions, refer to
&lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/security/cluster-level-pss/"&gt;Apply Pod Security Standards at the cluster level&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Articles on dockershim Removal and on Using CRI-compatible Runtimes</title><link>https://andygol-k8s.netlify.app/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This is a list of articles and other pages that are either
about Kubernetes' deprecation and removal of &lt;em&gt;dockershim&lt;/em&gt;,
or about using CRI-compatible container runtimes,
in connection with that removal.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="kubernetes-project"&gt;Kubernetes project&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes blog: &lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dockershim-faq/"&gt;Dockershim Removal FAQ&lt;/a&gt; (originally published 2020/12/02)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes blog: &lt;a href="https://andygol-k8s.netlify.app/blog/2022/02/17/dockershim-faq/"&gt;Updated: Dockershim Removal FAQ&lt;/a&gt; (updated published 2022/02/17)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes blog: &lt;a href="https://andygol-k8s.netlify.app/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/"&gt;Kubernetes is Moving on From Dockershim: Commitments and Next Steps&lt;/a&gt; (published 2022/01/07)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes blog: &lt;a href="https://andygol-k8s.netlify.app/blog/2021/11/12/are-you-ready-for-dockershim-removal/"&gt;Dockershim removal is coming. Are you ready?&lt;/a&gt; (published 2021/11/12)&lt;/p&gt;</description></item><item><title>Assign CPU Resources to Containers and Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-cpu-resource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-cpu-resource/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to assign a CPU &lt;em&gt;request&lt;/em&gt; and a CPU &lt;em&gt;limit&lt;/em&gt; to
a container. Containers cannot use more CPU than the configured limit.
Provided the system has CPU time free, a container is guaranteed to be
allocated as much CPU as it requests.&lt;/p&gt;
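A container's CPU request and limit are set in its `resources` field; a sketch (the Pod name, image, and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo                     # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0  # hypothetical image
    resources:
      requests:
        cpu: "500m"   # half a CPU, guaranteed when the system has CPU time free
      limits:
        cpu: "1"      # the container cannot use more than one CPU
```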
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Assigning Pods to Nodes</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;You can constrain a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; so that it is
&lt;em&gt;restricted&lt;/em&gt; to run on particular &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node(s)'&gt;node(s)&lt;/a&gt;,
or to &lt;em&gt;prefer&lt;/em&gt; to run on particular nodes.
There are several ways to do this and the recommended approaches all use
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels/"&gt;label selectors&lt;/a&gt; to facilitate the selection.
Often, you do not need to set any such constraints; the
&lt;a class='glossary-tooltip' title='Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='scheduler'&gt;scheduler&lt;/a&gt; will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
or to co-locate Pods from two different services that communicate a lot into the same availability zone.&lt;/p&gt;</description></item><item><title>Authenticating with Bootstrap Tokens</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/bootstrap-tokens/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/bootstrap-tokens/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;Bootstrap tokens are simple bearer tokens meant to be used when
creating new clusters or joining new nodes to an existing cluster.
They were built to support &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;kubeadm&lt;/a&gt;, but can be used in other contexts
by users who wish to start clusters without &lt;code&gt;kubeadm&lt;/code&gt;. They are also built to
work, via RBAC policy, with the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/"&gt;kubelet TLS Bootstrapping&lt;/a&gt; system.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="bootstrap-tokens-overview"&gt;Bootstrap Tokens Overview&lt;/h2&gt;
&lt;p&gt;Bootstrap Tokens are defined with a specific Secret type
(&lt;code&gt;bootstrap.kubernetes.io/token&lt;/code&gt;) and live in the &lt;code&gt;kube-system&lt;/code&gt;
namespace. These Secrets are read by the Bootstrap Authenticator in the
API Server. Expired tokens are removed by the TokenCleaner controller in the
Controller Manager. The tokens are also used to create a signature for a
specific ConfigMap used in a &amp;quot;discovery&amp;quot; process, through the BootstrapSigner
controller.&lt;/p&gt;
&lt;p&gt;To learn how to generate certificates for your cluster, see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/certificates/"&gt;Certificates&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Coarse Parallel Processing Using a Work Queue</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/coarse-parallel-processing-work-queue/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/coarse-parallel-processing-work-queue/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In this example, you will run a Kubernetes Job with multiple parallel
worker processes.&lt;/p&gt;
&lt;p&gt;As each pod is created, it picks up one unit of work
from a task queue, completes it, deletes it from the queue, and exits.&lt;/p&gt;
&lt;p&gt;Here is an overview of the steps in this example:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Start a message queue service.&lt;/strong&gt; In this example, you use RabbitMQ, but you could use another
one. In practice you would set up a message queue service once and reuse it for many jobs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create a queue, and fill it with messages.&lt;/strong&gt; Each message represents one task to be done. In
this example, a message is an integer that we will do a lengthy computation on.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Start a Job that works on tasks from the queue&lt;/strong&gt;. The Job starts several pods. Each pod takes
one task from the message queue, processes it, and exits.&lt;/li&gt;
&lt;/ol&gt;
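The Job in step 3 might be sketched like this (the name and worker image are hypothetical); note that `completions` is left unset, which fits the work-queue pattern where each worker exits once the queue is drained:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queue-workers                   # hypothetical name
spec:
  parallelism: 3                        # number of worker Pods started
  # no `completions`: workers exit on their own when the queue is empty
  template:
    spec:
      containers:
      - name: worker
        image: registry.example/worker:1.0   # hypothetical worker image
      restartPolicy: OnFailure
```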
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You should already be familiar with the basic,
non-parallel, use of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/"&gt;Job&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Communication between Nodes and the Control Plane</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/control-plane-node-communication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/control-plane-node-communication/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document catalogs the communication paths between the &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;
and the Kubernetes &lt;a class='glossary-tooltip' title='A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-cluster' target='_blank' aria-label='cluster'&gt;cluster&lt;/a&gt;.
The intent is to allow users to customize their installation to harden the network configuration
such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud
provider).&lt;/p&gt;</description></item><item><title>ConfigMaps</title><link>https://andygol-k8s.netlify.app/docs/concepts/configuration/configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/configuration/configmap/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A ConfigMap is an API object used to store non-confidential data in key-value pairs.
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; can consume ConfigMaps as
environment variables, command-line arguments, or as configuration files in a
&lt;a class='glossary-tooltip' title='A directory containing data, accessible to the containers in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/' target='_blank' aria-label='volume'&gt;volume&lt;/a&gt;.&lt;/p&gt;
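A minimal ConfigMap sketch (the name, keys, and values are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: "info"           # consumed as an environment variable
  app.properties: |           # or mounted as a file in a volume
    retries=3
    timeout=30s
```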
&lt;p&gt;A ConfigMap allows you to decouple environment-specific configuration from your &lt;a class='glossary-tooltip' title='Stored instance of a container that holds a set of software needed to run an application.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-image' target='_blank' aria-label='container images'&gt;container images&lt;/a&gt;, so that your applications are easily portable.&lt;/p&gt;</description></item><item><title>Configure Default CPU Requests and Limits for a Namespace</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure default CPU requests and limits for a
&lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.&lt;/p&gt;
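Namespace defaults are set with a LimitRange object created in that namespace; a sketch (the name and values are hypothetical):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults     # hypothetical; create it in the target namespace
spec:
  limits:
  - type: Container
    default:             # default CPU limit for containers that specify none
      cpu: "1"
    defaultRequest:      # default CPU request for containers that specify none
      cpu: "500m"
```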
&lt;p&gt;A Kubernetes cluster can be divided into namespaces. If you create a Pod within a
namespace that has a default CPU
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/#requests-and-limits"&gt;limit&lt;/a&gt;, and any container in that Pod does not specify
its own CPU limit, then the
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; assigns the default
CPU limit to that container.&lt;/p&gt;</description></item><item><title>Configure Multiple Schedulers</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/configure-multiple-schedulers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/configure-multiple-schedulers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes ships with a default scheduler that is described
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/"&gt;here&lt;/a&gt;.
If the default scheduler does not suit your needs you can implement your own scheduler.
Moreover, you can even run multiple schedulers simultaneously alongside the default
scheduler and instruct Kubernetes which scheduler to use for each of your pods. Let's
learn how to run multiple schedulers in Kubernetes with an example.&lt;/p&gt;
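&lt;p&gt;A Pod selects a particular scheduler through its &lt;code&gt;spec.schedulerName&lt;/code&gt; field; a minimal sketch (the name &lt;code&gt;my-scheduler&lt;/code&gt; is illustrative):&lt;/p&gt;

```yaml
# Sketch: a Pod that asks a non-default scheduler to place it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-custom-scheduler
spec:
  schedulerName: my-scheduler   # must match the name a running scheduler registers with
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

&lt;p&gt;If no scheduler with that name is running, the Pod stays &lt;code&gt;Pending&lt;/code&gt;.&lt;/p&gt;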
&lt;p&gt;A detailed description of how to implement a scheduler is outside the scope of this
document. Please refer to the kube-scheduler implementation in
&lt;a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler"&gt;pkg/scheduler&lt;/a&gt;
in the Kubernetes source directory for a canonical example.&lt;/p&gt;</description></item><item><title>Connecting Applications with Services</title><link>https://andygol-k8s.netlify.app/docs/tutorials/services/connect-applications-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/services/connect-applications-service/</guid><description>&lt;!-- overview --&gt;
&lt;h2 id="the-kubernetes-model-for-connecting-containers"&gt;The Kubernetes model for connecting containers&lt;/h2&gt;
&lt;p&gt;Now that you have a continuously running, replicated application, you can expose it on a network.&lt;/p&gt;
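&lt;p&gt;Exposing the application usually means creating a Service that selects its Pods; a minimal sketch (the label &lt;code&gt;run: my-nginx&lt;/code&gt; and port numbers are illustrative):&lt;/p&gt;

```yaml
# Sketch: a Service routing cluster traffic on port 80
# to Pods carrying the matching label.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    run: my-nginx     # must match the Pod template's labels
  ports:
  - protocol: TCP
    port: 80          # the Service's own port
    targetPort: 80    # the container port traffic is forwarded to
```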
&lt;p&gt;Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on.
Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly
create links between pods or map container ports to host ports. This means that containers within
a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other
without NAT. The rest of this document elaborates on how you can run reliable services on such a
networking model.&lt;/p&gt;</description></item><item><title>Container Environment</title><link>https://andygol-k8s.netlify.app/docs/concepts/containers/container-environment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/containers/container-environment/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes the resources available to Containers in the Container environment.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="container-environment"&gt;Container environment&lt;/h2&gt;
&lt;p&gt;The Kubernetes Container environment provides several important resources to Containers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A filesystem, which is a combination of an &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/containers/images/"&gt;image&lt;/a&gt; and one or more &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volumes&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Information about the Container itself.&lt;/li&gt;
&lt;li&gt;Information about other objects in the cluster.&lt;/li&gt;
&lt;/ul&gt;
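&lt;p&gt;Information about the Pod and Container can be surfaced to a process through the downward API; a minimal sketch (names are illustrative):&lt;/p&gt;

```yaml
# Sketch: expose the Pod's own name to the container
# as an environment variable via a fieldRef.
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "printenv POD_NAME; sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # resolved by the kubelet at startup
```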
&lt;h3 id="container-information"&gt;Container information&lt;/h3&gt;
&lt;p&gt;The &lt;em&gt;hostname&lt;/em&gt; of a Container is the name of the Pod in which the Container is running.
It is available through the &lt;code&gt;hostname&lt;/code&gt; command or the
&lt;a href="https://man7.org/linux/man-pages/man2/gethostname.2.html"&gt;&lt;code&gt;gethostname&lt;/code&gt;&lt;/a&gt;
function call in libc.&lt;/p&gt;</description></item><item><title>Container Runtimes</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/</guid><description>&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout note" role="note"&gt;
 &lt;strong&gt;Note:&lt;/strong&gt; Dockershim has been removed from the Kubernetes project as of release 1.24. Read the &lt;a href="https://andygol-k8s.netlify.app/dockershim"&gt;Dockershim Removal FAQ&lt;/a&gt; for further details.
&lt;/div&gt;
&lt;p&gt;You need to install a
&lt;a class='glossary-tooltip' title='The container runtime is the software that is responsible for running containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;
into each node in the cluster so that Pods can run there. This page outlines
what is involved and describes related tasks for setting up nodes.&lt;/p&gt;</description></item><item><title>Contributing to the Upstream Kubernetes Code</title><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/contribute-upstream/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/contribute-upstream/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to contribute to the upstream &lt;code&gt;kubernetes/kubernetes&lt;/code&gt; project.
You can fix bugs found in the Kubernetes API documentation or the content of
the Kubernetes components such as &lt;code&gt;kubeadm&lt;/code&gt;, &lt;code&gt;kube-apiserver&lt;/code&gt;, and &lt;code&gt;kube-controller-manager&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If you instead want to regenerate the reference documentation for the Kubernetes
API or the &lt;code&gt;kube-*&lt;/code&gt; components from the upstream code, see the following instructions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-api/"&gt;Generating Reference Documentation for the Kubernetes API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-components/"&gt;Generating Reference Documentation for the Kubernetes Components and Tools&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need to have these tools installed:&lt;/p&gt;</description></item><item><title>Debug Services</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-service/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;An issue that comes up rather frequently for new installations of Kubernetes is
that a Service is not working properly. You've run your Pods through a
Deployment (or other workload controller) and created a Service, but you
get no response when you try to access it. This document will hopefully help
you to figure out what's going wrong.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="running-commands-in-a-pod"&gt;Running commands in a Pod&lt;/h2&gt;
&lt;p&gt;For many steps here you will want to see what a Pod running in the cluster
sees. The simplest way to do this is to run an interactive busybox Pod:&lt;/p&gt;</description></item><item><title>Declarative API Validation</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/declarative-validation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/declarative-validation/</guid><description>&lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.33 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Kubernetes 1.35 includes optional &lt;em&gt;declarative validation&lt;/em&gt; for APIs. When enabled, the Kubernetes API server can use this mechanism rather than the legacy approach that relies on hand-written Go
code (&lt;code&gt;validation.go&lt;/code&gt; files) to ensure that requests against the API are valid.
Kubernetes developers, and people &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/"&gt;extending the Kubernetes API&lt;/a&gt;,
can define validation rules directly alongside the API type definitions (&lt;code&gt;types.go&lt;/code&gt; files). Code authors define
special comment tags (e.g., &lt;code&gt;+k8s:minimum=0&lt;/code&gt;). A code generator (&lt;code&gt;validation-gen&lt;/code&gt;) then uses these tags to produce
optimized Go code for API validation.&lt;/p&gt;</description></item><item><title>Declarative Management of Kubernetes Objects Using Kustomize</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/kustomization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/kustomization/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/kustomize"&gt;Kustomize&lt;/a&gt; is a standalone tool
to customize Kubernetes objects
through a &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization"&gt;kustomization file&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Since 1.14, kubectl also
supports the management of Kubernetes objects using a kustomization file.
To view resources found in a directory containing a kustomization file, run the following command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl kustomize &amp;lt;kustomization_directory&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;To apply those resources, run &lt;code&gt;kubectl apply&lt;/code&gt; with &lt;code&gt;--kustomize&lt;/code&gt; or &lt;code&gt;-k&lt;/code&gt; flag:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl apply -k &amp;lt;kustomization_directory&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Install &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Define Dependent Environment Variables</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-interdependent-environment-variables/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-interdependent-environment-variables/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to define dependent environment variables for a container
in a Kubernetes Pod.&lt;/p&gt;
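&lt;p&gt;A dependent variable references earlier variables in the same &lt;code&gt;env&lt;/code&gt; list using &lt;code&gt;$(VAR_NAME)&lt;/code&gt; syntax; a minimal sketch (names and values are illustrative):&lt;/p&gt;

```yaml
# Sketch: SERVICE_URL is composed from two variables
# defined earlier in the same env list.
apiVersion: v1
kind: Pod
metadata:
  name: dependent-envars-demo
spec:
  containers:
  - name: demo
    image: busybox:1.36
    command: ["sh", "-c", "printenv SERVICE_URL; sleep 3600"]
    env:
    - name: SERVICE_IP
      value: "172.17.0.1"
    - name: SERVICE_PORT
      value: "80"
    - name: SERVICE_URL
      value: "http://$(SERVICE_IP):$(SERVICE_PORT)"   # expanded by Kubernetes
```

&lt;p&gt;Order matters: a &lt;code&gt;$(VAR)&lt;/code&gt; reference to a variable defined later in the list is left unexpanded.&lt;/p&gt;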
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Define Environment Variables for a Container</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-environment-variable-container/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-environment-variable-container/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to define environment variables for a container
in a Kubernetes Pod.&lt;/p&gt;
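&lt;p&gt;A minimal sketch of a Pod that defines an environment variable for its container (the name &lt;code&gt;DEMO_GREETING&lt;/code&gt; is illustrative):&lt;/p&gt;

```yaml
# Sketch: a single literal environment variable set in the Pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
  - name: envar-demo-container
    image: busybox:1.36
    command: ["sh", "-c", "printenv DEMO_GREETING; sleep 3600"]
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
```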
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Device Plugins</title><link>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Kubernetes provides a device plugin framework that you can use to advertise system hardware
resources to the &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt;.&lt;/p&gt;
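&lt;p&gt;Once a device plugin advertises a resource to the kubelet, Pods request it like any other resource; a minimal sketch (the resource name &lt;code&gt;vendor.example/gpu&lt;/code&gt; is hypothetical):&lt;/p&gt;

```yaml
# Sketch: requesting one unit of an extended resource
# advertised by a device plugin. For extended resources,
# the request defaults to the limit, so only limits is needed.
apiVersion: v1
kind: Pod
metadata:
  name: device-demo
spec:
  containers:
  - name: app
    image: registry.example/device-app:1.0   # illustrative image
    resources:
      limits:
        vendor.example/gpu: 1
```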
&lt;p&gt;Instead of customizing the code for Kubernetes itself, vendors can implement a
device plugin that you deploy either manually or as a &lt;a class='glossary-tooltip' title='Ensures a copy of a Pod is running across a set of nodes in a cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;.
The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters,
and other similar computing resources that may require vendor-specific initialization
and setup.&lt;/p&gt;</description></item><item><title>Documenting a feature for a release</title><link>https://andygol-k8s.netlify.app/docs/contribute/new-content/new-features/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/new-content/new-features/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Each major Kubernetes release introduces new features that require documentation.
New releases also bring updates to existing features and documentation
(such as upgrading a feature from alpha to beta).&lt;/p&gt;
&lt;p&gt;Generally, the SIG responsible for a feature submits draft documentation of the
feature as a pull request to the appropriate development branch of the
&lt;code&gt;kubernetes/website&lt;/code&gt; repository, and someone on the SIG Docs team provides
editorial feedback or edits the draft directly. This section covers the branching
conventions and process used during a release by both groups.&lt;/p&gt;</description></item><item><title>Example: Deploying PHP Guestbook application with Redis</title><link>https://andygol-k8s.netlify.app/docs/tutorials/stateless-application/guestbook/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/stateless-application/guestbook/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial shows you how to build and deploy a simple &lt;em&gt;(not production
ready)&lt;/em&gt;, multi-tier web application using Kubernetes and
&lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;. This example consists of the following
components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A single-instance &lt;a href="https://www.redis.io/"&gt;Redis&lt;/a&gt; to store guestbook entries&lt;/li&gt;
&lt;li&gt;Multiple web frontend instances&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Start up a Redis leader.&lt;/li&gt;
&lt;li&gt;Start up two Redis followers.&lt;/li&gt;
&lt;li&gt;Start up the guestbook frontend.&lt;/li&gt;
&lt;li&gt;Expose and view the Frontend Service.&lt;/li&gt;
&lt;li&gt;Clean up.&lt;/li&gt;
&lt;/ul&gt;
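&lt;p&gt;As a sketch, the Redis leader from the objectives above is typically a single-replica Deployment (image tag and labels are illustrative, not this tutorial's exact manifest):&lt;/p&gt;

```yaml
# Sketch: one Redis leader Pod managed by a Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: leader
  template:
    metadata:
      labels:
        app: redis
        role: leader
    spec:
      containers:
      - name: leader
        image: redis:6.0
        ports:
        - containerPort: 6379   # default Redis port
```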
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Example: Deploying WordPress and MySQL with Persistent Volumes</title><link>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial shows you how to deploy a WordPress site and a MySQL database using
Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.&lt;/p&gt;
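&lt;p&gt;Each application requests its storage through a PersistentVolumeClaim, explained below; a minimal sketch (the name and size are illustrative):&lt;/p&gt;

```yaml
# Sketch: a claim for 20Gi of storage mountable by a single node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
  - ReadWriteOnce       # mounted read-write by one node at a time
  resources:
    requests:
      storage: 20Gi
```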
&lt;p&gt;A &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt; (PV) is a piece
of storage in the cluster that has been manually provisioned by an administrator,
or dynamically provisioned by Kubernetes using a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/"&gt;StorageClass&lt;/a&gt;.
A &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims"&gt;PersistentVolumeClaim&lt;/a&gt; (PVC)
is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and
PersistentVolumeClaims are independent from Pod lifecycles and preserve data through
restarting, rescheduling, and even deleting Pods.&lt;/p&gt;</description></item><item><title>Extend the Kubernetes API with CustomResourceDefinitions</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to install a
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;custom resource&lt;/a&gt;
into the Kubernetes API by creating a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubernetes-api/v1.35/#customresourcedefinition-v1-apiextensions-k8s-io"&gt;CustomResourceDefinition&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Reviewing for approvers and reviewers</title><link>https://andygol-k8s.netlify.app/docs/contribute/review/for-approvers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/review/for-approvers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;SIG Docs &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/#reviewers"&gt;Reviewers&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/#approvers"&gt;Approvers&lt;/a&gt; do a few extra things
when reviewing a change.&lt;/p&gt;
&lt;p&gt;Every week a specific docs approver volunteers to triage and review pull requests.
This person is the &amp;quot;PR Wrangler&amp;quot; for the week. See the
&lt;a href="https://github.com/kubernetes/website/wiki/PR-Wranglers"&gt;PR Wrangler scheduler&lt;/a&gt;
for more information. To become a PR Wrangler, attend the weekly SIG Docs meeting
and volunteer. Even if you are not on the schedule for the current week, you can
still review pull requests (PRs) that are not already under active review.&lt;/p&gt;</description></item><item><title>Issue Wranglers</title><link>https://andygol-k8s.netlify.app/docs/contribute/participate/issue-wrangler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/participate/issue-wrangler/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Alongside the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/pr-wranglers/"&gt;PR Wrangler&lt;/a&gt;, formal approvers,
reviewers and members of SIG Docs take week-long shifts
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/review/for-approvers/#triage-and-categorize-issues"&gt;triaging and categorising issues&lt;/a&gt;
for the repository.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="duties"&gt;Duties&lt;/h2&gt;
&lt;p&gt;Each day during a week-long shift, the Issue Wrangler is responsible for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Triaging and tagging incoming issues daily. See
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/review/for-approvers/#triage-and-categorize-issues"&gt;Triage and categorize issues&lt;/a&gt;
for guidelines on how SIG Docs uses metadata.&lt;/li&gt;
&lt;li&gt;Keeping an eye on stale &amp;amp; rotten issues within the kubernetes/website repository.&lt;/li&gt;
&lt;li&gt;Maintenance of the &lt;a href="https://github.com/orgs/kubernetes/projects/72/views/1"&gt;Issues board&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Must be an active member of the Kubernetes organization.&lt;/li&gt;
&lt;li&gt;A minimum of 15 &lt;a href="https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits"&gt;non-trivial&lt;/a&gt;
contributions to Kubernetes (some of which should be directed towards kubernetes/website).&lt;/li&gt;
&lt;li&gt;Performing the role in an informal capacity already.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="helpful-prow-commands-for-wranglers"&gt;Helpful Prow commands for wranglers&lt;/h2&gt;
&lt;p&gt;Below are some commonly used commands for Issue Wranglers:&lt;/p&gt;</description></item><item><title>kubeadm init</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This command initializes a Kubernetes control plane node.&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run this command in order to set up the Kubernetes control plane&lt;/p&gt;</description></item><item><title>kubectl Commands</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl-cmds/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl-cmds/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl/"&gt;kubectl Command Reference&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Kubernetes API Aggregation Layer</title><link>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is
offered by the core Kubernetes APIs.
The additional APIs can either be ready-made solutions such as a
&lt;a href="https://github.com/kubernetes-sigs/metrics-server"&gt;metrics server&lt;/a&gt;, or APIs that you develop yourself.&lt;/p&gt;
&lt;p&gt;The aggregation layer is different from
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;Custom Resource Definitions&lt;/a&gt;,
which are a way to make the &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='kube-apiserver'&gt;kube-apiserver&lt;/a&gt;
recognise new kinds of object.&lt;/p&gt;</description></item><item><title>Kubernetes API Concepts</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/api-concepts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/api-concepts/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes API is a resource-based (RESTful) programmatic interface
provided via HTTP. It supports retrieving, creating, updating, and deleting
primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE,
GET).&lt;/p&gt;
&lt;p&gt;For some resources, the API includes additional subresources that allow
fine-grained authorization (such as separate views for Pod details and
log retrievals), and can accept and serve those resources in different
representations for convenience or efficiency.&lt;/p&gt;</description></item><item><title>Kubernetes Object Management</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/object-management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/object-management/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The &lt;code&gt;kubectl&lt;/code&gt; command-line tool supports several different ways to create and manage
Kubernetes &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt;. This document provides an overview of the different
approaches. Read the &lt;a href="https://kubectl.docs.kubernetes.io"&gt;Kubectl book&lt;/a&gt; for
details of managing objects by Kubectl.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="management-techniques"&gt;Management techniques&lt;/h2&gt;
&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;Warning:&lt;/h4&gt;A Kubernetes object should be managed using only one technique. Mixing
and matching techniques for the same object results in undefined behavior.&lt;/div&gt;

&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Management technique&lt;/th&gt;
 &lt;th&gt;Operates on&lt;/th&gt;
 &lt;th&gt;Recommended environment&lt;/th&gt;
 &lt;th&gt;Supported writers&lt;/th&gt;
 &lt;th&gt;Learning curve&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Imperative commands&lt;/td&gt;
 &lt;td&gt;Live objects&lt;/td&gt;
 &lt;td&gt;Development projects&lt;/td&gt;
 &lt;td&gt;1+&lt;/td&gt;
 &lt;td&gt;Lowest&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Imperative object configuration&lt;/td&gt;
 &lt;td&gt;Individual files&lt;/td&gt;
 &lt;td&gt;Production projects&lt;/td&gt;
 &lt;td&gt;1&lt;/td&gt;
 &lt;td&gt;Moderate&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Declarative object configuration&lt;/td&gt;
 &lt;td&gt;Directories of files&lt;/td&gt;
 &lt;td&gt;Production projects&lt;/td&gt;
 &lt;td&gt;1+&lt;/td&gt;
 &lt;td&gt;Highest&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="imperative-commands"&gt;Imperative commands&lt;/h2&gt;
&lt;p&gt;When using imperative commands, a user operates directly on live objects
in a cluster. The user provides operations to
the &lt;code&gt;kubectl&lt;/code&gt; command as arguments or flags.&lt;/p&gt;</description></item><item><title>Kubernetes Security and Disclosure Information</title><link>https://andygol-k8s.netlify.app/docs/reference/issues-security/security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/issues-security/security/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes Kubernetes security and disclosure information.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="security-announcements"&gt;Security Announcements&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-security-announce"&gt;kubernetes-security-announce&lt;/a&gt;
group for emails about security and major API announcements.&lt;/p&gt;
&lt;h2 id="report-a-vulnerability"&gt;Report a Vulnerability&lt;/h2&gt;
&lt;p&gt;We're extremely grateful for security researchers and users that report vulnerabilities to
the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.&lt;/p&gt;
&lt;p&gt;To make a report, submit your vulnerability to the &lt;a href="https://hackerone.com/kubernetes"&gt;Kubernetes bug bounty program&lt;/a&gt;.
This allows triage and handling of the vulnerability with standardized response times.&lt;/p&gt;</description></item><item><title>Managing Secrets using Configuration File</title><link>https://andygol-k8s.netlify.app/docs/tasks/configmap-secret/managing-secret-using-config-file/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configmap-secret/managing-secret-using-config-file/</guid><description>&lt;!-- overview --&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Monitor Node Health</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/monitor-node-health/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/monitor-node-health/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;em&gt;Node Problem Detector&lt;/em&gt; is a daemon for monitoring and reporting about a node's health.
You can run Node Problem Detector as a &lt;code&gt;DaemonSet&lt;/code&gt; or as a standalone daemon.
Node Problem Detector collects information about node problems from various daemons
and reports these conditions to the API server as Node &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/#condition"&gt;Condition&lt;/a&gt;s
or as &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/event-v1/"&gt;Event&lt;/a&gt;s.&lt;/p&gt;
&lt;p&gt;To learn how to install and use Node Problem Detector, see
&lt;a href="https://github.com/kubernetes/node-problem-detector"&gt;Node Problem Detector project documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Perform a Rollback on a DaemonSet</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/rollback-daemon-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/rollback-daemon-set/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to perform a rollback on a &lt;a class='glossary-tooltip' title='Ensures a copy of a Pod is running across a set of nodes in a cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Persistent Volumes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes &lt;em&gt;persistent volumes&lt;/em&gt; in Kubernetes. Familiarity with
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volumes&lt;/a&gt;, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/"&gt;StorageClasses&lt;/a&gt;
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volume-attributes-classes/"&gt;VolumeAttributesClasses&lt;/a&gt; is suggested.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Managing storage is a distinct problem from managing compute instances.
The PersistentVolume subsystem provides an API for users and administrators
that abstracts details of how storage is provided from how it is consumed.
To do this, we introduce two new API resources: PersistentVolume and PersistentVolumeClaim.&lt;/p&gt;
&lt;p&gt;A &lt;em&gt;PersistentVolume&lt;/em&gt; (PV) is a piece of storage in the cluster that has been
provisioned by an administrator or dynamically provisioned using
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/"&gt;Storage Classes&lt;/a&gt;. It is a resource in
the cluster just like a node is a cluster resource. PVs are volume plugins like
Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
This API object captures the details of the implementation of the storage, be that
NFS, iSCSI, or a cloud-provider-specific storage system.&lt;/p&gt;</description></item><item><title>Pod Security Admission</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-admission/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-admission/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;The Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt; define
different isolation levels for Pods. These standards let you define how you want to restrict the
behavior of pods in a clear, consistent fashion.&lt;/p&gt;
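For instance, a namespace can opt in to one of these standards via labels; a minimal sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                # illustrative namespace
  labels:
    # Enforce the "restricted" Pod Security Standard for Pods in this namespace.
    pod-security.kubernetes.io/enforce: restricted
    # Also warn on violations of the same level (useful during migration).
    pod-security.kubernetes.io/warn: restricted
```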
&lt;p&gt;Kubernetes offers a built-in &lt;em&gt;Pod Security&lt;/em&gt; &lt;a class='glossary-tooltip' title='A piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='admission controller'&gt;admission controller&lt;/a&gt; to enforce the Pod Security Standards. Pod security restrictions
are applied at the &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt; level when pods are
created.&lt;/p&gt;</description></item><item><title>PR wranglers</title><link>https://andygol-k8s.netlify.app/docs/contribute/participate/pr-wranglers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/participate/pr-wranglers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;SIG Docs &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/roles-and-responsibilities/#approvers"&gt;approvers&lt;/a&gt;
take week-long shifts &lt;a href="https://github.com/kubernetes/website/wiki/PR-Wranglers"&gt;managing pull requests&lt;/a&gt;
for the repository.&lt;/p&gt;
&lt;p&gt;This section covers the duties of a PR wrangler. For more information on giving good reviews,
see &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/review/"&gt;Reviewing changes&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="duties"&gt;Duties&lt;/h2&gt;
&lt;p&gt;Each day in a week-long shift as PR Wrangler:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Review &lt;a href="https://github.com/kubernetes/website/pulls"&gt;open pull requests&lt;/a&gt; for quality
and adherence to the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/"&gt;Style&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/"&gt;Content&lt;/a&gt; guides.
&lt;ul&gt;
&lt;li&gt;Start with the smallest PRs (&lt;code&gt;size/XS&lt;/code&gt;) first, and end with the largest (&lt;code&gt;size/XXL&lt;/code&gt;).
Review as many PRs as you can.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Make sure PR contributors sign the &lt;a href="https://github.com/kubernetes/community/blob/master/CLA.md"&gt;CLA&lt;/a&gt;.
&lt;ul&gt;
&lt;li&gt;Use &lt;a href="https://github.com/zparnold/k8s-docs-pr-botherer"&gt;this&lt;/a&gt; script to remind contributors
who haven't signed the CLA to do so.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Provide feedback on changes and ask for technical reviews from members of other SIGs.
&lt;ul&gt;
&lt;li&gt;Provide inline suggestions on the PR for the proposed content changes.&lt;/li&gt;
&lt;li&gt;If you need to verify content, comment on the PR and request more details.&lt;/li&gt;
&lt;li&gt;Assign relevant &lt;code&gt;sig/&lt;/code&gt; label(s).&lt;/li&gt;
&lt;li&gt;If needed, assign reviewers from the &lt;code&gt;reviewers:&lt;/code&gt; block in the file's front matter.&lt;/li&gt;
&lt;li&gt;You can also tag a &lt;a href="https://github.com/kubernetes/community/blob/master/sig-list.md"&gt;SIG&lt;/a&gt;
for a review by commenting &lt;code&gt;@kubernetes/&amp;lt;sig&amp;gt;-pr-reviews&lt;/code&gt; on the PR.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Use the &lt;code&gt;/approve&lt;/code&gt; comment to approve a PR for merging. Merge the PR when ready.
&lt;ul&gt;
&lt;li&gt;PRs should have a &lt;code&gt;/lgtm&lt;/code&gt; comment from another member before merging.&lt;/li&gt;
&lt;li&gt;Consider accepting technically accurate content that doesn't meet the
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/"&gt;style guidelines&lt;/a&gt;. As you approve the change,
open a new issue to address the style concern. You can usually write these style fix
issues as &lt;a href="https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue"&gt;good first issues&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Using style fixups as good first issues is a good way to ensure a supply of easier tasks
to help onboard new contributors.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Also check for pull requests against the &lt;a href="https://github.com/kubernetes-sigs/reference-docs"&gt;reference docs generator&lt;/a&gt;
code, and review those (or bring in help).&lt;/li&gt;
&lt;li&gt;Support the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/issue-wrangler/"&gt;issue wrangler&lt;/a&gt; to
triage and tag incoming issues daily.
See &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/review/for-approvers/#triage-and-categorize-issues"&gt;Triage and categorize issues&lt;/a&gt;
for guidelines on how SIG Docs uses metadata.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;PR wrangler duties do not apply to localization PRs (non-English PRs).
Localization teams have their own processes and teams for reviewing their language PRs.
However, it's often helpful to ensure language PRs are labeled correctly,
review small non-language dependent PRs (like a link update),
or tag reviewers or contributors in long-running PRs
(ones opened more than 6 months ago that have not been updated in a month or more).&lt;/div&gt;

&lt;h3 id="helpful-github-queries-for-wranglers"&gt;Helpful GitHub queries for wranglers&lt;/h3&gt;
&lt;p&gt;The following queries are helpful when wrangling.
After working through these queries, the remaining list of PRs to review is usually small.
These queries exclude localization PRs. All queries are against the main branch except the last one.&lt;/p&gt;</description></item><item><title>ReplicaSet</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
used to guarantee the availability of a specified number of identical Pods.&lt;/p&gt;
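A minimal ReplicaSet manifest (a sketch; the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend              # illustrative name
spec:
  replicas: 3                 # desired number of identical Pods
  selector:
    matchLabels:
      app: frontend           # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative image
```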
&lt;!-- body --&gt;
&lt;h2 id="how-a-replicaset-works"&gt;How a ReplicaSet works&lt;/h2&gt;
&lt;p&gt;A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number
of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods
it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating
and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
template.&lt;/p&gt;</description></item><item><title>Resource Quotas</title><link>https://andygol-k8s.netlify.app/docs/concepts/policy/resource-quotas/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/policy/resource-quotas/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Resource quotas&lt;/em&gt; are a tool for administrators to address this concern.&lt;/p&gt;
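A minimal ResourceQuota sketch (the name, namespace, and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # illustrative name
  namespace: team-a           # illustrative namespace
spec:
  hard:
    requests.cpu: "4"         # cap on total CPU requested by all Pods
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                # object-count quota by API kind
```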
&lt;p&gt;A resource quota, defined by a ResourceQuota object, provides constraints that limit
aggregate resource consumption per &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;. A ResourceQuota can also
limit the &lt;a href="#quota-on-object-count"&gt;quantity of objects that can be created in a namespace&lt;/a&gt; by API kind, as well as the total
amount of &lt;a class='glossary-tooltip' title='A defined amount of infrastructure available for consumption (CPU, memory, etc).' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-infrastructure-resource' target='_blank' aria-label='infrastructure resources'&gt;infrastructure resources&lt;/a&gt; that may be consumed by
API objects found in that namespace.&lt;/p&gt;</description></item><item><title>Run a Single-Instance Stateful Application</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/run-single-instance-stateful-application/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/run-single-instance-stateful-application/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows you how to run a single-instance stateful application
in Kubernetes using a PersistentVolume and a Deployment. The
application is MySQL.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Create a PersistentVolume referencing a disk in your environment.&lt;/li&gt;
&lt;li&gt;Create a MySQL Deployment.&lt;/li&gt;
&lt;li&gt;Expose MySQL to other pods in the cluster at a known DNS name.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Running in multiple zones</title><link>https://andygol-k8s.netlify.app/docs/setup/best-practices/multiple-zones/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/best-practices/multiple-zones/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes running Kubernetes across multiple zones.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Kubernetes is designed so that a single Kubernetes cluster can run
across multiple failure zones, typically where these zones fit within
a logical grouping called a &lt;em&gt;region&lt;/em&gt;. Major cloud providers define a region
as a set of failure zones (also called &lt;em&gt;availability zones&lt;/em&gt;) that provide
a consistent set of features: within a region, each zone offers the same
APIs and services.&lt;/p&gt;</description></item><item><title>Scheduler Configuration</title><link>https://andygol-k8s.netlify.app/docs/reference/scheduling/config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/scheduling/config/</guid><description>&lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;You can customize the behavior of the &lt;code&gt;kube-scheduler&lt;/code&gt; by writing a configuration
file and passing its path as a command line argument.&lt;/p&gt;
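A minimal KubeSchedulerConfiguration sketch (the kubeconfig path and the disabled plugin are illustrative choices, not requirements):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf     # illustrative path
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation  # example: disable one scoring plugin
```

The file is then passed to the scheduler as, for example, `kube-scheduler --config=/etc/kubernetes/scheduler-config.yaml` (path illustrative).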
&lt;!-- overview --&gt;
&lt;!-- body --&gt;
&lt;p&gt;A scheduling Profile allows you to configure the different stages of scheduling
in the &lt;a class='glossary-tooltip' title='Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='kube-scheduler'&gt;kube-scheduler&lt;/a&gt;.
Each stage is exposed in an extension point. Plugins provide scheduling behaviors
by implementing one or more of these extension points.&lt;/p&gt;</description></item><item><title>Kubernetes Component SLI Metrics</title><link>https://andygol-k8s.netlify.app/docs/reference/instrumentation/slis/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/instrumentation/slis/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: ComponentSLIs"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.32 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;By default, Kubernetes 1.35 publishes Service Level Indicator (SLI) metrics
for each Kubernetes component binary. This metric endpoint is exposed on the serving
HTTPS port of each component, at the path &lt;code&gt;/metrics/slis&lt;/code&gt;. The
&lt;code&gt;ComponentSLIs&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/"&gt;feature gate&lt;/a&gt;
has been enabled by default for each Kubernetes component since v1.27.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="sli-metrics"&gt;SLI Metrics&lt;/h2&gt;
&lt;p&gt;With SLI metrics enabled, each Kubernetes component exposes two metrics,
labeled per healthcheck:&lt;/p&gt;</description></item><item><title>Suggesting content improvements</title><link>https://andygol-k8s.netlify.app/docs/contribute/suggesting-improvements/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/suggesting-improvements/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;If you notice an issue with Kubernetes documentation or have an idea for new content, then open an issue. All you need is a &lt;a href="https://github.com/join"&gt;GitHub account&lt;/a&gt; and a web browser.&lt;/p&gt;
&lt;p&gt;In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors
then review, categorize and tag issues as needed. Next, you or another member
of the Kubernetes community open a pull request with changes to resolve the issue.&lt;/p&gt;</description></item><item><title>Troubleshooting kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;As with any program, you might run into an error installing or running kubeadm.
This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.&lt;/p&gt;
&lt;p&gt;If your problem is not listed below, please take the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If you think your problem is a bug with kubeadm:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Go to &lt;a href="https://github.com/kubernetes/kubeadm/issues"&gt;github.com/kubernetes/kubeadm&lt;/a&gt; and search for existing issues.&lt;/li&gt;
&lt;li&gt;If no issue exists, please &lt;a href="https://github.com/kubernetes/kubeadm/issues/new"&gt;open one&lt;/a&gt; and follow the issue template.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you are unsure about how kubeadm works, you can ask on &lt;a href="https://slack.k8s.io/"&gt;Slack&lt;/a&gt; in &lt;code&gt;#kubeadm&lt;/code&gt;,
or open a question on &lt;a href="https://stackoverflow.com/questions/tagged/kubernetes"&gt;StackOverflow&lt;/a&gt;. Please include
relevant tags like &lt;code&gt;#kubernetes&lt;/code&gt; and &lt;code&gt;#kubeadm&lt;/code&gt; so folks can help you.&lt;/p&gt;</description></item><item><title>Updating Configuration via a ConfigMap</title><link>https://andygol-k8s.netlify.app/docs/tutorials/configuration/updating-configuration-via-a-configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/configuration/updating-configuration-via-a-configmap/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides a step-by-step example of updating configuration within a Pod via a ConfigMap
and builds upon the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;Configure a Pod to Use a ConfigMap&lt;/a&gt; task.&lt;br&gt;
At the end of this tutorial, you will understand how to change the configuration for a running application.&lt;br&gt;
This tutorial uses the &lt;code&gt;alpine&lt;/code&gt; and &lt;code&gt;nginx&lt;/code&gt; images as examples.&lt;/p&gt;
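A minimal ConfigMap of the kind such a tutorial updates (the name and key are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config        # illustrative name
data:
  color: red                  # a simple key/value a Pod can consume
```

Changing a value under `data` and re-applying the manifest with `kubectl apply -f` is one way such a configuration update begins.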
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Use Calico for NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows a couple of quick ways to create a Calico cluster on Kubernetes.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Decide whether you want to deploy a &lt;a href="#creating-a-calico-cluster-with-google-kubernetes-engine-gke"&gt;cloud&lt;/a&gt; or &lt;a href="#creating-a-local-calico-cluster-with-kubeadm"&gt;local&lt;/a&gt; cluster.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="creating-a-calico-cluster-with-google-kubernetes-engine-gke"&gt;Creating a Calico cluster with Google Kubernetes Engine (GKE)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;: &lt;a href="https://cloud.google.com/sdk/docs/quickstarts"&gt;gcloud&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To launch a GKE cluster with Calico, include the &lt;code&gt;--enable-network-policy&lt;/code&gt; flag.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Syntax&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;gcloud container clusters create &lt;span style="color:#666"&gt;[&lt;/span&gt;CLUSTER_NAME&lt;span style="color:#666"&gt;]&lt;/span&gt; --enable-network-policy
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;gcloud container clusters create my-calico-cluster --enable-network-policy
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To verify the deployment, use the following command.&lt;/p&gt;</description></item><item><title>Projected Volumes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/projected-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/projected-volumes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes &lt;em&gt;projected volumes&lt;/em&gt; in Kubernetes. Familiarity with &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volumes&lt;/a&gt; is suggested.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A &lt;code&gt;projected&lt;/code&gt; volume maps several existing volume sources into the same directory.&lt;/p&gt;
&lt;p&gt;Currently, the following types of volume sources can be projected:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#secret"&gt;&lt;code&gt;secret&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#downwardapi"&gt;&lt;code&gt;downwardAPI&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#configmap"&gt;&lt;code&gt;configMap&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#serviceaccounttoken"&gt;&lt;code&gt;serviceAccountToken&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#clustertrustbundle"&gt;&lt;code&gt;clusterTrustBundle&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#podcertificate"&gt;&lt;code&gt;podCertificate&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All sources are required to be in the same namespace as the Pod. For more details,
see the &lt;a href="https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md"&gt;all-in-one volume&lt;/a&gt; design document.&lt;/p&gt;
&lt;h3 id="example-configuration-secret-downwardapi-configmap"&gt;Example configuration with a secret, a downwardAPI, and a configMap&lt;/h3&gt;
&lt;div class="highlight code-sample"&gt;
 &lt;div class="copy-code-icon"&gt;
 &lt;a href="https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml" download="pods/storage/projected-secret-downwardapi-configmap.yaml"&gt;&lt;code&gt;pods/storage/projected-secret-downwardapi-configmap.yaml&lt;/code&gt;
 &lt;/a&gt;&lt;img src="https://andygol-k8s.netlify.app/images/copycode.svg" class="icon-copycode" onclick="copyCode('pods-storage-projected-secret-downwardapi-configmap-yaml')" title="Copy pods/storage/projected-secret-downwardapi-configmap.yaml to clipboard"&gt;&lt;/img&gt;&lt;/div&gt;
 &lt;div class="includecode" id="pods-storage-projected-secret-downwardapi-configmap-yaml"&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;v1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;Pod&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;metadata&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;volume-test&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;spec&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;containers&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;container-test&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;image&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;busybox:1.28&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;command&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;[&lt;span style="color:#b44"&gt;&amp;#34;sleep&amp;#34;&lt;/span&gt;,&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;3600&amp;#34;&lt;/span&gt;]&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;volumeMounts&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;all-in-one&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;mountPath&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;/projected-volume&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;readOnly&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#a2f;font-weight:bold"&gt;true&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;volumes&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;all-in-one&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;projected&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;sources&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;secret&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;mysecret&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;items&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;key&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;username&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;path&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;my-group/my-username&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;downwardAPI&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;items&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;path&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;labels&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;fieldRef&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;fieldPath&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;metadata.labels&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;path&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;cpu_limit&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;resourceFieldRef&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;containerName&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;container-test&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;resource&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;limits.cpu&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;configMap&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;myconfigmap&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;items&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;key&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;config&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;path&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;my-group/my-config&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="example-configuration-secrets-nondefault-permission-mode"&gt;Example configuration: secrets with a non-default permission mode set&lt;/h3&gt;
&lt;div class="highlight code-sample"&gt;
 &lt;div class="copy-code-icon"&gt;
 &lt;a href="https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml" download="pods/storage/projected-secrets-nondefault-permission-mode.yaml"&gt;&lt;code&gt;pods/storage/projected-secrets-nondefault-permission-mode.yaml&lt;/code&gt;
 &lt;/a&gt;&lt;img src="https://andygol-k8s.netlify.app/images/copycode.svg" class="icon-copycode" onclick="copyCode('pods-storage-projected-secrets-nondefault-permission-mode-yaml')" title="Copy pods/storage/projected-secrets-nondefault-permission-mode.yaml to clipboard"&gt;&lt;/img&gt;&lt;/div&gt;
 &lt;div class="includecode" id="pods-storage-projected-secrets-nondefault-permission-mode-yaml"&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;v1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;Pod&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;metadata&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;volume-test&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;spec&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;containers&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;container-test&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;image&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;busybox:1.28&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;command&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;[&lt;span style="color:#b44"&gt;&amp;#34;sleep&amp;#34;&lt;/span&gt;,&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;3600&amp;#34;&lt;/span&gt;]&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;volumeMounts&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;all-in-one&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;mountPath&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;/projected-volume&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;readOnly&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#a2f;font-weight:bold"&gt;true&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;volumes&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;all-in-one&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;projected&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;sources&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;secret&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;mysecret&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;items&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;key&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;username&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;path&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;my-group/my-username&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;secret&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;mysecret2&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;items&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;key&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;password&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;path&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;my-group/my-password&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;mode&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#666"&gt;511&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
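&lt;p&gt;The &lt;code&gt;mode: 511&lt;/code&gt; in the last entry is a decimal value; manifests use decimal here because JSON does not support octal notation, and 511 decimal corresponds to 0777 octal. A quick conversion sketch (plain Python, purely illustrative):&lt;/p&gt;

```python
# The manifest above sets mode: 511, a decimal value; 511 decimal is 0777 octal.
decimal_mode = 511
print(format(decimal_mode, "o"))  # 777

# Converting the other way: rwxrwxrwx (0o777) and r--r----- (0o440) in decimal.
assert 0o777 == 511
assert 0o440 == 288
```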
&lt;p&gt;Each projected volume source is listed in the spec under &lt;code&gt;sources&lt;/code&gt;. The
parameters are nearly the same with two exceptions:&lt;/p&gt;</description></item><item><title>Official CVE Feed</title><link>https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/index.json</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/index.json</guid><description>&lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.27 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This is a community-maintained list of official CVEs announced by
the Kubernetes Security Response Committee. See
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/security/"&gt;Kubernetes Security and Disclosure Information&lt;/a&gt;
for more details.&lt;/p&gt;
&lt;p&gt;The Kubernetes project publishes a programmatically accessible feed of published
security issues in &lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/index.json"&gt;JSON feed&lt;/a&gt;
and &lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/feed.xml"&gt;RSS feed&lt;/a&gt;
formats. You can access it by executing the following commands:&lt;/p&gt;
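&lt;p&gt;Once downloaded, the JSON feed can also be consumed programmatically. The sketch below parses a fabricated sample shaped like a JSON Feed document with an &lt;code&gt;items&lt;/code&gt; array; the CVE entry in it is made up for illustration:&lt;/p&gt;

```python
import json

# Fabricated sample shaped like the official CVE feed (JSON Feed format
# with an "items" array); real data comes from the index.json linked above.
sample = """
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Official CVE Feed",
  "items": [
    {
      "id": "CVE-2023-0000",
      "summary": "Fabricated example issue",
      "url": "https://www.cve.org/CVERecord?id=CVE-2023-0000"
    }
  ]
}
"""

feed = json.loads(sample)
for item in feed["items"]:
    print(item["id"], "-", item["summary"])
```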
&lt;ul class="nav nav-tabs" id="cve-feeds" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#cve-feeds-0" role="tab" aria-controls="cve-feeds-0" aria-selected="true"&gt;JSON feed&lt;/a&gt;&lt;/li&gt;
	 
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#cve-feeds-1" role="tab" aria-controls="cve-feeds-1"&gt;RSS feed&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;div class="tab-content" id="cve-feeds"&gt;&lt;div id="cve-feeds-0" class="tab-pane show active" role="tabpanel" aria-labelledby="cve-feeds-0"&gt;

&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/index.json"&gt;Link to JSON format&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Server-Side Apply</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/server-side-apply/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/server-side-apply/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: ServerSideApply"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.22 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;Kubernetes supports multiple appliers collaborating to manage the fields
of a single &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/"&gt;object&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Server-Side Apply provides an optional mechanism for your cluster's control plane to track
changes to an object's fields. At the level of a specific resource, Server-Side
Apply records and tracks information about control over the fields of that object.&lt;/p&gt;
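&lt;p&gt;The field-tracking idea can be sketched as a toy model: each field of an object remembers which manager applied it, and a second applier changing that field hits a conflict unless it forces ownership. This is a simplified illustration only, not the apiserver's actual implementation (which records ownership in &lt;code&gt;metadata.managedFields&lt;/code&gt;):&lt;/p&gt;

```python
# Toy model of Server-Side Apply field management (hypothetical sketch;
# the real apiserver's logic is far more involved).

def apply(obj, ownership, manager, desired, force=False):
    """Apply `desired` fields on behalf of `manager`.

    Raises on conflict if another manager already owns a field with a
    different value, unless force=True (which transfers ownership).
    """
    for field, value in desired.items():
        owner = ownership.get(field)
        if owner not in (None, manager) and obj.get(field) != value and not force:
            raise RuntimeError(f"conflict: field {field!r} is owned by {owner!r}")
        obj[field] = value
        ownership[field] = manager
    return obj

obj, owners = {}, {}
apply(obj, owners, "deploy-tool", {"replicas": 3, "image": "nginx:1.25"})
# A second applier can manage other fields of the same object...
apply(obj, owners, "autoscaler", {"minReplicas": 1})
# ...but changing a field owned by someone else raises a conflict
# unless it forces ownership.
```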
&lt;p&gt;Server-Side Apply helps users and &lt;a class='glossary-tooltip' title='A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/' target='_blank' aria-label='controllers'&gt;controllers&lt;/a&gt;
manage their resources through declarative configuration. Clients can create and modify
&lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt;
declaratively by submitting their &lt;em&gt;fully specified intent&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Service Accounts</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/service-accounts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/service-accounts/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page introduces the ServiceAccount object in Kubernetes, providing
information about how service accounts work, use cases, limitations,
alternatives, and links to resources for additional guidance.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="what-are-service-accounts"&gt;What are service accounts?&lt;/h2&gt;
&lt;p&gt;In Kubernetes, a service account is a type of non-human account that provides
a distinct identity in the cluster. Application Pods, system
components, and entities inside and outside the cluster can use a specific
ServiceAccount's credentials to identify as that ServiceAccount. This identity
is useful in various situations, including authenticating to the API server or
implementing identity-based security policies.&lt;/p&gt;</description></item><item><title>Assign Pod-level CPU and memory resources</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pod-level-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pod-level-resources/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="Feature Gate: PodLevelResources"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.34 [beta]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page shows how to specify CPU and memory resources for a Pod at pod-level in
addition to container-level resource specifications. A Kubernetes node allocates
resources to a pod based on the pod's resource requests. These requests can be
defined at the pod level or individually for containers within the pod. When
both are present, the pod-level requests take precedence.&lt;/p&gt;</description></item><item><title>Authorization</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/authorization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/authorization/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes authorization takes place following
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/authentication/"&gt;authentication&lt;/a&gt;.
Usually, a client making a request must be authenticated (logged in) before its
request can be allowed; however, Kubernetes also allows anonymous requests in
some circumstances.&lt;/p&gt;
&lt;p&gt;For an overview of how authorization fits into the wider context of API access
control, read
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/controlling-access/"&gt;Controlling Access to the Kubernetes API&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="determine-whether-a-request-is-allowed-or-denied"&gt;Authorization verdicts&lt;/h2&gt;
&lt;p&gt;Kubernetes authorization of API requests takes place within the API server.
The API server evaluates all of the request attributes against all policies,
potentially also consulting external services, and then allows or denies the
request.&lt;/p&gt;</description></item><item><title>Submitting case studies</title><link>https://andygol-k8s.netlify.app/docs/contribute/new-content/case-studies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/new-content/case-studies/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Case studies highlight how organizations are using Kubernetes to solve real-world problems. The
Kubernetes marketing team and members of the &lt;a class='glossary-tooltip' title='Cloud Native Computing Foundation' data-toggle='tooltip' data-placement='top' href='https://cncf.io/' target='_blank' aria-label='CNCF'&gt;CNCF&lt;/a&gt;
collaborate with you on all case studies.&lt;/p&gt;
&lt;p&gt;Case studies require extensive review before they're approved.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="submit-a-case-study"&gt;Submit a case study&lt;/h2&gt;
&lt;p&gt;Have a look at the source for the
&lt;a href="https://github.com/kubernetes/website/tree/main/content/en/case-studies"&gt;existing case studies&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Refer to the &lt;a href="https://github.com/cncf/foundation/blob/main/policies-guidance/case-study-guidelines.md"&gt;case study guidelines&lt;/a&gt;
and submit your request as outlined in the guidelines.&lt;/p&gt;</description></item><item><title>Client Libraries</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/client-libraries/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/client-libraries/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page contains an overview of the client libraries for using the Kubernetes
API from various programming languages.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;To write applications using the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/"&gt;Kubernetes REST API&lt;/a&gt;,
you do not need to implement the API calls and request/response types yourself.
You can use a client library for the programming language you are using.&lt;/p&gt;
&lt;p&gt;Client libraries often handle common tasks such as authentication for you.
Most client libraries can discover and use the Kubernetes Service Account to
authenticate if the API client is running inside the Kubernetes cluster, or can
understand the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/configure-access-multiple-clusters/"&gt;kubeconfig file&lt;/a&gt;
format to read the credentials and the API Server address.&lt;/p&gt;</description></item><item><title>Configure Access to Multiple Clusters</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure access to multiple clusters by using
configuration files. After your clusters, users, and contexts are defined in
one or more configuration files, you can quickly switch between clusters by using the
&lt;code&gt;kubectl config use-context&lt;/code&gt; command.&lt;/p&gt;
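&lt;p&gt;Conceptually, a kubeconfig file ties together three lists (&lt;code&gt;clusters&lt;/code&gt;, &lt;code&gt;users&lt;/code&gt;, and &lt;code&gt;contexts&lt;/code&gt;) plus a &lt;code&gt;current-context&lt;/code&gt; pointer; switching context just repoints that pointer. The sketch below models this with hypothetical entries (plain Python, not kubectl's implementation):&lt;/p&gt;

```python
# Minimal model of the kubeconfig structure that `kubectl config use-context`
# operates on. The entries are hypothetical; the top-level field names mirror
# the documented kubeconfig format.

kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {"name": "development", "cluster": {"server": "https://1.2.3.4"}},
        {"name": "production", "cluster": {"server": "https://5.6.7.8"}},
    ],
    "users": [
        {"name": "developer", "user": {}},
        {"name": "admin", "user": {}},
    ],
    "contexts": [
        {"name": "dev-frontend",
         "context": {"cluster": "development", "user": "developer"}},
        {"name": "prod-admin",
         "context": {"cluster": "production", "user": "admin"}},
    ],
    "current-context": "dev-frontend",
}

def resolve(cfg):
    """Return the (cluster, user) pair selected by current-context."""
    ctx = next(c["context"] for c in cfg["contexts"]
               if c["name"] == cfg["current-context"])
    cluster = next(c for c in cfg["clusters"] if c["name"] == ctx["cluster"])
    user = next(u for u in cfg["users"] if u["name"] == ctx["user"])
    return cluster, user

# "use-context" amounts to changing the current-context pointer:
kubeconfig["current-context"] = "prod-admin"
cluster, user = resolve(kubeconfig)
print(cluster["cluster"]["server"])  # https://5.6.7.8
```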

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;A file that is used to configure access to a cluster is sometimes called
a &lt;em&gt;kubeconfig file&lt;/em&gt;. This is a generic way of referring to configuration files.
It does not mean that there is a file named &lt;code&gt;kubeconfig&lt;/code&gt;.&lt;/div&gt;

&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;Warning:&lt;/h4&gt;Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig
file could result in malicious code execution or file exposure.
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure GMSA for Windows Pods and containers</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-gmsa/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-gmsa/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page shows how to configure
&lt;a href="https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview"&gt;Group Managed Service Accounts&lt;/a&gt; (GMSA)
for Pods and containers that will run on Windows nodes. Group Managed Service Accounts
are a specific type of Active Directory account that provides automatic password management,
simplified service principal name (SPN) management, and the ability to delegate the management
to other administrators across multiple servers.&lt;/p&gt;
&lt;p&gt;In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope
as Custom Resources. Windows Pods, as well as individual containers within a Pod,
can be configured to use a GMSA for domain-based functions (for example, Kerberos authentication)
when interacting with other Windows services.&lt;/p&gt;</description></item><item><title>Configure Minimum and Maximum Memory Constraints for a Namespace</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to set minimum and maximum values for memory used by containers
running in a &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.
You specify minimum and maximum memory values in a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/limit-range-v1/"&gt;LimitRange&lt;/a&gt;
object. If a Pod does not meet the constraints imposed by the LimitRange,
it cannot be created in the namespace.&lt;/p&gt;
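&lt;p&gt;The admission check a memory LimitRange implies can be sketched as follows; this is a deliberately simplified model (integer MiB quantities, no defaulting or limit/request ratio checks), not the real admission controller:&lt;/p&gt;

```python
# Simplified sketch of the check a LimitRange min/max implies for container
# memory. Quantities are plain integers in MiB for clarity.

def check_memory(containers, min_mi, max_mi):
    """Reject a Pod whose container memory requests/limits fall outside
    the [min, max] range, as a LimitRange would."""
    for c in containers:
        req = c.get("request_mi")
        lim = c.get("limit_mi")
        if req is not None and req < min_mi:
            return False, f"{c['name']}: request {req}Mi below minimum {min_mi}Mi"
        if lim is not None and lim > max_mi:
            return False, f"{c['name']}: limit {lim}Mi above maximum {max_mi}Mi"
    return True, "ok"

ok, reason = check_memory(
    [{"name": "app", "request_mi": 600, "limit_mi": 800}],
    min_mi=500, max_mi=1024)
print(ok, reason)  # True ok

ok, reason = check_memory(
    [{"name": "app", "request_mi": 100, "limit_mi": 800}],
    min_mi=500, max_mi=1024)
print(ok)  # False: the request is below the namespace minimum
```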
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configuring Redis using a ConfigMap</title><link>https://andygol-k8s.netlify.app/docs/tutorials/configuration/configure-redis-using-configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/configuration/configure-redis-using-configmap/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;Configure a Pod to Use a ConfigMap&lt;/a&gt; task.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Create a ConfigMap with Redis configuration values&lt;/li&gt;
&lt;li&gt;Create a Redis Pod that mounts and uses the created ConfigMap&lt;/li&gt;
&lt;li&gt;Verify that the configuration was correctly applied&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Controllers</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In robotics and automation, a &lt;em&gt;control loop&lt;/em&gt; is
a non-terminating loop that regulates the state of a system.&lt;/p&gt;
&lt;p&gt;Here is one example of a control loop: a thermostat in a room.&lt;/p&gt;
&lt;p&gt;When you set the temperature, that's telling the thermostat
about your &lt;em&gt;desired state&lt;/em&gt;. The actual room temperature is the
&lt;em&gt;current state&lt;/em&gt;. The thermostat acts to bring the current state
closer to the desired state, by turning equipment on or off.&lt;/p&gt;</description></item><item><title>Creating a cluster with kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/kubeadm-stacked-color.png" align="right" width="150px"&gt;&lt;/img&gt;
Using &lt;code&gt;kubeadm&lt;/code&gt;, you can create a minimum viable Kubernetes cluster that conforms to best practices.
In fact, you can use &lt;code&gt;kubeadm&lt;/code&gt; to set up a cluster that will pass the
&lt;a href="https://andygol-k8s.netlify.app/blog/2017/10/software-conformance-certification/"&gt;Kubernetes Conformance tests&lt;/a&gt;.
&lt;code&gt;kubeadm&lt;/code&gt; also supports other cluster lifecycle functions, such as
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/bootstrap-tokens/"&gt;bootstrap tokens&lt;/a&gt; and cluster upgrades.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;kubeadm&lt;/code&gt; tool is good if you need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A simple way for you to try out Kubernetes, possibly for the first time.&lt;/li&gt;
&lt;li&gt;A way for existing users to automate setting up a cluster and test their application.&lt;/li&gt;
&lt;li&gt;A building block in other ecosystem and/or installer tools with a larger
scope.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can install and use &lt;code&gt;kubeadm&lt;/code&gt; on various machines: your laptop, a set
of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
cloud or on-premises, you can integrate &lt;code&gt;kubeadm&lt;/code&gt; into provisioning systems such
as Ansible or Terraform.&lt;/p&gt;</description></item><item><title>Debug a StatefulSet</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-statefulset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-statefulset/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This task shows you how to debug a StatefulSet.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.&lt;/li&gt;
&lt;li&gt;You should have a StatefulSet running that you want to investigate.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="debugging-a-statefulset"&gt;Debugging a StatefulSet&lt;/h2&gt;
&lt;p&gt;To list all the Pods that belong to a StatefulSet and have the label &lt;code&gt;app.kubernetes.io/name=MyApp&lt;/code&gt; set on them,
you can use the following:&lt;/p&gt;</description></item><item><title>Debugging Kubernetes nodes with crictl</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/crictl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/crictl/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;&lt;code&gt;crictl&lt;/code&gt; is a command-line interface for CRI-compatible container runtimes.
You can use it to inspect and debug container runtimes and applications on a
Kubernetes node. &lt;code&gt;crictl&lt;/code&gt; and its source are hosted in the
&lt;a href="https://github.com/kubernetes-sigs/cri-tools"&gt;cri-tools&lt;/a&gt; repository.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;crictl&lt;/code&gt; requires a Linux operating system with a CRI runtime.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="installing-crictl"&gt;Installing crictl&lt;/h2&gt;
&lt;p&gt;You can download a compressed &lt;code&gt;crictl&lt;/code&gt; archive from the cri-tools
&lt;a href="https://github.com/kubernetes-sigs/cri-tools/releases"&gt;release page&lt;/a&gt;, for several
different architectures. Download the version that corresponds to your version
of Kubernetes. Extract it and move it to a location on your system path, such as
&lt;code&gt;/usr/local/bin/&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Define Environment Variable Values Using An Init Container</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-environment-variable-via-file/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/define-environment-variable-via-file/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="Feature Gate: EnvFiles"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [beta]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page shows how to configure environment variables for containers in a Pod via a file.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Determine the Reason for Pod Failure</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/determine-reason-pod-failure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/determine-reason-pod-failure/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to write and read a Container termination message.&lt;/p&gt;
&lt;p&gt;Termination messages provide a way for containers to write
information about fatal events to a location where it can
be easily retrieved and surfaced by tools like dashboards
and monitoring software. In most cases, information that you
put in a termination message should also be written to
the general
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/logging/"&gt;Kubernetes logs&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Ephemeral Volumes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/ephemeral-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/ephemeral-volumes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes &lt;em&gt;ephemeral volumes&lt;/em&gt; in Kubernetes. Familiarity
with &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volumes&lt;/a&gt; is suggested, in
particular PersistentVolumeClaim and PersistentVolume.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;Some applications need additional storage but don't care whether that
data is stored persistently across restarts. For example, caching
services are often limited by memory size and can move infrequently
used data into storage that is slower than memory with little impact
on overall performance.&lt;/p&gt;
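&lt;p&gt;For a cache like that, an &lt;code&gt;emptyDir&lt;/code&gt; volume is a common fit; a minimal sketch (the names and the size limit are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo                       # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9     # placeholder image
    volumeMounts:
    - mountPath: /cache                  # scratch space, discarded with the Pod
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi                   # optional cap on ephemeral usage
```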
&lt;p&gt;Other applications expect some read-only input data to be present in
files, like configuration data or secret keys.&lt;/p&gt;</description></item><item><title>Example: Deploying Cassandra with a StatefulSet</title><link>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/cassandra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/cassandra/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial shows you how to run &lt;a href="https://cassandra.apache.org/"&gt;Apache Cassandra&lt;/a&gt; on Kubernetes.
Cassandra, a database, needs persistent storage to provide data durability (application &lt;em&gt;state&lt;/em&gt;).
In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;StatefulSets&lt;/em&gt; make it easier to deploy stateful applications into your Kubernetes cluster.
For more information on the features used in this tutorial, see
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt;.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;p&gt;Cassandra and Kubernetes both use the term &lt;em&gt;node&lt;/em&gt; to mean a member of a cluster. In this
tutorial, the Pods that belong to the StatefulSet are Cassandra nodes and are members
of the Cassandra cluster (called a &lt;em&gt;ring&lt;/em&gt;). When those Pods run in your Kubernetes cluster,
the Kubernetes control plane schedules those Pods onto Kubernetes
&lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Expose Pod Information to Containers Through Environment Variables</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how a Pod can use environment variables to expose information
about itself to containers running in the Pod, using the &lt;em&gt;downward API&lt;/em&gt;.
You can use environment variables to expose Pod fields, container fields, or both.&lt;/p&gt;
&lt;p&gt;In Kubernetes, there are two ways to expose Pod and container fields to a running container:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Environment variables&lt;/em&gt;, as explained in this task&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/"&gt;Volume files&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
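&lt;p&gt;A minimal sketch of the first approach, exposing Pod fields as environment variables through &lt;code&gt;fieldRef&lt;/code&gt; (the Pod name, image, and variable names are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-demo        # illustrative name
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo $MY_POD_NAME running on $MY_NODE_NAME"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # the Pod's own name
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName   # the Node the Pod was scheduled to
  restartPolicy: Never
```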
&lt;p&gt;Together, these two ways of exposing Pod and container fields are called the
downward API.&lt;/p&gt;</description></item><item><title>Find Out What Container Runtime is Used on a Node</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page outlines steps to find out what &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;container runtime&lt;/a&gt;
the nodes in your cluster use.&lt;/p&gt;
&lt;p&gt;Depending on the way you run your cluster, the container runtime for the nodes may
have been pre-configured or you need to configure it. If you're using a managed
Kubernetes service, there might be vendor-specific ways to check what container runtime is
configured for the nodes. The method described on this page should work whenever
the execution of &lt;code&gt;kubectl&lt;/code&gt; is allowed.&lt;/p&gt;</description></item><item><title>Fine Parallel Processing Using a Work Queue</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/fine-parallel-processing-work-queue/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/fine-parallel-processing-work-queue/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In this example, you will run a Kubernetes Job that runs multiple parallel
tasks as worker processes, each running as a separate Pod.&lt;/p&gt;
&lt;p&gt;In this example, as each pod is created, it picks up one unit of work
from a task queue, processes it, and repeats until the end of the queue is reached.&lt;/p&gt;
&lt;p&gt;Here is an overview of the steps in this example:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Start a storage service to hold the work queue.&lt;/strong&gt; In this example, you will use Redis to store
work items. In the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/job/coarse-parallel-processing-work-queue/"&gt;previous example&lt;/a&gt;,
you used RabbitMQ. In this example, you will use Redis and a custom work-queue client library;
this is because AMQP does not provide a good way for clients to
detect when a finite-length work queue is empty. In practice you would set up a store such
as Redis once and reuse it for the work queues of many jobs, and other things.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create a queue, and fill it with messages.&lt;/strong&gt; Each message represents one task to be done. In
this example, a message is an integer that we will do a lengthy computation on.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Start a Job that works on tasks from the queue&lt;/strong&gt;. The Job starts several pods. Each pod takes
one task from the message queue, processes it, and repeats until the end of the queue is reached.&lt;/li&gt;
&lt;/ol&gt;
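&lt;p&gt;The Job in step 3 could be sketched like this (the Job name, worker image, and environment variable are hypothetical; the actual example uses a custom image containing the Redis work-queue client):&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2                 # illustrative name
spec:
  parallelism: 2                 # several workers consume from the shared queue
  template:
    spec:
      containers:
      - name: worker
        image: example.com/job-wq-2:latest   # hypothetical worker image
        env:
        - name: REDIS_HOST       # hypothetical variable read by the worker
          value: redis
      restartPolicy: OnFailure   # a worker exits 0 only once the queue is empty
```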
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Generate Certificates Manually</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/certificates/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;When using client certificate authentication, you can generate certificates
manually through &lt;a href="https://github.com/OpenVPN/easy-rsa"&gt;&lt;code&gt;easyrsa&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://github.com/openssl/openssl"&gt;&lt;code&gt;openssl&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://github.com/cloudflare/cfssl"&gt;&lt;code&gt;cfssl&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h3 id="easyrsa"&gt;easyrsa&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;easyrsa&lt;/strong&gt; can manually generate certificates for your cluster.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download, unpack, and initialize the patched version of &lt;code&gt;easyrsa3&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;curl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tar xzf easy-rsa.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f"&gt;cd&lt;/span&gt; easy-rsa-master/easyrsa3
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;./easyrsa init-pki
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Generate a new certificate authority (CA). &lt;code&gt;--batch&lt;/code&gt; sets automatic mode;
&lt;code&gt;--req-cn&lt;/code&gt; specifies the Common Name (CN) for the CA's new root certificate.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;./easyrsa --batch &lt;span style="color:#b44"&gt;&amp;#34;--req-cn=&lt;/span&gt;&lt;span style="color:#b68;font-weight:bold"&gt;${&lt;/span&gt;&lt;span style="color:#b8860b"&gt;MASTER_IP&lt;/span&gt;&lt;span style="color:#b68;font-weight:bold"&gt;}&lt;/span&gt;&lt;span style="color:#b44"&gt;@`date +%s`&amp;#34;&lt;/span&gt; build-ca nopass
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Generate server certificate and key.&lt;/p&gt;</description></item><item><title>Indexed Job for Parallel Processing with Static Work Assignment</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/indexed-parallel-processing-static/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/indexed-parallel-processing-static/</guid><description>&lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- overview --&gt;
&lt;p&gt;In this example, you will run a Kubernetes Job that uses multiple parallel
worker processes.
Each worker is a different container running in its own Pod. The Pods have an
&lt;em&gt;index number&lt;/em&gt; that the control plane sets automatically, which allows each Pod
to identify which part of the overall task to work on.&lt;/p&gt;
&lt;p&gt;The pod index is available in the &lt;a class='glossary-tooltip' title='A key-value pair that is used to attach arbitrary non-identifying metadata to objects.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/annotations' target='_blank' aria-label='annotation'&gt;annotation&lt;/a&gt;
&lt;code&gt;batch.kubernetes.io/job-completion-index&lt;/code&gt; as a string representing its
decimal value. In order for the containerized task process to obtain this index,
you can publish the value of the annotation using the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/downward-api/"&gt;downward API&lt;/a&gt;
mechanism.
For convenience, the control plane automatically sets the downward API to
expose the index in the &lt;code&gt;JOB_COMPLETION_INDEX&lt;/code&gt; environment variable.&lt;/p&gt;</description></item><item><title>Ingress</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;An API object that manages external access to the services in a cluster, typically HTTP.&lt;/p&gt;
&lt;p&gt;Ingress may provide load balancing, SSL termination and name-based virtual hosting.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;p&gt;The Kubernetes project recommends using &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway&lt;/a&gt; instead of
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt;.
The Ingress API has been frozen.&lt;/p&gt;
&lt;p&gt;This means that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Ingress API is generally available, and is subject to the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api"&gt;stability guarantees&lt;/a&gt; for generally available APIs.
The Kubernetes project has no plans to remove Ingress from Kubernetes.&lt;/li&gt;
&lt;li&gt;The Ingress API is no longer being developed, and will have no further changes
or updates made to it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="terminology"&gt;Terminology&lt;/h2&gt;
&lt;p&gt;For clarity, this guide defines the following terms:&lt;/p&gt;</description></item><item><title>Job with Pod-to-Pod Communication</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/job-with-pod-to-pod-communication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/job-with-pod-to-pod-communication/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In this example, you will run a Job in &lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/19/introducing-indexed-jobs/"&gt;Indexed completion mode&lt;/a&gt;
configured such that the pods created by the Job can communicate with each other using pod hostnames rather
than pod IP addresses.&lt;/p&gt;
&lt;p&gt;Pods within a Job might need to communicate among themselves. The user workload running in each pod
could query the Kubernetes API server to learn the IPs of the other Pods, but it's much simpler to
rely on Kubernetes' built-in DNS resolution.&lt;/p&gt;</description></item><item><title>kube-apiserver</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-apiserver/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;The Kubernetes API server validates and configures data
for the api objects which include pods, services, replicationcontrollers, and
others. The API Server services REST operations and provides the frontend to the
cluster's shared state through which all other components interact.&lt;/p&gt;</description></item><item><title>kube-controller-manager</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-controller-manager/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;The Kubernetes controller manager is a daemon that embeds
the core control loops shipped with Kubernetes. In applications of robotics and
automation, a control loop is a non-terminating loop that regulates the state of
the system. In Kubernetes, a controller is a control loop that watches the shared
state of the cluster through the apiserver and makes changes attempting to move the
current state towards the desired state. Examples of controllers that ship with
Kubernetes today are the replication controller, endpoints controller, namespace
controller, and serviceaccounts controller.&lt;/p&gt;</description></item><item><title>kube-proxy</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-proxy/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;The Kubernetes network proxy runs on each node. This
reflects services as defined in the Kubernetes API on each node and can do simple
TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
Service cluster IPs and ports are currently found through Docker-links-compatible
environment variables specifying ports opened by the service proxy. There is an optional
addon that provides cluster DNS for these cluster IPs. The user must create a service
with the apiserver API to configure the proxy.&lt;/p&gt;</description></item><item><title>kube-scheduler</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;The Kubernetes scheduler is a control plane process which assigns
Pods to Nodes. The scheduler determines which Nodes are valid placements for
each Pod in the scheduling queue according to constraints and available
resources. The scheduler then ranks each valid Node and binds the Pod to a
suitable Node. Multiple different schedulers may be used within a cluster;
kube-scheduler is the reference implementation.
See &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/"&gt;scheduling&lt;/a&gt;
for more information about scheduling and the kube-scheduler component.&lt;/p&gt;</description></item><item><title>kubeadm join</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This command initializes a new Kubernetes node and joins it to the existing cluster.&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Run this on any machine you wish to join to an existing cluster.&lt;/p&gt;</description></item><item><title>kubectl</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;kubectl controls the Kubernetes cluster manager.&lt;/p&gt;
&lt;p&gt;Find more information at: &lt;a href="https://kubernetes.io/docs/reference/kubectl/"&gt;https://kubernetes.io/docs/reference/kubectl/&lt;/a&gt;&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Username to impersonate for the operation. User could be a regular user or a service account in a namespace.&lt;/p&gt;</description></item><item><title>kubectl</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl/</guid><description>&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;kubectl controls the Kubernetes cluster manager.&lt;/p&gt;
&lt;p&gt;Find more information in &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/"&gt;Command line tool&lt;/a&gt; (&lt;code&gt;kubectl&lt;/code&gt;).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl &lt;span style="color:#666"&gt;[&lt;/span&gt;flags&lt;span style="color:#666"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--add-dir-header&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If true, adds the file directory to the header of the log messages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--alsologtostderr&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;log to standard error as well as files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Username to impersonate for the operation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--as-group stringArray&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Group to impersonate for the operation, this flag can be repeated to specify multiple groups.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--azure-container-registry-config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to the file containing Azure container registry configuration information.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cache-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "$HOME/.kube/cache"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Default cache directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-authority string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to a cert file for the certificate authority&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--client-certificate string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to a client certificate file for TLS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--client-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to a client key file for TLS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cloud-provider-gce-l7lb-src-cidrs cidrs&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 130.211.0.0/22,35.191.0.0/16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;CIDRs opened in GCE firewall for L7 LB traffic proxy &amp; health checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cloud-provider-gce-lb-src-cidrs cidrs&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;CIDRs opened in GCE firewall for L4 LB traffic proxy &amp; health checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cluster string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;The name of the kubeconfig cluster to use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--context string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;The name of the kubeconfig context to use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--default-not-ready-toleration-seconds int&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--default-unreachable-toleration-seconds int&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;help for kubectl&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--insecure-skip-tls-verify&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--kubeconfig string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to the kubeconfig file to use for CLI requests.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--log-backtrace-at traceLocation&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: :0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;when logging hits line file:N, emit a stack trace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--log-dir string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If non-empty, write log files in this directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--log-file string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If non-empty, use this log file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--log-file-max-size uint&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 1800&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--log-flush-frequency duration&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 5s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Maximum number of seconds between log flushes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--logtostderr&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;log to standard error instead of files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--match-server-version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Require server version to match client version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-n, --namespace string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If present, the namespace scope for this CLI request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--one-output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If true, only write logs to their native severity level (vs also writing to each lower severity level)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--password string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Password for basic authentication to the API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--profile string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "none"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--profile-output string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "profile.pprof"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Name of the file to write the profile to&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--request-timeout string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "0"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-s, --server string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;The address and port of the Kubernetes API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--skip-headers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If true, avoid header prefixes in the log messages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--skip-log-headers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;If true, avoid headers when opening log files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--stderrthreshold severity&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: 2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;logs at or above this threshold go to stderr&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--tls-server-name string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--token string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Bearer token for authentication to the API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--user string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;The name of the kubeconfig user to use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--username string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Username for basic authentication to the API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-v, --v Level&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;number for the log level verbosity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--version version[=true]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Print version information and quit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--vmodule moduleSpec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;comma-separated list of pattern=N settings for file-filtered logging&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--warnings-as-errors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Treat warnings received from the server as errors and exit with a non-zero exit code&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
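Several of the global options above are typically combined on a single invocation. The sketch below shows one common combination (impersonation, namespace scoping, and a request timeout); the cluster, context, namespace, and service-account names are hypothetical, and the commands require a reachable cluster to actually run:

```shell
# Run a read-only request against a named kubeconfig context and namespace,
# impersonating a service account and bounding how long the request may take.
# All names here are illustrative, not taken from this reference.
kubectl get pods \
  --context staging-cluster \
  --namespace payments \
  --as system:serviceaccount:payments:auditor \
  --request-timeout 30s

# Raise client-side log verbosity to inspect the HTTP requests kubectl makes
kubectl get pods -v 6
```

Because `--as` and `--as-group` are global flags, the same impersonation arguments work unchanged with any subcommand, which is why they pair well with `kubectl auth can-i` for permission checks.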
&lt;h2 id="environment-variables"&gt;Environment variables&lt;/h2&gt;
&lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECONFIG&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Path to the kubectl configuration ("kubeconfig") file. Default: "$HOME/.kube/config"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_EXPLAIN_OPENAPIV3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;Toggles whether calls to &lt;code&gt;kubectl explain&lt;/code&gt; use the new OpenAPIv3 data source, when available. OpenAPIv3 is enabled by default since Kubernetes 1.24.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_ENABLE_CMD_SHADOW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;When set to true, external plugins can be used as subcommands of built-in commands when the subcommand does not exist. While in alpha, this feature can only be used with the create command (e.g. kubectl create networkpolicy).
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_PORT_FORWARD_WEBSOCKETS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;When set to true, the kubectl port-forward command attempts to stream using the WebSocket protocol. If the upgrade to WebSockets fails, the command falls back to the current SPDY protocol.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_REMOTE_COMMAND_WEBSOCKETS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;When set to true, the kubectl exec, cp, and attach commands attempt to stream using the WebSocket protocol. If the upgrade to WebSockets fails, the commands fall back to the current SPDY protocol.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_KUBERC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;When set to true, the kuberc file is taken into account to define user-specific preferences.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;KUBECTL_KYAML&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;When set to true, kubectl can produce a Kubernetes-specific dialect of YAML as an output format.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
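The KUBECONFIG variable accepts a colon-separated list of files, which kubectl merges in order when building its effective configuration. A minimal sketch, assuming the two listed files exist (the paths are illustrative):

```shell
# Merge two kubeconfig files for this shell session.
# Entries in earlier files win when the same key appears in both.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/staging-config"

# Inspect the merged configuration and the contexts it exposes
kubectl config view --minify
kubectl config get-contexts
```

Unsetting KUBECONFIG returns kubectl to the default "$HOME/.kube/config" noted in the table above.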
&lt;h2 id="see-also"&gt;See Also&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_annotate/"&gt;kubectl annotate&lt;/a&gt; - Update the annotations on a resource&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_api-resources/"&gt;kubectl api-resources&lt;/a&gt; - Print the supported API resources on the server&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_api-versions/"&gt;kubectl api-versions&lt;/a&gt; - Print the supported API versions on the server,
in the form of &amp;quot;group/version&amp;quot;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/"&gt;kubectl apply&lt;/a&gt; - Apply a configuration to a resource by filename or stdin&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_attach/"&gt;kubectl attach&lt;/a&gt; - Attach to a running container&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/"&gt;kubectl auth&lt;/a&gt; - Inspect authorization&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_autoscale/"&gt;kubectl autoscale&lt;/a&gt; - Auto-scale a Deployment, ReplicaSet, or ReplicationController&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_certificate/"&gt;kubectl certificate&lt;/a&gt; - Modify certificate resources.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_cluster-info/"&gt;kubectl cluster-info&lt;/a&gt; - Display cluster info&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_completion/"&gt;kubectl completion&lt;/a&gt; - Output shell completion code for the specified shell (bash or zsh)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/"&gt;kubectl config&lt;/a&gt; - Modify kubeconfig files&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_cordon/"&gt;kubectl cordon&lt;/a&gt; - Mark node as unschedulable&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_cp/"&gt;kubectl cp&lt;/a&gt; - Copy files and directories to and from containers.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/"&gt;kubectl create&lt;/a&gt; - Create a resource from a file or from stdin.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_debug/"&gt;kubectl debug&lt;/a&gt; - Create debugging sessions for troubleshooting workloads and nodes&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_delete/"&gt;kubectl delete&lt;/a&gt; - Delete resources by filenames,
stdin, resources and names, or by resources and label selector&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_describe/"&gt;kubectl describe&lt;/a&gt; - Show details of a specific resource or group of resources&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_diff/"&gt;kubectl diff&lt;/a&gt; - Diff live version against would-be applied version&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_drain/"&gt;kubectl drain&lt;/a&gt; - Drain node in preparation for maintenance&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_edit/"&gt;kubectl edit&lt;/a&gt; - Edit a resource on the server&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_events/"&gt;kubectl events&lt;/a&gt; - List events&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_exec/"&gt;kubectl exec&lt;/a&gt; - Execute a command in a container&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_explain/"&gt;kubectl explain&lt;/a&gt; - Documentation of resources&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_expose/"&gt;kubectl expose&lt;/a&gt; - Take a replication controller,
service, deployment or pod and expose it as a new Kubernetes Service&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_get/"&gt;kubectl get&lt;/a&gt; - Display one or many resources&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_kustomize/"&gt;kubectl kustomize&lt;/a&gt; - Build a kustomization
target from a directory or a remote url.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_label/"&gt;kubectl label&lt;/a&gt; - Update the labels on a resource&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_logs/"&gt;kubectl logs&lt;/a&gt; - Print the logs for a container in a pod&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_options/"&gt;kubectl options&lt;/a&gt; - Print the list of flags inherited by all commands&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_patch/"&gt;kubectl patch&lt;/a&gt; - Update field(s) of a resource&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_plugin/"&gt;kubectl plugin&lt;/a&gt; - Provides utilities for interacting with plugins.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_port-forward/"&gt;kubectl port-forward&lt;/a&gt; - Forward one or more local ports to a pod&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_proxy/"&gt;kubectl proxy&lt;/a&gt; - Run a proxy to the Kubernetes API server&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_replace/"&gt;kubectl replace&lt;/a&gt; - Replace a resource by filename or stdin&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/"&gt;kubectl rollout&lt;/a&gt; - Manage the rollout of a resource&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_run/"&gt;kubectl run&lt;/a&gt; - Run a particular image on the cluster&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_scale/"&gt;kubectl scale&lt;/a&gt; - Set a new size for a Deployment, ReplicaSet or Replication Controller&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/"&gt;kubectl set&lt;/a&gt; - Set specific features on objects&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_taint/"&gt;kubectl taint&lt;/a&gt; - Update the taints on one or more nodes&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_top/"&gt;kubectl top&lt;/a&gt; - Display Resource (CPU/Memory/Storage) usage.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_uncordon/"&gt;kubectl uncordon&lt;/a&gt; - Mark node as schedulable&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_version/"&gt;kubectl version&lt;/a&gt; - Print the client and server version information&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_wait/"&gt;kubectl wait&lt;/a&gt; - Experimental: Wait for a specific condition on one or many resources.&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>kubectl alpha kuberc</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_alpha/kubectl_alpha_kuberc/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_alpha/kubectl_alpha_kuberc/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Manage user preferences (kuberc) file.&lt;/p&gt;
&lt;p&gt;The kuberc file allows you to customize your kubectl experience.&lt;/p&gt;</description></item><item><title>kubectl alpha kuberc set</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_alpha/kubectl_alpha_kuberc_set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_alpha/kubectl_alpha_kuberc_set/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set values in the kuberc configuration file.&lt;/p&gt;
&lt;p&gt;Use --section to specify whether to set defaults or aliases.&lt;/p&gt;</description></item><item><title>kubectl alpha kuberc view</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_alpha/kubectl_alpha_kuberc_view/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_alpha/kubectl_alpha_kuberc_view/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display the contents of the kuberc file in the specified output format.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl alpha kuberc view
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # View kuberc configuration in YAML format (default)
 kubectl alpha kuberc view
 
 # View kuberc configuration in JSON format
 kubectl alpha kuberc view --output json
 
 # View a specific kuberc file
 kubectl alpha kuberc view --kuberc /path/to/kuberc
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl apply edit-last-applied</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Edit the latest last-applied-configuration annotations of resources from the default editor.&lt;/p&gt;
&lt;p&gt;The edit-last-applied command allows you to directly edit any API resource you can retrieve via the command-line tools. It will open the editor defined by your KUBE_EDITOR or EDITOR environment variables, or fall back to 'vi' on Linux or 'notepad' on Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts file names as well as command-line arguments, although the files you point to must be previously saved versions of resources.&lt;/p&gt;</description></item><item><title>kubectl apply set-last-applied</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_set-last-applied/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_set-last-applied/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set the latest last-applied-configuration annotation by setting it to match the contents of a file. This results in the last-applied-configuration being updated as though 'kubectl apply -f &amp;lt;file&amp;gt;' was run, without updating any other parts of the object.&lt;/p&gt;</description></item><item><title>kubectl apply view-last-applied</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;View the latest last-applied-configuration annotations by type/name or file.&lt;/p&gt;
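For illustration, a couple of typical invocations (the deployment name and file name here are hypothetical):

```shell
# View the last-applied-configuration annotation by type/name
kubectl apply view-last-applied deployment/nginx

# View it by file, emitting JSON instead of the default YAML
kubectl apply view-last-applied -f deploy.yaml -o json
```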
&lt;p&gt;The default output will be printed to stdout in YAML format. You can use the -o option to change the output format.&lt;/p&gt;</description></item><item><title>kubectl auth can-i</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Check whether an action is allowed.&lt;/p&gt;
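A few illustrative checks (the namespace and service account names are hypothetical):

```shell
# Can the current user create deployments in the "dev" namespace?
kubectl auth can-i create deployments --namespace dev

# Can a given service account list pods? (pairs with impersonation via --as)
kubectl auth can-i list pods --as system:serviceaccount:dev:default

# Can the current user access a non-resource URL?
kubectl auth can-i get /logs/
```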
&lt;p&gt;VERB is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. TYPE is a Kubernetes resource. Shortcuts and groups will be resolved. NONRESOURCEURL is a partial URL that starts with &amp;quot;/&amp;quot;. NAME is the name of a particular Kubernetes resource. This command pairs nicely with impersonation. See --as global flag.&lt;/p&gt;</description></item><item><title>kubectl auth reconcile</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_reconcile/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_reconcile/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects.&lt;/p&gt;
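A minimal sketch, assuming a hypothetical manifest my-rbac-rules.yaml containing RBAC objects:

```shell
# Create or update the RBAC objects defined in the manifest
kubectl auth reconcile -f my-rbac-rules.yaml

# Preview the changes without persisting them
kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client
```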
&lt;p&gt;Missing objects are created, and the containing namespace is created for namespaced objects, if required.&lt;/p&gt;</description></item><item><title>kubectl auth whoami</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Experimental: Check who you are and your attributes (groups, extra).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; This command is helpful for understanding the attributes of the current user,
 especially when dynamic authentication, e.g., token webhook, auth proxy, or OIDC provider,
 is enabled in the Kubernetes cluster.
&lt;/code&gt;&lt;/pre&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl auth whoami
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Get your subject attributes
 kubectl auth whoami
 
 # Get your subject attributes in JSON format
 kubectl auth whoami -o json
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl certificate approve</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_approve/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_approve/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Approve a certificate signing request.&lt;/p&gt;
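For example (the CSR name shown is hypothetical; list pending requests first to find the real one):

```shell
# List CSRs and note any in the Pending condition
kubectl get csr

# Approve a CSR by name
kubectl certificate approve csr-sqgzp
```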
&lt;p&gt;kubectl certificate approve allows a cluster admin to approve a certificate signing request (CSR). This action tells a certificate signing controller to issue a certificate to the requester with the attributes requested in the CSR.&lt;/p&gt;</description></item><item><title>kubectl certificate deny</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_deny/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_deny/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Deny a certificate signing request.&lt;/p&gt;
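For example (the CSR name shown is hypothetical):

```shell
# Deny a CSR by name
kubectl certificate deny csr-sqgzp
```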
&lt;p&gt;kubectl certificate deny allows a cluster admin to deny a certificate signing request (CSR). This action tells a certificate signing controller not to issue a certificate to the requester.&lt;/p&gt;</description></item><item><title>kubectl cluster-info dump</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Dump cluster information suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. If you specify a directory, Kubernetes will build a set of files in that directory. By default, only dumps things in the current namespace and 'kube-system' namespace, but you can switch to a different namespace with the --namespaces flag, or specify --all-namespaces to dump all namespaces.&lt;/p&gt;</description></item><item><title>kubectl config current-context</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_current-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_current-context/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display the current-context.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config current-context [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Display the current-context
 kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for current-context&lt;/p&gt;</description></item><item><title>kubectl config delete-cluster</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-cluster/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Delete the specified cluster from the kubeconfig.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config delete-cluster NAME
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Delete the minikube cluster
 kubectl config delete-cluster minikube
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for delete-cluster&lt;/p&gt;</description></item><item><title>kubectl config delete-context</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-context/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Delete the specified context from the kubeconfig.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config delete-context NAME
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Delete the context for the minikube cluster
 kubectl config delete-context minikube
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for delete-context&lt;/p&gt;</description></item><item><title>kubectl config delete-user</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-user/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-user/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Delete the specified user from the kubeconfig.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config delete-user NAME
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Delete the minikube user
 kubectl config delete-user minikube
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for delete-user&lt;/p&gt;</description></item><item><title>kubectl config get-clusters</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-clusters/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-clusters/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display clusters defined in the kubeconfig.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config get-clusters [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # List the clusters that kubectl knows about
 kubectl config get-clusters
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for get-clusters&lt;/p&gt;</description></item><item><title>kubectl config get-contexts</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display one or many contexts from the kubeconfig file.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config get-contexts [(-o|--output=)name)]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # List all the contexts in your kubeconfig file
 kubectl config get-contexts
 
 # Describe one context in your kubeconfig file
 kubectl config get-contexts my-context
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for get-contexts&lt;/p&gt;</description></item><item><title>kubectl config get-users</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-users/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-users/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display users defined in the kubeconfig.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config get-users [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # List the users that kubectl knows about
 kubectl config get-users
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for get-users&lt;/p&gt;</description></item><item><title>kubectl config rename-context</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Rename a context in the kubeconfig file.&lt;/p&gt;
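For example (both context names here are hypothetical):

```shell
# Rename the context "old-name" to "new-name" in your kubeconfig
kubectl config rename-context old-name new-name
```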
&lt;p&gt;CONTEXT_NAME is the context name that you want to change.&lt;/p&gt;</description></item><item><title>kubectl config set</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set an individual value in a kubeconfig file.&lt;/p&gt;
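Two illustrative invocations (the cluster and context names are hypothetical):

```shell
# Set the server field on the "my-cluster" cluster entry
kubectl config set clusters.my-cluster.server https://1.2.3.4

# Set the top-level current-context field
kubectl config set current-context my-context
```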
&lt;p&gt;PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key. Map keys may not contain dots.&lt;/p&gt;</description></item><item><title>kubectl config set-cluster</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set a cluster entry in kubeconfig.&lt;/p&gt;
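For example (the "e2e" cluster name and file path are hypothetical):

```shell
# Set only the server field on the "e2e" cluster entry,
# leaving other fields untouched
kubectl config set-cluster e2e --server=https://1.2.3.4

# Embed certificate authority data for the "e2e" cluster entry
kubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt
```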
&lt;p&gt;Specifying a name that already exists will merge new fields on top of existing values for those fields.&lt;/p&gt;</description></item><item><title>kubectl config set-context</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set a context entry in kubeconfig.&lt;/p&gt;
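For example (the "gce" context name is hypothetical):

```shell
# Set the user field on the "gce" context entry
# without touching its other values
kubectl config set-context gce --user=cluster-admin
```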
&lt;p&gt;Specifying a name that already exists will merge new fields on top of existing values for those fields.&lt;/p&gt;</description></item><item><title>kubectl config set-credentials</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set a user entry in kubeconfig.&lt;/p&gt;
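For example (the user name and certificate paths are hypothetical):

```shell
# Set client certificate and key for the "cluster-admin" user entry
kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --client-key=~/.kube/admin.key
```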
&lt;p&gt;Specifying a name that already exists will merge new fields on top of existing values.&lt;/p&gt;</description></item><item><title>kubectl config unset</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Unset an individual value in a kubeconfig file.&lt;/p&gt;
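Two illustrative invocations (the "foo" context name is hypothetical):

```shell
# Unset the top-level current-context field
kubectl config unset current-context

# Unset the namespace field of the "foo" context
kubectl config unset contexts.foo.namespace
```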
&lt;p&gt;PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key. Map keys may not contain dots.&lt;/p&gt;</description></item><item><title>kubectl config use-context</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_use-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_use-context/</guid><description>&lt;!--
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set the current-context in a kubeconfig file.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl config use-context CONTEXT_NAME
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Use the context for the minikube cluster
 kubectl config use-context minikube
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for use-context&lt;/p&gt;</description></item><item><title>kubectl config view</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display merged kubeconfig settings or a specified kubeconfig file.&lt;/p&gt;
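For example:

```shell
# Show merged kubeconfig settings
kubectl config view

# Show only the settings relevant to the current context
kubectl config view --minify
```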
&lt;p&gt;You can use --output jsonpath={...} to extract specific values using a jsonpath expression.&lt;/p&gt;</description></item><item><title>kubectl create clusterrole</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a cluster role.&lt;/p&gt;
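&lt;p&gt;As an illustrative sketch, &lt;code&gt;kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods&lt;/code&gt; creates a ClusterRole roughly equivalent to the following manifest:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups:
  - &amp;#34;&amp;#34;   # the core API group, where pods live
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
&lt;/code&gt;&lt;/pre&gt;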
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a cluster role named &amp;#34;pod-reader&amp;#34; that allows a user to perform &amp;#34;get&amp;#34;, &amp;#34;watch&amp;#34; and &amp;#34;list&amp;#34; on pods
 kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
 
 # Create a cluster role named &amp;#34;pod-reader&amp;#34; with ResourceName specified
 kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
 
 # Create a cluster role named &amp;#34;foo&amp;#34; with API Group specified
 kubectl create clusterrole foo --verb=get,list,watch --resource=rs.apps
 
 # Create a cluster role named &amp;#34;foo&amp;#34; with SubResource specified
 kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status
 
 # Create a cluster role named &amp;#34;foo&amp;#34; with NonResourceURL specified
 kubectl create clusterrole &amp;#34;foo&amp;#34; --verb=get --non-resource-url=https://andygol-k8s.netlify.app/logs/*
 
 # Create a cluster role named &amp;#34;monitoring&amp;#34; with AggregationRule specified
 kubectl create clusterrole monitoring --aggregation-rule=&amp;#34;rbac.example.com/aggregate-to-monitoring=true&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--aggregation-rule &amp;lt;comma-separated 'key=value' pairs&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;An aggregation label selector for combining ClusterRoles.&lt;/p&gt;</description></item><item><title>kubectl create clusterrolebinding</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a cluster role binding for a particular cluster role.&lt;/p&gt;
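&lt;p&gt;For illustration (user name and binding name are placeholders from the example below), &lt;code&gt;kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1&lt;/code&gt; produces a ClusterRoleBinding roughly equivalent to:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1
&lt;/code&gt;&lt;/pre&gt;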
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role
 kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create configmap</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a config map based on a file, directory, or specified literal value.&lt;/p&gt;
&lt;p&gt;A single config map may package one or more key/value pairs.&lt;/p&gt;</description></item><item><title>kubectl create cronjob</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a cron job with the specified name.&lt;/p&gt;
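&lt;p&gt;As an illustrative note, the &lt;code&gt;--schedule&lt;/code&gt; value is a standard five-field cron expression:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # minute hour day-of-month month day-of-week
 # &amp;#34;*/1 * * * *&amp;#34;  - every minute
 # &amp;#34;0 3 * * 1&amp;#34;    - at 03:00 every Monday
 # &amp;#34;0/5 * * * ?&amp;#34;  - every 5 minutes (&amp;#39;?&amp;#39; is generally accepted as an alias for &amp;#39;*&amp;#39;)
&lt;/code&gt;&lt;/pre&gt;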
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create cronjob NAME --image=image --schedule=&amp;#39;0/5 * * * ?&amp;#39; -- [COMMAND] [args...] [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a cron job
 kubectl create cronjob my-job --image=busybox --schedule=&amp;#34;*/1 * * * *&amp;#34;
 
 # Create a cron job with a command
 kubectl create cronjob my-job --image=busybox --schedule=&amp;#34;*/1 * * * *&amp;#34; -- date
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create deployment</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a deployment with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create deployment NAME --image=image -- [COMMAND] [args...]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a deployment named my-dep that runs the busybox image
 kubectl create deployment my-dep --image=busybox
 
 # Create a deployment with a command
 kubectl create deployment my-dep --image=busybox -- date
 
 # Create a deployment named my-dep that runs the nginx image with 3 replicas
 kubectl create deployment my-dep --image=nginx --replicas=3
 
 # Create a deployment named my-dep that runs the busybox image and exposes port 5701
 kubectl create deployment my-dep --image=busybox --port=5701
 
 # Create a deployment named my-dep that runs multiple containers
 kubectl create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create ingress</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create an ingress with the specified name.&lt;/p&gt;
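&lt;p&gt;For illustration, a rule such as &lt;code&gt;foo.com/bar=svc1:8080,tls=my-cert&lt;/code&gt; corresponds roughly to this Ingress manifest (host, service, and secret names are taken from the first example):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple
spec:
  tls:
  - hosts:
    - foo.com
    secretName: my-cert
  rules:
  - host: foo.com
    http:
      paths:
      - path: /bar
        pathType: Exact
        backend:
          service:
            name: svc1
            port:
              number: 8080
&lt;/code&gt;&lt;/pre&gt;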
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create ingress NAME --rule=host/path=service:port[,tls[=secret]] 
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a single ingress called &amp;#39;simple&amp;#39; that directs requests to foo.com/bar to svc
 # svc1:8080 with a TLS secret &amp;#34;my-cert&amp;#34;
 kubectl create ingress simple --rule=&amp;#34;foo.com/bar=svc1:8080,tls=my-cert&amp;#34;
 
 # Create a catch-all ingress of &amp;#34;/path&amp;#34; pointing to service svc:port with the ingress class &amp;#34;otheringress&amp;#34;
 kubectl create ingress catch-all --class=otheringress --rule=&amp;#34;/path=svc:port&amp;#34;
 
 # Create an ingress with two annotations: ingress.annotation1 and ingress.annotation2
 kubectl create ingress annotated --class=default --rule=&amp;#34;foo.com/bar=svc:port&amp;#34; \
 --annotation ingress.annotation1=foo \
 --annotation ingress.annotation2=bla
 
 # Create an ingress with the same host and multiple paths
 kubectl create ingress multipath --class=default \
 --rule=&amp;#34;foo.com/=svc:port&amp;#34; \
 --rule=&amp;#34;foo.com/admin/=svcadmin:portadmin&amp;#34;
 
 # Create an ingress with multiple hosts and the pathType as Prefix
 kubectl create ingress ingress1 --class=default \
 --rule=&amp;#34;foo.com/path*=svc:8080&amp;#34; \
 --rule=&amp;#34;bar.com/admin*=svc2:http&amp;#34;
 
 # Create an ingress with TLS enabled using the default ingress certificate and different path types
 kubectl create ingress ingtls --class=default \
 --rule=&amp;#34;foo.com/=svc:https,tls&amp;#34; \
 --rule=&amp;#34;foo.com/path/subpath*=othersvc:8080&amp;#34;
 
 # Create an ingress with TLS enabled using a specific secret and pathType as Prefix
 kubectl create ingress ingsecret --class=default \
 --rule=&amp;#34;foo.com/*=svc:8080,tls=secret1&amp;#34;
 
 # Create an ingress with a default backend
 kubectl create ingress ingdefault --class=default \
 --default-backend=defaultsvc:http \
 --rule=&amp;#34;foo.com/*=svc:8080,tls=secret1&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create job</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_job/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_job/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a job with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a job
 kubectl create job my-job --image=busybox
 
 # Create a job with a command
 kubectl create job my-job --image=busybox -- date
 
 # Create a job from a cron job named &amp;#34;a-cronjob&amp;#34;
 kubectl create job test-job --from=cronjob/a-cronjob
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create namespace</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a namespace with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create namespace NAME [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new namespace named my-namespace
 kubectl create namespace my-namespace
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create poddisruptionbudget</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a pod disruption budget with the specified name, selector, and desired minimum available pods.&lt;/p&gt;</description></item><item><title>kubectl create priorityclass</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a priority class with the specified name, value, globalDefault and description.&lt;/p&gt;
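&lt;p&gt;For illustration, &lt;code&gt;kubectl create priorityclass high-priority --value=1000 --description=&amp;#34;high priority&amp;#34;&lt;/code&gt; corresponds roughly to:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
globalDefault: false
description: high priority
&lt;/code&gt;&lt;/pre&gt;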
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create priorityclass NAME --value=VALUE --global-default=BOOL [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a priority class named high-priority
 kubectl create priorityclass high-priority --value=1000 --description=&amp;#34;high priority&amp;#34;
 
 # Create a priority class named default-priority that is considered the global default priority
 kubectl create priorityclass default-priority --value=1000 --global-default=true --description=&amp;#34;default priority&amp;#34;
 
 # Create a priority class named high-priority that cannot preempt pods with lower priority
 kubectl create priorityclass high-priority --value=1000 --description=&amp;#34;high priority&amp;#34; --preemption-policy=&amp;#34;Never&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create quota</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a resource quota with the specified name, hard limits, and optional scopes.&lt;/p&gt;
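&lt;p&gt;For illustration, &lt;code&gt;kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort&lt;/code&gt; corresponds roughly to:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort
spec:
  hard:
    pods: &amp;#34;100&amp;#34;
  scopes:
  - BestEffort
&lt;/code&gt;&lt;/pre&gt;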
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new resource quota named my-quota
 kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10
 
 # Create a new resource quota named best-effort
 kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create role</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a role with a single rule.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a role named &amp;#34;pod-reader&amp;#34; that allows user to perform &amp;#34;get&amp;#34;, &amp;#34;watch&amp;#34; and &amp;#34;list&amp;#34; on pods
 kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
 
 # Create a role named &amp;#34;pod-reader&amp;#34; with ResourceName specified
 kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
 
 # Create a role named &amp;#34;foo&amp;#34; with API Group specified
 kubectl create role foo --verb=get,list,watch --resource=rs.apps
 
 # Create a role named &amp;#34;foo&amp;#34; with SubResource specified
 kubectl create role foo --verb=get,list,watch --resource=pods,pods/status
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create rolebinding</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a role binding for a particular role or cluster role.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a role binding for user1, user2, and group1 using the admin cluster role
 kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1
 
 # Create a role binding for service account monitoring:sa-dev using the admin role
 kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create secret</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a secret with specified type.&lt;/p&gt;
&lt;p&gt;A docker-registry type secret is for accessing a container registry.&lt;/p&gt;</description></item><item><title>kubectl create secret docker-registry</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a new secret for use with Docker registries.&lt;/p&gt;
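&lt;p&gt;For example, registry credentials can be stored in a secret directly (the name my-registry-secret and the credential values are placeholders):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a docker-registry secret from placeholder credentials
 kubectl create secret docker-registry my-registry-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
&lt;/code&gt;&lt;/pre&gt;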
&lt;pre&gt;&lt;code&gt; Dockercfg secrets are used to authenticate against Docker registries.
 
 When using the Docker command line to push images, you can authenticate to a given registry by running:
 '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. The email address is optional.&lt;/p&gt;</description></item><item><title>kubectl create secret generic</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a secret based on a file, directory, or specified literal value.&lt;/p&gt;
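&lt;p&gt;For example, a single secret can combine a literal value and a file (the secret name, key names, and path are illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a secret named my-secret from a literal and a file
 kubectl create secret generic my-secret --from-literal=username=admin --from-file=ssh-privatekey=path/to/id_rsa
&lt;/code&gt;&lt;/pre&gt;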
&lt;p&gt;A single secret may package one or more key/value pairs.&lt;/p&gt;</description></item><item><title>kubectl create secret tls</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a TLS secret from the given public/private key pair.&lt;/p&gt;
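&lt;p&gt;For example, assuming a certificate and matching key already exist on disk (the secret name and paths are illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a TLS secret named tls-secret from existing key material
 kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
&lt;/code&gt;&lt;/pre&gt;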
&lt;p&gt;The public/private key pair must exist beforehand. The public key certificate must be PEM encoded and match the given private key.&lt;/p&gt;</description></item><item><title>kubectl create service</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a service using a specified subcommand.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create service [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for service&lt;/p&gt;</description></item><item><title>kubectl create service clusterip</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a ClusterIP service with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create service clusterip NAME [--tcp=&amp;lt;port&amp;gt;:&amp;lt;targetPort&amp;gt;] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new ClusterIP service named my-cs
 kubectl create service clusterip my-cs --tcp=5678:8080
 
 # Create a new ClusterIP service named my-cs (in headless mode)
 kubectl create service clusterip my-cs --clusterip=&amp;#34;None&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create service externalname</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create an ExternalName service with the specified name.&lt;/p&gt;
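&lt;p&gt;For example (the service name my-xns and the address bar.com are placeholders):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new ExternalName service named my-xns pointing at bar.com
 kubectl create service externalname my-xns --external-name bar.com
&lt;/code&gt;&lt;/pre&gt;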
&lt;p&gt;An ExternalName service references an external DNS address rather than pods, allowing application authors to reference services that exist off platform, on other clusters, or locally.&lt;/p&gt;</description></item><item><title>kubectl create service loadbalancer</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a LoadBalancer service with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new LoadBalancer service named my-lbs
 kubectl create service loadbalancer my-lbs --tcp=5678:8080
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create service nodeport</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_nodeport/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_nodeport/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a NodePort service with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new NodePort service named my-ns
 kubectl create service nodeport my-ns --tcp=5678:8080
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create serviceaccount</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Create a service account with the specified name.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create serviceaccount NAME [--dry-run=server|client|none]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Create a new service account named my-service-account
 kubectl create serviceaccount my-service-account
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl create token</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Request a service account token.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl create token SERVICE_ACCOUNT_NAME
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Request a token to authenticate to the kube-apiserver as the service account &amp;#34;myapp&amp;#34; in the current namespace
 kubectl create token myapp
 
 # Request a token for a service account in a custom namespace
 kubectl create token myapp --namespace myns
 
 # Request a token with a custom expiration
 kubectl create token myapp --duration 10m
 
 # Request a token with a custom audience
 kubectl create token myapp --audience https://example.com
 
 # Request a token bound to an instance of a Secret object
 kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret
 
 # Request a token bound to an instance of a Secret object with a specific UID
 kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl plugin list</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;List all available plugin files on a user's PATH. To see plugin binary names without the full path, use the --name-only flag.&lt;/p&gt;</description></item><item><title>kubectl rollout history</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;View previous rollout revisions and configurations.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl rollout history (TYPE NAME | TYPE/NAME) [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # View the rollout history of a deployment
 kubectl rollout history deployment/abc
 
 # View the details of daemonset revision 3
 kubectl rollout history daemonset/abc --revision=3
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl rollout pause</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Mark the provided resource as paused.&lt;/p&gt;
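&lt;p&gt;For example, to pause the rollout of a deployment named nginx (an illustrative name):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Mark the nginx deployment as paused
 kubectl rollout pause deployment/nginx
&lt;/code&gt;&lt;/pre&gt;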
&lt;p&gt;Paused resources will not be reconciled by a controller. Use &amp;quot;kubectl rollout resume&amp;quot; to resume a paused resource. Currently only deployments support being paused.&lt;/p&gt;</description></item><item><title>kubectl rollout restart</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Restart a resource.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; Resource rollout will be restarted.
&lt;/code&gt;&lt;/pre&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl rollout restart RESOURCE
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Restart all deployments in the test-namespace namespace
 kubectl rollout restart deployment -n test-namespace
 
 # Restart a deployment
 kubectl rollout restart deployment/nginx
 
 # Restart a daemon set
 kubectl rollout restart daemonset/abc
 
 # Restart deployments with the app=nginx label
 kubectl rollout restart deployment --selector=app=nginx
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl rollout resume</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_resume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_resume/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Resume a paused resource.&lt;/p&gt;
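&lt;p&gt;For example, to resume a previously paused deployment named nginx (an illustrative name):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Resume the already paused nginx deployment
 kubectl rollout resume deployment/nginx
&lt;/code&gt;&lt;/pre&gt;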
&lt;p&gt;Paused resources will not be reconciled by a controller. By resuming a resource, we allow it to be reconciled again. Currently only deployments support being resumed.&lt;/p&gt;</description></item><item><title>kubectl rollout status</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Show the status of the rollout.&lt;/p&gt;
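&lt;p&gt;For example (the deployment name nginx is illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Watch the status of the latest rollout of the nginx deployment until it completes
 kubectl rollout status deployment/nginx
&lt;/code&gt;&lt;/pre&gt;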
&lt;p&gt;By default 'rollout status' will watch the status of the latest rollout until it's done. If you don't want to wait for the rollout to finish then you can use --watch=false. Note that if a new rollout starts in-between, then 'rollout status' will continue watching the latest revision. If you want to pin to a specific revision and abort if it is rolled over by another revision, use --revision=N where N is the revision you need to watch for.&lt;/p&gt;</description></item><item><title>kubectl rollout undo</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Roll back to a previous rollout.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Roll back to the previous deployment
 kubectl rollout undo deployment/abc
 
 # Roll back to daemonset revision 3
 kubectl rollout undo daemonset/abc --to-revision=3
 
 # Roll back to the previous deployment with dry-run
 kubectl rollout undo --dry-run=server deployment/abc
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-missing-template-keys&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.&lt;/p&gt;</description></item><item><title>kubectl set env</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Update environment variables on a pod template.&lt;/p&gt;
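&lt;p&gt;For example (the deployment name and variable are illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Set STORAGE_DIR=/local on every container of the registry deployment
 kubectl set env deployment/registry STORAGE_DIR=/local
 
 # List the environment variables defined in all pods
 kubectl set env pods --all --list
&lt;/code&gt;&lt;/pre&gt;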
&lt;p&gt;List environment variable definitions in one or more pods or pod templates. Add, update, or remove container environment variable definitions in one or more pod templates (within replication controllers or deployment configurations). View or modify the environment variable definitions on all containers in the specified pods or pod templates, or just those that match a wildcard.&lt;/p&gt;</description></item><item><title>kubectl set image</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Update existing container image(s) of resources.&lt;/p&gt;
&lt;p&gt;Possible resources include (case insensitive):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="examples"&gt;Examples&lt;/h2&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Set a deployment&amp;#39;s nginx container image to &amp;#39;nginx:1.9.1&amp;#39;, and its busybox container image to &amp;#39;busybox&amp;#39;
 kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1
 
 # Update all deployments&amp;#39; and rc&amp;#39;s nginx container&amp;#39;s image to &amp;#39;nginx:1.9.1&amp;#39;
 kubectl set image deployments,rc nginx=nginx:1.9.1 --all
 
 # Update image of all containers of daemonset abc to &amp;#39;nginx:1.9.1&amp;#39;
 kubectl set image daemonset abc *=nginx:1.9.1
 
 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server
 kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
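 
 # Verify the rollout triggered by the image update (this assumes a deployment named &amp;#39;nginx&amp;#39;, as above)
 kubectl rollout status deployment/nginx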
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="options"&gt;Options&lt;/h2&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--all&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Select all resources, in the namespace of the specified resource types&lt;/p&gt;</description></item><item><title>kubectl set resources</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst up to its specified limits.&lt;/p&gt;</description></item><item><title>kubectl set selector</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Set the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.&lt;/p&gt;</description></item><item><title>kubectl set serviceaccount</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Update the service account of pod template resources.&lt;/p&gt;
&lt;p&gt;Possible resources (case insensitive) can be:&lt;/p&gt;
&lt;p&gt;replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs), statefulset&lt;/p&gt;</description></item><item><title>kubectl set subject</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Update the user, group, or service account in a role binding or cluster role binding.&lt;/p&gt;</description></item><item><title>kubectl top node</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display resource (CPU/memory) usage of nodes.&lt;/p&gt;
&lt;p&gt;The top-node command allows you to see the resource consumption of nodes.&lt;/p&gt;</description></item><item><title>kubectl top pod</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;Display resource (CPU/memory) usage of pods.&lt;/p&gt;
&lt;p&gt;The 'top pod' command allows you to see the resource consumption of pods.&lt;/p&gt;</description></item><item><title>kubelet</title><link>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h2 id="synopsis"&gt;Synopsis&lt;/h2&gt;
&lt;p&gt;The kubelet is the primary &amp;quot;node agent&amp;quot; that runs on each
node. It can register the node with the apiserver using one of: the hostname; a flag to
override the hostname; or specific logic for a cloud provider.&lt;/p&gt;</description></item><item><title>Leases</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/leases/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/leases/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Distributed systems often have a need for &lt;em&gt;leases&lt;/em&gt;, which provide a mechanism to lock shared resources
and coordinate activity between members of a set.
In Kubernetes, the lease concept is represented by &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/lease-v1/"&gt;Lease&lt;/a&gt;
objects in the &lt;code&gt;coordination.k8s.io&lt;/code&gt; &lt;a class='glossary-tooltip' title='A set of related paths in the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API Group'&gt;API Group&lt;/a&gt;,
which are used for system-critical capabilities such as node heartbeats and component-level leader election.&lt;/p&gt;
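&lt;p&gt;As a quick, illustrative check, you can list the Lease objects that back node heartbeats (the &lt;code&gt;kube-node-lease&lt;/code&gt; namespace is the default location for them):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl get leases -n kube-node-lease
&lt;/code&gt;&lt;/pre&gt;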
&lt;!-- body --&gt;
&lt;h2 id="node-heart-beats"&gt;Node heartbeats&lt;/h2&gt;
&lt;p&gt;Kubernetes uses the Lease API to communicate kubelet node heartbeats to the Kubernetes API server.
For every &lt;code&gt;Node&lt;/code&gt;, there is a &lt;code&gt;Lease&lt;/code&gt; object with a matching name in the &lt;code&gt;kube-node-lease&lt;/code&gt;
namespace. Under the hood, every kubelet heartbeat is an &lt;strong&gt;update&lt;/strong&gt; request to this &lt;code&gt;Lease&lt;/code&gt; object, updating
the &lt;code&gt;spec.renewTime&lt;/code&gt; field for the Lease. The Kubernetes control plane uses the time stamp of this field
to determine the availability of this &lt;code&gt;Node&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Managing Kubernetes Objects Using Imperative Commands</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/imperative-command/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/imperative-command/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes objects can quickly be created, updated, and deleted directly using
imperative commands built into the &lt;code&gt;kubectl&lt;/code&gt; command-line tool. This document
explains how those commands are organized and how to use them to manage live objects.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Install &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Managing Secrets using Kustomize</title><link>https://andygol-k8s.netlify.app/docs/tasks/configmap-secret/managing-secret-using-kustomize/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configmap-secret/managing-secret-using-kustomize/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; supports using the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/kustomization/"&gt;Kustomize object management tool&lt;/a&gt; to manage Secrets
and ConfigMaps. You create a &lt;em&gt;resource generator&lt;/em&gt; using Kustomize, which
generates a Secret that you can apply to the API server using &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
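&lt;p&gt;As an illustrative sketch, a minimal &lt;code&gt;kustomization.yaml&lt;/code&gt; with a &lt;code&gt;secretGenerator&lt;/code&gt; might look like this (the Secret name and literal values here are placeholders):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;secretGenerator:
- name: database-creds
  literals:
  - username=admin
  - password=change-me
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Applying the directory with &lt;code&gt;kubectl apply -k .&lt;/code&gt; generates a Secret whose name carries a content-based hash suffix.&lt;/p&gt;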
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Object Names and IDs</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/names/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/names/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Each &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='object'&gt;object&lt;/a&gt; in your cluster has a &lt;a href="#names"&gt;&lt;em&gt;Name&lt;/em&gt;&lt;/a&gt; that is unique for that type of resource.
Every Kubernetes object also has a &lt;a href="#uids"&gt;&lt;em&gt;UID&lt;/em&gt;&lt;/a&gt; that is unique across your whole cluster.&lt;/p&gt;
&lt;p&gt;For example, you can only have one Pod named &lt;code&gt;myapp-1234&lt;/code&gt; within the same &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces/"&gt;namespace&lt;/a&gt;, but you can have one Pod and one Deployment that are each named &lt;code&gt;myapp-1234&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Operator pattern</title><link>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/operator/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/operator/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Operators are software extensions to Kubernetes that make use of
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;custom resources&lt;/a&gt;
to manage applications and their components. Operators follow
Kubernetes principles, notably the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/"&gt;control loop&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;The &lt;em&gt;operator pattern&lt;/em&gt; aims to capture the key aim of a human operator who
is managing a service or set of services. Human operators who look after
specific applications and services have deep knowledge of how the system
ought to behave, how to deploy it, and how to react if there are problems.&lt;/p&gt;</description></item><item><title>Pinterest Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/pinterest/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/pinterest/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;After eight years in existence, Pinterest had grown to 1,000 microservices and multiple layers of infrastructure, with a diverse set of setup tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.&lt;/p&gt;</description></item><item><title>Pod Lifecycle</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/</guid><description>&lt;!-- overview --&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.&lt;/p&gt;</description></item><item><title>Pod Lifecycle</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting
in the &lt;code&gt;Pending&lt;/code&gt; &lt;a href="#pod-phase"&gt;phase&lt;/a&gt;, moving through &lt;code&gt;Running&lt;/code&gt; if at least one
of its primary containers starts OK, and then through either the &lt;code&gt;Succeeded&lt;/code&gt; or
&lt;code&gt;Failed&lt;/code&gt; phases depending on whether any container in the Pod terminated in failure.&lt;/p&gt;
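&lt;p&gt;You can read a Pod's current phase from its status; for example (assuming a Pod named &lt;code&gt;my-pod&lt;/code&gt;):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubectl get pod my-pod -o jsonpath='{.status.phase}'
&lt;/code&gt;&lt;/pre&gt;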
&lt;p&gt;Like individual application containers, Pods are considered to be relatively
ephemeral (rather than durable) entities. Pods are created, assigned a unique
ID (&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/names/#uids"&gt;UID&lt;/a&gt;), and scheduled
to run on nodes where they remain until termination (according to restart policy) or
deletion.
If a &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Node'&gt;Node&lt;/a&gt; dies, the Pods running on (or scheduled
to run on) that node are &lt;a href="#pod-garbage-collection"&gt;marked for deletion&lt;/a&gt;. The control
plane marks the Pods for removal after a timeout period.&lt;/p&gt;</description></item><item><title>Pod Overhead</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-overhead/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-overhead/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
resources are additional to the resources needed to run the container(s) inside the Pod.
In Kubernetes, &lt;em&gt;Pod Overhead&lt;/em&gt; is a way to account for the resources consumed by the Pod
infrastructure on top of the container requests &amp;amp; limits.&lt;/p&gt;
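&lt;p&gt;As a sketch, a RuntimeClass declares this overhead in its &lt;code&gt;overhead.podFixed&lt;/code&gt; field; the class name, handler, and amounts below are illustrative:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-example
handler: kata
overhead:
  podFixed:
    cpu: 250m
    memory: 120Mi
&lt;/code&gt;&lt;/pre&gt;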
&lt;!-- body --&gt;
&lt;p&gt;In Kubernetes, the Pod's overhead is set at
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks"&gt;admission&lt;/a&gt;
time according to the overhead associated with the Pod's
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/containers/runtime-class/"&gt;RuntimeClass&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Pod Security Policies</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-policy/</guid><description>&lt;!-- overview --&gt;
&lt;div class="alert alert-warning" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;Removed feature&lt;/div&gt;
&lt;p&gt;PodSecurityPolicy was &lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/08/kubernetes-1-21-release-announcement/#podsecuritypolicy-deprecation"&gt;deprecated&lt;/a&gt;
in Kubernetes v1.21, and removed from Kubernetes in v1.25.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using
either or both:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-admission/"&gt;Pod Security Admission&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;a 3rd party admission plugin, that you deploy and configure yourself&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a migration guide, see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/migrate-from-psp/"&gt;Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller&lt;/a&gt;.
For more information on the removal of this API,
see &lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/"&gt;PodSecurityPolicy Deprecation: Past, Present, and Future&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Resize CPU and Memory Resources assigned to Containers</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/resize-container-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/resize-container-resources/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: InPlacePodVerticalScaling"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page explains how to change the CPU and memory resource requests and limits
assigned to a container &lt;em&gt;without recreating the Pod&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Traditionally, changing a Pod's resource requirements necessitated deleting the existing Pod
and creating a replacement, often managed by a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/"&gt;workload controller&lt;/a&gt;.
In-place Pod Resize allows changing the CPU/memory allocation of container(s) within a running Pod
while potentially avoiding application disruption. The process for resizing Pod resources is covered in &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/resize-pod-resources/"&gt;Resize CPU and Memory Resources assigned to Pods&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Resize CPU and Memory Resources assigned to Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/resize-pod-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/resize-pod-resources/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-alpha" title="Feature Gate: InPlacePodLevelResourcesVerticalScaling"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;This page explains how to change the CPU and memory resources set at the Pod level without recreating the Pod.&lt;/p&gt;
&lt;p&gt;The In-place Pod Resize feature allows modifying resource allocations for a running Pod, avoiding application disruption. The process for resizing individual container resources is covered in &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/resize-container-resources/"&gt;Resize CPU and Memory Resources assigned to Containers&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This page highlights In-place Pod-level resources resize. Pod-level resources
are defined in &lt;code&gt;spec.resources&lt;/code&gt; and they act as the upper bound on the aggregate resources
consumed by all containers in the Pod. The In-place Pod-level resources resize feature
lets you change these aggregate CPU and memory allocations for a running Pod directly.&lt;/p&gt;</description></item><item><title>Restrict a Container's Access to Resources with AppArmor</title><link>https://andygol-k8s.netlify.app/docs/tutorials/security/apparmor/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/security/apparmor/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: AppArmor"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.31 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page shows you how to load AppArmor profiles on your nodes and enforce
those profiles in Pods. To learn more about how Kubernetes can confine Pods using
AppArmor, see
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/linux-kernel-security-constraints/#apparmor"&gt;Linux kernel security constraints for Pods and containers&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;See an example of how to load a profile on a Node&lt;/li&gt;
&lt;li&gt;Learn how to enforce the profile on a Pod&lt;/li&gt;
&lt;li&gt;Learn how to check that the profile is loaded&lt;/li&gt;
&lt;li&gt;See what happens when a profile is violated&lt;/li&gt;
&lt;li&gt;See what happens when a profile cannot be loaded&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your
Nodes before proceeding:&lt;/p&gt;</description></item><item><title>Run a Replicated Stateful Application</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/run-replicated-stateful-application/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/run-replicated-stateful-application/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to run a replicated stateful application using a
&lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;.
This application is a replicated MySQL database. The example topology has a
single primary server and multiple replicas, using asynchronous row-based
replication.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;strong&gt;This is not a production configuration&lt;/strong&gt;. MySQL settings remain on insecure defaults to keep the focus
on general patterns for running stateful applications in Kubernetes.&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Running Pods on Only Some Nodes</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/pods-some-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/pods-some-nodes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page demonstrates how you can run &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; on only some &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt; as part of a &lt;a class='glossary-tooltip' title='Ensures a copy of a Pod is running across a set of nodes in a cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset' target='_blank' aria-label='DaemonSet'&gt;DaemonSet&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Runtime Class</title><link>https://andygol-k8s.netlify.app/docs/concepts/containers/runtime-class/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/containers/runtime-class/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.20 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;This page describes the RuntimeClass resource and runtime selection mechanism.&lt;/p&gt;
&lt;p&gt;RuntimeClass is a feature for selecting the container runtime configuration. The container runtime
configuration is used to run a Pod's containers.&lt;/p&gt;
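&lt;p&gt;A Pod opts into a runtime configuration by naming it in &lt;code&gt;spec.runtimeClassName&lt;/code&gt;; for example (assuming a RuntimeClass named &lt;code&gt;myclass&lt;/code&gt; exists in the cluster):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  runtimeClassName: myclass
  containers:
  - name: app
    image: nginx
&lt;/code&gt;&lt;/pre&gt;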
&lt;!-- body --&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;You can set a different RuntimeClass between different Pods to provide a balance of
performance versus security. For example, if part of your workload deserves a high
level of information security assurance, you might choose to schedule those Pods so
that they run in a container runtime that uses hardware virtualization. You'd then
benefit from the extra isolation of the alternative runtime, at the expense of some
additional overhead.&lt;/p&gt;</description></item><item><title>Scheduling Policies</title><link>https://andygol-k8s.netlify.app/docs/reference/scheduling/policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/scheduling/policies/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes versions before v1.23, a scheduling policy could be used to specify the &lt;em&gt;predicates&lt;/em&gt; and &lt;em&gt;priorities&lt;/em&gt; process. For example, you could set a scheduling policy by
running &lt;code&gt;kube-scheduler --policy-config-file &amp;lt;filename&amp;gt;&lt;/code&gt; or &lt;code&gt;kube-scheduler --policy-configmap &amp;lt;ConfigMap&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This scheduling policy is not supported since Kubernetes v1.23. Associated flags &lt;code&gt;policy-config-file&lt;/code&gt;, &lt;code&gt;policy-configmap&lt;/code&gt;, &lt;code&gt;policy-configmap-namespace&lt;/code&gt; and &lt;code&gt;use-legacy-policy-config&lt;/code&gt; are also not supported. Instead, use the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/scheduling/config/"&gt;Scheduler Configuration&lt;/a&gt; to achieve similar behavior.&lt;/p&gt;
&lt;h2 id="what-s-next"&gt;What's next&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Learn about &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/kube-scheduler/"&gt;scheduling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn about &lt;a href="https://andygol-k8s.netlify.app/docs/reference/scheduling/config/"&gt;kube-scheduler Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/config-api/kube-scheduler-config.v1/"&gt;kube-scheduler configuration reference (v1)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Secrets</title><link>https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A Secret is an object that contains a small amount of sensitive data such as
a password, a token, or a key. Such information might otherwise be put in a
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; specification or in a
&lt;a class='glossary-tooltip' title='Stored instance of a container that holds a set of software needed to run an application.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-image' target='_blank' aria-label='container image'&gt;container image&lt;/a&gt;. Using a
Secret means that you don't need to include confidential data in your
application code.&lt;/p&gt;</description></item><item><title>StatefulSets</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;StatefulSet is the workload API object used to manage stateful applications.&lt;/p&gt;
&lt;p&gt;Manages the deployment and scaling of a set of &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;, &lt;em&gt;and provides guarantees about the ordering and uniqueness&lt;/em&gt; of these Pods.&lt;/p&gt;
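&lt;p&gt;For illustration only, a minimal StatefulSet might look like the following sketch (the names &lt;code&gt;web&lt;/code&gt;, &lt;code&gt;nginx&lt;/code&gt; and the storage size are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx   # headless Service that provides the stable network identity
  replicas: 3          # Pods are named web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
  volumeClaimTemplates:  # one PersistentVolumeClaim per Pod, kept across rescheduling
  - metadata:
      name: www
    spec:
      accessModes: [ ReadWriteOnce ]
      resources:
        requests:
          storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;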
&lt;p&gt;Like a &lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt;, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.&lt;/p&gt;</description></item><item><title>Submitting articles to Kubernetes blogs</title><link>https://andygol-k8s.netlify.app/docs/contribute/blog/article-submission/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/blog/article-submission/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;There are two official Kubernetes blogs, and the CNCF has its own blog where you
can cover Kubernetes too. For the
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/blog/"&gt;main Kubernetes blog&lt;/a&gt;, we (the Kubernetes project) like
to publish articles with different perspectives and special focuses that have a
clear link to Kubernetes.&lt;/p&gt;
&lt;p&gt;With only a few special case exceptions, we only publish content that hasn't
been submitted or published anywhere else.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="writing-for-the-kubernetes-blog-s"&gt;Writing for the Kubernetes blog(s)&lt;/h2&gt;
&lt;p&gt;As an author, you have three different routes towards publication.&lt;/p&gt;</description></item><item><title>Upgrading kubeadm clusters</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
1.34.x to version 1.35.x, and from version
1.35.x to 1.35.y (where &lt;code&gt;y &amp;gt; x&lt;/code&gt;). Skipping MINOR versions
when upgrading is unsupported. For more details, please visit &lt;a href="https://andygol-k8s.netlify.app/releases/version-skew-policy/"&gt;Version Skew Policy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To see information about upgrading clusters created using older versions of kubeadm,
please refer to following pages instead:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://v1-34.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;Upgrading a kubeadm cluster from 1.33 to 1.34&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v1-33.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;Upgrading a kubeadm cluster from 1.32 to 1.33&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v1-32.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;Upgrading a kubeadm cluster from 1.31 to 1.32&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v1-31.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;Upgrading a kubeadm cluster from 1.30 to 1.31&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Kubernetes project recommends upgrading to the latest patch releases promptly, and
ensuring that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.&lt;/p&gt;</description></item><item><title>Use Cilium for NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use Cilium for NetworkPolicy.&lt;/p&gt;
&lt;p&gt;For background on Cilium, read the &lt;a href="https://docs.cilium.io/en/stable/overview/intro"&gt;Introduction to Cilium&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Validate node setup</title><link>https://andygol-k8s.netlify.app/docs/setup/best-practices/node-conformance/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/best-practices/node-conformance/</guid><description>&lt;h2 id="node-conformance-test"&gt;Node Conformance Test&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Node conformance test&lt;/em&gt; is a containerized test framework that provides a system
verification and functionality test for a node. The test validates whether the
node meets the minimum requirements for Kubernetes; a node that passes the test
is qualified to join a Kubernetes cluster.&lt;/p&gt;
&lt;h2 id="node-prerequisite"&gt;Node Prerequisite&lt;/h2&gt;
&lt;p&gt;To run the node conformance test, a node must satisfy the same prerequisites as a
standard Kubernetes node. At a minimum, the node should have the following
daemons installed:&lt;/p&gt;</description></item><item><title>Versions in CustomResourceDefinitions</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to add versioning information to
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/"&gt;CustomResourceDefinitions&lt;/a&gt;, to indicate the stability
level of your CustomResourceDefinitions or advance your API to a new version with conversion between API representations. It also describes how to upgrade an object from one version to another.&lt;/p&gt;
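&lt;p&gt;As a sketch of what the versioning fields look like, a CustomResourceDefinition lists its versions with &lt;code&gt;served&lt;/code&gt; and &lt;code&gt;storage&lt;/code&gt; flags (the group and kind below are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  names:
    kind: CronTab
    plural: crontabs
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true     # still served to existing clients
    storage: false
    schema:
      openAPIV3Schema:
        type: object
  - name: v1
    served: true
    storage: true    # exactly one version is persisted in etcd
    schema:
      openAPIV3Schema:
        type: object
&lt;/code&gt;&lt;/pre&gt;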
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Using RBAC Authorization</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/rbac/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/rbac/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Role-based access control (RBAC) is a method of regulating access to computer or
network resources based on the roles of individual users within your organization.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;RBAC authorization uses the &lt;code&gt;rbac.authorization.k8s.io&lt;/code&gt;
&lt;a class='glossary-tooltip' title='A set of related paths in the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API group'&gt;API group&lt;/a&gt; to drive authorization
decisions, allowing you to dynamically configure policies through the Kubernetes API.&lt;/p&gt;
&lt;p&gt;To enable RBAC, start the &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;
with the &lt;code&gt;--authorization-config&lt;/code&gt; flag set to a file that includes the &lt;code&gt;RBAC&lt;/code&gt; authorizer; for example:&lt;/p&gt;</description></item><item><title>Using Node Authorization</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/node/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Node authorization is a special-purpose authorization mode that specifically
authorizes API requests made by kubelets.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;The Node authorizer allows a kubelet to perform API operations. This includes:&lt;/p&gt;
&lt;p&gt;Read operations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;services&lt;/li&gt;
&lt;li&gt;endpoints&lt;/li&gt;
&lt;li&gt;nodes&lt;/li&gt;
&lt;li&gt;pods&lt;/li&gt;
&lt;li&gt;secrets, configmaps, persistent volume claims and persistent volumes related
to pods bound to the kubelet's node&lt;/li&gt;
&lt;/ul&gt;







 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: AuthorizeNodeWithSelectors"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.34 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;Kubelets are limited to reading their own Node objects, and only reading pods bound to their node.&lt;/p&gt;</description></item><item><title>Common Expression Language in Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/cel/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/cel/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The &lt;a href="https://github.com/google/cel-go"&gt;Common Expression Language (CEL)&lt;/a&gt; is used
in the Kubernetes API to declare validation rules, policy rules, and other
constraints or conditions.&lt;/p&gt;
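&lt;p&gt;For a flavor of the language, a CustomResourceDefinition schema can embed a CEL validation rule; the field names in this sketch are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        minReplicas:
          type: integer
        replicas:
          type: integer
      x-kubernetes-validations:
      - rule: self.minReplicas &lt;= self.replicas  # evaluated in the API server
        message: replicas must be at least minReplicas
&lt;/code&gt;&lt;/pre&gt;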
&lt;p&gt;CEL expressions are evaluated directly in the
&lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;, making CEL a
convenient alternative to out-of-process mechanisms, such as webhooks, for many
extensibility use cases. Your CEL expressions continue to execute so long as the
control plane's API server component remains available.&lt;/p&gt;</description></item><item><title>Configuring swap memory on Kubernetes nodes</title><link>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/provision-swap-memory/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/provision-swap-memory/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an example of how to provision and configure swap memory on a Kubernetes node using kubeadm.&lt;/p&gt;
&lt;!-- lessoncontent --&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Provision swap memory on a Kubernetes node using kubeadm.&lt;/li&gt;
&lt;li&gt;Learn to configure both encrypted and unencrypted swap.&lt;/li&gt;
&lt;li&gt;Learn to enable swap on boot.&lt;/li&gt;
&lt;/ul&gt;
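&lt;p&gt;The end state of this tutorial can be sketched as a kubelet configuration fragment; the values shown are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false            # allow the kubelet to start on a node with swap enabled
memorySwap:
  swapBehavior: LimitedSwap  # let Burstable Pods use swap, within limits
&lt;/code&gt;&lt;/pre&gt;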
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Webhook Mode</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/webhook/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/webhook/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A WebHook is an HTTP callback: an HTTP POST that occurs when something happens, serving as a simple event notification. A web application implementing WebHooks will POST a message to a URL when certain things happen.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;When specified, mode &lt;code&gt;Webhook&lt;/code&gt; causes Kubernetes to query an outside REST
service when determining user privileges.&lt;/p&gt;
&lt;h2 id="configuration-file-format"&gt;Configuration File Format&lt;/h2&gt;
&lt;p&gt;Mode &lt;code&gt;Webhook&lt;/code&gt; requires a file for HTTP configuration, specified by the
&lt;code&gt;--authorization-webhook-config-file=SOME_FILENAME&lt;/code&gt; flag.&lt;/p&gt;</description></item><item><title>Using ABAC Authorization</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/abac/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/abac/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted
to users through the use of policies which combine attributes together.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="policy-file-format"&gt;Policy File Format&lt;/h2&gt;
&lt;p&gt;To enable &lt;code&gt;ABAC&lt;/code&gt; mode, specify &lt;code&gt;--authorization-policy-file=SOME_FILENAME&lt;/code&gt; and &lt;code&gt;--authorization-mode=ABAC&lt;/code&gt;
on startup.&lt;/p&gt;
&lt;p&gt;The file format is &lt;a href="https://jsonlines.org/"&gt;one JSON object per line&lt;/a&gt;. There
should be no enclosing list or map, only one map per line.&lt;/p&gt;
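&lt;p&gt;As a sketch, two such policy lines might look like this (the user names are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{&amp;quot;apiVersion&amp;quot;: &amp;quot;abac.authorization.kubernetes.io/v1beta1&amp;quot;, &amp;quot;kind&amp;quot;: &amp;quot;Policy&amp;quot;, &amp;quot;spec&amp;quot;: {&amp;quot;user&amp;quot;: &amp;quot;alice&amp;quot;, &amp;quot;namespace&amp;quot;: &amp;quot;*&amp;quot;, &amp;quot;resource&amp;quot;: &amp;quot;*&amp;quot;, &amp;quot;apiGroup&amp;quot;: &amp;quot;*&amp;quot;}}
{&amp;quot;apiVersion&amp;quot;: &amp;quot;abac.authorization.kubernetes.io/v1beta1&amp;quot;, &amp;quot;kind&amp;quot;: &amp;quot;Policy&amp;quot;, &amp;quot;spec&amp;quot;: {&amp;quot;user&amp;quot;: &amp;quot;bob&amp;quot;, &amp;quot;namespace&amp;quot;: &amp;quot;projectCaribou&amp;quot;, &amp;quot;resource&amp;quot;: &amp;quot;pods&amp;quot;, &amp;quot;readonly&amp;quot;: true}}
&lt;/code&gt;&lt;/pre&gt;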
&lt;p&gt;Each line is a &amp;quot;policy object&amp;quot;, where each such object is a map with the following
properties:&lt;/p&gt;</description></item><item><title>Admission Control in Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of &lt;em&gt;admission controllers&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;An admission controller is a piece of code that intercepts requests to the
Kubernetes API server prior to persistence of the resource, but after the request
is authenticated and authorized.&lt;/p&gt;
&lt;p&gt;Several important features of Kubernetes require an admission controller to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission controllers is an incomplete server that will not
support all the features you expect.&lt;/p&gt;</description></item><item><title>Adopting Sidecar Containers</title><link>https://andygol-k8s.netlify.app/docs/tutorials/configuration/pod-sidecar-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/configuration/pod-sidecar-containers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This section is relevant for people adopting a new built-in
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/sidecar-containers/"&gt;sidecar containers&lt;/a&gt; feature for their workloads.&lt;/p&gt;
&lt;p&gt;The sidecar container is not a new concept, as noted in an earlier
&lt;a href="https://andygol-k8s.netlify.app/blog/2015/06/the-distributed-system-toolkit-patterns/"&gt;blog post&lt;/a&gt;.
Kubernetes has long allowed running multiple containers in a Pod to implement this concept.
However, running a sidecar container as a regular container
has many limitations that the new built-in sidecar container support fixes.&lt;/p&gt;







 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: SidecarContainers"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Understand the need for sidecar containers&lt;/li&gt;
&lt;li&gt;Be able to troubleshoot issues with the sidecar containers&lt;/li&gt;
&lt;li&gt;Understand options to universally &amp;quot;inject&amp;quot; sidecar containers to any workload&lt;/li&gt;
&lt;/ul&gt;
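&lt;p&gt;As a sketch of the built-in support, a sidecar is declared as an init container with &lt;code&gt;restartPolicy: Always&lt;/code&gt; (the container names and images below are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: logshipper
    image: alpine:latest
    restartPolicy: Always   # marks this init container as a sidecar
    command: [sh, -c, 'tail -F /opt/logs.txt']
  containers:
  - name: myapp
    image: alpine:latest
    command: [sh, -c, 'sleep infinity']
&lt;/code&gt;&lt;/pre&gt;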
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Blog guidelines</title><link>https://andygol-k8s.netlify.app/docs/contribute/blog/guidelines/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/blog/guidelines/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;These guidelines cover the main Kubernetes blog and the Kubernetes
contributor blog.&lt;/p&gt;
&lt;p&gt;All blog content must also adhere to the overall policy in the
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/"&gt;content guide&lt;/a&gt;.&lt;/p&gt;
&lt;h1 id="before-you-begin"&gt;Before you begin&lt;/h1&gt;
&lt;p&gt;Make sure you are familiar with the introduction sections of
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/blog/"&gt;contributing to Kubernetes blogs&lt;/a&gt;, not just to learn about
the two official blogs and the differences between them, but also to get an overview
of the process.&lt;/p&gt;
&lt;h2 id="original-content"&gt;Original content&lt;/h2&gt;
&lt;p&gt;The Kubernetes project accepts &lt;strong&gt;original content only&lt;/strong&gt;, in English.&lt;/p&gt;</description></item><item><title>Cloud Controller Manager</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/cloud-controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/cloud-controller/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Cloud infrastructure technologies let you run Kubernetes on public, private, and hybrid clouds.
Kubernetes believes in automated, API-driven infrastructure without tight coupling between
components.&lt;/p&gt;
&lt;p&gt;The cloud-controller-manager is a Kubernetes &lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; component
that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.&lt;/p&gt;</description></item><item><title>Configure Minimum and Maximum CPU Constraints for a Namespace</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to set minimum and maximum values for the CPU resources used by containers
and Pods in a &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;. You specify minimum
and maximum CPU values in a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/limit-range-v1/"&gt;LimitRange&lt;/a&gt;
object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created
in the namespace.&lt;/p&gt;</description></item><item><title>Configure RunAsUserName for Windows pods and containers</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-runasusername/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-runasusername/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page shows how to use the &lt;code&gt;runAsUserName&lt;/code&gt; setting for Pods and containers that will run on Windows nodes. This is roughly equivalent to the Linux-specific &lt;code&gt;runAsUser&lt;/code&gt; setting, allowing you to run applications in a container as a different username than the default.&lt;/p&gt;
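&lt;p&gt;A minimal sketch of the setting, applied at the Pod level (the user name and image are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-pod
spec:
  securityContext:
    windowsOptions:
      runAsUserName: ContainerUser   # applies to all containers unless overridden
  containers:
  - name: run-as-username
    image: mcr.microsoft.com/windows/servercore:ltsc2022
    command: [ping, -t, localhost]
  nodeSelector:
    kubernetes.io/os: windows
&lt;/code&gt;&lt;/pre&gt;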
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster and the kubectl command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes where pods with containers running Windows workloads will get scheduled.&lt;/p&gt;</description></item><item><title>Container Lifecycle Hooks</title><link>https://andygol-k8s.netlify.app/docs/concepts/containers/container-lifecycle-hooks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/containers/container-lifecycle-hooks/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes how kubelet managed Containers can use the Container lifecycle hook framework
to run code triggered by events during their management lifecycle.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Analogous to many programming frameworks that have component lifecycle hooks, such as Angular,
Kubernetes provides Containers with lifecycle hooks.
The hooks enable Containers to be aware of events in their management lifecycle
and run code implemented in a handler when the corresponding lifecycle hook is executed.&lt;/p&gt;</description></item><item><title>Customizing components with the kubeadm API</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/control-plane-flags/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/control-plane-flags/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page covers how to customize the components that kubeadm deploys. For control plane components
you can use flags in the &lt;code&gt;ClusterConfiguration&lt;/code&gt; structure or per-node patches. For the kubelet
and kube-proxy you can use &lt;code&gt;KubeletConfiguration&lt;/code&gt; and &lt;code&gt;KubeProxyConfiguration&lt;/code&gt;, respectively.&lt;/p&gt;
&lt;p&gt;All of these options are possible via the kubeadm configuration API.
For more details on each field in the configuration you can navigate to our
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/"&gt;API reference pages&lt;/a&gt;.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;To reconfigure a cluster that has already been created see
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/"&gt;Reconfiguring a kubeadm cluster&lt;/a&gt;.&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="customizing-the-control-plane-with-flags-in-clusterconfiguration"&gt;Customizing the control plane with flags in &lt;code&gt;ClusterConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;The kubeadm &lt;code&gt;ClusterConfiguration&lt;/code&gt; object exposes a way for users to override the default
flags passed to control plane components such as the APIServer, ControllerManager, Scheduler and Etcd.
The components are defined using the following structures:&lt;/p&gt;</description></item><item><title>DaemonSet</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A &lt;em&gt;DaemonSet&lt;/em&gt; ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the
cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage
collected. Deleting a DaemonSet will clean up the Pods it created.&lt;/p&gt;
&lt;p&gt;Some typical uses of a DaemonSet are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;running a cluster storage daemon on every node&lt;/li&gt;
&lt;li&gt;running a logs collection daemon on every node&lt;/li&gt;
&lt;li&gt;running a node monitoring daemon on every node&lt;/li&gt;
&lt;/ul&gt;
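&lt;p&gt;For instance, a monitoring daemon run on every node can be sketched as follows (the names, namespace and image are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule   # also run on control plane nodes, if desired
      containers:
      - name: node-exporter
        image: quay.io/prometheus/node-exporter:v1.8.0
&lt;/code&gt;&lt;/pre&gt;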
&lt;p&gt;In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon.
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
different flags and/or different memory and cpu requests for different hardware types.&lt;/p&gt;</description></item><item><title>Debug Init Containers</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-init-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-init-containers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to investigate problems related to the execution of
Init Containers. The example command lines below refer to the Pod as
&lt;code&gt;&amp;lt;pod-name&amp;gt;&lt;/code&gt; and the Init Containers as &lt;code&gt;&amp;lt;init-container-1&amp;gt;&lt;/code&gt; and
&lt;code&gt;&amp;lt;init-container-2&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Enforcing Pod Security Standards</title><link>https://andygol-k8s.netlify.app/docs/setup/best-practices/enforcing-pod-security-standards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/best-practices/enforcing-pod-security-standards/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of best practices when it comes to enforcing
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="using-the-built-in-pod-security-admission-controller"&gt;Using the built-in Pod Security Admission Controller&lt;/h2&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/#podsecurity"&gt;Pod Security Admission Controller&lt;/a&gt;
intends to replace the deprecated PodSecurityPolicies.&lt;/p&gt;
&lt;h3 id="configure-all-cluster-namespaces"&gt;Configure all cluster namespaces&lt;/h3&gt;
&lt;p&gt;Namespaces that lack any configuration at all should be considered significant gaps in your cluster
security model. We recommend taking the time to analyze the types of workloads occurring in each
namespace, and by referencing the Pod Security Standards, decide on an appropriate level for
each of them. Unlabeled namespaces should only indicate that they've yet to be evaluated.&lt;/p&gt;</description></item><item><title>Expose Pod Information to Containers Through Files</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how a Pod can use a
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#downwardapi"&gt;&lt;code&gt;downwardAPI&lt;/code&gt; volume&lt;/a&gt;,
to expose information about itself to containers running in the Pod.
A &lt;code&gt;downwardAPI&lt;/code&gt; volume can expose Pod fields and container fields.&lt;/p&gt;
&lt;p&gt;In Kubernetes, there are two ways to expose Pod and container fields to a running container:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/environment-variable-expose-pod-information/"&gt;Environment variables&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Volume files, as explained in this task&lt;/li&gt;
&lt;/ul&gt;
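&lt;p&gt;As a sketch of the volume-file approach (the Pod name, labels, and image are illustrative), a Pod can mount its own labels as a file:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo              # hypothetical name
  labels:
    zone: us-east-1a
spec:
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/labels; sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"         # written as a file under the mount path
            fieldRef:
              fieldPath: metadata.labels
```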
&lt;p&gt;Together, these two ways of exposing Pod and container fields are called the
&lt;em&gt;downward API&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Imperative Management of Kubernetes Objects Using Configuration Files</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/imperative-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/imperative-config/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes objects can be created, updated, and deleted by using the &lt;code&gt;kubectl&lt;/code&gt;
command-line tool along with an object configuration file written in YAML or JSON.
This document explains how to define and manage objects using configuration files.&lt;/p&gt;
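&lt;p&gt;An object configuration file is an ordinary manifest; a minimal sketch (names and image are illustrative):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment       # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      labels: {app: nginx}
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

&lt;p&gt;Saved as, say, &lt;code&gt;nginx.yaml&lt;/code&gt;, such a file can be passed to commands like &lt;code&gt;kubectl create -f nginx.yaml&lt;/code&gt; or &lt;code&gt;kubectl replace -f nginx.yaml&lt;/code&gt;.&lt;/p&gt;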
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Install &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Init Containers</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/init-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/init-containers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of init containers: specialized containers that run
before app containers in a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.
Init containers can contain utilities or setup scripts not present in an app image.&lt;/p&gt;
&lt;p&gt;You can specify init containers in the Pod specification alongside the &lt;code&gt;containers&lt;/code&gt;
array (which describes app containers).&lt;/p&gt;
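&lt;p&gt;A sketch of the two arrays side by side (names, image, and the &lt;code&gt;my-db&lt;/code&gt; hostname are illustrative); the init container must finish before the app container starts:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # hypothetical name
spec:
  initContainers:
    - name: wait-for-db        # runs to completion first
      image: busybox
      command: ["sh", "-c", "until nslookup my-db; do sleep 2; done"]
  containers:
    - name: app                # started only after all init containers succeed
      image: busybox
      command: ["sh", "-c", "echo app started; sleep 3600"]
```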
&lt;p&gt;In Kubernetes, a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/sidecar-containers/"&gt;sidecar container&lt;/a&gt; is a container that
starts before the main application container and &lt;em&gt;continues to run&lt;/em&gt;. This document is about init containers:
containers that run to completion during Pod initialization.&lt;/p&gt;</description></item><item><title>JSONPath Support</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/jsonpath/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/jsonpath/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The &lt;a class='glossary-tooltip' title='A command line tool for communicating with a Kubernetes cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt; tool supports JSONPath templates as an output format.&lt;/p&gt;
&lt;!-- body --&gt;
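&lt;p&gt;For instance, &lt;code&gt;kubectl get pods -o jsonpath='{.items[*].metadata.name}'&lt;/code&gt; prints every Pod name. Conceptually, that expression traverses the returned JSON object the way this Python sketch does (the data is illustrative):&lt;/p&gt;

```python
# Sketch: what a JSONPath expression like {.items[*].metadata.name}
# selects from a kubectl-style JSON list object (data is illustrative).
pod_list = {
    "kind": "PodList",
    "items": [
        {"metadata": {"name": "web-0"}},
        {"metadata": {"name": "web-1"}},
    ],
}

def names(obj):
    """Mimics {.items[*].metadata.name}: visit every item, take its name."""
    return [item["metadata"]["name"] for item in obj["items"]]

# JSONPath output joins the matched values with spaces.
print(" ".join(names(pod_list)))
```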
&lt;p&gt;A &lt;em&gt;JSONPath template&lt;/em&gt; is composed of JSONPath expressions enclosed by curly braces: &lt;code&gt;{&lt;/code&gt; and &lt;code&gt;}&lt;/code&gt;.
Kubectl uses JSONPath expressions to filter on specific fields in the JSON object and format the output.
In addition to the original JSONPath template syntax, the following functions and syntax are valid:&lt;/p&gt;</description></item><item><title>kubeadm upgrade</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;code&gt;kubeadm upgrade&lt;/code&gt; is a user-friendly command that wraps complex upgrading logic
behind one command, with support for both planning an upgrade and actually performing it.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="kubeadm-upgrade-guidance"&gt;kubeadm upgrade guidance&lt;/h2&gt;
&lt;p&gt;The steps for performing an upgrade using kubeadm are outlined in &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;this document&lt;/a&gt;.
For older versions of kubeadm, please refer to older documentation sets of the Kubernetes website.&lt;/p&gt;
&lt;p&gt;You can use &lt;code&gt;kubeadm upgrade diff&lt;/code&gt; to see the changes that would be applied to static pod manifests.&lt;/p&gt;</description></item><item><title>kubeadm upgrade phases</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase/</guid><description>&lt;h2 id="cmd-apply-phase"&gt;kubeadm upgrade apply phase&lt;/h2&gt;
&lt;p&gt;Using the phases of &lt;code&gt;kubeadm upgrade apply&lt;/code&gt;, you can choose to execute the separate steps of the initial upgrade
of a control plane node.&lt;/p&gt;
&lt;ul class="nav nav-tabs" id="tab-apply-phase" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-apply-phase-0" role="tab" aria-controls="tab-apply-phase-0" aria-selected="true"&gt;phase&lt;/a&gt;&lt;/li&gt;
	 
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-1" role="tab" aria-controls="tab-apply-phase-1"&gt;preflight&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-2" role="tab" aria-controls="tab-apply-phase-2"&gt;control-plane&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-3" role="tab" aria-controls="tab-apply-phase-3"&gt;upload-config&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-4" role="tab" aria-controls="tab-apply-phase-4"&gt;kubelet-config&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-5" role="tab" aria-controls="tab-apply-phase-5"&gt;bootstrap-token&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-6" role="tab" aria-controls="tab-apply-phase-6"&gt;addon&lt;/a&gt;&lt;/li&gt;
		&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link" href="#tab-apply-phase-7" role="tab" aria-controls="tab-apply-phase-7"&gt;post-upgrade&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;div class="tab-content" id="tab-apply-phase"&gt;&lt;div id="tab-apply-phase-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-apply-phase-0"&gt;

&lt;p&gt;&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;apply&amp;quot; workflow&lt;/p&gt;</description></item><item><title>Kubernetes Deprecation Policy</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document details the deprecation policy for various facets of the system.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;Kubernetes is a large system with many components and many contributors. As
with any such software, the feature set naturally evolves over time, and
sometimes a feature may need to be removed. This could include an API, a flag,
or even an entire feature. To avoid breaking existing users, Kubernetes follows
a deprecation policy for aspects of the system that are slated to be removed.&lt;/p&gt;</description></item><item><title>KYAML Reference</title><link>https://andygol-k8s.netlify.app/docs/reference/encodings/kyaml/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/encodings/kyaml/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;strong&gt;KYAML&lt;/strong&gt; is a safer and less ambiguous subset of YAML, initially introduced in Kubernetes v1.34 (alpha) and enabled by default in v1.35 (beta). Designed specifically for Kubernetes, KYAML addresses common YAML pitfalls such as whitespace sensitivity and implicit type coercion while maintaining full compatibility with existing YAML parsers and tooling.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;This reference describes KYAML syntax.&lt;/p&gt;
&lt;h2 id="getting-started-with-kyaml"&gt;Getting started with KYAML&lt;/h2&gt;
&lt;p&gt;YAML’s reliance on indentation and implicit type coercion often leads to configuration errors, especially in CI/CD pipelines and templating systems like Helm. KYAML eliminates these issues by enforcing explicit syntax and structure, making configurations more reliable and easier to debug.&lt;/p&gt;</description></item><item><title>Labels and Selectors</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;em&gt;Labels&lt;/em&gt; are key/value pairs that are attached to
&lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; such as Pods.
Labels are intended to be used to specify identifying attributes of objects
that are meaningful and relevant to users, but do not directly imply semantics
to the core system. Labels can be used to organize and to select subsets of
objects. Labels can be attached to objects at creation time and subsequently
added and modified at any time. Each object can have a set of key/value labels
defined. Each key must be unique for a given object.&lt;/p&gt;</description></item><item><title>Liveness, Readiness, and Startup Probes</title><link>https://andygol-k8s.netlify.app/docs/concepts/configuration/liveness-readiness-startup-probes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/configuration/liveness-readiness-startup-probes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes has various types of probes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#liveness-probe"&gt;Liveness probe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#readiness-probe"&gt;Readiness probe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#startup-probe"&gt;Startup probe&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- body --&gt;
&lt;h2 id="liveness-probe"&gt;Liveness probe&lt;/h2&gt;
&lt;p&gt;Liveness probes determine when to restart a container. For example, liveness probes could catch a deadlock when an application is running but unable to make progress.&lt;/p&gt;
&lt;p&gt;If a container fails its liveness probe repeatedly, the kubelet restarts the container.&lt;/p&gt;
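&lt;p&gt;A liveness probe is declared per container; a minimal sketch (the container name, image, path, and port are illustrative):&lt;/p&gt;

```yaml
containers:
  - name: app                    # illustrative
    image: registry.example/app
    livenessProbe:
      httpGet:
        path: /healthz           # endpoint the kubelet polls
        port: 8080
      initialDelaySeconds: 10    # wait before the first probe
      periodSeconds: 5           # probe interval
      failureThreshold: 3        # consecutive failures before restart
```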
&lt;p&gt;Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a liveness probe, you can either define &lt;code&gt;initialDelaySeconds&lt;/code&gt; or use a
&lt;a href="#startup-probe"&gt;startup probe&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Managing Workloads</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/management/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;You've deployed your application and exposed it via a Service. Now what? Kubernetes provides a
number of tools to help you manage your application deployment, including scaling and updating.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="organizing-resource-configurations"&gt;Organizing resource configurations&lt;/h2&gt;
&lt;p&gt;Many applications require multiple resources to be created, such as a Deployment along with a Service.
Management of multiple resources can be simplified by grouping them together in the same file
(separated by &lt;code&gt;---&lt;/code&gt; in YAML). For example:&lt;/p&gt;</description></item><item><title>Node Labels Populated By The Kubelet</title><link>https://andygol-k8s.netlify.app/docs/reference/node/node-labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/node-labels/</guid><description>&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='nodes'&gt;nodes&lt;/a&gt; come pre-populated
with a standard set of &lt;a class='glossary-tooltip' title='Tags objects with identifying attributes that are meaningful and relevant to users.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels' target='_blank' aria-label='labels'&gt;labels&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can also set your own labels on nodes, either through the kubelet configuration or
using the Kubernetes API.&lt;/p&gt;
&lt;h2 id="preset-labels"&gt;Preset labels&lt;/h2&gt;
&lt;p&gt;The preset labels that Kubernetes sets on nodes are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/#kubernetes-io-arch"&gt;&lt;code&gt;kubernetes.io/arch&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/#kubernetesiohostname"&gt;&lt;code&gt;kubernetes.io/hostname&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/#kubernetes-io-os"&gt;&lt;code&gt;kubernetes.io/os&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type"&gt;&lt;code&gt;node.kubernetes.io/instance-type&lt;/code&gt;&lt;/a&gt;
(if known to the kubelet – Kubernetes may not have this information to set the label)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/#topologykubernetesioregion"&gt;&lt;code&gt;topology.kubernetes.io/region&lt;/code&gt;&lt;/a&gt;
(if known to the kubelet – Kubernetes may not have this information to set the label)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/#topologykubernetesiozone"&gt;&lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt;&lt;/a&gt;
(if known to the kubelet – Kubernetes may not have this information to set the label)&lt;/li&gt;
&lt;/ul&gt;
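&lt;p&gt;On one node, the populated labels might look like the following sketch (all values are environment specific and illustrative only):&lt;/p&gt;

```yaml
labels:
  kubernetes.io/arch: amd64
  kubernetes.io/hostname: node-1.example
  kubernetes.io/os: linux
  node.kubernetes.io/instance-type: m5.large
  topology.kubernetes.io/region: us-east-1
  topology.kubernetes.io/zone: us-east-1c
```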

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;The value of these labels is cloud provider specific and is not guaranteed to be reliable.
For example, the value of &lt;code&gt;kubernetes.io/hostname&lt;/code&gt; may be the same as the node name in some environments
and a different value in other environments.&lt;/div&gt;

&lt;h2 id="what-s-next"&gt;What's next&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;See &lt;a href="https://andygol-k8s.netlify.app/docs/reference/labels-annotations-taints/"&gt;Well-Known Labels, Annotations and Taints&lt;/a&gt; for a list of common labels.&lt;/li&gt;
&lt;li&gt;Learn how to &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node"&gt;add a label to a node&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Pod Scheduling Readiness</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-scheduling-readiness/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-scheduling-readiness/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.30 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Pods were traditionally considered ready for scheduling as soon as they were created. The Kubernetes scheduler
does its due diligence to find nodes to place all pending Pods. However, in practice,
some Pods may stay in a &amp;quot;miss-essential-resources&amp;quot; state for a long period.
These Pods churn the scheduler (and downstream integrators like Cluster Autoscaler)
unnecessarily.&lt;/p&gt;
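&lt;p&gt;A Pod can be created with one or more scheduling gates so that the scheduler ignores it until every gate is removed; a minimal sketch (the Pod name and gate name are hypothetical):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod              # hypothetical name
spec:
  schedulingGates:
    - name: example.com/foo    # scheduler skips the Pod while any gate remains
  containers:
    - name: app
      image: busybox
```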
&lt;p&gt;By specifying/removing a Pod's &lt;code&gt;.spec.schedulingGates&lt;/code&gt;, you can control when a Pod is ready
to be considered for scheduling.&lt;/p&gt;</description></item><item><title>Pod Topology Spread Constraints</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/topology-spread-constraints/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/topology-spread-constraints/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;You can use &lt;em&gt;topology spread constraints&lt;/em&gt; to control how
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; are spread across your cluster
among failure-domains such as regions, zones, nodes, and other user-defined topology
domains. This can help to achieve high availability as well as efficient resource
utilization.&lt;/p&gt;
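&lt;p&gt;A constraint is declared in a Pod's spec; a minimal sketch (the label is hypothetical) that spreads matching Pods evenly across zones, allowing at most a difference of one:&lt;/p&gt;

```yaml
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # max imbalance between domains
      topologyKey: topology.kubernetes.io/zone  # node label defining the domains
      whenUnsatisfiable: DoNotSchedule          # hard requirement
      labelSelector:
        matchLabels:
          app: my-app                           # hypothetical label
```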
&lt;p&gt;You can set &lt;a href="#cluster-level-default-constraints"&gt;cluster-level constraints&lt;/a&gt; as a default,
or configure topology spread constraints for individual workloads.&lt;/p&gt;</description></item><item><title>Ports and Protocols</title><link>https://andygol-k8s.netlify.app/docs/reference/networking/ports-and-protocols/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/networking/ports-and-protocols/</guid><description>&lt;p&gt;When running Kubernetes in an environment with strict network boundaries, such
as an on-premises datacenter with physical network firewalls or virtual
networks in a public cloud, it is useful to be aware of the ports and protocols
used by Kubernetes components.&lt;/p&gt;
&lt;h2 id="control-plane"&gt;Control plane&lt;/h2&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Protocol&lt;/th&gt;
 &lt;th&gt;Direction&lt;/th&gt;
 &lt;th&gt;Port Range&lt;/th&gt;
 &lt;th&gt;Purpose&lt;/th&gt;
 &lt;th&gt;Used By&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;Inbound&lt;/td&gt;
 &lt;td&gt;6443&lt;/td&gt;
 &lt;td&gt;Kubernetes API server&lt;/td&gt;
 &lt;td&gt;All&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;Inbound&lt;/td&gt;
 &lt;td&gt;2379-2380&lt;/td&gt;
 &lt;td&gt;etcd server client API&lt;/td&gt;
 &lt;td&gt;kube-apiserver, etcd&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;Inbound&lt;/td&gt;
 &lt;td&gt;10250&lt;/td&gt;
 &lt;td&gt;Kubelet API&lt;/td&gt;
 &lt;td&gt;Self, Control plane&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;Inbound&lt;/td&gt;
 &lt;td&gt;10259&lt;/td&gt;
 &lt;td&gt;kube-scheduler&lt;/td&gt;
 &lt;td&gt;Self&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;TCP&lt;/td&gt;
 &lt;td&gt;Inbound&lt;/td&gt;
 &lt;td&gt;10257&lt;/td&gt;
 &lt;td&gt;kube-controller-manager&lt;/td&gt;
 &lt;td&gt;Self&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Although etcd ports are included in the control plane section, you can also host your own
etcd cluster externally or on custom ports.&lt;/p&gt;</description></item><item><title>Process ID Limits And Reservations</title><link>https://andygol-k8s.netlify.app/docs/concepts/policy/pid-limiting/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/policy/pid-limiting/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.20 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Kubernetes allows you to limit the number of process IDs (PIDs) that a
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; can use.
You can also reserve a number of allocatable PIDs for each &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;
for use by the operating system and daemons (rather than by Pods).&lt;/p&gt;
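&lt;p&gt;Both knobs live in the kubelet configuration; a sketch (all values are illustrative):&lt;/p&gt;

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096      # per-Pod PID cap (value illustrative)
systemReserved:
  pid: "1000"           # PIDs reserved for operating system daemons
kubeReserved:
  pid: "1000"           # PIDs reserved for Kubernetes system daemons
```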
&lt;!-- body --&gt;
&lt;p&gt;Process IDs (PIDs) are a fundamental resource on nodes. It is trivial to hit the
task limit without hitting any other resource limits, which can then cause
instability to a host machine.&lt;/p&gt;</description></item><item><title>Resource Management for Pods and Containers</title><link>https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;When you specify a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;, you can optionally specify how much of each resource a
&lt;a class='glossary-tooltip' title='A lightweight and portable executable image that contains software and all of its dependencies.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/containers/' target='_blank' aria-label='container'&gt;container&lt;/a&gt; needs. The most common resources to specify are CPU and memory
(RAM); there are others.&lt;/p&gt;
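&lt;p&gt;These needs are declared per container under &lt;code&gt;resources&lt;/code&gt;; a minimal sketch (the name, image, and values are illustrative):&lt;/p&gt;

```yaml
containers:
  - name: app                  # illustrative
    image: registry.example/app
    resources:
      requests:                # used by the scheduler for placement
        cpu: "250m"
        memory: "64Mi"
      limits:                  # enforced by the kubelet at runtime
        cpu: "500m"
        memory: "128Mi"
```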
&lt;p&gt;When you specify the resource &lt;em&gt;request&lt;/em&gt; for containers in a Pod, the
&lt;a class='glossary-tooltip' title='Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='kube-scheduler'&gt;kube-scheduler&lt;/a&gt; uses this information to decide which node to place the Pod on.
When you specify a resource &lt;em&gt;limit&lt;/em&gt; for a container, the &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; enforces those
limits so that the running container is not allowed to use more of that resource
than the limit you set. The kubelet also reserves at least the &lt;em&gt;request&lt;/em&gt; amount of
that system resource specifically for that container to use.&lt;/p&gt;</description></item><item><title>Restrict a Container's Syscalls with seccomp</title><link>https://andygol-k8s.netlify.app/docs/tutorials/security/seccomp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/security/seccomp/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a
&lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt; to your Pods and containers.&lt;/p&gt;</description></item><item><title>Running ZooKeeper, A Distributed System Coordinator</title><link>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/zookeeper/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/zookeeper/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This tutorial demonstrates running &lt;a href="https://zookeeper.apache.org"&gt;Apache ZooKeeper&lt;/a&gt; on
Kubernetes using &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt;,
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget"&gt;PodDisruptionBudgets&lt;/a&gt;,
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity"&gt;PodAntiAffinity&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Before starting this tutorial, you should be familiar with the following
Kubernetes concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/"&gt;Pods&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/dns-pod-service/"&gt;Cluster DNS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#headless-services"&gt;Headless Services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget"&gt;PodDisruptionBudgets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity"&gt;PodAntiAffinity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl/"&gt;kubectl CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster's nodes. &lt;strong&gt;This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.&lt;/strong&gt; You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.&lt;/p&gt;</description></item><item><title>Security For Linux Nodes</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/linux-security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/linux-security/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes security considerations and best practices specific to the Linux operating system.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="protection-for-secret-data-on-nodes"&gt;Protection for Secret data on nodes&lt;/h2&gt;
&lt;p&gt;On Linux nodes, memory-backed volumes (such as &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/"&gt;&lt;code&gt;secret&lt;/code&gt;&lt;/a&gt;
volume mounts, or &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#emptydir"&gt;&lt;code&gt;emptyDir&lt;/code&gt;&lt;/a&gt; with &lt;code&gt;medium: Memory&lt;/code&gt;)
are implemented with a &lt;code&gt;tmpfs&lt;/code&gt; filesystem.&lt;/p&gt;
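&lt;p&gt;For example, a memory-backed scratch volume is declared like this sketch (the volume name and size are illustrative):&lt;/p&gt;

```yaml
volumes:
  - name: scratch          # hypothetical name
    emptyDir:
      medium: Memory       # backed by tmpfs on Linux nodes
      sizeLimit: 64Mi      # counts against the container's memory usage
```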
&lt;p&gt;If you have swap configured and use an older Linux kernel (or a current kernel and an unsupported configuration of Kubernetes),
&lt;strong&gt;memory&lt;/strong&gt; backed volumes can have data written to persistent storage.&lt;/p&gt;</description></item><item><title>Security For Windows Nodes</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/windows-security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/windows-security/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes security considerations and best practices specific to the Windows operating system.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="protection-for-secret-data-on-nodes"&gt;Protection for Secret data on nodes&lt;/h2&gt;
&lt;p&gt;On Windows, data from Secrets are written out in clear text onto the node's local
storage (as compared to using tmpfs / in-memory filesystems on Linux). As a cluster
operator, you should take both of the following additional measures:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use file ACLs to secure the Secrets' file location.&lt;/li&gt;
&lt;li&gt;Apply volume-level encryption using
&lt;a href="https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server"&gt;BitLocker&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
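&lt;p&gt;As a rough sketch of the first measure, &lt;code&gt;icacls&lt;/code&gt; (run from PowerShell) can restrict a directory to SYSTEM and Administrators; the path below is illustrative and depends on your kubelet configuration:&lt;/p&gt;

```shell
# remove inherited permissions on the directory holding Secret data
icacls "C:\var\lib\kubelet\pods" /inheritance:r
# grant full control only to SYSTEM and the Administrators group
icacls "C:\var\lib\kubelet\pods" /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F" /grant "BUILTIN\Administrators:(OI)(CI)F"
```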
&lt;h2 id="container-users"&gt;Container users&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-runasusername/"&gt;RunAsUsername&lt;/a&gt;
can be specified for Windows Pods or containers to run the container
processes as a specific user. This is roughly equivalent to
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-policy/#users-and-groups"&gt;RunAsUser&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Storage Classes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes the concept of a StorageClass in Kubernetes. Familiarity
with &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volumes&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;persistent volumes&lt;/a&gt; is suggested.&lt;/p&gt;
&lt;p&gt;A StorageClass provides a way for administrators to describe the &lt;em&gt;classes&lt;/em&gt; of
storage they offer. Different classes might map to quality-of-service levels,
or to backup policies, or to arbitrary policies determined by the cluster
administrators. Kubernetes itself is unopinionated about what classes
represent.&lt;/p&gt;
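&lt;p&gt;For example, a StorageClass might look like the following; the class name, provisioner, and parameters are placeholders and depend on your storage driver:&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                    # hypothetical class name
provisioner: csi.example.com    # placeholder CSI driver
parameters:
  type: ssd                     # parameters are driver-specific
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

&lt;p&gt;A PersistentVolumeClaim then selects this class through its &lt;code&gt;storageClassName&lt;/code&gt; field.&lt;/p&gt;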
&lt;p&gt;The Kubernetes concept of a storage class is similar to “profiles” in some other
storage system designs.&lt;/p&gt;</description></item><item><title>Documentation Style Guide</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page gives writing style guidelines for the Kubernetes documentation.
These are guidelines, not rules. Use your best judgment, and feel free to
propose changes to this document in a pull request.&lt;/p&gt;
&lt;p&gt;For additional information on creating new content for the Kubernetes
documentation, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/"&gt;Documentation Content Guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Changes to the style guide are made by SIG Docs as a group. To propose a change
or addition, &lt;a href="https://bit.ly/sig-docs-agenda"&gt;add it to the agenda&lt;/a&gt; for an upcoming
SIG Docs meeting, and attend the meeting to participate in the discussion.&lt;/p&gt;</description></item><item><title>The Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The core of Kubernetes' &lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
is the &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;. The API server
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.&lt;/p&gt;
&lt;p&gt;The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
(for example: Pods, Namespaces, ConfigMaps, and Events).&lt;/p&gt;</description></item><item><title>Troubleshooting CNI plugin-related errors</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;To avoid CNI plugin-related errors, verify that you are using or upgrading to a
container runtime that has been tested to work correctly with your version of
Kubernetes.&lt;/p&gt;
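&lt;p&gt;As a quick sanity check (paths and commands may vary by distribution), you can confirm the containerd version and verify that each CNI config file declares a config version:&lt;/p&gt;

```shell
# report the installed containerd version
containerd --version
# confirm each CNI config file declares a cniVersion
grep -r cniVersion /etc/cni/net.d/
```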
&lt;h2 id="about-the-incompatible-cni-versions-and-failed-to-destroy-network-for-sandbox-errors"&gt;About the &amp;quot;Incompatible CNI versions&amp;quot; and &amp;quot;Failed to destroy network for sandbox&amp;quot; errors&lt;/h2&gt;
&lt;p&gt;Service issues exist for pod CNI network setup and tear down in containerd
v1.6.0-v1.6.3 when the CNI plugins have not been upgraded and/or the CNI config
version is not declared in the CNI config files. The containerd team reports,
&amp;quot;these issues are resolved in containerd v1.6.4.&amp;quot;&lt;/p&gt;</description></item><item><title>Turnkey Cloud Solutions</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/turnkey-solutions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/turnkey-solutions/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides a list of Kubernetes certified solution providers. From each
provider page, you can learn how to install and set up production-ready
clusters.&lt;/p&gt;
&lt;!-- body --&gt;





&lt;script&gt;
function updateLandscapeSource(button, shouldUpdateFragment) {
  console.log({button: button, shouldUpdateFragment: shouldUpdateFragment});
  try {
    if (shouldUpdateFragment) {
      window.location.hash = "#iframe-landscape-" + button.id;
    } else {
      var landscapeElements = document.querySelectorAll("#iframe-landscape");
      let categories = button.dataset.landscapeTypes;
      let link = `https://landscape.cncf.io/embed/embed.html?key=${encodeURIComponent(categories)}&amp;headers=false&amp;style=shadowed&amp;size=md&amp;bg-color=%23d95e00&amp;fg-color=%23ffffff&amp;iframe-resizer=true`;
      landscapeElements[0].src = link;
    }
  }
  catch (err) {
    console.log({message: "error handling Landscape switch", error: err});
  }
}


document.addEventListener("DOMContentLoaded", function () {
  let hashChangeHandler = () =&gt; {
    if (window.location.hash) {
      let selectedTriggerElements = document.querySelectorAll(".landscape-trigger" + window.location.hash);
      if (selectedTriggerElements.length == 1) {
        let landscapeSource = selectedTriggerElements[0];
        console.log("Updating Landscape source based on fragment:", window.location.hash.substring(1));
        updateLandscapeSource(landscapeSource, false);
      }
    }
  };
  var landscapeTriggerElements = document.querySelectorAll(".landscape-trigger");
  landscapeTriggerElements.forEach(element =&gt; {
    element.onclick = function () {
      updateLandscapeSource(element, true);
    };
  });
  var landscapeDefaultElements = document.querySelectorAll(".landscape-trigger.landscape-default");
  if (landscapeDefaultElements.length == 1) {
    let defaultLandscapeSource = landscapeDefaultElements[0];
    updateLandscapeSource(defaultLandscapeSource, false);
  }
  window.addEventListener("hashchange", hashChangeHandler, false);

  hashChangeHandler();
});
&lt;/script&gt;&lt;div id="frameHolder"&gt;
 
 &lt;iframe id="iframe-landscape" src="https://landscape.cncf.io/embed/embed.html?key=platform--certified-kubernetes-hosted&amp;headers=false&amp;style=shadowed&amp;size=md&amp;bg-color=%233371e3&amp;fg-color=%23ffffff&amp;iframe-resizer=true" style="width: 1px; min-width: 100%; min-height: 100px; border: 0;"&gt;&lt;/iframe&gt;
 &lt;script&gt;
 iFrameResize({ }, '#iframe-landscape');
 &lt;/script&gt;
 
&lt;/div&gt;</description></item><item><title>Upgrading Linux nodes</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to upgrade Linux worker nodes created with kubeadm.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have shell access to all the nodes, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial
on a cluster with at least two nodes that are not acting as control plane hosts.&lt;/p&gt;
 
 &lt;p&gt;To check the version, enter &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Use an HTTP Proxy to Access the Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/http-proxy-access-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/http-proxy-access-api/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use an HTTP proxy to access the Kubernetes API.&lt;/p&gt;
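&lt;p&gt;For example, &lt;code&gt;kubectl proxy&lt;/code&gt; can serve as such a proxy on localhost; the port number below is arbitrary:&lt;/p&gt;

```shell
# start a proxy to the API server on local port 8080
kubectl proxy --port=8080 &
# in another shell, query the API through the proxy
curl http://localhost:8080/api/
```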
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Use Kube-router for NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use &lt;a href="https://github.com/cloudnativelabs/kube-router"&gt;Kube-router&lt;/a&gt; for NetworkPolicy.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster running. If you do not already have a cluster, you can create one by using a cluster installer such as kOps, Bootkube, or kubeadm.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="installing-kube-router-addon"&gt;Installing Kube-router addon&lt;/h2&gt;
&lt;p&gt;The Kube-router addon comes with a Network Policy Controller that watches the Kubernetes API server for NetworkPolicy and Pod updates, and configures iptables rules and ipsets to allow or block traffic as directed by the policies. Please follow the &lt;a href="https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers"&gt;trying Kube-router with cluster installers&lt;/a&gt; guide to install the Kube-router addon.&lt;/p&gt;</description></item><item><title>Use Port Forwarding to Access Applications in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use &lt;code&gt;kubectl port-forward&lt;/code&gt; to connect to a MongoDB
server running in a Kubernetes cluster. This type of connection can be useful
for database debugging.&lt;/p&gt;
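&lt;p&gt;A minimal sketch of the command, assuming a Service named &lt;code&gt;mongo&lt;/code&gt; that exposes the default MongoDB port:&lt;/p&gt;

```shell
# forward local port 28015 to port 27017 on the mongo Service
kubectl port-forward svc/mongo 28015:27017
# in another shell, connect through the forwarded port
mongosh --port 28015
```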
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Using Source IP</title><link>https://andygol-k8s.netlify.app/docs/tutorials/services/source-ip/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/services/source-ip/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Applications running in a Kubernetes cluster find and communicate with each
other, and the outside world, through the Service abstraction. This document
explains what happens to the source IP of packets sent to different types
of Services, and how you can toggle this behavior according to your needs.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;h3 id="terminology"&gt;Terminology&lt;/h3&gt;
&lt;p&gt;This document makes use of the following terms:&lt;/p&gt;

&lt;dl&gt;
&lt;dt&gt;&lt;a href="https://en.wikipedia.org/wiki/Network_address_translation"&gt;NAT&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;Network address translation&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://en.wikipedia.org/wiki/Network_address_translation#SNAT"&gt;Source NAT&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;Replacing the source IP on a packet; in this page, that usually means replacing with the IP address of a node.&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://en.wikipedia.org/wiki/Network_address_translation#DNAT"&gt;Destination NAT&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;Replacing the destination IP on a packet; in this page, that usually means replacing with the IP address of a &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"&gt;VIP&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;A virtual IP address, such as the one assigned to every &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt; in Kubernetes&lt;/dd&gt;
&lt;dt&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"&gt;kube-proxy&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;A network daemon that orchestrates Service VIP management on every node&lt;/dd&gt;
&lt;/dl&gt;
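&lt;p&gt;You can ask kube-proxy which mode it is running in through its metrics endpoint on a node; port 10249 is the default and may differ in your cluster:&lt;/p&gt;

```shell
# on a node, query kube-proxy's current proxy mode
curl http://localhost:10249/proxyMode
```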
&lt;h3 id="prerequisites"&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Volume Attributes Classes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-attributes-classes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-attributes-classes/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: VolumeAttributesClass"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.34 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page assumes that you are familiar with &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/"&gt;StorageClasses&lt;/a&gt;,
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volumes&lt;/a&gt; and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumes&lt;/a&gt;
in Kubernetes.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;A VolumeAttributesClass provides a way for administrators to describe the mutable
&amp;quot;classes&amp;quot; of storage they offer. Different classes might map to different quality-of-service levels.
Kubernetes itself is unopinionated about what these classes represent.&lt;/p&gt;
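&lt;p&gt;Assuming the &lt;code&gt;storage.k8s.io/v1&lt;/code&gt; API of the stable feature, a VolumeAttributesClass manifest might look like this; the class name, driver, and parameter keys are placeholders defined by your CSI driver:&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: silver                # hypothetical class name
driverName: csi.example.com   # placeholder CSI driver
parameters:
  provisioned-iops: "500"     # driver-specific attribute
```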
&lt;p&gt;This feature is generally available (GA) as of version 1.34, and users have the option to disable it.&lt;/p&gt;</description></item><item><title>Upgrading Windows nodes</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page explains how to upgrade a Windows node created with kubeadm.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have shell access to all the nodes, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial
on a cluster with at least two nodes that are not acting as control plane hosts.&lt;/p&gt;
 
 
 &lt;p&gt;Your Kubernetes server must be at or later than version 1.17.&lt;/p&gt;
 &lt;p&gt;To check the version, enter &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Local Files And Paths Used By The Kubelet</title><link>https://andygol-k8s.netlify.app/docs/reference/node/kubelet-files/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/kubelet-files/</guid><description>&lt;p&gt;The &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; is mostly a stateless
process running on a Kubernetes &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;.
This document outlines files that kubelet reads and writes.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;This document is for informational purpose and not describing any guaranteed behaviors or APIs.
It lists resources used by the kubelet, which is an implementation detail and a subject to change at any release.&lt;/div&gt;

&lt;p&gt;The kubelet typically uses the &lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; as
the source of truth on what needs to run on the Node, and the
&lt;a class='glossary-tooltip' title='The container runtime is the software that is responsible for running containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt; to retrieve
the current state of containers. So long as you provide a &lt;em&gt;kubeconfig&lt;/em&gt; (API client configuration)
to the kubelet, the kubelet connects to your control plane; otherwise the node operates in
&lt;em&gt;standalone mode&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Use a SOCKS5 Proxy to Access the Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/socks5-proxy-access-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/socks5-proxy-access-api/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page shows how to use a SOCKS5 proxy to access the API of a remote Kubernetes cluster.
This is useful when the cluster you want to access does not expose its API directly on the public internet.&lt;/p&gt;
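&lt;p&gt;In outline (host names and ports below are examples): open an SSH tunnel that provides a SOCKS5 proxy, then point kubectl at it via &lt;code&gt;HTTPS_PROXY&lt;/code&gt;:&lt;/p&gt;

```shell
# create a SOCKS5 proxy on local port 1080 via a host that can reach the cluster
ssh -D 1080 -q -N user@bastion.example.com
# in another shell, route kubectl traffic through the proxy
export HTTPS_PROXY=socks5://localhost:1080
kubectl get pods
```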
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Deprecated API Migration Guide</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-guide/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old API is deprecated and eventually removed.
This page contains information you need to know when migrating from
deprecated API versions to newer and more stable API versions.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="removed-apis-by-release"&gt;Removed APIs by release&lt;/h2&gt;
&lt;h3 id="v1-32"&gt;v1.32&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;v1.32&lt;/strong&gt; release stopped serving the following deprecated API versions:&lt;/p&gt;
&lt;h4 id="flowcontrol-resources-v132"&gt;Flow control resources&lt;/h4&gt;
&lt;p&gt;The &lt;strong&gt;flowcontrol.apiserver.k8s.io/v1beta3&lt;/strong&gt; API version of FlowSchema and PriorityLevelConfiguration is no longer served as of v1.32.&lt;/p&gt;</description></item><item><title>Dynamic Admission Control</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In addition to &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/"&gt;compiled-in admission plugins&lt;/a&gt;,
admission plugins can be developed as extensions and run as webhooks configured at runtime.
This page describes how to build, configure, use, and monitor admission webhooks.&lt;/p&gt;
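&lt;p&gt;To ground the discussion, an admission webhook is registered with the API server through a manifest along these lines; the names, namespace, and path here are hypothetical:&lt;/p&gt;

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com
webhooks:
- name: pod-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:                        # which requests to intercept
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:                 # where to send AdmissionReview requests
    service:
      namespace: default
      name: example-webhook-service
      path: /validate
```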
&lt;!-- body --&gt;
&lt;h2 id="what-are-admission-webhooks"&gt;What are admission webhooks?&lt;/h2&gt;
&lt;p&gt;Admission webhooks are HTTP callbacks that receive admission requests and do
something with them. You can define two types of admission webhooks,
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook"&gt;validating admission webhook&lt;/a&gt;
and
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook"&gt;mutating admission webhook&lt;/a&gt;.
Mutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults.
After all object modifications are complete, and after the incoming object is validated by the API server,
validating admission webhooks are invoked and can reject requests to enforce custom policies.&lt;/p&gt;</description></item><item><title>Namespaces</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, &lt;em&gt;namespaces&lt;/em&gt; provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; &lt;em&gt;(e.g. Deployments, Services, etc.)&lt;/em&gt; and not for cluster-wide objects &lt;em&gt;(e.g. StorageClass, Nodes, PersistentVolumes, etc.)&lt;/em&gt;.&lt;/p&gt;
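&lt;p&gt;You can list which resource types are namespaced and which are cluster-wide with &lt;code&gt;kubectl&lt;/code&gt;:&lt;/p&gt;

```shell
# resources that live in a namespace (Deployments, Services, ...)
kubectl api-resources --namespaced=true
# cluster-wide resources (Nodes, StorageClasses, PersistentVolumes, ...)
kubectl api-resources --namespaced=false
```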
&lt;!-- body --&gt;
&lt;h2 id="when-to-use-multiple-namespaces"&gt;When to Use Multiple Namespaces&lt;/h2&gt;
&lt;p&gt;Namespaces are intended for use in environments with many users spread across multiple
teams, or projects. For clusters with a few to tens of users, you should not
need to create or think about namespaces at all. Start using namespaces when you
need the features they provide.&lt;/p&gt;</description></item><item><title>Adform Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/adform/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/adform/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://site.adform.com/"&gt;Adform's&lt;/a&gt; mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt;-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT System Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."&lt;/p&gt;</description></item><item><title>Ygrene Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ygrene/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ygrene/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldn't require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely.&lt;/p&gt;</description></item><item><title>SlingTV Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/slingtv/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/slingtv/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Launched by DISH Network in 2015, Sling TV experienced great customer growth from the beginning. After just a year, "we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future," says Brad Linder, Sling TV's Cloud Native &amp; Big Data Evangelist. The company has particular challenges: "We take live TV and distribute it over the internet out to a user's device that we do not control," says Linder. "In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer's service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and good customer experience at web scale."&lt;/p&gt;</description></item><item><title>About cgroup v2</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/cgroups/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/cgroups/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;On Linux, &lt;a class='glossary-tooltip' title='A group of Linux processes with optional resource isolation, accounting and limits.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-cgroup' target='_blank' aria-label='control groups'&gt;control groups&lt;/a&gt;
constrain resources that are allocated to processes.&lt;/p&gt;
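&lt;p&gt;To see which cgroup version a Linux node uses, you can inspect the filesystem type mounted at &lt;code&gt;/sys/fs/cgroup/&lt;/code&gt;:&lt;/p&gt;

```shell
stat -fc %T /sys/fs/cgroup/
# cgroup2fs indicates cgroup v2; tmpfs indicates cgroup v1
```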
&lt;p&gt;The &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; and the
underlying container runtime need to interface with cgroups to enforce
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/"&gt;resource management for pods and containers&lt;/a&gt; which
includes cpu/memory requests and limits for containerized workloads.&lt;/p&gt;</description></item><item><title>Autoscaling Workloads</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, you can &lt;em&gt;scale&lt;/em&gt; a workload depending on the current demand of resources.
This allows your cluster to react to changes in resource demand more elastically and efficiently.&lt;/p&gt;
&lt;p&gt;When you scale a workload, you can either increase or decrease the number of replicas managed by
the workload, or adjust the resources available to the replicas in-place.&lt;/p&gt;
&lt;p&gt;The first approach is referred to as &lt;em&gt;horizontal scaling&lt;/em&gt;, while the second is referred to as
&lt;em&gt;vertical scaling&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Blog article mirroring</title><link>https://andygol-k8s.netlify.app/docs/contribute/blog/article-mirroring/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/blog/article-mirroring/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;There are two official Kubernetes blogs, and the CNCF has its own blog where you can cover Kubernetes too.
For the main Kubernetes blog, we (the Kubernetes project) like to publish articles that offer different perspectives and special focuses, provided they have a clear link to Kubernetes.&lt;/p&gt;
&lt;p&gt;Some articles appear on both blogs: there is a primary version of the article, and
a &lt;em&gt;mirror article&lt;/em&gt; on the other blog.&lt;/p&gt;
&lt;p&gt;This page describes the criteria for mirroring, the motivation for mirroring, and
explains what you should do to ensure that an article publishes to both blogs.&lt;/p&gt;</description></item><item><title>Check whether dockershim removal affects you</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The &lt;code&gt;dockershim&lt;/code&gt; component of Kubernetes allows the use of Docker as Kubernetes's
&lt;a class='glossary-tooltip' title='The container runtime is the software that is responsible for running containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt;.
Kubernetes' built-in &lt;code&gt;dockershim&lt;/code&gt; component was removed in release v1.24.&lt;/p&gt;
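&lt;p&gt;One quick way to see which container runtime each node currently reports is to list nodes with wide output (a sketch; it requires &lt;code&gt;kubectl&lt;/code&gt; access to a live cluster and falls back to a placeholder otherwise):&lt;/p&gt;

```shell
# List nodes with the CONTAINER-RUNTIME column, which shows the runtime each
# node uses, e.g. docker://20.10.x (dockershim) or containerd://1.6.x.
# Falls back to a placeholder when no cluster is reachable.
runtime_report="$(kubectl get nodes -o wide 2>/dev/null || echo 'cluster unavailable')"
echo "$runtime_report"
```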
&lt;p&gt;This page explains how your cluster could be using Docker as a container runtime,
provides details on the role that &lt;code&gt;dockershim&lt;/code&gt; plays when in use, and shows steps
you can take to check whether any workloads could be affected by &lt;code&gt;dockershim&lt;/code&gt; removal.&lt;/p&gt;</description></item><item><title>Cluster Networking</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/networking/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/networking/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Networking is a central part of Kubernetes, but it can be challenging to
understand exactly how it is expected to work. There are four distinct networking
problems to address:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Highly-coupled container-to-container communications: this is solved by
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and &lt;code&gt;localhost&lt;/code&gt; communications.&lt;/li&gt;
&lt;li&gt;Pod-to-Pod communications: this is the primary focus of this document.&lt;/li&gt;
&lt;li&gt;Pod-to-Service communications: this is covered by &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;Services&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;External-to-Service communications: this is also covered by Services.&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- body --&gt;
&lt;p&gt;Kubernetes is all about sharing machines among applications. Typically,
sharing machines requires ensuring that two applications do not try to use the
same ports. Coordinating ports across multiple developers is very difficult to
do at scale and exposes users to cluster-level issues outside of their control.&lt;/p&gt;</description></item><item><title>Configure Memory and CPU Quotas for a Namespace</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to set quotas for the total amount of memory and CPU that
can be used by all Pods running in a &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt;.
You specify quotas in a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/"&gt;ResourceQuota&lt;/a&gt;
object.&lt;/p&gt;
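&lt;p&gt;For orientation, a ResourceQuota that caps the aggregate requests and limits of a namespace might look like this (a sketch; the name and values are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo        # illustrative name
spec:
  hard:
    requests.cpu: "1"       # cap on the sum of all Pods' CPU requests in the namespace
    requests.memory: 1Gi    # cap on the sum of all memory requests
    limits.cpu: "2"         # cap on the sum of all CPU limits
    limits.memory: 2Gi      # cap on the sum of all memory limits
```

&lt;p&gt;You would apply a manifest like this with &lt;code&gt;kubectl apply -f&lt;/code&gt; in the target namespace.&lt;/p&gt;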
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configuring a cgroup driver</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to configure the kubelet's cgroup driver to match the container
runtime cgroup driver for kubeadm clusters.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You should be familiar with the Kubernetes
&lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;container runtime requirements&lt;/a&gt;.&lt;/p&gt;
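&lt;p&gt;For context, the kubelet's cgroup driver is selected by a single field in its KubeletConfiguration; a minimal fragment might look like this (a sketch, assuming the container runtime is also configured for systemd cgroups):&lt;/p&gt;

```yaml
# KubeletConfiguration fragment; kubeadm passes this to the kubelet on each node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # must match the container runtime's cgroup driver
```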
&lt;!-- steps --&gt;
&lt;h2 id="configuring-the-container-runtime-cgroup-driver"&gt;Configuring the container runtime cgroup driver&lt;/h2&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;Container runtimes&lt;/a&gt; page
explains that the &lt;code&gt;systemd&lt;/code&gt; driver is recommended for kubeadm based setups instead
of the kubelet's &lt;a href="https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1beta1/"&gt;default&lt;/a&gt; &lt;code&gt;cgroupfs&lt;/code&gt; driver,
because kubeadm manages the kubelet as a
&lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/kubelet-integration/"&gt;systemd service&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Controlling Access to the Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/controlling-access/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/controlling-access/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of controlling access to the Kubernetes API.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;Users access the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/"&gt;Kubernetes API&lt;/a&gt; using &lt;code&gt;kubectl&lt;/code&gt;,
client libraries, or by making REST requests. Both human users and
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-service-account/"&gt;Kubernetes service accounts&lt;/a&gt; can be
authorized for API access.
When a request reaches the API, it goes through several stages, illustrated in the
following diagram:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/docs/admin/access-control-overview.svg" alt="Diagram of request handling steps for Kubernetes API request"&gt;&lt;/p&gt;
&lt;h2 id="transport-security"&gt;Transport security&lt;/h2&gt;
&lt;p&gt;By default, the Kubernetes API server listens on port 6443 on the first non-localhost
network interface, protected by TLS. In a typical production Kubernetes cluster, the
API serves on port 443. The port can be changed with the &lt;code&gt;--secure-port&lt;/code&gt; flag, and the
listening IP address with the &lt;code&gt;--bind-address&lt;/code&gt; flag.&lt;/p&gt;</description></item><item><title>Create a Windows HostProcess Pod</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/create-hostprocess-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/create-hostprocess-pod/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;Windows HostProcess containers enable you to run containerized
workloads on a Windows host. These containers operate as
normal processes but have access to the host network namespace,
storage, and devices when given the appropriate user privileges.
HostProcess containers can be used to deploy network plugins,
storage configurations, device plugins, kube-proxy, and other
components to Windows nodes without the need for dedicated proxies or
the direct installation of host services.&lt;/p&gt;</description></item><item><title>CRI Pod &amp; Container Metrics</title><link>https://andygol-k8s.netlify.app/docs/reference/instrumentation/cri-pod-container-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/instrumentation/cri-pod-container-metrics/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [alpha]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt; collects pod and
container metrics via &lt;a href="https://github.com/google/cadvisor"&gt;cAdvisor&lt;/a&gt;. As an alpha feature,
Kubernetes lets you configure the collection of pod and container
metrics via the &lt;a class='glossary-tooltip' title='Protocol for communication between the kubelet and the local container runtime.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/cri' target='_blank' aria-label='Container Runtime Interface'&gt;Container Runtime Interface&lt;/a&gt; (CRI). You
must enable the &lt;code&gt;PodAndContainerStatsFromCRI&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/"&gt;feature gate&lt;/a&gt; and
use a compatible CRI implementation (containerd &amp;gt;= 1.6.0, CRI-O &amp;gt;= 1.23.0) to
use the CRI based collection mechanism.&lt;/p&gt;</description></item><item><title>Distribute Credentials Securely Using Secrets</title><link>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/distribute-credentials-secure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/inject-data-application/distribute-credentials-secure/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to securely inject sensitive data, such as passwords and
encryption keys, into Pods.&lt;/p&gt;
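&lt;p&gt;As a preview of the approach, a Secret holding base64-encoded credentials might look like this (a sketch; the name and values are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret       # illustrative name
data:
  # Values under data are base64-encoded, not encrypted; these illustrative
  # values encode "my-app" and "39528$vdg7Jb".
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
```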
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Dynamic Volume Provisioning</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/dynamic-provisioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/dynamic-provisioning/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Dynamic volume provisioning allows storage volumes to be created on-demand.
Without dynamic provisioning, cluster administrators have to manually make
calls to their cloud or storage provider to create new storage volumes, and
then create &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolume&lt;/code&gt; objects&lt;/a&gt;
to represent them in Kubernetes. The dynamic provisioning feature eliminates
the need for cluster administrators to pre-provision storage. Instead, it
automatically provisions storage when users create
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolumeClaim&lt;/code&gt; objects&lt;/a&gt;.&lt;/p&gt;
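&lt;p&gt;With dynamic provisioning in place, a user only creates a claim that names a StorageClass; something like the following sketch (the claim name and the &lt;code&gt;fast&lt;/code&gt; class are illustrative, and assume an administrator has defined that class):&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast  # assumes a "fast" StorageClass exists in the cluster
  resources:
    requests:
      storage: 30Gi
```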
&lt;!-- body --&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;The implementation of dynamic volume provisioning is based on the API object &lt;code&gt;StorageClass&lt;/code&gt;
from the API group &lt;code&gt;storage.k8s.io&lt;/code&gt;. A cluster administrator can define as many
&lt;code&gt;StorageClass&lt;/code&gt; objects as needed, each specifying a &lt;em&gt;volume plugin&lt;/em&gt; (aka
&lt;em&gt;provisioner&lt;/em&gt;) that provisions a volume and the set of parameters to pass to
that provisioner when provisioning.
A cluster administrator can define and expose multiple flavors of storage (from
the same or different storage systems) within a cluster, each with a custom set
of parameters. This design also ensures that end users don't have to worry
about the complexity and nuances of how storage is provisioned, but still
have the ability to select from multiple storage options.&lt;/p&gt;</description></item><item><title>Generating Reference Documentation for the Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-api/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to update the Kubernetes API reference documentation.&lt;/p&gt;
&lt;p&gt;The Kubernetes API reference documentation is built from the
&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json"&gt;Kubernetes OpenAPI spec&lt;/a&gt;
using the &lt;a href="https://github.com/kubernetes-sigs/reference-docs"&gt;kubernetes-sigs/reference-docs&lt;/a&gt; generation code.&lt;/p&gt;
&lt;p&gt;If you find bugs in the generated documentation, you need to
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/contribute-upstream/"&gt;fix them upstream&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you need only to regenerate the reference documentation from the
&lt;a href="https://github.com/OAI/OpenAPI-Specification"&gt;OpenAPI&lt;/a&gt;
spec, continue reading this page.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;

	&lt;h3 id="requirements"&gt;Requirements:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need a machine that is running Linux or macOS.&lt;/p&gt;</description></item><item><title>ING Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ing/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;After undergoing an agile transformation, &lt;a href="https://www.ing.com/"&gt;ING&lt;/a&gt; realized it needed a standardized platform to support the work their developers were doing. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, Docker Swarm, &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, &lt;a href="https://mesosphere.com/"&gt;Mesos&lt;/a&gt;. Well, it's not really useful for a company to have one hundred wheels, instead of one good wheel."&lt;/p&gt;
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt;.
The Ingress API has been frozen.&lt;/p&gt;
&lt;p&gt;This means that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Ingress API is generally available, and is subject to the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api"&gt;stability guarantees&lt;/a&gt; for generally available APIs.
The Kubernetes project has no plans to remove Ingress from Kubernetes.&lt;/li&gt;
&lt;li&gt;The Ingress API is no longer being developed, and will have no further changes
or updates made to it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;

&lt;!-- body --&gt;
&lt;!-- overview --&gt;
&lt;h2 id="ingress-controllers"&gt;Ingress controllers&lt;/h2&gt;
&lt;p&gt;Kubernetes as a project supports and maintains &lt;a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme"&gt;AWS&lt;/a&gt; and &lt;a href="https://git.k8s.io/ingress-gce/README.md#readme"&gt;GCE&lt;/a&gt; ingress controllers.&lt;/p&gt;</description></item><item><title>Jobs</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
of successful completions is reached, the task (that is, the Job) is complete. Deleting a Job will clean up
the Pods it created. Suspending a Job will delete its active Pods until the Job
is resumed again.&lt;/p&gt;</description></item><item><title>kubeadm config</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-config/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;During &lt;code&gt;kubeadm init&lt;/code&gt;, kubeadm uploads the &lt;code&gt;ClusterConfiguration&lt;/code&gt; object to your cluster
in a ConfigMap called &lt;code&gt;kubeadm-config&lt;/code&gt; in the &lt;code&gt;kube-system&lt;/code&gt; namespace. This configuration is then read during
&lt;code&gt;kubeadm join&lt;/code&gt;, &lt;code&gt;kubeadm reset&lt;/code&gt; and &lt;code&gt;kubeadm upgrade&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can use &lt;code&gt;kubeadm config print&lt;/code&gt; to print the default static configuration that kubeadm
uses for &lt;code&gt;kubeadm init&lt;/code&gt; and &lt;code&gt;kubeadm join&lt;/code&gt;.&lt;/p&gt;
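&lt;p&gt;A sketch of printing those defaults to use as a starting point (it requires &lt;code&gt;kubeadm&lt;/code&gt; on the machine and falls back to a placeholder otherwise):&lt;/p&gt;

```shell
# Print the default ClusterConfiguration/InitConfiguration that kubeadm would
# use for "kubeadm init"; edit this output rather than writing it from scratch.
# (Join-side defaults are available via "kubeadm config print join-defaults".)
defaults="$(kubeadm config print init-defaults 2>/dev/null || echo 'kubeadm not available on this machine')"
echo "$defaults"
```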

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;The output of the command is meant to serve as an example. You must manually edit the output
of this command to adapt to your setup. Remove the fields that you are not certain about and kubeadm
will try to default them at runtime by examining the host.&lt;/div&gt;

&lt;p&gt;For more information on &lt;code&gt;init&lt;/code&gt; and &lt;code&gt;join&lt;/code&gt;, navigate to
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file"&gt;Using kubeadm init with a configuration file&lt;/a&gt;
or &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join/#config-file"&gt;Using kubeadm join with a configuration file&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>kubectl for Docker Users</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/docker-cli-to-kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/docker-cli-to-kubectl/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;You can use the Kubernetes command line tool &lt;code&gt;kubectl&lt;/code&gt; to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the Docker commands and the kubectl commands. The following sections show a Docker sub-command and describe the equivalent &lt;code&gt;kubectl&lt;/code&gt; command.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="docker-run"&gt;docker run&lt;/h2&gt;
&lt;p&gt;To run an nginx Deployment and expose the Deployment, see &lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands#-em-deployment-em-"&gt;kubectl create deployment&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubelet Configuration Directory Merging</title><link>https://andygol-k8s.netlify.app/docs/reference/node/kubelet-config-directory-merging/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/kubelet-config-directory-merging/</guid><description>&lt;p&gt;When using the kubelet's &lt;code&gt;--config-dir&lt;/code&gt; flag to specify a drop-in directory for
configuration, there is some specific behavior on how different types are
merged.&lt;/p&gt;
&lt;p&gt;Here are some examples of how different data types behave during configuration merging:&lt;/p&gt;
&lt;h3 id="structure-fields"&gt;Structure Fields&lt;/h3&gt;
&lt;p&gt;There are two types of structure fields in a YAML structure: singular fields (scalar
types) and embedded fields (structures that themselves contain scalar types).
The configuration merging process handles the overriding of singular and embedded struct fields to create a resulting kubelet configuration.&lt;/p&gt;</description></item><item><title>Kubelet Device Manager API Versions</title><link>https://andygol-k8s.netlify.app/docs/reference/node/device-plugin-api-versions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/device-plugin-api-versions/</guid><description>&lt;p&gt;This page provides details of version compatibility between the Kubernetes
&lt;a href="https://github.com/kubernetes/kubelet/tree/master/pkg/apis/deviceplugin"&gt;device plugin API&lt;/a&gt;,
and different versions of Kubernetes itself.&lt;/p&gt;
&lt;h2 id="compatibility-matrix"&gt;Compatibility matrix&lt;/h2&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;th&gt;&lt;code&gt;v1alpha1&lt;/code&gt;&lt;/th&gt;
 &lt;th&gt;&lt;code&gt;v1beta1&lt;/code&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.21&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.22&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.23&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.24&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.25&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Kubernetes 1.26&lt;/td&gt;
 &lt;td&gt;-&lt;/td&gt;
 &lt;td&gt;✓&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Key:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;✓&lt;/code&gt; Exactly the same features / API objects in both device plugin API and
the Kubernetes version.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;+&lt;/code&gt; The device plugin API has features or API objects that may not be present in the
Kubernetes cluster, either because the device plugin API has added additional new API
calls, or because the server has removed an old API call. However, everything they have in
common (most other APIs) will work. Note that alpha APIs may vanish or
change significantly between one minor release and the next.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-&lt;/code&gt; The Kubernetes cluster has features the device plugin API can't use,
either because the server has added additional API calls, or because the device plugin API has
removed an old API call. However, everything they share in common (most APIs) will work.&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Kubernetes API health endpoints</title><link>https://andygol-k8s.netlify.app/docs/reference/using-api/health-checks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/using-api/health-checks/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt; provides API endpoints to indicate the current status of the API server.
This page describes these API endpoints and explains how you can use them.&lt;/p&gt;
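&lt;p&gt;A sketch of querying one of these endpoints through the API server's proxy (it requires &lt;code&gt;kubectl&lt;/code&gt; access to a live cluster and falls back to a placeholder otherwise):&lt;/p&gt;

```shell
# Query the aggregated readiness of the API server; with ?verbose, each
# internal check is listed with its individual status.
readyz="$(kubectl get --raw='/readyz?verbose' 2>/dev/null || echo 'cluster unavailable')"
echo "$readyz"
```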
&lt;!-- body --&gt;
&lt;h2 id="api-endpoints-for-health"&gt;API endpoints for health&lt;/h2&gt;
&lt;p&gt;The Kubernetes API server provides three API endpoints (&lt;code&gt;healthz&lt;/code&gt;, &lt;code&gt;livez&lt;/code&gt; and &lt;code&gt;readyz&lt;/code&gt;) to indicate the current status of the API server.
The &lt;code&gt;healthz&lt;/code&gt; endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific &lt;code&gt;livez&lt;/code&gt; and &lt;code&gt;readyz&lt;/code&gt; endpoints instead.
The &lt;code&gt;livez&lt;/code&gt; endpoint can be used with the &lt;code&gt;--livez-grace-period&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-apiserver/"&gt;flag&lt;/a&gt; to specify the startup duration.
For a graceful shutdown you can specify the &lt;code&gt;--shutdown-delay-duration&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-apiserver/"&gt;flag&lt;/a&gt; with the &lt;code&gt;/readyz&lt;/code&gt; endpoint.
Machines that check the &lt;code&gt;healthz&lt;/code&gt;/&lt;code&gt;livez&lt;/code&gt;/&lt;code&gt;readyz&lt;/code&gt; endpoints of the API server should rely on the HTTP status code.
A status code &lt;code&gt;200&lt;/code&gt; indicates the API server is &lt;code&gt;healthy&lt;/code&gt;/&lt;code&gt;live&lt;/code&gt;/&lt;code&gt;ready&lt;/code&gt;, depending on the called endpoint.&lt;/p&gt;</description></item><item><title>Kubernetes Self-Healing</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/self-healing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/self-healing/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes is designed with self-healing capabilities that help maintain the health and availability of workloads.
It automatically replaces failed containers, reschedules workloads when nodes become unavailable, and ensures that the desired state of the system is maintained.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="self-healing-capabilities"&gt;Self-Healing capabilities&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Container-level restarts:&lt;/strong&gt; If a container inside a Pod fails, Kubernetes restarts it based on the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy"&gt;&lt;code&gt;restartPolicy&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Replica replacement:&lt;/strong&gt; If a Pod in a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/"&gt;Deployment&lt;/a&gt; or &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt; fails, Kubernetes creates a replacement Pod to maintain the specified number of replicas.
If a Pod that is part of a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset/"&gt;DaemonSet&lt;/a&gt; fails, the control plane
creates a replacement Pod to run on the same node.&lt;/p&gt;</description></item><item><title>Localizing Kubernetes documentation</title><link>https://andygol-k8s.netlify.app/docs/contribute/localization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/localization/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows you how to
&lt;a href="https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/"&gt;localize&lt;/a&gt;
the docs for a different language.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="contribute-to-an-existing-localization"&gt;Contribute to an existing localization&lt;/h2&gt;
&lt;p&gt;You can help add or improve the content of an existing localization. In
&lt;a href="https://slack.k8s.io/"&gt;Kubernetes Slack&lt;/a&gt;, you can find a channel for each
localization. There is also a general
&lt;a href="https://kubernetes.slack.com/messages/sig-docs-localizations"&gt;SIG Docs Localizations Slack channel&lt;/a&gt;
where you can say hello.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;For extra details on how to contribute to a specific localization,
look for a localized version of this page.&lt;/div&gt;

&lt;h3 id="find-your-two-letter-language-code"&gt;Find your two-letter language code&lt;/h3&gt;
&lt;p&gt;First, consult the
&lt;a href="https://www.loc.gov/standards/iso639-2/php/code_list.php"&gt;ISO 639-1 standard&lt;/a&gt;
to find your localization's two-letter language code. For example, the two-letter code for
Korean is &lt;code&gt;ko&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Managing Service Accounts</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/service-accounts-admin/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/service-accounts-admin/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A &lt;em&gt;ServiceAccount&lt;/em&gt; provides an identity for processes that run in a Pod.&lt;/p&gt;
&lt;p&gt;A process inside a Pod can use the identity of its associated service account to
authenticate to the cluster's API server.&lt;/p&gt;
&lt;p&gt;For an introduction to service accounts, read &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-service-account/"&gt;configure service accounts&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This task guide explains some of the concepts behind ServiceAccounts. The
guide also explains how to obtain or revoke tokens that represent
ServiceAccounts, and how to (optionally) bind a ServiceAccount's validity to
the lifetime of an API object.&lt;/p&gt;</description></item><item><title>Node metrics data</title><link>https://andygol-k8s.netlify.app/docs/reference/instrumentation/node-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/instrumentation/node-metrics/</guid><description>&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;
gathers metric statistics at the node, volume, pod and container level,
and emits this information in the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-stats.v1alpha1/"&gt;Summary API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can send a proxied request to the stats summary API via the
Kubernetes API server.&lt;/p&gt;
&lt;p&gt;Here is an example of a Summary API request for a node named &lt;code&gt;minikube&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl get --raw &lt;span style="color:#b44"&gt;&amp;#34;/api/v1/nodes/minikube/proxy/stats/summary&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Here is the same API call using &lt;code&gt;curl&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# You need to run &amp;#34;kubectl proxy&amp;#34; first&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#080;font-style:italic"&gt;# Change 8080 to the port that &amp;#34;kubectl proxy&amp;#34; assigns&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;Beginning with &lt;code&gt;metrics-server&lt;/code&gt; 0.6.x, &lt;code&gt;metrics-server&lt;/code&gt; queries the &lt;code&gt;/metrics/resource&lt;/code&gt;
kubelet endpoint, and not &lt;code&gt;/stats/summary&lt;/code&gt;.&lt;/div&gt;

&lt;h2 id="summary-api-source"&gt;Summary metrics API source&lt;/h2&gt;
&lt;p&gt;By default, Kubernetes fetches node summary metrics data using an embedded
&lt;a href="https://github.com/google/cadvisor"&gt;cAdvisor&lt;/a&gt; that runs within the kubelet. If you
enable the &lt;code&gt;PodAndContainerStatsFromCRI&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/"&gt;feature gate&lt;/a&gt;
in your cluster, and you use a container runtime that supports statistics access via
&lt;a class='glossary-tooltip' title='Protocol for communication between the kubelet and the local container runtime.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/cri' target='_blank' aria-label='Container Runtime Interface'&gt;Container Runtime Interface&lt;/a&gt; (CRI), then
the kubelet &lt;a href="https://andygol-k8s.netlify.app/docs/reference/instrumentation/cri-pod-container-metrics/"&gt;fetches Pod- and container-level metric data using CRI&lt;/a&gt;, and not via cAdvisor.&lt;/p&gt;</description></item><item><title>Node Resource Managers</title><link>https://andygol-k8s.netlify.app/docs/concepts/policy/node-resource-managers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/policy/node-resource-managers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;To support latency-critical and high-throughput workloads, Kubernetes offers a suite of
Resource Managers. These managers coordinate and optimize the alignment of a node's resources for Pods
configured with specific requirements for CPUs, devices, and memory (hugepages).&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="hardware-topology-alignment-policies"&gt;Hardware topology alignment policies&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Topology Manager&lt;/em&gt; is a kubelet component that aims to coordinate the set of components that are
responsible for these optimizations. The overall resource management process is governed using
the policy you specify. To learn more, read
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/topology-manager/"&gt;Control Topology Management Policies on a Node&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Options for Highly Available Topology</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/ha-topology/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/ha-topology/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains the two options for configuring the topology of your highly available (HA) Kubernetes clusters.&lt;/p&gt;
&lt;p&gt;You can set up an HA cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;With stacked control plane nodes, where etcd nodes are colocated with control plane nodes&lt;/li&gt;
&lt;li&gt;With external etcd nodes, where etcd runs on separate nodes from the control plane&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster.&lt;/p&gt;</description></item><item><title>Parallel Processing using Expansions</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/parallel-processing-expansion/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/parallel-processing-expansion/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This task demonstrates running multiple &lt;a class='glossary-tooltip' title='A finite or batch task that runs to completion.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Jobs'&gt;Jobs&lt;/a&gt;
based on a common template. You can use this approach to process batches of work in
parallel.&lt;/p&gt;
&lt;p&gt;For this example there are only three items: &lt;em&gt;apple&lt;/em&gt;, &lt;em&gt;banana&lt;/em&gt;, and &lt;em&gt;cherry&lt;/em&gt;.
The sample Jobs process each item by printing a string then pausing.&lt;/p&gt;
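The expansion pattern can be sketched in shell. The full task substitutes each item into a complete Job template (for example with sed over a placeholder file); this sketch, with illustrative file and Job names, only stamps out one minimal stub per item:

```shell
mkdir -p jobs
for item in apple banana cherry; do
  # In the full tutorial each file is produced by substituting the item
  # into a complete Job template; here we write a minimal stub instead.
  printf 'apiVersion: batch/v1\nkind: Job\nmetadata:\n  name: process-item-%s\n' "$item" > "jobs/job-$item.yaml"
done
ls jobs/
# With a running cluster, you would then create all the Jobs at once:
# kubectl create -f ./jobs
```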
&lt;p&gt;See &lt;a href="#using-jobs-in-real-workloads"&gt;using Jobs in real workloads&lt;/a&gt; to learn about how
this pattern fits more realistic use cases.&lt;/p&gt;</description></item><item><title>PKI certificates and requirements</title><link>https://andygol-k8s.netlify.app/docs/setup/best-practices/certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/best-practices/certificates/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes requires PKI certificates for authentication over TLS.
If you install Kubernetes with &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;kubeadm&lt;/a&gt;, the certificates
that your cluster requires are automatically generated.
You can also generate your own certificates -- for example, to keep your private keys more secure
by not storing them on the API server.
This page explains the certificates that your cluster requires.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="how-certificates-are-used-by-your-cluster"&gt;How certificates are used by your cluster&lt;/h2&gt;
&lt;p&gt;Kubernetes requires PKI for the following operations:&lt;/p&gt;</description></item><item><title>Romana for NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use Romana for NetworkPolicy.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Complete steps 1, 2, and 3 of the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;kubeadm getting started guide&lt;/a&gt;.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="installing-romana-with-kubeadm"&gt;Installing Romana with kubeadm&lt;/h2&gt;
&lt;p&gt;Follow the &lt;a href="https://github.com/romana/romana/tree/master/containerize"&gt;containerized installation guide&lt;/a&gt; for kubeadm.&lt;/p&gt;
&lt;h2 id="applying-network-policies"&gt;Applying network policies&lt;/h2&gt;
&lt;p&gt;To apply network policies, use one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/romana/romana/wiki/Romana-policies"&gt;Romana network policies&lt;/a&gt;.
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/romana/core/blob/master/doc/policy.md"&gt;Example of Romana network policy&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The NetworkPolicy API.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="what-s-next"&gt;What's next&lt;/h2&gt;
&lt;p&gt;Once you have installed Romana, you can follow the
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/declare-network-policy/"&gt;Declare Network Policy&lt;/a&gt;
to try out Kubernetes NetworkPolicy.&lt;/p&gt;</description></item><item><title>Scale a StatefulSet</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/scale-stateful-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/scale-stateful-set/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to
increasing or decreasing the number of replicas.&lt;/p&gt;
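As a sketch (the StatefulSet name "web" is illustrative), you can change the replica count either with kubectl scale or by updating .spec.replicas in place with a patch; the kubectl commands are commented out because they need a running cluster:

```shell
# Scale the StatefulSet "web" up to 5 replicas:
# kubectl scale statefulsets web --replicas=5

# Equivalently, scaling is just an in-place change of .spec.replicas,
# which a JSON merge patch can express:
printf '%s' '{"spec": {"replicas": 3}}' > scale-patch.json
# kubectl patch statefulsets web --type merge --patch-file scale-patch.json
cat scale-patch.json
```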
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;StatefulSets are only available in Kubernetes version 1.5 or later.
To check your version of Kubernetes, run &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Not all stateful applications scale nicely. If you are unsure about whether
to scale your StatefulSets, see &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet concepts&lt;/a&gt;
or &lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/basic-stateful-set/"&gt;StatefulSet tutorial&lt;/a&gt; for further information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You should perform scaling only when you are confident that your stateful application
cluster is completely healthy.&lt;/p&gt;</description></item><item><title>Sidecar Containers</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/sidecar-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/sidecar-containers/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: SidecarContainers"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;Sidecar containers are the secondary containers that run along with the main
application container within the same &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt;.
These containers are used to enhance or to extend the functionality of the primary &lt;em&gt;app
container&lt;/em&gt; by providing additional services, or functionality such as logging, monitoring,
security, or data synchronization, without directly altering the primary application code.&lt;/p&gt;</description></item><item><title>Taints and Tolerations</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/taint-and-toleration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/taint-and-toleration/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity"&gt;&lt;em&gt;Node affinity&lt;/em&gt;&lt;/a&gt;
is a property of &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; that &lt;em&gt;attracts&lt;/em&gt; them to
a set of &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='nodes'&gt;nodes&lt;/a&gt; (either as a preference or a
hard requirement). &lt;em&gt;Taints&lt;/em&gt; are the opposite -- they allow a node to repel a set of pods.&lt;/p&gt;
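As a sketch with illustrative names (node1, key1, value1): a taint is added to a node with kubectl taint, and a Pod opts back in by declaring a matching toleration in its spec. The kubectl lines are commented out because they need a running cluster:

```shell
# Add a taint that repels Pods without a matching toleration:
# kubectl taint nodes node1 key1=value1:NoSchedule
# Remove it again (note the trailing "-"):
# kubectl taint nodes node1 key1=value1:NoSchedule-

# The matching toleration a Pod would carry in its spec:
printf '%s' '
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
' > toleration.yaml
cat toleration.yaml
```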
&lt;p&gt;&lt;em&gt;Tolerations&lt;/em&gt; are applied to pods. Tolerations allow the scheduler to schedule pods with matching
taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption/"&gt;evaluates other parameters&lt;/a&gt;
as part of its function.&lt;/p&gt;</description></item><item><title>Understand Pressure Stall Information (PSI) Metrics</title><link>https://andygol-k8s.netlify.app/docs/reference/instrumentation/understand-psi-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/instrumentation/understand-psi-metrics/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.34 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;As a beta feature, Kubernetes lets you configure the kubelet to collect Linux kernel
&lt;a href="https://docs.kernel.org/accounting/psi.html"&gt;Pressure Stall Information&lt;/a&gt;
(PSI) for CPU, memory, and I/O usage. The information is collected at the node, pod, and container level.
The collection is controlled by the &lt;code&gt;KubeletPSI&lt;/code&gt; &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/"&gt;feature gate&lt;/a&gt;, which is enabled by default.&lt;/p&gt;
&lt;p&gt;PSI metrics are exposed through two different sources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The kubelet's &lt;a href="https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-stats.v1alpha1/"&gt;Summary API&lt;/a&gt;, which provides PSI data at the node, pod, and container level.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;/metrics/cadvisor&lt;/code&gt; endpoint on the kubelet, which exposes PSI metrics in the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-metrics/#psi-metrics"&gt;Prometheus format&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="requirements"&gt;Requirements&lt;/h3&gt;
&lt;p&gt;Pressure Stall Information requires the following on your Linux nodes:&lt;/p&gt;</description></item><item><title>Update API Objects in Place Using kubectl patch</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This task shows how to use &lt;code&gt;kubectl patch&lt;/code&gt; to update an API object in place. The exercises
in this task demonstrate a strategic merge patch and a JSON merge patch.&lt;/p&gt;
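The two patch styles can be sketched with hypothetical payloads (the deployment name "patch-demo" and the container name are illustrative); the key difference is how each treats lists:

```shell
# Strategic merge patch (YAML): the containers list is merged by each
# entry's "name" key, so this adds a second container instead of
# replacing the existing list.
printf '%s' '
spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-2
        image: redis
' > patch-strategic.yaml

# JSON merge patch (RFC 7386): objects are merged key by key, but any
# list in the patch replaces the corresponding list wholesale.
printf '%s' '{"spec": {"replicas": 3}}' > patch-merge.json

# With a running cluster, you would apply them like this:
# kubectl patch deployment patch-demo --patch-file patch-strategic.yaml
# kubectl patch deployment patch-demo --type merge --patch-file patch-merge.json
cat patch-merge.json
```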
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>User Impersonation</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/user-impersonation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/user-impersonation/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;User &lt;em&gt;impersonation&lt;/em&gt; is a method of allowing authenticated users to act as another user,
group, or service account through HTTP headers.&lt;/p&gt;
&lt;!-- body --&gt;
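A client requests impersonation by adding HTTP headers to an otherwise ordinary API request; the header names below are the documented ones, while the user and group values are illustrative:

```
Impersonate-User: jane.doe@example.com
Impersonate-Group: developers
Impersonate-Group: admins
```

Impersonate-User is required for any impersonation request; Impersonate-Group is optional and may be repeated to assert multiple groups.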
&lt;p&gt;A user can act as another user through impersonation headers. These let requests
manually override the user info a request authenticates as. For example, an admin
could use this feature to debug an authorization policy by temporarily
impersonating another user and seeing if a request was denied.&lt;/p&gt;</description></item><item><title>Virtual IPs and Service Proxies</title><link>https://andygol-k8s.netlify.app/docs/reference/networking/virtual-ips/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/networking/virtual-ips/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Every &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt; in a Kubernetes
&lt;a class='glossary-tooltip' title='A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-cluster' target='_blank' aria-label='cluster'&gt;cluster&lt;/a&gt; runs a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-proxy/"&gt;kube-proxy&lt;/a&gt;
(unless you have deployed your own alternative component in place of &lt;code&gt;kube-proxy&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;kube-proxy&lt;/code&gt; component is responsible for implementing a &lt;em&gt;virtual IP&lt;/em&gt;
mechanism for &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Services'&gt;Services&lt;/a&gt;
of &lt;code&gt;type&lt;/code&gt; other than
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#externalname"&gt;&lt;code&gt;ExternalName&lt;/code&gt;&lt;/a&gt;.
Each instance of kube-proxy watches the Kubernetes
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
for the addition and removal of Service and &lt;a class='glossary-tooltip' title='EndpointSlices track the IP addresses of Pods for Services.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/endpoint-slices/' target='_blank' aria-label='EndpointSlice'&gt;EndpointSlice&lt;/a&gt;
&lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt;. For each Service, kube-proxy
calls appropriate APIs (depending on the kube-proxy mode) to configure
the node to capture traffic to the Service's &lt;code&gt;clusterIP&lt;/code&gt; and &lt;code&gt;port&lt;/code&gt;,
and redirect that traffic to one of the Service's endpoints
(usually a Pod, but possibly an arbitrary user-provided IP address). A control
loop ensures that the rules on each node are reliably synchronized with
the Service and EndpointSlice state as indicated by the API server.&lt;/p&gt;</description></item><item><title>Gateway API</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/gateway/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/gateway/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Make network services available by using an extensible, role-oriented, protocol-aware configuration
mechanism. &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; is an &lt;a class='glossary-tooltip' title='Resources that extend the functionality of Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/addons/' target='_blank' aria-label='add-on'&gt;add-on&lt;/a&gt;
containing API &lt;a href="https://gateway-api.sigs.k8s.io/references/spec/"&gt;kinds&lt;/a&gt; that provide dynamic infrastructure
provisioning and advanced traffic routing.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="design-principles"&gt;Design principles&lt;/h2&gt;
&lt;p&gt;The following principles shaped the design and architecture of Gateway API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Role-oriented:&lt;/strong&gt; Gateway API kinds are modeled after organizational roles that are
responsible for managing Kubernetes service networking:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure Provider:&lt;/strong&gt; Manages infrastructure that allows multiple isolated clusters
to serve multiple tenants, e.g. a cloud provider.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cluster Operator:&lt;/strong&gt; Manages clusters and is typically concerned with policies, network
access, application permissions, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Application Developer:&lt;/strong&gt; Manages an application running in a cluster and is typically
concerned with application-level configuration and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt;
composition.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Portable:&lt;/strong&gt; Gateway API specifications are defined as &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;custom resources&lt;/a&gt;
and are supported by many &lt;a href="https://gateway-api.sigs.k8s.io/implementations/"&gt;implementations&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expressive:&lt;/strong&gt; Gateway API kinds support functionality for common traffic routing use cases
such as header-based matching, traffic weighting, and others that were only possible in
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt; by using custom annotations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensible:&lt;/strong&gt; Gateway API allows custom resources to be linked at various layers of the API.
This makes granular customization possible at the appropriate places within the API structure.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="resource-model"&gt;Resource model&lt;/h2&gt;
&lt;p&gt;Gateway API has four stable API kinds:&lt;/p&gt;</description></item><item><title>Observability</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/observability/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/observability/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, observability is the process of collecting and analyzing metrics, logs, and traces (often called the three pillars of observability) to better understand the internal state, performance, and health of the cluster.&lt;/p&gt;
&lt;p&gt;Kubernetes control plane components, as well as many add-ons, generate and emit these signals. By aggregating and correlating them, you can gain a unified picture of the control plane, add-ons, and applications across the cluster.&lt;/p&gt;</description></item><item><title>Access Clusters Using the Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/access-cluster-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/access-cluster-api/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to access clusters using the Kubernetes API.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Adding entries to Pod /etc/hosts with HostAliases</title><link>https://andygol-k8s.netlify.app/docs/tasks/network/customize-hosts-file-for-pods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/network/customize-hosts-file-for-pods/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Adding entries to a Pod's &lt;code&gt;/etc/hosts&lt;/code&gt; file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.&lt;/p&gt;
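A minimal sketch of such a Pod manifest; the Pod name, IP address, and hostnames are illustrative, and the kubectl commands are commented out because they need a running cluster:

```shell
# Write a Pod manifest whose hostAliases entries become lines in /etc/hosts.
printf '%s' '
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox:1.28
    command: ["cat", "/etc/hosts"]
' > hostaliases-pod.yaml
# kubectl apply -f hostaliases-pod.yaml
# kubectl logs hostaliases-pod   # /etc/hosts should now list foo.local and bar.local
grep "hostAliases" hostaliases-pod.yaml
```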
&lt;p&gt;The Kubernetes project recommends modifying DNS configuration using the &lt;code&gt;hostAliases&lt;/code&gt; field
(part of the &lt;code&gt;.spec&lt;/code&gt; for a Pod), and not by using an init container or other means to edit &lt;code&gt;/etc/hosts&lt;/code&gt;
directly.
Changes made in other ways may be overwritten by the kubelet during Pod creation or restart.&lt;/p&gt;</description></item><item><title>Admission Webhook Good Practices</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/admission-webhooks-good-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/admission-webhooks-good-practices/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides good practices and considerations when designing
&lt;em&gt;admission webhooks&lt;/em&gt; in Kubernetes. This information is intended for
cluster operators who run admission webhook servers or third-party applications
that modify or validate your API requests.&lt;/p&gt;
&lt;p&gt;Before reading this page, ensure that you're familiar with the following
concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/"&gt;Admission controllers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks"&gt;Admission webhooks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- body --&gt;
&lt;h2 id="why-good-webhook-design-matters"&gt;Importance of good webhook design&lt;/h2&gt;
&lt;p&gt;Admission control occurs when any create, update, or delete request
is sent to the Kubernetes API. Admission controllers intercept requests that
match specific criteria that you define. These requests are then sent to
mutating admission webhooks or validating admission webhooks. These webhooks are
often written to ensure that specific fields in object specifications exist or
have specific allowed values.&lt;/p&gt;</description></item><item><title>Annotations</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/annotations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/annotations/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;You can use Kubernetes annotations to attach arbitrary non-identifying metadata
to &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt;.
Clients such as tools and libraries can retrieve this metadata.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="attaching-metadata-to-objects"&gt;Attaching metadata to objects&lt;/h2&gt;
&lt;p&gt;You can use either labels or annotations to attach metadata to Kubernetes
objects. Labels can be used to select objects and to find
collections of objects that satisfy certain conditions. In contrast, annotations
are not used to identify and select objects. The metadata
in an annotation can be small or large, structured or unstructured, and can
include characters not permitted by labels. It is possible to use labels as
well as annotations in the metadata of the same object.&lt;/p&gt;</description></item><item><title>Certificates and Certificate Signing Requests</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/certificate-signing-requests/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/certificate-signing-requests/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes certificate and trust bundle APIs enable automation of
&lt;a href="https://www.itu.int/rec/T-REC-X.509"&gt;X.509&lt;/a&gt; credential provisioning by providing
a programmatic interface for clients of the Kubernetes API to request and obtain
X.509 &lt;a class='glossary-tooltip' title='A cryptographically secure file used to validate access to the Kubernetes cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/tasks/tls/managing-tls-in-a-cluster/' target='_blank' aria-label='certificates'&gt;certificates&lt;/a&gt; from a Certificate Authority (CA).&lt;/p&gt;
&lt;p&gt;There is also experimental (alpha) support for distributing &lt;a href="#cluster-trust-bundles"&gt;trust bundles&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="certificate-signing-requests"&gt;Certificate signing requests&lt;/h2&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;A &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/"&gt;CertificateSigningRequest&lt;/a&gt;
(CSR) resource is used to request that a certificate be signed
by a denoted signer, after which the request may be approved or denied before
finally being signed.&lt;/p&gt;</description></item><item><title>Configure a Pod Quota for a Namespace</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to set a quota for the total number of Pods that can run
in a &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='Namespace'&gt;Namespace&lt;/a&gt;. You specify quotas in a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/"&gt;ResourceQuota&lt;/a&gt;
object.&lt;/p&gt;
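&lt;p&gt;As a minimal sketch (the quota and namespace names here are illustrative), a ResourceQuota that caps a namespace at two Pods might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo              # illustrative name
  namespace: quota-pod-example
spec:
  hard:
    pods: "2"                 # at most 2 Pods may run in this namespace
&lt;/code&gt;&lt;/pre&gt;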
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure Quality of Service for Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/quality-service-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/quality-service-pod/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure Pods so that they will be assigned particular
&lt;a class='glossary-tooltip' title='QoS Class (Quality of Service Class) provides a way for Kubernetes to classify pods within the cluster into several classes and make decisions about scheduling and eviction.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='Quality of Service (QoS) classes'&gt;Quality of Service (QoS) classes&lt;/a&gt;.
Kubernetes uses QoS classes to make decisions about evicting Pods when Node resources are exceeded.&lt;/p&gt;
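&lt;p&gt;For example (a sketch; names are illustrative), a Pod whose container sets equal CPU and memory requests and limits is assigned the &lt;code&gt;Guaranteed&lt;/code&gt; class:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo-ctr
    image: nginx
    resources:
      requests:
        memory: "200Mi"
        cpu: "700m"
      limits:
        memory: "200Mi"      # requests equal limits =&amp;gt; Guaranteed QoS
        cpu: "700m"
&lt;/code&gt;&lt;/pre&gt;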
&lt;p&gt;When Kubernetes creates a Pod it assigns one of these QoS classes to the Pod:&lt;/p&gt;</description></item><item><title>Container Runtime Interface (CRI)</title><link>https://andygol-k8s.netlify.app/docs/concepts/containers/cri/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/containers/cri/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The CRI is a plugin interface that enables the kubelet to use a wide variety of
container runtimes, without the need to recompile the cluster components.&lt;/p&gt;
&lt;p&gt;You need a working
&lt;a class='glossary-tooltip' title='The container runtime is the software that is responsible for running containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes' target='_blank' aria-label='container runtime'&gt;container runtime&lt;/a&gt; on
each Node in your cluster, so that the
&lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; can launch
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and their containers.&lt;/p&gt;</description></item><item><title>Creating Highly Available Clusters with kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/high-availability/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/high-availability/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains two different approaches to setting up a highly available Kubernetes
cluster using kubeadm:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;With stacked control plane nodes. This approach requires less infrastructure. The etcd members
and control plane nodes are co-located.&lt;/li&gt;
&lt;li&gt;With an external etcd cluster. This approach requires more infrastructure. The
control plane nodes and etcd members are separated.&lt;/li&gt;
&lt;/ul&gt;
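&lt;p&gt;As an illustrative sketch, the external etcd topology is expressed in the kubeadm configuration via the &lt;code&gt;etcd.external&lt;/code&gt; stanza (the endpoints and file paths below are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
etcd:
  external:
    endpoints:                # placeholder etcd member addresses
    - https://10.0.0.1:2379
    - https://10.0.0.2:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
&lt;/code&gt;&lt;/pre&gt;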
&lt;p&gt;Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/ha-topology/"&gt;Options for Highly Available topology&lt;/a&gt;
outlines the advantages and disadvantages of each.&lt;/p&gt;</description></item><item><title>Delete a StatefulSet</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/delete-stateful-set/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/delete-stateful-set/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This task shows you how to delete a &lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;This task assumes you have an application running on your cluster represented by a StatefulSet.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="deleting-a-statefulset"&gt;Deleting a StatefulSet&lt;/h2&gt;
&lt;p&gt;You can delete a StatefulSet in the same way you delete other resources in Kubernetes:
use the &lt;code&gt;kubectl delete&lt;/code&gt; command, and specify the StatefulSet either by file or by name.&lt;/p&gt;</description></item><item><title>Diagram Guide</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/diagram-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/diagram-guide/</guid><description>&lt;!--Overview--&gt;
&lt;p&gt;This guide shows you how to create, edit and share diagrams using the Mermaid
JavaScript library. Mermaid.js allows you to generate diagrams using a simple
markdown-like syntax inside Markdown files. You can also use Mermaid to
generate &lt;code&gt;.svg&lt;/code&gt; or &lt;code&gt;.png&lt;/code&gt; image files that you can add to your documentation.&lt;/p&gt;
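&lt;p&gt;For example, a small Mermaid definition like the following (node labels are illustrative) renders as a simple two-node diagram:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;graph TB
  A[kubectl] --&amp;gt; B[API server]
&lt;/code&gt;&lt;/pre&gt;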
&lt;p&gt;The target audience for this guide is anybody wishing to learn about Mermaid
and/or how to create and add diagrams to Kubernetes documentation.&lt;/p&gt;</description></item><item><title>Enable Or Disable Feature Gates</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/configure-feature-gates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/configure-feature-gates/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to enable or disable feature gates to control specific Kubernetes
features in your cluster. Enabling feature gates allows you to test and use Alpha or
Beta features before they become generally available.&lt;/p&gt;
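&lt;p&gt;Feature gates are set per component as a command-line flag; for example (the gate names here are illustrative, not real gates):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kube-apiserver --feature-gates=SomeAlphaFeature=true,SomeBetaFeature=false
&lt;/code&gt;&lt;/pre&gt;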

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;For some stable (GA) gates, you can also disable them, usually for one minor release
after GA; however, if you do that, your cluster may no longer be conformant with Kubernetes.&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>EndpointSlices</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/endpoint-slices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/endpoint-slices/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;EndpointSlices track the IP addresses of backend endpoints.
EndpointSlices are normally associated with a
&lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt; and the backend endpoints typically represent
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="endpointslice-resource"&gt;EndpointSlice API&lt;/h2&gt;
&lt;p&gt;In Kubernetes, an EndpointSlice contains references to a set of network
endpoints. The control plane automatically creates EndpointSlices
for any Kubernetes Service that has a &lt;a class='glossary-tooltip' title='Allows users to filter a list of resources based on labels.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels/' target='_blank' aria-label='selector'&gt;selector&lt;/a&gt; specified. These EndpointSlices include
references to all the Pods that match the Service selector. EndpointSlices group
network endpoints together by unique combinations of IP family, protocol,
port number, and Service name.
The name of an EndpointSlice object must be a valid
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names"&gt;DNS subdomain name&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Ephemeral Containers</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/ephemeral-containers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/ephemeral-containers/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.25 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page provides an overview of ephemeral containers: a special type of container
that runs temporarily in an existing &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; to
accomplish user-initiated actions such as troubleshooting. You use ephemeral
containers to inspect services rather than to build applications.&lt;/p&gt;
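&lt;p&gt;A common way to add an ephemeral container is &lt;code&gt;kubectl debug&lt;/code&gt;; for example (the Pod name, container name, and image here are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# attach an interactive debug container to a running Pod,
# sharing the process namespace of the target container
kubectl debug -it my-pod --image=busybox:1.28 --target=my-container
&lt;/code&gt;&lt;/pre&gt;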
&lt;!-- body --&gt;
&lt;h2 id="understanding-ephemeral-containers"&gt;Understanding ephemeral containers&lt;/h2&gt;
&lt;p&gt;&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; are the fundamental building
block of Kubernetes applications. Since Pods are intended to be disposable and
replaceable, you cannot add a container to a Pod once it has been created.
Instead, you usually delete and replace Pods in a controlled fashion using
&lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='deployments'&gt;deployments&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Explore Termination Behavior for Pods And Their Endpoints</title><link>https://andygol-k8s.netlify.app/docs/tutorials/services/pods-and-endpoint-termination-flow/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/services/pods-and-endpoint-termination-flow/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Once you have connected your application to a Service by following steps
like those outlined in &lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/services/connect-applications-service/"&gt;Connecting Applications with Services&lt;/a&gt;,
you have a continuously running, replicated application that is exposed on a network.
This tutorial helps you look at the termination flow for Pods and explore ways to implement
graceful connection draining.&lt;/p&gt;
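&lt;p&gt;Graceful draining typically combines a &lt;code&gt;preStop&lt;/code&gt; hook with a termination grace period; a sketch of the relevant Pod spec fields (names and timings are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;spec:
  terminationGracePeriodSeconds: 60   # total time allowed for shutdown
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          # pause before SIGTERM so in-flight connections can drain
          command: ["sh", "-c", "sleep 10"]
&lt;/code&gt;&lt;/pre&gt;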
&lt;!-- body --&gt;
&lt;h2 id="termination-process-for-pods-and-their-endpoints"&gt;Termination process for Pods and their endpoints&lt;/h2&gt;
&lt;p&gt;There are often cases when you need to terminate a Pod, whether to upgrade it or to scale down.
To improve application availability, it can be important to implement
proper draining of active connections.&lt;/p&gt;</description></item><item><title>Good practices for Dynamic Resource Allocation as a Cluster Admin</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/dra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/dra/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes good practices when configuring a Kubernetes cluster
utilizing Dynamic Resource Allocation (DRA). These instructions are for cluster
administrators.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="separate-permissions-to-dra-related-apis"&gt;Separate permissions to DRA related APIs&lt;/h2&gt;
&lt;p&gt;DRA is orchestrated through a number of different APIs. Use authorization tools
(like RBAC, or another solution) to control access to the right APIs depending
on the persona of your user.&lt;/p&gt;
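&lt;p&gt;As a sketch (the Role name and namespace are illustrative), a namespaced Role granting a workload author access to the claim APIs could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dra-claim-user      # illustrative name
  namespace: team-a         # illustrative namespace
rules:
- apiGroups: ["resource.k8s.io"]
  resources: ["resourceclaims", "resourceclaimtemplates"]
  verbs: ["get", "list", "watch", "create", "delete"]
&lt;/code&gt;&lt;/pre&gt;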
&lt;p&gt;In general, DeviceClasses and ResourceSlices should be restricted to admins and
the DRA drivers. Cluster operators that will be deploying Pods with claims will
need access to ResourceClaim and ResourceClaimTemplate APIs; both of these APIs
are namespace scoped.&lt;/p&gt;</description></item><item><title>Handling retriable and non-retriable pod failures with Pod failure policy</title><link>https://andygol-k8s.netlify.app/docs/tasks/job/pod-failure-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/job/pod-failure-policy/</guid><description>&lt;div class="feature-state-notice feature-stable" title="Feature Gate: JobPodFailurePolicy"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.31 [stable]&lt;/code&gt;(enabled by default)&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;p&gt;This document shows you how to use the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/#pod-failure-policy"&gt;Pod failure policy&lt;/a&gt;,
in combination with the default
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy"&gt;Pod backoff failure policy&lt;/a&gt;,
to improve the control over the handling of container- or Pod-level failure
within a &lt;a class='glossary-tooltip' title='A finite or batch task that runs to completion.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Job'&gt;Job&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The definition of Pod failure policy may help you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;better utilize the computational resources by avoiding unnecessary Pod retries.&lt;/li&gt;
&lt;li&gt;avoid Job failures due to Pod disruptions (such as &lt;a class='glossary-tooltip' title='Preemption logic in Kubernetes helps a pending Pod to find a suitable Node by evicting low priority Pods existing on that Node.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption' target='_blank' aria-label='preemption'&gt;preemption&lt;/a&gt;,
&lt;a class='glossary-tooltip' title='API-initiated eviction is the process by which you use the Eviction API to create an Eviction object that triggers graceful pod termination.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/api-eviction/' target='_blank' aria-label='API-initiated eviction'&gt;API-initiated eviction&lt;/a&gt;
or &lt;a class='glossary-tooltip' title='A core object consisting of three required properties: key, value, and effect. Taints prevent the scheduling of pods on nodes or node groups.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/taint-and-toleration/' target='_blank' aria-label='taint'&gt;taint&lt;/a&gt;-based eviction).&lt;/li&gt;
&lt;/ul&gt;
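&lt;p&gt;In a Job spec, this is expressed through the &lt;code&gt;podFailurePolicy&lt;/code&gt; field; a sketch (the Job name, image, and exit code are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy-example
spec:
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.28
        command: ["sh", "-c", "exit 42"]
  podFailurePolicy:
    rules:
    - action: FailJob        # non-retriable: fail the Job on this exit code
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    - action: Ignore         # retriable: disruptions do not count against backoffLimit
      onPodConditions:
      - type: DisruptionTarget
&lt;/code&gt;&lt;/pre&gt;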
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You should already be familiar with the basic use of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/"&gt;Job&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Horizontal Pod Autoscaling</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, a &lt;em&gt;HorizontalPodAutoscaler&lt;/em&gt; automatically updates a workload resource (such as
a &lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; or
&lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;), with the
aim of automatically scaling capacity to match demand.&lt;/p&gt;
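&lt;p&gt;A minimal HorizontalPodAutoscaler sketch (the target Deployment name and thresholds are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # illustrative workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale to keep average CPU near 50%
&lt;/code&gt;&lt;/pre&gt;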
&lt;p&gt;Horizontal scaling means that the response to increased load is to deploy more
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.
This is different from &lt;em&gt;vertical&lt;/em&gt; scaling, which for Kubernetes would mean
assigning more resources (for example: memory or CPU) to the Pods that are already
running for the workload.&lt;/p&gt;</description></item><item><title>Install Drivers and Allocate Devices with DRA</title><link>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/install-use-dra/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/install-use-dra/</guid><description>&lt;!-- FUTURE MAINTAINERS: 
The original point of this doc was for people (mainly cluster administrators) to
understand the importance of the DRA driver and its interaction with the DRA
APIs. As a result it was a requirement of this tutorial to not use Helm and to
be more direct with all the component installation procedures. While much of
this content is also useful to workload authors, I see the primary audience of
_this_ tutorial as cluster administrators, who I feel also need to understand
how the DRA APIs interact with the driver to maintain them well. If I had to
choose which audience to focus on in this doc, I would choose cluster
administrators. If the prose gets too muddied by considering both of them, I'd
rather make a second tutorial for the workload authors that doesn't go into the
driver at all (since IMHO that is more representative of what we think their
experience should be like) and also potentially goes into much more detailed/ ✨
fun ✨ use cases. 
--&gt;







 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;(enabled by default)&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;p&gt;This tutorial shows you how to install &lt;a class='glossary-tooltip' title='A Kubernetes feature for requesting and sharing resources, like hardware accelerators, among Pods.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/' target='_blank' aria-label='Dynamic Resource Allocation (DRA)'&gt;Dynamic Resource Allocation (DRA)&lt;/a&gt; drivers in your cluster and how to
use them in conjunction with the DRA APIs to allocate &lt;a class='glossary-tooltip' title='Any resource that&amp;#39;s directly or indirectly attached your cluster&amp;#39;s nodes, like GPUs or circuit boards.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-device' target='_blank' aria-label='devices'&gt;devices&lt;/a&gt; to Pods. This page is intended for cluster administrators.&lt;/p&gt;</description></item><item><title>kubeadm reset</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-reset/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-reset/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Performs a best effort revert of changes made by &lt;code&gt;kubeadm init&lt;/code&gt; or &lt;code&gt;kubeadm join&lt;/code&gt;.&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'&lt;/p&gt;</description></item><item><title>kubectl Usage Conventions</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/conventions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/conventions/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Recommended usage conventions for &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="using-kubectl-in-reusable-scripts"&gt;Using &lt;code&gt;kubectl&lt;/code&gt; in Reusable Scripts&lt;/h2&gt;
&lt;p&gt;For a stable output in a script:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Request one of the machine-oriented output forms, such as &lt;code&gt;-o name&lt;/code&gt;, &lt;code&gt;-o json&lt;/code&gt;, &lt;code&gt;-o yaml&lt;/code&gt;, &lt;code&gt;-o go-template&lt;/code&gt;, or &lt;code&gt;-o jsonpath&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Fully-qualify the version. For example, &lt;code&gt;jobs.v1.batch/myjob&lt;/code&gt;. This ensures that kubectl does not use its default version, which can change over time.&lt;/li&gt;
&lt;li&gt;Don't rely on context, preferences, or other implicit states.&lt;/li&gt;
&lt;/ul&gt;
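&lt;p&gt;Putting those conventions together (reusing the illustrative &lt;code&gt;myjob&lt;/code&gt; name from above):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# fully-qualified resource, machine-readable output, no reliance on context defaults
kubectl get jobs.v1.batch/myjob -o jsonpath='{.status.succeeded}'
&lt;/code&gt;&lt;/pre&gt;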
&lt;h2 id="subresources"&gt;Subresources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You can use the &lt;code&gt;--subresource&lt;/code&gt; argument for kubectl subcommands such as &lt;code&gt;get&lt;/code&gt;, &lt;code&gt;patch&lt;/code&gt;,
&lt;code&gt;edit&lt;/code&gt;, &lt;code&gt;apply&lt;/code&gt; and &lt;code&gt;replace&lt;/code&gt; to fetch and update subresources for all resources that
support them. In Kubernetes version 1.35, only the &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;scale&lt;/code&gt;
and &lt;code&gt;resize&lt;/code&gt; subresources are supported.
&lt;ul&gt;
&lt;li&gt;For &lt;code&gt;kubectl edit&lt;/code&gt;, the &lt;code&gt;scale&lt;/code&gt; subresource is not supported. If you use &lt;code&gt;--subresource&lt;/code&gt; with
&lt;code&gt;kubectl edit&lt;/code&gt; and specify &lt;code&gt;scale&lt;/code&gt; as the subresource, the command will error out.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The API contract against a subresource is identical to that of the full resource. While updating the
&lt;code&gt;status&lt;/code&gt; subresource to a new value, keep in mind that the subresource could be potentially
reconciled by a controller to a different value.&lt;/li&gt;
&lt;/ul&gt;
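&lt;p&gt;For example (the Deployment name is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# read only the status subresource of a Deployment
kubectl get deployment nginx-deployment --subresource=status -o yaml
&lt;/code&gt;&lt;/pre&gt;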
&lt;h2 id="best-practices"&gt;Best Practices&lt;/h2&gt;
&lt;h3 id="kubectl-run"&gt;&lt;code&gt;kubectl run&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;For &lt;code&gt;kubectl run&lt;/code&gt; to satisfy infrastructure as code:&lt;/p&gt;</description></item><item><title>Kubernetes z-pages</title><link>https://andygol-k8s.netlify.app/docs/reference/instrumentation/zpages/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/instrumentation/zpages/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.32 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Kubernetes core components can expose a suite of &lt;em&gt;z-endpoints&lt;/em&gt; to make it easier for users
to debug their cluster and its components. These endpoints are strictly for human
inspection, to obtain real-time debugging information about a component binary.
Avoid automated scraping of data returned by these endpoints; in Kubernetes 1.35
these are an &lt;strong&gt;alpha&lt;/strong&gt; feature and the response format may change in future releases.&lt;/p&gt;</description></item><item><title>Logging Architecture</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/logging/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/logging/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Application logs can help you understand what is happening inside your application. The
logs are particularly useful for debugging problems and monitoring cluster activity. Most
modern applications have some kind of logging mechanism. Likewise, container engines
are designed to support logging. The easiest and most adopted logging method for
containerized applications is writing to standard output and standard error streams.&lt;/p&gt;
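&lt;p&gt;Logs written to standard output and standard error can then be retrieved with kubectl (the Pod and container names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# show the last 20 log lines of one container in a Pod
kubectl logs my-pod -c my-container --tail=20
&lt;/code&gt;&lt;/pre&gt;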
&lt;p&gt;However, the native functionality provided by a container engine or runtime is usually
not enough for a complete logging solution.&lt;/p&gt;</description></item><item><title>Migrate Kubernetes Objects Using Storage Version Migration</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/storage-version-migration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-kubernetes-objects/storage-version-migration/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="Feature Gate: StorageVersionMigrator"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [beta]&lt;/code&gt;(disabled by default)&lt;/div&gt;

&lt;p&gt;Kubernetes relies on API data being actively re-written to support some
maintenance activities related to at-rest storage. Two prominent examples are
the versioned schema of stored resources (that is, the preferred storage schema
changing from v1 to v2 for a given resource) and encryption at rest
(that is, rewriting stale data based on a change in how the data should be encrypted).&lt;/p&gt;</description></item><item><title>Migrating telemetry and security agents from dockershim</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/</guid><description>&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;Kubernetes' support for direct integration with Docker Engine is deprecated and
has been removed. Most applications do not depend directly on the runtime that
hosts their containers. However, many telemetry and monitoring agents
still depend on Docker to collect container metadata, logs, and
metrics. This document aggregates information on how to detect these
dependencies, as well as links on how to migrate these agents to use generic tools or
alternative runtimes.&lt;/p&gt;</description></item><item><title>Organizing Cluster Access Using kubeconfig Files</title><link>https://andygol-k8s.netlify.app/docs/concepts/configuration/organize-cluster-access-kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/configuration/organize-cluster-access-kubeconfig/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Use kubeconfig files to organize information about clusters, users, namespaces, and
authentication mechanisms. The &lt;code&gt;kubectl&lt;/code&gt; command-line tool uses kubeconfig files to
find the information it needs to choose a cluster and communicate with the API server
of a cluster.&lt;/p&gt;
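&lt;p&gt;A minimal kubeconfig sketch showing how clusters, users, and contexts fit together (all names, paths, and the server address are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Config
clusters:
- name: development
  cluster:
    server: https://1.2.3.4:6443
    certificate-authority: /path/to/ca.crt
users:
- name: developer
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: dev-frontend
  context:
    cluster: development
    user: developer
    namespace: frontend
current-context: dev-frontend
```

&lt;p&gt;A &lt;em&gt;context&lt;/em&gt; ties a cluster and a user together, so switching contexts switches both at once.&lt;/p&gt;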

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;A file that is used to configure access to clusters is called
a &lt;em&gt;kubeconfig file&lt;/em&gt;. This is a generic way of referring to configuration files.
It does not mean that there is a file named &lt;code&gt;kubeconfig&lt;/code&gt;.&lt;/div&gt;

&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;Warning:&lt;/h4&gt;Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.&lt;/div&gt;

&lt;p&gt;By default, &lt;code&gt;kubectl&lt;/code&gt; looks for a file named &lt;code&gt;config&lt;/code&gt; in the &lt;code&gt;$HOME/.kube&lt;/code&gt; directory.
You can specify other kubeconfig files by setting the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment
variable or by setting the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl/"&gt;&lt;code&gt;--kubeconfig&lt;/code&gt;&lt;/a&gt; flag.&lt;/p&gt;</description></item><item><title>Post-release communications</title><link>https://andygol-k8s.netlify.app/docs/contribute/blog/release-comms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/blog/release-comms/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes &lt;em&gt;Release Comms&lt;/em&gt; team (part of
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-release"&gt;SIG Release&lt;/a&gt;)
looks after release announcements, which go onto the
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/blog/#main-blog"&gt;main project blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After each release, the Release Comms team take over the main blog for a period
and publish a series of additional articles to explain or announce changes related to
that release. These additional articles are termed &lt;em&gt;post-release comms&lt;/em&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="opt-in"&gt;Opting in to post-release comms&lt;/h2&gt;
&lt;p&gt;During a release cycle, as a contributor, you can opt in to post-release comms about an
upcoming change to Kubernetes.&lt;/p&gt;</description></item><item><title>Role Based Access Control Good Practices</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/rbac-good-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/rbac-good-practices/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='Manages authorization decisions, allowing admins to dynamically configure access policies through the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/rbac/' target='_blank' aria-label='RBAC'&gt;RBAC&lt;/a&gt; is a key security control
to ensure that cluster users and workloads have only the access to resources required to
execute their roles. When designing permissions for cluster users, it is important
that the cluster administrator understands the areas where privilege escalation could occur,
to reduce the risk of excessive access leading to security incidents.&lt;/p&gt;</description></item><item><title>Scheduling Framework</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/scheduling-framework/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/scheduling-framework/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;The &lt;em&gt;scheduling framework&lt;/em&gt; is a pluggable architecture for the Kubernetes scheduler.
It consists of a set of &amp;quot;plugin&amp;quot; APIs that are compiled directly into the scheduler.
These APIs allow most scheduling features to be implemented as plugins,
while keeping the scheduling &amp;quot;core&amp;quot; lightweight and maintainable. Refer to the
&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md"&gt;design proposal of the scheduling framework&lt;/a&gt; for more technical information on
the design of the framework.&lt;/p&gt;</description></item><item><title>Troubleshooting Topology Management</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/topology/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/topology/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes keeps many aspects of how pods execute on nodes abstracted
from the user. This is by design. However, some workloads require
stronger guarantees in terms of latency and/or performance in order to operate
acceptably. The &lt;code&gt;kubelet&lt;/code&gt; provides methods to enable more complex workload
placement policies while keeping the abstraction free from explicit placement
directives.&lt;/p&gt;
&lt;p&gt;You can manage topology within nodes. This means helping the kubelet to configure the host operating system so that
Pods and containers are placed on the correct side of inner boundaries, such as &lt;em&gt;NUMA domains&lt;/em&gt;. (NUMA is an abbreviation
of &lt;em&gt;non-uniform memory access&lt;/em&gt;, and refers to the idea that CPUs might be topologically closer to specific regions of
memory, due to the physical layout of the hardware components and the way that these are connected).&lt;/p&gt;</description></item><item><title>Use a Service to Access an Application in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/service-access-application-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/service-access-application-cluster/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to create a Kubernetes Service object that external
clients can use to access an application running in a cluster. The Service
provides load balancing for an application that has two running instances.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Volume Snapshots</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-snapshots/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-snapshots/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, a &lt;em&gt;VolumeSnapshot&lt;/em&gt; represents a snapshot of a volume on a storage
system. This document assumes that you are already familiar with Kubernetes
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;persistent volumes&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Similar to how API resources &lt;code&gt;PersistentVolume&lt;/code&gt; and &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; are
used to provision volumes for users and administrators, &lt;code&gt;VolumeSnapshotContent&lt;/code&gt;
and &lt;code&gt;VolumeSnapshot&lt;/code&gt; API resources are provided to create volume snapshots for
users and administrators.&lt;/p&gt;
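&lt;p&gt;For example, a user can request a snapshot of an existing PersistentVolumeClaim with a manifest like the following (the class and claim names are illustrative):&lt;/p&gt;

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: pvc-demo
```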
&lt;p&gt;A &lt;code&gt;VolumeSnapshotContent&lt;/code&gt; is a snapshot taken from a volume in the cluster that
has been provisioned by an administrator. It is a resource in the cluster just
like a PersistentVolume is a cluster resource.&lt;/p&gt;</description></item><item><title>Weave Net for NetworkPolicy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use Weave Net for NetworkPolicy.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster. Follow the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;kubeadm getting started guide&lt;/a&gt; to bootstrap one.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="install-the-weave-net-addon"&gt;Install the Weave Net addon&lt;/h2&gt;
&lt;p&gt;Follow the &lt;a href="https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#-installation"&gt;Integrating Kubernetes via the Addon&lt;/a&gt; guide.&lt;/p&gt;
&lt;p&gt;The Weave Net addon for Kubernetes comes with a
&lt;a href="https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#network-policy"&gt;Network Policy Controller&lt;/a&gt;
that automatically monitors Kubernetes for any NetworkPolicy annotations on all
namespaces and configures &lt;code&gt;iptables&lt;/code&gt; rules to allow or block traffic as directed by the policies.&lt;/p&gt;</description></item><item><title>Volume Snapshot Classes</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-snapshot-classes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-snapshot-classes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes the concept of VolumeSnapshotClass in Kubernetes. Familiarity
with &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volume-snapshots/"&gt;volume snapshots&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/"&gt;storage classes&lt;/a&gt; is suggested.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Just like StorageClass provides a way for administrators to describe the &amp;quot;classes&amp;quot;
of storage they offer when provisioning a volume, VolumeSnapshotClass provides a
way to describe the &amp;quot;classes&amp;quot; of storage when provisioning a volume snapshot.&lt;/p&gt;
&lt;h2 id="the-volumesnapshotclass-resource"&gt;The VolumeSnapshotClass Resource&lt;/h2&gt;
&lt;p&gt;Each VolumeSnapshotClass contains the fields &lt;code&gt;driver&lt;/code&gt;, &lt;code&gt;deletionPolicy&lt;/code&gt;, and &lt;code&gt;parameters&lt;/code&gt;,
which are used when a VolumeSnapshot belonging to the class needs to be
dynamically provisioned.&lt;/p&gt;</description></item><item><title>Dynamic Resource Allocation</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: DynamicResourceAllocation"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page describes &lt;em&gt;dynamic resource allocation (DRA)&lt;/em&gt; in Kubernetes.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="about-dra"&gt;About DRA&lt;/h2&gt;
&lt;p&gt;DRA is a Kubernetes feature that lets you request and share resources among Pods.
These resources are often attached
&lt;a class='glossary-tooltip' title='Any resource that&amp;#39;s directly or indirectly attached to your cluster&amp;#39;s nodes, like GPUs or circuit boards.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-device' target='_blank' aria-label='devices'&gt;devices&lt;/a&gt; like hardware
accelerators.&lt;/p&gt;
&lt;p&gt;With DRA, device drivers and cluster admins define device &lt;em&gt;classes&lt;/em&gt; that are
available to &lt;em&gt;claim&lt;/em&gt; in workloads. Kubernetes allocates matching devices to
specific claims and places the corresponding Pods on nodes that can access the
allocated devices.&lt;/p&gt;</description></item><item><title>Windows containers in Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/concepts/windows/intro/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/windows/intro/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Windows applications constitute a large portion of the services and applications that
run in many organizations. &lt;a href="https://aka.ms/windowscontainers"&gt;Windows containers&lt;/a&gt;
provide a way to encapsulate processes and package dependencies, making it easier
to use DevOps practices and follow cloud native patterns for Windows applications.&lt;/p&gt;
&lt;p&gt;Organizations with investments in Windows-based applications and Linux-based
applications don't have to look for separate orchestrators to manage their workloads,
leading to increased operational efficiencies across their deployments, regardless
of operating system.&lt;/p&gt;</description></item><item><title>Advertise Extended Resources for a Node</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/extended-resource-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/extended-resource-node/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to specify extended resources for a Node.
Extended resources allow cluster administrators to advertise node-level
resources that would otherwise be unknown to Kubernetes.&lt;/p&gt;
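&lt;p&gt;Advertising an extended resource amounts to a JSON-Patch of the Node's &lt;code&gt;status.capacity&lt;/code&gt;; a sketch, assuming a node named &lt;code&gt;k8s-node-1&lt;/code&gt; and an illustrative resource &lt;code&gt;example.com/dongle&lt;/code&gt;:&lt;/p&gt;

```shell
# Terminal 1: open a local proxy to the API server
kubectl proxy

# Terminal 2: add 4 example.com/dongle resources to the node's capacity.
# The "~1" in the patch path is JSON-Patch escaping for "/".
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/k8s-node-1/status
```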
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Assign Extended Resources to a Container</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/extended-resource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/extended-resource/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.35 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;This page shows how to assign extended resources to a Container.&lt;/p&gt;
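&lt;p&gt;A Container requests an extended resource in its &lt;code&gt;resources&lt;/code&gt; block, just like CPU or memory; a sketch, assuming the cluster already advertises an illustrative &lt;code&gt;example.com/dongle&lt;/code&gt; resource:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 3
      limits:
        example.com/dongle: 3
```

&lt;p&gt;For extended resources, values must be integers, and the request must equal the limit, because extended resources cannot be overcommitted.&lt;/p&gt;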
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Automatic Cleanup for Finished Jobs</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/ttlafterfinished/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/ttlafterfinished/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;When your Job has finished, it's useful to keep that Job in the API (and not immediately delete the Job)
so that you can tell whether the Job succeeded or failed.&lt;/p&gt;
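&lt;p&gt;Retention is bounded with the &lt;code&gt;spec.ttlSecondsAfterFinished&lt;/code&gt; field; a sketch of a Job that is cleaned up 100 seconds after it finishes:&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```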
&lt;p&gt;Kubernetes' TTL-after-finished &lt;a class='glossary-tooltip' title='A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/' target='_blank' aria-label='controller'&gt;controller&lt;/a&gt; provides a
TTL (time to live) mechanism to limit the lifetime of Job objects that
have finished execution.&lt;/p&gt;</description></item><item><title>Compatibility Version For Kubernetes Control Plane Components</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/compatibility-version/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/compatibility-version/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Since release v1.32, Kubernetes control plane components have offered configurable version compatibility and emulation options that make upgrades safer by giving cluster administrators more control and finer-grained upgrade steps.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="emulated-version"&gt;Emulated Version&lt;/h2&gt;
&lt;p&gt;The emulation option is set by the &lt;code&gt;--emulated-version&lt;/code&gt; flag of control plane components. It allows the component to emulate the behavior (APIs, features, ...) of an earlier version of Kubernetes.&lt;/p&gt;</description></item><item><title>Connect a Frontend to a Backend Using Services</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/connecting-frontend-backend/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/connecting-frontend-backend/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This task shows how to create a &lt;em&gt;frontend&lt;/em&gt; and a &lt;em&gt;backend&lt;/em&gt; microservice. The backend
microservice is a hello greeter. The frontend exposes the backend using nginx and a
Kubernetes &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt; object.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Create and run a sample &lt;code&gt;hello&lt;/code&gt; backend microservice using a
&lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; object.&lt;/li&gt;
&lt;li&gt;Use a Service object to send traffic to the backend microservice's multiple replicas.&lt;/li&gt;
&lt;li&gt;Create and run a &lt;code&gt;nginx&lt;/code&gt; frontend microservice, also using a Deployment object.&lt;/li&gt;
&lt;li&gt;Configure the frontend microservice to send traffic to the backend microservice.&lt;/li&gt;
&lt;li&gt;Use a Service object of &lt;code&gt;type=LoadBalancer&lt;/code&gt; to expose the frontend microservice
outside the cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>CSI Volume Cloning</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-pvc-datasource/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-pvc-datasource/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes the concept of cloning existing CSI Volumes in Kubernetes.
Familiarity with &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;Volumes&lt;/a&gt; is suggested.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The &lt;a class='glossary-tooltip' title='The Container Storage Interface (CSI) defines a standard interface to expose storage systems to containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#csi' target='_blank' aria-label='CSI'&gt;CSI&lt;/a&gt; Volume Cloning feature adds
support for specifying existing &lt;a class='glossary-tooltip' title='Claims storage resources defined in a PersistentVolume so that it can be mounted as a volume in a container.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PVC'&gt;PVC&lt;/a&gt;s
in the &lt;code&gt;dataSource&lt;/code&gt; field to indicate a user would like to clone a &lt;a class='glossary-tooltip' title='A directory containing data, accessible to the containers in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/' target='_blank' aria-label='Volume'&gt;Volume&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Disruptions</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This guide is for application owners who want to build
highly available applications, and thus need to understand
what types of disruptions can happen to Pods.&lt;/p&gt;
&lt;p&gt;It is also for cluster administrators who want to perform automated
cluster actions, like upgrading and autoscaling clusters.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="voluntary-and-involuntary-disruptions"&gt;Voluntary and involuntary disruptions&lt;/h2&gt;
&lt;p&gt;Pods do not disappear until someone (a person or a controller) destroys them, or
there is an unavoidable hardware or system software error.&lt;/p&gt;</description></item><item><title>Field Selectors</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/field-selectors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/field-selectors/</guid><description>&lt;p&gt;&lt;em&gt;Field selectors&lt;/em&gt; let you select Kubernetes &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; based on the
value of one or more resource fields. Here are some examples of field selector queries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;metadata.name=my-service&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;metadata.namespace!=default&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;status.phase=Pending&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This &lt;code&gt;kubectl&lt;/code&gt; command selects all Pods for which the value of the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase"&gt;&lt;code&gt;status.phase&lt;/code&gt;&lt;/a&gt; field is &lt;code&gt;Running&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl get pods --field-selector status.phase&lt;span style="color:#666"&gt;=&lt;/span&gt;Running
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;Field selectors are essentially resource &lt;em&gt;filters&lt;/em&gt;. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the &lt;code&gt;kubectl&lt;/code&gt; queries &lt;code&gt;kubectl get pods&lt;/code&gt; and &lt;code&gt;kubectl get pods --field-selector &amp;quot;&amp;quot;&lt;/code&gt; equivalent.&lt;/div&gt;

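&lt;p&gt;Several field selectors can be chained as a comma-separated list. For example, to select Pods that are not &lt;code&gt;Running&lt;/code&gt; and have a &lt;code&gt;restartPolicy&lt;/code&gt; of &lt;code&gt;Always&lt;/code&gt;:&lt;/p&gt;

```shell
kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
```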
&lt;h2 id="supported-fields"&gt;Supported fields&lt;/h2&gt;
&lt;p&gt;Supported field selectors vary by Kubernetes resource type. All resource types support the &lt;code&gt;metadata.name&lt;/code&gt; and &lt;code&gt;metadata.namespace&lt;/code&gt; fields. Using unsupported field selectors produces an error. For example:&lt;/p&gt;</description></item><item><title>Force Delete StatefulSet Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/force-delete-stateful-set-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/force-delete-stateful-set-pod/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to delete Pods which are part of a
&lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='stateful set'&gt;stateful set&lt;/a&gt;,
and explains the considerations to keep in mind when doing so.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;This is a fairly advanced task and has the potential to violate some of the properties
inherent to StatefulSet.&lt;/li&gt;
&lt;li&gt;Before proceeding, make yourself familiar with the considerations enumerated below.&lt;/li&gt;
&lt;/ul&gt;
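&lt;p&gt;For reference, the force deletion this page discusses skips the graceful termination period (the Pod name and namespace here are illustrative); read the considerations below before using it:&lt;/p&gt;

```shell
kubectl delete pods web-0 --grace-period=0 --force --namespace=default
```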
&lt;!-- steps --&gt;
&lt;h2 id="statefulset-considerations"&gt;StatefulSet considerations&lt;/h2&gt;
&lt;p&gt;In normal operation of a StatefulSet, there is &lt;strong&gt;never&lt;/strong&gt; a need to force delete a StatefulSet Pod.
The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet controller&lt;/a&gt; is responsible for
creating, scaling and deleting members of the StatefulSet. It tries to ensure that the specified
number of Pods from ordinal 0 through N-1 are alive and ready. StatefulSet ensures that, at any time,
there is at most one Pod with a given identity running in a cluster. This is referred to as
&lt;em&gt;at most one&lt;/em&gt; semantics provided by a StatefulSet.&lt;/p&gt;</description></item><item><title>Gang Scheduling</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/gang-scheduling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/gang-scheduling/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-alpha" title="Feature Gate: GangScheduling"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;Gang scheduling ensures that a group of Pods are scheduled on an &amp;quot;all-or-nothing&amp;quot; basis.
If the cluster cannot accommodate the entire group (or a defined minimum number of Pods),
none of the Pods are bound to a node.&lt;/p&gt;
&lt;p&gt;This feature depends on the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/workload-api/"&gt;Workload API&lt;/a&gt;.
Ensure the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/#GenericWorkload"&gt;&lt;code&gt;GenericWorkload&lt;/code&gt;&lt;/a&gt;
feature gate and the &lt;code&gt;scheduling.k8s.io/v1alpha1&lt;/code&gt;
&lt;a class='glossary-tooltip' title='A set of related paths in the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning' target='_blank' aria-label='API group'&gt;API group&lt;/a&gt; are enabled in the cluster.&lt;/p&gt;</description></item><item><title>Garbage Collection</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/garbage-collection/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/garbage-collection/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up
cluster resources. This
allows the cleanup of resources like the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection"&gt;Terminated pods&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/ttlafterfinished/"&gt;Completed Jobs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#owners-dependents"&gt;Objects without owner references&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#containers-images"&gt;Unused containers and container images&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#delete"&gt;Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process"&gt;Stale or expired CertificateSigningRequests (CSRs)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='Nodes'&gt;Nodes&lt;/a&gt; deleted in the following scenarios:
&lt;ul&gt;
&lt;li&gt;On a cloud when the cluster uses a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/cloud-controller/"&gt;cloud controller manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;On-premises when the cluster uses an addon similar to a cloud controller
manager&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/#heartbeats"&gt;Node Lease objects&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="owners-dependents"&gt;Owners and dependents&lt;/h2&gt;
&lt;p&gt;Many objects in Kubernetes link to each other through &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/owners-dependents/"&gt;&lt;em&gt;owner references&lt;/em&gt;&lt;/a&gt;.
Owner references tell the control plane which objects are dependent on others.
Kubernetes uses owner references to give the control plane, and other API
clients, the opportunity to clean up related resources before deleting an
object. In most cases, Kubernetes manages owner references automatically.&lt;/p&gt;</description></item><item><title>Good practices for Kubernetes Secrets</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/secrets-good-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/secrets-good-practices/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, a Secret is an object that stores sensitive information, such as passwords, OAuth tokens, and SSH keys.&lt;/p&gt;
&lt;p&gt;Secrets give you more control over how sensitive information is used and reduce
the risk of accidental exposure. Secret values are encoded as base64 strings and
are stored unencrypted by default, but can be configured to be
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted"&gt;encrypted at rest&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; can reference the Secret in
a variety of ways, such as in a volume mount or as an environment variable.
Secrets are designed for confidential data and
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;ConfigMaps&lt;/a&gt; are
designed for non-confidential data.&lt;/p&gt;</description></item><item><title>Helping as a blog writing buddy</title><link>https://andygol-k8s.netlify.app/docs/contribute/blog/writing-buddy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/blog/writing-buddy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;There are two official Kubernetes blogs, and the CNCF has its own blog where you can cover Kubernetes too.
Read &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/blog/"&gt;contributing to Kubernetes blogs&lt;/a&gt; to learn about these two blogs.&lt;/p&gt;
&lt;p&gt;When people contribute to either blog as an author, the Kubernetes project pairs up authors
as &lt;em&gt;writing buddies&lt;/em&gt;. This page explains how to fulfil the buddy role.&lt;/p&gt;
&lt;p&gt;You should make sure that you have at least read an outline of &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/blog/article-submission/"&gt;article submission&lt;/a&gt;
before you read on within this page.&lt;/p&gt;</description></item><item><title>kubeadm token</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-token/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Bootstrap tokens are used for establishing bidirectional trust between a node joining
the cluster and a control-plane node, as described in &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/bootstrap-tokens/"&gt;authenticating with bootstrap tokens&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; creates an initial token with a 24-hour TTL. The following commands allow you to manage
such a token and also to create and manage new ones.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="cmd-token-create"&gt;kubeadm token create&lt;/h2&gt;

	&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Create bootstrap tokens on the server&lt;/p&gt;</description></item><item><title>Kubectl user preferences (kuberc)</title><link>https://andygol-k8s.netlify.app/docs/reference/kubectl/kuberc/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/kubectl/kuberc/</guid><description>&lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes 1.34 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;A Kubernetes &lt;code&gt;kuberc&lt;/code&gt; configuration file allows you to define preferences for
&lt;a class='glossary-tooltip' title='A command line tool for communicating with a Kubernetes cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/kubectl/' target='_blank' aria-label='kubectl'&gt;kubectl&lt;/a&gt;,
such as default options and command aliases. Unlike the kubeconfig file, a &lt;code&gt;kuberc&lt;/code&gt;
configuration file does &lt;strong&gt;not&lt;/strong&gt; contain cluster details, usernames or passwords.&lt;/p&gt;
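For orientation, a kuberc file might look like the following sketch. The field names follow the beta kuberc schema as the author understands it; verify them against the kubectl version you actually run:

```yaml
apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
aliases:
  - name: getn            # illustrative alias: kubectl getn
    command: get
    appendArgs:
      - namespaces
defaults:
  - command: apply
    options:
      - name: server-side
        default: "true"
```

With a file like this, kubectl would expand `kubectl getn` to `kubectl get namespaces` and apply `--server-side=true` to `kubectl apply` unless you override it on the command line.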
&lt;p&gt;On Linux / POSIX computers, the default location of this configuration file is &lt;code&gt;$HOME/.kube/kuberc&lt;/code&gt;.
The default path on Windows is similar: &lt;code&gt;%USERPROFILE%\.kube\kuberc&lt;/code&gt;.
To provide kubectl with a path to a custom kuberc file, use the &lt;code&gt;--kuberc&lt;/code&gt; command line option,
or set the &lt;code&gt;KUBERC&lt;/code&gt; environment variable.&lt;/p&gt;</description></item><item><title>Metrics For Kubernetes System Components</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-metrics/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;System component metrics can give a better look into what is happening inside those components. Metrics are
particularly useful for building dashboards and alerts.&lt;/p&gt;
&lt;p&gt;Kubernetes components emit metrics in &lt;a href="https://prometheus.io/docs/instrumenting/exposition_formats/"&gt;Prometheus format&lt;/a&gt;.
This format is structured plain text, designed so that people and machines can both read it.&lt;/p&gt;
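Because the exposition format is structured plain text, it is easy to inspect programmatically. A minimal sketch of parsing one sample line (not a complete parser, and it ignores escaping edge cases such as commas inside label values; production code should use a Prometheus client library):

```python
import re

# Matches one Prometheus text-format sample, e.g.
#   apiserver_request_total{code="200",verb="GET"} 12345
SAMPLE_RE = re.compile(
    r'^([a-zA-Z_:][a-zA-Z0-9_:]*)'   # metric name
    r'(?:\{([^}]*)\})?'              # optional {label="value",...} block
    r'\s+(\S+)$'                     # sample value
)

def parse_sample(line):
    match = SAMPLE_RE.match(line.strip())
    if match is None:
        return None  # comment lines (# HELP / # TYPE) and blanks
    name, raw_labels, raw_value = match.groups()
    labels = {}
    if raw_labels:
        for pair in raw_labels.split(","):
            key, value = pair.split("=", 1)
            labels[key] = value.strip('"')
    return name, labels, float(raw_value)
```

Feeding it the example line from the comment yields the metric name, a label dictionary, and the numeric sample value.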
&lt;!-- body --&gt;
&lt;h2 id="metrics-in-kubernetes"&gt;Metrics in Kubernetes&lt;/h2&gt;
&lt;p&gt;In most cases, metrics are available on the &lt;code&gt;/metrics&lt;/code&gt; endpoint of the HTTP server. For components that
don't expose the endpoint by default, it can be enabled using the &lt;code&gt;--bind-address&lt;/code&gt; flag.&lt;/p&gt;</description></item><item><title>Network Policies</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/network-policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/network-policies/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='pod'&gt;pod&lt;/a&gt; is allowed to communicate with various network
&amp;quot;entities&amp;quot; (we use the word &amp;quot;entity&amp;quot; here to avoid overloading the more common terms such as
&amp;quot;endpoints&amp;quot; and &amp;quot;services&amp;quot;, which have specific Kubernetes connotations) over the network.
NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to
other connections.&lt;/p&gt;</description></item><item><title>Scheduler Performance Tuning</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/scheduler-perf-tuning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/scheduler-perf-tuning/</guid><description>&lt;!-- overview --&gt;

 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.14 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler"&gt;kube-scheduler&lt;/a&gt;
is the Kubernetes default scheduler. It is responsible for placement of Pods
on Nodes in a cluster.&lt;/p&gt;
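Scheduling proceeds in two phases, filtering and scoring. A toy sketch of that flow (heavily simplified; the real scheduler runs many plugins at each stage, and all names below are illustrative):

```python
def schedule(pod, nodes, filters, scorers):
    """Filter to feasible nodes, then score them and pick the best."""
    # Filtering: a node is feasible only if every filter accepts it.
    feasible = [node for node in nodes if all(f(pod, node) for f in filters)]
    if not feasible:
        return None  # the Pod stays pending: no node meets its requirements
    # Scoring: add up the scores and bind to a highest-scoring node.
    return max(feasible, key=lambda node: sum(s(pod, node) for s in scorers))

# Toy data: nodes expose free CPU; the filter checks fit, the scorer
# prefers the node with the most free CPU.
nodes = [{"name": "a", "free_cpu": 2}, {"name": "b", "free_cpu": 8}]
fits = lambda pod, node: node["free_cpu"] >= pod["cpu"]
most_free = lambda pod, node: node["free_cpu"]
best = schedule({"cpu": 1}, nodes, [fits], [most_free])
```

In the real scheduler the equivalents of `filters` and `scorers` are the Filter and Score plugin extension points, and the final step is the Binding described below.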
&lt;p&gt;Nodes in a cluster that meet the scheduling requirements of a Pod are
called &lt;em&gt;feasible&lt;/em&gt; Nodes for the Pod. The scheduler finds feasible Nodes
for a Pod and then runs a set of functions to score the feasible Nodes,
picking a Node with the highest score among the feasible ones to run
the Pod. The scheduler then notifies the API server about this decision
in a process called &lt;em&gt;Binding&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Set up a High Availability etcd Cluster with kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision
etcd instances on separate hosts. The differences between the two approaches are covered in the
&lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/ha-topology/"&gt;Options for Highly Available topology&lt;/a&gt; page.&lt;/p&gt;
&lt;p&gt;This task walks through the process of creating a high availability external
etcd cluster of three members that can be used by kubeadm during cluster creation.&lt;/p&gt;</description></item><item><title>Set up Konnectivity service</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/setup-konnectivity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/setup-konnectivity/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Konnectivity service provides a TCP level proxy for the control plane to cluster
communication.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this
tutorial on a cluster with at least two nodes that are not acting as control
plane hosts. If you do not already have a cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Vertical Pod Autoscaling</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/vertical-pod-autoscale/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/vertical-pod-autoscale/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, a &lt;em&gt;VerticalPodAutoscaler&lt;/em&gt; automatically updates a workload management &lt;a class='glossary-tooltip' title='A Kubernetes entity, representing an endpoint on the Kubernetes API server.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/using-api/api-concepts/#standard-api-terminology' target='_blank' aria-label='resource'&gt;resource&lt;/a&gt; (such as
a &lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; or
&lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;), with the
aim of automatically adjusting infrastructure &lt;a class='glossary-tooltip' title='A defined amount of infrastructure available for consumption (CPU, memory, etc).' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-infrastructure-resource' target='_blank' aria-label='resource'&gt;resource&lt;/a&gt;
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/#requests-and-limits"&gt;requests and limits&lt;/a&gt; to match actual usage.&lt;/p&gt;</description></item><item><title>Writing a new topic</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/write-new-topic/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/write-new-topic/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to create a new topic for the Kubernetes docs.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Create a fork of the Kubernetes documentation repository as described in
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/new-content/open-a-pr/"&gt;Open a PR&lt;/a&gt;.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;h2 id="choosing-a-page-type"&gt;Choosing a page type&lt;/h2&gt;
&lt;p&gt;As you prepare to write a new topic, think about the page type that would fit your content the best:&lt;/p&gt;
&lt;table&gt;&lt;caption style="display: none;"&gt;Guidelines for choosing a page type&lt;/caption&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th style="text-align: left"&gt;Type&lt;/th&gt;
 &lt;th style="text-align: left"&gt;Description&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td style="text-align: left"&gt;Concept&lt;/td&gt;
 &lt;td style="text-align: left"&gt;A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application while it is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials. For an example of a concept topic, see &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/"&gt;Nodes&lt;/a&gt;.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td style="text-align: left"&gt;Task&lt;/td&gt;
 &lt;td style="text-align: left"&gt;A task page shows how to do a single thing. The idea is to give readers a sequence of steps that they can actually do as they read the page. A task page can be short or long, provided it stays focused on one area. In a task page, it is OK to blend brief explanations with the steps to be performed, but if you need to provide a lengthy explanation, you should do that in a concept topic. Related task and concept topics should link to each other. For an example of a short task page, see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-volume-storage/"&gt;Configure a Pod to Use a Volume for Storage&lt;/a&gt;. For an example of a longer task page, see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/"&gt;Configure Liveness and Readiness Probes&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td style="text-align: left"&gt;Tutorial&lt;/td&gt;
 &lt;td style="text-align: left"&gt;A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;

&lt;h3 id="creating-a-new-page"&gt;Creating a new page&lt;/h3&gt;
&lt;p&gt;Use a &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/page-content-types/"&gt;content type&lt;/a&gt; for each new page
that you write. The docs site provides templates or
&lt;a href="https://gohugo.io/content-management/archetypes/"&gt;Hugo archetypes&lt;/a&gt; to create
new content pages. To create a new type of page, run &lt;code&gt;hugo new&lt;/code&gt; with the path to the file
you want to create. For example:&lt;/p&gt;</description></item><item><title>Guide for Running Windows Containers in Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/concepts/windows/user-guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/windows/user-guide/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides a walkthrough for some steps you can follow to run
Windows containers using Kubernetes.
The page also highlights some Windows specific functionality within Kubernetes.&lt;/p&gt;
&lt;p&gt;Creating and deploying services and workloads on Kubernetes
behaves in much the same way for Linux and Windows containers.
The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/"&gt;kubectl commands&lt;/a&gt; to interface with the cluster are identical.
The examples in this page are provided to jumpstart your experience with Windows containers.&lt;/p&gt;</description></item><item><title>Metrics for Kubernetes Object States</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/kube-state-metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/kube-state-metrics/</guid><description>&lt;p&gt;The state of Kubernetes objects in the Kubernetes API can be exposed as metrics.
An add-on agent called &lt;a href="https://github.com/kubernetes/kube-state-metrics"&gt;kube-state-metrics&lt;/a&gt; can connect to the Kubernetes API server and expose an HTTP endpoint with metrics generated from the state of individual objects in the cluster.
It exposes various information about the state of objects, such as labels and annotations, startup and termination times, status, or the phase the object is currently in.
For example, containers running in pods create a &lt;code&gt;kube_pod_container_info&lt;/code&gt; metric.
This includes the name of the container, the name of the pod it is part of, the &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespace'&gt;namespace&lt;/a&gt; the pod is running in, the name of the container image, the ID of the image, the image name from the spec of the container, the ID of the running container and the ID of the pod as labels.&lt;/p&gt;</description></item><item><title>Resource Management for Windows nodes</title><link>https://andygol-k8s.netlify.app/docs/concepts/configuration/windows-resource-management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/configuration/windows-resource-management/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page outlines the differences in how resources are managed between Linux and Windows.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;On Linux nodes, &lt;a class='glossary-tooltip' title='A group of Linux processes with optional resource isolation, accounting and limits.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-cgroup' target='_blank' aria-label='cgroups'&gt;cgroups&lt;/a&gt; are used
as a pod boundary for resource control. Containers are created within that boundary
for network, process and file system isolation. The Linux cgroup APIs can be used to
gather CPU, I/O, and memory use statistics.&lt;/p&gt;</description></item><item><title>Autoscale the DNS Service in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-horizontal-autoscaling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-horizontal-autoscaling/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to enable and configure autoscaling of the DNS service in
your Kubernetes cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Certificate Management with kubeadm</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</guid><description>&lt;!-- overview --&gt;

 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.15 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Client certificates generated by &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;kubeadm&lt;/a&gt; expire after 1 year.
This page explains how to manage certificate renewals with kubeadm. It also covers other tasks related
to kubeadm certificate management.&lt;/p&gt;
&lt;p&gt;The Kubernetes project recommends upgrading to the latest patch releases promptly, and
to ensure that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You should be familiar with &lt;a href="https://andygol-k8s.netlify.app/docs/setup/best-practices/certificates/"&gt;PKI certificates and requirements in Kubernetes&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Configure a Pod to Use a Volume for Storage</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-volume-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-volume-storage/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure a Pod to use a Volume for storage.&lt;/p&gt;
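For orientation, a minimal Pod of the kind this task builds looks like the following (an emptyDir volume mounted into a single container; the names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis              # illustrative name
spec:
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: redis-storage
          mountPath: /data/redis
  volumes:
    - name: redis-storage
      emptyDir: {}         # survives container restarts, not Pod deletion
```

An emptyDir volume outlives any single container in the Pod, which is exactly the property the next paragraph motivates.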
&lt;p&gt;A Container's file system lives only as long as the Container does. So when a
Container terminates and restarts, filesystem changes are lost. For more
consistent storage that is independent of the Container, you can use a
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;Volume&lt;/a&gt;. This is especially important for stateful
applications, such as key-value stores (such as Redis) and databases.&lt;/p&gt;</description></item><item><title>Configuring each kubelet in your cluster using kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/kubelet-integration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/kubelet-integration/</guid><description>&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout note" role="note"&gt;
 &lt;strong&gt;Note:&lt;/strong&gt; Dockershim has been removed from the Kubernetes project as of release 1.24. Read the &lt;a href="https://andygol-k8s.netlify.app/dockershim"&gt;Dockershim Removal FAQ&lt;/a&gt; for further details.
&lt;/div&gt;

 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;The lifecycle of the kubeadm CLI tool is decoupled from the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;, which is a daemon that runs
on each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is
initialized or upgraded, whereas the kubelet is always running in the background.&lt;/p&gt;</description></item><item><title>Create an External Load Balancer</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/create-external-load-balancer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/create-external-load-balancer/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to create an external load balancer.&lt;/p&gt;
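The load balancer is requested declaratively by setting the Service's type. An illustrative manifest (the selector and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service    # illustrative name
spec:
  type: LoadBalancer       # asks the cloud provider for an external LB
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80             # port exposed by the load balancer
      targetPort: 8080     # port the backing Pods listen on
```

Once the cloud provider provisions the balancer, its address appears under `status.loadBalancer.ingress` on the Service.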
&lt;p&gt;When creating a &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;, you have
the option of automatically creating a cloud load balancer. This provides an
externally-accessible IP address that sends traffic to the correct port on your cluster
nodes,
&lt;em&gt;provided your cluster runs in a supported environment and is configured with
the correct cloud load balancer provider package&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>CronJob</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/cron-jobs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/cron-jobs/</guid><description>&lt;!-- overview --&gt;

 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;A &lt;em&gt;CronJob&lt;/em&gt; creates &lt;a class='glossary-tooltip' title='A finite or batch task that runs to completion.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/' target='_blank' aria-label='Jobs'&gt;Jobs&lt;/a&gt; on a repeating schedule.&lt;/p&gt;
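For orientation, a minimal CronJob manifest looks like this (the name, schedule, and container are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup     # illustrative name
spec:
  schedule: "0 3 * * *"    # Cron format: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["sh", "-c", "echo running backup"]
```

Each time the schedule fires, the controller creates a Job from `jobTemplate`, and that Job in turn creates the Pod that does the work.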
&lt;p&gt;CronJob is meant for performing regular scheduled actions such as backups, report generation,
and so on. One CronJob object is like one line of a &lt;em&gt;crontab&lt;/em&gt; (cron table) file on a
Unix system. It runs a Job periodically on a given schedule, written in
&lt;a href="https://en.wikipedia.org/wiki/Cron"&gt;Cron&lt;/a&gt; format.&lt;/p&gt;</description></item><item><title>DNS for Services and Pods</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/dns-pod-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/dns-pod-service/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes creates DNS records for Services and Pods. You can contact
Services with consistent DNS names instead of IP addresses.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;Kubernetes publishes information about Pods and Services which is used
to program DNS. kubelet configures Pods' DNS so that running containers
can look up Services by name rather than IP.&lt;/p&gt;
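The published Service names follow a fixed pattern. A small sketch of that construction, assuming the common default cluster domain `cluster.local`:

```python
def service_dns_name(service, namespace, cluster_domain="cluster.local"):
    # A Service "db" in namespace "prod" is reachable cluster-wide at
    # db.prod.svc.cluster.local; Pods in the same namespace can use the
    # short name "db" because of the DNS search list described below.
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_dns_name("db", "prod"))
```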
&lt;p&gt;Services defined in the cluster are assigned DNS names. By default, a
client Pod's DNS search list includes the Pod's own namespace and the
cluster's default domain.&lt;/p&gt;</description></item><item><title>Finalizers</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/finalizers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/finalizers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Finalizers are namespaced keys that tell Kubernetes to wait until specific
conditions are met before it fully deletes &lt;a class='glossary-tooltip' title='A Kubernetes entity, representing an endpoint on the Kubernetes API server.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/using-api/api-concepts/#standard-api-terminology' target='_blank' aria-label='resources'&gt;resources&lt;/a&gt;
that are marked for deletion.
Finalizers alert &lt;a class='glossary-tooltip' title='A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/' target='_blank' aria-label='controllers'&gt;controllers&lt;/a&gt;
to clean up resources the deleted object owned.&lt;/p&gt;</description></item><item><title>Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest</title><link>https://andygol-k8s.netlify.app/docs/tasks/tls/certificate-issue-client-csr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tls/certificate-issue-client-csr/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes lets you use a public key infrastructure (PKI) to authenticate to your cluster
as a client.&lt;/p&gt;
&lt;p&gt;A few steps are required before a normal user can authenticate and invoke an API.
First, the user must have an &lt;a href="https://www.itu.int/rec/T-REC-X.509"&gt;X.509&lt;/a&gt; certificate
issued by an authority that your Kubernetes cluster trusts. The client must then present that certificate to the Kubernetes API.&lt;/p&gt;
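&lt;p&gt;As an illustrative sketch (the name is an example, and the &lt;code&gt;request&lt;/code&gt; value is a placeholder for a base64-encoded PKCS#10 CSR), a CertificateSigningRequest for client authentication looks roughly like:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-user
spec:
  request: &amp;lt;base64-encoded CSR&amp;gt;
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
&lt;/code&gt;&lt;/pre&gt;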
&lt;p&gt;You use a &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/certificate-signing-requests/"&gt;CertificateSigningRequest&lt;/a&gt;
as part of this process, and either you or some other principal must approve the request.&lt;/p&gt;</description></item><item><title>kubeadm version</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-version/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-version/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This command prints the version of kubeadm.&lt;/p&gt;
&lt;!-- body --&gt;

	&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Print the version of kubeadm&lt;/p&gt;</description></item><item><title>Kubelet Systemd Watchdog</title><link>https://andygol-k8s.netlify.app/docs/reference/node/systemd-watchdog/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/systemd-watchdog/</guid><description>&lt;div class="feature-state-notice feature-beta" title="Feature Gate: SystemdWatchdog"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.32 [beta]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;On Linux nodes, Kubernetes 1.35 supports integrating with
&lt;a href="https://systemd.io/"&gt;systemd&lt;/a&gt; to allow the operating system supervisor to recover
a failed kubelet. This integration is not enabled by default.
It can be used as an alternative to periodically requesting
the kubelet's &lt;code&gt;/healthz&lt;/code&gt; endpoint for health checks. If the kubelet
does not respond to the watchdog within the timeout period, the watchdog
will kill the kubelet.&lt;/p&gt;</description></item><item><title>Multi-tenancy</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/multi-tenancy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/multi-tenancy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of available configuration options and best practices for cluster
multi-tenancy.&lt;/p&gt;
&lt;p&gt;Sharing clusters saves costs and simplifies administration. However, sharing clusters also
presents challenges such as security, fairness, and managing &lt;em&gt;noisy neighbors&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Clusters can be shared in many ways. In some cases, different applications may run in the same
cluster. In other cases, multiple instances of the same application may run in the same cluster,
one for each end user. All these types of sharing are frequently described using the umbrella term
&lt;em&gt;multi-tenancy&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Node Status</title><link>https://andygol-k8s.netlify.app/docs/reference/node/node-status/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/node-status/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The status of a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/"&gt;node&lt;/a&gt; in Kubernetes is a critical
aspect of managing a Kubernetes cluster. In this article, we'll cover the basics of
monitoring and maintaining node status to ensure a healthy and stable cluster.&lt;/p&gt;
&lt;h2 id="node-status-fields"&gt;Node status fields&lt;/h2&gt;
&lt;p&gt;A Node's status contains the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#addresses"&gt;Addresses&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#condition"&gt;Conditions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#capacity"&gt;Capacity and Allocatable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#info"&gt;Info&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#declaredfeatures"&gt;Declared Features&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
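&lt;p&gt;For orientation, here is an abbreviated, illustrative excerpt of the &lt;code&gt;status&lt;/code&gt; stanza (all values are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;status:
  addresses:
  - type: InternalIP
    address: 203.0.113.10
  conditions:
  - type: Ready
    status: "True"
  capacity:
    cpu: "4"
    memory: 16Gi
  allocatable:
    cpu: 3500m
    memory: 15Gi
  nodeInfo:
    kubeletVersion: v1.32.0
&lt;/code&gt;&lt;/pre&gt;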
&lt;p&gt;You can use &lt;code&gt;kubectl&lt;/code&gt; to view a Node's status and other details:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl describe node &amp;lt;insert-node-name-here&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Each section of the output is described below.&lt;/p&gt;</description></item><item><title>Page content types</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/page-content-types/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/page-content-types/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes documentation follows several types of page content:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Concept&lt;/li&gt;
&lt;li&gt;Task&lt;/li&gt;
&lt;li&gt;Tutorial&lt;/li&gt;
&lt;li&gt;Reference&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- body --&gt;
&lt;h2 id="content-sections"&gt;Content sections&lt;/h2&gt;
&lt;p&gt;Each page content type contains a number of sections defined by
Markdown comments and HTML headings. You can add content headings to
your page with the &lt;code&gt;heading&lt;/code&gt; shortcode. The comments and headings help
maintain the structure of the page content types.&lt;/p&gt;
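&lt;p&gt;For example, the &lt;code&gt;heading&lt;/code&gt; shortcode is placed in the page Markdown like this (shown here with the &lt;code&gt;whatsnext&lt;/code&gt; heading key):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;## {{% heading "whatsnext" %}}
&lt;/code&gt;&lt;/pre&gt;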
&lt;p&gt;Examples of Markdown comments defining page content sections:&lt;/p&gt;</description></item><item><title>Resource Bin Packing</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/resource-bin-packing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/resource-bin-packing/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/scheduling/config/#scheduling-plugins"&gt;scheduling-plugin&lt;/a&gt; &lt;code&gt;NodeResourcesFit&lt;/code&gt; of kube-scheduler, there are two
scoring strategies that support the bin packing of resources: &lt;code&gt;MostAllocated&lt;/code&gt; and &lt;code&gt;RequestedToCapacityRatio&lt;/code&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="enabling-bin-packing-using-mostallocated-strategy"&gt;Enabling bin packing using MostAllocated strategy&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;MostAllocated&lt;/code&gt; strategy scores the nodes based on the utilization of resources, favoring the ones with higher allocation.
For each resource type, you can set a weight to modify its influence in the node score.&lt;/p&gt;
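&lt;p&gt;A minimal sketch of how these weights appear in the scheduler configuration (field names follow the kube-scheduler configuration API; check the API version against your release):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: MostAllocated
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
&lt;/code&gt;&lt;/pre&gt;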
&lt;p&gt;To set the &lt;code&gt;MostAllocated&lt;/code&gt; strategy for the &lt;code&gt;NodeResourcesFit&lt;/code&gt; plugin, use a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/scheduling/config/"&gt;scheduler configuration&lt;/a&gt; similar to the following:&lt;/p&gt;</description></item><item><title>Seccomp and Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/reference/node/seccomp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/seccomp/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a
&lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt; to your Pods and containers.&lt;/p&gt;</description></item><item><title>Storage Capacity</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/storage-capacity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/storage-capacity/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Storage capacity is limited and may vary depending on the node on
which a pod runs: network-attached storage might not be accessible by
all nodes, or storage is local to a node to begin with.&lt;/p&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.24 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page describes how Kubernetes keeps track of storage capacity and
how the scheduler uses that information to &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/"&gt;schedule Pods&lt;/a&gt; onto nodes
that have access to enough storage capacity for the remaining missing
volumes. Without storage capacity tracking, the scheduler may choose a
node that doesn't have enough capacity to provision a volume and
multiple scheduling retries will be needed.&lt;/p&gt;</description></item><item><title>System Logs</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-logs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-logs/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;System component logs record events happening in the cluster, which can be very useful for debugging.
You can configure log verbosity to see more or less detail.
Logs can be as coarse-grained as showing errors within a component, or as fine-grained as showing
step-by-step traces of events (like HTTP access logs, pod state changes, controller actions, or
scheduler decisions).&lt;/p&gt;
&lt;!-- body --&gt;
&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;Warning:&lt;/h4&gt;In contrast to the command line flags described here, the &lt;em&gt;log
output&lt;/em&gt; itself does &lt;em&gt;not&lt;/em&gt; fall under the Kubernetes API stability guarantees:
individual log entries and their formatting may change from one release
to the next!&lt;/div&gt;

&lt;h2 id="klog"&gt;Klog&lt;/h2&gt;
&lt;p&gt;klog is the Kubernetes logging library. &lt;a href="https://github.com/kubernetes/klog"&gt;klog&lt;/a&gt;
generates log messages for the Kubernetes system components.&lt;/p&gt;</description></item><item><title>Pod Hostname</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-hostname/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-hostname/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to set a Pod's hostname,
potential side effects after configuration, and the underlying mechanics.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="default-pod-hostname"&gt;Default Pod hostname&lt;/h2&gt;
&lt;p&gt;When a Pod is created, its hostname (as observed from within the Pod)
is derived from the Pod's &lt;code&gt;metadata.name&lt;/code&gt; value.
Both the hostname and its corresponding fully qualified domain name (FQDN)
are set to the &lt;code&gt;metadata.name&lt;/code&gt; value (from the Pod's perspective).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;v1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;Pod&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;metadata&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;busybox-1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;spec&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;containers&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;image&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;busybox:1.28&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;command&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- sleep&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#b44"&gt;&amp;#34;3600&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;busybox&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to &lt;code&gt;busybox-1&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Pod Quality of Service Classes</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-qos/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-qos/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page introduces &lt;em&gt;Quality of Service (QoS) classes&lt;/em&gt; in Kubernetes, and explains
how Kubernetes assigns a QoS class to each Pod as a consequence of the resource
constraints that you specify for the containers in that Pod. Kubernetes relies on this
classification to make decisions about which Pods to evict when there are not enough
available resources on a Node.&lt;/p&gt;
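&lt;p&gt;As a quick illustration (the image and resource values are placeholders): a Pod whose containers set resource limits equal to their requests for both CPU and memory is assigned the &lt;code&gt;Guaranteed&lt;/code&gt; class:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi
&lt;/code&gt;&lt;/pre&gt;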
&lt;!-- body --&gt;
&lt;h2 id="quality-of-service-classes"&gt;Quality of Service classes&lt;/h2&gt;
&lt;p&gt;Kubernetes classifies the Pods that you run and allocates each Pod into a specific
&lt;em&gt;quality of service (QoS) class&lt;/em&gt;. Kubernetes uses that classification to influence how different
pods are handled. Kubernetes does this classification based on the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/"&gt;resource requests&lt;/a&gt;
of the &lt;a class='glossary-tooltip' title='A lightweight and portable executable image that contains software and all of its dependencies.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/containers/' target='_blank' aria-label='Containers'&gt;Containers&lt;/a&gt; in that Pod, along with
how those requests relate to resource limits.
This is known as &lt;a class='glossary-tooltip' title='QoS Class (Quality of Service Class) provides a way for Kubernetes to classify pods within the cluster into several classes and make decisions about scheduling and eviction.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='Quality of Service'&gt;Quality of Service&lt;/a&gt;
(QoS) class. Kubernetes assigns every Pod a QoS class based on the resource requests
and limits of its component Containers. QoS classes are used by Kubernetes to decide
which Pods to evict from a Node experiencing
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/node-pressure-eviction/"&gt;Node Pressure&lt;/a&gt;. The possible
QoS classes are &lt;code&gt;Guaranteed&lt;/code&gt;, &lt;code&gt;Burstable&lt;/code&gt;, and &lt;code&gt;BestEffort&lt;/code&gt;. When a Node runs out of resources,
Kubernetes will first evict &lt;code&gt;BestEffort&lt;/code&gt; Pods running on that Node, followed by &lt;code&gt;Burstable&lt;/code&gt; and
finally &lt;code&gt;Guaranteed&lt;/code&gt; Pods. When this eviction is due to resource pressure, only Pods exceeding
resource requests are candidates for eviction.&lt;/p&gt;</description></item><item><title>Change the Access Mode of a PersistentVolume to ReadWriteOncePod</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to change the access mode on an existing PersistentVolume to
use &lt;code&gt;ReadWriteOncePod&lt;/code&gt;.&lt;/p&gt;
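&lt;p&gt;The end state is a PersistentVolume whose &lt;code&gt;spec.accessModes&lt;/code&gt; lists only the new mode, as in this illustrative fragment (note that &lt;code&gt;ReadWriteOncePod&lt;/code&gt; is supported only for CSI volumes):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  accessModes:
  - ReadWriteOncePod
  # ...other spec fields unchanged
&lt;/code&gt;&lt;/pre&gt;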
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Change the default StorageClass</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/change-default-storage-class/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/change-default-storage-class/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to change the default Storage Class that is used to
provision volumes for PersistentVolumeClaims that have no special requirements.&lt;/p&gt;
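&lt;p&gt;A StorageClass is marked as the default via the &lt;code&gt;storageclass.kubernetes.io/is-default-class&lt;/code&gt; annotation, as in this illustrative fragment (the name and provisioner are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/provisioner
&lt;/code&gt;&lt;/pre&gt;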
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure a Pod to Use a PersistentVolume for Storage</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows you how to configure a Pod to use a
&lt;a class='glossary-tooltip' title='Claims storage resources defined in a PersistentVolume so that it can be mounted as a volume in a container.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PersistentVolumeClaim'&gt;PersistentVolumeClaim&lt;/a&gt;
for storage.
Here is a summary of the process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You, as cluster administrator, create a PersistentVolume backed by physical
storage. You do not associate the volume with any Pod.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You, now taking the role of a developer / cluster user, create a
PersistentVolumeClaim that is automatically bound to a suitable
PersistentVolume.&lt;/p&gt;</description></item><item><title>Content organization</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/content-organization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/content-organization/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This site uses Hugo. In Hugo, &lt;a href="https://gohugo.io/content-management/organization/"&gt;content organization&lt;/a&gt; is a core concept.&lt;/p&gt;
&lt;!-- body --&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;strong&gt;Hugo Tip:&lt;/strong&gt; Start Hugo with &lt;code&gt;hugo server --navigateToChanged&lt;/code&gt; for content edit-sessions.&lt;/div&gt;

&lt;h2 id="page-lists"&gt;Page Lists&lt;/h2&gt;
&lt;h3 id="page-order"&gt;Page Order&lt;/h3&gt;
&lt;p&gt;The documentation side menu, the documentation page browser, and similar lists are ordered using
Hugo's default sort order, which sorts by weight (from 1), then by date (newest first),
and finally by link title.&lt;/p&gt;
&lt;p&gt;Given that, if you want to move a page or a section up, set a weight in the page's front matter:&lt;/p&gt;</description></item><item><title>Generating Reference Documentation for kubectl Commands</title><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubectl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubectl/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to generate the &lt;code&gt;kubectl&lt;/code&gt; command reference.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;This topic shows how to generate reference documentation for
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands"&gt;kubectl commands&lt;/a&gt; like
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands#apply"&gt;kubectl apply&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands#taint"&gt;kubectl taint&lt;/a&gt;.
This topic does not show how to generate the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubectl/kubectl-commands/"&gt;kubectl&lt;/a&gt;
options reference page. For instructions on how to generate the kubectl options
reference page, see
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-components/"&gt;Generating Reference Pages for Kubernetes Components and Tools&lt;/a&gt;.&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;

	&lt;h3 id="requirements"&gt;Requirements:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need a machine that is running Linux or macOS.&lt;/p&gt;</description></item><item><title>Hardening Guide - Authentication Mechanisms</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/hardening-guide/authentication-mechanisms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/hardening-guide/authentication-mechanisms/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Selecting the appropriate authentication mechanism(s) is a crucial aspect of securing your cluster.
Kubernetes provides several built-in mechanisms, each with its own strengths and weaknesses that
should be carefully considered when choosing the best authentication mechanism for your cluster.&lt;/p&gt;
&lt;p&gt;In general, it is recommended to enable as few authentication mechanisms as possible to simplify
user management and prevent cases where users retain access to a cluster that is no longer required.&lt;/p&gt;</description></item><item><title>Hardening Guide - Scheduler Configuration</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/hardening-guide/scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/hardening-guide/scheduler/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes &lt;a class='glossary-tooltip' title='Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='scheduler'&gt;scheduler&lt;/a&gt; is
one of the critical components of the
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This document covers how to improve the security posture of the Scheduler.&lt;/p&gt;</description></item><item><title>IPv4/IPv6 dual-stack</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/dual-stack/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/dual-stack/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;IPv4/IPv6 dual-stack networking enables the allocation of both IPv4 and IPv6 addresses to
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt; and &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Services'&gt;Services&lt;/a&gt;.&lt;/p&gt;
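&lt;p&gt;For example, a Service can request both address families; an illustrative manifest (the name and selector are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80
&lt;/code&gt;&lt;/pre&gt;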
&lt;p&gt;IPv4/IPv6 dual-stack networking is enabled by default for Kubernetes clusters starting in version
1.21, allowing the simultaneous assignment of both IPv4 and IPv6 addresses.&lt;/p&gt;</description></item><item><title>kubeadm alpha</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-alpha/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-alpha/</guid><description>&lt;div class="alert alert-caution" role="note"&gt;&lt;h4 class="alert-heading"&gt;Caution:&lt;/h4&gt;&lt;code&gt;kubeadm alpha&lt;/code&gt; provides a preview of a set of features made available for gathering feedback
from the community. Please try it out and give us feedback!&lt;/div&gt;

&lt;p&gt;Currently there are no experimental commands under &lt;code&gt;kubeadm alpha&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="what-s-next"&gt;What's next&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init/"&gt;kubeadm init&lt;/a&gt; to bootstrap a Kubernetes control-plane node&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join/"&gt;kubeadm join&lt;/a&gt; to connect a node to the cluster&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-reset/"&gt;kubeadm reset&lt;/a&gt; to revert any changes made to this host by &lt;code&gt;kubeadm init&lt;/code&gt; or &lt;code&gt;kubeadm join&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>kubeadm certs</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-certs/</guid><description>&lt;p&gt;&lt;code&gt;kubeadm certs&lt;/code&gt; provides utilities for managing certificates.
For more details on how these commands can be used, see
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/"&gt;Certificate Management with kubeadm&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="cmd-certs"&gt;kubeadm certs&lt;/h2&gt;
&lt;p&gt;A collection of operations for managing Kubernetes certificates.&lt;/p&gt;
&lt;ul class="nav nav-tabs" id="tab-certs" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-certs-0" role="tab" aria-controls="tab-certs-0" aria-selected="true"&gt;overview&lt;/a&gt;&lt;/li&gt;
	 &lt;/ul&gt;
&lt;div class="tab-content" id="tab-certs"&gt;&lt;div id="tab-certs-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-certs-0"&gt;

&lt;p&gt;&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Commands related to handling Kubernetes certificates&lt;/p&gt;</description></item><item><title>kubeadm init phase</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/</guid><description>&lt;p&gt;&lt;code&gt;kubeadm init phase&lt;/code&gt; enables you to invoke atomic steps of the bootstrap process.
Hence, you can let kubeadm do some of the work and you can fill in the gaps
if you wish to apply customization.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubeadm init phase&lt;/code&gt; is consistent with the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow"&gt;kubeadm init workflow&lt;/a&gt;,
and behind the scenes both use the same code.&lt;/p&gt;
&lt;h2 id="cmd-phase-preflight"&gt;kubeadm init phase preflight&lt;/h2&gt;
&lt;p&gt;Using this command you can execute preflight checks on a control-plane node.&lt;/p&gt;</description></item><item><title>kubeadm join phase</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/</guid><description>&lt;p&gt;&lt;code&gt;kubeadm join phase&lt;/code&gt; enables you to invoke atomic steps of the join process.
Hence, you can let kubeadm do some of the work and fill in the gaps yourself
if you wish to apply customization.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubeadm join phase&lt;/code&gt; is consistent with the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow"&gt;kubeadm join workflow&lt;/a&gt;,
and behind the scenes both use the same code.&lt;/p&gt;
&lt;h2 id="cmd-join-phase"&gt;kubeadm join phase&lt;/h2&gt;
&lt;ul class="nav nav-tabs" id="tab-phase" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-phase-0" role="tab" aria-controls="tab-phase-0" aria-selected="true"&gt;phase&lt;/a&gt;&lt;/li&gt;
	 &lt;/ul&gt;
&lt;div class="tab-content" id="tab-phase"&gt;&lt;div id="tab-phase-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-phase-0"&gt;

&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;join&amp;quot; workflow.&lt;/p&gt;</description></item><item><title>kubeadm kubeconfig</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/</guid><description>&lt;p&gt;&lt;code&gt;kubeadm kubeconfig&lt;/code&gt; provides utilities for managing kubeconfig files.&lt;/p&gt;
&lt;p&gt;For examples of how to use &lt;code&gt;kubeadm kubeconfig user&lt;/code&gt; see
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users"&gt;Generating kubeconfig files for additional users&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="cmd-kubeconfig"&gt;kubeadm kubeconfig&lt;/h2&gt;
&lt;ul class="nav nav-tabs" id="tab-kubeconfig" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-kubeconfig-0" role="tab" aria-controls="tab-kubeconfig-0" aria-selected="true"&gt;overview&lt;/a&gt;&lt;/li&gt;
	 &lt;/ul&gt;
&lt;div class="tab-content" id="tab-kubeconfig"&gt;&lt;div id="tab-kubeconfig-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-kubeconfig-0"&gt;

&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Kubeconfig file utilities.&lt;/p&gt;</description></item><item><title>kubeadm reset phase</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-reset-phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-reset-phase/</guid><description>&lt;p&gt;&lt;code&gt;kubeadm reset phase&lt;/code&gt; enables you to invoke atomic steps of the node reset process.
Hence, you can let kubeadm do some of the work and fill in the gaps yourself
if you wish to apply customization.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubeadm reset phase&lt;/code&gt; is consistent with the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-reset/#reset-workflow"&gt;kubeadm reset workflow&lt;/a&gt;,
and behind the scenes both use the same code.&lt;/p&gt;
&lt;h2 id="cmd-reset-phase"&gt;kubeadm reset phase&lt;/h2&gt;
&lt;ul class="nav nav-tabs" id="tab-phase" role="tablist"&gt;&lt;li class="nav-item"&gt;&lt;a data-toggle="tab" class="nav-link active" href="#tab-phase-0" role="tab" aria-controls="tab-phase-0" aria-selected="true"&gt;phase&lt;/a&gt;&lt;/li&gt;
	 &lt;/ul&gt;
&lt;div class="tab-content" id="tab-phase"&gt;&lt;div id="tab-phase-0" class="tab-pane show active" role="tabpanel" aria-labelledby="tab-phase-0"&gt;

&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;reset&amp;quot; workflow.&lt;/p&gt;</description></item><item><title>Kubernetes API Server Bypass Risks</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/api-server-bypass-risks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/api-server-bypass-risks/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes API server is the main point of entry to a cluster for external parties
(users and services) interacting with it.&lt;/p&gt;
&lt;p&gt;As part of this role, the API server has several key built-in security controls, such as
audit logging and &lt;a class='glossary-tooltip' title='A piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='admission controllers'&gt;admission controllers&lt;/a&gt;.
However, there are ways to modify the configuration
or content of the cluster that bypass these controls.&lt;/p&gt;</description></item><item><title>Node-specific Volume Limits</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/storage-limits/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/storage-limits/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.&lt;/p&gt;
&lt;p&gt;Cloud providers like Google, Amazon, and Microsoft typically have a limit on
how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="kubernetes-default-limits"&gt;Kubernetes default limits&lt;/h2&gt;
&lt;p&gt;The Kubernetes scheduler has default limits on the number of volumes
that can be attached to a Node:&lt;/p&gt;</description></item><item><title>Owners and Dependents</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/owners-dependents/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/owners-dependents/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, some &lt;a class='glossary-tooltip' title='An entity in the Kubernetes system, representing part of the state of your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/#kubernetes-objects' target='_blank' aria-label='objects'&gt;objects&lt;/a&gt; are
&lt;em&gt;owners&lt;/em&gt; of other objects. For example, a
&lt;a class='glossary-tooltip' title='ReplicaSet ensures that a specified number of Pod replicas are running at one time' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/' target='_blank' aria-label='ReplicaSet'&gt;ReplicaSet&lt;/a&gt; is the owner
of a set of Pods. These owned objects are &lt;em&gt;dependents&lt;/em&gt; of their owner.&lt;/p&gt;
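&lt;p&gt;For example, a Pod created by a ReplicaSet named &lt;code&gt;frontend&lt;/code&gt; carries an owner reference in its metadata. A sketch (the Pod name and uid below are illustrative placeholders):&lt;/p&gt;

```yaml
# Metadata the control plane sets on a Pod owned by a ReplicaSet
apiVersion: v1
kind: Pod
metadata:
  name: frontend-b2zdv
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: frontend
      uid: d9607e19-f88f-11e6-a518-42010a800195  # uid of the owning ReplicaSet
      controller: true           # this owner is the managing controller
      blockOwnerDeletion: true   # used by foreground cascading deletion
```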
&lt;p&gt;Ownership is different from the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels/"&gt;labels and selectors&lt;/a&gt;
mechanism that some resources also use. For example, consider a Service that
creates &lt;code&gt;EndpointSlice&lt;/code&gt; objects. The Service uses &lt;a class='glossary-tooltip' title='Tags objects with identifying attributes that are meaningful and relevant to users.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/labels' target='_blank' aria-label='labels'&gt;labels&lt;/a&gt; to allow the control plane to
determine which &lt;code&gt;EndpointSlice&lt;/code&gt; objects are used for that Service. In addition
to the labels, each &lt;code&gt;EndpointSlice&lt;/code&gt; that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don’t control.&lt;/p&gt;</description></item><item><title>Pod Priority and Preemption</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.14 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/"&gt;Pods&lt;/a&gt; can have &lt;em&gt;priority&lt;/em&gt;. Priority indicates the
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
pending Pod possible.&lt;/p&gt;
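&lt;p&gt;To give Pods a priority, you first create a PriorityClass and then reference it from a Pod spec. A minimal sketch (the name, value, and description are illustrative):&lt;/p&gt;

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # higher value means higher priority
globalDefault: false    # do not apply to Pods without a priorityClassName
description: "Use for service Pods that must not be evicted."
```

&lt;p&gt;Pods opt in by setting &lt;code&gt;spec.priorityClassName: high-priority&lt;/code&gt;.&lt;/p&gt;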
&lt;!-- body --&gt;
&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;Warning:&lt;/h4&gt;&lt;p&gt;In a cluster where not all users are trusted, a malicious user could create Pods
at the highest possible priorities, causing other Pods to be evicted or to
remain unschedulable.
An administrator can use ResourceQuota to prevent users from creating pods at
high priorities.&lt;/p&gt;</description></item><item><title>Reconfiguring a kubeadm cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;kubeadm does not support automated ways of reconfiguring components that
were deployed on managed nodes. One way to automate this would be
to use a custom &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/operator/"&gt;operator&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To modify the components' configuration, you must manually edit associated cluster
objects and files on disk.&lt;/p&gt;
&lt;p&gt;This guide shows the correct sequence of steps that need to be performed
to achieve kubeadm cluster reconfiguration.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You need a cluster that was deployed using kubeadm&lt;/li&gt;
&lt;li&gt;Have administrator credentials (&lt;code&gt;/etc/kubernetes/admin.conf&lt;/code&gt;) and network connectivity
to a running kube-apiserver in the cluster from a host that has kubectl installed&lt;/li&gt;
&lt;li&gt;Have a text editor installed on all hosts&lt;/li&gt;
&lt;/ul&gt;
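&lt;p&gt;As a sketch of what the steps below involve: the cluster-wide kubeadm configuration lives in the &lt;code&gt;kubeadm-config&lt;/code&gt; ConfigMap in the &lt;code&gt;kube-system&lt;/code&gt; namespace, which you can open with &lt;code&gt;kubectl edit cm -n kube-system kubeadm-config&lt;/code&gt;. The field values shown here are illustrative:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.27.1            # illustrative value
    controlPlaneEndpoint: 10.0.0.10:6443  # illustrative value
```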
&lt;!-- steps --&gt;
&lt;h2 id="reconfiguring-the-cluster"&gt;Reconfiguring the cluster&lt;/h2&gt;
&lt;p&gt;kubeadm writes a set of cluster-wide component configuration options in
ConfigMaps and other objects. These objects must be manually edited. The command &lt;code&gt;kubectl edit&lt;/code&gt;
can be used for that.&lt;/p&gt;</description></item><item><title>ReplicationController</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicationcontroller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicationcontroller/</guid><description>&lt;!-- overview --&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;A &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/"&gt;&lt;code&gt;Deployment&lt;/code&gt;&lt;/a&gt; that configures a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/"&gt;&lt;code&gt;ReplicaSet&lt;/code&gt;&lt;/a&gt; is now the recommended way to set up replication.&lt;/div&gt;

&lt;p&gt;A &lt;em&gt;ReplicationController&lt;/em&gt; ensures that a specified number of pod replicas are running at any one
time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
always up and available.&lt;/p&gt;
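&lt;p&gt;A minimal ReplicationController manifest looks like this (the image and replica count are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3            # desired number of Pod replicas
  selector:
    app: nginx           # Pods matching this label are managed
  template:              # Pod template used to create replacements
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```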
&lt;!-- body --&gt;
&lt;h2 id="how-a-replicationcontroller-works"&gt;How a ReplicationController works&lt;/h2&gt;
&lt;p&gt;If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade.
For this reason, you should use a ReplicationController even if your application requires
only a single pod. A ReplicationController is similar to a process supervisor,
but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods
across multiple nodes.&lt;/p&gt;</description></item><item><title>Switching from Polling to CRI Event-based Updates to Container Status</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/switch-to-evented-pleg/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/switch-to-evented-pleg/</guid><description>&lt;div class="feature-state-notice feature-alpha" title="Feature Gate: EventedPLEG"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.26 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to migrate nodes to use event-based updates for container status. The event-based
implementation reduces node resource consumption by the kubelet, compared to the legacy approach
that relies on polling.
You may know this feature as &lt;em&gt;evented Pod lifecycle event generator (PLEG)&lt;/em&gt;. That's the name used
internally within the Kubernetes project for a key implementation detail.&lt;/p&gt;</description></item><item><title>Traces For Kubernetes System Components</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-traces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/system-traces/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.27 [beta]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;System component traces record the latency of and relationships between operations in the cluster.&lt;/p&gt;
&lt;p&gt;Kubernetes components emit traces using the
&lt;a href="https://opentelemetry.io/docs/specs/otlp/"&gt;OpenTelemetry Protocol&lt;/a&gt;
with the gRPC exporter and can be collected and routed to tracing backends using an
&lt;a href="https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector"&gt;OpenTelemetry Collector&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="trace-collection"&gt;Trace Collection&lt;/h2&gt;
&lt;p&gt;Kubernetes components have built-in gRPC exporters for OTLP to export traces, either with
or without an OpenTelemetry Collector.&lt;/p&gt;</description></item><item><title>Workload Reference</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/workload-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/workload-reference/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-alpha" title="Feature Gate: GenericWorkload"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;You can link a Pod to a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/workload-api/"&gt;Workload&lt;/a&gt; object
to indicate that the Pod belongs to a larger application or group. This enables the scheduler to make decisions
based on the group's requirements rather than treating the Pod as an independent entity.&lt;/p&gt;
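&lt;p&gt;The feature is alpha, so the exact schema may change. As a purely hypothetical sketch (the &lt;code&gt;workloadRef&lt;/code&gt; subfields shown are assumptions, not a confirmed schema), a Pod referencing a Workload could look like:&lt;/p&gt;

```yaml
# Hypothetical sketch only: workloadRef subfields are assumed, not confirmed
apiVersion: v1
kind: Pod
metadata:
  name: worker-0
spec:
  workloadRef:
    name: my-workload   # Workload object in the same namespace (assumed field)
  containers:
    - name: worker
      image: registry.k8s.io/pause:3.9
```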
&lt;!-- body --&gt;
&lt;h2 id="specifying-a-workload-reference"&gt;Specifying a Workload reference&lt;/h2&gt;
&lt;p&gt;When the &lt;a href="(/docs/reference/command-line-tools-reference/feature-gates/#GenericWorkload)"&gt;&lt;code&gt;GenericWorkload&lt;/code&gt;&lt;/a&gt;
feature gate is enabled, you can use the &lt;code&gt;spec.workloadRef&lt;/code&gt; field in your Pod manifest.
This field establishes a link to a specific pod group defined within a Workload resource
in the same namespace.&lt;/p&gt;</description></item><item><title>Local ephemeral storage</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/ephemeral-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/ephemeral-storage/</guid><description>&lt;p&gt;Nodes have local ephemeral storage, backed by
locally-attached writeable devices or, sometimes, by RAM.
&amp;quot;Ephemeral&amp;quot; means that there is no long-term guarantee about durability.&lt;/p&gt;
&lt;p&gt;Pods use ephemeral local storage for scratch space, caching, and for logs.
The kubelet can provide scratch space to Pods using local ephemeral storage to
mount &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#emptydir"&gt;&lt;code&gt;emptyDir&lt;/code&gt;&lt;/a&gt;
&lt;a class='glossary-tooltip' title='A directory containing data, accessible to the containers in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/' target='_blank' aria-label='volumes'&gt;volumes&lt;/a&gt; into containers.&lt;/p&gt;
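&lt;p&gt;A Pod can both mount an &lt;code&gt;emptyDir&lt;/code&gt; volume for scratch space and declare ephemeral-storage requests and limits; the sizes and image below are illustrative:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # illustrative image
      resources:
        requests:
          ephemeral-storage: "1Gi"   # scheduler reserves this much local storage
        limits:
          ephemeral-storage: "2Gi"   # exceeding this can get the Pod evicted
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 500Mi   # cap on this volume's usage
```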
&lt;p&gt;The kubelet also uses this kind of storage to hold
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/logging/#logging-at-the-node-level"&gt;node-level container logs&lt;/a&gt;,
container images, and the writable layers of running containers.&lt;/p&gt;</description></item><item><title>Mapping PodSecurityPolicies to Pod Security Standards</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/psp-to-pod-security-standards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/psp-to-pod-security-standards/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The tables below enumerate the configuration parameters on
&lt;code&gt;PodSecurityPolicy&lt;/code&gt; objects, whether the field mutates
and/or validates pods, and how the configuration values map to the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For each applicable parameter, the allowed values for the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/#baseline"&gt;Baseline&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/#restricted"&gt;Restricted&lt;/a&gt; profiles are listed.
Anything outside the allowed values for those profiles would fall under the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/#privileged"&gt;Privileged&lt;/a&gt; profile. &amp;quot;No opinion&amp;quot;
means all values are allowed under all Pod Security Standards.&lt;/p&gt;
&lt;p&gt;For a step-by-step migration guide, see
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/migrate-from-psp/"&gt;Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Advanced contributing</title><link>https://andygol-k8s.netlify.app/docs/contribute/advanced/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/advanced/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page assumes that you understand how to
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/new-content/"&gt;contribute to new content&lt;/a&gt; and
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/review/reviewing-prs/"&gt;review others' work&lt;/a&gt;, and are ready
to learn about more ways to contribute. You need to use the Git command line
client and other tools for some of these tasks.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="propose-improvements"&gt;Propose improvements&lt;/h2&gt;
&lt;p&gt;SIG Docs &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/participate/roles-and-responsibilities/#members"&gt;members&lt;/a&gt;
can propose improvements.&lt;/p&gt;
&lt;p&gt;After you've been contributing to the Kubernetes documentation for a while, you
may have ideas for improving the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/style-guide/"&gt;Style Guide&lt;/a&gt;
, the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/"&gt;Content Guide&lt;/a&gt;, the toolchain used to build
the documentation, the website style, the processes for reviewing and merging
pull requests, or other aspects of the documentation. For maximum transparency,
these types of proposals need to be discussed in a SIG Docs meeting or on the
&lt;a href="https://groups.google.com/forum/#!forum/kubernetes-sig-docs"&gt;kubernetes-sig-docs mailing list&lt;/a&gt;.
In addition, it can help to have some context about the way things
currently work and why past decisions have been made before proposing sweeping
changes. The quickest way to get answers to questions about how the documentation
currently works is to ask in the &lt;code&gt;#sig-docs&lt;/code&gt; Slack channel on
&lt;a href="https://kubernetes.slack.com"&gt;kubernetes.slack.com&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Change the Reclaim Policy of a PersistentVolume</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/change-pv-reclaim-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/change-pv-reclaim-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to change the reclaim policy of a Kubernetes
PersistentVolume.&lt;/p&gt;
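&lt;p&gt;In short, the change amounts to editing a single field on the PersistentVolume object, for example with &lt;code&gt;kubectl patch pv $PV_NAME -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'&lt;/code&gt;. The relevant field is shown below; everything else in this manifest is an illustrative minimal example:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Retain keeps the storage asset after release
  hostPath:
    path: /mnt/data
```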
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure a Pod to Use a Projected Volume for Storage</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-projected-volume-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-projected-volume-storage/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#projected"&gt;&lt;code&gt;projected&lt;/code&gt;&lt;/a&gt; Volume to mount
several existing volume sources into the same directory. Currently, &lt;code&gt;secret&lt;/code&gt;, &lt;code&gt;configMap&lt;/code&gt;, &lt;code&gt;downwardAPI&lt;/code&gt;,
and &lt;code&gt;serviceAccountToken&lt;/code&gt; volumes can be projected.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;code&gt;serviceAccountToken&lt;/code&gt; is not a volume type.&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Dual-stack support with kubeadm</title><link>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/dual-stack-support/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/dual-stack-support/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;Your Kubernetes cluster includes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/dual-stack/"&gt;dual-stack&lt;/a&gt;
networking, which means that cluster networking lets you use either address family.
In a cluster, the control plane can assign both an IPv4 address and an IPv6 address to a single
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; or a &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Generating Reference Documentation for Metrics</title><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/metrics-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/metrics-reference/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page demonstrates the generation of metrics reference documentation.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;

	&lt;h3 id="requirements"&gt;Requirements:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need a machine that is running Linux or macOS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need to have these tools installed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.python.org/downloads/"&gt;Python&lt;/a&gt; v3.7.x+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://go.dev/dl/"&gt;Golang&lt;/a&gt; version 1.13+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/pip/"&gt;Pip&lt;/a&gt; used to install PyYAML&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pyyaml.org/"&gt;PyYAML&lt;/a&gt; v5.1.2&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gnu.org/software/make/"&gt;make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gcc.gnu.org/"&gt;gcc compiler/linker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/installation/"&gt;Docker&lt;/a&gt; (Required only for &lt;code&gt;kubectl&lt;/code&gt; command reference)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Your &lt;code&gt;PATH&lt;/code&gt; environment variable must include the required build tools, such as the &lt;code&gt;Go&lt;/code&gt; binary and &lt;code&gt;python&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>HorizontalPodAutoscaler Walkthrough</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/autoscaling/horizontal-pod-autoscale/"&gt;HorizontalPodAutoscaler&lt;/a&gt;
(HPA for short)
automatically updates a workload resource (such as
a &lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt; or
&lt;a class='glossary-tooltip' title='A StatefulSet manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/' target='_blank' aria-label='StatefulSet'&gt;StatefulSet&lt;/a&gt;), with the
aim of automatically scaling the workload to match demand.&lt;/p&gt;
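&lt;p&gt;For example, an HPA that keeps a Deployment's average CPU utilization near 50% can be declared like this (the target name and replica bounds are illustrative):&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:          # the workload resource being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale to keep average CPU near 50%
```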
&lt;p&gt;Horizontal scaling means that the response to increased load is to deploy more
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.
This is different from &lt;em&gt;vertical&lt;/em&gt; scaling, which for Kubernetes would mean
assigning more resources (for example: memory or CPU) to the Pods that are already
running for the workload.&lt;/p&gt;</description></item><item><title>Implementation details</title><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/implementation-details/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/implementation-details/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.10 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; and &lt;code&gt;kubeadm join&lt;/code&gt; together provide a streamlined user experience for creating a
bare Kubernetes cluster from scratch that aligns with best practices.
However, it might not be obvious &lt;em&gt;how&lt;/em&gt; kubeadm does that.&lt;/p&gt;
&lt;p&gt;This document provides additional details on what happens under the hood, with the aim of sharing
knowledge on the best practices for a Kubernetes cluster.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="core-design-principles"&gt;Core design principles&lt;/h2&gt;
&lt;p&gt;The cluster that &lt;code&gt;kubeadm init&lt;/code&gt; and &lt;code&gt;kubeadm join&lt;/code&gt; set up should be:&lt;/p&gt;</description></item><item><title>Linux kernel security constraints for Pods and containers</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/linux-kernel-security-constraints/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/linux-kernel-security-constraints/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes some of the security features that are built into the Linux
kernel that you can use in your Kubernetes workloads. To learn how to apply
these features to your Pods and containers, refer to
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/security-context/"&gt;Configure a SecurityContext for a Pod or Container&lt;/a&gt;.
You should already be familiar with Linux and with the basics of Kubernetes
workloads.&lt;/p&gt;
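&lt;p&gt;For example, a Pod that uses &lt;code&gt;securityContext&lt;/code&gt; to avoid running as root might look like this (the name and image are placeholders):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo      # placeholder name
spec:
  securityContext:
    runAsNonRoot: true   # refuse to start containers whose user resolves to root
    runAsUser: 1000      # run processes as this Linux UID
    runAsGroup: 3000     # run processes with this primary GID
  containers:
  - name: app
    image: registry.example/app:1.0   # placeholder image
```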
&lt;!-- body --&gt;
&lt;h2 id="run-without-root"&gt;Run workloads without root privileges&lt;/h2&gt;
&lt;p&gt;When you deploy a workload in Kubernetes, use the Pod specification to restrict
that workload from running as the root user on the node. You can use the Pod
&lt;code&gt;securityContext&lt;/code&gt; to define the specific Linux user and group for the processes in
the Pod, and explicitly restrict containers from running as root users. Setting
these values in the Pod manifest takes precedence over similar values in the
container image, which is especially useful if you're running images that you
don't own.&lt;/p&gt;</description></item><item><title>List All Container Images Running in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/list-all-running-container-images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/list-all-running-container-images/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use kubectl to list all of the Container images
for Pods running in a cluster.&lt;/p&gt;
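&lt;p&gt;As a quick sketch of the approach, you can extract every container image with a jsonpath template and count the unique values (this requires &lt;code&gt;kubectl&lt;/code&gt; configured against a live cluster):&lt;/p&gt;

```shell
# List all container images across all namespaces, one per line,
# prefixed with a count of how many Pod containers reference each image.
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' \
  | sort \
  | uniq -c
```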
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Node-pressure Eviction</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/node-pressure-eviction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/node-pressure-eviction/</guid><description>&lt;p&gt;Node-pressure eviction is the process by which the &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; proactively terminates
pods to reclaim &lt;a class='glossary-tooltip' title='A defined amount of infrastructure available for consumption (CPU, memory, etc).' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-infrastructure-resource' target='_blank' aria-label='resource'&gt;resources&lt;/a&gt;
on nodes.&lt;/p&gt;
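&lt;p&gt;For context, eviction thresholds are tunable in the kubelet configuration file; a sketch using values close to the documented Linux defaults:&lt;/p&gt;

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:                   # kubelet evicts pods when a signal crosses these thresholds
  memory.available: "100Mi"     # evict when available node memory drops below 100Mi
  nodefs.available: "10%"       # evict when the node filesystem has less than 10% free
  imagefs.available: "15%"      # evict when the image filesystem has less than 15% free
```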
&lt;p&gt;The &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='kubelet'&gt;kubelet&lt;/a&gt; monitors resources
like memory, disk space, and filesystem inodes on your cluster's nodes.
When one or more of these resources reach specific consumption levels, the
kubelet can proactively fail one or more pods on the node to reclaim resources
and prevent starvation.&lt;/p&gt;</description></item><item><title>Proxies in Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/proxies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/proxies/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains proxies used with Kubernetes.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="proxies"&gt;Proxies&lt;/h2&gt;
&lt;p&gt;There are several different proxies you may encounter when using Kubernetes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api"&gt;kubectl proxy&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;runs on a user's desktop or in a pod&lt;/li&gt;
&lt;li&gt;proxies from a localhost address to the Kubernetes apiserver&lt;/li&gt;
&lt;li&gt;client to proxy uses HTTP&lt;/li&gt;
&lt;li&gt;proxy to apiserver uses HTTPS&lt;/li&gt;
&lt;li&gt;locates apiserver&lt;/li&gt;
&lt;li&gt;adds authentication headers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services"&gt;apiserver proxy&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;is a bastion built into the apiserver&lt;/li&gt;
&lt;li&gt;connects a user outside of the cluster to cluster IPs which otherwise might not be reachable&lt;/li&gt;
&lt;li&gt;runs in the apiserver processes&lt;/li&gt;
&lt;li&gt;client to proxy uses HTTPS (or HTTP if the apiserver is so configured)&lt;/li&gt;
&lt;li&gt;proxy to target may use HTTP or HTTPS as chosen by proxy using available information&lt;/li&gt;
&lt;li&gt;can be used to reach a Node, Pod, or Service&lt;/li&gt;
&lt;li&gt;does load balancing when used to reach a Service&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/#ips-and-vips"&gt;kube proxy&lt;/a&gt;:&lt;/p&gt;</description></item><item><title>Recommended Labels</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/common-labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/common-labels/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;You can visualize and manage Kubernetes objects with more tools than kubectl and
the dashboard. A common set of labels allows tools to work interoperably, describing
objects in a common manner that all tools can understand.&lt;/p&gt;
&lt;p&gt;In addition to supporting tooling, the recommended labels describe applications
in a way that can be queried.&lt;/p&gt;
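&lt;p&gt;As an illustration, the recommended labels applied to a workload that is part of a larger application might look like this (the values are examples):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: mysql            # the name of the application
    app.kubernetes.io/instance: mysql-abcxyz # a unique name for this instance
    app.kubernetes.io/version: "5.7.21"      # the current application version
    app.kubernetes.io/component: database    # the component within the architecture
    app.kubernetes.io/part-of: wordpress     # the higher-level application this belongs to
    app.kubernetes.io/managed-by: Helm       # the tool managing this object
```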
&lt;!-- body --&gt;
&lt;p&gt;The metadata is organized around the concept of an &lt;em&gt;application&lt;/em&gt;. Kubernetes is not
a platform as a service (PaaS) and doesn't have or enforce a formal notion of an application.
Instead, applications are informal and described with metadata. The definition of
what an application contains is loose.&lt;/p&gt;</description></item><item><title>Security Checklist</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/security-checklist/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/security-checklist/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This checklist aims to provide a basic list of guidance, with links to more
comprehensive documentation on each topic. It does not claim to be exhaustive
and is meant to evolve.&lt;/p&gt;
&lt;p&gt;How to read and use this document:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The order of topics does not reflect an order of priority.&lt;/li&gt;
&lt;li&gt;Some checklist items are detailed in the paragraph below the list of each section.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="alert alert-caution" role="note"&gt;&lt;h4 class="alert-heading"&gt;Caution:&lt;/h4&gt;Checklists are &lt;strong&gt;not&lt;/strong&gt; sufficient for attaining a good security posture on their
own. A good security posture requires constant attention and improvement, but a
checklist can be the first step on the never-ending journey towards security
preparedness. Some of the recommendations in this checklist may be too
restrictive or too lax for your specific security needs. Since Kubernetes
security is not &amp;quot;one size fits all&amp;quot;, each category of checklist items should be
evaluated on its merits.&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="authentication-authorization"&gt;Authentication &amp;amp; Authorization&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;input disabled="" type="checkbox"&gt; &lt;code&gt;system:masters&lt;/code&gt; group is not used for user or component authentication after bootstrapping.&lt;/li&gt;
&lt;li&gt;&lt;input disabled="" type="checkbox"&gt; The kube-controller-manager is running with &lt;code&gt;--use-service-account-credentials&lt;/code&gt;
enabled.&lt;/li&gt;
&lt;li&gt;&lt;input disabled="" type="checkbox"&gt; The root certificate is protected (either an offline CA, or a managed
online CA with effective access controls).&lt;/li&gt;
&lt;li&gt;&lt;input disabled="" type="checkbox"&gt; Intermediate and leaf certificates have an expiry date no more than 3
years in the future.&lt;/li&gt;
&lt;li&gt;&lt;input disabled="" type="checkbox"&gt; A process exists for periodic access review, and reviews occur no more
than 24 months apart.&lt;/li&gt;
&lt;li&gt;&lt;input disabled="" type="checkbox"&gt; The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/rbac-good-practices/"&gt;Role Based Access Control Good Practices&lt;/a&gt;
are followed for guidance related to authentication and authorization.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After bootstrapping, neither users nor components should authenticate to the
Kubernetes API as &lt;code&gt;system:masters&lt;/code&gt;. Similarly, running all of
kube-controller-manager as &lt;code&gt;system:masters&lt;/code&gt; should be avoided. In fact,
&lt;code&gt;system:masters&lt;/code&gt; should only be used as a break-glass mechanism, as opposed to
an admin user.&lt;/p&gt;</description></item><item><title>Topology Aware Routing</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/topology-aware-routing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/topology-aware-routing/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.23 [beta]&lt;/code&gt;
 &lt;/div&gt;
 



&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;Prior to Kubernetes 1.27, this feature was known as &lt;em&gt;Topology Aware Hints&lt;/em&gt;.&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Topology Aware Routing&lt;/em&gt; adjusts routing behavior to prefer keeping traffic in
the zone it originated from. In some cases this can help reduce costs or improve
network performance.&lt;/p&gt;
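&lt;p&gt;A sketch of enabling it on a Service (in Kubernetes 1.27 and later the annotation is &lt;code&gt;service.kubernetes.io/topology-mode&lt;/code&gt;; earlier releases used &lt;code&gt;service.kubernetes.io/topology-aware-hints&lt;/code&gt;; names are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # illustrative name
  annotations:
    service.kubernetes.io/topology-mode: Auto   # ask the control plane to populate zone hints
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
  - port: 80
```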
&lt;!-- body --&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Kubernetes clusters are increasingly deployed in multi-zone environments.
&lt;em&gt;Topology Aware Routing&lt;/em&gt; provides a mechanism to help keep traffic within the
zone it originated from. When calculating the endpoints for a &lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;, the EndpointSlice controller considers
the topology (region and zone) of each endpoint and populates the hints field to
allocate it to a zone. Cluster components such as &lt;a class='glossary-tooltip' title='kube-proxy is a network proxy that runs on each node in the cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-proxy/' target='_blank' aria-label='kube-proxy'&gt;kube-proxy&lt;/a&gt; can then consume those hints, and use
them to influence how the traffic is routed (favoring topologically closer
endpoints).&lt;/p&gt;</description></item><item><title>Volume Health Monitoring</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-health-monitoring/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/volume-health-monitoring/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;&lt;a class='glossary-tooltip' title='The Container Storage Interface (CSI) defines a standard interface to expose storage systems to containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#csi' target='_blank' aria-label='CSI'&gt;CSI&lt;/a&gt; volume health monitoring allows
CSI Drivers to detect abnormal volume conditions from the underlying storage systems
and report them as events on &lt;a class='glossary-tooltip' title='Claims storage resources defined in a PersistentVolume so that it can be mounted as a volume in a container.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims' target='_blank' aria-label='PVCs'&gt;PVCs&lt;/a&gt;
or &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>API Priority and Fairness</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/flow-control/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/flow-control/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.29 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Controlling the behavior of the Kubernetes API server in an overload situation
is a key task for cluster administrators. The &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='kube-apiserver'&gt;kube-apiserver&lt;/a&gt; has some controls available
(i.e. the &lt;code&gt;--max-requests-inflight&lt;/code&gt; and &lt;code&gt;--max-mutating-requests-inflight&lt;/code&gt;
command-line flags) to limit the amount of outstanding work that will be
accepted, preventing a flood of inbound requests from overloading and
potentially crashing the API server, but these flags are not enough to ensure
that the most important requests get through in a period of high traffic.&lt;/p&gt;</description></item><item><title>API-initiated Eviction</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/api-eviction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/api-eviction/</guid><description>&lt;p&gt;API-initiated eviction is the process by which you use the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubernetes-api/v1.35/#create-eviction-pod-v1-core"&gt;Eviction API&lt;/a&gt;
to create an &lt;code&gt;Eviction&lt;/code&gt; object that triggers graceful pod termination.&lt;/p&gt;
&lt;p&gt;You can request eviction by calling the Eviction API directly, or programmatically
using a client of the &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;, like the &lt;code&gt;kubectl drain&lt;/code&gt; command. This
creates an &lt;code&gt;Eviction&lt;/code&gt; object, which causes the API server to terminate the Pod.&lt;/p&gt;
&lt;p&gt;API-initiated evictions respect your configured &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/configure-pdb/"&gt;&lt;code&gt;PodDisruptionBudgets&lt;/code&gt;&lt;/a&gt;
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination"&gt;&lt;code&gt;terminationGracePeriodSeconds&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Application Security Checklist</title><link>https://andygol-k8s.netlify.app/docs/concepts/security/application-security-checklist/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/security/application-security-checklist/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This checklist aims to provide basic guidelines on securing applications
running in Kubernetes from a developer's perspective.
This list is not meant to be exhaustive and is intended to evolve over time.&lt;/p&gt;
&lt;!-- The following is taken from the existing checklist created for Kubernetes admins. https://kubernetes.io/docs/concepts/security/security-checklist/ --&gt;
&lt;p&gt;How to read and use this document:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The order of topics does not reflect an order of priority.&lt;/li&gt;
&lt;li&gt;Some checklist items are detailed in the paragraph below the list of each section.&lt;/li&gt;
&lt;li&gt;This checklist assumes that a &lt;code&gt;developer&lt;/code&gt; is a Kubernetes cluster user who
interacts with namespaced scope objects.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="alert alert-caution" role="note"&gt;&lt;h4 class="alert-heading"&gt;Caution:&lt;/h4&gt;Checklists are &lt;strong&gt;not&lt;/strong&gt; sufficient for attaining a good security posture on their own.
A good security posture requires constant attention and improvement, but a checklist
can be the first step on the never-ending journey towards security preparedness.
Some recommendations in this checklist may be too restrictive or too lax for
your specific security needs. Since Kubernetes security is not &amp;quot;one size fits all&amp;quot;,
each category of checklist items should be evaluated on its merits.&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="base-security-hardening"&gt;Base security hardening&lt;/h2&gt;
&lt;p&gt;The following checklist provides base security hardening recommendations that
would apply to most applications deploying to Kubernetes.&lt;/p&gt;</description></item><item><title>Cloud Controller Manager Administration</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/running-cloud-controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/running-cloud-controller/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Since cloud providers develop and release at a different pace compared to the
Kubernetes project, abstracting the provider-specific code to the
&lt;code&gt;&lt;a class='glossary-tooltip' title='Control plane component that integrates Kubernetes with third-party cloud providers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/cloud-controller/' target='_blank' aria-label='cloud-controller-manager'&gt;cloud-controller-manager&lt;/a&gt;&lt;/code&gt;
binary allows cloud vendors to evolve independently from the core Kubernetes code.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;cloud-controller-manager&lt;/code&gt; can be linked to any cloud provider that satisfies
&lt;a href="https://github.com/kubernetes/cloud-provider/blob/master/cloud.go"&gt;cloudprovider.Interface&lt;/a&gt;.
For backwards compatibility, the
&lt;a href="https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager"&gt;cloud-controller-manager&lt;/a&gt;
provided in the core Kubernetes project uses the same cloud libraries as &lt;code&gt;kube-controller-manager&lt;/code&gt;.
Cloud providers already supported in Kubernetes core are expected to use the in-tree
cloud-controller-manager to transition out of Kubernetes core.&lt;/p&gt;</description></item><item><title>Configure a Security Context for a Pod or Container</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/security-context/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/security-context/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;A security context defines privilege and access control settings for
a Pod or Container. Security context settings include, but are not limited to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Discretionary Access Control: Permission to access an object, like a file, is based on
&lt;a href="https://wiki.archlinux.org/index.php/users_and_groups"&gt;user ID (UID) and group ID (GID)&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux"&gt;Security Enhanced Linux (SELinux)&lt;/a&gt;:
Objects are assigned security labels.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Running as privileged or unprivileged.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://linux-audit.com/linux-capabilities-hardening-linux-binaries-by-removing-setuid/"&gt;Linux Capabilities&lt;/a&gt;:
Give a process some privileges, but not all the privileges of the root user.&lt;/p&gt;</description></item><item><title>Kubelet authentication/authorization</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/kubelet-authn-authz/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/kubelet-authn-authz/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;A kubelet's HTTPS endpoint exposes APIs which give access to data of varying sensitivity,
and allow you to perform operations with varying levels of power on the node and within containers.&lt;/p&gt;
&lt;p&gt;This document describes how to authenticate and authorize access to the kubelet's HTTPS endpoint.&lt;/p&gt;
&lt;h2 id="kubelet-authentication"&gt;Kubelet authentication&lt;/h2&gt;
&lt;p&gt;By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured
authentication methods are treated as anonymous requests, and given a username of &lt;code&gt;system:anonymous&lt;/code&gt;
and a group of &lt;code&gt;system:unauthenticated&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Linux Node Swap Behaviors</title><link>https://andygol-k8s.netlify.app/docs/reference/node/swap-behavior/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/node/swap-behavior/</guid><description>&lt;p&gt;To allow Kubernetes workloads to use swap, on a Linux node,
you must disable the kubelet's default behavior of failing when swap is detected,
and specify memory-swap behavior as &lt;code&gt;LimitedSwap&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The available choices for swap behavior are:&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;NoSwap&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;(default) Workloads running as Pods on this node do not and cannot use swap. However, processes
outside of Kubernetes' scope, such as system daemons (including the kubelet itself!) &lt;strong&gt;can&lt;/strong&gt; utilize swap.
This behavior is beneficial for protecting the node from system-level memory spikes,
but it does not safeguard the workloads themselves from such spikes.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;LimitedSwap&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Kubernetes workloads can utilize swap memory. The amount of swap available to a Pod is determined automatically.&lt;/dd&gt;
&lt;/dl&gt;
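&lt;p&gt;Putting this together, a kubelet configuration that permits limited swap might look like the following sketch (field names follow the &lt;code&gt;KubeletConfiguration&lt;/code&gt; API):&lt;/p&gt;

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false          # do not fail kubelet startup when swap is detected
memorySwap:
  swapBehavior: LimitedSwap   # allow Pods a limited, automatically sized amount of swap
```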
&lt;p&gt;To learn more, read &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/swap-memory-management/"&gt;swap memory management&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Networking on Windows</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/windows-networking/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/windows-networking/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes supports running nodes on either Linux or Windows. You can mix both kinds of node
within a single cluster.
This page provides an overview to networking specific to the Windows operating system.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="networking"&gt;Container networking on Windows&lt;/h2&gt;
&lt;p&gt;Networking for Windows containers is exposed through
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/"&gt;CNI plugins&lt;/a&gt;.
Windows containers function similarly to virtual machines in regards to
networking. Each container has a virtual network adapter (vNIC) which is connected
to a Hyper-V virtual switch (vSwitch). The Host Networking Service (HNS) and the
Host Compute Service (HCS) work together to create containers and attach container
vNICs to networks. HCS is responsible for the management of containers whereas HNS
is responsible for the management of networking resources such as:&lt;/p&gt;</description></item><item><title>Specifying a Disruption Budget for your Application</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/configure-pdb/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/configure-pdb/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page shows how to limit the number of concurrent disruptions
that your application experiences, allowing for higher availability
while permitting the cluster administrator to manage the cluster's
nodes.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;


Your Kubernetes server must be at or later than version v1.21.
 &lt;p&gt;To check the version, enter &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are the owner of an application running on a Kubernetes cluster that requires
high availability.&lt;/li&gt;
&lt;li&gt;You should know how to deploy &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/run-stateless-application-deployment/"&gt;Replicated Stateless Applications&lt;/a&gt;
and/or &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/run-replicated-stateful-application/"&gt;Replicated Stateful Applications&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You should have read about &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/"&gt;Pod Disruptions&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You should confirm with your cluster owner or service provider that they respect
Pod Disruption Budgets.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="protecting-an-application-with-a-poddisruptionbudget"&gt;Protecting an Application with a PodDisruptionBudget&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Identify what application you want to protect with a PodDisruptionBudget (PDB).&lt;/li&gt;
&lt;li&gt;Think about how your application reacts to disruptions.&lt;/li&gt;
&lt;li&gt;Create a PDB definition as a YAML file.&lt;/li&gt;
&lt;li&gt;Create the PDB object from the YAML file.&lt;/li&gt;
&lt;/ol&gt;
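&lt;p&gt;Steps 3 and 4 might look like the following PDB, which keeps at least two matching Pods available (the &lt;code&gt;zookeeper&lt;/code&gt; labels are illustrative); you would then create it with &lt;code&gt;kubectl apply -f &amp;lt;file&amp;gt;.yaml&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2        # at least 2 Pods must stay available during voluntary disruptions
  selector:
    matchLabels:
      app: zookeeper     # must match the labels on the Pods you want to protect
```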
&lt;!-- discussion --&gt;
&lt;h2 id="identify-an-application-to-protect"&gt;Identify an Application to Protect&lt;/h2&gt;
&lt;p&gt;The most common use case is when you want to protect an application
specified by one of the built-in Kubernetes controllers:&lt;/p&gt;</description></item><item><title>Storage Versions</title><link>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/storage-version/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/storage-version/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The Kubernetes API server stores objects, relying on an etcd-compatible backing
store (often, the backing storage is etcd itself). Each object is serialized
using a particular version of that API type; for example, the v1 representation
of a ConfigMap. Kubernetes uses the term &lt;em&gt;storage version&lt;/em&gt; to describe how an
object is stored in your cluster.&lt;/p&gt;
&lt;p&gt;The Kubernetes API also relies on automatic conversion; for example, if you have
a HorizontalPodAutoscaler, then you can interact with that
HorizontalPodAutoscaler using any mix of the v1 and v2 versions of the
HorizontalPodAutoscaler API. Kubernetes is responsible for converting each API
call so that clients do not see what version is actually serialized.&lt;/p&gt;</description></item><item><title>Windows Storage</title><link>https://andygol-k8s.netlify.app/docs/concepts/storage/windows-storage/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/storage/windows-storage/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides a storage overview specific to the Windows operating system.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="storage"&gt;Persistent storage&lt;/h2&gt;
&lt;p&gt;Windows has a layered filesystem driver to mount container layers and create a copy
filesystem based on NTFS. All file paths in the container are resolved only within
the context of that container.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;With Docker, volume mounts can only target a directory in the container, and not
an individual file. This limitation does not apply to containerd.&lt;/li&gt;
&lt;li&gt;Volume mounts cannot project files or directories back to the host filesystem.&lt;/li&gt;
&lt;li&gt;Read-only filesystems are not supported because write access is always required
for the Windows registry and SAM database. However, read-only volumes are supported.&lt;/li&gt;
&lt;li&gt;Volume user-masks and permissions are not available. Because the SAM is not shared
between the host &amp;amp; container, there's no mapping between them. All permissions are
resolved within the context of the container.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a result, the following storage functionality is not supported on Windows nodes:&lt;/p&gt;</description></item><item><title>Accessing the Kubernetes API from a Pod</title><link>https://andygol-k8s.netlify.app/docs/tasks/run-application/access-api-from-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/run-application/access-api-from-pod/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This guide demonstrates how to access the Kubernetes API from within a pod.&lt;/p&gt;
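As a sketch of what this guide covers: a Pod can reach the API server through the in-cluster `kubernetes.default.svc` name, authenticating with the service account token that Kubernetes mounts into every container. The Pod name, image, and query path below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-client          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: curlimages/curl  # assumption: any image with curl works
    command:
    - sh
    - -c
    # The token and CA bundle are mounted by default under
    # /var/run/secrets/kubernetes.io/serviceaccount/
    - >
      curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
      https://kubernetes.default.svc/api
```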
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Communicate Between Containers in the Same Pod Using a Shared Volume</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use a Volume to communicate between two Containers running
in the same Pod. See also how to allow processes to communicate by
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/share-process-namespace/"&gt;sharing process namespace&lt;/a&gt;
between containers.&lt;/p&gt;
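A minimal sketch of the idea, with illustrative names and images: two containers in one Pod mount the same `emptyDir` volume, so files written by one are visible to the other:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers      # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox          # illustrative image
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data      # same volume, so /data/msg is visible here
```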
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure a kubelet image credential provider</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubelet-credential-provider/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubelet-credential-provider/</guid><description>&lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- overview --&gt;
&lt;p&gt;Starting from Kubernetes v1.20, the kubelet can dynamically retrieve credentials for a container image registry
using exec plugins. The kubelet and the exec plugin communicate through stdio (stdin, stdout, and stderr) using
Kubernetes versioned APIs. These plugins allow the kubelet to request credentials for a container registry dynamically
as opposed to storing static credentials on disk. For example, the plugin may talk to a local metadata server to retrieve
short-lived credentials for an image that is being pulled by the kubelet.&lt;/p&gt;</description></item><item><title>Configure Service Accounts for Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-service-account/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-service-account/</guid><description>&lt;p&gt;Kubernetes offers two distinct ways for clients that run within your
cluster, or that otherwise have a relationship to your cluster's
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;
to authenticate to the
&lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A &lt;em&gt;service account&lt;/em&gt; provides an identity for processes that run in a Pod,
and maps to a ServiceAccount object. When you authenticate to the API
server, you identify yourself as a particular &lt;em&gt;user&lt;/em&gt;. Kubernetes recognises
the concept of a user; however, Kubernetes itself does &lt;strong&gt;not&lt;/strong&gt; have a User
API.&lt;/p&gt;</description></item><item><title>Custom Hugo Shortcodes</title><link>https://andygol-k8s.netlify.app/docs/contribute/style/hugo-shortcodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/style/hugo-shortcodes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains the custom Hugo shortcodes that can be used in Kubernetes Markdown documentation.&lt;/p&gt;
&lt;p&gt;Read more about shortcodes in the &lt;a href="https://gohugo.io/content-management/shortcodes"&gt;Hugo documentation&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="feature-state"&gt;Feature state&lt;/h2&gt;
&lt;p&gt;In a Markdown page (&lt;code&gt;.md&lt;/code&gt; file) on this site, you can add a shortcode to
display version and state of the documented feature.&lt;/p&gt;
&lt;h3 id="feature-state-demo"&gt;Feature state demo&lt;/h3&gt;
&lt;p&gt;Below is a demo of the feature state snippet, which displays the feature as
stable in the latest Kubernetes version.&lt;/p&gt;</description></item><item><title>Generating Reference Pages for Kubernetes Components and Tools</title><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-components/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-components/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to build the Kubernetes component and tool reference pages.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Start with the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/quickstart/#before-you-begin"&gt;Prerequisites section&lt;/a&gt;
in the Reference Documentation Quickstart guide.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;p&gt;Follow the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/quickstart/"&gt;Reference Documentation Quickstart&lt;/a&gt;
to generate the Kubernetes component and tool reference pages.&lt;/p&gt;
&lt;h2 id="what-s-next"&gt;What's next&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/quickstart/"&gt;Generating Reference Documentation Quickstart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubectl/"&gt;Generating Reference Documentation for kubectl Commands&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/kubernetes-api/"&gt;Generating Reference Documentation for the Kubernetes API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/contribute-upstream/"&gt;Contributing to the Upstream Kubernetes Project for Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Service ClusterIP allocation</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/cluster-ip-allocation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/cluster-ip-allocation/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In Kubernetes, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;Services&lt;/a&gt; are an abstract way to expose
an application running on a set of Pods. Services
can have a cluster-scoped virtual IP address (using a Service of &lt;code&gt;type: ClusterIP&lt;/code&gt;).
Clients can connect using that virtual IP address, and Kubernetes then load-balances traffic to that
Service across the different backing Pods.&lt;/p&gt;
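For example, a Service normally receives a dynamically assigned virtual IP, but you can request a specific address from the service CIDR by setting `spec.clusterIP`; the names and the address below are illustrative, and the address must be free and inside your cluster's service IP range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # illustrative name
spec:
  type: ClusterIP
  clusterIP: 10.96.0.50   # illustrative; must be an unused address in the service CIDR
  selector:
    app: my-app           # illustrative label
  ports:
  - port: 80
    targetPort: 8080
```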
&lt;!-- body --&gt;
&lt;h2 id="how-service-clusterips-are-allocated"&gt;How Service ClusterIPs are allocated&lt;/h2&gt;
&lt;p&gt;When Kubernetes needs to assign a virtual IP address for a Service,
that assignment happens one of two ways:&lt;/p&gt;</description></item><item><title>Service Internal Traffic Policy</title><link>https://andygol-k8s.netlify.app/docs/concepts/services-networking/service-traffic-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/services-networking/service-traffic-policy/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;&lt;em&gt;Service Internal Traffic Policy&lt;/em&gt; restricts internal traffic so that it is routed
only to endpoints on the node where the traffic originated. Here,
&amp;quot;internal&amp;quot; traffic refers to traffic originating from Pods in the current
cluster. This can help to reduce costs and improve performance.&lt;/p&gt;
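A minimal sketch of such a Service, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                # illustrative name
spec:
  selector:
    app: my-app                   # illustrative label
  ports:
  - port: 80
    targetPort: 8080
  internalTrafficPolicy: Local    # route cluster-internal traffic only to same-node endpoints
```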
&lt;!-- body --&gt;
&lt;h2 id="using-service-internal-traffic-policy"&gt;Using Service Internal Traffic Policy&lt;/h2&gt;
&lt;p&gt;You can enable the internal-only traffic policy for a
&lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Service'&gt;Service&lt;/a&gt;, by setting its
&lt;code&gt;.spec.internalTrafficPolicy&lt;/code&gt; to &lt;code&gt;Local&lt;/code&gt;. This tells kube-proxy to only use node local
endpoints for cluster internal traffic.&lt;/p&gt;</description></item><item><title>TLS bootstrapping</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need
to communicate with Kubernetes control plane components, specifically kube-apiserver.
To keep that communication private and free from interference, and to ensure that
each component of the cluster is talking to another trusted component, we strongly
recommend using client TLS certificates on nodes.&lt;/p&gt;
&lt;p&gt;Bootstrapping these components, especially worker nodes that need certificates
so they can communicate safely with kube-apiserver, can be challenging, as it is often outside
of the scope of Kubernetes and requires significant additional work.
This, in turn, can make it challenging to initialize or scale a cluster.&lt;/p&gt;</description></item><item><title>Viewing Site Analytics</title><link>https://andygol-k8s.netlify.app/docs/contribute/analytics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/analytics/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page contains information about the kubernetes.io analytics dashboard.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;&lt;a href="https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/"&gt;View the dashboard&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This dashboard is built using &lt;a href="https://lookerstudio.google.com/overview"&gt;Google Looker Studio&lt;/a&gt; and shows information collected on kubernetes.io using Google Analytics 4 since August 2022.&lt;/p&gt;
&lt;h3 id="using-the-dashboard"&gt;Using the dashboard&lt;/h3&gt;
&lt;p&gt;By default, the dashboard shows all collected analytics for the past 30 days. Use the date selector to see data from a different date range. Other filtering options allow you to view data based on user location, the device used to access the site, the translation of the docs used, and more.&lt;/p&gt;</description></item><item><title>Configure DNS for a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/configure-dns-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/configure-dns-cluster/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes offers a DNS cluster addon, which most of the supported environments enable by default. In Kubernetes version 1.11 and later, CoreDNS is recommended and is installed by default with kubeadm.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;p&gt;For more information on how to configure CoreDNS for a Kubernetes cluster, see the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-custom-nameservers/"&gt;Customizing DNS Service&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Configure Quotas for API Objects</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/quota-api-object/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/quota-api-object/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure quotas for API objects, including
PersistentVolumeClaims and Services. A quota restricts the number of
objects, of a particular type, that can be created in a namespace.
You specify quotas in a
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/generated/kubernetes-api/v1.35/#resourcequota-v1-core"&gt;ResourceQuota&lt;/a&gt;
object.&lt;/p&gt;
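A ResourceQuota that caps object counts might look like this; the names and limits are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota              # illustrative name
  namespace: quota-demo           # illustrative namespace
spec:
  hard:
    persistentvolumeclaims: "2"   # at most 2 PVCs in this namespace
    services: "5"                 # at most 5 Services
    services.loadbalancers: "1"   # at most 1 LoadBalancer Service
```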
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Flow control</title><link>https://andygol-k8s.netlify.app/docs/reference/debug-cluster/flow-control/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/debug-cluster/flow-control/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;API Priority and Fairness controls the behavior of the Kubernetes API server in
an overload situation. You can find more information about it in the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/flow-control/"&gt;API Priority and Fairness&lt;/a&gt;
documentation.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="diagnostics"&gt;Diagnostics&lt;/h2&gt;
&lt;p&gt;Every HTTP response from an API server with the priority and fairness feature
enabled has two extra headers: &lt;code&gt;X-Kubernetes-PF-FlowSchema-UID&lt;/code&gt; and
&lt;code&gt;X-Kubernetes-PF-PriorityLevel-UID&lt;/code&gt;, noting the flow schema that matched the request
and the priority level to which it was assigned, respectively. The API objects'
names are not included in these headers (to avoid revealing details in case the
requesting user does not have permission to view them). When debugging, you
can use a command such as:&lt;/p&gt;</description></item><item><title>Pull an Image from a Private Registry</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/pull-image-private-registry/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/pull-image-private-registry/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to create a Pod that uses a
&lt;a class='glossary-tooltip' title='Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/' target='_blank' aria-label='Secret'&gt;Secret&lt;/a&gt; to pull an image
from a private container image registry or repository. There are many private
registries in use. This task uses &lt;a href="https://www.docker.com/products/docker-hub"&gt;Docker Hub&lt;/a&gt;
as an example registry.&lt;/p&gt;
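Once a registry credential Secret exists in the namespace, a Pod references it through `imagePullSecrets`; the Secret and image names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod      # illustrative name
spec:
  containers:
  - name: app
    image: example.com/myrepo/myimage:1.0   # illustrative private image
  imagePullSecrets:
  - name: regcred            # a kubernetes.io/dockerconfigjson Secret, created beforehand
```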
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&amp;#128711; This item links to a third party project or product that is not part of Kubernetes itself. &lt;a class="alert-more-info" href="#third-party-content-disclaimer"&gt;More information&lt;/a&gt;&lt;/div&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Access Services Running on Clusters</title><link>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/access-cluster-services/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/access-application-cluster/access-cluster-services/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to connect to services running on the Kubernetes cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure Liveness, Readiness and Startup Probes</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure liveness, readiness and startup probes for containers.&lt;/p&gt;
&lt;p&gt;For more information about probes, see &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/liveness-readiness-startup-probes/"&gt;Liveness, Readiness and Startup Probes&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt; uses
liveness probes to know when to restart a container. For example, liveness
probes could catch a deadlock, where an application is running, but unable to
make progress. Restarting a container in such a state can help to make the
application more available despite bugs.&lt;/p&gt;</description></item><item><title>Control CPU Management Policies on the Node</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;Kubernetes keeps many aspects of how pods execute on nodes abstracted
from the user. This is by design.  However, some workloads require
stronger guarantees in terms of latency and/or performance in order to operate
acceptably. The kubelet provides methods to enable more complex workload
placement policies while keeping the abstraction free from explicit placement
directives.&lt;/p&gt;
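One such method is the CPU Manager; as a sketch, its policy is selected in the kubelet configuration file. The values below are illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static    # "none" (default) or "static" for exclusive CPU assignment
reservedSystemCPUs: "0,1"   # illustrative; CPUs held back for system daemons
```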
&lt;p&gt;For detailed information on resource management, please refer to the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/"&gt;Resource Management for Pods and Containers&lt;/a&gt;
documentation.&lt;/p&gt;</description></item><item><title>Control Memory Management Policies on a Node</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/memory-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/memory-manager/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: MemoryManager"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.32 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;The Kubernetes &lt;em&gt;Memory Manager&lt;/em&gt; enables the feature of guaranteed memory (and hugepages)
allocation for pods in the &lt;code&gt;Guaranteed&lt;/code&gt; &lt;a class='glossary-tooltip' title='QoS Class (Quality of Service Class) provides a way for Kubernetes to classify pods within the cluster into several classes and make decisions about scheduling and eviction.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-qos/' target='_blank' aria-label='QoS class'&gt;QoS class&lt;/a&gt;.&lt;/p&gt;
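A sketch of enabling it in the kubelet configuration file, with illustrative values (the reserved amount must match your node's reserved-memory settings):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static   # "None" (default) or "Static" for guaranteed-pod NUMA pinning
reservedMemory:               # memory held back per NUMA node; values are illustrative
- numaNode: 0
  limits:
    memory: 1Gi
```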
&lt;p&gt;The Memory Manager employs a hint generation protocol to yield the most suitable NUMA affinity for a pod.
The Memory Manager feeds the central manager (&lt;em&gt;Topology Manager&lt;/em&gt;) with these affinity hints.
Based on both the hints and Topology Manager policy, the pod is rejected or admitted to the node.&lt;/p&gt;</description></item><item><title>Assign Pods to Nodes</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pods-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pods-nodes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to assign a Kubernetes Pod to a particular node in a
Kubernetes cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Changing The Kubernetes Package Repository</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/change-package-repository/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/change-package-repository/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to enable a package repository for the desired
Kubernetes minor release upon upgrading a cluster. This is only needed
for users of the community-owned package repositories hosted at &lt;code&gt;pkgs.k8s.io&lt;/code&gt;.
Unlike the legacy package repositories, the community-owned package
repositories provide a dedicated package repository for each Kubernetes
minor version.&lt;/p&gt;
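On a Debian-based system, for example, the repository definition pins a specific minor release in the apt sources list; the keyring path and the v1.30 version in the URL are illustrative, and the version segment is what you change when moving to a new minor release:

```shell
# Overwrite the Kubernetes apt source with the repository for the target minor release.
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" |
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
```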

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;This guide only covers a part of the Kubernetes upgrade process. Please see the
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;upgrade guide&lt;/a&gt; for
more information about upgrading Kubernetes clusters.&lt;/div&gt;


&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;This step is only needed upon upgrading a cluster to another &lt;strong&gt;minor&lt;/strong&gt; release.
If you're upgrading to another patch release within the same minor release (e.g.
v1.35.5 to v1.35.7), you don't
need to follow this guide. However, if you're still using the legacy package
repositories, you'll need to migrate to the new community-owned package
repositories before upgrading (see the next section for more details on how to
do this).&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;This document assumes that you're already using the community-owned
package repositories (&lt;code&gt;pkgs.k8s.io&lt;/code&gt;). If that's not the case, it's strongly
recommended to migrate to the community-owned package repositories as described
in the &lt;a href="https://andygol-k8s.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction/"&gt;official announcement&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Control Topology Management Policies on a node</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/topology-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/topology-manager/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.27 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;An increasing number of systems leverage a combination of CPUs and hardware accelerators to
support latency-critical execution and high-throughput parallel computation. These include
workloads in fields such as telecommunications, scientific computing, machine learning, financial
services and data analytics. Together, these hybrid systems form a high-performance environment.&lt;/p&gt;
&lt;p&gt;In order to extract the best performance, optimizations related to CPU isolation, memory and
device locality are required. However, in Kubernetes, these optimizations are handled by a
disjoint set of components.&lt;/p&gt;</description></item><item><title>Installing Addons</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/addons/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/addons/</guid><description>&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;Add-ons extend the functionality of Kubernetes.&lt;/p&gt;
&lt;p&gt;This page lists some of the available add-ons and links to their respective
installation instructions. The list does not try to be exhaustive.&lt;/p&gt;</description></item><item><title>Assign Pods to Nodes using Node Affinity</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to assign a Kubernetes Pod to a particular node using Node Affinity in a
Kubernetes cluster.&lt;/p&gt;
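A sketch of a Pod that uses required node affinity; the Pod name and the node label key and value are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity           # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # illustrative node label
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```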
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Customizing DNS Service</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-custom-nameservers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-custom-nameservers/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to configure your DNS
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod(s)'&gt;Pod(s)&lt;/a&gt; and customize the
DNS resolution process in your cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Node Declared Features</title><link>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/node-declared-features/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/node-declared-features/</guid><description>&lt;div class="feature-state-notice feature-alpha" title="Feature Gate: NodeDeclaredFeatures"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;Kubernetes nodes use &lt;em&gt;declared features&lt;/em&gt; to report the availability of specific
features that are new or feature-gated. Control plane components
utilize this information to make better decisions. The kube-scheduler, via the
&lt;code&gt;NodeDeclaredFeatures&lt;/code&gt; plugin, ensures pods are only placed on nodes that
explicitly support the features the pod requires. Additionally, the
&lt;code&gt;NodeDeclaredFeatureValidator&lt;/code&gt; admission controller validates pod updates
against a node's declared features.&lt;/p&gt;</description></item><item><title>User Namespaces</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/user-namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/user-namespaces/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.30 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page explains how user namespaces are used in Kubernetes pods. A user
namespace isolates the user running inside the container from the one
in the host.&lt;/p&gt;
&lt;p&gt;A process running as root in a container can run as a different (non-root) user
in the host; in other words, the process has full privileges for operations
inside the user namespace, but is unprivileged for operations outside the
namespace.&lt;/p&gt;</description></item><item><title>Configure Pod Initialization</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-initialization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-initialization/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use an Init Container to initialize a Pod before an
application Container runs.&lt;/p&gt;
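&lt;p&gt;As a minimal sketch, an Init Container that must run to completion before the app Container starts is declared like this (the names and images are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: setup
    image: busybox:1.28
    command: [sh, -c, sleep 2]
  containers:
  - name: app
    image: nginx
&lt;/code&gt;&lt;/pre&gt;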
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Debugging DNS Resolution</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-debugging-resolution/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/dns-debugging-resolution/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides hints on diagnosing DNS problems.&lt;/p&gt;
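&lt;p&gt;A common first check, assuming a test pod with DNS utilities (here called &lt;code&gt;dnsutils&lt;/code&gt;) is running in the cluster, is to resolve the API server's Service name from inside a pod:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl exec -i -t dnsutils -- nslookup kubernetes.default
&lt;/code&gt;&lt;/pre&gt;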
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Downward API</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/downward-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/downward-api/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;It is sometimes useful for a container to have information about itself, without
being overly coupled to Kubernetes. The &lt;em&gt;downward API&lt;/em&gt; allows containers to consume
information about themselves or the cluster without using the Kubernetes client
or API server.&lt;/p&gt;
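&lt;p&gt;As a sketch, exposing the Pod's own name to the container through an environment variable looks like this (the variable name and image are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: [sh, -c, env]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
&lt;/code&gt;&lt;/pre&gt;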
&lt;p&gt;An example is an existing application that assumes a particular well-known
environment variable holds a unique identifier. One possibility is to wrap the
application, but that is tedious and error-prone, and it violates the goal of low
coupling. A better option would be to use the Pod's name as an identifier, and
inject the Pod's name into the well-known environment variable.&lt;/p&gt;</description></item><item><title>Advanced Pod Configuration</title><link>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/advanced-pod-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/advanced-pod-config/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page covers advanced Pod configuration topics including &lt;a href="#priorityclasses"&gt;PriorityClasses&lt;/a&gt;, &lt;a href="#runtimeclasses"&gt;RuntimeClasses&lt;/a&gt;,
&lt;a href="#security-context"&gt;security context&lt;/a&gt; within Pods, and introduces aspects of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/#scheduling"&gt;scheduling&lt;/a&gt;.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="priorityclasses"&gt;PriorityClasses&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;PriorityClasses&lt;/em&gt; allow you to set the importance of Pods relative to other Pods.
If you assign a priority class to a Pod, Kubernetes sets the &lt;code&gt;.spec.priority&lt;/code&gt; field for that Pod
based on the PriorityClass you specified (you cannot set &lt;code&gt;.spec.priority&lt;/code&gt; directly).
If a Pod cannot be scheduled due to a lack of resources, the &lt;a class='glossary-tooltip' title='Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-scheduler/' target='_blank' aria-label='kube-scheduler'&gt;kube-scheduler&lt;/a&gt;
tries to &lt;a class='glossary-tooltip' title='Preemption logic in Kubernetes helps a pending Pod to find a suitable Node by evicting low priority Pods existing on that Node.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption' target='_blank' aria-label='preempt'&gt;preempt&lt;/a&gt; lower priority
Pods, in order to make scheduling of the higher priority Pod possible.&lt;/p&gt;</description></item><item><title>Attach Handlers to Container Lifecycle Events</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to attach handlers to Container lifecycle events. Kubernetes supports
the postStart and preStop events. Kubernetes sends the postStart event immediately
after a Container is started, and it sends the preStop event immediately before the
Container is terminated. A Container may specify one handler per event.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Declare Network Policy</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/declare-network-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/declare-network-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document helps you get started using the Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/network-policies/"&gt;NetworkPolicy API&lt;/a&gt; to declare network policies that govern how pods communicate with each other.&lt;/p&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Configure a Pod to Use a ConfigMap</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-configmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-pod-configmap/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Many applications rely on configuration that is used during application initialization or at runtime.
In most cases, you need a way to adjust the values assigned to configuration parameters.
ConfigMaps are a Kubernetes mechanism that let you inject configuration data into application
&lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='pods'&gt;pods&lt;/a&gt;.&lt;/p&gt;
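&lt;p&gt;As a minimal sketch, a ConfigMap and a Pod that consumes one of its keys as an environment variable might look like this (the names and values are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log_level: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: [sh, -c, env]
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log_level
&lt;/code&gt;&lt;/pre&gt;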
&lt;p&gt;The ConfigMap concept allows you to decouple configuration artifacts from image content to
keep containerized applications portable. For example, you can download and run the same
&lt;a class='glossary-tooltip' title='Stored instance of a container that holds a set of software needed to run an application.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-image' target='_blank' aria-label='container image'&gt;container image&lt;/a&gt; to spin up containers for
the purposes of local development, system test, or running a live end-user workload.&lt;/p&gt;</description></item><item><title>Developing Cloud Controller Manager</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/developing-cloud-controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/developing-cloud-controller-manager/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.11 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;The cloud-controller-manager is a Kubernetes &lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; component
that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.&lt;/p&gt;</description></item><item><title>Coordinated Leader Election</title><link>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/coordinated-leader-election/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/coordinated-leader-election/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="Feature Gate: CoordinatedLeaderElection"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [beta]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;Kubernetes 1.35 includes a beta feature that allows &lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; components to
deterministically select a leader via &lt;em&gt;coordinated leader election&lt;/em&gt;.
This is useful to satisfy Kubernetes version skew constraints during cluster upgrades.
Currently, the only built-in selection strategy is &lt;code&gt;OldestEmulationVersion&lt;/code&gt;,
preferring the leader with the lowest emulation version, followed by binary
version, followed by creation timestamp.&lt;/p&gt;</description></item><item><title>Enable Or Disable A Kubernetes API</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/enable-disable-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/enable-disable-api/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to enable or disable an API version from your cluster's
&lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt;.&lt;/p&gt;
&lt;!-- steps --&gt;
&lt;p&gt;Specific API versions can be turned on or off by passing &lt;code&gt;--runtime-config=api/&amp;lt;version&amp;gt;&lt;/code&gt; as a
command line argument to the API server. The values for this argument are a comma-separated
list of API versions. Later values override earlier values.&lt;/p&gt;</description></item><item><title>Share Process Namespace between Containers in a Pod</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/share-process-namespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/share-process-namespace/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure process namespace sharing for a pod. When
process namespace sharing is enabled, processes in a container are visible
to all other containers in the same pod.&lt;/p&gt;
&lt;p&gt;You can use this feature to configure cooperating containers, such as a log
handler sidecar container, or to troubleshoot container images that don't
include debugging utilities like a shell.&lt;/p&gt;
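&lt;p&gt;Process namespace sharing is a single field on the Pod spec; as a sketch (the images are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox:1.28
    command: [sleep, 3600]
&lt;/code&gt;&lt;/pre&gt;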
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Encrypting Confidential Data at Rest</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/encrypt-data/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/encrypt-data/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;All of the APIs in Kubernetes that let you write persistent API resource data support
at-rest encryption. For example, you can enable at-rest encryption for
&lt;a class='glossary-tooltip' title='Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt;.
This at-rest encryption is additional to any system-level encryption for the
etcd cluster or for the filesystem(s) on hosts where you are running the
kube-apiserver.&lt;/p&gt;
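&lt;p&gt;At-rest encryption is driven by an &lt;code&gt;EncryptionConfiguration&lt;/code&gt; file passed to the kube-apiserver via &lt;code&gt;--encryption-provider-config&lt;/code&gt;; as a sketch (the key name is illustrative and the secret value is a placeholder, not a real key):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: BASE64_ENCODED_32_BYTE_KEY
  - identity: {}
&lt;/code&gt;&lt;/pre&gt;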
&lt;p&gt;This page shows how to enable and configure encryption of API data at rest.&lt;/p&gt;</description></item><item><title>Use a User Namespace With a Pod</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/user-namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/user-namespaces/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="Feature Gate: UserNamespacesSupport"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [beta]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page shows how to configure a user namespace for pods. This allows you to
isolate the user running inside the container from the one in the host.&lt;/p&gt;
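&lt;p&gt;Opting a pod into a user namespace is done with the &lt;code&gt;hostUsers&lt;/code&gt; field in the Pod spec; as a sketch (the image is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: shell
    image: busybox:1.28
    command: [sleep, 3600]
&lt;/code&gt;&lt;/pre&gt;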
&lt;p&gt;A process running as root in a container can run as a different (non-root) user
in the host; in other words, the process has full privileges for operations
inside the user namespace, but is unprivileged for operations outside the
namespace.&lt;/p&gt;</description></item><item><title>Use an Image Volume With a Pod</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/image-volumes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/image-volumes/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta" title="Feature Gate: ImageVolume"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.35 [beta]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This page shows how to configure a pod using image volumes. This allows you to
mount content from OCI registries inside containers.&lt;/p&gt;
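&lt;p&gt;As a sketch, mounting an OCI artifact as a read-only volume uses the &lt;code&gt;image&lt;/code&gt; volume source (the artifact reference is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo
spec:
  containers:
  - name: shell
    image: busybox:1.28
    command: [sleep, 3600]
    volumeMounts:
    - name: artifact
      mountPath: /artifact
  volumes:
  - name: artifact
    image:
      reference: quay.io/example/artifact:latest
      pullPolicy: IfNotPresent
&lt;/code&gt;&lt;/pre&gt;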
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Decrypt Confidential Data that is Already Encrypted at Rest</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/decrypt-data/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/decrypt-data/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;All of the APIs in Kubernetes that let you write persistent API resource data support
at-rest encryption. For example, you can enable at-rest encryption for
&lt;a class='glossary-tooltip' title='Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/configuration/secret/' target='_blank' aria-label='Secrets'&gt;Secrets&lt;/a&gt;.
This at-rest encryption is additional to any system-level encryption for the
etcd cluster or for the filesystem(s) on hosts where you are running the
kube-apiserver.&lt;/p&gt;
&lt;p&gt;This page shows how to switch away from encryption of API data at rest, so that API data
are stored unencrypted. You might want to do this to improve performance; usually,
though, if it was a good idea to encrypt some data, it's also a good idea to leave them
encrypted.&lt;/p&gt;</description></item><item><title>Create static Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/static-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/static-pod/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;&lt;em&gt;Static Pods&lt;/em&gt; are managed directly by the kubelet daemon on a specific node,
without the &lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API server'&gt;API server&lt;/a&gt;
observing them.
Unlike Pods that are managed by the control plane (for example, a
&lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployment'&gt;Deployment&lt;/a&gt;),
static Pods are supervised by the kubelet itself, which watches each static Pod (and restarts it if it fails).&lt;/p&gt;
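&lt;p&gt;As a sketch, you create a static Pod by placing an ordinary Pod manifest in the kubelet's configured manifest directory on the node (commonly &lt;code&gt;/etc/kubernetes/manifests&lt;/code&gt;, set via the kubelet's &lt;code&gt;staticPodPath&lt;/code&gt; configuration option); the kubelet then runs it without any API server involvement:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
&lt;/code&gt;&lt;/pre&gt;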
&lt;p&gt;Static Pods are always bound to one &lt;a class='glossary-tooltip' title='An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet' target='_blank' aria-label='Kubelet'&gt;Kubelet&lt;/a&gt; on a specific node.&lt;/p&gt;</description></item><item><title>Guaranteed Scheduling For Critical Add-On Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node. However, add-ons must run on a regular cluster node.
Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI.
A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade)
and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space
vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason).&lt;/p&gt;</description></item><item><title>Mixed Version Proxy</title><link>https://andygol-k8s.netlify.app/docs/concepts/architecture/mixed-version-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/concepts/architecture/mixed-version-proxy/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha" title="Feature Gate: UnknownVersionInteroperabilityProxy"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.28 [alpha]&lt;/code&gt; (disabled by default)&lt;/div&gt;

&lt;p&gt;Kubernetes 1.35 includes an alpha feature that lets an
&lt;a class='glossary-tooltip' title='Control plane component that serves the Kubernetes API.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/#kube-apiserver' target='_blank' aria-label='API Server'&gt;API Server&lt;/a&gt;
proxy resource requests to other &lt;em&gt;peer&lt;/em&gt; API servers. It also lets clients get
a holistic view of resources served across the entire cluster through discovery.
This is useful when there are multiple
API servers running different versions of Kubernetes in one cluster
(for example, during a long-lived rollout to a new release of Kubernetes).&lt;/p&gt;</description></item><item><title>IP Masquerade Agent User Guide</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/ip-masq-agent/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/ip-masq-agent/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure and enable the &lt;code&gt;ip-masq-agent&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Translate a Docker Compose File to Kubernetes Resources</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/translate-compose-kubernetes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/translate-compose-kubernetes/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;What's Kompose? It's a conversion tool for all things compose (namely Docker Compose) to container orchestrators (Kubernetes or OpenShift).&lt;/p&gt;
&lt;p&gt;More information can be found on the Kompose website at &lt;a href="https://kompose.io/"&gt;https://kompose.io/&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Enforce Pod Security Standards by Configuring the Built-in Admission Controller</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/enforce-standards-admission-controller/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/enforce-standards-admission-controller/</guid><description>&lt;p&gt;Kubernetes provides a built-in &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/#podsecurity"&gt;admission controller&lt;/a&gt;
to enforce the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt;.
You can configure this admission controller to set cluster-wide defaults and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-admission/#exemptions"&gt;exemptions&lt;/a&gt;.&lt;/p&gt;
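&lt;p&gt;Cluster-wide defaults and exemptions are set in an &lt;code&gt;AdmissionConfiguration&lt;/code&gt; file passed to the kube-apiserver via &lt;code&gt;--admission-control-config-file&lt;/code&gt;; as a sketch (the chosen levels and exempt namespace are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: baseline
      enforce-version: latest
    exemptions:
      namespaces:
      - kube-system
&lt;/code&gt;&lt;/pre&gt;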
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Following an alpha release in Kubernetes v1.22,
Pod Security Admission became available by default in Kubernetes v1.23, as
a beta. From version 1.25 onwards, Pod Security Admission is generally
available.&lt;/p&gt;
 &lt;p&gt;To check the version, enter &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If you are not running Kubernetes 1.35, you can switch
to viewing this page in the documentation for the Kubernetes version that you
are running.&lt;/p&gt;</description></item><item><title>Limit Storage Consumption</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/limit-storage-consumption/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/limit-storage-consumption/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This example demonstrates how to limit the amount of storage consumed in a namespace.&lt;/p&gt;
&lt;p&gt;The following resources are used in the demonstration: &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/policy/resource-quotas/"&gt;ResourceQuota&lt;/a&gt;,
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/"&gt;LimitRange&lt;/a&gt;,
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumeClaim&lt;/a&gt;.&lt;/p&gt;
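&lt;p&gt;As a sketch, a ResourceQuota that caps both the number of PersistentVolumeClaims and the total storage requested in a namespace might look like this (the limits are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    persistentvolumeclaims: 5
    requests.storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;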
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Enforce Pod Security Standards with Namespace Labels</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/</guid><description>&lt;p&gt;Namespaces can be labeled to enforce the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt;. The three policies
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/#privileged"&gt;privileged&lt;/a&gt;, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/#baseline"&gt;baseline&lt;/a&gt;
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/#restricted"&gt;restricted&lt;/a&gt; broadly cover the security spectrum
and are implemented by the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-admission/"&gt;Pod Security&lt;/a&gt; &lt;a class='glossary-tooltip' title='A piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/' target='_blank' aria-label='admission controller'&gt;admission controller&lt;/a&gt;.&lt;/p&gt;
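&lt;p&gt;For example, a namespace could opt in to these policies with labels such as the following (the namespace name and chosen levels are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Namespace
metadata:
  name: demo              # hypothetical namespace
  labels:
    # enforce: reject Pods that violate the baseline policy
    pod-security.kubernetes.io/enforce: baseline
    # warn: return warnings for Pods that violate the restricted policy
    pod-security.kubernetes.io/warn: restricted
&lt;/code&gt;&lt;/pre&gt;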
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Pod Security Admission was available by default in Kubernetes v1.23, as
a beta. From version 1.25 onwards, Pod Security Admission is generally
available.&lt;/p&gt;</description></item><item><title>Migrate Replicated Control Plane To Use Cloud Controller Manager</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/controller-manager-leader-migration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/controller-manager-leader-migration/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;The cloud-controller-manager is a Kubernetes &lt;a class='glossary-tooltip' title='The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/reference/glossary/?all=true#term-control-plane' target='_blank' aria-label='control plane'&gt;control plane&lt;/a&gt; component
that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.&lt;/p&gt;</description></item><item><title>Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller</title><link>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/migrate-from-psp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/migrate-from-psp/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes the process of migrating from PodSecurityPolicies to the built-in PodSecurity
admission controller. This can be done effectively using a combination of dry-run and the &lt;code&gt;audit&lt;/code&gt; and
&lt;code&gt;warn&lt;/code&gt; modes, although this becomes harder if mutating PSPs are used.&lt;/p&gt;
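&lt;p&gt;As a sketch of the dry-run approach (the namespace name and level are illustrative), you can check what an enforcement label would do before persisting it:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;# Server-side dry run: surfaces warnings without persisting the label
kubectl label --dry-run=server --overwrite namespace demo \
    pod-security.kubernetes.io/enforce=baseline
&lt;/code&gt;&lt;/pre&gt;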
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Your Kubernetes server must be at or later than version v1.22.&lt;/p&gt;
 &lt;p&gt;To check the version, enter &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If you are currently running a version of Kubernetes other than
1.35, you may want to switch to viewing this
page in the documentation for the version of Kubernetes that you
are actually running.&lt;/p&gt;</description></item><item><title>Namespaces Walkthrough</title><link>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/namespaces-walkthrough/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tutorials/cluster-management/namespaces-walkthrough/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespaces'&gt;namespaces&lt;/a&gt;
help different projects, teams, or customers to share a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;They do this by providing the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A scope for &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/names/"&gt;Names&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A mechanism to attach authorization and policy to a subsection of the cluster.&lt;/li&gt;
&lt;/ol&gt;
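&lt;p&gt;As a quick sketch (the namespace name is hypothetical), creating and listing namespaces looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kubectl create namespace demo-team   # hypothetical name
kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;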
&lt;p&gt;Use of multiple namespaces is optional.&lt;/p&gt;
&lt;p&gt;This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.&lt;/p&gt;</description></item><item><title>Operating etcd clusters for Kubernetes</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/configure-upgrade-etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/configure-upgrade-etcd/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;etcd is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.&lt;/p&gt;
&lt;p&gt;If your Kubernetes cluster uses etcd as its backing store, make sure you have a
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster"&gt;backup&lt;/a&gt; plan
for the data.&lt;/p&gt;
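&lt;p&gt;As a minimal sketch (the endpoint and file path are illustrative, and TLS flags depend on your setup), a snapshot-based backup uses &lt;code&gt;etcdctl&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;# Take a snapshot of the etcd keyspace
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    snapshot save /var/backups/etcd-snapshot.db
&lt;/code&gt;&lt;/pre&gt;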
&lt;p&gt;You can find in-depth information about etcd in the official &lt;a href="https://etcd.io/docs/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Before you follow steps in this page to deploy, manage, back up or restore etcd,
you need to understand the typical expectations for operating an etcd cluster.
Refer to the &lt;a href="https://etcd.io/docs/"&gt;etcd documentation&lt;/a&gt; for more context.&lt;/p&gt;</description></item><item><title>Reserve Compute Resources for System Daemons</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/reserve-compute-resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/reserve-compute-resources/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes nodes can be scheduled up to their &lt;code&gt;Capacity&lt;/code&gt;. Pods can consume all the
available capacity on a node by default. This is an issue because nodes
typically run quite a few system daemons that power the OS and Kubernetes
itself. Unless resources are set aside for these system daemons, pods and system
daemons compete for resources and lead to resource starvation issues on the
node.&lt;/p&gt;
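&lt;p&gt;As an illustrative sketch (the amounts are hypothetical), such reservations can be expressed in the kubelet configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:           # set aside for OS daemons
  cpu: 500m
  memory: 1Gi
kubeReserved:             # set aside for Kubernetes daemons
  cpu: 500m
  memory: 1Gi
&lt;/code&gt;&lt;/pre&gt;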
&lt;p&gt;The &lt;code&gt;kubelet&lt;/code&gt; exposes a feature named 'Node Allocatable' that helps to reserve
compute resources for system daemons. Kubernetes recommends that cluster
administrators configure 'Node Allocatable' based on the workload density
on each node.&lt;/p&gt;</description></item><item><title>Running Kubernetes Node Components as a Non-root User</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubelet-in-userns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubelet-in-userns/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-alpha"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.22 [alpha]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This document describes how to run Kubernetes Node components such as kubelet, CRI, OCI, and CNI
without root privileges, by using a &lt;a class='glossary-tooltip' title='A Linux kernel feature to emulate superuser privilege for unprivileged users.' data-toggle='tooltip' data-placement='top' href='https://man7.org/linux/man-pages/man7/user_namespaces.7.html' target='_blank' aria-label='user namespace'&gt;user namespace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This technique is also known as &lt;em&gt;rootless mode&lt;/em&gt;.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;p&gt;This document describes how to run Kubernetes Node components (and hence pods) as a non-root user.&lt;/p&gt;
&lt;p&gt;If you are just looking for how to run a pod as a non-root user, see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/security-context/"&gt;SecurityContext&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Safely Drain a Node</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/safely-drain-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/safely-drain-node/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to safely drain a &lt;a class='glossary-tooltip' title='A node is a worker machine in Kubernetes.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/' target='_blank' aria-label='node'&gt;node&lt;/a&gt;,
optionally respecting the PodDisruptionBudget you have defined.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;This task assumes that you have met the following prerequisites:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You do not require your applications to be highly available during the
node drain, or&lt;/li&gt;
&lt;li&gt;You have read about the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/"&gt;PodDisruptionBudget&lt;/a&gt; concept,
and have &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/configure-pdb/"&gt;configured PodDisruptionBudgets&lt;/a&gt; for
applications that need them.&lt;/li&gt;
&lt;/ol&gt;
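&lt;p&gt;The drain itself is a single command (the node name is hypothetical); remember to uncordon the node once maintenance is complete:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
# ... perform maintenance ...
kubectl uncordon node-1
&lt;/code&gt;&lt;/pre&gt;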
&lt;!-- steps --&gt;
&lt;h2 id="configure-poddisruptionbudget"&gt;(Optional) Configure a disruption budget&lt;/h2&gt;
&lt;p&gt;To ensure that your workloads remain available during maintenance, you can
configure a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/"&gt;PodDisruptionBudget&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Securing a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/securing-a-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/securing-a-cluster/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document covers topics related to protecting a cluster from accidental or malicious access
and provides recommendations on overall security.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Set Kubelet Parameters Via A Configuration File</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubelet-config-file/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kubelet-config-file/</guid><description>&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Some steps in this page use the &lt;code&gt;jq&lt;/code&gt; tool. If you don't have &lt;code&gt;jq&lt;/code&gt;, you can
install it via your operating system's software sources, or fetch it from
&lt;a href="https://jqlang.github.io/jq/"&gt;https://jqlang.github.io/jq/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Some steps also involve installing &lt;code&gt;curl&lt;/code&gt;, which can be installed via your
operating system's software sources.&lt;/p&gt;
&lt;!-- overview --&gt;
&lt;p&gt;A subset of the kubelet's configuration parameters may be
set via an on-disk config file, as a substitute for command-line flags.&lt;/p&gt;</description></item><item><title>Share a Cluster with Namespaces</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/namespaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/namespaces/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to view, work in, and delete &lt;a class='glossary-tooltip' title='An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/overview/working-with-objects/namespaces' target='_blank' aria-label='namespaces'&gt;namespaces&lt;/a&gt;.
The page also shows how to use Kubernetes namespaces to subdivide your cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Have an &lt;a href="https://andygol-k8s.netlify.app/docs/setup/"&gt;existing Kubernetes cluster&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You have a basic understanding of Kubernetes &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pods'&gt;Pods&lt;/a&gt;,
&lt;a class='glossary-tooltip' title='A way to expose an application running on a set of Pods as a network service.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/' target='_blank' aria-label='Services'&gt;Services&lt;/a&gt;, and
&lt;a class='glossary-tooltip' title='Manages a replicated application on your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/' target='_blank' aria-label='Deployments'&gt;Deployments&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="viewing-namespaces"&gt;Viewing namespaces&lt;/h2&gt;
&lt;p&gt;List the current namespaces in a cluster using:&lt;/p&gt;</description></item><item><title>Upgrade A Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cluster-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cluster-upgrade/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page provides an overview of the steps you should follow to upgrade a
Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;The Kubernetes project recommends upgrading to the latest patch releases promptly, and
to ensure that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.&lt;/p&gt;
&lt;p&gt;The way that you upgrade a cluster depends on how you initially deployed it
and on any subsequent changes.&lt;/p&gt;</description></item><item><title>Use Cascading Deletion in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/use-cascading-deletion/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/use-cascading-deletion/</guid><description>&lt;!--overview--&gt;
&lt;p&gt;This page shows you how to specify the type of
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/garbage-collection/#cascading-deletion"&gt;cascading deletion&lt;/a&gt;
to use in your cluster during &lt;a class='glossary-tooltip' title='A collective term for the various mechanisms Kubernetes uses to clean up cluster resources.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/architecture/garbage-collection/' target='_blank' aria-label='garbage collection'&gt;garbage collection&lt;/a&gt;.&lt;/p&gt;
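&lt;p&gt;As a sketch (the Deployment name is hypothetical), the deletion type can be chosen per request with &lt;code&gt;kubectl&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;# Foreground: delete dependents first, then the owner
kubectl delete deployment demo-app --cascade=foreground
# Orphan: delete the owner but leave dependents running
kubectl delete deployment demo-app --cascade=orphan
&lt;/code&gt;&lt;/pre&gt;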
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Using a KMS provider for data encryption</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kms-provider/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/kms-provider/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption.
In Kubernetes 1.35 there are two versions of KMS at-rest encryption.
You should use KMS v2 if feasible because KMS v1 is deprecated (since Kubernetes v1.28) and disabled by default (since Kubernetes v1.29).
KMS v2 offers significantly better performance characteristics than KMS v1.&lt;/p&gt;
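&lt;p&gt;As an illustrative sketch (the plugin name, socket path, and timeout are hypothetical), a KMS v2 provider is wired up through an &lt;code&gt;EncryptionConfiguration&lt;/code&gt; passed to the API server:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: demo-kms-plugin          # hypothetical plugin name
          endpoint: unix:///tmp/kms.sock # hypothetical socket path
          timeout: 3s
&lt;/code&gt;&lt;/pre&gt;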
&lt;div class="alert alert-caution" role="note"&gt;&lt;h4 class="alert-heading"&gt;Caution:&lt;/h4&gt;This documentation is for the generally available implementation of KMS v2 (and for the
deprecated version 1 implementation).
If you are using any control plane components older than Kubernetes v1.29, please check
the equivalent page in the documentation for the version of Kubernetes that your cluster
is running. Earlier releases of Kubernetes had different behavior that may be relevant
for information security.&lt;/div&gt;

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Using CoreDNS for Service Discovery</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/coredns/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page describes the CoreDNS upgrade process and how to install CoreDNS instead of kube-dns.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Using NodeLocal DNSCache in Kubernetes Clusters</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/nodelocaldns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/nodelocaldns/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.18 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This page provides an overview of the NodeLocal DNSCache feature in Kubernetes.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Using sysctls in a Kubernetes Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/sysctl-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/sysctl-cluster/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.21 [stable]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;p&gt;This document describes how to configure and use kernel parameters within a
Kubernetes cluster using the &lt;a class='glossary-tooltip' title='An interface for getting and setting Unix kernel parameters' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/sysctl-cluster/' target='_blank' aria-label='sysctl'&gt;sysctl&lt;/a&gt;
interface.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;Starting from Kubernetes version 1.23, the kubelet supports the use of either &lt;code&gt;/&lt;/code&gt; or &lt;code&gt;.&lt;/code&gt;
as separators for sysctl names.
Starting from Kubernetes version 1.25, setting Sysctls for a Pod supports setting sysctls with slashes.
For example, you can represent the same sysctl name as &lt;code&gt;kernel.shm_rmid_forced&lt;/code&gt; using a
period as the separator, or as &lt;code&gt;kernel/shm_rmid_forced&lt;/code&gt; using a slash as a separator.
For more sysctl parameter conversion method details, please refer to
the page &lt;a href="https://man7.org/linux/man-pages/man5/sysctl.d.5.html"&gt;sysctl.d(5)&lt;/a&gt; from
the Linux man-pages project.&lt;/div&gt;
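&lt;p&gt;As a sketch (the Pod and container names are hypothetical), a safe sysctl can be set through the Pod's security context using either separator style:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo                    # hypothetical name
spec:
  securityContext:
    sysctls:
      - name: kernel.shm_rmid_forced   # or kernel/shm_rmid_forced
        value: "1"
  containers:
    - name: app                        # hypothetical container
      image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;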

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;&lt;code&gt;sysctl&lt;/code&gt; is a Linux-specific command-line tool used to configure various kernel parameters
and it is not available on non-Linux operating systems.&lt;/div&gt;

&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Verify Signed Kubernetes Artifacts</title><link>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/verify-signed-artifacts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/verify-signed-artifacts/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You will need to have the following tools installed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cosign&lt;/code&gt; (&lt;a href="https://docs.sigstore.dev/cosign/system_config/installation/"&gt;install guide&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;curl&lt;/code&gt; (often provided by your operating system)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jq&lt;/code&gt; (&lt;a href="https://jqlang.github.io/jq/download/"&gt;download jq&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="verifying-binary-signatures"&gt;Verifying binary signatures&lt;/h2&gt;
&lt;p&gt;The Kubernetes release process signs all binary artifacts (tarballs, SPDX files,
standalone binaries) by using cosign's keyless signing. To verify a particular
binary, retrieve it together with its signature and certificate:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#b8860b"&gt;URL&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;https://dl.k8s.io/release/v1.35.0/bin/linux/amd64
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#b8860b"&gt;BINARY&lt;/span&gt;&lt;span style="color:#666"&gt;=&lt;/span&gt;kubectl
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#b8860b"&gt;FILES&lt;/span&gt;&lt;span style="color:#666"&gt;=(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;&lt;span style="color:#b8860b"&gt;$BINARY&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;&lt;span style="color:#b8860b"&gt;$BINARY&lt;/span&gt;&lt;span style="color:#b44"&gt;.sig&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; &lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;&lt;span style="color:#b8860b"&gt;$BINARY&lt;/span&gt;&lt;span style="color:#b44"&gt;.cert&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#666"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f;font-weight:bold"&gt;for&lt;/span&gt; FILE in &lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;&lt;span style="color:#b68;font-weight:bold"&gt;${&lt;/span&gt;&lt;span style="color:#b8860b"&gt;FILES&lt;/span&gt;[@]&lt;span style="color:#b68;font-weight:bold"&gt;}&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;; &lt;span style="color:#a2f;font-weight:bold"&gt;do&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; curl -sSfL --retry &lt;span style="color:#666"&gt;3&lt;/span&gt; --retry-delay &lt;span style="color:#666"&gt;3&lt;/span&gt; &lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;&lt;span style="color:#b8860b"&gt;$URL&lt;/span&gt;&lt;span style="color:#b44"&gt;/&lt;/span&gt;&lt;span style="color:#b8860b"&gt;$FILE&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt; -o &lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;&lt;span style="color:#b8860b"&gt;$FILE&lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a2f;font-weight:bold"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then verify the blob by using &lt;code&gt;cosign verify-blob&lt;/code&gt;:&lt;/p&gt;</description></item><item><title>Spotlight on SIG Architecture: API Governance</title><link>https://andygol-k8s.netlify.app/blog/2026/02/12/sig-architecture-api-spotlight/</link><pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/02/12/sig-architecture-api-spotlight/</guid><description>&lt;p&gt;&lt;em&gt;This is the fifth interview of a SIG Architecture Spotlight series that covers the different
subprojects; this time, we cover &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#architecture-and-api-governance-1"&gt;SIG Architecture: API
Governance&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In this SIG Architecture spotlight we talked with &lt;a href="https://github.com/liggitt"&gt;Jordan Liggitt&lt;/a&gt;, lead
of the API Governance sub-project.&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FM: Hello Jordan, thank you for your availability. Tell us a bit about yourself, your role and how
you got involved in Kubernetes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;JL&lt;/strong&gt;: My name is Jordan Liggitt. I'm a Christian, husband, father of four, software engineer at
&lt;a href="https://about.google/"&gt;Google&lt;/a&gt; by day, and &lt;a href="https://www.youtube.com/watch?v=UDdr-VIWQwo"&gt;amateur musician&lt;/a&gt; by stealth. I was born in Texas (and still
like to claim it as my point of origin), but I've lived in North Carolina for most of my life.&lt;/p&gt;</description></item><item><title>Introducing Node Readiness Controller</title><link>https://andygol-k8s.netlify.app/blog/2026/02/03/introducing-node-readiness-controller/</link><pubDate>Tue, 03 Feb 2026 10:00:00 +0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/02/03/introducing-node-readiness-controller/</guid><description>&lt;img style="float: right; display: inline-block; margin-left: 2em; max-width: 15em;" src="./node-readiness-controller-logo.svg" alt="Logo for node readiness controller" /&gt;
&lt;p&gt;In the standard Kubernetes model, a node’s suitability for workloads hinges on a single binary &amp;quot;Ready&amp;quot; condition. However, in modern Kubernetes environments, nodes require complex infrastructure dependencies—such as network agents, storage drivers, GPU firmware, or custom health checks—to be fully operational before they can reliably host pods.&lt;/p&gt;
&lt;p&gt;Today, on behalf of the Kubernetes project, I am announcing the &lt;a href="https://node-readiness-controller.sigs.k8s.io/"&gt;Node Readiness Controller&lt;/a&gt;.
This project introduces a declarative system for managing node taints, extending the readiness guardrails during node bootstrapping beyond standard conditions.
By dynamically managing taints based on custom health signals, the controller ensures that workloads are only placed on nodes that meet all infrastructure-specific requirements.&lt;/p&gt;</description></item><item><title>New Conversion from cgroup v1 CPU Shares to v2 CPU Weight</title><link>https://andygol-k8s.netlify.app/blog/2026/01/30/new-cgroup-v1-to-v2-cpu-conversion-formula/</link><pubDate>Fri, 30 Jan 2026 08:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/30/new-cgroup-v1-to-v2-cpu-conversion-formula/</guid><description>&lt;p&gt;I'm excited to announce the implementation of an improved conversion formula
from cgroup v1 CPU shares to cgroup v2 CPU weight. This enhancement addresses
critical issues with CPU priority allocation for Kubernetes workloads when
running on systems with cgroup v2.&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Kubernetes was originally designed with cgroup v1 in mind, where CPU shares
were defined simply by assigning the container's CPU requests in millicpu
form.&lt;/p&gt;
&lt;p&gt;For example, a container requesting 1 CPU (1024m) would get (cpu.shares = 1024).&lt;/p&gt;</description></item><item><title>Ingress NGINX: Statement from the Kubernetes Steering and Security Response Committees</title><link>https://andygol-k8s.netlify.app/blog/2026/01/29/ingress-nginx-statement/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/29/ingress-nginx-statement/</guid><description>&lt;p&gt;&lt;strong&gt;In March 2026, Kubernetes will retire Ingress NGINX, a piece of critical infrastructure for about half of cloud native environments.&lt;/strong&gt; The retirement of Ingress NGINX was &lt;a href="https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/"&gt;announced&lt;/a&gt; for March 2026, after years of &lt;a href="https://groups.google.com/a/kubernetes.io/g/dev/c/rxtrKvT_Q8E/m/6_ej0c1ZBAAJ"&gt;public warnings&lt;/a&gt; that the project was in dire need of contributors and maintainers. There will be no more releases for bug fixes, security patches, or any updates of any kind after the project is retired. This cannot be ignored, brushed off, or left until the last minute to address. 
We cannot overstate the severity of this situation or the importance of beginning migration to alternatives like &lt;a href="https://gateway-api.sigs.k8s.io/guides/getting-started/"&gt;Gateway API&lt;/a&gt; or one of the many &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;third-party Ingress controllers&lt;/a&gt; immediately.&lt;/p&gt;</description></item><item><title>Experimenting with Gateway API using kind</title><link>https://andygol-k8s.netlify.app/blog/2026/01/28/experimenting-gateway-api-with-kind/</link><pubDate>Wed, 28 Jan 2026 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/28/experimenting-gateway-api-with-kind/</guid><description>&lt;p&gt;This document will guide you through setting up a local experimental environment with &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; on &lt;a href="https://kind.sigs.k8s.io/"&gt;kind&lt;/a&gt;. This setup is designed for learning and testing. It helps you understand Gateway API concepts without production complexity.&lt;/p&gt;
&lt;div class="alert alert-caution" role="note"&gt;&lt;h4 class="alert-heading"&gt;Caution:&lt;/h4&gt;This is an experimental setup for learning, and should not be used in production. The components used in this document are not suited for production usage.
Once you're ready to deploy Gateway API in a production environment,
select an &lt;a href="https://gateway-api.sigs.k8s.io/implementations/"&gt;implementation&lt;/a&gt; that suits your needs.&lt;/div&gt;

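&lt;p&gt;The first steps of such a setup can be sketched as follows. This is a hedged outline, not verbatim from the guide: the cluster name is made up, and the pinned release version is illustrative (check the Gateway API releases page for the current one).&lt;/p&gt;

```shell
# Create a local throwaway cluster (name is illustrative)
kind create cluster --name gateway-api-demo

# Install the Gateway API standard-channel CRDs (version pin is illustrative)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

# Confirm the Gateway resource types are registered
kubectl get crd gateways.gateway.networking.k8s.io
```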
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;In this guide, you will:&lt;/p&gt;</description></item><item><title>Cluster API v1.12: Introducing In-place Updates and Chained Upgrades</title><link>https://andygol-k8s.netlify.app/blog/2026/01/27/cluster-api-v1-12-release/</link><pubDate>Tue, 27 Jan 2026 08:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/27/cluster-api-v1-12-release/</guid><description>&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/"&gt;Cluster API&lt;/a&gt; brings declarative management to Kubernetes cluster lifecycle, allowing users and platform teams to define the desired state of clusters and rely on controllers to continuously reconcile toward it.&lt;/p&gt;
&lt;p&gt;Similar to how you can use StatefulSets or Deployments in Kubernetes to manage a group of Pods, in Cluster API you can use KubeadmControlPlane to manage a set of control plane Machines, or you can use MachineDeployments to manage a group of worker Nodes.&lt;/p&gt;</description></item><item><title>Headlamp in 2025: Project Highlights</title><link>https://andygol-k8s.netlify.app/blog/2026/01/22/headlamp-in-2025-project-highlights/</link><pubDate>Thu, 22 Jan 2026 10:00:00 +0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/22/headlamp-in-2025-project-highlights/</guid><description>&lt;p&gt;&lt;em&gt;This announcement is a recap from a post originally &lt;a href="https://headlamp.dev/blog/2025/11/13/headlamp-in-2025"&gt;published&lt;/a&gt; on the Headlamp blog.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://headlamp.dev/"&gt;Headlamp&lt;/a&gt; has come a long way in 2025. The project has continued to grow – reaching more teams across platforms, powering new workflows and integrations through plugins, and seeing increased collaboration from the broader community.&lt;/p&gt;
&lt;p&gt;We wanted to take a moment to share a few updates and highlight how Headlamp has evolved over the past year.&lt;/p&gt;
&lt;h2 id="updates"&gt;Updates&lt;/h2&gt;
&lt;h3 id="joining-kubernetes-sig-ui"&gt;Joining Kubernetes SIG UI&lt;/h3&gt;
&lt;p&gt;This year marked a big milestone for the project: Headlamp is now officially part of Kubernetes &lt;a href="https://github.com/kubernetes/community/blob/master/sig-ui/README.md"&gt;SIG UI&lt;/a&gt;. This move brings roadmap and design discussions even closer to the core Kubernetes community and reinforces Headlamp’s role as a modern, extensible UI for the project.&lt;/p&gt;</description></item><item><title>Announcing the Checkpoint/Restore Working Group</title><link>https://andygol-k8s.netlify.app/blog/2026/01/21/introducing-checkpoint-restore-wg/</link><pubDate>Wed, 21 Jan 2026 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/21/introducing-checkpoint-restore-wg/</guid><description>&lt;p&gt;The community around Kubernetes includes a number of Special Interest Groups (SIGs) and Working Groups (WGs) facilitating discussions on important topics between interested contributors. Today we would like to announce the new &lt;a href="https://github.com/kubernetes/community/tree/master/wg-checkpoint-restore"&gt;Kubernetes Checkpoint Restore WG&lt;/a&gt; focusing on the integration of Checkpoint/Restore functionality into Kubernetes.&lt;/p&gt;
&lt;h2 id="motivation-and-use-cases"&gt;Motivation and use cases&lt;/h2&gt;
&lt;p&gt;There are several high-level scenarios discussed in the working group:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Optimizing resource utilization for interactive workloads, such as Jupyter notebooks and AI chatbots&lt;/li&gt;
&lt;li&gt;Accelerating startup of applications with long initialization times, including Java applications and &lt;a href="https://doi.org/10.1145/3731599.3767354"&gt;LLM inference services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Using periodic checkpointing to enable fault-tolerance for long-running workloads, such as distributed model training&lt;/li&gt;
&lt;li&gt;Providing &lt;a href="https://doi.org/10.1007/978-3-032-10507-3_3"&gt;interruption-aware scheduling&lt;/a&gt; with transparent checkpoint/restore, allowing lower-priority Pods to be preempted while preserving the runtime state of applications&lt;/li&gt;
&lt;li&gt;Facilitating Pod migration across nodes for load balancing and maintenance, without disrupting workloads&lt;/li&gt;
&lt;li&gt;Enabling forensic checkpointing to investigate and analyze security incidents such as cyberattacks, data breaches, and unauthorized access&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Across these scenarios, the goal is to help facilitate discussions of ideas between the Kubernetes community and the growing Checkpoint/Restore in Userspace (CRIU) ecosystem. The CRIU community includes several projects that support these use cases, including:&lt;/p&gt;</description></item><item><title>Uniform API server access using clientcmd</title><link>https://andygol-k8s.netlify.app/blog/2026/01/19/clientcmd-apiserver-access/</link><pubDate>Mon, 19 Jan 2026 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/19/clientcmd-apiserver-access/</guid><description>&lt;p&gt;If you've ever wanted to develop a command line client for a Kubernetes API,
especially if you've considered making your client usable as a &lt;code&gt;kubectl&lt;/code&gt; plugin,
you might have wondered how to make your client feel familiar to users of &lt;code&gt;kubectl&lt;/code&gt;.
A quick glance at the output of &lt;code&gt;kubectl options&lt;/code&gt; might put a damper on that:
&amp;quot;Am I really supposed to implement all those options?&amp;quot;&lt;/p&gt;
&lt;p&gt;Fear not, others have done a lot of the work involved for you.
In fact, the Kubernetes project provides two libraries to help you handle
&lt;code&gt;kubectl&lt;/code&gt;-style command line arguments in Go programs:
&lt;a href="https://pkg.go.dev/k8s.io/client-go/tools/clientcmd"&gt;&lt;code&gt;clientcmd&lt;/code&gt;&lt;/a&gt; and
&lt;a href="https://pkg.go.dev/k8s.io/cli-runtime"&gt;&lt;code&gt;cli-runtime&lt;/code&gt;&lt;/a&gt;
(which uses &lt;code&gt;clientcmd&lt;/code&gt;).
This article will show how to use the former.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Restricting executables invoked by kubeconfigs via exec plugin allowList added to kuberc</title><link>https://andygol-k8s.netlify.app/blog/2026/01/09/kubernetes-v1-35-kuberc-credential-plugin-allowlist/</link><pubDate>Fri, 09 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/09/kubernetes-v1-35-kuberc-credential-plugin-allowlist/</guid><description>&lt;p&gt;Did you know that &lt;code&gt;kubectl&lt;/code&gt; can run arbitrary executables, including shell
scripts, with the full privileges of the invoking user, and without your
knowledge? Whenever you download or auto-generate a &lt;code&gt;kubeconfig&lt;/code&gt;, the
&lt;code&gt;users[n].exec.command&lt;/code&gt; field can specify an executable to fetch credentials on
your behalf. Don't get me wrong, this is an incredible feature that allows you
to authenticate to the cluster with external identity providers. Nevertheless,
you probably see the problem: Do you know exactly what executables your &lt;code&gt;kubeconfig&lt;/code&gt;
is running on your system? Do you trust the pipeline that generated your &lt;code&gt;kubeconfig&lt;/code&gt;?
If there has been a supply-chain attack on the code that generates the kubeconfig,
or if the generating pipeline has been compromised, an attacker might well be
doing unsavory things to your machine by tricking your &lt;code&gt;kubeconfig&lt;/code&gt; into running
arbitrary code.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Mutable PersistentVolume Node Affinity (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2026/01/08/kubernetes-v1-35-mutable-pv-nodeaffinity/</link><pubDate>Thu, 08 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/08/kubernetes-v1-35-mutable-pv-nodeaffinity/</guid><description>&lt;p&gt;The PersistentVolume &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity"&gt;node affinity&lt;/a&gt; API
dates back to Kubernetes v1.10.
It is widely used to express that volumes may not be equally accessible by all nodes in the cluster.
This field was previously immutable,
and it is now mutable in Kubernetes v1.35 (alpha). This change opens a door to more flexible online volume management.&lt;/p&gt;
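&lt;p&gt;As a sketch of what mutability enables, an administrator could widen a volume's reachability in place. Everything below is a hedged, hypothetical example: the PV name and topology values are made up, and the change requires the relevant v1.35 alpha feature gate to be enabled on your cluster.&lt;/p&gt;

```shell
# Hypothetical example: PV name and zone values are illustrative.
# Requires the v1.35 alpha feature gate for mutable PV node affinity.
kubectl patch persistentvolume example-pv --type=merge -p '
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["zone-a", "zone-b"]
'
```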
&lt;h2 id="why-make-node-affinity-mutable"&gt;Why make node affinity mutable?&lt;/h2&gt;
&lt;p&gt;This raises an obvious question: why make node affinity mutable now?
While stateless workloads like Deployments can be changed freely
and the changes will be rolled out automatically by re-creating every Pod,
PersistentVolumes (PVs) are stateful and cannot be re-created easily without losing data.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: A Better Way to Pass Service Account Tokens to CSI Drivers</title><link>https://andygol-k8s.netlify.app/blog/2026/01/07/kubernetes-v1-35-csi-sa-tokens-secrets-field-beta/</link><pubDate>Wed, 07 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/07/kubernetes-v1-35-csi-sa-tokens-secrets-field-beta/</guid><description>&lt;p&gt;If you maintain a CSI driver that uses service account tokens,
Kubernetes v1.35 brings a refinement you'll want to know about.
Since the introduction of the &lt;a href="https://kubernetes-csi.github.io/docs/token-requests.html"&gt;TokenRequests feature&lt;/a&gt;,
service account tokens requested by CSI drivers have been passed to them through the &lt;code&gt;volume_context&lt;/code&gt; field.
While this has worked, it's not the ideal place for sensitive information,
and we've seen instances where tokens were accidentally logged in CSI drivers.&lt;/p&gt;
&lt;p&gt;Kubernetes v1.35 introduces a beta solution to address this:
&lt;em&gt;CSI Driver Opt-in for Service Account Tokens via Secrets Field&lt;/em&gt;.
This allows CSI drivers to receive service account tokens
through the &lt;code&gt;secrets&lt;/code&gt; field in &lt;code&gt;NodePublishVolumeRequest&lt;/code&gt;,
which is the appropriate place for sensitive data in the CSI specification.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Extended Toleration Operators to Support Numeric Comparisons (Alpha)</title><link>https://andygol-k8s.netlify.app/blog/2026/01/05/kubernetes-v1-35-numeric-toleration-operators/</link><pubDate>Mon, 05 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/05/kubernetes-v1-35-numeric-toleration-operators/</guid><description>&lt;p&gt;Many production Kubernetes clusters blend on-demand (higher-SLA) and spot/preemptible (lower-SLA) nodes to optimize costs while maintaining reliability for critical workloads. Platform teams need a safe default that keeps most workloads away from risky capacity, while allowing specific workloads to opt-in with explicit thresholds like &amp;quot;I can tolerate nodes with failure probability up to 5%&amp;quot;.&lt;/p&gt;
&lt;p&gt;Today, Kubernetes taints and tolerations can match exact values or check for existence, but they can't compare numeric thresholds. You'd need to create discrete taint categories, use external admission controllers, or accept less-than-optimal placement decisions.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: New level of efficiency with in-place Pod restart</title><link>https://andygol-k8s.netlify.app/blog/2026/01/02/kubernetes-v1-35-restart-all-containers/</link><pubDate>Fri, 02 Jan 2026 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2026/01/02/kubernetes-v1-35-restart-all-containers/</guid><description>&lt;p&gt;The release of Kubernetes 1.35 introduces a powerful new feature that provides a much-requested capability: the ability to trigger a full, in-place restart of the Pod. This feature, &lt;em&gt;Restart All Containers&lt;/em&gt; (alpha in 1.35), allows for an efficient way to reset a Pod's state compared to resource-intensive approach of deleting and recreating the entire Pod. This feature is especially useful for AI/ML workloads allowing application developers to concentrate on their core training logic while offloading complex failure-handling and recovery mechanisms to sidecars and declarative Kubernetes configuration. 
With &lt;code&gt;RestartAllContainers&lt;/code&gt; and other planned enhancements, Kubernetes continues to add building blocks for creating the most flexible, robust, and efficient platforms for AI/ML workloads.&lt;/p&gt;</description></item><item><title>Kubernetes 1.35: Enhanced Debugging with Versioned z-pages APIs</title><link>https://andygol-k8s.netlify.app/blog/2025/12/31/kubernetes-v1-35-structured-zpages/</link><pubDate>Wed, 31 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/31/kubernetes-v1-35-structured-zpages/</guid><description>&lt;p&gt;Debugging Kubernetes control plane components can be challenging, especially when you need to quickly understand the runtime state of a component or verify its configuration. With Kubernetes 1.35, we're enhancing the z-pages debugging endpoints with structured, machine-parseable responses that make it easier to build tooling and automate troubleshooting workflows.&lt;/p&gt;
&lt;h2 id="what-are-z-pages"&gt;What are z-pages?&lt;/h2&gt;
&lt;p&gt;z-pages are special debugging endpoints exposed by Kubernetes control plane components. Introduced as an alpha feature in Kubernetes 1.32, these endpoints provide runtime diagnostics for components like &lt;code&gt;kube-apiserver&lt;/code&gt;, &lt;code&gt;kube-controller-manager&lt;/code&gt;, &lt;code&gt;kube-scheduler&lt;/code&gt;, &lt;code&gt;kubelet&lt;/code&gt; and &lt;code&gt;kube-proxy&lt;/code&gt;. The name &amp;quot;z-pages&amp;quot; comes from the convention of using &lt;code&gt;/*z&lt;/code&gt; paths for debugging endpoints.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Watch Based Route Reconciliation in the Cloud Controller Manager</title><link>https://andygol-k8s.netlify.app/blog/2025/12/30/kubernetes-v1-35-watch-based-route-reconciliation-in-ccm/</link><pubDate>Tue, 30 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/30/kubernetes-v1-35-watch-based-route-reconciliation-in-ccm/</guid><description>&lt;p&gt;Up to and including Kubernetes v1.34, the route controller in Cloud Controller Manager (CCM)
implementations built using the &lt;a href="https://github.com/kubernetes/cloud-provider"&gt;k8s.io/cloud-provider&lt;/a&gt; library reconciles
routes at a fixed interval. This causes unnecessary API requests to the cloud provider when
there are no changes to routes. Other controllers implemented through the same library already
use watch-based mechanisms, leveraging informers to avoid unnecessary API calls. A new feature gate
is being introduced in v1.35 to allow changing the behavior of the route controller to use watch-based informers.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Introducing Workload Aware Scheduling</title><link>https://andygol-k8s.netlify.app/blog/2025/12/29/kubernetes-v1-35-introducing-workload-aware-scheduling/</link><pubDate>Mon, 29 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/29/kubernetes-v1-35-introducing-workload-aware-scheduling/</guid><description>&lt;p&gt;Scheduling large workloads is a much more complex and fragile operation than scheduling a single Pod,
as it often requires considering all Pods together instead of scheduling each one independently.
For example, when scheduling a machine learning batch job, you often need to place each worker strategically,
such as on the same rack, to make the entire process as efficient as possible.
At the same time, the Pods that are part of such a workload are very often identical
from the scheduling perspective, which fundamentally changes how this process should look.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Fine-grained Supplemental Groups Control Graduates to GA</title><link>https://andygol-k8s.netlify.app/blog/2025/12/23/kubernetes-v1-35-fine-grained-supplementalgroups-control-ga/</link><pubDate>Tue, 23 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/23/kubernetes-v1-35-fine-grained-supplementalgroups-control-ga/</guid><description>&lt;p&gt;On behalf of Kubernetes SIG Node, we are pleased to announce the graduation of &lt;em&gt;fine-grained supplemental groups control&lt;/em&gt; to General Availability (GA) in Kubernetes v1.35!&lt;/p&gt;
&lt;p&gt;The new Pod field, &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt;, was introduced as an opt-in alpha feature in Kubernetes v1.31, and then graduated to beta in v1.33.
Now, the feature is generally available.
This feature allows you to implement more precise control over supplemental groups in Linux containers, which can strengthen the security posture, particularly when accessing volumes.
Moreover, it also enhances the transparency of UID/GID details in containers, offering improved security oversight.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Kubelet Configuration Drop-in Directory Graduates to GA</title><link>https://andygol-k8s.netlify.app/blog/2025/12/22/kubernetes-v1-35-kubelet-config-drop-in-directory-ga/</link><pubDate>Mon, 22 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/22/kubernetes-v1-35-kubelet-config-drop-in-directory-ga/</guid><description>&lt;p&gt;With the recent v1.35 release of Kubernetes, support for a kubelet configuration drop-in directory is generally available.
The newly stable feature simplifies the management of kubelet configuration across large, heterogeneous clusters.&lt;/p&gt;
&lt;p&gt;With v1.35, the kubelet command line argument &lt;code&gt;--config-dir&lt;/code&gt; is production-ready and fully supported,
allowing you to specify a directory containing kubelet configuration drop-in files.
All files in that directory will be automatically merged with your main kubelet configuration.
This allows cluster administrators to maintain a cohesive &lt;em&gt;base configuration&lt;/em&gt; for kubelets while enabling targeted customizations for different node groups or use cases, and without complex tooling or manual configuration management.&lt;/p&gt;</description></item><item><title>Avoiding Zombie Cluster Members When Upgrading to etcd v3.6</title><link>https://andygol-k8s.netlify.app/blog/2025/12/21/preventing-etcd-zombies/</link><pubDate>Sun, 21 Dec 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/21/preventing-etcd-zombies/</guid><description>The key takeaway? Always upgrade to etcd v3.5.26 or later before moving to v3.6. This ensures your cluster is automatically repaired, and avoids zombie members.</description></item><item><title>Kubernetes 1.35: In-Place Pod Resize Graduates to Stable</title><link>https://andygol-k8s.netlify.app/blog/2025/12/19/kubernetes-v1-35-in-place-pod-resize-ga/</link><pubDate>Fri, 19 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/19/kubernetes-v1-35-in-place-pod-resize-ga/</guid><description>&lt;p&gt;This release marks a major step: more than 6 years after its initial conception,
the &lt;strong&gt;In-Place Pod Resize&lt;/strong&gt; feature (also known as In-Place Pod Vertical Scaling), first introduced as
alpha in Kubernetes v1.27 and graduated to beta in Kubernetes v1.33, is now &lt;strong&gt;stable (GA)&lt;/strong&gt; in Kubernetes
1.35!&lt;/p&gt;
&lt;p&gt;This graduation is a major milestone for improving resource efficiency and flexibility for workloads
running on Kubernetes.&lt;/p&gt;
&lt;h2 id="what-is-in-place-pod-resize"&gt;What is in-place Pod Resize?&lt;/h2&gt;
&lt;p&gt;In the past, the CPU and memory resources allocated to a container in a Pod were immutable. This meant changing
them required deleting and recreating the entire Pod. For stateful services, batch jobs, or latency-sensitive
workloads, this was an incredibly disruptive operation.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Job Managed By Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2025/12/18/kubernetes-v1-35-job-managedby-for-jobs-goes-ga/</link><pubDate>Thu, 18 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/18/kubernetes-v1-35-job-managedby-for-jobs-goes-ga/</guid><description>&lt;p&gt;In Kubernetes v1.35, the ability to specify an external Job controller (through &lt;code&gt;.spec.managedBy&lt;/code&gt;) graduates to General Availability.&lt;/p&gt;
&lt;p&gt;This feature allows external controllers to take full responsibility for Job reconciliation, unlocking powerful scheduling patterns like multi-cluster dispatching with &lt;a href="https://kueue.sigs.k8s.io/docs/concepts/multikueue/"&gt;MultiKueue&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="why-delegate-job-reconciliation"&gt;Why delegate Job reconciliation?&lt;/h2&gt;
&lt;p&gt;The primary motivation for this feature is to support multi-cluster batch scheduling architectures, such as MultiKueue.&lt;/p&gt;
&lt;p&gt;The MultiKueue architecture distinguishes between a Management Cluster and a pool of Worker Clusters:&lt;/p&gt;</description></item><item><title>Kubernetes v1.35: Timbernetes (The World Tree Release)</title><link>https://andygol-k8s.netlify.app/blog/2025/12/17/kubernetes-v1-35-release/</link><pubDate>Wed, 17 Dec 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/12/17/kubernetes-v1-35-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors&lt;/strong&gt;: Aakanksha Bhende, Arujjwal Negi, Chad M. Crowell, Graziano Casto, Swathi Rao&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.35 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 60 enhancements, including 17 stable, 19 beta, and 22 alpha features.&lt;/p&gt;
&lt;p&gt;There are also some &lt;a href="#deprecations-removals-and-community-updates"&gt;deprecations and removals&lt;/a&gt; in this release; make sure to read about those.&lt;/p&gt;</description></item><item><title>Kubernetes v1.35 Sneak Peek</title><link>https://andygol-k8s.netlify.app/blog/2025/11/26/kubernetes-v1-35-sneak-peek/</link><pubDate>Wed, 26 Nov 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/11/26/kubernetes-v1-35-sneak-peek/</guid><description>&lt;p&gt;As the release of Kubernetes v1.35 approaches, the Kubernetes project continues to evolve. Features may be deprecated, removed, or replaced to improve the project's overall health. This blog post outlines planned changes for the v1.35 release that the release team believes you should be aware of to ensure the continued smooth operation of your Kubernetes cluster(s), and to keep you up to date with the latest developments. The information below is based on the current status of the v1.35 release and is subject to change before the final release date.&lt;/p&gt;</description></item><item><title>Kubernetes Configuration Good Practices</title><link>https://andygol-k8s.netlify.app/blog/2025/11/25/configuration-good-practices/</link><pubDate>Tue, 25 Nov 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/11/25/configuration-good-practices/</guid><description>&lt;p&gt;Configuration is one of those things in Kubernetes that seems small until it's not. Configuration is at the heart of every Kubernetes workload.
A missing quote, a wrong API version or a misplaced YAML indent can ruin your entire deploy.&lt;/p&gt;
&lt;p&gt;This blog brings together tried-and-tested configuration best practices. The small habits that make your Kubernetes setup clean, consistent and easier to manage.
Whether you are just starting out or already deploying apps daily, these are the little things that keep your cluster stable and your future self sane.&lt;/p&gt;</description></item><item><title>Ingress NGINX Retirement: What You Need to Know</title><link>https://andygol-k8s.netlify.app/blog/2025/11/11/ingress-nginx-retirement/</link><pubDate>Tue, 11 Nov 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/11/11/ingress-nginx-retirement/</guid><description>&lt;p&gt;To prioritize the safety and security of the ecosystem, Kubernetes SIG Network and the Security Response Committee are announcing the upcoming retirement of &lt;a href="https://github.com/kubernetes/ingress-nginx/"&gt;Ingress NGINX&lt;/a&gt;. Best-effort maintenance will continue until March 2026. Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. &lt;strong&gt;Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We recommend migrating to one of the many alternatives. Consider &lt;a href="https://gateway-api.sigs.k8s.io/guides/"&gt;migrating to Gateway API&lt;/a&gt;, the modern replacement for Ingress. If you must continue using Ingress, many alternative Ingress controllers are &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;listed in the Kubernetes documentation&lt;/a&gt;. Continue reading for further information about the history and current state of Ingress NGINX, as well as next steps.&lt;/p&gt;</description></item><item><title>Announcing the 2025 Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2025/11/09/steering-committee-results-2025/</link><pubDate>Sun, 09 Nov 2025 15:10:00 -0500</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/11/09/steering-committee-results-2025/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2025"&gt;2025 Steering Committee Election&lt;/a&gt; is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2025. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p&gt;
&lt;p&gt;The Steering Committee oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;charter&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Gateway API 1.4: New Features</title><link>https://andygol-k8s.netlify.app/blog/2025/11/06/gateway-api-v1-4/</link><pubDate>Thu, 06 Nov 2025 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/11/06/gateway-api-v1-4/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/blog/2025/11/06/gateway-api-v1-4/gateway-api-logo.svg" alt="Gateway API logo"&gt;&lt;/p&gt;
&lt;p&gt;Ready to rock your Kubernetes networking? The Kubernetes SIG Network community has announced the General Availability (GA) release of Gateway API (v1.4.0)! Released on October 6, 2025, version 1.4.0 reinforces the path for modern, expressive, and extensible service networking in Kubernetes.&lt;/p&gt;
&lt;p&gt;Gateway API v1.4.0 brings three new features to the &lt;em&gt;Standard channel&lt;/em&gt;
(Gateway API's GA release channel):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;BackendTLSPolicy for TLS between gateways and backends&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;supportedFeatures&lt;/code&gt; in GatewayClass status&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Named rules for Routes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and introduces three new experimental features:&lt;/p&gt;</description></item><item><title>7 Common Kubernetes Pitfalls (and How I Learned to Avoid Them)</title><link>https://andygol-k8s.netlify.app/blog/2025/10/20/seven-kubernetes-pitfalls-and-how-to-avoid/</link><pubDate>Mon, 20 Oct 2025 08:30:00 -0700</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/10/20/seven-kubernetes-pitfalls-and-how-to-avoid/</guid><description>&lt;p&gt;It’s no secret that Kubernetes can be both powerful and frustrating at times. When I first started dabbling with container orchestration, I made more than my fair share of mistakes enough to compile a whole list of pitfalls. In this post, I want to walk through seven big gotchas I’ve encountered (or seen others run into) and share some tips on how to avoid them. Whether you’re just kicking the tires on Kubernetes or already managing production clusters, I hope these insights help you steer clear of a little extra stress.&lt;/p&gt;</description></item><item><title>Spotlight on Policy Working Group</title><link>https://andygol-k8s.netlify.app/blog/2025/10/18/wg-policy-spotlight-2025/</link><pubDate>Sat, 18 Oct 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/10/18/wg-policy-spotlight-2025/</guid><description>&lt;p&gt;&lt;em&gt;(Note: The Policy Working Group has completed its mission and is no longer active. This article reflects its work, accomplishments, and insights into how a working group operates.)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In the complex world of Kubernetes, policies play a crucial role in managing and securing clusters. But have you ever wondered how these policies are developed, implemented, and standardized across the Kubernetes ecosystem? To answer that, let's take a look back at the work of the Policy Working Group.&lt;/p&gt;</description></item><item><title>Introducing Headlamp Plugin for Karpenter - Scaling and Visibility</title><link>https://andygol-k8s.netlify.app/blog/2025/10/06/introducing-headlamp-plugin-for-karpenter/</link><pubDate>Mon, 06 Oct 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/10/06/introducing-headlamp-plugin-for-karpenter/</guid><description>&lt;p&gt;Headlamp is an open‑source, extensible Kubernetes SIG UI project designed to let you explore, manage, and debug cluster resources.&lt;/p&gt;
&lt;p&gt;Karpenter is a Kubernetes Autoscaling SIG node provisioning project that helps clusters scale quickly and efficiently. It launches new nodes in seconds, selects appropriate instance types for workloads, and manages the full node lifecycle, including scale-down.&lt;/p&gt;
&lt;p&gt;The new Headlamp Karpenter Plugin adds real-time visibility into Karpenter’s activity directly from the Headlamp UI. It shows how Karpenter resources relate to Kubernetes objects, displays live metrics, and surfaces scaling events as they happen. You can inspect pending pods during provisioning, review scaling decisions, and edit Karpenter-managed resources with built-in validation. The Karpenter plugin was developed as part of an LFX mentorship project.&lt;/p&gt;</description></item><item><title>Announcing Changed Block Tracking API support (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2025/09/25/csi-changed-block-tracking/</link><pubDate>Thu, 25 Sep 2025 05:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/25/csi-changed-block-tracking/</guid><description>&lt;p&gt;We're excited to announce the alpha support for a &lt;em&gt;changed block tracking&lt;/em&gt; mechanism. This enhances
the Kubernetes storage ecosystem by providing an efficient way for
&lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#csi"&gt;CSI&lt;/a&gt; storage drivers to identify changed
blocks in PersistentVolume snapshots. With a driver that can use the feature, you could benefit
from faster and more resource-efficient backup operations.&lt;/p&gt;
&lt;p&gt;If you're eager to try this feature, you can &lt;a href="#getting-started"&gt;skip to the Getting Started section&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="what-is-changed-block-tracking"&gt;What is changed block tracking?&lt;/h2&gt;
&lt;p&gt;Changed block tracking enables storage systems to identify and track modifications at the block level
between snapshots, eliminating the need to scan entire volumes during backup operations. The
improvement is a change to the Container Storage Interface (CSI), and also to the storage support
in Kubernetes itself.
With the alpha feature enabled, your cluster can:&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Pod Level Resources Graduated to Beta</title><link>https://andygol-k8s.netlify.app/blog/2025/09/22/kubernetes-v1-34-pod-level-resources/</link><pubDate>Mon, 22 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/22/kubernetes-v1-34-pod-level-resources/</guid><description>&lt;p&gt;On behalf of the Kubernetes community, I am thrilled to announce that the Pod Level Resources feature has graduated to Beta in the Kubernetes v1.34 release and is enabled by default! This significant milestone introduces a new layer of flexibility for defining and managing resource allocation for your Pods. This flexibility stems from the ability to specify CPU and memory resources for the Pod as a whole. Pod level resources can be combined with the container-level specifications to express the exact resource requirements and limits your application needs.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Recovery From Volume Expansion Failure (GA)</title><link>https://andygol-k8s.netlify.app/blog/2025/09/19/kubernetes-v1-34-recover-expansion-failure/</link><pubDate>Fri, 19 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/19/kubernetes-v1-34-recover-expansion-failure/</guid><description>&lt;p&gt;Have you ever made a typo when expanding your persistent volumes in Kubernetes? Meant to specify &lt;code&gt;2TB&lt;/code&gt;
but specified &lt;code&gt;20TiB&lt;/code&gt;? This seemingly innocuous problem was kinda hard to fix - and it took the project almost 5 years to resolve.
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#recovering-from-failure-when-expanding-volumes"&gt;Automated recovery from storage expansion&lt;/a&gt; has been around for a while in beta; however, with the v1.34 release, we have graduated this to
&lt;strong&gt;general availability&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;While it was always possible to recover from failing volume expansions manually, it usually required cluster-admin access and was tedious to do (See aforementioned link for more information).&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: DRA Consumable Capacity</title><link>https://andygol-k8s.netlify.app/blog/2025/09/18/kubernetes-v1-34-dra-consumable-capacity/</link><pubDate>Thu, 18 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/18/kubernetes-v1-34-dra-consumable-capacity/</guid><description>&lt;p&gt;Dynamic Resource Allocation (DRA) is a Kubernetes API for managing scarce resources across Pods and containers.
It enables flexible resource requests, going beyond simply allocating &lt;em&gt;N&lt;/em&gt; number of devices to support more granular usage scenarios.
With DRA, users can request specific types of devices based on their attributes, define custom configurations tailored to their workloads, and even share the same resource among multiple containers or Pods.&lt;/p&gt;
&lt;p&gt;In this blog, we focus on the device sharing feature and dive into a new capability introduced in Kubernetes 1.34: &lt;em&gt;DRA consumable capacity&lt;/em&gt;,
which extends DRA to support finer-grained device sharing.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Pods Report DRA Resource Health</title><link>https://andygol-k8s.netlify.app/blog/2025/09/17/kubernetes-v1-34-pods-report-dra-resource-health/</link><pubDate>Wed, 17 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/17/kubernetes-v1-34-pods-report-dra-resource-health/</guid><description>&lt;p&gt;The rise of AI/ML and other high-performance workloads has made specialized hardware like GPUs, TPUs, and FPGAs a critical component of many Kubernetes clusters. However, as discussed in a &lt;a href="https://andygol-k8s.netlify.app/blog/2025/07/03/navigating-failures-in-pods-with-devices/"&gt;previous blog post about navigating failures in Pods with devices&lt;/a&gt;, when this hardware fails, it can be difficult to diagnose, leading to significant downtime. With the release of Kubernetes v1.34, we are excited to announce a new alpha feature that brings much-needed visibility into the health of these devices.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Moving Volume Group Snapshots to v1beta2</title><link>https://andygol-k8s.netlify.app/blog/2025/09/16/kubernetes-v1-34-volume-group-snapshot-beta-2/</link><pubDate>Tue, 16 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/16/kubernetes-v1-34-volume-group-snapshot-beta-2/</guid><description>&lt;p&gt;Volume group snapshots were &lt;a href="https://andygol-k8s.netlify.app/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/"&gt;introduced&lt;/a&gt;
as an Alpha feature with the Kubernetes 1.27 release and moved to &lt;a href="https://andygol-k8s.netlify.app/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/"&gt;Beta&lt;/a&gt; in the Kubernetes 1.32 release.
The recent release of Kubernetes v1.34 moved that support to a second beta.
The support for volume group snapshots relies on a set of
&lt;a href="https://kubernetes-csi.github.io/docs/group-snapshot-restore-feature.html#volume-group-snapshot-apis"&gt;extension APIs for group snapshots&lt;/a&gt;.
These APIs allow users to take crash consistent snapshots for a set of volumes.
Behind the scenes, Kubernetes uses a label selector to group multiple PersistentVolumeClaims
for snapshotting.
A key aim is to allow you to restore that set of snapshots to new volumes and
recover your workload based on a crash consistent recovery point.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Decoupled Taint Manager Is Now Stable</title><link>https://andygol-k8s.netlify.app/blog/2025/09/15/kubernetes-v1-34-decoupled-taint-manager-is-now-stable/</link><pubDate>Mon, 15 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/15/kubernetes-v1-34-decoupled-taint-manager-is-now-stable/</guid><description>&lt;p&gt;This enhancement separates the responsibility of managing node lifecycle and pod eviction into two distinct components.
Previously, the node lifecycle controller handled both marking nodes as unhealthy with NoExecute taints and evicting pods from them.
Now, a dedicated taint eviction controller manages the eviction process, while the node lifecycle controller focuses solely on applying taints.
This separation not only improves code organization but also makes it easier to improve the taint eviction controller or build custom implementations of taint-based eviction.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Autoconfiguration for Node Cgroup Driver Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2025/09/12/kubernetes-v1-34-cri-cgroup-driver-lookup-now-ga/</link><pubDate>Fri, 12 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/12/kubernetes-v1-34-cri-cgroup-driver-lookup-now-ga/</guid><description>&lt;p&gt;Historically, configuring the correct cgroup driver has been a pain point for users running new
Kubernetes clusters. On Linux systems, there are two different cgroup drivers:
&lt;code&gt;cgroupfs&lt;/code&gt; and &lt;code&gt;systemd&lt;/code&gt;. In the past, both the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;
and CRI implementation (like CRI-O or containerd) needed to be configured to use
the same cgroup driver, or else the kubelet would misbehave without any explicit
error message. This was a source of headaches for many cluster admins. Now, we've
(almost) arrived at the end of that headache.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Mutable CSI Node Allocatable Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2025/09/11/kubernetes-v1-34-mutable-csi-node-allocatable-count/</link><pubDate>Thu, 11 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/11/kubernetes-v1-34-mutable-csi-node-allocatable-count/</guid><description>&lt;p&gt;The &lt;a href="https://kep.k8s.io/4876"&gt;functionality for CSI drivers to update information about attachable volume count on the nodes&lt;/a&gt;, first introduced as Alpha in Kubernetes v1.33, has graduated to &lt;strong&gt;Beta&lt;/strong&gt; in the Kubernetes v1.34 release! This marks a significant milestone in enhancing the accuracy of stateful pod scheduling by reducing failures due to outdated attachable volume capacity information.&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Traditionally, Kubernetes &lt;a href="https://kubernetes-csi.github.io/docs/introduction.html"&gt;CSI drivers&lt;/a&gt; report a static maximum volume attachment limit when initializing. However, actual attachment capacities can change during a node's lifecycle for various reasons, such as:&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Use An Init Container To Define App Environment Variables</title><link>https://andygol-k8s.netlify.app/blog/2025/09/10/kubernetes-v1-34-env-files/</link><pubDate>Wed, 10 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/10/kubernetes-v1-34-env-files/</guid><description>&lt;p&gt;Kubernetes typically uses ConfigMaps and Secrets to set environment variables,
which introduces additional API calls and complexity.
For example, you need to separately manage the Pods of your workloads
and their configurations, while ensuring orderly
updates for both the configurations and the workload Pods.&lt;/p&gt;
&lt;p&gt;Alternatively, you might be using a vendor-supplied container
that requires environment variables (such as a license key or a one-time token),
but you don’t want to hard-code them or mount volumes just to get the job done.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Snapshottable API server cache</title><link>https://andygol-k8s.netlify.app/blog/2025/09/09/kubernetes-v1-34-snapshottable-api-server-cache/</link><pubDate>Tue, 09 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/09/kubernetes-v1-34-snapshottable-api-server-cache/</guid><description>&lt;p&gt;For years, the Kubernetes community has been on a mission to improve the stability and performance predictability of the API server.
A major focus of this effort has been taming &lt;strong&gt;list&lt;/strong&gt; requests, which have historically been a primary source of high memory usage and heavy load on the &lt;code&gt;etcd&lt;/code&gt; datastore.
With each release, we've chipped away at the problem, and today, we're thrilled to announce the final major piece of this puzzle.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: VolumeAttributesClass for Volume Modification GA</title><link>https://andygol-k8s.netlify.app/blog/2025/09/08/kubernetes-v1-34-volume-attributes-class/</link><pubDate>Mon, 08 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/08/kubernetes-v1-34-volume-attributes-class/</guid><description>&lt;p&gt;The VolumeAttributesClass API, which empowers users to dynamically modify volume attributes, has officially graduated to General Availability (GA) in Kubernetes v1.34. This marks a significant milestone, providing a robust and stable way to tune your persistent storage directly within Kubernetes.&lt;/p&gt;
&lt;h2 id="what-is-volumeattributesclass"&gt;What is VolumeAttributesClass?&lt;/h2&gt;
&lt;p&gt;At its core, VolumeAttributesClass is a cluster-scoped resource that defines a set of mutable parameters for a volume. Think of it as a &amp;quot;profile&amp;quot; for your storage, allowing cluster administrators to expose different quality-of-service (QoS) levels or performance tiers.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Pod Replacement Policy for Jobs Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2025/09/05/kubernetes-v1-34-pod-replacement-policy-for-jobs-goes-ga/</link><pubDate>Fri, 05 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/05/kubernetes-v1-34-pod-replacement-policy-for-jobs-goes-ga/</guid><description>&lt;p&gt;In Kubernetes v1.34, the &lt;em&gt;Pod replacement policy&lt;/em&gt; feature has reached general availability (GA).
This blog post describes the Pod replacement policy feature and how to use it in your Jobs.&lt;/p&gt;
&lt;h2 id="about-pod-replacement-policy"&gt;About Pod Replacement Policy&lt;/h2&gt;
&lt;p&gt;By default, the Job controller immediately recreates Pods as soon as they fail or begin terminating (when they have a deletion timestamp).&lt;/p&gt;
&lt;p&gt;As a result, while some Pods are terminating, the total number of running Pods for a Job can temporarily exceed the specified parallelism.
For Indexed Jobs, this can even mean multiple Pods running for the same index at the same time.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: PSI Metrics for Kubernetes Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2025/09/04/kubernetes-v1-34-introducing-psi-metrics-beta/</link><pubDate>Thu, 04 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/04/kubernetes-v1-34-introducing-psi-metrics-beta/</guid><description>&lt;p&gt;As Kubernetes clusters grow in size and complexity, understanding the health and performance of individual nodes becomes increasingly critical. We are excited to announce that as of Kubernetes v1.34, &lt;strong&gt;Pressure Stall Information (PSI) Metrics&lt;/strong&gt; has graduated to Beta.&lt;/p&gt;
&lt;h2 id="what-is-pressure-stall-information-psi"&gt;What is Pressure Stall Information (PSI)?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.kernel.org/accounting/psi.html"&gt;Pressure Stall Information (PSI)&lt;/a&gt; is a feature of the Linux kernel (version 4.20 and later)
that provides a canonical way to quantify pressure on infrastructure resources,
in terms of whether demand for a resource exceeds current supply.
It moves beyond simple resource utilization metrics and instead
measures the amount of time that tasks are stalled due to resource contention.
This is a powerful way to identify and diagnose resource bottlenecks that can impact application performance.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Service Account Token Integration for Image Pulls Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2025/09/03/kubernetes-v1-34-sa-tokens-image-pulls-beta/</link><pubDate>Wed, 03 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/03/kubernetes-v1-34-sa-tokens-image-pulls-beta/</guid><description>&lt;p&gt;The Kubernetes community continues to advance security best practices
by reducing reliance on long-lived credentials.
Following the successful &lt;a href="https://andygol-k8s.netlify.app/blog/2025/05/07/kubernetes-v1-33-wi-for-image-pulls/"&gt;alpha release in Kubernetes v1.33&lt;/a&gt;,
&lt;em&gt;Service Account Token Integration for Kubelet Credential Providers&lt;/em&gt;
has now graduated to &lt;strong&gt;beta&lt;/strong&gt; in Kubernetes v1.34,
bringing us closer to eliminating long-lived image pull secrets from Kubernetes clusters.&lt;/p&gt;
&lt;p&gt;This enhancement allows credential providers
to use workload-specific service account tokens to obtain registry credentials,
providing a secure, ephemeral alternative to traditional image pull secrets.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Introducing CPU Manager Static Policy Option for Uncore Cache Alignment</title><link>https://andygol-k8s.netlify.app/blog/2025/09/02/kubernetes-v1-34-prefer-align-by-uncore-cache-cpumanager-static-policy-optimization/</link><pubDate>Tue, 02 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/02/kubernetes-v1-34-prefer-align-by-uncore-cache-cpumanager-static-policy-optimization/</guid><description>&lt;p&gt;A new CPU Manager Static Policy Option called &lt;code&gt;prefer-align-cpus-by-uncorecache&lt;/code&gt; was introduced in Kubernetes v1.32 as an alpha feature, and has graduated to &lt;strong&gt;beta&lt;/strong&gt; in Kubernetes v1.34.
This CPU Manager Policy Option is designed to optimize performance for specific workloads running on processors with a &lt;em&gt;split uncore cache&lt;/em&gt; architecture.
In this article, I'll explain what that means and why it's useful.&lt;/p&gt;
&lt;h2 id="understanding-the-feature"&gt;Understanding the feature&lt;/h2&gt;
&lt;h3 id="what-is-uncore-cache"&gt;What is uncore cache?&lt;/h3&gt;
&lt;p&gt;Until relatively recently, nearly all mainstream computer processors had a
monolithic last-level cache that was shared across every core in a multi-core
CPU package.
This monolithic cache is also referred to as &lt;em&gt;uncore cache&lt;/em&gt;
(because it is not linked to a specific core), or as Level 3 cache.
As well as the Level 3 cache, there is other cache, commonly called Level 1 and Level 2 cache,
that &lt;strong&gt;is&lt;/strong&gt; associated with a specific CPU core.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: DRA has graduated to GA</title><link>https://andygol-k8s.netlify.app/blog/2025/09/01/kubernetes-v1-34-dra-updates/</link><pubDate>Mon, 01 Sep 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/09/01/kubernetes-v1-34-dra-updates/</guid><description>&lt;p&gt;Kubernetes 1.34 is here, and it has brought a huge wave of enhancements for Dynamic Resource Allocation (DRA)! This
release marks a major milestone with many APIs in the &lt;code&gt;resource.k8s.io&lt;/code&gt; group graduating to General Availability (GA),
unlocking the full potential of how you manage devices on Kubernetes. On top of that, several key features have
moved to beta, and a fresh batch of new alpha features promise even more expressiveness and flexibility.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Finer-Grained Control Over Container Restarts</title><link>https://andygol-k8s.netlify.app/blog/2025/08/29/kubernetes-v1-34-per-container-restart-policy/</link><pubDate>Fri, 29 Aug 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/08/29/kubernetes-v1-34-per-container-restart-policy/</guid><description>&lt;p&gt;With the release of Kubernetes 1.34, a new alpha feature is introduced
that gives you more granular control over container restarts within a Pod. This
feature, named &lt;strong&gt;Container Restart Policy and Rules&lt;/strong&gt;, allows you to specify a
restart policy for each container individually, overriding the Pod's global
restart policy. In addition, it also allows you to conditionally restart
individual containers based on their exit codes. This feature is available
behind the alpha feature gate &lt;code&gt;ContainerRestartRules&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: User preferences (kuberc) are available for testing in kubectl 1.34</title><link>https://andygol-k8s.netlify.app/blog/2025/08/28/kubernetes-v1-34-kubectl-kuberc-beta/</link><pubDate>Thu, 28 Aug 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/08/28/kubernetes-v1-34-kubectl-kuberc-beta/</guid><description>&lt;p&gt;Have you ever wished you could enable &lt;a href="https://kep.k8s.io/3895"&gt;interactive delete&lt;/a&gt;,
by default, in &lt;code&gt;kubectl&lt;/code&gt;? Or maybe, you'd like to have custom aliases defined,
but not necessarily &lt;a href="https://github.com/ahmetb/kubectl-aliases"&gt;generate hundreds of them manually&lt;/a&gt;?
Look no further. &lt;a href="https://git.k8s.io/community/sig-cli/"&gt;SIG-CLI&lt;/a&gt;
has been working hard to add &lt;a href="https://kep.k8s.io/3104"&gt;user preferences to kubectl&lt;/a&gt;,
and we are happy to announce that this functionality is reaching beta as part
of the Kubernetes v1.34 release.&lt;/p&gt;
&lt;h2 id="how-it-works"&gt;How it works&lt;/h2&gt;
&lt;p&gt;A full description of this functionality is available &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/kuberc/"&gt;in our official documentation&lt;/a&gt;,
but this blog post will answer both of the questions from the beginning of this
article.&lt;/p&gt;</description></item><item><title>Kubernetes v1.34: Of Wind &amp; Will (O' WaW)</title><link>https://andygol-k8s.netlify.app/blog/2025/08/27/kubernetes-v1-34-release/</link><pubDate>Wed, 27 Aug 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/08/27/kubernetes-v1-34-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Agustina Barbetta, Alejandro Josue Leon Bellido, Graziano Casto, Melony Qin, Dipesh Rawat&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.34 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 58 enhancements. Of those enhancements, 23 have graduated to Stable, 22 have entered Beta, and 13 have entered Alpha.&lt;/p&gt;</description></item><item><title>Tuning Linux Swap for Kubernetes: A Deep Dive</title><link>https://andygol-k8s.netlify.app/blog/2025/08/19/tuning-linux-swap-for-kubernetes-a-deep-dive/</link><pubDate>Tue, 19 Aug 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/08/19/tuning-linux-swap-for-kubernetes-a-deep-dive/</guid><description>&lt;p&gt;The Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/swap-memory-management/"&gt;NodeSwap feature&lt;/a&gt;, likely to graduate to &lt;em&gt;stable&lt;/em&gt; in the upcoming Kubernetes v1.34 release,
allows swap usage,
a significant shift from the conventional practice of disabling swap for performance predictability.
This article focuses exclusively on tuning swap on Linux nodes, where this feature is available. By allowing Linux nodes to use secondary storage for additional virtual memory when physical RAM is exhausted, node swap support aims to improve resource utilization and reduce out-of-memory (OOM) kills.&lt;/p&gt;</description></item><item><title>Introducing Headlamp AI Assistant</title><link>https://andygol-k8s.netlify.app/blog/2025/08/07/introducing-headlamp-ai-assistant/</link><pubDate>Thu, 07 Aug 2025 20:00:00 +0100</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/08/07/introducing-headlamp-ai-assistant/</guid><description>&lt;p&gt;&lt;em&gt;This announcement originally &lt;a href="https://headlamp.dev/blog/2025/08/07/introducing-the-headlamp-ai-assistant"&gt;appeared&lt;/a&gt; on the Headlamp blog.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To simplify Kubernetes management and troubleshooting, we're thrilled to
introduce &lt;a href="https://github.com/headlamp-k8s/plugins/tree/main/ai-assistant#readme"&gt;Headlamp AI Assistant&lt;/a&gt;: a powerful new plugin for Headlamp that helps
you understand and operate your Kubernetes clusters and applications with
greater clarity and ease.&lt;/p&gt;
&lt;p&gt;Whether you're a seasoned engineer or just getting started, the AI Assistant offers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fast time to value:&lt;/strong&gt; Ask questions like &lt;em&gt;&amp;quot;Is my application healthy?&amp;quot;&lt;/em&gt; or
&lt;em&gt;&amp;quot;How can I fix this?&amp;quot;&lt;/em&gt; without needing deep Kubernetes knowledge.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deep insights:&lt;/strong&gt; Start with high-level queries and dig deeper with prompts
like &lt;em&gt;&amp;quot;List all the problematic pods&amp;quot;&lt;/em&gt; or &lt;em&gt;&amp;quot;How can I fix this pod?&amp;quot;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Focused &amp;amp; relevant:&lt;/strong&gt; Ask questions in the context of what you're viewing
in the UI, such as &lt;em&gt;&amp;quot;What's wrong here?&amp;quot;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Action-oriented:&lt;/strong&gt; Let the AI take action for you, like &lt;em&gt;&amp;quot;Restart that
deployment&amp;quot;&lt;/em&gt;, with your permission.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here is a demo of the AI Assistant in action as it helps troubleshoot an
application running with issues in a Kubernetes cluster:&lt;/p&gt;</description></item><item><title>Kubernetes v1.34 Sneak Peek</title><link>https://andygol-k8s.netlify.app/blog/2025/07/28/kubernetes-v1-34-sneak-peek/</link><pubDate>Mon, 28 Jul 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/07/28/kubernetes-v1-34-sneak-peek/</guid><description>&lt;p&gt;Kubernetes v1.34 is coming at the end of August 2025.
This release will not include any removal or deprecation, but it is packed with an impressive number of enhancements.
Here are some of the features we are most excited about in this cycle!&lt;/p&gt;
&lt;p&gt;Please note that this information reflects the current state of v1.34 development and may change before release.&lt;/p&gt;
&lt;h2 id="featured-enhancements-of-kubernetes-v1-34"&gt;Featured enhancements of Kubernetes v1.34&lt;/h2&gt;
&lt;p&gt;The following list highlights some of the notable enhancements likely to be included in the v1.34 release,
but is not an exhaustive list of all planned changes.
This is not a commitment and the release content is subject to change.&lt;/p&gt;</description></item><item><title>Post-Quantum Cryptography in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2025/07/18/pqc-in-k8s/</link><pubDate>Fri, 18 Jul 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/07/18/pqc-in-k8s/</guid><description>&lt;p&gt;The world of cryptography is on the cusp of a major shift with the advent of
quantum computing. While powerful quantum computers are still largely
theoretical for many applications, their potential to break current
cryptographic standards is a serious concern, especially for long-lived
systems. This is where &lt;em&gt;Post-Quantum Cryptography&lt;/em&gt; (PQC) comes in. In this
article, I'll dive into what PQC means for TLS and, more specifically, for the
Kubernetes ecosystem. I'll explain what the (surprising) state of PQC in
Kubernetes is and what the implications are for current and future clusters.&lt;/p&gt;</description></item><item><title>Navigating Failures in Pods With Devices</title><link>https://andygol-k8s.netlify.app/blog/2025/07/03/navigating-failures-in-pods-with-devices/</link><pubDate>Thu, 03 Jul 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/07/03/navigating-failures-in-pods-with-devices/</guid><description>&lt;p&gt;Kubernetes is the de facto standard for container orchestration, but when it
comes to handling specialized hardware like GPUs and other accelerators, things
get a bit complicated. This blog post dives into the challenges of managing
failure modes when operating pods with devices in Kubernetes, based on insights
from &lt;a href="https://sched.co/1i7pT"&gt;Sergey Kanzhelev and Mrunal Patel's talk at KubeCon NA
2024&lt;/a&gt;. You can follow the links to
&lt;a href="https://static.sched.com/hosted_files/kccncna2024/b9/KubeCon%20NA%202024_%20Navigating%20Failures%20in%20Pods%20With%20Devices_%20Challenges%20and%20Solutions.pptx.pdf?_gl=1*191m4j5*_gcl_au*MTU1MDM0MTM1My4xNzMwOTE4ODY5LjIxNDI4Nzk1NDIuMTczMTY0ODgyMC4xNzMxNjQ4ODIy*FPAU*MTU1MDM0MTM1My4xNzMwOTE4ODY5"&gt;slides&lt;/a&gt;
and
&lt;a href="https://www.youtube.com/watch?v=-YCnOYTtVO8&amp;list=PLj6h78yzYM2Pw4mRw4S-1p_xLARMqPkA7&amp;index=150"&gt;recording&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="the-ai-ml-boom-and-its-impact-on-kubernetes"&gt;The AI/ML boom and its impact on Kubernetes&lt;/h2&gt;
&lt;p&gt;The rise of AI/ML workloads has brought new challenges to Kubernetes. These
workloads often rely heavily on specialized hardware, and any device failure can
significantly impact performance and lead to frustrating interruptions. As
highlighted in the 2024 &lt;a href="https://ai.meta.com/research/publications/the-llama-3-herd-of-models/"&gt;Llama
paper&lt;/a&gt;,
hardware issues, particularly GPU failures, are a major cause of disruption in
AI/ML training. You can also learn how much effort NVIDIA spends on handling
device failures and maintenance in the KubeCon talk by &lt;a href="https://kccncna2024.sched.com/event/1i7kJ/all-your-gpus-are-belong-to-us-an-inside-look-at-nvidias-self-healing-geforce-now-infrastructure-ryan-hallisey-piotr-prokop-pl-nvidia"&gt;Ryan Hallisey and Piotr
Prokop All-Your-GPUs-Are-Belong-to-Us: An Inside Look at NVIDIA's Self-Healing
GeForce NOW
Infrastructure&lt;/a&gt;
(&lt;a href="https://www.youtube.com/watch?v=iLnHtKwmu2I"&gt;recording&lt;/a&gt;) as they see 19
remediation requests per 1000 nodes a day!
We also see data centers offering spot consumption models and overcommit on
power, making device failures commonplace and a part of the business model.&lt;/p&gt;</description></item><item><title>Image Compatibility In Cloud Native Environments</title><link>https://andygol-k8s.netlify.app/blog/2025/06/25/image-compatibility-in-cloud-native-environments/</link><pubDate>Wed, 25 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/06/25/image-compatibility-in-cloud-native-environments/</guid><description>&lt;p&gt;In industries where systems must run very reliably and meet strict performance criteria such as telecommunication, high-performance or AI computing, containerized applications often need specific operating system configuration or hardware presence.
It is common practice to require the use of specific versions of the kernel, its configuration, device drivers, or system components.
Despite the existence of the &lt;a href="https://opencontainers.org/"&gt;Open Container Initiative (OCI)&lt;/a&gt;, a governing community to define standards and specifications for container images, there has been a gap in expressing such compatibility requirements.
The need to address this issue has led to different proposals and, ultimately, an implementation in Kubernetes' &lt;a href="https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.html"&gt;Node Feature Discovery (NFD)&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Changes to Kubernetes Slack</title><link>https://andygol-k8s.netlify.app/blog/2025/06/16/changes-to-kubernetes-slack/</link><pubDate>Mon, 16 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/06/16/changes-to-kubernetes-slack/</guid><description>&lt;p&gt;&lt;strong&gt;UPDATE&lt;/strong&gt;: We’ve received notice from Salesforce that our Slack workspace &lt;strong&gt;WILL NOT BE DOWNGRADED&lt;/strong&gt; on June 20th. Stand by for more details, but for now, there is no urgency to back up private channels or direct messages.&lt;/p&gt;
&lt;p&gt;&lt;del&gt;Kubernetes Slack will lose its special status and will be changing into a standard free Slack on June 20, 2025&lt;/del&gt;. Sometime later this year, our community may move to a new platform. If you are responsible for a channel or private channel, or a member of a User Group, you will need to take some actions as soon as you can.&lt;/p&gt;</description></item><item><title>Enhancing Kubernetes Event Management with Custom Aggregation</title><link>https://andygol-k8s.netlify.app/blog/2025/06/10/enhancing-kubernetes-event-management-custom-aggregation/</link><pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/06/10/enhancing-kubernetes-event-management-custom-aggregation/</guid><description>&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubernetes-api/cluster-resources/event-v1/"&gt;Events&lt;/a&gt; provide crucial insights into cluster operations, but as clusters grow, managing and analyzing these events becomes increasingly challenging. This blog post explores how to build custom event aggregation systems that help engineering teams better understand cluster behavior and troubleshoot issues more effectively.&lt;/p&gt;
&lt;h2 id="the-challenge-with-kubernetes-events"&gt;The challenge with Kubernetes events&lt;/h2&gt;
&lt;p&gt;In a Kubernetes cluster, events are generated for various operations - from pod scheduling and container starts to volume mounts and network configurations. While these events are invaluable for debugging and monitoring, several challenges emerge in production environments:&lt;/p&gt;</description></item><item><title>Introducing Gateway API Inference Extension</title><link>https://andygol-k8s.netlify.app/blog/2025/06/05/introducing-gateway-api-inference-extension/</link><pubDate>Thu, 05 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/06/05/introducing-gateway-api-inference-extension/</guid><description>&lt;p&gt;Modern generative AI and large language model (LLM) services create unique traffic-routing challenges
on Kubernetes. Unlike typical short-lived, stateless web requests, LLM inference sessions are often
long-running, resource-intensive, and partially stateful. For example, a single GPU-backed model server
may keep multiple inference sessions active and maintain in-memory token caches.&lt;/p&gt;
&lt;p&gt;Traditional load balancers that focus on HTTP paths or round-robin routing lack the specialized capabilities needed
for these workloads. They also don’t account for model identity or request criticality (e.g., interactive
chat vs. batch jobs). Organizations often patch together ad-hoc solutions, but a standardized approach
is missing.&lt;/p&gt;</description></item><item><title>Start Sidecar First: How To Avoid Snags</title><link>https://andygol-k8s.netlify.app/blog/2025/06/03/start-sidecar-first/</link><pubDate>Tue, 03 Jun 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/06/03/start-sidecar-first/</guid><description>&lt;p&gt;From the &lt;a href="https://andygol-k8s.netlify.app/blog/2025/04/22/multi-container-pods-overview/"&gt;Kubernetes Multicontainer Pods: An Overview blog post&lt;/a&gt; you know what their job is, what are the main architectural patterns, and how they are implemented in Kubernetes. The main thing I’ll cover in this article is how to ensure that your sidecar containers start before the main app. It’s more complicated than you might think!&lt;/p&gt;
&lt;h2 id="a-gentle-refresher"&gt;A gentle refresher&lt;/h2&gt;
&lt;p&gt;I'd just like to remind readers that the &lt;a href="https://andygol-k8s.netlify.app/blog/2023/12/13/kubernetes-v1-29-release/"&gt;v1.29.0 release of Kubernetes&lt;/a&gt; added native support for
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/sidecar-containers/"&gt;sidecar containers&lt;/a&gt;, which can now be defined within the &lt;code&gt;.spec.initContainers&lt;/code&gt; field,
but with &lt;code&gt;restartPolicy: Always&lt;/code&gt;. You can see that illustrated in the following example Pod manifest snippet:&lt;/p&gt;</description></item><item><title>Gateway API v1.3.0: Advancements in Request Mirroring, CORS, Gateway Merging, and Retry Budgets</title><link>https://andygol-k8s.netlify.app/blog/2025/06/02/gateway-api-v1-3/</link><pubDate>Mon, 02 Jun 2025 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/06/02/gateway-api-v1-3/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/blog/2025/06/02/gateway-api-v1-3/gateway-api-logo.svg" alt="Gateway API logo"&gt;&lt;/p&gt;
&lt;p&gt;Join us in the Kubernetes SIG Network community in celebrating the general
availability of &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; v1.3.0! We are
also pleased to announce that there are already a number of conformant
implementations to try, made possible by postponing this blog
announcement. Version 1.3.0 of the API was released about a month ago on
April 24, 2025.&lt;/p&gt;
&lt;p&gt;Gateway API v1.3.0 brings a new feature to the &lt;em&gt;Standard&lt;/em&gt; channel
(Gateway API's GA release channel): &lt;em&gt;percentage-based request mirroring&lt;/em&gt;, and
introduces three new experimental features: cross-origin resource sharing (CORS)
filters, a standardized mechanism for listener and gateway merging, and retry
budgets.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: In-Place Pod Resize Graduated to Beta</title><link>https://andygol-k8s.netlify.app/blog/2025/05/16/kubernetes-v1-33-in-place-pod-resize-beta/</link><pubDate>Fri, 16 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/16/kubernetes-v1-33-in-place-pod-resize-beta/</guid><description>&lt;p&gt;On behalf of the Kubernetes project, I am excited to announce that the &lt;strong&gt;in-place Pod resize&lt;/strong&gt; feature (also known as In-Place Pod Vertical Scaling), first introduced as alpha in Kubernetes v1.27, has graduated to &lt;strong&gt;Beta&lt;/strong&gt; and will be enabled by default in the Kubernetes v1.33 release! This marks a significant milestone in making resource management for Kubernetes workloads more flexible and less disruptive.&lt;/p&gt;
&lt;h2 id="what-is-in-place-pod-resize"&gt;What is in-place Pod resize?&lt;/h2&gt;
&lt;p&gt;Traditionally, changing the CPU or memory resources allocated to a container required restarting the Pod. While acceptable for many stateless applications, this could be disruptive for stateful services, batch jobs, or any workloads sensitive to restarts.&lt;/p&gt;</description></item><item><title>Announcing etcd v3.6.0</title><link>https://andygol-k8s.netlify.app/blog/2025/05/15/announcing-etcd-3.6/</link><pubDate>Thu, 15 May 2025 16:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/15/announcing-etcd-3.6/</guid><description>&lt;p&gt;&lt;em&gt;This announcement originally &lt;a href="https://etcd.io/blog/2025/announcing-etcd-3.6/"&gt;appeared&lt;/a&gt; on the etcd blog.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Today, we are releasing &lt;a href="https://github.com/etcd-io/etcd/releases/tag/v3.6.0"&gt;etcd v3.6.0&lt;/a&gt;, the first minor release since etcd v3.5.0 on June 15, 2021. This release
introduces several new features, makes significant progress on long-standing efforts like downgrade support and
migration to v3store, and addresses numerous critical &amp;amp; major issues. It also includes major optimizations in
memory usage, improving efficiency and performance.&lt;/p&gt;
&lt;p&gt;In addition to the features of v3.6.0, etcd has joined Kubernetes as a SIG (sig-etcd), enabling us to improve
project sustainability. We've introduced systematic robustness testing to ensure correctness and reliability.
Through the etcd-operator Working Group, we plan to improve usability as well.&lt;/p&gt;</description></item><item><title>Kubernetes 1.33: Job's SuccessPolicy Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2025/05/15/kubernetes-1-33-jobs-success-policy-goes-ga/</link><pubDate>Thu, 15 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/15/kubernetes-1-33-jobs-success-policy-goes-ga/</guid><description>&lt;p&gt;On behalf of the Kubernetes project, I'm pleased to announce that Job &lt;em&gt;success policy&lt;/em&gt; has graduated to General Availability (GA) as part of the v1.33 release.&lt;/p&gt;
&lt;h2 id="about-job-s-success-policy"&gt;About Job's Success Policy&lt;/h2&gt;
&lt;p&gt;In batch workloads, you might want to use leader-follower patterns like &lt;a href="https://en.wikipedia.org/wiki/Message_Passing_Interface"&gt;MPI&lt;/a&gt;,
in which the leader controls the execution, including the followers' lifecycle.&lt;/p&gt;
&lt;p&gt;In this case, you might want to mark the Job as succeeded
even if some of the indexes failed. Unfortunately, a leader-follower Kubernetes Job that didn't use a success policy would, in most cases, require &lt;strong&gt;all&lt;/strong&gt; Pods to finish successfully
for that Job to reach an overall succeeded state.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Updates to Container Lifecycle</title><link>https://andygol-k8s.netlify.app/blog/2025/05/14/kubernetes-v1-33-updates-to-container-lifecycle/</link><pubDate>Wed, 14 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/14/kubernetes-v1-33-updates-to-container-lifecycle/</guid><description>&lt;p&gt;Kubernetes v1.33 introduces a few updates to the lifecycle of containers. The Sleep action for container lifecycle hooks now supports a zero sleep duration (feature enabled by default).
There is also alpha support for customizing the stop signal sent to containers when they are being terminated.&lt;/p&gt;
&lt;p&gt;This blog post goes into the details of these new aspects of the container lifecycle, and how you can use them.&lt;/p&gt;
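&lt;p&gt;As a minimal sketch (the pod name and image here are placeholders), a zero-duration Sleep hook can be declared like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: sleep-hook-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    lifecycle:
      preStop:
        sleep:
          seconds: 0   # accepted in v1.33; effectively a no-op hook
&lt;/code&gt;&lt;/pre&gt;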
&lt;h2 id="zero-value-for-sleep-action"&gt;Zero value for Sleep action&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.29 introduced the &lt;code&gt;Sleep&lt;/code&gt; action for container PreStop and PostStart Lifecycle hooks. The Sleep action lets your containers pause for a specified duration after the container is started or before it is terminated. This was needed to provide a straightforward way to manage graceful shutdowns. Before the Sleep action, folks used to run the &lt;code&gt;sleep&lt;/code&gt; command using the exec action in their container lifecycle hooks. If you wanted to do this you'd need to have the binary for the &lt;code&gt;sleep&lt;/code&gt; command in your container image. This is difficult if you're using third party images.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Job's Backoff Limit Per Index Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2025/05/13/kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga/</link><pubDate>Tue, 13 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/13/kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga/</guid><description>&lt;p&gt;In Kubernetes v1.33, the &lt;em&gt;Backoff Limit Per Index&lt;/em&gt; feature reaches general
availability (GA). This blog describes the Backoff Limit Per Index feature and
its benefits.&lt;/p&gt;
&lt;h2 id="about-backoff-limit-per-index"&gt;About backoff limit per index&lt;/h2&gt;
&lt;p&gt;When you run workloads on Kubernetes, you must consider scenarios where Pod
failures can affect the completion of your workloads. Ideally, your workload
should tolerate transient failures and continue running.&lt;/p&gt;
&lt;p&gt;To achieve failure tolerance in a Kubernetes Job, you can set the
&lt;code&gt;spec.backoffLimit&lt;/code&gt; field. This field specifies the total number of tolerated
failures.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Image Pull Policy the way you always thought it worked!</title><link>https://andygol-k8s.netlify.app/blog/2025/05/12/kubernetes-v1-33-ensure-secret-pulled-images-alpha/</link><pubDate>Mon, 12 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/12/kubernetes-v1-33-ensure-secret-pulled-images-alpha/</guid><description>&lt;h2 id="image-pull-policy-the-way-you-always-thought-it-worked"&gt;Image Pull Policy the way you always thought it worked!&lt;/h2&gt;
&lt;p&gt;Some things in Kubernetes are surprising, and the way &lt;code&gt;imagePullPolicy&lt;/code&gt; behaves might
be one of them. Given Kubernetes is all about running pods, it may be peculiar
to learn that there has been a caveat to restricting pod access to authenticated images for
over 10 years in the form of &lt;a href="https://github.com/kubernetes/kubernetes/issues/18787"&gt;issue 18787&lt;/a&gt;!
It is an exciting release when you can resolve a ten-year-old issue.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Streaming List responses</title><link>https://andygol-k8s.netlify.app/blog/2025/05/09/kubernetes-v1-33-streaming-list-responses/</link><pubDate>Fri, 09 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/09/kubernetes-v1-33-streaming-list-responses/</guid><description>&lt;p&gt;Managing Kubernetes cluster stability becomes increasingly critical as your infrastructure grows. One of the most challenging aspects of operating large-scale clusters has been handling List requests that fetch substantial datasets - a common operation that could unexpectedly impact your cluster's stability.&lt;/p&gt;
&lt;p&gt;Today, the Kubernetes community is excited to announce a significant architectural improvement: streaming encoding for List responses.&lt;/p&gt;
&lt;h2 id="the-problem-unnecessary-memory-consumption-with-large-resources"&gt;The problem: unnecessary memory consumption with large resources&lt;/h2&gt;
&lt;p&gt;Current API response encoders just serialize an entire response into a single contiguous memory buffer and perform one &lt;a href="https://pkg.go.dev/net/http#ResponseWriter.Write"&gt;ResponseWriter.Write&lt;/a&gt; call to transmit data to the client. Despite HTTP/2's capability to split responses into smaller frames for transmission, the underlying HTTP server continues to hold the complete response data as a single buffer. Even as individual frames are transmitted to the client, the memory associated with these frames cannot be freed incrementally.&lt;/p&gt;</description></item><item><title>Kubernetes 1.33: Volume Populators Graduate to GA</title><link>https://andygol-k8s.netlify.app/blog/2025/05/08/kubernetes-v1-33-volume-populators-ga/</link><pubDate>Thu, 08 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/08/kubernetes-v1-33-volume-populators-ga/</guid><description>&lt;p&gt;Kubernetes &lt;em&gt;volume populators&lt;/em&gt; are now generally available (GA)! The &lt;code&gt;AnyVolumeDataSource&lt;/code&gt; feature
gate is treated as always enabled for Kubernetes v1.33, which means that users can specify any appropriate
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources"&gt;custom resource&lt;/a&gt;
as the data source of a PersistentVolumeClaim (PVC).&lt;/p&gt;
&lt;p&gt;An example of how to use dataSourceRef in PVC:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;v1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;metadata&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;pvc1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;spec&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;...&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;dataSourceRef&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;apiGroup&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;provider.example.com&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;Provider&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;provider1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="what-is-new"&gt;What is new&lt;/h2&gt;
&lt;p&gt;There are four major enhancements from beta.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: From Secrets to Service Accounts: Kubernetes Image Pulls Evolved</title><link>https://andygol-k8s.netlify.app/blog/2025/05/07/kubernetes-v1-33-wi-for-image-pulls/</link><pubDate>Wed, 07 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/07/kubernetes-v1-33-wi-for-image-pulls/</guid><description>&lt;p&gt;Kubernetes has steadily evolved to reduce reliance on long-lived credentials
stored in the API.
A prime example of this shift is the transition of Kubernetes Service Account (KSA) tokens
from long-lived, static tokens to ephemeral, automatically rotated tokens
with OpenID Connect (OIDC)-compliant semantics.
This advancement enables workloads to securely authenticate with external services
without needing persistent secrets.&lt;/p&gt;
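&lt;p&gt;For illustration, this is how a workload can request such an ephemeral, audience-bound token through a projected volume (a sketch; the service account name and audience are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: my-sa              # placeholder service account
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: oidc-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: oidc-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: https://example.com   # placeholder audience
          expirationSeconds: 3600         # the kubelet rotates the token
&lt;/code&gt;&lt;/pre&gt;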
&lt;p&gt;However, one major gap remains: &lt;strong&gt;image pull authentication&lt;/strong&gt;.
Today, Kubernetes clusters rely on image pull secrets stored in the API,
which are long-lived and difficult to rotate,
or on node-level kubelet credential providers,
which allow any pod running on a node to access the same credentials.
This presents security and operational challenges.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Fine-grained SupplementalGroups Control Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2025/05/06/kubernetes-v1-33-fine-grained-supplementalgroups-control-beta/</link><pubDate>Tue, 06 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/06/kubernetes-v1-33-fine-grained-supplementalgroups-control-beta/</guid><description>&lt;p&gt;The new field, &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt;, was introduced as an opt-in alpha feature for Kubernetes v1.31 and has graduated to beta in v1.33; the corresponding feature gate (&lt;code&gt;SupplementalGroupsPolicy&lt;/code&gt;) is now enabled by default. This feature enables more precise control over supplemental groups in containers, which can strengthen the security posture, particularly when accessing volumes. It also enhances the transparency of UID/GID details in containers, offering improved security oversight.&lt;/p&gt;
&lt;p&gt;Please be aware that this beta release contains some behavioral breaking change. See &lt;a href="#the-behavioral-changes-introduced-in-beta"&gt;The Behavioral Changes Introduced In Beta&lt;/a&gt; and &lt;a href="#upgrade-consideration"&gt;Upgrade Considerations&lt;/a&gt; sections for details.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Prevent PersistentVolume Leaks When Deleting out of Order graduates to GA</title><link>https://andygol-k8s.netlify.app/blog/2025/05/05/kubernetes-v1-33-prevent-persistentvolume-leaks-when-deleting-out-of-order-graduate-to-ga/</link><pubDate>Mon, 05 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/05/kubernetes-v1-33-prevent-persistentvolume-leaks-when-deleting-out-of-order-graduate-to-ga/</guid><description>&lt;p&gt;I am thrilled to announce that the feature to prevent
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt; (or PVs for short)
leaks when deleting out of order has graduated to General Availability (GA) in
Kubernetes v1.33! This improvement, initially introduced as a beta
feature in Kubernetes v1.31, ensures that your storage resources are properly
reclaimed, preventing unwanted leaks.&lt;/p&gt;
&lt;h2 id="how-did-reclaim-work-in-previous-kubernetes-releases"&gt;How did reclaim work in previous Kubernetes releases?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#Introduction"&gt;PersistentVolumeClaim&lt;/a&gt; (or PVC for short) is
a user's request for storage. A PVC and a PV are considered &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#Binding"&gt;Bound&lt;/a&gt;
when a newly created PV or an existing matching PV is found for the claim. The PVs themselves are
backed by volumes allocated by the storage backend.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Mutable CSI Node Allocatable Count</title><link>https://andygol-k8s.netlify.app/blog/2025/05/02/kubernetes-1-33-mutable-csi-node-allocatable-count/</link><pubDate>Fri, 02 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/02/kubernetes-1-33-mutable-csi-node-allocatable-count/</guid><description>&lt;p&gt;Scheduling stateful applications reliably depends heavily on accurate information about resource availability on nodes.
Kubernetes v1.33 introduces an alpha feature called &lt;em&gt;mutable CSI node allocatable count&lt;/em&gt;, allowing Container Storage Interface (CSI) drivers to dynamically update the reported maximum number of volumes that a node can handle.
This capability significantly enhances the accuracy of pod scheduling decisions and reduces scheduling failures caused by outdated volume capacity information.&lt;/p&gt;
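&lt;p&gt;As a sketch of what this can look like (an assumption of the alpha API shape; the &lt;code&gt;MutableCSINodeAllocatableCount&lt;/code&gt; feature gate must be enabled, and the driver name is a placeholder), a CSI driver object can request periodic refreshes of its reported capacity:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com            # placeholder driver name
spec:
  nodeAllocatableUpdatePeriodSeconds: 60  # re-query the node's attach capacity every minute
&lt;/code&gt;&lt;/pre&gt;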
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Traditionally, Kubernetes CSI drivers report a static maximum volume attachment limit when initializing. However, actual attachment capacities can change during a node's lifecycle for various reasons, such as:&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: New features in DRA</title><link>https://andygol-k8s.netlify.app/blog/2025/05/01/kubernetes-v1-33-dra-updates/</link><pubDate>Thu, 01 May 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/05/01/kubernetes-v1-33-dra-updates/</guid><description>&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/"&gt;Dynamic Resource Allocation&lt;/a&gt; (DRA) was originally introduced as an alpha feature in the v1.26 release, and then went through a significant redesign for Kubernetes v1.31. The main DRA feature went to beta in v1.32, and the project hopes it will be generally available in Kubernetes v1.34.&lt;/p&gt;
&lt;p&gt;The basic feature set of DRA provides a far more powerful and flexible API for requesting devices than Device Plugin. And while DRA remains a beta feature for v1.33, the DRA team has been hard at work implementing a number of new features and UX improvements. One feature has been promoted to beta, while a number of new features have been added in alpha. The team has also made progress towards getting DRA ready for GA.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Storage Capacity Scoring of Nodes for Dynamic Provisioning (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2025/04/30/kubernetes-v1-33-storage-capacity-scoring-feature/</link><pubDate>Wed, 30 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/30/kubernetes-v1-33-storage-capacity-scoring-feature/</guid><description>&lt;p&gt;Kubernetes v1.33 introduces a new alpha feature called &lt;code&gt;StorageCapacityScoring&lt;/code&gt;. This feature adds a scoring method for pod scheduling
with &lt;a href="https://andygol-k8s.netlify.app/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/"&gt;the topology-aware volume provisioning&lt;/a&gt;.
This feature eases to schedule pods on nodes with either the most or least available storage capacity.&lt;/p&gt;
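&lt;p&gt;Because &lt;code&gt;StorageCapacityScoring&lt;/code&gt; is an alpha feature, it must be enabled explicitly. If you manage the kube-scheduler flags directly, one way to do that looks like this (a sketch; your deployment method may differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;kube-scheduler --feature-gates=StorageCapacityScoring=true
&lt;/code&gt;&lt;/pre&gt;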
&lt;h2 id="about-this-feature"&gt;About this feature&lt;/h2&gt;
&lt;p&gt;This feature extends the kube-scheduler's VolumeBinding plugin to perform scoring using node storage capacity information
obtained from &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-capacity/"&gt;Storage Capacity&lt;/a&gt;. Until now, the plugin could only filter out nodes with insufficient storage capacity,
so you had to use a scheduler extender to achieve storage-capacity-based pod scheduling.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Image Volumes graduate to beta!</title><link>https://andygol-k8s.netlify.app/blog/2025/04/29/kubernetes-v1-33-image-volume-beta/</link><pubDate>Tue, 29 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/29/kubernetes-v1-33-image-volume-beta/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/blog/2024/08/16/kubernetes-1-31-image-volume-source"&gt;Image Volumes&lt;/a&gt; were
introduced as an Alpha feature with the Kubernetes v1.31 release as part of
&lt;a href="https://github.com/kubernetes/enhancements/issues/4639"&gt;KEP-4639&lt;/a&gt;. In Kubernetes v1.33, this feature graduates to &lt;strong&gt;beta&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Please note that the feature is still &lt;em&gt;disabled&lt;/em&gt; by default, because not all
&lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;container runtimes&lt;/a&gt; have
full support for it. &lt;a href="https://cri-o.io"&gt;CRI-O&lt;/a&gt; has supported the initial feature since version v1.31 and
will add support for the beta graduation of Image Volumes in v1.33.
&lt;a href="https://github.com/containerd/containerd/pull/10579"&gt;containerd merged&lt;/a&gt; support
for the alpha feature which will be part of the v2.1.0 release and is working on
beta support as part of &lt;a href="https://github.com/containerd/containerd/pull/11578"&gt;PR #11578&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: HorizontalPodAutoscaler Configurable Tolerance</title><link>https://andygol-k8s.netlify.app/blog/2025/04/28/kubernetes-v1-33-hpa-configurable-tolerance/</link><pubDate>Mon, 28 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/28/kubernetes-v1-33-hpa-configurable-tolerance/</guid><description>&lt;p&gt;This post describes &lt;em&gt;configurable tolerance for horizontal Pod autoscaling&lt;/em&gt;,
a new alpha feature first available in Kubernetes 1.33.&lt;/p&gt;
&lt;h2 id="what-is-it"&gt;What is it?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;Horizontal Pod Autoscaling&lt;/a&gt;
is a well-known Kubernetes feature that allows your workload to
automatically resize by adding or removing replicas based on resource
utilization.&lt;/p&gt;
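&lt;p&gt;As a refresher, a minimal HPA manifest targeting 75% average CPU utilization looks roughly like this (the workload name and replica bounds are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # placeholder Deployment
  minReplicas: 1
  maxReplicas: 100
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
&lt;/code&gt;&lt;/pre&gt;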
&lt;p&gt;Let's say you have a web application running in a Kubernetes cluster with 50
replicas. You configure the HorizontalPodAutoscaler (HPA) to scale based on
CPU utilization, with a target of 75% utilization. Now, imagine that the current
CPU utilization across all replicas is 90%, which is higher than the desired
75%. The HPA will calculate the required number of replicas using the formula:&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: User Namespaces enabled by default!</title><link>https://andygol-k8s.netlify.app/blog/2025/04/25/userns-enabled-by-default/</link><pubDate>Fri, 25 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/25/userns-enabled-by-default/</guid><description>&lt;p&gt;In Kubernetes v1.33 support for user namespaces is enabled by default. This means
that, when the stack requirements are met, pods can opt-in to use user
namespaces. You no longer need to enable any Kubernetes feature gate to use it!&lt;/p&gt;
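&lt;p&gt;Opting in is a one-line change to the pod spec (a minimal sketch; the pod name and image are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # run this pod in its own user namespace
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;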
&lt;p&gt;In this blog post we answer some common questions about user namespaces. But,
before we dive into that, let's recap what user namespaces are and why they are
important.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Continuing the transition from Endpoints to EndpointSlices</title><link>https://andygol-k8s.netlify.app/blog/2025/04/24/endpoints-deprecation/</link><pubDate>Thu, 24 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/24/endpoints-deprecation/</guid><description>&lt;p&gt;Since the addition of &lt;a href="https://andygol-k8s.netlify.app/blog/2020/09/02/scaling-kubernetes-networking-with-endpointslices/"&gt;EndpointSlices&lt;/a&gt; (&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0752-endpointslices/README.md"&gt;KEP-752&lt;/a&gt;) as alpha in v1.15
and later GA in v1.21, the
Endpoints API in Kubernetes has been gathering dust. New Service
features like &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/dual-stack/"&gt;dual-stack networking&lt;/a&gt; and &lt;a href="https://andygol-k8s.netlify.app/docs/reference/networking/virtual-ips/#traffic-distribution"&gt;traffic distribution&lt;/a&gt; are
only supported via the EndpointSlice API, so all service proxies,
Gateway API implementations, and similar controllers have had to be
ported from using Endpoints to using EndpointSlices. At this point,
the Endpoints API is really only there to avoid breaking end user
workloads and scripts that still make use of it.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33: Octarine</title><link>https://andygol-k8s.netlify.app/blog/2025/04/23/kubernetes-v1-33-release/</link><pubDate>Wed, 23 Apr 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/23/kubernetes-v1-33-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Agustina Barbetta, Aakanksha Bhende, Udi Hofesh, Ryota Sawada, Sneha Yadav&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.33 introduces new stable, beta, and alpha
features. The consistent delivery of high-quality releases underscores the strength of our
development cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 64 enhancements. Of those enhancements, 18 have graduated to Stable, 20 are
entering Beta, 24 have entered Alpha, and 2 are deprecated or withdrawn.&lt;/p&gt;</description></item><item><title>Kubernetes Multicontainer Pods: An Overview</title><link>https://andygol-k8s.netlify.app/blog/2025/04/22/multi-container-pods-overview/</link><pubDate>Tue, 22 Apr 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/22/multi-container-pods-overview/</guid><description>&lt;p&gt;As cloud-native architectures continue to evolve, Kubernetes has become the go-to platform for deploying complex, distributed systems. One of the most powerful yet nuanced design patterns in this ecosystem is the sidecar pattern—a technique that allows developers to extend application functionality without diving deep into source code.&lt;/p&gt;
&lt;h2 id="the-origins-of-the-sidecar-pattern"&gt;The origins of the sidecar pattern&lt;/h2&gt;
&lt;p&gt;Think of a sidecar like a trusty companion motorcycle attachment. Historically, IT infrastructures have always used auxiliary services to handle critical tasks. Before containers, we relied on background processes and helper daemons to manage logging, monitoring, and networking. The microservices revolution transformed this approach, making sidecars a structured and intentional architectural choice.
With the rise of microservices, the sidecar pattern became more clearly defined, allowing developers to offload specific responsibilities from the main service without altering its code. Service meshes like Istio and Linkerd have popularized sidecar proxies, demonstrating how these companion containers can elegantly handle observability, security, and traffic management in distributed systems.&lt;/p&gt;</description></item><item><title>Introducing kube-scheduler-simulator</title><link>https://andygol-k8s.netlify.app/blog/2025/04/07/introducing-kube-scheduler-simulator/</link><pubDate>Mon, 07 Apr 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/04/07/introducing-kube-scheduler-simulator/</guid><description>&lt;p&gt;The Kubernetes Scheduler is a crucial control plane component that determines which node a Pod will run on.
Thus, anyone utilizing Kubernetes relies on a scheduler.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-simulator"&gt;kube-scheduler-simulator&lt;/a&gt; is a &lt;em&gt;simulator&lt;/em&gt; for the Kubernetes scheduler, that started as a &lt;a href="https://summerofcode.withgoogle.com/"&gt;Google Summer of Code 2021&lt;/a&gt; project developed by me (Kensei Nakada) and later received a lot of contributions.
This tool allows users to closely examine the scheduler’s behavior and decisions.&lt;/p&gt;
&lt;p&gt;It is useful for casual users who employ scheduling constraints (for example, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity"&gt;inter-Pod affinity&lt;/a&gt;)
and experts who extend the scheduler with custom plugins.&lt;/p&gt;</description></item><item><title>Kubernetes v1.33 sneak peek</title><link>https://andygol-k8s.netlify.app/blog/2025/03/26/kubernetes-v1-33-upcoming-changes/</link><pubDate>Wed, 26 Mar 2025 10:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/03/26/kubernetes-v1-33-upcoming-changes/</guid><description>&lt;p&gt;As the release of Kubernetes v1.33 approaches, the Kubernetes project continues to evolve. Features may be deprecated, removed, or replaced to improve the overall health of the project. This blog post outlines some planned changes for the v1.33 release, which the release team believes you should be aware of to ensure the continued smooth operation of your Kubernetes environment and to keep you up-to-date with the latest developments. The information below is based on the current status of the v1.33 release and is subject to change before the final release date.&lt;/p&gt;</description></item><item><title>Fresh Swap Features for Linux Users in Kubernetes 1.32</title><link>https://andygol-k8s.netlify.app/blog/2025/03/25/swap-linux-improvements/</link><pubDate>Tue, 25 Mar 2025 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/03/25/swap-linux-improvements/</guid><description>&lt;p&gt;Swap is a fundamental and an invaluable Linux feature.
It offers numerous benefits, such as effectively increasing a node’s memory by
swapping out unused data,
shielding nodes from system-level memory spikes,
preventing Pods from crashing when they hit their memory limits,
and &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#user-stories"&gt;much more&lt;/a&gt;.
As a result, the node special interest group within the Kubernetes project
has invested significant effort into supporting swap on Linux nodes.&lt;/p&gt;
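&lt;p&gt;As a rough sketch (not from the original post; exact fields and defaults depend on your Kubernetes version and the &lt;code&gt;NodeSwap&lt;/code&gt; feature gate), swap support is driven by kubelet configuration on each Linux node:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;# Hypothetical kubelet configuration fragment for a Linux node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow the kubelet to start on a node that has swap enabled.
failSwapOn: false
memorySwap:
  # LimitedSwap allows eligible workloads to use swap within bounds;
  # by default, workloads are kept off swap entirely.
  swapBehavior: LimitedSwap
&lt;/code&gt;&lt;/pre&gt;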
&lt;p&gt;The 1.22 release &lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/09/run-nodes-with-swap-alpha/"&gt;introduced&lt;/a&gt; Alpha support
for configuring swap memory usage for Kubernetes workloads running on Linux on a per-node basis.
Later, in release 1.28, support for swap on Linux nodes graduated to Beta, along with many
new improvements.
In the following Kubernetes releases, more improvements were made, paving the way
to GA in the near future.&lt;/p&gt;</description></item><item><title>Ingress-nginx CVE-2025-1974: What You Need to Know</title><link>https://andygol-k8s.netlify.app/blog/2025/03/24/ingress-nginx-cve-2025-1974/</link><pubDate>Mon, 24 Mar 2025 12:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/03/24/ingress-nginx-cve-2025-1974/</guid><description>&lt;p&gt;Today, the ingress-nginx maintainers have released patches for a batch of critical vulnerabilities that could make it easy for attackers to take over your Kubernetes cluster: &lt;a href="https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.12.1"&gt;ingress-nginx v1.12.1&lt;/a&gt; and &lt;a href="https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.11.5"&gt;ingress-nginx v1.11.5&lt;/a&gt;. If you are among the over 40% of Kubernetes administrators using &lt;a href="https://github.com/kubernetes/ingress-nginx/"&gt;ingress-nginx&lt;/a&gt;, you should take action immediately to protect your users and data.&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt; is the traditional Kubernetes feature for exposing your workload Pods to the world so that they can be useful. In an implementation-agnostic way, Kubernetes users can define how their applications should be made available on the network. Then, an &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress-controllers/"&gt;ingress controller&lt;/a&gt; uses that definition to set up local or cloud resources as required for the user’s particular situation and needs.&lt;/p&gt;</description></item><item><title>Introducing JobSet</title><link>https://andygol-k8s.netlify.app/blog/2025/03/23/introducing-jobset/</link><pubDate>Sun, 23 Mar 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/03/23/introducing-jobset/</guid><description>&lt;p&gt;&lt;strong&gt;Authors&lt;/strong&gt;: Daniel Vega-Myhre (Google), Abdullah Gharaibeh (Google), Kevin Hannon (Red Hat)&lt;/p&gt;
&lt;p&gt;In this article, we introduce &lt;a href="https://jobset.sigs.k8s.io/"&gt;JobSet&lt;/a&gt;, an open source API for
representing distributed jobs. The goal of JobSet is to provide a unified API for distributed ML
training and HPC workloads on Kubernetes.&lt;/p&gt;
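&lt;p&gt;To make the idea concrete, here is a minimal, hypothetical JobSet manifest (names and sizes are illustrative; check the JobSet documentation for the API version supported by your installation):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: example-jobset
spec:
  replicatedJobs:
  - name: workers
    replicas: 2          # two identical Jobs managed as one unit
    template:
      spec:
        parallelism: 2
        completions: 2
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: worker
              image: busybox
              command: [sh, -c, echo hello]
&lt;/code&gt;&lt;/pre&gt;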
&lt;h2 id="why-jobset"&gt;Why JobSet?&lt;/h2&gt;
&lt;p&gt;The Kubernetes community’s recent enhancements to the batch ecosystem on Kubernetes have attracted ML
engineers who have found it to be a natural fit for the requirements of running distributed training
workloads.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Apps</title><link>https://andygol-k8s.netlify.app/blog/2025/03/12/sig-apps-spotlight-2025/</link><pubDate>Wed, 12 Mar 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/03/12/sig-apps-spotlight-2025/</guid><description>&lt;p&gt;In our ongoing SIG Spotlight series, we dive into the heart of the Kubernetes project by talking to
the leaders of its various Special Interest Groups (SIGs). This time, we focus on
&lt;strong&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps#apps-special-interest-group"&gt;SIG Apps&lt;/a&gt;&lt;/strong&gt;,
the group responsible for everything related to developing, deploying, and operating applications on
Kubernetes. &lt;a href="https://www.linkedin.com/in/sandipanpanda"&gt;Sandipan Panda&lt;/a&gt;
(&lt;a href="https://www.devzero.io/"&gt;DevZero&lt;/a&gt;) had the opportunity to interview &lt;a href="https://github.com/soltysh"&gt;Maciej
Szulik&lt;/a&gt; (&lt;a href="https://defenseunicorns.com/"&gt;Defense Unicorns&lt;/a&gt;) and &lt;a href="https://github.com/janetkuo"&gt;Janet
Kuo&lt;/a&gt; (&lt;a href="https://about.google/"&gt;Google&lt;/a&gt;), the chairs and tech leads of
SIG Apps. They shared their experiences, challenges, and visions for the future of application
management within the Kubernetes ecosystem.&lt;/p&gt;</description></item><item><title>Spotlight on SIG etcd</title><link>https://andygol-k8s.netlify.app/blog/2025/03/04/sig-etcd-spotlight/</link><pubDate>Tue, 04 Mar 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/03/04/sig-etcd-spotlight/</guid><description>&lt;p&gt;In this SIG etcd spotlight we talked with &lt;a href="https://github.com/jmhbnz"&gt;James Blair&lt;/a&gt;, &lt;a href="https://github.com/serathius"&gt;Marek
Siarkowicz&lt;/a&gt;, &lt;a href="https://github.com/wenjiaswe"&gt;Wenjia Zhang&lt;/a&gt;, and
&lt;a href="https://github.com/ahrtr"&gt;Benjamin Wang&lt;/a&gt; to learn a bit more about this Kubernetes Special Interest
Group.&lt;/p&gt;
&lt;h2 id="introducing-sig-etcd"&gt;Introducing SIG etcd&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico: Hello, thank you for the time! Let’s start with some introductions, could you tell us a
bit about yourself, your role and how you got involved in Kubernetes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Benjamin:&lt;/strong&gt; Hello, I am Benjamin. I am a SIG etcd Tech Lead and one of the etcd maintainers. I
work for VMware, which is part of the Broadcom group. I got involved in Kubernetes &amp;amp; etcd &amp;amp; CSI
(&lt;a href="https://github.com/container-storage-interface/spec/blob/master/spec.md"&gt;Container Storage Interface&lt;/a&gt;)
because of work and also a big passion for open source. I have been working on Kubernetes &amp;amp; etcd
(and also CSI) since 2020.&lt;/p&gt;</description></item><item><title>NFTables mode for kube-proxy</title><link>https://andygol-k8s.netlify.app/blog/2025/02/28/nftables-kube-proxy/</link><pubDate>Fri, 28 Feb 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/02/28/nftables-kube-proxy/</guid><description>&lt;p&gt;A new nftables mode for kube-proxy was introduced as an alpha feature
in Kubernetes 1.29. Currently in beta, it is expected to be GA as of
1.33. The new mode fixes long-standing performance problems with the
iptables mode, and all users running on systems with reasonably recent
kernels are encouraged to try it out. (For compatibility reasons, even
once nftables becomes GA, iptables will still be the &lt;em&gt;default&lt;/em&gt;.)&lt;/p&gt;
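&lt;p&gt;In outline, trying out the new mode is a matter of switching kube-proxy's mode (a sketch; how you set this depends on how your cluster deploys kube-proxy, and the &lt;code&gt;NFTablesProxyMode&lt;/code&gt; feature gate must be enabled while the mode is still beta):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Select the nftables backend instead of the default iptables mode.
mode: nftables
&lt;/code&gt;&lt;/pre&gt;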
&lt;h2 id="why-nftables-part-1-data-plane-latency"&gt;Why nftables? Part 1: data plane latency&lt;/h2&gt;
&lt;p&gt;The iptables API was designed for implementing simple firewalls, and
has problems scaling up to support Service proxying in a large
Kubernetes cluster with tens of thousands of Services.&lt;/p&gt;</description></item><item><title>The Cloud Controller Manager Chicken and Egg Problem</title><link>https://andygol-k8s.netlify.app/blog/2025/02/14/cloud-controller-manager-chicken-egg-problem/</link><pubDate>Fri, 14 Feb 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/02/14/cloud-controller-manager-chicken-egg-problem/</guid><description>&lt;p&gt;Kubernetes 1.31
&lt;a href="https://andygol-k8s.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/"&gt;completed the largest migration in Kubernetes history&lt;/a&gt;, removing the in-tree
cloud provider. While the component migration is now done, this leaves some additional
complexity for users and installer projects (for example, kOps or Cluster API). We will go
over those additional steps and failure points and make recommendations for cluster owners.
This migration was complex, and some logic had to be extracted from the core components,
building four new subsystems.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Architecture: Enhancements</title><link>https://andygol-k8s.netlify.app/blog/2025/01/21/sig-architecture-enhancements/</link><pubDate>Tue, 21 Jan 2025 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2025/01/21/sig-architecture-enhancements/</guid><description>&lt;p&gt;&lt;em&gt;This is the fourth interview of a SIG Architecture Spotlight series that will cover the different
subprojects, and we will be covering &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#enhancements"&gt;SIG Architecture:
Enhancements&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In this SIG Architecture spotlight we talked with &lt;a href="https://github.com/kikisdeliveryservice"&gt;Kirsten
Garrison&lt;/a&gt;, lead of the Enhancements subproject.&lt;/p&gt;
&lt;h2 id="the-enhancements-subproject"&gt;The Enhancements subproject&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM): Hi Kirsten, very happy to have the opportunity to talk about the Enhancements
subproject. Let's start with some quick information about yourself and your role.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Kirsten Garrison (KG)&lt;/strong&gt;: I’m a lead of the Enhancements subproject of SIG-Architecture and
currently work at Google. I first got involved by contributing to the service-catalog project with
the help of &lt;a href="https://github.com/carolynvs"&gt;Carolyn Van Slyck&lt;/a&gt;. With time, &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.17/release_team.md"&gt;I joined the Release
team&lt;/a&gt;,
eventually becoming the Enhancements Lead and a Release Lead shadow. While on the release team, I
worked on some ideas to make the process better for the SIGs and Enhancements team (the opt-in
process) based on my team’s experiences. Eventually, I started attending Subproject meetings and
contributing to the Subproject’s work.&lt;/p&gt;</description></item><item><title>Kubernetes 1.32: Moving Volume Group Snapshots to Beta</title><link>https://andygol-k8s.netlify.app/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/</link><pubDate>Wed, 18 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/</guid><description>&lt;p&gt;Volume group snapshots were &lt;a href="https://andygol-k8s.netlify.app/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/"&gt;introduced&lt;/a&gt;
as an Alpha feature with the Kubernetes 1.27 release.
The recent release of Kubernetes v1.32 moved that support to &lt;strong&gt;beta&lt;/strong&gt;.
The support for volume group snapshots relies on a set of
&lt;a href="https://kubernetes-csi.github.io/docs/group-snapshot-restore-feature.html#volume-group-snapshot-apis"&gt;extension APIs for group snapshots&lt;/a&gt;.
These APIs allow users to take crash-consistent snapshots for a set of volumes.
Behind the scenes, Kubernetes uses a label selector to group multiple PersistentVolumeClaims
for snapshotting.
A key aim is to allow you to restore that set of snapshots to new volumes and
recover your workload based on a crash-consistent recovery point.&lt;/p&gt;</description></item><item><title>Enhancing Kubernetes API Server Efficiency with API Streaming</title><link>https://andygol-k8s.netlify.app/blog/2024/12/17/kube-apiserver-api-streaming/</link><pubDate>Tue, 17 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/12/17/kube-apiserver-api-streaming/</guid><description>&lt;p&gt;Managing Kubernetes clusters efficiently is critical, especially as they grow in size.
A significant challenge with large clusters is the memory overhead caused by &lt;strong&gt;list&lt;/strong&gt; requests.&lt;/p&gt;
&lt;p&gt;In the existing implementation, the kube-apiserver processes &lt;strong&gt;list&lt;/strong&gt; requests by assembling the entire response in memory before transmitting any data to the client.
But what if the response body is substantial, say hundreds of megabytes? Additionally, imagine a scenario where multiple &lt;strong&gt;list&lt;/strong&gt; requests flood in simultaneously, perhaps after a brief network outage.
While &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/flow-control/"&gt;API Priority and Fairness&lt;/a&gt; has proven to reasonably protect kube-apiserver from CPU overload, its impact is visibly smaller for memory protection.
This can be explained by the differing nature of resource consumption by a single API request: the CPU usage at any given time is capped by a constant, whereas memory, being incompressible, can grow proportionally with the number of processed objects and is unbounded.
This situation poses a genuine risk, potentially overwhelming and crashing any kube-apiserver within seconds due to out-of-memory (OOM) conditions. To better visualize the issue, let's consider the below graph.&lt;/p&gt;</description></item><item><title>Kubernetes v1.32 Adds A New CPU Manager Static Policy Option For Strict CPU Reservation</title><link>https://andygol-k8s.netlify.app/blog/2024/12/16/cpumanager-strict-cpu-reservation/</link><pubDate>Mon, 16 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/12/16/cpumanager-strict-cpu-reservation/</guid><description>&lt;p&gt;In Kubernetes v1.32, after years of community discussion, we are excited to introduce a
&lt;code&gt;strict-cpu-reservation&lt;/code&gt; option for the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options"&gt;CPU Manager static policy&lt;/a&gt;.
This feature is currently in alpha, with the associated policy hidden by default. You can only use the
policy if you explicitly enable the alpha behavior in your cluster.&lt;/p&gt;
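&lt;p&gt;As a sketch, enabling the alpha option involves kubelet configuration along the following lines (the CPU list is illustrative; because the option is alpha, the &lt;code&gt;CPUManagerPolicyAlphaOptions&lt;/code&gt; feature gate must be enabled):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  strict-cpu-reservation: &amp;quot;true&amp;quot;
# Illustrative: CPUs reserved for OS and Kubernetes system daemons.
reservedSystemCPUs: 0,1
&lt;/code&gt;&lt;/pre&gt;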
&lt;h2 id="understanding-the-feature"&gt;Understanding the feature&lt;/h2&gt;
&lt;p&gt;The CPU Manager static policy is used to reduce latency or improve performance. The &lt;code&gt;reservedSystemCPUs&lt;/code&gt; defines an explicit CPU set for OS system daemons and Kubernetes system daemons. This option is designed for Telco/NFV type use cases where uncontrolled interrupts/timers may impact the workload performance. You can use this option to define the explicit cpuset for the system/Kubernetes daemons as well as the interrupts/timers, so that the remaining CPUs on the system can be used exclusively for workloads, with less impact from uncontrolled interrupts/timers. More details of this parameter can be found on the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/reserve-compute-resources/#explicitly-reserved-cpu-list"&gt;Explicitly Reserved CPU List&lt;/a&gt; page.&lt;/p&gt;</description></item><item><title>Kubernetes v1.32: Memory Manager Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2024/12/13/memory-manager-goes-ga/</link><pubDate>Fri, 13 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/12/13/memory-manager-goes-ga/</guid><description>&lt;p&gt;With Kubernetes 1.32, the memory manager has officially graduated to General Availability (GA),
marking a significant milestone in the journey toward efficient and predictable memory allocation for containerized applications.
Since Kubernetes v1.22, where it graduated to beta, the memory manager has proved itself reliable, stable and a good complementary feature for the
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/"&gt;CPU Manager&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As part of kubelet's workload admission process,
the memory manager provides topology hints
to optimize memory allocation and alignment.
This enables users to allocate exclusive
memory for Pods in the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-qos/#guaranteed"&gt;Guaranteed&lt;/a&gt; QoS class.
More details about the process can be found in the memory manager goes to beta &lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/11/kubernetes-1-22-feature-memory-manager-moves-to-beta/"&gt;blog&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes v1.32: QueueingHint Brings a New Possibility to Optimize Pod Scheduling</title><link>https://andygol-k8s.netlify.app/blog/2024/12/12/scheduler-queueinghint/</link><pubDate>Thu, 12 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/12/12/scheduler-queueinghint/</guid><description>&lt;p&gt;The Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/kube-scheduler/"&gt;scheduler&lt;/a&gt; is the core
component that selects the nodes on which new Pods run. The scheduler processes
these new Pods &lt;strong&gt;one by one&lt;/strong&gt;. Therefore, the larger your clusters, the more important
the throughput of the scheduler becomes.&lt;/p&gt;
&lt;p&gt;Over the years, Kubernetes SIG Scheduling has improved the throughput
of the scheduler in multiple enhancements. This blog post describes a major improvement to the
scheduler in Kubernetes v1.32: a
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/scheduling-framework/#extension-points"&gt;scheduling context element&lt;/a&gt;
named &lt;em&gt;QueueingHint&lt;/em&gt;. This page provides background knowledge of the scheduler and explains how
QueueingHint improves scheduling throughput.&lt;/p&gt;</description></item><item><title>Kubernetes v1.32: Penelope</title><link>https://andygol-k8s.netlify.app/blog/2024/12/11/kubernetes-v1-32-release/</link><pubDate>Wed, 11 Dec 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/12/11/kubernetes-v1-32-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Matteo Bianchi, Edith Puclla, William Rizzo, Ryota Sawada, Rashan Smith&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.32: Penelope!&lt;/p&gt;
&lt;p&gt;In line with previous releases, the release of Kubernetes v1.32 introduces new stable, beta, and alpha features.
The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant
support from our community.
This release consists of 44 enhancements in total.
Of those enhancements, 13 have graduated to Stable, 12 are entering Beta, and 19 have entered Alpha.&lt;/p&gt;</description></item><item><title>Gateway API v1.2: WebSockets, Timeouts, Retries, and More</title><link>https://andygol-k8s.netlify.app/blog/2024/11/21/gateway-api-v1-2/</link><pubDate>Thu, 21 Nov 2024 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/11/21/gateway-api-v1-2/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/blog/2024/11/21/gateway-api-v1-2/gateway-api-logo.svg" alt="Gateway API logo"&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes SIG Network is delighted to announce the general availability of
&lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; v1.2! This version of the API
was released on October 3, and we're pleased to report that we now have a
number of conformant implementations of it for you to try out.&lt;/p&gt;
&lt;p&gt;Gateway API v1.2 brings a number of new features to the &lt;em&gt;Standard channel&lt;/em&gt;
(Gateway API's GA release channel), introduces some new experimental features,
and inaugurates our new release process — but it also brings two breaking
changes that you'll want to be careful of.&lt;/p&gt;</description></item><item><title>How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack</title><link>https://andygol-k8s.netlify.app/blog/2024/11/21/dynamic-kubernetes-api-server-for-cozystack/</link><pubDate>Thu, 21 Nov 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/11/21/dynamic-kubernetes-api-server-for-cozystack/</guid><description>&lt;p&gt;Hi there! I'm Andrei Kvapil, but you might know me as &lt;a href="https://github.com/kvaps"&gt;@kvaps&lt;/a&gt; in communities dedicated to Kubernetes
and cloud-native tools. In this article, I want to share how we implemented our own extension api-server
in the open-source PaaS platform, Cozystack.&lt;/p&gt;
&lt;p&gt;Kubernetes truly amazes me with its powerful extensibility features. You're probably already
familiar with the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/controller/"&gt;controller&lt;/a&gt; concept
and frameworks like &lt;a href="https://book.kubebuilder.io/"&gt;kubebuilder&lt;/a&gt; and
&lt;a href="https://sdk.operatorframework.io/"&gt;operator-sdk&lt;/a&gt; that help you implement it. In a nutshell, they
allow you to extend your Kubernetes cluster by defining custom resources (CRDs) and writing additional
controllers that handle your business logic for reconciling and managing these kinds of resources.
This approach is well-documented, with a wealth of information available online on how to develop your
own operators.&lt;/p&gt;</description></item><item><title>Kubernetes v1.32 sneak peek</title><link>https://andygol-k8s.netlify.app/blog/2024/11/08/kubernetes-1-32-upcoming-changes/</link><pubDate>Fri, 08 Nov 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/11/08/kubernetes-1-32-upcoming-changes/</guid><description>&lt;p&gt;As we get closer to the release date for Kubernetes v1.32, the project develops and matures.
Features may be deprecated, removed, or replaced with better ones for the project's overall health.&lt;/p&gt;
&lt;p&gt;This blog outlines some of the planned changes for the Kubernetes v1.32 release
that the release team feels you should be aware of, to help you maintain your
Kubernetes environment and keep up to date with the latest changes.
Information listed below is based on the current status of the v1.32 release
and may change before the actual release date.&lt;/p&gt;</description></item><item><title>Spotlight on Kubernetes Upstream Training in Japan</title><link>https://andygol-k8s.netlify.app/blog/2024/10/28/k8s-upstream-training-japan-spotlight/</link><pubDate>Mon, 28 Oct 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/10/28/k8s-upstream-training-japan-spotlight/</guid><description>&lt;p&gt;We are organizers of &lt;a href="https://github.com/kubernetes-sigs/contributor-playground/tree/master/japan"&gt;Kubernetes Upstream Training in Japan&lt;/a&gt;.
Our team is composed of members who actively contribute to Kubernetes, including individuals who hold roles such as member, reviewer, approver, and chair.&lt;/p&gt;
&lt;p&gt;Our goal is to increase the number of Kubernetes contributors and foster the growth of the community.
While the Kubernetes community is friendly and collaborative, newcomers may find the first step of contributing a bit challenging.
Our training program aims to lower that barrier and create an environment where even beginners can participate smoothly.&lt;/p&gt;</description></item><item><title>Announcing the 2024 Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2024/10/02/steering-committee-results-2024/</link><pubDate>Wed, 02 Oct 2024 15:10:00 -0500</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/10/02/steering-committee-results-2024/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2024"&gt;2024 Steering Committee Election&lt;/a&gt; is now complete. The Kubernetes Steering Committee consists of 7 seats, 3 of which were up for election in 2024. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p&gt;
&lt;p&gt;This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;charter&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Spotlight on CNCF Deaf and Hard-of-hearing Working Group (DHHWG)</title><link>https://andygol-k8s.netlify.app/blog/2024/09/30/cncf-deaf-and-hard-of-hearing-working-group-spotlight/</link><pubDate>Mon, 30 Sep 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/09/30/cncf-deaf-and-hard-of-hearing-working-group-spotlight/</guid><description>&lt;p&gt;&lt;em&gt;In recognition of Deaf Awareness Month and the importance of inclusivity in the tech community, we are spotlighting &lt;a href="https://www.linkedin.com/in/catherinepaganini/"&gt;Catherine Paganini&lt;/a&gt;, facilitator and one of the founding members of &lt;a href="https://contribute.cncf.io/about/deaf-and-hard-of-hearing/"&gt;CNCF Deaf and Hard-of-Hearing Working Group&lt;/a&gt; (DHHWG). In this interview, &lt;a href="https://www.linkedin.com/in/sandeepkanabar/"&gt;Sandeep Kanabar&lt;/a&gt;, a deaf member of the DHHWG and part of the Kubernetes &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md#contributor-comms"&gt;SIG ContribEx Communications team&lt;/a&gt;, sits down with Catherine to explore the impact of the DHHWG on cloud native projects like Kubernetes.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Sandeep’s journey is a testament to the power of inclusion. Through his involvement in the DHHWG, he connected with members of the Kubernetes community who encouraged him to join &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md"&gt;SIG ContribEx&lt;/a&gt; - the group responsible for sustaining the Kubernetes contributor experience. In an ecosystem where open-source projects are actively seeking contributors and maintainers, this story highlights how important it is to create pathways for underrepresented groups, including those with disabilities, to contribute their unique perspectives and skills.&lt;/em&gt;&lt;/p&gt;</description></item><item><title>Spotlight on SIG Scheduling</title><link>https://andygol-k8s.netlify.app/blog/2024/09/24/sig-scheduling-spotlight-2024/</link><pubDate>Tue, 24 Sep 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/09/24/sig-scheduling-spotlight-2024/</guid><description>&lt;p&gt;In this SIG Scheduling spotlight we talked with &lt;a href="https://github.com/sanposhiho/"&gt;Kensei Nakada&lt;/a&gt;, an
approver in SIG Scheduling.&lt;/p&gt;
&lt;h2 id="introductions"&gt;Introductions&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arvind:&lt;/strong&gt; &lt;strong&gt;Hello, thank you for the opportunity to learn more about SIG Scheduling! Would you
like to introduce yourself and tell us a bit about your role, and how you got involved with
Kubernetes?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Kensei&lt;/strong&gt;: Hi, thanks for the opportunity! I’m Kensei Nakada
(&lt;a href="https://github.com/sanposhiho/"&gt;@sanposhiho&lt;/a&gt;), a software engineer at
&lt;a href="https://tetrate.io/"&gt;Tetrate.io&lt;/a&gt;. I have been contributing to Kubernetes in my free time for more
than 3 years, and now I’m an approver of SIG Scheduling in Kubernetes. Also, I’m a founder/owner of
two SIG subprojects,
&lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-simulator"&gt;kube-scheduler-simulator&lt;/a&gt; and
&lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension"&gt;kube-scheduler-wasm-extension&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: kubeadm v1beta4</title><link>https://andygol-k8s.netlify.app/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</link><pubDate>Fri, 23 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</guid><description>&lt;p&gt;As part of the Kubernetes v1.31 release, &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt; is
adopting a new (&lt;a href="https://andygol-k8s.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/"&gt;v1beta4&lt;/a&gt;) version of
its configuration file format. Configuration in the previous v1beta3 format is now formally
deprecated, which means it's supported but you should migrate to v1beta4 and stop using
the deprecated format.
Support for v1beta3 configuration will be removed after a minimum of 3 Kubernetes minor releases.&lt;/p&gt;
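&lt;p&gt;For existing configuration files, kubeadm ships a converter. As a sketch (the file names here are placeholders):&lt;/p&gt;

```shell
# Convert a v1beta3 kubeadm configuration file to the latest supported format
kubeadm config migrate --old-config old-config.yaml --new-config new-config.yaml
```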
&lt;p&gt;In this article, I'll walk you through the key changes:
I'll explain the kubeadm v1beta4 configuration format,
and how to migrate from v1beta3 to v1beta4.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/</link><pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/</guid><description>&lt;p&gt;There are many ways of troubleshooting the pods and nodes in the cluster. However, &lt;code&gt;kubectl debug&lt;/code&gt; is one of the easiest, highly used and most prominent ones. It
provides a set of static profiles, each of which serves a different kind of role. For instance, from the network administrator's point of view,
debugging the node should be as easy as this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;$ kubectl debug node/mynode -it --image&lt;span style="color:#666"&gt;=&lt;/span&gt;busybox --profile&lt;span style="color:#666"&gt;=&lt;/span&gt;netadmin
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;On the other hand, static profiles also bring inherent rigidity: for all their ease of use, they have implications for some pods.
There are various kinds of pods (and nodes), each with its own specific
necessities, and unfortunately, some can't be debugged by only using the static profiles.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Fine-grained SupplementalGroups control</title><link>https://andygol-k8s.netlify.app/blog/2024/08/22/fine-grained-supplementalgroups-control/</link><pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/22/fine-grained-supplementalgroups-control/</guid><description>&lt;p&gt;This blog discusses a new feature in Kubernetes 1.31 to improve the handling of supplementary groups in containers within Pods.&lt;/p&gt;
&lt;h2 id="motivation-implicit-group-memberships-defined-in-etc-group-in-the-container-image"&gt;Motivation: Implicit group memberships defined in &lt;code&gt;/etc/group&lt;/code&gt; in the container image&lt;/h2&gt;
&lt;p&gt;Although this behavior may not be popular with many Kubernetes cluster users/admins, Kubernetes, by default, &lt;em&gt;merges&lt;/em&gt; group information from the Pod with information defined in &lt;code&gt;/etc/group&lt;/code&gt; in the container image.&lt;/p&gt;
&lt;p&gt;Let's look at an example: the Pod below specifies &lt;code&gt;runAsUser=1000&lt;/code&gt;, &lt;code&gt;runAsGroup=3000&lt;/code&gt; and &lt;code&gt;supplementalGroups=4000&lt;/code&gt; in its security context.&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: New Kubernetes CPUManager Static Policy: Distribute CPUs Across Cores</title><link>https://andygol-k8s.netlify.app/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/</link><pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/</guid><description>&lt;p&gt;In Kubernetes v1.31, we are excited to introduce a significant enhancement to CPU management capabilities: the &lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; option for the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options"&gt;CPUManager static policy&lt;/a&gt;. This feature is currently in alpha and hidden by default, marking a strategic shift aimed at optimizing CPU utilization and improving system performance across multi-core processors.&lt;/p&gt;
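&lt;p&gt;As a hedged sketch (assuming the usual kubelet configuration fields for static-policy options; because the option is alpha, the alpha-options feature gate must also be enabled), turning it on might look like:&lt;/p&gt;

```yaml
# KubeletConfiguration sketch - not a definitive recipe
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  distribute-cpus-across-cores: "true"
```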
&lt;h2 id="understanding-the-feature"&gt;Understanding the feature&lt;/h2&gt;
&lt;p&gt;Traditionally, Kubernetes' CPUManager tends to allocate CPUs as compactly as possible, typically packing them onto the fewest number of physical cores. However, the allocation strategy matters: CPUs on the same physical core still share some of that core's resources, such as the cache and execution units.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Autoconfiguration For Node Cgroup Driver (beta)</title><link>https://andygol-k8s.netlify.app/blog/2024/08/21/cri-cgroup-driver-lookup-now-beta/</link><pubDate>Wed, 21 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/21/cri-cgroup-driver-lookup-now-beta/</guid><description>&lt;p&gt;Historically, configuring the correct cgroup driver has been a pain point for users running new
Kubernetes clusters. On Linux systems, there are two different cgroup drivers:
&lt;code&gt;cgroupfs&lt;/code&gt; and &lt;code&gt;systemd&lt;/code&gt;. In the past, both the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;
and CRI implementation (like CRI-O or containerd) needed to be configured to use
the same cgroup driver, or else the kubelet would exit with an error. This was a
source of headaches for many cluster admins. However, there is light at the end of the tunnel!&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Streaming Transitions from SPDY to WebSockets</title><link>https://andygol-k8s.netlify.app/blog/2024/08/20/websockets-transition/</link><pubDate>Tue, 20 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/20/websockets-transition/</guid><description>&lt;p&gt;In Kubernetes 1.31, by default kubectl now uses the WebSocket protocol
instead of SPDY for streaming.&lt;/p&gt;
&lt;p&gt;This post describes what these changes mean for you and why these streaming APIs
matter.&lt;/p&gt;
&lt;h2 id="streaming-apis-in-kubernetes"&gt;Streaming APIs in Kubernetes&lt;/h2&gt;
&lt;p&gt;In Kubernetes, specific endpoints that are exposed as an HTTP or RESTful
interface are upgraded to streaming connections, which require a streaming
protocol. Unlike HTTP, which is a request-response protocol, a streaming
protocol provides a persistent connection that's bi-directional, low-latency,
and lets you interact in real-time. Streaming protocols support reading and
writing data between your client and the server, in both directions, over the
same connection. This type of connection is useful, for example, when you create
a shell in a running container from your local workstation and run commands in
the container.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Pod Failure Policy for Jobs Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/</link><pubDate>Mon, 19 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/</guid><description>&lt;p&gt;This post describes &lt;em&gt;Pod failure policy&lt;/em&gt;, which graduates to stable in Kubernetes
1.31, and how to use it in your Jobs.&lt;/p&gt;
&lt;h2 id="about-pod-failure-policy"&gt;About Pod failure policy&lt;/h2&gt;
&lt;p&gt;When you run workloads on Kubernetes, Pods might fail for a variety of reasons.
Ideally, workloads like Jobs should be able to ignore transient, retriable
failures and continue running to completion.&lt;/p&gt;
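&lt;p&gt;As a hedged sketch (the container name and exit code are illustrative), a Job can express this with a &lt;code&gt;podFailurePolicy&lt;/code&gt;:&lt;/p&gt;

```yaml
# Partial Job spec - ignore failures caused by disruptions,
# fail the Job fast on a known non-retriable exit code.
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
    - action: FailJob
      onExitCodes:
        containerName: main   # illustrative name
        operator: In
        values: [42]          # illustrative exit code
```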
&lt;p&gt;To allow for these transient failures, Kubernetes Jobs include the &lt;code&gt;backoffLimit&lt;/code&gt;
field, which lets you specify a number of Pod failures that you're willing to tolerate
during Job execution. However, if you set a large value for the &lt;code&gt;backoffLimit&lt;/code&gt; field
and rely solely on this field, you might notice unnecessary increases in operating
costs as Pods restart excessively until the backoffLimit is met.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: MatchLabelKeys in PodAffinity graduates to beta</title><link>https://andygol-k8s.netlify.app/blog/2024/08/16/matchlabelkeys-podaffinity/</link><pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/16/matchlabelkeys-podaffinity/</guid><description>&lt;p&gt;Kubernetes 1.29 introduced new fields &lt;code&gt;matchLabelKeys&lt;/code&gt; and &lt;code&gt;mismatchLabelKeys&lt;/code&gt; in &lt;code&gt;podAffinity&lt;/code&gt; and &lt;code&gt;podAntiAffinity&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.31, this feature moves to beta and the corresponding feature gate (&lt;code&gt;MatchLabelKeysInPodAffinity&lt;/code&gt;) gets enabled by default.&lt;/p&gt;
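&lt;p&gt;As a hedged sketch (the label key shown is the &lt;code&gt;pod-template-hash&lt;/code&gt; label that Deployments add to their Pods), the new field is used alongside the existing &lt;code&gt;labelSelector&lt;/code&gt;:&lt;/p&gt;

```yaml
# Partial Pod spec - anti-affinity that only considers Pods
# from the same Deployment revision.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          app: myapp
      matchLabelKeys:
      - pod-template-hash
```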
&lt;h2 id="matchlabelkeys-enhanced-scheduling-for-versatile-rolling-updates"&gt;&lt;code&gt;matchLabelKeys&lt;/code&gt; - Enhanced scheduling for versatile rolling updates&lt;/h2&gt;
&lt;p&gt;During a workload's (e.g., Deployment) rolling update, a cluster may have Pods from multiple versions at the same time.
However, the scheduler cannot distinguish between old and new versions based on the &lt;code&gt;labelSelector&lt;/code&gt; specified in &lt;code&gt;podAffinity&lt;/code&gt; or &lt;code&gt;podAntiAffinity&lt;/code&gt;. As a result, it will co-locate or disperse Pods regardless of their versions.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Prevent PersistentVolume Leaks When Deleting out of Order</title><link>https://andygol-k8s.netlify.app/blog/2024/08/16/kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order/</link><pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/16/kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumes&lt;/a&gt; (or PVs for short) are
associated with a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#reclaim-policy"&gt;Reclaim Policy&lt;/a&gt;.
The reclaim policy determines the actions the storage
backend takes when the PVC bound to a PV is deleted.
When the reclaim policy is &lt;code&gt;Delete&lt;/code&gt;, the expectation is that the storage backend
releases the storage resource allocated for the PV. In essence, the reclaim
policy needs to be honored on PV deletion.&lt;/p&gt;
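&lt;p&gt;As a sketch (the driver name and volume handle are placeholders), the policy lives on the PV spec:&lt;/p&gt;

```yaml
# Partial PersistentVolume - the backend should delete the
# underlying storage once this PV is released and deleted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteOnce]
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: csi.example.com     # placeholder driver
    volumeHandle: vol-0123      # placeholder handle
```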
&lt;p&gt;With the recent Kubernetes v1.31 release, a beta feature lets you configure your
cluster to behave that way and honor the configured reclaim policy.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Read Only Volumes Based On OCI Artifacts (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2024/08/16/kubernetes-1-31-image-volume-source/</link><pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/16/kubernetes-1-31-image-volume-source/</guid><description>&lt;p&gt;The Kubernetes community is moving towards fulfilling more Artificial
Intelligence (AI) and Machine Learning (ML) use cases in the future. While the
project was designed around microservice architectures in the past,
it’s now time to listen to the end users and introduce features which have a
stronger focus on AI/ML.&lt;/p&gt;
&lt;p&gt;One of these requirements is to support &lt;a href="https://opencontainers.org"&gt;Open Container Initiative (OCI)&lt;/a&gt;
compatible images and artifacts (referred to as OCI objects) directly as a native
volume source. This allows users to focus on OCI standards as well as enables
them to store and distribute any content using OCI registries. A feature like
this gives the Kubernetes project a chance to grow into use cases which go
beyond running particular images.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta</title><link>https://andygol-k8s.netlify.app/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</link><pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</guid><description>&lt;p&gt;Volumes in Kubernetes have been described by two attributes: their storage class, and
their capacity. The storage class is an immutable property of the volume, while the
capacity can be changed dynamically with &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims"&gt;volume
resize&lt;/a&gt;.&lt;/p&gt;
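&lt;p&gt;A resize, for instance, is just an edit to the claim's requested capacity (the claim name is a placeholder):&lt;/p&gt;

```shell
# Grow an existing PersistentVolumeClaim in place
kubectl patch pvc data-claim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```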
&lt;p&gt;This complicates vertical scaling of workloads with volumes. While cloud providers and
storage vendors often offer volumes which allow specifying IO quality of service
(Performance) parameters like IOPS or throughput and tuning them as workloads operate,
Kubernetes has no API which allows changing them.&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache</title><link>https://andygol-k8s.netlify.app/blog/2024/08/15/consistent-read-from-cache-beta/</link><pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/15/consistent-read-from-cache-beta/</guid><description>&lt;p&gt;Kubernetes is renowned for its robust orchestration of containerized applications,
but as clusters grow, the demands on the control plane can become a bottleneck.
A key challenge has been ensuring strongly consistent reads from the etcd datastore,
requiring resource-intensive quorum reads.&lt;/p&gt;
&lt;p&gt;Today, the Kubernetes community is excited to announce a major improvement:
&lt;em&gt;consistent reads from cache&lt;/em&gt;, graduating to Beta in Kubernetes v1.31.&lt;/p&gt;
&lt;h3 id="why-consistent-reads-matter"&gt;Why consistent reads matter&lt;/h3&gt;
&lt;p&gt;Consistent reads are essential for ensuring that Kubernetes components have an accurate view of the latest cluster state.
Guaranteeing consistent reads is crucial for maintaining the accuracy and reliability of Kubernetes operations,
enabling components to make informed decisions based on up-to-date information.
In large-scale clusters, fetching and processing this data can be a performance bottleneck,
especially for requests that involve filtering results.
While Kubernetes can filter data by namespace directly within etcd,
any other filtering by labels or field selectors requires the entire dataset to be fetched from etcd and then filtered in-memory by the Kubernetes API server.
This is particularly impactful for components like the kubelet,
which only needs to list pods scheduled to its node - but previously required the API Server and etcd to process all pods in the cluster.&lt;/p&gt;</description></item><item><title>Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode</title><link>https://andygol-k8s.netlify.app/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/</link><pubDate>Wed, 14 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/</guid><description>&lt;p&gt;As Kubernetes continues to evolve and adapt to the changing landscape of
container orchestration, the community has decided to move cgroup v1 support
into &lt;a href="#what-does-maintenance-mode-mean"&gt;maintenance mode&lt;/a&gt; in v1.31.
This shift aligns with the broader industry's move towards cgroup v2, offering
improved functionality, including scalability and a more consistent interface.
Before we dive into the consequences for Kubernetes, let's take a step back to
understand what cgroups are and their significance in Linux.&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA</title><link>https://andygol-k8s.netlify.app/blog/2024/08/14/last-phase-transition-time-ga/</link><pubDate>Wed, 14 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/14/last-phase-transition-time-ga/</guid><description>&lt;p&gt;Announcing the graduation to General Availability (GA) of the PersistentVolume &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; status
field, in Kubernetes v1.31!&lt;/p&gt;
&lt;p&gt;The Kubernetes SIG Storage team is excited to announce that the &amp;quot;PersistentVolumeLastPhaseTransitionTime&amp;quot; feature, introduced
as an alpha in Kubernetes v1.28, has now reached GA status and is officially part of the Kubernetes v1.31 release. This enhancement
helps Kubernetes users understand when a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt; transitions between
different phases, allowing for more efficient and informed resource management.&lt;/p&gt;</description></item><item><title>Kubernetes v1.31: Elli</title><link>https://andygol-k8s.netlify.app/blog/2024/08/13/kubernetes-v1-31-release/</link><pubDate>Tue, 13 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/13/kubernetes-v1-31-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.31: Elli!&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.31 introduces new
stable, beta, and alpha features.
The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 45 enhancements.
Of those enhancements, 11 have graduated to Stable, 22 are entering Beta,
and 12 are entering Alpha.&lt;/p&gt;</description></item><item><title>Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control</title><link>https://andygol-k8s.netlify.app/blog/2024/08/12/feature-gates-in-client-go/</link><pubDate>Mon, 12 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/12/feature-gates-in-client-go/</guid><description>&lt;p&gt;Kubernetes components use on-off switches called &lt;em&gt;feature gates&lt;/em&gt; to manage the risk of adding a new feature.
The feature gate mechanism is what enables incremental graduation of a feature through the stages Alpha, Beta, and GA.&lt;/p&gt;
&lt;p&gt;Kubernetes components, such as kube-controller-manager and kube-scheduler, use the client-go library to interact with the API.
The same library is used across the Kubernetes ecosystem to build controllers, tools, webhooks, and more. client-go now includes
its own feature gating mechanism, giving developers and cluster administrators more control over how they adopt client features.&lt;/p&gt;</description></item><item><title>Spotlight on SIG API Machinery</title><link>https://andygol-k8s.netlify.app/blog/2024/08/07/sig-api-machinery-spotlight-2024/</link><pubDate>Wed, 07 Aug 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/08/07/sig-api-machinery-spotlight-2024/</guid><description>&lt;p&gt;We recently talked with &lt;a href="https://github.com/fedebongio"&gt;Federico Bongiovanni&lt;/a&gt; (Google) and &lt;a href="https://github.com/deads2k"&gt;David
Eads&lt;/a&gt; (Red Hat), Chairs of SIG API Machinery, to learn a bit more about
this Kubernetes Special Interest Group.&lt;/p&gt;
&lt;h2 id="introductions"&gt;Introductions&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM): Hello, and thank you for your time. To start with, could you tell us about
yourselves and how you got involved in Kubernetes?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: I started working on
&lt;a href="https://www.redhat.com/en/technologies/cloud-computing/openshift"&gt;OpenShift&lt;/a&gt; (the Red Hat
distribution of Kubernetes) in the fall of 2014 and got involved pretty quickly in API Machinery. My
first PRs were fixing kube-apiserver error messages and from there I branched out to &lt;code&gt;kubectl&lt;/code&gt;
(&lt;em&gt;kubeconfigs&lt;/em&gt; are my fault!), &lt;code&gt;auth&lt;/code&gt; (&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/"&gt;RBAC&lt;/a&gt; and &lt;code&gt;*Review&lt;/code&gt; APIs are ports
from OpenShift), &lt;code&gt;apps&lt;/code&gt; (&lt;em&gt;workqueues&lt;/em&gt; and &lt;em&gt;sharedinformers&lt;/em&gt; for example). Don’t tell the others,
but API Machinery is still my favorite :)&lt;/p&gt;</description></item><item><title>Kubernetes Removals and Major Changes In v1.31</title><link>https://andygol-k8s.netlify.app/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</link><pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</guid><description>&lt;p&gt;As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project's overall health.
This article outlines some planned changes for the Kubernetes v1.31 release that the release team feels you should be aware of for the continued maintenance of your Kubernetes environment.
The information listed below is based on the current status of the v1.31 release.
It may change before the actual release date.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Node</title><link>https://andygol-k8s.netlify.app/blog/2024/06/20/sig-node-spotlight-2024/</link><pubDate>Thu, 20 Jun 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/06/20/sig-node-spotlight-2024/</guid><description>&lt;p&gt;In the world of container orchestration, &lt;a href="https://andygol-k8s.netlify.app/"&gt;Kubernetes&lt;/a&gt; reigns
supreme, powering some of the most complex and dynamic applications across the globe. Behind the
scenes, a network of Special Interest Groups (SIGs) drives Kubernetes' innovation and stability.&lt;/p&gt;
&lt;p&gt;Today, I have the privilege of speaking with &lt;a href="https://www.linkedin.com/in/matthias-bertschy-b427b815/"&gt;Matthias
Bertschy&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/gunju-kim-916b33190/"&gt;Gunju
Kim&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/sergeykanzhelev/"&gt;Sergey
Kanzhelev&lt;/a&gt;, members of &lt;a href="https://github.com/kubernetes/community/blob/master/sig-node/README.md"&gt;SIG
Node&lt;/a&gt;, who will shed some
light on their roles, challenges, and the exciting developments within SIG Node.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Answers given collectively by all interviewees will be marked by their initials.&lt;/em&gt;&lt;/p&gt;</description></item><item><title>10 Years of Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2024/06/06/10-years-of-kubernetes/</link><pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/06/06/10-years-of-kubernetes/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/blog/2024/06/06/10-years-of-kubernetes/kcseu2024.jpg" alt="KCSEU 2024 group photo"&gt;&lt;/p&gt;
&lt;p&gt;Ten (10) years ago, on June 6th, 2014, the
&lt;a href="https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56"&gt;first commit&lt;/a&gt;
of Kubernetes was pushed to GitHub. That first commit, with 250 files and 47,501 lines of Go, Bash,
and Markdown, kicked off the project we have today. Who could have predicted that 10 years later,
Kubernetes would grow to become one of the largest Open Source projects to date with over
&lt;a href="https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1"&gt;88,000 contributors&lt;/a&gt; from
more than &lt;a href="https://www.cncf.io/reports/kubernetes-project-journey-report/"&gt;8,000 companies&lt;/a&gt;, across
44 countries.&lt;/p&gt;</description></item><item><title>Completing the largest migration in Kubernetes history</title><link>https://andygol-k8s.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/</link><pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/</guid><description>&lt;p&gt;Since as early as Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of removing built-in cloud provider integrations (&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md"&gt;KEP-2395&lt;/a&gt;).
While these integrations were instrumental in Kubernetes' early development and growth, their removal was driven by two key factors:
the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish
Kubernetes as a truly vendor-neutral platform.&lt;/p&gt;
&lt;p&gt;After many releases, we're thrilled to announce that all cloud provider integrations have been successfully migrated from the core Kubernetes repository to external plugins.
In addition to achieving our initial objectives, we've also significantly streamlined Kubernetes by removing roughly 1.5 million lines of code and reducing the binary sizes of core components by approximately 40%.&lt;/p&gt;</description></item><item><title>Gateway API v1.1: Service mesh, GRPCRoute, and a whole lot more</title><link>https://andygol-k8s.netlify.app/blog/2024/05/09/gateway-api-v1-1/</link><pubDate>Thu, 09 May 2024 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/05/09/gateway-api-v1-1/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/blog/2024/05/09/gateway-api-v1-1/gateway-api-logo.svg" alt="Gateway API logo"&gt;&lt;/p&gt;
&lt;p&gt;Following the GA release of Gateway API last October, Kubernetes
SIG Network is pleased to announce the v1.1 release of
&lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt;. In this release, several features are graduating to
&lt;em&gt;Standard Channel&lt;/em&gt; (GA), notably including support for service mesh and
GRPCRoute. We're also introducing some new experimental features, including
session persistence and client certificate verification.&lt;/p&gt;
&lt;h2 id="what-s-new"&gt;What's new&lt;/h2&gt;
&lt;h3 id="graduation-to-standard"&gt;Graduation to Standard&lt;/h3&gt;
&lt;p&gt;This release includes the graduation to Standard of four eagerly awaited features.
This means they are no longer experimental concepts; inclusion in the Standard
release channel denotes a high level of confidence in the API surface and
provides guarantees of backward compatibility. Of course, as with any other
Kubernetes API, Standard Channel features can continue to evolve with
backward-compatible additions over time, and we certainly expect further
refinements and improvements to these new features in the future.
For more information on how all of this works, refer to the
&lt;a href="https://gateway-api.sigs.k8s.io/concepts/versioning/"&gt;Gateway API Versioning Policy&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Container Runtime Interface streaming explained</title><link>https://andygol-k8s.netlify.app/blog/2024/05/01/cri-streaming-explained/</link><pubDate>Wed, 01 May 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/05/01/cri-streaming-explained/</guid><description>&lt;p&gt;The Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/cri"&gt;Container Runtime Interface (CRI)&lt;/a&gt;
acts as the main connection between the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;
and the &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;Container Runtime&lt;/a&gt;.
Those runtimes have to provide a &lt;a href="https://grpc.io"&gt;gRPC&lt;/a&gt; server which
implements a Kubernetes-defined &lt;a href="https://protobuf.dev"&gt;Protocol Buffer&lt;/a&gt; interface.
&lt;a href="https://github.com/kubernetes/cri-api/blob/63929b3/pkg/apis/runtime/v1/api.proto"&gt;This API definition&lt;/a&gt;
evolves over time, for example when contributors add new features or when fields
become deprecated.&lt;/p&gt;
&lt;p&gt;In this blog post, I'd like to dive into the functionality and history of three
extraordinary Remote Procedure Calls (RPCs), which are truly outstanding in
terms of how they work: &lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt; and &lt;code&gt;PortForward&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA</title><link>https://andygol-k8s.netlify.app/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</link><pubDate>Tue, 30 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</guid><description>&lt;p&gt;With the release of Kubernetes 1.30, the feature to prevent the modification of the volume mode
of a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumeClaim&lt;/a&gt; that was created from
an existing VolumeSnapshot in a Kubernetes cluster has moved to GA!&lt;/p&gt;
&lt;h2 id="the-problem"&gt;The problem&lt;/h2&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#volume-mode"&gt;Volume Mode&lt;/a&gt; of a PersistentVolumeClaim
refers to whether the underlying volume on the storage device is formatted into a filesystem or
presented as a raw block device to the Pod that uses it.&lt;/p&gt;
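&lt;p&gt;In a claim, this is the &lt;code&gt;volumeMode&lt;/code&gt; field; as a sketch:&lt;/p&gt;

```yaml
# Partial PersistentVolumeClaim requesting a raw block device
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim
spec:
  volumeMode: Block           # the default is Filesystem
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```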
&lt;p&gt;Users can leverage the VolumeSnapshot feature, which has been stable since Kubernetes v1.20,
to create a PersistentVolumeClaim (shortened as PVC) from an existing VolumeSnapshot in
the Kubernetes cluster. The PVC spec includes a dataSource field, which can point to an
existing VolumeSnapshot instance.
Visit &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot"&gt;Create a PersistentVolumeClaim from a Volume Snapshot&lt;/a&gt;
for more details on how to create a PVC from an existing VolumeSnapshot in a Kubernetes cluster.&lt;/p&gt;</description></item><item><title>Kubernetes 1.30: Multi-Webhook and Modular Authorization Made Much Easier</title><link>https://andygol-k8s.netlify.app/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/</link><pubDate>Fri, 26 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/</guid><description>&lt;p&gt;With Kubernetes 1.30, we (SIG Auth) are moving Structured Authorization
Configuration to beta.&lt;/p&gt;
&lt;p&gt;Today's article is about &lt;em&gt;authorization&lt;/em&gt;: deciding what someone can and cannot
access. Check the article from yesterday to find out what's new in
Kubernetes v1.30 around &lt;em&gt;authentication&lt;/em&gt; (finding out who's performing a task,
and checking that they are who they say they are).&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Kubernetes continues to evolve to meet the intricate requirements of system
administrators and developers alike. A critical aspect of Kubernetes that
ensures the security and integrity of the cluster is the API server
authorization. Until recently, the configuration of the authorization chain in
kube-apiserver was somewhat rigid, limited to a set of command-line flags and
allowing only a single webhook in the authorization chain. This approach, while
functional, restricted the flexibility needed by cluster administrators to
define complex, fine-grained authorization policies. The latest Structured
Authorization Configuration feature (&lt;a href="https://kep.k8s.io/3221"&gt;KEP-3221&lt;/a&gt;) aims
to revolutionize this aspect by introducing a more structured and versatile way
to configure the authorization chain, focusing on enabling multiple webhooks and
providing explicit control mechanisms.&lt;/p&gt;</description></item><item><title>Kubernetes 1.30: Structured Authentication Configuration Moves to Beta</title><link>https://andygol-k8s.netlify.app/blog/2024/04/25/structured-authentication-moves-to-beta/</link><pubDate>Thu, 25 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/25/structured-authentication-moves-to-beta/</guid><description>&lt;p&gt;With Kubernetes 1.30, we (SIG Auth) are moving Structured Authentication Configuration to beta.&lt;/p&gt;
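To give a flavor of the beta API, a minimal structured authentication configuration file might look like the sketch below (the issuer URL, audience, and claim choices are hypothetical, shown only to illustrate the shape of the file):

```yaml
# Sketch of a structured authentication config, passed to kube-apiserver
# via --authentication-config. Issuer and audience are illustrative.
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://issuer.example.com   # hypothetical OIDC issuer
      audiences:
        - my-cluster
    claimMappings:
      username:
        claim: sub
        prefix: "oidc:"
      groups:
        claim: groups
        prefix: ""
```

Because the `jwt` field is a list, multiple JWT authenticators can now coexist, which the flag-based configuration could not express.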
&lt;p&gt;Today's article is about &lt;em&gt;authentication&lt;/em&gt;: finding out who's performing a task, and checking
that they are who they say they are. Check back tomorrow to find out what's new in
Kubernetes v1.30 around &lt;em&gt;authorization&lt;/em&gt; (deciding what someone can and can't access).&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Kubernetes has had a long-standing need for a more flexible and extensible
authentication system. The current system, while powerful, has some limitations
that make it difficult to use in certain scenarios. For example, it is not
possible to use multiple authenticators of the same type (e.g., multiple JWT
authenticators) or to change the configuration without restarting the API server. The
Structured Authentication Configuration feature is the first step towards
addressing these limitations and providing a more flexible and extensible way
to configure authentication in Kubernetes.&lt;/p&gt;</description></item><item><title>Kubernetes 1.30: Validating Admission Policy Is Generally Available</title><link>https://andygol-k8s.netlify.app/blog/2024/04/24/validating-admission-policy-ga/</link><pubDate>Wed, 24 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/24/validating-admission-policy-ga/</guid><description>&lt;p&gt;On behalf of the Kubernetes project, I am excited to announce that ValidatingAdmissionPolicy has reached
&lt;strong&gt;general availability&lt;/strong&gt;
as part of Kubernetes 1.30 release. If you have not yet read about this new declarative alternative to
validating admission webhooks, it may be interesting to read our
&lt;a href="https://andygol-k8s.netlify.app/blog/2022/12/20/validating-admission-policies-alpha/"&gt;previous post&lt;/a&gt; about the new feature.
If you have already heard about ValidatingAdmissionPolicies and you are eager to try them out,
there is no better time to do it than now.&lt;/p&gt;</description></item><item><title>Kubernetes 1.30: Read-only volume mounts can be finally literally read-only</title><link>https://andygol-k8s.netlify.app/blog/2024/04/23/recursive-read-only-mounts/</link><pubDate>Tue, 23 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/23/recursive-read-only-mounts/</guid><description>&lt;p&gt;Read-only volume mounts have been a feature of Kubernetes since the beginning.
Surprisingly, read-only mounts are not completely read-only under certain conditions on Linux.
As of the v1.30 release, they can be made completely read-only,
with alpha support for &lt;em&gt;recursive read-only mounts&lt;/em&gt;.&lt;/p&gt;
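As a sketch of the new alpha knob, a pod can opt in per volume mount (this assumes the `RecursiveReadOnlyMounts` feature gate is enabled; pod and volume names are illustrative):

```yaml
# Sketch: opting in to recursive read-only mounts (alpha in v1.30).
apiVersion: v1
kind: Pod
metadata:
  name: rro-demo
spec:
  volumes:
    - name: mnt
      hostPath:
        path: /mnt
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: mnt
          mountPath: /mnt
          readOnly: true
          # Without this, submounts below /mnt could remain writable.
          recursiveReadOnly: Enabled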
&lt;h2 id="read-only-volume-mounts-are-not-really-read-only-by-default"&gt;Read-only volume mounts are not really read-only by default&lt;/h2&gt;
&lt;p&gt;Volume mounts can be deceptively complicated.&lt;/p&gt;
&lt;p&gt;You might expect that the following manifest makes everything under &lt;code&gt;/mnt&lt;/code&gt; in the containers read-only:&lt;/p&gt;</description></item><item><title>Kubernetes 1.30: Beta Support For Pods With User Namespaces</title><link>https://andygol-k8s.netlify.app/blog/2024/04/22/userns-beta/</link><pubDate>Mon, 22 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/22/userns-beta/</guid><description>&lt;p&gt;Linux provides different namespaces to isolate processes from each other. For
example, a typical Kubernetes pod runs within a network namespace to isolate the
network identity and a PID namespace to isolate the processes.&lt;/p&gt;
&lt;p&gt;One Linux namespace that was left behind is the &lt;a href="https://man7.org/linux/man-pages/man7/user_namespaces.7.html"&gt;user
namespace&lt;/a&gt;. This
namespace allows us to isolate the user and group identifiers (UIDs and GIDs) we
use inside the container from the ones on the host.&lt;/p&gt;
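Opting a pod into a user namespace is a single field; a minimal sketch (pod name and image are illustrative):

```yaml
# Sketch: running a pod in its own user namespace (beta in v1.30).
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # UIDs/GIDs inside the pod are remapped on the host
  containers:
    - name: shell
      image: debian
      command: ["sleep", "infinity"]
```

With `hostUsers: false`, UID 0 inside the container maps to an unprivileged UID on the host.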
&lt;p&gt;This is a powerful abstraction that allows us to run containers as &amp;quot;root&amp;quot;: we
are root inside the container and can do everything root can inside the pod,
but our interactions with the host are limited to what a non-privileged user can
do. This is great for limiting the impact of a container breakout.&lt;/p&gt;</description></item><item><title>Kubernetes v1.30: Uwubernetes</title><link>https://andygol-k8s.netlify.app/blog/2024/04/17/kubernetes-v1-30-release/</link><pubDate>Wed, 17 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/17/kubernetes-v1-30-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.30: Uwubernetes, the cutest release!&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.30 introduces new stable, beta, and alpha
features. The consistent delivery of top-notch releases underscores the strength of our development
cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 45 enhancements. Of those enhancements, 17 have graduated to Stable, 18 are
entering Beta, and 10 have graduated to Alpha.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Architecture: Code Organization</title><link>https://andygol-k8s.netlify.app/blog/2024/04/11/sig-architecture-code-spotlight-2024/</link><pubDate>Thu, 11 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/11/sig-architecture-code-spotlight-2024/</guid><description>&lt;p&gt;&lt;em&gt;This is the third interview of a SIG Architecture Spotlight series that will cover the different
subprojects. We will cover &lt;a href="https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization"&gt;SIG Architecture: Code Organization&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In this SIG Architecture spotlight I talked with &lt;a href="https://github.com/MadhavJivrajani"&gt;Madhav Jivrajani&lt;/a&gt;
(VMware), a member of the Code Organization subproject.&lt;/p&gt;
&lt;h2 id="introducing-the-code-organization-subproject"&gt;Introducing the Code Organization subproject&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)&lt;/strong&gt;: Hello Madhav, thank you for your availability. Could you start by telling us a
bit about yourself, your role and how you got involved in Kubernetes?&lt;/p&gt;</description></item><item><title>DIY: Create Your Own Cloud with Kubernetes (Part 3)</title><link>https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/</link><pubDate>Fri, 05 Apr 2024 07:40:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/</guid><description>&lt;p&gt;Approaching the most interesting phase, this article delves into running Kubernetes within
Kubernetes. Technologies such as Kamaji and Cluster API are highlighted, along with their
integration with KubeVirt.&lt;/p&gt;
&lt;p&gt;Previous discussions have covered
&lt;a href="https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/"&gt;preparing Kubernetes on bare metal&lt;/a&gt;
and
&lt;a href="https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2"&gt;how to turn Kubernetes into virtual machines management system&lt;/a&gt;.
This article concludes the series by explaining how, using all of the above, you can build a
full-fledged managed Kubernetes and run virtual Kubernetes clusters with just a click.&lt;/p&gt;</description></item><item><title>DIY: Create Your Own Cloud with Kubernetes (Part 2)</title><link>https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/</link><pubDate>Fri, 05 Apr 2024 07:35:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/</guid><description>&lt;p&gt;Continuing our series of posts on how to build your own cloud using just the Kubernetes ecosystem.
In the &lt;a href="https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/"&gt;previous article&lt;/a&gt;, we
explained how we prepare a basic Kubernetes distribution based on Talos Linux and Flux CD.
In this article, we'll walk through several virtualization technologies in Kubernetes and prepare
everything needed to run virtual machines in Kubernetes, primarily storage and networking.&lt;/p&gt;
&lt;p&gt;We will talk about technologies such as KubeVirt, LINSTOR, and Kube-OVN.&lt;/p&gt;</description></item><item><title>DIY: Create Your Own Cloud with Kubernetes (Part 1)</title><link>https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/</link><pubDate>Fri, 05 Apr 2024 07:30:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/</guid><description>&lt;p&gt;At Ænix, we have a deep affection for Kubernetes and dream that all modern technologies will soon
start utilizing its remarkable patterns.&lt;/p&gt;
&lt;p&gt;Have you ever thought about building your own cloud? I bet you have. But is it possible to do this
using only modern technologies and approaches, without leaving the cozy Kubernetes ecosystem?
Our experience in developing Cozystack required us to delve deeply into it.&lt;/p&gt;
&lt;p&gt;You might argue that Kubernetes is not intended for this purpose, and that you could simply use OpenStack
for bare metal servers and run Kubernetes inside it as intended. But by doing so, you would simply
shift the responsibility from your hands to the hands of OpenStack administrators.
This would add at least one more huge and complex system to your ecosystem.&lt;/p&gt;</description></item><item><title>Introducing the Windows Operational Readiness Specification</title><link>https://andygol-k8s.netlify.app/blog/2024/04/03/intro-windows-ops-readiness/</link><pubDate>Wed, 03 Apr 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/04/03/intro-windows-ops-readiness/</guid><description>&lt;p&gt;Since Windows support &lt;a href="https://andygol-k8s.netlify.app/blog/2019/03/25/kubernetes-1-14-release-announcement/"&gt;graduated to stable&lt;/a&gt;
with Kubernetes 1.14 in 2019, the capability to run Windows workloads has been much
appreciated by the end user community. The level and availability of Windows workload
support has consistently been a major differentiator for Kubernetes distributions used by
large enterprises. However, with more Windows workloads being migrated to Kubernetes
and new Windows features being continuously released, it became challenging to test
Windows worker nodes in an effective and standardized way.&lt;/p&gt;</description></item><item><title>A Peek at Kubernetes v1.30</title><link>https://andygol-k8s.netlify.app/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</link><pubDate>Tue, 12 Mar 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</guid><description>&lt;h2 id="a-quick-look-exciting-changes-in-kubernetes-v1-30"&gt;A quick look: exciting changes in Kubernetes v1.30&lt;/h2&gt;
&lt;p&gt;It's a new year and a new Kubernetes release. We're halfway through the release cycle and
have quite a few interesting and exciting enhancements coming in v1.30. From brand new features
in alpha, to established features graduating to stable, to long-awaited improvements, this release
has something for everyone to pay attention to!&lt;/p&gt;
&lt;p&gt;To tide you over until the official release, here's a sneak peek of the enhancements we're most
excited about in this cycle!&lt;/p&gt;</description></item><item><title>CRI-O: Applying seccomp profiles from OCI registries</title><link>https://andygol-k8s.netlify.app/blog/2024/03/07/cri-o-seccomp-oci-artifacts/</link><pubDate>Thu, 07 Mar 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/03/07/cri-o-seccomp-oci-artifacts/</guid><description>&lt;p&gt;Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a
node to your Pods and containers.&lt;/p&gt;
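Applying a node-local profile is done through the pod's security context; a minimal sketch (the profile path is illustrative and must exist under the kubelet's seccomp directory on the node):

```yaml
# Sketch: applying a seccomp profile loaded from the node's filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      # Resolved relative to the kubelet's seccomp root directory.
      localhostProfile: profiles/fine-grained.json
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
```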
&lt;p&gt;But distributing those seccomp profiles is a major challenge in Kubernetes,
because the JSON files have to be available on all nodes where a workload can
possibly run. Projects like the &lt;a href="https://sigs.k8s.io/security-profiles-operator"&gt;Security Profiles
Operator&lt;/a&gt; solve that problem by
running as a daemon within the cluster, which makes me wonder which part of that
distribution could be done by the &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;container
runtime&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Cloud Provider</title><link>https://andygol-k8s.netlify.app/blog/2024/03/01/sig-cloud-provider-spotlight-2024/</link><pubDate>Fri, 01 Mar 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/03/01/sig-cloud-provider-spotlight-2024/</guid><description>&lt;p&gt;One of the most popular ways developers use Kubernetes-related services is via cloud providers, but
have you ever wondered how cloud providers can do that? How does this whole process of integration
of Kubernetes to various cloud providers happen? To answer that, let's put the spotlight on &lt;a href="https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md"&gt;SIG
Cloud Provider&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;SIG Cloud Provider works to create seamless integrations between Kubernetes and various cloud
providers. Their mission? Keeping the Kubernetes ecosystem fair and open for all. By setting clear
standards and requirements, they ensure every cloud provider plays nicely with Kubernetes. It is
their responsibility to configure cluster components to enable cloud provider integrations.&lt;/p&gt;</description></item><item><title>A look into the Kubernetes Book Club</title><link>https://andygol-k8s.netlify.app/blog/2024/02/22/k8s-book-club/</link><pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/02/22/k8s-book-club/</guid><description>&lt;p&gt;Learning Kubernetes and the entire ecosystem of technologies around it is not without its
challenges. In this interview, we will talk with &lt;a href="https://www.linkedin.com/in/csantanapr/"&gt;Carlos Santana
(AWS)&lt;/a&gt; to learn a bit more about how he created the
&lt;a href="https://community.cncf.io/kubernetes-virtual-book-club/"&gt;Kubernetes Book Club&lt;/a&gt;, how it works, and
how anyone can join in to take advantage of a community-based learning experience.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/blog/2024/02/22/k8s-book-club/csantana_k8s_book_club.jpg" alt="Carlos Santana speaking at KubeCon NA 2023"&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Frederico Muñoz (FSM)&lt;/strong&gt;: Hello Carlos, thank you so much for your availability. To start with,
could you tell us a bit about yourself?&lt;/p&gt;</description></item><item><title>Image Filesystem: Configuring Kubernetes to store containers on a separate filesystem</title><link>https://andygol-k8s.netlify.app/blog/2024/01/23/kubernetes-separate-image-filesystem/</link><pubDate>Tue, 23 Jan 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/01/23/kubernetes-separate-image-filesystem/</guid><description>&lt;p&gt;A common issue in running/operating Kubernetes clusters is running out of disk space.
When the node is provisioned, you should aim to have a good amount of storage space for your container images and running containers.
The &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;container runtime&lt;/a&gt; usually writes to &lt;code&gt;/var&lt;/code&gt;.
This can be located as a separate partition or on the root filesystem.
CRI-O, by default, writes its containers and images to &lt;code&gt;/var/lib/containers&lt;/code&gt;, while containerd writes its containers and images to &lt;code&gt;/var/lib/containerd&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Release (Release Team Subproject)</title><link>https://andygol-k8s.netlify.app/blog/2024/01/15/sig-release-spotlight-2023/</link><pubDate>Mon, 15 Jan 2024 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2024/01/15/sig-release-spotlight-2023/</guid><description>&lt;p&gt;Meet the Release Special Interest Group (SIG Release), where Kubernetes sharpens its blade
with cutting-edge features and bug fixes every 4 months. Have you ever considered how such a big
project like Kubernetes manages its timeline so efficiently to release its new version, or what
the internal workings of the Release Team look like? If you're curious about these questions or
want to know more and get involved with the work SIG Release does, read on!&lt;/p&gt;</description></item><item><title>Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging</title><link>https://andygol-k8s.netlify.app/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</link><pubDate>Wed, 20 Dec 2023 09:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</guid><description>&lt;p&gt;On behalf of the &lt;a href="https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md"&gt;Structured Logging Working Group&lt;/a&gt;
and &lt;a href="https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme"&gt;SIG Instrumentation&lt;/a&gt;,
we are pleased to announce that the contextual logging feature
introduced in Kubernetes v1.24 has now been successfully migrated to
two components (kube-scheduler and kube-controller-manager)
as well as some directories. This feature aims to provide more useful logs
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.&lt;/p&gt;
&lt;h2 id="what-is-contextual-logging"&gt;What is contextual logging?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging"&gt;Contextual logging&lt;/a&gt;
is based on the &lt;a href="https://github.com/go-logr/logr#a-minimal-logging-api-for-go"&gt;go-logr&lt;/a&gt; API.
The key idea is that libraries are passed a logger instance by their caller
and use that for logging instead of accessing a global logger.
The binary decides the logging implementation, not the libraries.
The go-logr API is designed around structured logging and supports attaching
additional information to a logger.&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: Decoupling taint manager from node lifecycle controller</title><link>https://andygol-k8s.netlify.app/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</link><pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</guid><description>&lt;p&gt;This blog discusses a new feature in Kubernetes 1.29 to improve the handling of taint-based pod eviction.&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;In Kubernetes 1.29, an improvement has been introduced to enhance the taint-based pod eviction handling on nodes.
This blog discusses the changes made to node-lifecycle-controller
to separate its responsibilities and improve overall code maintainability.&lt;/p&gt;
&lt;h2 id="summary-of-changes"&gt;Summary of changes&lt;/h2&gt;
&lt;p&gt;node-lifecycle-controller previously combined two independent functions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Adding a pre-defined set of &lt;code&gt;NoExecute&lt;/code&gt; taints to a Node based on the Node's condition.&lt;/li&gt;
&lt;li&gt;Performing pod eviction in response to &lt;code&gt;NoExecute&lt;/code&gt; taints.&lt;/li&gt;
&lt;/ul&gt;
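To make the second function concrete: a pod can delay taint-based eviction with a toleration. A minimal sketch (pod name and image are illustrative):

```yaml
# Sketch: tolerating the NoExecute taint placed on a not-ready node,
# delaying eviction of this pod for 300 seconds.
apiVersion: v1
kind: Pod
metadata:
  name: eviction-tolerant
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
```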
&lt;p&gt;With the Kubernetes 1.29 release, the taint-based eviction implementation has been
moved out of node-lifecycle-controller into a separate and independent component called taint-eviction-controller.
This separation aims to disentangle code, enhance code maintainability,
and facilitate future extensions to either component.&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: PodReadyToStartContainers Condition Moves to Beta</title><link>https://andygol-k8s.netlify.app/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</link><pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</guid><description>&lt;p&gt;With the recent release of Kubernetes 1.29, the &lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions"&gt;condition&lt;/a&gt; is
available by default.
The kubelet manages the value for that condition throughout a Pod's lifecycle,
in the status field of a Pod. The kubelet will use the &lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
condition to accurately surface the initialization state of a Pod,
from the perspective of Pod sandbox creation and network configuration by a container runtime.&lt;/p&gt;
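As a rough sketch (timestamps illustrative), the condition surfaces alongside the familiar ones in the Pod's status:

```yaml
# Sketch: excerpt of `kubectl get pod <name> -o yaml` output.
status:
  conditions:
    - type: PodReadyToStartContainers   # sandbox and network are ready
      status: "True"
      lastTransitionTime: "2023-12-19T10:00:00Z"
    - type: Initialized
      status: "True"
    - type: Ready
      status: "True"
```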
&lt;h2 id="what-s-the-motivation-for-this-feature"&gt;What's the motivation for this feature?&lt;/h2&gt;
&lt;p&gt;Cluster administrators did not have a clear and easily accessible way to view the completion of a Pod's sandbox creation
and initialization. As of 1.28, the &lt;code&gt;Initialized&lt;/code&gt; condition in Pods tracks the execution of init containers.
However, it has limitations in accurately reflecting the completion of sandbox creation and readiness to start containers for all Pods in a cluster.
This distinction is particularly important in multi-tenant clusters where tenants own the Pod specifications, including the set of init containers,
while cluster administrators manage storage plugins, networking plugins, and container runtime handlers.
Therefore, there is a need for an improved mechanism to provide cluster administrators with a clear and
comprehensive view of Pod sandbox creation completion and container readiness.&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: New (alpha) Feature, Load Balancer IP Mode for Services</title><link>https://andygol-k8s.netlify.app/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</link><pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</guid><description>&lt;p&gt;This blog introduces a new alpha feature in Kubernetes 1.29.
It provides a configurable approach to define how Service implementations,
exemplified in this blog by kube-proxy,
handle traffic from pods to the Service within the cluster.&lt;/p&gt;
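Concretely, the new knob lives in the Service's status; a sketch of how a cloud provider might set it (the IP is illustrative, and the field is gated behind the `LoadBalancerIPMode` alpha feature gate):

```yaml
# Sketch: excerpt of a type: LoadBalancer Service's status.
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.10   # illustrative load balancer IP
        # "Proxy" tells kube-proxy not to intercept in-cluster traffic
        # to this IP; "VIP" (the default) keeps the old behavior.
        ipMode: Proxy
```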
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;In older Kubernetes releases, the kube-proxy would intercept traffic that was destined for the IP
address associated with a Service of &lt;code&gt;type: LoadBalancer&lt;/code&gt;. This happened whatever mode you used
for &lt;code&gt;kube-proxy&lt;/code&gt;.
The interception implemented the expected behavior (traffic eventually reaching the expected
endpoints behind the Service). The mechanism to make that work depended on the mode for kube-proxy;
on Linux, kube-proxy in iptables mode would redirect packets directly to the endpoint; in ipvs mode,
kube-proxy would configure the load balancer's IP address to one interface on the node.
The motivation for implementing that interception was for two reasons:&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: Single Pod Access Mode for PersistentVolumes Graduates to Stable</title><link>https://andygol-k8s.netlify.app/blog/2023/12/18/read-write-once-pod-access-mode-ga/</link><pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/18/read-write-once-pod-access-mode-ga/</guid><description>&lt;p&gt;With the release of Kubernetes v1.29, the &lt;code&gt;ReadWriteOncePod&lt;/code&gt; volume access mode
has graduated to general availability: it's part of Kubernetes' stable API. In
this blog post, I'll take a closer look at this access mode and what it does.&lt;/p&gt;
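As a quick taste before diving in, requesting the mode is a one-line change to a claim; a minimal sketch (the claim name is illustrative):

```yaml
# Sketch: a PVC requesting single-pod access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-data
spec:
  accessModes:
    - ReadWriteOncePod   # only one pod may use this volume at a time
  resources:
    requests:
      storage: 1Gi
```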
&lt;h2 id="what-is-readwriteoncepod"&gt;What is &lt;code&gt;ReadWriteOncePod&lt;/code&gt;?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;ReadWriteOncePod&lt;/code&gt; is an access mode for
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistent-volumes"&gt;PersistentVolumes&lt;/a&gt; (PVs)
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims"&gt;PersistentVolumeClaims&lt;/a&gt; (PVCs)
introduced in Kubernetes v1.22. This access mode enables you to restrict volume
access to a single pod in the cluster, ensuring that only one pod can write to
the volume at a time. This can be particularly useful for stateful workloads
that require single-writer access to storage.&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: CSI Storage Resizing Authenticated and Generally Available in v1.29</title><link>https://andygol-k8s.netlify.app/blog/2023/12/15/csi-node-expand-secret-support-ga/</link><pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/15/csi-node-expand-secret-support-ga/</guid><description>&lt;p&gt;Kubernetes version v1.29 brings generally available support for authentication
during CSI (Container Storage Interface) storage resize operations.&lt;/p&gt;
&lt;p&gt;Let's embark on the evolution of this feature, initially introduced in alpha in
Kubernetes v1.25, and unravel the changes accompanying its transition to GA.&lt;/p&gt;
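In practice, a CSI driver is wired up to the credentials through StorageClass parameters; a sketch along these lines (the driver name and Secret references are illustrative):

```yaml
# Sketch: a StorageClass passing a credentials Secret to node-side
# expansion operations.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-san-storage
provisioner: san.csi.example.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/node-expand-secret-name: san-credentials
  csi.storage.k8s.io/node-expand-secret-namespace: kube-system
```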
&lt;h2 id="authenticated-csi-storage-resizing-unveiled"&gt;Authenticated CSI storage resizing unveiled&lt;/h2&gt;
&lt;p&gt;Kubernetes harnesses the capabilities of CSI to seamlessly integrate with third-party
storage systems, empowering your cluster to expand storage volumes
managed by the CSI driver. The recent elevation of authentication secret support
for resizes from Beta to GA ushers in new horizons, enabling volume expansion in
scenarios where the underlying storage operation demands credentials for backend
cluster operations – such as accessing a SAN/NAS fabric. This enhancement addresses
a critical limitation for CSI drivers, allowing volume expansion at the node level,
especially in cases necessitating authentication for resize operations.&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: VolumeAttributesClass for Volume Modification</title><link>https://andygol-k8s.netlify.app/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</link><pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</guid><description>&lt;p&gt;The v1.29 release of Kubernetes introduced an alpha feature to support modifying a volume
by changing the &lt;code&gt;volumeAttributesClassName&lt;/code&gt; that was specified for a PersistentVolumeClaim (PVC).
With the feature enabled, Kubernetes can handle updates of volume attributes other than capacity.
Allowing volume attributes to be changed without managing them through different
providers' APIs directly simplifies the current flow.&lt;/p&gt;
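A sketch of the alpha API and how a claim opts in (the driver name and parameter keys are driver-specific and illustrative here):

```yaml
# Sketch: a VolumeAttributesClass and a PVC that references it.
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: gold
driverName: ebs.csi.example.com
parameters:
  iops: "4000"
  throughput: "250"
---
# Changing volumeAttributesClassName later asks the driver to modify
# the volume's attributes in place, without touching its capacity.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tuned-data
spec:
  volumeAttributesClassName: gold
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
```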
&lt;p&gt;You can read about VolumeAttributesClass usage details in the Kubernetes documentation
or you can read on to learn about why the Kubernetes project is supporting this feature.&lt;/p&gt;</description></item><item><title>Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components</title><link>https://andygol-k8s.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/</link><pubDate>Thu, 14 Dec 2023 09:30:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/</guid><description>&lt;p&gt;For Kubernetes v1.29, you need to use additional components to integrate your
Kubernetes cluster with a cloud infrastructure provider. By default, Kubernetes
v1.29 components &lt;strong&gt;abort&lt;/strong&gt; if you try to specify integration with any cloud provider using
one of the legacy compiled-in cloud provider integrations. If you want to use a legacy
integration, you have to opt back in - and a future release will remove even that option.&lt;/p&gt;
&lt;p&gt;In 2018, the &lt;a href="https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/"&gt;Kubernetes community agreed to form the Cloud Provider Special
Interest Group (SIG)&lt;/a&gt;, with a mission to externalize all cloud provider
integrations and remove all the existing in-tree cloud provider integrations.
In January 2019, the Kubernetes community approved the initial draft of
&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers"&gt;KEP-2395: Removing In-Tree Cloud Provider Code&lt;/a&gt;. This KEP defines a
process by which we can remove cloud provider specific code from the core
Kubernetes source tree. From the KEP:&lt;/p&gt;</description></item><item><title>Kubernetes v1.29: Mandala</title><link>https://andygol-k8s.netlify.app/blog/2023/12/13/kubernetes-v1-29-release/</link><pubDate>Wed, 13 Dec 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/12/13/kubernetes-v1-29-release/</guid><description>&lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.29: Mandala (The Universe), the last release of 2023!&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.29 introduces new stable, beta, and alpha features. The consistent delivery of top-notch releases underscores the strength of our development cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 49 enhancements. Of those enhancements, 11 have graduated to Stable, 19 are entering Beta and 19 have graduated to Alpha.&lt;/p&gt;</description></item><item><title>New Experimental Features in Gateway API v1.0</title><link>https://andygol-k8s.netlify.app/blog/2023/11/28/gateway-api-ga/</link><pubDate>Tue, 28 Nov 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/28/gateway-api-ga/</guid><description>&lt;p&gt;Recently, the &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt; &lt;a href="https://andygol-k8s.netlify.app/blog/2023/10/31/gateway-api-ga/"&gt;announced its v1.0 GA release&lt;/a&gt;, marking a huge milestone for the project.&lt;/p&gt;
&lt;p&gt;Along with stabilizing some of the core functionality in the API, a number of exciting new &lt;em&gt;experimental&lt;/em&gt; features have been added.&lt;/p&gt;
&lt;h2 id="backend-tls-policy"&gt;Backend TLS Policy&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;BackendTLSPolicy&lt;/code&gt; is a new Gateway API type used for specifying the TLS configuration of the connection from the Gateway to backend Pods via the Service API object.
It is specified as a &lt;a href="https://gateway-api.sigs.k8s.io/geps/gep-713/#direct-policy-attachment"&gt;Direct PolicyAttachment&lt;/a&gt; without defaults or overrides, applied to a Service that accesses a backend, where the BackendTLSPolicy resides in the same namespace as the Service to which it is applied.
All Gateway API Routes that point to a referenced Service should respect a configured &lt;code&gt;BackendTLSPolicy&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Testing</title><link>https://andygol-k8s.netlify.app/blog/2023/11/24/sig-testing-spotlight-2023/</link><pubDate>Fri, 24 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/24/sig-testing-spotlight-2023/</guid><description>&lt;p&gt;Welcome to another edition of the &lt;em&gt;SIG spotlight&lt;/em&gt; blog series, where we
highlight the incredible work being done by various Special Interest
Groups (SIGs) within the Kubernetes project. In this edition, we turn
our attention to &lt;a href="https://github.com/kubernetes/community/tree/master/sig-testing#readme"&gt;SIG Testing&lt;/a&gt;,
a group interested in effective testing of Kubernetes and automating
away project toil. SIG Testing focuses on creating and running tools and
infrastructure that make it easier for the community to write and run
tests, and to contribute, analyze and act upon test results.&lt;/p&gt;</description></item><item><title>Kubernetes Removals, Deprecations, and Major Changes in Kubernetes 1.29</title><link>https://andygol-k8s.netlify.app/blog/2023/11/16/kubernetes-1-29-upcoming-changes/</link><pubDate>Thu, 16 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/16/kubernetes-1-29-upcoming-changes/</guid><description>&lt;p&gt;As with every release, Kubernetes v1.29 will introduce feature deprecations and removals. Our continued ability to produce high-quality releases is a testament to our robust development cycle and healthy community. The following are some of the deprecations and removals coming in the Kubernetes 1.29 release.&lt;/p&gt;
&lt;h2 id="the-kubernetes-api-removal-and-deprecation-process"&gt;The Kubernetes API removal and deprecation process&lt;/h2&gt;
&lt;p&gt;The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.&lt;/p&gt;</description></item><item><title>The Case for Kubernetes Resource Limits: Predictability vs. Efficiency</title><link>https://andygol-k8s.netlify.app/blog/2023/11/16/the-case-for-kubernetes-resource-limits/</link><pubDate>Thu, 16 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/16/the-case-for-kubernetes-resource-limits/</guid><description>&lt;p&gt;There’s been quite a lot of posts suggesting that not using Kubernetes resource limits might be a fairly useful thing (for example, &lt;a href="https://home.robusta.dev/blog/stop-using-cpu-limits/"&gt;For the Love of God, Stop Using CPU Limits on Kubernetes&lt;/a&gt; or &lt;a href="https://erickhun.com/posts/kubernetes-faster-services-no-cpu-limits/"&gt;Kubernetes: Make your services faster by removing CPU limits&lt;/a&gt; ). The points made there are totally valid – it doesn’t make much sense to pay for compute power that will not be used due to limits, nor to artificially increase latency. 
This post strives to argue that limits have their legitimate use as well.&lt;/p&gt;</description></item><item><title>Introducing SIG etcd</title><link>https://andygol-k8s.netlify.app/blog/2023/11/07/introducing-sig-etcd/</link><pubDate>Tue, 07 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/07/introducing-sig-etcd/</guid><description>&lt;p&gt;Special Interest Groups (SIGs) are a fundamental part of the Kubernetes project, with a substantial share of the community activity happening within them. When the need arises, &lt;a href="https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md"&gt;new SIGs can be created&lt;/a&gt;, and that was precisely what happened recently.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/blob/master/sig-etcd/README.md"&gt;SIG etcd&lt;/a&gt; is the most recent addition to the list of Kubernetes SIGs. In this article we will get to know it a bit better, understand its origins, scope, and plans.&lt;/p&gt;
&lt;h2 id="the-critical-role-of-etcd"&gt;The critical role of etcd&lt;/h2&gt;
&lt;p&gt;If we look inside the control plane of a Kubernetes cluster, we will find &lt;a href="https://kubernetes.io/docs/concepts/architecture/#etcd"&gt;etcd&lt;/a&gt;, a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data -- this description alone highlights the critical role that etcd plays, and the importance of it within the Kubernetes ecosystem.&lt;/p&gt;</description></item><item><title>Kubernetes Contributor Summit: Behind-the-scenes</title><link>https://andygol-k8s.netlify.app/blog/2023/11/03/k8s-contributor-summit-behind-the-scenes/</link><pubDate>Fri, 03 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/03/k8s-contributor-summit-behind-the-scenes/</guid><description>&lt;p&gt;Every year, just before the official start of KubeCon+CloudNativeCon, there's a special event that
has a very special place in the hearts of those organizing and participating in it: the Kubernetes
Contributor Summit. To find out why, and to provide a behind-the-scenes perspective, we interview
Noah Abrahams, who, amongst other roles, was the co-lead for the Kubernetes Contributor Summit in
2023.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Frederico Muñoz (FSM)&lt;/strong&gt;: Hello Noah, and welcome. Could you start by introducing yourself and
telling us how you got involved in Kubernetes?&lt;/p&gt;</description></item><item><title>Spotlight on SIG Architecture: Production Readiness</title><link>https://andygol-k8s.netlify.app/blog/2023/11/02/sig-architecture-production-readiness-spotlight-2023/</link><pubDate>Thu, 02 Nov 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/11/02/sig-architecture-production-readiness-spotlight-2023/</guid><description>&lt;p&gt;&lt;em&gt;This is the second interview of a SIG Architecture Spotlight series that will cover the different
subprojects. In this blog, we will cover the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#production-readiness-1"&gt;SIG Architecture: Production Readiness
subproject&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In this SIG Architecture spotlight, we talked with &lt;a href="https://github.com/wojtek-t"&gt;Wojciech Tyczynski&lt;/a&gt;
(Google), lead of the Production Readiness subproject.&lt;/p&gt;
&lt;h2 id="about-sig-architecture-and-the-production-readiness-subproject"&gt;About SIG Architecture and the Production Readiness subproject&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)&lt;/strong&gt;: Hello Wojciech, could you tell us a bit about yourself, your role and how you
got involved in Kubernetes?&lt;/p&gt;</description></item><item><title>Gateway API v1.0: GA Release</title><link>https://andygol-k8s.netlify.app/blog/2023/10/31/gateway-api-ga/</link><pubDate>Tue, 31 Oct 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/31/gateway-api-ga/</guid><description>&lt;p&gt;On behalf of Kubernetes SIG Network, we are pleased to announce the v1.0 release of &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway
API&lt;/a&gt;! This release marks a huge milestone for
this project. Several key APIs are graduating to GA (generally available), while
other significant features have been added to the Experimental channel.&lt;/p&gt;
&lt;h2 id="what-s-new"&gt;What's new&lt;/h2&gt;
&lt;h3 id="graduation-to-v1"&gt;Graduation to v1&lt;/h3&gt;
&lt;p&gt;This release includes the graduation of
&lt;a href="https://gateway-api.sigs.k8s.io/api-types/gateway/"&gt;Gateway&lt;/a&gt;,
&lt;a href="https://gateway-api.sigs.k8s.io/api-types/gatewayclass/"&gt;GatewayClass&lt;/a&gt;, and
&lt;a href="https://gateway-api.sigs.k8s.io/api-types/httproute/"&gt;HTTPRoute&lt;/a&gt; to v1, which
means they are now generally available (GA). This API version denotes a high
level of confidence in the API surface and provides guarantees of backwards
compatibility. Note that although the versions of these APIs included in the
Standard channel are now considered stable, that does not mean that they are
complete. These APIs will continue to receive new features via the Experimental
channel as they meet graduation criteria. For more information on how all of
this works, refer to the &lt;a href="https://gateway-api.sigs.k8s.io/concepts/versioning/"&gt;Gateway API Versioning
Policy&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Introducing ingress2gateway; Simplifying Upgrades to Gateway API</title><link>https://andygol-k8s.netlify.app/blog/2023/10/25/introducing-ingress2gateway/</link><pubDate>Wed, 25 Oct 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/25/introducing-ingress2gateway/</guid><description>&lt;p&gt;Today we are releasing &lt;a href="https://github.com/kubernetes-sigs/ingress2gateway"&gt;ingress2gateway&lt;/a&gt;, a tool
that can help you migrate from &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt; to &lt;a href="https://gateway-api.sigs.k8s.io"&gt;Gateway
API&lt;/a&gt;. Gateway API is just weeks away from graduating to GA; if you
haven't upgraded yet, now's the time to think about it!&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;In the ever-evolving world of Kubernetes, networking plays a pivotal role. As more applications are
deployed in Kubernetes clusters, effective exposure of these services to clients becomes a critical
concern. If you've been working with Kubernetes, you're likely familiar with the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress API&lt;/a&gt;,
which has been the go-to solution for managing external access to services.&lt;/p&gt;</description></item><item><title>Plants, process and parties: the Kubernetes 1.28 release interview</title><link>https://andygol-k8s.netlify.app/blog/2023/10/24/plants-process-and-parties-the-kubernetes-1.28-release-interview/</link><pubDate>Tue, 24 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/24/plants-process-and-parties-the-kubernetes-1.28-release-interview/</guid><description>&lt;p&gt;Since 2018, one of my favourite contributions to the Kubernetes community has been to &lt;a href="https://www.google.com/search?q=%22release+interview%22+site%3Akubernetes.io%2Fblog"&gt;share the story of each release&lt;/a&gt;. Many of these stories were told on behalf of a past employer; by popular demand, I've brought them back, now under my own name. If you were a fan of the old show, I would be delighted if you would &lt;a href="https://craigbox.substack.com/about"&gt;subscribe&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Back in August, &lt;a href="https://andygol-k8s.netlify.app/blog/2023/08/15/kubernetes-v1-28-release/"&gt;we welcomed the release of Kubernetes 1.28&lt;/a&gt;. That release was led by &lt;a href="https://twitter.com/gracenng"&gt;Grace Nguyen&lt;/a&gt;, a CS student at the University of Waterloo. Grace joined me for the traditional release interview, and while you can read her story below, &lt;a href="https://craigbox.substack.com/p/the-kubernetes-128-release-interview"&gt;I encourage you to listen to it if you can&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>PersistentVolume Last Phase Transition Time in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2023/10/23/persistent-volume-last-phase-transition-time/</link><pubDate>Mon, 23 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/23/persistent-volume-last-phase-transition-time/</guid><description>&lt;p&gt;In the recent Kubernetes v1.28 release, we (SIG Storage) introduced a new alpha feature that aims to improve PersistentVolume (PV)
storage management and help cluster administrators gain better insights into the lifecycle of PVs.
With the addition of the &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; field into the status of a PV,
cluster administrators are now able to track the last time a PV transitioned to a different
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#phase"&gt;phase&lt;/a&gt;, allowing for more efficient
and informed resource management.&lt;/p&gt;</description></item><item><title>A Quick Recap of 2023 China Kubernetes Contributor Summit</title><link>https://andygol-k8s.netlify.app/blog/2023/10/20/kcs-shanghai/</link><pubDate>Fri, 20 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/20/kcs-shanghai/</guid><description>&lt;p&gt;On September 26, 2023, the first day of
&lt;a href="https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/"&gt;KubeCon + CloudNativeCon + Open Source Summit China 2023&lt;/a&gt;,
nearly 50 contributors gathered in Shanghai for the Kubernetes Contributor Summit.&lt;/p&gt;


&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/blog/2023/10/20/kcs-shanghai/kcs04.jpeg"
 alt="All participants in the 2023 Kubernetes Contributor Summit"/&gt; &lt;figcaption&gt;
 &lt;p&gt;All participants in the 2023 Kubernetes Contributor Summit&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;This marked the first in-person offline gathering held in China after three years of the pandemic.&lt;/p&gt;
&lt;h2 id="a-joyful-meetup"&gt;A joyful meetup&lt;/h2&gt;
&lt;p&gt;The event began with welcome speeches from &lt;a href="https://github.com/kevin-wangzefeng"&gt;Kevin Wang&lt;/a&gt; from Huawei Cloud,
one of the co-chairs of KubeCon, and &lt;a href="https://github.com/puja108"&gt;Puja&lt;/a&gt; from Giant Swarm.&lt;/p&gt;</description></item><item><title>Bootstrap an Air Gapped Cluster With Kubeadm</title><link>https://andygol-k8s.netlify.app/blog/2023/10/12/bootstrap-an-air-gapped-cluster-with-kubeadm/</link><pubDate>Thu, 12 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/12/bootstrap-an-air-gapped-cluster-with-kubeadm/</guid><description>&lt;p&gt;Ever wonder how software gets deployed onto a system that is deliberately disconnected from the Internet and other networks? These systems are typically disconnected due to their sensitive nature. Sensitive as in utilities (power/water), banking, healthcare, weapons systems, other government use cases, etc. Sometimes it's technically a water gap, if you're running Kubernetes on an underwater vessel. Still, these environments need software to operate. This concept of deployment in a disconnected state is what it means to deploy to the other side of an &lt;a href="https://en.wikipedia.org/wiki/Air_gap_(networking)"&gt;air gap&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>CRI-O is moving towards pkgs.k8s.io</title><link>https://andygol-k8s.netlify.app/blog/2023/10/10/cri-o-community-package-infrastructure/</link><pubDate>Tue, 10 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/10/cri-o-community-package-infrastructure/</guid><description>&lt;p&gt;The Kubernetes community &lt;a href="https://andygol-k8s.netlify.app/blog/2023/08/31/legacy-package-repository-deprecation/"&gt;recently announced&lt;/a&gt;
that their legacy package repositories are frozen, and that they have moved to the newly
&lt;a href="https://andygol-k8s.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction"&gt;introduced community-owned package repositories&lt;/a&gt; powered by the
&lt;a href="https://build.opensuse.org/project/subprojects/isv:kubernetes"&gt;OpenBuildService (OBS)&lt;/a&gt;.
CRI-O has a long history of utilizing
&lt;a href="https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o"&gt;OBS for their package builds&lt;/a&gt;,
but all of the packaging efforts have been done manually so far.&lt;/p&gt;
&lt;p&gt;The CRI-O community absolutely loves Kubernetes, which means that they're
delighted to announce that:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;All future CRI-O packages will be shipped as part of the officially supported
Kubernetes infrastructure hosted on pkgs.k8s.io!&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Spotlight on SIG Architecture: Conformance</title><link>https://andygol-k8s.netlify.app/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</link><pubDate>Thu, 05 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</guid><description>&lt;p&gt;&lt;em&gt;This is the first interview of a SIG Architecture Spotlight series
that will cover the different subprojects. We start with the SIG
Architecture: Conformance subproject&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In this &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md"&gt;SIG
Architecture&lt;/a&gt;
spotlight, we talked with &lt;a href="https://github.com/Riaankl"&gt;Riaan
Kleinhans&lt;/a&gt; (ii.nz), Lead for the
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1"&gt;Conformance
sub-project&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="about-sig-architecture-and-the-conformance-subproject"&gt;About SIG Architecture and the Conformance subproject&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)&lt;/strong&gt;: Hello Riaan, and welcome! For starters, tell us a
bit about yourself, your role and how you got involved in Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Riaan Kleinhans (RK)&lt;/strong&gt;: Hi! My name is Riaan Kleinhans and I live in
South Africa. I am the Project manager for the &lt;a href="https://ii.nz"&gt;ii.nz&lt;/a&gt; team in New
Zealand. When I joined ii the plan was to move to New Zealand in April
2020 and then Covid happened. Fortunately, being a flexible and
dynamic team we were able to make it work remotely and in very
different time zones.&lt;/p&gt;</description></item><item><title>Announcing the 2023 Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2023/10/02/steering-committee-results-2023/</link><pubDate>Mon, 02 Oct 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/10/02/steering-committee-results-2023/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2023"&gt;2023 Steering Committee Election&lt;/a&gt; is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2023. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p&gt;
&lt;p&gt;This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;charter&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Happy 7th Birthday kubeadm!</title><link>https://andygol-k8s.netlify.app/blog/2023/09/26/happy-7th-birthday-kubeadm/</link><pubDate>Tue, 26 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/09/26/happy-7th-birthday-kubeadm/</guid><description>&lt;p&gt;What a journey so far!&lt;/p&gt;
&lt;p&gt;Starting from the initial blog post &lt;a href="https://andygol-k8s.netlify.app/blog/2016/09/how-we-made-kubernetes-easy-to-install/"&gt;“How we made Kubernetes insanely easy to install”&lt;/a&gt; in September 2016, followed by exciting growth that led to general availability / &lt;a href="https://andygol-k8s.netlify.app/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/"&gt;“Production-Ready Kubernetes Cluster Creation with kubeadm”&lt;/a&gt; two years later.&lt;/p&gt;
&lt;p&gt;And later on a continuous, steady and reliable flow of small improvements that is still going on as of today.&lt;/p&gt;
&lt;h2 id="what-is-kubeadm-quick-refresher"&gt;What is kubeadm? (quick refresher)&lt;/h2&gt;
&lt;p&gt;kubeadm is focused on bootstrapping Kubernetes clusters on existing infrastructure and performing an essential set of maintenance tasks. The core of the kubeadm interface is quite simple: new control plane nodes
are created by running &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-init/"&gt;&lt;code&gt;kubeadm init&lt;/code&gt;&lt;/a&gt; and
worker nodes are joined to the control plane by running
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-join/"&gt;&lt;code&gt;kubeadm join&lt;/code&gt;&lt;/a&gt;.
Also included are utilities for managing already bootstrapped clusters, such as control plane upgrades
and token and certificate renewal.&lt;/p&gt;</description></item><item><title>kubeadm: Use etcd Learner to Join a Control Plane Node Safely</title><link>https://andygol-k8s.netlify.app/blog/2023/09/25/kubeadm-use-etcd-learner-mode/</link><pubDate>Mon, 25 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/09/25/kubeadm-use-etcd-learner-mode/</guid><description>&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/"&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt; tool now supports etcd learner mode, which
allows you to enhance the resilience and stability
of your Kubernetes clusters by leveraging the &lt;a href="https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34"&gt;learner mode&lt;/a&gt;
feature introduced in etcd version 3.4.
This guide will walk you through using etcd learner mode with kubeadm. By default, kubeadm runs
a local etcd instance on each control plane node.&lt;/p&gt;
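&lt;p&gt;Turning the learner-mode behavior on is done through the kubeadm configuration file. The snippet below is an illustrative sketch only: the &lt;code&gt;EtcdLearnerMode&lt;/code&gt; feature gate is the one kubeadm introduced in v1.27, and the field layout assumes the v1beta3 kubeadm config API, so verify both against the kubeadm reference for your version.&lt;/p&gt;

```yaml
# Illustrative sketch: enable the EtcdLearnerMode feature gate via a
# kubeadm ClusterConfiguration (assumes the v1beta3 config API).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
featureGates:
  EtcdLearnerMode: true
```

&lt;p&gt;This would be passed as &lt;code&gt;kubeadm init --config cluster-config.yaml&lt;/code&gt; when bootstrapping the first control plane node, so that later &lt;code&gt;kubeadm join&lt;/code&gt; invocations add etcd members as learners first.&lt;/p&gt;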
&lt;p&gt;In v1.27, kubeadm introduced a new feature gate &lt;code&gt;EtcdLearnerMode&lt;/code&gt;. With this feature gate enabled,
when joining a new control plane node, a new etcd member will be created as a learner and
promoted to a voting member only after the etcd data are fully aligned.&lt;/p&gt;</description></item><item><title>User Namespaces: Now Supports Running Stateful Pods in Alpha!</title><link>https://andygol-k8s.netlify.app/blog/2023/09/13/userns-alpha/</link><pubDate>Wed, 13 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/09/13/userns-alpha/</guid><description>&lt;p&gt;Kubernetes v1.25 introduced support for user namespaces for only stateless
pods. Kubernetes 1.28 lifted that restriction, after some design changes were
made in 1.27.&lt;/p&gt;
&lt;p&gt;The beauty of this feature is that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;it is trivial to adopt (you just need to set a bool in the pod spec)&lt;/li&gt;
&lt;li&gt;doesn't need any changes for &lt;strong&gt;most&lt;/strong&gt; applications&lt;/li&gt;
&lt;li&gt;improves security by &lt;em&gt;drastically&lt;/em&gt; enhancing the isolation of containers and
mitigating CVEs rated HIGH and CRITICAL.&lt;/li&gt;
&lt;/ul&gt;
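&lt;p&gt;The "bool in the pod spec" mentioned above is the &lt;code&gt;hostUsers&lt;/code&gt; field. Here is a hedged sketch of a pod opting in; the field and the &lt;code&gt;UserNamespacesSupport&lt;/code&gt; feature gate name are as documented for the 1.28 alpha, and a compatible container runtime is also required:&lt;/p&gt;

```yaml
# Illustrative sketch: opt a pod in to user namespaces with one boolean.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo            # hypothetical name
spec:
  hostUsers: false             # false = run the pod inside a user namespace
  containers:
  - name: app
    image: registry.k8s.io/e2e-test-images/busybox:1.29
    command: ["sleep", "infinity"]
```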
&lt;p&gt;This post explains the basics of user namespaces and also shows:&lt;/p&gt;</description></item><item><title>Comparing Local Kubernetes Development Tools: Telepresence, Gefyra, and mirrord</title><link>https://andygol-k8s.netlify.app/blog/2023/09/12/local-k8s-development-tools/</link><pubDate>Tue, 12 Sep 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/09/12/local-k8s-development-tools/</guid><description>&lt;p&gt;The Kubernetes development cycle is an evolving landscape with a myriad of tools seeking to streamline the process. Each tool has its unique approach, and the choice often comes down to individual project requirements, the team's expertise, and the preferred workflow.&lt;/p&gt;
&lt;p&gt;Among the various solutions, a category we dubbed “Local K8S Development tools” has emerged, which seeks to enhance the Kubernetes development experience by connecting locally running components to the Kubernetes cluster. This facilitates rapid testing of new code in cloud conditions, circumventing the traditional cycle of Dockerization, CI, and deployment.&lt;/p&gt;</description></item><item><title>Kubernetes Legacy Package Repositories Will Be Frozen On September 13, 2023</title><link>https://andygol-k8s.netlify.app/blog/2023/08/31/legacy-package-repository-deprecation/</link><pubDate>Thu, 31 Aug 2023 15:30:00 -0700</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/31/legacy-package-repository-deprecation/</guid><description>&lt;p&gt;On August 15, 2023, the Kubernetes project announced the general availability of
the community-owned package repositories for Debian and RPM packages available
at &lt;code&gt;pkgs.k8s.io&lt;/code&gt;. The new package repositories are a replacement for the legacy
Google-hosted package repositories: &lt;code&gt;apt.kubernetes.io&lt;/code&gt; and &lt;code&gt;yum.kubernetes.io&lt;/code&gt;.
The
&lt;a href="https://andygol-k8s.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction/"&gt;announcement blog post for &lt;code&gt;pkgs.k8s.io&lt;/code&gt;&lt;/a&gt;
highlighted that we will stop publishing packages to the legacy repositories in
the future.&lt;/p&gt;
&lt;p&gt;Today, we're formally deprecating the legacy package repositories (&lt;code&gt;apt.kubernetes.io&lt;/code&gt;
and &lt;code&gt;yum.kubernetes.io&lt;/code&gt;), and we're announcing our plans to freeze the contents of
the repositories as of &lt;strong&gt;September 13, 2023&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Gateway API v0.8.0: Introducing Service Mesh Support</title><link>https://andygol-k8s.netlify.app/blog/2023/08/29/gateway-api-v0-8/</link><pubDate>Tue, 29 Aug 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/29/gateway-api-v0-8/</guid><description>&lt;p&gt;We are thrilled to announce the v0.8.0 release of Gateway API! With this
release, Gateway API support for service mesh has reached &lt;a href="https://gateway-api.sigs.k8s.io/geps/overview/#status"&gt;Experimental
status&lt;/a&gt;. We look forward to your feedback!&lt;/p&gt;
&lt;p&gt;We're especially delighted to announce that Kuma 2.3+, Linkerd 2.14+, and Istio
1.16+ are all fully-conformant implementations of Gateway API service mesh
support.&lt;/p&gt;
&lt;h2 id="service-mesh-support-in-gateway-api"&gt;Service mesh support in Gateway API&lt;/h2&gt;
&lt;p&gt;While the initial focus of Gateway API was always ingress (north-south)
traffic, it was clear almost from the beginning that the same basic routing
concepts should also be applicable to service mesh (east-west) traffic. In
2022, the Gateway API subproject started the &lt;a href="https://gateway-api.sigs.k8s.io/concepts/gamma/"&gt;GAMMA initiative&lt;/a&gt;, a
dedicated vendor-neutral workstream, specifically to examine how best to fit
service mesh support into the framework of the Gateway API resources, without
requiring users of Gateway API to relearn everything they understand about the
API.&lt;/p&gt;</description></item><item><title>Kubernetes 1.28: A New (alpha) Mechanism For Safer Cluster Upgrades</title><link>https://andygol-k8s.netlify.app/blog/2023/08/28/kubernetes-1-28-feature-mixed-version-proxy-alpha/</link><pubDate>Mon, 28 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/28/kubernetes-1-28-feature-mixed-version-proxy-alpha/</guid><description>&lt;p&gt;This blog describes the &lt;em&gt;mixed version proxy&lt;/em&gt;, a new alpha feature in Kubernetes 1.28. The
mixed version proxy enables an HTTP request for a resource to be served by the correct API server
in cases where there are multiple API servers at varied versions in a cluster. For example,
this is useful during a cluster upgrade, or when you're rolling out the runtime configuration of
the cluster's control plane.&lt;/p&gt;
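&lt;p&gt;As an alpha feature, the mixed version proxy must be switched on explicitly on each kube-apiserver. The fragment below is a hedged sketch, not a complete manifest: the feature gate name &lt;code&gt;UnknownVersionInteroperabilityProxy&lt;/code&gt; and any additional peer-connectivity flags should be verified against the v1.28 release documentation.&lt;/p&gt;

```yaml
# Illustrative sketch (not a complete manifest): enabling the alpha
# mixed version proxy on a kube-apiserver static pod. Assumes the
# feature gate is named UnknownVersionInteroperabilityProxy.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0
    command:
    - kube-apiserver
    - --feature-gates=UnknownVersionInteroperabilityProxy=true
    # ...remaining kube-apiserver flags unchanged...
```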
&lt;h2 id="what-problem-does-this-solve"&gt;What problem does this solve?&lt;/h2&gt;
&lt;p&gt;When a cluster undergoes an upgrade, the kube-apiservers existing at different versions in that scenario can serve different sets (groups, versions, resources) of built-in resources. A resource request made in this scenario may be served by any of the available apiservers, so a request can end up at an apiserver that is not aware of the requested resource and incorrectly returns a 404 Not Found error. Such incorrect 404 errors can have serious consequences, such as namespace deletion being blocked or objects being garbage collected by mistake.&lt;/p&gt;</description></item><item><title>Kubernetes v1.28: Introducing native sidecar containers</title><link>https://andygol-k8s.netlify.app/blog/2023/08/25/native-sidecar-containers/</link><pubDate>Fri, 25 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/25/native-sidecar-containers/</guid><description>&lt;p&gt;This post explains how to use the new sidecar feature, which enables restartable init containers and is available in alpha in Kubernetes 1.28. We want your feedback so that we can graduate this feature as soon as possible.&lt;/p&gt;
&lt;p&gt;The concept of a “sidecar” has been part of Kubernetes since nearly the very beginning. In 2015, sidecars were described in a &lt;a href="https://andygol-k8s.netlify.app/blog/2015/06/the-distributed-system-toolkit-patterns/"&gt;blog post&lt;/a&gt; about composite containers as additional containers that “extend and enhance the ‘main’ container”. Sidecar containers have become a common Kubernetes deployment pattern and are often used for network proxies or as part of a logging system. Until now, sidecars were a concept that Kubernetes users applied without native support. The lack of native support has caused some usage friction, which this enhancement aims to resolve.&lt;/p&gt;</description></item><item><title>Kubernetes 1.28: Beta support for using swap on Linux</title><link>https://andygol-k8s.netlify.app/blog/2023/08/24/swap-linux-beta/</link><pubDate>Thu, 24 Aug 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/24/swap-linux-beta/</guid><description>&lt;p&gt;The 1.22 release &lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/09/run-nodes-with-swap-alpha/"&gt;introduced Alpha support&lt;/a&gt;
for configuring swap memory usage for Kubernetes workloads running on Linux on a per-node basis.
Now, in release 1.28, support for swap on Linux nodes has graduated to Beta, along with many
new improvements.&lt;/p&gt;
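&lt;p&gt;Swap use is controlled per node through the kubelet configuration. A minimal sketch follows; &lt;code&gt;LimitedSwap&lt;/code&gt; is one of the &lt;code&gt;swapBehavior&lt;/code&gt; options described for the 1.28 beta, so confirm the names against the KubeletConfiguration reference for your version:&lt;/p&gt;

```yaml
# Illustrative sketch: allow the kubelet to run on a node with swap
# enabled, and bound how workloads may use it.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false            # do not fail kubelet startup when swap is detected
memorySwap:
  swapBehavior: LimitedSwap  # limit workload swap usage
```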
&lt;p&gt;Prior to version 1.22, Kubernetes did not provide support for swap memory on Linux systems.
This was due to the inherent difficulty in guaranteeing and accounting for pod memory utilization
when swap memory was involved. As a result, swap support was deemed out of scope in the initial
design of Kubernetes, and the default behavior of a kubelet was to fail to start if swap memory
was detected on a node.&lt;/p&gt;</description></item><item><title>Kubernetes 1.28: Node podresources API Graduates to GA</title><link>https://andygol-k8s.netlify.app/blog/2023/08/23/kubelet-podresources-api-ga/</link><pubDate>Wed, 23 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/23/kubelet-podresources-api-ga/</guid><description>&lt;p&gt;The podresources API is an API served by the kubelet locally on the node, which exposes the compute resources exclusively
allocated to containers. With the release of Kubernetes 1.28, that API is now Generally Available.&lt;/p&gt;
&lt;h2 id="what-problem-does-it-solve"&gt;What problem does it solve?&lt;/h2&gt;
&lt;p&gt;The kubelet can allocate exclusive resources to containers, like
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/"&gt;CPUs, granting exclusive access to full cores&lt;/a&gt;
or &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/memory-manager/"&gt;memory, either regions or hugepages&lt;/a&gt;.
Workloads which require high performance, or low latency (or both) leverage these features.
The kubelet also can assign &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/"&gt;devices to containers&lt;/a&gt;.
Collectively, these features which enable exclusive assignments are known as &amp;quot;resource managers&amp;quot;.&lt;/p&gt;</description></item><item><title>Kubernetes 1.28: Improved failure handling for Jobs</title><link>https://andygol-k8s.netlify.app/blog/2023/08/21/kubernetes-1-28-jobapi-update/</link><pubDate>Mon, 21 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/21/kubernetes-1-28-jobapi-update/</guid><description>&lt;p&gt;This blog discusses two new features in Kubernetes 1.28 to improve Jobs for batch
users: &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/#pod-replacement-policy"&gt;Pod replacement policy&lt;/a&gt;
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/#backoff-limit-per-index"&gt;Backoff limit per index&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;These features continue the effort started by the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/#pod-failure-policy"&gt;Pod failure policy&lt;/a&gt;
to improve the handling of Pod failures in a Job.&lt;/p&gt;
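&lt;p&gt;As a sketch of how the two features are expressed in the Job spec (the Job name and image below are hypothetical; field names follow the v1.28 batch API, and &lt;code&gt;backoffLimitPerIndex&lt;/code&gt; requires an Indexed Job):&lt;/p&gt;

```yaml
# Hedged sketch combining both v1.28 Job features.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job              # hypothetical name
spec:
  completions: 4
  parallelism: 4
  completionMode: Indexed        # required for backoffLimitPerIndex
  podReplacementPolicy: Failed   # replace Pods only once they are fully failed
  backoffLimitPerIndex: 2        # retries allowed per index
  maxFailedIndexes: 2            # fail the Job after this many failed indexes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: example.com/worker:latest   # hypothetical image
```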
&lt;h2 id="pod-replacement-policy"&gt;Pod replacement policy&lt;/h2&gt;
&lt;p&gt;By default, when a pod enters a terminating state (e.g. due to preemption or
eviction), Kubernetes immediately creates a replacement Pod. Therefore, both Pods are running
at the same time. In API terms, a pod is considered terminating when it has a
&lt;code&gt;deletionTimestamp&lt;/code&gt; and it has a phase &lt;code&gt;Pending&lt;/code&gt; or &lt;code&gt;Running&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes v1.28: Retroactive Default StorageClass move to GA</title><link>https://andygol-k8s.netlify.app/blog/2023/08/18/retroactive-default-storage-class-ga/</link><pubDate>Fri, 18 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/18/retroactive-default-storage-class-ga/</guid><description>&lt;p&gt;Announcing graduation to General Availability (GA) - Retroactive Default StorageClass Assignment in Kubernetes v1.28!&lt;/p&gt;
&lt;p&gt;Kubernetes SIG Storage team is thrilled to announce that the &amp;quot;Retroactive Default StorageClass Assignment&amp;quot; feature,
introduced as an alpha in Kubernetes v1.25, has now graduated to GA and is officially part of the Kubernetes v1.28 release.
This enhancement brings a significant improvement to how default
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-classes/"&gt;StorageClasses&lt;/a&gt; are assigned to PersistentVolumeClaims (PVCs).&lt;/p&gt;
&lt;p&gt;With this feature enabled, you no longer need to create a default StorageClass first and then a PVC to assign the class.
Instead, any PVCs without a StorageClass assigned will now be retroactively updated to include the default StorageClass.
This enhancement ensures that PVCs no longer get stuck in an unbound state, and storage provisioning works seamlessly,
even when a default StorageClass is not defined at the time of PVC creation.&lt;/p&gt;</description></item><item><title>Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA</title><link>https://andygol-k8s.netlify.app/blog/2023/08/16/kubernetes-1-28-non-graceful-node-shutdown-ga/</link><pubDate>Wed, 16 Aug 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/16/kubernetes-1-28-non-graceful-node-shutdown-ga/</guid><description>&lt;p&gt;The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
It was introduced as
&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown"&gt;alpha&lt;/a&gt;
in Kubernetes v1.24, and promoted to
&lt;a href="https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/"&gt;beta&lt;/a&gt;
in Kubernetes v1.26.
This feature allows stateful workloads to restart on a different node if the
original node is shut down unexpectedly or ends up in a non-recoverable state,
such as a hardware failure or an unresponsive OS.&lt;/p&gt;
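&lt;p&gt;The feature relies on a taint that a cluster administrator applies to a node that is confirmed to be shut down or unrecoverable. As a sketch, the taint on the affected Node object looks like this (it is typically applied with &lt;code&gt;kubectl taint&lt;/code&gt;; shown here as a node spec fragment):&lt;/p&gt;

```yaml
# Out-of-service taint that triggers non-graceful shutdown handling,
# letting stateful Pods detach their volumes and move to another node.
spec:
  taints:
  - key: node.kubernetes.io/out-of-service
    value: nodeshutdown
    effect: NoExecute
```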
&lt;h2 id="what-is-a-non-graceful-node-shutdown"&gt;What is a Non-Graceful Node Shutdown&lt;/h2&gt;
&lt;p&gt;In a Kubernetes cluster, a node can be shut down in a planned, graceful way or
unexpectedly because of a power outage or some other external cause.
A node shutdown could lead to workload failure if the node is not drained
before the shutdown. A node shutdown can be either graceful or non-graceful.&lt;/p&gt;</description></item><item><title>pkgs.k8s.io: Introducing Kubernetes Community-Owned Package Repositories</title><link>https://andygol-k8s.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction/</link><pubDate>Tue, 15 Aug 2023 20:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction/</guid><description>&lt;p&gt;On behalf of Kubernetes SIG Release, I am very excited to introduce the
Kubernetes community-owned software
repositories for Debian and RPM packages: &lt;code&gt;pkgs.k8s.io&lt;/code&gt;! The new package
repositories are a replacement for the Google-hosted package repositories
(&lt;code&gt;apt.kubernetes.io&lt;/code&gt; and &lt;code&gt;yum.kubernetes.io&lt;/code&gt;) that we've been using since
Kubernetes v1.5.&lt;/p&gt;
&lt;p&gt;This blog post contains information about these new package repositories,
what it means to you as an end user, and how to migrate to the new
repositories.&lt;/p&gt;</description></item><item><title>Kubernetes v1.28: Planternetes</title><link>https://andygol-k8s.netlify.app/blog/2023/08/15/kubernetes-v1-28-release/</link><pubDate>Tue, 15 Aug 2023 12:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/15/kubernetes-v1-28-release/</guid><description>&lt;p&gt;Announcing the release of Kubernetes v1.28 Planternetes, the second release of 2023!&lt;/p&gt;
&lt;p&gt;This release consists of 45 enhancements. Of those enhancements, 19 are entering Alpha, 14 have graduated to Beta, and 12 have graduated to Stable.&lt;/p&gt;
&lt;h2 id="release-theme-and-logo"&gt;Release Theme And Logo&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes v1.28: &lt;em&gt;Planternetes&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The theme for Kubernetes v1.28 is &lt;em&gt;Planternetes&lt;/em&gt;.&lt;/p&gt;


&lt;figure class="release-logo "&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2023-08-15-kubernetes-1.28-blog/kubernetes-1.28.png"
 alt="Kubernetes 1.28 Planternetes logo"/&gt; 
&lt;/figure&gt;
&lt;p&gt;Each Kubernetes release is the culmination of the hard work of thousands of individuals from our community. The people behind this release come from a wide range of backgrounds, some of us industry veterans, parents, others students and newcomers to open-source. We combine our unique experience to create a collective artifact with global impact.&lt;/p&gt;</description></item><item><title>Spotlight on SIG ContribEx</title><link>https://andygol-k8s.netlify.app/blog/2023/08/14/sig-contribex-spotlight-2023/</link><pubDate>Mon, 14 Aug 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/08/14/sig-contribex-spotlight-2023/</guid><description>&lt;p&gt;Welcome to the world of Kubernetes and its vibrant contributor
community! In this blog post, we'll be shining a spotlight on the
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md"&gt;Special Interest Group for Contributor
Experience&lt;/a&gt;
(SIG ContribEx), an essential component of the Kubernetes project.&lt;/p&gt;
&lt;p&gt;SIG ContribEx in Kubernetes is responsible for developing and
maintaining a healthy and productive community of contributors to the
project. This involves identifying and addressing bottlenecks that may
hinder the project's growth and feature velocity, such as pull request
latency and the number of open pull requests and issues.&lt;/p&gt;</description></item><item><title>Spotlight on SIG CLI</title><link>https://andygol-k8s.netlify.app/blog/2023/07/20/sig-cli-spotlight-2023/</link><pubDate>Thu, 20 Jul 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/07/20/sig-cli-spotlight-2023/</guid><description>&lt;p&gt;In the world of Kubernetes, managing containerized applications at
scale requires powerful and efficient tools. The command-line
interface (CLI) is an integral part of any developer or operator’s
toolkit, offering a convenient and flexible way to interact with a
Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;SIG CLI plays a crucial role in improving the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli"&gt;Kubernetes
CLI&lt;/a&gt;
experience by focusing on the development and enhancement of
&lt;code&gt;kubectl&lt;/code&gt;, the primary command-line tool for Kubernetes.&lt;/p&gt;
&lt;p&gt;In this SIG CLI Spotlight, Arpit Agrawal, SIG ContribEx-Comms team
member, talked with &lt;a href="https://github.com/KnVerey"&gt;Katrina Verey&lt;/a&gt;, Tech
Lead &amp;amp; Chair of SIG CLI, and &lt;a href="https://github.com/soltysh"&gt;Maciej
Szulik&lt;/a&gt;, SIG CLI Batch Lead, about SIG
CLI, current projects, challenges and how anyone can get involved.&lt;/p&gt;</description></item><item><title>Confidential Kubernetes: Use Confidential Virtual Machines and Enclaves to improve your cluster security</title><link>https://andygol-k8s.netlify.app/blog/2023/07/06/confidential-kubernetes/</link><pubDate>Thu, 06 Jul 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/07/06/confidential-kubernetes/</guid><description>&lt;p&gt;In this blog post, we will introduce the concept of Confidential Computing (CC) to improve any computing environment's security and privacy properties. Further, we will show how
the Cloud-Native ecosystem, particularly Kubernetes, can benefit from the new compute paradigm.&lt;/p&gt;
&lt;p&gt;Confidential Computing is not a new concept in the cloud-native world. The
&lt;a href="https://confidentialcomputing.io/"&gt;Confidential Computing Consortium&lt;/a&gt; (CCC) is a project community in the Linux Foundation
that already worked on
&lt;a href="https://confidentialcomputing.io/wp-content/uploads/sites/85/2019/12/CCC_Overview.pdf"&gt;Defining and Enabling Confidential Computing&lt;/a&gt;.
In the &lt;a href="https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf"&gt;Whitepaper&lt;/a&gt;,
they provide a great motivation for the use of Confidential Computing:&lt;/p&gt;</description></item><item><title>Verifying Container Image Signatures Within CRI Runtimes</title><link>https://andygol-k8s.netlify.app/blog/2023/06/29/container-image-signature-verification/</link><pubDate>Thu, 29 Jun 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/06/29/container-image-signature-verification/</guid><description>&lt;p&gt;The Kubernetes community has been signing their container image-based artifacts
since release v1.24. While the graduation of the &lt;a href="https://github.com/kubernetes/enhancements/issues/3031"&gt;corresponding enhancement&lt;/a&gt;
from &lt;code&gt;alpha&lt;/code&gt; to &lt;code&gt;beta&lt;/code&gt; in v1.26 introduced signatures for the binary artifacts,
other projects followed the approach by providing image signatures for their
releases, too. This means that they either create the signatures within their
own CI/CD pipelines, for example by using GitHub actions, or rely on the
Kubernetes &lt;a href="https://github.com/kubernetes-sigs/promo-tools/blob/e2b96dd/docs/image-promotion.md"&gt;image promotion&lt;/a&gt; process to automatically sign the images by
proposing pull requests to the &lt;a href="https://github.com/kubernetes/k8s.io/tree/4b95cc2/k8s.gcr.io"&gt;k/k8s.io&lt;/a&gt; repository. A requirement for
using this process is that the project is part of the &lt;code&gt;kubernetes&lt;/code&gt; or
&lt;code&gt;kubernetes-sigs&lt;/code&gt; GitHub organization, so that they can utilize the community
infrastructure for pushing images into staging buckets.&lt;/p&gt;</description></item><item><title>dl.k8s.io to adopt a Content Delivery Network</title><link>https://andygol-k8s.netlify.app/blog/2023/06/09/dl-adopt-cdn/</link><pubDate>Fri, 09 Jun 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/06/09/dl-adopt-cdn/</guid><description>&lt;p&gt;We're happy to announce that dl.k8s.io, home of the official Kubernetes
binaries, will soon be powered by &lt;a href="https://www.fastly.com"&gt;Fastly&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Fastly is known for its high-performance content delivery network (CDN) designed
to deliver content quickly and reliably around the world. With its powerful
network, Fastly will help us deliver official Kubernetes binaries to users
faster and more reliably than ever before.&lt;/p&gt;
&lt;p&gt;The decision to use Fastly was made after an extensive evaluation process in
which we carefully evaluated several potential content delivery network
providers. Ultimately, we chose Fastly because of their commitment to the open
internet and proven track record of delivering fast and secure digital
experiences to some of the most known open source projects (through their &lt;a href="https://www.fastly.com/fast-forward"&gt;Fast
Forward&lt;/a&gt; program).&lt;/p&gt;</description></item><item><title>Using OCI artifacts to distribute security profiles for seccomp, SELinux and AppArmor</title><link>https://andygol-k8s.netlify.app/blog/2023/05/24/oci-security-profiles/</link><pubDate>Wed, 24 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/24/oci-security-profiles/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator"&gt;Security Profiles Operator (SPO)&lt;/a&gt; makes managing seccomp, SELinux and
AppArmor profiles within Kubernetes easier than ever. It allows cluster
administrators to define the profiles in a predefined custom resource YAML,
which then gets distributed by the SPO into the whole cluster. Modification and
removal of the security profiles are managed by the operator in the same way,
but that’s a small subset of its capabilities.&lt;/p&gt;
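&lt;p&gt;As an illustration of such a custom resource (the profile name, namespace, and syscall list below are hypothetical; verify the API version against your SPO release), a seccomp profile managed by the SPO might look like:&lt;/p&gt;

```yaml
# Hedged sketch of an SPO-managed seccomp profile custom resource.
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: example-profile      # hypothetical name
  namespace: my-namespace    # hypothetical namespace
spec:
  defaultAction: SCMP_ACT_ERRNO   # deny everything not explicitly allowed
  syscalls:
  - action: SCMP_ACT_ALLOW
    names:
    - read
    - write
    - exit_group
```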
&lt;p&gt;Another core feature of the SPO is being able to stack seccomp profiles. This
means that users can define a &lt;code&gt;baseProfileName&lt;/code&gt; in the YAML specification, which
then gets automatically resolved by the operator and combines the syscall rules.
If a base profile has another &lt;code&gt;baseProfileName&lt;/code&gt;, then the operator will
recursively resolve the profiles up to a certain depth. A common use case is to
define base profiles for low level container runtimes (like &lt;a href="https://github.com/opencontainers/runc"&gt;runc&lt;/a&gt; or
&lt;a href="https://github.com/containers/crun"&gt;crun&lt;/a&gt;) which then contain syscalls which are required in any case to run
the container. Alternatively, application developers can define seccomp base
profiles for their standard distribution containers and stack dedicated profiles
for the application logic on top. This way developers can focus on maintaining
seccomp profiles which are way simpler and scoped to the application logic,
without having a need to take the whole infrastructure setup into account.&lt;/p&gt;</description></item><item><title>Having fun with seccomp profiles on the edge</title><link>https://andygol-k8s.netlify.app/blog/2023/05/18/seccomp-profiles-edge/</link><pubDate>Thu, 18 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/18/seccomp-profiles-edge/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator"&gt;Security Profiles Operator (SPO)&lt;/a&gt; is a feature-rich
&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator"&gt;operator&lt;/a&gt; for Kubernetes to make managing seccomp, SELinux and
AppArmor profiles easier than ever. Recording those profiles from scratch is one
of the key features of this operator, which usually involves the integration
into large CI/CD systems. Being able to test the recording capabilities of the
operator in edge cases is one of the recent development efforts of the SPO and
makes it excitingly easy to play around with seccomp profiles.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: KMS V2 Moves to Beta</title><link>https://andygol-k8s.netlify.app/blog/2023/05/16/kms-v2-moves-to-beta/</link><pubDate>Tue, 16 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/16/kms-v2-moves-to-beta/</guid><description>&lt;p&gt;With Kubernetes 1.27, we (SIG Auth) are moving Key Management Service (KMS) v2 API to beta.&lt;/p&gt;
&lt;h2 id="what-is-kms"&gt;What is KMS?&lt;/h2&gt;
&lt;p&gt;One of the first things to consider when securing a Kubernetes cluster is encrypting etcd data at
rest. KMS provides an interface for a provider to utilize a key stored in an external key service to
perform this encryption.&lt;/p&gt;
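&lt;p&gt;As a sketch of how this interface is wired up (the provider name and socket path below are hypothetical), an API server &lt;code&gt;EncryptionConfiguration&lt;/code&gt; using a KMS v2 provider might look like:&lt;/p&gt;

```yaml
# Hedged sketch: API server encryption-at-rest config with a KMS v2 provider.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      apiVersion: v2
      name: example-kms-plugin            # hypothetical provider name
      endpoint: unix:///var/run/kms-plugin.sock   # hypothetical socket
  - identity: {}   # fallback for reading data written before encryption
```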
&lt;p&gt;KMS v1 has been a feature of Kubernetes since version 1.10, and has been in beta since version
v1.12. KMS v2 was introduced as alpha in v1.25.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: updates on speeding up Pod startup</title><link>https://andygol-k8s.netlify.app/blog/2023/05/15/speed-up-pod-startup/</link><pubDate>Mon, 15 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/15/speed-up-pod-startup/</guid><description>&lt;p&gt;How can Pod start-up be accelerated on nodes in large clusters? This is a common issue that
cluster administrators may face.&lt;/p&gt;
&lt;p&gt;This blog post focuses on methods to speed up pod start-up from the kubelet side. It does not
involve the creation time of pods by controller-manager through kube-apiserver, nor does it
include scheduling time for pods or webhooks executed on it.&lt;/p&gt;
&lt;p&gt;We have mentioned some important factors here to consider from the kubelet's perspective, but
this is not an exhaustive list. As Kubernetes v1.27 is released, this blog highlights
significant changes in v1.27 that aid in speeding up pod start-up.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: In-place Resource Resize for Kubernetes Pods (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2023/05/12/in-place-pod-resize-alpha/</link><pubDate>Fri, 12 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/12/in-place-pod-resize-alpha/</guid><description>&lt;p&gt;If you have deployed Kubernetes pods with CPU and/or memory resources
specified, you may have noticed that changing the resource values involves
restarting the pod. This has been a disruptive operation for running
workloads... until now.&lt;/p&gt;
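&lt;p&gt;As a sketch of what the new alpha behavior looks like (the Pod name, image, and values below are hypothetical; the &lt;code&gt;InPlacePodVerticalScaling&lt;/code&gt; feature gate must be enabled), a container can declare how it reacts to resizes:&lt;/p&gt;

```yaml
# Hedged sketch (alpha, v1.27): a container opting in to in-place resize.
# Patching spec.containers[0].resources on the running Pod then resizes
# it without restarting the container.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod               # hypothetical name
spec:
  containers:
  - name: app
    image: example.com/app:latest # hypothetical image
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired  # resize CPU without a container restart
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        cpu: "500m"
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
```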
&lt;p&gt;In Kubernetes v1.27, we have added a new alpha feature that allows users
to resize CPU/memory resources allocated to pods without restarting the
containers. To facilitate this, the &lt;code&gt;resources&lt;/code&gt; field in a pod's containers
now allows mutation for &lt;code&gt;cpu&lt;/code&gt; and &lt;code&gt;memory&lt;/code&gt; resources. They can be changed
simply by patching the running pod spec.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Avoid Collisions Assigning Ports to NodePort Services</title><link>https://andygol-k8s.netlify.app/blog/2023/05/11/nodeport-dynamic-and-static-allocation/</link><pubDate>Thu, 11 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/11/nodeport-dynamic-and-static-allocation/</guid><description>&lt;p&gt;In Kubernetes, a Service can be used to provide a unified traffic endpoint for
applications running on a set of Pods. Clients can use the virtual IP address (or &lt;em&gt;VIP&lt;/em&gt;) provided
by the Service for access, and Kubernetes provides load balancing for traffic accessing
different back-end Pods. However, a ClusterIP type of Service can only be accessed
from within the cluster; traffic from outside the cluster cannot be routed to it.
One way to solve this problem is to use a &lt;code&gt;type: NodePort&lt;/code&gt; Service, which sets up a mapping
to a specific port of all nodes in the cluster, thus redirecting traffic from the
outside to the inside of the cluster.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Safer, More Performant Pruning in kubectl apply</title><link>https://andygol-k8s.netlify.app/blog/2023/05/09/introducing-kubectl-applyset-pruning/</link><pubDate>Tue, 09 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/09/introducing-kubectl-applyset-pruning/</guid><description>&lt;p&gt;Declarative configuration management with the &lt;code&gt;kubectl apply&lt;/code&gt; command is the gold standard approach
to creating or modifying Kubernetes resources. However, one challenge it presents is the deletion
of resources that are no longer needed. In Kubernetes version 1.5, the &lt;code&gt;--prune&lt;/code&gt; flag was
introduced to address this issue, allowing kubectl apply to automatically clean up previously
applied resources removed from the current configuration.&lt;/p&gt;
&lt;p&gt;Unfortunately, that existing implementation of &lt;code&gt;--prune&lt;/code&gt; has design flaws that diminish its
performance and can result in unexpected behaviors. The main issue stems from the lack of explicit
encoding of the previously applied set by the preceding &lt;code&gt;apply&lt;/code&gt; operation, necessitating
error-prone dynamic discovery. Object leakage, inadvertent over-selection of resources, and limited
compatibility with custom resources are a few notable drawbacks of this implementation. Moreover,
its coupling to client-side apply hinders user upgrades to the superior server-side apply
mechanism.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Introducing An API For Volume Group Snapshots</title><link>https://andygol-k8s.netlify.app/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/</link><pubDate>Mon, 08 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/</guid><description>&lt;p&gt;Volume group snapshot is introduced as an Alpha feature in Kubernetes v1.27.
This feature introduces a Kubernetes API that allows users to take crash consistent
snapshots for multiple volumes together. It uses a label selector to group multiple
&lt;code&gt;PersistentVolumeClaims&lt;/code&gt; for snapshotting.
This new feature is only supported for &lt;a href="https://kubernetes-csi.github.io/docs/"&gt;CSI&lt;/a&gt; volume drivers.&lt;/p&gt;
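&lt;p&gt;As an illustration of the label-selector grouping (the names and labels below are hypothetical; verify the alpha API group and version against the external-snapshotter CRDs for your release), a group snapshot request might look like:&lt;/p&gt;

```yaml
# Hedged sketch of the alpha volume group snapshot API.
apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
kind: VolumeGroupSnapshot
metadata:
  name: example-group-snapshot   # hypothetical name
  namespace: my-namespace        # hypothetical namespace
spec:
  volumeGroupSnapshotClassName: example-group-snapshot-class  # hypothetical
  source:
    selector:
      matchLabels:
        app: my-database   # PVCs with this label are snapshotted together
```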
&lt;h2 id="an-overview-of-volume-group-snapshots"&gt;An overview of volume group snapshots&lt;/h2&gt;
&lt;p&gt;Some storage systems provide the ability to create a crash consistent snapshot of
multiple volumes. A group snapshot represents “copies” from multiple volumes that
are taken at the same point-in-time. A group snapshot can be used either to rehydrate
new volumes (pre-populated with the snapshot data) or to restore existing volumes to
a previous state (represented by the snapshots).&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Quality-of-Service for Memory Resources (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2023/05/05/qos-memory-resources/</link><pubDate>Fri, 05 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/05/qos-memory-resources/</guid><description>&lt;p&gt;Kubernetes v1.27, released in April 2023, introduced changes to
Memory QoS (alpha) to improve memory management capabilities in Linux nodes.&lt;/p&gt;
&lt;p&gt;Support for Memory QoS was initially added in Kubernetes v1.22, and later some
&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2570-memory-qos#reasons-for-changing-the-formula-of-memoryhigh-calculation-in-alpha-v127"&gt;limitations&lt;/a&gt;
around the formula for calculating &lt;code&gt;memory.high&lt;/code&gt; were identified. These limitations are
addressed in Kubernetes v1.27.&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Kubernetes allows you to optionally specify how much of each resource a container needs
in the Pod specification. The most common resources to specify are CPU and Memory.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: StatefulSet PVC Auto-Deletion (beta)</title><link>https://andygol-k8s.netlify.app/blog/2023/05/04/kubernetes-1-27-statefulset-pvc-auto-deletion-beta/</link><pubDate>Thu, 04 May 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/04/kubernetes-1-27-statefulset-pvc-auto-deletion-beta/</guid><description>&lt;p&gt;Kubernetes v1.27 graduated to beta a new policy mechanism for
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;&lt;code&gt;StatefulSets&lt;/code&gt;&lt;/a&gt; that controls the lifetime of
their &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolumeClaims&lt;/code&gt;&lt;/a&gt; (PVCs). The new PVC
retention policy lets users specify whether the PVCs generated from the &lt;code&gt;StatefulSet&lt;/code&gt; spec template should
be automatically deleted or retained when the &lt;code&gt;StatefulSet&lt;/code&gt; is deleted or replicas in the &lt;code&gt;StatefulSet&lt;/code&gt;
are scaled down.&lt;/p&gt;
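&lt;p&gt;As a sketch of the new policy in a &lt;code&gt;StatefulSet&lt;/code&gt; spec (only the &lt;code&gt;persistentVolumeClaimRetentionPolicy&lt;/code&gt; fields are the feature itself; the name is hypothetical):&lt;/p&gt;

```yaml
# Hedged sketch: StatefulSet fragment showing the beta retention policy.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset      # hypothetical name
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain    # keep PVCs for replicas removed by scale-down
  # ... serviceName, selector, template and volumeClaimTemplates as usual
```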
&lt;h2 id="what-problem-does-this-solve"&gt;What problem does this solve?&lt;/h2&gt;
&lt;p&gt;A &lt;code&gt;StatefulSet&lt;/code&gt; spec can include &lt;code&gt;Pod&lt;/code&gt; and PVC templates. When a replica is first created, the
Kubernetes control plane creates a PVC for that replica if one does not already exist. The behavior
before the PVC retention policy was that the control plane never cleaned up the PVCs created for
&lt;code&gt;StatefulSets&lt;/code&gt; - this was left up to the cluster administrator, or to some add-on automation that
you’d have to find, check suitability, and deploy. The common pattern for managing PVCs, either
manually or through tools such as Helm, is that the PVCs are tracked by the tool that manages them,
with explicit lifecycle. Workflows that use &lt;code&gt;StatefulSets&lt;/code&gt; must determine on their own what PVCs are
created by a &lt;code&gt;StatefulSet&lt;/code&gt; and what their lifecycle should be.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: HorizontalPodAutoscaler ContainerResource type metric moves to beta</title><link>https://andygol-k8s.netlify.app/blog/2023/05/02/hpa-container-resource-metric/</link><pubDate>Tue, 02 May 2023 12:00:00 +0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/05/02/hpa-container-resource-metric/</guid><description>&lt;p&gt;Kubernetes 1.20 introduced the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics"&gt;&lt;code&gt;ContainerResource&lt;/code&gt; type metric&lt;/a&gt;
in HorizontalPodAutoscaler (HPA).&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.27, this feature moves to beta and the corresponding feature gate (&lt;code&gt;HPAContainerMetrics&lt;/code&gt;) gets enabled by default.&lt;/p&gt;
&lt;h2 id="what-is-the-containerresource-type-metric"&gt;What is the ContainerResource type metric&lt;/h2&gt;
&lt;p&gt;The ContainerResource type metric allows us to configure the autoscaling based on resource usage of individual containers.&lt;/p&gt;
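&lt;p&gt;For instance, the metric entry (placed under &lt;code&gt;spec.metrics&lt;/code&gt; in an &lt;code&gt;autoscaling/v2&lt;/code&gt; HorizontalPodAutoscaler) that targets a container named &lt;code&gt;application&lt;/code&gt; can be declared as follows:&lt;/p&gt;

```yaml
# Hedged sketch: a ContainerResource metric entry for an HPA,
# scaling on the CPU utilization of one specific container.
type: ContainerResource
containerResource:
  name: cpu
  container: application   # the container whose usage drives scaling
  target:
    type: Utilization
    averageUtilization: 60
```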
&lt;p&gt;In the following example, the HPA controller scales the target
so that the average CPU utilization of the application container across all the Pods is around 60%.
(See &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details"&gt;the algorithm details&lt;/a&gt;
to know how the desired replica number is calculated exactly)&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: StatefulSet Start Ordinal Simplifies Migration</title><link>https://andygol-k8s.netlify.app/blog/2023/04/28/statefulset-start-ordinal/</link><pubDate>Fri, 28 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/28/statefulset-start-ordinal/</guid><description>&lt;p&gt;Kubernetes v1.26 introduced a new, alpha-level feature for
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt; that controls
the ordinal numbering of Pod replicas. As of Kubernetes v1.27, this feature is
now beta. Ordinals can start from arbitrary
non-negative numbers. This blog post will discuss how this feature can be
used.&lt;/p&gt;
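&lt;p&gt;As a sketch of the beta field (the name below is hypothetical), a &lt;code&gt;StatefulSet&lt;/code&gt; can shift its replica numbering with &lt;code&gt;spec.ordinals.start&lt;/code&gt;:&lt;/p&gt;

```yaml
# Hedged sketch: replicas receive ordinals 5..9 instead of 0..4.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset   # hypothetical name
spec:
  ordinals:
    start: 5    # first replica is example-statefulset-5
  replicas: 5
  # ... serviceName, selector and template as usual
```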
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;StatefulSets ordinals provide sequential identities for pod replicas. When using
&lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/basic-stateful-set/#orderedready-pod-management"&gt;&lt;code&gt;OrderedReady&lt;/code&gt; Pod management&lt;/a&gt;
Pods are created from ordinal index &lt;code&gt;0&lt;/code&gt; up to &lt;code&gt;N-1&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;With Kubernetes today, orchestrating a StatefulSet migration across clusters is
challenging. Backup and restore solutions exist, but these require the
application to be scaled down to zero replicas prior to migration. In today's
fully connected world, even planned application downtime may not allow you to
meet your business goals. You could use
&lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/basic-stateful-set/#cascading-delete"&gt;Cascading Delete&lt;/a&gt;
or
&lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/stateful-application/basic-stateful-set/#on-delete"&gt;On Delete&lt;/a&gt;
to migrate individual pods, however this is error prone and tedious to manage.
You lose the self-healing benefit of the StatefulSet controller when your Pods
fail or are evicted.&lt;/p&gt;</description></item><item><title>Updates to the Auto-refreshing Official CVE Feed</title><link>https://andygol-k8s.netlify.app/blog/2023/04/25/k8s-cve-feed-beta/</link><pubDate>Tue, 25 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/25/k8s-cve-feed-beta/</guid><description>&lt;p&gt;Since launching the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/index.json"&gt;Auto-refreshing Official CVE feed&lt;/a&gt; as an alpha
feature in the 1.25 release, we have made significant improvements and updates. We are excited to announce the release of the
beta version of the feed. This blog post will outline the feedback received, the changes made, and talk about how you can help
as we prepare to make this a stable feature in a future Kubernetes Release.&lt;/p&gt;
&lt;h2 id="feedback-from-end-users"&gt;Feedback from end-users&lt;/h2&gt;
&lt;p&gt;SIG Security received some feedback from end-users:&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Server Side Field Validation and OpenAPI V3 move to GA</title><link>https://andygol-k8s.netlify.app/blog/2023/04/24/openapi-v3-field-validation-ga/</link><pubDate>Mon, 24 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/24/openapi-v3-field-validation-ga/</guid><description>&lt;p&gt;Before Kubernetes v1.8 (!), typos, mis-indentations or minor errors in
YAMLs could have catastrophic consequences (e.g. a typo like
forgetting the trailing s in &lt;code&gt;replica: 1000&lt;/code&gt; could cause an outage,
because the misspelled field would be silently ignored, resetting the
replica count back to 1). This was solved back then by fetching the OpenAPI
v2 in kubectl and using it to verify that fields were correct and
present before applying. Unfortunately, at that time, Custom Resource
Definitions didn’t exist, and the code was written under that
assumption. When CRDs were later introduced, the lack of flexibility
in the validation code forced some hard decisions in the way CRDs
exposed their schema, leaving us in a cycle of bad validation causing
bad OpenAPI and vice-versa. With the new OpenAPI v3 and Server Field
Validation being GA in 1.27, we’ve now solved both of these problems.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Query Node Logs Using The Kubelet API</title><link>https://andygol-k8s.netlify.app/blog/2023/04/21/node-log-query-alpha/</link><pubDate>Fri, 21 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/21/node-log-query-alpha/</guid><description>&lt;p&gt;Kubernetes 1.27 introduced a new feature called &lt;em&gt;Node log query&lt;/em&gt; that allows
viewing logs of services running on the node.&lt;/p&gt;
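&lt;p&gt;As a sketch (assuming the alpha &lt;code&gt;NodeLogQuery&lt;/code&gt; feature gate and the required kubelet configuration are enabled, and using a hypothetical node name), a query looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Fetch the kubelet service logs from a node named &amp;quot;node-1&amp;quot;
kubectl get --raw &amp;quot;/api/v1/nodes/node-1/proxy/logs/?query=kubelet&amp;quot;
&lt;/code&gt;&lt;/pre&gt;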
&lt;h2 id="what-problem-does-it-solve"&gt;What problem does it solve?&lt;/h2&gt;
&lt;p&gt;Cluster administrators face issues when debugging malfunctioning services
running on the node. They usually have to SSH or RDP into the node to view the
logs of the service to debug the issue. The &lt;em&gt;Node log query&lt;/em&gt; feature helps with
this scenario by allowing the cluster administrator to view the logs using
&lt;em&gt;kubectl&lt;/em&gt;. This is especially useful with Windows nodes, where the node can
reach the Ready state while containers fail to come up due to
CNI misconfigurations and other issues that are not easily identifiable by
looking at the Pod status.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Single Pod Access Mode for PersistentVolumes Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2023/04/20/read-write-once-pod-access-mode-beta/</link><pubDate>Thu, 20 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/20/read-write-once-pod-access-mode-beta/</guid><description>&lt;p&gt;With the release of Kubernetes v1.27 the ReadWriteOncePod feature has graduated
to beta. In this blog post, we'll take a closer look at this feature, what it
does, and how it has evolved in the beta release.&lt;/p&gt;
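&lt;p&gt;A minimal sketch of a claim using the new access mode (the name and size are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-pvc   # hypothetical name
spec:
  accessModes:
  - ReadWriteOncePod        # only a single pod may use this volume
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;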
&lt;h2 id="what-is-readwriteoncepod"&gt;What is ReadWriteOncePod?&lt;/h2&gt;
&lt;p&gt;ReadWriteOncePod is a new access mode for
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistent-volumes"&gt;PersistentVolumes&lt;/a&gt; (PVs)
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims"&gt;PersistentVolumeClaims&lt;/a&gt; (PVCs)
introduced in Kubernetes v1.22. This access mode enables you to restrict volume
access to a single pod in the cluster, ensuring that only one pod can write to
the volume at a time. This can be particularly useful for stateful workloads
that require single-writer access to storage.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: Efficient SELinux volume relabeling (Beta)</title><link>https://andygol-k8s.netlify.app/blog/2023/04/18/kubernetes-1-27-efficient-selinux-relabeling-beta/</link><pubDate>Tue, 18 Apr 2023 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/18/kubernetes-1-27-efficient-selinux-relabeling-beta/</guid><description>&lt;h2 id="the-problem"&gt;The problem&lt;/h2&gt;
&lt;p&gt;On Linux with Security-Enhanced Linux (SELinux) enabled, it's traditionally
the container runtime that applies SELinux labels to a Pod and all its volumes.
Kubernetes only passes the SELinux label from a Pod's &lt;code&gt;securityContext&lt;/code&gt; fields
to the container runtime.&lt;/p&gt;
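&lt;p&gt;For illustration, the &lt;code&gt;securityContext&lt;/code&gt; fields in question look roughly like this (the label value is an example only):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;spec:
  securityContext:
    seLinuxOptions:          # passed by Kubernetes to the container runtime
      level: &amp;quot;s0:c10,c0&amp;quot;     # example SELinux MCS label
&lt;/code&gt;&lt;/pre&gt;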
&lt;p&gt;The container runtime then recursively changes SELinux label on all files that
are visible to the Pod's containers. This can be time-consuming if there are
many files on the volume, especially when the volume is on a remote filesystem.&lt;/p&gt;</description></item><item><title>Kubernetes 1.27: More fine-grained pod topology spread policies reached beta</title><link>https://andygol-k8s.netlify.app/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/</link><pubDate>Mon, 17 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/</guid><description>&lt;p&gt;In Kubernetes v1.19, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/topology-spread-constraints/"&gt;Pod topology spread constraints&lt;/a&gt;
went to general availability (GA).&lt;/p&gt;
&lt;p&gt;As time passed, we - SIG Scheduling - received feedback from users,
and, as a result, we're actively working on improving the Topology Spread feature via three KEPs.
All of these features have reached beta in Kubernetes v1.27 and are enabled by default.&lt;/p&gt;
&lt;p&gt;This blog post introduces each feature and the use case behind each of them.&lt;/p&gt;
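&lt;p&gt;As a refresher, a Pod spec with a topology spread constraint looks roughly like this (the values and label are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;spec:
  topologySpreadConstraints:
  - maxSkew: 1                      # maximum allowed imbalance between domains
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app                 # hypothetical label
&lt;/code&gt;&lt;/pre&gt;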
&lt;h2 id="kep-3022-min-domains-in-pod-topology-spread"&gt;KEP-3022: min domains in Pod Topology Spread&lt;/h2&gt;
&lt;p&gt;Pod Topology Spread has the &lt;code&gt;maxSkew&lt;/code&gt; parameter to define the degree to which Pods may be unevenly distributed.&lt;/p&gt;</description></item><item><title>Kubernetes v1.27: Chill Vibes</title><link>https://andygol-k8s.netlify.app/blog/2023/04/11/kubernetes-v1-27-release/</link><pubDate>Tue, 11 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/11/kubernetes-v1-27-release/</guid><description>&lt;p&gt;Announcing the release of Kubernetes v1.27, the first release of 2023!&lt;/p&gt;
&lt;p&gt;This release consists of 60 enhancements. 18 of those enhancements are entering Alpha, 29 are graduating to Beta, and 13 are graduating to Stable.&lt;/p&gt;
&lt;h2 id="release-theme-and-logo"&gt;Release theme and logo&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes v1.27: Chill Vibes&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The theme for Kubernetes v1.27 is &lt;em&gt;Chill Vibes&lt;/em&gt;.&lt;/p&gt;


&lt;figure class="release-logo "&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2023-04-11-kubernetes-1.27-blog/kubernetes-1.27.png"
 alt="Kubernetes 1.27 Chill Vibes logo"/&gt; 
&lt;/figure&gt;
&lt;p&gt;It's a little silly, but there were some important shifts in this release that helped inspire the theme. Throughout a typical Kubernetes release cycle, there are several deadlines that features need to meet to remain included. If a feature misses any of these deadlines, there is an exception process they can go through. Handling these exceptions is a very normal part of the release. But v1.27 is the first release that anyone can remember where we didn't receive a single exception request after the enhancements freeze. Even as the release progressed, things remained much calmer than any of us are used to.&lt;/p&gt;</description></item><item><title>Keeping Kubernetes Secure with Updated Go Versions</title><link>https://andygol-k8s.netlify.app/blog/2023/04/06/keeping-kubernetes-secure-with-updated-go-versions/</link><pubDate>Thu, 06 Apr 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/04/06/keeping-kubernetes-secure-with-updated-go-versions/</guid><description>&lt;h3 id="the-problem"&gt;The problem&lt;/h3&gt;
&lt;p&gt;Since v1.19 (released in 2020), the Kubernetes project provides 12-14 months of patch releases for each minor version.
This enables users to qualify and adopt Kubernetes versions in an annual upgrade cycle and receive security fixes for a year.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/golang/go/wiki/Go-Release-Cycle#release-maintenance"&gt;Go project&lt;/a&gt; releases new minor versions twice a year,
and provides security fixes for the last two minor versions, resulting in about a year of support for each Go version.
Even though each new Kubernetes minor version is built with a supported Go version when it is first released,
that Go version falls out of support before the Kubernetes minor version does,
and the lengthened Kubernetes patch support since v1.19 only widened that gap.&lt;/p&gt;</description></item><item><title>Kubernetes Validating Admission Policies: A Practical Example</title><link>https://andygol-k8s.netlify.app/blog/2023/03/30/kubescape-validating-admission-policy-library/</link><pubDate>Thu, 30 Mar 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/03/30/kubescape-validating-admission-policy-library/</guid><description>&lt;p&gt;Admission control is an important part of the Kubernetes control plane, with several internal
features depending on the ability to approve or change an API object as it is submitted to the
server. It is also useful for an administrator to be able to define business logic, or policies,
regarding what objects can be admitted into a cluster. To better support that use case, &lt;a href="https://andygol-k8s.netlify.app/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/"&gt;Kubernetes
introduced external admission control in
v1.7&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes Removals and Major Changes In v1.27</title><link>https://andygol-k8s.netlify.app/blog/2023/03/17/upcoming-changes-in-kubernetes-v1-27/</link><pubDate>Fri, 17 Mar 2023 14:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/03/17/upcoming-changes-in-kubernetes-v1-27/</guid><description>&lt;p&gt;As Kubernetes develops and matures, features may be deprecated, removed, or replaced
with better ones for the project's overall health. Based on the information available
at this point in the v1.27 release process, which is still ongoing and can introduce
additional changes, this article identifies and describes some of the planned changes
for the Kubernetes v1.27 release.&lt;/p&gt;
&lt;h2 id="a-note-about-the-k8s-gcr-io-redirect-to-registry-k8s-io"&gt;A note about the k8s.gcr.io redirect to registry.k8s.io&lt;/h2&gt;
&lt;p&gt;To host its container images, the Kubernetes project uses a community-owned image
registry called registry.k8s.io. &lt;strong&gt;On March 20th, all traffic from the out-of-date
&lt;a href="https://cloud.google.com/container-registry/"&gt;k8s.gcr.io&lt;/a&gt; registry will be redirected
to &lt;a href="https://github.com/kubernetes/registry.k8s.io"&gt;registry.k8s.io&lt;/a&gt;&lt;/strong&gt;. The deprecated
k8s.gcr.io registry will eventually be phased out.&lt;/p&gt;</description></item><item><title>k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know</title><link>https://andygol-k8s.netlify.app/blog/2023/03/10/image-registry-redirect/</link><pubDate>Fri, 10 Mar 2023 17:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/03/10/image-registry-redirect/</guid><description>&lt;p&gt;On Monday, March 20th, the k8s.gcr.io registry &lt;a href="https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/"&gt;will be redirected to the community owned
registry&lt;/a&gt;,
&lt;strong&gt;registry.k8s.io&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="tl-dr-what-you-need-to-know-about-this-change"&gt;TL;DR: What you need to know about this change&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to
registry.k8s.io with the eventual goal of sunsetting k8s.gcr.io.&lt;/li&gt;
&lt;li&gt;If you run in a restricted environment, and apply strict domain name or IP address access policies
limited to k8s.gcr.io, &lt;strong&gt;the image pulls will not function&lt;/strong&gt; after k8s.gcr.io starts redirecting
to the new registry.&lt;/li&gt;
&lt;li&gt;A small subset of non-standard clients do not handle HTTP redirects by image registries, and will
need to be pointed directly at registry.k8s.io.&lt;/li&gt;
&lt;li&gt;The redirect is a stopgap to assist users in making the switch. The deprecated k8s.gcr.io registry
will be phased out at some point. &lt;strong&gt;Please update your manifests as soon as possible to point to
registry.k8s.io&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If you host your own image registry, you can copy images you need there as well to reduce traffic
to community owned registries.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you think you may be impacted, or would like to know more about this change, please keep reading.&lt;/p&gt;</description></item><item><title>Forensic container analysis</title><link>https://andygol-k8s.netlify.app/blog/2023/03/10/forensic-container-analysis/</link><pubDate>Fri, 10 Mar 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/03/10/forensic-container-analysis/</guid><description>&lt;p&gt;In my previous article, &lt;a href="https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/"&gt;Forensic container checkpointing in
Kubernetes&lt;/a&gt;, I introduced checkpointing in Kubernetes,
how to set it up, and how it can be used. The name of the
feature is Forensic container checkpointing, but I did not go into
any detail about how to analyze a checkpoint created by
Kubernetes. In this article I provide details on how such a
checkpoint can be analyzed.&lt;/p&gt;
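&lt;p&gt;As a reminder from the previous article (the node name, certificate paths, and &lt;code&gt;namespace/pod/container&lt;/code&gt; path elements are placeholders), a checkpoint is created through the kubelet's alpha API:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Create a checkpoint of a running container via the kubelet API
curl -X POST &amp;quot;https://node-1:10250/checkpoint/default/counters/counter&amp;quot; \
  --insecure --cert admin.crt --key admin.key
&lt;/code&gt;&lt;/pre&gt;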
&lt;p&gt;Checkpointing is still an alpha feature in Kubernetes, and this article
offers a preview of how the feature might work in the future.&lt;/p&gt;</description></item><item><title>Introducing KWOK: Kubernetes WithOut Kubelet</title><link>https://andygol-k8s.netlify.app/blog/2023/03/01/introducing-kwok/</link><pubDate>Wed, 01 Mar 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/03/01/introducing-kwok/</guid><description>&lt;img style="float: right; display: inline-block; margin-left: 2em; max-width: 15em;" src="https://andygol-k8s.netlify.app/blog/2023/03/01/introducing-kwok/kwok.svg" alt="KWOK logo" /&gt;
&lt;p&gt;Have you ever wondered how to set up a cluster of thousands of nodes just in seconds, how to simulate real nodes with a low resource footprint, and how to test your Kubernetes controller at scale without spending much on infrastructure?&lt;/p&gt;
&lt;p&gt;If you answered &amp;quot;yes&amp;quot; to any of these questions, then you might be interested in KWOK, a toolkit that enables you to create a cluster of thousands of nodes in seconds.&lt;/p&gt;</description></item><item><title>Free Katacoda Kubernetes Tutorials Are Shutting Down</title><link>https://andygol-k8s.netlify.app/blog/2023/02/14/kubernetes-katacoda-tutorials-stop-from-2023-03-31/</link><pubDate>Tue, 14 Feb 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/02/14/kubernetes-katacoda-tutorials-stop-from-2023-03-31/</guid><description>&lt;p&gt;&lt;a href="https://katacoda.com/kubernetes"&gt;Katacoda&lt;/a&gt;, the popular learning platform from O’Reilly that has been helping people learn all about
Java, Docker, Kubernetes, Python, Go, C++, and more, &lt;a href="https://www.oreilly.com/online-learning/leveraging-katacoda-technology.html"&gt;shut down for public use in June 2022&lt;/a&gt;.
However, tutorials specifically for Kubernetes, linked from the Kubernetes website for our project’s
users and contributors, remained available and active after this change. Unfortunately, this will no
longer be the case, and Katacoda tutorials for learning Kubernetes will cease working after March 31st, 2023.&lt;/p&gt;</description></item><item><title>k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023</title><link>https://andygol-k8s.netlify.app/blog/2023/02/06/k8s-gcr-io-freeze-announcement/</link><pubDate>Mon, 06 Feb 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/02/06/k8s-gcr-io-freeze-announcement/</guid><description>&lt;p&gt;The Kubernetes project runs a community-owned image registry called &lt;code&gt;registry.k8s.io&lt;/code&gt;
to host its container images. On the 3rd of April 2023, the old registry &lt;code&gt;k8s.gcr.io&lt;/code&gt;
will be frozen and no further images for Kubernetes and related subprojects will be
pushed to the old registry.&lt;/p&gt;
&lt;p&gt;The new registry, &lt;code&gt;registry.k8s.io&lt;/code&gt;, replaced the old one and has been generally available
for several months. We have published a &lt;a href="https://andygol-k8s.netlify.app/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/"&gt;blog post&lt;/a&gt;
about its benefits to the community and the Kubernetes project. This post also
announced that future versions of Kubernetes will not be available in the old
registry. Now that time has come.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Instrumentation</title><link>https://andygol-k8s.netlify.app/blog/2023/02/03/sig-instrumentation-spotlight-2023/</link><pubDate>Fri, 03 Feb 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/02/03/sig-instrumentation-spotlight-2023/</guid><description>&lt;p&gt;Observability requires the right data at the right time for the right consumer
(human or piece of software) to make the right decision. In the context of Kubernetes,
having best practices for cluster observability across all Kubernetes components is crucial.&lt;/p&gt;
&lt;p&gt;SIG Instrumentation helps to address this issue by providing best practices and tools
that all other SIGs use to instrument Kubernetes components, like the &lt;em&gt;API server&lt;/em&gt;,
&lt;em&gt;scheduler&lt;/em&gt;, &lt;em&gt;kubelet&lt;/em&gt; and &lt;em&gt;kube-controller-manager&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In this SIG Instrumentation spotlight, &lt;a href="https://www.linkedin.com/in/imrannoormohamed/"&gt;Imran Noor Mohamed&lt;/a&gt;,
SIG ContribEx-Comms tech lead talked with &lt;a href="https://twitter.com/ehashdn"&gt;Elana Hashman&lt;/a&gt;,
and &lt;a href="https://www.linkedin.com/in/hankang"&gt;Han Kang&lt;/a&gt;, chairs of SIG Instrumentation,
on how the SIG is organized, what the current challenges are, and how anyone can get involved and contribute.&lt;/p&gt;</description></item><item><title>Consider All Microservices Vulnerable — And Monitor Their Behavior</title><link>https://andygol-k8s.netlify.app/blog/2023/01/20/security-behavior-analysis/</link><pubDate>Fri, 20 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/01/20/security-behavior-analysis/</guid><description>&lt;p&gt;&lt;em&gt;This post warns DevOps teams against a false sense of security. Following security
best practices when developing and configuring microservices does not result
in invulnerable microservices. The post shows that although all deployed
microservices are vulnerable, there is much that can be done to ensure
microservices are not exploited. It explains how analyzing the behavior of
clients and services from a security standpoint, named here
&lt;strong&gt;&amp;quot;Security-Behavior Analytics&amp;quot;&lt;/strong&gt;, can protect the deployed vulnerable microservices.
It points to &lt;a href="http://knative.dev/security-guard"&gt;Guard&lt;/a&gt;, an open source project offering
security-behavior monitoring and control of Kubernetes microservices presumed vulnerable.&lt;/em&gt;&lt;/p&gt;</description></item><item><title>Protect Your Mission-Critical Pods From Eviction With PriorityClass</title><link>https://andygol-k8s.netlify.app/blog/2023/01/12/protect-mission-critical-pods-priorityclass/</link><pubDate>Thu, 12 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/01/12/protect-mission-critical-pods-priorityclass/</guid><description>&lt;p&gt;Kubernetes has been widely adopted, and many organizations use it as their de-facto
orchestration engine for running workloads that need to be created and deleted frequently.&lt;/p&gt;
&lt;p&gt;Therefore, proper scheduling of the pods is key to ensuring that application pods
are up and running within the Kubernetes cluster without any issues. This article
delves into the use cases around resource management by leveraging the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass"&gt;PriorityClass&lt;/a&gt;
object to protect mission-critical or high-priority pods from getting evicted and
making sure that the application pods are up, running, and serving traffic.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Eviction policy for unhealthy pods guarded by PodDisruptionBudgets</title><link>https://andygol-k8s.netlify.app/blog/2023/01/06/unhealthy-pod-eviction-policy-for-pdbs/</link><pubDate>Fri, 06 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/01/06/unhealthy-pod-eviction-policy-for-pdbs/</guid><description>&lt;p&gt;Ensuring that disruptions to your application do not affect its availability isn't a simple
task. Last month's release of Kubernetes v1.26 lets you specify an &lt;em&gt;unhealthy pod eviction policy&lt;/em&gt;
for &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets"&gt;PodDisruptionBudgets&lt;/a&gt; (PDBs)
to help you maintain that availability during node management operations.
In this article, we will dive deeper into what modifications were introduced for PDBs to
give application owners greater flexibility in managing disruptions.&lt;/p&gt;
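&lt;p&gt;A sketch of a PDB using the new field (the name, selector, and threshold are illustrative; in v1.26 the field sits behind the &lt;code&gt;PDBUnhealthyPodEvictionPolicy&lt;/code&gt; feature gate):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb                # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app                 # hypothetical label
  unhealthyPodEvictionPolicy: AlwaysAllow   # allow evicting unhealthy pods even when the budget is not met
&lt;/code&gt;&lt;/pre&gt;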
&lt;h2 id="what-problems-does-this-solve"&gt;What problems does this solve?&lt;/h2&gt;
&lt;p&gt;API-initiated eviction of pods respects PodDisruptionBudgets (PDBs). This means that a requested &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/scheduling-eviction/#pod-disruption"&gt;voluntary disruption&lt;/a&gt;
via an eviction of a Pod should not disrupt a guarded application, and &lt;code&gt;.status.currentHealthy&lt;/code&gt; of a PDB should not fall
below &lt;code&gt;.status.desiredHealthy&lt;/code&gt;. Running pods that are &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod"&gt;Unhealthy&lt;/a&gt;
do not count towards the PDB status, but evicting them is only possible if the application
is not disrupted. This helps disrupted or not-yet-started applications achieve availability
as soon as possible without additional downtime that would be caused by evictions.&lt;/p&gt;</description></item><item><title>Kubernetes v1.26: Retroactive Default StorageClass</title><link>https://andygol-k8s.netlify.app/blog/2023/01/05/retroactive-default-storage-class/</link><pubDate>Thu, 05 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/01/05/retroactive-default-storage-class/</guid><description>&lt;p&gt;The v1.25 release of Kubernetes introduced an alpha feature to change how a default
StorageClass was assigned to a PersistentVolumeClaim (PVC). With the feature enabled,
you no longer need to create a default StorageClass first and PVC second to assign the
class. Additionally, any PVCs without a StorageClass assigned can be updated later.
This feature was graduated to beta in Kubernetes v1.26.&lt;/p&gt;
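&lt;p&gt;For reference, a StorageClass is marked as the default with a well-known annotation (the name and provisioner here are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-default-class                # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: &amp;quot;true&amp;quot;
provisioner: example.com/provisioner    # hypothetical provisioner
&lt;/code&gt;&lt;/pre&gt;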
&lt;p&gt;You can read &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment"&gt;retroactive default StorageClass assignment&lt;/a&gt;
in the Kubernetes documentation for more details about how to use that,
or you can read on to learn about why the Kubernetes project is making this change.&lt;/p&gt;</description></item><item><title>Kubernetes v1.26: Alpha support for cross-namespace storage data sources</title><link>https://andygol-k8s.netlify.app/blog/2023/01/02/cross-namespace-data-sources-alpha/</link><pubDate>Mon, 02 Jan 2023 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2023/01/02/cross-namespace-data-sources-alpha/</guid><description>&lt;p&gt;Kubernetes v1.26, released last month, introduced an alpha feature that
lets you specify a data source for a PersistentVolumeClaim, even where the source
data belong to a different namespace.
With the new feature enabled, you specify a namespace in the &lt;code&gt;dataSourceRef&lt;/code&gt; field of
a new PersistentVolumeClaim. Once Kubernetes checks that access is OK, the new
PersistentVolume can populate its data from the storage source specified in that other
namespace.
Before Kubernetes v1.26, provided your cluster had the &lt;code&gt;AnyVolumeDataSource&lt;/code&gt; feature enabled,
you could already provision new volumes from a data source in the &lt;strong&gt;same&lt;/strong&gt;
namespace.
However, that only worked for data sources in the same namespace;
users couldn't provision a PersistentVolume for a claim
in one namespace from a data source in another namespace.
To solve this problem, Kubernetes v1.26 added a new alpha &lt;code&gt;namespace&lt;/code&gt; field
to the &lt;code&gt;dataSourceRef&lt;/code&gt; field in the PersistentVolumeClaim API.&lt;/p&gt;</description></item><item><title>Kubernetes v1.26: Advancements in Kubernetes Traffic Engineering</title><link>https://andygol-k8s.netlify.app/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/</link><pubDate>Fri, 30 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/</guid><description>&lt;p&gt;Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of
two features (Service internal traffic policy support, and EndpointSlice terminating conditions) to GA,
and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims
to address shortcomings in traffic engineering that people face today, and unlock new capabilities for the future.&lt;/p&gt;
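&lt;p&gt;For reference, both traffic policies are set directly on a Service (a sketch; the name and selector are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service               # hypothetical name
spec:
  selector:
    app: my-app                  # hypothetical label
  ports:
  - port: 80
  internalTrafficPolicy: Local   # route in-cluster traffic only to local endpoints (GA in v1.26)
  externalTrafficPolicy: Local   # keep external traffic on the receiving node, preserving client IP
&lt;/code&gt;&lt;/pre&gt;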
&lt;h2 id="traffic-loss-from-load-balancers-during-rolling-updates"&gt;Traffic Loss from Load Balancers During Rolling Updates&lt;/h2&gt;
&lt;p&gt;Prior to Kubernetes v1.26, clusters could experience &lt;a href="https://github.com/kubernetes/kubernetes/issues/85643"&gt;loss of traffic&lt;/a&gt;
from Service load balancers during rolling updates when setting the &lt;code&gt;externalTrafficPolicy&lt;/code&gt; field to &lt;code&gt;Local&lt;/code&gt;.
There are a lot of moving parts at play here so a quick overview of how Kubernetes manages load balancers might help!&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available</title><link>https://andygol-k8s.netlify.app/blog/2022/12/29/scalable-job-tracking-ga/</link><pubDate>Thu, 29 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/29/scalable-job-tracking-ga/</guid><description>&lt;p&gt;The Kubernetes 1.26 release includes a stable implementation of the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/"&gt;Job&lt;/a&gt;
controller that can reliably track a large number of Jobs with high levels of
parallelism. &lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps"&gt;SIG Apps&lt;/a&gt;
and &lt;a href="https://github.com/kubernetes/community/tree/master/wg-batch"&gt;WG Batch&lt;/a&gt;
have worked on this foundational improvement since Kubernetes 1.22. After
multiple iterations and scale verifications, this is now the default
implementation of the Job controller.&lt;/p&gt;
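&lt;p&gt;A minimal sketch of a highly parallel Job handled by this controller (the name, image, and counts are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-work               # hypothetical name
spec:
  completions: 10000
  parallelism: 500
  completionMode: Indexed           # each Pod receives a unique completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: example.com/worker:latest   # hypothetical image
&lt;/code&gt;&lt;/pre&gt;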
&lt;p&gt;Paired with the Indexed &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/#completion-mode"&gt;completion mode&lt;/a&gt;,
the Job controller can handle massively parallel batch Jobs, supporting up to
100k concurrent Pods.&lt;/p&gt;</description></item><item><title>Kubernetes v1.26: CPUManager goes GA</title><link>https://andygol-k8s.netlify.app/blog/2022/12/27/cpumanager-ga/</link><pubDate>Tue, 27 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/27/cpumanager-ga/</guid><description>&lt;p&gt;The CPU Manager is a part of the kubelet, the Kubernetes node agent, which enables the user to allocate exclusive CPUs to containers.
Since Kubernetes v1.10, where it &lt;a href="https://andygol-k8s.netlify.app/blog/2018/07/24/feature-highlight-cpu-manager/"&gt;graduated to Beta&lt;/a&gt;, the CPU Manager proved itself reliable and
fulfilled its role of allocating exclusive CPUs to containers, so adoption has steadily grown, making it a staple component of performance-critical
and low-latency setups. Over time, most changes were about bugfixes or internal refactoring, with the following noteworthy user-visible changes:&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Pod Scheduling Readiness</title><link>https://andygol-k8s.netlify.app/blog/2022/12/26/pod-scheduling-readiness-alpha/</link><pubDate>Mon, 26 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/26/pod-scheduling-readiness-alpha/</guid><description>&lt;p&gt;Kubernetes 1.26 introduced a new Pod feature: &lt;em&gt;scheduling gates&lt;/em&gt;. In Kubernetes, scheduling gates
are keys that tell the scheduler when a Pod is ready to be considered for scheduling.&lt;/p&gt;
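&lt;p&gt;A gated Pod looks roughly like this (the gate name is a hypothetical example); the scheduler will not consider the Pod until all gates are removed:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: gated-pod                     # hypothetical name
spec:
  schedulingGates:
  - name: example.com/quota-check     # hypothetical gate; remove it to release the Pod for scheduling
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;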
&lt;h2 id="what-problem-does-it-solve"&gt;What problem does it solve?&lt;/h2&gt;
&lt;p&gt;When a Pod is created, the scheduler will continuously attempt to find a node that fits it. This
infinite loop continues until the scheduler either finds a node for the Pod, or the Pod gets deleted.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time</title><link>https://andygol-k8s.netlify.app/blog/2022/12/23/kubernetes-12-06-fsgroup-on-mount/</link><pubDate>Fri, 23 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/23/kubernetes-12-06-fsgroup-on-mount/</guid><description>&lt;p&gt;Delegation of &lt;code&gt;fsGroup&lt;/code&gt; to CSI drivers was first introduced as alpha in Kubernetes 1.22,
and graduated to beta in Kubernetes 1.25.
For Kubernetes 1.26, we are happy to announce that this feature has graduated to
General Availability (GA).&lt;/p&gt;
&lt;p&gt;In this release, if you specify an &lt;code&gt;fsGroup&lt;/code&gt; in the
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod"&gt;security context&lt;/a&gt;
for a (Linux) Pod, all processes in the pod's containers are part of the additional group
that you specified.&lt;/p&gt;
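&lt;p&gt;For illustration, the relevant fields in a Pod's security context look like this (the group ID is an example):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;spec:
  securityContext:
    fsGroup: 2000                        # supplemental group applied to files on the volume
    fsGroupChangePolicy: OnRootMismatch  # skip relabeling when the volume root already matches
&lt;/code&gt;&lt;/pre&gt;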
&lt;p&gt;In previous Kubernetes releases, the kubelet would &lt;em&gt;always&lt;/em&gt; apply the
&lt;code&gt;fsGroup&lt;/code&gt; ownership and permission changes to files in the volume according to the policy
you specified in the Pod's &lt;code&gt;.spec.securityContext.fsGroupChangePolicy&lt;/code&gt; field.&lt;/p&gt;</description></item><item><title>Kubernetes v1.26: GA Support for Kubelet Credential Providers</title><link>https://andygol-k8s.netlify.app/blog/2022/12/22/kubelet-credential-providers/</link><pubDate>Thu, 22 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/22/kubelet-credential-providers/</guid><description>&lt;p&gt;Kubernetes v1.26 introduced generally available (GA) support for &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/"&gt;&lt;em&gt;kubelet credential
provider plugins&lt;/em&gt;&lt;/a&gt;,
offering an extensible plugin framework to dynamically fetch credentials
for any container image registry.&lt;/p&gt;
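&lt;p&gt;A sketch of the kubelet's &lt;code&gt;CredentialProviderConfig&lt;/code&gt; (the plugin name and image pattern are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
- name: example-provider             # hypothetical plugin binary on the node
  matchImages:
  - &amp;quot;*.example.com&amp;quot;                  # images this plugin can fetch credentials for
  defaultCacheDuration: 12h
  apiVersion: credentialprovider.kubelet.k8s.io/v1
&lt;/code&gt;&lt;/pre&gt;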
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Kubernetes supports the ability to dynamically fetch credentials for a container registry service.
Prior to Kubernetes v1.20, this capability was compiled into the kubelet and only available for
Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.&lt;/p&gt;


&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/blog/2022/12/22/kubelet-credential-providers/kubelet-credential-providers-in-tree.png"
 alt="Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry."/&gt; &lt;figcaption&gt;
 &lt;p&gt;Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Introducing Validating Admission Policies</title><link>https://andygol-k8s.netlify.app/blog/2022/12/20/validating-admission-policies-alpha/</link><pubDate>Tue, 20 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/20/validating-admission-policies-alpha/</guid><description>&lt;p&gt;In Kubernetes 1.26, the first alpha release of validating admission policies is
available!&lt;/p&gt;
&lt;p&gt;Validating admission policies use the &lt;a href="https://github.com/google/cel-spec"&gt;Common Expression
Language&lt;/a&gt; (CEL) to offer a declarative,
in-process alternative to &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks"&gt;validating admission
webhooks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;CEL was first introduced to Kubernetes for the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules"&gt;Validation rules for
CustomResourceDefinitions&lt;/a&gt;.
This enhancement expands the use of CEL in Kubernetes to support a far wider
range of admission use cases.&lt;/p&gt;
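&lt;p&gt;To give a flavor of the new API (this is a minimal sketch; the policy name and the specific constraint are placeholders), a policy embeds a CEL expression directly in a Kubernetes resource:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-policy.example.com
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas &amp;lt;= 5"
&lt;/code&gt;&lt;/pre&gt;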
&lt;p&gt;Admission webhooks can be burdensome to develop and operate. Webhook developers
must implement and maintain a webhook binary to handle admission requests. Also,
admission webhooks are complex to operate. Each webhook must be deployed and
monitored, and must have a well-defined upgrade and rollback plan. To make matters
worse, if a webhook times out or becomes unavailable, the Kubernetes control
plane can become unavailable. This enhancement avoids much of this complexity of
admission webhooks by embedding CEL expressions into Kubernetes resources
instead of calling out to a remote webhook binary.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Device Manager graduates to GA</title><link>https://andygol-k8s.netlify.app/blog/2022/12/19/devicemanager-ga/</link><pubDate>Mon, 19 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/19/devicemanager-ga/</guid><description>&lt;p&gt;The Device Plugin framework was introduced in the Kubernetes v1.8 release as a vendor
independent framework to enable discovery, advertisement and allocation of external
devices without modifying core Kubernetes. The feature graduated to Beta in v1.10.
With the recent release of Kubernetes v1.26, Device Manager is now generally
available (GA).&lt;/p&gt;
&lt;p&gt;Within the kubelet, the Device Manager facilitates communication with device plugins
using gRPC through Unix sockets. Device Manager and Device plugins both act as gRPC
servers and clients by serving and connecting to the exposed gRPC services respectively.
Device plugins serve a gRPC service that kubelet connects to for device discovery,
advertisement (as extended resources) and allocation. Device Manager connects to
the &lt;code&gt;Registration&lt;/code&gt; gRPC service served by kubelet to register itself with kubelet.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Non-Graceful Node Shutdown Moves to Beta</title><link>https://andygol-k8s.netlify.app/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/</link><pubDate>Fri, 16 Dec 2022 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/</guid><description>&lt;p&gt;Kubernetes v1.24 &lt;a href="https://kubernetes.io/blog/2022/05/20/kubernetes-1-24-non-graceful-node-shutdown-alpha/"&gt;introduced&lt;/a&gt; an alpha quality implementation of improvements
for handling a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/#non-graceful-node-shutdown"&gt;non-graceful node shutdown&lt;/a&gt;.
In Kubernetes v1.26, this feature moves to beta. It allows stateful workloads to fail over to a different node after the original node is shut down or in a non-recoverable state, such as a hardware failure or a broken OS.&lt;/p&gt;
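&lt;p&gt;Once you have confirmed that a node is really down, you trigger the failover by applying the &lt;code&gt;node.kubernetes.io/out-of-service&lt;/code&gt; taint to that node. As a sketch, the taint on the node object looks like this (the value shown is the conventional one):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;# fragment of the shut-down node's spec, e.g. set via kubectl taint
taints:
- key: node.kubernetes.io/out-of-service
  value: nodeshutdown
  effect: NoExecute   # evicts the pods so they can start elsewhere
&lt;/code&gt;&lt;/pre&gt;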
&lt;h2 id="what-is-a-node-shutdown-in-kubernetes"&gt;What is a node shutdown in Kubernetes?&lt;/h2&gt;
&lt;p&gt;In a Kubernetes cluster, it is possible for a node to shut down. This could happen either in a planned way or it could happen unexpectedly. You may plan for a security patch, or a kernel upgrade and need to reboot the node, or it may shut down due to preemption of VM instances. A node may also shut down due to a hardware failure or a software problem.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Alpha API For Dynamic Resource Allocation</title><link>https://andygol-k8s.netlify.app/blog/2022/12/15/dynamic-resource-allocation/</link><pubDate>Thu, 15 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/15/dynamic-resource-allocation/</guid><description>&lt;p&gt;Dynamic resource allocation is a new API for requesting resources. It is a
generalization of the persistent volumes API for generic resources, making it possible to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;access the same resource instance in different pods and containers,&lt;/li&gt;
&lt;li&gt;attach arbitrary constraints to a resource request to get the exact resource
you are looking for,&lt;/li&gt;
&lt;li&gt;initialize a resource according to parameters provided by the user.&lt;/li&gt;
&lt;/ul&gt;
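&lt;p&gt;A hedged sketch of the new alpha API (the class, claim, and image names are placeholders): a Pod references a claim, and a resource driver allocates it.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: example-claim
spec:
  resourceClassName: example-resource-class  # provided by a resource driver
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  resourceClaims:
  - name: device
    source:
      resourceClaimName: example-claim
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      claims:
      - name: device   # container gets access to the allocated resource
&lt;/code&gt;&lt;/pre&gt;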
&lt;p&gt;Third-party resource drivers are responsible for interpreting these parameters
as well as tracking and allocating resources as requests come in.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: Windows HostProcess Containers Are Generally Available</title><link>https://andygol-k8s.netlify.app/blog/2022/12/13/windows-host-process-containers-ga/</link><pubDate>Tue, 13 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/13/windows-host-process-containers-ga/</guid><description>&lt;p&gt;The long-awaited day has arrived: HostProcess containers, the Windows equivalent to Linux privileged
containers, has finally made it to &lt;strong&gt;GA in Kubernetes 1.26&lt;/strong&gt;!&lt;/p&gt;
&lt;p&gt;What are HostProcess containers and why are they useful?&lt;/p&gt;
&lt;p&gt;Cluster operators are often faced with the need to configure their nodes upon provisioning such as
installing Windows services, configuring registry keys, managing TLS certificates,
making network configuration changes, or even deploying monitoring tools such as Prometheus's node-exporter.
Previously, performing these actions on Windows nodes was usually done by running PowerShell scripts
over SSH or WinRM sessions and/or working with your cloud provider's virtual machine management tooling.
HostProcess containers now enable you to do all of this and more with minimal effort using Kubernetes native APIs.&lt;/p&gt;</description></item><item><title>Kubernetes 1.26: We're now signing our binary release artifacts!</title><link>https://andygol-k8s.netlify.app/blog/2022/12/12/kubernetes-release-artifact-signing/</link><pubDate>Mon, 12 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/12/kubernetes-release-artifact-signing/</guid><description>&lt;p&gt;The Kubernetes Special Interest Group (SIG) Release is proud to announce that we
are digitally signing all release artifacts, and that this aspect of Kubernetes
has now reached &lt;em&gt;beta&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Signing artifacts provides end users a chance to verify the integrity of the
downloaded resource. It helps mitigate man-in-the-middle attacks directly on
the client side and therefore ensures the trustworthiness of the remote host serving the
artifacts. The overall goal of our past work was to define the tooling used for
signing all Kubernetes-related artifacts as well as to provide a standard signing
process for related projects (for example for those in &lt;a href="https://github.com/kubernetes-sigs"&gt;kubernetes-sigs&lt;/a&gt;).&lt;/p&gt;</description></item><item><title>Kubernetes v1.26: Electrifying</title><link>https://andygol-k8s.netlify.app/blog/2022/12/09/kubernetes-v1-26-release/</link><pubDate>Fri, 09 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/09/kubernetes-v1-26-release/</guid><description>&lt;p&gt;It's with immense joy that we announce the release of Kubernetes v1.26!&lt;/p&gt;
&lt;p&gt;This release includes a total of 37 enhancements: eleven of them are graduating to Stable, ten are
graduating to Beta, and sixteen of them are entering Alpha. We also have twelve features being
deprecated or removed, three of which we describe in more detail in this announcement.&lt;/p&gt;
&lt;h2 id="release-theme-and-logo"&gt;Release theme and logo&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes 1.26: Electrifying&lt;/strong&gt;&lt;/p&gt;


&lt;figure class="release-logo "&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2022-12-08-kubernetes-1.26-release/kubernetes-1.26.png"
 alt="Kubernetes 1.26 Electrifying logo"/&gt; 
&lt;/figure&gt;
&lt;p&gt;The theme for Kubernetes v1.26 is &lt;em&gt;Electrifying&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Forensic container checkpointing in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2022/12/05/forensic-container-checkpointing-alpha/</link><pubDate>Mon, 05 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/05/forensic-container-checkpointing-alpha/</guid><description>&lt;p&gt;Forensic container checkpointing is based on &lt;a href="https://criu.org/"&gt;Checkpoint/Restore In
Userspace&lt;/a&gt; (CRIU) and allows the creation of stateful copies
of a running container without the container knowing that it is being
checkpointed. The copy of the container can be analyzed and restored in a
sandbox environment multiple times without the original container being aware
of it. Forensic container checkpointing was introduced as an alpha feature in
Kubernetes v1.25.&lt;/p&gt;
&lt;h2 id="how-does-it-work"&gt;How does it work?&lt;/h2&gt;
&lt;p&gt;With the help of CRIU it is possible to checkpoint and restore containers.
CRIU is integrated in runc, crun, CRI-O and containerd and forensic container
checkpointing as implemented in Kubernetes uses these existing CRIU
integrations.&lt;/p&gt;</description></item><item><title>Finding suspicious syscalls with the seccomp notifier</title><link>https://andygol-k8s.netlify.app/blog/2022/12/02/seccomp-notifier/</link><pubDate>Fri, 02 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/02/seccomp-notifier/</guid><description>&lt;p&gt;Debugging software in production is one of the biggest challenges we have to
face in our containerized environments. Being able to understand the impact of
the available security options, especially when it comes to configuring our
deployments, is one of the key aspects to make the default security in
Kubernetes stronger. We have all those logging, tracing and metrics data already
at hand, but how do we assemble the information they provide into something
human-readable and actionable?&lt;/p&gt;</description></item><item><title>Boosting Kubernetes container runtime observability with OpenTelemetry</title><link>https://andygol-k8s.netlify.app/blog/2022/12/01/runtime-observability-opentelemetry/</link><pubDate>Thu, 01 Dec 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/12/01/runtime-observability-opentelemetry/</guid><description>&lt;p&gt;When speaking about observability in the cloud native space, probably
everyone will mention &lt;a href="https://opentelemetry.io"&gt;OpenTelemetry (OTEL)&lt;/a&gt; at some point in the
conversation. That's great, because the community needs standards to rely on
for developing all cluster components in the same direction. OpenTelemetry
enables us to combine logs, metrics, traces and other contextual information
(called baggage) into a single resource. Cluster administrators or software
engineers can use this resource to get a view of what is going on in the
cluster over a defined period of time. But how can Kubernetes itself make use of
this technology stack?&lt;/p&gt;</description></item><item><title>registry.k8s.io: faster, cheaper and Generally Available (GA)</title><link>https://andygol-k8s.netlify.app/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/</link><pubDate>Mon, 28 Nov 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/</guid><description>&lt;p&gt;Starting with Kubernetes 1.25, our container image registry has changed from k8s.gcr.io to &lt;a href="https://registry.k8s.io"&gt;registry.k8s.io&lt;/a&gt;. This new registry spreads the load across multiple Cloud Providers &amp;amp; Regions, functioning as a sort of content delivery network (CDN) for Kubernetes container images. This change reduces the project’s reliance on a single entity and provides a faster download experience for a large number of users.&lt;/p&gt;
&lt;h2 id="tl-dr-what-you-need-to-know-about-this-change"&gt;TL;DR: What you need to know about this change&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Container images for Kubernetes releases from &lt;del&gt;1.25&lt;/del&gt; 1.27 onward are not published to k8s.gcr.io, only to registry.k8s.io.&lt;/li&gt;
&lt;li&gt;In the upcoming December patch releases, the new registry domain default will be backported to all branches still in support (1.22, 1.23, 1.24).&lt;/li&gt;
&lt;li&gt;If you run in a restricted environment and apply strict domain/IP address access policies limited to k8s.gcr.io, the &lt;strong&gt;image pulls will not function&lt;/strong&gt; after the migration to this new registry. For these users, the recommended method is to mirror the release images to a private registry.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you’d like to know more about why we made this change, or some potential issues you might run into, keep reading.&lt;/p&gt;</description></item><item><title>Kubernetes Removals, Deprecations, and Major Changes in 1.26</title><link>https://andygol-k8s.netlify.app/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/</link><pubDate>Fri, 18 Nov 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/</guid><description>&lt;p&gt;Change is an integral part of the Kubernetes life-cycle: as Kubernetes grows and matures, features may be deprecated, removed, or replaced with improvements for the health of the project. For Kubernetes v1.26 there are several planned: this article identifies and describes some of them, based on the information available at this mid-cycle point in the v1.26 release process, which is still ongoing and can introduce additional changes.&lt;/p&gt;
&lt;h2 id="k8s-api-deprecation-process"&gt;The Kubernetes API Removal and Deprecation process&lt;/h2&gt;
&lt;p&gt;The Kubernetes project has a &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/"&gt;well-documented deprecation policy&lt;/a&gt; for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.&lt;/p&gt;</description></item><item><title>Live and let live with Kluctl and Server Side Apply</title><link>https://andygol-k8s.netlify.app/blog/2022/11/04/live-and-let-live-with-kluctl-and-ssa/</link><pubDate>Fri, 04 Nov 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/11/04/live-and-let-live-with-kluctl-and-ssa/</guid><description>&lt;p&gt;This blog post was inspired by a previous Kubernetes blog post about
&lt;a href="https://kubernetes.io/blog/2022/10/20/advanced-server-side-apply/"&gt;Advanced Server Side Apply&lt;/a&gt;.
The author of said blog post listed multiple benefits for applications and
controllers when switching to server-side apply (from now on abbreviated with
SSA). Especially the chapter about
&lt;a href="https://kubernetes.io/blog/2022/10/20/advanced-server-side-apply/#ci-cd-systems"&gt;CI/CD systems&lt;/a&gt;
motivated me to respond and write down my thoughts and experiences.&lt;/p&gt;
&lt;p&gt;These thoughts and experiences are the results of me working on &lt;a href="https://kluctl.io"&gt;Kluctl&lt;/a&gt;
for the past 2 years. I describe Kluctl as &amp;quot;The missing glue to put together
large Kubernetes deployments, composed of multiple smaller parts
(Helm/Kustomize/...) in a manageable and unified way.&amp;quot;&lt;/p&gt;</description></item><item><title>Server Side Apply Is Great And You Should Be Using It</title><link>https://andygol-k8s.netlify.app/blog/2022/10/20/advanced-server-side-apply/</link><pubDate>Thu, 20 Oct 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/10/20/advanced-server-side-apply/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/server-side-apply/"&gt;Server-side apply&lt;/a&gt; (SSA) has now
been &lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/06/server-side-apply-ga/"&gt;GA for a few releases&lt;/a&gt;, and I
have found myself in a number of conversations, recommending that people / teams
in various situations use it. So I’d like to write down some of those reasons.&lt;/p&gt;
&lt;h2 id="benefits"&gt;Obvious (and not-so-obvious) benefits of SSA&lt;/h2&gt;
&lt;p&gt;A list of improvements / niceties you get from switching from various things to
Server-side apply!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Versus client-side-apply (that is, plain &lt;code&gt;kubectl apply&lt;/code&gt;):
&lt;ul&gt;
&lt;li&gt;The system gives you conflicts when you accidentally fight with another
actor over the value of a field!&lt;/li&gt;
&lt;li&gt;When combined with &lt;code&gt;--dry-run&lt;/code&gt;, there’s no chance of accidentally running a
client-side dry run instead of a server side dry run.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Versus hand-rolling patches:
&lt;ul&gt;
&lt;li&gt;The SSA patch format is extremely natural to write, with no weird syntax.
It’s just a regular object, but you can (and should) omit any field you
don’t care about.&lt;/li&gt;
&lt;li&gt;The old patch format (“strategic merge patch”) was ad-hoc and still has some
bugs; JSON-patch and JSON merge-patch fail to handle some cases that are
common in the Kubernetes API, namely lists with items that should be
recursively merged based on a “name” or other identifying field.&lt;/li&gt;
&lt;li&gt;There’s also now great &lt;a href="https://kubernetes.io/blog/2021/08/06/server-side-apply-ga/#using-server-side-apply-in-a-controller"&gt;go-language library support&lt;/a&gt;
for building apply calls programmatically!&lt;/li&gt;
&lt;li&gt;You can use SSA to explicitly delete fields you don’t “own” by setting them
to &lt;code&gt;null&lt;/code&gt;, which makes it a feature-complete replacement for all of the old
patch formats.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Versus shelling out to kubectl:
&lt;ul&gt;
&lt;li&gt;You can use the &lt;strong&gt;apply&lt;/strong&gt; API call from any language without shelling out to
kubectl!&lt;/li&gt;
&lt;li&gt;As stated above, the &lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/06/server-side-apply-ga/#server-side-apply-support-in-client-go"&gt;Go library has dedicated mechanisms&lt;/a&gt;
to make this easy now.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Versus GET-modify-PUT:
&lt;ul&gt;
&lt;li&gt;(This one is more complicated and you can skip it if you've never written a
controller!)&lt;/li&gt;
&lt;li&gt;To use GET-modify-PUT correctly, you have to handle and retry a write
failure in the case that someone else has modified the object in any way
between your GET and PUT. This is an “optimistic concurrency failure” when
it happens.&lt;/li&gt;
&lt;li&gt;SSA offloads this task to the server: you only have to retry if there’s a
conflict, and the conflicts you can get are all meaningful, like when you’re
actually trying to take a field away from another actor in the system.&lt;/li&gt;
&lt;li&gt;To put it another way, if 10 actors do a GET-modify-PUT cycle at the same
time, 9 will get an optimistic concurrency failure and have to retry, then
8, etc, for up to 50 total GET-PUT attempts in the worst case (that’s .5N^2
GET and PUT calls for N actors making simultaneous changes). If the actors
are using SSA instead, and the changes don’t actually conflict over specific
fields, then all the changes can go in in any order. Additionally, SSA
changes can often be done without a GET call at all. That’s only N &lt;strong&gt;apply&lt;/strong&gt;
requests for N actors, which is a drastic improvement!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
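&lt;p&gt;The benefits above are easiest to see with a concrete apply configuration: an SSA patch is just a partial object listing only the fields you own (the names below are placeholders), submitted with &lt;code&gt;kubectl apply --server-side&lt;/code&gt; or the equivalent API call:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;# replicas-manager.yaml: this actor manages only .spec.replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  namespace: default
spec:
  replicas: 3   # the single field this field manager claims
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Applied with a dedicated field manager (for example &lt;code&gt;kubectl apply --server-side --field-manager=replica-tuner -f replicas-manager.yaml&lt;/code&gt;), only &lt;code&gt;.spec.replicas&lt;/code&gt; is owned by this actor; fields managed by other actors are left untouched.&lt;/p&gt;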
&lt;h2 id="how-can-i-use-ssa"&gt;How can I use SSA?&lt;/h2&gt;
&lt;h3 id="users"&gt;Users&lt;/h3&gt;
&lt;p&gt;Use &lt;code&gt;kubectl apply --server-side&lt;/code&gt;! Soon we (SIG API Machinery) hope to make this
the default and remove the “client side” apply completely!&lt;/p&gt;</description></item><item><title>Current State: 2019 Third Party Security Audit of Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2022/10/05/current-state-2019-third-party-audit/</link><pubDate>Wed, 05 Oct 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/10/05/current-state-2019-third-party-audit/</guid><description>&lt;p&gt;We expect the brand new Third Party Security Audit of Kubernetes will be
published later this month (Oct 2022).&lt;/p&gt;
&lt;p&gt;In preparation for that, let's look at the state of findings that were made
public as part of the last &lt;a href="https://github.com/kubernetes/sig-security/tree/main/sig-security-external-audit/security-audit-2019"&gt;third party security audit of
2019&lt;/a&gt;
that was based on &lt;a href="https://github.com/kubernetes/kubernetes/tree/release-1.13"&gt;Kubernetes v1.13.4&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/cji"&gt;Craig Ingram&lt;/a&gt; has graciously attempted over the years to keep track of the
status of the findings reported in the last audit in this issue:
&lt;a href="https://github.com/kubernetes/kubernetes/issues/81146"&gt;kubernetes/kubernetes#81146&lt;/a&gt;.
This blog post will attempt to dive deeper into this, address any gaps
in tracking and become a point-in-time summary of the state of the
findings reported from 2019.&lt;/p&gt;</description></item><item><title>Introducing Kueue</title><link>https://andygol-k8s.netlify.app/blog/2022/10/04/introducing-kueue/</link><pubDate>Tue, 04 Oct 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/10/04/introducing-kueue/</guid><description>&lt;p&gt;Whether on-premises or in the cloud, clusters face real constraints for resource usage, quota, and cost management reasons. Regardless of the autoscaling capabilities, clusters have finite capacity. As a result, users want an easy way to fairly and
efficiently share resources.&lt;/p&gt;
&lt;p&gt;In this article, we introduce &lt;a href="https://github.com/kubernetes-sigs/kueue/tree/main/docs#readme"&gt;Kueue&lt;/a&gt;,
an open source job queueing controller designed to manage batch jobs as a single unit.
Kueue leaves pod-level orchestration to existing stable components of Kubernetes.
Kueue natively supports the Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/"&gt;Job&lt;/a&gt;
API and offers hooks for integrating other custom-built APIs for batch jobs.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: alpha support for running Pods with user namespaces</title><link>https://andygol-k8s.netlify.app/blog/2022/10/03/userns-alpha/</link><pubDate>Mon, 03 Oct 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/10/03/userns-alpha/</guid><description>&lt;p&gt;Kubernetes v1.25 introduces the support for user namespaces.&lt;/p&gt;
&lt;p&gt;This is a major improvement for running secure workloads in
Kubernetes. Each pod will have access only to a limited subset of the
available UIDs and GIDs on the system, thus adding a new security
layer to protect from other pods running on the same system.&lt;/p&gt;
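&lt;p&gt;Concretely, a pod opts in by setting &lt;code&gt;hostUsers: false&lt;/code&gt; in its spec (a minimal sketch; the pod and image names are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-pod
spec:
  hostUsers: false   # run this pod in a new user namespace
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;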
&lt;h2 id="how-does-it-work"&gt;How does it work?&lt;/h2&gt;
&lt;p&gt;A process running on Linux can use up to 4294967296 different UIDs and
GIDs.&lt;/p&gt;</description></item><item><title>Enforce CRD Immutability with CEL Transition Rules</title><link>https://andygol-k8s.netlify.app/blog/2022/09/29/enforce-immutability-using-cel/</link><pubDate>Thu, 29 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/29/enforce-immutability-using-cel/</guid><description>&lt;p&gt;Immutable fields can be found in a few places in the built-in Kubernetes types.
For example, you can't change the &lt;code&gt;.metadata.name&lt;/code&gt; of an object. Specific objects
have fields where changes to existing objects are constrained; for example, the
&lt;code&gt;.spec.selector&lt;/code&gt; of a Deployment.&lt;/p&gt;
&lt;p&gt;Aside from simple immutability, there are other common design patterns such as
lists which are append-only, or a map with mutable values and immutable keys.&lt;/p&gt;
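&lt;p&gt;Constraints like these can be declared with CEL transition rules that compare an incoming value (&lt;code&gt;self&lt;/code&gt;) to the stored one (&lt;code&gt;oldSelf&lt;/code&gt;); a minimal schema fragment for an immutable field (the field name is a placeholder) looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;# fragment of a CRD's openAPIV3Schema
properties:
  immutableField:
    type: string
    x-kubernetes-validations:
    - rule: "self == oldSelf"
      message: "immutableField is immutable"
&lt;/code&gt;&lt;/pre&gt;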
&lt;p&gt;Until recently the best way to restrict field mutability for CustomResourceDefinitions
has been to create a validating
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks"&gt;admission webhook&lt;/a&gt;:
this means a lot of complexity for the common case of making a field immutable.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: Kubernetes In-Tree to CSI Volume Migration Status Update</title><link>https://andygol-k8s.netlify.app/blog/2022/09/26/storage-in-tree-to-csi-migration-status-update-1.25/</link><pubDate>Mon, 26 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/26/storage-in-tree-to-csi-migration-status-update-1.25/</guid><description>&lt;p&gt;The Kubernetes in-tree storage plugin to &lt;a href="https://andygol-k8s.netlify.app/blog/2019/01/15/container-storage-interface-ga/"&gt;Container Storage Interface (CSI)&lt;/a&gt; migration infrastructure has already been &lt;a href="https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/"&gt;beta&lt;/a&gt; since v1.17. CSI migration was introduced as alpha in Kubernetes v1.14.
Since then, SIG Storage and other Kubernetes special interest groups have been working to ensure feature stability and compatibility in preparation for the CSI Migration feature to go GA.&lt;/p&gt;
&lt;p&gt;SIG Storage is excited to announce that the core CSI Migration feature is &lt;strong&gt;generally available&lt;/strong&gt; in Kubernetes v1.25 release!&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: CustomResourceDefinition Validation Rules Graduate to Beta</title><link>https://andygol-k8s.netlify.app/blog/2022/09/23/crd-validation-rules-beta/</link><pubDate>Fri, 23 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/23/crd-validation-rules-beta/</guid><description>&lt;p&gt;In Kubernetes 1.25, &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules"&gt;Validation rules for CustomResourceDefinitions&lt;/a&gt; (CRDs) have graduated to Beta!&lt;/p&gt;
&lt;p&gt;Validation rules make it possible to declare how custom resources are validated using the &lt;a href="https://github.com/google/cel-spec"&gt;Common Expression Language&lt;/a&gt; (CEL). For example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;apiextensions.k8s.io/v1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;CustomResourceDefinition&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;...&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;openAPIV3Schema&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;type&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;object&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;properties&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;spec&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;type&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;object&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;x-kubernetes-validations&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#008000;font-weight:bold"&gt;rule&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;self.minReplicas &amp;lt;= self.replicas &amp;amp;&amp;amp; self.replicas &amp;lt;= self.maxReplicas&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;message&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#b44"&gt;&amp;#34;replicas should be in the range minReplicas..maxReplicas.&amp;#34;&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;properties&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;replicas&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#008000;font-weight:bold"&gt;type&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;integer&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;...&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Validation rules support a wide range of use cases. To get a sense of some of the capabilities, let's look at a few examples:&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: Use Secrets for Node-Driven Expansion of CSI Volumes</title><link>https://andygol-k8s.netlify.app/blog/2022/09/21/kubernetes-1-25-use-secrets-while-expanding-csi-volumes-on-node-alpha/</link><pubDate>Wed, 21 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/21/kubernetes-1-25-use-secrets-while-expanding-csi-volumes-on-node-alpha/</guid><description>&lt;p&gt;Kubernetes v1.25, released earlier this month, introduced a new feature
that lets your cluster expand storage volumes, even when access to those
volumes requires a secret (for example: a credential for accessing a SAN fabric)
to perform a node expand operation. This new behavior is in alpha and you
must enable a feature gate (&lt;code&gt;CSINodeExpandSecret&lt;/code&gt;) to make use of it.
You must also be using &lt;a href="https://kubernetes-csi.github.io/docs/"&gt;CSI&lt;/a&gt;
storage; this change isn't relevant to storage drivers that are built in to Kubernetes.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: Local Storage Capacity Isolation Reaches GA</title><link>https://andygol-k8s.netlify.app/blog/2022/09/19/local-storage-capacity-isolation-ga/</link><pubDate>Mon, 19 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/19/local-storage-capacity-isolation-ga/</guid><description>&lt;p&gt;Local ephemeral storage capacity isolation was introduced as an alpha feature in Kubernetes 1.7 and it went beta in 1.9. With Kubernetes 1.25 we are excited to announce general availability (GA) of this feature.&lt;/p&gt;
&lt;p&gt;Pods use ephemeral local storage for scratch space, caching, and logs. The lifetime of local ephemeral storage does not extend beyond the life of the individual pod. It is exposed to pods using the container’s writable layer, logs directory, and &lt;code&gt;EmptyDir&lt;/code&gt; volumes. Before this feature was introduced, there were issues related to the lack of local storage accounting and isolation, such as Pods not knowing how much local storage is available and being unable to request guaranteed local storage. Local storage is a best-effort resource and pods can be evicted due to other pods filling the local storage.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: Two Features for Apps Rollouts Graduate to Stable</title><link>https://andygol-k8s.netlify.app/blog/2022/09/15/app-rollout-features-reach-stable/</link><pubDate>Thu, 15 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/15/app-rollout-features-reach-stable/</guid><description>&lt;p&gt;This blog describes two features, &lt;code&gt;minReadySeconds&lt;/code&gt; for StatefulSets and &lt;code&gt;maxSurge&lt;/code&gt; for DaemonSets, that SIG Apps is happy to graduate to stable in Kubernetes 1.25.&lt;/p&gt;
&lt;p&gt;Specifying &lt;code&gt;minReadySeconds&lt;/code&gt; slows down a rollout of a StatefulSet, when using the &lt;code&gt;RollingUpdate&lt;/code&gt; value in the &lt;code&gt;.spec.updateStrategy&lt;/code&gt; field, by waiting a desired time for each pod.
This time can be used for initializing the pod (e.g. warming up the cache) or as a delay before acknowledging the pod.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: PodHasNetwork Condition for Pods</title><link>https://andygol-k8s.netlify.app/blog/2022/09/14/pod-has-network-condition/</link><pubDate>Wed, 14 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/14/pod-has-network-condition/</guid><description>&lt;p&gt;Kubernetes 1.25 introduces Alpha support for a new kubelet-managed pod condition
in the status field of a pod: &lt;code&gt;PodHasNetwork&lt;/code&gt;. The kubelet, for a worker node,
will use the &lt;code&gt;PodHasNetwork&lt;/code&gt; condition to accurately surface the initialization
state of a pod from the perspective of pod sandbox creation and network
configuration by a container runtime (typically in coordination with CNI
plugins). The kubelet starts pulling container images and starting individual
containers (including init containers) after the status of the &lt;code&gt;PodHasNetwork&lt;/code&gt;
condition is set to &lt;code&gt;&amp;quot;True&amp;quot;&lt;/code&gt;. Metrics collection services that report latency of
pod initialization from a cluster infrastructural perspective (i.e. agnostic of
per container characteristics like image size or payload) can utilize the
&lt;code&gt;PodHasNetwork&lt;/code&gt; condition to accurately generate Service Level Indicators
(SLIs). Certain operators or controllers that manage underlying pods may utilize
the &lt;code&gt;PodHasNetwork&lt;/code&gt; condition to optimize the set of actions performed when pods
repeatedly fail to come up.&lt;/p&gt;</description></item><item><title>Announcing the Auto-refreshing Official Kubernetes CVE Feed</title><link>https://andygol-k8s.netlify.app/blog/2022/09/12/k8s-cve-feed-alpha/</link><pubDate>Mon, 12 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/12/k8s-cve-feed-alpha/</guid><description>&lt;p&gt;A long-standing request from the Kubernetes community has been to have a
programmatic way for end users to keep track of Kubernetes security issues
(also called &amp;quot;CVEs&amp;quot;, after the database that tracks public security issues across
different products and vendors). Accompanying the release of Kubernetes v1.25,
we are excited to announce availability of such
a &lt;a href="https://andygol-k8s.netlify.app/docs/reference/issues-security/official-cve-feed/index.json"&gt;feed&lt;/a&gt; as an &lt;code&gt;alpha&lt;/code&gt;
feature. This blog will cover the background and scope of this new service.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: KMS V2 Improvements</title><link>https://andygol-k8s.netlify.app/blog/2022/09/09/kms-v2-improvements/</link><pubDate>Fri, 09 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/09/kms-v2-improvements/</guid><description>&lt;p&gt;With Kubernetes v1.25, SIG Auth is introducing a new &lt;code&gt;v2alpha1&lt;/code&gt; version of the Key Management Service (KMS) API. There are a lot of improvements in the works, and we're excited to be able to start down the path of a new and improved KMS!&lt;/p&gt;
&lt;h2 id="what-is-kms"&gt;What is KMS?&lt;/h2&gt;
&lt;p&gt;One of the first things to consider when securing a Kubernetes cluster is encrypting persisted API data at rest. KMS provides an interface for a provider to utilize a key stored in an external key service to perform this encryption.&lt;/p&gt;</description></item><item><title>Kubernetes’s IPTables Chains Are Not API</title><link>https://andygol-k8s.netlify.app/blog/2022/09/07/iptables-chains-not-api/</link><pubDate>Wed, 07 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/07/iptables-chains-not-api/</guid><description>&lt;p&gt;Some Kubernetes components (such as kubelet and kube-proxy) create
iptables chains and rules as part of their operation. These chains
were never intended to be part of any Kubernetes API/ABI guarantees,
but some external components nonetheless make use of some of them (in
particular, using &lt;code&gt;KUBE-MARK-MASQ&lt;/code&gt; to mark packets as needing to be
masqueraded).&lt;/p&gt;
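&lt;p&gt;For illustration only (the vendor chain name here is hypothetical, and this is a sketch rather than a documented pattern), such a component might append a rule that jumps into Kubernetes' chain:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# mark pod-network traffic from a third-party chain for masquerading (illustrative sketch)
iptables -t nat -A VENDOR-EGRESS -s 10.0.0.0/8 -j KUBE-MARK-MASQ
&lt;/code&gt;&lt;/pre&gt;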
&lt;p&gt;As a part of the v1.25 release, SIG Network made this declaration
explicit: that (with one exception) the iptables chains that
Kubernetes creates are intended only for Kubernetes’s own internal
use, and third-party components should not assume that Kubernetes will
create any specific iptables chains, or that those chains will contain
any specific rules if they do exist.&lt;/p&gt;</description></item><item><title>Introducing COSI: Object Storage Management using Kubernetes APIs</title><link>https://andygol-k8s.netlify.app/blog/2022/09/02/cosi-kubernetes-object-storage-management/</link><pubDate>Fri, 02 Sep 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/09/02/cosi-kubernetes-object-storage-management/</guid><description>&lt;p&gt;This article introduces the Container Object Storage Interface (COSI), a standard for provisioning and consuming object storage in Kubernetes. It is an alpha feature in Kubernetes v1.25.&lt;/p&gt;
&lt;p&gt;File and block storage are treated as first class citizens in the Kubernetes ecosystem via &lt;a href="https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/"&gt;Container Storage Interface&lt;/a&gt; (CSI). Workloads using CSI volumes enjoy the benefits of portability across vendors and across Kubernetes clusters without the need to change application manifests. An equivalent standard does not exist for Object storage.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: cgroup v2 graduates to GA</title><link>https://andygol-k8s.netlify.app/blog/2022/08/31/cgroupv2-ga-1-25/</link><pubDate>Wed, 31 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/31/cgroupv2-ga-1-25/</guid><description>&lt;p&gt;Kubernetes 1.25 brings cgroup v2 to GA (general availability), letting the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/components/#kubelet"&gt;kubelet&lt;/a&gt; use the latest container resource
management capabilities.&lt;/p&gt;
&lt;h2 id="what-are-cgroups"&gt;What are cgroups?&lt;/h2&gt;
&lt;p&gt;Effective &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/"&gt;resource management&lt;/a&gt; is a
critical aspect of Kubernetes. This involves managing the finite resources in
your nodes, such as CPU, memory, and storage.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;cgroups&lt;/em&gt; are a Linux kernel capability that establish resource management
functionality like limiting CPU usage or setting memory limits for running
processes.&lt;/p&gt;
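&lt;p&gt;In Kubernetes terms, those kernel controls sit behind the familiar container resource stanza. A minimal sketch (the Pod and container names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app            # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8
    resources:
      requests:        # used by the scheduler when placing the Pod
        cpu: 250m
        memory: 64Mi
      limits:          # enforced on the node through cgroups
        cpu: 500m
        memory: 128Mi
&lt;/code&gt;&lt;/pre&gt;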
&lt;p&gt;When you use the resource management capabilities in Kubernetes, such as configuring
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/configuration/manage-resources-containers/#requests-and-limits"&gt;requests and limits for Pods and containers&lt;/a&gt;,
Kubernetes uses cgroups to enforce your resource requests and limits.&lt;/p&gt;</description></item><item><title>Kubernetes 1.25: CSI Inline Volumes have graduated to GA</title><link>https://andygol-k8s.netlify.app/blog/2022/08/29/csi-inline-volumes-ga/</link><pubDate>Mon, 29 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/29/csi-inline-volumes-ga/</guid><description>&lt;p&gt;CSI Inline Volumes were introduced as an alpha feature in Kubernetes 1.15 and have been beta since 1.16. We are happy to announce that this feature has graduated to General Availability (GA) status in Kubernetes 1.25.&lt;/p&gt;
&lt;p&gt;CSI Inline Volumes are similar to other ephemeral volume types, such as &lt;code&gt;configMap&lt;/code&gt;, &lt;code&gt;downwardAPI&lt;/code&gt; and &lt;code&gt;secret&lt;/code&gt;. The important difference is that the storage is provided by a CSI driver, which allows the use of ephemeral storage provided by third-party vendors. The volume is defined as part of the pod spec and follows the lifecycle of the pod, meaning the volume is created once the pod is scheduled and destroyed when the pod is destroyed.&lt;/p&gt;</description></item><item><title>Kubernetes v1.25: Pod Security Admission Controller in Stable</title><link>https://andygol-k8s.netlify.app/blog/2022/08/25/pod-security-admission-stable/</link><pubDate>Thu, 25 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/25/pod-security-admission-stable/</guid><description>&lt;p&gt;The release of Kubernetes v1.25 marks a major milestone for Kubernetes out-of-the-box pod security
controls: Pod Security admission (PSA) graduated to stable, and Pod Security Policy (PSP) has been
removed.
&lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/"&gt;PSP was deprecated in Kubernetes v1.21&lt;/a&gt;,
and no longer functions in Kubernetes v1.25 and later.&lt;/p&gt;
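&lt;p&gt;As a minimal sketch (the namespace name is illustrative), enforcement under the stable replacement is driven by a namespace label:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: my-app         # illustrative name
  labels:
    # enforce the baseline Pod Security Standard for all Pods in this namespace
    pod-security.kubernetes.io/enforce: baseline
&lt;/code&gt;&lt;/pre&gt;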
&lt;p&gt;The Pod Security admission controller replaces PodSecurityPolicy, making it easier to enforce predefined
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt; by
simply adding a label to a namespace. The Pod Security Standards are maintained by the K8s
community, which means you automatically get updated security policies whenever new
security-impacting Kubernetes features are introduced.&lt;/p&gt;</description></item><item><title>PodSecurityPolicy: The Historical Context</title><link>https://andygol-k8s.netlify.app/blog/2022/08/23/podsecuritypolicy-the-historical-context/</link><pubDate>Tue, 23 Aug 2022 15:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/23/podsecuritypolicy-the-historical-context/</guid><description>&lt;p&gt;The PodSecurityPolicy (PSP) admission controller has been removed, as of
Kubernetes v1.25. Its deprecation was announced and detailed in the blog post
&lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/"&gt;PodSecurityPolicy Deprecation: Past, Present, and Future&lt;/a&gt;,
published for the Kubernetes v1.21 release.&lt;/p&gt;
&lt;p&gt;This article aims to provide historical context on the birth and evolution of
PSP, explain why the feature never made it to stable, and show why it was
removed and replaced by Pod Security admission control.&lt;/p&gt;
&lt;p&gt;PodSecurityPolicy, like other specialized admission control plugins, provided
fine-grained permissions on specific fields concerning the pod security settings
as a built-in policy API. It acknowledged that cluster administrators and
cluster users are usually not the same people, and that creating workloads in
the form of a Pod or any resource that will create a Pod should not equal being
&amp;quot;root on the cluster&amp;quot;. It could also encourage best practices by configuring
more secure defaults through mutation and decoupling low-level Linux security
decisions from the deployment process.&lt;/p&gt;</description></item><item><title>Kubernetes v1.25: Combiner</title><link>https://andygol-k8s.netlify.app/blog/2022/08/23/kubernetes-v1-25-release/</link><pubDate>Tue, 23 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/23/kubernetes-v1-25-release/</guid><description>&lt;p&gt;Announcing the release of Kubernetes v1.25!&lt;/p&gt;
&lt;p&gt;This release includes a total of 40 enhancements. Fifteen of those enhancements are entering Alpha, ten are graduating to Beta, and thirteen are graduating to Stable. We also have two features being deprecated or removed.&lt;/p&gt;
&lt;h2 id="release-theme-and-logo"&gt;Release theme and logo&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes 1.25: Combiner&lt;/strong&gt;&lt;/p&gt;


&lt;figure class="release-logo "&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2022-08-23-kubernetes-1.25-release/kubernetes-1.25.png"
 alt="Combiner logo"/&gt; 
&lt;/figure&gt;
&lt;p&gt;The theme for Kubernetes v1.25 is &lt;em&gt;Combiner&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The Kubernetes project itself is made up of many, many individual components that, when combined, take the form of the project you see today. It is also built and maintained by many individuals, all of them with different skills, experiences, histories, and interests, who join forces not just as the release team but as the many SIGs that support the project and the community year-round.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Storage</title><link>https://andygol-k8s.netlify.app/blog/2022/08/22/sig-storage-spotlight/</link><pubDate>Mon, 22 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/22/sig-storage-spotlight/</guid><description>&lt;p&gt;Since the very beginning of Kubernetes, the topic of persistent data and how to address the requirement of stateful applications has been an important topic. Support for stateless deployments was natural, present from the start, and garnered attention, becoming very well-known. Work on better support for stateful applications was also present from early on, with each release increasing the scope of what could be run on Kubernetes.&lt;/p&gt;
&lt;p&gt;Message queues, databases, clustered filesystems: these are some examples of the solutions that have different storage requirements and that are, today, increasingly deployed in Kubernetes. Dealing with ephemeral and persistent storage, local or remote, file or block, from many different vendors, while considering how to provide the needed resiliency and data consistency that users expect, all of this is under SIG Storage's umbrella.&lt;/p&gt;</description></item><item><title>Meet Our Contributors - APAC (China region)</title><link>https://andygol-k8s.netlify.app/blog/2022/08/15/meet-our-contributors-china-ep-03/</link><pubDate>Mon, 15 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/15/meet-our-contributors-china-ep-03/</guid><description>&lt;p&gt;&lt;strong&gt;Authors &amp;amp; Interviewers:&lt;/strong&gt; &lt;a href="https://github.com/AvineshTripathi"&gt;Avinesh Tripathi&lt;/a&gt;, &lt;a href="https://github.com/Debanitrkl"&gt;Debabrata Panigrahi&lt;/a&gt;, &lt;a href="https://github.com/jayesh-srivastava"&gt;Jayesh Srivastava&lt;/a&gt;, &lt;a href="https://github.com/Priyankasaggu11929/"&gt;Priyanka Saggu&lt;/a&gt;, &lt;a href="https://github.com/PurneswarPrasad"&gt;Purneswar Prasad&lt;/a&gt;, &lt;a href="https://github.com/vedant-kakde"&gt;Vedant Kakde&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Hello, everyone 👋&lt;/p&gt;
&lt;p&gt;Welcome back to the third edition of the &amp;quot;Meet Our Contributors&amp;quot; blog post series for APAC.&lt;/p&gt;
&lt;p&gt;This post features four outstanding contributors from China, who have played diverse leadership and community roles in the upstream Kubernetes project.&lt;/p&gt;
&lt;p&gt;So, without further ado, let's get straight to the article.&lt;/p&gt;
&lt;h2 id="andy-zhang"&gt;&lt;a href="https://github.com/andyzhangx"&gt;Andy Zhang&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Andy Zhang currently works for Microsoft China at the Shanghai site. His main focus is on Kubernetes storage drivers. Andy started contributing to Kubernetes about 5 years ago.&lt;/p&gt;</description></item><item><title>Enhancing Kubernetes one KEP at a Time</title><link>https://andygol-k8s.netlify.app/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/</link><pubDate>Thu, 11 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/</guid><description>&lt;p&gt;Did you know that Kubernetes v1.24 has &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/"&gt;46 enhancements&lt;/a&gt;? That's a lot of new functionality packed into a 4-month release cycle. The Kubernetes release team coordinates the logistics of the release, from remediating test flakes to publishing updated docs. It's a ton of work, but they always deliver.&lt;/p&gt;
&lt;p&gt;The release team comprises around 30 people across six subteams - Bug Triage, CI Signal, Enhancements, Release Notes, Communications, and Docs.  Each of these subteams manages a component of the release. This post will focus on the role of the enhancements subteam and how you can get involved.&lt;/p&gt;</description></item><item><title>Kubernetes Removals and Major Changes In 1.25</title><link>https://andygol-k8s.netlify.app/blog/2022/08/04/upcoming-changes-in-kubernetes-1-25/</link><pubDate>Thu, 04 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/04/upcoming-changes-in-kubernetes-1-25/</guid><description>&lt;p&gt;As Kubernetes grows and matures, features may be deprecated, removed, or replaced with improvements for the health of the project. Kubernetes v1.25 includes several major changes and one major removal.&lt;/p&gt;
&lt;h2 id="the-kubernetes-api-removal-and-deprecation-process"&gt;The Kubernetes API Removal and Deprecation process&lt;/h2&gt;
&lt;p&gt;The Kubernetes project has a well-documented &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/"&gt;deprecation policy&lt;/a&gt; for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Docs</title><link>https://andygol-k8s.netlify.app/blog/2022/08/02/sig-docs-spotlight-2022/</link><pubDate>Tue, 02 Aug 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/08/02/sig-docs-spotlight-2022/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The official documentation is the go-to source for any open source project. For Kubernetes,
it's an ever-evolving Special Interest Group (SIG) with people constantly putting in their efforts
to make details about the project easier to consume for new contributors and users. SIG Docs publishes
the official documentation on &lt;a href="https://kubernetes.io"&gt;kubernetes.io&lt;/a&gt; which includes,
but is not limited to, documentation of the core APIs, core architectural details, and CLI tools
shipped with the Kubernetes release.&lt;/p&gt;</description></item><item><title>Kubernetes Gateway API Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2022/07/13/gateway-api-graduates-to-beta/</link><pubDate>Wed, 13 Jul 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/07/13/gateway-api-graduates-to-beta/</guid><description>&lt;p&gt;We are excited to announce the v0.5.0 release of Gateway API. For the first
time, several of our most important Gateway API resources are graduating to
beta. Additionally, we are starting a new initiative to explore how Gateway API
can be used for mesh and introducing new experimental concepts such as URL
rewrites. We'll cover all of this and more below.&lt;/p&gt;
&lt;h2 id="what-is-gateway-api"&gt;What is Gateway API?&lt;/h2&gt;
&lt;p&gt;Gateway API is a collection of resources centered around &lt;a href="https://gateway-api.sigs.k8s.io/api-types/gateway/"&gt;Gateway&lt;/a&gt; resources
(which represent the underlying network gateways / proxy servers) to enable
robust Kubernetes service networking through expressive, extensible and
role-oriented interfaces that are implemented by many vendors and have broad
industry support.&lt;/p&gt;</description></item><item><title>Annual Report Summary 2021</title><link>https://andygol-k8s.netlify.app/blog/2022/06/01/annual-report-summary-2021/</link><pubDate>Wed, 01 Jun 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/06/01/annual-report-summary-2021/</guid><description>&lt;p&gt;Last year, we published our first &lt;a href="https://andygol-k8s.netlify.app/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/"&gt;Annual Report Summary&lt;/a&gt; for 2020 and it's already time for our second edition!&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.cncf.io/reports/kubernetes-annual-report-2021/"&gt;2021 Annual Report Summary&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This summary reflects the work that has been done in 2021 and the initiatives on deck for the rest of 2022. Please forward to organizations and individuals participating in upstream activities, planning cloud native strategies, and/or those looking to help out. To find a specific community group's complete report, go to the &lt;a href="https://github.com/kubernetes/community"&gt;kubernetes/community repo&lt;/a&gt; under the groups folder. Example: &lt;a href="https://github.com/kubernetes/community/blob/master/sig-api-machinery/annual-report-2021.md"&gt;sig-api-machinery/annual-report-2021.md&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Maximum Unavailable Replicas for StatefulSet</title><link>https://andygol-k8s.netlify.app/blog/2022/05/27/maxunavailable-for-statefulset/</link><pubDate>Fri, 27 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/27/maxunavailable-for-statefulset/</guid><description>&lt;p&gt;Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt;, since their introduction in
1.5 and becoming stable in 1.9, have been widely used to run stateful applications. They provide stable pod identity, persistent
per-pod storage, and ordered, graceful deployment, scaling, and rolling updates. You can think of StatefulSet as the atomic building
block for running complex stateful applications. As the use of Kubernetes has grown, so has the number of scenarios requiring
StatefulSets. Many of these scenarios require faster rolling updates than the currently supported one-pod-at-a-time updates, in the
case where you're using the &lt;code&gt;OrderedReady&lt;/code&gt; Pod management policy for a StatefulSet.&lt;/p&gt;</description></item><item><title>Contextual Logging in Kubernetes 1.24</title><link>https://andygol-k8s.netlify.app/blog/2022/05/25/contextual-logging/</link><pubDate>Wed, 25 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/25/contextual-logging/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md"&gt;Structured Logging Working
Group&lt;/a&gt;
has added new capabilities to the logging infrastructure in Kubernetes
1.24. This blog post explains how developers can take advantage of those to
make log output more useful and how they can get involved with improving Kubernetes.&lt;/p&gt;
&lt;h2 id="structured-logging"&gt;Structured logging&lt;/h2&gt;
&lt;p&gt;The goal of &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/1602-structured-logging/README.md"&gt;structured
logging&lt;/a&gt;
is to replace C-style formatting and the resulting opaque log strings with log
entries that have a well-defined syntax for storing message and parameters
separately, for example as a JSON struct.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Avoid Collisions Assigning IP Addresses to Services</title><link>https://andygol-k8s.netlify.app/blog/2022/05/23/service-ip-dynamic-and-static-allocation/</link><pubDate>Mon, 23 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/23/service-ip-dynamic-and-static-allocation/</guid><description>&lt;p&gt;In Kubernetes, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;Services&lt;/a&gt; are an abstract way to expose
an application running on a set of Pods. Services
can have a cluster-scoped virtual IP address (using a Service of &lt;code&gt;type: ClusterIP&lt;/code&gt;).
Clients can connect using that virtual IP address, and Kubernetes then load-balances traffic to that
Service across the different backing Pods.&lt;/p&gt;
&lt;h2 id="how-service-clusterips-are-allocated"&gt;How Service ClusterIPs are allocated?&lt;/h2&gt;
&lt;p&gt;A Service &lt;code&gt;ClusterIP&lt;/code&gt; can be assigned:&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;em&gt;dynamically&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;the cluster's control plane automatically picks a free IP address from within the configured IP range for &lt;code&gt;type: ClusterIP&lt;/code&gt; Services.&lt;/dd&gt;
&lt;dt&gt;&lt;em&gt;statically&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;you specify an IP address of your choice, from within the configured IP range for Services.&lt;/dd&gt;
&lt;/dl&gt;
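&lt;p&gt;For the static case, a minimal sketch (the Service name, selector, and IP address are illustrative) pins the address in the Service spec:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative name
spec:
  type: ClusterIP
  clusterIP: 10.96.0.100    # must be free and inside the cluster's configured Service IP range
  selector:
    app: my-app
  ports:
  - port: 80
&lt;/code&gt;&lt;/pre&gt;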
&lt;p&gt;Across your whole cluster, every Service &lt;code&gt;ClusterIP&lt;/code&gt; must be unique.
Trying to create a Service with a specific &lt;code&gt;ClusterIP&lt;/code&gt; that has already
been allocated will return an error.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Introducing Non-Graceful Node Shutdown Alpha</title><link>https://andygol-k8s.netlify.app/blog/2022/05/20/kubernetes-1-24-non-graceful-node-shutdown-alpha/</link><pubDate>Fri, 20 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/20/kubernetes-1-24-non-graceful-node-shutdown-alpha/</guid><description>&lt;p&gt;Kubernetes v1.24 introduces alpha support for &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown"&gt;Non-Graceful Node Shutdown&lt;/a&gt;. This feature allows stateful workloads to failover to a different node after the original node is shutdown or in a non-recoverable state such as hardware failure or broken OS.&lt;/p&gt;
&lt;h2 id="how-is-this-different-from-graceful-node-shutdown"&gt;How is this different from Graceful Node Shutdown&lt;/h2&gt;
&lt;p&gt;You might have heard about the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/#graceful-node-shutdown"&gt;Graceful Node Shutdown&lt;/a&gt; capability of Kubernetes,
and are wondering how the Non-Graceful Node Shutdown feature is different from that. Graceful Node Shutdown
allows Kubernetes to detect when a node is shutting down cleanly, and handles that situation appropriately.
A Node Shutdown can be &amp;quot;graceful&amp;quot; only if the node shutdown action can be detected by the kubelet ahead
of the actual shutdown. However, there are cases where a node shutdown action may not be detected by
the kubelet. This could happen either because the shutdown command does not trigger the systemd inhibitor
locks mechanism that kubelet relies upon, or because of a configuration error
(the &lt;code&gt;ShutdownGracePeriod&lt;/code&gt; and &lt;code&gt;ShutdownGracePeriodCriticalPods&lt;/code&gt; are not configured properly).&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Prevent unauthorised volume mode conversion</title><link>https://andygol-k8s.netlify.app/blog/2022/05/18/prevent-unauthorised-volume-mode-conversion-alpha/</link><pubDate>Wed, 18 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/18/prevent-unauthorised-volume-mode-conversion-alpha/</guid><description>&lt;p&gt;Kubernetes v1.24 introduces a new alpha-level feature that prevents unauthorised users
from modifying the volume mode of a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;&lt;code&gt;PersistentVolumeClaim&lt;/code&gt;&lt;/a&gt; created from an
existing &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volume-snapshots/"&gt;&lt;code&gt;VolumeSnapshot&lt;/code&gt;&lt;/a&gt; in the Kubernetes cluster.&lt;/p&gt;
&lt;h3 id="the-problem"&gt;The problem&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#volume-mode"&gt;Volume Mode&lt;/a&gt; determines whether a volume
is formatted into a filesystem or presented as a raw block device.&lt;/p&gt;
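&lt;p&gt;As a minimal sketch (the claim name and storage class below are hypothetical), a PVC selects the raw-block presentation by setting &lt;code&gt;volumeMode: Block&lt;/code&gt;; omitting the field, or setting &lt;code&gt;Filesystem&lt;/code&gt;, yields a formatted volume:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc                # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block              # "Filesystem" (the default) would format the volume instead
  storageClassName: csi-example  # hypothetical storage class
  resources:
    requests:
      storage: 10Gi
```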
&lt;p&gt;Users can leverage the &lt;code&gt;VolumeSnapshot&lt;/code&gt; feature, which has been stable since Kubernetes v1.20,
to create a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; (shortened as PVC) from an existing &lt;code&gt;VolumeSnapshot&lt;/code&gt; in
the Kubernetes cluster. The PVC spec includes a &lt;code&gt;dataSource&lt;/code&gt; field, which can point to an
existing &lt;code&gt;VolumeSnapshot&lt;/code&gt; instance.
Visit &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot"&gt;Create a PersistentVolumeClaim from a Volume Snapshot&lt;/a&gt; for more details.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Volume Populators Graduate to Beta</title><link>https://andygol-k8s.netlify.app/blog/2022/05/16/volume-populators-beta/</link><pubDate>Mon, 16 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/16/volume-populators-beta/</guid><description>&lt;p&gt;The volume populators feature is now two releases old and entering beta! The &lt;code&gt;AnyVolumeDataSource&lt;/code&gt; feature
gate defaults to enabled in Kubernetes v1.24, which means that users can specify any custom resource
as the data source of a PVC.&lt;/p&gt;
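&lt;p&gt;As a sketch of what this enables (the API group, kind, and names below are hypothetical, and assume a matching populator controller is installed), a PVC can now reference a custom resource through &lt;code&gt;dataSourceRef&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc                  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  dataSourceRef:                       # honored when AnyVolumeDataSource is enabled
    apiGroup: populator.example.com    # hypothetical CRD group
    kind: SampleSource                 # hypothetical custom resource kind
    name: sample-source-1              # hypothetical CR instance to populate from
```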
&lt;p&gt;An &lt;a href="https://andygol-k8s.netlify.app/blog/2021/08/30/volume-populators-redesigned/"&gt;earlier blog article&lt;/a&gt; detailed how the
volume populators feature works. In short, a cluster administrator can install a CRD and
associated populator controller in the cluster, and any user who can create instances of
the CR can create pre-populated volumes by taking advantage of the populator.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: gRPC container probes in beta</title><link>https://andygol-k8s.netlify.app/blog/2022/05/13/grpc-probes-now-in-beta/</link><pubDate>Fri, 13 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/13/grpc-probes-now-in-beta/</guid><description>&lt;p&gt;&lt;em&gt;Update: Since this article was posted, the feature graduated to GA in v1.27 and doesn't require any feature gates to be enabled.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;With Kubernetes 1.24 the gRPC probes functionality entered beta and is available by default.
Now you can configure startup, liveness, and readiness probes for your gRPC app
without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can natively connect to your workload via gRPC and query its status.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Storage Capacity Tracking Now Generally Available</title><link>https://andygol-k8s.netlify.app/blog/2022/05/06/storage-capacity-ga/</link><pubDate>Fri, 06 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/06/storage-capacity-ga/</guid><description>&lt;p&gt;The v1.24 release of Kubernetes brings &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-capacity/"&gt;storage capacity&lt;/a&gt;
tracking as a generally available feature.&lt;/p&gt;
&lt;h2 id="problems-we-have-solved"&gt;Problems we have solved&lt;/h2&gt;
&lt;p&gt;As explained in more detail in the &lt;a href="https://andygol-k8s.netlify.app/blog/2021/04/14/local-storage-features-go-beta/"&gt;previous blog post about this
feature&lt;/a&gt;, storage capacity
tracking allows a CSI driver to publish information about remaining
capacity. The kube-scheduler then uses that information to pick suitable nodes
for a Pod when that Pod has volumes that still need to be provisioned.&lt;/p&gt;
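&lt;p&gt;The capacity information is published as &lt;code&gt;CSIStorageCapacity&lt;/code&gt; objects. These are normally created and kept up to date by the CSI driver's sidecar rather than by hand; the sketch below (with hypothetical names and labels) only illustrates their shape:&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity               # hypothetical; real objects get generated names
storageClassName: csi-example          # hypothetical storage class
capacity: 100Gi                        # remaining capacity reported by the driver
nodeTopology:                          # which nodes this capacity applies to
  matchLabels:
    topology.example.com/zone: zone-a  # hypothetical topology label
```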
&lt;p&gt;Without this information, a Pod may get stuck without ever being scheduled onto
a suitable node because kube-scheduler has to choose blindly and always ends up
picking a node for which the volume cannot be provisioned because the
underlying storage system managed by the CSI driver does not have sufficient
capacity left.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Volume Expansion Now A Stable Feature</title><link>https://andygol-k8s.netlify.app/blog/2022/05/05/volume-expansion-ga/</link><pubDate>Thu, 05 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/05/volume-expansion-ga/</guid><description>&lt;p&gt;Volume expansion was introduced as an alpha feature in Kubernetes 1.8, went beta in 1.11, and with Kubernetes 1.24 we are excited to announce general availability (GA)
of volume expansion.&lt;/p&gt;
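&lt;p&gt;As a sketch (names hypothetical), expanding a volume is just a matter of raising the requested size on an existing claim, provided its StorageClass sets &lt;code&gt;allowVolumeExpansion: true&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                 # hypothetical existing claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-example  # hypothetical class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi              # e.g. raised from 10Gi; editing this triggers the expansion
```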
&lt;p&gt;This feature allows Kubernetes users to simply edit their &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; objects and specify a new size in the PVC spec, and Kubernetes will automatically expand the volume
using the storage backend and also expand the underlying file system in use by the Pod, without requiring any downtime at all where possible.&lt;/p&gt;</description></item><item><title>Dockershim: The Historical Context</title><link>https://andygol-k8s.netlify.app/blog/2022/05/03/dockershim-historical-context/</link><pubDate>Tue, 03 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/03/dockershim-historical-context/</guid><description>&lt;p&gt;Dockershim has been removed as of Kubernetes v1.24, and this is a positive move for the project. However, context is important for fully understanding something, be it socially or in software development, and this deserves a more in-depth review. Alongside the dockershim removal in Kubernetes v1.24, we’ve seen some confusion (sometimes at a panic level) and dissatisfaction with this decision in the community, largely due to a lack of context around this removal. The decision to deprecate and eventually remove dockershim from Kubernetes was not made quickly or lightly. Still, it’s been in the works for so long that many of today’s users are newer than that decision, and certainly newer than the choices that led to the dockershim being necessary in the first place.&lt;/p&gt;</description></item><item><title>Kubernetes 1.24: Stargazer</title><link>https://andygol-k8s.netlify.app/blog/2022/05/03/kubernetes-1-24-release-announcement/</link><pubDate>Tue, 03 May 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/05/03/kubernetes-1-24-release-announcement/</guid><description>&lt;p&gt;We are excited to announce the release of Kubernetes 1.24, the first release of 2022!&lt;/p&gt;
&lt;p&gt;This release consists of 46 enhancements: fourteen enhancements have graduated to stable,
fifteen enhancements are moving to beta, and thirteen enhancements are entering alpha.
Also, two features have been deprecated, and two features have been removed.&lt;/p&gt;
&lt;h2 id="major-themes"&gt;Major Themes&lt;/h2&gt;
&lt;h3 id="dockershim-removed-from-kubelet"&gt;Dockershim Removed from kubelet&lt;/h3&gt;
&lt;p&gt;After its deprecation in v1.20, the dockershim component has been removed from the kubelet in Kubernetes v1.24.
From v1.24 onwards, you will need to either use one of the other &lt;a href="https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/"&gt;supported runtimes&lt;/a&gt; (such as containerd or CRI-O)
or use cri-dockerd if you are relying on Docker Engine as your container runtime.
For more information about ensuring your cluster is ready for this removal, please
see &lt;a href="https://andygol-k8s.netlify.app/blog/2022/03/31/ready-for-dockershim-removal/"&gt;this guide&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Increasing the security bar in Ingress-NGINX v1.2.0</title><link>https://andygol-k8s.netlify.app/blog/2022/04/28/ingress-nginx-1-2-0/</link><pubDate>Thu, 28 Apr 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/04/28/ingress-nginx-1-2-0/</guid><description>&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt; may be one of the most targeted components
of Kubernetes. An Ingress typically defines an HTTP reverse proxy, exposed to the Internet, containing
multiple websites, and with some privileged access to the Kubernetes API (such as to read Secrets relating to
TLS certificates and their private keys).&lt;/p&gt;
&lt;p&gt;While it is a risky component in your architecture, it is still the most popular way to properly expose your services.&lt;/p&gt;
&lt;p&gt;Ingress-NGINX has been the subject of security assessments that revealed a big problem: we don't
properly sanitize all of the configuration before turning it into an &lt;code&gt;nginx.conf&lt;/code&gt; file, which may lead to information
disclosure risks.&lt;/p&gt;</description></item><item><title>Kubernetes Removals and Deprecations In 1.24</title><link>https://andygol-k8s.netlify.app/blog/2022/04/07/upcoming-changes-in-kubernetes-1-24/</link><pubDate>Thu, 07 Apr 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/04/07/upcoming-changes-in-kubernetes-1-24/</guid><description>&lt;p&gt;As Kubernetes evolves, features and APIs are regularly revisited and removed. New features may offer
an alternative or improved approach to solving existing problems, motivating the team to remove the
old approach.&lt;/p&gt;
&lt;p&gt;We want to make sure you are aware of the changes coming in the Kubernetes 1.24 release. The release will
&lt;strong&gt;deprecate&lt;/strong&gt; several (beta) APIs in favor of stable versions of the same APIs. The major change coming
in the Kubernetes 1.24 release is the
&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim"&gt;removal of Dockershim&lt;/a&gt;.
This is discussed below and will be explored in more depth at release time. For an early look at the
changes coming in Kubernetes 1.24, take a look at the in-progress
&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md"&gt;CHANGELOG&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Is Your Cluster Ready for v1.24?</title><link>https://andygol-k8s.netlify.app/blog/2022/03/31/ready-for-dockershim-removal/</link><pubDate>Thu, 31 Mar 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/03/31/ready-for-dockershim-removal/</guid><description>&lt;p&gt;Way back in December of 2020, Kubernetes announced the &lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dont-panic-kubernetes-and-docker/"&gt;deprecation of Dockershim&lt;/a&gt;. In Kubernetes, dockershim is a software shim that allows you to use the entire Docker engine as your container runtime within Kubernetes. In the upcoming v1.24 release, we are removing Dockershim - the delay between deprecation and removal in line with the &lt;a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/"&gt;project’s policy&lt;/a&gt; of supporting features for at least one year after deprecation. If you are a cluster operator, this guide includes the practical realities of what you need to know going into this release. 
It also covers what you need to do to ensure your cluster doesn’t fall over.&lt;/p&gt;</description></item><item><title>Meet Our Contributors - APAC (Aus-NZ region)</title><link>https://andygol-k8s.netlify.app/blog/2022/03/16/meet-our-contributors-au-nz-ep-02/</link><pubDate>Wed, 16 Mar 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/03/16/meet-our-contributors-au-nz-ep-02/</guid><description>&lt;p&gt;&lt;strong&gt;Authors &amp;amp; Interviewers:&lt;/strong&gt; &lt;a href="https://github.com/anubha-v-ardhan"&gt;Anubhav Vardhan&lt;/a&gt;, &lt;a href="https://github.com/Atharva-Shinde"&gt;Atharva Shinde&lt;/a&gt;, &lt;a href="https://github.com/AvineshTripathi"&gt;Avinesh Tripathi&lt;/a&gt;, &lt;a href="https://github.com/bradmccoydev"&gt;Brad McCoy&lt;/a&gt;, &lt;a href="https://github.com/Debanitrkl"&gt;Debabrata Panigrahi&lt;/a&gt;, &lt;a href="https://github.com/jayesh-srivastava"&gt;Jayesh Srivastava&lt;/a&gt;, &lt;a href="https://github.com/verma-kunal"&gt;Kunal Verma&lt;/a&gt;, &lt;a href="https://github.com/PranshuSrivastava"&gt;Pranshu Srivastava&lt;/a&gt;, &lt;a href="https://github.com/Priyankasaggu11929/"&gt;Priyanka Saggu&lt;/a&gt;, &lt;a href="https://github.com/PurneswarPrasad"&gt;Purneswar Prasad&lt;/a&gt;, &lt;a href="https://github.com/vedant-kakde"&gt;Vedant Kakde&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Good day, everyone 👋&lt;/p&gt;
&lt;p&gt;Welcome back to the second episode of the &amp;quot;Meet Our Contributors&amp;quot; blog post series for APAC.&lt;/p&gt;
&lt;p&gt;This post will feature four outstanding contributors from the Australia and New Zealand regions, who have played diverse leadership and community roles in the upstream Kubernetes project.&lt;/p&gt;
&lt;p&gt;So, without further ado, let's get straight to the blog.&lt;/p&gt;</description></item><item><title>Updated: Dockershim Removal FAQ</title><link>https://andygol-k8s.netlify.app/blog/2022/02/17/dockershim-faq/</link><pubDate>Thu, 17 Feb 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/02/17/dockershim-faq/</guid><description>&lt;p&gt;&lt;strong&gt;This supersedes the original
&lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dockershim-faq/"&gt;Dockershim Deprecation FAQ&lt;/a&gt; article,
published in late 2020. The article includes updates from the v1.24
release of Kubernetes.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;This document goes over some frequently asked questions regarding the
removal of &lt;em&gt;dockershim&lt;/em&gt; from Kubernetes. The removal was originally
&lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/08/kubernetes-1-20-release-announcement/"&gt;announced&lt;/a&gt;
as a part of the Kubernetes v1.20 release. The Kubernetes
&lt;a href="https://andygol-k8s.netlify.app/releases/#release-v1-24"&gt;v1.24 release&lt;/a&gt; actually removed the dockershim
from Kubernetes.&lt;/p&gt;
&lt;p&gt;For more on what that means, check out the blog post
&lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dont-panic-kubernetes-and-docker/"&gt;Don't Panic: Kubernetes and Docker&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>SIG Node CI Subproject Celebrates Two Years of Test Improvements</title><link>https://andygol-k8s.netlify.app/blog/2022/02/16/sig-node-ci-subproject-celebrates/</link><pubDate>Wed, 16 Feb 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/02/16/sig-node-ci-subproject-celebrates/</guid><description>&lt;p&gt;Ensuring the reliability of SIG Node upstream code is a continuous effort
that takes a lot of behind-the-scenes effort from many contributors.
There are frequent releases of Kubernetes, base operating systems,
container runtimes, and test infrastructure that result in a complex matrix that
requires attention and steady investment to &amp;quot;keep the lights on.&amp;quot;
In May 2020, the Kubernetes node special interest group (&amp;quot;SIG Node&amp;quot;) organized a new
subproject for continuous integration (CI) for node-related code and tests. Since its
inauguration, the SIG Node CI subproject has run a weekly meeting, and even the full hour
is often not enough to complete triage of all bugs, test-related PRs and issues, and discuss all
related ongoing work within the subgroup.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Multicluster</title><link>https://andygol-k8s.netlify.app/blog/2022/02/07/sig-multicluster-spotlight-2022/</link><pubDate>Mon, 07 Feb 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/02/07/sig-multicluster-spotlight-2022/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/sig-multicluster"&gt;SIG Multicluster&lt;/a&gt; is the SIG focused on how Kubernetes concepts are expanded and used beyond the cluster boundary. Historically, Kubernetes resources only interacted within that boundary - KRU or Kubernetes Resource Universe (not an actual Kubernetes concept). Kubernetes clusters, even now, don't really know anything about themselves or, about other clusters. Absence of cluster identifiers is a case in point. With the growing adoption of multicloud and multicluster deployments, the work SIG Multicluster doing is gaining a lot of attention. In this blog, &lt;a href="https://twitter.com/jeremyot"&gt;Jeremy Olmsted-Thompson, Google&lt;/a&gt; and &lt;a href="https://twitter.com/ChrisShort"&gt;Chris Short, AWS&lt;/a&gt; discuss the interesting problems SIG Multicluster is solving and how you can get involved. Their initials &lt;strong&gt;JOT&lt;/strong&gt; and &lt;strong&gt;CS&lt;/strong&gt; will be used for brevity.&lt;/p&gt;</description></item><item><title>Securing Admission Controllers</title><link>https://andygol-k8s.netlify.app/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/</link><pubDate>Wed, 19 Jan 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/"&gt;Admission control&lt;/a&gt; is a key part of Kubernetes security, alongside authentication and authorization. Webhook admission controllers are extensively used to help improve the security of Kubernetes clusters in a variety of ways including restricting the privileges of workloads and ensuring that images deployed to the cluster meet organization’s security requirements.&lt;/p&gt;
&lt;p&gt;However, as with any additional component added to a cluster, security risks can present themselves, for example if the deployment and management of the admission controller are not handled correctly. To help admission controller users and designers manage these risks appropriately, the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-security#security-docs"&gt;security documentation&lt;/a&gt; subgroup of SIG Security has spent some time developing a &lt;a href="https://github.com/kubernetes/sig-security/tree/main/sig-security-docs/papers/admission-control"&gt;threat model for admission controllers&lt;/a&gt;. This threat model looks at likely risks which may arise from the incorrect use of admission controllers, which could allow security policies to be bypassed, or even allow an attacker to get unauthorised access to the cluster.&lt;/p&gt;</description></item><item><title>Meet Our Contributors - APAC (India region)</title><link>https://andygol-k8s.netlify.app/blog/2022/01/10/meet-our-contributors-india-ep-01/</link><pubDate>Mon, 10 Jan 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/01/10/meet-our-contributors-india-ep-01/</guid><description>&lt;p&gt;&lt;strong&gt;Authors &amp;amp; Interviewers:&lt;/strong&gt; &lt;a href="https://github.com/anubha-v-ardhan"&gt;Anubhav Vardhan&lt;/a&gt;, &lt;a href="https://github.com/Atharva-Shinde"&gt;Atharva Shinde&lt;/a&gt;, &lt;a href="https://github.com/AvineshTripathi"&gt;Avinesh Tripathi&lt;/a&gt;, &lt;a href="https://github.com/Debanitrkl"&gt;Debabrata Panigrahi&lt;/a&gt;, &lt;a href="https://github.com/verma-kunal"&gt;Kunal Verma&lt;/a&gt;, &lt;a href="https://github.com/PranshuSrivastava"&gt;Pranshu Srivastava&lt;/a&gt;, &lt;a href="https://github.com/CIPHERTron"&gt;Pritish Samal&lt;/a&gt;, &lt;a href="https://github.com/PurneswarPrasad"&gt;Purneswar Prasad&lt;/a&gt;, &lt;a href="https://github.com/vedant-kakde"&gt;Vedant Kakde&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor:&lt;/strong&gt; &lt;a href="https://psaggu.com"&gt;Priyanka Saggu&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Good day, everyone 👋&lt;/p&gt;
&lt;p&gt;Welcome to the first episode of the APAC edition of the &amp;quot;Meet Our Contributors&amp;quot; blog post series.&lt;/p&gt;
&lt;p&gt;In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives.&lt;/p&gt;</description></item><item><title>Kubernetes is Moving on From Dockershim: Commitments and Next Steps</title><link>https://andygol-k8s.netlify.app/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/</link><pubDate>Fri, 07 Jan 2022 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/</guid><description>&lt;p&gt;Kubernetes is removing dockershim in the upcoming v1.24 release. We're excited
to reaffirm our community values by supporting open source container runtimes,
enabling a smaller kubelet, and increasing engineering velocity for teams using
Kubernetes. If you &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/"&gt;use Docker Engine as a container runtime&lt;/a&gt;
for your Kubernetes cluster, get ready to migrate in 1.24! To check if you're
affected, refer to &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/"&gt;Check whether dockershim removal affects you&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="why-we-re-moving-away-from-dockershim"&gt;Why we’re moving away from dockershim&lt;/h2&gt;
&lt;p&gt;Docker was the first container runtime used by Kubernetes. This is one of the
reasons why Docker is so familiar to many Kubernetes users and enthusiasts.
Docker support was hardcoded into Kubernetes – a component the project refers to
as dockershim.
As containerization became an industry standard, the Kubernetes project added support
for additional runtimes. This culminated in the implementation of the
container runtime interface (CRI), letting system components (like the kubelet)
talk to container runtimes in a standardized way. As a result, dockershim became
an anomaly in the Kubernetes project.
Dependencies on Docker and dockershim have crept into various tools
and projects in the CNCF ecosystem, resulting in fragile code.&lt;/p&gt;</description></item><item><title>Kubernetes-in-Kubernetes and the WEDOS PXE bootable server farm</title><link>https://andygol-k8s.netlify.app/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/</link><pubDate>Wed, 22 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/</guid><description>&lt;p&gt;When you own two data centers, thousands of physical servers, virtual machines and hosting for hundreds of thousands of sites, Kubernetes can actually simplify the management of all these things. As practice has shown, by using Kubernetes, you can declaratively describe and manage not only applications, but also the infrastructure itself. I work for the largest Czech hosting provider &lt;strong&gt;WEDOS Internet a.s&lt;/strong&gt; and today I'll show you two of my projects — &lt;a href="https://github.com/kvaps/kubernetes-in-kubernetes"&gt;Kubernetes-in-Kubernetes&lt;/a&gt; and &lt;a href="https://github.com/kvaps/kubefarm"&gt;Kubefarm&lt;/a&gt;.&lt;/p&gt;
 &lt;img src="https://andygol-k8s.netlify.app/blog/2021/12/21/admission-controllers-for-container-drift/intro-illustration.png"
 alt="Introductory illustration"/&gt; &lt;figcaption&gt;
 &lt;p&gt;Illustration by Munire Aireti&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;At Box, we use Kubernetes (K8s) to manage hundreds of micro-services that enable Box to stream data at a petabyte scale. When it comes to the deployment process, we run &lt;a href="https://github.com/box/kube-applier"&gt;kube-applier&lt;/a&gt; as part of the GitOps workflows with declarative configuration and automated deployment. Developers declare their K8s apps manifest into a Git repository that requires code reviews and automatic checks to pass, before any changes can get merged and applied inside our K8s clusters. With &lt;code&gt;kubectl exec&lt;/code&gt; and other similar commands, however, developers are able to directly interact with running containers and alter them from their deployed state. This interaction could then subvert the change control and code review processes that are enforced in our CI/CD pipelines. Further, it allows such impacted containers to continue receiving traffic long-term in production.&lt;/p&gt;</description></item><item><title>What's new in Security Profiles Operator v0.4.0</title><link>https://andygol-k8s.netlify.app/blog/2021/12/17/security-profiles-operator/</link><pubDate>Fri, 17 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/17/security-profiles-operator/</guid><description>&lt;p&gt;The &lt;a href="https://sigs.k8s.io/security-profiles-operator"&gt;Security Profiles Operator (SPO)&lt;/a&gt;
is an out-of-tree Kubernetes enhancement to make the management of
&lt;a href="https://en.wikipedia.org/wiki/Seccomp"&gt;seccomp&lt;/a&gt;,
&lt;a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux"&gt;SELinux&lt;/a&gt; and
&lt;a href="https://en.wikipedia.org/wiki/AppArmor"&gt;AppArmor&lt;/a&gt; profiles easier and more
convenient. We're happy to announce that we recently &lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.4.0"&gt;released
v0.4.0&lt;/a&gt;
of the operator, which contains a ton of new features, fixes and usability
improvements.&lt;/p&gt;
&lt;h2 id="what-s-new"&gt;What's new&lt;/h2&gt;
&lt;p&gt;It has been a while since the last
&lt;a href="https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.3.0"&gt;v0.3.0&lt;/a&gt;
release of the operator. We added new features, fine-tuned existing ones and
reworked our documentation in 290 commits over the past half year.&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: StatefulSet PVC Auto-Deletion (alpha)</title><link>https://andygol-k8s.netlify.app/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/</link><pubDate>Thu, 16 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/</guid><description>&lt;p&gt;Kubernetes v1.23 introduced a new, alpha-level policy for
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt; that controls the lifetime of
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolumeClaims&lt;/a&gt; (PVCs) generated from the
StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet
is deleted or pods in the StatefulSet are scaled down.&lt;/p&gt;
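&lt;p&gt;As a sketch (the workload names and image are hypothetical, and the feature requires the alpha &lt;code&gt;StatefulSetAutoDeletePVC&lt;/code&gt; feature gate), the new policy is set on the StatefulSet spec:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                      # hypothetical name
spec:
  serviceName: web
  replicas: 3
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete          # delete PVCs when the StatefulSet itself is deleted
    whenScaled: Delete           # delete PVCs of replicas removed by scale-down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical image
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```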
&lt;h2 id="what-problem-does-this-solve"&gt;What problem does this solve?&lt;/h2&gt;
&lt;p&gt;A StatefulSet spec can include Pod and PVC templates. When a replica is first created, the
Kubernetes control plane creates a PVC for that replica if one does not already exist. The behavior
before Kubernetes v1.23 was that the control plane never cleaned up the PVCs created for
StatefulSets - this was left up to the cluster administrator, or to some add-on automation that
you’d have to find, check suitability, and deploy. The common pattern for managing PVCs, either
manually or through tools such as Helm, is that the PVCs are tracked by the tool that manages them,
with an explicit lifecycle. Workflows that use StatefulSets must determine on their own what PVCs are
created by a StatefulSet and what their lifecycle should be.&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: Prevent PersistentVolume leaks when deleting out of order</title><link>https://andygol-k8s.netlify.app/blog/2021/12/15/kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order/</link><pubDate>Wed, 15 Dec 2021 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/15/kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/"&gt;PersistentVolume&lt;/a&gt; objects (or PVs for short) are
associated with a &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy"&gt;Reclaim Policy&lt;/a&gt;.
The Reclaim Policy is used to determine the actions that need to be taken by the storage
backend on deletion of the PV.
Where the reclaim policy is &lt;code&gt;Delete&lt;/code&gt;, the expectation is that the storage backend
releases the storage resource that was allocated for the PV. In essence, the reclaim
policy needs to be honored on PV deletion.&lt;/p&gt;
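&lt;p&gt;For reference, the reclaim policy is a single field on the PV. A minimal sketch (driver and volume handle are hypothetical) of a PV whose backing storage should be released on deletion:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                        # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   # backend storage is released when the PV is deleted
  csi:
    driver: csi.example.com               # hypothetical CSI driver
    volumeHandle: vol-0123                # hypothetical backend volume ID
```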
&lt;p&gt;With the recent Kubernetes v1.23 release, an alpha feature lets you configure your
cluster to behave that way and honor the configured reclaim policy.&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: Kubernetes In-Tree to CSI Volume Migration Status Update</title><link>https://andygol-k8s.netlify.app/blog/2021/12/10/storage-in-tree-to-csi-migration-status-update/</link><pubDate>Fri, 10 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/10/storage-in-tree-to-csi-migration-status-update/</guid><description>&lt;p&gt;The Kubernetes in-tree storage plugin to &lt;a href="https://andygol-k8s.netlify.app/blog/2019/01/15/container-storage-interface-ga/"&gt;Container Storage Interface (CSI)&lt;/a&gt; migration infrastructure has already been &lt;a href="https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/"&gt;beta&lt;/a&gt; since v1.17. CSI migration was introduced as alpha in Kubernetes v1.14.&lt;/p&gt;
&lt;p&gt;Since then, SIG Storage and other Kubernetes special interest groups have been working to ensure feature stability and compatibility in preparation for GA.
This article is intended to give a status update on the feature as well as cover the changes between Kubernetes 1.17 and 1.23. In addition, I will also cover the future roadmap for the CSI migration feature GA for each storage plugin.&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: Pod Security Graduates to Beta</title><link>https://andygol-k8s.netlify.app/blog/2021/12/09/pod-security-admission-beta/</link><pubDate>Thu, 09 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/09/pod-security-admission-beta/</guid><description>&lt;p&gt;With the release of Kubernetes v1.23, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-admission/"&gt;Pod Security admission&lt;/a&gt; has now entered beta. Pod Security is a &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/admission-controllers/"&gt;built-in&lt;/a&gt; admission controller that evaluates pod specifications against a predefined set of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-standards/"&gt;Pod Security Standards&lt;/a&gt; and determines whether to &lt;code&gt;admit&lt;/code&gt; the pod or &lt;code&gt;deny&lt;/code&gt; it from running.&lt;/p&gt;
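&lt;p&gt;Pod Security is configured per namespace through labels. A short sketch (namespace name hypothetical) that enforces the &lt;code&gt;baseline&lt;/code&gt; standard while warning about pods that would fail &lt;code&gt;restricted&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                     # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline   # reject pods that violate baseline
    pod-security.kubernetes.io/warn: restricted    # warn when pods don't meet restricted
```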
&lt;p&gt;Pod Security is the successor to &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/security/pod-security-policy/"&gt;PodSecurityPolicy&lt;/a&gt; which was deprecated in the v1.21 release, and will be removed in Kubernetes v1.25. In this article, we cover the key concepts of Pod Security along with how to use it. We hope that cluster administrators and developers alike will use this new mechanism to enforce secure defaults for their workloads.&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: Dual-stack IPv4/IPv6 Networking Reaches GA</title><link>https://andygol-k8s.netlify.app/blog/2021/12/08/dual-stack-networking-ga/</link><pubDate>Wed, 08 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/08/dual-stack-networking-ga/</guid><description>&lt;p&gt;&amp;quot;When will Kubernetes have IPv6?&amp;quot; This question has been asked with increasing frequency ever since alpha support for IPv6 was first added in k8s v1.9. While Kubernetes has supported IPv6-only clusters since v1.18, migration from IPv4 to IPv6 was not yet possible at that point. At long last, &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/563-dual-stack/"&gt;dual-stack IPv4/IPv6 networking&lt;/a&gt; has reached general availability (GA) in Kubernetes v1.23.&lt;/p&gt;
&lt;p&gt;What does dual-stack networking mean for you? Let’s take a look…&lt;/p&gt;
&lt;h2 id="service-api-updates"&gt;Service API updates&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;Services&lt;/a&gt; were single-stack before 1.20, so using both IP families meant creating one Service per IP family. The user experience was simplified in 1.20, when Services were re-implemented to allow both IP families, meaning a single Service can handle both IPv4 and IPv6 workloads. Dual-stack load balancing is possible between services running any combination of IPv4 and IPv6.&lt;/p&gt;</description></item><item><title>Kubernetes 1.23: The Next Frontier</title><link>https://andygol-k8s.netlify.app/blog/2021/12/07/kubernetes-1-23-release-announcement/</link><pubDate>Tue, 07 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/07/kubernetes-1-23-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the release of Kubernetes 1.23, the last release of 2021!&lt;/p&gt;
&lt;p&gt;This release consists of 47 enhancements: 11 enhancements have graduated to stable, 17 enhancements are moving to beta, and 19 enhancements are entering alpha. Also, 1 feature has been deprecated.&lt;/p&gt;
&lt;h2 id="major-themes"&gt;Major Themes&lt;/h2&gt;
&lt;h3 id="deprecation-of-flexvolume"&gt;Deprecation of FlexVolume&lt;/h3&gt;
&lt;p&gt;FlexVolume is deprecated. The out-of-tree CSI driver is the recommended way to write volume drivers in Kubernetes. See &lt;a href="https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors"&gt;this doc&lt;/a&gt; for more information. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to the CSI driver.&lt;/p&gt;</description></item><item><title>Contribution, containers and cricket: the Kubernetes 1.22 release interview</title><link>https://andygol-k8s.netlify.app/blog/2021/12/01/contribution-containers-and-cricket-the-kubernetes-1.22-release-interview/</link><pubDate>Wed, 01 Dec 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/12/01/contribution-containers-and-cricket-the-kubernetes-1.22-release-interview/</guid><description>&lt;p&gt;The Kubernetes release train rolls on, and we look ahead to the release of 1.23 next week. &lt;a href="https://www.google.com/search?q=%22release+interview%22+site%3Akubernetes.io%2Fblog"&gt;As is our tradition&lt;/a&gt;, I'm pleased to bring you a look back at the process that brought us the previous version.&lt;/p&gt;
&lt;p&gt;The release team for 1.22 was led by &lt;a href="https://twitter.com/coffeeartgirl"&gt;Savitha Raghunathan&lt;/a&gt;, who was, at the time, a Senior Platform Engineer at MathWorks. &lt;a href="https://kubernetespodcast.com/episode/157-kubernetes-1.22/"&gt;I spoke to Savitha&lt;/a&gt; on the &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt;, the weekly&lt;sup&gt;*&lt;/sup&gt; show covering the Kubernetes and Cloud Native ecosystem.&lt;/p&gt;</description></item><item><title>Quality-of-Service for Memory Resources</title><link>https://andygol-k8s.netlify.app/blog/2021/11/26/qos-memory-resources/</link><pubDate>Fri, 26 Nov 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/11/26/qos-memory-resources/</guid><description>&lt;p&gt;Kubernetes v1.22, released in August 2021, introduced a new alpha feature that improves how Linux nodes implement memory resource requests and limits.&lt;/p&gt;
&lt;p&gt;In prior releases, Kubernetes did not support memory quality guarantees.
For example, if you set container resources as follows:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: nginx
    resources:
      requests:
        memory: &amp;#34;64Mi&amp;#34;
        cpu: &amp;#34;250m&amp;#34;
      limits:
        memory: &amp;#34;64Mi&amp;#34;
        cpu: &amp;#34;500m&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;spec.containers[].resources.requests&lt;/code&gt; (e.g. cpu, memory) is designed for scheduling. When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.&lt;/p&gt;</description></item><item><title>Dockershim removal is coming. Are you ready?</title><link>https://andygol-k8s.netlify.app/blog/2021/11/12/are-you-ready-for-dockershim-removal/</link><pubDate>Fri, 12 Nov 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/11/12/are-you-ready-for-dockershim-removal/</guid><description>&lt;p&gt;&lt;strong&gt;Reviewers:&lt;/strong&gt; Davanum Srinivas, Elana Hashman, Noah Kantrowitz, Rey Lejano.&lt;/p&gt;
&lt;div class="alert alert-info" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;Poll closed&lt;/div&gt;
&lt;p&gt;This poll closed on January 7, 2022.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Last year we &lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation"&gt;announced&lt;/a&gt;
that Kubernetes' dockershim component (which provides a built-in integration for
Docker Engine) is deprecated.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Update: There's a &lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dockershim-faq/"&gt;Dockershim Deprecation FAQ&lt;/a&gt;
with more information, and you can also discuss the deprecation via a dedicated
&lt;a href="https://github.com/kubernetes/kubernetes/issues/106917"&gt;GitHub issue&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Our current plan is to remove dockershim from the Kubernetes codebase soon.
We are looking for feedback from you on whether you are ready for dockershim
removal, and we want to ensure that you are ready when the time comes.&lt;/p&gt;</description></item><item><title>Non-root Containers And Devices</title><link>https://andygol-k8s.netlify.app/blog/2021/11/09/non-root-containers-and-devices/</link><pubDate>Tue, 09 Nov 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/11/09/non-root-containers-and-devices/</guid><description>&lt;p&gt;The user/group ID related security settings in a Pod's &lt;code&gt;securityContext&lt;/code&gt; trigger a problem when users want to
deploy containers that use accelerator devices (via &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/"&gt;Kubernetes Device Plugins&lt;/a&gt;) on Linux. In this blog
post I talk about the problem and describe the work done so far to address it. It's not meant to be a long story about getting the &lt;a href="https://github.com/kubernetes/kubernetes/issues/92211"&gt;k/k issue&lt;/a&gt; fixed.&lt;/p&gt;
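&lt;p&gt;To make the combination concrete, here is a hedged sketch of the kind of pod that hits the issue: it runs as a non-root user while requesting an accelerator through a device plugin. The image name and the device resource name (&lt;code&gt;vendor.example.com/device&lt;/code&gt;) are hypothetical placeholders, not real plugins:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: non-root-device-user
spec:
  securityContext:
    # Run the container processes as an unprivileged user/group
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: app
    image: registry.example/app:latest
    resources:
      limits:
        # Hypothetical device plugin resource
        vendor.example.com/device: 1
&lt;/code&gt;&lt;/pre&gt;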
&lt;p&gt;Instead, this post aims to raise awareness of the issue and to highlight important device use-cases too. This is needed as Kubernetes works on new related features such as support for user namespaces.&lt;/p&gt;</description></item><item><title>Announcing the 2021 Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2021/11/08/steering-committee-results-2021/</link><pubDate>Mon, 08 Nov 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/11/08/steering-committee-results-2021/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes/community/tree/master/events/elections/2021"&gt;2021 Steering Committee Election&lt;/a&gt; is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2021. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p&gt;
&lt;p&gt;This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;charter&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Use KPNG to Write Specialized kube-proxiers</title><link>https://andygol-k8s.netlify.app/blog/2021/10/18/use-kpng-to-write-specialized-kube-proxiers/</link><pubDate>Mon, 18 Oct 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/10/18/use-kpng-to-write-specialized-kube-proxiers/</guid><description>&lt;p&gt;The post will show you how to create a specialized service kube-proxy
style network proxier using Kubernetes Proxy NG
&lt;a href="https://github.com/kubernetes-sigs/kpng"&gt;kpng&lt;/a&gt; without interfering
with the existing kube-proxy. The kpng project aims at renewing the
default Kubernetes Service implementation, the &amp;quot;kube-proxy&amp;quot;. An
important feature of kpng is that it can be used as a library to
create proxiers outside K8s. While this is useful for CNI plugins that
replace the kube-proxy, it also opens the possibility for anyone to
create a proxier for a special purpose.&lt;/p&gt;</description></item><item><title>Introducing ClusterClass and Managed Topologies in Cluster API</title><link>https://andygol-k8s.netlify.app/blog/2021/10/08/capi-clusterclass-and-managed-topologies/</link><pubDate>Fri, 08 Oct 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/10/08/capi-clusterclass-and-managed-topologies/</guid><description>&lt;p&gt;The &lt;a href="https://cluster-api.sigs.k8s.io/"&gt;Cluster API community&lt;/a&gt; is happy to announce the implementation of &lt;em&gt;ClusterClass and Managed Topologies&lt;/em&gt;, a new feature that will greatly simplify how you can provision, upgrade, and operate multiple Kubernetes clusters in a declarative way.&lt;/p&gt;
&lt;h2 id="a-little-bit-of-context"&gt;A little bit of context…&lt;/h2&gt;
&lt;p&gt;Before getting into the details, let's take a step back and look at the history of Cluster API.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/kubernetes-sigs/cluster-api/"&gt;Cluster API project&lt;/a&gt; started three years ago, and the first releases focused on extensibility and implementing a declarative API that allows a seamless experience across infrastructure providers. This was a success with many cloud providers: AWS, Azure, Digital Ocean, GCP, Metal3, vSphere and still counting.&lt;/p&gt;</description></item><item><title>A Closer Look at NSA/CISA Kubernetes Hardening Guidance</title><link>https://andygol-k8s.netlify.app/blog/2021/10/05/nsa-cisa-kubernetes-hardening-guidance/</link><pubDate>Tue, 05 Oct 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/10/05/nsa-cisa-kubernetes-hardening-guidance/</guid><description>&lt;div class="alert alert-primary" role="alert"&gt;&lt;div class="h4 alert-heading" role="heading"&gt;Disclaimer&lt;/div&gt;
&lt;p&gt;The open source tools listed in this article are to serve as examples only
and are in no way a direct recommendation from the Kubernetes community or authors.&lt;/p&gt;
&lt;/div&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;USA's National Security Agency (NSA) and the Cybersecurity and Infrastructure
Security Agency (CISA)
released Kubernetes Hardening Guidance
on August 3rd, 2021. The guidance details threats to Kubernetes environments
and provides secure configuration guidance to minimize risk.&lt;/p&gt;
&lt;p&gt;The following sections of this blog correlate to the sections in the NSA/CISA guidance.
Any missing sections are skipped because of limited opportunities to add
anything new to the existing content.&lt;/p&gt;</description></item><item><title>How to Handle Data Duplication in Data-Heavy Kubernetes Environments</title><link>https://andygol-k8s.netlify.app/blog/2021/09/29/how-to-handle-data-duplication-in-data-heavy-kubernetes-environments/</link><pubDate>Wed, 29 Sep 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/09/29/how-to-handle-data-duplication-in-data-heavy-kubernetes-environments/</guid><description>&lt;h2 id="why-duplicate-data"&gt;Why Duplicate Data?&lt;/h2&gt;
&lt;p&gt;It’s convenient to create a copy of your application with a copy of its state for each team.
For example, you might want a separate database copy to test some significant schema changes
or develop other disruptive operations like bulk insert/delete/update...&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Duplicating data takes a lot of time.&lt;/strong&gt; That’s because you first need to download
all the data from a source block storage provider to compute and then send
it back to a storage provider again. There’s a lot of network traffic and CPU/RAM used in this process.
Hardware acceleration by offloading certain expensive operations to dedicated hardware is
&lt;strong&gt;always a huge performance boost&lt;/strong&gt;. It reduces the time required to complete an operation by orders
of magnitude.&lt;/p&gt;</description></item><item><title>Spotlight on SIG Node</title><link>https://andygol-k8s.netlify.app/blog/2021/09/27/sig-node-spotlight-2021/</link><pubDate>Mon, 27 Sep 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/09/27/sig-node-spotlight-2021/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In Kubernetes, a &lt;em&gt;Node&lt;/em&gt; is a representation of a single machine in your cluster. &lt;a href="https://github.com/kubernetes/community/tree/master/sig-node"&gt;SIG Node&lt;/a&gt; owns that very important Node component and supports various subprojects such as Kubelet, Container Runtime Interface (CRI) and more to support how the pods and host resources interact. In this blog, we have summarized our conversation with &lt;a href="https://twitter.com/ehashdn"&gt;Elana Hashman (EH)&lt;/a&gt; &amp;amp; &lt;a href="https://twitter.com/SergeyKanzhelev"&gt;Sergey Kanzhelev (SK)&lt;/a&gt;, who walk us through the various aspects of being a part of the SIG and share some insights about how others can get involved.&lt;/p&gt;</description></item><item><title>Introducing Single Pod Access Mode for PersistentVolumes</title><link>https://andygol-k8s.netlify.app/blog/2021/09/13/read-write-once-pod-access-mode-alpha/</link><pubDate>Mon, 13 Sep 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/09/13/read-write-once-pod-access-mode-alpha/</guid><description>&lt;p&gt;Last month's release of Kubernetes v1.22 introduced a new ReadWriteOncePod access mode for &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistent-volumes"&gt;PersistentVolumes&lt;/a&gt; and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims"&gt;PersistentVolumeClaims&lt;/a&gt;.
With this alpha feature, Kubernetes allows you to restrict volume access to a single pod in the cluster.&lt;/p&gt;
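&lt;p&gt;A PersistentVolumeClaim requesting the new access mode might look like the following sketch (the claim name and storage size are illustrative, and the alpha &lt;code&gt;ReadWriteOncePod&lt;/code&gt; feature gate must be enabled):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-pvc
spec:
  accessModes:
  # Only a single pod in the cluster may use this volume
  - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;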
&lt;h2 id="what-are-access-modes-and-why-are-they-important"&gt;What are access modes and why are they important?&lt;/h2&gt;
&lt;p&gt;When using storage, there are different ways to model how that storage is consumed.&lt;/p&gt;
&lt;p&gt;For example, a storage system like a network file share can have many users all reading and writing data simultaneously.
In other cases maybe everyone is allowed to read data but not write it.
For highly sensitive data, maybe only one user is allowed to read and write data but nobody else.&lt;/p&gt;</description></item><item><title>Alpha in Kubernetes v1.22: API Server Tracing</title><link>https://andygol-k8s.netlify.app/blog/2021/09/03/api-server-tracing/</link><pubDate>Fri, 03 Sep 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/09/03/api-server-tracing/</guid><description>&lt;p&gt;In distributed systems, it can be hard to figure out where problems are. You grep through one component's logs just to discover that the source of your problem is in another component. You search there only to discover that you need to enable debug logs to figure out what really went wrong... And it goes on. The more complex the path your request takes, the harder it is to answer questions about where it went. I've personally spent many hours doing this dance with a variety of Kubernetes components. Distributed tracing is a tool which is designed to help in these situations, and the Kubernetes API Server is, perhaps, the most important Kubernetes component to be able to debug. At Kubernetes' Sig Instrumentation, our mission is to make it easier to understand what's going on in your cluster, and we are happy to announce that distributed tracing in the Kubernetes API Server reached alpha in 1.22.&lt;/p&gt;</description></item><item><title>Kubernetes 1.22: A New Design for Volume Populators</title><link>https://andygol-k8s.netlify.app/blog/2021/08/30/volume-populators-redesigned/</link><pubDate>Mon, 30 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/30/volume-populators-redesigned/</guid><description>&lt;p&gt;Kubernetes v1.22, released earlier this month, introduced a redesigned approach for volume
populators. Originally implemented
in v1.18, the API suffered from backwards compatibility issues. Kubernetes v1.22 includes a new API
field called &lt;code&gt;dataSourceRef&lt;/code&gt; that fixes these problems.&lt;/p&gt;
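&lt;p&gt;As a sketch of where the new field appears (all names and the storage size are illustrative), a PVC populated from a VolumeSnapshot via &lt;code&gt;dataSourceRef&lt;/code&gt; might look like this:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  dataSourceRef:
    # Populate the new volume from this snapshot
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;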
&lt;h2 id="data-sources"&gt;Data sources&lt;/h2&gt;
&lt;p&gt;Earlier Kubernetes releases already added a &lt;code&gt;dataSource&lt;/code&gt; field into the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims"&gt;PersistentVolumeClaim&lt;/a&gt; API,
used for cloning volumes and creating volumes from snapshots. You could use the &lt;code&gt;dataSource&lt;/code&gt; field when
creating a new PVC, referencing either an existing PVC or a VolumeSnapshot in the same namespace.
That also modified the normal provisioning process so that instead of yielding an empty volume, the
new PVC contained the same data as either the cloned PVC or the cloned VolumeSnapshot.&lt;/p&gt;</description></item><item><title>Minimum Ready Seconds for StatefulSets</title><link>https://andygol-k8s.netlify.app/blog/2021/08/27/minreadyseconds-statefulsets/</link><pubDate>Fri, 27 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/27/minreadyseconds-statefulsets/</guid><description>&lt;p&gt;This blog describes the notion of Availability for &lt;code&gt;StatefulSet&lt;/code&gt; workloads, and a new alpha feature in Kubernetes 1.22 which adds &lt;code&gt;minReadySeconds&lt;/code&gt; configuration for &lt;code&gt;StatefulSets&lt;/code&gt;.&lt;/p&gt;
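&lt;p&gt;The new field sits directly under the StatefulSet &lt;code&gt;spec&lt;/code&gt;. A minimal sketch follows (alpha in v1.22 behind the &lt;code&gt;StatefulSetMinReadySeconds&lt;/code&gt; feature gate; the names, image, and 10-second value are illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  # A Pod must stay Ready for 10s before it is considered Available
  minReadySeconds: 10
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example/web:latest
&lt;/code&gt;&lt;/pre&gt;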
&lt;h2 id="what-problems-does-this-solve"&gt;What problems does this solve?&lt;/h2&gt;
&lt;p&gt;Prior to the Kubernetes 1.22 release, once a &lt;code&gt;StatefulSet&lt;/code&gt; &lt;code&gt;Pod&lt;/code&gt; was in the &lt;code&gt;Ready&lt;/code&gt; state it was considered &lt;code&gt;Available&lt;/code&gt; to receive traffic. For some &lt;code&gt;StatefulSet&lt;/code&gt; workloads, that may not be the case. For example, in a workload like Prometheus with multiple instances of Alertmanager, a &lt;code&gt;Pod&lt;/code&gt; should be considered &lt;code&gt;Available&lt;/code&gt; only when Alertmanager's state transfer is complete, not when the &lt;code&gt;Pod&lt;/code&gt; is in the &lt;code&gt;Ready&lt;/code&gt; state. Since &lt;code&gt;minReadySeconds&lt;/code&gt; adds a buffer, the state transfer may be complete before the &lt;code&gt;Pod&lt;/code&gt; becomes &lt;code&gt;Available&lt;/code&gt;. While this is not a foolproof way of identifying whether the state transfer is complete, it gives the end user a way to express their intention of waiting for some time before the &lt;code&gt;Pod&lt;/code&gt; is considered &lt;code&gt;Available&lt;/code&gt; and ready to serve requests.&lt;/p&gt;</description></item><item><title>Enable seccomp for all workloads with a new v1.22 alpha feature</title><link>https://andygol-k8s.netlify.app/blog/2021/08/25/seccomp-default/</link><pubDate>Wed, 25 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/25/seccomp-default/</guid><description>&lt;p&gt;This blog post is about a new Kubernetes feature introduced in v1.22, which adds
an additional security layer on top of the existing seccomp support. Seccomp is
a security mechanism for Linux processes to filter system calls (syscalls) based
on a set of defined rules. Applying seccomp profiles to containerized workloads
is one of the key tasks when it comes to enhancing the security of the
application deployment. Developers, site reliability engineers and
infrastructure administrators have to work hand in hand to create, distribute
and maintain the profiles over the application's life-cycle.&lt;/p&gt;</description></item><item><title>Alpha in v1.22: Windows HostProcess Containers</title><link>https://andygol-k8s.netlify.app/blog/2021/08/16/windows-hostprocess-containers/</link><pubDate>Mon, 16 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/16/windows-hostprocess-containers/</guid><description>&lt;p&gt;Kubernetes v1.22 introduced a new alpha feature for clusters that
include Windows nodes: HostProcess containers.&lt;/p&gt;
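&lt;p&gt;As a hedged sketch of the shape of such a pod (the pod name, image, and chosen user are illustrative), a HostProcess container is requested through the pod-level Windows security options:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example
spec:
  securityContext:
    windowsOptions:
      # Run directly on the host rather than in an isolated container
      hostProcess: true
      runAsUserName: 'NT AUTHORITY\SYSTEM'
  # HostProcess containers share the host's network namespace
  hostNetwork: true
  containers:
  - name: admin-task
    image: registry.example/admin-task:latest
  nodeSelector:
    kubernetes.io/os: windows
&lt;/code&gt;&lt;/pre&gt;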
&lt;p&gt;HostProcess containers aim to extend the Windows container model to enable a wider
range of Kubernetes cluster management scenarios. HostProcess containers run
directly on the host and maintain behavior and access similar to that of a regular
process. With HostProcess containers, users can package and distribute management
operations and functionalities that require host access while retaining versioning
and deployment methods provided by containers. This allows Windows containers to
be used for a variety of device plugin, storage, and networking management scenarios
in Kubernetes. With this comes the enablement of host network mode—allowing
HostProcess containers to be created within the host's network namespace instead of
their own. HostProcess containers can also be built on top of existing Windows server
2019 (or later) base images, managed through the Windows container runtime, and run
as any user that is available on or in the domain of the host machine.&lt;/p&gt;</description></item><item><title>Kubernetes Memory Manager moves to beta</title><link>https://andygol-k8s.netlify.app/blog/2021/08/11/kubernetes-1-22-feature-memory-manager-moves-to-beta/</link><pubDate>Wed, 11 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/11/kubernetes-1-22-feature-memory-manager-moves-to-beta/</guid><description>&lt;p&gt;The blog post explains some of the internals of the &lt;em&gt;Memory manager&lt;/em&gt;, a beta feature
of Kubernetes 1.22. In Kubernetes, the Memory Manager is a
&lt;a href="https://kubernetes.io/docs/concepts/overview/components/#kubelet"&gt;kubelet&lt;/a&gt; subcomponent.
The Memory Manager provides guaranteed memory (and hugepages)
allocation for pods in the &lt;code&gt;Guaranteed&lt;/code&gt; &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes"&gt;QoS class&lt;/a&gt;.&lt;/p&gt;
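&lt;p&gt;For context, here is a hedged sketch of enabling the Static memory manager policy in the kubelet configuration (the NUMA node and memory amount are illustrative, and &lt;code&gt;reservedMemory&lt;/code&gt; must be consistent with the node's other memory reservation settings):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Use the Static policy to pin Guaranteed pods to NUMA nodes
memoryManagerPolicy: Static
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi
&lt;/code&gt;&lt;/pre&gt;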
&lt;p&gt;This blog post covers:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="#Why-do-you-need-it?"&gt;Why do you need it?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#How-does-it-work?"&gt;The internal details of how the &lt;strong&gt;MemoryManager&lt;/strong&gt; works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#Current-limitations"&gt;Current limitations of the &lt;strong&gt;MemoryManager&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#Future-work-for-the-Memory-Manager"&gt;Future work for the &lt;strong&gt;MemoryManager&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="why-do-you-need-it"&gt;Why do you need it?&lt;/h2&gt;
&lt;p&gt;Some Kubernetes workloads run on nodes with
&lt;a href="https://en.wikipedia.org/wiki/Non-uniform_memory_access"&gt;non-uniform memory access&lt;/a&gt; (NUMA).
Suppose you have NUMA nodes in your cluster. In that case, you'll know about the potential for extra latency when
compute resources need to access memory on a different NUMA locality.&lt;/p&gt;</description></item><item><title>Kubernetes 1.22: CSI Windows Support (with CSI Proxy) reaches GA</title><link>https://andygol-k8s.netlify.app/blog/2021/08/09/csi-windows-support-with-csi-proxy-reaches-ga/</link><pubDate>Mon, 09 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/09/csi-windows-support-with-csi-proxy-reaches-ga/</guid><description>&lt;p&gt;&lt;em&gt;The stable version of CSI Proxy for Windows has been released alongside Kubernetes 1.22. CSI Proxy enables CSI Drivers running on Windows nodes to perform privileged storage operations.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Container Storage Interface (CSI) for Kubernetes went GA in the Kubernetes 1.13 release. CSI has become the standard for exposing block and file storage to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It enables third-party storage providers to write and deploy plugins without the need to alter the core Kubernetes codebase. Legacy in-tree drivers are deprecated and new storage features are introduced in CSI, therefore it is important to get CSI Drivers to work on Windows.&lt;/p&gt;</description></item><item><title>New in Kubernetes v1.22: alpha support for using swap memory</title><link>https://andygol-k8s.netlify.app/blog/2021/08/09/run-nodes-with-swap-alpha/</link><pubDate>Mon, 09 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/09/run-nodes-with-swap-alpha/</guid><description>&lt;p&gt;The 1.22 release introduced alpha support for configuring swap memory usage for
Kubernetes workloads on a per-node basis.&lt;/p&gt;
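&lt;p&gt;A hedged sketch of the per-node kubelet configuration involved (field names per the v1.22 &lt;code&gt;NodeSwap&lt;/code&gt; alpha; the chosen behavior is illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true
# Allow the kubelet to start even though swap is enabled on the node
failSwapOn: false
memorySwap:
  # LimitedSwap restricts how much swap workloads may use;
  # UnlimitedSwap is the other supported value
  swapBehavior: LimitedSwap
&lt;/code&gt;&lt;/pre&gt;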
&lt;p&gt;In prior releases, Kubernetes did not support the use of swap memory on Linux,
as it is difficult to provide guarantees and account for pod memory utilization
when swap is involved. As part of Kubernetes' earlier design, swap support was
considered out of scope, and a kubelet would by default fail to start if swap
was detected on a node.&lt;/p&gt;</description></item><item><title>Kubernetes 1.22: Server Side Apply moves to GA</title><link>https://andygol-k8s.netlify.app/blog/2021/08/06/server-side-apply-ga/</link><pubDate>Fri, 06 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/06/server-side-apply-ga/</guid><description>&lt;p&gt;Server-side Apply (SSA) has been promoted to GA in the Kubernetes v1.22 release. The GA milestone means you can depend on the feature and its API, without fear of future backwards-incompatible changes. GA features are protected by the Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/"&gt;deprecation policy&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="what-is-server-side-apply"&gt;What is Server-side Apply?&lt;/h2&gt;
&lt;p&gt;Server-side Apply helps users and controllers manage their resources through declarative configurations. Server-side Apply replaces the client side apply feature implemented by “kubectl apply” with a server-side implementation, permitting use by tools/clients other than kubectl. Server-side Apply is a new merging algorithm, as well as tracking of field ownership, running on the Kubernetes api-server. Server-side Apply enables new features like conflict detection, so the system knows when two actors are trying to edit the same field. Refer to the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/server-side-apply/"&gt;Server-side Apply Documentation&lt;/a&gt; and &lt;a href="https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/"&gt;Beta 2 release announcement&lt;/a&gt; for more information.&lt;/p&gt;</description></item><item><title>Kubernetes 1.22: Reaching New Peaks</title><link>https://andygol-k8s.netlify.app/blog/2021/08/04/kubernetes-1-22-release-announcement/</link><pubDate>Wed, 04 Aug 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/08/04/kubernetes-1-22-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the release of Kubernetes 1.22, the second release of 2021!&lt;/p&gt;
&lt;p&gt;This release consists of 53 enhancements: 13 enhancements have graduated to stable, 24 enhancements are moving to beta, and 16 enhancements are entering alpha. Also, three features have been deprecated.&lt;/p&gt;
&lt;p&gt;In April of this year, the Kubernetes release cadence was officially changed from four to three releases yearly. This is the first longer-cycle release related to that change. As the Kubernetes project matures, the number of enhancements per cycle grows. This means more work, from version to version, for the contributor community and Release Engineering team, and it can put pressure on the end-user community to stay up-to-date with releases containing increasingly more features.&lt;/p&gt;</description></item><item><title>Roorkee robots, releases and racing: the Kubernetes 1.21 release interview</title><link>https://andygol-k8s.netlify.app/blog/2021/07/29/roorkee-robots-releases-and-racing-the-kubernetes-1.21-release-interview/</link><pubDate>Thu, 29 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/07/29/roorkee-robots-releases-and-racing-the-kubernetes-1.21-release-interview/</guid><description>&lt;p&gt;With Kubernetes 1.22 due out next week, now is a great time to look back on 1.21. The release team for that version was led by &lt;a href="https://twitter.com/theonlynabarun"&gt;Nabarun Pal&lt;/a&gt; from VMware.&lt;/p&gt;
&lt;p&gt;Back in April I &lt;a href="https://kubernetespodcast.com/episode/146-kubernetes-1.21/"&gt;interviewed Nabarun&lt;/a&gt; on the weekly &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt;; the latest in a series of release lead conversations that started back with 1.11, not long after the show started back in 2018.&lt;/p&gt;
&lt;p&gt;In these interviews we learn a little about the release, but also about the process behind it, and the story behind the person chosen to lead it. Getting to know a community member is my favourite part of the show each week, and so I encourage you to &lt;a href="https://kubernetespodcast.com/subscribe/"&gt;subscribe wherever you get your podcasts&lt;/a&gt;. With a release coming next week, you can probably guess what our next topic will be!&lt;/p&gt;</description></item><item><title>Updating NGINX-Ingress to use the stable Ingress API</title><link>https://andygol-k8s.netlify.app/blog/2021/07/26/update-with-ingress-nginx/</link><pubDate>Mon, 26 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/07/26/update-with-ingress-nginx/</guid><description>&lt;p&gt;With all Kubernetes APIs, there is a process to creating, maintaining, and
ultimately deprecating their older versions once the API becomes GA. The networking.k8s.io API group is no
different. The upcoming Kubernetes 1.22 release will remove several deprecated APIs
that are relevant to networking:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the &lt;code&gt;networking.k8s.io/v1beta1&lt;/code&gt; API version of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/#ingress-class"&gt;IngressClass&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;all beta versions of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt;: &lt;code&gt;extensions/v1beta1&lt;/code&gt; and &lt;code&gt;networking.k8s.io/v1beta1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
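&lt;p&gt;For reference, a minimal Ingress using the stable API looks like the following sketch (the host, service name, and class are illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        # pathType is required in the v1 API
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;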
&lt;p&gt;On a v1.22 Kubernetes cluster, you'll be able to access Ingress and IngressClass
objects through the stable (v1) APIs, but access via their beta APIs won't be possible.
This change has been in
discussion since
&lt;a href="https://github.com/kubernetes/kubernetes/issues/43214"&gt;2017&lt;/a&gt;,
&lt;a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/"&gt;2019&lt;/a&gt; with
1.16 Kubernetes API deprecations, and most recently in
KEP-1453:
&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1453-ingress-api#122"&gt;Graduate Ingress API to GA&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes Release Cadence Change: Here’s What You Need To Know</title><link>https://andygol-k8s.netlify.app/blog/2021/07/20/new-kubernetes-release-cadence/</link><pubDate>Tue, 20 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/07/20/new-kubernetes-release-cadence/</guid><description>&lt;p&gt;On April 23, 2021, the Release Team merged a Kubernetes Enhancement Proposal (KEP) changing the Kubernetes release cycle from four releases a year (once a quarter) to three releases a year.&lt;/p&gt;
&lt;p&gt;This blog post provides a high level overview about what this means for the Kubernetes community's contributors and maintainers.&lt;/p&gt;
&lt;h2 id="what-s-changing-and-when"&gt;What's changing and when&lt;/h2&gt;
&lt;p&gt;Starting with the &lt;a href="https://github.com/kubernetes/sig-release/tree/master/releases/release-1.22"&gt;Kubernetes 1.22 release&lt;/a&gt;, a lightweight policy will drive the creation of each release schedule. This policy states:&lt;/p&gt;</description></item><item><title>Spotlight on SIG Usability</title><link>https://andygol-k8s.netlify.app/blog/2021/07/15/sig-usability-spotlight-2021/</link><pubDate>Thu, 15 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/07/15/sig-usability-spotlight-2021/</guid><description>&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;SIG Usability, which is featured in this Spotlight blog, has been deprecated and is no longer active.
As a result, the links and information provided in this blog post may no longer be valid or relevant.
Should there be renewed interest and increased participation in the future, the SIG may be revived.
However, as of August 2023 the SIG is inactive per the Kubernetes community policy.
The Kubernetes project encourages you to explore other
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-list.md#special-interest-groups"&gt;SIGs&lt;/a&gt;
and resources available on the Kubernetes website to stay up-to-date with the latest developments
and enhancements in Kubernetes.&lt;/div&gt;

&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Are you interested in learning about what &lt;a href="https://github.com/kubernetes/community/tree/master/sig-usability"&gt;SIG Usability&lt;/a&gt; does and how you can get involved? Well, you're at the right place. SIG Usability is all about making Kubernetes more accessible to new folks, and its main activity is conducting user research for the community. In this blog, we have summarized our conversation with &lt;a href="https://twitter.com/morengab"&gt;Gaby Moreno&lt;/a&gt;, who walks us through the various aspects of being a part of the SIG and shares some insights about how others can get involved.&lt;/p&gt;</description></item><item><title>Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know</title><link>https://andygol-k8s.netlify.app/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/</link><pubDate>Wed, 14 Jul 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/</guid><description>&lt;p&gt;As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old APIs they replace are deprecated, and eventually removed.
See &lt;a href="#kubernetes-api-removals"&gt;Kubernetes API removals&lt;/a&gt; to read more about Kubernetes'
policy on removing APIs.&lt;/p&gt;
&lt;p&gt;We want to make sure you're aware of some upcoming removals. These are
beta APIs that you can use in current, supported Kubernetes versions,
and they are already deprecated. The reason for all of these removals
is that they have been superseded by a newer, stable (“GA”) API.&lt;/p&gt;</description></item><item><title>Announcing Kubernetes Community Group Annual Reports</title><link>https://andygol-k8s.netlify.app/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/</link><pubDate>Mon, 28 Jun 2021 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/</guid><description>&lt;figure&gt;&lt;a href="https://www.cncf.io/reports/kubernetes-community-annual-report-2020/"&gt;
 &lt;img src="https://andygol-k8s.netlify.app/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/k8s_annual_report_2020.svg"
 alt="Community annual report 2020"/&gt; &lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;Given the growth and scale of the Kubernetes project, the existing reporting mechanisms were proving to be inadequate and challenging.
Kubernetes is a large open source project. With over 100,000 commits just to the main k/kubernetes repository, hundreds of other code
repositories in the project, and thousands of contributors, there's a lot going on. In fact, there are 37 contributor groups at the time of
writing. We also value all forms of contribution and not just code changes.&lt;/p&gt;</description></item><item><title>Writing a Controller for Pod Labels</title><link>https://andygol-k8s.netlify.app/blog/2021/06/21/writing-a-controller-for-pod-labels/</link><pubDate>Mon, 21 Jun 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/06/21/writing-a-controller-for-pod-labels/</guid><description>&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Operators&lt;/a&gt; are proving to be an excellent solution to
running stateful distributed applications in Kubernetes. Open source tools like
the &lt;a href="https://sdk.operatorframework.io/"&gt;Operator SDK&lt;/a&gt; provide ways to build reliable and maintainable
operators, making it easier to extend Kubernetes and implement custom
scheduling.&lt;/p&gt;
&lt;p&gt;Kubernetes operators run complex software inside your cluster. The open source
community has already built &lt;a href="https://operatorhub.io/"&gt;many operators&lt;/a&gt; for distributed
applications like Prometheus, Elasticsearch, or Argo CD. Even outside of
open source, operators can help to bring new functionality to your Kubernetes
cluster.&lt;/p&gt;</description></item><item><title>Using Finalizers to Control Deletion</title><link>https://andygol-k8s.netlify.app/blog/2021/05/14/using-finalizers-to-control-deletion/</link><pubDate>Fri, 14 May 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/05/14/using-finalizers-to-control-deletion/</guid><description>&lt;p&gt;Deleting objects in Kubernetes can be challenging. You may think you’ve deleted something, only to find it still persists. While issuing a &lt;code&gt;kubectl delete&lt;/code&gt; command and hoping for the best might work for day-to-day operations, understanding how Kubernetes &lt;code&gt;delete&lt;/code&gt; commands operate will help you understand why some objects linger after deletion.&lt;/p&gt;
&lt;p&gt;In this post, I’ll look at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What properties of a resource govern deletion&lt;/li&gt;
&lt;li&gt;How finalizers and owner references impact object deletion&lt;/li&gt;
&lt;li&gt;How the propagation policy can be used to change the order of deletions&lt;/li&gt;
&lt;li&gt;How deletion works, with examples&lt;/li&gt;
&lt;/ul&gt;
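&lt;p&gt;As a quick sketch of the first two points, a ConfigMap that carries a finalizer (the finalizer string below is illustrative) will not actually go away on &lt;code&gt;kubectl delete&lt;/code&gt; until that finalizer is removed from its metadata:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap            # illustrative name
  finalizers:
  - kubernetes           # any opaque string; deletion blocks until it is removed
```

&lt;p&gt;Deleting this object only sets its &lt;code&gt;deletionTimestamp&lt;/code&gt;; the API server keeps it around until a controller (or you, via a patch) clears the finalizer list.&lt;/p&gt;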
&lt;p&gt;For simplicity, all examples will use ConfigMaps and basic shell commands to demonstrate the process. We’ll explore how the commands work and discuss repercussions and results from using them in practice.&lt;/p&gt;</description></item><item><title>Kubernetes 1.21: Metrics Stability hits GA</title><link>https://andygol-k8s.netlify.app/blog/2021/04/23/kubernetes-release-1.21-metrics-stability-ga/</link><pubDate>Fri, 23 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/23/kubernetes-release-1.21-metrics-stability-ga/</guid><description>&lt;p&gt;Kubernetes 1.21 marks the graduation of the metrics stability framework and along with it, the first officially supported stable metrics. Not only do stable metrics come with supportability guarantees, the metrics stability framework brings escape hatches that you can use if you encounter problematic metrics.&lt;/p&gt;
&lt;p&gt;See the list of &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml"&gt;stable Kubernetes metrics here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="what-are-stable-metrics-and-why-do-we-need-them"&gt;What are stable metrics and why do we need them?&lt;/h3&gt;
&lt;p&gt;A stable metric is one which, from a consumption point of view, can be reliably consumed across a number of Kubernetes versions without risk of ingestion failure.&lt;/p&gt;</description></item><item><title>Evolving Kubernetes networking with the Gateway API</title><link>https://andygol-k8s.netlify.app/blog/2021/04/22/evolving-kubernetes-networking-with-the-gateway-api/</link><pubDate>Thu, 22 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/22/evolving-kubernetes-networking-with-the-gateway-api/</guid><description>&lt;p&gt;The Ingress resource is one of the many Kubernetes success stories. It created a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/ingress-controllers/"&gt;diverse ecosystem of Ingress controllers&lt;/a&gt; which were used across hundreds of thousands of clusters in a standardized and consistent way. This standardization helped users adopt Kubernetes. However, five years after the creation of Ingress, there are signs of fragmentation into different but &lt;a href="https://dave.cheney.net/paste/ingress-is-dead-long-live-ingressroute.pdf"&gt;strikingly similar CRDs&lt;/a&gt; and &lt;a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/"&gt;overloaded annotations&lt;/a&gt;. The same portability that made Ingress pervasive also limited its future.&lt;/p&gt;</description></item><item><title>Graceful Node Shutdown Goes Beta</title><link>https://andygol-k8s.netlify.app/blog/2021/04/21/graceful-node-shutdown-beta/</link><pubDate>Wed, 21 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/21/graceful-node-shutdown-beta/</guid><description>&lt;p&gt;Graceful node shutdown, beta in 1.21, enables kubelet to gracefully evict pods during a node shutdown.&lt;/p&gt;
&lt;p&gt;Kubernetes is a distributed system and as such we need to be prepared for inevitable failures — nodes will fail, containers might crash or be restarted, and, ideally, your workloads will be able to withstand these catastrophic events.&lt;/p&gt;
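&lt;p&gt;Graceful node shutdown is driven by two kubelet settings. A sketch of the relevant &lt;code&gt;KubeletConfiguration&lt;/code&gt; fragment (the durations are illustrative):&lt;/p&gt;

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# total time the kubelet delays node shutdown in order to evict pods
shutdownGracePeriod: 30s
# portion of shutdownGracePeriod reserved for critical pods, terminated last
shutdownGracePeriodCriticalPods: 10s
```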
&lt;p&gt;One of the common classes of issues are workload failures on node shutdown or restart. The best practice prior to bringing your node down is to &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/safely-drain-node/"&gt;safely drain and cordon your node&lt;/a&gt;. This will ensure that all pods running on this node can safely be evicted. An eviction will ensure your pods can follow the expected &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination"&gt;pod termination lifecycle&lt;/a&gt; meaning receiving a SIGTERM in your container and/or running &lt;code&gt;preStopHooks&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Annotating Kubernetes Services for Humans</title><link>https://andygol-k8s.netlify.app/blog/2021/04/20/annotating-k8s-for-humans/</link><pubDate>Tue, 20 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/20/annotating-k8s-for-humans/</guid><description>&lt;p&gt;Have you ever been asked to troubleshoot a failing Kubernetes service and struggled to find basic information about the service such as the source repository and owner?&lt;/p&gt;
&lt;p&gt;One of the problems as Kubernetes applications grow is the proliferation of services. As the number of services grows, developers start to specialize working with specific services. When it comes to troubleshooting, however, developers need to be able to find the source, understand the service and dependencies, and chat with the owning team for any service.&lt;/p&gt;</description></item><item><title>Defining Network Policy Conformance for Container Network Interface (CNI) providers</title><link>https://andygol-k8s.netlify.app/blog/2021/04/20/defining-networkpolicy-conformance-cni-providers/</link><pubDate>Tue, 20 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/20/defining-networkpolicy-conformance-cni-providers/</guid><description>&lt;p&gt;Special thanks to Tim Hockin and Bowie Du (Google), Dan Winship and Antonio Ojea (Red Hat),
Casey Davenport and Shaun Crampton (Tigera), and Abhishek Raut and Antonin Bas (VMware) for
being supportive of this work, and working with us to resolve issues in different Container Network Interfaces (CNIs) over time.&lt;/p&gt;
&lt;p&gt;A brief conversation around &amp;quot;node local&amp;quot; Network Policies in April of 2020 inspired the creation of a NetworkPolicy subproject from SIG Network. It became clear that as a community,
we need a rock-solid story around how to do pod network security on Kubernetes, and this story needed a community around it, so as to grow the cultural adoption of enterprise security patterns in K8s.&lt;/p&gt;</description></item><item><title>Introducing Indexed Jobs</title><link>https://andygol-k8s.netlify.app/blog/2021/04/19/introducing-indexed-jobs/</link><pubDate>Mon, 19 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/19/introducing-indexed-jobs/</guid><description>&lt;p&gt;Once you have containerized a non-parallel &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/"&gt;Job&lt;/a&gt;,
it is quite easy to get it up and running on Kubernetes without modifications to
the binary. In most cases, when running parallel distributed Jobs, you had
to set up a separate system to partition the work among the workers. For
example, you could set up a task queue to &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/job/coarse-parallel-processing-work-queue/"&gt;assign one work item to each
Pod&lt;/a&gt; or &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/job/fine-parallel-processing-work-queue/"&gt;multiple items
to each Pod until the queue is emptied&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Volume Health Monitoring Alpha Update</title><link>https://andygol-k8s.netlify.app/blog/2021/04/16/volume-health-monitoring-alpha-update/</link><pubDate>Fri, 16 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/16/volume-health-monitoring-alpha-update/</guid><description>&lt;p&gt;The CSI Volume Health Monitoring feature, originally introduced in 1.19 has undergone a large update for the 1.21 release.&lt;/p&gt;
&lt;h2 id="why-add-volume-health-monitoring-to-kubernetes"&gt;Why add Volume Health Monitoring to Kubernetes?&lt;/h2&gt;
&lt;p&gt;Without Volume Health Monitoring, Kubernetes has no knowledge of the state of the underlying volumes of a storage system after a PVC is provisioned and used by a Pod. Many things could happen to the underlying storage system after a volume is provisioned in Kubernetes. For example, the volume could be deleted by accident outside of Kubernetes, the disk that the volume resides on could fail, it could be out of capacity, the disk may be degraded which affects its performance, and so on. Even when the volume is mounted on a pod and used by an application, there could be problems later on such as read/write I/O errors, file system corruption, accidental unmounting of the volume outside of Kubernetes, etc. It is very hard to debug and detect root causes when something like this happens.&lt;/p&gt;</description></item><item><title>Three Tenancy Models For Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2021/04/15/three-tenancy-models-for-kubernetes/</link><pubDate>Thu, 15 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/15/three-tenancy-models-for-kubernetes/</guid><description>&lt;p&gt;Kubernetes clusters are typically used by several teams in an organization. In other cases, Kubernetes may be used to deliver applications to end users requiring segmentation and isolation of resources across users from different organizations. Secure sharing of Kubernetes control plane and worker node resources allows maximizing productivity and saving costs in both cases.&lt;/p&gt;
&lt;p&gt;The Kubernetes Multi-Tenancy Working Group is chartered with defining tenancy models for Kubernetes and making it easier to operationalize tenancy related use cases. This blog post, from the working group members, describes three common tenancy models and introduces related working group projects.&lt;/p&gt;</description></item><item><title>Local Storage: Storage Capacity Tracking, Distributed Provisioning and Generic Ephemeral Volumes hit Beta</title><link>https://andygol-k8s.netlify.app/blog/2021/04/14/local-storage-features-go-beta/</link><pubDate>Wed, 14 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/14/local-storage-features-go-beta/</guid><description>&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes"&gt;&amp;quot;generic ephemeral
volumes&amp;quot;&lt;/a&gt;
and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/storage-capacity/"&gt;&amp;quot;storage capacity
tracking&amp;quot;&lt;/a&gt;
features in Kubernetes are getting promoted to beta in Kubernetes
1.21. Together with the &lt;a href="https://github.com/kubernetes-csi/external-provisioner#deployment-on-each-node"&gt;distributed provisioning
support&lt;/a&gt;
in the CSI external-provisioner, development and deployment of
Container Storage Interface (CSI) drivers which manage storage locally
on a node become a lot easier.&lt;/p&gt;
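&lt;p&gt;For instance, with generic ephemeral volumes a Pod can request a fresh, per-Pod volume inline, without a separately managed PVC. A minimal sketch (the image, driver, and storage class names are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                      # illustrative name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.5          # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:                           # generic ephemeral volume (beta in 1.21)
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: local-storage  # illustrative CSI storage class
          resources:
            requests:
              storage: 1Gi
```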
&lt;p&gt;This blog post explains how such drivers worked before and how these
features can be used to make drivers simpler.&lt;/p&gt;
&lt;h2 id="problems-we-are-solving"&gt;Problems we are solving&lt;/h2&gt;
&lt;p&gt;There are drivers for local storage, like
&lt;a href="https://github.com/cybozu-go/topolvm"&gt;TopoLVM&lt;/a&gt; for traditional disks
and &lt;a href="https://intel.github.io/pmem-csi/latest/README.html"&gt;PMEM-CSI&lt;/a&gt;
for &lt;a href="https://pmem.io/"&gt;persistent memory&lt;/a&gt;. They work and are ready for
usage today also on older Kubernetes releases, but making that possible
was not trivial.&lt;/p&gt;</description></item><item><title>kube-state-metrics goes v2.0</title><link>https://andygol-k8s.netlify.app/blog/2021/04/13/kube-state-metrics-v-2-0/</link><pubDate>Tue, 13 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/13/kube-state-metrics-v-2-0/</guid><description>&lt;h2 id="what"&gt;What?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/kube-state-metrics"&gt;kube-state-metrics&lt;/a&gt;, a project under the Kubernetes organization, generates Prometheus format metrics based on the current state of the Kubernetes native resources. It does this by listening to the Kubernetes API and gathering information about resources and objects, e.g. Deployments, Pods, Services, and StatefulSets. A full list of resources is available in the &lt;a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs"&gt;documentation&lt;/a&gt; of kube-state-metrics.&lt;/p&gt;
&lt;h2 id="why"&gt;Why?&lt;/h2&gt;
&lt;p&gt;There are numerous useful metrics and insights provided by &lt;code&gt;kube-state-metrics&lt;/code&gt; right out of the box! These metrics can be used to serve as an insight into your cluster: Either through metrics alone, in the form of dashboards, or through an alerting pipeline. To provide a few examples:&lt;/p&gt;</description></item><item><title>Introducing Suspended Jobs</title><link>https://andygol-k8s.netlify.app/blog/2021/04/12/introducing-suspended-jobs/</link><pubDate>Mon, 12 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/12/introducing-suspended-jobs/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/job/"&gt;Jobs&lt;/a&gt; are a crucial part of
Kubernetes' API. While other kinds of workloads such as &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/deployment/"&gt;Deployments&lt;/a&gt;,
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicaset/"&gt;ReplicaSets&lt;/a&gt;,
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSets&lt;/a&gt;, and
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset/"&gt;DaemonSets&lt;/a&gt;
solve use-cases that require Pods to run forever, Jobs are useful when Pods need
to run to completion. Commonly used in parallel batch processing, Jobs can be
used in a variety of applications ranging from video rendering and database
maintenance to sending bulk emails and scientific computing.&lt;/p&gt;
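&lt;p&gt;Kubernetes 1.21 introduces a &lt;code&gt;suspend&lt;/code&gt; field on the Job spec; a Job created with it set stays dormant until the field is flipped to false. A sketch (the name and image are illustrative):&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: heavy-batch        # illustrative name
spec:
  suspend: true            # no Pods are created until this is set to false
  parallelism: 2
  completions: 10
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox     # placeholder image
        command: ["sh", "-c", "echo working"]
```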
&lt;p&gt;While the amount of parallelism and the conditions for Job completion are
configurable, the Kubernetes API lacked the ability to suspend and resume Jobs.
This is often desired when cluster resources are limited and a higher priority
Job needs to execute in place of another Job. Deleting the lower priority
Job is a poor workaround as Pod completion history and other metrics associated
with the Job will be lost.&lt;/p&gt;</description></item><item><title>Kubernetes 1.21: CronJob Reaches GA</title><link>https://andygol-k8s.netlify.app/blog/2021/04/09/kubernetes-release-1.21-cronjob-ga/</link><pubDate>Fri, 09 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/09/kubernetes-release-1.21-cronjob-ga/</guid><description>&lt;p&gt;In Kubernetes v1.21, the
&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/cron-jobs/"&gt;CronJob&lt;/a&gt; resource
reached general availability (GA). We've also substantially improved the
performance of CronJobs since Kubernetes v1.19, by implementing a new
controller.&lt;/p&gt;
&lt;p&gt;In Kubernetes v1.20 we launched a revised v2 controller for CronJobs,
initially as an alpha feature. Kubernetes 1.21 uses the newer controller by
default, and the CronJob resource itself is now GA (group version: &lt;code&gt;batch/v1&lt;/code&gt;).&lt;/p&gt;
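&lt;p&gt;A minimal CronJob against the GA group version might look like this (the name, schedule, and image are illustrative):&lt;/p&gt;

```yaml
apiVersion: batch/v1           # GA as of v1.21; previously batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report         # illustrative name
spec:
  schedule: "0 3 * * *"        # standard cron syntax: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox     # placeholder image
            command: ["date"]
```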
&lt;p&gt;In this article, we'll take you through the driving forces behind this new
development, give you a brief description of controller design for core
Kubernetes, and we'll outline what you will gain from this improved controller.&lt;/p&gt;</description></item><item><title>Kubernetes 1.21: Power to the Community</title><link>https://andygol-k8s.netlify.app/blog/2021/04/08/kubernetes-1-21-release-announcement/</link><pubDate>Thu, 08 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/08/kubernetes-1-21-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the release of Kubernetes 1.21, our first release of 2021! This release consists of 51 enhancements: 13 enhancements have graduated to stable, 16 enhancements are moving to beta, 20 enhancements are entering alpha, and 2 features have been deprecated.&lt;/p&gt;
&lt;p&gt;This release cycle, we saw a major shift in ownership of processes around the release team. We moved from a synchronous mode of communication, where we periodically asked the community for input, to a mode where the community opts in to contribute features and/or blogs to the release. These changes have resulted in an increase in collaboration and teamwork across the community. The result of all that is reflected in Kubernetes 1.21 having the highest number of features in recent times.&lt;/p&gt;</description></item><item><title>PodSecurityPolicy Deprecation: Past, Present, and Future</title><link>https://andygol-k8s.netlify.app/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/</link><pubDate>Tue, 06 Apr 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/</guid><description>&lt;div class="pageinfo pageinfo-primary"&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; &lt;em&gt;With the release of Kubernetes v1.25, PodSecurityPolicy has been removed.&lt;/em&gt;
&lt;em&gt;You can read more information about the removal of PodSecurityPolicy in the
&lt;a href="https://andygol-k8s.netlify.app/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes"&gt;Kubernetes 1.25 release notes&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week.
This starts the countdown to its removal, but doesn’t change anything else.
PodSecurityPolicy will continue to be fully functional for several more releases before being removed completely.
In the meantime, we are developing a replacement for PSP that covers key use cases more easily and sustainably.&lt;/p&gt;</description></item><item><title>The Evolution of Kubernetes Dashboard</title><link>https://andygol-k8s.netlify.app/blog/2021/03/09/the-evolution-of-kubernetes-dashboard/</link><pubDate>Tue, 09 Mar 2021 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2021/03/09/the-evolution-of-kubernetes-dashboard/</guid><description>&lt;p&gt;In October 2020, the Kubernetes Dashboard officially turned five. As main project maintainers, we can barely believe that so much time has passed since our very first commits to the project. However, looking back with a bit of nostalgia, we realize that quite a lot has happened since then. Now it’s due time to celebrate “our baby” with a short recap.&lt;/p&gt;
&lt;h2 id="how-it-all-began"&gt;How It All Began&lt;/h2&gt;
&lt;p&gt;The initial idea behind the Kubernetes Dashboard project was to provide a web interface for Kubernetes. We wanted to reflect the kubectl functionality through an intuitive web UI. The main benefit from using the UI is to be able to quickly see things that do not work as expected (monitoring and troubleshooting). Also, the Kubernetes Dashboard is a great starting point for users that are new to the Kubernetes ecosystem.&lt;/p&gt;</description></item><item><title>A Custom Kubernetes Scheduler to Orchestrate Highly Available Applications</title><link>https://andygol-k8s.netlify.app/blog/2020/12/21/writing-crl-scheduler/</link><pubDate>Mon, 21 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/21/writing-crl-scheduler/</guid><description>&lt;p&gt;As long as you're willing to follow the rules, deploying on Kubernetes and air travel can be quite pleasant. More often than not, things will &amp;quot;just work&amp;quot;. However, if one is interested in travelling with an alligator that must remain alive or scaling a database that must remain available, the situation is likely to become a bit more complicated. It may even be easier to build one's own plane or database for that matter. 
Travelling with reptiles aside, scaling a highly available stateful system is no trivial task.&lt;/p&gt;</description></item><item><title>Kubernetes 1.20: Pod Impersonation and Short-lived Volumes in CSI Drivers</title><link>https://andygol-k8s.netlify.app/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/</link><pubDate>Fri, 18 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/</guid><description>&lt;p&gt;Typically when a &lt;a href="https://github.com/container-storage-interface/spec/blob/baa71a34651e5ee6cb983b39c03097d7aa384278/spec.md"&gt;CSI&lt;/a&gt; driver mounts credentials such as secrets and certificates, it has to authenticate against storage providers to access the credentials. However, access to those credentials is controlled on the basis of the pods' identities rather than the CSI driver's identity. CSI drivers, therefore, need some way to retrieve the pod's service account token.&lt;/p&gt;
&lt;p&gt;Currently there are two suboptimal approaches to achieve this, either by granting CSI drivers the permission to use TokenRequest API or by reading tokens directly from the host filesystem.&lt;/p&gt;</description></item><item><title>Third Party Device Metrics Reaches GA</title><link>https://andygol-k8s.netlify.app/blog/2020/12/16/third-party-device-metrics-reaches-ga/</link><pubDate>Wed, 16 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/16/third-party-device-metrics-reaches-ga/</guid><description>&lt;p&gt;With Kubernetes 1.20, infrastructure teams who manage large scale Kubernetes clusters, are seeing the graduation of two exciting and long awaited features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Pod Resources API (introduced in 1.13) is finally graduating to GA. This allows Kubernetes plugins to obtain information about the node’s resource usage and assignment; for example: which pod/container consumes which device.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;DisableAcceleratorMetrics&lt;/code&gt; feature (introduced in 1.19) is graduating to beta and will be enabled by default. This removes device metrics reported by the kubelet in favor of the new plugin architecture.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Many of the features related to fundamental device support (device discovery, plugin, and monitoring) are reaching a strong level of stability.
Kubernetes users should see these features as stepping stones to enable more complex use cases (networking, scheduling, storage, etc.)!&lt;/p&gt;</description></item><item><title>Kubernetes 1.20: Granular Control of Volume Permission Changes</title><link>https://andygol-k8s.netlify.app/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/</link><pubDate>Mon, 14 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/</guid><description>&lt;p&gt;Kubernetes 1.20 brings two important beta features, allowing Kubernetes admins and users alike to have more adequate control over how volume permissions are applied when a volume is mounted inside a Pod.&lt;/p&gt;
&lt;h3 id="allow-users-to-skip-recursive-permission-changes-on-mount"&gt;Allow users to skip recursive permission changes on mount&lt;/h3&gt;
&lt;p&gt;Traditionally if your pod is running as a non-root user (&lt;a href="https://twitter.com/thockin/status/1333892204490735617"&gt;which you should&lt;/a&gt;), you must specify a &lt;code&gt;fsGroup&lt;/code&gt; inside the pod’s security context so that the volume can be readable and writable by the Pod. This requirement is covered in more detail in &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/"&gt;here&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Kubernetes 1.20: Kubernetes Volume Snapshot Moves to GA</title><link>https://andygol-k8s.netlify.app/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/</link><pubDate>Thu, 10 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/</guid><description>&lt;p&gt;The Kubernetes Volume Snapshot feature is now GA in Kubernetes v1.20. It was introduced as &lt;a href="https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/"&gt;alpha&lt;/a&gt; in Kubernetes v1.12, followed by a &lt;a href="https://kubernetes.io/blog/2019/01/17/update-on-volume-snapshot-alpha-for-kubernetes/"&gt;second alpha&lt;/a&gt; with breaking changes in Kubernetes v1.13, and promotion to &lt;a href="https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/"&gt;beta&lt;/a&gt; in Kubernetes 1.17. This blog post summarizes the changes releasing the feature from beta to GA.&lt;/p&gt;
&lt;h2 id="what-is-a-volume-snapshot"&gt;What is a volume snapshot?&lt;/h2&gt;
&lt;p&gt;Many storage systems (like Google Cloud Persistent Disks, Amazon Elastic Block Storage, and many on-premise storage systems) provide the ability to create a “snapshot” of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to rehydrate a new volume (pre-populated with the snapshot data) or to restore an existing volume to a previous state (represented by the snapshot).&lt;/p&gt;</description></item><item><title>Kubernetes 1.20: The Raddest Release</title><link>https://andygol-k8s.netlify.app/blog/2020/12/08/kubernetes-1-20-release-announcement/</link><pubDate>Tue, 08 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/08/kubernetes-1-20-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the release of Kubernetes 1.20, our third and final release of 2020! This release consists of 42 enhancements: 11 enhancements have graduated to stable, 15 enhancements are moving to beta, and 16 enhancements are entering alpha.&lt;/p&gt;
&lt;p&gt;The 1.20 release cycle returned to its normal cadence of 11 weeks following the previous extended release cycle. This is one of the most feature-dense releases in a while: the Kubernetes innovation cycle is still trending upward. This release has more alpha than stable enhancements, showing that there is still much to explore in the cloud native ecosystem.&lt;/p&gt;</description></item><item><title>GSoD 2020: Improving the API Reference Experience</title><link>https://andygol-k8s.netlify.app/blog/2020/12/04/gsod-2020-improving-api-reference-experience/</link><pubDate>Fri, 04 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/04/gsod-2020-improving-api-reference-experience/</guid><description>&lt;p&gt;&lt;em&gt;Editor's note: Better API references have been my goal since I joined Kubernetes docs three and a half years ago. Philippe has succeeded fantastically. More than a better API reference, though, Philippe embodied the best of the Kubernetes community in this project: excellence through collaboration, and a process that made the community itself better. Thanks, Google Season of Docs, for making Philippe's work possible. —Zach Corleissen&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The &lt;a href="https://developers.google.com/season-of-docs"&gt;Google Season of Docs&lt;/a&gt; project brings open source organizations and technical writers together to work closely on a specific documentation project.&lt;/p&gt;</description></item><item><title>Dockershim Deprecation FAQ</title><link>https://andygol-k8s.netlify.app/blog/2020/12/02/dockershim-faq/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/02/dockershim-faq/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Update&lt;/strong&gt;: There is a &lt;a href="https://andygol-k8s.netlify.app/blog/2022/02/17/dockershim-faq/"&gt;newer version&lt;/a&gt; of this article available.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This document goes over some frequently asked questions regarding the Dockershim
deprecation announced as a part of the Kubernetes v1.20 release. For more detail
on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
what that means, check out the blog post
&lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/02/dont-panic-kubernetes-and-docker/"&gt;Don't Panic: Kubernetes and Docker&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also, you can read &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/"&gt;check whether Dockershim removal affects you&lt;/a&gt; to find out whether it does.&lt;/p&gt;</description></item><item><title>Don't Panic: Kubernetes and Docker</title><link>https://andygol-k8s.netlify.app/blog/2020/12/02/dont-panic-kubernetes-and-docker/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/12/02/dont-panic-kubernetes-and-docker/</guid><description>&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; &lt;em&gt;Kubernetes support for Docker via &lt;code&gt;dockershim&lt;/code&gt; is now removed.
For more information, read the &lt;a href="https://andygol-k8s.netlify.app/dockershim"&gt;removal FAQ&lt;/a&gt;.
You can also discuss the deprecation via a dedicated &lt;a href="https://github.com/kubernetes/kubernetes/issues/106917"&gt;GitHub issue&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Kubernetes is &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation"&gt;deprecating
Docker&lt;/a&gt;
as a container runtime after v1.20.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;You do not need to panic. It’s not as dramatic as it sounds.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;TL;DR Docker as an underlying runtime is being deprecated in favor of runtimes
that use the &lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/"&gt;Container Runtime Interface (CRI)&lt;/a&gt;
created for Kubernetes. Docker-produced images will continue to work in your
cluster with all runtimes, as they always have.&lt;/p&gt;</description></item><item><title>Remembering Dan Kohn</title><link>https://andygol-k8s.netlify.app/blog/2020/11/02/remembering-dan-kohn/</link><pubDate>Mon, 02 Nov 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/11/02/remembering-dan-kohn/</guid><description>&lt;p&gt;Dan Kohn was instrumental in getting the Kubernetes and CNCF community to where it is today.
He shared our values, motivations, enthusiasm, community spirit, and helped the Kubernetes community to become the best that it could be. Dan loved getting people together to solve problems big and small. He enabled people to grow their individual scope in the community which often helped launch their career in open source software.&lt;/p&gt;
&lt;p&gt;Dan built a coalition around the nascent Kubernetes project and turned that into a cornerstone to build the larger cloud native space. He loved challenges, especially ones where the payoff was great like building worldwide communities, spreading the love of open source, and helping diverse, underprivileged communities and students to get a head start in technology.&lt;/p&gt;</description></item><item><title>Announcing the 2020 Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2020/10/12/steering-committee-results-2020/</link><pubDate>Mon, 12 Oct 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/10/12/steering-committee-results-2020/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/kubernetes/community/tree/master/events/elections/2020"&gt;2020 Steering Committee Election&lt;/a&gt; is now complete. In 2019, the committee arrived at its final allocation of 7 seats, 3 of which were up for election in 2020. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p&gt;
&lt;p&gt;This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;charter&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Contributing to the Development Guide</title><link>https://andygol-k8s.netlify.app/blog/2020/10/01/contributing-to-the-development-guide/</link><pubDate>Thu, 01 Oct 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/10/01/contributing-to-the-development-guide/</guid><description>&lt;p&gt;When most people think of contributing to an open source project, I suspect they probably think of
contributing code changes, new features, and bug fixes. As a software engineer and a long-time open
source user and contributor, that's certainly what I thought. Although I have written a good quantity
of documentation in different workflows, the massive size of the Kubernetes community was a new kind
of &amp;quot;client.&amp;quot; I just didn't know what to expect when Google asked my compatriots and me at
&lt;a href="https://lionswaycontent.com/"&gt;Lion's Way&lt;/a&gt; to make much-needed updates to the Kubernetes Development Guide.&lt;/p&gt;</description></item><item><title>GSoC 2020 - Building operators for cluster addons</title><link>https://andygol-k8s.netlify.app/blog/2020/09/16/gsoc20-building-operators-for-cluster-addons/</link><pubDate>Wed, 16 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/09/16/gsoc20-building-operators-for-cluster-addons/</guid><description>&lt;h1 id="introduction"&gt;Introduction&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://summerofcode.withgoogle.com/"&gt;Google Summer of Code&lt;/a&gt; is a global program that is geared towards introducing students to open source. Students are matched with open-source organizations to work with them for three months during the summer.&lt;/p&gt;
&lt;p&gt;My name is Somtochi Onyekwere, from the Federal University of Technology, Owerri (Nigeria). This year I was given the opportunity to work with Kubernetes (under the CNCF organization), which led to an amazing summer spent learning, contributing and interacting with the community.&lt;/p&gt;</description></item><item><title>Introducing Structured Logs</title><link>https://andygol-k8s.netlify.app/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/</link><pubDate>Fri, 04 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/</guid><description>&lt;p&gt;Logs are an essential aspect of observability and a critical tool for debugging. But Kubernetes logs have traditionally been unstructured strings, making any automated parsing difficult and any downstream processing, analysis, or querying challenging to do reliably.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.19, we are adding support for structured logs, which natively support (key, value) pairs and object references. We have also updated many logging calls such that over 99% of logging volume in a typical deployment is now migrated to the structured format.&lt;/p&gt;</description></item><item><title>Warning: Helpful Warnings Ahead</title><link>https://andygol-k8s.netlify.app/blog/2020/09/03/warnings/</link><pubDate>Thu, 03 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/09/03/warnings/</guid><description>&lt;p&gt;As Kubernetes maintainers, we're always looking for ways to improve usability while preserving compatibility.
As we develop features, triage bugs, and answer support questions, we accumulate information that would be helpful for Kubernetes users to know.
In the past, sharing that information was limited to out-of-band methods like release notes, announcement emails, documentation, and blog posts.
Unless someone knew to seek out that information and managed to find it, they would not benefit from it.&lt;/p&gt;</description></item><item><title>Scaling Kubernetes Networking With EndpointSlices</title><link>https://andygol-k8s.netlify.app/blog/2020/09/02/scaling-kubernetes-networking-with-endpointslices/</link><pubDate>Wed, 02 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/09/02/scaling-kubernetes-networking-with-endpointslices/</guid><description>&lt;p&gt;EndpointSlices are an exciting new API that provides a scalable and extensible alternative to the Endpoints API. EndpointSlices track IP addresses, ports, readiness, and topology information for Pods backing a Service.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.19 this feature is enabled by default with kube-proxy reading from &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/endpoint-slices/"&gt;EndpointSlices&lt;/a&gt; instead of Endpoints. Although this will mostly be an invisible change, it should result in noticeable scalability improvements in large clusters. It also enables significant new features in future Kubernetes releases like Topology Aware Routing.&lt;/p&gt;</description></item><item><title>Ephemeral volumes with storage capacity tracking: EmptyDir on steroids</title><link>https://andygol-k8s.netlify.app/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/</link><pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/</guid><description>&lt;p&gt;Some applications need additional storage but don't care whether that
data is stored persistently across restarts. For example, caching
services are often limited by memory size and can move infrequently
used data into storage that is slower than memory with little impact
on overall performance. Other applications expect some read-only input
data to be present in files, like configuration data or secret keys.&lt;/p&gt;
&lt;p&gt;Kubernetes already supports several kinds of such &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/ephemeral-volumes/"&gt;ephemeral
volumes&lt;/a&gt;, but the
functionality of those is limited to what is implemented inside
Kubernetes.&lt;/p&gt;</description></item><item><title>Increasing the Kubernetes Support Window to One Year</title><link>https://andygol-k8s.netlify.app/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/</link><pubDate>Mon, 31 Aug 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/</guid><description>&lt;p&gt;Starting with Kubernetes 1.19, the support window for Kubernetes versions &lt;a href="https://github.com/kubernetes/enhancements/issues/1498"&gt;will increase from 9 months to one year&lt;/a&gt;. The longer support window is intended to allow organizations to perform major upgrades at a time of the year that works the best for them.&lt;/p&gt;
&lt;p&gt;This is a big change. For many years, the Kubernetes project has delivered a new minor release (e.g.: 1.13 or 1.14) every 3 months. The project provides bugfix support via patch releases (e.g.: 1.13.Y) for three parallel branches of the codebase. Combined, this led to each minor release (e.g.: 1.13) having a patch release stream of support for approximately 9 months. In the end, a cluster operator had to upgrade at least every 9 months to remain supported.&lt;/p&gt;</description></item><item><title>Kubernetes 1.19: Accentuate the Paw-sitive</title><link>https://andygol-k8s.netlify.app/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/</link><pubDate>Wed, 26 Aug 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/</guid><description>&lt;p&gt;Finally, we have arrived with Kubernetes 1.19, the second release for 2020, and by far the longest release cycle lasting 20 weeks in total. It consists of 34 enhancements: 10 enhancements are moving to stable, 15 enhancements in beta, and 9 enhancements in alpha.&lt;/p&gt;
&lt;p&gt;The 1.19 release was quite different from a regular release due to COVID-19, the George Floyd protests, and several other global events that we experienced as a release team. Due to these events, we made the decision to adjust our timeline and allow the SIGs, Working Groups, and contributors more time to get things done. The extra time also allowed for people to take time to focus on their lives outside of the Kubernetes project, and ensure their mental wellbeing was in a good place.&lt;/p&gt;</description></item><item><title>Moving Forward From Beta</title><link>https://andygol-k8s.netlify.app/blog/2020/08/21/moving-forward-from-beta/</link><pubDate>Fri, 21 Aug 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/08/21/moving-forward-from-beta/</guid><description>&lt;p&gt;In Kubernetes, features follow a defined
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/feature-gates/#feature-stages"&gt;lifecycle&lt;/a&gt;.
First, as a twinkle in the eye of an interested developer. Maybe, then,
sketched in online discussions, drawn on the online equivalent of a cafe
napkin. This rough work typically becomes a
&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/0000-kep-process/README.md#kubernetes-enhancement-proposal-process"&gt;Kubernetes Enhancement Proposal&lt;/a&gt; (KEP), and
from there it usually turns into code.&lt;/p&gt;
&lt;p&gt;For Kubernetes v1.20 and onwards, we're focusing on helping that code
graduate into stable features.&lt;/p&gt;
&lt;p&gt;That lifecycle I mentioned runs as follows:&lt;/p&gt;</description></item><item><title>Introducing Hierarchical Namespaces</title><link>https://andygol-k8s.netlify.app/blog/2020/08/14/introducing-hierarchical-namespaces/</link><pubDate>Fri, 14 Aug 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/08/14/introducing-hierarchical-namespaces/</guid><description>&lt;p&gt;Safely hosting large numbers of users on a single Kubernetes cluster has always
been a troublesome task. One key reason for this is that different organizations
use Kubernetes in different ways, and so no one tenancy model is likely to suit
everyone. Instead, Kubernetes offers you building blocks to create your own
tenancy solution, such as Role Based Access Control (RBAC) and NetworkPolicies;
the better these building blocks, the easier it is to safely build a multitenant
cluster.&lt;/p&gt;</description></item><item><title>Physics, politics and Pull Requests: the Kubernetes 1.18 release interview</title><link>https://andygol-k8s.netlify.app/blog/2020/08/03/physics-politics-and-pull-requests-the-kubernetes-1.18-release-interview/</link><pubDate>Mon, 03 Aug 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/08/03/physics-politics-and-pull-requests-the-kubernetes-1.18-release-interview/</guid><description>&lt;p&gt;The start of the COVID-19 pandemic couldn't delay the release of Kubernetes 1.18, but unfortunately &lt;a href="https://github.com/kubernetes/utils/issues/141"&gt;a small bug&lt;/a&gt; could — thankfully only by a day. This was the last cat that needed to be herded by 1.18 release lead &lt;a href="https://twitter.com/alejandrox135"&gt;Jorge Alarcón&lt;/a&gt; before the &lt;a href="https://kubernetes.io/blog/2020/03/25/kubernetes-1-18-release-announcement/"&gt;release on March 25&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;One of the best parts about co-hosting the weekly &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt; is the conversations we have with the people who help bring Kubernetes releases together. &lt;a href="https://kubernetespodcast.com/episode/096-kubernetes-1.18/"&gt;Jorge was our guest on episode 96&lt;/a&gt; back in March, and &lt;a href="https://kubernetes.io/blog/2020/07/27/music-and-math-the-kubernetes-1.17-release-interview/"&gt;just like last week&lt;/a&gt; we are delighted to bring you the transcript of this interview.&lt;/p&gt;</description></item><item><title>Music and math: the Kubernetes 1.17 release interview</title><link>https://andygol-k8s.netlify.app/blog/2020/07/27/music-and-math-the-kubernetes-1.17-release-interview/</link><pubDate>Mon, 27 Jul 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/07/27/music-and-math-the-kubernetes-1.17-release-interview/</guid><description>&lt;p&gt;Every time the Kubernetes release train stops at the station, we like to ask the release lead to take a moment to reflect on their experience. That takes the form of an interview on the weekly &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt; that I co-host with &lt;a href="https://twitter.com/craigbox"&gt;Craig Box&lt;/a&gt;. 
If you're not familiar with the show, every week we summarise the news in the Cloud Native ecosystem, and have an insightful discussion with an interesting guest from the broader Kubernetes community.&lt;/p&gt;</description></item><item><title>SIG-Windows Spotlight</title><link>https://andygol-k8s.netlify.app/blog/2020/06/30/sig-windows-spotlight-2020/</link><pubDate>Tue, 30 Jun 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/06/30/sig-windows-spotlight-2020/</guid><description>&lt;p&gt;&lt;em&gt;This post tells the story of how Kubernetes contributors work together to provide a container orchestrator that works for both Linux and Windows.&lt;/em&gt;&lt;/p&gt;
&lt;img alt="Image of a computer with Kubernetes logo" width="30%" src="KubernetesComputer_transparent.png"&gt;
&lt;p&gt;Most people who are familiar with Kubernetes are probably used to associating it with Linux. The connection makes sense, since Kubernetes ran on Linux from its very beginning. However, many teams and organizations working on adopting Kubernetes need the ability to orchestrate containers on Windows. Since the release of Docker and the rise in popularity of containers, there have been efforts both from the community and from Microsoft itself to make container technology as accessible in Windows systems as it is in Linux systems.&lt;/p&gt;</description></item><item><title>Working with Terraform and Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2020/06/working-with-terraform-and-kubernetes/</link><pubDate>Mon, 29 Jun 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/06/working-with-terraform-and-kubernetes/</guid><description>&lt;p&gt;Maintaining Kubestack, an open-source &lt;a href="https://www.kubestack.com/lp/terraform-gitops-framework"&gt;Terraform GitOps Framework&lt;/a&gt; for Kubernetes, I unsurprisingly spend a lot of time working with Terraform and Kubernetes. Kubestack provisions managed Kubernetes services like AKS, EKS and GKE using Terraform but also integrates cluster services from Kustomize bases into the GitOps workflow. Think of cluster services as everything that's required on your Kubernetes cluster, before you can deploy application workloads.&lt;/p&gt;
&lt;p&gt;Hashicorp recently announced &lt;a href="https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-kubernetes-provider-for-hashicorp-terraform/"&gt;better integration between Terraform and Kubernetes&lt;/a&gt;. I took this as an opportunity to give an overview of how Terraform can be used with Kubernetes today and what to be aware of.&lt;/p&gt;</description></item><item><title>A Better Docs UX With Docsy</title><link>https://andygol-k8s.netlify.app/blog/2020/06/better-docs-ux-with-docsy/</link><pubDate>Mon, 15 Jun 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/06/better-docs-ux-with-docsy/</guid><description>&lt;p&gt;&lt;em&gt;Editor's note: Zach is one of the chairs for the Kubernetes documentation special interest group (SIG Docs).&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I'm pleased to announce that the &lt;a href="https://kubernetes.io"&gt;Kubernetes website&lt;/a&gt; now features the &lt;a href="https://docsy.dev"&gt;Docsy Hugo theme&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Docsy theme improves the site's organization and navigability, and opens a path to improved API references. After over 4 years with few meaningful UX improvements, Docsy implements some best practices for technical content. The theme makes the Kubernetes site easier to read and makes individual pages easier to navigate. It gives the site a much-needed facelift.&lt;/p&gt;</description></item><item><title>Supporting the Evolving Ingress Specification in Kubernetes 1.18</title><link>https://andygol-k8s.netlify.app/blog/2020/06/05/supporting-the-evolving-ingress-specification-in-kubernetes-1.18/</link><pubDate>Fri, 05 Jun 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/06/05/supporting-the-evolving-ingress-specification-in-kubernetes-1.18/</guid><description>&lt;p&gt;Earlier this year, the Kubernetes team released &lt;a href="https://kubernetes.io/blog/2020/03/25/kubernetes-1-18-release-announcement/"&gt;Kubernetes 1.18&lt;/a&gt;, which extended Ingress. In this blog post, we’ll walk through what’s new in the new Ingress specification, what it means for your applications, and how to upgrade to an ingress controller that supports this new specification.&lt;/p&gt;
&lt;h3 id="what-is-kubernetes-ingress"&gt;What is Kubernetes Ingress&lt;/h3&gt;
&lt;p&gt;When deploying your applications in Kubernetes, one of the first challenges many people encounter is how to get traffic into their cluster. &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Kubernetes ingress&lt;/a&gt; is a collection of routing rules that govern how external users access services running in a Kubernetes cluster. There are &lt;a href="https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d"&gt;three general approaches&lt;/a&gt; for exposing your application:&lt;/p&gt;</description></item><item><title>K8s KPIs with Kuberhealthy</title><link>https://andygol-k8s.netlify.app/blog/2020/05/29/k8s-kpis-with-kuberhealthy/</link><pubDate>Fri, 29 May 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/05/29/k8s-kpis-with-kuberhealthy/</guid><description>&lt;h3 id="building-onward-from-kuberhealthy-v2-0-0"&gt;Building Onward from Kuberhealthy v2.0.0&lt;/h3&gt;
&lt;p&gt;Last November at KubeCon San Diego 2019, we announced the release of
&lt;a href="https://www.youtube.com/watch?v=aAJlWhBtzqY"&gt;Kuberhealthy 2.0.0&lt;/a&gt; - transforming Kuberhealthy into a Kubernetes operator
for synthetic monitoring. This new ability granted developers the means to create their own Kuberhealthy check
containers to synthetically monitor their applications and clusters. The community was quick to adopt this new feature and we're grateful for everyone who implemented and tested Kuberhealthy 2.0.0 in their clusters. Thanks to all of you who reported
issues and contributed to discussions on the #kuberhealthy Slack channel. We quickly set to work to address all your feedback
with a newer version of Kuberhealthy. Additionally, we created a guide on how to easily install and use Kuberhealthy in order to capture some helpful synthetic &lt;a href="https://kpi.org/KPI-Basics"&gt;KPIs&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>My exciting journey into Kubernetes’ history</title><link>https://andygol-k8s.netlify.app/blog/2020/05/my-exciting-journey-into-kubernetes-history/</link><pubDate>Thu, 28 May 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/05/my-exciting-journey-into-kubernetes-history/</guid><description>&lt;p&gt;&lt;em&gt;Editor's note: Sascha is part of &lt;a href="https://github.com/kubernetes/sig-release"&gt;SIG Release&lt;/a&gt; and is working on many other
container runtime-related topics. Feel free to reach out to him on
Twitter &lt;a href="https://twitter.com/saschagrunert"&gt;@saschagrunert&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;blockquote&gt;
&lt;p&gt;A story of data science-ing 90,000 GitHub issues and pull requests by using
Kubeflow, TensorFlow, Prow and a fully automated CI/CD pipeline.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#introduction"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#getting-the-data"&gt;Getting the Data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#exploring-the-data"&gt;Exploring the Data&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#labels-labels-labels"&gt;Labels, Labels, Labels&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#building-the-machine-learning-model"&gt;Building the Machine Learning Model&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#doing-some-first-natural-language-processing-nlp"&gt;Doing some first Natural Language Processing (NLP)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#creating-the-multi-layer-perceptron-mlp-model"&gt;Creating the Multi-Layer Perceptron (MLP) Model&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#training-the-model"&gt;Training the Model&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#a-first-prediction"&gt;A first Prediction&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#automate-everything"&gt;Automate Everything&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#automatic-labeling-of-new-prs"&gt;Automatic Labeling of new PRs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#summary"&gt;Summary&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id="introduction"&gt;Introduction&lt;/h1&gt;
&lt;p&gt;There is truly no silver bullet for choosing the right steps when working in
the field of data science. Most data scientists have their own custom workflow, which
could be more or less automated, depending on their area of work. Using
&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; can be a tremendous enhancement when trying to automate
workflows on a large scale. In this blog post, I would like to take you on my
journey of doing data science while integrating the overall workflow into
Kubernetes.&lt;/p&gt;</description></item><item><title>An Introduction to the K8s-Infrastructure Working Group</title><link>https://andygol-k8s.netlify.app/blog/2020/05/27/an-introduction-to-the-k8s-infrastructure-working-group/</link><pubDate>Wed, 27 May 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/05/27/an-introduction-to-the-k8s-infrastructure-working-group/</guid><description>&lt;p&gt;&lt;strong&gt;Author&lt;/strong&gt;: &lt;a href="https://twitter.com/kiran_oliver"&gt;Kiran &amp;quot;Rin&amp;quot; Oliver&lt;/a&gt; Storyteller, Kubernetes Upstream Marketing Team&lt;/p&gt;
&lt;h1 id="an-introduction-to-the-k8s-infrastructure-working-group"&gt;An Introduction to the K8s-Infrastructure Working Group&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;Welcome to part one of a new series introducing the K8s-Infrastructure working group!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When Kubernetes was formed in 2014, Google undertook the task of building and maintaining the infrastructure necessary for keeping the project running smoothly. The tools themselves were open source, but the Google Cloud Platform project used to run the infrastructure was internal-only, preventing contributors from being able to help out. In August 2018, Google granted the Cloud Native Computing Foundation &lt;a href="https://cloud.google.com/blog/products/gcp/google-cloud-grants-9m-in-credits-for-the-operation-of-the-kubernetes-project"&gt;$9M in credits for the operation of Kubernetes&lt;/a&gt;. The sentiment behind this was that a project such as Kubernetes should be both maintained and operated by the community itself rather than by a single vendor.&lt;/p&gt;</description></item><item><title>WSL+Docker: Kubernetes on the Windows Desktop</title><link>https://andygol-k8s.netlify.app/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/</link><pubDate>Thu, 21 May 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/</guid><description>&lt;h1 id="introduction"&gt;Introduction&lt;/h1&gt;
&lt;p&gt;New to Windows 10 and WSL2, or new to Docker and Kubernetes? Welcome to this blog post, where we will install Kubernetes from scratch with Kubernetes in Docker (&lt;a href="https://kind.sigs.k8s.io/"&gt;KinD&lt;/a&gt;) and &lt;a href="https://minikube.sigs.k8s.io/docs/"&gt;Minikube&lt;/a&gt;.&lt;/p&gt;
&lt;h1 id="why-kubernetes-on-windows"&gt;Why Kubernetes on Windows?&lt;/h1&gt;
&lt;p&gt;For the last few years, Kubernetes has become the de facto standard platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in cloud environments (public, private, or hybrid) or on bare metal, there is still a need to deploy and run Kubernetes locally, for example on a developer's workstation.&lt;/p&gt;</description></item><item><title>How Docs Handle Third Party and Dual Sourced Content</title><link>https://andygol-k8s.netlify.app/blog/2020/05/third-party-dual-sourced-content/</link><pubDate>Wed, 06 May 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/05/third-party-dual-sourced-content/</guid><description>&lt;p&gt;&lt;em&gt;Editor's note: Zach is one of the chairs for the Kubernetes documentation special interest group (SIG Docs).&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Late last summer, SIG Docs started a community conversation about third party content in Kubernetes docs. This conversation became a &lt;a href="https://github.com/kubernetes/enhancements/pull/1327"&gt;Kubernetes Enhancement Proposal&lt;/a&gt; (KEP) and, after five months of review and comment, SIG Architecture approved the KEP as a &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/"&gt;content guide&lt;/a&gt; for Kubernetes docs.&lt;/p&gt;
&lt;p&gt;Here's how Kubernetes docs handle third party content now:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Links to active content in the Kubernetes project (projects in the kubernetes and kubernetes-sigs GitHub orgs) are always allowed.&lt;/p&gt;</description></item><item><title>Introducing PodTopologySpread</title><link>https://andygol-k8s.netlify.app/blog/2020/05/Introducing-PodTopologySpread/</link><pubDate>Tue, 05 May 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/05/Introducing-PodTopologySpread/</guid><description>&lt;p&gt;Managing Pod distribution across a cluster is hard. The well-known Kubernetes
features for Pod affinity and anti-affinity allow some control of Pod placement
in different topologies. However, these features only solve part of the Pod
distribution use cases: either place unlimited Pods in a single topology, or
disallow two Pods from co-locating in the same topology. In between these two
extreme cases, there is a common need to distribute Pods evenly across
topologies, so as to achieve better cluster utilization and high availability of
applications.&lt;/p&gt;</description></item><item><title>Two-phased Canary Rollout with Open Source Gloo</title><link>https://andygol-k8s.netlify.app/blog/2020/04/Two-phased-Canary-Rollout-With-Gloo/</link><pubDate>Wed, 22 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/Two-phased-Canary-Rollout-With-Gloo/</guid><description>&lt;p&gt;&lt;strong&gt;Author:&lt;/strong&gt; Rick Ducott | &lt;a href="https://github.com/rickducott/"&gt;GitHub&lt;/a&gt; | &lt;a href="https://twitter.com/ducott"&gt;Twitter&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Every day, my colleagues and I are talking to platform owners, architects, and engineers who are using &lt;a href="https://github.com/solo-io/gloo"&gt;Gloo&lt;/a&gt; as an API gateway
to expose their applications to end users. These applications may span legacy monoliths, microservices, managed cloud services, and Kubernetes
clusters. Fortunately, Gloo makes it easy to set up routes to manage, secure, and observe application traffic while
supporting a flexible deployment architecture to meet the varying production needs of our users.&lt;/p&gt;</description></item><item><title>Cluster API v1alpha3 Delivers New Features and an Improved User Experience</title><link>https://andygol-k8s.netlify.app/blog/2020/04/21/cluster-api-v1alpha3-delivers-new-features-and-an-improved-user-experience/</link><pubDate>Tue, 21 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/21/cluster-api-v1alpha3-delivers-new-features-and-an-improved-user-experience/</guid><description>&lt;img src="https://andygol-k8s.netlify.app/images/blog/2020-04-21-Cluster-API-v1alpha3-Delivers-New-Features-and-an-Improved-User-Experience/kubernetes-cluster-logos_final-02.svg" align="right" width="25%" alt="Cluster API Logo: Turtles All The Way Down"&gt;
&lt;p&gt;The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Following the v1alpha2 release in October 2019, many members of the Cluster API community met in San Francisco, California, to plan the next release. The project had just gone through a major transformation, delivering a new architecture that promised to make the project easier for users to adopt, and faster for the community to build. Over the course of those two days, we found our common goals: To implement the features critical to managing production clusters, to make its user experience more intuitive, and to make it a joy to develop.&lt;/p&gt;</description></item><item><title>How Kubernetes contributors are building a better communication process</title><link>https://andygol-k8s.netlify.app/blog/2020/04/21/contributor-communication/</link><pubDate>Tue, 21 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/21/contributor-communication/</guid><description>&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Perhaps we just need to use a different word. We may need to use community development or project advocacy as a word in the open source realm as opposed to marketing, and perhaps then people will realize that they need to do it.&amp;quot;
~ &lt;a href="https://todogroup.org/www.linkedin.com/in/nithyaruff/"&gt;&lt;em&gt;Nithya Ruff&lt;/em&gt;&lt;/a&gt; (from &lt;a href="https://todogroup.org/guides/marketing-open-source-projects/"&gt;&lt;em&gt;TODO Group&lt;/em&gt;&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A common way to participate in the Kubernetes contributor community is
to be everywhere.&lt;/p&gt;
&lt;p&gt;We have an active &lt;a href="https://slack.k8s.io"&gt;Slack&lt;/a&gt;, many mailing lists, Twitter account(s), and
dozens of community-driven podcasts and newsletters that highlight all
end-user, contributor, and ecosystem topics. And to add on to that, we also have &lt;a href="http://github.com/kubernetes/community"&gt;repositories of amazing documentation&lt;/a&gt;, tons of &lt;a href="https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&amp;ctz=America%2FLos_Angeles"&gt;meetings&lt;/a&gt; that drive the project forward, and &lt;a href="https://www.youtube.com/watch?v=yqB_le-N6EE"&gt;recorded code deep dives&lt;/a&gt;. All of this information is incredibly valuable,
but it can be too much.&lt;/p&gt;</description></item><item><title>API Priority and Fairness Alpha</title><link>https://andygol-k8s.netlify.app/blog/2020/04/06/kubernetes-1-18-feature-api-priority-and-fairness-alpha/</link><pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/06/kubernetes-1-18-feature-api-priority-and-fairness-alpha/</guid><description>&lt;p&gt;This blog describes “API Priority And Fairness”, a new alpha feature in Kubernetes 1.18. API Priority And Fairness permits cluster administrators to divide the concurrency of the control plane into different weighted priority levels. Every request arriving at a kube-apiserver will be categorized into one of the priority levels and get its fair share of the control plane’s throughput.&lt;/p&gt;
&lt;h2 id="what-problem-does-this-solve"&gt;What problem does this solve?&lt;/h2&gt;
&lt;p&gt;Today the apiserver has a simple mechanism for protecting itself against CPU and memory overloads: max-in-flight limits for mutating and for readonly requests. Apart from the distinction between mutating and readonly, no other distinctions are made among requests; consequently, there can be undesirable scenarios where one subset of the requests crowds out other requests.&lt;/p&gt;</description></item><item><title>Introducing Windows CSI support alpha for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2020/04/03/kubernetes-1-18-feature-windows-csi-support-alpha/</link><pubDate>Fri, 03 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/03/kubernetes-1-18-feature-windows-csi-support-alpha/</guid><description>&lt;p&gt;&lt;em&gt;The alpha version of &lt;a href="https://github.com/kubernetes-csi/csi-proxy"&gt;CSI Proxy&lt;/a&gt; for Windows is being released with Kubernetes 1.18. CSI proxy enables CSI Drivers on Windows by allowing containers in Windows to perform privileged storage operations.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;Container Storage Interface (CSI) for Kubernetes went GA in the Kubernetes 1.13 release. CSI has become the standard for exposing block and file storage to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It enables third-party storage providers to write and deploy plugins without the need to alter the core Kubernetes codebase. All new storage features will utilize CSI, therefore it is important to get CSI drivers to work on Windows.&lt;/p&gt;</description></item><item><title>Improvements to the Ingress API in Kubernetes 1.18</title><link>https://andygol-k8s.netlify.app/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/</link><pubDate>Thu, 02 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/</guid><description>&lt;p&gt;The Ingress API in Kubernetes has enabled a large number of controllers to provide simple and powerful ways to manage inbound network traffic to Kubernetes workloads. In Kubernetes 1.18, we've made 3 significant additions to this API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A new &lt;code&gt;pathType&lt;/code&gt; field that can specify how Ingress paths should be matched.&lt;/li&gt;
&lt;li&gt;A new &lt;code&gt;IngressClass&lt;/code&gt; resource that can specify how Ingresses should be implemented by controllers.&lt;/li&gt;
&lt;li&gt;Support for wildcards in hostnames.&lt;/li&gt;
&lt;/ul&gt;
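&lt;p&gt;As an illustrative sketch (not from the post itself; names such as &lt;code&gt;example-ingress&lt;/code&gt;, &lt;code&gt;external-lb&lt;/code&gt;, and &lt;code&gt;app-service&lt;/code&gt; are made up), the three additions come together in one manifest:&lt;/p&gt;

```yaml
# Kubernetes 1.18 Ingress sketch showing the three new additions.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: external-lb   # references an IngressClass resource
  rules:
  - host: "*.example.com"         # wildcard hostname support
    http:
      paths:
      - path: /app
        pathType: Prefix          # the new pathType field
        backend:
          serviceName: app-service
          servicePort: 80
```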
&lt;h2 id="better-path-matching-with-path-types"&gt;Better Path Matching With Path Types&lt;/h2&gt;
&lt;p&gt;The new concept of a path type allows you to specify how a path should be matched. There are three supported types:&lt;/p&gt;</description></item><item><title>Kubernetes 1.18 Feature Server-side Apply Beta 2</title><link>https://andygol-k8s.netlify.app/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/</link><pubDate>Wed, 01 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/</guid><description>&lt;h2 id="what-is-server-side-apply"&gt;What is Server-side Apply?&lt;/h2&gt;
&lt;p&gt;Server-side Apply is an important effort to migrate “kubectl apply” to the apiserver. It was started in 2018 by the Apply working group.&lt;/p&gt;
&lt;p&gt;The use of kubectl to declaratively apply resources has exposed the following challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;One needs to either use the kubectl Go code or shell out to the kubectl binary.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Strategic merge-patch, the patch format used by kubectl, grew organically and was challenging to fix while maintaining compatibility with various api-server versions.&lt;/p&gt;</description></item><item><title>Kubernetes Topology Manager Moves to Beta - Align Up!</title><link>https://andygol-k8s.netlify.app/blog/2020/04/01/kubernetes-1-18-feature-topology-manager-beta/</link><pubDate>Wed, 01 Apr 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/04/01/kubernetes-1-18-feature-topology-manager-beta/</guid><description>&lt;p&gt;This blog post describes the &lt;strong&gt;&lt;code&gt;TopologyManager&lt;/code&gt;&lt;/strong&gt;, a beta feature of Kubernetes in release 1.18. The &lt;strong&gt;&lt;code&gt;TopologyManager&lt;/code&gt;&lt;/strong&gt; feature enables NUMA alignment of CPUs and peripheral devices (such as SR-IOV VFs and GPUs), allowing your workload to run in an environment optimized for low-latency.&lt;/p&gt;
&lt;p&gt;Prior to the introduction of the &lt;strong&gt;&lt;code&gt;TopologyManager&lt;/code&gt;&lt;/strong&gt;, the CPU and Device Manager would make resource allocation decisions independent of each other. This could result in undesirable allocations on multi-socket systems, causing degraded performance on latency critical applications. With the introduction of the &lt;strong&gt;&lt;code&gt;TopologyManager&lt;/code&gt;&lt;/strong&gt;, we now have a way to avoid this.&lt;/p&gt;</description></item><item><title>Kubernetes 1.18: Fit &amp; Finish</title><link>https://andygol-k8s.netlify.app/blog/2020/03/25/kubernetes-1-18-release-announcement/</link><pubDate>Wed, 25 Mar 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/03/25/kubernetes-1-18-release-announcement/</guid><description>&lt;p&gt;We're pleased to announce the delivery of Kubernetes 1.18, our first release of 2020! Kubernetes 1.18 consists of 38 enhancements: 15 enhancements are moving to stable, 11 enhancements in beta, and 12 enhancements in alpha.&lt;/p&gt;
&lt;p&gt;Kubernetes 1.18 is a &amp;quot;fit and finish&amp;quot; release. Significant work has gone into improving beta and stable features to ensure users have a better experience. An equal effort has gone into adding new developments and exciting new features that promise to enhance the user experience even more.
Having almost as many enhancements in alpha, beta, and stable is a great achievement. It shows the tremendous effort made by the community on improving the reliability of Kubernetes as well as continuing to expand its existing functionality.&lt;/p&gt;</description></item><item><title>Join SIG Scalability and Learn Kubernetes the Hard Way</title><link>https://andygol-k8s.netlify.app/blog/2020/03/19/join-sig-scalability/</link><pubDate>Thu, 19 Mar 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/03/19/join-sig-scalability/</guid><description>&lt;p&gt;Contributing to SIG Scalability is a great way to learn Kubernetes in all its depth and breadth, and the team would love to have you &lt;a href="https://github.com/kubernetes/community/tree/master/sig-scalability#scalability-special-interest-group"&gt;join as a contributor&lt;/a&gt;. I took a look at the value of learning the hard way and interviewed the current SIG chairs to give you an idea of what contribution feels like.&lt;/p&gt;
&lt;h2 id="the-value-of-learning-the-hard-way"&gt;The value of Learning The Hard Way&lt;/h2&gt;
&lt;p&gt;There is a belief in the software development community that pushes for the most challenging and rigorous possible method of learning a new language or system. These tend to go by the moniker of &amp;quot;Learn __ the Hard Way.&amp;quot; Examples abound: Learn Code the Hard Way, Learn Python the Hard Way, and many others originating with Zed Shaw's courses on the topic.&lt;/p&gt;</description></item><item><title>Kong Ingress Controller and Service Mesh: Setting up Ingress to Istio on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2020/03/18/kong-ingress-controller-and-istio-service-mesh/</link><pubDate>Wed, 18 Mar 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/03/18/kong-ingress-controller-and-istio-service-mesh/</guid><description>&lt;p&gt;Kubernetes has become the de facto way to orchestrate containers and the services within them. But how do we give services outside our cluster access to what is within? Kubernetes comes with the Ingress API object that manages external access to services within a cluster.&lt;/p&gt;
&lt;p&gt;Ingress is a group of rules that will proxy inbound connections to endpoints defined by a backend. However, Kubernetes does not know what to do with Ingress resources without an Ingress controller, which is where an open source controller can come into play. In this post, we are going to use one option for this: the Kong Ingress Controller. The Kong Ingress Controller was open-sourced a year ago and recently reached one million downloads. In the recent 0.7 release, service mesh support was also added. Other features of this release include:&lt;/p&gt;</description></item><item><title>Contributor Summit Amsterdam Postponed</title><link>https://andygol-k8s.netlify.app/blog/2020/03/04/contributor-summit-delayed/</link><pubDate>Wed, 04 Mar 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/03/04/contributor-summit-delayed/</guid><description>&lt;p&gt;The CNCF has announced that &lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/attend/novel-coronavirus-update/"&gt;KubeCon + CloudNativeCon EU has been delayed&lt;/a&gt; until July/August of 2020. As a result the Contributor Summit planning team is weighing options for how to proceed. Here’s the current plan:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There will be an in-person Contributor Summit as planned when KubeCon + CloudNativeCon is rescheduled.&lt;/li&gt;
&lt;li&gt;We are looking at options for having additional virtual contributor activities in the meantime.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We will communicate via this blog and the usual communications channels on the final plan. Please bear with us as we adapt when we get more information. Thank you for being patient as the team pivots to bring you a great Contributor Summit!&lt;/p&gt;</description></item><item><title>Bring your ideas to the world with kubectl plugins</title><link>https://andygol-k8s.netlify.app/blog/2020/02/28/bring-your-ideas-to-the-world-with-kubectl-plugins/</link><pubDate>Fri, 28 Feb 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/02/28/bring-your-ideas-to-the-world-with-kubectl-plugins/</guid><description>&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; is the most critical tool to interact with Kubernetes and has to address multiple user personas, each with their own needs and opinions.
One way to make &lt;code&gt;kubectl&lt;/code&gt; do what you need is to build new functionality into &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
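&lt;p&gt;The other way, which this post builds toward, is a kubectl plugin: any executable on your &lt;code&gt;PATH&lt;/code&gt; whose name starts with &lt;code&gt;kubectl-&lt;/code&gt; becomes invocable as a kubectl subcommand. A minimal sketch (the plugin name &lt;code&gt;hello&lt;/code&gt; is made up for illustration):&lt;/p&gt;

```shell
# A kubectl plugin is just an executable on PATH named "kubectl-<something>".
# Create a toy plugin called "hello" (illustrative name, not a real plugin).
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kubectl-hello" <<'EOF'
#!/usr/bin/env bash
# Arguments after the plugin name are passed straight through.
echo "hello from a kubectl plugin, args: $*"
EOF
chmod +x "$HOME/bin/kubectl-hello"
export PATH="$HOME/bin:$PATH"
# With kubectl installed, `kubectl hello world` now dispatches to this script.
```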
&lt;h2 id="challenges-with-building-commands-into-kubectl"&gt;Challenges with building commands into &lt;code&gt;kubectl&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;However, that's easier said than done. Being such an important cornerstone of
Kubernetes, any meaningful change to &lt;code&gt;kubectl&lt;/code&gt; needs to undergo a Kubernetes
Enhancement Proposal (KEP) where the intended change is discussed beforehand.&lt;/p&gt;</description></item><item><title>Contributor Summit Amsterdam Schedule Announced</title><link>https://andygol-k8s.netlify.app/blog/2020/02/18/contributor-summit-amsterdam-schedule-announced/</link><pubDate>Tue, 18 Feb 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/02/18/contributor-summit-amsterdam-schedule-announced/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced/contribsummit.jpg" alt="Contributor Summit"&gt;&lt;/p&gt;
&lt;p&gt;Hello everyone and Happy 2020! It’s hard to believe that KubeCon EU 2020 is less than six weeks away, and with that another contributor summit! This year we have the pleasure of being in Amsterdam in early spring, so be sure to pack some warmer clothing. This summit looks to be exciting with a lot of fantastic community-driven content. We received &lt;strong&gt;26&lt;/strong&gt; submissions from the CFP. From that, the events team selected &lt;strong&gt;12&lt;/strong&gt; sessions. Each of the sessions falls into one of four categories:&lt;/p&gt;</description></item><item><title>Deploying External OpenStack Cloud Provider with Kubeadm</title><link>https://andygol-k8s.netlify.app/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/</link><pubDate>Fri, 07 Feb 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/</guid><description>&lt;p&gt;This document describes how to install a single control-plane Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes.&lt;/p&gt;
&lt;h3 id="preparation-in-openstack"&gt;Preparation in OpenStack&lt;/h3&gt;
&lt;p&gt;This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A project/tenant for this Kubernetes cluster&lt;/li&gt;
&lt;li&gt;A user in this project for Kubernetes, to query node information, attach volumes, etc.&lt;/li&gt;
&lt;li&gt;A private network and subnet&lt;/li&gt;
&lt;li&gt;A router for this private network, connected to a public network for floating IPs&lt;/li&gt;
&lt;li&gt;A security group for all Kubernetes VMs&lt;/li&gt;
&lt;li&gt;A VM as a control-plane node and a few VMs as worker nodes&lt;/li&gt;
&lt;/ul&gt;
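&lt;p&gt;As a hedged sketch, the checklist above maps onto &lt;code&gt;openstack&lt;/code&gt; CLI calls roughly as follows (all names, the CIDR, flavor, and image below are made-up placeholders, and exact flags vary by cloud):&lt;/p&gt;

```shell
# Network plumbing for the cluster (placeholder names throughout).
openstack network create k8s-net
openstack subnet create k8s-subnet --network k8s-net \
    --subnet-range 192.168.1.0/24
# Router connecting the private network to a public one for floating IPs.
openstack router create k8s-router
openstack router add subnet k8s-router k8s-subnet
openstack router set k8s-router --external-gateway public
# Security group shared by all Kubernetes VMs.
openstack security group create k8s-secgroup
# One control-plane VM; repeat with different names for worker nodes.
openstack server create --flavor m1.medium --image centos7 \
    --network k8s-net --security-group k8s-secgroup k8s-control-plane
```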
&lt;p&gt;The security group will have the following rules to open ports for Kubernetes.&lt;/p&gt;</description></item><item><title>KubeInvaders - Gamified Chaos Engineering Tool for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2020/01/22/kubeinvaders-gamified-chaos-engineering-tool-for-kubernetes/</link><pubDate>Wed, 22 Jan 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/22/kubeinvaders-gamified-chaos-engineering-tool-for-kubernetes/</guid><description>&lt;p&gt;Some months ago, I released my latest project called KubeInvaders. The
first time I shared it with the community was during an OpenShift
Commons Briefing session. KubeInvaders is a gamified chaos engineering
tool for Kubernetes and OpenShift that helps you test how resilient your
Kubernetes cluster is, in a fun way.&lt;/p&gt;
&lt;p&gt;It is like Space Invaders, but the aliens are pods.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://github.com/lucky-sideburn/KubeInvaders-kubernetes-post/raw/master/img1.png" alt=""&gt;&lt;/p&gt;
&lt;p&gt;During my presentation at Codemotion Milan 2019, I started saying &amp;quot;of
course you can do it with a few lines of Bash, but it is boring.&amp;quot;&lt;/p&gt;</description></item><item><title>CSI Ephemeral Inline Volumes</title><link>https://andygol-k8s.netlify.app/blog/2020/01/21/csi-ephemeral-inline-volumes/</link><pubDate>Tue, 21 Jan 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/21/csi-ephemeral-inline-volumes/</guid><description>&lt;p&gt;Typically, volumes provided by an external storage driver in
Kubernetes are &lt;em&gt;persistent&lt;/em&gt;, with a lifecycle that is completely
independent of pods or (as a special case) loosely coupled to the
first pod which uses a volume (&lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode"&gt;late binding
mode&lt;/a&gt;).
The mechanisms for requesting and defining such volumes in Kubernetes
are &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/"&gt;Persistent Volume Claim (PVC) and Persistent Volume
(PV)&lt;/a&gt;
objects. Originally, volumes that are backed by a Container Storage Interface
(CSI) driver could only be used via this PVC/PV mechanism.&lt;/p&gt;</description></item><item><title>Reviewing 2019 in Docs</title><link>https://andygol-k8s.netlify.app/blog/2020/01/21/reviewing-2019-in-docs/</link><pubDate>Tue, 21 Jan 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/21/reviewing-2019-in-docs/</guid><description>&lt;p&gt;Hi, folks! I'm one of the co-chairs for the Kubernetes documentation special interest group (SIG Docs). This blog post is a review of SIG Docs in 2019. Our contributors did amazing work last year, and I want to highlight their successes.&lt;/p&gt;
&lt;p&gt;Although I review 2019 in this post, my goal is to point forward to 2020. I observe some trends in SIG Docs–some good, others troubling. I want to raise visibility before those challenges increase in severity.&lt;/p&gt;</description></item><item><title>Kubernetes on MIPS</title><link>https://andygol-k8s.netlify.app/blog/2020/01/15/kubernetes-on-mips/</link><pubDate>Wed, 15 Jan 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/15/kubernetes-on-mips/</guid><description>&lt;h2 id="background"&gt;Background&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/MIPS_architecture"&gt;MIPS&lt;/a&gt; (Microprocessor without Interlocked Pipelined Stages) is a reduced instruction set computer (RISC) instruction set architecture (ISA), appeared in 1981 and developed by MIPS Technologies. Now MIPS architecture is widely used in many electronic products.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; has officially supported a variety of CPU architectures such as x86, arm/arm64, ppc64le, s390x. However, it's a pity that Kubernetes doesn't support MIPS. With the widespread use of cloud native technology, users under MIPS architecture also have an urgent demand for Kubernetes on MIPS.&lt;/p&gt;</description></item><item><title>Announcing the Kubernetes bug bounty program</title><link>https://andygol-k8s.netlify.app/blog/2020/01/14/kubernetes-bug-bounty-announcement/</link><pubDate>Tue, 14 Jan 2020 09:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/14/kubernetes-bug-bounty-announcement/</guid><description>&lt;p&gt;&lt;strong&gt;Authors:&lt;/strong&gt; Maya Kaczorowski and Tim Allclair, Google, on behalf of the &lt;a href="https://github.com/kubernetes/community/tree/master/committee-product-security"&gt;Kubernetes Product Security Committee&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Today, the &lt;a href="https://github.com/kubernetes/community/tree/master/committee-product-security"&gt;Kubernetes Product Security Committee&lt;/a&gt; is launching a &lt;a href="https://hackerone.com/kubernetes"&gt;new bug bounty program&lt;/a&gt;, funded by the &lt;a href="https://www.cncf.io/"&gt;CNCF&lt;/a&gt;, to reward researchers finding security vulnerabilities in Kubernetes.&lt;/p&gt;
&lt;h2 id="setting-up-a-new-bug-bounty-program"&gt;Setting up a new bug bounty program&lt;/h2&gt;
&lt;p&gt;We aimed to set up this bug bounty program as transparently as possible, with &lt;a href="https://docs.google.com/document/d/1dvlQsOGODhY3blKpjTg6UXzRdPzv5y8V55RD_Pbo7ag/edit#heading=h.7t1efwpev42p"&gt;an initial proposal&lt;/a&gt;, &lt;a href="https://github.com/kubernetes/kubernetes/issues/73079"&gt;evaluation of vendors&lt;/a&gt;, and &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md"&gt;working draft of the components in scope&lt;/a&gt;. Once we onboarded the selected bug bounty program vendor, &lt;a href="https://www.hackerone.com/"&gt;HackerOne&lt;/a&gt;, these documents were further refined based on the feedback from HackerOne, as well as what was learned in the recent &lt;a href="https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf"&gt;Kubernetes security audit&lt;/a&gt;. The bug bounty program has been in a private release for several months now, with invited researchers able to submit bugs and help us test the triage process. After almost two years since the initial proposal, the program is now ready for all security researchers to contribute!&lt;/p&gt;</description></item><item><title>Remembering Brad Childs</title><link>https://andygol-k8s.netlify.app/blog/2020/01/10/remembering-brad-childs/</link><pubDate>Fri, 10 Jan 2020 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/10/remembering-brad-childs/</guid><description>&lt;p&gt;Last year, the Kubernetes family lost one of its own. Brad Childs was a
SIG Storage chair and long time contributor to the project. Brad worked on a
number of features in storage and was known as much for his friendliness and
sense of humor as for his technical contributions and leadership.&lt;/p&gt;
&lt;p&gt;We recently spent time remembering Brad at KubeCon NA:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://youtu.be/4eI2PTAJ-sE"&gt;A Tribute to Bradley Childs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cncf/memorials/blob/master/bradley-childs.md"&gt;CNCF Memorial&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our hearts go out to Brad’s friends and family and others whose lives he touched
inside and outside the Kubernetes community.&lt;/p&gt;</description></item><item><title>Testing of CSI drivers</title><link>https://andygol-k8s.netlify.app/blog/2020/01/08/testing-of-csi-drivers/</link><pubDate>Wed, 08 Jan 2020 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2020/01/08/testing-of-csi-drivers/</guid><description>&lt;p&gt;When developing a &lt;a href="https://kubernetes-csi.github.io/docs/"&gt;Container Storage Interface (CSI)
driver&lt;/a&gt;, it is useful to leverage
as much prior work as possible. This includes source code (like the
&lt;a href="https://github.com/kubernetes-csi/csi-driver-host-path/"&gt;sample CSI hostpath
driver&lt;/a&gt;) but
also existing tests. Besides saving time, using tests written by
someone else has the advantage that it can point out aspects of the
specification that might have been overlooked otherwise.&lt;/p&gt;
&lt;p&gt;An earlier blog post about &lt;a href="https://kubernetes.io/blog/2019/03/22/kubernetes-end-to-end-testing-for-everyone/"&gt;end-to-end
testing&lt;/a&gt;
already showed how to use the &lt;a href="https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/testsuites"&gt;Kubernetes storage
tests&lt;/a&gt;
for testing of a third-party CSI driver. That
approach makes sense when the goal is to also add custom E2E tests, but
it requires quite a bit of effort to set up and maintain a test
suite.&lt;/p&gt;</description></item><item><title>Kubernetes 1.17: Stability</title><link>https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-release-announcement/</link><pubDate>Mon, 09 Dec 2019 13:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.17, our fourth and final release of 2019! Kubernetes v1.17 consists of 22 enhancements: 14 enhancements have graduated to stable, 4 enhancements are moving to beta, and 4 enhancements are entering alpha.&lt;/p&gt;
&lt;h2 id="major-themes"&gt;Major Themes&lt;/h2&gt;
&lt;h3 id="cloud-provider-labels-reach-general-availability"&gt;Cloud Provider Labels reach General Availability&lt;/h3&gt;
&lt;p&gt;Added as a beta feature way back in v1.2, cloud provider labels reach general availability in v1.17.&lt;/p&gt;
&lt;h3 id="volume-snapshot-moves-to-beta"&gt;Volume Snapshot Moves to Beta&lt;/h3&gt;
&lt;p&gt;The Kubernetes Volume Snapshot feature is now beta in Kubernetes v1.17. It was introduced as alpha in Kubernetes v1.12, with a second alpha with breaking changes in Kubernetes v1.13.&lt;/p&gt;</description></item><item><title>Kubernetes 1.17 Feature: Kubernetes Volume Snapshot Moves to Beta</title><link>https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/</link><pubDate>Mon, 09 Dec 2019 10:00:00 -0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/</guid><description>&lt;p&gt;The Kubernetes Volume Snapshot feature is now beta in Kubernetes v1.17. It was introduced &lt;a href="https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/"&gt;as alpha&lt;/a&gt; in Kubernetes v1.12, with a &lt;a href="https://kubernetes.io/blog/2019/01/17/update-on-volume-snapshot-alpha-for-kubernetes/"&gt;second alpha&lt;/a&gt; with breaking changes in Kubernetes v1.13. This post summarizes the changes in the beta release.&lt;/p&gt;
&lt;h2 id="what-is-a-volume-snapshot"&gt;What is a Volume Snapshot?&lt;/h2&gt;
&lt;p&gt;Many storage systems (like Google Cloud Persistent Disks, Amazon Elastic Block Storage, and many on-premise storage systems) provide the ability to create a “snapshot” of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to provision a new volume (pre-populated with the snapshot data) or to restore an existing volume to a previous state (represented by the snapshot).&lt;/p&gt;</description></item><item><title>Kubernetes 1.17 Feature: Kubernetes In-Tree to CSI Volume Migration Moves to Beta</title><link>https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/</link><pubDate>Mon, 09 Dec 2019 09:00:00 +0800</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/</guid><description>&lt;p&gt;The Kubernetes in-tree storage plugin to &lt;a href="https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/"&gt;Container Storage Interface (CSI)&lt;/a&gt; migration infrastructure is now beta in Kubernetes v1.17. CSI migration was introduced as alpha in Kubernetes v1.14.&lt;/p&gt;
&lt;p&gt;Kubernetes features are generally introduced as alpha and moved to beta (and eventually to stable/GA) over subsequent Kubernetes releases. This process allows Kubernetes developers to get feedback, discover and fix issues, iterate on the designs, and deliver high quality, production grade features.&lt;/p&gt;
&lt;h2 id="why-are-we-migrating-in-tree-plugins-to-csi"&gt;Why are we migrating in-tree plugins to CSI?&lt;/h2&gt;
&lt;p&gt;Prior to CSI, Kubernetes provided a powerful volume plugin system. These volume plugins were “in-tree” meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. However, adding support for new volume plugins to Kubernetes was challenging. Vendors that wanted to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in core Kubernetes binaries and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain. Using the Container Storage Interface in Kubernetes resolves these major issues.&lt;/p&gt;</description></item><item><title>When you're in the release team, you're family: the Kubernetes 1.16 release interview</title><link>https://andygol-k8s.netlify.app/blog/2019/12/06/when-youre-in-the-release-team-youre-family-the-kubernetes-1.16-release-interview/</link><pubDate>Fri, 06 Dec 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/12/06/when-youre-in-the-release-team-youre-family-the-kubernetes-1.16-release-interview/</guid><description>&lt;p&gt;It is a pleasure to co-host the weekly &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt; with Adam Glick. We get to talk to friends old and new from the community, as well as give people a download on the Cloud Native news every week.&lt;/p&gt;
&lt;p&gt;It was also a pleasure to see Lachlan Evenson, the release team lead for Kubernetes 1.16, &lt;a href="https://www.cncf.io/announcement/2019/11/19/cloud-native-computing-foundation-announces-2019-community-awards-winners/"&gt;win the CNCF &amp;quot;Top Ambassador&amp;quot; award&lt;/a&gt; at KubeCon. We &lt;a href="https://kubernetespodcast.com/episode/072-kubernetes-1.16/"&gt;talked with Lachie&lt;/a&gt; when 1.16 was released, and as is &lt;a href="https://kubernetes.io/blog/2018/07/16/how-the-sausage-is-made-the-kubernetes-1.11-release-interview-from-the-kubernetes-podcast/"&gt;becoming&lt;/a&gt; a &lt;a href="https://kubernetes.io/blog/2019/05/13/cat-shirts-and-groundhog-day-the-kubernetes-1.14-release-interview/"&gt;tradition&lt;/a&gt;, we are delighted to share an abridged version of that interview with the readers of the Kubernetes Blog.&lt;/p&gt;</description></item><item><title>Gardener Project Update</title><link>https://andygol-k8s.netlify.app/blog/2019/12/02/gardener-project-update/</link><pubDate>Mon, 02 Dec 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/12/02/gardener-project-update/</guid><description>&lt;p&gt;Last year, we introduced &lt;a href="https://gardener.cloud"&gt;Gardener&lt;/a&gt; in the &lt;a href="https://www.youtube.com/watch?v=DpFTcTnBxbM&amp;feature=youtu.be&amp;t=1642"&gt;Kubernetes
Community
Meeting&lt;/a&gt;
and in a post on the &lt;a href="https://kubernetes.io/blog/2018/05/17/gardener/"&gt;Kubernetes
Blog&lt;/a&gt;. At SAP, we have been
running Gardener for more than two years, and are successfully managing
thousands of &lt;a href="https://k8s-testgrid.appspot.com/conformance-gardener"&gt;conformant&lt;/a&gt;
clusters in various versions on all major hyperscalers as well as in numerous
infrastructures and private clouds that typically join an enterprise via
acquisitions.&lt;/p&gt;
&lt;p&gt;We are often asked why a handful of dynamically scalable clusters would not
suffice. We also started our journey into Kubernetes with a similar mindset. But
we realized that when applying the architecture and principles of Kubernetes to
production scenarios, our internal and external customers very quickly required
a rational separation of concerns and ownership, which in most circumstances
led to the use of multiple clusters. Therefore, a scalable and managed
Kubernetes as a service solution is often also the basis for adoption.
Particularly, when a larger organization runs multiple products on different
providers and in different regions, the number of clusters will quickly rise to
the hundreds or even thousands.&lt;/p&gt;</description></item><item><title>Develop a Kubernetes controller in Java</title><link>https://andygol-k8s.netlify.app/blog/2019/11/26/develop-a-kubernetes-controller-in-java/</link><pubDate>Tue, 26 Nov 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/11/26/develop-a-kubernetes-controller-in-java/</guid><description>&lt;p&gt;The official &lt;a href="https://github.com/kubernetes-client/java"&gt;Kubernetes Java SDK&lt;/a&gt; project
recently released their latest work on providing the Java Kubernetes developers
a handy Kubernetes controller-builder SDK which is helpful for easily developing
advanced workloads or systems.&lt;/p&gt;
&lt;h2 id="overall"&gt;Overall&lt;/h2&gt;
&lt;p&gt;Java is no doubt one of the most popular programming languages in the world, but
for a long time it was difficult for non-Golang developers to build their own
customized controller/operator due to the lack of library resources in the
community. In the world of Golang, there are already some excellent controller
frameworks, for example, &lt;a href="https://github.com/kubernetes-sigs/controller-runtime"&gt;controller runtime&lt;/a&gt; and the
&lt;a href="https://github.com/operator-framework/operator-sdk"&gt;operator SDK&lt;/a&gt;. These
existing Golang frameworks rely on the various utilities from the
&lt;a href="https://github.com/kubernetes/client-go"&gt;Kubernetes Golang SDK&lt;/a&gt;, proven to
be stable over the years. Driven by the emerging need for further integration with
the Kubernetes platform, we not only ported many essential tools from the Golang
SDK into the Kubernetes Java SDK, including informers, work queues, leader election,
etc., but also developed a controller-builder SDK that wires everything up into
a runnable controller without hiccups.&lt;/p&gt;</description></item><item><title>Running Kubernetes locally on Linux with Microk8s</title><link>https://andygol-k8s.netlify.app/blog/2019/11/26/running-kubernetes-locally-on-linux-with-microk8s/</link><pubDate>Tue, 26 Nov 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/11/26/running-kubernetes-locally-on-linux-with-microk8s/</guid><description>&lt;p&gt;This article, the second in a &lt;a href="https://andygol-k8s.netlify.app/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/"&gt;series&lt;/a&gt; about local deployment options on Linux, covers &lt;a href="https://microk8s.io/"&gt;MicroK8s&lt;/a&gt;. MicroK8s is a click-and-run solution for deploying a Kubernetes cluster locally, originally developed by Canonical, the publisher of Ubuntu.&lt;/p&gt;
&lt;p&gt;While Minikube usually spins up a local virtual machine (VM) for the Kubernetes cluster, MicroK8s doesn’t require a VM. It uses &lt;a href="https://snapcraft.io/"&gt;snap&lt;/a&gt; packages, an application packaging and isolation technology.&lt;/p&gt;
&lt;p&gt;This difference has its pros and cons. Here we’ll discuss a few of the interesting differences, comparing the benefits of a VM-based approach with those of a non-VM approach. One of the first factors is cross-platform portability. While a Minikube VM is portable across operating systems - it supports not only Linux, but Windows, macOS, and even FreeBSD - MicroK8s requires Linux, and only on those distributions &lt;a href="https://snapcraft.io/docs/installing-snapd"&gt;that support snaps&lt;/a&gt;. Most popular Linux distributions are supported.&lt;/p&gt;</description></item><item><title>Grokkin’ the Docs</title><link>https://andygol-k8s.netlify.app/blog/2019/11/05/grokkin-the-docs/</link><pubDate>Tue, 05 Nov 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/11/05/grokkin-the-docs/</guid><description>&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/grokkin-the-docs/grok-definition.png"
 alt="grok: to understand profoundly and intuitively"/&gt; &lt;figcaption&gt;
 &lt;h4&gt;Definition courtesy of Merriam Webster online dictionary&lt;/h4&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id="intro-observations-of-a-new-sig-docs-contributor"&gt;Intro - Observations of a new SIG Docs contributor&lt;/h2&gt;
&lt;p&gt;I began contributing to the SIG Docs community in August 2019. Sometimes I feel
like I am a stranger in a strange land adapting to a new community:
investigating community organization, understanding contributor society,
learning new lessons, and incorporating new jargon. I'm an observer as well as a
contributor.&lt;/p&gt;</description></item><item><title>Kubernetes Documentation Survey</title><link>https://andygol-k8s.netlify.app/blog/2019/10/29/kubernetes-documentation-end-user-survey/</link><pubDate>Tue, 29 Oct 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/10/29/kubernetes-documentation-end-user-survey/</guid><description>&lt;p&gt;In September, SIG Docs conducted its first survey about the &lt;a href="https://kubernetes.io/docs/"&gt;Kubernetes
documentation&lt;/a&gt;. We'd like to thank the CNCF's Kim
McMahon for helping us create the survey and access the results.&lt;/p&gt;
&lt;h1 id="key-takeaways"&gt;Key takeaways&lt;/h1&gt;
&lt;p&gt;Respondents would like more example code, more detailed content, and more
diagrams in the Concepts, Tasks, and Reference sections.&lt;/p&gt;
&lt;p&gt;74% of respondents would like the Tutorials section to contain advanced content.&lt;/p&gt;
&lt;p&gt;69.70% said the Kubernetes documentation is the first place they look for
information about Kubernetes.&lt;/p&gt;</description></item><item><title>Contributor Summit San Diego Schedule Announced!</title><link>https://andygol-k8s.netlify.app/blog/2019/10/10/contributor-summit-san-diego-schedule/</link><pubDate>Thu, 10 Oct 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/10/10/contributor-summit-san-diego-schedule/</guid><description>&lt;p&gt;There are many great sessions planned for the Contributor Summit, spread across
five rooms of current contributor content in addition to the new contributor
workshops. Since this is an upstream contributor summit and we don't often meet,
being a globally distributed team, most of these sessions are discussions or
hands-on labs, not just presentations. We want folks to learn and have a
good time meeting their OSS teammates.&lt;/p&gt;
&lt;p&gt;Unconference tracks are returning from last year with sessions to be chosen
Monday morning. These are ideal for the latest hot topics and specific
discussions that contributors want to have. In previous years, we've covered
flaky tests, cluster lifecycle, KEPs (Kubernetes Enhancement Proposals), mentoring,
security, and more.&lt;/p&gt;</description></item><item><title>2019 Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2019/10/03/2019-steering-committee-election-results/</link><pubDate>Thu, 03 Oct 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/10/03/2019-steering-committee-election-results/</guid><description>&lt;p&gt;The &lt;a href="https://git.k8s.io/community/events/elections/2019"&gt;2019 Steering Committee Election&lt;/a&gt; is a landmark milestone for the
Kubernetes project. The initial bootstrap committee is graduating to emeritus
and the committee has now shrunk to its final allocation of seven seats. All
members of the Steering Committee are now fully elected by the Kubernetes
Community.&lt;/p&gt;
&lt;p&gt;Moving forward, elections will fill either three or four seats on the committee for
two-year terms.&lt;/p&gt;
&lt;h2 id="results"&gt;&lt;strong&gt;Results&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The Kubernetes Steering Committee Election is now complete and the following
candidates came out ahead to secure two-year terms that start immediately
(in alphabetical order by GitHub handle):&lt;/p&gt;</description></item><item><title>Contributor Summit San Diego Registration Open!</title><link>https://andygol-k8s.netlify.app/blog/2019/09/24/san-diego-contributor-summit/</link><pubDate>Tue, 24 Sep 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/09/24/san-diego-contributor-summit/</guid><description>&lt;p&gt;&lt;a href="https://events.linuxfoundation.org/events/kubernetes-contributor-summit-north-america-2019/"&gt;Contributor Summit San Diego 2019 Event Page&lt;/a&gt;&lt;br&gt;
In record time, we’ve hit capacity for the &lt;em&gt;new contributor workshop&lt;/em&gt; session of
the event!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sunday, November 17&lt;/strong&gt;&lt;br&gt;
Evening Contributor Celebration:&lt;br&gt;
&lt;a href="https://quartyardsd.com/"&gt;QuartYard&lt;/a&gt;†&lt;br&gt;
Address: 1301 Market Street, San Diego, CA 92101&lt;br&gt;
Time: 6:00PM - 9:00PM&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Monday, November 18&lt;/strong&gt;&lt;br&gt;
All Day Contributor Summit:&lt;br&gt;
&lt;a href="https://www.marriott.com/hotels/travel/sandt-marriott-marquis-san-diego-marina/?scid=bb1a189a-fec3-4d19-a255-54ba596febe2"&gt;Marriott Marquis San Diego Marina&lt;/a&gt;&lt;br&gt;
Address: 333 W Harbor Dr, San Diego, CA 92101&lt;br&gt;
Time: 9:00AM - 5:00PM&lt;/p&gt;
&lt;p&gt;While the Kubernetes project is only five years old, we’re already going into our
9th Contributor Summit this November in San Diego before KubeCon + CloudNativeCon.
The rapid increase is thanks to adding European and Asian Contributor Summits to
the North American events we’ve done previously. We will continue to run Contributor
Summits across the globe, as it is important that our contributor base grows in
all forms of diversity.&lt;/p&gt;</description></item><item><title>Kubernetes 1.16: Custom Resources, Overhauled Metrics, and Volume Extensions</title><link>https://andygol-k8s.netlify.app/blog/2019/09/18/kubernetes-1-16-release-announcement/</link><pubDate>Wed, 18 Sep 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/09/18/kubernetes-1-16-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.16, our third release of 2019! Kubernetes 1.16 consists of 31 enhancements: 8 enhancements moving to stable, 8 enhancements in beta, and 15 enhancements in alpha.&lt;/p&gt;
&lt;h1 id="major-themes"&gt;Major Themes&lt;/h1&gt;
&lt;h2 id="custom-resources"&gt;Custom resources&lt;/h2&gt;
&lt;p&gt;CRDs are in widespread use as a Kubernetes extensibility mechanism and have been available in beta since the 1.7 release. The 1.16 release marks the graduation of CRDs to general availability (GA).&lt;/p&gt;</description></item><item><title>Announcing etcd 3.4</title><link>https://andygol-k8s.netlify.app/blog/2019/08/30/announcing-etcd-3-4/</link><pubDate>Fri, 30 Aug 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/08/30/announcing-etcd-3-4/</guid><description>&lt;p&gt;etcd 3.4 focuses on stability, performance, and ease of operation, with features like pre-vote and non-voting members, and improvements to the storage backend and client balancer.&lt;/p&gt;
&lt;p&gt;Please see &lt;a href="https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.4.md"&gt;CHANGELOG&lt;/a&gt; for full lists of changes.&lt;/p&gt;
&lt;h2 id="better-storage-backend"&gt;Better Storage Backend&lt;/h2&gt;
&lt;p&gt;etcd v3.4 includes a number of performance improvements for large scale Kubernetes workloads.&lt;/p&gt;
&lt;p&gt;In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there was no write (e.g. &lt;code&gt;“read-only range request ... took too long to execute”&lt;/code&gt;). Previously, the storage backend commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit &lt;a href="https://github.com/etcd-io/etcd/pull/9296"&gt;does not block reads&lt;/a&gt;, which improves long-running read transaction performance.&lt;/p&gt;</description></item><item><title>OPA Gatekeeper: Policy and Governance for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/</link><pubDate>Tue, 06 Aug 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/</guid><description>&lt;p&gt;The &lt;a href="https://github.com/open-policy-agent/gatekeeper"&gt;Open Policy Agent Gatekeeper&lt;/a&gt; project can be leveraged to help enforce policies and strengthen governance in your Kubernetes environment. In this post, we will walk through the goals, history, and current state of the project.&lt;/p&gt;
&lt;p&gt;The following recordings from the Kubecon EU 2019 sessions are a great starting place in working with Gatekeeper:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://youtu.be/Yup1FUc2Qn0"&gt;Intro: Open Policy Agent Gatekeeper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/n94_FNhuzy4"&gt;Deep Dive: Open Policy Agent&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
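Real Gatekeeper constraints are written in Rego and enforced through an admission webhook; purely as a conceptual sketch, the "required labels" idea behind the canonical example constraint can be modeled in a few lines of Python (the function name and shape here are illustrative, not part of Gatekeeper):

```python
# Conceptual model of a "required labels" admission policy, similar in
# spirit to Gatekeeper's K8sRequiredLabels example constraint. A real
# constraint is written in Rego and evaluated by the admission webhook.
def check_required_labels(obj, required):
    """Return a list of violation messages; an empty list means admitted."""
    labels = obj.get("metadata", {}).get("labels", {}) or {}
    missing = set(required) - set(labels)
    return ["missing required label: %s" % label for label in sorted(missing)]
```

A Pod carrying only an `app` label, checked against a policy requiring both `app` and `owner`, would produce a single violation and be denied admission.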
&lt;h2 id="motivations"&gt;Motivations&lt;/h2&gt;
&lt;p&gt;If your organization has been operating Kubernetes, you probably have been looking for ways to control what end-users can do on the cluster and ways to ensure that clusters are in compliance with company policies. These policies may be there to meet governance and legal requirements or to enforce best practices and organizational conventions. With Kubernetes, how do you ensure compliance without sacrificing development agility and operational independence?&lt;/p&gt;</description></item><item><title>Get started with Kubernetes (using Python)</title><link>https://andygol-k8s.netlify.app/blog/2019/07/23/get-started-with-kubernetes-using-python/</link><pubDate>Tue, 23 Jul 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/07/23/get-started-with-kubernetes-using-python/</guid><description>&lt;p&gt;So, you know you want to run your application in Kubernetes but don’t know where to start. Or maybe you’re getting started but still don’t know what you don’t know. In this blog you’ll walk through how to containerize an application and get it running in Kubernetes.&lt;/p&gt;
&lt;p&gt;This walk-through assumes you are a developer or at least comfortable with the command line (preferably bash shell).&lt;/p&gt;
&lt;h2 id="what-we-ll-do"&gt;What we’ll do&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Get the code and run the application locally&lt;/li&gt;
&lt;li&gt;Create an image and run the application in Docker&lt;/li&gt;
&lt;li&gt;Create a deployment and run the application in Kubernetes&lt;/li&gt;
&lt;/ol&gt;
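Before step 2, it helps to see what "the application" looks like. Here is a hypothetical minimal stand-in for the sample app (the real code lives in the walkthrough's repository; this sketch assumes Flask is installed):

```python
# Hypothetical minimal stand-in for the walkthrough's sample Flask app.
# Save as app.py and run locally with `flask --app app run` (Flask 2.2+).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # This is the page you should see in a browser at http://localhost:5000/
    return "Hello from the Kubernetes walkthrough!"
```

Once this runs locally, the remaining steps wrap the same file in a Docker image and then a Kubernetes Deployment.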
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;A Kubernetes service - I'm using &lt;a href="https://www.docker.com/products/kubernetes"&gt;Docker Desktop with Kubernetes&lt;/a&gt; in this walkthrough, but you can use one of the others. See &lt;a href="https://kubernetes.io/docs/setup/"&gt;Getting Started&lt;/a&gt; for a full listing.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.python.org/"&gt;Python 3.7&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/downloads"&gt;Git&lt;/a&gt; installed&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="containerizing-an-application"&gt;Containerizing an application&lt;/h2&gt;
&lt;p&gt;In this section you’ll take some source code, verify it runs locally, and then create a Docker image of the application. The sample application used is a very simple Flask web application; if you want to test it locally, you’ll need Python installed. Otherwise, you can skip to the &amp;quot;Create a Dockerfile&amp;quot; section.&lt;/p&gt;</description></item><item><title>Deprecated APIs Removed In 1.16: Here’s What You Need To Know</title><link>https://andygol-k8s.netlify.app/blog/2019/07/18/api-deprecations-in-1-16/</link><pubDate>Thu, 18 Jul 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/07/18/api-deprecations-in-1-16/</guid><description>&lt;p&gt;As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old API is deprecated and eventually removed.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;v1.16&lt;/strong&gt; release will stop serving the following deprecated API versions in favor of newer and more stable API versions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NetworkPolicy in the &lt;strong&gt;extensions/v1beta1&lt;/strong&gt; API version is no longer served
&lt;ul&gt;
&lt;li&gt;Migrate to use the &lt;strong&gt;networking.k8s.io/v1&lt;/strong&gt; API version, available since v1.8.
Existing persisted data can be retrieved/updated via the new version.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;PodSecurityPolicy in the &lt;strong&gt;extensions/v1beta1&lt;/strong&gt; API version
&lt;ul&gt;
&lt;li&gt;Migrate to use the &lt;strong&gt;policy/v1beta1&lt;/strong&gt; API, available since v1.10.
Existing persisted data can be retrieved/updated via the new version.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;DaemonSet in the &lt;strong&gt;extensions/v1beta1&lt;/strong&gt; and &lt;strong&gt;apps/v1beta2&lt;/strong&gt; API versions is no longer served
&lt;ul&gt;
&lt;li&gt;Migrate to use the &lt;strong&gt;apps/v1&lt;/strong&gt; API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.&lt;/li&gt;
&lt;li&gt;Notable changes:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.templateGeneration&lt;/code&gt; is removed&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.selector&lt;/code&gt; is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.updateStrategy.type&lt;/code&gt; now defaults to &lt;code&gt;RollingUpdate&lt;/code&gt; (the default in &lt;code&gt;extensions/v1beta1&lt;/code&gt; was &lt;code&gt;OnDelete&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Deployment in the &lt;strong&gt;extensions/v1beta1&lt;/strong&gt;, &lt;strong&gt;apps/v1beta1&lt;/strong&gt;, and &lt;strong&gt;apps/v1beta2&lt;/strong&gt; API versions is no longer served
&lt;ul&gt;
&lt;li&gt;Migrate to use the &lt;strong&gt;apps/v1&lt;/strong&gt; API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.&lt;/li&gt;
&lt;li&gt;Notable changes:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.rollbackTo&lt;/code&gt; is removed&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.selector&lt;/code&gt; is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.progressDeadlineSeconds&lt;/code&gt; now defaults to &lt;code&gt;600&lt;/code&gt; seconds (the default in &lt;code&gt;extensions/v1beta1&lt;/code&gt; was no deadline)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.revisionHistoryLimit&lt;/code&gt; now defaults to &lt;code&gt;10&lt;/code&gt; (the default in &lt;code&gt;apps/v1beta1&lt;/code&gt; was &lt;code&gt;2&lt;/code&gt;, the default in &lt;code&gt;extensions/v1beta1&lt;/code&gt; was to retain all)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;maxSurge&lt;/code&gt; and &lt;code&gt;maxUnavailable&lt;/code&gt; now default to &lt;code&gt;25%&lt;/code&gt; (the default in &lt;code&gt;extensions/v1beta1&lt;/code&gt; was &lt;code&gt;1&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;StatefulSet in the &lt;strong&gt;apps/v1beta1&lt;/strong&gt; and &lt;strong&gt;apps/v1beta2&lt;/strong&gt; API versions is no longer served
&lt;ul&gt;
&lt;li&gt;Migrate to use the &lt;strong&gt;apps/v1&lt;/strong&gt; API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.&lt;/li&gt;
&lt;li&gt;Notable changes:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.selector&lt;/code&gt; is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.updateStrategy.type&lt;/code&gt; now defaults to &lt;code&gt;RollingUpdate&lt;/code&gt; (the default in &lt;code&gt;apps/v1beta1&lt;/code&gt; was &lt;code&gt;OnDelete&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;ReplicaSet in the &lt;strong&gt;extensions/v1beta1&lt;/strong&gt;, &lt;strong&gt;apps/v1beta1&lt;/strong&gt;, and &lt;strong&gt;apps/v1beta2&lt;/strong&gt; API versions is no longer served
&lt;ul&gt;
&lt;li&gt;Migrate to use the &lt;strong&gt;apps/v1&lt;/strong&gt; API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.&lt;/li&gt;
&lt;li&gt;Notable changes:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.selector&lt;/code&gt; is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
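The removals above boil down to a small lookup table. The helper below is purely illustrative (not a kubectl or API server feature) and shows how a manifest's apiVersion could be mapped to its replacement:

```python
# Illustrative lookup of the replacement API version for resources that
# the deprecated extensions/v1beta1, apps/v1beta1, and apps/v1beta2
# versions stop serving in v1.16, per the list above.
REPLACEMENTS = {
    "NetworkPolicy": "networking.k8s.io/v1",
    "PodSecurityPolicy": "policy/v1beta1",
    "DaemonSet": "apps/v1",
    "Deployment": "apps/v1",
    "StatefulSet": "apps/v1",
    "ReplicaSet": "apps/v1",
}

def migrated_api_version(kind, api_version):
    """Return the apiVersion a manifest of this kind should migrate to,
    or the current one if it is not on the v1.16 removal list."""
    deprecated = {"extensions/v1beta1", "apps/v1beta1", "apps/v1beta2"}
    if api_version in deprecated and kind in REPLACEMENTS:
        return REPLACEMENTS[kind]
    return api_version
```

For example, a Deployment manifest still declaring `extensions/v1beta1` maps to `apps/v1`, while one already on `apps/v1` is left alone.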
&lt;p&gt;The &lt;strong&gt;v1.22&lt;/strong&gt; release will stop serving the following deprecated API versions in favor of newer and more stable API versions:&lt;/p&gt;</description></item><item><title>Recap of Kubernetes Contributor Summit Barcelona 2019</title><link>https://andygol-k8s.netlify.app/blog/2019/06/25/recap-of-kubernetes-contributor-summit-barcelona-2019/</link><pubDate>Tue, 25 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/06/25/recap-of-kubernetes-contributor-summit-barcelona-2019/</guid><description>&lt;p&gt;First of all, &lt;strong&gt;THANK YOU&lt;/strong&gt; to everyone who made the Kubernetes Contributor Summit in Barcelona possible. We had an amazing team of volunteers tasked with planning and executing the event, and it was so much fun meeting and talking to all new and current contributors during the main event and the pre-event celebration.&lt;/p&gt;
&lt;p&gt;Contributor Summit in Barcelona kicked off KubeCon + CloudNativeCon in a big way as it was the &lt;strong&gt;largest contributor summit&lt;/strong&gt; to date with 331 people signed up, and only 9 didn't pick up their badges!&lt;/p&gt;</description></item><item><title>Automated High Availability in kubeadm v1.15: Batteries Included But Swappable</title><link>https://andygol-k8s.netlify.app/blog/2019/06/24/automated-high-availability-in-kubeadm-v1.15-batteries-included-but-swappable/</link><pubDate>Mon, 24 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/06/24/automated-high-availability-in-kubeadm-v1.15-batteries-included-but-swappable/</guid><description>&lt;p&gt;&lt;em&gt;At the time of publication, Lucas Käldström was writing as SIG Cluster Lifecycle co-chair
and as a subproject owner for &lt;code&gt;kubeadm&lt;/code&gt;; Fabrizio Pandini was writing as a subproject
owner for &lt;code&gt;kubeadm&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/"&gt;kubeadm&lt;/a&gt; is a tool that enables Kubernetes administrators
to quickly and easily bootstrap minimum viable clusters that are fully compliant with
&lt;a href="https://github.com/cncf/k8s-conformance/blob/master/terms-conditions/Certified_Kubernetes_Terms.md"&gt;Certified Kubernetes&lt;/a&gt; guidelines.
It’s been under active development by &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle"&gt;SIG Cluster Lifecycle&lt;/a&gt;
since 2016 and graduated from beta to
&lt;a href="https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/"&gt;generally available (GA) at the end of 2018&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Introducing Volume Cloning Alpha for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/</link><pubDate>Fri, 21 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/</guid><description>&lt;p&gt;Kubernetes v1.15 introduces alpha support for volume cloning. This feature allows you to create new volumes using the contents of existing volumes in the user's namespace using the Kubernetes API.&lt;/p&gt;
&lt;h2 id="what-is-a-clone"&gt;What is a Clone?&lt;/h2&gt;
&lt;p&gt;Many storage systems provide the ability to create a &amp;quot;clone&amp;quot; of a volume. A clone is a duplicate of an existing volume that is its own unique volume on the system, but the data on the source is duplicated to the destination (clone). A clone is similar to a snapshot in that it's a point-in-time copy of a volume; however, rather than creating a new snapshot object from a volume, we're instead creating a new independent volume, sometimes thought of as pre-populating the newly created volume.&lt;/p&gt;</description></item><item><title>Future of CRDs: Structural Schemas</title><link>https://andygol-k8s.netlify.app/blog/2019/06/20/crd-structural-schema/</link><pubDate>Thu, 20 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/06/20/crd-structural-schema/</guid><description>&lt;p&gt;CustomResourceDefinitions were introduced roughly two years ago as the primary way to extend the Kubernetes API with custom resources. From the beginning they stored arbitrary JSON data, with the exception that &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;apiVersion&lt;/code&gt; and &lt;code&gt;metadata&lt;/code&gt; had to follow the Kubernetes API conventions. In Kubernetes 1.8 CRDs gained the ability to define an optional OpenAPI v3 based validation schema.&lt;/p&gt;
&lt;p&gt;By the nature of OpenAPI specifications though—only describing what must be there, not what shouldn’t, and by being potentially incomplete specifications—the Kubernetes API server never knew the complete structure of CustomResource instances. As a consequence, kube-apiserver—until today—stores all JSON data received in an API request (if it validates against the OpenAPI spec). This especially includes anything that is not specified in the OpenAPI schema.&lt;/p&gt;</description></item><item><title>Kubernetes 1.15: Extensibility and Continuous Improvement</title><link>https://andygol-k8s.netlify.app/blog/2019/06/19/kubernetes-1-15-release-announcement/</link><pubDate>Wed, 19 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/06/19/kubernetes-1-15-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.15, our second release of 2019! Kubernetes 1.15 consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The main themes of this release are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Continuous Improvement
&lt;ul&gt;
&lt;li&gt;Project sustainability is not just about features. Many SIGs have been working on improving test coverage, keeping the basics reliable, stabilizing the core feature set, maturing existing features, and cleaning up the backlog.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Extensibility
&lt;ul&gt;
&lt;li&gt;The community has been asking for continued support for extensibility, so this cycle features more work around CRDs and API Machinery. Most of the enhancements in this cycle were from SIG API Machinery and related areas.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s dive into the key features of this release:&lt;/p&gt;</description></item><item><title>Join us at the Contributor Summit in Shanghai</title><link>https://andygol-k8s.netlify.app/blog/2019/06/12/join-us-at-the-contributor-summit-in-shanghai/</link><pubDate>Wed, 12 Jun 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/06/12/join-us-at-the-contributor-summit-in-shanghai/</guid><description>&lt;p&gt;&lt;img src="/images/blog/2019-06-11-contributor-summit-shanghai/panel.png" alt="Picture of contributor panel at 2018 Shanghai contributor summit. Photo by Josh Berkus, licensed CC-BY 4.0"/&gt;&lt;/p&gt;
&lt;p&gt;For the second year, we will have &lt;a href="https://www.lfasiallc.com/events/contributors-summit-china-2019/"&gt;a Contributor Summit event&lt;/a&gt; the day before &lt;a href="https://events.linuxfoundation.cn/events/kubecon-cloudnativecon-china-2019/"&gt;KubeCon China&lt;/a&gt; in Shanghai. If you already contribute to Kubernetes or would like to contribute, please consider attending and &lt;a href="https://www.lfasiallc.com/events/contributors-summit-china-2019/register/"&gt;registering&lt;/a&gt;. The Summit will be held June 24th, at the Shanghai Expo Center (the same location where KubeCon will take place), and will include a Current Contributor Day as well as the New Contributor Workshop and the Documentation Sprints.&lt;/p&gt;</description></item><item><title>Kyma - extend and build on Kubernetes with ease</title><link>https://andygol-k8s.netlify.app/blog/2019/05/23/kyma-extend-and-build-on-kubernetes-with-ease/</link><pubDate>Thu, 23 May 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/05/23/kyma-extend-and-build-on-kubernetes-with-ease/</guid><description>&lt;p&gt;According to this recently completed &lt;a href="https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent/"&gt;CNCF Survey&lt;/a&gt;, the adoption rate of Cloud Native technologies in production is growing rapidly. Kubernetes is at the heart of this technological revolution. Naturally, the growth of cloud native technologies has been accompanied by the growth of the ecosystem that surrounds it. Of course, the complexity of cloud native technologies has increased as well. Just google the phrase “Kubernetes is hard”, and you’ll get plenty of articles that explain this complexity problem.
The best thing about the CNCF community is that problems like this can be solved by smart people building new tools to enable Kubernetes users: Projects like Knative and its &lt;a href="https://github.com/knative/build"&gt;Build resource&lt;/a&gt; extension, for example, serve to reduce complexity across a range of scenarios. Even though increasing complexity might seem like the most important issue to tackle, it is not the only challenge you face when transitioning to Cloud Native.&lt;/p&gt;</description></item><item><title>Kubernetes, Cloud Native, and the Future of Software</title><link>https://andygol-k8s.netlify.app/blog/2019/05/17/kubernetes-cloud-native-and-the-future-of-software/</link><pubDate>Fri, 17 May 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/05/17/kubernetes-cloud-native-and-the-future-of-software/</guid><description>&lt;h1 id="kubernetes-cloud-native-and-the-future-of-software"&gt;Kubernetes, Cloud Native, and the Future of Software&lt;/h1&gt;
&lt;p&gt;Five years ago this June, Google Cloud announced a new application management technology called Kubernetes. It began with a &lt;a href="https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56"&gt;simple open source commit&lt;/a&gt;, followed the next day by a &lt;a href="https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html"&gt;one-paragraph blog mention&lt;/a&gt; around container support. Later in the week, Eric Brewer &lt;a href="https://www.youtube.com/watch?v=YrxnVKZeqK8"&gt;talked about Kubernetes for the first time&lt;/a&gt; at DockerCon. And soon the world was watching.&lt;/p&gt;
&lt;p&gt;We’re delighted to see Kubernetes become core to the creation and operation of modern software, and thereby a key part of the global economy. To us, the success of Kubernetes represents even more: A business transition with truly worldwide implications, thanks to the unprecedented cooperation afforded by the open source software movement.&lt;/p&gt;</description></item><item><title>Expanding our Contributor Workshops</title><link>https://andygol-k8s.netlify.app/blog/2019/05/14/expanding-our-contributor-workshops/</link><pubDate>Tue, 14 May 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/05/14/expanding-our-contributor-workshops/</guid><description>&lt;p&gt;&lt;strong&gt;tl;dr&lt;/strong&gt; - learn about the contributor community with us and land your first
PR! We have spots available in &lt;a href="https://events.linuxfoundation.org/events/contributor-summit-europe-2019/"&gt;Barcelona&lt;/a&gt; (registration &lt;strong&gt;closes&lt;/strong&gt; on
Wednesday May 15, so grab your spot!) and the upcoming &lt;a href="https://www.lfasiallc.com/events/contributors-summit-china-2019/"&gt;Shanghai&lt;/a&gt; Summit.
The Barcelona event is poised to be our biggest one yet, with more registered
attendees than ever before!&lt;/p&gt;
&lt;p&gt;Have you always wanted to contribute to Kubernetes, but not sure where to begin?
Have you browsed our community’s many code bases and spotted places to improve? We
have a workshop for you!&lt;/p&gt;</description></item><item><title>Cat shirts and Groundhog Day: the Kubernetes 1.14 release interview</title><link>https://andygol-k8s.netlify.app/blog/2019/05/13/cat-shirts-and-groundhog-day-the-kubernetes-1.14-release-interview/</link><pubDate>Mon, 13 May 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/05/13/cat-shirts-and-groundhog-day-the-kubernetes-1.14-release-interview/</guid><description>&lt;p&gt;Last week we celebrated one year of the &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt;. In this weekly show, my co-host Adam Glick and I focus on all the great things that are happening in the world of Kubernetes and Cloud Native. From the news of the week, to interviews with people in the community, we help you stay up to date on everything Kubernetes.&lt;/p&gt;
&lt;p&gt;Every few cycles we check in on the release process for Kubernetes itself. Last year we &lt;a href="https://kubernetespodcast.com/episode/010-kubernetes-1.11/"&gt;interviewed the release managers for Kubernetes 1.11&lt;/a&gt;, and shared that transcript on the Kubernetes blog. We got such great feedback that we wanted to share the transcript of our recent conversation with Aaron Crickenberger, the release manager for Kubernetes 1.14.&lt;/p&gt;</description></item><item><title>Join us for the 2019 KubeCon Diversity Lunch &amp; Hack</title><link>https://andygol-k8s.netlify.app/blog/2019/05/02/kubecon-diversity-lunch-and-hack/</link><pubDate>Thu, 02 May 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/05/02/kubecon-diversity-lunch-and-hack/</guid><description>&lt;p&gt;Join us for the 2019 KubeCon Diversity Lunch &amp;amp; Hack: Building Tech Skills &amp;amp; An Inclusive Community - Sponsored by Google Cloud and VMware&lt;/p&gt;
&lt;p&gt;Registration for the Diversity Lunch opens today, May 2nd, 2019. To register, go to the main &lt;a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/schedule/"&gt;KubeCon + CloudNativeCon EU schedule&lt;/a&gt;, then log in to your Sched account, and confirm your attendance to the Diversity Lunch. Please sign up ASAP once the link is live, as spaces will fill quickly. We filled the event in just a few days last year, and anticipate doing so again this year.&lt;/p&gt;</description></item><item><title>How You Can Help Localize Kubernetes Docs</title><link>https://andygol-k8s.netlify.app/blog/2019/04/26/how-you-can-help-localize-kubernetes-docs/</link><pubDate>Fri, 26 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/26/how-you-can-help-localize-kubernetes-docs/</guid><description>&lt;p&gt;Last year we optimized the Kubernetes website for &lt;a href="https://andygol-k8s.netlify.app/blog/2018/11/08/kubernetes-docs-updates-international-edition/"&gt;hosting multilingual content&lt;/a&gt;. Contributors responded by adding multiple new localizations: as of April 2019, Kubernetes docs are partially available in nine different languages, with six added in 2019 alone. You can see a list of available languages in the language selector at the top of each page.&lt;/p&gt;
&lt;p&gt;By &lt;em&gt;partially available&lt;/em&gt;, I mean that localizations are ongoing projects. They range from mostly complete (&lt;a href="https://v1-12.docs.kubernetes.io/zh-cn/"&gt;Chinese docs for 1.12&lt;/a&gt;) to brand new (1.14 docs in &lt;a href="https://kubernetes.io/pt/"&gt;Portuguese&lt;/a&gt;). If you're interested in helping an existing localization, read on!&lt;/p&gt;</description></item><item><title>Hardware Accelerated SSL/TLS Termination in Ingress Controllers using Kubernetes Device Plugins and RuntimeClass</title><link>https://andygol-k8s.netlify.app/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/</link><pubDate>Wed, 24 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/</guid><description>&lt;h2 id="abstract"&gt;Abstract&lt;/h2&gt;
&lt;p&gt;A Kubernetes Ingress is a way to connect cluster services to the world outside the cluster. In order
to correctly route the traffic to service backends, the cluster needs an Ingress controller. The
Ingress controller is responsible for setting the right destinations to backends based on the
Ingress API objects’ information. The actual traffic is routed through a proxy server that
is responsible for tasks such as load balancing and SSL/TLS (hereafter “SSL” refers to both SSL
and TLS) termination. SSL termination is a CPU-heavy operation due to the crypto operations
involved. To offload some of this CPU-intensive work, OpenSSL-based proxy
servers can take advantage of the OpenSSL Engine API and dedicated crypto hardware. This frees
CPU cycles for other things and improves the overall throughput of the proxy server.&lt;/p&gt;</description></item><item><title>Introducing kube-iptables-tailer: Better Networking Visibility in Kubernetes Clusters</title><link>https://andygol-k8s.netlify.app/blog/2019/04/19/introducing-kube-iptables-tailer/</link><pubDate>Fri, 19 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/19/introducing-kube-iptables-tailer/</guid><description>&lt;p&gt;At Box, we use Kubernetes to empower our engineers to own the whole lifecycle of their microservices. When it comes to networking, our engineers use Tigera’s &lt;a href="https://www.tigera.io/tigera-calico/"&gt;Project Calico&lt;/a&gt; to declaratively manage network policies for their apps running in our Kubernetes clusters. App owners define a Calico policy in order to enable their Pods to send/receive network traffic, which is instantiated as iptables rules.&lt;/p&gt;
&lt;p&gt;There may be times, however, when such network policy is missing or declared incorrectly by app owners. In this situation, the iptables rules will cause network packet drops between the affected Pods, which get logged in a file that is inaccessible to app owners. We needed a mechanism to seamlessly deliver alerts about those iptables packet drops based on their network policies to help app owners quickly diagnose the corresponding issues. To solve this, we developed a service called &lt;a href="https://github.com/box/kube-iptables-tailer"&gt;kube-iptables-tailer&lt;/a&gt; to detect packet drops from iptables logs and report them as Kubernetes events. We are proud to open-source kube-iptables-tailer for you to utilize in your own cluster, regardless of whether you use Calico or other network policy tools.&lt;/p&gt;</description></item><item><title>The Future of Cloud Providers in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/</link><pubDate>Wed, 17 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/</guid><description>&lt;p&gt;Approximately 9 months ago, the Kubernetes community agreed to form the Cloud Provider Special Interest Group (SIG). The justification was to have a single governing SIG to own and shape the integration points between Kubernetes and the many cloud providers it supported. 
A lot has been in motion since then and we’re here to share with you what has been accomplished so far and what we hope to see in the future.&lt;/p&gt;</description></item><item><title>Pod Priority and Preemption in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2019/04/16/pod-priority-and-preemption-in-kubernetes/</link><pubDate>Tue, 16 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/16/pod-priority-and-preemption-in-kubernetes/</guid><description>&lt;p&gt;Kubernetes is well-known for running scalable workloads. It scales your workloads based on their resource usage. When a workload is scaled up, more instances of the application get created. When the application is critical for your product, you want to make sure that these new instances are scheduled even when your cluster is under resource pressure. One obvious solution to this problem is to over-provision your cluster resources to have some amount of slack resources available for scale-up situations. This approach often works, but costs more as you would have to pay for the resources that are idle most of the time.&lt;/p&gt;</description></item><item><title>Process ID Limiting for Stability Improvements in Kubernetes 1.14</title><link>https://andygol-k8s.netlify.app/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/</link><pubDate>Mon, 15 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/</guid><description>&lt;p&gt;Have you ever seen someone take more than their fair share of the cookies? The one person who reaches in and grabs a half dozen fresh baked chocolate chip chunk morsels and skitters off like Cookie Monster exclaiming “Om nom nom nom.”&lt;/p&gt;
&lt;p&gt;In some rare workloads, a similar occurrence was taking place inside Kubernetes clusters. With each Pod and Node, there comes a finite number of possible process IDs (PIDs) for all applications to share. While it is rare for any one process or pod to reach in and grab all the PIDs, some users were experiencing resource starvation due to this type of behavior. So in Kubernetes 1.14, we introduced an enhancement to mitigate the risk of a single pod monopolizing all of the PIDs available.&lt;/p&gt;</description></item><item><title>Kubernetes 1.14: Local Persistent Volumes GA</title><link>https://andygol-k8s.netlify.app/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/</link><pubDate>Thu, 04 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/</guid><description>&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#local"&gt;Local Persistent Volumes&lt;/a&gt;
feature has been promoted to GA in Kubernetes 1.14.
It was first introduced as alpha in Kubernetes 1.7, and then
&lt;a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/"&gt;beta&lt;/a&gt; in Kubernetes
1.10. The GA milestone indicates that Kubernetes users may depend on the feature
and its API for production use. GA features are protected by the Kubernetes
&lt;a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/"&gt;deprecation
policy&lt;/a&gt;.&lt;/p&gt;
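&lt;p&gt;As a sketch, a local persistent volume is declared with the &lt;code&gt;local&lt;/code&gt; volume source plus a required node affinity (the disk path, capacity, and node name below are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  # pin the volume to the node that physically owns the disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
&lt;/code&gt;&lt;/pre&gt;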
&lt;h2 id="what-is-a-local-persistent-volume"&gt;What is a Local Persistent Volume?&lt;/h2&gt;
&lt;p&gt;A local persistent volume represents a local disk directly-attached to a single
Kubernetes Node.&lt;/p&gt;</description></item><item><title>Kubernetes v1.14 delivers production-level support for Windows nodes and Windows containers</title><link>https://andygol-k8s.netlify.app/blog/2019/04/01/kubernetes-v1.14-delivers-production-level-support-for-windows-nodes-and-windows-containers/</link><pubDate>Mon, 01 Apr 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/04/01/kubernetes-v1.14-delivers-production-level-support-for-windows-nodes-and-windows-containers/</guid><description>&lt;p&gt;The first release of Kubernetes in 2019 brings a highly anticipated feature - production-level support for Windows workloads. Up until now Windows node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. While in beta, developers in the Kubernetes community and Windows Server team worked together to improve the container runtime, build a continuous testing process, and complete features needed for a good user experience. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform.&lt;/p&gt;</description></item><item><title>kube-proxy Subtleties: Debugging an Intermittent Connection Reset</title><link>https://andygol-k8s.netlify.app/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/</link><pubDate>Fri, 29 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/</guid><description>&lt;p&gt;I recently came across a bug that causes intermittent connection resets. After
some digging, I found it was caused by a subtle combination of several different
network subsystems. It helped me understand Kubernetes networking better, and I
think it’s worth sharing with a wider audience interested in the same
topic.&lt;/p&gt;
&lt;h2 id="the-symptom"&gt;The symptom&lt;/h2&gt;
&lt;p&gt;We received a user report claiming they were getting connection resets while using a
Kubernetes service of type ClusterIP to serve large files to pods running in the
same cluster. Initial debugging of the cluster did not yield anything
interesting: network connectivity was fine and downloading the files did not hit
any issues. However, when we ran the workload in parallel across many clients,
we were able to reproduce the problem. Adding to the mystery was the fact that
the problem could not be reproduced when the workload was run using VMs without
Kubernetes. The problem, which could be easily reproduced by &lt;a href="https://github.com/tcarmet/k8s-connection-reset"&gt;a simple
app&lt;/a&gt;, clearly has something to
do with Kubernetes networking, but what?&lt;/p&gt;</description></item><item><title>Running Kubernetes locally on Linux with Minikube - now with Kubernetes 1.14 support</title><link>https://andygol-k8s.netlify.app/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/</link><pubDate>Thu, 28 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/</guid><description>&lt;center&gt;

&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2019-03-28-running-kubernetes-locally-on-linux-with-minikube/ihor-dvoretskyi-1470985-unsplash.jpg" width="600"/&gt; 
&lt;/figure&gt;&lt;/center&gt;
&lt;p&gt;&lt;em&gt;A few days ago, the Kubernetes community announced &lt;a href="https://kubernetes.io/blog/2019/03/25/kubernetes-1-14-release-announcement/"&gt;Kubernetes 1.14&lt;/a&gt;, the most recent version of Kubernetes. Alongside it, Minikube, a part of the Kubernetes project, recently hit the &lt;a href="https://github.com/kubernetes/minikube/releases/tag/v1.0.0"&gt;1.0 milestone&lt;/a&gt;, which supports &lt;a href="https://kubernetes.io/blog/2019/03/25/kubernetes-1-14-release-announcement/"&gt;Kubernetes 1.14&lt;/a&gt; by default.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes is a real winner (and a de facto standard) in the world of distributed Cloud Native computing. While it can handle up to &lt;a href="https://kubernetes.io/blog/2017/03/scalability-updates-in-kubernetes-1-6/"&gt;5000 nodes&lt;/a&gt; in a single cluster, local deployment on a single machine (e.g. a laptop, a developer workstation, etc.) is an increasingly common scenario for using Kubernetes.&lt;/p&gt;</description></item><item><title>Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA</title><link>https://andygol-k8s.netlify.app/blog/2019/03/25/kubernetes-1-14-release-announcement/</link><pubDate>Mon, 25 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/25/kubernetes-1-14-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.14, our first release of 2019!&lt;/p&gt;
&lt;p&gt;Kubernetes 1.14 consists of 31 enhancements: 10 moving to stable, 12 in beta, and 7 net new. The main themes of this release are extensibility and supporting more workloads on Kubernetes with three major features moving to general availability, and an important security feature moving to beta.&lt;/p&gt;
&lt;p&gt;More enhancements graduated to stable in this release than any prior Kubernetes release. This represents an important milestone for users and operators in terms of setting support expectations. In addition, there are notable Pod and RBAC enhancements in this release, which are discussed in the “additional notable features” section below.&lt;/p&gt;</description></item><item><title>Kubernetes End-to-end Testing for Everyone</title><link>https://andygol-k8s.netlify.app/blog/2019/03/22/kubernetes-end-to-end-testing-for-everyone/</link><pubDate>Fri, 22 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/22/kubernetes-end-to-end-testing-for-everyone/</guid><description>&lt;p&gt;More and more components that used to be part of Kubernetes are now
being developed outside of Kubernetes. For example, storage drivers
used to be compiled into Kubernetes binaries, then were moved into
&lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md"&gt;stand-alone FlexVolume
binaries&lt;/a&gt;
on the host, and now are delivered as &lt;a href="https://github.com/container-storage-interface/spec"&gt;Container Storage Interface
(CSI) drivers&lt;/a&gt;
that get deployed in pods inside the Kubernetes cluster itself.&lt;/p&gt;
&lt;p&gt;This poses a challenge for developers who work on such components: how
can end-to-end (E2E) testing on a Kubernetes cluster be done for such
external components? The E2E framework that is used for testing
Kubernetes itself has all the necessary functionality. However, trying
to use it outside of Kubernetes was difficult and only possible by
carefully selecting the right versions of a large number of
dependencies. E2E testing has become a lot simpler in Kubernetes 1.13.&lt;/p&gt;</description></item><item><title>A Guide to Kubernetes Admission Controllers</title><link>https://andygol-k8s.netlify.app/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/</link><pubDate>Thu, 21 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/</guid><description>&lt;p&gt;Kubernetes has greatly improved the speed and manageability of backend clusters in production today. Kubernetes has emerged as the de facto standard in container orchestrators thanks to its flexibility, scalability, and ease of use. Kubernetes also provides a range of features that secure production workloads. A more recent introduction in security features is a set of plugins called “&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/"&gt;admission controllers&lt;/a&gt;.” Admission controllers must be enabled to use some of the more advanced security features of Kubernetes, such as &lt;a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/"&gt;pod security policies&lt;/a&gt; that enforce a security configuration baseline across an entire namespace. The following must-know tips and tricks will help you leverage admission controllers to make the most of these security capabilities in Kubernetes.&lt;/p&gt;</description></item><item><title>A Look Back and What's in Store for Kubernetes Contributor Summits</title><link>https://andygol-k8s.netlify.app/blog/2019/03/20/a-look-back-and-whats-in-store-for-kubernetes-contributor-summits/</link><pubDate>Wed, 20 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/20/a-look-back-and-whats-in-store-for-kubernetes-contributor-summits/</guid><description>&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2019-03-14-A-Look-Back-And-Whats-In-Store-For-Kubernetes-Contributor-Summits/celebrationsig.jpg"
 alt="Seattle Contributor Summit" width="600"/&gt; &lt;figcaption&gt;
 &lt;p&gt;Seattle Contributor Summit&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;As our contributing community grows in great numbers, with more than 16,000 contributors this year across 150+ GitHub repositories, it’s important to provide face-to-face connections so that our large distributed teams have opportunities for collaboration and learning. In &lt;a href="https://github.com/kubernetes/community/tree/master/sig-contributor-experience"&gt;Contributor Experience&lt;/a&gt;, our methodology for planning events is a lot like our documentation; we build from personas: interests, skills, and motivators, to name a few. This way we ensure there is valuable content and learning for everyone.&lt;/p&gt;
&lt;p&gt;Open source edge computing is going through its most dynamic phase of development in the industry. So many open source platforms, so many consolidations and so many initiatives for standardization! This shows the strong drive to build better platforms to bring cloud computing to the edges to meet ever increasing demand. &lt;a href="https://github.com/kubeedge/kubeedge"&gt;KubeEdge&lt;/a&gt;, which was announced last year, now brings great news for cloud native computing! It provides a complete edge computing solution based on Kubernetes with separate cloud and edge core modules. Currently, both the cloud and edge modules are open sourced.&lt;/p&gt;</description></item><item><title>Kubernetes Setup Using Ansible and Vagrant</title><link>https://andygol-k8s.netlify.app/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/</link><pubDate>Fri, 15 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/</guid><description>&lt;h2 id="objective"&gt;Objective&lt;/h2&gt;
&lt;p&gt;This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.&lt;/p&gt;
&lt;h2 id="why-do-we-require-multi-node-cluster-setup"&gt;Why do we require multi node cluster setup?&lt;/h2&gt;
&lt;p&gt;Multi-node Kubernetes clusters offer a production-like environment, which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn't provide the opportunity to work with multi-node clusters, which can help solve problems or bugs related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, which makes them more agile.&lt;/p&gt;</description></item><item><title>Raw Block Volume support to Beta</title><link>https://andygol-k8s.netlify.app/blog/2019/03/07/raw-block-volume-support-to-beta/</link><pubDate>Thu, 07 Mar 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/03/07/raw-block-volume-support-to-beta/</guid><description>&lt;p&gt;Kubernetes v1.13 moves raw block volume support to beta. This feature allows persistent volumes to be exposed inside containers as a block device instead of as a mounted file system.&lt;/p&gt;
&lt;h2 id="what-are-block-devices"&gt;What are block devices?&lt;/h2&gt;
&lt;p&gt;Block devices enable random access to data in fixed-size blocks. Hard drives, SSDs, and CD-ROM drives are all examples of block devices.&lt;/p&gt;
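&lt;p&gt;As a sketch, a claim requests a raw device by setting &lt;code&gt;volumeMode: Block&lt;/code&gt; (the name and size below are illustrative); a Pod then consumes it through &lt;code&gt;volumeDevices&lt;/code&gt; rather than &lt;code&gt;volumeMounts&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block  # expose the volume as a device, not a mounted filesystem
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;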
&lt;p&gt;Typically persistent storage is implemented in a layered manner with a file system (like ext4) on top of a block device (like a spinning disk or SSD). Applications then read and write files instead of operating on blocks. The operating system takes care of reading and writing files, using the specified filesystem, to the underlying device as blocks.&lt;/p&gt;</description></item><item><title>Automate Operations on your Cluster with OperatorHub.io</title><link>https://andygol-k8s.netlify.app/blog/2019/02/28/automate-operations-on-your-cluster-with-operatorhub.io/</link><pubDate>Thu, 28 Feb 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/02/28/automate-operations-on-your-cluster-with-operatorhub.io/</guid><description>&lt;p&gt;One of the important challenges facing developers and Kubernetes administrators has been a lack of ability to quickly find common services that are operationally ready for Kubernetes. Typically, the presence of an Operator for a specific service - a pattern that was introduced in 2016 and has gained momentum - is a good signal for the operational readiness of the service on Kubernetes. However, to date there has been no registry of Operators to simplify the discovery of such services.&lt;/p&gt;
Microservices typically communicate through Layer 7 protocols such as HTTP, gRPC, or WebSockets, and therefore having the ability to make routing decisions, manipulate protocol metadata, and observe at this layer is vital. However, traditional load balancers and edge proxies have predominantly focused on L3/4 traffic. This is where the &lt;a href="https://www.envoyproxy.io/"&gt;Envoy Proxy&lt;/a&gt; comes into play.&lt;/p&gt;</description></item><item><title>Runc and CVE-2019-5736</title><link>https://andygol-k8s.netlify.app/blog/2019/02/11/runc-and-cve-2019-5736/</link><pubDate>Mon, 11 Feb 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/02/11/runc-and-cve-2019-5736/</guid><description>&lt;p&gt;This morning &lt;a href="https://www.openwall.com/lists/oss-security/2019/02/11/2"&gt;a container escape vulnerability in runc was announced&lt;/a&gt;. We wanted to provide some guidance to Kubernetes users to ensure everyone is safe and secure.&lt;/p&gt;
&lt;h2 id="what-is-runc"&gt;What is runc?&lt;/h2&gt;
&lt;p&gt;Very briefly, runc is the low-level tool which does the heavy lifting of spawning a Linux container. Other tools like Docker, containerd, and CRI-O sit on top of runc to deal with things like data formatting and serialization, but runc is at the heart of all of these systems.&lt;/p&gt;
&lt;p&gt;Cluster management systems such as Mesos, Google Borg, and Kubernetes in a cloud-scale datacenter environment (also termed &lt;em&gt;&lt;strong&gt;Datacenter-as-a-Computer&lt;/strong&gt;&lt;/em&gt; or &lt;em&gt;&lt;strong&gt;Warehouse-Scale Computing - WSC&lt;/strong&gt;&lt;/em&gt;) typically manage application workloads by performing tasks such as tracking machine liveness and starting, monitoring, and terminating workloads, and, more importantly, by using a &lt;strong&gt;Cluster Scheduler&lt;/strong&gt; to decide on workload placements.&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;Cluster Scheduler&lt;/strong&gt; essentially assigns workloads to compute resources – intelligent global placement of work across the WSC environment makes the “warehouse-scale computer” more efficient, increases utilization, and saves energy. Examples of &lt;strong&gt;Cluster Schedulers&lt;/strong&gt; are Google Borg, Kubernetes, Firmament, Mesos, Tarcil, Quasar, Quincy, Swarm, YARN, Nomad, Sparrow, and Apollo.&lt;/p&gt;
&lt;h2 id="breaking-changes"&gt;Breaking Changes&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/container-storage-interface/spec/releases/tag/v1.0.0"&gt;CSI spec v1.0&lt;/a&gt; introduced a few breaking changes to the volume snapshot feature. CSI driver maintainers should be aware of these changes as they upgrade their drivers to support v1.0.&lt;/p&gt;
&lt;h2 id="snapshotstatus-replaced-with-boolean-readytouse"&gt;SnapshotStatus replaced with Boolean ReadyToUse&lt;/h2&gt;
&lt;p&gt;CSI v0.3.0 defined a &lt;code&gt;SnapshotStatus&lt;/code&gt; enum in &lt;code&gt;CreateSnapshotResponse&lt;/code&gt; which indicates whether the snapshot is &lt;code&gt;READY&lt;/code&gt;, &lt;code&gt;UPLOADING&lt;/code&gt;, or &lt;code&gt;ERROR_UPLOADING&lt;/code&gt;. In CSI v1.0, &lt;code&gt;SnapshotStatus&lt;/code&gt; has been removed from &lt;code&gt;CreateSnapshotResponse&lt;/code&gt; and replaced with a &lt;code&gt;boolean ReadyToUse&lt;/code&gt;. A &lt;code&gt;ReadyToUse&lt;/code&gt; value of &lt;code&gt;true&lt;/code&gt; indicates that post-snapshot processing (such as uploading) is complete and the snapshot is ready to be used as a source to create a volume.&lt;/p&gt;</description></item><item><title>Container Storage Interface (CSI) for Kubernetes GA</title><link>https://andygol-k8s.netlify.app/blog/2019/01/15/container-storage-interface-ga/</link><pubDate>Tue, 15 Jan 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/01/15/container-storage-interface-ga/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog-logging/2018-04-10-container-storage-interface-beta/csi-kubernetes.png" alt="Kubernetes Logo"&gt;
&lt;img src="https://andygol-k8s.netlify.app/images/blog-logging/2018-04-10-container-storage-interface-beta/csi-logo.png" alt="CSI Logo"&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes implementation of the &lt;a href="https://github.com/container-storage-interface/spec/blob/master/spec.md"&gt;Container Storage Interface&lt;/a&gt; (CSI) has been promoted to GA in the Kubernetes v1.13 release. Support for CSI was &lt;a href="http://blog.kubernetes.io/2018/01/introducing-container-storage-interface.html"&gt;introduced as alpha&lt;/a&gt; in Kubernetes v1.9 release, and &lt;a href="https://kubernetes.io/blog/2018/04/10/container-storage-interface-beta/"&gt;promoted to beta&lt;/a&gt; in the Kubernetes v1.10 release.&lt;/p&gt;
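&lt;p&gt;For readers who have not used CSI yet, consuming a CSI driver follows the usual dynamic provisioning pattern. The sketch below is illustrative only: the provisioner name &lt;code&gt;csi.example.com&lt;/code&gt; and the object names are hypothetical placeholders, not a real driver.&lt;/p&gt;

```yaml
# Hypothetical CSI driver; substitute the provisioner your cluster actually runs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example-sc
provisioner: csi.example.com
---
# A PVC that dynamically provisions a volume through the CSI driver above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-example-sc
  resources:
    requests:
      storage: 1Gi
```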
&lt;p&gt;The GA milestone indicates that Kubernetes users may depend on the feature and its API without fear of backward-incompatible changes in future releases causing regressions. GA features are protected by the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/using-api/deprecation-policy/"&gt;Kubernetes deprecation policy&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>APIServer dry-run and kubectl diff</title><link>https://andygol-k8s.netlify.app/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/</link><pubDate>Mon, 14 Jan 2019 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/</guid><description>&lt;p&gt;Declarative configuration management, also known as configuration-as-code, is
one of the key strengths of Kubernetes. It allows users to commit the desired state of
the cluster, keep track of the different versions, and improve auditing and
automation through CI/CD pipelines. The &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-wg-apply"&gt;Apply working-group&lt;/a&gt;
is working on fixing some of the gaps, and is happy to announce that Kubernetes
1.13 promoted server-side dry-run and &lt;code&gt;kubectl diff&lt;/code&gt; to beta. These
two features are big improvements for the Kubernetes declarative model.&lt;/p&gt;</description></item><item><title>Kubernetes Federation Evolution</title><link>https://andygol-k8s.netlify.app/blog/2018/12/12/kubernetes-federation-evolution/</link><pubDate>Wed, 12 Dec 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/12/12/kubernetes-federation-evolution/</guid><description>&lt;p&gt;Kubernetes provides great primitives for deploying applications to a cluster: it can be as simple as &lt;code&gt;kubectl create -f app.yaml&lt;/code&gt;. Deploying apps across multiple clusters has never been that simple. How should app workloads be distributed? Should the app resources be replicated into all clusters, replicated into selected clusters, or partitioned into clusters? How is access to the clusters managed? What happens if some of the resources that a user wants to distribute pre-exist, in some or all of the clusters, in some form?&lt;/p&gt;</description></item><item><title>etcd: Current status and future roadmap</title><link>https://andygol-k8s.netlify.app/blog/2018/12/11/etcd-current-status-and-future-roadmap/</link><pubDate>Tue, 11 Dec 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/12/11/etcd-current-status-and-future-roadmap/</guid><description>&lt;p&gt;etcd is a distributed key-value store that provides a reliable way to manage the coordination state of distributed systems. etcd was first announced in June 2013 by CoreOS (part of Red Hat as of 2018). Since its adoption in Kubernetes in 2014, etcd has become a fundamental part of the Kubernetes cluster management software design, and the etcd community has grown exponentially. etcd is now being used in the production environments of multiple companies, including large cloud provider environments such as AWS, Google Cloud Platform, and Azure, as well as on-premises Kubernetes implementations.
CNCF currently has &lt;a href="https://www.cncf.io/announcement/2017/11/13/cloud-native-computing-foundation-launches-certified-kubernetes-program-32-conformant-distributions-platforms/"&gt;32 conformant Kubernetes platforms and distributions&lt;/a&gt;, all of which use etcd as the datastore.&lt;/p&gt;</description></item><item><title>New Contributor Workshop Shanghai</title><link>https://andygol-k8s.netlify.app/blog/2018/12/05/new-contributor-workshop-shanghai/</link><pubDate>Wed, 05 Dec 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/12/05/new-contributor-workshop-shanghai/</guid><description>&lt;figure&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-12-05-new-contributor-shanghai/attendees.png"
 alt="Kubecon Shanghai New Contributor Summit attendees. Photo by Jerry Zhang"/&gt; &lt;figcaption&gt;
 &lt;p&gt;Kubecon Shanghai New Contributor Summit attendees. Photo by Jerry Zhang&lt;/p&gt;
 &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;We recently completed our first New Contributor Summit in China, at the first KubeCon in China. It was very exciting to see all of the Chinese and Asian developers (plus a few folks from around the world) interested in becoming contributors. Over the course of a long day, they learned how, why, and where to contribute to Kubernetes, created pull requests, attended a panel of current contributors, and got their CLAs signed.&lt;/p&gt;</description></item><item><title>Production-Ready Kubernetes Cluster Creation with kubeadm</title><link>https://andygol-k8s.netlify.app/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/</link><pubDate>Tue, 04 Dec 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/setup/independent/create-cluster-kubeadm/"&gt;kubeadm&lt;/a&gt; is a tool that enables Kubernetes administrators to quickly and easily bootstrap minimum viable clusters that are fully compliant with &lt;a href="https://github.com/cncf/k8s-conformance/blob/master/terms-conditions/Certified_Kubernetes_Terms.md"&gt;Certified Kubernetes&lt;/a&gt; guidelines. It's been under active development by &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle"&gt;SIG Cluster Lifecycle&lt;/a&gt; since 2016 and we're excited to announce that it has now graduated from beta to stable and generally available (GA)!&lt;/p&gt;
&lt;p&gt;This GA release of kubeadm is an important event in the progression of the Kubernetes ecosystem, bringing stability to an area where stability is paramount.&lt;/p&gt;</description></item><item><title>Kubernetes 1.13: Simplified Cluster Management with Kubeadm, Container Storage Interface (CSI), and CoreDNS as Default DNS are Now Generally Available</title><link>https://andygol-k8s.netlify.app/blog/2018/12/03/kubernetes-1-13-release-announcement/</link><pubDate>Mon, 03 Dec 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/12/03/kubernetes-1-13-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.13, our fourth and final release of 2018!&lt;/p&gt;
&lt;p&gt;Kubernetes 1.13 has been one of the shortest releases to date at 10 weeks. This release continues to focus on stability and extensibility of Kubernetes with three major features graduating to general availability this cycle in the areas of Storage and Cluster Lifecycle. Notable features graduating in this release include: simplified cluster management with kubeadm, Container Storage Interface (CSI), and CoreDNS as the default DNS.&lt;/p&gt;</description></item><item><title>Kubernetes Docs Updates, International Edition</title><link>https://andygol-k8s.netlify.app/blog/2018/11/08/kubernetes-docs-updates-international-edition/</link><pubDate>Thu, 08 Nov 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/11/08/kubernetes-docs-updates-international-edition/</guid><description>&lt;p&gt;As a co-chair of SIG Docs, I'm excited to share that Kubernetes docs have a fully mature workflow for localization (l10n).&lt;/p&gt;
&lt;h2 id="abbreviations-galore"&gt;Abbreviations galore&lt;/h2&gt;
&lt;p&gt;L10n is an abbreviation for &lt;em&gt;localization&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I18n is an abbreviation for &lt;em&gt;internationalization&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I18n is &lt;a href="https://www.w3.org/International/questions/qa-i18n"&gt;what you do&lt;/a&gt; to make l10n easier. L10n is a fuller, more comprehensive process than translation (&lt;em&gt;t9n&lt;/em&gt;).&lt;/p&gt;
&lt;h2 id="why-localization-matters"&gt;Why localization matters&lt;/h2&gt;
&lt;p&gt;The goal of SIG Docs is to make Kubernetes easier to use for as many people as possible.&lt;/p&gt;</description></item><item><title>gRPC Load Balancing on Kubernetes without Tears</title><link>https://andygol-k8s.netlify.app/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/</link><pubDate>Wed, 07 Nov 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/</guid><description>&lt;p&gt;Many new gRPC users are surprised to find that Kubernetes's default load
balancing often doesn't work out of the box with gRPC. For example, here's what
happens when you take a &lt;a href="https://github.com/sourishkrout/nodevoto"&gt;simple gRPC Node.js microservices
app&lt;/a&gt; and deploy it on Kubernetes:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/grpc-load-balancing-with-linkerd/Screenshot2018-11-0116-c4d86100-afc1-4a08-a01c-16da391756dd.34.36.png" alt=""&gt;&lt;/p&gt;
&lt;p&gt;While the &lt;code&gt;voting&lt;/code&gt; service displayed here has several pods, it's clear from
Kubernetes's CPU graphs that only one of the pods is actually doing any
work—because only one of the pods is receiving any traffic. Why?&lt;/p&gt;</description></item><item><title>Tips for Your First Kubecon Presentation - Part 2</title><link>https://andygol-k8s.netlify.app/blog/2018/10/26/tips-for-your-first-kubecon-presentation-part-2/</link><pubDate>Fri, 26 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/26/tips-for-your-first-kubecon-presentation-part-2/</guid><description>&lt;p&gt;Hello and welcome back to the second and final part about tips for KubeCon first-time speakers. If you missed the last post, please give it a read &lt;a href="https://kubernetes.io/blog/2018/10/18/tips-for-your-first-kubecon-presentation-part-1/"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="the-day-before-the-show"&gt;The Day before the Show&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Tip #13 - Get enough sleep&lt;/strong&gt;. I don't know about you, but when I don't get enough sleep (especially when beer is in the game), the next day my brain power is around 80% at best. It's very easy to get distracted at KubeCon (in a positive sense): &amp;quot;Let's have dinner tonight and chat about XYZ.&amp;quot; You get some food, beer, or wine because you're so excited, and all the good resolutions you had set for the day before your presentation are forgotten :)&lt;/p&gt;
&lt;center&gt;&lt;blockquote class="twitter-tweet"&gt;&lt;p lang="en" dir="ltr"&gt;Congrats to everyone who got Kubecon talks accepted! 👏👏👏&lt;br&gt;&lt;br&gt;To everyone who got a rejection don&amp;#39;t feel bad. Only 13% could be accepted. Keep trying. There will be other opportunities.&lt;/p&gt;&amp;mdash; Justin Garrison (@rothgar) &lt;a href="https://twitter.com/rothgar/status/1044345018490662912?ref_src=twsrc%5Etfw"&gt;September 24, 2018&lt;/a&gt;&lt;/blockquote&gt;&lt;/center&gt;
&lt;p&gt;When I was informed that my &lt;a href="https://www.youtube.com/watch?v=8-apJyr2gi0"&gt;KubeCon talk about Kubernetes Resource Management&lt;/a&gt; was accepted for KubeCon EU in Denmark (2018), I really could not believe it. By then, the chances to get your talk accepted were around 10% (or less, I don't really remember the exact number). There were over 1,000 submissions just for that KubeCon (recall that we now have &lt;strong&gt;three KubeCon events during the year&lt;/strong&gt; - US, EU and Asia region). The popularity of Kubernetes is ever increasing and so is the number of people trying to get a talk accepted. Once again, &lt;strong&gt;outstanding achievement to get your talk in&lt;/strong&gt;!&lt;/p&gt;</description></item><item><title>Kubernetes 2018 North American Contributor Summit</title><link>https://andygol-k8s.netlify.app/blog/2018/10/16/kubernetes-2018-north-american-contributor-summit/</link><pubDate>Tue, 16 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/16/kubernetes-2018-north-american-contributor-summit/</guid><description>&lt;p&gt;The 2018 North American Kubernetes Contributor Summit, to be hosted right before
&lt;a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/"&gt;KubeCon + CloudNativeCon&lt;/a&gt; Seattle, is shaping up to be the largest yet.
It is an event that brings together new and current contributors alike to
connect and share face-to-face, and it serves as an opportunity for existing
contributors to help shape the future of community development. For new
community members, it offers a welcoming space to learn, explore, and put the
contributor workflow into practice.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://kubernetes.io/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/"&gt;Kubernetes Steering Committee Election&lt;/a&gt; is now complete, and the following candidates came out ahead to secure two-year terms that start immediately:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aaron Crickenberger, Google, &lt;a href="https://github.com/spiffxp"&gt;@spiffxp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Davanum Srinivas, Huawei, &lt;a href="https://github.com/dims"&gt;@dims&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tim St. Clair, Heptio, &lt;a href="https://github.com/timothysc"&gt;@timothysc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="big-thanks"&gt;Big Thanks!&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Steering Committee Member Emeritus &lt;a href="https://github.com/quinton-hoole"&gt;Quinton Hoole&lt;/a&gt; for his service to the community over the past year. We look forward to his continued contributions.&lt;/li&gt;
&lt;li&gt;The candidates who came forward to run for election. May we always have a strong set of people who want to push the community forward in every election.&lt;/li&gt;
&lt;li&gt;All 307 voters who cast a ballot.&lt;/li&gt;
&lt;li&gt;And last but not least...Cornell University for hosting &lt;a href="https://civs.cs.cornell.edu/"&gt;CIVS&lt;/a&gt;!&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="get-involved-with-the-steering-committee"&gt;Get Involved with the Steering Committee&lt;/h2&gt;
&lt;p&gt;You can follow along with Steering Committee &lt;a href="https://git.k8s.io/steering/backlog.md"&gt;backlog items&lt;/a&gt; and weigh in by filing an issue or creating a PR against their &lt;a href="https://github.com/kubernetes/steering"&gt;repo&lt;/a&gt;. They meet bi-weekly on &lt;a href="https://github.com/kubernetes/steering"&gt;Wednesdays at 8pm UTC&lt;/a&gt; and regularly attend Meet Our Contributors.&lt;/p&gt;
Starting with &lt;a href="https://kubernetes.io/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/"&gt;rkt&lt;/a&gt; in Kubernetes 1.3, more runtimes were added, which led to the development of the &lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/"&gt;Container Runtime Interface&lt;/a&gt; (CRI). Since then, the set of alternative runtimes has only expanded: projects like &lt;a href="https://katacontainers.io/"&gt;Kata Containers&lt;/a&gt; and &lt;a href="https://github.com/google/gvisor"&gt;gVisor&lt;/a&gt; were announced for stronger workload isolation, and Kubernetes' Windows support has been &lt;a href="https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/"&gt;steadily progressing&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;With runtimes targeting so many different use cases, a clear need for mixed runtimes in a cluster arose. But all these different ways of running containers have brought a new set of problems to deal with:&lt;/p&gt;</description></item><item><title>Introducing Volume Snapshot Alpha for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/</link><pubDate>Tue, 09 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/</guid><description>&lt;p&gt;Kubernetes v1.12 introduces alpha support for volume snapshotting. This feature allows creating/deleting volume snapshots, and the ability to create new volumes from a snapshot natively using the Kubernetes API.&lt;/p&gt;
&lt;h2 id="what-is-a-snapshot"&gt;What is a Snapshot?&lt;/h2&gt;
&lt;p&gt;Many storage systems (like Google Cloud Persistent Disks, Amazon Elastic Block Storage, and many on-premise storage systems) provide the ability to create a &amp;quot;snapshot&amp;quot; of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to provision a new volume (pre-populated with the snapshot data) or to restore the existing volume to a previous state (represented by the snapshot).&lt;/p&gt;</description></item><item><title>Support for Azure VMSS, Cluster-Autoscaler and User Assigned Identity</title><link>https://andygol-k8s.netlify.app/blog/2018/10/08/support-for-azure-vmss-cluster-autoscaler-and-user-assigned-identity/</link><pubDate>Mon, 08 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/08/support-for-azure-vmss-cluster-autoscaler-and-user-assigned-identity/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;With Kubernetes v1.12, Azure virtual machine scale sets (VMSS) and cluster-autoscaler have reached their General Availability (GA) and User Assigned Identity is available as a preview feature.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Azure VMSS allow you to create and manage identical, load balanced VMs that automatically increase or decrease based on demand or a set schedule. This enables you to easily manage and scale multiple VMs to provide high availability and application resiliency, ideal for large-scale applications like container workloads &lt;a href="https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview"&gt;[1]&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</description></item><item><title>Introducing the Non-Code Contributor’s Guide</title><link>https://andygol-k8s.netlify.app/blog/2018/10/04/introducing-the-non-code-contributors-guide/</link><pubDate>Thu, 04 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/04/introducing-the-non-code-contributors-guide/</guid><description>&lt;p&gt;It was May 2018 in Copenhagen, and the Kubernetes community was enjoying the contributor summit at KubeCon/CloudNativeCon, complete with the first run of the New Contributor Workshop. As a time of tremendous collaboration between contributors, the topics covered ranged from signing the CLA to deep technical conversations. Along with the vast exchange of information and ideas, however, came continued scrutiny of the topics at hand to ensure that the community was being as inclusive and accommodating as possible. Over that spring week, some of the pieces under the microscope included not only the many themes being covered and how they were being presented, but also the overarching characteristics of the people contributing and the skill sets involved.
From the discussions and analysis that followed grew the idea that the community was not benefiting as much as it could from the many people who wanted to contribute, but whose strengths were in areas other than writing code.&lt;/p&gt;</description></item><item><title>KubeDirector: The easy way to run complex stateful applications on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/10/03/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubernetes/</link><pubDate>Wed, 03 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/03/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubernetes/</guid><description>&lt;p&gt;KubeDirector is an open source project designed to make it easy to run complex stateful scale-out application clusters on Kubernetes. KubeDirector is built using the custom resource definition (CRD) framework and leverages the native Kubernetes API extensions and design philosophy. This enables transparent integration with Kubernetes user/resource management as well as existing clients and tools.&lt;/p&gt;
&lt;p&gt;We recently &lt;a href="https://medium.com/@thomas_phelan/operation-stateful-introducing-bluek8s-and-kubernetes-director-aa204952f619/"&gt;introduced the KubeDirector project&lt;/a&gt;, as part of a broader open source Kubernetes initiative we call BlueK8s. I’m happy to announce that the pre-alpha
code for &lt;a href="https://github.com/bluek8s/kubedirector/"&gt;KubeDirector&lt;/a&gt; is now available. And in this blog post, I’ll show how it works.&lt;/p&gt;</description></item><item><title>Building a Network Bootable Server Farm for Kubernetes with LTSP</title><link>https://andygol-k8s.netlify.app/blog/2018/10/02/building-a-network-bootable-server-farm-for-kubernetes-with-ltsp/</link><pubDate>Tue, 02 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/02/building-a-network-bootable-server-farm-for-kubernetes-with-ltsp/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-10-01-network-bootable-farm-with-ltsp/k8s&amp;#43;ltsp.svg" alt="k8s&amp;#43;ltsp"&gt;&lt;/p&gt;
&lt;p&gt;In this post, I'm going to introduce you to a cool technology for Kubernetes, LTSP. It is useful for large baremetal Kubernetes deployments.&lt;/p&gt;
&lt;p&gt;You don't need to think about installing an OS and binaries on each node anymore. Why? You can do that automatically with a Dockerfile!&lt;/p&gt;
&lt;p&gt;You can buy and put 100 new servers into a production environment and get them working immediately - it's really amazing!&lt;/p&gt;
&lt;p&gt;Intrigued? Let me walk you through how it works.&lt;/p&gt;</description></item><item><title>Health checking gRPC servers on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/</link><pubDate>Mon, 01 Oct 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/</guid><description>&lt;p&gt;&lt;strong&gt;Update (December 2021):&lt;/strong&gt; &lt;em&gt;Kubernetes now has built-in gRPC health probes starting in v1.23.
To learn more, see &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe"&gt;Configure Liveness, Readiness and Startup Probes&lt;/a&gt;.
This article was originally written about an external tool to achieve the same task.&lt;/em&gt;&lt;/p&gt;
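&lt;p&gt;With the built-in support, a liveness probe can target the standard gRPC health-checking service directly. A minimal sketch follows; the port number is illustrative and assumes the server exposes the gRPC health service on it.&lt;/p&gt;

```yaml
# Container spec fragment: built-in gRPC liveness probe (Kubernetes v1.23+).
livenessProbe:
  grpc:
    port: 5000   # illustrative; must match the server's gRPC port
  initialDelaySeconds: 10
```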
&lt;p&gt;&lt;a href="https://grpc.io"&gt;gRPC&lt;/a&gt; is on its way to becoming the lingua franca for
communication between cloud-native microservices. If you are deploying gRPC
applications to Kubernetes today, you may be wondering about the best way to
configure health checks. In this article, we will talk about
&lt;a href="https://github.com/grpc-ecosystem/grpc-health-probe/"&gt;grpc-health-probe&lt;/a&gt;, a
Kubernetes-native way to health check gRPC apps.&lt;/p&gt;</description></item><item><title>Kubernetes 1.12: Kubelet TLS Bootstrap and Azure Virtual Machine Scale Sets (VMSS) Move to General Availability</title><link>https://andygol-k8s.netlify.app/blog/2018/09/27/kubernetes-1.12-kubelet-tls-bootstrap-and-azure-virtual-machine-scale-sets-vmss-move-to-general-availability/</link><pubDate>Thu, 27 Sep 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/09/27/kubernetes-1.12-kubelet-tls-bootstrap-and-azure-virtual-machine-scale-sets-vmss-move-to-general-availability/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.12, our third release of 2018!&lt;/p&gt;
&lt;p&gt;Today’s release continues to focus on internal improvements and graduating features to stable in Kubernetes. This newest version graduates key features in areas such as security and Azure. Notable additions in this release include two highly-anticipated features graduating to general availability: Kubelet TLS Bootstrap and Support for Azure Virtual Machine Scale Sets (VMSS).&lt;/p&gt;
&lt;p&gt;These new features mean increased security, availability, resiliency, and ease of use to get production applications to market faster. The release also signifies the increasing maturation and sophistication of Kubernetes on the developer side.&lt;/p&gt;</description></item><item><title>Hands On With Linkerd 2.0</title><link>https://andygol-k8s.netlify.app/blog/2018/09/18/hands-on-with-linkerd-2.0/</link><pubDate>Tue, 18 Sep 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/09/18/hands-on-with-linkerd-2.0/</guid><description>&lt;p&gt;Linkerd 2.0 was recently announced as generally available (GA), signaling its readiness for production use. In this tutorial, we’ll walk you through how to get Linkerd 2.0 up and running on your Kubernetes cluster in a matter of seconds.&lt;/p&gt;
&lt;p&gt;But first, what is Linkerd and why should you care? Linkerd is a service sidecar that augments a Kubernetes service, providing zero-config dashboards and UNIX-style CLI tools for runtime debugging, diagnostics, and reliability. Linkerd is also a service mesh, applied to multiple (or all) services in a cluster to provide a uniform layer of telemetry, security, and control across them.&lt;/p&gt;</description></item><item><title>2018 Steering Committee Election Cycle Kicks Off</title><link>https://andygol-k8s.netlify.app/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/</link><pubDate>Thu, 06 Sep 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/</guid><description>&lt;p&gt;Having a clear, definable governance model is crucial for the health of open source projects. Governance is especially critical for a project as large and active as Kubernetes, one of the highest-velocity projects in the open source world. A clear structure helps users trust that the project will be nurtured and progress forward. 
Initially, this structure was laid down by the former seven-member bootstrap committee, composed of founders and senior contributors, with the goal of creating the foundational governance building blocks.&lt;/p&gt;</description></item><item><title>The Machines Can Do the Work, a Story of Kubernetes Testing, CI, and Automating the Contributor Experience</title><link>https://andygol-k8s.netlify.app/blog/2018/08/29/the-machines-can-do-the-work-a-story-of-kubernetes-testing-ci-and-automating-the-contributor-experience/</link><pubDate>Wed, 29 Aug 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/08/29/the-machines-can-do-the-work-a-story-of-kubernetes-testing-ci-and-automating-the-contributor-experience/</guid><description>&lt;p&gt;&lt;em&gt;“Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.”&lt;/em&gt; - &lt;a href="https://git.k8s.io/community/values.md#automation-over-process"&gt;Kubernetes Community Values&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Like many open source projects, Kubernetes is hosted on GitHub. We felt the barrier to participation would be lowest if the project lived where developers already worked, using tools and processes developers already knew. Thus the project embraced the service fully: it was the basis of our workflow, our issue tracker, our documentation, our blog platform, our team structure, and more.&lt;/p&gt;</description></item><item><title>Introducing Kubebuilder: an SDK for building Kubernetes APIs using CRDs</title><link>https://andygol-k8s.netlify.app/blog/2018/08/10/introducing-kubebuilder-an-sdk-for-building-kubernetes-apis-using-crds/</link><pubDate>Fri, 10 Aug 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/08/10/introducing-kubebuilder-an-sdk-for-building-kubernetes-apis-using-crds/</guid><description>&lt;p&gt;How can we enable applications such as MySQL, Spark and Cassandra to manage themselves just like Kubernetes Deployments and Pods do? How do we configure these applications as their own first class APIs instead of a collection of StatefulSets, Services, and ConfigMaps?&lt;/p&gt;
&lt;p&gt;We have been working on a solution and are happy to introduce &lt;a href="https://github.com/kubernetes-sigs/kubebuilder"&gt;&lt;em&gt;kubebuilder&lt;/em&gt;&lt;/a&gt;, a comprehensive development kit for rapidly building and publishing Kubernetes APIs and Controllers using CRDs. Kubebuilder scaffolds projects and API definitions and is built on top of the &lt;a href="https://github.com/kubernetes-sigs/controller-runtime"&gt;controller-runtime&lt;/a&gt; libraries.&lt;/p&gt;</description></item><item><title>Out of the Clouds onto the Ground: How to Make Kubernetes Production Grade Anywhere</title><link>https://andygol-k8s.netlify.app/blog/2018/08/03/out-of-the-clouds-onto-the-ground-how-to-make-kubernetes-production-grade-anywhere/</link><pubDate>Fri, 03 Aug 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/08/03/out-of-the-clouds-onto-the-ground-how-to-make-kubernetes-production-grade-anywhere/</guid><description>&lt;p&gt;This blog offers some guidelines for running a production grade Kubernetes cluster in an environment like an on-premise data center or edge location.&lt;/p&gt;
&lt;p&gt;What does it mean to be “production grade”?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The installation is secure&lt;/li&gt;
&lt;li&gt;The deployment is managed with a repeatable and recorded process&lt;/li&gt;
&lt;li&gt;Performance is predictable and consistent&lt;/li&gt;
&lt;li&gt;Updates and configuration changes can be safely applied&lt;/li&gt;
&lt;li&gt;Logging and monitoring are in place to detect and diagnose failures and resource shortages&lt;/li&gt;
&lt;li&gt;Service is “highly available enough” considering available resources, including constraints on money, physical space, power, etc.&lt;/li&gt;
&lt;li&gt;A recovery process is available, documented, and tested for use in the event of failures&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In short, production grade means anticipating accidents and preparing for recovery with minimal pain and delay.&lt;/p&gt;</description></item><item><title>Dynamically Expand Volume with CSI and Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/08/02/dynamically-expand-volume-with-csi-and-kubernetes/</link><pubDate>Thu, 02 Aug 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/08/02/dynamically-expand-volume-with-csi-and-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;There is a very powerful storage subsystem within Kubernetes itself, covering a fairly broad spectrum of use cases. However, when planning to build a production-grade relational database platform with Kubernetes, we face a big challenge: storage. This article describes how to extend the latest Container Storage Interface 0.2.0, integrate it with Kubernetes, and demonstrates the essential aspect of dynamically expanding volume capacity.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;As we focus on our customers, especially those in the financial space, we see a huge upswell in the adoption of container orchestration technology.&lt;/p&gt;</description></item><item><title>KubeVirt: Extending Kubernetes with CRDs for Virtualized Workloads</title><link>https://andygol-k8s.netlify.app/blog/2018/07/27/kubevirt-extending-kubernetes-with-crds-for-virtualized-workloads/</link><pubDate>Fri, 27 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/27/kubevirt-extending-kubernetes-with-crds-for-virtualized-workloads/</guid><description>&lt;h2 id="what-is-kubevirt"&gt;What is KubeVirt?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/kubevirt/kubevirt"&gt;KubeVirt&lt;/a&gt; is a Kubernetes addon that provides users the ability to schedule traditional virtual machine workloads side by side with container workloads. Through the use of &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;Custom Resource Definitions&lt;/a&gt; (CRDs) and other Kubernetes features, KubeVirt seamlessly extends existing Kubernetes clusters to provide a set of virtualization APIs that can be used to manage virtual machines.&lt;/p&gt;
&lt;h2 id="why-use-crds-over-an-aggregated-api-server"&gt;Why Use CRDs Over an Aggregated API Server?&lt;/h2&gt;
&lt;p&gt;Back in the middle of 2017, those of us working on KubeVirt were at a crossroads. We had to decide whether to extend Kubernetes using an aggregated API server or to make use of the new Custom Resource Definitions (CRDs) feature.&lt;/p&gt;</description></item><item><title>Feature Highlight: CPU Manager</title><link>https://andygol-k8s.netlify.app/blog/2018/07/24/feature-highlight-cpu-manager/</link><pubDate>Tue, 24 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/24/feature-highlight-cpu-manager/</guid><description>&lt;p&gt;This blog post describes the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/"&gt;CPU Manager&lt;/a&gt;, a beta feature in &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;. The CPU manager feature enables better placement of workloads in the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kubelet/"&gt;Kubelet&lt;/a&gt;, the Kubernetes node agent, by allocating exclusive CPUs to certain pod containers.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-07-24-cpu-manager/cpu-manager.png" alt="cpu manager"&gt;&lt;/p&gt;
&lt;h2 id="sounds-good-but-does-the-cpu-manager-help-me"&gt;Sounds Good! But Does the CPU Manager Help Me?&lt;/h2&gt;
&lt;p&gt;It depends on your workload. A single compute node in a Kubernetes cluster can run many &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod/"&gt;pods&lt;/a&gt; and some of these pods could be running CPU-intensive workloads. In such a scenario, the pods might contend for the CPU resources available in that compute node. When this contention intensifies, the workload can move to different CPUs depending on whether the pod is throttled and the availability of CPUs at scheduling time. There might also be cases where the workload could be sensitive to context switches. In all the above scenarios, the performance of the workload might be affected.&lt;/p&gt;</description></item><item><title>The History of Kubernetes &amp; the Community Behind It</title><link>https://andygol-k8s.netlify.app/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/</link><pubDate>Fri, 20 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-07-20-history-kubernetes-community.png" alt="oscon award"&gt;&lt;/p&gt;
&lt;p&gt;It is remarkable to me to return to Portland and OSCON to stand on stage with members of the Kubernetes community and accept this award for Most Impactful Open Source Project. It was scarcely three years ago that, on this very same stage, we declared Kubernetes 1.0 and the project was added to the newly formed Cloud Native Computing Foundation.&lt;/p&gt;
&lt;p&gt;To think about how far we have come in that short period of time and to see the ways in which this project has shaped the cloud computing landscape is nothing short of amazing. The success is a testament to the power and contributions of this amazing open source community. And the daily passion and quality contributions of our endlessly engaged, world-wide community is nothing short of humbling.&lt;/p&gt;</description></item><item><title>Kubernetes Wins the 2018 OSCON Most Impact Award</title><link>https://andygol-k8s.netlify.app/blog/2018/07/19/kubernetes-wins-2018-oscon-most-impact-award/</link><pubDate>Thu, 19 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/19/kubernetes-wins-2018-oscon-most-impact-award/</guid><description>&lt;p&gt;We are humbled to be recognized by the community with this award.&lt;/p&gt;
&lt;p&gt;We had high hopes when we created Kubernetes. We wanted to change the way cloud applications were deployed and managed. Whether we’d succeed or not was very uncertain. And look how far we’ve come in such a short time.&lt;/p&gt;
&lt;p&gt;The core technology behind Kubernetes was informed by &lt;a href="https://ai.google/research/pubs/pub44843"&gt;lessons learned from Google’s internal infrastructure&lt;/a&gt;, but nobody can deny the enormous role of the Kubernetes community in the success of the project. &lt;a href="https://k8s.devstats.cncf.io/d/8/company-statistics-by-repository-group?orgId=1"&gt;The community, of which Google is a part&lt;/a&gt;, now drives every aspect of the project: the design, development, testing, documentation, releases, and more. That is what makes Kubernetes fly.&lt;/p&gt;</description></item><item><title>11 Ways (Not) to Get Hacked</title><link>https://andygol-k8s.netlify.app/blog/2018/07/18/11-ways-not-to-get-hacked/</link><pubDate>Wed, 18 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/18/11-ways-not-to-get-hacked/</guid><description>&lt;p&gt;Kubernetes security has come a long way since the project's inception, but still contains some gotchas. Starting with the control plane, building up through workload and network security, and finishing with a projection into the future of security, here is a list of handy tips to help harden your clusters and increase their resilience if compromised.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#part-one-the-control-plane"&gt;Part One: The Control Plane&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#1-tls-everywhere"&gt;1. TLS Everywhere&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#2-enable-rbac-with-least-privilege-disable-abac-and-monitor-logs"&gt;2. Enable RBAC with Least Privilege, Disable ABAC, and Monitor Logs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#3-use-third-party-auth-for-api-server"&gt;3. Use Third Party Auth for API Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#4-separate-and-firewall-your-etcd-cluster"&gt;4. Separate and Firewall your etcd Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#5-rotate-encryption-keys"&gt;5. Rotate Encryption Keys&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#part-two-workloads"&gt;Part Two: Workloads&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#6-use-linux-security-features-and-podsecuritypolicies"&gt;6. Use Linux Security Features and PodSecurityPolicies&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#7-statically-analyse-yaml"&gt;7. Statically Analyse YAML&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#8-run-containers-as-a-non-root-user"&gt;8. Run Containers as a Non-Root User&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#9-use-network-policies"&gt;9. Use Network Policies&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#10-scan-images-and-run-ids"&gt;10. Scan Images and Run IDS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#part-three-the-future"&gt;Part Three: The Future&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#11-run-a-service-mesh"&gt;11. Run a Service Mesh&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#conclusion"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id="part-one-the-control-plane"&gt;Part One: The Control Plane&lt;/h1&gt;
&lt;p&gt;The control plane is Kubernetes' brain. It has an overall view of every container and pod running on the cluster, can schedule new pods (which can include containers with root access to their parent node), and can read all the secrets stored in the cluster. This valuable cargo needs protecting from accidental leakage and malicious intent: when it's accessed, when it's at rest, and when it's being transported across the network.&lt;/p&gt;</description></item><item><title>How the sausage is made: the Kubernetes 1.11 release interview, from the Kubernetes Podcast</title><link>https://andygol-k8s.netlify.app/blog/2018/07/16/how-the-sausage-is-made-the-kubernetes-1.11-release-interview-from-the-kubernetes-podcast/</link><pubDate>Mon, 16 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/16/how-the-sausage-is-made-the-kubernetes-1.11-release-interview-from-the-kubernetes-podcast/</guid><description>&lt;p&gt;At KubeCon EU, my colleague Adam Glick and I were pleased to announce the &lt;a href="https://kubernetespodcast.com/"&gt;Kubernetes Podcast from Google&lt;/a&gt;. In this weekly conversation, we focus on all the great things that are happening in the world of Kubernetes and Cloud Native. From the news of the week, to interviews with people in the community, we help you stay up to date on everything Kubernetes.&lt;/p&gt;
&lt;p&gt;We &lt;a href="https://kubernetespodcast.com/episode/010-kubernetes-1.11/"&gt;recently had the pleasure of speaking&lt;/a&gt; to the release manager for Kubernetes 1.11, Josh Berkus from Red Hat, and the release manager for the upcoming 1.12, Tim Pepper from VMware.&lt;/p&gt;</description></item><item><title>Resizing Persistent Volumes using Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/</link><pubDate>Thu, 12 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/</guid><description>&lt;p&gt;&lt;strong&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;series of in-depth articles&lt;/a&gt; on what’s new in Kubernetes 1.11&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In Kubernetes v1.11 the persistent volume expansion feature is being promoted to beta. This feature allows users to easily resize an existing volume by editing the &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; (PVC) object. Users no longer have to manually interact with the storage backend or delete and recreate PV and PVC objects to increase the size of a volume. Shrinking persistent volumes is not supported.&lt;/p&gt;</description></item><item><title>Dynamic Kubelet Configuration</title><link>https://andygol-k8s.netlify.app/blog/2018/07/11/dynamic-kubelet-configuration/</link><pubDate>Wed, 11 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/11/dynamic-kubelet-configuration/</guid><description>&lt;p&gt;&lt;strong&gt;Editor’s note: The feature has been removed in the version 1.24 after deprecation in 1.22.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;series of in-depth articles&lt;/a&gt; on what’s new in Kubernetes 1.11&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="why-dynamic-kubelet-configuration"&gt;Why Dynamic Kubelet Configuration?&lt;/h2&gt;
&lt;p&gt;Kubernetes provides API-centric tooling that significantly improves workflows for managing applications and infrastructure. Most Kubernetes installations, however, run the Kubelet as a native process on each host, outside the scope of standard Kubernetes APIs.&lt;/p&gt;</description></item><item><title>CoreDNS GA for Kubernetes Cluster DNS</title><link>https://andygol-k8s.netlify.app/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/</link><pubDate>Tue, 10 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/</guid><description>&lt;p&gt;&lt;strong&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;series of in-depth articles&lt;/a&gt; on what’s new in Kubernetes 1.11&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In Kubernetes 1.11, &lt;a href="https://coredns.io"&gt;CoreDNS&lt;/a&gt; has reached General Availability (GA) for DNS-based service discovery, as an alternative to the kube-dns addon. This means that CoreDNS will be offered as an option in upcoming versions of the various installation tools. In fact, the kubeadm team chose to make it the default option starting with Kubernetes 1.11.&lt;/p&gt;</description></item><item><title>Meet Our Contributors - Monthly Streaming YouTube Mentoring Series</title><link>https://andygol-k8s.netlify.app/blog/2018/07/10/meet-our-contributors-monthly-streaming-youtube-mentoring-series/</link><pubDate>Tue, 10 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/10/meet-our-contributors-monthly-streaming-youtube-mentoring-series/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-06-05-meet-our-contributors-youtube-mentoring-series/meet-our-contributors.png" alt="meet_our_contributors"&gt;&lt;/p&gt;
&lt;p&gt;July 11th at 2:30pm and 8pm UTC kicks off our next installment of the Meet Our Contributors YouTube series. This month is special: members of the steering committee will be on to answer any and all questions from the community in the first 30 minutes of the 8pm UTC session. More on submitting questions below.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/kubernetes/community/blob/master/mentoring/meet-our-contributors.md"&gt;Meet Our Contributors&lt;/a&gt; was created to give an opportunity to new and current contributors alike to get time in front of our upstream community to ask questions that you would typically ask a mentor. We have 3-6 contributors on each session (an AM and PM session depending on where you are in the world!) answer questions &lt;a href="https://www.youtube.com/c/KubernetesCommunity/live"&gt;live on a YouTube stream&lt;/a&gt;. If you miss it, don’t stress, the recording is up after it’s over. Check out a past episode &lt;a href="https://www.youtube.com/watch?v=EVsXi3Zhlo0&amp;list=PL69nYSiGNLP3QpQrhZq_sLYo77BVKv09F"&gt;here&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>IPVS-Based In-Cluster Load Balancing Deep Dive</title><link>https://andygol-k8s.netlify.app/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/</link><pubDate>Mon, 09 Jul 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/</guid><description>&lt;p&gt;&lt;strong&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;series of in-depth articles&lt;/a&gt; on what’s new in Kubernetes 1.11&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In &lt;a href="https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/"&gt;the Kubernetes 1.11 release blog post&lt;/a&gt;, we announced that IPVS-Based In-Cluster Service Load Balancing graduates to General Availability. In this blog, we will take you through a deep dive of the feature.&lt;/p&gt;
&lt;h2 id="what-is-ipvs"&gt;What Is IPVS?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;IPVS&lt;/strong&gt; (&lt;strong&gt;IP Virtual Server&lt;/strong&gt;) is built on top of Netfilter and implements transport-layer load balancing as part of the Linux kernel.&lt;/p&gt;</description></item><item><title>Airflow on Kubernetes (Part 1): A Different Kind of Operator</title><link>https://andygol-k8s.netlify.app/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/</link><pubDate>Thu, 28 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;As part of Bloomberg's &lt;a href="https://www.techatbloomberg.com/blog/bloomberg-awarded-first-cncf-end-user-award-contributions-kubernetes/"&gt;continued commitment to developing the Kubernetes ecosystem&lt;/a&gt;, we are excited to announce the Kubernetes Airflow Operator: a mechanism for &lt;a href="https://airflow.apache.org/"&gt;Apache Airflow&lt;/a&gt;, a popular workflow orchestration framework, to natively launch arbitrary Kubernetes Pods using the Kubernetes API.&lt;/p&gt;
&lt;h2 id="what-is-airflow"&gt;What Is Airflow?&lt;/h2&gt;
&lt;p&gt;Apache Airflow is one realization of the DevOps philosophy of &amp;quot;Configuration As Code.&amp;quot; Airflow allows users to launch multi-step pipelines using a simple Python object DAG (Directed Acyclic Graph). You can define dependencies, programmatically construct complex workflows, and monitor scheduled jobs in an easy-to-read UI.&lt;/p&gt;</description></item><item><title>Kubernetes 1.11: In-Cluster Load Balancing and CoreDNS Plugin Graduate to General Availability</title><link>https://andygol-k8s.netlify.app/blog/2018/06/27/kubernetes-1.11-release-announcement/</link><pubDate>Wed, 27 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/06/27/kubernetes-1.11-release-announcement/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.11, our second release of 2018!&lt;/p&gt;
&lt;p&gt;Today’s release continues to advance the maturity, scalability, and flexibility of Kubernetes, marking significant progress on features that the team has been hard at work on over the last year. This newest version graduates key features in networking, opens up two major features from SIG-API Machinery and SIG-Node for beta testing, and continues to enhance storage features that have been a focal point of the past two releases. The features in this release make it increasingly possible to plug any infrastructure, cloud or on-premise, into the Kubernetes system.&lt;/p&gt;</description></item><item><title>Dynamic Ingress in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/06/07/dynamic-ingress-in-kubernetes/</link><pubDate>Thu, 07 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/06/07/dynamic-ingress-in-kubernetes/</guid><description>&lt;p&gt;Kubernetes makes it easy to deploy applications that consist of many microservices, but one of the key challenges with this type of architecture is dynamically routing ingress traffic to each of these services. One approach is &lt;a href="https://www.getambassador.io"&gt;Ambassador&lt;/a&gt;, a Kubernetes-native open source API Gateway built on the &lt;a href="https://www.envoyproxy.io"&gt;Envoy Proxy&lt;/a&gt;. Ambassador is designed for dynamic environments where services may come and go frequently.&lt;/p&gt;
&lt;p&gt;Ambassador is configured using Kubernetes annotations. Annotations are used to configure specific mappings from a given Kubernetes service to a particular URL. A mapping can include a number of annotations for configuring a route. Examples include rate limiting, protocol, cross-origin resource sharing, traffic shadowing, and routing rules.&lt;/p&gt;</description></item><item><title>4 Years of K8s</title><link>https://andygol-k8s.netlify.app/blog/2018/06/06/4-years-of-k8s/</link><pubDate>Wed, 06 Jun 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/06/06/4-years-of-k8s/</guid><description>&lt;p&gt;On June 6, 2014 I checked in the &lt;a href="https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56"&gt;first commit&lt;/a&gt; of what would become the public repository for Kubernetes. Many would assume that is where the story starts. It is the beginning of history, right? But that really doesn’t tell the whole story.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog/2018-06-06-4-years-of-k8s/k8s-first-commit.png" alt="k8s_first_commit"&gt;&lt;/p&gt;
&lt;p&gt;The cast leading up to that commit was large, and the success of Kubernetes since then is owed to an ever-larger cast.&lt;/p&gt;
&lt;p&gt;Kubernetes was built on ideas that had been proven out at Google over the previous ten years with Borg. And Borg, itself, owed its existence to even earlier efforts at Google and beyond.&lt;/p&gt;</description></item><item><title>Say Hello to Discuss Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/05/30/say-hello-to-discuss-kubernetes/</link><pubDate>Wed, 30 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/30/say-hello-to-discuss-kubernetes/</guid><description>&lt;p&gt;Communication is key when it comes to engaging a community of over 35,000 people in a global and remote environment. Keeping track of everything in the Kubernetes community can be an overwhelming task. On one hand we have our official resources, like Stack Overflow, GitHub, and the mailing lists, and on the other we have more ephemeral resources like Slack, where you can hop in, chat with someone, and then go on your merry way.&lt;/p&gt;</description></item><item><title>Introducing kustomize; Template-free Configuration Customization for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/</link><pubDate>Tue, 29 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/</guid><description>&lt;p&gt;If you run a Kubernetes environment, chances are you’ve
customized a Kubernetes configuration — you've copied
some API object YAML files and edited them to suit
your needs.&lt;/p&gt;
&lt;p&gt;But there are drawbacks to this approach — it can be
hard to go back to the source material and incorporate
any improvements that were made to it. Today Google is
announcing &lt;a href="https://github.com/kubernetes-sigs/kustomize"&gt;&lt;strong&gt;kustomize&lt;/strong&gt;&lt;/a&gt;, a command-line tool
contributed as a &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/2377-Kustomize/README.md"&gt;subproject&lt;/a&gt; of &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli"&gt;SIG-CLI&lt;/a&gt;. The tool
provides a new, purely &lt;em&gt;declarative&lt;/em&gt; approach to
configuration customization that adheres to and
leverages the familiar and carefully designed
Kubernetes API.&lt;/p&gt;</description></item><item><title>Kubernetes Containerd Integration Goes GA</title><link>https://andygol-k8s.netlify.app/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/</link><pubDate>Thu, 24 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/</guid><description>&lt;h1 id="kubernetes-containerd-integration-goes-ga"&gt;Kubernetes Containerd Integration Goes GA&lt;/h1&gt;
&lt;p&gt;In a previous blog - &lt;a href="https://kubernetes.io/blog/2017/11/containerd-container-runtime-options-kubernetes"&gt;Containerd Brings More Container Runtime Options for Kubernetes&lt;/a&gt;, we introduced the alpha version of the Kubernetes containerd integration. With another 6 months of development, the integration with containerd is now generally available! You can now use &lt;a href="https://github.com/containerd/containerd/releases/tag/v1.1.0"&gt;containerd 1.1&lt;/a&gt; as the container runtime for production Kubernetes clusters!&lt;/p&gt;
&lt;p&gt;Containerd 1.1 works with Kubernetes 1.10 and above, and supports all Kubernetes features. The test coverage of containerd integration on &lt;a href="https://cloud.google.com/"&gt;Google Cloud Platform&lt;/a&gt; in Kubernetes test infrastructure is now equivalent to the Docker integration (see the &lt;a href="https://k8s-testgrid.appspot.com/sig-node-containerd"&gt;test dashboard&lt;/a&gt;).&lt;/p&gt;</description></item><item><title>Getting to Know Kubevirt</title><link>https://andygol-k8s.netlify.app/blog/2018/05/22/getting-to-know-kubevirt/</link><pubDate>Tue, 22 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/22/getting-to-know-kubevirt/</guid><description>&lt;p&gt;Once you've become accustomed to running Linux container workloads on Kubernetes, you may find yourself wishing that you could run other sorts of workloads on your Kubernetes cluster. Maybe you need to run an application that isn't architected for containers, or that requires a different version of the Linux kernel -- or an altogether different operating system -- than what's available on your container host.&lt;/p&gt;
&lt;p&gt;These sorts of workloads are often well-suited to running in virtual machines (VMs), and &lt;a href="http://www.kubevirt.io/"&gt;KubeVirt&lt;/a&gt;, a virtual machine management add-on for Kubernetes, is aimed at allowing users to run VMs right alongside containers in their Kubernetes or OpenShift clusters.&lt;/p&gt;</description></item><item><title>Gardener - The Kubernetes Botanist</title><link>https://andygol-k8s.netlify.app/blog/2018/05/17/gardener/</link><pubDate>Thu, 17 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/17/gardener/</guid><description>&lt;p&gt;Today, Kubernetes is the natural choice for running software in the Cloud. More and more developers and corporations are in the process of containerizing their applications, and many of them are adopting Kubernetes for automated deployments of their Cloud Native workloads.&lt;/p&gt;
&lt;p&gt;There are many Open Source tools which help in creating and updating single Kubernetes clusters. However, the more clusters you need the harder it becomes to operate, monitor, manage, and keep all of them alive and up-to-date.&lt;/p&gt;</description></item><item><title>Docs are Migrating from Jekyll to Hugo</title><link>https://andygol-k8s.netlify.app/blog/2018/05/05/hugo-migration/</link><pubDate>Sat, 05 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/05/hugo-migration/</guid><description>&lt;h2 id="changing-the-site-framework"&gt;Changing the site framework&lt;/h2&gt;
&lt;p&gt;After nearly a year of investigating how to enable multilingual support for Kubernetes docs, we've decided to migrate the site's static generator from Jekyll to &lt;a href="https://gohugo.io/"&gt;Hugo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What does the Hugo migration mean for users and contributors?&lt;/p&gt;
&lt;h3 id="things-will-break"&gt;Things will break&lt;/h3&gt;
&lt;p&gt;Hugo's Markdown parser is &lt;a href="https://gohugo.io/getting-started/configuration/#configure-blackfriday"&gt;strict in different ways than Jekyll's&lt;/a&gt;. As a consequence, some Markdown formatting that rendered fine in Jekyll now produces some unexpected results: &lt;a href="https://github.com/kubernetes/website/issues/8258"&gt;strange left nav ordering&lt;/a&gt;, &lt;a href="https://github.com/kubernetes/website/issues/8247"&gt;vanishing tutorials&lt;/a&gt;, and &lt;a href="https://github.com/kubernetes/website/issues/8246"&gt;broken links&lt;/a&gt;, among others.&lt;/p&gt;</description></item><item><title>Announcing Kubeflow 0.1</title><link>https://andygol-k8s.netlify.app/blog/2018/05/04/announcing-kubeflow-0.1/</link><pubDate>Fri, 04 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/04/announcing-kubeflow-0.1/</guid><description>&lt;h1 id="since-last-we-met"&gt;Since Last We Met&lt;/h1&gt;
&lt;p&gt;Since the &lt;a href="https://kubernetes.io/blog/2017/12/introducing-kubeflow-composable"&gt;initial announcement&lt;/a&gt; of Kubeflow at &lt;a href="https://kccncna17.sched.com/event/CU5v/hot-dogs-or-not-at-scale-with-kubernetes-i-vish-kannan-david-aronchick-google"&gt;the last KubeCon+CloudNativeCon&lt;/a&gt;, we have been both surprised and delighted by the excitement for building great ML stacks for Kubernetes. In just over five months, the &lt;a href="https://github.com/kubeflow"&gt;Kubeflow project&lt;/a&gt; now has:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;70+ contributors&lt;/li&gt;
&lt;li&gt;20+ contributing organizations&lt;/li&gt;
&lt;li&gt;15 repositories&lt;/li&gt;
&lt;li&gt;3100+ GitHub stars&lt;/li&gt;
&lt;li&gt;700+ commits&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and is already among the top 2% of GitHub projects &lt;strong&gt;&lt;em&gt;ever&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;People are excited to chat about Kubeflow as well! The Kubeflow community has also held meetups, talks and public sessions all around the world with thousands of attendees. With all this help, we’ve started to make substantial progress in every step of ML, from building your first model all the way to building production-ready, high-scale deployments. At the end of the day, our mission remains the same: we want to let data scientists and software engineers focus on the things they do well by giving them an easy-to-use, portable and scalable ML stack.&lt;/p&gt;</description></item><item><title>Current State of Policy in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/05/02/policy-in-kubernetes/</link><pubDate>Wed, 02 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/02/policy-in-kubernetes/</guid><description>&lt;p&gt;Kubernetes has grown dramatically in its impact on the industry; and with rapid growth, we are beginning to see variations across components in how they define and apply policies.&lt;/p&gt;
&lt;p&gt;Currently, policy-related components can be found in identity services, networking services, storage services, multi-cluster federation, RBAC and many other areas, with differing degrees of maturity and different motivations for specific problems. Within each component, some policies are extensible while others are not. The languages used by each project to express intent vary based on the original authors and experience. Driving consistent views of policies across the entire domain is a daunting task.&lt;/p&gt;</description></item><item><title>Developing on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/05/01/developing-on-kubernetes/</link><pubDate>Tue, 01 May 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/05/01/developing-on-kubernetes/</guid><description>&lt;p&gt;How do you develop a Kubernetes app? That is, how do you write and test an app that is supposed to run on Kubernetes? This article focuses on the challenges, tools and methods you might want to be aware of to successfully write Kubernetes apps alone or in a team setting.&lt;/p&gt;
&lt;p&gt;We’re assuming you are a developer, you have a favorite programming language, editor/IDE, and a testing framework available. The overarching goal is to introduce minimal changes to your current workflow when developing the app for Kubernetes. For example, if you’re a Node.js developer and are used to a hot-reload setup—that is, on save in your editor the running app gets automagically updated—then dealing with containers and container images, with container registries, Kubernetes deployments, triggers, and more can not only be overwhelming but really take all the fun out of it.&lt;/p&gt;</description></item><item><title>Zero-downtime Deployment in Kubernetes with Jenkins</title><link>https://andygol-k8s.netlify.app/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/</link><pubDate>Mon, 30 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/</guid><description>&lt;p&gt;Ever since we added the &lt;a href="https://aka.ms/azjenkinsk8s"&gt;Kubernetes Continuous Deploy&lt;/a&gt; and &lt;a href="https://aka.ms/azjenkinsacs"&gt;Azure Container Service&lt;/a&gt; plugins to the Jenkins update center, &amp;quot;How do I create zero-downtime deployments&amp;quot; is one of our most frequently-asked questions. We created a quickstart template on Azure to demonstrate what zero-downtime deployments can look like. Although our example uses Azure, the concept easily applies to all Kubernetes installations.&lt;/p&gt;
&lt;h2 id="rolling-update"&gt;Rolling Update&lt;/h2&gt;
&lt;p&gt;Kubernetes supports the RollingUpdate strategy to replace old pods with new ones gradually, while continuing to serve clients without incurring downtime. To perform a RollingUpdate deployment:&lt;/p&gt;</description></item><item><title>Kubernetes Community - Top of the Open Source Charts in 2017</title><link>https://andygol-k8s.netlify.app/blog/2018/04/25/open-source-charts-2017/</link><pubDate>Wed, 25 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/25/open-source-charts-2017/</guid><description>&lt;p&gt;2017 was a huge year for Kubernetes, and GitHub’s latest &lt;a href="https://octoverse.github.com"&gt;Octoverse report&lt;/a&gt; illustrates just how much attention this project has been getting.&lt;/p&gt;
&lt;p&gt;Kubernetes, an &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/what-is-kubernetes/"&gt;open source platform for running application containers&lt;/a&gt;, provides a consistent interface that enables developers and ops teams to automate the deployment, management, and scaling of a wide variety of applications on just about any infrastructure.&lt;/p&gt;
&lt;p&gt;Solving these shared challenges by leveraging a wide community of expertise and industrial experience, as Kubernetes does, helps engineers focus on building their own products at the top of the stack, rather than needlessly duplicating work that now exists as a standard part of the “cloud native” toolkit.&lt;/p&gt;</description></item><item><title>Kubernetes Application Survey 2018 Results</title><link>https://andygol-k8s.netlify.app/blog/2018/04/24/kubernetes-application-survey-results-2018/</link><pubDate>Tue, 24 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/24/kubernetes-application-survey-results-2018/</guid><description>&lt;p&gt;Understanding how people use or want to use Kubernetes can help us shape everything from what we build to how we do it. To help us understand how application developers, application operators, and ecosystem tool developers are using and want to use Kubernetes, the Application Definition Working Group recently performed a survey. The survey focused in on these types of user roles and the features and sub-projects owned by the Kubernetes organization. That included kubectl, Dashboard, Minikube, Helm, the Workloads API, etc.&lt;/p&gt;</description></item><item><title>Local Persistent Volumes for Kubernetes Goes Beta</title><link>https://andygol-k8s.netlify.app/blog/2018/04/13/local-persistent-volumes-beta/</link><pubDate>Fri, 13 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/13/local-persistent-volumes-beta/</guid><description>&lt;p&gt;The &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#local"&gt;Local Persistent Volumes&lt;/a&gt; beta feature in Kubernetes 1.10 makes it possible to leverage local disks in your StatefulSets. You can specify directly-attached local disks as PersistentVolumes, and use them in StatefulSets with the same PersistentVolumeClaim objects that previously only supported remote volume types.&lt;/p&gt;
&lt;p&gt;Persistent storage is important for running stateful applications, and Kubernetes has supported these workloads with StatefulSets, PersistentVolumeClaims and PersistentVolumes. These primitives have supported remote volume types well, where the volumes can be accessed from any node in the cluster, but did not support local volumes, where the volumes can only be accessed from a specific node. The demand for using local, fast SSDs in replicated, stateful workloads has grown as more workloads move to Kubernetes.&lt;/p&gt;</description></item><item><title>Migrating the Kubernetes Blog</title><link>https://andygol-k8s.netlify.app/blog/2018/04/11/migrating-the-kubernetes-blog/</link><pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/11/migrating-the-kubernetes-blog/</guid><description>&lt;p&gt;We recently migrated the Kubernetes Blog from the Blogger platform to GitHub. With the change in platform comes a change in URL: formerly at &lt;a href="http://blog.kubernetes.io"&gt;http://blog.kubernetes.io&lt;/a&gt;, the blog now resides at &lt;a href="https://kubernetes.io/blog"&gt;https://kubernetes.io/blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;All existing posts redirect from their former URLs with &lt;code&gt;rel=canonical&lt;/code&gt; tags, preserving SEO values.&lt;/p&gt;
&lt;h3 id="why-and-how-we-migrated-the-blog"&gt;Why and how we migrated the blog&lt;/h3&gt;
&lt;p&gt;Our primary reasons for migrating were to streamline blog submissions and reviews, and to make the overall blog process faster and more transparent. Blogger's web interface made it difficult to provide drafts to multiple reviewers without also granting unnecessary access permissions and compromising security. GitHub's review process offered clear improvements.&lt;/p&gt;</description></item><item><title>Container Storage Interface (CSI) for Kubernetes Goes Beta</title><link>https://andygol-k8s.netlify.app/blog/2018/04/10/container-storage-interface-beta/</link><pubDate>Tue, 10 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/10/container-storage-interface-beta/</guid><description>&lt;p&gt;&lt;img src="https://andygol-k8s.netlify.app/images/blog-logging/2018-04-10-container-storage-interface-beta/csi-kubernetes.png" alt="Kubernetes Logo"&gt;
&lt;img src="https://andygol-k8s.netlify.app/images/blog-logging/2018-04-10-container-storage-interface-beta/csi-logo.png" alt="CSI Logo"&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes implementation of the Container Storage Interface (CSI) is now beta in Kubernetes v1.10. CSI was &lt;a href="https://kubernetes.io/blog/2018/01/introducing-container-storage-interface"&gt;introduced as alpha&lt;/a&gt; in Kubernetes v1.9.&lt;/p&gt;
&lt;p&gt;Kubernetes features are generally introduced as alpha and moved to beta (and eventually to stable/GA) over subsequent Kubernetes releases. This process allows Kubernetes developers to get feedback, discover and fix issues, iterate on the designs, and deliver high quality, production grade features.&lt;/p&gt;
&lt;h2 id="why-introduce-container-storage-interface-in-kubernetes"&gt;Why introduce Container Storage Interface in Kubernetes?&lt;/h2&gt;
&lt;p&gt;Although Kubernetes already provides a powerful volume plugin system that makes it easy to consume different types of block and file storage, adding support for new volume plugins has been challenging. Because volume plugins are currently “in-tree”—volume plugins are part of the core Kubernetes code and shipped with the core Kubernetes binaries—vendors wanting to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) must align themselves with the Kubernetes release process.&lt;/p&gt;</description></item><item><title>Fixing the Subpath Volume Vulnerability in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/04/04/fixing-subpath-volume-vulnerability/</link><pubDate>Wed, 04 Apr 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/04/04/fixing-subpath-volume-vulnerability/</guid><description>&lt;p&gt;On March 12, 2018, the Kubernetes Product Security team disclosed &lt;a href="https://issue.k8s.io/60813"&gt;CVE-2017-1002101&lt;/a&gt;, which allowed containers using &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/#using-subpath"&gt;subpath&lt;/a&gt; volume mounts to access files outside of the volume. This means that a container could access any file available on the host, including volumes for other containers that it should not have access to.&lt;/p&gt;
&lt;p&gt;The vulnerability has been fixed and released in the latest Kubernetes patch releases. We recommend that all users upgrade to get the fix. For more details on the impact and how to get the fix, please see the &lt;a href="https://groups.google.com/forum/#!topic/kubernetes-announce/6sNHO_jyBzE"&gt;announcement&lt;/a&gt;. (Note, some functional regressions were found after the initial fix and are being tracked in &lt;a href="https://github.com/kubernetes/kubernetes/issues/61563"&gt;issue #61563&lt;/a&gt;).&lt;/p&gt;</description></item><item><title>Kubernetes 1.10: Stabilizing Storage, Security, and Networking</title><link>https://andygol-k8s.netlify.app/blog/2018/03/26/kubernetes-1.10-stabilizing-storage-security-networking/</link><pubDate>Mon, 26 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/03/26/kubernetes-1.10-stabilizing-storage-security-networking/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.10, our first release
of 2018!&lt;/p&gt;
&lt;p&gt;Today’s release continues to advance maturity, extensibility, and pluggability
of Kubernetes. This newest version stabilizes features in 3 key areas:
storage, security, and networking. Notable additions in this release
include the introduction of external kubectl credential providers (alpha), the
ability to switch DNS service to CoreDNS at install time (beta), and the move
of Container Storage Interface (CSI) and persistent local volumes to beta.&lt;/p&gt;</description></item><item><title>Principles of Container-based Application Design</title><link>https://andygol-k8s.netlify.app/blog/2018/03/principles-of-container-app-design/</link><pubDate>Thu, 15 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/03/principles-of-container-app-design/</guid><description>&lt;p&gt;It's possible nowadays to put almost any application in a container and run it. Creating cloud-native applications, however—containerized applications that are automated and orchestrated effectively by a cloud-native platform such as Kubernetes—requires additional effort. Cloud-native applications anticipate failure; they run and scale reliably even when their infrastructure experiences outages. To offer such capabilities, cloud-native platforms like Kubernetes impose a set of contracts and constraints on applications. These contracts ensure that applications they run conform to certain constraints and allow the platform to automate application management.&lt;/p&gt;</description></item><item><title>Expanding User Support with Office Hours</title><link>https://andygol-k8s.netlify.app/blog/2018/03/expanding-user-support-with-office-hours/</link><pubDate>Wed, 14 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/03/expanding-user-support-with-office-hours/</guid><description>&lt;p&gt;&lt;strong&gt;Today's post is on Kubernetes office hours.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Today's developer has an almost overwhelming amount of resources available for learning. Kubernetes development teams use &lt;a href="https://stackoverflow.com/questions/tagged/kubernetes"&gt;StackOverflow&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/home"&gt;user documentation&lt;/a&gt;, &lt;a href="http://slack.k8s.io/"&gt;Slack&lt;/a&gt;, and the &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-users"&gt;mailing lists&lt;/a&gt;. Additionally, the community itself continues to amass an &lt;a href="https://github.com/ramitsurana/awesome-kubernetes"&gt;awesome list&lt;/a&gt; of resources.&lt;/p&gt;
&lt;p&gt;One of the challenges of large projects is keeping user resources relevant and useful. While documentation can be useful, great learning also happens in Q&amp;amp;A sessions at conferences, or by learning with someone whose explanation matches your learning style. Consider that learning Kung Fu from Morpheus would be a lot more fun than reading a book about Kung Fu!&lt;/p&gt;</description></item><item><title>How to Integrate RollingUpdate Strategy for TPR in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/03/how-to-integrate-rollingupdate-strategy/</link><pubDate>Tue, 13 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/03/how-to-integrate-rollingupdate-strategy/</guid><description>&lt;p&gt;With Kubernetes, it's easy to manage and scale stateless applications like web apps and API services right out of the box. To date, almost all of the talks about Kubernetes have been about microservices and stateless applications.&lt;/p&gt;
&lt;p&gt;With the popularity of container-based microservice architectures, there is a strong need to deploy and manage RDBMSs (Relational Database Management Systems). Running an RDBMS requires experienced, database-specific knowledge to correctly scale, upgrade, and re-configure it while protecting against data loss or unavailability.&lt;/p&gt;</description></item><item><title>Apache Spark 2.3 with Native Kubernetes Support</title><link>https://andygol-k8s.netlify.app/blog/2018/03/apache-spark-23-with-native-kubernetes/</link><pubDate>Tue, 06 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/03/apache-spark-23-with-native-kubernetes/</guid><description>&lt;h3 id="kubernetes-and-big-data"&gt;Kubernetes and Big Data&lt;/h3&gt;
&lt;p&gt;The open source community has been working over the past year to enable first-class support for data processing, data analytics and machine learning workloads in Kubernetes. New extensibility features in Kubernetes, such as &lt;a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/"&gt;custom resources&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers"&gt;custom controllers&lt;/a&gt;, can be used to create deep integrations with individual applications and frameworks.&lt;/p&gt;
&lt;p&gt;Traditionally, data processing workloads have been run in dedicated setups like the YARN/Hadoop stack. However, unifying the control plane for all workloads on Kubernetes simplifies cluster management and can improve resource utilization.&lt;/p&gt;</description></item><item><title>Kubernetes: First Beta Version of Kubernetes 1.10 is Here</title><link>https://andygol-k8s.netlify.app/blog/2018/03/first-beta-version-of-kubernetes-1-10/</link><pubDate>Fri, 02 Mar 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/03/first-beta-version-of-kubernetes-1-10/</guid><description>&lt;p&gt;The Kubernetes community has released the first beta version of Kubernetes 1.10, which means you can now try out some of the new features and give your feedback to the release team ahead of the official release. The release, currently scheduled for March 21, 2018, is targeting the inclusion of more than a dozen brand new alpha features and more mature versions of more than two dozen more.&lt;/p&gt;
&lt;p&gt;Specifically, Kubernetes 1.10 will include production-ready versions of Kubelet TLS Bootstrapping, API aggregation, and more detailed storage metrics.&lt;/p&gt;</description></item><item><title>Reporting Errors from Control Plane to Applications Using Kubernetes Events</title><link>https://andygol-k8s.netlify.app/blog/2018/01/reporting-errors-using-kubernetes-events/</link><pubDate>Thu, 25 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/reporting-errors-using-kubernetes-events/</guid><description>&lt;p&gt;At &lt;a href="https://www.box.com/"&gt;Box&lt;/a&gt;, we manage several large scale Kubernetes clusters that serve as an internal platform as a service (PaaS) for hundreds of deployed microservices. The majority of those microservices are applications that power box.com for over 80,000 customers. The PaaS team also deploys several services affiliated with the platform infrastructure as the &lt;em&gt;control plane&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;One use case of Box’s control plane is &lt;a href="https://en.wikipedia.org/wiki/Public_key_infrastructure"&gt;public key infrastructure&lt;/a&gt; (&lt;em&gt;PKI&lt;/em&gt;) processing. In our infrastructure, applications needing a new SSL certificate also need to trigger some processing in the control plane. The majority of our applications are not allowed to generate new SSL certificates due to security reasons. The control plane has a different security boundary and network access, and is therefore allowed to generate certificates.&lt;/p&gt;</description></item><item><title>Core Workloads API GA</title><link>https://andygol-k8s.netlify.app/blog/2018/01/core-workloads-api-ga/</link><pubDate>Mon, 15 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/core-workloads-api-ga/</guid><description>&lt;h2 id="daemonset-deployment-replicaset-and-statefulset-are-ga"&gt;DaemonSet, Deployment, ReplicaSet, and StatefulSet are GA&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Editor’s Note: We’re happy to announce that the Core Workloads API is GA in Kubernetes 1.9! This blog post from Kenneth Owens reviews how Core Workloads got to GA from its origins, reveals changes in 1.9, and talks about what you can expect going forward.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="in-the-beginning"&gt;In the Beginning …&lt;/h2&gt;
&lt;p&gt;There were &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod-overview/"&gt;Pods&lt;/a&gt;, tightly coupled containers that share resource requirements, networking, storage, and a lifecycle. Pods were useful, but, as it turns out, users wanted to seamlessly, reproducibly, and automatically create many identical replicas of the same Pod, so we created &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/replicationcontroller/"&gt;ReplicationController&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Introducing client-go version 6</title><link>https://andygol-k8s.netlify.app/blog/2018/01/introducing-client-go-version-6/</link><pubDate>Fri, 12 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/introducing-client-go-version-6/</guid><description>&lt;p&gt;The Kubernetes API server &lt;a href="https://blog.openshift.com/tag/api-server/"&gt;exposes a REST interface&lt;/a&gt; consumable by any client. &lt;a href="https://github.com/kubernetes/client-go"&gt;client-go&lt;/a&gt; is the official client library for the Go programming language. It is used both internally by Kubernetes itself (for example, inside kubectl) as well as by &lt;a href="https://github.com/search?q=k8s.io%2Fclient-go&amp;type=Code&amp;utf8=%E2%9C%93"&gt;numerous external consumers&lt;/a&gt;: operators like the &lt;a href="https://github.com/coreos/etcd-operator"&gt;etcd-operator&lt;/a&gt; or &lt;a href="https://github.com/coreos/prometheus-operator"&gt;prometheus-operator&lt;/a&gt;; higher-level frameworks like &lt;a href="https://github.com/kubeless/kubeless"&gt;KubeLess&lt;/a&gt; and &lt;a href="https://openshift.io/"&gt;OpenShift&lt;/a&gt;; and many more.&lt;/p&gt;
&lt;p&gt;The version 6 update to client-go adds support for Kubernetes 1.9, allowing access to the latest Kubernetes features. While the &lt;a href="https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md"&gt;changelog&lt;/a&gt; contains all the gory details, this blog post highlights the most prominent changes and offers guidance on upgrading from version 5.&lt;/p&gt;</description></item><item><title>Extensible Admission is Beta</title><link>https://andygol-k8s.netlify.app/blog/2018/01/extensible-admission-is-beta/</link><pubDate>Thu, 11 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/extensible-admission-is-beta/</guid><description>&lt;p&gt;In this post we review a feature, available in the Kubernetes API server, that allows you to implement arbitrary control decisions and which has matured considerably in Kubernetes 1.9.&lt;/p&gt;
&lt;p&gt;The admission stage of API server processing is one of the most powerful tools for securing a Kubernetes cluster by restricting the objects that can be created, but it has always been limited to compiled code. In 1.9, we promoted webhooks for admission to beta, allowing you to leverage admission from outside the API server process.&lt;/p&gt;</description></item><item><title> Introducing Container Storage Interface (CSI) Alpha for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2018/01/introducing-container-storage-interface/</link><pubDate>Wed, 10 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/introducing-container-storage-interface/</guid><description>&lt;p&gt;One of the key differentiators for Kubernetes has been a powerful &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/storage/volumes/"&gt;volume plugin system&lt;/a&gt; that enables many different types of storage systems to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Automatically create storage when required.&lt;/li&gt;
&lt;li&gt;Make storage available to containers wherever they’re scheduled.&lt;/li&gt;
&lt;li&gt;Automatically delete the storage when no longer needed.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Adding support for new storage systems to Kubernetes, however, has been challenging.&lt;/p&gt;
&lt;p&gt;Kubernetes 1.9 introduces an &lt;a href="https://github.com/kubernetes/features/issues/178"&gt;alpha implementation of the Container Storage Interface (CSI)&lt;/a&gt; which makes installing new volume plugins as easy as deploying a pod. It also enables third-party storage providers to develop solutions without the need to add to the core Kubernetes codebase.&lt;/p&gt;</description></item><item><title>Kubernetes v1.9 releases beta support for Windows Server Containers</title><link>https://andygol-k8s.netlify.app/blog/2018/01/kubernetes-v19-beta-windows-support/</link><pubDate>Tue, 09 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/kubernetes-v19-beta-windows-support/</guid><description>&lt;p&gt;&lt;em&gt;At the time of publication, Michael Michael was writing as SIG-Windows Lead.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;With the release of Kubernetes v1.9, our mission of ensuring Kubernetes works well everywhere and for everyone takes a great step forward. We’ve advanced support for Windows Server to beta along with continued feature and functional advancements on both the Kubernetes and Windows platforms. SIG-Windows has been working since March of 2016 to open the door for many Windows-specific applications and workloads to run on Kubernetes, significantly expanding the implementation scenarios and the enterprise reach of Kubernetes.&lt;/p&gt;</description></item><item><title> Five Days of Kubernetes 1.9</title><link>https://andygol-k8s.netlify.app/blog/2018/01/five-days-of-kubernetes-19/</link><pubDate>Mon, 08 Jan 2018 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2018/01/five-days-of-kubernetes-19/</guid><description>&lt;p&gt;Kubernetes 1.9 is live, made possible by hundreds of contributors pushing thousands of commits in this latest release.&lt;/p&gt;
&lt;p&gt;The community has tallied around 32,300 commits in the main repo and continues rapid growth outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 90,700 commits across all repos, including 7,800 commits from v1.8.0 to v1.9.0 alone.&lt;/p&gt;
&lt;p&gt;With the help of our growing community of 1,400-plus contributors, we issued more than 4,490 PRs and pushed more than 7,800 commits to deliver Kubernetes 1.9 with many notable updates, including enhancements for the workloads and stateful application support areas. All of this points to an increasingly extensible, standards-based Kubernetes ecosystem.&lt;/p&gt;</description></item><item><title> Introducing Kubeflow - A Composable, Portable, Scalable ML Stack Built for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/12/introducing-kubeflow-composable/</link><pubDate>Thu, 21 Dec 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/12/introducing-kubeflow-composable/</guid><description>&lt;h2 id="kubernetes-and-machine-learning"&gt;Kubernetes and Machine Learning&lt;/h2&gt;
&lt;p&gt;Kubernetes has quickly become the hybrid solution for deploying complicated workloads anywhere. While it started with just stateless services, customers have begun to move complex workloads to the platform, taking advantage of rich APIs, reliability and performance provided by Kubernetes. One of the fastest growing use cases is to use Kubernetes as the deployment platform of choice for machine learning.&lt;/p&gt;
&lt;p&gt;Building any production-ready machine learning system involves various components, often mixing vendors and hand-rolled solutions. Connecting and managing these services for even moderately sophisticated setups introduces huge barriers of complexity in adopting machine learning. Infrastructure engineers will often spend a significant amount of time manually tweaking deployments and hand rolling solutions before a single model can be tested.&lt;/p&gt;</description></item><item><title>Kubernetes 1.9: Apps Workloads GA and Expanded Ecosystem</title><link>https://andygol-k8s.netlify.app/blog/2017/12/kubernetes-19-workloads-expanded-ecosystem/</link><pubDate>Fri, 15 Dec 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/12/kubernetes-19-workloads-expanded-ecosystem/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.9, our fourth and final release this year.&lt;/p&gt;
&lt;p&gt;Today’s release continues the evolution of an increasingly rich feature set, more robust stability, and even greater community contributions. As the fourth release of the year, it gives us an opportunity to look back at the progress made in key areas. Particularly notable is the advancement of the Apps Workloads API to stable. This removes any reservations potential adopters might have had about the functional stability required to run mission-critical workloads. Another big milestone is the beta release of Windows support, which opens the door for many Windows-specific applications and workloads to run in Kubernetes, significantly expanding the implementation scenarios and enterprise readiness of Kubernetes.&lt;/p&gt;</description></item><item><title>Using eBPF in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/12/using-ebpf-in-kubernetes/</link><pubDate>Thu, 07 Dec 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/12/using-ebpf-in-kubernetes/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Kubernetes provides a high-level API and a set of components that hides almost all of the intricate and—to some of us—interesting details of what happens at the systems level. Application developers are not required to have knowledge of iptables, cgroups, namespaces, seccomp, or, nowadays, even the &lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes"&gt;container runtime&lt;/a&gt; that their application runs on top of. But underneath, Kubernetes and the technologies upon which it relies (for example, the container runtime) heavily leverage core Linux functionalities.&lt;/p&gt;</description></item><item><title> PaddlePaddle Fluid: Elastic Deep Learning on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/12/paddle-paddle-fluid-elastic-learning/</link><pubDate>Wed, 06 Dec 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/12/paddle-paddle-fluid-elastic-learning/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; Today's post is a joint post from the deep learning team at Baidu and the etcd team at CoreOS&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="paddlepaddle-fluid-elastic-deep-learning-on-kubernetes"&gt;PaddlePaddle Fluid: Elastic Deep Learning on Kubernetes&lt;/h2&gt;
&lt;p&gt;Two open source communities—PaddlePaddle, the deep learning framework originated in Baidu, and Kubernetes®, the most famous containerized application scheduler—are announcing the Elastic Deep Learning (EDL) feature in PaddlePaddle’s new release codenamed Fluid.&lt;/p&gt;
&lt;p&gt;Fluid EDL includes a &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/controllers.md"&gt;Kubernetes controller&lt;/a&gt;, &lt;a href="https://github.com/PaddlePaddle/cloud/tree/develop/doc/edl/experiment#auto-scaling-experiment"&gt;&lt;em&gt;PaddlePaddle auto-scaler&lt;/em&gt;&lt;/a&gt;, which changes the number of processes of distributed jobs according to the idle hardware resource in the cluster, and a new fault-tolerable architecture as described in the &lt;a href="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/cluster_train/README.md"&gt;PaddlePaddle design doc&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Autoscaling in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/11/autoscaling-in-kubernetes/</link><pubDate>Fri, 17 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/11/autoscaling-in-kubernetes/</guid><description>&lt;p&gt;Kubernetes allows developers to automatically adjust cluster sizes and the number of
pod replicas based on current traffic and load. These adjustments reduce the amount of
unused nodes, saving money and resources. In this talk, Marcin Wielgus of Google walks
you through the current state of pod and node autoscaling in Kubernetes: how it works,
and how to use it, including best practices for deployments in production applications.&lt;/p&gt;
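&lt;p&gt;As a quick taste of pod autoscaling (a minimal sketch with hypothetical names, not part of the talk itself), a HorizontalPodAutoscaler that keeps a Deployment between 2 and 10 replicas based on CPU utilization looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    kind: Deployment
    name: my-app              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The controller periodically compares observed CPU usage against the target and adjusts the replica count within the configured bounds.&lt;/p&gt;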
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"&gt;
 &lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/m3Ma3G14dJ0?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title=" Autoscaling in Kubernetes [I] - Marcin Wielgus, Google"&gt;&lt;/iframe&gt;
 &lt;/div&gt;

&lt;p&gt;Enjoyed this talk? Join us for more exciting sessions on scaling and automating your
Kubernetes clusters at KubeCon in Austin on December 6-8.
&lt;del&gt;&lt;a href="https://www.eventbrite.com/e/kubecon-cloudnativecon-north-america-registration-37824050754"&gt;Register now&lt;/a&gt;.&lt;/del&gt;&lt;/p&gt;</description></item><item><title> Certified Kubernetes Conformance Program: Launch Celebration Round Up</title><link>https://andygol-k8s.netlify.app/blog/2017/11/certified-kubernetes-conformance/</link><pubDate>Thu, 16 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/11/certified-kubernetes-conformance/</guid><description>&lt;p&gt;&lt;a href="https://1.bp.blogspot.com/-YasPeoIh8tA/Wg28rH4dzXI/AAAAAAAAAHg/Hfk2dnUoav4XMefGyjzMWdJMZbu1QJFagCK4BGAYYCw/s1600/certified_kubernetes_color.png"&gt;&lt;img src="https://1.bp.blogspot.com/-YasPeoIh8tA/Wg28rH4dzXI/AAAAAAAAAHg/Hfk2dnUoav4XMefGyjzMWdJMZbu1QJFagCK4BGAYYCw/s200/certified_kubernetes_color.png" alt=""&gt;&lt;/a&gt;This week the CNCFⓇ &lt;a href="https://www.cncf.io/announcement/2017/11/13/cloud-native-computing-foundation-launches-certified-kubernetes-program-32-conformant-distributions-platforms/"&gt;certified the first group&lt;/a&gt; of KubernetesⓇ offerings under the &lt;a href="https://www.cncf.io/certification/software-conformance/"&gt;Certified Kubernetes Conformance Program&lt;/a&gt;. These first certifications follow a &lt;a href="https://kubernetes.io/blog/2017/10/software-conformance-certification"&gt;beta phase&lt;/a&gt; during which we invited participants to submit conformance results. The community response was overwhelming: CNCF certified offerings from 32 vendors!&lt;/p&gt;
&lt;p&gt;The new Certified Kubernetes Conformance Program gives enterprise organizations the confidence that workloads running on any Certified Kubernetes distribution or platform will work correctly on other Certified Kubernetes distributions or platforms. A Certified Kubernetes product guarantees that the complete Kubernetes API functions as specified, so users can rely on a seamless, stable experience.&lt;/p&gt;</description></item><item><title> Kubernetes is Still Hard (for Developers)</title><link>https://andygol-k8s.netlify.app/blog/2017/11/kubernetes-is-still-hard-for-developers/</link><pubDate>Wed, 15 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/11/kubernetes-is-still-hard-for-developers/</guid><description>&lt;p&gt;Kubernetes has made the Ops experience much easier, but how does the developer experience compare? Ops teams can deploy a Kubernetes cluster in a matter of minutes. But developers need to understand a host of new concepts before beginning to work with Kubernetes. This can be a tedious and manual process, but it doesn’t have to be. In this talk, &lt;a href="https://twitter.com/michellenoorali"&gt;Michelle Noorali&lt;/a&gt;, co-lead of SIG-Apps, reimagines the Kubernetes developer experience. She shares her top 3 tips for building a successful developer experience including:&lt;/p&gt;</description></item><item><title> Securing Software Supply Chain with Grafeas</title><link>https://andygol-k8s.netlify.app/blog/2017/11/securing-software-supply-chain-grafeas/</link><pubDate>Fri, 03 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/11/securing-software-supply-chain-grafeas/</guid><description>&lt;p&gt;Kubernetes has evolved to support increasingly complex classes of applications, enabling the development of two major industry trends: hybrid cloud and microservices. 
With increasing complexity in production environments, customers—especially enterprises—are demanding better ways to manage their software supply chain with more centralized visibility and control over production deployments.&lt;/p&gt;
&lt;p&gt;On October 12th, Google and partners &lt;a href="https://cloudplatform.googleblog.com/2017/10/introducing-grafeas-open-source-api-.html"&gt;announced&lt;/a&gt; Grafeas, an open source initiative to define a best practice for auditing and governing the modern software supply chain. With Grafeas (“scribe” in Greek), developers can plug in components of the CI/CD pipeline into a central source of truth for tracking and enforcing policies. Google is also working on &lt;a href="https://github.com/Grafeas/Grafeas/blob/master/case-studies/binary-authorization.md"&gt;Kritis&lt;/a&gt; (“judge” in Greek), allowing devOps teams to enforce deploy-time image policy using metadata and attestations stored in Grafeas.&lt;/p&gt;</description></item><item><title> Containerd Brings More Container Runtime Options for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/11/containerd-container-runtime-options-kubernetes/</link><pubDate>Thu, 02 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/11/containerd-container-runtime-options-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;Update: Kubernetes support for Docker via &lt;code&gt;dockershim&lt;/code&gt; is now deprecated.
For more information, read the &lt;a href="https://andygol-k8s.netlify.app/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation"&gt;deprecation notice&lt;/a&gt;.
You can also discuss the deprecation via a dedicated &lt;a href="https://github.com/kubernetes/kubernetes/issues/106917"&gt;GitHub issue&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A &lt;em&gt;container runtime&lt;/em&gt; is software that executes containers and manages container images on a node. Today, the most widely known container runtime is &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, but there are other container runtimes in the ecosystem, such as &lt;a href="https://coreos.com/rkt/"&gt;rkt&lt;/a&gt;, &lt;a href="https://containerd.io/"&gt;containerd&lt;/a&gt;, and &lt;a href="https://linuxcontainers.org/lxd/"&gt;lxd&lt;/a&gt;. Docker is by far the most common container runtime used in production Kubernetes environments, but Docker’s smaller offspring, containerd, may prove to be a better option. This post describes using containerd with Kubernetes.&lt;/p&gt;</description></item><item><title> Kubernetes the Easy Way</title><link>https://andygol-k8s.netlify.app/blog/2017/11/kubernetes-easy-way/</link><pubDate>Wed, 01 Nov 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/11/kubernetes-easy-way/</guid><description>&lt;p&gt;Kelsey Hightower wrote an invaluable guide for Kubernetes called &lt;a href="https://github.com/kelseyhightower/kubernetes-the-hard-way"&gt;Kubernetes the Hard Way&lt;/a&gt;. It’s an awesome resource for those looking to understand the ins and outs of Kubernetes—but what if you want to put Kubernetes on easy mode? That’s something we’ve been working on together with Google Cloud. 
In this guide, we’ll show you how to get a cluster up and running, as well as how to actually deploy your code to that cluster and run it.&lt;/p&gt;</description></item><item><title> Enforcing Network Policies in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/10/enforcing-network-policies-in-kubernetes/</link><pubDate>Mon, 30 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/enforcing-network-policies-in-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2017/10/five-days-of-kubernetes-18"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.8.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes now offers functionality to enforce rules about which pods can communicate with each other using &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/network-policies/"&gt;network policies&lt;/a&gt;. This feature became stable in Kubernetes 1.7 and is ready to use with supported networking plugins. The Kubernetes 1.8 release adds further capabilities to this feature.&lt;/p&gt;
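&lt;p&gt;As a minimal illustration (a sketch with hypothetical labels, not taken from the post itself), a NetworkPolicy that allows pods labeled &lt;code&gt;app: db&lt;/code&gt; to receive traffic only from pods labeled &lt;code&gt;app: web&lt;/code&gt; might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db             # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web        # only these pods may connect
&lt;/code&gt;&lt;/pre&gt;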
&lt;h2 id="network-policy-what-does-it-mean"&gt;Network policy: What does it mean?&lt;/h2&gt;
&lt;p&gt;In a Kubernetes cluster configured with default settings, all pods can discover and communicate with each other without any restrictions. The new Kubernetes object type NetworkPolicy lets you allow and block traffic to pods.&lt;/p&gt;</description></item><item><title> Using RBAC, Generally Available in Kubernetes v1.8</title><link>https://andygol-k8s.netlify.app/blog/2017/10/using-rbac-generally-available-18/</link><pubDate>Sat, 28 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/using-rbac-generally-available-18/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2017/10/five-days-of-kubernetes-18"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.8.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes 1.8 represents a significant milestone for the &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/rbac/"&gt;role-based access control (RBAC) authorizer&lt;/a&gt;, which was promoted to GA in this release. RBAC is a mechanism for controlling access to the Kubernetes API, and since its &lt;a href="https://kubernetes.io/blog/2017/04/rbac-support-in-kubernetes"&gt;beta in 1.6&lt;/a&gt;, many Kubernetes clusters and provisioning strategies have enabled it by default.&lt;/p&gt;
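&lt;p&gt;For a flavor of what RBAC looks like in practice (a minimal sketch with hypothetical names), a Role granting read-only access to pods in one namespace, bound to a single user:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader            # hypothetical name
rules:
- apiGroups: [""]             # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods             # hypothetical name
subjects:
- kind: User
  name: jane                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;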
&lt;p&gt;Going forward, we expect to see RBAC become a fundamental building block for securing Kubernetes clusters. This post explores using RBAC to manage user and application access to the Kubernetes API.&lt;/p&gt;</description></item><item><title> It Takes a Village to Raise a Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/10/it-takes-village-to-raise-kubernetes/</link><pubDate>Thu, 26 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/it-takes-village-to-raise-kubernetes/</guid><description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2017/10/five-days-of-kubernetes-18"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.8, written by Jaice Singer DuMars from Microsoft.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Each time we release a new version of Kubernetes, it’s enthralling to see how the community responds to all of the hard work that went into it. Blogs on new or enhanced capabilities crop up all over the web like wildflowers in the spring. Talks, videos, webinars, and demos are not far behind. As soon as the community seems to take this all in, we turn around and add more to the mix. It’s a thrilling time to be a part of this project, and even more so, the movement. It’s not just software anymore.&lt;/p&gt;</description></item><item><title> kubeadm v1.8 Released: Introducing Easy Upgrades for Kubernetes Clusters</title><link>https://andygol-k8s.netlify.app/blog/2017/10/kubeadm-v18-released/</link><pubDate>Wed, 25 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/kubeadm-v18-released/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2017/10/five-days-of-kubernetes-18"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.8.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Since its debut in &lt;a href="https://kubernetes.io/blog/2016/09/how-we-made-kubernetes-easy-to-install"&gt;September 2016&lt;/a&gt;, the Cluster Lifecycle Special Interest Group (SIG) has established kubeadm as the easiest Kubernetes bootstrap method. Now, we’re releasing kubeadm v1.8.0 in tandem with the release of &lt;a href="https://kubernetes.io/blog/2017/09/kubernetes-18-security-workloads-and"&gt;Kubernetes v1.8.0&lt;/a&gt;. In this blog post, I’ll walk you through the changes we’ve made to kubeadm since the last update, the scope of kubeadm, and how you can contribute to this effort.&lt;/p&gt;</description></item><item><title> Five Days of Kubernetes 1.8</title><link>https://andygol-k8s.netlify.app/blog/2017/10/five-days-of-kubernetes-18/</link><pubDate>Tue, 24 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/five-days-of-kubernetes-18/</guid><description>&lt;p&gt;Kubernetes 1.8 is live, made possible by hundreds of contributors pushing thousands of commits in this latest release.&lt;/p&gt;
&lt;p&gt;The community has tallied more than 66,000 commits in the main repo and continues rapid growth outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 120,000 commits across all repos, including 17,839 commits between v1.7.0 and v1.8.0 alone.&lt;/p&gt;</description></item><item><title> Introducing Software Certification for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/10/software-conformance-certification/</link><pubDate>Thu, 19 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/software-conformance-certification/</guid><description>&lt;p&gt;Over the last three years, Kubernetes® has seen wide-scale adoption by a vibrant and diverse community of providers. In fact, there are now more than &lt;a href="https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4EbB_BGCxT_wlxW_xgHVVa23es/edit#gid=0"&gt;60&lt;/a&gt; known Kubernetes platforms and distributions. From the start, one goal of Kubernetes has been consistency and portability.&lt;/p&gt;
&lt;p&gt;In order to better serve this goal, today the Kubernetes community and the Cloud Native Computing Foundation® (CNCF®) announce the availability of the beta Certified Kubernetes Conformance Program. The Kubernetes conformance certification program gives users the confidence that when they use a Certified Kubernetes™ product, they can rely on a high level of common functionality. Certification provides Independent Software Vendors (ISVs) confidence that if their customer is using a Certified Kubernetes product, their software will behave as expected.&lt;/p&gt;</description></item><item><title> Request Routing and Policy Management with the Istio Service Mesh</title><link>https://andygol-k8s.netlify.app/blog/2017/10/request-routing-and-policy-management/</link><pubDate>Tue, 10 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/request-routing-and-policy-management/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; Today’s post is the second post in a three-part series on Istio.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In a &lt;a href="https://kubernetes.io/blog/2017/05/managing-microservices-with-istio-service-mesh"&gt;previous article&lt;/a&gt;, we looked at a &lt;a href="https://istio.io/docs/guides/bookinfo.html"&gt;simple application (Bookinfo)&lt;/a&gt; that is composed of four separate microservices. The article showed how to deploy an application with Kubernetes and an Istio-enabled cluster without changing any application code. The article also outlined how to view Istio provided L7 metrics on the running services.&lt;/p&gt;
&lt;p&gt;This article follows up by taking a deeper look at Istio using Bookinfo. Specifically, we’ll look at two more features of Istio: request routing and policy management.&lt;/p&gt;</description></item><item><title> Kubernetes Community Steering Committee Election Results</title><link>https://andygol-k8s.netlify.app/blog/2017/10/kubernetes-community-steering-committee-election-results/</link><pubDate>Thu, 05 Oct 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/10/kubernetes-community-steering-committee-election-results/</guid><description>&lt;p&gt;Beginning with the announcement of Kubernetes 1.0 at OSCON in 2015, there has been a concerted effort to share the power and burden of leadership across the Kubernetes community.&lt;/p&gt;
&lt;p&gt;With the work of the Bootstrap Governance Committee, consisting of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny and Tim Hockin - a cross section of long-time leaders representing 5 different companies with major investments of talent and effort in the Kubernetes Ecosystem - we wrote an initial &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md"&gt;Steering Committee Charter&lt;/a&gt; and launched a community wide election to seat a Kubernetes Steering Committee.&lt;/p&gt;</description></item><item><title> Kubernetes 1.8: Security, Workloads and Feature Depth</title><link>https://andygol-k8s.netlify.app/blog/2017/09/kubernetes-18-security-workloads-and/</link><pubDate>Fri, 29 Sep 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/09/kubernetes-18-security-workloads-and/</guid><description>&lt;p&gt;We’re pleased to announce the delivery of Kubernetes 1.8, our third release this year. Kubernetes 1.8 represents a snapshot of many exciting enhancements and refinements underway. In addition to functional improvements, we’re increasing project-wide focus on maturing &lt;a href="https://github.com/kubernetes/sig-release"&gt;process&lt;/a&gt;, formalizing &lt;a href="https://github.com/kubernetes/community/tree/master/sig-architecture"&gt;architecture&lt;/a&gt;, and strengthening Kubernetes’ &lt;a href="https://github.com/kubernetes/community/tree/master/community/elections/2017"&gt;governance model&lt;/a&gt;. 
The evolution of mature processes clearly signals that sustainability is a driving concern, and helps to ensure that Kubernetes is a viable and thriving project far into the future.&lt;/p&gt;</description></item><item><title> Kubernetes StatefulSets &amp; DaemonSets Updates</title><link>https://andygol-k8s.netlify.app/blog/2017/09/kubernetes-statefulsets-daemonsets/</link><pubDate>Wed, 27 Sep 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/09/kubernetes-statefulsets-daemonsets/</guid><description>&lt;p&gt;This post talks about recent updates to the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/daemonset/"&gt;DaemonSet&lt;/a&gt; and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt; API objects for Kubernetes. We explore these features using &lt;a href="https://zookeeper.apache.org/"&gt;Apache ZooKeeper&lt;/a&gt; and &lt;a href="https://kafka.apache.org/"&gt;Apache Kafka&lt;/a&gt; StatefulSets and a &lt;a href="https://github.com/prometheus/node_exporter"&gt;Prometheus node exporter&lt;/a&gt; DaemonSet.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.6, we added the &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/manage-daemon/update-daemon-set/"&gt;RollingUpdate&lt;/a&gt; update strategy to the DaemonSet API Object. Configuring your DaemonSets with the RollingUpdate strategy causes the DaemonSet controller to perform automated rolling updates to the Pods in your DaemonSets when their &lt;code&gt;spec.template&lt;/code&gt; is updated.&lt;/p&gt;
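&lt;p&gt;Opting into the strategy is a small addition to the DaemonSet spec. A minimal sketch (hypothetical name, API version as of Kubernetes 1.8; selector and Pod template omitted for brevity):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: node-exporter        # hypothetical name
spec:
  updateStrategy:
    type: RollingUpdate      # instead of the default OnDelete
    rollingUpdate:
      maxUnavailable: 1      # at most one Pod down during the rollout
  # selector and template omitted for brevity
&lt;/code&gt;&lt;/pre&gt;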
&lt;p&gt;In Kubernetes 1.7, we enhanced the DaemonSet controller to track a history of revisions to the PodTemplateSpecs of DaemonSets. This allows the DaemonSet controller to roll back an update. We also added the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/#update-strategies"&gt;RollingUpdate&lt;/a&gt; strategy to the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt; API Object, and implemented revision history tracking for the StatefulSet controller. Additionally, we added the &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management"&gt;Parallel&lt;/a&gt; pod management policy to support stateful applications that require Pods with unique identities but not ordered Pod creation and termination.&lt;/p&gt;</description></item><item><title> Introducing the Resource Management Working Group</title><link>https://andygol-k8s.netlify.app/blog/2017/09/introducing-resource-management-working/</link><pubDate>Thu, 21 Sep 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/09/introducing-resource-management-working/</guid><description>&lt;h2 id="why-are-we-here"&gt;Why are we here?&lt;/h2&gt;
&lt;p&gt;Kubernetes has evolved to support diverse and increasingly complex classes of applications. We can onboard and scale out modern, cloud-native web applications based on microservices, batch jobs, and stateful applications with persistent storage requirements.&lt;/p&gt;
&lt;p&gt;However, there are still opportunities to improve Kubernetes; for example, the ability to run workloads that require specialized hardware or those that perform measurably better when hardware topology is taken into account. These gaps can make it difficult for application classes (particularly in established verticals) to adopt Kubernetes.&lt;/p&gt;</description></item><item><title> Windows Networking at Parity with Linux for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/09/windows-networking-at-parity-with-linux/</link><pubDate>Fri, 08 Sep 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/09/windows-networking-at-parity-with-linux/</guid><description>&lt;p&gt;Since I last blogged about &lt;a href="https://blogs.technet.microsoft.com/networking/2017/04/04/windows-networking-for-kubernetes/"&gt;Kubernetes Networking for Windows&lt;/a&gt; four months ago, the Windows Core Networking team has made tremendous progress in both the platform and open source Kubernetes projects. With the updates, Windows is now on par with Linux in terms of networking. Customers can now deploy mixed-OS, Kubernetes clusters in any environment including Azure, on-premises, and on 3rd-party cloud stacks with the same network primitives and topologies supported on Linux without any workarounds, “hacks”, or 3rd-party switch extensions.&lt;/p&gt;
&lt;p&gt;In this post, I discuss some of the challenges of running HPC workloads with Kubernetes, explain how organizations approach these challenges today, and suggest an approach for supporting mixed workloads on a shared Kubernetes cluster. We will also provide information and links to a case study on a customer, IHME, showing how Kubernetes is extended to service their HPC workloads seamlessly while retaining scalability and interfaces familiar to HPC users.&lt;/p&gt;</description></item><item><title> High Performance Networking with EC2 Virtual Private Clouds</title><link>https://andygol-k8s.netlify.app/blog/2017/08/high-performance-networking-with-ec2/</link><pubDate>Fri, 11 Aug 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/08/high-performance-networking-with-ec2/</guid><description>&lt;p&gt;One of the most popular platforms for running Kubernetes is Amazon Web Services’ Elastic Compute Cloud (AWS EC2). With more than a decade of experience delivering IaaS, and expanding over time to include a rich set of services with easy to consume APIs, EC2 has captured developer mindshare and loyalty worldwide.&lt;/p&gt;
&lt;p&gt;When it comes to networking, however, EC2 has some limits that hinder performance and make deploying Kubernetes clusters to production unnecessarily complex. The preview release of Romana v2.0, a network and security automation solution for Cloud Native applications, includes features that address some well known network issues when running Kubernetes in EC2.&lt;/p&gt;</description></item><item><title> Kompose Helps Developers Move Docker Compose Files to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/08/kompose-helps-developers-move-docker/</link><pubDate>Thu, 10 Aug 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/08/kompose-helps-developers-move-docker/</guid><description>&lt;p&gt;I'm pleased to announce that &lt;a href="https://github.com/kubernetes/kompose"&gt;Kompose&lt;/a&gt;, a conversion tool for developers to transition Docker Compose applications to Kubernetes, has graduated from the &lt;a href="https://github.com/kubernetes/community/blob/master/incubator.md"&gt;Kubernetes Incubator&lt;/a&gt; to become an official part of the project.&lt;/p&gt;
&lt;p&gt;Since our first commit on June 27, 2016, Kompose has achieved 13 releases over 851 commits, gaining 21 contributors since the inception of the project. Our work started at Skippbox (now part of &lt;a href="https://bitnami.com/"&gt;Bitnami&lt;/a&gt;) and grew through contributions from Google and Red Hat.&lt;/p&gt;</description></item><item><title> Happy Second Birthday: A Kubernetes Retrospective</title><link>https://andygol-k8s.netlify.app/blog/2017/07/happy-second-birthday-kubernetes/</link><pubDate>Fri, 28 Jul 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/07/happy-second-birthday-kubernetes/</guid><description>&lt;p&gt;As we do every July, we’re excited to celebrate Kubernetes 2nd birthday! In the two years since GA 1.0 launched as an open source project, &lt;a href="https://andygol-k8s.netlify.app/docs/whatisk8s/"&gt;Kubernetes&lt;/a&gt; (abbreviated as K8s) has grown to become the highest velocity cloud-related project. With more than 2,611 diverse contributors, from independents to leading global companies, the project has had 50,685 commits in the last 12 months. Of the 54 million projects on GitHub, Kubernetes is in the top 5 for number of unique developers contributing code. It also has &lt;a href="https://www.cncf.io/blog/2017/02/27/measuring-popularity-kubernetes-using-bigquery/"&gt;more pull requests and issue comments&lt;/a&gt; than any other project on GitHub.  &lt;/p&gt;</description></item><item><title> How Watson Health Cloud Deploys Applications with Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/07/how-watson-health-cloud-deploys/</link><pubDate>Fri, 14 Jul 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/07/how-watson-health-cloud-deploys/</guid><description>&lt;p&gt;Today’s post is by &lt;a href="https://www.linkedin.com/in/sandhyakapoor/"&gt;Sandhya Kapoor&lt;/a&gt;, Senior Technologist, Watson Platform for Health, IBM&lt;/p&gt;
&lt;p&gt;For more than a year, Watson Platform for Health at IBM deployed healthcare applications in virtual machines on our cloud platform. Because virtual machines had been a costly, heavyweight solution for us, we were interested to evaluate Kubernetes for our deployments.&lt;/p&gt;
&lt;p&gt;Our design was to set up the application and data containers in the same namespace, along with the required agents using sidecars, to meet security and compliance requirements in the healthcare industry.&lt;/p&gt;</description></item><item><title> Kubernetes 1.7: Security Hardening, Stateful Application Updates and Extensibility</title><link>https://andygol-k8s.netlify.app/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/</link><pubDate>Fri, 30 Jun 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/</guid><description>&lt;p&gt;&lt;em&gt;This article is by Aparna Sinha and Ihor Dvoretskyi, on behalf of the Kubernetes 1.7 release team.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Today we’re announcing Kubernetes 1.7, a milestone release that adds security, storage and extensibility features motivated by widespread production use of Kubernetes in the most demanding enterprise environments.&lt;/p&gt;
&lt;p&gt;At a glance, security enhancements in this release include encrypted secrets, network policy for pod-to-pod communication, a node authorizer to limit kubelet access, and client/server TLS certificate rotation.&lt;/p&gt;</description></item><item><title> Managing microservices with the Istio service mesh</title><link>https://andygol-k8s.netlify.app/blog/2017/05/managing-microservices-with-istio-service-mesh/</link><pubDate>Wed, 31 May 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/05/managing-microservices-with-istio-service-mesh/</guid><description>&lt;p&gt;&lt;em&gt;Today’s post is by the Istio team showing how you can get visibility, resiliency, security and control for your microservices in Kubernetes.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Services are at the core of modern software architecture. Deploying a series of modular, small (micro-)services rather than big monoliths gives developers the flexibility to work in different languages, technologies and release cadences across the system, resulting in higher productivity and velocity, especially for larger teams.&lt;/p&gt;
&lt;p&gt;With the adoption of microservices, however, new problems emerge due to the sheer number of services that exist in a larger system. Problems that had to be solved once for a monolith, like security, load balancing, monitoring, and rate limiting, need to be handled for each service.&lt;/p&gt;</description></item><item><title> Draft: Kubernetes container development made easy</title><link>https://andygol-k8s.netlify.app/blog/2017/05/draft-kubernetes-container-development/</link><pubDate>Wed, 31 May 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/05/draft-kubernetes-container-development/</guid><description>&lt;p&gt;About a month ago, Microsoft announced the acquisition of Deis to expand our expertise in containers and Kubernetes. Today, I’m excited to announce a new open source project derived from this newly expanded Azure team: Draft.&lt;/p&gt;
&lt;p&gt;While by now the strengths of Kubernetes for deploying and managing applications at scale are well understood, the process of developing a new application for Kubernetes is still too hard. It’s harder still if you are new to containers, Kubernetes, or developing cloud applications.&lt;/p&gt;</description></item><item><title> Kubernetes: a monitoring guide</title><link>https://andygol-k8s.netlify.app/blog/2017/05/kubernetes-monitoring-guide/</link><pubDate>Fri, 19 May 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/05/kubernetes-monitoring-guide/</guid><description>&lt;p&gt;Container technologies are taking the infrastructure world by storm. While containers solve or simplify infrastructure management processes, they also introduce significant complexity in terms of orchestration. That’s where Kubernetes comes to our rescue. Just like a conductor directs an orchestra, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/overview/what-is-kubernetes/"&gt;Kubernetes&lt;/a&gt; oversees our ensemble of containers—starting, stopping, creating, and destroying them automatically to keep our applications humming along.&lt;/p&gt;
&lt;p&gt;Kubernetes makes managing a containerized infrastructure much easier by creating levels of abstraction such as &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/pod/"&gt;pods&lt;/a&gt; and &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/service/"&gt;services&lt;/a&gt;. We no longer have to worry about where applications are running or if they have enough resources to work properly. But that doesn’t change the fact that, in order to ensure good performance, we need to monitor our applications, the containers running them, and Kubernetes itself.&lt;/p&gt;</description></item><item><title> Kubespray Ansible Playbooks foster Collaborative Kubernetes Ops</title><link>https://andygol-k8s.netlify.app/blog/2017/05/kubespray-ansible-collaborative-kubernetes-ops/</link><pubDate>Fri, 19 May 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/05/kubespray-ansible-collaborative-kubernetes-ops/</guid><description>&lt;p&gt;&lt;strong&gt;Why Kubespray?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Making Kubernetes operationally strong is a widely held priority, and I track many deployment efforts around the project. The &lt;a href="https://github.com/kubernetes-incubator/kubespray"&gt;incubated Kubespray project&lt;/a&gt; is of particular interest to me because it uses the popular Ansible toolset to build robust, upgradable clusters on both cloud and physical targets. I believe using tools familiar to operators grows our community.&lt;/p&gt;
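To illustrate the Ansible-driven workflow described above, here is a minimal Kubespray-style inventory sketch. The host names and addresses are hypothetical; the group names follow the project's documented conventions, but consult the repository for the exact layout your version expects:

```yaml
# Hypothetical three-node Kubespray inventory (YAML form).
all:
  hosts:
    node1: {ansible_host: 10.0.0.11}
    node2: {ansible_host: 10.0.0.12}
    node3: {ansible_host: 10.0.0.13}
  children:
    kube-master:      # control-plane node(s)
      hosts:
        node1:
    etcd:             # etcd cluster members
      hosts:
        node1:
    kube-node:        # worker nodes
      hosts:
        node2:
        node3:
```

With an inventory like this, the cluster is built by pointing `ansible-playbook` at the project's top-level `cluster.yml` playbook, and later upgraded the same way, which is what makes the resulting clusters "robust, upgradable" in practice.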
&lt;p&gt;We’re excited to see the breadth of platforms enabled by Kubespray and how well it handles a wide range of options like integrating Ceph for &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt; persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the &lt;a href="https://github.com/att-comdev/openstack-helm"&gt;OpenStack Helm charts&lt;/a&gt; (&lt;a href="https://www.youtube.com/watch?v=wZ0vMrdx4a4&amp;list=PLXPBeIrpXjfjabMbwYyDULOX3kZmlxEXK&amp;index=2"&gt;demo video&lt;/a&gt;).&lt;/p&gt;</description></item><item><title> Dancing at the Lip of a Volcano: The Kubernetes Security Process - Explained</title><link>https://andygol-k8s.netlify.app/blog/2017/05/kubernetes-security-process-explained/</link><pubDate>Thu, 18 May 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/05/kubernetes-security-process-explained/</guid><description>&lt;p&gt;Software running on servers underpins ever growing amounts of the world's commerce, communications, and physical infrastructure. And nearly all of these systems are connected to the internet; which means vital security updates must be applied rapidly. As software developers and IT professionals, we often find ourselves dancing on the edge of a volcano: we may either fall into magma induced oblivion from a security vulnerability exploited before we can fix it, or we may slide off the side of the mountain because of an inadequate process to address security vulnerabilities. 
&lt;/p&gt;</description></item><item><title> How Bitmovin is Doing Multi-Stage Canary Deployments with Kubernetes in the Cloud and On-Prem</title><link>https://andygol-k8s.netlify.app/blog/2017/04/multi-stage-canary-deployments-with-kubernetes-in-the-cloud-onprem/</link><pubDate>Fri, 21 Apr 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/04/multi-stage-canary-deployments-with-kubernetes-in-the-cloud-onprem/</guid><description>&lt;p&gt;Running a large scale video encoding infrastructure on multiple public clouds is tough. At &lt;a href="http://bitmovin.com/"&gt;Bitmovin&lt;/a&gt;, we have been doing it successfully for the last few years, but from an engineering perspective, it’s neither been enjoyable nor particularly fun.&lt;/p&gt;
&lt;p&gt;So obviously, one of the main things that really sold us on using Kubernetes was its common abstraction from the different supported cloud providers and the well-thought-out programming interface it provides. More importantly, the Kubernetes project did not settle for the lowest-common-denominator approach. Instead, they added the abstract concepts that are required and useful to run containerized workloads in a cloud, and then did all the hard work to map these concepts to the different cloud providers and their offerings.&lt;/p&gt;
&lt;p&gt;One of the highlights of the &lt;a href="https://kubernetes.io/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale"&gt;Kubernetes 1.6&lt;/a&gt; release is the RBAC authorizer feature moving to &lt;em&gt;beta&lt;/em&gt;. RBAC, Role-based access control, is an authorization mechanism for managing permissions around Kubernetes resources. RBAC allows configuration of flexible authorization policies that can be updated without cluster restarts.&lt;/p&gt;
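As a minimal sketch of what such a policy looks like under the 1.6 beta API (all names here are illustrative, not taken from this post), a Role granting read access to pods, bound to a single user:

```yaml
# Illustrative RBAC objects (rbac.authorization.k8s.io/v1beta1, Kubernetes 1.6).
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]        # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane             # hypothetical user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because Roles and RoleBindings are ordinary API objects, they can be created and edited with `kubectl` at any time, which is what "updated without cluster restarts" means in practice.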
&lt;p&gt;The focus of this post is to highlight some of the interesting new capabilities and best practices.&lt;/p&gt;</description></item><item><title> Configuring Private DNS Zones and Upstream Nameservers in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/</link><pubDate>Tue, 04 Apr 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2017/03/five-days-of-kubernetes-1-6"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.6&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Many users have existing domain name zones that they would like to integrate into their Kubernetes DNS namespace. For example, hybrid-cloud users may want to resolve their internal “.corp” domain addresses within the cluster. Other users may have a zone populated by a non-Kubernetes service discovery system (like Consul). We’re pleased to announce that, in &lt;a href="https://kubernetes.io/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale"&gt;Kubernetes 1.6&lt;/a&gt;, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/dns-pod-service/"&gt;kube-dns&lt;/a&gt; adds support for configurable private DNS zones (often called “stub domains”) and external upstream DNS nameservers. In this blog post, we describe how to configure and use this feature.&lt;/p&gt;</description></item><item><title> Advanced Scheduling in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/03/advanced-scheduling-in-kubernetes/</link><pubDate>Fri, 31 Mar 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/03/advanced-scheduling-in-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2017/03/five-days-of-kubernetes-1-6"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.6&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes scheduler’s default behavior works well for most cases -- for example, it ensures that pods are only placed on nodes that have sufficient free resources, it tries to spread pods from the same set (&lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/replicasets/"&gt;ReplicaSet&lt;/a&gt;, &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/workloads/controllers/statefulset/"&gt;StatefulSet&lt;/a&gt;, etc.) across nodes, it tries to balance out the resource utilization of nodes, etc.&lt;/p&gt;
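Beyond these defaults, placement can also be constrained explicitly. As a sketch, a pod restricted via node affinity to nodes carrying a hypothetical `disktype=ssd` label (the label and pod name are illustrative; the field-based syntax shown is the form introduced in 1.6):

```yaml
# Sketch: require scheduling onto nodes labeled disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # hypothetical node label
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx               # placeholder image
```

The `required...` rule is a hard constraint; a parallel `preferred...` form expresses soft preferences the scheduler will try, but is not obliged, to honor.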
&lt;p&gt;But sometimes you want to control how your pods are scheduled. For example, perhaps you want to ensure that certain pods only schedule on nodes with specialized hardware, or you want to co-locate services that communicate frequently, or you want to dedicate a set of nodes to a particular set of users. Ultimately, you know much more about how your applications should be scheduled and deployed than Kubernetes ever will. So &lt;strong&gt;&lt;a href="https://kubernetes.io/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale"&gt;Kubernetes 1.6&lt;/a&gt; offers four advanced scheduling features: node affinity/anti-affinity, taints and tolerations, pod affinity/anti-affinity, and custom schedulers&lt;/strong&gt;. Each of these features is now in &lt;em&gt;beta&lt;/em&gt; in Kubernetes 1.6.&lt;/p&gt;</description></item><item><title> Scalability updates in Kubernetes 1.6: 5,000 node and 150,000 pod clusters</title><link>https://andygol-k8s.netlify.app/blog/2017/03/scalability-updates-in-kubernetes-1-6/</link><pubDate>Thu, 30 Mar 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/03/scalability-updates-in-kubernetes-1-6/</guid><description>&lt;p&gt;&lt;em&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2017/03/five-days-of-kubernetes-1-6"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.6&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Last summer we &lt;a href="https://kubernetes.io/blog/2016/07/update-on-kubernetes-for-windows-server-containers/"&gt;shared&lt;/a&gt; updates on Kubernetes scalability; since then we’ve been working hard and are proud to announce that &lt;a href="https://kubernetes.io/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale"&gt;Kubernetes 1.6&lt;/a&gt; can handle 5,000-node clusters with up to 150,000 pods. Moreover, those clusters have even better end-to-end pod startup time than the previous 2,000-node clusters in the 1.3 release, and the latency of API calls is within the one-second SLO.&lt;/p&gt;</description></item><item><title> Dynamic Provisioning and Storage Classes in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/</link><pubDate>Wed, 29 Mar 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;Editor’s note: this post is part of a &lt;a href="https://kubernetes.io/blog/2017/03/five-days-of-kubernetes-1-6"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.6&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Before dynamic provisioning, cluster administrators had to manually make calls to their cloud or storage provider to provision new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. With dynamic provisioning, these two steps are automated, eliminating the need for cluster administrators to pre-provision storage. Instead, the storage resources can be dynamically provisioned using the provisioner specified by the StorageClass object (see &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/persistent-volumes/index#storageclasses"&gt;user-guide&lt;/a&gt;). StorageClasses are essentially blueprints that abstract away the underlying storage provider, as well as other parameters, like disk type (e.g., solid-state vs. standard disks).&lt;/p&gt;</description></item><item><title> Five Days of Kubernetes 1.6</title><link>https://andygol-k8s.netlify.app/blog/2017/03/five-days-of-kubernetes-1-6/</link><pubDate>Wed, 29 Mar 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/03/five-days-of-kubernetes-1-6/</guid><description>&lt;p&gt;With the help of our growing community of 1,110 plus contributors, we pushed around 5,000 commits to deliver &lt;a href="https://kubernetes.io/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale"&gt;Kubernetes 1.6&lt;/a&gt;, bringing focus on multi-user, multi-workloads at scale. While many improvements have been contributed, we selected a few features to highlight in a series of in-depth posts listed below.&lt;/p&gt;
&lt;p&gt;Follow along and read what’s new:&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;th&gt;Five Days of Kubernetes&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 1&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes"&gt;Dynamic Provisioning and Storage Classes in Kubernetes Stable in 1.6&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 2&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2017/03/scalability-updates-in-kubernetes-1-6/"&gt;Scalability updates in Kubernetes 1.6&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 3&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes"&gt;Advanced Scheduling in Kubernetes 1.6&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 4&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes"&gt;Configuring Private DNS Zones and Upstream Nameservers in Kubernetes&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 5&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2017/04/rbac-support-in-kubernetes"&gt;RBAC support in Kubernetes&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Connect&lt;/strong&gt;&lt;/p&gt;</description></item><item><title> Kubernetes 1.6: Multi-user, Multi-workloads at Scale</title><link>https://andygol-k8s.netlify.app/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale/</link><pubDate>Tue, 28 Mar 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/03/kubernetes-1-6-multi-user-multi-workloads-at-scale/</guid><description>&lt;p&gt;&lt;em&gt;This article is by Aparna Sinha on behalf of the Kubernetes 1.6 release team.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Today we’re announcing the release of Kubernetes 1.6.&lt;/p&gt;
&lt;p&gt;In this release the community’s focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000 node clusters are supported. We moved dynamic storage provisioning to &lt;em&gt;stable&lt;/em&gt;. Role-based access control (&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/rbac/"&gt;RBAC&lt;/a&gt;), &lt;a href="https://andygol-k8s.netlify.app/docs/tutorials/federation/set-up-cluster-federation-kubefed/"&gt;kubefed&lt;/a&gt;, &lt;a href="https://andygol-k8s.netlify.app/docs/getting-started-guides/kubeadm/"&gt;kubeadm&lt;/a&gt;, and several scheduling features are moving to &lt;em&gt;beta&lt;/em&gt;. We have also added intelligent defaults throughout to enable greater automation out of the box.&lt;/p&gt;</description></item><item><title> The K8sPort: Engaging Kubernetes Community One Activity at a Time</title><link>https://andygol-k8s.netlify.app/blog/2017/03/k8sport-engaging-the-kubernetes-community/</link><pubDate>Fri, 24 Mar 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/03/k8sport-engaging-the-kubernetes-community/</guid><description>&lt;p&gt;The &lt;a href="http://k8sport.org/"&gt;&lt;strong&gt;K8sPort&lt;/strong&gt;&lt;/a&gt; is a hub designed to help you, the Kubernetes community, earn credit for the hard work you’re putting forth in making this one of the most successful open source projects ever. Back at KubeCon Seattle in November, I &lt;a href="https://youtu.be/LwViH5eLoOI"&gt;presented&lt;/a&gt; a lightning talk of a preview of K8sPort.&lt;/p&gt;
&lt;p&gt;This hub, and our intentions in helping to drive this initiative in the community, grew out of a desire to help cultivate an engaged community of Kubernetes advocates. This is done through gamification in a community hub full of different activities called “challenges,” meant to direct members of the community to attend various events and meetings, share and provide feedback on important content, answer questions posed on sites like Stack Overflow, and more.&lt;/p&gt;
&lt;p&gt;In an earlier &lt;a href="https://kubernetes.io/blog/2016/09/creating-postgresql-cluster-using-helm"&gt;post&lt;/a&gt;, I described how to deploy a PostgreSQL cluster using &lt;a href="https://github.com/kubernetes/helm"&gt;Helm&lt;/a&gt;, a Kubernetes package manager. The following example provides the steps for building a PostgreSQL cluster using the new Kubernetes &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/abstractions/controllers/statefulsets/"&gt;StatefulSets&lt;/a&gt; feature.&lt;/p&gt;
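For orientation before the step-by-step example, this is the general shape of a StatefulSet definition, heavily trimmed. The name, image, and replica count below are illustrative sketches, not the post's actual manifests:

```yaml
# Trimmed StatefulSet sketch (apps/v1beta1 is the API group in Kubernetes 1.5).
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: pgset
spec:
  serviceName: pgset        # headless Service providing stable network identities
  replicas: 2
  template:
    metadata:
      labels:
        app: pgset
    spec:
      containers:
      - name: pgset
        image: crunchydata/crunchy-postgres   # hypothetical image reference
        ports:
        - containerPort: 5432
```

The `serviceName` field references a headless Service that gives each replica a stable, ordered network identity (pgset-0, pgset-1, ...), which is what lets PostgreSQL primaries and replicas find each other predictably.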
&lt;p&gt;&lt;strong&gt;StatefulSets Example&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt; - Create Kubernetes Environment&lt;/p&gt;
&lt;p&gt;StatefulSets is a new feature implemented in &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5&lt;/a&gt; (in prior versions it was known as PetSets). As a result, running this example requires an environment based on Kubernetes 1.5.0 or above.&lt;/p&gt;</description></item><item><title> Containers as a Service, the foundation for next generation PaaS</title><link>https://andygol-k8s.netlify.app/blog/2017/02/caas-the-foundation-for-next-gen-paas/</link><pubDate>Tue, 21 Feb 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/02/caas-the-foundation-for-next-gen-paas/</guid><description>&lt;p&gt;Containers are revolutionizing the way that people build, package and deploy software. But what is often overlooked is how they are revolutionizing the way that people build the software that builds, packages and deploys software. (It’s ok if you have to read that sentence twice…) Today, and in a talk at &lt;a href="https://tmt.knect365.com/container-world/"&gt;Container World&lt;/a&gt; tomorrow, I’m taking a look at how container orchestrators like Kubernetes form the foundation for next generation platform as a service (PaaS). 
In particular, I’m interested in how cloud container as a service (CaaS) platforms like &lt;a href="https://azure.microsoft.com/en-us/services/container-service/"&gt;Azure Container Service&lt;/a&gt;, &lt;a href="https://cloud.google.com/container-engine/"&gt;Google Container Engine&lt;/a&gt; and &lt;a href="https://andygol-k8s.netlify.app/docs/getting-started-guides/#hosted-solutions"&gt;others&lt;/a&gt; are becoming the new infrastructure layer that PaaS is built upon.&lt;/p&gt;</description></item><item><title> Inside JD.com's Shift to Kubernetes from OpenStack</title><link>https://andygol-k8s.netlify.app/blog/2017/02/inside-jd-com-shift-to-kubernetes-from-openstack/</link><pubDate>Fri, 10 Feb 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/02/inside-jd-com-shift-to-kubernetes-from-openstack/</guid><description>&lt;p&gt;&lt;em&gt;Editor's note: Today’s post is by the Infrastructure Platform Department team at JD.com about their transition from OpenStack to Kubernetes. JD.com is one of China’s largest companies and the first Chinese Internet company to make the Global Fortune 500 list.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://upload.wikimedia.org/wikipedia/en/7/79/JD_logo.png"&gt;&lt;img src="https://upload.wikimedia.org/wikipedia/en/7/79/JD_logo.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;History of cluster building&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The era of physical machines (2004-2014)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before 2014, our company's applications were all deployed on physical machines. In the age of physical machines, we needed to wait an average of one week for resources to be allocated before an application could come online. Due to the lack of isolation, applications would affect each other, resulting in a lot of potential risks. At that time, the average number of Tomcat instances on each physical machine was no more than nine. Physical machine resources were seriously wasted and scheduling was inflexible. When a physical machine broke down, migrating its applications took hours, and auto-scaling could not be achieved. To enhance the efficiency of application deployment, we developed compilation-packaging, automatic deployment, log collection, resource monitoring and some other systems.&lt;/p&gt;</description></item><item><title> Run Deep Learning with PaddlePaddle on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/02/run-deep-learning-with-paddlepaddle-on-kubernetes/</link><pubDate>Wed, 08 Feb 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/02/run-deep-learning-with-paddlepaddle-on-kubernetes/</guid><description>&lt;p&gt;&lt;strong&gt;&lt;a href="https://3.bp.blogspot.com/-Mwn3FU9hffI/WJk8QBxA6SI/AAAAAAAAA8w/AS5QoMdPTN8bL9jnixlsCXzj1IfYerhRQCLcB/s1600/baidu_research_logo_rgb.png"&gt;&lt;img src="https://3.bp.blogspot.com/-Mwn3FU9hffI/WJk8QBxA6SI/AAAAAAAAA8w/AS5QoMdPTN8bL9jnixlsCXzj1IfYerhRQCLcB/s200/baidu_research_logo_rgb.png" alt=""&gt;&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is PaddlePaddle&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;PaddlePaddle is an easy-to-use, efficient, flexible and scalable deep learning platform, originally developed at Baidu and applied to Baidu products since 2014.&lt;/p&gt;
&lt;p&gt;There have been more than 50 innovations created using PaddlePaddle, supporting 15 Baidu products ranging from the search engine and online advertising to Q&amp;amp;A and system security.&lt;/p&gt;
&lt;p&gt;In September 2016, Baidu open sourced &lt;a href="https://github.com/PaddlePaddle/Paddle"&gt;PaddlePaddle&lt;/a&gt;, and it soon attracted many contributors from outside of Baidu.&lt;/p&gt;</description></item><item><title> Highly Available Kubernetes Clusters</title><link>https://andygol-k8s.netlify.app/blog/2017/02/highly-available-kubernetes-clusters/</link><pubDate>Thu, 02 Feb 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/02/highly-available-kubernetes-clusters/</guid><description>&lt;p&gt;Today’s post shows how to set-up a reliable, highly available distributed Kubernetes cluster. The support for running such clusters on Google Compute Engine (GCE) was added as an alpha feature in &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5 release&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Motivation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will create a highly available Kubernetes cluster, with master replicas and worker nodes distributed among three zones of a region. Such a setup ensures that the cluster will continue operating during a zone failure.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Setting Up HA cluster&lt;/strong&gt;&lt;/p&gt;</description></item><item><title> Fission: Serverless Functions as a Service for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2017/01/fission-serverless-functions-as-service-for-kubernetes/</link><pubDate>Mon, 30 Jan 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/01/fission-serverless-functions-as-service-for-kubernetes/</guid><description>&lt;p&gt;&lt;a href="https://github.com/fission/fission"&gt;Fission&lt;/a&gt; is a Functions as a Service (FaaS) / Serverless function framework built on Kubernetes.&lt;/p&gt;
&lt;p&gt;Fission allows you to easily create HTTP services on Kubernetes from functions. It works at the source level and abstracts away container images (in most cases). It also simplifies the Kubernetes learning curve, by enabling you to make useful services without knowing much about Kubernetes.&lt;/p&gt;
&lt;p&gt;To use Fission, you simply create functions and add them with a CLI. You can associate functions with HTTP routes, Kubernetes events, or other triggers. Fission supports NodeJS and Python today.&lt;/p&gt;</description></item><item><title> Running MongoDB on Kubernetes with StatefulSets</title><link>https://andygol-k8s.netlify.app/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/</link><pubDate>Mon, 30 Jan 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/</guid><description>&lt;div class="alert alert-danger" role="note"&gt;&lt;h4 class="alert-heading"&gt;Warning:&lt;/h4&gt;This post is several years old. The code examples need changes to work on a current Kubernetes cluster.&lt;/div&gt;

&lt;p&gt;Conventional wisdom says you can’t run a database in a container. “Containers are stateless!” they say, and “databases are pointless without state!”&lt;/p&gt;
&lt;p&gt;Of course, this is not true at all. At Google, everything runs in a container, including databases. You just need the right tools. &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5&lt;/a&gt; includes the new &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/abstractions/controllers/statefulsets/"&gt;StatefulSet&lt;/a&gt; API object (in previous versions, StatefulSet was known as PetSet). With StatefulSets, Kubernetes makes it much easier to run stateful workloads such as databases.&lt;/p&gt;</description></item><item><title> How we run Kubernetes in Kubernetes aka Kubeception</title><link>https://andygol-k8s.netlify.app/blog/2017/01/how-we-run-kubernetes-in-kubernetes-kubeception/</link><pubDate>Fri, 20 Jan 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/01/how-we-run-kubernetes-in-kubernetes-kubeception/</guid><description>&lt;p&gt;&lt;a href="https://giantswarm.io/"&gt;Giant Swarm&lt;/a&gt;’s container infrastructure started out with the goal to be an easy way for developers to deploy containerized microservices. Our first generation was extensively using &lt;a href="https://github.com/coreos/fleet"&gt;fleet&lt;/a&gt; as a base layer for our infrastructure components as well as for scheduling user containers.&lt;/p&gt;
&lt;p&gt;In order to give our users a more powerful way to manage their containers, we introduced Kubernetes into our stack in early 2016. However, as we needed a quick way to flexibly spin up and resiliently manage different users’ Kubernetes clusters, we kept the underlying fleet layer.&lt;/p&gt;</description></item><item><title> Scaling Kubernetes deployments with Policy-Based Networking</title><link>https://andygol-k8s.netlify.app/blog/2017/01/scaling-kubernetes-deployments-with-policy-base-networking/</link><pubDate>Thu, 19 Jan 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/01/scaling-kubernetes-deployments-with-policy-base-networking/</guid><description>&lt;p&gt;Although it’s just been eighteen months since Kubernetes 1.0 was released, we’ve seen Kubernetes emerge as the leading container orchestration platform for deploying distributed applications. One of the biggest reasons for this is the vibrant open source community that has developed around it. The fact that Kubernetes contributors come from diverse backgrounds means we, and the community of users, are assured that we are investing in an open platform. Companies like Google (Container Engine), Red Hat (OpenShift), and CoreOS (Tectonic) are developing their own commercial offerings based on Kubernetes. This is a good thing since it will lead to more standardization and offer choice to the users. 
&lt;/p&gt;</description></item><item><title> A Stronger Foundation for Creating and Managing Kubernetes Clusters</title><link>https://andygol-k8s.netlify.app/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters/</link><pubDate>Thu, 12 Jan 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters/</guid><description>&lt;p&gt;Last time you heard from us was in September, when we announced &lt;a href="https://kubernetes.io/blog/2016/09/how-we-made-kubernetes-easy-to-install"&gt;kubeadm&lt;/a&gt;. The work on making kubeadm a first-class citizen in the Kubernetes ecosystem has continued and evolved. Some of us also met before KubeCon and had a very productive meeting where we talked about what the scopes for our SIG, kubeadm, and kops are. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Continuing to Define SIG-Cluster-Lifecycle&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is the scope for kubeadm?&lt;/strong&gt;&lt;br&gt;
We want kubeadm to be a common set of building blocks for all Kubernetes deployments; the piece that provides secure and recommended ways to bootstrap Kubernetes. Since there is no one true way to set up Kubernetes, kubeadm will support more than one method for each phase. We want to identify the phases every deployment of Kubernetes has in common and make configurable and easy-to-use kubeadm commands for those phases. If your organization, for example, requires that you distribute the certificates in the cluster manually or in a custom way, skip using kubeadm just for that phase. We aim to keep kubeadm usable for all other phases in that case. We want you to be able to pick which things you want kubeadm to do and let you do the rest yourself.&lt;/p&gt;</description></item><item><title> Kubernetes UX Survey Infographic</title><link>https://andygol-k8s.netlify.app/blog/2017/01/kubernetes-ux-survey-infographic/</link><pubDate>Mon, 09 Jan 2017 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2017/01/kubernetes-ux-survey-infographic/</guid><description>&lt;p&gt;The following infographic summarizes the findings of a survey that the team behind &lt;a href="https://github.com/kubernetes/dashboard"&gt;Dashboard&lt;/a&gt;, the official web UI for Kubernetes, sent during KubeCon in November 2016. Following the KubeCon launch of the survey, it was promoted on Twitter and various Slack channels over a two-week period and generated over 100 responses. 
We’re delighted with the data it provides, which will help us make feature and roadmap decisions more in line with the needs of you, our users.&lt;/p&gt;</description></item><item><title> Kubernetes supports OpenAPI</title><link>https://andygol-k8s.netlify.app/blog/2016/12/kubernetes-supports-openapi/</link><pubDate>Fri, 23 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/kubernetes-supports-openapi/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.openapis.org/"&gt;OpenAPI&lt;/a&gt; allows API providers to define their operations and models, and enables developers to automate their tools and generate their favorite language’s client to talk to that API server. Kubernetes has supported swagger 1.2 (older version of OpenAPI spec) for a while, but the spec was incomplete and invalid, making it hard to generate tools/clients based on it.&lt;/p&gt;</description></item><item><title> Cluster Federation in Kubernetes 1.5</title><link>https://andygol-k8s.netlify.app/blog/2016/12/cluster-federation-in-kubernetes-1-5/</link><pubDate>Thu, 22 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/cluster-federation-in-kubernetes-1-5/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In the latest &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5 release&lt;/a&gt;, you’ll notice that support for Cluster Federation is maturing. That functionality was introduced in Kubernetes 1.3, and the 1.5 release includes a number of new features, including an easier setup experience and a step closer to supporting all Kubernetes API objects.&lt;/p&gt;
&lt;p&gt;A new command-line tool called ‘&lt;strong&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/admin/federation/kubefed/"&gt;kubefed&lt;/a&gt;&lt;/strong&gt;’ was introduced to make getting started with Cluster Federation much simpler. Also, alpha-level support was added for Federated DaemonSets, Deployments and ConfigMaps. In summary:&lt;/p&gt;</description></item><item><title> Windows Server Support Comes to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/12/windows-server-support-kubernetes/</link><pubDate>Wed, 21 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/windows-server-support-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Extending the theme of giving users choice, the &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5 release&lt;/a&gt; includes support for Windows Server. With more than &lt;a href="http://www.gartner.com/document/3446217"&gt;80%&lt;/a&gt; of enterprise apps running Java on Linux or .NET on Windows, Kubernetes is previewing capabilities that extend its reach to the vast majority of enterprise workloads. &lt;/p&gt;
&lt;p&gt;The new Kubernetes Windows Server 2016 and Windows Container support includes a public preview of the following features:&lt;/p&gt;</description></item><item><title> StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/</link><pubDate>Tue, 20 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In the latest release, &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5&lt;/a&gt;, we’ve moved the feature formerly known as PetSet into beta as &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/abstractions/controllers/statefulsets/"&gt;StatefulSet&lt;/a&gt;. There were no major changes to the API Object, other than the community-selected name, but we added the semantics of “at most one pod per index” for deployment of the Pods in the set. Along with ordered deployment, ordered termination, unique network names, and persistent stable storage, we think we have the right primitives to support many containerized stateful workloads. We don’t claim that the feature is 100% complete (it is software after all), but we believe that it is useful in its current form, and that we can extend the API in a backwards-compatible way as we progress toward an eventual GA release.&lt;/p&gt;</description></item><item><title> Five Days of Kubernetes 1.5</title><link>https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/</link><pubDate>Mon, 19 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/</guid><description>&lt;p&gt;With the help of our growing community of 1,000 contributors, we pushed some 5,000 commits to extend support for production workloads and deliver &lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/"&gt;Kubernetes 1.5&lt;/a&gt;. While many improvements and new features have been added, we selected a few to highlight in the series of in-depth posts listed below. &lt;/p&gt;
&lt;p&gt;This progress reflects our commitment to making Kubernetes the best way to manage your production workloads at scale.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;th&gt;Five Days of Kubernetes 1.5&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 1&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes"&gt;Introducing Container Runtime Interface (CRI) in Kubernetes&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 2&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes"&gt;StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 3&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2016/12/windows-server-support-kubernetes"&gt;Windows Server Support Comes to Kubernetes&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 4&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2016/12/cluster-federation-in-kubernetes-1-5/"&gt;Cluster Federation in Kubernetes 1.5&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Day 5&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.io/blog/2016/12/kubernetes-supports-openapi"&gt;Kubernetes supports OpenAPI&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
</description></item><item><title> Introducing Container Runtime Interface (CRI) in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/12/container-runtime-interface-cri-in-kubernetes/</link><pubDate>Mon, 19 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/container-runtime-interface-cri-in-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/12/five-days-of-kubernetes-1-5/"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;At the lowest layers of a Kubernetes node is the software that, among other things, starts and stops containers. We call this the “Container Runtime”. The most widely known container runtime is Docker, but it is not alone in this space. In fact, the container runtime space has been rapidly evolving. As part of the effort to make Kubernetes more extensible, we've been working on a new plugin API for container runtimes in Kubernetes, called &amp;quot;CRI&amp;quot;.&lt;/p&gt;</description></item><item><title> Kubernetes 1.5: Supporting Production Workloads</title><link>https://andygol-k8s.netlify.app/blog/2016/12/kubernetes-1-5-supporting-production-workloads/</link><pubDate>Tue, 13 Dec 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/12/kubernetes-1-5-supporting-production-workloads/</guid><description>&lt;p&gt;Today we’re announcing the release of Kubernetes 1.5. This release follows close on the heels of KubeCon/CloudNativeCon, where users gathered to share how they’re running their applications on Kubernetes. Many of you expressed interest in running stateful applications in containers with the eventual goal of running all applications on Kubernetes. If you have been waiting to try running a distributed database on Kubernetes, or for ways to guarantee application disruption SLOs for stateful and stateless apps, this release has solutions for you. &lt;/p&gt;
&lt;p&gt;Kubernetes supports a &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/networkpolicies/"&gt;new API for network policies&lt;/a&gt; that provides a sophisticated model for isolating applications and reducing their attack surface. This feature, which came out of the &lt;a href="https://github.com/kubernetes/community/wiki/SIG-Network"&gt;SIG-Network group&lt;/a&gt;, makes it easy and elegant to define network policies using Kubernetes’ built-in label and selector constructs.&lt;/p&gt;
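&lt;p&gt;As an illustration (not from the original post), a minimal policy in the current networking.k8s.io/v1 API might admit traffic to back-end pods only from front-end pods; all names and labels here are hypothetical:&lt;/p&gt;

```yaml
# Hypothetical example: only pods labeled app=frontend may reach
# pods labeled app=backend, and only on TCP port 80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
```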
&lt;p&gt;Kubernetes has left it up to third parties to implement these network policies and does not provide a default implementation.&lt;/p&gt;</description></item><item><title> Kompose: a tool to go from Docker-compose to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/11/kompose-tool-go-from-docker-compose-to-kubernetes/</link><pubDate>Tue, 22 Nov 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/11/kompose-tool-go-from-docker-compose-to-kubernetes/</guid><description>&lt;p&gt;At &lt;a href="http://www.skippbox.com/"&gt;Skippbox&lt;/a&gt;, we developed &lt;strong&gt;kompose&lt;/strong&gt;, a tool that automatically transforms your Docker Compose application into Kubernetes manifests, allowing you to start a Compose application on a Kubernetes cluster with a single &lt;code&gt;kompose up&lt;/code&gt; command. We’re extremely happy to have donated kompose to the &lt;a href="https://github.com/kubernetes-incubator"&gt;Kubernetes Incubator&lt;/a&gt;. So here’s a quick introduction to it and some motivating factors that got us to develop it.&lt;/p&gt;
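&lt;p&gt;For instance, given a Compose file like the hypothetical one below (service names and images are ours, not from the post), &lt;code&gt;kompose up&lt;/code&gt; creates the equivalent Kubernetes objects, and &lt;code&gt;kompose convert&lt;/code&gt; writes the manifests to disk:&lt;/p&gt;

```yaml
# docker-compose.yml: a hypothetical two-service application.
version: "2"
services:
  web:
    image: nginx
    ports:
      - "80:80"   # a published port also yields a Kubernetes Service
  cache:
    image: redis  # no published port, so only a Deployment is generated
```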
&lt;p&gt;Docker is terrific for developers. It allows everyone to get started quickly with an application that has been packaged in a Docker image and is available on a Docker registry. To build a multi-container application, Docker has developed Docker-compose (aka Compose). Compose takes in a YAML-based manifest of your multi-container application and starts all the required containers with a single command, &lt;code&gt;docker-compose up&lt;/code&gt;. However, Compose only works locally or with a Docker Swarm cluster.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This retirement happened in early 2019, as part of the&lt;/em&gt; &lt;code&gt;kubernetes/contrib&lt;/code&gt;
&lt;em&gt;&lt;a href="https://github.com/kubernetes-retired/contrib/issues/3007"&gt;repository deprecation&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.4, we introduced a new node performance analysis tool, called the &lt;em&gt;node performance dashboard&lt;/em&gt;, to visualize and explore the behavior of the Kubelet in much richer detail. This new feature makes it easy for Kubelet developers to understand and improve code performance, and lets cluster maintainers set configurations according to provided Service Level Objectives (SLOs).&lt;/p&gt;</description></item><item><title> CNCF Partners With The Linux Foundation To Launch New Kubernetes Certification, Training and Managed Service Provider Program</title><link>https://andygol-k8s.netlify.app/blog/2016/11/kubernetes-certification-training-and-managed-service-provider-program/</link><pubDate>Tue, 08 Nov 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/11/kubernetes-certification-training-and-managed-service-provider-program/</guid><description>&lt;p&gt;Today the CNCF is pleased to launch a new training, certification and Kubernetes Managed Service Provider (KMSP) program. &lt;/p&gt;
&lt;p&gt;The goal of the program is to ensure enterprises get the support they’re looking for to get up to speed and roll out new applications more quickly and more efficiently. The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification.&lt;/p&gt;
&lt;p&gt;Interested in this course? Sign up &lt;a href="https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals"&gt;here&lt;/a&gt; to pre-register. The course, expected to be available in early 2017, is open now at the discounted price of $99 (regularly $199) for a limited time, and the certification program is expected to be available in the second quarter of 2017. &lt;/p&gt;</description></item><item><title> Bringing Kubernetes Support to Azure Container Service</title><link>https://andygol-k8s.netlify.app/blog/2016/11/bringing-kubernetes-support-to-azure/</link><pubDate>Mon, 07 Nov 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/11/bringing-kubernetes-support-to-azure/</guid><description>&lt;p&gt;With more than a thousand people coming to &lt;a href="http://events.linuxfoundation.org/events/kubecon"&gt;KubeCon&lt;/a&gt; in my hometown of Seattle, nearly three years after I helped start the Kubernetes project, it’s amazing and humbling to see what a small group of people and a radical idea have become after three years of hard work from a large and growing community. In July of 2014, scarcely a month after Kubernetes became publicly available, Microsoft announced its initial support for Azure. The release of &lt;a href="https://kubernetes.io/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/"&gt;Kubernetes 1.4&lt;/a&gt; brought support for native Microsoft networking, &lt;a href="https://github.com/kubernetes/kubernetes/pull/28821"&gt;load-balancer&lt;/a&gt; and &lt;a href="https://github.com/kubernetes/kubernetes/pull/29836"&gt;disk integration&lt;/a&gt;. 
&lt;/p&gt;</description></item><item><title> Modernizing the Skytap Cloud Micro-Service Architecture with Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/11/skytap-modernizing-microservice-architecture-with-kubernetes/</link><pubDate>Mon, 07 Nov 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/11/skytap-modernizing-microservice-architecture-with-kubernetes/</guid><description>&lt;p&gt;&lt;a href="https://www.skytap.com/"&gt;Skytap&lt;/a&gt; is a global public cloud that provides our customers the ability to save and clone complex virtualized environments in any given state. Our customers include enterprise organizations running applications in a hybrid cloud, educational organizations providing &lt;a href="https://www.skytap.com/solutions/virtual-training/"&gt;virtual training labs&lt;/a&gt;, users who need easy-to-maintain development and test labs, and a variety of organizations with diverse DevOps workflows.&lt;/p&gt;
&lt;p&gt;Some time ago, we started growing our business at an accelerated pace — our user base and our engineering organization continue to grow simultaneously. These are exciting, rewarding challenges! However, it's difficult to scale applications and organizations smoothly, and we’re approaching the task carefully. When we first began looking at improvements to scale our toolset, it was very clear that traditional OS virtualization was not going to be an effective way to achieve our scaling goals. We found that the persistent nature of VMs encouraged engineers to build and maintain bespoke ‘pet’ VMs; this did not align well with our desire to build reusable runtime environments with a stable, predictable state. Fortuitously, growth in the Docker and Kubernetes communities has aligned with our growth, and the concurrent explosion in community engagement has (from our perspective) helped these tools mature.&lt;/p&gt;</description></item><item><title> Introducing Kubernetes Service Partners program and a redesigned Partners page</title><link>https://andygol-k8s.netlify.app/blog/2016/10/kubernetes-service-technology-partners-program/</link><pubDate>Mon, 31 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/kubernetes-service-technology-partners-program/</guid><description>&lt;p&gt;Kubernetes has become a leading container orchestration system by being a powerful and flexible way to run distributed systems at scale. Through our very active open source community, equating to hundreds of person-years of work, Kubernetes achieved four major releases in just one year to become a critical part of thousands of companies’ infrastructures. However, even with all that momentum, adopting cloud native computing is a significant transition for many organizations. 
It can be challenging to adopt a new methodology, and many teams are looking for advice and support through that journey.&lt;/p&gt;</description></item><item><title> Tail Kubernetes with Stern</title><link>https://andygol-k8s.netlify.app/blog/2016/10/tail-kubernetes-with-stern/</link><pubDate>Mon, 31 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/tail-kubernetes-with-stern/</guid><description>&lt;p&gt;We love Kubernetes here at &lt;a href="http://wercker.com/"&gt;Wercker&lt;/a&gt; and build all our infrastructure on top of it. When deploying anything, you need good visibility into what's going on, and logs are a first view into the inner workings of your application. Good old &lt;code&gt;tail -f&lt;/code&gt; has been around for a long time, and Kubernetes has this too, built right into &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/kubectl-overview/"&gt;kubectl&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I should say that tail is by no means the tool to use for debugging issues; instead, you should feed the logs into a more persistent place, such as &lt;a href="https://www.elastic.co/products/elasticsearch"&gt;Elasticsearch&lt;/a&gt;. However, there's still a place for tail where you need to quickly debug something or perhaps you don't have persistent logging set up yet (such as when developing an app in &lt;a href="https://github.com/kubernetes/minikube"&gt;Minikube&lt;/a&gt;).&lt;/p&gt;</description></item><item><title> How We Architected and Run Kubernetes on OpenStack at Scale at Yahoo! JAPAN</title><link>https://andygol-k8s.netlify.app/blog/2016/10/kubernetes-and-openstack-at-yahoo-japan/</link><pubDate>Mon, 24 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/kubernetes-and-openstack-at-yahoo-japan/</guid><description>&lt;p&gt;&lt;em&gt;Editor’s note: today’s post is by the Infrastructure Engineering team at Yahoo! JAPAN, talking about how they run Kubernetes on OpenStack. This post has been translated and edited for context with permission -- originally published on the &lt;a href="http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/"&gt;Yahoo! JAPAN engineering blog&lt;/a&gt;. &lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;br&gt;
This post outlines how Yahoo! JAPAN, with help from Google and Solinea, built an automation tool chain for “one-click” code deployment to Kubernetes running on OpenStack. &lt;/p&gt;
&lt;p&gt;We’ll also cover the basic security, networking, storage, and performance needs to ensure production readiness. &lt;/p&gt;</description></item><item><title> Building Globally Distributed Services using Kubernetes Cluster Federation</title><link>https://andygol-k8s.netlify.app/blog/2016/10/globally-distributed-services-kubernetes-cluster-federation/</link><pubDate>Fri, 14 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/globally-distributed-services-kubernetes-cluster-federation/</guid><description>&lt;p&gt;In Kubernetes 1.3, we announced Kubernetes Cluster Federation and introduced the concept of Cross Cluster Service Discovery, enabling developers to deploy a service that was sharded across a federation of clusters spanning different zones, regions or cloud providers. This enables developers to achieve higher availability for their applications, without sacrificing quality of service, as detailed in our &lt;a href="https://kubernetes.io/blog/2016/07/cross-cluster-services"&gt;previous&lt;/a&gt; blog post.&lt;/p&gt;
&lt;p&gt;In the latest release, &lt;a href="https://kubernetes.io/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/"&gt;Kubernetes 1.4&lt;/a&gt;, we've extended Cluster Federation to support Replica Sets, Secrets, Namespaces and Ingress objects. This means that you no longer need to deploy and manage these objects individually in each of your federated clusters. Just create them once in the federation, and have its built-in controllers automatically handle that for you.&lt;/p&gt;</description></item><item><title> Helm Charts: making it simple to package and deploy common applications on Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/10/helm-charts-making-it-simple-to-package-and-deploy-apps-on-kubernetes/</link><pubDate>Mon, 10 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/helm-charts-making-it-simple-to-package-and-deploy-apps-on-kubernetes/</guid><description>&lt;p&gt;There are thousands of people and companies packaging their applications for deployment on Kubernetes. This usually involves crafting a few different Kubernetes resource definitions that configure the application runtime, as well as defining the mechanism that users and other apps leverage to communicate with the application. There are some very common applications that users regularly look for guidance on deploying, such as databases, CI tools, and content management systems. These types of applications are usually not ones that are developed and iterated on by end users, but rather their configuration is customized to fit a specific use case. 
Once that application is deployed, users can link it to their existing systems or leverage its functionality to solve their pain points.&lt;/p&gt;</description></item><item><title> Dynamic Provisioning and Storage Classes in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/</link><pubDate>Fri, 07 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/</guid><description>&lt;p&gt;Storage is a critical part of running containers, and Kubernetes offers some powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. This feature was introduced as alpha in Kubernetes 1.2, and has been improved and promoted to beta in the &lt;a href="https://kubernetes.io/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/"&gt;latest release, 1.4&lt;/a&gt;. 
This release makes dynamic provisioning far more flexible and useful.&lt;/p&gt;</description></item><item><title> How we improved Kubernetes Dashboard UI in 1.4 for your production needs​</title><link>https://andygol-k8s.netlify.app/blog/2016/10/production-kubernetes-dashboard-ui-1-4-improvements_3/</link><pubDate>Mon, 03 Oct 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/10/production-kubernetes-dashboard-ui-1-4-improvements_3/</guid><description>&lt;p&gt;With the release of &lt;a href="https://kubernetes.io/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/"&gt;Kubernetes 1.4&lt;/a&gt; last week, Dashboard – the official web UI for Kubernetes – has a number of exciting updates and improvements of its own. The past three months have been busy ones for the Dashboard team, and we’re excited to share the resulting features of that effort here. If you’re not familiar with Dashboard, the &lt;a href="https://github.com/kubernetes/dashboard#kubernetes-dashboard"&gt;GitHub repo&lt;/a&gt; is a great place to get started.&lt;/p&gt;
&lt;p&gt;A quick recap before unwrapping our shiny new features: Dashboard was initially released in March 2016. One of the focuses for Dashboard throughout its lifetime has been the onboarding experience; it’s a less intimidating way for Kubernetes newcomers to get started, and by showing multiple resources at once, it provides contextualization lacking in &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/kubectl-overview/"&gt;kubectl&lt;/a&gt; (the CLI). After that initial release, though, the product team realized that fine-tuning for a beginner audience was getting ahead of ourselves: there were still fundamental product requirements that Dashboard needed to satisfy before it could offer a productive UX for onboarding new users. That became our mission for this release: closing the gap between Dashboard and kubectl by showing more resources, leveraging a web UI’s strengths in monitoring and troubleshooting, and architecting this all in a user-friendly way.&lt;/p&gt;</description></item><item><title> How we made Kubernetes insanely easy to install</title><link>https://andygol-k8s.netlify.app/blog/2016/09/how-we-made-kubernetes-easy-to-install/</link><pubDate>Wed, 28 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/how-we-made-kubernetes-easy-to-install/</guid><description>&lt;p&gt;Over at &lt;a href="https://github.com/kubernetes/community/blob/master/sig-cluster-lifecycle/README.md"&gt;SIG-cluster-lifecycle&lt;/a&gt;, we've been hard at work for the last few months on kubeadm, a tool that makes Kubernetes dramatically easier to install. We've heard from users that installing Kubernetes is harder than it should be, and we want folks to be focused on writing great distributed apps, not wrangling with infrastructure!&lt;/p&gt;
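&lt;p&gt;The bootstrap flow the kubeadm announcement describes boils down to two commands, sketched below; exact flags vary by kubeadm version, and the token and address values are placeholders produced during initialization:&lt;/p&gt;

```shell
# On the master: initialize the control plane.
# kubeadm prints a join command containing a generated token.
kubeadm init

# On each node: join the cluster using that token and the master address.
kubeadm join --token=$TOKEN $MASTER_IP
```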
&lt;p&gt;There are three stages in setting up a Kubernetes cluster, and we decided to focus on the second two (to begin with):&lt;/p&gt;</description></item><item><title> How Qbox Saved 50% per Month on AWS Bills Using Kubernetes and Supergiant</title><link>https://andygol-k8s.netlify.app/blog/2016/09/how-qbox-saved-50-percent-on-aws-bills/</link><pubDate>Tue, 27 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/how-qbox-saved-50-percent-on-aws-bills/</guid><description>&lt;p&gt;&lt;em&gt;Editor’s Note: Today’s post is by the team at Qbox, a hosted Elasticsearch provider sharing their experience with Kubernetes and how it helped save them fifty-percent off their cloud bill. &lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A little over a year ago, we at Qbox faced an existential problem. Just about all of the major IaaS providers either launched or acquired services that competed directly with our &lt;a href="https://qbox.io/"&gt;Hosted Elasticsearch&lt;/a&gt; service, and many of them started offering it for free. The race to zero was afoot unless we could re-engineer our infrastructure to be more performant, more stable, and less expensive than the VM approach we had had before, and the one that is in use by our IaaS brethren. With the help of Kubernetes, Docker, and Supergiant (our own hand-rolled layer for managing distributed and stateful data), we were able to deliver 50% savings, a mid-five figure sum. At the same time, support tickets plummeted. We were so pleased with the results that we decided to &lt;a href="https://github.com/supergiant/supergiant"&gt;open source Supergiant&lt;/a&gt; as its own standalone product. This post will demonstrate how we accomplished it.&lt;/p&gt;</description></item><item><title> Kubernetes 1.4: Making it easy to run on Kubernetes anywhere</title><link>https://andygol-k8s.netlify.app/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/</link><pubDate>Mon, 26 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/</guid><description>&lt;p&gt;Today we’re happy to announce the release of Kubernetes 1.4.&lt;/p&gt;
&lt;p&gt;Since the release to general availability just over 15 months ago, Kubernetes has continued to grow and achieve broad adoption across the industry. From brand new startups to large-scale businesses, users have described how big a difference Kubernetes has made in building, deploying and managing distributed applications. However, one of our top user requests has been making Kubernetes itself easier to install and use. We’ve taken that feedback to heart, and 1.4 has several major improvements.&lt;/p&gt;</description></item><item><title> High performance network policies in Kubernetes clusters</title><link>https://andygol-k8s.netlify.app/blog/2016/09/high-performance-network-policies-kubernetes/</link><pubDate>Wed, 21 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/high-performance-network-policies-kubernetes/</guid><description>&lt;p&gt;&lt;strong&gt;Network Policies&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Since the release of Kubernetes 1.3 back in July, users have been able to define and enforce network policies in their clusters. These policies are firewall rules that specify permissible types of traffic to, from and between pods. If requested, Kubernetes blocks all traffic that is not explicitly allowed. Policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks often used to isolate layers in a multi-tier application: You might identify your front-end and back-end pods by a specific “segment” label, for example. Policies control traffic between those segments and even traffic to or from external sources.&lt;/p&gt;</description></item><item><title> Creating a PostgreSQL Cluster using Helm</title><link>https://andygol-k8s.netlify.app/blog/2016/09/creating-postgresql-cluster-using-helm/</link><pubDate>Fri, 09 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/creating-postgresql-cluster-using-helm/</guid><description>&lt;p&gt;&lt;a href="http://www.crunchydata.com/"&gt;Crunchy Data&lt;/a&gt; supplies a set of open source PostgreSQL and PostgreSQL-related containers. The Crunchy PostgreSQL Container Suite includes containers that deploy, monitor, and administer the open source PostgreSQL database; for more details, view this GitHub &lt;a href="https://github.com/crunchydata/crunchy-containers"&gt;repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this post we’ll show you how to deploy a PostgreSQL cluster using &lt;a href="https://github.com/kubernetes/helm"&gt;Helm&lt;/a&gt;, a Kubernetes package manager. For reference, the Crunchy Helm Chart examples used within this post are located &lt;a href="https://github.com/CrunchyData/crunchy-containers/tree/master/examples/kubehelm/crunchy-postgres"&gt;here&lt;/a&gt;, and the pre-built containers can be found on DockerHub at &lt;a href="https://hub.docker.com/u/crunchydata/dashboard/"&gt;this location&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> Deploying to Multiple Kubernetes Clusters with kit</title><link>https://andygol-k8s.netlify.app/blog/2016/09/deploying-to-multiple-kubernetes-with-kit/</link><pubDate>Tue, 06 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/deploying-to-multiple-kubernetes-with-kit/</guid><description>&lt;p&gt;Our Docker journey at InVision may sound familiar. We started with Docker in our development environments, trying to get consistency there first. We wrangled our legacy monolith application into Docker images and streamlined our Dockerfiles to minimize size and amp the efficiency. Things were looking good. Did we learn a lot along the way? For sure. But at the end of it all, we had our entire engineering team working with Docker locally for their development environments. Mission accomplished! Well, not quite. Development was one thing, but moving to production was a whole other ballgame.&lt;/p&gt;</description></item><item><title> Cloud Native Application Interfaces</title><link>https://andygol-k8s.netlify.app/blog/2016/09/cloud-native-application-interfaces/</link><pubDate>Thu, 01 Sep 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/09/cloud-native-application-interfaces/</guid><description>&lt;p&gt;&lt;strong&gt;Standard Interfaces (or, the Thirteenth Factor)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When you say we need ‘software standards’ in erudite company, you get some interesting looks. Most concede that software standards have been central to the success of the boldest and most successful projects out there (like the Internet). Most are also skeptical about how they apply to the innovative world we live in today. Our projects are executed in week increments, not years. Getting bogged down behind mega-software-corporation-driven standards practices would be the death knell in this fluid, highly competitive world.&lt;/p&gt;</description></item><item><title> Security Best Practices for Kubernetes Deployment</title><link>https://andygol-k8s.netlify.app/blog/2016/08/security-best-practices-kubernetes-deployment/</link><pubDate>Wed, 31 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/08/security-best-practices-kubernetes-deployment/</guid><description>&lt;p&gt;&lt;em&gt;Note: some of the recommendations in this post are no longer current. Current cluster hardening options are described in this &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/administer-cluster/securing-a-cluster/"&gt;documentation&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Editor’s note: today’s post is by Amir Jerbi and Michael Cherny of Aqua Security, describing security best practices for Kubernetes deployments, based on data they’ve collected from various use-cases seen in both on-premises and cloud deployments.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes provides many controls that can greatly improve your application security. Configuring them requires intimate knowledge of Kubernetes and the deployment’s security requirements. The best practices we highlight here are aligned to the container lifecycle: build, ship and run, and are specifically tailored to Kubernetes deployments. We adopted these best practices in &lt;a href="http://blog.aquasec.com/running-a-security-service-in-google-cloud-real-world-example"&gt;our own SaaS deployment&lt;/a&gt; that runs Kubernetes on Google Cloud Platform.&lt;/p&gt;</description></item><item><title> Scaling Stateful Applications using Kubernetes Pet Sets and FlexVolumes with Datera Elastic Data Fabric</title><link>https://andygol-k8s.netlify.app/blog/2016/08/stateful-applications-using-kubernetes-datera/</link><pubDate>Mon, 29 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/08/stateful-applications-using-kubernetes-datera/</guid><description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/petset/"&gt;Pet Sets&lt;/a&gt; provide the ability to sequence provisioning and startup, to scale, and to associate storage durably, making it possible to automate the scaling of “Pets” (applications that require consistent handling and durable placement).&lt;/p&gt;</description></item><item><title> Kubernetes Namespaces: use cases and insights</title><link>https://andygol-k8s.netlify.app/blog/2016/08/kubernetes-namespaces-use-cases-insights/</link><pubDate>Tue, 16 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/08/kubernetes-namespaces-use-cases-insights/</guid><description>&lt;p&gt;&lt;em&gt;“Who's on first, What's on second, I Don't Know's on third” &lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href="https://www.youtube.com/watch?v=kTcRRaXV-fg"&gt;Who's on First?&lt;/a&gt; by Abbott and Costello&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes is a system with several concepts. Many of these concepts get manifested as “objects” in the RESTful API (often called “resources” or “kinds”). One of these concepts is &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/namespaces/"&gt;Namespaces&lt;/a&gt;. In Kubernetes, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. In this post we’ll highlight examples of how our customers are using Namespaces. &lt;/p&gt;</description></item><item><title> SIG Apps: build apps for and operate them in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/08/sig-apps-running-apps-in-kubernetes/</link><pubDate>Tue, 16 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/08/sig-apps-running-apps-in-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This post is by the Kubernetes SIG-Apps team sharing how they focus on the developer and devops experience of running applications in Kubernetes.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes is an incredible manager for containerized applications. Because of this, &lt;a href="https://kubernetes.io/blog/2016/02/sharethis-kubernetes-in-production"&gt;numerous&lt;/a&gt; &lt;a href="https://blog.box.com/blog/kubernetes-box-microservices-maximum-velocity/"&gt;companies&lt;/a&gt; &lt;a href="http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/"&gt;have&lt;/a&gt; &lt;a href="http://www.nextplatform.com/2015/11/12/inside-ebays-shift-to-kubernetes-and-containers-atop-openstack/"&gt;started&lt;/a&gt; to run their applications in Kubernetes.&lt;/p&gt;
&lt;p&gt;Kubernetes Special Interest Groups (&lt;a href="https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig"&gt;SIGs&lt;/a&gt;) have supported the community of developers and operators since around the 1.0 release. People organized around networking, storage, scaling and other operational areas.&lt;/p&gt;</description></item><item><title> Create a Couchbase cluster using Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/08/create-couchbase-cluster-using-kubernetes/</link><pubDate>Mon, 15 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/08/create-couchbase-cluster-using-kubernetes/</guid><description>&lt;p&gt;&lt;a href="http://www.couchbase.com/nosql-databases/couchbase-server"&gt;Couchbase Server&lt;/a&gt; is an open source, distributed NoSQL document-oriented database. It exposes a fast key-value store with managed cache for submillisecond data operations, purpose-built indexers for fast queries and a query engine for executing SQL queries. For mobile and Internet of Things (IoT) environments, &lt;a href="http://developer.couchbase.com/mobile"&gt;Couchbase Lite&lt;/a&gt; runs natively on-device and manages sync to Couchbase Server.&lt;/p&gt;
&lt;p&gt;Couchbase Server 4.5 was &lt;a href="http://blog.couchbase.com/2016/june/announcing-couchbase-server-4.5"&gt;recently announced&lt;/a&gt;, bringing &lt;a href="http://developer.couchbase.com/documentation/server/4.5/introduction/whats-new.html"&gt;many new features&lt;/a&gt;, including &lt;a href="http://www.couchbase.com/press-releases/couchbase-announces-support-for-docker-containers"&gt;production certified support for Docker&lt;/a&gt;. Couchbase is supported on a wide variety of orchestration frameworks for Docker containers, such as Kubernetes, Docker Swarm and Mesos; for full details, visit &lt;a href="http://couchbase.com/containers"&gt;this page&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> Challenges of a Remotely Managed, On-Premises, Bare-Metal Kubernetes Cluster</title><link>https://andygol-k8s.netlify.app/blog/2016/08/challenges-remotely-managed-onpremise-kubernetes-cluster/</link><pubDate>Tue, 02 Aug 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/08/challenges-remotely-managed-onpremise-kubernetes-cluster/</guid><description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The recently announced &lt;a href="https://platform9.com/press/platform9-makes-easy-deploy-docker-containers-production-scale/"&gt;Platform9 Managed Kubernetes&lt;/a&gt; (PMK) is an on-premises enterprise Kubernetes solution with an unusual twist: while clusters run on a user’s internal hardware, their provisioning, monitoring, troubleshooting and overall life cycle are managed remotely from the Platform9 SaaS application. While users love the intuitive experience and ease of use of this deployment model, this approach poses interesting technical challenges. In this article, we will first describe the motivation and deployment architecture of PMK, and then present an overview of the technical challenges we faced and how our engineering team addressed them.&lt;/p&gt;</description></item><item><title> Why OpenStack's embrace of Kubernetes is great for both communities</title><link>https://andygol-k8s.netlify.app/blog/2016/07/openstack-kubernetes-communities/</link><pubDate>Tue, 26 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/openstack-kubernetes-communities/</guid><description>&lt;p&gt;Today, &lt;a href="https://www.mirantis.com/"&gt;Mirantis&lt;/a&gt;, the leading contributor to &lt;a href="http://stackalytics.com/?release=mitaka"&gt;OpenStack&lt;/a&gt;, &lt;a href="https://techcrunch.com/2016/07/25/openstack-will-soon-be-able-to-run-on-top-of-kubernetes/"&gt;announced&lt;/a&gt; that it will re-write its private cloud platform to use Kubernetes as its underlying orchestration engine. We think this is a great step forward for both the OpenStack and Kubernetes communities. With Kubernetes under the hood, OpenStack users will benefit from the tremendous efficiency, manageability and resiliency that Kubernetes brings to the table, while positioning their applications to use more cloud-native patterns. 
The Kubernetes community, meanwhile, can feel confident in their choice of orchestration framework, while gaining the ability to manage both container- and VM-based applications from a single platform.&lt;/p&gt;</description></item><item><title> A Very Happy Birthday Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/07/happy-k8sbday-1/</link><pubDate>Thu, 21 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/happy-k8sbday-1/</guid><description>&lt;p&gt;Last year at OSCON, I got to reconnect with a bunch of friends and see what they have been working on. That turned out to be the &lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP0Ljwa9J98xUd6UlM604Y-l"&gt;Kubernetes 1.0 launch event&lt;/a&gt;. Even that day, it was clear the project was supported by a broad community -- a group that showed an ambitious vision for distributed computing. &lt;/p&gt;
&lt;p&gt;Today, on the first anniversary of the Kubernetes 1.0 launch, it’s amazing to see what a community of dedicated individuals can do. Kubernauts have collectively put in &lt;a href="https://www.openhub.net/p/kubernetes"&gt;237 person-years of coding effort&lt;/a&gt; since launch to bring forward our most recent &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;release 1.3&lt;/a&gt;. However, the community is much more than simply coding effort. It is made up of people -- individuals that have given their expertise and energy to make this project flourish. With more than 830 diverse contributors, from independents to the largest companies in the world, it’s their work that makes Kubernetes stand out. Here are stories from a couple of early contributors reflecting back on the project:&lt;/p&gt;</description></item><item><title> Happy Birthday Kubernetes. Oh, the places you’ll go!</title><link>https://andygol-k8s.netlify.app/blog/2016/07/oh-the-places-you-will-go/</link><pubDate>Thu, 21 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/oh-the-places-you-will-go/</guid><description>&lt;p&gt;&lt;strong&gt;Dear K8s,&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;It’s hard to believe you’re only one - you’ve grown up so fast. On the occasion of your first birthday, I thought I would write a little note about why I was so excited when you were born, why I feel fortunate to be part of the group that is raising you, and why I’m eager to watch you continue to grow up!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;--Justin&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;You started with an excellent foundation - good declarative functionality, built around a solid API with a well defined schema and the machinery so that we could evolve going forwards. And sure enough, over your first year you grew so fast: autoscaling, HTTP load-balancing support (Ingress), support for persistent workloads including clustered databases (PetSets). You’ve made friends with more clouds (welcome Azure &amp;amp; OpenStack to the family), and even started to span zones and clusters (Federation). And these are just some of the most visible changes - there’s so much happening inside that brain of yours!&lt;/p&gt;</description></item><item><title> The Bet on Kubernetes, a Red Hat Perspective</title><link>https://andygol-k8s.netlify.app/blog/2016/07/the-bet-on-kubernetes/</link><pubDate>Thu, 21 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/the-bet-on-kubernetes/</guid><description>&lt;p&gt;Two years ago, Red Hat made a big bet on Kubernetes. We bet on a simple idea: that an open source community is the best place to build the future of application orchestration, and that only an open source community could successfully integrate the diverse range of capabilities necessary to succeed. As a Red Hatter, that idea is not far-fetched - we’ve seen it successfully applied in many communities, but we’ve also seen it fail, especially when a broad reach is not supported by solid foundations. 
On the one year anniversary of Kubernetes 1.0, two years after the first open-source commit to the Kubernetes project, it’s worth asking the question:&lt;/p&gt;</description></item><item><title> Bringing End-to-End Kubernetes Testing to Azure (Part 2)</title><link>https://andygol-k8s.netlify.app/blog/2016/07/bringing-end-to-end-kubernetes-testing-to-azure-2/</link><pubDate>Mon, 18 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/bringing-end-to-end-kubernetes-testing-to-azure-2/</guid><description>&lt;p&gt;Historically, Kubernetes testing has been hosted by Google, running e2e tests on &lt;a href="https://cloud.google.com/compute/"&gt;Google Compute Engine&lt;/a&gt; (GCE) and &lt;a href="https://cloud.google.com/container-engine/"&gt;Google Container Engine&lt;/a&gt; (GKE). In fact, the gating checks for the submit-queue are a subset of tests executed on these test platforms. Federated testing aims to expand test coverage by enabling organizations to host test jobs for a variety of platforms and contribute test results to benefit the Kubernetes project. 
Members of the Kubernetes test team at Google and SIG-Testing have created a &lt;a href="http://storage.googleapis.com/kubernetes-test-history/static/index.html"&gt;Kubernetes test history dashboard&lt;/a&gt; that publishes the results from all federated test jobs (including those hosted by Google).&lt;/p&gt;</description></item><item><title> Dashboard - Full Featured Web Interface for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/07/dashboard-web-interface-for-kubernetes/</link><pubDate>Fri, 15 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/dashboard-web-interface-for-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.3&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://github.com/kubernetes/dashboard"&gt;Kubernetes Dashboard&lt;/a&gt; is a project that aims to bring a general purpose monitoring and operational web interface to the Kubernetes world. Three months ago we &lt;a href="https://kubernetes.io/blog/2016/04/building-awesome-user-interfaces-for-kubernetes"&gt;released&lt;/a&gt; the first production ready version, and since then the dashboard has made massive improvements. In a single UI, you’re able to perform majority of possible interactions with your Kubernetes clusters without ever leaving your browser. This blog post breaks down new features introduced in the latest release and outlines the roadmap for the future. &lt;/p&gt;</description></item><item><title> Steering an Automation Platform at Wercker with Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/07/automation-platform-at-wercker-with-kubernetes/</link><pubDate>Fri, 15 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/automation-platform-at-wercker-with-kubernetes/</guid><description>&lt;p&gt;At &lt;a href="http://wercker.com/"&gt;Wercker&lt;/a&gt; we run millions of containers that execute our users’ CI/CD jobs. The vast majority of them are ephemeral and only last as long as builds, tests and deploys take to run, the rest are ephemeral, too -- aren't we all --, but tend to last a bit longer and run our infrastructure. As we are running many containers across many nodes, we were in need of a highly scalable scheduler that would make our lives easier, and as such, decided to implement Kubernetes.&lt;/p&gt;</description></item><item><title> Citrix + Kubernetes = A Home Run</title><link>https://andygol-k8s.netlify.app/blog/2016/07/citrix-netscaler-and-kubernetes/</link><pubDate>Thu, 14 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/citrix-netscaler-and-kubernetes/</guid><description>&lt;p&gt;Technical collaboration is like sports. 
If you work together as a team, you can go down the homestretch and pull through for a win. That’s our experience with the Google Cloud Platform team.&lt;/p&gt;
&lt;p&gt;Recently, we approached Google Cloud Platform (GCP) to collaborate on behalf of Citrix customers and the broader enterprise market looking to migrate workloads. This migration required integrating the &lt;a href="https://www.citrix.com/blogs/2016/06/20/the-best-docker-load-balancer-at-dockercon-in-seattle-this-week/"&gt;NetScaler Docker load balancer&lt;/a&gt;, CPX, into Kubernetes nodes and resolving any issues with getting traffic into the CPX proxies.  &lt;/p&gt;</description></item><item><title> Cross Cluster Services - Achieving Higher Availability for your Kubernetes Applications</title><link>https://andygol-k8s.netlify.app/blog/2016/07/cross-cluster-services/</link><pubDate>Thu, 14 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/cross-cluster-services/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.3&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As Kubernetes users scale their production deployments, we’ve heard a clear desire to deploy services across zone, region, cluster and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios and improve the level of high availability beyond single cluster multi-zone deployments. Customers who want their services to span one or more (possibly remote) clusters need them to be reachable in a consistent manner from both within and outside their clusters.&lt;/p&gt;</description></item><item><title> Stateful Applications in Containers!? Kubernetes 1.3 Says “Yes!”</title><link>https://andygol-k8s.netlify.app/blog/2016/07/stateful-applications-in-containers-kubernetes/</link><pubDate>Wed, 13 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/stateful-applications-in-containers-kubernetes/</guid><description>&lt;p&gt;Congratulations to the Kubernetes community on another &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;value-packed release&lt;/a&gt;. Stateful applications and federated clusters are two reasons why I’m so excited about 1.3. Kubernetes support for stateful apps such as Cassandra, Kafka, and MongoDB is critical. Important services rely on databases, key value stores, message queues, and more. Additionally, relying on one data center or container cluster simply won’t work as apps grow to serve millions of users around the world. 
Cluster federation allows users to deploy apps across multiple clusters and data centers for scale and resiliency.&lt;/p&gt;</description></item><item><title> Thousand Instances of Cassandra using Kubernetes Pet Set</title><link>https://andygol-k8s.netlify.app/blog/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set/</link><pubDate>Wed, 13 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.3&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="running-the-greek-pet-monster-races"&gt;Running The Greek Pet Monster Races&lt;/h2&gt;
&lt;p&gt;For the &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;Kubernetes 1.3 launch&lt;/a&gt;, we wanted to put the new Pet Set through its paces. By testing a thousand instances of &lt;a href="https://cassandra.apache.org/"&gt;Cassandra&lt;/a&gt;, we could make sure that Kubernetes 1.3 was production ready. Read on for how we adapted Cassandra to Kubernetes, and had our largest deployment ever.&lt;/p&gt;
&lt;p&gt;It’s fairly straightforward to use containers with basic stateful applications today. Using a persistent volume, you can mount a disk in a pod, and ensure that your data lasts beyond the life of your pod. However, with deployments of distributed stateful applications, things can become more tricky. With Kubernetes 1.3, the new &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/petset/"&gt;Pet Set&lt;/a&gt; component makes everything much easier. To test this new feature out at scale, we decided to host the Greek Pet Monster Races! We raced Centaurs and other Ancient Greek Monsters over hundreds of thousands of races across multiple availability zones.&lt;/p&gt;</description></item><item><title> Autoscaling in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/07/autoscaling-in-kubernetes/</link><pubDate>Tue, 12 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/autoscaling-in-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.3&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Customers using Kubernetes respond to end user requests quickly and ship software faster than ever before. But what happens when you build a service that is even more popular than you planned for, and run out of compute? In &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;Kubernetes 1.3&lt;/a&gt;, we are proud to announce that we have a solution: autoscaling. On &lt;a href="https://cloud.google.com/compute/"&gt;Google Compute Engine&lt;/a&gt; (GCE) and &lt;a href="https://cloud.google.com/container-engine/"&gt;Google Container Engine&lt;/a&gt; (GKE) (and coming soon on &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt;), Kubernetes will automatically scale up your cluster as soon as you need it, and scale it back down to save you money when you don’t.&lt;/p&gt;</description></item><item><title> Kubernetes in Rancher: the further evolution</title><link>https://andygol-k8s.netlify.app/blog/2016/07/kubernetes-in-rancher-further-evolution/</link><pubDate>Tue, 12 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/kubernetes-in-rancher-further-evolution/</guid><description>&lt;p&gt;Kubernetes was the first external orchestration platform supported by &lt;a href="http://rancher.com/kubernetes"&gt;Rancher&lt;/a&gt;, and since its release, it has become one of the most widely used among our users, and continues to grow rapidly in adoption. As Kubernetes has evolved, so has Rancher in terms of adapting new Kubernetes features. We’ve started with supporting Kubernetes version 1.1, then switched to 1.2 as soon as it was released, and now we’re working on supporting the exciting new features in 1.3. 
I’d like to walk you through the features that we’ve been adding support for during each of these stages.&lt;/p&gt;</description></item><item><title> Five Days of Kubernetes 1.3</title><link>https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3/</link><pubDate>Mon, 11 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3/</guid><description>&lt;p&gt;Last week we &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;released Kubernetes 1.3&lt;/a&gt;, two years from the day when the first Kubernetes commit was pushed to GitHub. Now 30,000+ commits later from over 800 contributors, this 1.3 release is jam-packed with updates driven by feedback from users.&lt;/p&gt;
&lt;p&gt;While many new improvements and features have been added in the latest release, we’ll be highlighting several that stand out. Follow along and read these in-depth posts on what’s new and how we continue to make Kubernetes the best way to manage containers at scale. &lt;/p&gt;</description></item><item><title> Minikube: easily run Kubernetes locally</title><link>https://andygol-k8s.netlify.app/blog/2016/07/minikube-easily-run-kubernetes-locally/</link><pubDate>Mon, 11 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/minikube-easily-run-kubernetes-locally/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This is the first post in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.3&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;While Kubernetes is one of the best tools for managing containerized applications available today, and has been production-ready for over a year, it has been missing a great local development platform.&lt;/p&gt;
&lt;p&gt;For the past several months, several of us from the Kubernetes community have been working to fix this in the &lt;a href="http://github.com/kubernetes/minikube"&gt;Minikube&lt;/a&gt; repository on GitHub. Our goal is to build an easy-to-use, high-fidelity Kubernetes distribution that can be run locally on Mac, Linux and Windows workstations and laptops with a single command.&lt;/p&gt;</description></item><item><title> rktnetes brings rkt container engine to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/</link><pubDate>Mon, 11 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this post is part of a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/07/five-days-of-kubernetes-1-3"&gt;series of in-depth articles&lt;/a&gt; on what's new in Kubernetes 1.3&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As part of &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;Kubernetes 1.3&lt;/a&gt;, we’re happy to report that our work to bring interchangeable container engines to Kubernetes is bearing early fruit. What we affectionately call “rktnetes” is included in the version 1.3 Kubernetes release, and is ready for development use. rktnetes integrates support for &lt;a href="https://coreos.com/rkt/"&gt;CoreOS rkt&lt;/a&gt; into Kubernetes as the container runtime on cluster nodes, and is now part of the mainline Kubernetes source code. Today it’s easier than ever for developers and ops pros with container portability in mind to try out running Kubernetes with a different container engine.&lt;/p&gt;</description></item><item><title> Updates to Performance and Scalability in Kubernetes 1.3 -- 2,000 node 60,000 pod clusters</title><link>https://andygol-k8s.netlify.app/blog/2016/07/update-on-kubernetes-for-windows-server-containers/</link><pubDate>Thu, 07 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/update-on-kubernetes-for-windows-server-containers/</guid><description>&lt;p&gt;We are proud to announce that with the &lt;a href="https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/"&gt;release of version 1.3&lt;/a&gt;, Kubernetes now supports 2000-node clusters with even better end-to-end pod startup time. The latency of our API calls is within our one-second &lt;a href="https://en.wikipedia.org/wiki/Service_level_objective"&gt;Service Level Objective (SLO)&lt;/a&gt;, and most of them are even an order of magnitude better than that. 
It is possible to run clusters larger than 2,000 nodes, but performance may be degraded and may not meet our strict SLO.&lt;/p&gt;</description></item><item><title>Kubernetes 1.3: Bridging Cloud Native and Enterprise Workloads</title><link>https://andygol-k8s.netlify.app/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/</link><pubDate>Wed, 06 Jul 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/</guid><description>&lt;p&gt;Nearly two years ago, when we officially kicked off the Kubernetes project, we wanted to simplify distributed systems management and make the required core technology available to everyone. The community’s response to this effort has blown us away. Today, thousands of customers, partners and developers are running clusters in production using Kubernetes and have joined the cloud native revolution. &lt;/p&gt;
&lt;p&gt;Thanks to the help of over 800 contributors, we are pleased to announce today the availability of Kubernetes 1.3, our most robust and feature-rich release to date.&lt;/p&gt;</description></item><item><title> Container Design Patterns</title><link>https://andygol-k8s.netlify.app/blog/2016/06/container-design-patterns/</link><pubDate>Tue, 21 Jun 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/06/container-design-patterns/</guid><description>&lt;p&gt;Kubernetes automates deployment, operations, and scaling of applications, but our goals in the Kubernetes project extend beyond system management -- we want Kubernetes to help developers, too. Kubernetes should make it easy for them to write the distributed applications and services that run in cloud and datacenter environments. To enable this, Kubernetes defines not only an API for administrators to perform management actions, but also an API for containerized applications to interact with the management platform.&lt;/p&gt;</description></item><item><title> The Illustrated Children's Guide to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/06/illustrated-childrens-guide-to-kubernetes/</link><pubDate>Thu, 09 Jun 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/06/illustrated-childrens-guide-to-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;Kubernetes is an open source project with a growing community. We love seeing the ways that our community innovates inside and on top of Kubernetes. Deis is an excellent example of company who understands the strategic impact of strong container orchestration. They contribute directly to the project; in associated subprojects; and, delightfully, with a creative endeavor to help our user community understand more about what Kubernetes is. Want to contribute to Kubernetes? 
One way is to get involved &lt;a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted"&gt;here&lt;/a&gt; and help us with code. But please don’t consider that the only way to contribute. This little adventure that Deis takes us on is an example of how open source isn’t only code. &lt;/em&gt;&lt;/p&gt;</description></item><item><title> Bringing End-to-End Kubernetes Testing to Azure (Part 1)</title><link>https://andygol-k8s.netlify.app/blog/2016/06/bringing-end-to-end-testing-to-azure/</link><pubDate>Mon, 06 Jun 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/06/bringing-end-to-end-testing-to-azure/</guid><description>&lt;p&gt;At &lt;a href="http://www.appformix.com/"&gt;AppFormix&lt;/a&gt;, continuous integration testing is part of our culture. We see many benefits to running end-to-end tests regularly, including minimizing regressions and ensuring our software works together as a whole. To ensure a high quality experience for our customers, we require the ability to run end-to-end testing not just for our application, but for the entire orchestration stack. Our customers are adopting Kubernetes as their container orchestration technology of choice, and they demand choice when it comes to where their containers execute, from private infrastructure to public providers, including Azure. After several weeks of work, we are pleased to announce we are contributing a nightly, continuous integration job that executes e2e tests on the Azure platform. After running the e2e tests each night for only a few weeks, we have already found and fixed two issues in Kubernetes. 
We hope our contribution of an e2e job will help the community maintain support for the Azure platform as Kubernetes evolves.&lt;/p&gt;</description></item><item><title> Hypernetes: Bringing Security and Multi-tenancy to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes/</link><pubDate>Tue, 24 May 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes/</guid><description>&lt;p&gt;While many developers and security professionals are comfortable with Linux containers as an effective boundary, many users need a stronger degree of isolation, particularly for those running in a multi-tenant environment. Sadly, today, those users are forced to run their containers inside virtual machines, even one VM per container.&lt;/p&gt;
&lt;p&gt;Unfortunately, this results in the loss of many of the benefits of a cloud-native deployment: slow startup time of VMs; a memory tax for every container; low utilization resulting in wasted resources.&lt;/p&gt;</description></item><item><title> CoreOS Fest 2016: CoreOS and Kubernetes Community meet in Berlin (&amp; San Francisco)</title><link>https://andygol-k8s.netlify.app/blog/2016/05/coreosfest2016-kubernetes-community/</link><pubDate>Tue, 03 May 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/05/coreosfest2016-kubernetes-community/</guid><description>&lt;p&gt;&lt;a href="https://coreos.com/fest/"&gt;CoreOS Fest 2016&lt;/a&gt; will bring together the container and open source distributed systems community, including many thought leaders in the Kubernetes space. It is the second annual CoreOS community conference, held for the first time in Berlin on May 9th and 10th. CoreOS believes Kubernetes is the container orchestration component to deliver GIFEE (Google’s Infrastructure for Everyone Else).&lt;/p&gt;
&lt;p&gt;At this year’s CoreOS Fest, there are tracks dedicated to Kubernetes where you’ll hear about various topics ranging from Kubernetes performance and scalability, continuous delivery and Kubernetes, rktnetes, stackanetes and more. In addition, there will be a variety of talks, from introductory workshops to deep-dives into all things containers and related software.&lt;/p&gt;</description></item><item><title> Introducing the Kubernetes OpenStack Special Interest Group</title><link>https://andygol-k8s.netlify.app/blog/2016/04/introducing-kubernetes-openstack-sig/</link><pubDate>Fri, 22 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/introducing-kubernetes-openstack-sig/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This week we’re featuring &lt;a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)"&gt;Kubernetes Special Interest Groups&lt;/a&gt;; Today’s post is by the SIG-OpenStack team about their mission to facilitate ideas between the OpenStack and Kubernetes communities. &lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The community around the Kubernetes project includes a number of Special Interest Groups (SIGs) for the purposes of facilitating focused discussions relating to important subtopics between interested contributors. Today we would like to highlight the &lt;a href="https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack"&gt;Kubernetes OpenStack SIG&lt;/a&gt; focused on the interaction between &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; and &lt;a href="http://www.openstack.org/"&gt;OpenStack&lt;/a&gt;, the Open Source cloud computing platform.&lt;/p&gt;</description></item><item><title> SIG-UI: the place for building awesome user interfaces for Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/04/building-awesome-user-interfaces-for-kubernetes/</link><pubDate>Wed, 20 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/building-awesome-user-interfaces-for-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This week we’re featuring &lt;a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)"&gt;Kubernetes Special Interest Groups&lt;/a&gt;; Today’s post is by the SIG-UI team describing their mission and showing the cool projects they work on.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes has been handling production workloads for a long time now (see &lt;a href="http://kubernetes.io/#talkToUs"&gt;case studies&lt;/a&gt;). It runs on public, private and hybrid clouds as well as bare metal. It can handle all types of workloads (web serving, batch and mixed) and enable &lt;a href="https://www.youtube.com/watch?v=9C6YeyyUUmI"&gt;zero-downtime rolling updates&lt;/a&gt;. It abstracts service discovery, load balancing and storage so that applications running on Kubernetes aren’t restricted to a specific cloud provider or environment.&lt;/p&gt;</description></item><item><title> SIG-ClusterOps: Promote operability and interoperability of Kubernetes clusters</title><link>https://andygol-k8s.netlify.app/blog/2016/04/sig-clusterops-promote-operability-and-interoperability-of-k8s-clusters/</link><pubDate>Tue, 19 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/sig-clusterops-promote-operability-and-interoperability-of-k8s-clusters/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This week we’re featuring &lt;a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)"&gt;Kubernetes Special Interest Groups&lt;/a&gt;; Today’s post is by the SIG-ClusterOps team whose mission is to promote operability and interoperability of Kubernetes clusters -- to listen, help &amp;amp; escalate.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;We think Kubernetes is an awesome way to run applications at scale! Unfortunately, there's a bootstrapping problem: we need good ways to build secure &amp;amp; reliable scale environments around Kubernetes. While some parts of the platform administration leverage the platform (cool!), there are fundamental operational topics that need to be addressed and questions (like upgrade and conformance) that need to be answered.&lt;/p&gt;</description></item><item><title> SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3</title><link>https://andygol-k8s.netlify.app/blog/2016/04/kubernetes-network-policy-apis/</link><pubDate>Mon, 18 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/kubernetes-network-policy-apis/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This week we’re featuring &lt;a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)"&gt;Kubernetes Special Interest Groups&lt;/a&gt;; Today’s post is by the Network-SIG team describing network policy APIs coming in 1.3 - policies for security, isolation and multi-tenancy.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://kubernetes.slack.com/messages/sig-network/"&gt;Kubernetes network SIG&lt;/a&gt; has been meeting regularly since late last year to work on bringing network policy to Kubernetes and we’re starting to see the results of this effort.&lt;/p&gt;
&lt;p&gt;One problem many users have is that the open access network policy of Kubernetes is not suitable for applications that need more precise control over the traffic that accesses a pod or service. Today, this could be a multi-tier application where traffic is only allowed from a tier’s neighbor. But as new Cloud Native applications are built by composing microservices, the ability to control traffic as it flows among these services becomes even more critical.&lt;/p&gt;</description></item><item><title> How to deploy secure, auditable, and reproducible Kubernetes clusters on AWS</title><link>https://andygol-k8s.netlify.app/blog/2016/04/kubernetes-on-aws_15/</link><pubDate>Fri, 15 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/kubernetes-on-aws_15/</guid><description>&lt;p&gt;At CoreOS, we're all about deploying Kubernetes in production at scale. Today we are excited to share a tool that makes deploying Kubernetes on Amazon Web Services (AWS) a breeze. Kube-aws is a tool for deploying auditable and reproducible Kubernetes clusters to AWS, currently used by CoreOS to spin up production clusters.&lt;/p&gt;
&lt;p&gt;Today you might be putting the Kubernetes components together in a more manual way. With this helpful tool, Kubernetes is delivered in a streamlined package to save time, minimize interdependencies and quickly create production-ready deployments.&lt;/p&gt;</description></item><item><title> Adding Support for Kubernetes in Rancher</title><link>https://andygol-k8s.netlify.app/blog/2016/04/adding-support-for-kubernetes-in-rancher/</link><pubDate>Fri, 08 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/adding-support-for-kubernetes-in-rancher/</guid><description>&lt;p&gt;Over the last year, we’ve seen a tremendous increase in the number of companies looking to leverage containers in their software development and IT organizations. To achieve this, organizations have been looking at how to build a centralized container management capability that will make it simple for users to get access to containers, while centralizing visibility and control with the IT organization. In 2014 we started the open-source Rancher project to address this by building a management platform for containers.&lt;/p&gt;</description></item><item><title> Container survey results - March 2016</title><link>https://andygol-k8s.netlify.app/blog/2016/04/container-survey-results-march-2016/</link><pubDate>Fri, 08 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/container-survey-results-march-2016/</guid><description>&lt;p&gt;Last month, we had our third installment of our container survey and today we look at the results.  (raw data is available &lt;a href="https://docs.google.com/spreadsheets/d/13356w6I2xxKnmjblFSsKGVANZGGlX2yFMzb8eOIe2Oo/edit?usp=sharing"&gt;here&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;Looking at the headline number, “how many people are using containers,” we see a decrease in the number of people currently using containers, from 89% to 80%. Obviously, we can’t be certain of the cause of this decrease, but it’s my belief that the previous number was artificially high due to sampling bias. We did a better job reaching a broader set of participants in the March survey, so the March numbers more accurately represent what is going on in the world.&lt;/p&gt;</description></item><item><title> Configuration management with Containers</title><link>https://andygol-k8s.netlify.app/blog/2016/04/configuration-management-with-containers/</link><pubDate>Mon, 04 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/configuration-management-with-containers/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this is our seventh post in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12"&gt;series of in-depth posts&lt;/a&gt; on what's new in Kubernetes 1.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A &lt;a href="http://12factor.net/config"&gt;good practice&lt;/a&gt; when writing applications is to separate application code from configuration. We want to enable application authors to easily employ this pattern within Kubernetes. While the Secrets API allows separating information like credentials and keys from an application, no object existed in the past for ordinary, non-secret configuration. In &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md/#v120"&gt;Kubernetes 1.2&lt;/a&gt;, we've added a new API resource called ConfigMap to handle this type of configuration data.&lt;/p&gt;</description></item><item><title> Using Deployment objects with Kubernetes 1.2</title><link>https://andygol-k8s.netlify.app/blog/2016/04/using-deployment-objects-with/</link><pubDate>Fri, 01 Apr 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/04/using-deployment-objects-with/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this is the seventh post in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12"&gt;series of in-depth posts&lt;/a&gt; on what's new in Kubernetes 1.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes has made deploying and managing applications very straightforward, with most actions a single API or command line away, including rolling out new applications, canary testing and upgrading. So why would we need Deployments?&lt;/p&gt;
&lt;p&gt;Deployment objects automate the deployment and rolling update of applications. Compared with kubectl rolling-update, the Deployment API is much faster, is declarative, is implemented server-side and has more features (for example, you can roll back to any previous revision even after the rolling update is done).&lt;/p&gt;</description></item><item><title> Kubernetes 1.2 and simplifying advanced networking with Ingress</title><link>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-1-2-and-simplifying-advanced-networking-with-ingress/</link><pubDate>Thu, 31 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-1-2-and-simplifying-advanced-networking-with-ingress/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; This is the sixth post in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12"&gt;series of in-depth posts&lt;/a&gt; on what's new in Kubernetes 1.2&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Ingress is currently in beta and under active development.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In Kubernetes, Services and Pods have IPs only routable by the cluster network, by default. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. In Kubernetes 1.2, we’ve made improvements to the Ingress object, to simplify allowing inbound connections to reach the cluster services. It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting and lots more.&lt;/p&gt;</description></item><item><title> Using Spark and Zeppelin to process big data on Kubernetes 1.2</title><link>https://andygol-k8s.netlify.app/blog/2016/03/using-spark-and-zeppelin-to-process-big-data-on-kubernetes/</link><pubDate>Wed, 30 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/using-spark-and-zeppelin-to-process-big-data-on-kubernetes/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this is the fifth post in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12"&gt;series of in-depth posts&lt;/a&gt; on what's new in Kubernetes 1.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;With big data usage growing exponentially, many Kubernetes customers have expressed interest in running &lt;a href="http://spark.apache.org/"&gt;Apache Spark&lt;/a&gt; on their Kubernetes clusters to take advantage of the portability and flexibility of containers. Fortunately, with Kubernetes 1.2, you can now have a platform that runs Spark and Zeppelin, and your other applications side-by-side.&lt;/p&gt;
&lt;h3 id="why-zeppelin"&gt;Why Zeppelin? &lt;/h3&gt;
&lt;p&gt;&lt;a href="https://zeppelin.incubator.apache.org/"&gt;Apache Zeppelin&lt;/a&gt; is a web-based notebook that enables interactive data analytics. As one of its backends, Zeppelin connects to Spark. Zeppelin allows the user to interact with the Spark cluster in a simple way, without having to deal with a command-line interpreter or a Scala compiler.&lt;/p&gt;</description></item><item><title> AppFormix: Helping Enterprises Operationalize Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2016/03/appformix-helping-enterprises/</link><pubDate>Tue, 29 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/appformix-helping-enterprises/</guid><description>&lt;p&gt;If you run clouds for a living, you’re well aware that the tools we've used since the client/server era for monitoring, analytics and optimization just don’t cut it when applied to the agile, dynamic and rapidly changing world of modern cloud infrastructure.&lt;/p&gt;
&lt;p&gt;And, if you’re an operator of enterprise clouds, you know that implementing containers and container cluster management is all about giving your application developers a more agile, responsive and efficient cloud infrastructure. Applications are being rewritten and new ones developed – not for legacy environments where relatively static workloads are the norm, but for dynamic, scalable cloud environments. The dynamic nature of cloud native applications coupled with the shift to continuous deployment means that the demands placed by the applications on the infrastructure are constantly changing.&lt;/p&gt;</description></item><item><title> Building highly available applications using Kubernetes new multi-zone clusters (a.k.a. 'Ubernetes Lite')</title><link>https://andygol-k8s.netlify.app/blog/2016/03/building-highly-available-applications-using-kubernetes-new-multi-zone-clusters-aka-ubernetes-lite/</link><pubDate>Tue, 29 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/building-highly-available-applications-using-kubernetes-new-multi-zone-clusters-aka-ubernetes-lite/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this is the third post in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12"&gt;series of in-depth posts&lt;/a&gt; on what's new in Kubernetes 1.2&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="introduction"&gt;Introduction &lt;/h3&gt;
&lt;p&gt;One of the most frequently-requested features for Kubernetes is the ability to run applications across multiple zones. And with good reason — developers need to deploy applications across multiple domains, to improve availability in the event of a single-zone outage.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/blog/2016/03/kubernetes-1-2-even-more-performance-upgrades-plus-easier-application-deployment-and-management"&gt;Kubernetes 1.2&lt;/a&gt;, released two weeks ago, adds support for running a single cluster across multiple failure zones (GCP calls them simply &amp;quot;zones,&amp;quot; Amazon calls them &amp;quot;availability zones,&amp;quot; here we'll refer to them as &amp;quot;zones&amp;quot;). This is the first step in a broader effort to allow federating multiple Kubernetes clusters together (sometimes referred to by the affectionate nickname &amp;quot;&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md"&gt;Ubernetes&lt;/a&gt;&amp;quot;). This initial version (referred to as &amp;quot;Ubernetes Lite&amp;quot;) offers improved application availability by spreading applications across multiple zones within a single cloud provider.&lt;/p&gt;</description></item><item><title> 1000 nodes and beyond: updates to Kubernetes performance and scalability in 1.2</title><link>https://andygol-k8s.netlify.app/blog/2016/03/1000-nodes-and-beyond-updates-to-kubernetes-performance-and-scalability-in-12/</link><pubDate>Mon, 28 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/1000-nodes-and-beyond-updates-to-kubernetes-performance-and-scalability-in-12/</guid><description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor's note:&lt;/strong&gt; this is the first in a &lt;a href="https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12"&gt;series of in-depth posts&lt;/a&gt; on what's new in Kubernetes 1.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;We're proud to announce that with the &lt;a href="https://kubernetes.io/blog/2016/03/kubernetes-1-2-even-more-performance-upgrades-plus-easier-application-deployment-and-management"&gt;release of 1.2&lt;/a&gt;, Kubernetes now supports 1000-node clusters, with a reduction of 80% in 99th percentile tail latency for most API operations. This means in just six months, we've increased our overall scale by 10 times while maintaining a great user experience — the 99th percentile pod startup times are less than 3 seconds, and 99th percentile latency of most API operations is tens of milliseconds (the exception being LIST operations, which take hundreds of milliseconds in very large clusters).&lt;/p&gt;</description></item><item><title> Five Days of Kubernetes 1.2</title><link>https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12/</link><pubDate>Mon, 28 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/five-days-of-kubernetes-12/</guid><description>&lt;p&gt;The Kubernetes project has had some huge milestones over the past few weeks. We released &lt;a href="https://kubernetes.io/blog/2016/03/kubernetes-1-2-even-more-performance-upgrades-plus-easier-application-deployment-and-management"&gt;Kubernetes 1.2&lt;/a&gt;, had our &lt;a href="https://kubecon.io/"&gt;first conference in Europe&lt;/a&gt;, and were accepted into the &lt;a href="https://cncf.io/"&gt;Cloud Native Computing Foundation&lt;/a&gt;. While we catch our breath, we would like to take a moment to highlight some of the great work contributed by the community since our last milestone, just four months ago.&lt;/p&gt;
&lt;p&gt;Our mission is to make building distributed systems easy and accessible for all. While Kubernetes 1.2 has LOTS of new features, there are a few that really highlight the strides we’re making towards that goal. Over the course of the next week, we’ll be publishing a series of in-depth posts covering what’s new, so come back daily this week to read about the new features that continue to make Kubernetes the easiest way to run containers at scale. Thanks, and stay tuned!&lt;/p&gt;</description></item><item><title> How container metadata changes your point of view</title><link>https://andygol-k8s.netlify.app/blog/2016/03/how-container-metadata-changes-your-point-of-view/</link><pubDate>Mon, 28 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/how-container-metadata-changes-your-point-of-view/</guid><description>&lt;p&gt;Sure, metadata is a fancy word. It actually means “data that describes other data.” While that definition isn’t all that helpful, it turns out metadata itself is especially helpful in container environments. 
When you have any complex system, the availability of metadata helps you sort and process the variety of data coming out of that system, so that you can get to the heart of an issue with less headache.&lt;/p&gt;</description></item><item><title> Scaling neural network image classification using Kubernetes with TensorFlow Serving</title><link>https://andygol-k8s.netlify.app/blog/2016/03/scaling-neural-network-image-classification-using-kubernetes-with-tensorflow-serving/</link><pubDate>Wed, 23 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/scaling-neural-network-image-classification-using-kubernetes-with-tensorflow-serving/</guid><description>&lt;p&gt;In 2011, Google developed an internal deep learning infrastructure called &lt;a href="http://research.google.com/pubs/pub40565.html"&gt;DistBelief&lt;/a&gt;, which allowed Googlers to build ever larger &lt;a href="https://en.wikipedia.org/wiki/Artificial_neural_network"&gt;neural networks&lt;/a&gt; and scale training to thousands of cores. Late last year, Google &lt;a href="http://googleresearch.blogspot.com/2015/11/tensorflow-googles-latest-machine_9.html"&gt;introduced TensorFlow&lt;/a&gt;, its second-generation machine learning system. TensorFlow is general, flexible, portable, easy-to-use and, most importantly, developed with the open source community.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://4.bp.blogspot.com/-PDRpnk823Ps/VvHJH3vIyKI/AAAAAAAAA4g/adIWZPfa2W4ObtIaWNbhpl8UyIwk9R7xg/s1600/tensorflowserving-4.png"&gt;&lt;img src="https://4.bp.blogspot.com/-PDRpnk823Ps/VvHJH3vIyKI/AAAAAAAAA4g/adIWZPfa2W4ObtIaWNbhpl8UyIwk9R7xg/s320/tensorflowserving-4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The process of introducing machine learning into your product involves creating and training a model on your dataset, and then pushing the model to production to serve requests. In this blog post, we’ll show you how you can use &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; with &lt;a href="http://googleresearch.blogspot.com/2016/02/running-your-models-in-production-with.html"&gt;TensorFlow Serving&lt;/a&gt;, a high performance, open source serving system for machine learning models, to meet the scaling demands of your application.&lt;/p&gt;</description></item><item><title>Kubernetes 1.2: Even more performance upgrades, plus easier application deployment and management</title><link>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-1-2-even-more-performance-upgrades-plus-easier-application-deployment-and-management/</link><pubDate>Thu, 17 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-1-2-even-more-performance-upgrades-plus-easier-application-deployment-and-management/</guid><description>&lt;p&gt;Today the Kubernetes project released Kubernetes 1.2. This release represents significant improvements for large organizations building distributed systems. Now with over 680 unique contributors to the project, this release represents our largest yet.&lt;/p&gt;
&lt;p&gt;From the beginning, our mission has been to make building distributed systems easy and accessible for all. With the Kubernetes 1.2 release we’ve made strides towards our goal by increasing scale, decreasing latency and overall simplifying the way applications are deployed and managed. Now, developers at organizations of all sizes can build production scale apps more easily than ever before. &lt;/p&gt;</description></item><item><title> ElasticBox introduces ElasticKube to help manage Kubernetes within the enterprise</title><link>https://andygol-k8s.netlify.app/blog/2016/03/elasticbox-introduces-elastickube-to/</link><pubDate>Fri, 11 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/elasticbox-introduces-elastickube-to/</guid><description>&lt;p&gt;Today’s guest post is brought to you by Brannan Matherson, from ElasticBox, who’ll discuss a new open source project to help standardize container deployment and management in enterprise environments. This highlights the advantages of authentication and user management for containerized applications&lt;/p&gt;
&lt;p&gt;I’m delighted to share some exciting work that we’re doing at ElasticBox to contribute to the open source community regarding the rapidly changing advancements in container technologies. Our team is kicking off a new initiative called &lt;a href="http://elastickube.com/"&gt;ElasticKube&lt;/a&gt; to help solve challenging container management scenarios within the enterprise. This project is a native container management experience that is specific to Kubernetes and leverages automation to provision clusters for containerized applications based on the latest release of Kubernetes 1.2.&lt;/p&gt;</description></item><item><title> Kubernetes in the Enterprise with Fujitsu’s Cloud Load Control</title><link>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-in-enterprise-with-fujitsus/</link><pubDate>Fri, 11 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-in-enterprise-with-fujitsus/</guid><description>&lt;p&gt;Earlier this year, Fujitsu released its Kubernetes-based offering Fujitsu ServerView &lt;a href="http://www.fujitsu.com/software/clc/"&gt;Cloud Load Control&lt;/a&gt; (CLC) to the public. Some might be surprised since Fujitsu’s reputation is not necessarily related to software development, but rather to hardware manufacturing and IT services. As a long-time member of the Linux Foundation and founding member of the Open Container Initiative and the Cloud Native Computing Foundation, Fujitsu not only builds software, but is committed to open source software, and contributes to several projects, including Kubernetes.
But we don’t just believe in Kubernetes as an open source project; we also chose it as the core of our offering because it provides the best balance of feature set, resource requirements and complexity to run distributed applications at scale.&lt;/p&gt;</description></item><item><title> Kubernetes Community Meeting Notes - 20160225</title><link>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-community-meeting-notes/</link><pubDate>Tue, 01 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/kubernetes-community-meeting-notes/</guid><description>&lt;h5 id="february-25th-redspread-demo-1-2-update-and-planning-1-3-newbie-introductions-sig-networking-and-a-shout-out-to-coreos-blog-post"&gt;February 25th - Redspread demo, 1.2 update and planning 1.3, newbie introductions, SIG-networking and a shout out to CoreOS blog post.&lt;/h5&gt;
&lt;p&gt;The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.&lt;/p&gt;
&lt;p&gt;Note taker: [Ilan Rabinovich]&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Quick call out for sharing presentations/slides [JBeda]&lt;/li&gt;
&lt;li&gt;Demo (10 min):&lt;a href="https://redspread.com/"&gt; Redspread&lt;/a&gt; [Mackenzie Burnett, Dan Gillespie]&lt;/li&gt;
&lt;li&gt;1.2 Release Watch [T.J. Goltermann]
&lt;ul&gt;
&lt;li&gt;currently about 80 issues in the queue that need to be addressed before branching.
&lt;ul&gt;
&lt;li&gt;currently looks like March 7th may slip to later in the week, but up in the air until flaky tests are resolved.&lt;/li&gt;
&lt;li&gt;non-1.2 changes may be delayed in review/merging until 1.2 stabilization work completes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;1.3 release planning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Newbie Introductions&lt;/li&gt;
&lt;li&gt;SIG Reports -
&lt;ul&gt;
&lt;li&gt;Networking [Tim Hockin]&lt;/li&gt;
&lt;li&gt;Scale [Bob Wise]&lt;/li&gt;
&lt;li&gt;meeting last Friday went very well. Discussed charter AND a working deployment
&lt;ul&gt;
&lt;li&gt;moved meeting to Thursdays @ 1 (so in 3 hours!)&lt;/li&gt;
&lt;li&gt;Rob is posting a Cluster Ops announce on TheNewStack to recruit more members&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;GSoC participation -- no application submitted. [Sarah Novotny]&lt;/li&gt;
&lt;li&gt;Brian Grant has offered to review PRs that need attention for 1.2&lt;/li&gt;
&lt;li&gt;Dynamic Provisioning
&lt;ul&gt;
&lt;li&gt;Currently overlaps a bit with the ubernetes work&lt;/li&gt;
&lt;li&gt;PR in progress.&lt;/li&gt;
&lt;li&gt;Should work in 1.2, but being targeted more in 1.3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Next meeting is March 3rd.
&lt;ul&gt;
&lt;li&gt;Demo from Weave on Kubernetes Anywhere&lt;/li&gt;
&lt;li&gt;Another Kubernetes 1.2 update&lt;/li&gt;
&lt;li&gt;Update from the CNCF&lt;/li&gt;
&lt;li&gt;1.3 commitments from Google&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;No meeting on March 10th.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To get involved in the Kubernetes community consider joining our &lt;a href="http://slack.k8s.io/"&gt;Slack channel&lt;/a&gt;, taking a look at the &lt;a href="https://github.com/kubernetes/"&gt;Kubernetes project&lt;/a&gt; on GitHub, or joining the &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-dev"&gt;Kubernetes-dev Google group&lt;/a&gt;. If you're really excited, you can do all of the above and join us for the next community conversation — March 3rd, 2016. Please add yourself or a topic you want to know about to the &lt;a href="https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit"&gt;agenda&lt;/a&gt; and get a calendar invitation by joining &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-community-video-chat"&gt;this group&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> State of the Container World, February 2016</title><link>https://andygol-k8s.netlify.app/blog/2016/03/state-of-container-world-february-2016/</link><pubDate>Tue, 01 Mar 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/03/state-of-container-world-february-2016/</guid><description>&lt;p&gt;Hello, and welcome to the second installment of the Kubernetes state of the container world survey. At the beginning of February we sent out a survey about people’s usage of containers, and wrote about the &lt;a href="https://kubernetes.io/blog/2016/02/state-of-container-world-january-2016"&gt;results from the January survey&lt;/a&gt;. 
Here we are again. As before, while we tried to reach a large and representative set of respondents, this survey was publicized across the social media accounts of myself and others on the Kubernetes team, so I expect some pro-container and Kubernetes bias in the data. We continue to try to get as large an audience as possible, and in that vein, please go and take the &lt;a href="https://docs.google.com/a/google.com/forms/d/1hlOEyjuN4roIbcAAUbDhs7xjNMoM8r-hqtixf6zUsp4/viewform"&gt;March survey&lt;/a&gt; and share it with your friends and followers everywhere! Without further ado, the numbers...&lt;/p&gt;</description></item><item><title>KubeCon EU 2016: Kubernetes Community in London</title><link>https://andygol-k8s.netlify.app/blog/2016/02/kubecon-eu-2016-kubernetes-community-in/</link><pubDate>Wed, 24 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/kubecon-eu-2016-kubernetes-community-in/</guid><description>&lt;p&gt;KubeCon EU 2016 is the inaugural European Kubernetes community conference that follows on the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for &lt;a href="https://andygol-k8s.netlify.app/"&gt;Kubernetes&lt;/a&gt; enthusiasts, production users and the surrounding ecosystem.&lt;/p&gt;
&lt;p&gt;Come join us in London and hang out with hundreds from the Kubernetes community and experience a wide variety of deep technical expert talks and use cases.&lt;/p&gt;
&lt;p&gt;Don’t miss these great speaker sessions at the conference:&lt;/p&gt;</description></item><item><title> Kubernetes Community Meeting Notes - 20160218</title><link>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes_23/</link><pubDate>Tue, 23 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes_23/</guid><description>&lt;h5 id="february-18th-kmachine-demo-clusterops-sig-formed-new-k8s-io-website-preview-1-2-update-and-planning-1-3"&gt;February 18th - kmachine demo, clusterops SIG formed, new k8s.io website preview, 1.2 update and planning 1.3&lt;/h5&gt;
&lt;p&gt;The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Note taker: Rob Hirschfeld&lt;/li&gt;
&lt;li&gt;Demo (10 min): &lt;a href="https://github.com/skippbox/kmachine"&gt;kmachine&lt;/a&gt; [Sebastien Goasguen]
&lt;ul&gt;
&lt;li&gt;started :01 intro video&lt;/li&gt;
&lt;li&gt;looking to create mirror of Docker tools for Kubernetes (similar to machine, compose, etc)&lt;/li&gt;
&lt;li&gt;kmachine (forked from Docker Machine, so has the same endpoints)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Use Case (10 min): started at :15&lt;/li&gt;
&lt;li&gt;SIG Report starter
&lt;ul&gt;
&lt;li&gt;Cluster Ops launch meeting Friday (&lt;a href="https://docs.google.com/document/d/1IhN5v6MjcAUrvLd9dAWtKcGWBWSaRU8DNyPiof3gYMY/edit"&gt;doc&lt;/a&gt;). [Rob Hirschfeld]&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Time Zone Discussion [:22]
&lt;ul&gt;
&lt;li&gt;This timezone does not work for Asia.&lt;/li&gt;
&lt;li&gt;Considering rotation - once per month&lt;/li&gt;
&lt;li&gt;Likely 5 or 6 PT&lt;/li&gt;
&lt;li&gt;Rob suggested moving the regular meeting up a little&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;k8s.io website preview [John Mulhausen] [:27]
&lt;ul&gt;
&lt;li&gt;Using GitHub for docs; you can fork and do a pull request against the site&lt;/li&gt;
&lt;li&gt;will be its own kubernetes organization BUT not in the code repo&lt;/li&gt;
&lt;li&gt;Google will offer a &amp;quot;doc bounty&amp;quot; where you can get GCP credits for working on docs&lt;/li&gt;
&lt;li&gt;Uses Jekyll to generate the site (e.g. the ToC)&lt;/li&gt;
&lt;li&gt;Principle will be 100% GitHub Pages; no script trickery or plugins, just fork/clone, edit, and push&lt;/li&gt;
&lt;li&gt;Hope to launch at Kubecon EU&lt;/li&gt;
&lt;li&gt;Home Page Only Preview: &lt;a href="http://kub.unitedcreations.xyz"&gt;http://kub.unitedcreations.xyz&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;1.2 Release Watch [T.J. Goltermann] [:38]&lt;/li&gt;
&lt;li&gt;1.3 Planning update [T.J. Goltermann]&lt;/li&gt;
&lt;li&gt;GSoC participation -- deadline 2/19 [Sarah Novotny]&lt;/li&gt;
&lt;li&gt;March 10th meeting? [Sarah Novotny]&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To get involved in the Kubernetes community consider joining our &lt;a href="http://slack.k8s.io/"&gt;Slack channel&lt;/a&gt;, taking a look at the &lt;a href="https://github.com/kubernetes/"&gt;Kubernetes project&lt;/a&gt; on GitHub, or joining the &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-dev"&gt;Kubernetes-dev Google group&lt;/a&gt;. If you're really excited, you can do all of the above and join us for the next community conversation — February 25th, 2016. Please add yourself or a topic you want to know about to the &lt;a href="https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit"&gt;agenda&lt;/a&gt; and get a calendar invitation by joining &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-community-video-chat"&gt;this group&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> Kubernetes Community Meeting Notes - 20160211</title><link>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes-20160211/</link><pubDate>Tue, 16 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes-20160211/</guid><description>&lt;h5 id="february-11th-pangaea-demo-aws-sig-formed-release-automation-and-documentation-team-introductions-1-2-update-and-planning-1-3"&gt;February 11th - Pangaea Demo, #AWS SIG formed, release automation and documentation team introductions. 1.2 update and planning 1.3.&lt;/h5&gt;
&lt;p&gt;The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.&lt;/p&gt;
&lt;p&gt;Note taker: Rob Hirschfeld&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Demo: &lt;a href="http://hasura.io/blog/pangaea-point-and-shoot-kubernetes/"&gt;Pangaea&lt;/a&gt; [Shahidh K Muhammed, Tanmai Gopal, and Akshaya Acharya]&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Microservices packages&lt;/li&gt;
&lt;li&gt;Focused on Application developers&lt;/li&gt;
&lt;li&gt;Demo at recording +4 minutes&lt;/li&gt;
&lt;li&gt;Single-node Kubernetes cluster — runs locally using a Vagrant CoreOS image&lt;/li&gt;
&lt;li&gt;Single user/system cluster allows use of DNS integration (unlike Compose)&lt;/li&gt;
&lt;li&gt;Can run locally or in cloud&lt;/li&gt;
&lt;li&gt;&lt;em&gt;SIG Report:&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;Release Automation and an introduction to David McMahon&lt;/li&gt;
&lt;li&gt;Docs and k8s website redesign proposal and an introduction to John Mulhausen&lt;/li&gt;
&lt;li&gt;This will allow the system to build docs correctly from GitHub w/ minimal effort&lt;/li&gt;
&lt;li&gt;Will be check-in triggered&lt;/li&gt;
&lt;li&gt;Getting website style updates&lt;/li&gt;
&lt;li&gt;Want to keep authoring really light&lt;/li&gt;
&lt;li&gt;There will be some automated checks&lt;/li&gt;
&lt;li&gt;Next week: preview of the new website during the community meeting&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[@goltermann] 1.2 Release Watch (time +34 minutes)&lt;/p&gt;</description></item><item><title> ShareThis: Kubernetes In Production</title><link>https://andygol-k8s.netlify.app/blog/2016/02/sharethis-kubernetes-in-production/</link><pubDate>Thu, 11 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/sharethis-kubernetes-in-production/</guid><description>&lt;p&gt;ShareThis has grown tremendously since its first days as a tiny widget that allowed you to share to your favorite social services. It now serves over 4.5 million domains per month, helping publishers create a more authentic digital experience.&lt;/p&gt;
&lt;p&gt;Fast growth came with a price. We leveraged technical debt to scale fast and to grow our products, particularly when it came to infrastructure. As our company expanded, the infrastructure costs mounted as well - both in terms of inefficient utilization and in terms of people costs. About 1 year ago, it became clear something needed to change.&lt;/p&gt;</description></item><item><title> Kubernetes Community Meeting Notes - 20160204</title><link>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes/</link><pubDate>Tue, 09 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes/</guid><description>&lt;h4 id="february-4th-rkt-demo-congratulations-on-the-1-0-coreos-ebay-puts-k8s-on-openstack-and-considers-openstack-on-k8s-sigs-and-flaky-test-surge-makes-progress"&gt;February 4th - rkt demo (congratulations on the 1.0, CoreOS!), eBay puts k8s on Openstack and considers Openstack on k8s, SIGs, and flaky test surge makes progress.&lt;/h4&gt;
&lt;p&gt;The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via a videoconference. Here are the notes from the latest meeting.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Note taker: Rob Hirschfeld&lt;/li&gt;
&lt;li&gt;Demo (20 min): CoreOS rkt + Kubernetes [Shaya Potter]
&lt;ul&gt;
&lt;li&gt;expect to see integrations w/ rkt &amp;amp; k8s in the coming months (&amp;quot;rkt-netes&amp;quot;). not integrated into the v1.2 release.&lt;/li&gt;
&lt;li&gt;Shaya gave a demo (8 minutes into meeting for video reference)
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;CLI of rkt shown spinning up containers&lt;/p&gt;</description></item><item><title> Kubernetes Community Meeting Notes - 20160128</title><link>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes-20160128/</link><pubDate>Tue, 02 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/kubernetes-community-meeting-notes-20160128/</guid><description>&lt;h5 id="january-28-1-2-release-update-deis-demo-flaky-test-surge-and-sigs"&gt;January 28 - 1.2 release update, Deis demo, flaky test surge and SIGs&lt;/h5&gt;
&lt;p&gt;The Kubernetes contributing community meets once a week to discuss the project's status via a videoconference. Here are the notes from the latest meeting.&lt;/p&gt;
&lt;p&gt;Note taker: Erin Boyd&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Discuss process around code freeze/code slush (TJ Goltermann)
&lt;ul&gt;
&lt;li&gt;Code wind down was happening during holiday (for 1.1)&lt;/li&gt;
&lt;li&gt;Releasing ~ every 3 months&lt;/li&gt;
&lt;li&gt;Build stability is still missing&lt;/li&gt;
&lt;li&gt;Issue on Transparency (Bob Wise)
&lt;ul&gt;
&lt;li&gt;Email from Sarah for call to contribute (Monday, January 25)
&lt;ul&gt;
&lt;li&gt;Concern over publishing dates / understanding release schedule /etc…&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Release targeted for early March
&lt;ul&gt;
&lt;li&gt;Where does one find information on the release schedule with the committed features?
&lt;ul&gt;
&lt;li&gt;For 1.2 - Send email / Slack to TJ&lt;/li&gt;
&lt;li&gt;For 1.3 - Working on better process to communicate to the community
&lt;ul&gt;
&lt;li&gt;Twitter&lt;/li&gt;
&lt;li&gt;Wiki&lt;/li&gt;
&lt;li&gt;GitHub Milestones&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;How to better communicate issues discovered in the SIG
&lt;ul&gt;
&lt;li&gt;AI: People need to email the kubernetes-dev@ mailing list with a summary of findings&lt;/li&gt;
&lt;li&gt;AI: Each SIG needs a note taker&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Release planning vs Release testing
&lt;ul&gt;
&lt;li&gt;Testing SIG lead Ike McCreery
&lt;ul&gt;
&lt;li&gt;Also part of the testing infrastructure team at Google&lt;/li&gt;
&lt;li&gt;Community being able to integrate into the testing framework
&lt;ul&gt;
&lt;li&gt;Federated testing&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Release Manager = David McMahon
&lt;ul&gt;
&lt;li&gt;Request to introduce him to the community meeting&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Demo: Jason Hansen, Deis
&lt;ul&gt;
&lt;li&gt;Implemented simple REST API to interact with the platform&lt;/li&gt;
&lt;li&gt;Deis managed application (deployed via)
&lt;ul&gt;
&lt;li&gt;Source -&amp;gt; image&lt;/li&gt;
&lt;li&gt;Rolling upgrades -&amp;gt; Rollbacks&lt;/li&gt;
&lt;li&gt;AI: Jason will provide the slides &amp;amp; notes
&lt;ul&gt;
&lt;li&gt;Slides: &lt;a href="https://speakerdeck.com/slack/kubernetes-community-meeting-demo-january-28th-2016"&gt;https://speakerdeck.com/slack/kubernetes-community-meeting-demo-january-28th-2016&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alpha information: &lt;a href="https://groups.google.com/forum/#!topic/deis-users/Qhia4DD2pv4"&gt;https://groups.google.com/forum/#!topic/deis-users/Qhia4DD2pv4&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Adding an administrative component (dashboard)&lt;/li&gt;
&lt;li&gt;Helm wraps kubectl&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Testing
&lt;ul&gt;
&lt;li&gt;Called for community interaction&lt;/li&gt;
&lt;li&gt;Need to understand friction points from community
&lt;ul&gt;
&lt;li&gt;Better documentation&lt;/li&gt;
&lt;li&gt;Better communication on how things &amp;quot;should work&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Internally, Google is having daily calls to resolve test flakes&lt;/li&gt;
&lt;li&gt;Started up SIG testing meetings (Tuesday at 10:30 am PT)&lt;/li&gt;
&lt;li&gt;Everyone wants it, but no one wants to pony up the time to make it happen
&lt;ul&gt;
&lt;li&gt;Google is dedicating headcount to it (3-4 people, possibly more)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://groups.google.com/forum/?hl=en#%21forum/kubernetes-sig-testing"&gt;https://groups.google.com/forum/?hl=en#!forum/kubernetes-sig-testing&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Best practices for labeling
&lt;ul&gt;
&lt;li&gt;Are there tools built on top of these to leverage?&lt;/li&gt;
&lt;li&gt;AI: Generate artifact for labels and what they do (Create doc)
&lt;ul&gt;
&lt;li&gt;Help Wanted Label - good for new community members&lt;/li&gt;
&lt;li&gt;Classify labels for team and area
&lt;ul&gt;
&lt;li&gt;User experience, test infrastructure, etc..&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;SIG Config (not about deployment)
&lt;ul&gt;
&lt;li&gt;Any interest in Ansible, etc. type tools?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;SIG Scale meeting (Bob Wise &amp;amp; Tim StClair)
&lt;ul&gt;
&lt;li&gt;Tests related to performance SLA get relaxed in order to get the tests to pass
&lt;ul&gt;
&lt;li&gt;exposed process issues&lt;/li&gt;
&lt;li&gt;AI: outline of a proposal for a notice policy if things are being changed that are critical to the system (Bob Wise/Samsung)
&lt;ul&gt;
&lt;li&gt;Create a best-practices set of constants in a well-documented place&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To get involved in the Kubernetes community consider joining our &lt;a href="http://slack.k8s.io/"&gt;Slack channel&lt;/a&gt;, taking a look at the &lt;a href="https://github.com/kubernetes/"&gt;Kubernetes project&lt;/a&gt; on GitHub, or joining the &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-dev"&gt;Kubernetes-dev Google group&lt;/a&gt;. If you’re really excited, you can do all of the above and join us for the next community conversation — February 4th, 2016. Please add yourself or a topic you want to know about to the &lt;a href="https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit"&gt;agenda&lt;/a&gt; and get a calendar invitation by joining &lt;a href="https://groups.google.com/forum/#!forum/kubernetes-community-video-chat"&gt;this group&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> State of the Container World, January 2016</title><link>https://andygol-k8s.netlify.app/blog/2016/02/state-of-container-world-january-2016/</link><pubDate>Mon, 01 Feb 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/02/state-of-container-world-january-2016/</guid><description>&lt;p&gt;At the start of the new year, we sent out a survey to gauge the state of the container world. We’re ready to send the &lt;a href="https://docs.google.com/forms/d/13yxxBqb5igUhwrrnDExLzZPjREiCnSs-AH-y4SSZ-5c/viewform"&gt;February edition&lt;/a&gt;, but before we do, let’s take a look at the January data from the 119 responses (thank you for participating!).&lt;/p&gt;
&lt;p&gt;A note about these numbers: First, you may notice that the numbers don’t add up to 100%; the choices were not exclusive in most cases, and so the percentages given are the percentage of all respondents who selected a particular choice. Second, while we attempted to reach a broad cross-section of the cloud community, the survey was initially sent out via Twitter to followers of &lt;a href="https://twitter.com/brendandburns"&gt;@brendandburns&lt;/a&gt;, &lt;a href="https://twitter.com/kelseyhightower"&gt;@kelseyhightower&lt;/a&gt;, &lt;a href="https://twitter.com/sarahnovotny"&gt;@sarahnovotny&lt;/a&gt;, &lt;a href="https://twitter.com/juliaferraioli"&gt;@juliaferraioli&lt;/a&gt;, and &lt;a href="https://twitter.com/thagomizer_rb"&gt;@thagomizer_rb&lt;/a&gt;, so the audience is likely not a perfect cross-section. We’re working to broaden our sample size (have I mentioned our February survey? &lt;a href="https://docs.google.com/forms/d/13yxxBqb5igUhwrrnDExLzZPjREiCnSs-AH-y4SSZ-5c/viewform"&gt;Come take it now&lt;/a&gt;).&lt;/p&gt;</description></item><item><title> Kubernetes Community Meeting Notes - 20160114</title><link>https://andygol-k8s.netlify.app/blog/2016/01/Kubernetes-Community-Meeting-Notes/</link><pubDate>Thu, 28 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/01/Kubernetes-Community-Meeting-Notes/</guid><description>&lt;h5 id="january-14-rackn-demo-testing-woes-and-kubecon-eu-cfp"&gt;January 14 - RackN demo, testing woes, and KubeCon EU CFP.&lt;/h5&gt;
&lt;hr&gt;
&lt;h2 id="note-taker-joe-beda"&gt;Note taker: Joe Beda&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Demonstration: Automated Deploy on Metal, AWS and others w/ Digital Rebar, Rob Hirschfeld and Greg Althaus from RackN&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Greg Althaus. CTO. Digital Rebar is the product. Bare metal provisioning tool.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Detect hardware, bring it up, configure raid, OS and get workload deployed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Been working on Kubernetes workload.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Seeing trend to start in cloud and then move back to bare metal.&lt;/p&gt;</description></item><item><title>Kubernetes Community Meeting Notes - 20160121</title><link>https://andygol-k8s.netlify.app/blog/2016/01/kubernetes-community-meeting-notes_28/</link><pubDate>Thu, 28 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/01/kubernetes-community-meeting-notes_28/</guid><description>&lt;h4 id="january-21-configuration-federation-and-testing-oh-my"&gt;January 21 - Configuration, Federation and Testing, oh my. &lt;/h4&gt;
&lt;p&gt;Note taker: Rob Hirschfeld&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use Case (10 min): &lt;a href="https://docs.google.com/a/google.com/presentation/d/1MEI97efplr3f-GDX1GcWGfkEuGKKV-4niu27kHOeMLk/edit?usp=sharing_eid&amp;ts=56a114f8"&gt;SFDC Paul Brown&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SIG Report - SIG-config and the story of &lt;a href="https://github.com/kubernetes/kubernetes/pull/18215"&gt;#18215&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Application config IN K8s not deployment of K8s&lt;/li&gt;
&lt;li&gt;Topic has been reuse of configuration, specifically parameterization (aka templates). Needs:
&lt;ul&gt;
&lt;li&gt;include scoping (cluster namespace)&lt;/li&gt;
&lt;li&gt;slight customization (naming changes, but not major config)&lt;/li&gt;
&lt;li&gt;multiple positions on how to do this, including allowing external or simple extensions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;PetSet creates instances w/stable namespace&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Workflow proposal&lt;/p&gt;</description></item><item><title> Why Kubernetes doesn’t use libnetwork</title><link>https://andygol-k8s.netlify.app/blog/2016/01/why-kubernetes-doesnt-use-libnetwork/</link><pubDate>Thu, 14 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/01/why-kubernetes-doesnt-use-libnetwork/</guid><description>&lt;p&gt;Kubernetes has had a very basic form of network plugins since before version 1.0 was released — around the same time as Docker's &lt;a href="https://github.com/docker/libnetwork"&gt;libnetwork&lt;/a&gt; and Container Network Model (&lt;a href="https://github.com/docker/libnetwork/blob/master/docs/design.md"&gt;CNM&lt;/a&gt;) was introduced. Unlike libnetwork, the Kubernetes plugin system still retains its &amp;quot;alpha&amp;quot; designation. Now that Docker's network plugin support is released and supported, an obvious question we get is why Kubernetes has not adopted it yet. After all, vendors will almost certainly be writing plugins for Docker — we would all be better off using the same drivers, right?&lt;/p&gt;</description></item><item><title> Simple leader election with Kubernetes and Docker</title><link>https://andygol-k8s.netlify.app/blog/2016/01/simple-leader-election-with-kubernetes/</link><pubDate>Mon, 11 Jan 2016 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2016/01/simple-leader-election-with-kubernetes/</guid><description>&lt;h4 id="overview"&gt;Overview&lt;/h4&gt;
&lt;p&gt;Kubernetes simplifies the deployment and operational management of services running on clusters. However, it also simplifies the development of these services. In this post we'll see how you can use Kubernetes to easily perform leader election in your distributed application. Distributed applications usually replicate the tasks of a service for reliability and scalability, but often it is necessary to designate one of the replicas as the leader who is responsible for coordination among all of the replicas.&lt;/p&gt;</description></item><item><title> Creating a Raspberry Pi cluster running Kubernetes, the installation (Part 2)</title><link>https://andygol-k8s.netlify.app/blog/2015/12/creating-raspberry-pi-cluster-running/</link><pubDate>Tue, 22 Dec 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/12/creating-raspberry-pi-cluster-running/</guid><description>&lt;p&gt;At Devoxx Belgium and Devoxx Morocco, &lt;a href="https://twitter.com/saturnism"&gt;Ray Tsang&lt;/a&gt; and I (&lt;a href="https://twitter.com/ArjenWassink"&gt;Arjen Wassink&lt;/a&gt;) showed a Raspberry Pi cluster we built at Quintor running HypriotOS, Docker and Kubernetes. While we received many compliments on the talk, the most common question was about how to build a Pi cluster themselves! We’ll be doing just that, in two parts. The &lt;a href="https://kubernetes.io/blog/2015/11/creating-a-Raspberry-Pi-cluster-running-Kubernetes-the-shopping-list-Part-1"&gt;first part covered the shopping list for the cluster&lt;/a&gt;, and this second one will show you how to get Kubernetes up and running.&lt;/p&gt;</description></item><item><title> Managing Kubernetes Pods, Services and Replication Controllers with Puppet</title><link>https://andygol-k8s.netlify.app/blog/2015/12/managing-kubernetes-pods-services-and-replication-controllers-with-puppet/</link><pubDate>Thu, 17 Dec 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/12/managing-kubernetes-pods-services-and-replication-controllers-with-puppet/</guid><description>&lt;p&gt;People familiar with &lt;a href="https://github.com/puppetlabs/puppet"&gt;Puppet&lt;/a&gt; might have used it for managing files, packages and users on host computers. But Puppet is first and foremost a configuration management tool, and config management is a much broader discipline than just managing host-level resources. A good definition of configuration management is that it aims to solve four related problems: identification, control, status accounting and verification and audit. These problems exist in the operation of any complex system, and with the new &lt;a href="https://forge.puppetlabs.com/garethr/kubernetes"&gt;Puppet Kubernetes module&lt;/a&gt; we’re starting to look at how we can solve those problems for Kubernetes.&lt;/p&gt;
Recently we released a hosted Scope service into an &lt;a href="http://blog.weave.works/2015/10/08/weave-the-fastest-path-to-docker-on-amazon-ec2-container-service/"&gt;Early Access Program&lt;/a&gt;. Today, we want to walk you through how we initially prototyped that service, and how we ultimately chose and deployed Kubernetes as our platform.&lt;/p&gt;
&lt;h5 id="a-cloud-native-architecture"&gt;A cloud-native architecture &lt;/h5&gt;
&lt;p&gt;Scope already had a clean internal line of demarcation between data collection and user interaction, so it was straightforward to split the application on that line, distribute probes to customers, and host frontends in the cloud. We built out a small set of microservices in the &lt;a href="http://12factor.net/"&gt;12-factor model&lt;/a&gt;, which includes:&lt;/p&gt;</description></item><item><title> Creating a Raspberry Pi cluster running Kubernetes, the shopping list (Part 1)</title><link>https://andygol-k8s.netlify.app/blog/2015/11/creating-a-raspberry-pi-cluster-running-kubernetes-the-shopping-list-part-1/</link><pubDate>Wed, 25 Nov 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/11/creating-a-raspberry-pi-cluster-running-kubernetes-the-shopping-list-part-1/</guid><description>&lt;p&gt;At Devoxx Belgium and Devoxx Morocco, Ray Tsang and I showed a Raspberry Pi cluster we built at Quintor running HypriotOS, Docker and Kubernetes. For those who did not see the talks, you can check out &lt;a href="https://www.youtube.com/watch?v=AAS5Mq9EktI"&gt;an abbreviated version of the demo&lt;/a&gt; or the full talk by Ray on &lt;a href="https://www.youtube.com/watch?v=kT1vmK0r184"&gt;developing and deploying Java-based microservices&lt;/a&gt; in Kubernetes. While we received many compliments on the talk, the most common question was about how to build a Pi cluster themselves! We’ll be doing just that, in two parts. This first post will cover the shopping list for the cluster, and the second will show you how to get it up and running . . 
.&lt;/p&gt;</description></item><item><title> Monitoring Kubernetes with Sysdig</title><link>https://andygol-k8s.netlify.app/blog/2015/11/monitoring-kubernetes-with-sysdig/</link><pubDate>Thu, 19 Nov 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/11/monitoring-kubernetes-with-sysdig/</guid><description>&lt;p&gt;&lt;em&gt;Today we’re sharing a guest post by Chris Crane from Sysdig about their monitoring integration into Kubernetes. &lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes offers a full environment to write scalable and service-based applications. It takes care of things like container grouping, discovery, load balancing and healing so you don’t have to worry about them. The design is elegant, scalable and the APIs are a pleasure to use.&lt;/p&gt;
&lt;p&gt;And like any new infrastructure platform, if you want to run Kubernetes in production, you’re going to want to be able to monitor and troubleshoot it. We’re big fans of Kubernetes here at Sysdig, and, well: we’re here to help.&lt;/p&gt;</description></item><item><title> One million requests per second: Dependable and dynamic distributed systems at scale</title><link>https://andygol-k8s.netlify.app/blog/2015/11/one-million-requests-per-second-dependable-and-dynamic-distributed-systems-at-scale/</link><pubDate>Wed, 11 Nov 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/11/one-million-requests-per-second-dependable-and-dynamic-distributed-systems-at-scale/</guid><description>&lt;p&gt;Recently, I’ve gotten in the habit of telling people that building a reliable service isn’t that hard. If you give me two Compute Engine virtual machines, a Cloud Load balancer, supervisord and nginx, I can create you a static web service that will serve a static web page, effectively forever.&lt;/p&gt;
&lt;p&gt;The real challenge is building agile AND reliable services. In the new world of software development it's trivial to spin up enormous numbers of machines and push software to them. Developing a successful product must &lt;em&gt;also&lt;/em&gt; include the ability to respond to changes in a predictable way, to handle upgrades elegantly and to minimize downtime for users. Missing any one of these elements results in an &lt;em&gt;unsuccessful&lt;/em&gt; product that's flaky and unreliable. I remember a time, not that long ago, when it was common for websites to be unavailable for an hour around midnight each day as a safety window for software upgrades. My bank still does this. It’s really not cool.&lt;/p&gt;</description></item><item><title>Kubernetes 1.1 Performance upgrades, improved tooling and a growing community</title><link>https://andygol-k8s.netlify.app/blog/2015/11/kubernetes-1-1-performance-upgrades-improved-tooling-and-a-growing-community/</link><pubDate>Mon, 09 Nov 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/11/kubernetes-1-1-performance-upgrades-improved-tooling-and-a-growing-community/</guid><description>&lt;p&gt;Since the Kubernetes 1.0 release in July, we’ve seen tremendous adoption by companies building distributed systems to manage their container clusters. We’ve also been humbled by the rapid growth of the community who help make Kubernetes better every day. We have seen commercial offerings such as Tectonic by CoreOS and RedHat Atomic Host emerge to deliver deployment and support of Kubernetes. 
And a growing ecosystem has added Kubernetes support, including tool vendors such as Sysdig and Project Calico.&lt;/p&gt;</description></item><item><title>Some things you didn’t know about kubectl</title><link>https://andygol-k8s.netlify.app/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/</link><pubDate>Wed, 28 Oct 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/</guid><description>&lt;p&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/"&gt;kubectl&lt;/a&gt; is the command line tool for interacting with Kubernetes clusters. Many people use it every day to deploy their container workloads into production clusters. But there’s more to kubectl than just &lt;code&gt;kubectl create -f&lt;/code&gt; or &lt;code&gt;kubectl rolling-update&lt;/code&gt;. kubectl is a veritable multi-tool of container orchestration and management. Below we describe some of the features of kubectl that you may not have seen.&lt;/p&gt;
&lt;h2 id="run-interactive-commands"&gt;Run interactive commands&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;kubectl run&lt;/code&gt; has been in kubectl since the 1.0 release, but recently we added the ability to run interactive containers in your cluster. That means that an interactive shell in your Kubernetes cluster is as close as:&lt;/p&gt;</description></item><item><title> Kubernetes Performance Measurements and Roadmap</title><link>https://andygol-k8s.netlify.app/blog/2015/09/kubernetes-performance-measurements-and/</link><pubDate>Thu, 10 Sep 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/09/kubernetes-performance-measurements-and/</guid><description>&lt;p&gt;No matter how flexible and reliable your container orchestration system is, ultimately, you have some work to be done, and you want it completed quickly. For big problems, a common answer is to just throw more machines at the problem. After all, more compute = faster, right?&lt;/p&gt;
&lt;p&gt;Interestingly, adding more nodes is a little like the &lt;a href="http://www.nasa.gov/mission_pages/station/expeditions/expedition30/tryanny.html"&gt;tyranny of the rocket equation&lt;/a&gt; - in some systems, adding more machines can actually make your processing slower. However, unlike the rocket equation, we can do better. Kubernetes v1.0 supports clusters with up to 100 nodes. However, we have a goal to 10x the number of nodes we will support by the end of 2015. This blog post will cover where we are and how we intend to achieve the next level of performance.&lt;/p&gt;</description></item><item><title> Using Kubernetes Namespaces to Manage Environments</title><link>https://andygol-k8s.netlify.app/blog/2015/08/using-kubernetes-namespaces-to-manage/</link><pubDate>Fri, 28 Aug 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/08/using-kubernetes-namespaces-to-manage/</guid><description>&lt;h5 id="one-of-the-advantages-that-kubernetes-provides-is-the-ability-to-manage-various-environments-easier-and-better-than-traditional-deployment-strategies-for-most-nontrivial-applications-you-have-test-staging-and-production-environments-you-can-spin-up-a-separate-cluster-of-resources-such-as-vms-with-the-same-configuration-in-staging-and-production-but-that-can-be-costly-and-managing-the-differences-between-the-environments-can-be-difficult"&gt;One of the advantages that Kubernetes provides is the ability to manage various environments more easily and effectively than traditional deployment strategies. For most nontrivial applications, you have test, staging, and production environments. You can spin up a separate cluster of resources, such as VMs, with the same configuration in staging and production, but that can be costly and managing the differences between the environments can be difficult.&lt;/h5&gt;
&lt;h5 id="kubernetes-includes-a-cool-feature-called-namespaces-4-which-enable-you-to-manage-different-environments-within-the-same-cluster-for-example-you-can-have-different-test-and-staging-environments-in-the-same-cluster-of-machines-potentially-saving-resources-you-can-also-run-different-types-of-server-batch-or-other-jobs-in-the-same-cluster-without-worrying-about-them-affecting-each-other"&gt;Kubernetes includes a cool feature called [namespaces][4], which enable you to manage different environments within the same cluster. For example, you can have different test and staging environments in the same cluster of machines, potentially saving resources. You can also run different types of server, batch, or other jobs in the same cluster without worrying about them affecting each other.&lt;/h5&gt;
&lt;h3 id="the-default-namespace"&gt;The Default Namespace&lt;/h3&gt;
&lt;p&gt;Specifying the namespace is optional in Kubernetes because by default Kubernetes uses the &amp;quot;default&amp;quot; namespace. If you've just created a cluster, you can check that the default namespace exists using this command:&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - July 31 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/08/Weekly-Kubernetes-Community-Hangout/</link><pubDate>Tue, 04 Aug 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/08/Weekly-Kubernetes-Community-Hangout/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Here are the notes from today's meeting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Private Registry Demo - Muhammed&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Run docker-registry as an RC/Pod/Service&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run a proxy on every node&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access as localhost:5000&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Discussion:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Should we back it by GCS or S3 when possible?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run real registry backed by $object_store on each node&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DNS instead of localhost?&lt;/p&gt;</description></item><item><title> The Growing Kubernetes Ecosystem</title><link>https://andygol-k8s.netlify.app/blog/2015/07/the-growing-kubernetes-ecosystem/</link><pubDate>Fri, 24 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/the-growing-kubernetes-ecosystem/</guid><description>&lt;p&gt;Over the past year, we’ve seen fantastic momentum in the Kubernetes project, culminating with the release of &lt;a href="https://tectonic.com/"&gt;Kubernetes v1&lt;/a&gt; earlier this week. We’ve also witnessed the ecosystem around Kubernetes blossom, and wanted to draw attention to some of the cooler offerings we’ve seen.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://lh6.googleusercontent.com/Y6MY5k_Eq6CddNzfRrRo14kLuJwe1KYtJq_7KcIGy1bRf65KwoX1uAuCBwEL0P_FGSomZPQZ-hs7CG8Vze7qDKsISZrLEyRZkm5OSHngjjXfCItCiMXI3FtnD9iyDvYurd5sRXQ" alt=""&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.hds.com/corporate/press-analyst-center/press-releases/2015/gl150721.html"&gt;CloudBees&lt;/a&gt; and the Jenkins community have created a Kubernetes plugin, allowing Jenkins slaves to be built as Docker images and run in Docker hosts managed by Kubernetes, either on the Google Cloud Platform or on a more local Kubernetes instance. These elastic slaves are then brought online as Jenkins schedules jobs for them and destroyed after their builds are complete, ensuring masters have steady access to clean workspaces and minimizing builds’ resource footprint.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - July 17 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/07/weekly-kubernetes-community-hangout_23/</link><pubDate>Thu, 23 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/weekly-kubernetes-community-hangout_23/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Here are the notes from today's meeting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Eric Paris: replacing salt with ansible (if we want)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In contrib, there is a provisioning tool written in ansible&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The goal in the rewrite was to eliminate as much of the cloud provider stuff as possible&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The salt setup does a bunch of setup in scripts and then the environment is setup with salt&lt;/p&gt;</description></item><item><title> Strong, Simple SSL for Kubernetes Services</title><link>https://andygol-k8s.netlify.app/blog/2015/07/strong-simple-ssl-for-kubernetes/</link><pubDate>Tue, 14 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/strong-simple-ssl-for-kubernetes/</guid><description>&lt;p&gt;Hi, I’m Evan Brown &lt;a href="http://twitter.com/evandbrown"&gt;(@evandbrown&lt;/a&gt;) and I work on the solutions architecture team for Google Cloud Platform. I recently wrote an &lt;a href="https://cloud.google.com/solutions/automated-build-images-with-jenkins-kubernetes"&gt;article&lt;/a&gt; and &lt;a href="https://github.com/GoogleCloudPlatform/kube-jenkins-imager"&gt;tutorial&lt;/a&gt; about using Jenkins on Kubernetes to automate the Docker and GCE image build process. Today I’m going to discuss how I used Kubernetes services and secrets to add SSL to the Jenkins web UI. After reading this, you’ll be able to add SSL termination (and HTTP-&amp;gt;HTTPS redirects + basic auth) to your public HTTP Kubernetes services.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - July 10 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/07/Weekly-Kubernetes-Community-Hangout/</link><pubDate>Mon, 13 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/Weekly-Kubernetes-Community-Hangout/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Here are the notes from today's meeting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Eric Paris: replacing salt with ansible (if we want)
&lt;ul&gt;
&lt;li&gt;In contrib, there is a provisioning tool written in ansible&lt;/li&gt;
&lt;li&gt;The goal in the rewrite was to eliminate as much of the cloud provider stuff as possible&lt;/li&gt;
&lt;li&gt;The salt setup does a bunch of setup in scripts and then the environment is set up with salt
&lt;ul&gt;
&lt;li&gt;This means that things like generating certs is done differently on GCE/AWS/Vagrant&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For ansible, everything must be done within ansible&lt;/li&gt;
&lt;li&gt;Background on ansible
&lt;ul&gt;
&lt;li&gt;Does not have clients&lt;/li&gt;
&lt;li&gt;Provisioner sshes into the machine and runs scripts on it&lt;/li&gt;
&lt;li&gt;You define what you want your cluster to look like, run the script, and it sets up everything at once&lt;/li&gt;
&lt;li&gt;If you make one change in a config file, ansible re-runs everything (which isn’t always desirable)&lt;/li&gt;
&lt;li&gt;Uses a jinja2 template&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create machines with minimal software, then use ansible to get that machine into a runnable state
&lt;ul&gt;
&lt;li&gt;Sets up all of the add-ons&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Eliminates the provisioner shell scripts&lt;/li&gt;
&lt;li&gt;Full cluster setup currently takes about 6 minutes
&lt;ul&gt;
&lt;li&gt;CentOS with some packages&lt;/li&gt;
&lt;li&gt;Redeploy to the cluster takes 25 seconds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Questions for Eric
&lt;ul&gt;
&lt;li&gt;Where does the provider-specific configuration go?
&lt;ul&gt;
&lt;li&gt;The only network setup that the ansible config does is flannel; you can turn it off&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What about init vs. systemd?
&lt;ul&gt;
&lt;li&gt;Should be able to support in the code w/o any trouble (not yet implemented)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Discussion
&lt;ul&gt;
&lt;li&gt;Why not push the setup work into containers or kubernetes config?
&lt;ul&gt;
&lt;li&gt;To bootstrap a cluster drop a kubelet and a manifest&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Running a kubelet and configuring the network should be the only things required. We can cut a machine image that is preconfigured minus the data package (certs, etc)
&lt;ul&gt;
&lt;li&gt;The ansible scripts install kubelet &amp;amp; docker if they aren’t already installed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Each OS (RedHat, Debian, Ubuntu) could have a different image. We could view this as part of the build process instead of the install process.&lt;/li&gt;
&lt;li&gt;There needs to be a solution for bare metal as well.&lt;/li&gt;
&lt;li&gt;In favor of the overall goal -- reducing the special configuration in the salt configuration&lt;/li&gt;
&lt;li&gt;Everything except the kubelet should run inside a container (eventually the kubelet should as well)
&lt;ul&gt;
&lt;li&gt;Running in a container doesn’t cut down on the complexity that we currently have&lt;/li&gt;
&lt;li&gt;But it does more clearly define the interface about what the code expects&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;These tools (Chef, Puppet, Ansible) conflate binary distribution with configuration
&lt;ul&gt;
&lt;li&gt;Containers more clearly separate these problems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The mesos deployment is not completely automated yet, and it is completely different: kubelets get put on top of an existing mesos cluster
&lt;ul&gt;
&lt;li&gt;The bash scripts allow the mesos devs to see what each cloud provider is doing and re-use the relevant bits&lt;/li&gt;
&lt;li&gt;There was a large reverse engineering curve, but the bash is at least readable as opposed to the salt&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Openstack uses a different deployment as well&lt;/li&gt;
&lt;li&gt;We need a well documented list of steps (e.g. create certs) that are necessary to stand up a cluster
&lt;ul&gt;
&lt;li&gt;This would allow us to compare across cloud providers&lt;/li&gt;
&lt;li&gt;We should reduce the number of steps as much as possible&lt;/li&gt;
&lt;li&gt;Ansible has 241 steps to launch a cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;1.0 Code freeze
&lt;ul&gt;
&lt;li&gt;How are we getting out of code freeze?&lt;/li&gt;
&lt;li&gt;This is a topic for next week, but the preview is that we will move slowly rather than totally opening the firehose
&lt;ul&gt;
&lt;li&gt;We want to clear the backlog as fast as possible while maintaining stability both on HEAD and on the 1.0 branch&lt;/li&gt;
&lt;li&gt;The backlog is almost 300 PRs, but there are also various parallel feature branches that have been developed during the freeze&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Cutting a cherry pick release today (1.0.1) that fixes a few issues&lt;/li&gt;
&lt;li&gt;Next week we will discuss the cadence for patch releases&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Announcing the First Kubernetes Enterprise Training Course</title><link>https://andygol-k8s.netlify.app/blog/2015/07/announcing-first-kubernetes-enterprise/</link><pubDate>Wed, 08 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/announcing-first-kubernetes-enterprise/</guid><description>&lt;p&gt;At Google we rely on Linux application containers to run our core infrastructure. Everything from Search to Gmail runs in containers.  In fact, we like containers so much that even our Google Compute Engine VMs run in containers!  Because containers are critical to our business, we have been working with the community on many of the basic container technologies (from cgroups to Docker’s LibContainer) and even decided to build the next generation of Google’s container scheduling technology, Kubernetes, in the open.&lt;/p&gt;</description></item><item><title> How did the Quake demo from DockerCon Work?</title><link>https://andygol-k8s.netlify.app/blog/2015/07/how-did-quake-demo-from-dockercon-work/</link><pubDate>Thu, 02 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/how-did-quake-demo-from-dockercon-work/</guid><description>&lt;p&gt;Shortly after its release in 2013, Docker became a very popular open source container management tool for Linux. Docker has a rich set of commands to control the execution of a container. Commands such as start, stop, restart, kill, pause, and unpause. However, what is still missing is the ability to Checkpoint and Restore (C/R) a container natively via Docker itself.&lt;/p&gt;
&lt;p&gt;We’ve been actively working with upstream and community developers to add support in Docker for native C/R and hope that checkpoint and restore commands will be introduced in Docker 1.8. As of this writing, it’s possible to C/R a container externally because this functionality was recently merged in libcontainer.&lt;/p&gt;</description></item><item><title> Kubernetes 1.0 Launch Event at OSCON</title><link>https://andygol-k8s.netlify.app/blog/2015/07/kubernetes-10-launch-party-at-oscon/</link><pubDate>Thu, 02 Jul 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/07/kubernetes-10-launch-party-at-oscon/</guid><description>&lt;p&gt;In case you haven't heard, the Kubernetes project team &amp;amp; community have some awesome stuff lined up for our release event at OSCON in a few weeks.&lt;/p&gt;
&lt;p&gt;If you haven't already registered to attend in person or watch the live stream, please do it now! Check out &lt;a href="http://kuberneteslaunch.com/"&gt;kuberneteslaunch.com&lt;/a&gt; for all the details. You can also find out there how to get a free expo pass for OSCON, which you'll need to attend in person.&lt;/p&gt;</description></item><item><title> The Distributed System ToolKit: Patterns for Composite Containers</title><link>https://andygol-k8s.netlify.app/blog/2015/06/the-distributed-system-toolkit-patterns/</link><pubDate>Mon, 29 Jun 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/06/the-distributed-system-toolkit-patterns/</guid><description>&lt;p&gt;Having had the privilege of presenting some ideas from Kubernetes at DockerCon 2015, I thought I would make a blog post to share some of these ideas for those of you who couldn’t be there.&lt;/p&gt;
&lt;p&gt;Over the past two years containers have become an increasingly popular way to package and deploy code. Container images solve many real-world problems with existing packaging and deployment tools, but in addition to these significant benefits, containers offer us an opportunity to fundamentally re-think the way we build distributed applications. Just as service oriented architectures (SOA) encouraged the decomposition of applications into modular, focused services, containers should encourage the further decomposition of these services into closely cooperating modular containers.  By virtue of establishing a boundary, containers enable users to build their services using modular, reusable components, and this in turn leads to services that are more reliable, more scalable and faster to build than applications built from monolithic containers.&lt;/p&gt;</description></item><item><title> Slides: Cluster Management with Kubernetes, talk given at the University of Edinburgh</title><link>https://andygol-k8s.netlify.app/blog/2015/06/slides-cluster-management-with/</link><pubDate>Fri, 26 Jun 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/06/slides-cluster-management-with/</guid><description>&lt;p&gt;On Friday 5 June 2015 I gave a talk called &lt;a href="https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&amp;loop=false&amp;delayms=3000"&gt;Cluster Management with Kubernetes&lt;/a&gt; to a general audience at the University of Edinburgh. The talk includes an example of a music store system with a Kibana front end UI and an Elasticsearch based back end which helps to make concrete concepts like pods, replication controllers and services.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&amp;loop=false&amp;delayms=3000"&gt;Cluster Management with Kubernetes&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> Cluster Level Logging with Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2015/06/cluster-level-logging-with-kubernetes/</link><pubDate>Thu, 11 Jun 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/06/cluster-level-logging-with-kubernetes/</guid><description>&lt;p&gt;A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes cluster level logging services.&lt;/p&gt;
&lt;p&gt;Cluster level logging for Kubernetes allows us to collect logs which persist beyond the lifetime of the pod’s container images or the lifetime of the pod or even cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to &lt;a href="https://cloud.google.com/logging/docs/"&gt;Google Cloud Logging&lt;/a&gt;. This is an option when creating a &lt;a href="https://cloud.google.com/container-engine/"&gt;Google Container Engine&lt;/a&gt; (GKE) cluster, and is enabled by default for the open source &lt;a href="https://cloud.google.com/compute/"&gt;Google Compute Engine&lt;/a&gt; (GCE) Kubernetes distribution. After a cluster has been created you will have a collection of system &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md"&gt;pods&lt;/a&gt; running that support monitoring, logging and DNS resolution for names of Kubernetes services:&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - May 22 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/06/Weekly-Kubernetes-Community-Hangout/</link><pubDate>Tue, 02 Jun 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/06/Weekly-Kubernetes-Community-Hangout/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Discussion / Topics&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Code Freeze&lt;/li&gt;
&lt;li&gt;Upgrades of cluster&lt;/li&gt;
&lt;li&gt;E2E test issues&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Code Freeze process starts EOD 22-May, including&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Code Slush -- draining PRs that are active. If there are issues for v1 to raise, please do so today.&lt;/li&gt;
&lt;li&gt;Community PRs -- plan is to reopen in ~6 weeks.&lt;/li&gt;
&lt;li&gt;Key areas for fixes in v1 -- docs, the experience.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;E2E issues and LGTM process&lt;/p&gt;</description></item><item><title> Kubernetes on OpenStack</title><link>https://andygol-k8s.netlify.app/blog/2015/05/kubernetes-on-openstack/</link><pubDate>Tue, 19 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/kubernetes-on-openstack/</guid><description>&lt;p&gt;&lt;a href="https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s1600/Untitled%2Bdrawing.jpg"&gt;&lt;img src="https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s400/Untitled%2Bdrawing.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Today, the &lt;a href="https://www.openstack.org/foundation/"&gt;OpenStack foundation&lt;/a&gt; made it even easier for you to deploy and manage clusters of Docker containers on OpenStack clouds by including Kubernetes in its &lt;a href="http://apps.openstack.org/"&gt;Community App Catalog&lt;/a&gt;.  At a keynote today at the OpenStack Summit in Vancouver, Mark Collier, COO of the OpenStack Foundation, and Craig Peters,  &lt;a href="https://www.mirantis.com/"&gt;Mirantis&lt;/a&gt; product line manager, demonstrated the Community App Catalog workflow by launching a Kubernetes cluster in a matter of seconds by leveraging the compute, storage, networking and identity systems already present in an OpenStack cloud.&lt;/p&gt;</description></item><item><title> Docker and Kubernetes and AppC</title><link>https://andygol-k8s.netlify.app/blog/2015/05/docker-and-kubernetes-and-appc/</link><pubDate>Mon, 18 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/docker-and-kubernetes-and-appc/</guid><description>&lt;p&gt;Recently we announced the intent in Kubernetes, our open source cluster manager, to support AppC and RKT, an alternative container format that has been driven by CoreOS with input from many companies (including Google).  This announcement has generated a surprising amount of buzz and has been construed as a move from Google to support Appc over Docker.  Many have taken it as a signal that Google is moving away from supporting Docker.  
I would like to take a moment to clarify Google’s position in this.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - May 15 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/05/weekly-kubernetes-community-hangout_18/</link><pubDate>Mon, 18 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/weekly-kubernetes-community-hangout_18/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/7018"&gt;v1 API&lt;/a&gt; - what's in, what's out
&lt;ul&gt;
&lt;li&gt;We're trying to fix critical issues we discover with v1beta3&lt;/li&gt;
&lt;li&gt;Would like to make a number of minor cleanups that will be expensive to do later
&lt;ul&gt;
&lt;li&gt;default the replication controller replica count to 1&lt;/li&gt;
&lt;li&gt;deduplicating security context&lt;/li&gt;
&lt;li&gt;change id field to name&lt;/li&gt;
&lt;li&gt;rename host&lt;/li&gt;
&lt;li&gt;inconsistent times&lt;/li&gt;
&lt;li&gt;typo in container states terminated (termination vs. terminated)&lt;/li&gt;
&lt;li&gt;flatten structure (requested by heavy API user)&lt;/li&gt;
&lt;li&gt;pod templates - could be added after V1, field is not implemented, remove template ref field&lt;/li&gt;
&lt;li&gt;in general remove any fields not implemented (can be added later)&lt;/li&gt;
&lt;li&gt;if we want to change any of the identifier validation rules, should do it now&lt;/li&gt;
&lt;li&gt;recently changed label validation rules to be more precise&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Bigger changes
&lt;ul&gt;
&lt;li&gt;generalized label selectors&lt;/li&gt;
&lt;li&gt;service - change the fields in a way that we can add features in a forward compatible manner if possible&lt;/li&gt;
&lt;li&gt;public IPs - what to do from a security perspective&lt;/li&gt;
&lt;li&gt;Support ACI format - there is an image field - add properties to signify the image, or include it in a string&lt;/li&gt;
&lt;li&gt;inconsistent on object use / cross reference - needs design discussion&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Things to do later
&lt;ul&gt;
&lt;li&gt;volume source cleanup&lt;/li&gt;
&lt;li&gt;multiple API prefixes&lt;/li&gt;
&lt;li&gt;watch changes - watch client is not notified of progress&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A few other proposals
&lt;ul&gt;
&lt;li&gt;swagger spec fixes - ongoing&lt;/li&gt;
&lt;li&gt;additional field selectors - additive, backward compatible&lt;/li&gt;
&lt;li&gt;additional status - additive, backward compatible&lt;/li&gt;
&lt;li&gt;elimination of phase - won't make it for v1&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Service discussion - Public IPs
&lt;ul&gt;
&lt;li&gt;with public IPs as they exist today, we can't go to v1&lt;/li&gt;
&lt;li&gt;Tim has been developing a mitigation if we can't get Justin's overhaul in (but hopefully we will)&lt;/li&gt;
&lt;li&gt;Justin's fix will describe public IPs in a much better way&lt;/li&gt;
&lt;li&gt;The general problem is it's too flexible and you can do things that are scary, the mitigation is to restrict public ip usage to specific use cases -- validated public IPs would be copied to status, which is what kube-proxy would use&lt;/li&gt;
&lt;li&gt;public IPs used for -
&lt;ul&gt;
&lt;li&gt;binding to nodes / node&lt;/li&gt;
&lt;li&gt;request a specific load balancer IP (GCE only)&lt;/li&gt;
&lt;li&gt;emulate multi-port services -- now we support multi-port services, so no longer necessary&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;This is a large change, 70% code complete, Tim &amp;amp; Justin working together, parallel code review and updates, need to reconcile and test&lt;/li&gt;
&lt;li&gt;Do we want to allow people to request host ports - is there any value in letting people ask for a public port? or should we assign you one?
&lt;ul&gt;
&lt;li&gt;Tim: we should assign one&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;discussion of what to do with status - if users set it to empty, that is probably their intention&lt;/li&gt;
&lt;li&gt;general answer to the pattern is binding&lt;/li&gt;
&lt;li&gt;post v1: if we can make portal IP a non-user-settable field, then we need to figure out the transition plan. need to have a fixed IP for DNS.&lt;/li&gt;
&lt;li&gt;we should be able to just randomly assign services a new port and everything should adjust, but this is not feasible for v1&lt;/li&gt;
&lt;li&gt;next iteration of the proposal: PR is being iterated on, testing over the weekend, so PR hopefully ready early next week - gonna be a doozie!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;API transition
&lt;ul&gt;
&lt;li&gt;actively removing all dependencies on v1beta1 and v1beta2; announced that they are going away&lt;/li&gt;
&lt;li&gt;working on a script that will touch everything in the system and will force everything to flip to v1beta3&lt;/li&gt;
&lt;li&gt;a release with both APIs supported and with this script can make sure clusters are moved over and we can move the API&lt;/li&gt;
&lt;li&gt;Should be gone by 0.19&lt;/li&gt;
&lt;li&gt;Help is welcome, especially for trivial things and will try to get as much done as possible in next few weeks&lt;/li&gt;
&lt;li&gt;Release candidate targeting mid-June&lt;/li&gt;
&lt;li&gt;The new kubectl will not work with old APIs, which will be a problem for GKE clusters pinned to an old version, and for k8s users as well if they update kubectl&lt;/li&gt;
&lt;li&gt;Since there's no way to upgrade a GKE cluster, users are going to have to tear down and upgrade their cluster&lt;/li&gt;
&lt;li&gt;we're going to stop testing v1beta1 very soon, trying to streamline the testing paths in our CI pipelines&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Did we decide we are not going to do namespace autoprovisioning?
&lt;ul&gt;
&lt;li&gt;Brian would like to turn it off - no objections&lt;/li&gt;
&lt;li&gt;Documentation should include creating namespaces&lt;/li&gt;
&lt;li&gt;Would like to impose a default CPU for the default namespace&lt;/li&gt;
&lt;li&gt;would cap the number of pods, would reduce the resource exhaustion issue&lt;/li&gt;
&lt;li&gt;would eliminate need to explicitly cap the number of pods on a node due to IP exhaustion&lt;/li&gt;
&lt;li&gt;could add resources as arguments to the porcelain commands&lt;/li&gt;
&lt;li&gt;kubectl run is a simplified command, but it could include some common things (image, command, ports). but could add resources&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes 1.0 Launch Event
&lt;ul&gt;
&lt;li&gt;Save the date. Blog posts, whitepapers, etc. are welcome to be published&lt;/li&gt;
&lt;li&gt;Event will be live streamed, mostly demos &amp;amp; customer talks, keynote&lt;/li&gt;
&lt;li&gt;Big launch party in the evening&lt;/li&gt;
&lt;li&gt;Kit to send more info in next couple weeks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Kubernetes Release: 0.17.0</title><link>https://andygol-k8s.netlify.app/blog/2015/05/kubernetes-release-0170/</link><pubDate>Fri, 15 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/kubernetes-release-0170/</guid><description>&lt;p&gt;Release Notes:&lt;/p&gt;
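Many of the changes below move example manifests from the older beta APIs to v1beta3. The heart of that conversion can be sketched as follows; this is a hedged Python illustration, the field moves shown (`id` becomes `metadata.name`, top-level `labels` move under `metadata`, the remainder moves under `spec`) are the commonly described ones, and `to_v1beta3` is a hypothetical helper, not a real tool:

```python
def to_v1beta3(obj: dict) -> dict:
    """Sketch of converting a flat older-style object to the v1beta3
    metadata/spec shape. Illustrative only, not an exhaustive mapping."""
    return {
        "apiVersion": "v1beta3",
        "kind": obj["kind"],
        "metadata": {
            "name": obj["id"],                # 'id' was renamed to 'name'
            "labels": obj.get("labels", {}),  # labels moved under metadata
        },
        # everything else moves under 'spec'
        "spec": {k: v for k, v in obj.items()
                 if k not in ("id", "kind", "apiVersion", "labels")},
    }
```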
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Cleanups&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Remove old salt configs &lt;a href="https://github.com/kubernetes/kubernetes/pull/8065" title="Remove old salt configs"&gt;#8065&lt;/a&gt; (roberthbailey)&lt;/li&gt;
&lt;li&gt;Kubelet: minor cleanups &lt;a href="https://github.com/kubernetes/kubernetes/pull/8069" title="Kubelet: minor cleanups"&gt;#8069&lt;/a&gt; (yujuhong)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;v1beta3&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;update example/walkthrough to v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7940" title="update example/walkthrough to v1beta3"&gt;#7940&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;update example/rethinkdb to v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7946" title="update example/rethinkdb to v1beta3"&gt;#7946&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;verify the v1beta3 yaml files all work; merge the yaml files &lt;a href="https://github.com/kubernetes/kubernetes/pull/7917" title="verify the v1beta3 yaml files all work; merge the yaml files"&gt;#7917&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;update examples/cassandra to api v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7258" title="update examples/cassandra to api v1beta3"&gt;#7258&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;update service.json in persistent-volume example to v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7899" title="update service.json in persistent-volume example to v1beta3"&gt;#7899&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;update mysql-wordpress example to use v1beta3 API &lt;a href="https://github.com/kubernetes/kubernetes/pull/7864" title="update mysql-wordpress example to use v1beta3 API"&gt;#7864&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;Update examples/meteor to use API v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7848" title="Update examples/meteor to use API v1beta3"&gt;#7848&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;update node-selector example to API v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7872" title="update node-selector example to API v1beta3"&gt;#7872&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;update logging-demo to use API v1beta3; modify the way to access Elasticsearch and Kibana services &lt;a href="https://github.com/kubernetes/kubernetes/pull/7824" title="update logging-demo to use API v1beta3; modify the way to access Elasticsearch and Kibana services"&gt;#7824&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;Convert the skydns rc to use v1beta3 and add a health check to it &lt;a href="https://github.com/kubernetes/kubernetes/pull/7619" title="Convert the skydns rc to use v1beta3 and add a health check to it"&gt;#7619&lt;/a&gt; (a-robinson)&lt;/li&gt;
&lt;li&gt;update the hazelcast example to API version v1beta3 &lt;a href="https://github.com/kubernetes/kubernetes/pull/7728" title="update the hazelcast example to API version v1beta3"&gt;#7728&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;Fix YAML parsing for v1beta3 objects in the kubelet for file/http &lt;a href="https://github.com/kubernetes/kubernetes/pull/7515" title="Fix YAML parsing for v1beta3 objects in the kubelet for file/http"&gt;#7515&lt;/a&gt; (brendandburns)&lt;/li&gt;
&lt;li&gt;Updated kubectl cluster-info to show v1beta3 addresses &lt;a href="https://github.com/kubernetes/kubernetes/pull/7502" title="Updated kubectl cluster-info to show v1beta3 addresses"&gt;#7502&lt;/a&gt; (piosz)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubelet&lt;/p&gt;</description></item><item><title>Resource Usage Monitoring in Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2015/05/resource-usage-monitoring-kubernetes/</link><pubDate>Tue, 12 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/resource-usage-monitoring-kubernetes/</guid><description>&lt;p&gt;Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/pods"&gt;pods&lt;/a&gt;, &lt;a href="https://andygol-k8s.netlify.app/docs/user-guide/services"&gt;services&lt;/a&gt;, and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes &lt;a href="https://github.com/kubernetes/heapster"&gt;Heapster&lt;/a&gt;, a project meant to provide a base monitoring platform on Kubernetes.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - May 1 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/05/Weekly-Kubernetes-Community-Hangout/</link><pubDate>Mon, 11 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/Weekly-Kubernetes-Community-Hangout/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
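The notes below walk through kubectl's simple rolling update and its recovery behavior: cancel and restart, and the update continues from where it stopped. A toy Python sketch of why that works; the real logic drives two replication controllers from kubectl, and the names here are illustrative:

```python
def rolling_update_step(state: dict) -> dict:
    """One step: scale the new controller up by one and the old one down
    by one. The desired next action depends only on the current counts,
    so re-running after an interruption resumes where things stopped."""
    if state["new"] < state["target"]:
        state["new"] += 1
    if state["old"] > 0:
        state["old"] -= 1
    return state

def rolling_update(state: dict) -> dict:
    """Drive the update to completion from whatever state it is in."""
    while state["old"] > 0 or state["new"] < state["target"]:
        rolling_update_step(state)
    return state
```

Because the loop is driven entirely by observed replica counts rather than a saved plan, "cancel and restart" is the recovery story.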
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Simple rolling update - Brendan&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Rolling update = nice example of why RCs and Pods are good.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;...pause… (Brendan needs demo recovery tips from Kelsey)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rolling update has recovery: Cancel update and restart, update continues from where it stopped.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;New controller gets name of old controller, so appearance is pure update.&lt;/p&gt;</description></item><item><title>Kubernetes Release: 0.16.0</title><link>https://andygol-k8s.netlify.app/blog/2015/05/kubernetes-release-0160/</link><pubDate>Mon, 11 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/kubernetes-release-0160/</guid><description>&lt;p&gt;Release Notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bring up a Kubernetes cluster using a CoreOS image as worker nodes &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7445"&gt;#7445&lt;/a&gt; (dchen1107)&lt;/li&gt;
&lt;li&gt;Cloning v1beta3 as v1 and exposing it in the apiserver &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7454"&gt;#7454&lt;/a&gt; (nikhiljindal)&lt;/li&gt;
&lt;li&gt;API Conventions for Late-initializers &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7366"&gt;#7366&lt;/a&gt; (erictune)&lt;/li&gt;
&lt;li&gt;Upgrade Elasticsearch to 1.5.2 for cluster logging &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7455"&gt;#7455&lt;/a&gt; (satnam6502)&lt;/li&gt;
&lt;li&gt;Make delete actually stop resources by default. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7210"&gt;#7210&lt;/a&gt; (brendandburns)&lt;/li&gt;
&lt;li&gt;Change kube2sky to use token-system-dns secret, point at https endpoint ... &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7154"&gt;#7154&lt;/a&gt; (cjcullen)&lt;/li&gt;
&lt;li&gt;Updated CoreOS bare metal docs for 0.15.0 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7364"&gt;#7364&lt;/a&gt; (hvolkmer)&lt;/li&gt;
&lt;li&gt;Print named ports in 'describe service' &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7424"&gt;#7424&lt;/a&gt; (thockin)&lt;/li&gt;
&lt;li&gt;AWS
&lt;ul&gt;
&lt;li&gt;Return public &amp;amp; private addresses in GetNodeAddresses &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7040"&gt;#7040&lt;/a&gt; (justinsb)&lt;/li&gt;
&lt;li&gt;Improving getting existing VPC and subnet &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6606"&gt;#6606&lt;/a&gt; (gust1n)&lt;/li&gt;
&lt;li&gt;Set hostname_override for minions, back to fully-qualified name &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7182"&gt;#7182&lt;/a&gt; (justinsb)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Conversion to v1beta3
&lt;ul&gt;
&lt;li&gt;Convert node level logging agents to v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7274"&gt;#7274&lt;/a&gt; (satnam6502)&lt;/li&gt;
&lt;li&gt;Removing more references to v1beta1 from pkg/ &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7128"&gt;#7128&lt;/a&gt; (nikhiljindal)&lt;/li&gt;
&lt;li&gt;update examples/cassandra to api v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7258"&gt;#7258&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;Convert Elasticsearch logging to v1beta3 and de-salt &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7246"&gt;#7246&lt;/a&gt; (satnam6502)&lt;/li&gt;
&lt;li&gt;Update examples/storm for v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7231"&gt;#7231&lt;/a&gt; (bcbroussard)&lt;/li&gt;
&lt;li&gt;Update examples/spark for v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7230"&gt;#7230&lt;/a&gt; (bcbroussard)&lt;/li&gt;
&lt;li&gt;Update Kibana RC and service to v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7240"&gt;#7240&lt;/a&gt; (satnam6502)&lt;/li&gt;
&lt;li&gt;Updating the guestbook example to v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7194"&gt;#7194&lt;/a&gt; (nikhiljindal)&lt;/li&gt;
&lt;li&gt;Update Phabricator to v1beta3 example &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7232"&gt;#7232&lt;/a&gt; (bcbroussard)&lt;/li&gt;
&lt;li&gt;Update Kibana pod to speak to Elasticsearch using v1beta3 &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7206"&gt;#7206&lt;/a&gt; (satnam6502)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Validate Node IPs; clean up validation code &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7180"&gt;#7180&lt;/a&gt; (ddysher)&lt;/li&gt;
&lt;li&gt;Add PortForward to runtime API. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7391"&gt;#7391&lt;/a&gt; (vmarmol)&lt;/li&gt;
&lt;li&gt;kube-proxy uses token to access port 443 of apiserver &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7303"&gt;#7303&lt;/a&gt; (erictune)&lt;/li&gt;
&lt;li&gt;Move the logging-related directories to where I think they belong &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7014"&gt;#7014&lt;/a&gt; (a-robinson)&lt;/li&gt;
&lt;li&gt;Make client service requests use the default timeout now that external load balancers are created asynchronously &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6870"&gt;#6870&lt;/a&gt; (a-robinson)&lt;/li&gt;
&lt;li&gt;Fix bug in kube-proxy of not updating iptables rules if a service's public IPs change &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6123"&gt;#6123&lt;/a&gt; (a-robinson)&lt;/li&gt;
&lt;li&gt;PersistentVolumeClaimBinder &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6105"&gt;#6105&lt;/a&gt; (markturansky)&lt;/li&gt;
&lt;li&gt;Fixed validation message when trying to submit incorrect secret &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7356"&gt;#7356&lt;/a&gt; (soltysh)&lt;/li&gt;
&lt;li&gt;First step to supporting multiple k8s clusters &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6006"&gt;#6006&lt;/a&gt; (justinsb)&lt;/li&gt;
&lt;li&gt;Parity for namespace handling in secrets E2E &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7361"&gt;#7361&lt;/a&gt; (pmorie)&lt;/li&gt;
&lt;li&gt;Add cleanup policy to RollingUpdater &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6996"&gt;#6996&lt;/a&gt; (ironcladlou)&lt;/li&gt;
&lt;li&gt;Use narrowly scoped interfaces for client access &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6871"&gt;#6871&lt;/a&gt; (ironcladlou)&lt;/li&gt;
&lt;li&gt;Warning about Critical bug in the GlusterFS Volume Plugin &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7319"&gt;#7319&lt;/a&gt; (wattsteve)&lt;/li&gt;
&lt;li&gt;Rolling update
&lt;ul&gt;
&lt;li&gt;First part of improved rolling update, allow dynamic next replication controller generation. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7268"&gt;#7268&lt;/a&gt; (brendandburns)&lt;/li&gt;
&lt;li&gt;Further implementation of rolling-update, add rename &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7279"&gt;#7279&lt;/a&gt; (brendandburns)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Added basic apiserver authz tests. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7293"&gt;#7293&lt;/a&gt; (ashcrow)&lt;/li&gt;
&lt;li&gt;Retry pod update on version conflict error in e2e test. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7297"&gt;#7297&lt;/a&gt; (quinton-hoole)&lt;/li&gt;
&lt;li&gt;Add &amp;quot;kubectl validate&amp;quot; command to do a cluster health check. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6597"&gt;#6597&lt;/a&gt; (fabioy)&lt;/li&gt;
&lt;li&gt;coreos/azure: Weave version bump, various other enhancements &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7224"&gt;#7224&lt;/a&gt; (errordeveloper)&lt;/li&gt;
&lt;li&gt;Azure: Wait for salt completion on cluster initialization &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6576"&gt;#6576&lt;/a&gt; (jeffmendoza)&lt;/li&gt;
&lt;li&gt;Tighten label parsing &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6674"&gt;#6674&lt;/a&gt; (kargakis)&lt;/li&gt;
&lt;li&gt;fix watch of single object &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7263"&gt;#7263&lt;/a&gt; (lavalamp)&lt;/li&gt;
&lt;li&gt;Upgrade go-dockerclient dependency to support CgroupParent &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7247"&gt;#7247&lt;/a&gt; (guenter)&lt;/li&gt;
&lt;li&gt;Make secret volume plugin idempotent &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7166"&gt;#7166&lt;/a&gt; (pmorie)&lt;/li&gt;
&lt;li&gt;Salt reconfiguration to get rid of nginx on GCE &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6618"&gt;#6618&lt;/a&gt; (roberthbailey)&lt;/li&gt;
&lt;li&gt;Revert &amp;quot;Change kube2sky to use token-system-dns secret, point at https e... &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7207"&gt;#7207&lt;/a&gt; (fabioy)&lt;/li&gt;
&lt;li&gt;Pod templates as their own type &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/5012"&gt;#5012&lt;/a&gt; (smarterclayton)&lt;/li&gt;
&lt;li&gt;iscsi Test: Add explicit check for attach and detach calls. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7110"&gt;#7110&lt;/a&gt; (swagiaal)&lt;/li&gt;
&lt;li&gt;Added field selector for listing pods &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7067"&gt;#7067&lt;/a&gt; (ravigadde)&lt;/li&gt;
&lt;li&gt;Record an event on node schedulable changes &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7138"&gt;#7138&lt;/a&gt; (pravisankar)&lt;/li&gt;
&lt;li&gt;Resolve &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/6812"&gt;#6812&lt;/a&gt;, limit length of load balancer names &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7145"&gt;#7145&lt;/a&gt; (caesarxuchao)&lt;/li&gt;
&lt;li&gt;Convert error strings to proper validation errors. &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7131"&gt;#7131&lt;/a&gt; (rjnagal)&lt;/li&gt;
&lt;li&gt;ResourceQuota add object count support for secret and volume claims &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6593"&gt;#6593&lt;/a&gt; (derekwaynecarr)&lt;/li&gt;
&lt;li&gt;Use Pod.Spec.Host instead of Pod.Status.HostIP for pod subresources &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6985"&gt;#6985&lt;/a&gt; (csrwng)&lt;/li&gt;
&lt;li&gt;Prioritize deleting the non-running pods when reducing replicas &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/6992"&gt;#6992&lt;/a&gt; (yujuhong)&lt;/li&gt;
&lt;li&gt;Kubernetes UI with Dashboard component &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/7056"&gt;#7056&lt;/a&gt; (preillyme)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To download, please visit &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.16.0"&gt;https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.16.0&lt;/a&gt;&lt;/p&gt;</description></item><item><title> AppC Support for Kubernetes through RKT</title><link>https://andygol-k8s.netlify.app/blog/2015/05/appc-support-for-kubernetes-through-rkt/</link><pubDate>Mon, 04 May 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/05/appc-support-for-kubernetes-through-rkt/</guid><description>&lt;p&gt;We very recently accepted a pull request to the Kubernetes project to add appc support for the Kubernetes community. Appc is a new open container specification that was initiated by CoreOS, and is supported through the CoreOS rkt container runtime.&lt;/p&gt;
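To give a flavor of what the appc specification describes, here is a minimal image manifest assembled in Python. The field names follow the appc ImageManifest schema as I understand it; the version number and image name are made up for illustration:

```python
import json

# A minimal appc-style image manifest. acKind, acVersion and name are
# the core identifying fields; the app section describes what to run.
# The spec version and image name below are illustrative, not real.
manifest = {
    "acKind": "ImageManifest",
    "acVersion": "0.5.1",
    "name": "example.com/redis",
    "app": {
        "exec": ["/usr/bin/redis-server"],
        "user": "0",
        "group": "0",
    },
}

# Manifests are exchanged as JSON.
serialized = json.dumps(manifest, sort_keys=True)
```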
&lt;p&gt;This is an important step forward for the Kubernetes project and for the broader containers community. It adds flexibility and choice to the container-verse and brings the promise of compelling new security and performance capabilities to the Kubernetes developer.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - April 24 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/04/weekly-kubernetes-community-hangout_29/</link><pubDate>Thu, 30 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/weekly-kubernetes-community-hangout_29/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Agenda:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flocker and Kubernetes integration demo&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flocker and Kubernetes integration demo&lt;/li&gt;
&lt;li&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Flocker Q/A&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Does the file still exist on node1 after migration?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Brendan: Any plan to make this a volume, so we don't need Powerstrip?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Luke: Need to figure out interest to decide if we want to make it a first-class persistent disk provider in kube.&lt;/p&gt;</description></item><item><title> Borg: The Predecessor to Kubernetes</title><link>https://andygol-k8s.netlify.app/blog/2015/04/borg-predecessor-to-kubernetes/</link><pubDate>Thu, 23 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/borg-predecessor-to-kubernetes/</guid><description>&lt;p&gt;Google has been running containerized workloads in production for more than a decade. Whether it's service jobs like web front-ends and stateful servers, infrastructure systems like &lt;a href="http://research.google.com/archive/bigtable.html"&gt;Bigtable&lt;/a&gt; and &lt;a href="http://research.google.com/archive/spanner.html"&gt;Spanner&lt;/a&gt;, or batch frameworks like &lt;a href="http://research.google.com/archive/mapreduce.html"&gt;MapReduce&lt;/a&gt; and &lt;a href="http://research.google.com/pubs/pub41378.html"&gt;Millwheel&lt;/a&gt;, virtually everything at Google runs as a container. Today, we took the wraps off of Borg, Google’s long-rumored internal container-oriented cluster-management system, publishing details at the academic computer systems conference &lt;a href="http://eurosys2015.labri.fr/"&gt;Eurosys&lt;/a&gt;. You can find the paper &lt;a href="https://research.google.com/pubs/pub43438.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We've incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.&lt;/p&gt;</description></item><item><title> Kubernetes and the Mesosphere DCOS</title><link>https://andygol-k8s.netlify.app/blog/2015/04/kubernetes-and-mesosphere-dcos/</link><pubDate>Wed, 22 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/kubernetes-and-mesosphere-dcos/</guid><description>&lt;h1 id="kubernetes-and-the-mesosphere-dcos"&gt;Kubernetes and the Mesosphere DCOS&lt;/h1&gt;
&lt;p&gt;Today Mesosphere announced the addition of Kubernetes as a standard part of their &lt;a href="https://mesosphere.com/product/"&gt;DCOS&lt;/a&gt; offering. This is a great step forward in bringing cloud native application management to the world, and should lay to rest many questions we hear about 'Kubernetes or Mesos, which one should I use?'. Now you can have your cake and eat it too: use both. Today's announcement extends the reach of Kubernetes to a new class of users, and adds some exciting new capabilities for everyone.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - April 17 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/04/weekly-kubernetes-community-hangout_17/</link><pubDate>Fri, 17 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/weekly-kubernetes-community-hangout_17/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Agenda&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Mesos Integration&lt;/li&gt;
&lt;li&gt;High Availability (HA)&lt;/li&gt;
&lt;li&gt;Adding performance and profiling details to e2e to track regressions&lt;/li&gt;
&lt;li&gt;Versioned clients&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Notes&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Mesos integration&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Mesos integration proposal:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No blockers to integration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Documentation needs to be updated.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HA&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Proposal should land today.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Etcd cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Load-balance apiserver.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cold standby for controller manager and other master components.&lt;/p&gt;</description></item><item><title> Introducing Kubernetes API Version v1beta3</title><link>https://andygol-k8s.netlify.app/blog/2015/04/introducing-kubernetes-v1beta3/</link><pubDate>Thu, 16 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/introducing-kubernetes-v1beta3/</guid><description>&lt;p&gt;We've been hard at work on cleaning up the API over the past several months (see &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1519"&gt;https://github.com/GoogleCloudPlatform/kubernetes/issues/1519&lt;/a&gt; for details). The result is v1beta3, which is considered to be the release candidate for the v1 API.&lt;/p&gt;
&lt;p&gt;We would like you to move to this new API version as soon as possible. v1beta1 and v1beta2 are deprecated, and will be removed by the end of June, shortly after we introduce the v1 API.&lt;/p&gt;</description></item><item><title> Kubernetes Release: 0.15.0</title><link>https://andygol-k8s.netlify.app/blog/2015/04/kubernetes-release-0150/</link><pubDate>Thu, 16 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/kubernetes-release-0150/</guid><description>&lt;p&gt;Release Notes:&lt;/p&gt;
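The headline feature below, multi-port Services (#6182), lets a single Service expose several named ports. A hedged sketch of the shape such a spec takes and the uniqueness constraint it implies; this is Python for illustration, and the field names only approximate the v1beta3 form:

```python
# Illustrative multi-port Service spec: each port entry carries a name
# so endpoints can be matched unambiguously. Field names approximate
# the v1beta3 shape and are not an exact schema.
service_spec = {
    "ports": [
        {"name": "http", "port": 80, "targetPort": 8080},
        {"name": "metrics", "port": 9100, "targetPort": 9100},
    ],
}

def port_names_unique(spec: dict) -> bool:
    """With multiple ports, each needs a distinct name; a duplicate
    would make the port-to-endpoint mapping ambiguous."""
    names = [p["name"] for p in spec["ports"]]
    return len(names) == len(set(names))
```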
&lt;ul&gt;
&lt;li&gt;Enables v1beta3 API and sets it to the default API version (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6098" title="Enabling v1beta3 api version by default in master"&gt;#6098&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added multi-port Services (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6182" title="Implement multi-port Services"&gt;#6182&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;New Getting Started Guides
&lt;ul&gt;
&lt;li&gt;Multi-node local startup guide (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6505" title="Docker multi-node"&gt;#6505&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Mesos on Google Cloud Platform (&lt;a href="https://github.com/kubernetes/kubernetes/pull/5442" title="Getting started guide for Mesos on Google Cloud Platform"&gt;#5442&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ansible Setup instructions (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6237" title="example ansible setup repo"&gt;#6237&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Added a controller framework (&lt;a href="https://github.com/kubernetes/kubernetes/pull/5270" title="Controller framework"&gt;#5270&lt;/a&gt;, &lt;a href="https://github.com/kubernetes/kubernetes/pull/5473" title="Add DeltaFIFO (a controller framework piece)"&gt;#5473&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;The Kubelet now listens on a secure HTTPS port (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6380" title="Configure the kubelet to use HTTPS (take 2)"&gt;#6380&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Made kubectl errors more user-friendly (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6338" title="Return a typed error for config validation, and make errors simple"&gt;#6338&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;The apiserver now supports client cert authentication (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6190" title="Add client cert authentication"&gt;#6190&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;The apiserver now limits the number of concurrent requests it processes (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6207" title="Add a limit to the number of in-flight requests that a server processes."&gt;#6207&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added rate limiting to pod deletion (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6355" title="Added rate limiting to pod deleting"&gt;#6355&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Implemented the Balanced Resource Allocation algorithm as a PriorityFunction in the scheduler package (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6150" title="Implement Balanced Resource Allocation (BRA) algorithm as a PriorityFunction in scheduler package."&gt;#6150&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Enabled log collection from master (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6396" title="Enable log collection from master."&gt;#6396&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added an API endpoint to pull logs from Pods (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6497" title="Pod log subresource"&gt;#6497&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added latency metrics to the scheduler (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6368" title="Add basic latency metrics to scheduler."&gt;#6368&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added latency metrics to the REST client (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6409" title="Add latency metrics to REST client"&gt;#6409&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;etcd now runs in a pod on the master (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6221" title="Run etcd 2.0.5 in a pod"&gt;#6221&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;nginx now runs in a container on the master (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6334" title="Add an nginx docker image for use on the master."&gt;#6334&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Began creating Docker images for master components (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6326" title="Create Docker images for master components "&gt;#6326&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Updated GCE provider to work with gcloud 0.9.54 (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6270" title="Updates for gcloud 0.9.54"&gt;#6270&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Updated AWS provider to fix Region vs Zone semantics (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6011" title="Fix AWS region vs zone"&gt;#6011&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;An event is now recorded when image GC fails (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6091" title="Record event when image GC fails."&gt;#6091&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added a QPS limiter to the Kubernetes client (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6203" title="Add a QPS limiter to the kubernetes client."&gt;#6203&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Decreased the time it takes to run &lt;code&gt;make release&lt;/code&gt; (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6196" title="Parallelize architectures in both the building and packaging phases of `make release`"&gt;#6196&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;New volume support
&lt;ul&gt;
&lt;li&gt;Added iscsi volume plugin (&lt;a href="https://github.com/kubernetes/kubernetes/pull/5506" title="add iscsi volume plugin"&gt;#5506&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Added glusterfs volume plugin (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6174" title="implement glusterfs volume plugin"&gt;#6174&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;AWS EBS volume support (&lt;a href="https://github.com/kubernetes/kubernetes/pull/5138" title="AWS EBS volume support"&gt;#5138&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Updated heapster to v0.10.0 (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6331" title="Update heapster version to v0.10.0"&gt;#6331&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Updated to etcd 2.0.9 (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6544" title="Build etcd image (version 2.0.9), and upgrade kubernetes cluster to the new version"&gt;#6544&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Updated Kibana to v1.2 (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6426" title="Update Kibana to v1.2 which paramaterizes location of Elasticsearch"&gt;#6426&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Bug Fixes
&lt;ul&gt;
&lt;li&gt;Kube-proxy now updates iptables rules if a service's public IPs change (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6123" title="Fix bug in kube-proxy of not updating iptables rules if a service&amp;#39;s public IPs change"&gt;#6123&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Retry kube-addons creation if the initial creation fails (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6200" title="Retry kube-addons creation if kube-addons creation fails."&gt;#6200&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Made kube-proxy more resilient to running out of file descriptors (&lt;a href="https://github.com/kubernetes/kubernetes/pull/6727" title="pkg/proxy: panic if run out of fd"&gt;#6727&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
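&lt;p&gt;As a quick illustration of the multi-port Services feature above (#6182), a single Service can expose several named ports in one spec. This sketch uses the current stable API syntax rather than the v1beta3 syntax of this release, and the service name, selector, and port numbers are hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  selector:
    app: my-app           # hypothetical selector
  ports:
  - name: http            # each port needs a name when more than one is defined
    port: 80
    targetPort: 8080
  - name: metrics
    port: 9090
    targetPort: 9090
&lt;/code&gt;&lt;/pre&gt;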
&lt;p&gt;To download, please visit &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0"&gt;https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0&lt;/a&gt;&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - April 10 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/04/weekly-kubernetes-community-hangout_11/</link><pubDate>Sat, 11 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/weekly-kubernetes-community-hangout_11/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Agenda:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;kubectl tooling, rolling update, deployments, imperative commands.&lt;/li&gt;
&lt;li&gt;Downward API / env. substitution, and maybe preconditions/dependencies.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Notes from meeting:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;1. kubectl improvements&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;make it simpler to use, finish rolling update, higher-level deployment concepts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;rolling update&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;today&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;can replace one rc by another rc specified by a file.&lt;/p&gt;</description></item><item><title>Faster than a speeding Latte</title><link>https://andygol-k8s.netlify.app/blog/2015/04/faster-than-speeding-latte/</link><pubDate>Mon, 06 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/faster-than-speeding-latte/</guid><description>&lt;p&gt;Check out Brendan Burns racing Kubernetes.&lt;/p&gt;
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"&gt;
 &lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/7vZ9dRKRMyc?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="Latte vs. Kubernetes setup - which is faster?"&gt;&lt;/iframe&gt;
 &lt;/div&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - April 3 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/04/Weekly-Kubernetes-Community-Hangout/</link><pubDate>Sat, 04 Apr 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/04/Weekly-Kubernetes-Community-Hangout/</guid><description>&lt;h1 id="kubernetes-weekly-kubernetes-community-hangout-notes"&gt;Kubernetes: Weekly Kubernetes Community Hangout Notes&lt;/h1&gt;
&lt;p&gt;Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Agenda:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Quinton - Cluster federation&lt;/li&gt;
&lt;li&gt;Satnam - Performance benchmarking update&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Notes from meeting:&lt;/em&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Quinton - Cluster federation&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Ideas floating around after meetup in SF
&lt;ul&gt;
&lt;li&gt;Please read and comment&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Not 1.0, but put a doc together to show roadmap&lt;/li&gt;
&lt;li&gt;Can be built outside of Kubernetes&lt;/li&gt;
&lt;li&gt;API to control things across multiple clusters, include some logic&lt;/li&gt;
&lt;/ul&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Auth(n)(z)&lt;/p&gt;</description></item><item><title> Participate in a Kubernetes User Experience Study</title><link>https://andygol-k8s.netlify.app/blog/2015/03/participate-in-kubernetes-user/</link><pubDate>Tue, 31 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/03/participate-in-kubernetes-user/</guid><description>&lt;p&gt;We need your help in shaping the future of Kubernetes and Google Container Engine, and we'd love to have you participate in a remote UX research study to help us learn about your experiences!  If you're interested in participating, we invite you to take &lt;a href="http://goo.gl/AXFFMs"&gt;this brief survey&lt;/a&gt; to see if you qualify. If you’re selected to participate, we’ll follow up with you directly.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Length: 60 minute interview&lt;/li&gt;
&lt;li&gt;Date: April 7th-15th&lt;/li&gt;
&lt;li&gt;Location: Remote&lt;/li&gt;
&lt;li&gt;Your gift: $100 Perks gift code*&lt;/li&gt;
&lt;li&gt;Study format: Interview with our researcher&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Interested in participating? Take &lt;a href="http://goo.gl/AXFFMs"&gt;this brief survey&lt;/a&gt;.&lt;/p&gt;</description></item><item><title> Weekly Kubernetes Community Hangout Notes - March 27 2015</title><link>https://andygol-k8s.netlify.app/blog/2015/03/Weekly-Kubernetes-Community-Hangout/</link><pubDate>Sat, 28 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/03/Weekly-Kubernetes-Community-Hangout/</guid><description>&lt;p&gt;Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.&lt;/p&gt;
&lt;p&gt;Agenda:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Andy - demo remote execution and port forwarding&lt;/li&gt;
&lt;li&gt;Quinton - Cluster federation - Postponed&lt;/li&gt;
&lt;li&gt;Clayton - UI code sharing and collaboration around Kubernetes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Notes from meeting:&lt;/p&gt;
&lt;p&gt;1. Andy from RedHat:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Demo remote execution&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;kubectl exec -p $POD -- $CMD&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Makes a connection to the master as proxy, figures out which node the pod is on, proxies the connection to the kubelet, which does the interesting bit via nsenter.&lt;/p&gt;</description></item><item><title> Kubernetes Gathering Videos</title><link>https://andygol-k8s.netlify.app/blog/2015/03/kubernetes-gathering-videos/</link><pubDate>Mon, 23 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/03/kubernetes-gathering-videos/</guid><description>&lt;p&gt;If you missed the Kubernetes Gathering in SF last month, fear not! Here are the videos from the evening presentations, organized into a playlist on YouTube.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP2FBVvSLHpJE8_6hRHW8Kxe"&gt;&lt;img src="https://img.youtube.com/vi/q8lGZCKktYo/0.jpg" alt="Kubernetes Gathering"&gt;&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Welcome to the Kubernetes Blog!</title><link>https://andygol-k8s.netlify.app/blog/2015/03/welcome-to-kubernetes-blog/</link><pubDate>Fri, 20 Mar 2015 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/blog/2015/03/welcome-to-kubernetes-blog/</guid><description>&lt;p&gt;Welcome to the new Kubernetes Blog. Follow this blog to learn about the Kubernetes Open Source project. We plan to post release notes, how-to articles, events, and maybe even some off topic fun here from time to time.&lt;/p&gt;
&lt;p&gt;If you are using Kubernetes or contributing to the project and would like to do a guest post, &lt;a href="mailto:kitm@google.com"&gt;please let me know&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To start things off, here's a roundup of recent Kubernetes posts from other sites:&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/community/static/cncf-code-of-conduct/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/community/static/cncf-code-of-conduct/</guid><description>&lt;!-- Do not edit this file directly. Get the latest from
 https://github.com/cncf/foundation/blob/main/code-of-conduct.md --&gt;
&lt;h2 id="cncf-community-code-of-conduct-v1-3"&gt;CNCF Community Code of Conduct v1.3&lt;/h2&gt;
&lt;h3 id="community-code-of-conduct"&gt;Community Code of Conduct&lt;/h3&gt;
&lt;p&gt;As contributors, maintainers, and participants in the CNCF community, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who participate or contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, attending conferences or events, or engaging in other community or project activities.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/community/static/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/community/static/readme/</guid><description>&lt;p&gt;The files in this directory have been imported from other sources. Do not
edit them directly, except by replacing them with new versions.&lt;/p&gt;
&lt;p&gt;Localization note: you do not need to create localized versions of any of
the files in this directory.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/prerequisites-ref-docs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/generate-ref-docs/prerequisites-ref-docs/</guid><description>&lt;h3 id="requirements"&gt;Requirements:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You need a machine that is running Linux or macOS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need to have these tools installed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.python.org/downloads/"&gt;Python&lt;/a&gt; v3.7.x+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://go.dev/dl/"&gt;Golang&lt;/a&gt; version 1.13+&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/pip/"&gt;Pip&lt;/a&gt; used to install PyYAML&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pyyaml.org/"&gt;PyYAML&lt;/a&gt; v5.1.2&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gnu.org/software/make/"&gt;make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gcc.gnu.org/"&gt;gcc compiler/linker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/installation/"&gt;Docker&lt;/a&gt; (Required only for &lt;code&gt;kubectl&lt;/code&gt; command reference)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Your &lt;code&gt;PATH&lt;/code&gt; environment variable must include the required build tools, such as the &lt;code&gt;Go&lt;/code&gt; binary and &lt;code&gt;python&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need to know how to create a pull request to a GitHub repository.
This involves creating your own fork of the repository. For more
information, see &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/new-content/open-a-pr/#fork-the-repo"&gt;Work from a local clone&lt;/a&gt;.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;kubeadm: easily bootstrap a secure Kubernetes cluster&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────────────────────────────────────────────┐
│ KUBEADM                                                  │
│ Easily bootstrap a secure Kubernetes cluster             │
│                                                          │
│ Please give us feedback at:                              │
│ https://github.com/kubernetes/kubeadm/issues             │
└──────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Example usage:&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_certificate-key/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_certificate-key/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Generate certificate keys&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command will print out a secure randomly-generated certificate key that can be used with
the &amp;quot;init&amp;quot; command.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_check-expiration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_check-expiration/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Check certificates expiration for a Kubernetes cluster&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Checks expiration for the certificates in the local PKI managed by kubeadm.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_generate-csr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_generate-csr/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Generate keys and certificate signing requests&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the &amp;quot;users &amp;gt; user &amp;gt; client-key-data&amp;quot; field, and for each kubeconfig file an accompanying &amp;quot;.csr&amp;quot; file is created.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew certificates for a Kubernetes cluster&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for renew&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_admin.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_admin.conf/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_all/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Renew all available certificates&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew all known certificates necessary to run the control plane. Renewals are run unconditionally, regardless of expiration date. Renewals can also be run individually for more control.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-etcd-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-etcd-client/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate the apiserver uses to access etcd.&lt;/p&gt;
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-kubelet-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver-kubelet-client/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate for the API server to connect to kubelet.&lt;/p&gt;
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_apiserver/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate for serving the Kubernetes API.&lt;/p&gt;
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_controller-manager.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_controller-manager.conf/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate embedded in the kubeconfig file for the controller manager to use.&lt;/p&gt;
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-healthcheck-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-healthcheck-client/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate for liveness probes to healthcheck etcd.&lt;/p&gt;
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-peer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-peer/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate for etcd nodes to communicate with each other.&lt;/p&gt;
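&lt;p&gt;A typical invocation (the subcommand name follows this page's path):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew etcd-peer [flags]
&lt;/code&gt;&lt;/pre&gt;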
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_etcd-server/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate for serving etcd.&lt;/p&gt;
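&lt;p&gt;A typical invocation (the subcommand name follows this page's path):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew etcd-server [flags]
&lt;/code&gt;&lt;/pre&gt;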
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_front-proxy-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_front-proxy-client/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate for the front proxy client.&lt;/p&gt;
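&lt;p&gt;A typical invocation (the subcommand name follows this page's path):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew front-proxy-client [flags]
&lt;/code&gt;&lt;/pre&gt;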
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_scheduler.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_scheduler.conf/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate embedded in the kubeconfig file for the scheduler to use.&lt;/p&gt;
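&lt;p&gt;A typical invocation (the subcommand name follows this page's path):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew scheduler.conf [flags]
&lt;/code&gt;&lt;/pre&gt;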
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_super-admin.conf/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs/kubeadm_certs_renew_super-admin.conf/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Renew the certificate embedded in the kubeconfig file for the super-admin.&lt;/p&gt;
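&lt;p&gt;A typical invocation (the subcommand name follows this page's path):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm certs renew super-admin.conf [flags]
&lt;/code&gt;&lt;/pre&gt;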
&lt;p&gt;Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Interact with container images used by kubeadm&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm config images [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for images&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_list/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Print a list of images kubeadm will use. The configuration file is used in case any images or image repositories are customized&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_pull/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_images_pull/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Pull images used by kubeadm&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm config images pull [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_migrate/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_migrate/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Read an older version of the kubeadm configuration API types from a file, and output the similar config object for the newer version&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Print configuration&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command prints configurations for the subcommands provided.
For details, see: &lt;a href="https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories"&gt;https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories&lt;/a&gt;&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm config print [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for print&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_init-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_init-defaults/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Print the default init configuration, which can be used for 'kubeadm init'&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command prints objects such as the default init configuration that is used for 'kubeadm init'.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_join-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_join-defaults/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Print the default join configuration, which can be used for 'kubeadm join'&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command prints objects such as the default join configuration that is used for 'kubeadm join'.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_reset-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_reset-defaults/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Print the default reset configuration, which can be used for 'kubeadm reset'&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command prints objects such as the default reset configuration that is used for 'kubeadm reset'.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_upgrade-defaults/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_print_upgrade-defaults/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Print the default upgrade configuration, which can be used for 'kubeadm upgrade'&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command prints objects such as the default upgrade configuration that is used for 'kubeadm upgrade'.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_validate/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_config/kubeadm_config_validate/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Read a file containing the kubeadm configuration API and report any validation problems&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command lets you validate a kubeadm configuration API file and report any warnings and errors.
If there are no errors, the exit status will be zero; otherwise, it will be non-zero.
Any unmarshaling problems such as unknown API fields will trigger errors. Unknown API versions and
fields with invalid values will also trigger errors. Any other errors or warnings may be reported
depending on contents of the input file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;init&amp;quot; workflow&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for phase&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Install required addons for passing conformance tests&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase addon [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for addon&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Install all the addons&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase addon all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_coredns/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Install the CoreDNS addon to a Kubernetes cluster&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Install the CoreDNS addon components via the API server. Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_addon_kube-proxy/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Install the kube-proxy addon to a Kubernetes cluster&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Install the kube-proxy addon components via the API server.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_bootstrap-token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_bootstrap-token/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Generates bootstrap tokens used to join a node to a cluster&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Bootstrap tokens are used for establishing bidirectional trust between a node joining the cluster and a control-plane node.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Certificate generation&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase certs [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for certs&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate all certificates&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase certs all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-etcd-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-etcd-client/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key the apiserver uses to access etcd, and save them into apiserver-etcd-client.crt and apiserver-etcd-client.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-kubelet-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver-kubelet-client/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key for the API server to connect to kubelet, and save them into apiserver-kubelet-client.crt and apiserver-kubelet-client.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_apiserver/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key for serving the Kubernetes API, and save them into apiserver.crt and apiserver.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_ca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_ca/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components, and save its certificate and key into ca.crt and ca.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-ca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-ca/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the self-signed CA to provision identities for etcd, and save its certificate and key into etcd/ca.crt and etcd/ca.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-healthcheck-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-healthcheck-client/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key for liveness probes to healthcheck etcd, and save them into etcd/healthcheck-client.crt and etcd/healthcheck-client.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-peer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-peer/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key for etcd nodes to communicate with each other, and save them into etcd/peer.crt and etcd/peer.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_etcd-server/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key for serving etcd, and save them into etcd/server.crt and etcd/server.key files.&lt;/p&gt;
&lt;p&gt;Default SANs are localhost, 127.0.0.1, ::1&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-ca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-ca/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the self-signed CA to provision identities for the front proxy, and save its certificate and key into front-proxy-ca.crt and front-proxy-ca.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-client/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_front-proxy-client/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificate and key for the front proxy client, and save them into front-proxy-client.crt and front-proxy-client.key files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_sa/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_certs_sa/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Generate a private key for signing service account tokens along with its public key&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the private key for signing service account tokens along with its public key, and save them into sa.key and sa.pub files.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate all static Pod manifest files necessary to establish the control plane&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for control-plane&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate all static Pod manifest files&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase control-plane all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Generates all static Pod manifest files for control plane components,
 # functionally equivalent to what is generated by kubeadm init.
 kubeadm init phase control-plane all
 
 # Generates all static Pod manifest files using options read from a configuration file.
 kubeadm init phase control-plane all --config config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_apiserver/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_apiserver/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generates the kube-apiserver static Pod manifest&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase control-plane apiserver [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_controller-manager/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generates the kube-controller-manager static Pod manifest&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase control-plane controller-manager [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The path where the certificates are saved and stored.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_control-plane_scheduler/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generates the kube-scheduler static Pod manifest&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase control-plane scheduler [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The path where the certificates are saved and stored.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate static Pod manifest file for local etcd&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase etcd [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for etcd&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd_local/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_etcd_local/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the static Pod manifest file for a local, single-node etcd instance&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase etcd local [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Generates the static Pod manifest file for etcd, functionally
 # equivalent to what is generated by kubeadm init.
 kubeadm init phase etcd local
 
 # Generates the static Pod manifest file for etcd using options
 # read from a configuration file.
 kubeadm init phase etcd local --config config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The path where the certificates are saved and stored.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_admin/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_admin/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Generate a kubeconfig file for the admin to use and for kubeadm itself&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the kubeconfig file for the admin and for kubeadm itself, and save it to admin.conf file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate all kubeconfig files&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase kubeconfig all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_controller-manager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_controller-manager/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Generate a kubeconfig file for the controller manager to use&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the kubeconfig file for the controller manager to use and save it to controller-manager.conf file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_kubelet/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Generate a kubeconfig file for the kubelet to use &lt;em&gt;only&lt;/em&gt; for cluster bootstrapping purposes&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the kubeconfig file for the kubelet to use and save it to kubelet.conf file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_scheduler/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_scheduler/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Generate a kubeconfig file for the scheduler to use&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the kubeconfig file for the scheduler to use and save it to the scheduler.conf file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_super-admin/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubeconfig_super-admin/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate a kubeconfig file for the super-admin, and save it to the super-admin.conf file.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase kubeconfig super-admin [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Updates settings relevant to the kubelet after TLS bootstrap&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase kubelet-finalize [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Updates settings relevant to the kubelet after TLS bootstrap&amp;#34;
 kubeadm init phase kubelet-finalize all --config
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for kubelet-finalize&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run all kubelet-finalize phases&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase kubelet-finalize all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Updates settings relevant to the kubelet after TLS bootstrap&amp;#34;
 kubeadm init phase kubelet-finalize all --config
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The path where the certificates are saved and stored.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_enable-client-cert-rotation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-finalize_enable-client-cert-rotation/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Enable kubelet client certificate rotation&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase kubelet-finalize enable-client-cert-rotation [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The path where the certificates are saved and stored.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-start/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_kubelet-start/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Write kubelet settings and (re)start the kubelet&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Write a file with KubeletConfiguration and an environment file with node specific kubelet settings, and then (re)start kubelet.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_mark-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_mark-control-plane/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Mark a node as a control-plane&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase mark-control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Applies control-plane label and taint to the current node, functionally equivalent to what executed by kubeadm init.
 kubeadm init phase mark-control-plane --config config.yaml
 
 # Applies control-plane label and taint to a specific node
 kubeadm init phase mark-control-plane --node-name myNode
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_preflight/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run pre-flight checks for kubeadm init.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase preflight [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Run pre-flight checks for kubeadm init using a config file.
 kubeadm init phase preflight --config kubeadm-config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_show-join-command/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_show-join-command/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Show the join command for control-plane and worker nodes&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase show-join-command [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for show-join-command&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-certs/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Upload certificates to kubeadm-certs&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload control plane certificates to the kubeadm-certs Secret&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase upload-certs [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Key used to encrypt the control-plane certificates in the kubeadm-certs Secret. The certificate key is a hex encoded string that is an AES key of size 32 bytes.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload the kubeadm and kubelet configuration to a ConfigMap&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase upload-config [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for upload-config&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload all configuration to a ConfigMap&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase upload-config all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubeadm/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload the kubeadm ClusterConfiguration to a ConfigMap called kubeadm-config in the kube-system namespace. This enables correct configuration of system components and a seamless user experience when upgrading.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_upload-config_kubelet/</guid><description>&lt;!--
--&gt;
&lt;p&gt;Upload the kubelet component config to a ConfigMap&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload the kubelet configuration extracted from the kubeadm InitConfiguration object to a kubelet-config ConfigMap in the cluster&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_wait-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_init/kubeadm_init_phase_wait-control-plane/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Wait for the control plane to start&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm init phase wait-control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for wait-control-plane&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;join&amp;quot; workflow&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for phase&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Join a machine as a control plane instance&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-join [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Joins a machine as a control plane instance
 kubeadm join phase control-plane-join all
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for control-plane-join&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_all/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Join a machine as a control plane instance&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-join all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_etcd/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Add a new local etcd member&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-join etcd [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_mark-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-join_mark-control-plane/</guid><description>&lt;!--
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Mark a node as a control-plane&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-join mark-control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Prepare the machine for serving a control plane&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-prepare [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Prepares the machine for serving a control plane
 kubeadm join phase control-plane-prepare all
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for control-plane-prepare&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_all/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Prepare the machine for serving a control plane&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]
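
# Example (the API server endpoint is illustrative):
kubeadm join phase control-plane-prepare all 192.168.0.200:6443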
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_certs/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the certificates for the new control plane components&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]
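
# Example (the API server endpoint is illustrative):
kubeadm join phase control-plane-prepare certs 192.168.0.200:6443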
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_control-plane/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the manifests for the new control plane components&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-prepare control-plane [flags]
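
# Example (the advertise address is illustrative):
kubeadm join phase control-plane-prepare control-plane --apiserver-advertise-address 192.168.0.201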
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_download-certs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_download-certs/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Download certificates shared among control-plane nodes from the kubeadm-certs Secret&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [flags]
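
# Example (endpoint and key are illustrative; $CERT_KEY holds the hex-encoded
# certificate key produced during "kubeadm init"):
kubeadm join phase control-plane-prepare download-certs 192.168.0.200:6443 --certificate-key "$CERT_KEY"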
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_kubeconfig/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_control-plane-prepare_kubeconfig/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Generate the kubeconfig for the new control plane components&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags]
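
# Example (endpoint and key are illustrative):
kubeadm join phase control-plane-prepare kubeconfig 192.168.0.200:6443 --certificate-key "$CERT_KEY"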
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-key string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_etcd-join/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_etcd-join/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Join etcd for control plane nodes&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase etcd-join [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Joins etcd for a control plane instance
 kubeadm join phase etcd-join
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-start/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-start/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Write kubelet settings and certificates, and (re)start the kubelet&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Write a file with KubeletConfiguration and an environment file with node-specific kubelet settings, and then (re)start the kubelet.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-wait-bootstrap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_kubelet-wait-bootstrap/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Wait for the kubelet to bootstrap itself&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase kubelet-wait-bootstrap [flags]
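
# Example (the configuration file name is illustrative):
kubeadm join phase kubelet-wait-bootstrap --config kubeadm-config.yaml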
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_preflight/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Run join pre-flight checks&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run pre-flight checks for kubeadm join.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase preflight [api-server-endpoint] [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Run join pre-flight checks using a config file.
 kubeadm join phase preflight --config kubeadm-config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--apiserver-advertise-address string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;If the node should host a new control plane instance, the IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_wait-control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_join/kubeadm_join_phase_wait-control-plane/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Wait for the control plane to start&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm join phase wait-control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for wait-control-plane&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_kubeconfig/kubeadm_kubeconfig_user/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_kubeconfig/kubeadm_kubeconfig_user/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Output a kubeconfig file for an additional user.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm kubeconfig user [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt; # Output a kubeconfig file for an additional user named foo
 kubeadm kubeconfig user --client-name=foo
 
 # Output a kubeconfig file for an additional user named foo using a kubeadm config file bar
 kubeadm kubeconfig user --client-name=foo --config=bar
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--client-name string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The name of the user. It will be used as the CN if client certificates are created.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;reset&amp;quot; workflow&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm reset phase [flags]
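
# Example: run only the cleanup-node phase of the reset workflow
kubeadm reset phase cleanup-node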
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for phase&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_cleanup-node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_cleanup-node/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run cleanup node.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm reset phase cleanup-node [flags]
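
# Example (the certificate directory shown is the documented default):
kubeadm reset phase cleanup-node --cert-dir /etc/kubernetes/pki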
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--cert-dir string&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: "/etc/kubernetes/pki"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;The path to the directory where the certificates are stored. If specified, clean this directory.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_preflight/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Run reset pre-flight checks&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run pre-flight checks for kubeadm reset.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm reset phase preflight [flags]
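
# Example: preview the pre-flight checks without applying any changes
kubeadm reset phase preflight --dry-run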
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Don't apply any changes; just output what would be done.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_remove-etcd-member/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset/kubeadm_reset_phase_remove-etcd-member/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Remove a local etcd member for a control plane node.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm reset phase remove-etcd-member [flags]
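
# Example: show what would be removed without changing the etcd cluster
kubeadm reset phase remove-etcd-member --dry-run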
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--dry-run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Don't apply any changes; just output what would be done.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_create/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_create/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Create bootstrap tokens on the server&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command will create a bootstrap token for you.
You can specify the usages for this token, the &amp;quot;time to live&amp;quot;, and an optional human-friendly description.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_delete/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_delete/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Delete bootstrap tokens on the server&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command will delete a list of bootstrap tokens for you.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_generate/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_generate/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Generate and print a bootstrap token, but do not create it on the server&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command will print out a randomly-generated bootstrap token that can be used with
the &amp;quot;init&amp;quot; and &amp;quot;join&amp;quot; commands.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_token/kubeadm_token_list/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;List bootstrap tokens on the server&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;This command will list all bootstrap tokens for you.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade your Kubernetes cluster to the specified version&lt;/p&gt;
&lt;p&gt;The &amp;quot;apply [version]&amp;quot; command executes the following phases:&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;apply&amp;quot; workflow&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for phase&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the default kubeadm addons&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase addon [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for addon&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_all/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade all the addons&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase addon all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_coredns/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the CoreDNS addon&lt;/p&gt;
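&lt;p&gt;For example, the CoreDNS addon can be upgraded on its own, optionally passing a kubeadm configuration file (the file name below is illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase addon coredns --config kubeadm-config.yaml
&lt;/code&gt;&lt;/pre&gt;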
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase addon coredns [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_addon_kube-proxy/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the kube-proxy addon&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase addon kube-proxy [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_bootstrap-token/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_bootstrap-token/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Configures bootstrap token and cluster-info RBAC rules&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase bootstrap-token [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_control-plane/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the control plane&lt;/p&gt;
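&lt;p&gt;For example, certificate renewal (enabled by default, see the options below) can be disabled explicitly when certificates are managed outside of kubeadm:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase control-plane --certificate-renewal=false
&lt;/code&gt;&lt;/pre&gt;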
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-renewal&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Perform the renewal of certificates used by components that are changed during upgrades.&lt;/p&gt;
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the kubelet configuration for this node by downloading it from the kubelet-config ConfigMap stored in the cluster&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_post-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_post-upgrade/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run post-upgrade tasks&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase post-upgrade [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_preflight/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run preflight checks before upgrade&lt;/p&gt;
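&lt;p&gt;For example, to run only the preflight checks while also considering pre-release Kubernetes versions (using the flag documented in the options below):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase preflight --allow-experimental-upgrades
&lt;/code&gt;&lt;/pre&gt;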
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase preflight [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--allow-experimental-upgrades&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to alpha/beta/release candidate versions of Kubernetes.&lt;/p&gt;
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload the kubeadm and kubelet configurations to ConfigMaps&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase upload-config [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for upload-config&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_all/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload all the configurations to ConfigMaps&lt;/p&gt;
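&lt;p&gt;For example, to re-upload both the kubeadm and kubelet configurations after an upgrade, optionally pointing at a kubeadm configuration file (the file name is illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase upload-config all --config kubeadm-config.yaml
&lt;/code&gt;&lt;/pre&gt;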
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase upload-config all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubeadm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubeadm/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload the kubeadm ClusterConfiguration to a ConfigMap&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase upload-config kubeadm [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubelet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_apply_phase_upload-config_kubelet/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upload the kubelet configuration to a ConfigMap&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade apply phase upload-config kubelet [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_diff/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_diff/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade commands for a node in the cluster&lt;/p&gt;
&lt;p&gt;The &amp;quot;node&amp;quot; command executes the following phases:&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Use this command to invoke a single phase of the &amp;quot;node&amp;quot; workflow&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for phase&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the default kubeadm addons&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase addon [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;-h, --help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;help for addon&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_all/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_all/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade all the addons&lt;/p&gt;
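&lt;p&gt;For example, to upgrade all default addons in one step as part of the &amp;quot;node&amp;quot; workflow:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase addon all
&lt;/code&gt;&lt;/pre&gt;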
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase addon all [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_coredns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_coredns/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the CoreDNS addon&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase addon coredns [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_kube-proxy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_addon_kube-proxy/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the kube-proxy addon&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase addon kube-proxy [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_control-plane/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_control-plane/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the control plane instance deployed on this node, if any&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase control-plane [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--certificate-renewal&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Perform the renewal of certificates used by components changed during upgrades.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_kubelet-config/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_kubelet-config/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Upgrade the kubelet configuration for this node by downloading it from the kubelet-config ConfigMap stored in the cluster&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_post-upgrade/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_post-upgrade/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run post upgrade tasks&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase post-upgrade [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_preflight/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_node_phase_preflight/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;p&gt;Run upgrade node pre-flight checks&lt;/p&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Run pre-flight checks for kubeadm upgrade node.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;kubeadm upgrade node phase preflight [flags]
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="options"&gt;Options&lt;/h3&gt;
 &lt;table style="width: 100%; table-layout: fixed;"&gt;
&lt;colgroup&gt;
&lt;col span="1" style="width: 10px;" /&gt;
&lt;col span="1" /&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;--config string&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;&lt;td style="line-height: 130%; word-wrap: break-word;"&gt;&lt;p&gt;Path to a kubeadm configuration file.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_plan/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade/kubeadm_upgrade_plan/</guid><description>&lt;!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
--&gt;
&lt;h3 id="synopsis"&gt;Synopsis&lt;/h3&gt;
&lt;p&gt;Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. This command can only run on the control plane nodes where the kubeconfig file &amp;quot;admin.conf&amp;quot; exists. To skip the internet check, pass in the optional [version] parameter.&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/generated/readme/</guid><description>&lt;p&gt;All files in this directory are auto-generated from other repos. &lt;strong&gt;Do not edit them manually. You must edit them in their upstream repo.&lt;/strong&gt;&lt;/p&gt;</description></item><item><title/><link>https://andygol-k8s.netlify.app/examples/readme/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/examples/readme/</guid><description>&lt;p&gt;To run the tests for a localization, use the following command:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;go test k8s.io/website/content/&amp;lt;lang&amp;gt;/examples
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;where &lt;code&gt;&amp;lt;lang&amp;gt;&lt;/code&gt; is the two character representation of a language. For example:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;go test k8s.io/website/content/en/examples
&lt;/code&gt;&lt;/pre&gt;</description></item><item><title>adidas Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/adidas/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/adidas/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In recent years, the adidas team was happy with its software choices from a technology perspective—but accessing all of the tools was a problem. For instance, "just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who's responsible, give the internal cost center a call so that they can do recharges," says Daniel Eichten, Senior Director of Platform Engineering. "The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week."&lt;/p&gt;</description></item><item><title>Amadeus Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/amadeus/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/amadeus/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company's goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.&lt;/p&gt;</description></item><item><title>Ancestry Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ancestry/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ancestry/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that at their core, result in real and emotional human responses. &lt;a href="https://www.ancestry.com"&gt;Ancestry&lt;/a&gt; currently serves more than 2.6 million paying subscribers, holds 20 billion historical records, 90 million family trees and more than four million people are in its AncestryDNA network, making it the largest consumer genomics DNA network in the world. The company's popular website, &lt;a href="https://www.ancestry.com"&gt;ancestry.com&lt;/a&gt;, has been working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but had become quite cumbersome in its processing and is time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our&amp;nbsp;products."&lt;/p&gt;</description></item><item><title>Ant Financial Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ant-financial/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ant-financial/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Officially founded in October 2014, &lt;a href="https://www.antfin.com/index.htm?locale=en_us"&gt;Ant Financial&lt;/a&gt; originated from &lt;a href="https://global.alipay.com/"&gt;Alipay&lt;/a&gt;, the world's largest online payment platform that launched in 2004. The company also offers numerous other services leveraging technology innovation. With the volume of transactions Alipay handles for its 900+ million users worldwide (through its local and global partners)—256,000 transactions per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018—not to mention that of its other services, Ant Financial faces "data processing challenge in a whole new way," says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. "We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and then we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level." In order to provide reliable and consistent services to its customers, Ant Financial embraced containers in early 2014, and soon needed an orchestration solution for the tens-of-thousands-of-node clusters in its data centers.&lt;/p&gt;</description></item><item><title>Auditing</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/audit/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/audit/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes &lt;em&gt;auditing&lt;/em&gt; provides a security-relevant, chronological set of records documenting
the sequence of actions in a cluster. The cluster audits the activities generated by users,
by applications that use the Kubernetes API, and by the control plane itself.&lt;/p&gt;
&lt;p&gt;Auditing allows cluster administrators to answer the following questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;what happened?&lt;/li&gt;
&lt;li&gt;when did it happen?&lt;/li&gt;
&lt;li&gt;who initiated it?&lt;/li&gt;
&lt;li&gt;on what did it happen?&lt;/li&gt;
&lt;li&gt;where was it observed?&lt;/li&gt;
&lt;li&gt;from where was it initiated?&lt;/li&gt;
&lt;li&gt;to where was it going?&lt;/li&gt;
&lt;/ul&gt;
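&lt;p&gt;Which events are recorded, and in how much detail, is governed by an audit policy. As a minimal sketch (the single rule shown is illustrative, not a recommended production policy), a policy file that records the metadata of every request looks like this:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# Minimal illustrative audit policy: log request metadata only
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
&lt;/code&gt;&lt;/pre&gt;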
&lt;!-- body --&gt;
&lt;p&gt;Audit records begin their lifecycle inside the
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/command-line-tools-reference/kube-apiserver/"&gt;kube-apiserver&lt;/a&gt;
component. Each request on each stage
of its execution generates an audit event, which is then pre-processed according to
a certain policy and written to a backend. The policy determines what's recorded
and the backends persist the records. The current backend implementations
include log files and webhooks.&lt;/p&gt;</description></item><item><title>BlaBlaCar Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/blablacar/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/blablacar/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The world's largest long-distance carpooling community, &lt;a href="https://www.blablacar.com/"&gt;BlaBlaCar&lt;/a&gt;, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.&lt;/p&gt;</description></item><item><title>BlackRock Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/blackrock/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/blackrock/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The world's largest asset manager, &lt;a href="https://www.blackrock.com/investing"&gt;BlackRock&lt;/a&gt; operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning &lt;a href="https://www.python.org"&gt;Python&lt;/a&gt; notebooks, or even something much more advanced, like a MapReduce engine based on &lt;a href="https://spark.apache.org"&gt;Spark&lt;/a&gt;," says Michael Francis, a Managing Director in BlackRock's Product Group, which runs the company's investment management platform. "Managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?"&lt;/p&gt;</description></item><item><title>Box Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/box/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/box/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. &lt;a href="https://www.box.com/home"&gt;Box&lt;/a&gt; was built primarily with bare metal inside the company's own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It's been a huge challenge because different clouds, especially bare metal, have very different interfaces."&lt;/p&gt;</description></item><item><title>Buffer Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/buffer/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/buffer/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary."&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Embracing containerization, Buffer moved its infrastructure from Amazon Web Services' Elastic Beanstalk to Docker on AWS, orchestrated with Kubernetes.&lt;/p&gt;</description></item><item><title>Capital One Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/capital-one/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/capital-one/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The team set out to build a provisioning platform for &lt;a href="https://www.capitalone.com/"&gt;Capital One&lt;/a&gt; applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;The decision to run &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. There's a degree of affinity in our product development."&lt;/p&gt;</description></item><item><title>CERN Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/cern/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/cern/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;At CERN, the European Organization for Nuclear Research, physicists conduct experiments to learn about fundamental science. In its particle accelerators, "we accelerate protons to very high energy, close to the speed of light, and we make the two beams of protons collide," says CERN Software Engineer Ricardo Rocha. "The end result is a lot of data that we have to process." CERN currently stores 330 petabytes of data in its data centers, and an upgrade of its accelerators expected in the next few years will drive that number up by 10x. Additionally, the organization experiences extreme peaks in its workloads during periods prior to big conferences, and needs its infrastructure to scale to those peaks. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up," says Rocha. "We've been looking to new technologies that can help improve our efficiency in our infrastructure so that we can dedicate more of our resources to the actual processing of the data."&lt;/p&gt;</description></item><item><title>China Unicom Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/chinaunicom/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/chinaunicom/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;China Unicom is one of the top three telecom operators in China, and to serve its 300 million users, the company runs several data centers with thousands of servers in each, using &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; containerization and &lt;a href="https://www.vmware.com/"&gt;VMware&lt;/a&gt; and &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt; infrastructure since 2016. Unfortunately, "the resource utilization rate was relatively low," says Chengyu Zhang, Group Leader of Platform Technology R&amp;D, "and we didn't have a cloud platform to accommodate our hundreds of applications." Formerly an entirely state-owned company, China Unicom has in recent years taken private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and is now focusing on internal development using open source technology, rather than commercial products. As such, Zhang's China Unicom Lab team began looking for open source orchestration for its cloud infrastructure.&lt;/p&gt;</description></item><item><title>City of Montreal Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/city-of-montreal/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/city-of-montreal/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Like many governments, Montréal has a number of legacy systems, and "we have systems that are older than some developers working here," says the city's CTO, Jean-Martin Thibault. "We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Like all big corporations, some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years." There are over 1,000 applications in all, and most of them were running on different ecosystems. In 2015, a new management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance for the city. They needed to figure out how to modernize the architecture.&lt;/p&gt;</description></item><item><title>Client Authentication (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/client-authentication.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/client-authentication.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#client-authentication-k8s-io-v1-ExecCredential"&gt;ExecCredential&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="client-authentication-k8s-io-v1-ExecCredential"&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;ExecCredential is used by exec-based plugins to communicate credentials to
HTTP transports.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;client.authentication.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;spec&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1-ExecCredentialSpec"&gt;&lt;code&gt;ExecCredentialSpec&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Spec holds information passed to the plugin by the transport.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;status&lt;/code&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1-ExecCredentialStatus"&gt;&lt;code&gt;ExecCredentialStatus&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Status is filled in by the plugin and holds the credentials that the transport
should use to contact the API.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
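&lt;p&gt;As an illustrative example (the token value is a placeholder), an exec plugin that authenticates with a bearer token writes an ExecCredential with a populated status to its standard output:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;{
  &amp;quot;apiVersion&amp;quot;: &amp;quot;client.authentication.k8s.io/v1&amp;quot;,
  &amp;quot;kind&amp;quot;: &amp;quot;ExecCredential&amp;quot;,
  &amp;quot;status&amp;quot;: {
    &amp;quot;token&amp;quot;: &amp;quot;my-bearer-token&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;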
&lt;h2 id="client-authentication-k8s-io-v1-Cluster"&gt;&lt;code&gt;Cluster&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#client-authentication-k8s-io-v1-ExecCredentialSpec"&gt;ExecCredentialSpec&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cluster contains information to allow an exec plugin to communicate
with the kubernetes cluster being authenticated to.&lt;/p&gt;</description></item><item><title>Client Authentication (v1beta1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/client-authentication.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/client-authentication.v1beta1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredential"&gt;ExecCredential&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="client-authentication-k8s-io-v1beta1-ExecCredential"&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;ExecCredential is used by exec-based plugins to communicate credentials to
HTTP transports.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;client.authentication.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ExecCredential&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;spec&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredentialSpec"&gt;&lt;code&gt;ExecCredentialSpec&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Spec holds information passed to the plugin by the transport.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;status&lt;/code&gt;&lt;br/&gt;
&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredentialStatus"&gt;&lt;code&gt;ExecCredentialStatus&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Status is filled in by the plugin and holds the credentials that the transport
should use to contact the API.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="client-authentication-k8s-io-v1beta1-Cluster"&gt;&lt;code&gt;Cluster&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#client-authentication-k8s-io-v1beta1-ExecCredentialSpec"&gt;ExecCredentialSpec&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cluster contains information to allow an exec plugin to communicate
with the kubernetes cluster being authenticated to.&lt;/p&gt;</description></item><item><title>Configure Certificate Rotation for the Kubelet</title><link>https://andygol-k8s.netlify.app/docs/tasks/tls/certificate-rotation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tls/certificate-rotation/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to enable and configure certificate rotation for the kubelet.&lt;/p&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.19 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Kubernetes version 1.8.0 or later is required&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- steps --&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;The kubelet uses certificates for authenticating to the Kubernetes API. By
default, these certificates are issued with a one-year expiration so that they do
not need to be renewed too frequently.&lt;/p&gt;
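&lt;p&gt;As a minimal sketch (field name per the &lt;code&gt;KubeletConfiguration&lt;/code&gt; reference; verify against your Kubernetes version), kubelet client certificate rotation is controlled in the kubelet configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Request a new client certificate as the current one approaches expiration
rotateCertificates: true
&lt;/code&gt;&lt;/pre&gt;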
&lt;p&gt;Kubernetes includes &lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/"&gt;kubelet certificate
rotation&lt;/a&gt;,
which automatically generates a new key and requests a new certificate from
the Kubernetes API as the current certificate approaches expiration. Once the
new certificate is available, it is used for authenticating connections to
the Kubernetes API.&lt;/p&gt;</description></item><item><title>Contribute to Kubernetes Documentation</title><link>https://andygol-k8s.netlify.app/docs/contribute/docs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/contribute/docs/</guid><description>&lt;p&gt;This website is maintained by &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/#get-involved-with-sig-docs"&gt;Kubernetes SIG Docs&lt;/a&gt;.
The Kubernetes project welcomes help from all contributors, new or experienced!&lt;/p&gt;
&lt;p&gt;Kubernetes documentation contributors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Improve existing content&lt;/li&gt;
&lt;li&gt;Create new content&lt;/li&gt;
&lt;li&gt;Translate the documentation&lt;/li&gt;
&lt;li&gt;Manage and publish the documentation parts of the Kubernetes release cycle&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The blog team, part of SIG Docs, helps manage the official blogs. Read
&lt;a href="https://andygol-k8s.netlify.app/docs/contribute/blog/"&gt;contributing to Kubernetes blogs&lt;/a&gt; to learn more.&lt;/p&gt;
&lt;hr&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;To learn more about contributing to Kubernetes in general, see the general
&lt;a href="https://www.kubernetes.dev/docs/"&gt;contributor documentation&lt;/a&gt; site.&lt;/div&gt;

&lt;!-- body --&gt;
&lt;h2 id="getting-started"&gt;Getting started&lt;/h2&gt;
&lt;p&gt;Anyone can open an issue about documentation, or contribute a change with a
pull request (PR) to the
&lt;a href="https://github.com/kubernetes/website"&gt;&lt;code&gt;kubernetes/website&lt;/code&gt; GitHub repository&lt;/a&gt;.
You need to be comfortable with
&lt;a href="https://git-scm.com/"&gt;git&lt;/a&gt; and
&lt;a href="https://skills.github.com/"&gt;GitHub&lt;/a&gt;
to work effectively in the Kubernetes community.&lt;/p&gt;</description></item><item><title>Crowdfire Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/crowdfire/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/crowdfire/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.crowdfireapp.com/"&gt;Crowdfire&lt;/a&gt; helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on &lt;a href="https://cloud.google.com/appengine/"&gt;Google App Engine&lt;/a&gt;, and in 2015, the company began a transformation to microservices running on Amazon Web Services &lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;Elastic Beanstalk&lt;/a&gt;. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.&lt;/p&gt;</description></item><item><title>DaoCloud Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/daocloud/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/daocloud/</guid><description>&lt;h2&gt;Challenges&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.daocloud.io/en/"&gt;DaoCloud&lt;/a&gt;, founded in 2014, is an innovation leader in the field of cloud native. It boasts independent intellectual property rights of core technologies for crafting an open cloud platform to empower the digital transformation of enterprises.&lt;/p&gt;

&lt;p&gt;DaoCloud has been engaged in cloud native since its inception. As containerization is crucial for cloud native business, a cloud platform that does not have containers as infrastructure is unlikely to attract its potential users. Therefore, the first challenge confronting DaoCloud is how to efficiently manage and schedule numerous containers while maintaining stable connectivity between them.&lt;/p&gt;</description></item><item><title>Debug Running Pods</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-running-pod/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/debug-running-pod/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page explains how to debug Pods running (or crashing) on a Node.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Your &lt;a class='glossary-tooltip' title='A Pod represents a set of running containers in your cluster.' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/workloads/pods/' target='_blank' aria-label='Pod'&gt;Pod&lt;/a&gt; should already be
scheduled and running. If your Pod is not yet running, start with &lt;a href="https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/"&gt;Debugging
Pods&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;For some of the advanced debugging steps you need to know on which Node the
Pod is running and have shell access to run commands on that Node. You don't
need that access to run the standard debug steps that use &lt;code&gt;kubectl&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="using-kubectl-describe-pod-to-fetch-details-about-pods"&gt;Using &lt;code&gt;kubectl describe pod&lt;/code&gt; to fetch details about pods&lt;/h2&gt;
&lt;p&gt;For this example we'll use a Deployment to create two pods, similar to the earlier example.&lt;/p&gt;</description></item><item><title>Debugging Kubernetes Nodes With Kubectl</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/kubectl-node-debug/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/kubectl-node-debug/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to debug a &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/architecture/nodes/"&gt;node&lt;/a&gt;
running on the Kubernetes cluster using the &lt;code&gt;kubectl debug&lt;/code&gt; command.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Developing and debugging services locally using telepresence</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/local-debugging/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/local-debugging/</guid><description>&lt;!-- overview --&gt;
&lt;div class="alert alert-secondary callout third-party-content" role="note"&gt;&lt;strong&gt;Note:&lt;/strong&gt;&amp;puncsp;This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the &lt;a href="https://andygol-k8s.netlify.app/docs/contribute/style/content-guide/#third-party-content"&gt;content guide&lt;/a&gt; before submitting a change. &lt;a href="#third-party-content-disclaimer"&gt;More information.&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;Kubernetes applications usually consist of multiple, separate services,
each running in its own container. Developing and debugging these services
on a remote Kubernetes cluster can be cumbersome, requiring you to
&lt;a href="https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/get-shell-running-container/"&gt;get a shell on a running container&lt;/a&gt;
in order to run debugging tools.&lt;/p&gt;</description></item><item><title>Docs smoke test page</title><link>https://andygol-k8s.netlify.app/docs/test/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/test/</guid><description>&lt;p&gt;This page serves two purposes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Demonstrate how the Kubernetes documentation uses Markdown&lt;/li&gt;
&lt;li&gt;Provide a &amp;quot;smoke test&amp;quot; document we can use to test HTML, CSS, and template
changes that affect the overall documentation.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="heading-levels"&gt;Heading levels&lt;/h2&gt;
&lt;p&gt;The above heading is an H2. The page title renders as an H1. The following
sections show H3 - H6.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#800080;font-weight:bold"&gt;### H3
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;This is in an H3 section.
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#800080;font-weight:bold"&gt;#### H4
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;This is in an H4 section.
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#800080;font-weight:bold"&gt;##### H5
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;This is in an H5 section.
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#800080;font-weight:bold"&gt;###### H6
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;This is in an H6 section.
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Produces:&lt;/p&gt;</description></item><item><title>Download Kubernetes</title><link>https://andygol-k8s.netlify.app/releases/download/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/releases/download/</guid><description>&lt;p&gt;Kubernetes ships binaries for each component as well as a standard set of client
applications to bootstrap or interact with a cluster. Components like the
API server are capable of running within container images inside of a
cluster. Those components are also shipped in container images as part of the
official release process. All binaries as well as container images are available
for multiple operating systems as well as hardware architectures.&lt;/p&gt;</description></item><item><title>Event Rate Limit Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#eventratelimit-admission-k8s-io-v1alpha1-Configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="eventratelimit-admission-k8s-io-v1alpha1-Configuration"&gt;&lt;code&gt;Configuration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Configuration provides configuration for the EventRateLimit admission
controller.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;eventratelimit.admission.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Configuration&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;limits&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#eventratelimit-admission-k8s-io-v1alpha1-Limit"&gt;&lt;code&gt;[]Limit&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;limits are the limits to place on event queries received.
Limits can be placed on events received server-wide, per namespace,
per user, and per source+object.
At least one limit is required.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
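&lt;p&gt;A hypothetical &lt;code&gt;Configuration&lt;/code&gt; illustrating the shape described above (the limit type and values are examples only):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  # Limit Event requests separately for each namespace
  - type: Namespace
    qps: 50
    burst: 100
&lt;/code&gt;&lt;/pre&gt;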
&lt;h2 id="eventratelimit-admission-k8s-io-v1alpha1-Limit"&gt;&lt;code&gt;Limit&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#eventratelimit-admission-k8s-io-v1alpha1-Configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Limit is the configuration for a particular limit type.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;type&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#eventratelimit-admission-k8s-io-v1alpha1-LimitType"&gt;&lt;code&gt;LimitType&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;type is the type of limit to which this configuration applies&lt;/p&gt;</description></item><item><title>Extend kubectl with plugins</title><link>https://andygol-k8s.netlify.app/docs/tasks/extend-kubectl/kubectl-plugins/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/extend-kubectl/kubectl-plugins/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This guide demonstrates how to install and write extensions for &lt;a href="https://andygol-k8s.netlify.app/docs/reference/kubectl/kubectl/"&gt;kubectl&lt;/a&gt;.
Core &lt;code&gt;kubectl&lt;/code&gt; commands are the essential building blocks for interacting with a Kubernetes cluster,
and a cluster administrator can think of plugins as a means of combining these building blocks to create more complex behavior.
Plugins extend &lt;code&gt;kubectl&lt;/code&gt; with new sub-commands, allowing for new and custom features not included in the main distribution of &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a working &lt;code&gt;kubectl&lt;/code&gt; binary installed.&lt;/p&gt;</description></item><item><title>Extend Service IP Ranges</title><link>https://andygol-k8s.netlify.app/docs/tasks/network/extend-service-ip-ranges/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/network/extend-service-ip-ranges/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: MultiCIDRServiceAllocator"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This document shares how to extend the existing Service IP range assigned to a cluster.&lt;/p&gt;
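&lt;p&gt;As a sketch, an additional range can be added by creating a &lt;code&gt;ServiceCIDR&lt;/code&gt; object (the name and CIDR below are hypothetical examples; the API version may differ by release):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: newservicecidr
spec:
  cidrs:
    - 10.96.100.0/24   # hypothetical additional Service IP range
&lt;/code&gt;&lt;/pre&gt;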
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Get a Shell to a Running Container</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/get-shell-running-container/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-application/get-shell-running-container/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to use &lt;code&gt;kubectl exec&lt;/code&gt; to get a shell to a
running container.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>GolfNow Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/golfnow/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/golfnow/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A member of the &lt;a href="http://www.nbcunicareers.com/our-businesses/nbc-sports-group"&gt;NBC Sports Group&lt;/a&gt;, &lt;a href="https://www.golfnow.com/"&gt;GolfNow&lt;/a&gt; is the golf industry's technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow's monolithic application became problematic. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow's Director, Architecture. "We wanted the ability to more easily expand globally."&lt;/p&gt;</description></item><item><title>Haufe Group Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/haufegroup/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/haufegroup/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 1930 as a traditional publisher, Haufe Group has grown into a media and software company with 95 percent of its sales from digital products. Over the years, the company has gone from having "hardware in the basement" to outsourcing its infrastructure operations and IT. More recently, the development of new products, from Internet portals for tax experts to personnel training software, has created demands for increased speed, reliability and scalability. "We need to be able to move faster," says Solution Architect Martin Danielsson. "Adapting workloads is something that we really want to be able to do."&lt;/p&gt;</description></item><item><title>Huawei Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/huawei/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/huawei/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, &lt;a href="https://www.huawei.com/"&gt;Huawei&lt;/a&gt; has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."&lt;/p&gt;</description></item><item><title>IBM Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ibm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ibm/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/cloud/"&gt;IBM Cloud&lt;/a&gt; offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; and containers, to &lt;a href="https://www.cloudfoundry.org"&gt;Cloud Foundry&lt;/a&gt; platform as a service (PaaS). These runtimes are combined with the power of the company's enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM's Weather Company API and data services. In the later part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.&lt;/p&gt;</description></item><item><title>Image Policy API (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/imagepolicy.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/imagepolicy.v1alpha1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#imagepolicy-k8s-io-v1alpha1-ImageReview"&gt;ImageReview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="imagepolicy-k8s-io-v1alpha1-ImageReview"&gt;&lt;code&gt;ImageReview&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;ImageReview checks if the set of images in a pod are allowed.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;imagepolicy.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ImageReview&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;br/&gt;
&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#objectmeta-v1-meta"&gt;&lt;code&gt;meta/v1.ObjectMeta&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/p&gt;
Refer to the Kubernetes API documentation for the fields of the &lt;code&gt;metadata&lt;/code&gt; field.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;spec&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec"&gt;&lt;code&gt;ImageReviewSpec&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Spec holds information about the pod being evaluated&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;status&lt;/code&gt;&lt;br/&gt;
&lt;a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewStatus"&gt;&lt;code&gt;ImageReviewStatus&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Status is filled in by the backend and indicates whether the pod should be allowed.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
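&lt;p&gt;For illustration, the backend receives an &lt;code&gt;ImageReview&lt;/code&gt; shaped like the following (the image and namespace are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
spec:
  containers:
    - image: registry.example/myapp:v1   # placeholder image from the pod spec
  namespace: default
&lt;/code&gt;&lt;/pre&gt;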
&lt;h2 id="imagepolicy-k8s-io-v1alpha1-ImageReviewContainerSpec"&gt;&lt;code&gt;ImageReviewContainerSpec&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec"&gt;ImageReviewSpec&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;ImageReviewContainerSpec is a description of a container within the pod creation request.&lt;/p&gt;</description></item><item><title>JD.com Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/jd-com/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/jd-com/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;With more than 300 million active users and total 2017 revenue of more than $55 billion, &lt;a href="https://corporate.JD.com/home"&gt;JD.com&lt;/a&gt; is China's largest retailer, and its operations are the epitome of hyperscale. For example, there are more than a trillion images in JD.com's product databases—with 100 million being added daily—and this enormous amount of data needs to be instantly accessible. In 2014, JD.com moved its applications to containers running on bare metal machines using OpenStack and Docker to "speed up the delivery of our computing resources and make the operations much simpler," says Haifeng Liu, JD.com's Chief Architect. But by the end of 2015, with tens of thousands of nodes running in multiple data centers, "we encountered a lot of problems because our platform was not strong enough, and we suffered from bottlenecks and scalability issues," says Liu. "We needed infrastructure for the next five years of development, now."&lt;/p&gt;</description></item><item><title>kube-apiserver Admission (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-admission.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-admission.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#admission-k8s-io-v1-AdmissionReview"&gt;AdmissionReview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="admission-k8s-io-v1-AdmissionReview"&gt;&lt;code&gt;AdmissionReview&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;AdmissionReview describes an admission review request/response.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;admission.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;AdmissionReview&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;request&lt;/code&gt;&lt;br/&gt;
&lt;a href="#admission-k8s-io-v1-AdmissionRequest"&gt;&lt;code&gt;AdmissionRequest&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;request describes the attributes for the admission request.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;response&lt;/code&gt;&lt;br/&gt;
&lt;a href="#admission-k8s-io-v1-AdmissionResponse"&gt;&lt;code&gt;AdmissionResponse&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;response describes the attributes for the admission response.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
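&lt;p&gt;For illustration, an admission webhook replies with an &lt;code&gt;AdmissionReview&lt;/code&gt; whose &lt;code&gt;response&lt;/code&gt; is filled in (shown as YAML; the &lt;code&gt;uid&lt;/code&gt; must echo the value from the request):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: copied-from-request-uid   # must match request.uid
  allowed: true
&lt;/code&gt;&lt;/pre&gt;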
&lt;h2 id="admission-k8s-io-v1-AdmissionRequest"&gt;&lt;code&gt;AdmissionRequest&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#admission-k8s-io-v1-AdmissionReview"&gt;AdmissionReview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;AdmissionRequest describes the admission.Attributes for the admission request.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;uid&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/types#UID"&gt;&lt;code&gt;k8s.io/apimachinery/pkg/types.UID&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;uid is an identifier for the individual request/response. It allows us to distinguish instances of requests which are
otherwise identical (parallel requests, requests when earlier requests did not modify, etc.).
The UID is meant to track the round trip (request/response) between the Kubernetes API server and the webhook, not the user request.
It is suitable for correlating log entries between the webhook and apiserver, for either auditing or debugging.&lt;/p&gt;</description></item><item><title>kube-apiserver Audit Configuration (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-audit.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-audit.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-Event"&gt;Event&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-EventList"&gt;EventList&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-Policy"&gt;Policy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-PolicyList"&gt;PolicyList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="audit-k8s-io-v1-Event"&gt;&lt;code&gt;Event&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#audit-k8s-io-v1-EventList"&gt;EventList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Event captures all the information that can be included in an API audit log.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;audit.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Event&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;level&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#audit-k8s-io-v1-Level"&gt;&lt;code&gt;Level&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;AuditLevel at which event was generated&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;auditID&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/types#UID"&gt;&lt;code&gt;k8s.io/apimachinery/pkg/types.UID&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Unique audit ID, generated for each request.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;stage&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#audit-k8s-io-v1-Stage"&gt;&lt;code&gt;Stage&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Stage of the request handling when this event instance was generated.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;requestURI&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;RequestURI is the request URI as sent by the client to a server.&lt;/p&gt;</description></item><item><title>kube-apiserver Configuration (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-config.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-config.v1/</guid><description>&lt;p&gt;Package v1 is the v1 version of the API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-AdmissionConfiguration"&gt;AdmissionConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-AuthenticationConfiguration"&gt;AuthenticationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-AuthorizationConfiguration"&gt;AuthorizationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-EncryptionConfiguration"&gt;EncryptionConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="TracingConfiguration"&gt;&lt;code&gt;TracingConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-config-k8s-io-v1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1beta1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;endpoint&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Endpoint of the collector that this component will report traces to.
The connection is insecure and does not currently support TLS.
It is recommended to leave this unset, in which case the OTLP gRPC default,
localhost:4317, is used.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;samplingRatePerMillion&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;int32&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;SamplingRatePerMillion is the number of samples to collect per million spans.
It is recommended to leave this unset; if unset, the sampler respects its parent span's sampling
rate, but otherwise never samples.&lt;/p&gt;</description></item><item><title>kube-apiserver Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-config.v1alpha1/</guid><description>&lt;p&gt;Package v1alpha1 is the v1alpha1 version of the API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-AdmissionConfiguration"&gt;AdmissionConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-AuthenticationConfiguration"&gt;AuthenticationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-AuthorizationConfiguration"&gt;AuthorizationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration"&gt;EgressSelectorConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="TracingConfiguration"&gt;&lt;code&gt;TracingConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;endpoint&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Endpoint of the collector that this component will report traces to.
The connection is insecure and does not currently support TLS.
It is recommended to leave this unset, in which case the OTLP gRPC default,
localhost:4317, is used.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;samplingRatePerMillion&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;int32&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;SamplingRatePerMillion is the number of samples to collect per million spans.
It is recommended to leave this unset; if unset, the sampler respects its parent span's sampling
rate, but otherwise never samples.&lt;/p&gt;</description></item><item><title>kube-apiserver Configuration (v1beta1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-config.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-config.v1beta1/</guid><description>&lt;p&gt;Package v1beta1 is the v1beta1 version of the API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-AuthenticationConfiguration"&gt;AuthenticationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-AuthorizationConfiguration"&gt;AuthorizationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration"&gt;EgressSelectorConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#apiserver-k8s-io-v1beta1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="TracingConfiguration"&gt;&lt;code&gt;TracingConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1alpha1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#apiserver-k8s-io-v1beta1-TracingConfiguration"&gt;TracingConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;endpoint&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Endpoint of the collector that this component will report traces to.
The connection is insecure and does not currently support TLS.
It is recommended to leave this unset, in which case the OTLP gRPC default,
localhost:4317, is used.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;samplingRatePerMillion&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;int32&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;SamplingRatePerMillion is the number of samples to collect per million spans.
It is recommended to leave this unset; if unset, the sampler respects its parent span's sampling
rate, but otherwise never samples.&lt;/p&gt;</description></item><item><title>kube-controller-manager Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kube-controller-manager-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kube-controller-manager-config.v1alpha1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration"&gt;CloudControllerManagerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration"&gt;LeaderMigrationConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration"&gt;KubeControllerManagerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="ClientConnectionConfiguration"&gt;&lt;code&gt;ClientConnectionConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration"&gt;KubeSchedulerConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"&gt;GenericControllerManagerConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;ClientConnectionConfiguration contains details for constructing a client.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kubeconfig&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;kubeconfig is the path to a KubeConfig file.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;acceptContentTypes&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;contentType&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;contentType is the content type used when sending data to the server from this client.&lt;/p&gt;</description></item><item><title>kube-proxy Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kube-proxy-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kube-proxy-config.v1alpha1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration"&gt;KubeProxyConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="FormatOptions"&gt;&lt;code&gt;FormatOptions&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#LoggingConfiguration"&gt;LoggingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;FormatOptions contains options for the different logging formats.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;text&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#TextOptions"&gt;&lt;code&gt;TextOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;[Alpha] Text contains options for logging format &amp;quot;text&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;json&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#JSONOptions"&gt;&lt;code&gt;JSONOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;[Alpha] JSON contains options for logging format &amp;quot;json&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="JSONOptions"&gt;&lt;code&gt;JSONOptions&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#FormatOptions"&gt;FormatOptions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;JSONOptions contains options for logging format &amp;quot;json&amp;quot;.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;OutputRoutingOptions&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#OutputRoutingOptions"&gt;&lt;code&gt;OutputRoutingOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;(Members of &lt;code&gt;OutputRoutingOptions&lt;/code&gt; are embedded into this type.)
 &lt;span class="text-muted"&gt;No description provided.&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="LogFormatFactory"&gt;&lt;code&gt;LogFormatFactory&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;LogFormatFactory provides support for a certain additional,
non-default log format.&lt;/p&gt;</description></item><item><title>kube-scheduler Configuration (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kube-scheduler-config.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kube-scheduler-config.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs"&gt;DefaultPreemptionArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-DynamicResourcesArgs"&gt;DynamicResourcesArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-InterPodAffinityArgs"&gt;InterPodAffinityArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration"&gt;KubeSchedulerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-NodeAffinityArgs"&gt;NodeAffinityArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-NodeResourcesBalancedAllocationArgs"&gt;NodeResourcesBalancedAllocationArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs"&gt;NodeResourcesFitArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-PodTopologySpreadArgs"&gt;PodTopologySpreadArgs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-VolumeBindingArgs"&gt;VolumeBindingArgs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="ClientConnectionConfiguration"&gt;&lt;code&gt;ClientConnectionConfiguration&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration"&gt;KubeSchedulerConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;ClientConnectionConfiguration contains details for constructing a client.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kubeconfig&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;kubeconfig is the path to a KubeConfig file.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;acceptContentTypes&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.&lt;/p&gt;</description></item><item><title>kubeadm Configuration (v1beta3)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubeadm-config.v1beta3/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubeadm-config.v1beta3/</guid><description>&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;Package v1beta3 defines the v1beta3 version of the kubeadm configuration file format.
This version improves on the v1beta2 format by fixing some minor issues and adding a few new fields.&lt;/p&gt;
&lt;p&gt;A list of changes since v1beta2:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The deprecated &amp;quot;ClusterConfiguration.useHyperKubeImage&amp;quot; field has been removed.
Kubeadm no longer supports the hyperkube image.&lt;/li&gt;
&lt;li&gt;The &amp;quot;ClusterConfiguration.dns.type&amp;quot; field has been removed, since CoreDNS is the only
DNS server type supported by kubeadm.&lt;/li&gt;
&lt;li&gt;Include &amp;quot;datapolicy&amp;quot; tags on the fields that hold secrets.
As a result, the values of these fields are omitted when API structures are printed with klog.&lt;/li&gt;
&lt;li&gt;Add &amp;quot;InitConfiguration.skipPhases&amp;quot;, &amp;quot;JoinConfiguration.skipPhases&amp;quot; to allow skipping
a list of phases during kubeadm init/join command execution.&lt;/li&gt;
&lt;li&gt;Add &amp;quot;InitConfiguration.nodeRegistration.imagePullPolicy&amp;quot; and &amp;quot;JoinConfiguration.nodeRegistration.imagePullPolicy&amp;quot;
to allow specifying the image pull policy during kubeadm &amp;quot;init&amp;quot; and &amp;quot;join&amp;quot;.
The value must be one of &amp;quot;Always&amp;quot;, &amp;quot;Never&amp;quot; or &amp;quot;IfNotPresent&amp;quot;.
&amp;quot;IfNotPresent&amp;quot; is the default, which has been the existing behavior prior to this addition.&lt;/li&gt;
&lt;li&gt;Add &amp;quot;InitConfiguration.patches.directory&amp;quot;, &amp;quot;JoinConfiguration.patches.directory&amp;quot; to allow
the user to configure a directory from which to take patches for components deployed by kubeadm.&lt;/li&gt;
&lt;li&gt;Move the BootstrapToken* API and related utilities out of the &amp;quot;kubeadm&amp;quot; API group to a new group
&amp;quot;bootstraptoken&amp;quot;. The kubeadm API version v1beta3 no longer contains the BootstrapToken* structures.&lt;/li&gt;
&lt;/ul&gt;
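As an illustrative sketch only (the phase name, pull policy, and patches directory below are placeholder values, not recommendations), several of the v1beta3 additions listed above can appear together in an InitConfiguration:

```yaml
# Hypothetical kubeadm InitConfiguration exercising the v1beta3 additions.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
# Skip a list of phases during "kubeadm init".
skipPhases:
  - addon/kube-proxy
nodeRegistration:
  # One of "Always", "Never", or "IfNotPresent" (the default).
  imagePullPolicy: IfNotPresent
patches:
  # Directory from which kubeadm takes patches for the components it deploys.
  directory: /etc/kubernetes/patches
```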
&lt;p&gt;Migration from old kubeadm config versions&lt;/p&gt;</description></item><item><title>kubeadm Configuration (v1beta4)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/</guid><description>&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;Package v1beta4 defines the v1beta4 version of the kubeadm configuration file format.
This version improves on the v1beta3 format by fixing some minor issues and adding a few new fields.&lt;/p&gt;
&lt;p&gt;A list of changes since v1beta3:&lt;/p&gt;
&lt;p&gt;v1.35:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add &lt;code&gt;httpEndpoints&lt;/code&gt; field to &lt;code&gt;ClusterConfiguration.etcd.externalEtcd&lt;/code&gt; that can be used to
configure the HTTP endpoints for etcd communication in v1beta4.
This field is used to separate the HTTP traffic (such as &lt;code&gt;/metrics&lt;/code&gt; and &lt;code&gt;/health&lt;/code&gt; endpoints)
from the gRPC traffic handled by &lt;code&gt;endpoints&lt;/code&gt;.
This separation allows for better access control, as HTTP endpoints can be exposed without
exposing the primary gRPC interface.
Corresponds to etcd's &lt;code&gt;--listen-client-http-urls&lt;/code&gt; configuration.
If not provided, &lt;code&gt;endpoints&lt;/code&gt; will be used for both gRPC and HTTP traffic.&lt;/li&gt;
&lt;/ul&gt;
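A minimal sketch of how the field described above might be used (the endpoint URLs and file paths are placeholders, and the YAML stanza name etcd.external is assumed from existing kubeadm configurations):

```yaml
# Hypothetical ClusterConfiguration separating etcd gRPC and HTTP traffic.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
etcd:
  external:
    # Primary gRPC client endpoints.
    endpoints:
      - https://10.0.0.10:2379
    # HTTP-only endpoints (for example /metrics and /health); corresponds
    # to etcd's --listen-client-http-urls flag.
    httpEndpoints:
      - https://10.0.0.10:2382
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```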
&lt;p&gt;v1.34:&lt;/p&gt;</description></item><item><title>kubeconfig (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubeconfig.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubeconfig.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#Config"&gt;Config&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="Config"&gt;&lt;code&gt;Config&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Config holds the information needed to build connections to remote Kubernetes clusters as a given user.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Config&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Legacy field from pkg/api/types.go TypeMeta.
TODO(jlowdermilk): remove this after eliminating downstream dependencies.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Legacy field from pkg/api/types.go TypeMeta.
TODO(jlowdermilk): remove this after eliminating downstream dependencies.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;preferences,omitzero&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#Preferences"&gt;&lt;code&gt;Preferences&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Preferences holds general information to be used for CLI interactions.
Deprecated: this field is deprecated in v1.34. It is not used by any of the Kubernetes components.&lt;/p&gt;</description></item><item><title>Kubelet Configuration (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1-CredentialProviderConfig"&gt;CredentialProviderConfig&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubelet-config-k8s-io-v1-CredentialProviderConfig"&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;CredentialProviderConfig is the configuration containing information about
each exec credential provider. Kubelet reads this configuration from disk and enables
each provider as specified by the CredentialProvider type.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubelet.config.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;providers&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubelet-config-k8s-io-v1-CredentialProvider"&gt;&lt;code&gt;[]CredentialProvider&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;providers is a list of credential provider plugins that will be enabled by the kubelet.
Multiple providers may match against a single image, in which case credentials
from all providers will be returned to the kubelet. If multiple providers are called
for a single image, the results are combined. If providers return overlapping
auth keys, the value from the provider earlier in this list is attempted first.&lt;/p&gt;</description></item><item><title>Kubelet Configuration (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1alpha1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig"&gt;CredentialProviderConfig&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1alpha1-ImagePullIntent"&gt;ImagePullIntent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1alpha1-ImagePulledRecord"&gt;ImagePulledRecord&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig"&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;CredentialProviderConfig is the configuration containing information about
each exec credential provider. Kubelet reads this configuration from disk and enables
each provider as specified by the CredentialProvider type.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubelet.config.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;CredentialProviderConfig&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;providers&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubelet-config-k8s-io-v1alpha1-CredentialProvider"&gt;&lt;code&gt;[]CredentialProvider&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;providers is a list of credential provider plugins that will be enabled by the kubelet.
Multiple providers may match against a single image, in which case credentials
from all providers will be returned to the kubelet. If multiple providers are called
for a single image, the results are combined. If providers return overlapping
auth keys, the value from the provider earlier in this list is attempted first.&lt;/p&gt;</description></item><item><title>Kubelet Configuration (v1beta1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-config.v1beta1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig"&gt;CredentialProviderConfig&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-ImagePullIntent"&gt;ImagePullIntent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-ImagePulledRecord"&gt;ImagePulledRecord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-KubeletConfiguration"&gt;KubeletConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource"&gt;SerializedNodeConfigSource&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="FormatOptions"&gt;&lt;code&gt;FormatOptions&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#LoggingConfiguration"&gt;LoggingConfiguration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;FormatOptions contains options for the different logging formats.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;text&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#TextOptions"&gt;&lt;code&gt;TextOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;[Alpha] Text contains options for logging format &amp;quot;text&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;json&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#JSONOptions"&gt;&lt;code&gt;JSONOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;[Alpha] JSON contains options for logging format &amp;quot;json&amp;quot;.
Only available when the LoggingAlphaOptions feature gate is enabled.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="JSONOptions"&gt;&lt;code&gt;JSONOptions&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#FormatOptions"&gt;FormatOptions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;JSONOptions contains options for logging format &amp;quot;json&amp;quot;.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;OutputRoutingOptions&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#OutputRoutingOptions"&gt;&lt;code&gt;OutputRoutingOptions&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;(Members of &lt;code&gt;OutputRoutingOptions&lt;/code&gt; are embedded into this type.)
 &lt;span class="text-muted"&gt;No description provided.&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="LogFormatFactory"&gt;&lt;code&gt;LogFormatFactory&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;LogFormatFactory provides support for a certain additional,
non-default log format.&lt;/p&gt;</description></item><item><title>Kubelet CredentialProvider (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-credentialprovider.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kubelet-credentialprovider.v1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest"&gt;CredentialProviderRequest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse"&gt;CredentialProviderResponse&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest"&gt;&lt;code&gt;CredentialProviderRequest&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;CredentialProviderRequest includes the image that the kubelet requires authentication for.
Kubelet will pass this request object to the plugin via stdin. In general, plugins should
prefer responding with the same apiVersion they were sent.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;credentialprovider.kubelet.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;CredentialProviderRequest&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;image&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;image is the container image that is being pulled as part of the
credential provider plugin request. Plugins may optionally parse the image
to extract any information required to fetch credentials.&lt;/p&gt;</description></item><item><title>kuberc (v1alpha1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kuberc.v1alpha1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kuberc.v1alpha1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubectl-config-k8s-io-v1alpha1-Preference"&gt;Preference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubectl-config-k8s-io-v1alpha1-Preference"&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Preference stores the elements of the kuberc configuration file.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubectl.config.k8s.io/v1alpha1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;overrides&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1alpha1-CommandDefaults"&gt;&lt;code&gt;[]CommandDefaults&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;overrides allows changing the default flag values of commands.
This is especially useful when the user doesn't want to explicitly
set the same flags each time.&lt;/p&gt;
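&lt;p&gt;As an illustrative sketch (the nested field names are assumed from the CommandDefaults type, and the chosen command and flag are examples only), a kuberc file that defaults &lt;code&gt;kubectl apply&lt;/code&gt; to server-side apply might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
overrides:
- command: apply
  flags:
  - name: server-side
    default: "true"
&lt;/code&gt;&lt;/pre&gt;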
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;aliases&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1alpha1-AliasOverride"&gt;&lt;code&gt;[]AliasOverride&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;aliases allow defining command aliases for existing kubectl commands, with optional default flag values.
If the alias name collides with a built-in command, the built-in command always takes precedence.
Flag overrides defined in the overrides section do NOT apply to aliases for the same command.
kubectl [ALIAS NAME] [USER_FLAGS] [USER_EXPLICIT_ARGS] expands to
kubectl [COMMAND] # built-in command alias points to
[KUBERC_PREPEND_ARGS]
[USER_FLAGS]
[KUBERC_FLAGS] # rest of the flags that are not passed by user in [USER_FLAGS]
[USER_EXPLICIT_ARGS]
[KUBERC_APPEND_ARGS]
e.g.&lt;/p&gt;</description></item><item><title>kuberc (v1beta1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/kuberc.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/kuberc.v1beta1/</guid><description>&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#kubectl-config-k8s-io-v1beta1-Preference"&gt;Preference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="kubectl-config-k8s-io-v1beta1-Preference"&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Preference stores the elements of the kuberc configuration file.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;kubectl.config.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;Preference&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;defaults&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1beta1-CommandDefaults"&gt;&lt;code&gt;[]CommandDefaults&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;defaults allows changing the default option values of commands.
This is especially useful when the user doesn't want to explicitly
set the same options each time.&lt;/p&gt;
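&lt;p&gt;As an illustrative sketch (the nested field names, including &lt;code&gt;options&lt;/code&gt;, are assumed from the CommandDefaults type for this version, and the chosen command and option are examples only), a kuberc file that defaults &lt;code&gt;kubectl apply&lt;/code&gt; to server-side apply might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
defaults:
- command: apply
  options:
  - name: server-side
    default: "true"
&lt;/code&gt;&lt;/pre&gt;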
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;aliases&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="#kubectl-config-k8s-io-v1beta1-AliasOverride"&gt;&lt;code&gt;[]AliasOverride&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;aliases allow defining command aliases for existing kubectl commands, with optional default option values.
If the alias name collides with a built-in command, the built-in command always takes precedence.
Option overrides defined in the defaults section do NOT apply to aliases for the same command.
kubectl [ALIAS NAME] [USER_OPTIONS] [USER_EXPLICIT_ARGS] expands to
kubectl [COMMAND] # built-in command alias points to
[KUBERC_PREPEND_ARGS]
[USER_OPTIONS]
[KUBERC_OPTIONS] # rest of the options that are not passed by user in [USER_OPTIONS]
[USER_EXPLICIT_ARGS]
[KUBERC_APPEND_ARGS]
e.g.&lt;/p&gt;</description></item><item><title>Kubernetes Community Code of Conduct</title><link>https://andygol-k8s.netlify.app/community/code-of-conduct/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/community/code-of-conduct/</guid><description>&lt;p&gt;&lt;em&gt;Kubernetes follows the
&lt;a href="https://github.com/cncf/foundation/blob/main/code-of-conduct.md"&gt;CNCF Code of Conduct&lt;/a&gt;.
The text of the CNCF CoC is replicated below, as of
&lt;a href="https://github.com/cncf/foundation/blob/71412bb029090d42ecbeadb39374a337bfb48a9c/code-of-conduct.md"&gt;commit 71412bb02&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;div id="cncf-code-of-conduct"&gt;

	&lt;!-- Do not edit this file directly. Get the latest from
 https://github.com/cncf/foundation/blob/main/code-of-conduct.md --&gt;
&lt;h2 id="cncf-community-code-of-conduct-v1-3"&gt;CNCF Community Code of Conduct v1.3&lt;/h2&gt;
&lt;h3 id="community-code-of-conduct"&gt;Community Code of Conduct&lt;/h3&gt;
&lt;p&gt;As contributors, maintainers, and participants in the CNCF community, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who participate or contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, attending conferences or events, or engaging in other community or project activities.&lt;/p&gt;</description></item><item><title>Kubernetes Custom Metrics (v1beta2)</title><link>https://andygol-k8s.netlify.app/docs/reference/external-api/custom-metrics.v1beta2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/external-api/custom-metrics.v1beta2/</guid><description>&lt;p&gt;Package v1beta2 is the v1beta2 version of the custom_metrics API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricListOptions"&gt;MetricListOptions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricValue"&gt;MetricValue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricValueList"&gt;MetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="custom-metrics-k8s-io-v1beta2-MetricListOptions"&gt;&lt;code&gt;MetricListOptions&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;MetricListOptions is used to select metrics by their label selectors.&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;custom.metrics.k8s.io/v1beta2&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;MetricListOptions&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;labelSelector&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;A selector to restrict the list of returned objects by their labels.
Defaults to everything.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metricLabelSelector&lt;/code&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;A selector to restrict the list of returned metrics by their labels.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
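&lt;p&gt;For example, a metric label selector can be passed as a query parameter when reading a custom metric through the aggregated API (the metric name &lt;code&gt;http_requests&lt;/code&gt; and the label &lt;code&gt;verb&lt;/code&gt; here are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?metricLabelSelector=verb%3DGET"
&lt;/code&gt;&lt;/pre&gt;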
&lt;h2 id="custom-metrics-k8s-io-v1beta2-MetricValue"&gt;&lt;code&gt;MetricValue&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#custom-metrics-k8s-io-v1beta2-MetricValueList"&gt;MetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;MetricValue is the metric value for some object&lt;/p&gt;</description></item><item><title>Kubernetes Default ServiceCIDR Reconfiguration</title><link>https://andygol-k8s.netlify.app/docs/tasks/network/reconfigure-default-service-ip-ranges/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/network/reconfigure-default-service-ip-ranges/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: MultiCIDRServiceAllocator"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.33 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;This document describes how to reconfigure the default Service IP range(s) assigned
to a cluster.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Kubernetes External Metrics (v1beta1)</title><link>https://andygol-k8s.netlify.app/docs/reference/external-api/external-metrics.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/external-api/external-metrics.v1beta1/</guid><description>&lt;p&gt;Package v1beta1 is the v1beta1 version of the external metrics API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#external-metrics-k8s-io-v1beta1-ExternalMetricValue"&gt;ExternalMetricValue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#external-metrics-k8s-io-v1beta1-ExternalMetricValueList"&gt;ExternalMetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="external-metrics-k8s-io-v1beta1-ExternalMetricValue"&gt;&lt;code&gt;ExternalMetricValue&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#external-metrics-k8s-io-v1beta1-ExternalMetricValueList"&gt;ExternalMetricValueList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;ExternalMetricValue is a metric value for an external metric.
A single metric value is identified by a metric name and a set of string labels.
For one metric there can be multiple values with different sets of labels.&lt;/p&gt;
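&lt;p&gt;An illustrative value, restricted to the fields shown below (the metric name and label set are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  "apiVersion": "external.metrics.k8s.io/v1beta1",
  "kind": "ExternalMetricValue",
  "metricName": "queue_messages_ready",
  "metricLabels": {
    "queue": "worker_tasks"
  }
}
&lt;/code&gt;&lt;/pre&gt;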
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;external.metrics.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;ExternalMetricValue&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metricName&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;the name of the metric&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metricLabels&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;map[string]string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;a set of labels that identify a single time series for the metric&lt;/p&gt;</description></item><item><title>Kubernetes Metrics (v1beta1)</title><link>https://andygol-k8s.netlify.app/docs/reference/external-api/metrics.v1beta1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/external-api/metrics.v1beta1/</guid><description>&lt;p&gt;Package v1beta1 is the v1beta1 version of the metrics API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-NodeMetrics"&gt;NodeMetrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-NodeMetricsList"&gt;NodeMetricsList&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-PodMetrics"&gt;PodMetrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-PodMetricsList"&gt;PodMetricsList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="metrics-k8s-io-v1beta1-NodeMetrics"&gt;&lt;code&gt;NodeMetrics&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Appears in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#metrics-k8s-io-v1beta1-NodeMetricsList"&gt;NodeMetricsList&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;NodeMetrics sets resource usage metrics of a node.&lt;/p&gt;
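&lt;p&gt;Assuming an implementation of this API (such as metrics-server) is running in the cluster, node metrics can be read from the aggregated API, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
&lt;/code&gt;&lt;/pre&gt;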
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;metrics.k8s.io/v1beta1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;NodeMetrics&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;br/&gt;
&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#objectmeta-v1-meta"&gt;&lt;code&gt;meta/v1.ObjectMeta&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;Standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata&lt;/p&gt;
Refer to the Kubernetes API documentation for the fields of the &lt;code&gt;metadata&lt;/code&gt; field.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;timestamp&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#time-v1-meta"&gt;&lt;code&gt;meta/v1.Time&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;The timestamp and window fields define the time interval from which metrics were
collected: [Timestamp-Window, Timestamp].&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;window&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"&gt;&lt;code&gt;meta/v1.Duration&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;span class="text-muted"&gt;No description provided.&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;usage&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#resourcelist-v1-core"&gt;&lt;code&gt;core/v1.ResourceList&lt;/code&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;The memory usage is the memory working set.&lt;/p&gt;</description></item><item><title>Kubernetes Metrics Reference</title><link>https://andygol-k8s.netlify.app/docs/reference/instrumentation/metrics/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/instrumentation/metrics/</guid><description>&lt;h2 id="metrics-v1-35"&gt;Metrics (v1.35)&lt;/h2&gt;
&lt;!-- (auto-generated 2026 Jan 06) --&gt;
&lt;!-- (auto-generated v1.35) --&gt;
&lt;p&gt;This page details the metrics that different Kubernetes components export. You can query the metrics endpoint for these
components using an HTTP scrape, and fetch the current metrics data in Prometheus format.&lt;/p&gt;
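&lt;p&gt;For example, assuming you are authorized to read the &lt;code&gt;/metrics&lt;/code&gt; endpoint, you can fetch the API server's current metrics in Prometheus format with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl get --raw /metrics
&lt;/code&gt;&lt;/pre&gt;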
&lt;h3 id="list-of-stable-kubernetes-metrics"&gt;List of Stable Kubernetes Metrics&lt;/h3&gt;
&lt;p&gt;Stable metrics observe strict API contracts and no labels can be added or removed from stable metrics during their lifetime.&lt;/p&gt;
&lt;div class="metrics"&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_admission_controller_admission_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Admission controller latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit).&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;name&lt;/span&gt;&lt;span class="metric_label"&gt;operation&lt;/span&gt;&lt;span class="metric_label"&gt;rejected&lt;/span&gt;&lt;span class="metric_label"&gt;type&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_admission_step_admission_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Admission sub-step latency histogram in seconds, broken out for each operation and API resource and step type (validate or admit).&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;operation&lt;/span&gt;&lt;span class="metric_label"&gt;rejected&lt;/span&gt;&lt;span class="metric_label"&gt;type&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_admission_webhook_admission_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Admission webhook latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit).&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;name&lt;/span&gt;&lt;span class="metric_label"&gt;operation&lt;/span&gt;&lt;span class="metric_label"&gt;rejected&lt;/span&gt;&lt;span class="metric_label"&gt;type&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_current_inflight_requests&lt;/div&gt;
	&lt;div class="metric_help"&gt;Maximal number of currently used inflight request limit of this apiserver per request kind in last second.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="gauge"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Gauge&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;request_kind&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_longrunning_requests&lt;/div&gt;
	&lt;div class="metric_help"&gt;Gauge of all active long-running apiserver requests broken out by verb, group, version, resource, scope and component. Not all requests are tracked this way.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="gauge"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Gauge&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;component&lt;/span&gt;&lt;span class="metric_label"&gt;group&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;scope&lt;/span&gt;&lt;span class="metric_label"&gt;subresource&lt;/span&gt;&lt;span class="metric_label"&gt;verb&lt;/span&gt;&lt;span class="metric_label"&gt;version&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_request_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;component&lt;/span&gt;&lt;span class="metric_label"&gt;dry_run&lt;/span&gt;&lt;span class="metric_label"&gt;group&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;scope&lt;/span&gt;&lt;span class="metric_label"&gt;subresource&lt;/span&gt;&lt;span class="metric_label"&gt;verb&lt;/span&gt;&lt;span class="metric_label"&gt;version&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_request_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Counter of apiserver requests broken out for each verb, dry run value, group, version, resource, scope, component, and HTTP response code.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;code&lt;/span&gt;&lt;span class="metric_label"&gt;component&lt;/span&gt;&lt;span class="metric_label"&gt;dry_run&lt;/span&gt;&lt;span class="metric_label"&gt;group&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;scope&lt;/span&gt;&lt;span class="metric_label"&gt;subresource&lt;/span&gt;&lt;span class="metric_label"&gt;verb&lt;/span&gt;&lt;span class="metric_label"&gt;version&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_requested_deprecated_apis&lt;/div&gt;
	&lt;div class="metric_help"&gt;Gauge of deprecated APIs that have been requested, broken out by API group, version, resource, subresource, and removed_release.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="gauge"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Gauge&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;group&lt;/span&gt;&lt;span class="metric_label"&gt;removed_release&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;subresource&lt;/span&gt;&lt;span class="metric_label"&gt;version&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_response_sizes&lt;/div&gt;
	&lt;div class="metric_help"&gt;Response size distribution in bytes for each group, version, verb, resource, subresource, scope and component.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;component&lt;/span&gt;&lt;span class="metric_label"&gt;group&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;scope&lt;/span&gt;&lt;span class="metric_label"&gt;subresource&lt;/span&gt;&lt;span class="metric_label"&gt;verb&lt;/span&gt;&lt;span class="metric_label"&gt;version&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_storage_objects&lt;/div&gt;
	&lt;div class="metric_help"&gt;[DEPRECATED, consider using apiserver_resource_objects instead] Number of stored objects at the time of last check split by kind. In case of a fetching error, the value will be -1.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="gauge"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Gauge&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;/li&gt;&lt;li class="metric_deprecated_version"&gt;&lt;label class="metric_detail"&gt;Deprecated Versions:&lt;/label&gt;&lt;span&gt;1.34.0&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;apiserver_storage_size_bytes&lt;/div&gt;
	&lt;div class="metric_help"&gt;Size of the storage database file physically allocated in bytes.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;storage_cluster_id&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;container_cpu_usage_seconds_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Cumulative cpu time consumed by the container in core-seconds&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;container&lt;/span&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;container_memory_working_set_bytes&lt;/div&gt;
	&lt;div class="metric_help"&gt;Current working set of the container in bytes&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;container&lt;/span&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;container_start_time_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Start time of the container since unix epoch in seconds&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;container&lt;/span&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;cronjob_controller_job_creation_skew_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Time between when a cronjob is scheduled to be run, and when the corresponding job is created&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;job_controller_job_pods_finished_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;The number of finished Pods that are fully tracked&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;completion_mode&lt;/span&gt;&lt;span class="metric_label"&gt;result&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;job_controller_job_sync_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;The time it took to sync a job&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;action&lt;/span&gt;&lt;span class="metric_label"&gt;completion_mode&lt;/span&gt;&lt;span class="metric_label"&gt;result&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;job_controller_job_syncs_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;The number of job syncs&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;action&lt;/span&gt;&lt;span class="metric_label"&gt;completion_mode&lt;/span&gt;&lt;span class="metric_label"&gt;result&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;job_controller_jobs_finished_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;The number of finished jobs&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;completion_mode&lt;/span&gt;&lt;span class="metric_label"&gt;reason&lt;/span&gt;&lt;span class="metric_label"&gt;result&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;kube_pod_resource_limit&lt;/div&gt;
	&lt;div class="metric_help"&gt;Resources limit for workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;node&lt;/span&gt;&lt;span class="metric_label"&gt;scheduler&lt;/span&gt;&lt;span class="metric_label"&gt;priority&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;unit&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;kube_pod_resource_request&lt;/div&gt;
	&lt;div class="metric_help"&gt;Resources requested by workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;node&lt;/span&gt;&lt;span class="metric_label"&gt;scheduler&lt;/span&gt;&lt;span class="metric_label"&gt;priority&lt;/span&gt;&lt;span class="metric_label"&gt;resource&lt;/span&gt;&lt;span class="metric_label"&gt;unit&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;kubernetes_healthcheck&lt;/div&gt;
	&lt;div class="metric_help"&gt;This metric records the result of a single healthcheck.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="gauge"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Gauge&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;name&lt;/span&gt;&lt;span class="metric_label"&gt;type&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;kubernetes_healthchecks_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;This metric records the results of all healthcheck.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;name&lt;/span&gt;&lt;span class="metric_label"&gt;status&lt;/span&gt;&lt;span class="metric_label"&gt;type&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;node_collector_evictions_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Number of Node evictions that happened since current instance of NodeController started.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;zone&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;node_cpu_usage_seconds_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Cumulative cpu time consumed by the node in core-seconds&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;node_memory_working_set_bytes&lt;/div&gt;
	&lt;div class="metric_help"&gt;Current working set of the node in bytes&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;pod_cpu_usage_seconds_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Cumulative cpu time consumed by the pod in core-seconds&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;pod_memory_working_set_bytes&lt;/div&gt;
	&lt;div class="metric_help"&gt;Current working set of the pod in bytes&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;pod&lt;/span&gt;&lt;span class="metric_label"&gt;namespace&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;resource_scrape_error&lt;/div&gt;
	&lt;div class="metric_help"&gt;1 if there was an error while getting container metrics, 0 otherwise&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="custom"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Custom&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_framework_extension_point_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Latency for running all plugins of a specific extension point.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;extension_point&lt;/span&gt;&lt;span class="metric_label"&gt;profile&lt;/span&gt;&lt;span class="metric_label"&gt;status&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_pending_pods&lt;/div&gt;
	&lt;div class="metric_help"&gt;Number of pending pods, by the queue type. 'active' means number of pods in activeQ; 'backoff' means number of pods in backoffQ; 'unschedulable' means number of pods in unschedulablePods that the scheduler attempted to schedule and failed; 'gated' is the number of unschedulable pods that the scheduler never attempted to schedule because they are gated.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="gauge"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Gauge&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;queue&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_pod_scheduling_attempts&lt;/div&gt;
	&lt;div class="metric_help"&gt;Number of attempts to successfully schedule a pod.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_preemption_attempts_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Total preemption attempts in the cluster till now&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_preemption_victims&lt;/div&gt;
	&lt;div class="metric_help"&gt;Number of selected preemption victims&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_queue_incoming_pods_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Number of pods added to scheduling queues by event and queue type.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;event&lt;/span&gt;&lt;span class="metric_label"&gt;queue&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_schedule_attempts_total&lt;/div&gt;
	&lt;div class="metric_help"&gt;Number of attempts to schedule pods, by the result. 'unschedulable' means a pod could not be scheduled, while 'error' means an internal scheduler problem.&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="counter"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Counter&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;profile&lt;/span&gt;&lt;span class="metric_label"&gt;result&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;&lt;div class="metric" data-stability="stable"&gt;
	&lt;div class="metric_name"&gt;scheduler_scheduling_attempt_duration_seconds&lt;/div&gt;
	&lt;div class="metric_help"&gt;Scheduling attempt latency in seconds (scheduling algorithm + binding)&lt;/div&gt;
	&lt;ul&gt;
	&lt;li&gt;&lt;label class="metric_detail"&gt;Stability Level:&lt;/label&gt;&lt;span class="metric_stability_level"&gt;STABLE&lt;/span&gt;&lt;/li&gt;
	&lt;li data-type="histogram"&gt;&lt;label class="metric_detail"&gt;Type:&lt;/label&gt; &lt;span class="metric_type"&gt;Histogram&lt;/span&gt;&lt;/li&gt;
	&lt;li class="metric_labels_varying"&gt;&lt;label class="metric_detail"&gt;Labels:&lt;/label&gt;&lt;span class="metric_label"&gt;profile&lt;/span&gt;&lt;span class="metric_label"&gt;result&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;
	&lt;/div&gt;
&lt;/div&gt;
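&lt;p&gt;These metrics are exposed in Prometheus format, so the usual query patterns apply. As an illustrative sketch (metric and label names are taken from the list above; a working Prometheus scrape setup is assumed), counters are typically converted to rates, and histograms are aggregated by &lt;code&gt;le&lt;/code&gt; before computing a quantile:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Scheduling attempts per second over the last 5 minutes, split by result
sum by (result) (rate(scheduler_schedule_attempts_total[5m]))

# Approximate 95th-percentile job sync latency, split by action
histogram_quantile(0.95,
  sum by (action, le) (rate(job_controller_job_sync_duration_seconds_bucket[5m])))
&lt;/code&gt;&lt;/pre&gt;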
&lt;h3 id="list-of-beta-kubernetes-metrics"&gt;List of Beta Kubernetes Metrics&lt;/h3&gt;
&lt;p&gt;Beta metrics observe a looser API contract than their stable counterparts. No labels can be removed from a beta metric during its lifetime; however, labels can be added while the metric is in the beta stage. This offers the assurance that beta metrics will honor existing dashboards and alerts, while allowing for amendments in the future.&lt;/p&gt;</description></item><item><title>Kubernetes Release Cycle</title><link>https://andygol-k8s.netlify.app/releases/release/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/releases/release/</guid><description>&lt;!-- THIS CONTENT IS AUTO-GENERATED via https://github.com/kubernetes/website/blob/main/scripts/releng/update-release-info.sh --&gt;
&lt;div class="pageinfo pageinfo-light"&gt;
&lt;p&gt;This content is auto-generated and links may not function. The source of the document is located
&lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/release.md"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;!-- Localization note: omit the pageinfo block when localizing --&gt;
&lt;h2 id="targeting-enhancements-issues-and-prs-to-release-milestones"&gt;Targeting enhancements, Issues and PRs to Release Milestones&lt;/h2&gt;
&lt;p&gt;This document is focused on Kubernetes developers and contributors who need to
create an enhancement, issue, or pull request which targets a specific release
milestone.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#tldr"&gt;TL;DR&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#normal-dev-weeks-1-11"&gt;Normal Dev (Weeks 1-11)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#code-freeze-weeks-12-14"&gt;Code Freeze (Weeks 12-14)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#post-release-weeks-14+"&gt;Post-Release (Weeks 14+)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#definitions"&gt;Definitions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#the-release-cycle"&gt;The Release Cycle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#removal-of-items-from-the-milestone"&gt;Removal Of Items From The Milestone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#adding-an-item-to-the-milestone"&gt;Adding An Item To The Milestone&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#milestone-maintainers"&gt;Milestone Maintainers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#feature-additions"&gt;Feature additions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#issue-additions"&gt;Issue additions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#pr-additions"&gt;PR Additions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#other-required-labels"&gt;Other Required Labels&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#sig-owner-label"&gt;SIG Owner Label&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#priority-label"&gt;Priority Label&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#issuepr-kind-label"&gt;Issue/PR Kind Label&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The process for shepherding enhancements, issues, and pull requests into a
Kubernetes release spans multiple stakeholders:&lt;/p&gt;</description></item><item><title>Manage HugePages</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-hugepages/scheduling-hugepages/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-hugepages/scheduling-hugepages/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-stable" title="Feature Gate: HugePages"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt; 
 &lt;code&gt;Kubernetes v1.14 [stable]&lt;/code&gt; (enabled by default)&lt;/div&gt;

&lt;p&gt;Kubernetes supports the allocation and consumption of pre-allocated huge pages
by applications in a Pod. This page describes how users can consume huge pages.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Kubernetes nodes must
&lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html"&gt;pre-allocate huge pages&lt;/a&gt;
in order for the node to report its huge page capacity.&lt;/p&gt;
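&lt;p&gt;Huge pages are reserved through kernel boot parameters. A sketch of such a setting in &lt;code&gt;/etc/default/grub&lt;/code&gt; (the values are illustrative: two 1 GiB pages and 512 2 MiB pages):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GRUB_CMDLINE_LINUX="hugepagesz=1G hugepages=2 hugepagesz=2M hugepages=512"
&lt;/code&gt;&lt;/pre&gt;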
&lt;p&gt;A node can pre-allocate huge pages for multiple sizes, for instance,
the following line in &lt;code&gt;/etc/default/grub&lt;/code&gt; allocates &lt;code&gt;2*1GiB&lt;/code&gt; of 1 GiB
and &lt;code&gt;512*2MiB&lt;/code&gt; of 2 MiB pages:&lt;/p&gt;</description></item><item><title>Manage TLS Certificates in a Cluster</title><link>https://andygol-k8s.netlify.app/docs/tasks/tls/managing-tls-in-a-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tls/managing-tls-in-a-cluster/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;Kubernetes provides a &lt;code&gt;certificates.k8s.io&lt;/code&gt; API, which lets you provision TLS
certificates signed by a Certificate Authority (CA) that you control. This CA
and these certificates can be used by your workloads to establish trust.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;certificates.k8s.io&lt;/code&gt; API uses a protocol that is similar to the &lt;a href="https://github.com/ietf-wg-acme/acme/"&gt;ACME
draft&lt;/a&gt;.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;Certificates created using the &lt;code&gt;certificates.k8s.io&lt;/code&gt; API are signed by a
&lt;a href="#configuring-your-cluster-to-provide-signing"&gt;dedicated CA&lt;/a&gt;. It is possible to configure your cluster to use the cluster root
CA for this purpose, but you should never rely on this. Do not assume that
these certificates will validate against the cluster root CA.&lt;/div&gt;
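&lt;p&gt;Requests to this API are expressed as CertificateSigningRequest objects. A minimal illustrative manifest (the name and &lt;code&gt;signerName&lt;/code&gt; below are hypothetical placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace        # hypothetical name
spec:
  request: (base64-encoded PKCS#10 CSR)
  signerName: example.com/serving  # hypothetical signer
  expirationSeconds: 86400         # requested certificate lifetime
  usages:
  - digital signature
  - key encipherment
  - server auth
&lt;/code&gt;&lt;/pre&gt;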

&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Manual Rotation of CA Certificates</title><link>https://andygol-k8s.netlify.app/docs/tasks/tls/manual-rotation-of-ca-certificates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/tls/manual-rotation-of-ca-certificates/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This page shows how to manually rotate the certificate authority (CA) certificates.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
&lt;a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/"&gt;minikube&lt;/a&gt;
or you can use one of these Kubernetes playgrounds:&lt;/p&gt;</description></item><item><title>Mutating Admission Policy</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/mutating-admission-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/mutating-admission-policy/</guid><description>&lt;!-- overview --&gt;







 &lt;div class="feature-state-notice feature-beta"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.34 [beta]&lt;/code&gt;
 &lt;/div&gt;
 


&lt;!-- due to feature gate history, use manual version specification here --&gt;
&lt;p&gt;This page provides an overview of &lt;em&gt;MutatingAdmissionPolicies&lt;/em&gt;.
MutatingAdmissionPolicies allow you to modify objects declaratively as they are written through the Kubernetes API.
If you only need a declarative policy to prevent a particular kind of change to resources (for example, protecting platform namespaces from deletion),
&lt;a href="https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/validating-admission-policy/"&gt;ValidatingAdmissionPolicy&lt;/a&gt;
is a simpler and more effective alternative.&lt;/p&gt;</description></item><item><title>NAIC Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/naic/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/naic/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The &lt;a href="http://www.naic.org/"&gt;National Association of Insurance Commissioners (NAIC)&lt;/a&gt;, the U.S. standard-setting and regulatory support organization, was looking for a way to deliver new services faster to provide more value for members and staff. It also needed greater agility to improve productivity internally.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Beginning in 2016, they started using &lt;a href="https://www.cncf.io/"&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt; tools such as &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;. NAIC began hosting internal systems and development systems on &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; at the beginning of 2018, as part of a broad move toward the public cloud. "Our culture and technology transition is a strategy embraced by our top leaders," says Dan Barker, Chief Enterprise Architect. "It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies."&lt;/p&gt;</description></item><item><title>Nav Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/nav/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/nav/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 2012, &lt;a href="https://www.nav.com/"&gt;Nav&lt;/a&gt; provides small business owners with access to their business credit scores from all three major commercial credit bureaus—Equifax, Experian and Dun &amp; Bradstreet—and financing options that best fit their needs. Five years in, the startup was growing rapidly, and "our cloud environments were getting very large, and our usage of those environments was extremely low, like under 1%," says Director of Engineering Travis Jeppson. "We wanted our usage of cloud environments to be more tightly coupled with what we actually needed, so we started looking at containerization and orchestration to help us be able to run workloads that were distinct from one another but could share a similar resource pool."&lt;/p&gt;</description></item><item><title>NetEase Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/netease/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/netease/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Its gaming business is one of the largest in the world, but that's not all that &lt;a href="https://netease-na.com/"&gt;NetEase&lt;/a&gt; provides to Chinese consumers. The company also operates e-commerce, advertising, music streaming, online education, and email platforms; the last of which serves almost a billion users with free email services through sites like &lt;a href="https://www.163.com/"&gt;163.com&lt;/a&gt;. In 2015, the NetEase Cloud team providing the infrastructure for all of these systems realized that their R&amp;D process was slowing down developers. "Our users needed to prepare all of the infrastructure by themselves," says Feng Changjian, Architect for NetEase Cloud and Container Service. "We were eager to provide the infrastructure and tools for our users automatically via serverless container service."&lt;/p&gt;</description></item><item><title>New York Times Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/newyorktimes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/newyorktimes/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center," says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would "design for the abstractions that cloud providers offer us."&lt;/p&gt;</description></item><item><title>Nokia Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/nokia/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/nokia/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.nokia.com/en_int"&gt;Nokia&lt;/a&gt;'s core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. "As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure," says Gergely Csatari, Senior Open Source Engineer. "There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on &lt;a href="https://cloud.vmware.com/"&gt;VMware Cloud&lt;/a&gt; and &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt; Cloud. We want to run the same product on all of these different infrastructures without changing the product itself."&lt;/p&gt;</description></item><item><title>Nordstrom Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/nordstrom/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/nordstrom/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Nordstrom wanted to increase the efficiency and speed of its technology operations, which includes the Nordstrom.com e-commerce site. At the same time, Nordstrom Technology was looking for ways to tighten its technology operational costs.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;After embracing a DevOps transformation and launching a continuous integration/continuous deployment (CI/CD) project four years ago, the company reduced its deployment time from three months to 30 minutes. But they wanted to go even faster across environments, so they began their cloud native journey, adopting Docker containers orchestrated with &lt;a href="http://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Northwestern Mutual Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/northwestern-mutual/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/northwestern-mutual/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual's leading products and services and meld it with LearnVest's digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company's existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.&lt;/p&gt;</description></item><item><title>OpenAI Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/openai/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/openai/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. "We use Kubernetes mainly as a batch scheduling system and rely on our &lt;a href="https://github.com/openai/kubernetes-ec2-autoscaler"&gt;autoscaler&lt;/a&gt; to dynamically scale up and down our cluster," says Christopher Berner, Head of Infrastructure. "This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."&lt;/p&gt;</description></item><item><title>Patch Releases</title><link>https://andygol-k8s.netlify.app/releases/patch-releases/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/releases/patch-releases/</guid><description>&lt;p&gt;Schedule and team contact information for Kubernetes patch releases.&lt;/p&gt;
&lt;p&gt;For general information about the Kubernetes release cycle, see the
&lt;a href="https://andygol-k8s.netlify.app/releases/release/"&gt;release process description&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="cadence"&gt;Cadence&lt;/h2&gt;
&lt;p&gt;Our typical patch release cadence is monthly. It is
commonly a bit faster (1 to 2 weeks) for the earliest patch releases
after a 1.X minor release. Critical bug fixes may cause a more
immediate release outside of the normal cadence. We also aim to avoid making
releases during major holiday periods.&lt;/p&gt;</description></item><item><title>Pear Deck Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/peardeck/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/peardeck/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The three-year-old startup provides a web app for teachers to interact with their students in the classroom. The JavaScript app was built on Google's web app development platform &lt;a href="https://firebase.google.com/"&gt;Firebase&lt;/a&gt;, using &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt;. As the user base steadily grew, so did the development team. "We outgrew Heroku when we started wanting to have multiple services, and the deploying story got pretty horrendous. We were frustrated that we couldn't have the developers quickly stage a version," says CEO Riley Eynon-Lynch. "Tracing and monitoring became basically impossible." On top of that, many of Pear Deck's customers are behind government firewalls and connect through Firebase, not Pear Deck's servers, making troubleshooting even more difficult.&lt;/p&gt;</description></item><item><title>Pearson Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/pearson/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/pearson/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A global education company serving 75 million learners, Pearson set a goal to more than double that number, to 200 million, by 2025. A key part of this growth is in digital learning experiences, and Pearson was having difficulty in scaling and adapting to its growing online audience. They needed an infrastructure platform that would be able to scale quickly and deliver products to market faster.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;"To transform our infrastructure, we had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms &amp; SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way." The team chose Docker container technology and Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers' productivity."&lt;/p&gt;</description></item><item><title>pingcap Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/pingcap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/pingcap/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;PingCAP is the company leading the development of the popular open source NewSQL database &lt;a href="https://github.com/pingcap/tidb"&gt;TiDB&lt;/a&gt;, which is MySQL-compatible, can handle hybrid transactional and analytical processing (HTAP) workloads, and has a cloud native architectural design. "Having a hybrid multi-cloud product is an important part of our global go-to-market strategy," says Kevin Xu, General Manager of Global Strategy and Operations. In order to achieve that, the team had to address two challenges: "how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world," Xu says, and "how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that's one cloud provider or a combination of different cloud environments." Knowing that using a distributed system isn't easy, they began looking for the right orchestration layer to help reduce some of that complexity for end users.&lt;/p&gt;</description></item><item><title>Nerdalize Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/nerdalize/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/nerdalize/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Nerdalize offers affordable cloud hosting for customers—and free heat and hot water for people who sign up to house the heating devices that contain the company's servers. The savings Nerdalize realizes by not running data centers are passed on to its customers. When the team began using Docker to make its software more portable, it realized it also needed a container orchestration solution. "As a cloud provider, we have internal services for hosting our backends and billing our customers, but we also need to offer our compute to our end users," says Digital Product Engineer Ad van der Veer. "Since we have these heating devices spread across the Netherlands, we need some way of tying that all together."&lt;/p&gt;</description></item><item><title>Prowise Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/prowise/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/prowise/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A Dutch company that produces educational devices and software used around the world, &lt;a href="https://www.prowise.com/en/"&gt;Prowise&lt;/a&gt; had an infrastructure based on Linux services with multiple availability zones in Europe, Australia, and the U.S. "We've grown a lot in the past couple of years, and we started to encounter problems with versioning and flexible scaling," says Senior DevOps Engineer Victor van den Bosch, "not only scaling in demands, but also in being able to deploy multiple products which all have their own versions, their own development teams, and their own problems that they're trying to solve. To be able to put that all on the same platform without much resistance is what we were looking for. We wanted to future proof our infrastructure, and also solve some of the problems that are associated with just running a normal Linux service."&lt;/p&gt;</description></item><item><title>Release Managers</title><link>https://andygol-k8s.netlify.app/releases/release-managers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/releases/release-managers/</guid><description>&lt;p&gt;&amp;quot;Release Managers&amp;quot; is an umbrella term that encompasses the set of Kubernetes
contributors responsible for maintaining release branches and creating releases
by using the tools SIG Release provides.&lt;/p&gt;
&lt;p&gt;The responsibilities of each role are described below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#contact"&gt;Contact&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#security-embargo-policy"&gt;Security Embargo Policy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#handbooks"&gt;Handbooks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#release-managers"&gt;Release Managers&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#becoming-a-release-manager"&gt;Becoming a Release Manager&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#release-manager-associates"&gt;Release Manager Associates&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#becoming-a-release-manager-associate"&gt;Becoming a Release Manager Associate&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#sig-release-leads"&gt;SIG Release Leads&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#chairs"&gt;Chairs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#technical-leads"&gt;Technical Leads&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="contact"&gt;Contact&lt;/h2&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Mailing List&lt;/th&gt;
 &lt;th&gt;Slack&lt;/th&gt;
 &lt;th&gt;Visibility&lt;/th&gt;
 &lt;th&gt;Usage&lt;/th&gt;
 &lt;th&gt;Membership&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="mailto:release-managers@kubernetes.io"&gt;release-managers@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.slack.com/messages/CJH2GBF7Y"&gt;#release-management&lt;/a&gt; (channel) / @release-managers (user group)&lt;/td&gt;
 &lt;td&gt;Public&lt;/td&gt;
 &lt;td&gt;Public discussion for Release Managers&lt;/td&gt;
 &lt;td&gt;All Release Managers (including Associates and SIG Chairs)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="mailto:release-managers-private@kubernetes.io"&gt;release-managers-private@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;N/A&lt;/td&gt;
 &lt;td&gt;Private&lt;/td&gt;
 &lt;td&gt;Private discussion for privileged Release Managers&lt;/td&gt;
 &lt;td&gt;Release Managers, SIG Release leadership&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="mailto:security-release-team@kubernetes.io"&gt;security-release-team@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;&lt;a href="https://kubernetes.slack.com/archives/G0162T1RYHG"&gt;#security-release-team&lt;/a&gt; (channel) / @security-rel-team (user group)&lt;/td&gt;
 &lt;td&gt;Private&lt;/td&gt;
 &lt;td&gt;Security release coordination with the Security Response Committee&lt;/td&gt;
 &lt;td&gt;&lt;a href="mailto:security-discuss-private@kubernetes.io"&gt;security-discuss-private@kubernetes.io&lt;/a&gt;, &lt;a href="mailto:release-managers-private@kubernetes.io"&gt;release-managers-private@kubernetes.io&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="security-embargo-policy"&gt;Security Embargo Policy&lt;/h3&gt;
&lt;p&gt;Some information about releases is subject to embargo, and we have a defined policy for
how those embargoes are set. Please refer to the
&lt;a href="https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy"&gt;Security Embargo Policy&lt;/a&gt;
for more information.&lt;/p&gt;</description></item><item><title>Notes</title><link>https://andygol-k8s.netlify.app/releases/notes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/releases/notes/</guid><description>&lt;p&gt;Release notes can be found by reading the &lt;a href="https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG"&gt;Changelog&lt;/a&gt;
that matches your Kubernetes version. View the changelog for 1.35 on
&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.35.md"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Alternatively, release notes can be searched and filtered online at &lt;a href="https://relnotes.k8s.io"&gt;relnotes.k8s.io&lt;/a&gt;.
View filtered release notes for 1.35 on
&lt;a href="https://relnotes.k8s.io/?releaseVersions=1.35.0"&gt;relnotes.k8s.io&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>ricardo.ch Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/ricardo-ch/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/ricardo-ch/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;A Swiss online marketplace, &lt;a href="https://www.ricardo.ch/de/"&gt;ricardo.ch&lt;/a&gt; was experiencing problems with velocity, as well as a "classic gap" between Development and Operations, with the two sides unable to work well together. "They wanted to, but they didn't have common ground," says Cedric Meury, Head of Platform Engineering. "This was one of the root causes that slowed us down." The company began breaking down the legacy monolith into microservices, and needed orchestration to support the new architecture in its own data centers—as well as bring together Dev and Ops.&lt;/p&gt;</description></item><item><title>Schedule GPUs</title><link>https://andygol-k8s.netlify.app/docs/tasks/manage-gpus/scheduling-gpus/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/manage-gpus/scheduling-gpus/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.26 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;Kubernetes includes &lt;strong&gt;stable&lt;/strong&gt; support for managing AMD and NVIDIA GPUs
(graphics processing units) across different nodes in your cluster, using
&lt;a class='glossary-tooltip' title='Software extensions to let Pods access devices that need vendor-specific initialization or setup' data-toggle='tooltip' data-placement='top' href='https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/' target='_blank' aria-label='device plugins'&gt;device plugins&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This page describes how users can consume GPUs, and outlines
some of the limitations in the implementation.&lt;/p&gt;
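As a sketch of how a workload consumes a GPU through a device plugin (the Pod name, image tag, and the `nvidia.com/gpu` resource name below are illustrative; the resource name assumes the NVIDIA device plugin is installed on the node):

```yaml
# Illustrative Pod that requests one GPU via the extended resource
# advertised by a vendor device plugin (here, NVIDIA's).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvidia/cuda:12.4.1-base-ubuntu22.04   # illustrative tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1      # request one GPU; GPUs cannot be overcommitted
```

GPUs are only supposed to be specified in `limits`; the scheduler places the Pod on a node where the device plugin has advertised a free GPU.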
&lt;!-- body --&gt;
&lt;h2 id="using-device-plugins"&gt;Using device plugins&lt;/h2&gt;
&lt;p&gt;Kubernetes implements device plugins to let Pods access specialized hardware features such as GPUs.&lt;/p&gt;</description></item><item><title>Search Results</title><link>https://andygol-k8s.netlify.app/search/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/search/</guid><description/></item><item><title>Slamtec Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/slamtec/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/slamtec/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 2013, SLAMTEC provides service robot autonomous localization and navigation solutions. The company's strength lies in its R&amp;D team's ability to quickly introduce, and continually iterate on, its core products. In the past few years, the company, which had a legacy infrastructure based on Alibaba Cloud and VMware vSphere, began looking to build its own stable and reliable container cloud platform to host its Internet of Things applications. "Our needs for the cloud platform included high availability, scalability and security; multi-granularity monitoring alarm capability; friendliness to containers and microservices; and perfect CI/CD support," says Benniu Ji, Director of Cloud Computing Business Division.&lt;/p&gt;</description></item><item><title>SOS International Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/sos/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/sos/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;For the past six decades, SOS International has been providing reliable medical and travel assistance in the Nordic region. In recent years, the company's business strategy has required increasingly intense development in the digital space, but when it came to its IT systems, "SOS has a very fragmented legacy," with three traditional monoliths (Java, .NET, and IBM's AS/400) and a waterfall approach, says Martin Ahrentsen, Head of Enterprise Architecture. "We have been forced to institute both new technology and new ways of working, so we could be more efficient with a shorter time to market. It was a much more agile approach, and we needed to have a platform that can help us deliver that to the business."&lt;/p&gt;</description></item><item><title>Spotify Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/spotify/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/spotify/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. "Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today—and hopefully the consumers we'll have in the future," says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called &lt;a href="https://github.com/spotify/helios"&gt;Helios&lt;/a&gt;. By late 2017, it became clear that "having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community," he says.&lt;/p&gt;</description></item><item><title>Squarespace Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/squarespace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/squarespace/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Moving from a monolith to microservices in 2014 "solved a problem on the development side, but it pushed that problem to the infrastructure team," says Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace. "The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;The team experimented with container orchestration platforms, and found that Kubernetes "answered all the questions that we had," says Lynch. The company began running Kubernetes in its data centers in 2016.&lt;/p&gt;</description></item><item><title>ThredUp Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/thredup/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/thredup/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;The largest online consignment store for women's and children's clothes, ThredUP launched in 2009 with a monolithic application running on Amazon Web Services. Though the company began breaking up the monolith into microservices a few years ago, the infrastructure team was still dealing with handcrafted servers, which hampered productivity. "We've configured them just to get them out as fast as we could, but there was no standardization, and as we kept growing, that became a bigger and bigger chore to manage," says Cofounder/CTO Chris Homer. The infrastructure, they realized, needed to be modernized to enable the velocity the company needed. "It's really important to a company like us who's disrupting the retail industry to make sure that as we're building software and getting it out in front of our users, we can do it on a fast cycle and learn a ton as we experiment," adds Homer. "We wanted to make sure that our engineers could embrace the DevOps mindset as they built software. It was really important to us that they could own the life cycle from end to end, from conception at design, through shipping it and running it in production, from marketing to ecommerce, the user experience and our internal distribution center operations."&lt;/p&gt;</description></item><item><title>Validate IPv4/IPv6 dual-stack</title><link>https://andygol-k8s.netlify.app/docs/tasks/network/validate-dual-stack/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/network/validate-dual-stack/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document shares how to validate IPv4/IPv6 dual-stack enabled Kubernetes clusters.&lt;/p&gt;
&lt;h2 id="before-you-begin"&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Provider support for dual-stack networking (the cloud provider or other infrastructure must be able to
provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/"&gt;network plugin&lt;/a&gt;
that supports dual-stack networking.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://andygol-k8s.netlify.app/docs/concepts/services-networking/dual-stack/"&gt;Dual-stack enabled&lt;/a&gt; cluster&lt;/li&gt;
&lt;/ul&gt;


Your Kubernetes server must be at or later than version v1.23.
 &lt;p&gt;To check the version, enter &lt;code&gt;kubectl version&lt;/code&gt;.&lt;/p&gt;


&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;While you can validate with an earlier version, the feature is only GA and officially supported since v1.23.&lt;/div&gt;

&lt;!-- steps --&gt;
&lt;h2 id="validate-addressing"&gt;Validate addressing&lt;/h2&gt;
&lt;h3 id="validate-node-addressing"&gt;Validate node addressing&lt;/h3&gt;
&lt;p&gt;Each dual-stack Node should have a single IPv4 block and a single IPv6 block allocated.
Validate that IPv4/IPv6 Pod address ranges are configured by running the following command.
Replace the sample node name with a valid dual-stack Node from your cluster. In this example,
the Node's name is &lt;code&gt;k8s-linuxpool1-34450317-0&lt;/code&gt;:&lt;/p&gt;</description></item><item><title>Validating Admission Policy</title><link>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/validating-admission-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/access-authn-authz/validating-admission-policy/</guid><description>&lt;!-- overview --&gt;
 &lt;div class="feature-state-notice feature-stable"&gt;
 &lt;span class="feature-state-name"&gt;FEATURE STATE:&lt;/span&gt;
 &lt;code&gt;Kubernetes v1.30 [stable]&lt;/code&gt;
 &lt;/div&gt;
&lt;p&gt;This page provides an overview of Validating Admission Policy.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="what-is-validating-admission-policy"&gt;What is Validating Admission Policy?&lt;/h2&gt;
&lt;p&gt;Validating admission policies offer a declarative, in-process alternative to validating admission webhooks.&lt;/p&gt;
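A minimal sketch of such a declarative policy (the policy name, the `apps/v1` Deployment scoping, and the replica limit are all illustrative, following the shape of the `admissionregistration.k8s.io/v1` API):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-policy.example.com   # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
    # CEL expression evaluated against the incoming object
  - expression: "object.spec.replicas <= 5"
```

A policy has no effect on its own; it must be bound to a scope with a ValidatingAdmissionPolicyBinding before requests are validated against it.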
&lt;p&gt;Validating admission policies use the Common Expression Language (CEL) to declare the validation
rules of a policy.
Validating admission policies are highly configurable, enabling policy authors to define policies
that can be parameterized and scoped to resources as needed by cluster administrators.&lt;/p&gt;</description></item><item><title>Version Skew Policy</title><link>https://andygol-k8s.netlify.app/releases/version-skew-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/releases/version-skew-policy/</guid><description>&lt;!-- overview --&gt;
&lt;p&gt;This document describes the maximum version skew supported between various Kubernetes components.
Specific cluster deployment tools may place additional restrictions on version skew.&lt;/p&gt;
&lt;!-- body --&gt;
&lt;h2 id="supported-versions"&gt;Supported versions&lt;/h2&gt;
&lt;p&gt;Kubernetes versions are expressed as &lt;strong&gt;x.y.z&lt;/strong&gt;, where &lt;strong&gt;x&lt;/strong&gt; is the major version,
&lt;strong&gt;y&lt;/strong&gt; is the minor version, and &lt;strong&gt;z&lt;/strong&gt; is the patch version, following
&lt;a href="https://semver.org/"&gt;Semantic Versioning&lt;/a&gt; terminology. For more information, see
&lt;a href="https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning"&gt;Kubernetes Release Versioning&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Kubernetes project maintains release branches for the most recent three minor releases
(1.35, 1.34, 1.33).
Kubernetes 1.19 and newer receive &lt;a href="https://andygol-k8s.netlify.app/releases/patch-releases/#support-period"&gt;approximately 1 year of patch support&lt;/a&gt;.
Kubernetes 1.18 and older received approximately 9 months of patch support.&lt;/p&gt;</description></item><item><title>vsco Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/vsco/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/vsco/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;After moving from &lt;a href="https://www.rackspace.com/"&gt;Rackspace&lt;/a&gt; to &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt; in 2015, &lt;a href="https://vsco.co/"&gt;VSCO&lt;/a&gt; began building &lt;a href="https://nodejs.org/en/"&gt;Node.js&lt;/a&gt; and &lt;a href="https://go.dev/"&gt;Go&lt;/a&gt; microservices in addition to running its &lt;a href="http://php.net/"&gt;PHP monolith&lt;/a&gt;. The team containerized the microservices using &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, but "they were all in separate groups of &lt;a href="https://aws.amazon.com/ec2/"&gt;EC2&lt;/a&gt; instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the AWS EC2 instances."&lt;/p&gt;</description></item><item><title>WebhookAdmission Configuration (v1)</title><link>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-webhookadmission.v1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/reference/config-api/apiserver-webhookadmission.v1/</guid><description>&lt;p&gt;Package v1 is the v1 version of the API.&lt;/p&gt;
&lt;h2 id="resource-types"&gt;Resource Types&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#apiserver-config-k8s-io-v1-WebhookAdmission"&gt;WebhookAdmission&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="apiserver-config-k8s-io-v1-WebhookAdmission"&gt;&lt;code&gt;WebhookAdmission&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;WebhookAdmission provides configuration for the webhook admission controller.&lt;/p&gt;
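A minimal configuration file of this kind might look as follows (the kubeconfig path is a hypothetical example; the file is passed to the API server via the admission plugin configuration):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: WebhookAdmission
kubeConfigFile: /etc/kubernetes/admission-webhook.kubeconfig   # hypothetical path
```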
&lt;table class="table"&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th width="30%"&gt;Field&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;apiserver.config.k8s.io/v1&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kind&lt;/code&gt;&lt;br/&gt;string&lt;/td&gt;&lt;td&gt;&lt;code&gt;WebhookAdmission&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;kubeConfigFile&lt;/code&gt; &lt;B&gt;[Required]&lt;/B&gt;&lt;br/&gt;
&lt;code&gt;string&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
 &lt;p&gt;KubeConfigFile is the path to the kubeconfig file.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item><item><title>Wikimedia Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/wikimedia/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/wikimedia/</guid><description>&lt;p&gt;The non-profit Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia. To help users maintain and use wikis, it runs Wikimedia Tool Labs, a hosting environment for community developers working on tools and bots to help editors and other volunteers do their work, including reducing vandalism. The community around Wikimedia Tool Labs began forming nearly 10 years ago.&lt;/p&gt;


&lt;div class="quote banner"&gt;

 &lt;div class="quote-text"&gt;
 &lt;img src="https://andygol-k8s.netlify.app/images/wikimedia_logo.png" alt="Wikimedia"&gt;
&lt;br&gt;
&lt;br&gt;
"Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."

 
 &lt;span class="quote-author"&gt;&amp;mdash; Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs&lt;/span&gt;
 
 &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;Challenges&lt;/h2&gt;

&lt;ul&gt;
 &lt;li&gt;Simplify a complex, difficult-to-manage infrastructure&lt;/li&gt;
 &lt;li&gt;Allow developers to continue writing tools and bots using existing techniques&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why Kubernetes&lt;/h2&gt;

&lt;ul&gt;
 &lt;li&gt;Wikimedia Tool Labs chose Kubernetes because it can mimic existing workflows, while reducing complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Approach&lt;/h2&gt;

&lt;ul&gt;
 &lt;li&gt;Migrate old systems and a complex infrastructure to Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Results&lt;/h2&gt;

&lt;ul&gt;
 &lt;li&gt;20 percent of web tools that account for more than 40 percent of web traffic now run on Kubernetes&lt;/li&gt;
 &lt;li&gt;A 25-node cluster that keeps up with each new Kubernetes release&lt;/li&gt;
 &lt;li&gt;Thousands of lines of old code have been deleted, thanks to Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Using Kubernetes to provide tools for maintaining wikis&lt;/h2&gt;

&lt;p&gt;Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."&lt;/p&gt;</description></item><item><title>Windows debugging tips</title><link>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/windows/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/docs/tasks/debug/debug-cluster/windows/</guid><description>&lt;!-- overview --&gt;
&lt;!-- body --&gt;
&lt;h2 id="troubleshooting-node"&gt;Node-level troubleshooting&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;My Pods are stuck at &amp;quot;ContainerCreating&amp;quot; or restarting over and over&lt;/p&gt;
&lt;p&gt;Ensure that your pause image is compatible with your Windows OS version.
See &lt;a href="https://andygol-k8s.netlify.app/docs/concepts/windows/intro/#pause-container"&gt;Pause container&lt;/a&gt;
for the latest recommended pause image and more information.&lt;/p&gt;

&lt;div class="alert alert-info" role="note"&gt;&lt;h4 class="alert-heading"&gt;Note:&lt;/h4&gt;If using containerd as your container runtime the pause image is specified in the
&lt;code&gt;plugins.plugins.cri.sandbox_image&lt;/code&gt; field of the of config.toml configuration file.&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;My pods show status as &lt;code&gt;ErrImagePull&lt;/code&gt; or &lt;code&gt;ImagePullBackOff&lt;/code&gt;&lt;/p&gt;</description></item><item><title>Wink Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/wink/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/wink/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Building a low-latency, highly reliable infrastructure to serve communications between millions of connected smart-home devices and the company's consumer hubs and mobile app, with an emphasis on horizontal scalability, the ability to encrypt everything quickly, and connections that could be easily brought back up if anything went wrong.&lt;/p&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;Across-the-board use of a Kubernetes-Docker-CoreOS Container Linux stack.&lt;/p&gt;

&lt;h2&gt;Impact&lt;/h2&gt;

&lt;p&gt;"Two of the biggest American retailers [Home Depot and Walmart] are carrying and promoting the brand and the hardware," Wink Head of Engineering Kit Klein says proudly – though he adds that "it really comes with a lot of pressure. It's not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses." And that's further testament to how much faith Klein has in the infrastructure that the Wink team has built. With 80 percent of Wink's workload running on a unified stack of Kubernetes-Docker-CoreOS, the company has put itself in a position to continually innovate and improve its products and services. Committing to this technology, says Klein, "makes building on top of the infrastructure relatively easy."&lt;/p&gt;</description></item><item><title>Woorank Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/woorank/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/woorank/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Founded in 2011, Woorank embraced microservices and containerization early on, so its core product, a tool that helps digital marketers improve their websites' visibility on the internet, consists of 50 applications developed and maintained by a technical team of 12. For two years, the infrastructure ran smoothly on Mesos, but "there were still lots of our own libraries that we had to roll and applications that we had to bring in, so it was very cumbersome for us as a small team to keep those things alive and to update them," says CTO/Cofounder Nils De Moor. So he began looking for a new solution with more automation and self-healing built in, that would better suit the company's human resources.&lt;/p&gt;</description></item><item><title>Yahoo! Japan</title><link>https://andygol-k8s.netlify.app/case-studies/yahoo-japan/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/yahoo-japan/</guid><description/></item><item><title>Zalando Case Study</title><link>https://andygol-k8s.netlify.app/case-studies/zalando/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://andygol-k8s.netlify.app/case-studies/zalando/</guid><description>&lt;h2&gt;Challenge&lt;/h2&gt;

&lt;p&gt;Zalando, Europe's leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a &lt;a href="https://jobs.zalando.com/tech/blog/radical-agility-study-notes/"&gt;radical transformation&lt;/a&gt; resulting in autonomous self-organizing teams. This change required an infrastructure that could scale with the growth of the engineering organization. Zalando's technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasn't immediately considered, as teams migrated to &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services&lt;/a&gt; (AWS): "We saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Henning Jacobs, Head of Developer Productivity. "There's still too much operational overhead for the teams and compliance." To provide better support, cluster management was brought into play.&lt;/p&gt;</description></item></channel></rss>