If you don't yet have a cluster, visit bootstrapping clusters with kubeadm.
The tasks in this section are aimed at people administering an existing cluster:
This page explains how to add Linux worker nodes to a kubeadm cluster.
A running kubeadm cluster created by kubeadm init and following the steps in the document Creating a cluster with kubeadm.
To add new Linux worker nodes to your cluster, do the following for each machine:
Run the command that was output by kubeadm init. For example:
sudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
If the <control-plane-host>:<control-plane-port> contains an IPv6 address, the IPv6 address must be enclosed in square brackets, for example: [2001:db8::101]:2073.
If you do not have the token, you can get it by running the following command on the control plane node:
# Run this on a control plane node
sudo kubeadm token list
The output is similar to this:
TOKEN                     TTL       EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h       2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control plane node:
# Run this on a control plane node
sudo kubeadm token create
The output is similar to this:
5didvk.d09sbcov8ph2amjw
If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following commands on the control plane node:
# Run this on a control plane node
sudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
The output is similar to:
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
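Alternatively, a single command on the control plane node can print a ready-to-use join command that already includes a fresh token and the CA certificate hash (a convenience not shown in the steps above):
# Run this on a control plane node
sudo kubeadm token create --print-join-command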
The output of the kubeadm join
command should look something like:
[preflight] Running pre-flight checks
... (log output of join workflow) ...
Node join complete:
* Certificate signing request sent to control-plane and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on control-plane to see this machine join.
A few seconds later, you should notice this node in the output from kubectl get nodes (for example, run kubectl on a control plane node).
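To confirm that the new node has registered and reached the Ready state, one option (any host with an admin kubeconfig will do) is:
# Run this where kubectl is configured for the cluster
kubectl get nodes -o wide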
To rebalance the CoreDNS Pods, you can run kubectl -n kube-system rollout restart deployment coredns after at least one new node is joined.
Kubernetes v1.18 [beta]
This page explains how to add Windows worker nodes to a kubeadm cluster.
A running kubeadm cluster created by kubeadm init and following the steps in the document Creating a cluster with kubeadm.
Do the following for each machine: install a container runtime and kubeadm, then proceed with the steps outlined below.
To install containerd, first run the following command:
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1
Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. For example, use 1.7.22 instead of v1.7.22:
.\Install-Containerd.ps1 -ContainerDVersion CONTAINERD_VERSION
You can specify optional parameters of Install-Containerd.ps1 such as netAdapterName as you need them.
You can specify skipHypervisorSupportCheck if your machine does not support Hyper-V and cannot host Hyper-V isolated containers.
If you change the Install-Containerd.ps1 optional parameters CNIBinPath and/or CNIConfigPath you will need to configure the installed Windows CNI plugin with matching values.
Run the following commands to install kubeadm and the kubelet:
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1
.\PrepareNode.ps1 -KubernetesVersion v1.32
Adjust the parameter KubernetesVersion of PrepareNode.ps1 if needed.
Run kubeadm join
Run the command that was output by kubeadm init. For example:
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
If the <control-plane-host>:<control-plane-port> contains an IPv6 address, the IPv6 address must be enclosed in square brackets, for example: [2001:db8::101]:2073.
If you do not have the token, you can get it by running the following command on the control plane node:
# Run this on a control plane node
sudo kubeadm token list
The output is similar to this:
TOKEN                     TTL       EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h       2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control plane node:
# Run this on a control plane node
sudo kubeadm token create
The output is similar to this:
5didvk.d09sbcov8ph2amjw
If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following commands on the control plane node:
sudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
The output is similar to:
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
The output of the kubeadm join
command should look something like:
[preflight] Running pre-flight checks
... (log output of join workflow) ...
Node join complete:
* Certificate signing request sent to control-plane and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on control-plane to see this machine join.
A few seconds later, you should notice this node in the output from kubectl get nodes (for example, run kubectl on a control plane node).
CNI setup on clusters mixed with Linux and Windows nodes requires more steps than just
running kubectl apply
on a manifest file. Additionally, the CNI plugin running on control
plane nodes must be prepared to support the CNI plugin running on Windows worker nodes.
Only a few CNI plugins currently support Windows. Below you can find individual setup instructions for them:
See Install and Set Up kubectl on Windows.
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
1.31.x to version 1.32.x, and from version
1.32.x to 1.32.y (where y > x
). Skipping MINOR versions
when upgrading is unsupported. For more details, please visit Version Skew Policy.
To see information about upgrading clusters created using older versions of kubeadm, please refer to following pages instead:
The Kubernetes project recommends upgrading to the latest patch releases promptly, and to ensure that you are running a supported minor release of Kubernetes. Following this recommendation helps you to stay secure.
The upgrade workflow at a high level is the following:
Upgrade a primary control plane node.
Upgrade additional control plane nodes.
Upgrade worker nodes.
kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
To verify that the kubelet service restarted successfully after the upgrade, you can run systemctl status kubelet or view the service logs with journalctl -xeu kubelet.
kubeadm upgrade supports --config with a UpgradeConfiguration API type which can be used to configure the upgrade process.
kubeadm upgrade does not support reconfiguration of an existing cluster. Follow the steps in Reconfiguring a kubeadm cluster instead.
Because the kube-apiserver
static pod is running at all times (even if you
have drained the node), when you perform a kubeadm upgrade which includes an
etcd upgrade, in-flight requests to the server will stall while the new etcd
static pod is restarting. As a workaround, it is possible to actively stop the
kube-apiserver
process a few seconds before starting the kubeadm upgrade apply
command. This permits completing in-flight requests and closing existing
connections, and minimizes the consequences of the etcd downtime. This can be
done as follows on control plane nodes:
killall -s SIGTERM kube-apiserver # trigger a graceful kube-apiserver shutdown
sleep 20 # wait a little bit to permit completing in-flight requests
kubeadm upgrade ... # execute a kubeadm upgrade command
If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in the Changing the Kubernetes package repository document.
The legacy package repositories (apt.kubernetes.io and yum.kubernetes.io) have been deprecated and frozen starting from September 13, 2023. Using the new package repositories hosted at pkgs.k8s.io is strongly recommended and required in order to install Kubernetes versions released after September 13, 2023. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without a further notice period. The new package repositories provide downloads for Kubernetes versions starting with v1.24.0.
Find the latest patch release for Kubernetes 1.32 using the OS package manager:
# Find the latest 1.32 version in the list.
# It should look like 1.32.x-*, where x is the latest patch.
sudo apt update
sudo apt-cache madison kubeadm
# Find the latest 1.32 version in the list.
# It should look like 1.32.x-*, where x is the latest patch.
sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes
The upgrade procedure on control plane nodes should be executed one node at a time.
Pick a control plane node that you wish to upgrade first. It must have the /etc/kubernetes/admin.conf
file.
For the first control plane node
Upgrade kubeadm:
# replace x in 1.32.x-* with the latest patch version
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.32.x-*' && \
sudo apt-mark hold kubeadm
# replace x in 1.32.x-* with the latest patch version
sudo yum install -y kubeadm-'1.32.x-*' --disableexcludes=kubernetes
Verify that the download works and has the expected version:
kubeadm version
Verify the upgrade plan:
sudo kubeadm upgrade plan
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. It also shows a table with the component config version states.
kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt out of certificate renewal, the flag --certificate-renewal=false can be used. For more information see the certificate management guide.
Choose a version to upgrade to, and run the appropriate command. For example:
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.32.x
Once the command finishes you should see:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.32.x". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
In versions earlier than v1.28, kubeadm upgraded the addons (including CoreDNS and kube-proxy) immediately during kubeadm upgrade apply, regardless of whether there are other control plane instances that have not been upgraded. This may cause compatibility problems. Since v1.28, kubeadm defaults to a mode that checks whether all the control plane instances have been upgraded before starting to upgrade the addons. You must upgrade control plane instances sequentially, or at least ensure that the last control plane instance upgrade is not started until all the other control plane instances have been upgraded completely; the addons upgrade will then be performed after the last control plane instance is upgraded.
Manually upgrade your CNI provider plugin.
Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.
For the other control plane nodes
Same as the first control plane node but use:
sudo kubeadm upgrade node
instead of:
sudo kubeadm upgrade apply
Also, calling kubeadm upgrade plan and upgrading the CNI provider plugin are no longer needed.
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of the node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
Upgrade the kubelet and kubectl:
# replace x in 1.32.x-* with the latest patch version
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.32.x-*' kubectl='1.32.x-*' && \
sudo apt-mark hold kubelet kubectl
# replace x in 1.32.x-* with the latest patch version
sudo yum install -y kubelet-'1.32.x-*' kubectl-'1.32.x-*' --disableexcludes=kubernetes
Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Bring the node back online by marking it schedulable:
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
The upgrade procedure on worker nodes should be executed one node at a time or a few nodes at a time, without compromising the minimum required capacity for running your workloads.
The following pages show how to upgrade Linux and Windows worker nodes:
After the kubelet is upgraded on all nodes verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
kubectl get nodes
The STATUS
column should show Ready
for all your nodes, and the version number should be updated.
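If you want to compare the kubelet version reported by each node at a glance, a jsonpath query such as the following can help (a sketch; adjust the output format as needed):
# Print each node name together with its reported kubelet version
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'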
If kubeadm upgrade
fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade
again.
This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run sudo kubeadm upgrade apply --force
without changing the version that your cluster is running.
During upgrade kubeadm writes the following backup folders under /etc/kubernetes/tmp
:
kubeadm-backup-etcd-<date>-<time>
kubeadm-backup-manifests-<date>-<time>
kubeadm-backup-etcd
contains a backup of the local etcd member data for this control plane Node.
In case of an etcd upgrade failure and if the automatic rollback does not work, the contents of this folder
can be manually restored in /var/lib/etcd
. In case external etcd is used this backup folder will be empty.
kubeadm-backup-manifests
contains a backup of the static Pod manifest files for this control plane Node.
In case of an upgrade failure and if the automatic rollback does not work, the contents of this folder can be
manually restored in /etc/kubernetes/manifests
. If for some reason there is no difference between a pre-upgrade
and post-upgrade manifest file for a certain component, a backup file for it will not be written.
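As an illustration only, a manual restore of the static Pod manifests from such a backup could look like this (substitute the actual timestamped folder name present on your node):
# Replace <date>-<time> with the suffix of the backup folder on this node
sudo cp /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time>/* /etc/kubernetes/manifests/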
The directory /etc/kubernetes/tmp will remain and these backup files will need to be cleared manually.
kubeadm upgrade apply does the following:
Checks that your cluster is in an upgradeable state and that all nodes are in the Ready state.
Upgrades the control plane components, and rolls back if any of them fails to come up.
Applies the new CoreDNS and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
kubeadm upgrade node does the following on additional control plane nodes:
Fetches the kubeadm ClusterConfiguration from the cluster.
Upgrades the static Pod manifests for the control plane components and the kubelet configuration for this node.
kubeadm upgrade node does the following on worker nodes:
Fetches the kubeadm ClusterConfiguration from the cluster.
Upgrades the kubelet configuration for this node.
This page explains how to upgrade Linux worker nodes created with kubeadm.
You need to have shell access to all the nodes, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts.
To check the version, enter kubectl version.
If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in the Changing the Kubernetes package repository document.
The legacy package repositories (apt.kubernetes.io and yum.kubernetes.io) have been deprecated and frozen starting from September 13, 2023. Using the new package repositories hosted at pkgs.k8s.io is strongly recommended and required in order to install Kubernetes versions released after September 13, 2023. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without a further notice period. The new package repositories provide downloads for Kubernetes versions starting with v1.24.0.
Upgrade kubeadm:
# replace x in 1.32.x-* with the latest patch version
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.32.x-*' && \
sudo apt-mark hold kubeadm
# replace x in 1.32.x-* with the latest patch version
sudo yum install -y kubeadm-'1.32.x-*' --disableexcludes=kubernetes
For worker nodes this upgrades the local kubelet configuration:
sudo kubeadm upgrade node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# execute this command on a control plane node
# replace <node-to-drain> with the name of the node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
Upgrade the kubelet and kubectl:
# replace x in 1.32.x-* with the latest patch version
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.32.x-*' kubectl='1.32.x-*' && \
sudo apt-mark hold kubelet kubectl
# replace x in 1.32.x-* with the latest patch version
sudo yum install -y kubelet-'1.32.x-*' kubectl-'1.32.x-*' --disableexcludes=kubernetes
Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Bring the node back online by marking it schedulable:
# execute this command on a control plane node
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
Kubernetes v1.18 [beta]
This page explains how to upgrade a Windows node created with kubeadm.
You need to have shell access to all the nodes, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts.
Your Kubernetes server must be at or later than version 1.17. To check the version, enter kubectl version.
From the Windows node, upgrade kubeadm:
# replace 1.32.0 with your desired version
curl.exe -Lo <path-to-kubeadm.exe> "https://dl.k8s.io/v1.32.0/bin/windows/amd64/kubeadm.exe"
From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of the node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
You should see output similar to this:
node/ip-172-31-85-18 cordoned
node/ip-172-31-85-18 drained
From the Windows node, call the following command to sync new kubelet configuration:
kubeadm upgrade node
From the Windows node, upgrade and restart the kubelet:
stop-service kubelet
curl.exe -Lo <path-to-kubelet.exe> "https://dl.k8s.io/v1.32.0/bin/windows/amd64/kubelet.exe"
restart-service kubelet
From the Windows node, upgrade and restart the kube-proxy.
stop-service kube-proxy
curl.exe -Lo <path-to-kube-proxy.exe> "https://dl.k8s.io/v1.32.0/bin/windows/amd64/kube-proxy.exe"
restart-service kube-proxy
From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
This page explains how to configure the kubelet's cgroup driver to match the container runtime cgroup driver for kubeadm clusters.
You should be familiar with the Kubernetes container runtime requirements.
The Container runtimes page
explains that the systemd
driver is recommended for kubeadm based setups instead
of the kubelet's default cgroupfs
driver,
because kubeadm manages the kubelet as a
systemd service.
The page also provides details on how to set up a number of different container runtimes with the
systemd
driver by default.
kubeadm allows you to pass a KubeletConfiguration
structure during kubeadm init
.
This KubeletConfiguration
can include the cgroupDriver
field which controls the cgroup
driver of the kubelet.
In v1.22 and later, if the user does not set the cgroupDriver
field under KubeletConfiguration
,
kubeadm defaults it to systemd
.
In Kubernetes v1.28, you can enable automatic detection of the cgroup driver as an alpha feature. See systemd cgroup driver for more details.
A minimal example of configuring the field explicitly:
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta4
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
Such a configuration file can then be passed to the kubeadm command:
kubeadm init --config kubeadm-config.yaml
Kubeadm uses the same KubeletConfiguration
for all nodes in the cluster.
The KubeletConfiguration
is stored in a ConfigMap
object under the kube-system
namespace.
Executing the subcommands init, join and upgrade would result in kubeadm writing the KubeletConfiguration as a file under /var/lib/kubelet/config.yaml and passing it to the local node kubelet.
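To check which cgroup driver the cluster-wide kubelet configuration currently specifies, one quick option (using the kubelet-config ConfigMap described above) is:
kubectl get cm kubelet-config -n kube-system -o yaml | grep cgroupDriver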
Using the cgroupfs driver
To use cgroupfs and to prevent kubeadm upgrade from modifying the KubeletConfiguration cgroup driver on existing setups, you must be explicit about its value. This applies to a case where you do not wish future versions of kubeadm to apply the systemd driver by default.
See the below section on "Modify the kubelet ConfigMap" for details on how to be explicit about the value.
If you wish to configure a container runtime to use the cgroupfs
driver,
you must refer to the documentation of the container runtime of your choice.
Migrating to the systemd driver
To change the cgroup driver of an existing kubeadm cluster from cgroupfs to systemd in-place, a similar procedure to a kubelet upgrade is required. This must include both steps outlined below.
Alternatively, it is possible to replace the old nodes in the cluster with new ones that use the systemd driver. This requires executing only the first step below before joining the new nodes and ensuring the workloads can safely move to the new nodes before deleting the old nodes.
Modify the kubelet ConfigMap
Call kubectl edit cm kubelet-config -n kube-system.
Either modify the existing cgroupDriver
value or add a new field that looks like this:
cgroupDriver: systemd
This field must be present under the kubelet:
section of the ConfigMap.
For each node in the cluster:
Drain the node using kubectl drain <node-name> --ignore-daemonsets
Stop the kubelet using systemctl stop kubelet
Stop the container runtime
Modify the container runtime cgroup driver to systemd
Set cgroupDriver: systemd in /var/lib/kubelet/config.yaml
Start the container runtime
Start the kubelet using systemctl start kubelet
Uncordon the node using kubectl uncordon <node-name>
Execute these steps on nodes one at a time to ensure workloads have sufficient time to schedule on different nodes.
Once the process is complete ensure that all nodes and workloads are healthy.
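To confirm the result on a given node, you can inspect the local kubelet configuration file (the path shown is the kubeadm default):
# Should print: cgroupDriver: systemd
sudo grep cgroupDriver /var/lib/kubelet/config.yaml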
Kubernetes v1.15 [stable]
Client certificates generated by kubeadm expire after 1 year. This page explains how to manage certificate renewals with kubeadm. It also covers other tasks related to kubeadm certificate management.
The Kubernetes project recommends upgrading to the latest patch releases promptly, and to ensure that you are running a supported minor release of Kubernetes. Following this recommendation helps you to stay secure.
You should be familiar with PKI certificates and requirements in Kubernetes.
You should be familiar with how to pass a configuration file to the kubeadm commands.
This guide covers the usage of the openssl
command (used for manual certificate signing,
if you choose that approach), but you can use your preferred tools.
Some of the steps here use sudo
for administrator access. You can use any equivalent tool.
By default, kubeadm generates all the certificates needed for a cluster to run. You can override this behavior by providing your own certificates.
To do so, you must place them in whatever directory is specified by the
--cert-dir
flag or the certificatesDir
field of kubeadm's ClusterConfiguration
.
By default this is /etc/kubernetes/pki
.
If a given certificate and private key pair exists before running kubeadm init
,
kubeadm does not overwrite them. This means you can, for example, copy an existing
CA into /etc/kubernetes/pki/ca.crt
and /etc/kubernetes/pki/ca.key
,
and kubeadm will use this CA for signing the rest of the certificates.
kubeadm allows you to choose an encryption algorithm that is used for creating
public and private keys. That can be done by using the encryptionAlgorithm
field of the
kubeadm configuration:
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
encryptionAlgorithm: <ALGORITHM>
<ALGORITHM>
can be one of RSA-2048
(default), RSA-3072
, RSA-4096
or ECDSA-P256
.
kubeadm allows you to choose the validity period of CA and leaf certificates.
That can be done by using the certificateValidityPeriod
and caCertificateValidityPeriod
fields of the kubeadm configuration:
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
certificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year
caCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years
The values of the fields follow the accepted format for
Go's time.Duration
values, with the longest supported
unit being h
(hours).
It is also possible to provide only the ca.crt
file and not the
ca.key
file (this is only available for the root CA file, not other cert pairs).
If all other certificates and kubeconfig files are in place, kubeadm recognizes
this condition and activates the "External CA" mode. kubeadm will proceed without the
CA key on disk.
Instead, run the controller-manager standalone with --controllers=csrsigner
and
point to the CA certificate and key.
There are various ways to prepare the component credentials when using external CA mode.
PKI certificates and requirements includes information on how to manually prepare all of the component credentials required by kubeadm.
This guide covers the usage of the openssl
command (used for manual certificate signing,
if you choose that approach), but you can use your preferred tools.
kubeadm can generate CSR files that you can sign manually with tools like
openssl
and your external CA. These CSR files will include all the specification for credentials
that components deployed by kubeadm require.
Alternatively, it is possible to use kubeadm phase commands to automate this process.
Copy the ca.crt and ca.key files that you have into /etc/kubernetes/pki on the node.
Prepare a kubeadm configuration file, for example config.yaml, that can be used with kubeadm init. Make sure that this file includes any relevant cluster wide or host-specific information that could be included in certificates, such as ClusterConfiguration.controlPlaneEndpoint, ClusterConfiguration.certSANs and InitConfiguration.APIEndpoint.
On the same host, execute the commands kubeadm init phase kubeconfig all --config config.yaml and kubeadm init phase certs all --config config.yaml. This will generate all required kubeconfig files and certificates under /etc/kubernetes/ and its pki sub directory.
Delete the file /etc/kubernetes/pki/ca.key, and delete or move to a safe location the file /etc/kubernetes/super-admin.conf.
On nodes where kubeadm join will be called, also delete /etc/kubernetes/kubelet.conf. This file is only required on the first node where kubeadm init will be called.
Note that some files, such as pki/sa.*, pki/front-proxy-ca.* and pki/etcd/ca.*, are shared between control plane nodes. You can generate them once and distribute them manually to nodes where kubeadm join will be called, or you can use the --upload-certs functionality of kubeadm init and --certificate-key of kubeadm join to automate this distribution.
Once the credentials are prepared on all nodes, call kubeadm init
and kubeadm join
for these nodes to
join the cluster. kubeadm will use the existing kubeconfig and certificate files under /etc/kubernetes/
and its pki
sub directory.
kubeadm cannot manage certificates signed by an external CA.
You can use the check-expiration subcommand to check when certificates expire:
kubeadm certs check-expiration
The output is similar to this:
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 30, 2020 23:36 UTC   364d                                    no
apiserver                  Dec 30, 2020 23:36 UTC   364d            ca                      no
apiserver-etcd-client      Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 30, 2020 23:36 UTC   364d            ca                      no
controller-manager.conf    Dec 30, 2020 23:36 UTC   364d                                    no
etcd-healthcheck-client    Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
etcd-server                Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 30, 2020 23:36 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 30, 2020 23:36 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 28, 2029 23:36 UTC   9y              no
etcd-ca                 Dec 28, 2029 23:36 UTC   9y              no
front-proxy-ca          Dec 28, 2029 23:36 UTC   9y              no
The command shows expiration/residual time for the client certificates in the
/etc/kubernetes/pki
folder and for the client certificate embedded in the kubeconfig files used
by kubeadm (admin.conf
, controller-manager.conf
and scheduler.conf
).
Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the user should take care of managing certificate renewal manually/using other tools.
The kubelet.conf
configuration file is not included in the list above because kubeadm
configures kubelet
for automatic certificate renewal
with rotatable certificates under /var/lib/kubelet/pki
.
To repair an expired kubelet client certificate see
Kubelet client certificate rotation fails.
On nodes created with kubeadm init
from versions prior to kubeadm version 1.17, there is a
bug where you manually have to modify the
contents of kubelet.conf
. After kubeadm init
finishes, you should update kubelet.conf
to
point to the rotated kubelet client certificates, by replacing client-certificate-data
and
client-key-data
with:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
kubeadm renews all the certificates during control plane upgrade.
This feature is designed for addressing the simplest use cases; if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.
If you have more complex requirements for certificate renewal, you can opt out from the default
behavior by passing --certificate-renewal=false
to kubeadm upgrade apply
or to kubeadm upgrade node
.
You can renew your certificates manually at any time with the kubeadm certs renew
command,
with the appropriate command line options. If you are running a cluster with a replicated control
plane, this command needs to be executed on all the control-plane nodes.
This command performs the renewal using CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki
.
kubeadm certs renew
uses the existing certificates as the authoritative source for attributes
(Common Name, Organization, subject alternative name) and does not rely on the kubeadm-config
ConfigMap.
Even so, the Kubernetes project recommends keeping the served certificate and the associated
values in that ConfigMap synchronized, to avoid any risk of confusion.
After running the command you should restart the control plane Pods. This is required since
dynamic certificate reload is currently not supported for all components and certificates.
Static Pods are managed by the local kubelet
and not by the API Server, thus kubectl cannot be used to delete and restart them.
To restart a static Pod you can temporarily remove its manifest file from /etc/kubernetes/manifests/
and wait for 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct).
The kubelet will terminate the Pod if it's no longer in the manifest directory.
You can then move the file back and after another fileCheckFrequency
period, the kubelet will recreate
the Pod and the certificate renewal for the component can complete.
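As a sketch of that procedure for a single component (kube-apiserver.yaml is the manifest name used by kubeadm; adjust the wait if you changed fileCheckFrequency):
# Temporarily remove the manifest so the kubelet stops the Pod, then restore it
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/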
kubeadm certs renew
can renew any specific certificate or, with the subcommand all
, it can renew all of them:
# If you are running a cluster with a replicated control plane, this command
# needs to be executed on all the control-plane nodes.
kubeadm certs renew all
Clusters built with kubeadm often copy the admin.conf
certificate into
$HOME/.kube/config
, as instructed in Creating a cluster with kubeadm.
On such a system, to update the contents of $HOME/.kube/config
after renewing the admin.conf
, you could run the following commands:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.
The Kubernetes Certificate Authority does not work out of the box. You can configure an external signer such as cert-manager, or you can use the built-in signer.
The built-in signer is part of kube-controller-manager
.
To activate the built-in signer, you must pass the --cluster-signing-cert-file
and
--cluster-signing-key-file
flags.
If you're creating a new cluster, you can use a kubeadm configuration file:
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controllerManager:
  extraArgs:
  - name: "cluster-signing-cert-file"
    value: "/etc/kubernetes/pki/ca.crt"
  - name: "cluster-signing-key-file"
    value: "/etc/kubernetes/pki/ca.key"
See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.
This section provides more details about how to execute manual certificate renewal using an external CA.
To better integrate with external CAs, kubeadm can also produce certificate signing requests (CSRs). A CSR represents a request to a CA for a signed certificate for a client. In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR.
Renewal of certificates is possible by generating new CSRs and signing them with the external CA. For more details about working with CSRs generated by kubeadm see the section Signing certificate signing requests (CSR) generated by kubeadm.
Kubeadm does not support rotation or replacement of CA certificates out of the box.
For more information about manual rotation or replacement of CA, see manual rotation of CA certificates.
By default the kubelet serving certificate deployed by kubeadm is self-signed. This means a connection from external services like the metrics-server to a kubelet cannot be secured with TLS.
To configure the kubelets in a new kubeadm cluster to obtain properly signed serving
certificates you must pass the following minimal configuration to kubeadm init
:
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
If you have already created the cluster you must adapt it by doing the following:
Find the kubelet-config-1.32 ConfigMap in the kube-system namespace. In that ConfigMap, the kubelet key has a KubeletConfiguration document as its value. Edit the KubeletConfiguration document to set serverTLSBootstrap: true.
On each node, add the serverTLSBootstrap: true field in /var/lib/kubelet/config.yaml and restart the kubelet with systemctl restart kubelet
The field serverTLSBootstrap: true
will enable the bootstrap of kubelet serving
certificates by requesting them from the certificates.k8s.io
API. One known limitation
is that the CSRs (Certificate Signing Requests) for these certificates cannot be automatically
approved by the default signer in the kube-controller-manager -
kubernetes.io/kubelet-serving
.
This will require action from the user or a third party controller.
These CSRs can be viewed using:
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending
To approve them you can do the following:
kubectl certificate approve <CSR-name>
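If you prefer to approve all currently pending kubelet serving CSRs in one go, a field selector on the signer name can be used; review the pending requests first, because blanket approval is only safe if you trust every requesting node:
# Approve every CSR addressed to the kubernetes.io/kubelet-serving signer (illustrative only)
kubectl get csr --field-selector=spec.signerName=kubernetes.io/kubelet-serving -o name | xargs kubectl certificate approve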
By default, these serving certificates will expire after one year. Kubeadm sets the
KubeletConfiguration
field rotateCertificates
to true
, which means that close
to expiration a new set of CSRs for the serving certificates will be created and must
be approved to complete the rotation. To understand more see
Certificate Rotation.
If you are looking for a solution for automatic approval of these CSRs it is recommended that you contact your cloud provider and ask if they have a CSR signer that verifies the node identity with an out of band mechanism.
Third party custom controllers can be used:
Such a controller is not a secure mechanism unless it not only verifies the CommonName in the CSR but also verifies the requested IPs and domain names. This would prevent a malicious actor that has access to a kubelet client certificate from creating CSRs requesting serving certificates for any IP or domain name.
During cluster creation, kubeadm init
signs the certificate in the super-admin.conf
to have Subject: O = system:masters, CN = kubernetes-super-admin
.
system:masters
is a break-glass, super user group that bypasses the authorization layer (for example,
RBAC). The file admin.conf
is also created
by kubeadm on control plane nodes and it contains a certificate with
Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin
. kubeadm:cluster-admins
is a group logically belonging to kubeadm. If your cluster uses RBAC
(the kubeadm default), the kubeadm:cluster-admins
group is bound to the
cluster-admin
ClusterRole.
Avoid sharing the super-admin.conf or admin.conf files. Instead, create least privileged access even for people who work as administrators and use that least privilege alternative for anything other than break-glass (emergency) access.
You can use the kubeadm kubeconfig user
command to generate kubeconfig files for additional users.
The command accepts a mixture of command line flags and
kubeadm configuration options.
The generated kubeconfig will be written to stdout and can be piped to a file using
kubeadm kubeconfig user ... > somefile.conf
.
Example configuration file that can be used with --config
:
# example.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
# Will be used as the target "cluster" in the kubeconfig
clusterName: "kubernetes"
# Will be used as the "server" (IP or DNS name) of this cluster in the kubeconfig
controlPlaneEndpoint: "some-dns-address:6443"
# The cluster CA key and certificate will be loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"
Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:
kubectl get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}"
The following example will generate a kubeconfig file with credentials valid for 24 hours
for a new user johndoe
that is part of the appdevs
group:
kubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h
The following example will generate a kubeconfig file with administrator credentials valid for 1 week:
kubeadm kubeconfig user --config example.yaml --client-name admin --validity-period 168h
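The generated kubeconfig can be redirected to a file and tested immediately; the file name below is just an example:
# Generate a kubeconfig for johndoe and check what the new user is allowed to do
kubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h > johndoe.conf
kubectl --kubeconfig ./johndoe.conf auth can-i --list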
You can create certificate signing requests with kubeadm certs generate-csr
.
Calling this command will generate .csr
/ .key
file pairs for regular
certificates. For certificates embedded in kubeconfig files, the command will
generate a .csr
/ .conf
pair where the key is already embedded in the .conf
file.
A CSR file contains all relevant information for a CA to sign a certificate. kubeadm uses a well defined specification for all its certificates and CSRs.
The default certificate directory is /etc/kubernetes/pki
, while the default
directory for kubeconfig files is /etc/kubernetes
. These defaults can be
overridden with the flags --cert-dir
and --kubeconfig-dir
, respectively.
To pass custom options to kubeadm certs generate-csr
use the --config
flag,
which accepts a kubeadm configuration
file, similarly to commands such as kubeadm init
. Any specification such
as extra SANs and custom IP addresses must be stored in the same configuration
file and used for all relevant kubeadm commands by passing it as --config
.
This guide uses the default Kubernetes directory /etc/kubernetes
, which requires
a super user. If you are following this guide and are using directories that you can
write to (typically, this means running kubeadm
with --cert-dir
and --kubeconfig-dir
)
then you can omit the sudo command.
You must then copy the files that you produced over to within the /etc/kubernetes
directory so that kubeadm init
or kubeadm join
will find them.
On the primary control plane node, where kubeadm init
will be executed, call the following
commands:
sudo kubeadm init phase certs ca
sudo kubeadm init phase certs etcd-ca
sudo kubeadm init phase certs front-proxy-ca
sudo kubeadm init phase certs sa
This will populate the folders /etc/kubernetes/pki
and /etc/kubernetes/pki/etcd
with all self-signed CA files (certificates and keys) and service account (public and
private keys) that kubeadm needs for a control plane node.
If you are using an external CA, you must generate the same files out of band and manually
copy them to the primary control plane node in /etc/kubernetes
.
Once all CSRs are signed, you can delete the root CA key (ca.key
) as noted in the
External CA mode section.
For secondary control plane nodes (kubeadm join --control-plane
) there is no need to call
the above commands. Depending on how you set up the
High Availability
cluster, you either have to manually copy the same files from the primary
control plane node, or use the automated --upload-certs
functionality of kubeadm init
.
The kubeadm certs generate-csr
command generates CSRs for all known certificates
managed by kubeadm. Once the command is done you must manually delete .csr
, .conf
or .key
files that you don't need.
This section applies to both control plane and worker nodes.
If you have deleted the ca.key
file from control plane nodes
(External CA mode), the active kube-controller-manager in
this cluster will not be able to sign kubelet client certificates. If no external
method for signing these certificates exists in your setup (such as an
external signer), you could manually sign the kubelet.conf.csr
as explained in this guide.
Note that this also means that the automatic
kubelet client certificate rotation
will be disabled. If so, close to certificate expiration, you must generate
a new kubelet.conf.csr
, sign the certificate, embed it in kubelet.conf
and restart the kubelet.
If this does not apply to your setup, you can skip processing the kubelet.conf.csr
on secondary control plane and on workers nodes (all nodes that call kubeadm join ...
).
That is because the active kube-controller-manager will be responsible
for signing new kubelet client certificates.
You must always process the kubelet.conf.csr file on the primary control plane node (the host where you originally ran kubeadm init). This is because kubeadm considers that as the node that bootstraps the cluster, and a pre-populated kubelet.conf is needed.
Execute the following command on primary (kubeadm init) and secondary (kubeadm join --control-plane) control plane nodes to generate all CSR files:
sudo kubeadm certs generate-csr
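To review what was produced before handing the CSRs to your CA, you could list the generated files (default locations assumed):
# List the CSR files and kubeconfig stubs generated under /etc/kubernetes
sudo find /etc/kubernetes -name '*.csr' -o -name '*.conf'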
If external etcd is to be used, follow the
External etcd with kubeadm
guide to understand what CSR files are needed on the kubeadm and etcd nodes. Other
.csr
and .key
files under /etc/kubernetes/pki/etcd
can be removed.
Based on the explanation in Considerations for kubelet.conf, keep or delete the kubelet.conf and kubelet.conf.csr files.
Based on the explanation in Considerations for kubelet.conf, optionally call:
sudo kubeadm certs generate-csr
and keep only the kubelet.conf
and kubelet.conf.csr
files. Alternatively skip
the steps for worker nodes entirely.
If you are using external CA and already have CA serial number files (.srl
) for
openssl
, you can copy such files to a kubeadm node where CSRs will be processed.
The .srl
files to copy are /etc/kubernetes/pki/ca.srl
,
/etc/kubernetes/pki/front-proxy-ca.srl
and /etc/kubernetes/pki/etcd/ca.srl
.
The files can be then moved to a new node where CSR files will be processed.
If a .srl
file is missing for a CA on a node, the script below will generate a new SRL file
with a random starting serial number.
To read more about .srl
files see the
openssl
documentation for the --CAserial
flag.
Repeat this step for all nodes that have CSR files.
Write the following script in the /etc/kubernetes
directory, navigate to the directory
and execute the script. The script will generate certificates for all CSR files that are
present in the /etc/kubernetes
tree.
#!/bin/bash
# Set certificate expiration time in days
DAYS=365
# Process all CSR files except those for front-proxy and etcd
find ./ -name "*.csr" | grep -v "pki/etcd" | grep -v "front-proxy" | while read -r FILE;
do
echo "* Processing ${FILE} ..."
FILE=${FILE%.*} # Trim the extension
if [ -f "./pki/ca.srl" ]; then
SERIAL_FLAG="-CAserial ./pki/ca.srl"
else
SERIAL_FLAG="-CAcreateserial"
fi
openssl x509 -req -days "${DAYS}" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \
-in "${FILE}.csr" -out "${FILE}.crt"
sleep 2
done
# Process all etcd CSRs
find ./pki/etcd -name "*.csr" | while read -r FILE;
do
echo "* Processing ${FILE} ..."
FILE=${FILE%.*} # Trim the extension
if [ -f "./pki/etcd/ca.srl" ]; then
SERIAL_FLAG="-CAserial ./pki/etcd/ca.srl"
else
SERIAL_FLAG="-CAcreateserial"
fi
openssl x509 -req -days "${DAYS}" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \
-in "${FILE}.csr" -out "${FILE}.crt"
done
# Process front-proxy CSRs
echo "* Processing ./pki/front-proxy-client.csr ..."
openssl x509 -req -days "${DAYS}" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \
-in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt
Repeat this step for all nodes that have CSR files.
Write the following script in the /etc/kubernetes
directory, navigate to the directory
and execute the script. The script will take the .crt
files that were signed for
kubeconfig files from CSRs in the previous step and will embed them in the kubeconfig files.
#!/bin/bash
CLUSTER=kubernetes
find ./ -name "*.conf" | while read -r FILE;
do
echo "* Processing ${FILE} ..."
KUBECONFIG="${FILE}" kubectl config set-cluster "${CLUSTER}" --certificate-authority ./pki/ca.crt --embed-certs
USER=$(KUBECONFIG="${FILE}" kubectl config view -o jsonpath='{.users[0].name}')
KUBECONFIG="${FILE}" kubectl config set-credentials "${USER}" --client-certificate "${FILE}.crt" --embed-certs
done
Perform this step on all nodes that have CSR files.
Write the following script in the /etc/kubernetes
directory, navigate to the directory
and execute the script.
#!/bin/bash
# Cleanup CSR files
rm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files
# Cleanup CRT files that were already embedded in kubeconfig files
rm -f ./*.crt
Optionally, move .srl
files to the next node to be processed.
Optionally, if using external CA remove the /etc/kubernetes/pki/ca.key
file,
as explained in the External CA node section.
Once CSR files have been signed and required certificates are in place on the hosts
you want to use as nodes, you can use the commands kubeadm init
and kubeadm join
to create a Kubernetes cluster from these nodes. During init
and join
, kubeadm
uses existing certificates, encryption keys and kubeconfig files that it finds in the
/etc/kubernetes
tree on the host's local filesystem.
kubeadm does not support automated ways of reconfiguring components that were deployed on managed nodes. One way of automating this would be by using a custom operator.
To modify the components configuration you must manually edit associated cluster objects and files on disk.
This guide shows the correct sequence of steps that need to be performed to achieve kubeadm cluster reconfiguration.
You need an admin kubeconfig file (such as /etc/kubernetes/admin.conf) and network connectivity to a running kube-apiserver in the cluster from a host that has kubectl installed.
kubeadm writes a set of cluster wide component configuration options in ConfigMaps and other objects. These objects must be manually edited. The command kubectl edit can be used for that.
The kubectl edit
command will open a text editor where you can edit and save the object directly.
You can use the environment variables KUBECONFIG
and KUBE_EDITOR
to specify the location of
the kubectl consumed kubeconfig file and preferred text editor.
For example:
KUBECONFIG=/etc/kubernetes/admin.conf KUBE_EDITOR=nano kubectl edit <parameters>
ClusterConfiguration
During cluster creation and upgrade, kubeadm writes its
ClusterConfiguration
in a ConfigMap called kubeadm-config
in the kube-system
namespace.
To change a particular option in the ClusterConfiguration
you can edit the ConfigMap with this command:
kubectl edit cm -n kube-system kubeadm-config
The configuration is located under the data.ClusterConfiguration
key.
ClusterConfiguration
includes a variety of options that affect the configuration of individual
components such as kube-apiserver, kube-scheduler, kube-controller-manager, CoreDNS, etcd and kube-proxy.
Changes to the configuration must be reflected on node components manually.
Reflecting ClusterConfiguration changes on control plane nodes
kubeadm manages the control plane components as static Pod manifests located in
the directory /etc/kubernetes/manifests
.
Any changes to the ClusterConfiguration
under the apiServer
, controllerManager
, scheduler
or etcd
keys must be reflected in the associated files in the manifests directory on a control plane node.
Such changes may include:
extraArgs - requires updating the list of flags passed to a component container
extraVolumes - requires updating the volume mounts for a component container
*SANs - requires writing new certificates with updated Subject Alternative Names
Before proceeding with these changes, make sure you have backed up the directory /etc/kubernetes/.
To write new certificates you can use:
kubeadm init phase certs <component-name> --config <config-file>
To write new manifest files in /etc/kubernetes/manifests
you can use:
# For Kubernetes control plane components
kubeadm init phase control-plane <component-name> --config <config-file>
# For local etcd
kubeadm init phase etcd local --config <config-file>
The <config-file>
contents must match the updated ClusterConfiguration
.
The <component-name>
value must be a name of a Kubernetes control plane component (apiserver
, controller-manager
or scheduler
).
Updating a file in /etc/kubernetes/manifests will tell the kubelet to restart the static Pod for the corresponding component. Try doing these changes one node at a time to leave the cluster without downtime.
KubeletConfiguration
During cluster creation and upgrade, kubeadm writes its
KubeletConfiguration
in a ConfigMap called kubelet-config
in the kube-system
namespace.
You can edit the ConfigMap with this command:
kubectl edit cm -n kube-system kubelet-config
The configuration is located under the data.kubelet
key.
To reflect the change on kubeadm nodes you must do the following:
Run kubeadm upgrade node phase kubelet-config to download the latest kubelet-config ConfigMap contents into the local file /var/lib/kubelet/config.yaml
Edit the file /var/lib/kubelet/kubeadm-flags.env to apply additional configuration with flags
Restart the kubelet service with systemctl restart kubelet
During kubeadm upgrade, kubeadm downloads the KubeletConfiguration from the kubelet-config ConfigMap and overwrites the contents of /var/lib/kubelet/config.yaml. This means that node local configuration must be applied either by flags in /var/lib/kubelet/kubeadm-flags.env or by manually updating the contents of /var/lib/kubelet/config.yaml after kubeadm upgrade, and then restarting the kubelet.
KubeProxyConfiguration
During cluster creation and upgrade, kubeadm writes its
KubeProxyConfiguration
in a ConfigMap in the kube-system
namespace called kube-proxy
.
This ConfigMap is used by the kube-proxy
DaemonSet in the kube-system
namespace.
To change a particular option in the KubeProxyConfiguration
, you can edit the ConfigMap with this command:
kubectl edit cm -n kube-system kube-proxy
The configuration is located under the data.config.conf
key.
Once the kube-proxy
ConfigMap is updated, you can restart all kube-proxy Pods:
Delete the Pods with:
kubectl delete po -n kube-system -l k8s-app=kube-proxy
New Pods that use the updated ConfigMap will be created.
kubeadm deploys CoreDNS as a Deployment called coredns
and with a Service kube-dns
,
both in the kube-system
namespace.
To update any of the CoreDNS settings, you can edit the Deployment and Service objects:
kubectl edit deployment -n kube-system coredns
kubectl edit service -n kube-system kube-dns
Once the CoreDNS changes are applied you can delete the CoreDNS Pods:
Obtain the Pod names:
kubectl get po -n kube-system | grep coredns
Delete a Pod with:
kubectl delete po -n kube-system <pod-name>
New Pods with the updated CoreDNS configuration will be created.
After you run kubeadm upgrade apply, your changes to the CoreDNS objects will be lost and must be reapplied.
During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).
kubeadm writes Labels, Taints, CRI socket and other information on the Node object for a particular Kubernetes node. To change any of the contents of this Node object you can use:
kubectl edit no <node-name>
During kubeadm upgrade
the contents of such a Node might get overwritten.
If you would like to persist your modifications to the Node object after upgrade,
you can prepare a kubectl patch
and apply it to the Node object:
kubectl patch no <node-name> --patch-file <patch-file>
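For instance, a small patch file that persists an extra label across upgrades might look like this; the file name and label are examples only:
# Create a hypothetical patch file and apply it to the node
cat <<'EOF' > node-patch.yaml
metadata:
  labels:
    environment: production
EOF
kubectl patch no <node-name> --patch-file node-patch.yaml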
The main source of control plane configuration is the ClusterConfiguration
object stored in the cluster. To extend the static Pod manifests configuration,
patches can be used.
These patch files must remain as files on the control plane nodes to ensure that
they can be used by the kubeadm upgrade ... --patches <directory>
.
If reconfiguration is done to the ClusterConfiguration
and static Pod manifests on disk,
the set of node specific patches must be updated accordingly.
Any changes to the KubeletConfiguration
stored in /var/lib/kubelet/config.yaml
will be overwritten on
kubeadm upgrade
by downloading the contents of the cluster wide kubelet-config
ConfigMap.
To persist kubelet node specific configuration, either the file /var/lib/kubelet/config.yaml has to be updated manually post-upgrade, or the file /var/lib/kubelet/kubeadm-flags.env can include flags.
The kubelet flags override the associated KubeletConfiguration
options, but note that
some of the flags are deprecated.
A kubelet restart will be required after changing /var/lib/kubelet/config.yaml
or
/var/lib/kubelet/kubeadm-flags.env
.
This page explains how to enable a package repository for the desired
Kubernetes minor release upon upgrading a cluster. This is only needed
for users of the community-owned package repositories hosted at pkgs.k8s.io
.
Unlike the legacy package repositories, the community-owned package
repositories are structured in a way that there's a dedicated package
repository for each Kubernetes minor version.
This document assumes that you're already using the community-owned
package repositories (pkgs.k8s.io
). If that's not the case, it's strongly
recommended to migrate to the community-owned package repositories as described
in the official announcement.
The legacy package repositories (apt.kubernetes.io and yum.kubernetes.io) have been deprecated and frozen starting from September 13, 2023. Using the new package repositories hosted at pkgs.k8s.io is strongly recommended and required in order to install Kubernetes versions released after September 13, 2023. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without a further notice period. The new package repositories provide downloads for Kubernetes versions starting with v1.24.0.
If you're unsure whether you're using the community-owned package repositories or the legacy package repositories, take the following steps to verify:
Print the contents of the file that defines the Kubernetes apt
repository:
# On your system, this configuration file could have a different name
pager /etc/apt/sources.list.d/kubernetes.list
If you see a line similar to:
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /
You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official announcement.
Print the contents of the file that defines the Kubernetes yum
repository:
# On your system, this configuration file could have a different name
cat /etc/yum.repos.d/kubernetes.repo
If you see a baseurl
similar to the baseurl
in the output below:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official announcement.
Print the contents of the file that defines the Kubernetes zypper
repository:
# On your system, this configuration file could have a different name
cat /etc/zypp/repos.d/kubernetes.repo
If you see a baseurl
similar to the baseurl
in the output below:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official announcement.
The URL used for the Kubernetes package repositories is not limited to pkgs.k8s.io; it can also be one of:
pkgs.k8s.io
pkgs.kubernetes.io
packages.kubernetes.io
This step should be done upon upgrading from one to another Kubernetes minor release in order to get access to the packages of the desired Kubernetes minor version.
Open the file that defines the Kubernetes apt
repository using a text editor of your choice:
nano /etc/apt/sources.list.d/kubernetes.list
You should see a single line with the URL that contains your current Kubernetes minor version. For example, if you're using v1.31, you should see this:
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /
Change the version in the URL to the next available minor release, for example:
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /
Save the file and exit your text editor. Continue following the relevant upgrade instructions.
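After saving the file, refreshing the package index makes the packages of the new minor release visible (a typical follow-up on Debian or Ubuntu systems):
sudo apt-get update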
Open the file that defines the Kubernetes yum
repository using a text editor of your choice:
nano /etc/yum.repos.d/kubernetes.repo
You should see a file with two URLs that contain your current Kubernetes minor version. For example, if you're using v1.31, you should see this:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
Change the version in these URLs to the next available minor release, for example:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
Save the file and exit your text editor. Continue following the relevant upgrade instructions.
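After saving the file, clearing and rebuilding the cached repository metadata ensures that packages from the new minor release are picked up (a typical follow-up on yum or dnf based systems):
sudo yum clean all
sudo yum makecache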