Configuration
1 - Example: Configuring a Java Microservice
1.1 - Externalizing config using MicroProfile, ConfigMaps and Secrets
In this tutorial you will learn how and why to externalize your microservice’s configuration. Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment variables and then consume them using MicroProfile Config.
Creating Kubernetes ConfigMaps & Secrets
There are several ways to set environment variables for a Docker container in Kubernetes, including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In this tutorial, you will learn how to use the latter two for setting environment variables whose values will be injected into your microservices. One of the benefits of using ConfigMaps and Secrets is that they can be re-used across multiple containers, including being assigned to different environment variables for different containers.
ConfigMaps are API objects that store non-confidential key-value pairs. In the Interactive Tutorial you will learn how to use a ConfigMap to store the application's name. For more information regarding ConfigMaps, see the ConfigMap documentation.
Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that they're intended for confidential/sensitive information and are stored using Base64 encoding (an encoding, not encryption). This makes Secrets the appropriate choice for storing credentials, keys, and tokens; you'll store credentials in the Interactive Tutorial. For more information on Secrets, see the Secrets documentation.
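As a concrete sketch (the object and key names below are illustrative, not the ones used in the Interactive Tutorial), both kinds of objects can be created from literal values:
kubectl create configmap app-config --from-literal=APP_NAME=my-service
kubectl create secret generic app-credentials --from-literal=username=admin --from-literal=password='s3cr3t!'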
Externalizing Config from Code
Externalized application configuration is useful because configuration usually changes depending on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set of open Java technologies for developing and deploying cloud-native microservices.
CDI provides a standard dependency injection capability enabling an application to be assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a standard way to obtain config properties from various sources, including the application, runtime, and environment. Based on the source's defined priority, the properties are automatically combined into a single set that the application can access via an API. In the Interactive Tutorial, CDI and MicroProfile Config are used together to retrieve the externally provided properties from the Kubernetes ConfigMaps and Secrets and inject them into your application code.
Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout the Interactive Tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for building and running cloud-native apps and microservices. However, any MicroProfile-compatible runtime could be used instead.
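For illustration only, a minimal Pod manifest could surface the values created above as environment variables that MicroProfile Config then reads from its environment-variables config source. The Pod name, the ConfigMap/Secret names, and the open-liberty image are assumptions carried over from the sketch above, not values from the Interactive Tutorial:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: open-liberty # assumption: any MicroProfile-compatible runtime image works here
    env:
    - name: APP_NAME
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_NAME
    - name: APP_USERNAME
      valueFrom:
        secretKeyRef:
          name: app-credentials
          key: username
EOF
Inside the application, MicroProfile Config can then inject the value, for example with @Inject @ConfigProperty(name = "APP_NAME") String appName;.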
Objectives
- Create a Kubernetes ConfigMap and Secret
- Inject microservice configuration using MicroProfile Config
2 - Updating Configuration via a ConfigMap
This page provides a step-by-step example of updating configuration within a Pod via a ConfigMap and builds upon the Configure a Pod to Use a ConfigMap task. At the end of this tutorial, you will understand how to change the configuration for a running application. This tutorial uses the alpine and nginx images as examples.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.
You need to have the curl command-line tool for making HTTP requests from the terminal or command prompt. If you do not have curl available, you can install it; check the documentation for your local operating system.
Objectives
- Update configuration via a ConfigMap mounted as a Volume
- Update environment variables of a Pod via a ConfigMap
- Update configuration via a ConfigMap in a multi-container Pod
- Update configuration via a ConfigMap in a Pod possessing a sidecar container
- Update configuration via an immutable ConfigMap that is mounted as a volume
Update configuration via a ConfigMap mounted as a Volume
Use the kubectl create configmap command to create a ConfigMap from literal values:
kubectl create configmap sport --from-literal=sport=football
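To confirm what was stored, you can read the ConfigMap back (the metadata on your cluster will differ):
kubectl get configmap sport -o yaml
The output should include:
data:
  sport: football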
Below is an example of a Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-volume
  labels:
    app.kubernetes.io/name: configmap-volume
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: configmap-volume
  template:
    metadata:
      labels:
        app.kubernetes.io/name: configmap-volume
    spec:
      containers:
        - name: alpine
          image: alpine:3
          command:
            - /bin/sh
            - -c
            - while true; do echo "$(date) My preferred sport is $(cat /etc/config/sport)";
              sleep 10; done;
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            name: sport
Create the Deployment:
kubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-volume.yaml
Check the pods for this Deployment to ensure they are ready (matching by selector):
kubectl get pods --selector=app.kubernetes.io/name=configmap-volume
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
configmap-volume-6b976dfdcf-qxvbm 1/1 Running 0 72s
configmap-volume-6b976dfdcf-skpvm 1/1 Running 0 72s
configmap-volume-6b976dfdcf-tbc6r 1/1 Running 0 72s
On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. The kubelet then mounts that volume into the container, as specified in the Pod template. The code running in that container loads the information from the file and uses it to print a report to stdout. You can check this report by viewing the logs for one of the Pods in that Deployment:
# Pick one Pod that belongs to the Deployment, and view its logs
kubectl logs deployments/configmap-volume
You should see an output similar to:
Found 3 pods, using pod/configmap-volume-76d9c5678f-x5rgj
Thu Jan 4 14:06:46 UTC 2024 My preferred sport is football
Thu Jan 4 14:06:56 UTC 2024 My preferred sport is football
Thu Jan 4 14:07:06 UTC 2024 My preferred sport is football
Thu Jan 4 14:07:16 UTC 2024 My preferred sport is football
Thu Jan 4 14:07:26 UTC 2024 My preferred sport is football
Edit the ConfigMap:
kubectl edit configmap sport
In the editor that appears, change the value of key sport from football to cricket. Save your changes.
The kubectl tool updates the ConfigMap accordingly (if you see an error, try again).
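If you prefer a non-interactive update, a merge patch should achieve the same edit (a sketch; this replaces only the sport key):
kubectl patch configmap sport --type merge -p '{"data":{"sport":"cricket"}}'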
Here's an example of how that manifest could look after you edit it:
apiVersion: v1
data:
  sport: cricket
kind: ConfigMap
# You can leave the existing metadata as they are.
# The values you'll see won't exactly match these.
metadata:
  creationTimestamp: "2024-01-04T14:05:06Z"
  name: sport
  namespace: default
  resourceVersion: "1743935"
  uid: 024ee001-fe72-487e-872e-34d6464a8a23
You should see the following output:
configmap/sport edited
Tail (follow the latest entries in) the logs of one of the pods that belongs to this Deployment:
kubectl logs deployments/configmap-volume --follow
After a few seconds, you should see the log output change as follows:
Thu Jan 4 14:11:36 UTC 2024 My preferred sport is football
Thu Jan 4 14:11:46 UTC 2024 My preferred sport is football
Thu Jan 4 14:11:56 UTC 2024 My preferred sport is football
Thu Jan 4 14:12:06 UTC 2024 My preferred sport is cricket
Thu Jan 4 14:12:16 UTC 2024 My preferred sport is cricket
When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately. However, your application only sees the change if it is written to either poll for changes or watch for file updates. An application that loads its configuration once at startup will not notice a change.
Note:
The total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period. Also check Mounted ConfigMaps are updated automatically.
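You can also confirm that the projected file itself changed, independently of what the application logs; kubectl exec picks a Pod from the Deployment for you:
kubectl exec deployments/configmap-volume -- cat /etc/config/sport
The output should now read cricket.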
Update environment variables of a Pod via a ConfigMap
Use the kubectl create configmap command to create a ConfigMap from literal values:
kubectl create configmap fruits --from-literal=fruits=apples
Below is an example of a Deployment manifest with an environment variable configured via the ConfigMap fruits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-env-var
  labels:
    app.kubernetes.io/name: configmap-env-var
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: configmap-env-var
  template:
    metadata:
      labels:
        app.kubernetes.io/name: configmap-env-var
    spec:
      containers:
        - name: alpine
          image: alpine:3
          env:
            - name: FRUITS
              valueFrom:
                configMapKeyRef:
                  key: fruits
                  name: fruits
          command:
            - /bin/sh
            - -c
            - while true; do echo "$(date) The basket is full of $FRUITS";
              sleep 10; done;
          ports:
            - containerPort: 80
Create the Deployment:
kubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-envvar.yaml
Check the pods for this Deployment to ensure they are ready (matching by selector):
kubectl get pods --selector=app.kubernetes.io/name=configmap-env-var
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
configmap-env-var-59cfc64f7d-74d7z 1/1 Running 0 46s
configmap-env-var-59cfc64f7d-c4wmj 1/1 Running 0 46s
configmap-env-var-59cfc64f7d-dpr98 1/1 Running 0 46s
The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. Check this by viewing the logs of one Pod that belongs to the Deployment.
kubectl logs deployment/configmap-env-var
You should see an output similar to:
Found 3 pods, using pod/configmap-env-var-7c994f7769-l74nq
Thu Jan 4 16:07:06 UTC 2024 The basket is full of apples
Thu Jan 4 16:07:16 UTC 2024 The basket is full of apples
Thu Jan 4 16:07:26 UTC 2024 The basket is full of apples
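You can also verify the injected variable directly inside a running container (printenv ships with the BusyBox userland in the alpine image; plain env works as a fallback):
kubectl exec deployments/configmap-env-var -- printenv FRUITS
The output should be similar to:
apples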
Edit the ConfigMap:
kubectl edit configmap fruits
In the editor that appears, change the value of key fruits from apples to mangoes. Save your changes.
The kubectl tool updates the ConfigMap accordingly (if you see an error, try again).
Here's an example of how that manifest could look after you edit it:
apiVersion: v1
data:
  fruits: mangoes
kind: ConfigMap
# You can leave the existing metadata as they are.
# The values you'll see won't exactly match these.
metadata:
  creationTimestamp: "2024-01-04T16:04:19Z"
  name: fruits
  namespace: default
  resourceVersion: "1749472"
You should see the following output:
configmap/fruits edited
Tail the logs of the Deployment and observe the output for a few seconds:
# As the text explains, the output does NOT change
kubectl logs deployments/configmap-env-var --follow
Notice that the output remains unchanged, even though you edited the ConfigMap:
Thu Jan 4 16:12:56 UTC 2024 The basket is full of apples
Thu Jan 4 16:13:06 UTC 2024 The basket is full of apples
Thu Jan 4 16:13:16 UTC 2024 The basket is full of apples
Thu Jan 4 16:13:26 UTC 2024 The basket is full of apples
Note:
Although the value of the key inside the ConfigMap has changed, the environment variable in the Pod still shows the earlier value. This is because environment variables for a process running inside a Pod are not updated when the source data changes; if you wanted to force an update, you would need to have Kubernetes replace your existing Pods. The new Pods would then run with the updated information.
You can trigger that replacement: perform a rollout for the Deployment, using kubectl rollout:
# Trigger the rollout
kubectl rollout restart deployment configmap-env-var
# Wait for the rollout to complete
kubectl rollout status deployment configmap-env-var --watch=true
Next, check the Deployment:
kubectl get deployment configmap-env-var
You should see an output similar to:
NAME READY UP-TO-DATE AVAILABLE AGE
configmap-env-var 3/3 3 3 12m
Check the Pods:
kubectl get pods --selector=app.kubernetes.io/name=configmap-env-var
The rollout causes Kubernetes to make a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. After a few seconds, you should see an output similar to:
NAME READY STATUS RESTARTS AGE
configmap-env-var-6d94d89bf5-2ph2l 1/1 Running 0 13s
configmap-env-var-6d94d89bf5-74twx 1/1 Running 0 8s
configmap-env-var-6d94d89bf5-d5vx8 1/1 Running 0 11s
Note:
Please wait for the older Pods to fully terminate before proceeding with the next steps.
View the logs for a Pod in this Deployment:
# Pick one Pod that belongs to the Deployment, and view its logs
kubectl logs deployment/configmap-env-var
You should see an output similar to the below:
Found 3 pods, using pod/configmap-env-var-6d9ff89fb6-bzcf6
Thu Jan 4 16:30:35 UTC 2024 The basket is full of mangoes
Thu Jan 4 16:30:45 UTC 2024 The basket is full of mangoes
Thu Jan 4 16:30:55 UTC 2024 The basket is full of mangoes
This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. If Pods get created for another reason, such as scaling up the Deployment, then the new Pods also use the latest configuration values; if you don't trigger a rollout, then you might find that your app is running with a mix of old and new environment variable values.
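To see that mixed state for yourself (a sketch; it assumes you edit the ConfigMap again without triggering a rollout), scale the Deployment up and compare the variable across Pods; only Pods created after the edit pick up the new value:
kubectl scale deployment configmap-env-var --replicas=4
for p in $(kubectl get pods --selector=app.kubernetes.io/name=configmap-env-var -o name); do
  kubectl exec "$p" -- printenv FRUITS
done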
Update configuration via a ConfigMap in a multi-container Pod
Use the kubectl create configmap command to create a ConfigMap from literal values:
kubectl create configmap color --from-literal=color=red
Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. The two containers share an emptyDir volume that they use to communicate. The first container runs a web server (nginx). The mount path for the shared volume in the web server container is /usr/share/nginx/html. The second container is a helper based on alpine, and for that container the emptyDir volume is mounted at /pod-data. The helper container writes an HTML file whose content is based on a ConfigMap. The web server container serves the HTML via HTTP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-two-containers
  labels:
    app.kubernetes.io/name: configmap-two-containers
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: configmap-two-containers
  template:
    metadata:
      labels:
        app.kubernetes.io/name: configmap-two-containers
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
        - name: config-volume
          configMap:
            name: color
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
        - name: alpine
          image: alpine:3
          volumeMounts:
            - name: shared-data
              mountPath: /pod-data
            - name: config-volume
              mountPath: /etc/config
          command:
            - /bin/sh
            - -c
            - while true; do echo "$(date) My preferred color is $(cat /etc/config/color)" > /pod-data/index.html;
              sleep 10; done;
Create the Deployment:
kubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-two-containers.yaml
Check the pods for this Deployment to ensure they are ready (matching by selector):
kubectl get pods --selector=app.kubernetes.io/name=configmap-two-containers
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
configmap-two-containers-565fb6d4f4-2xhxf 2/2 Running 0 20s
configmap-two-containers-565fb6d4f4-g5v4j 2/2 Running 0 20s
configmap-two-containers-565fb6d4f4-mzsmf 2/2 Running 0 20s
Expose the Deployment (the kubectl tool creates a Service for you):
kubectl expose deployment configmap-two-containers --name=configmap-service --port=8080 --target-port=80
Use kubectl to forward the port:
# this stays running in the background
kubectl port-forward service/configmap-service 8080:8080 &
Access the service:
curl http://localhost:8080
You should see an output similar to:
Fri Jan 5 08:08:22 UTC 2024 My preferred color is red
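Because the helper writes into the shared emptyDir volume, you can also read the same file directly from the web server container; the output should match what curl returned:
kubectl exec deployments/configmap-two-containers -c nginx -- cat /usr/share/nginx/html/index.html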
Edit the ConfigMap:
kubectl edit configmap color
In the editor that appears, change the value of key color from red to blue. Save your changes.
The kubectl tool updates the ConfigMap accordingly (if you see an error, try again).
Here's an example of how that manifest could look after you edit it:
apiVersion: v1
data:
  color: blue
kind: ConfigMap
# You can leave the existing metadata as they are.
# The values you'll see won't exactly match these.
metadata:
  creationTimestamp: "2024-01-05T08:12:05Z"
  name: color
  namespace: configmap
  resourceVersion: "1801272"
  uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6
Loop over the service URL for a few seconds.
# Cancel this when you're happy with it (Ctrl-C)
while true; do curl --connect-timeout 7.5 http://localhost:8080; sleep 10; done
You should see the output change as follows:
Fri Jan 5 08:14:00 UTC 2024 My preferred color is red
Fri Jan 5 08:14:02 UTC 2024 My preferred color is red
Fri Jan 5 08:14:20 UTC 2024 My preferred color is red
Fri Jan 5 08:14:22 UTC 2024 My preferred color is red
Fri Jan 5 08:14:32 UTC 2024 My preferred color is blue
Fri Jan 5 08:14:43 UTC 2024 My preferred color is blue
Fri Jan 5 08:15:00 UTC 2024 My preferred color is blue
Update configuration via a ConfigMap in a Pod possessing a sidecar container
The above scenario can be replicated by using a sidecar container as the helper container that writes the HTML file. Because a sidecar container is conceptually an init container, it is guaranteed to start before the main web server container. This ensures that the HTML file is always available when the web server is ready to serve it. Please see Enabling sidecar containers to utilize this feature.
If you are continuing from the previous scenario, you can reuse the ConfigMap named color for this scenario. If you are executing this scenario independently, use the kubectl create configmap command to create a ConfigMap from literal values:
kubectl create configmap color --from-literal=color=blue
Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. The two containers share an emptyDir volume that they use to communicate. The main container runs a web server (NGINX). The mount path for the shared volume in the web server container is /usr/share/nginx/html. The second container is a sidecar container based on Alpine Linux which acts as a helper. For this container the emptyDir volume is mounted at /pod-data. The sidecar container writes an HTML file whose content is based on a ConfigMap. The web server container serves the HTML via HTTP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-sidecar-container
  labels:
    app.kubernetes.io/name: configmap-sidecar-container
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: configmap-sidecar-container
  template:
    metadata:
      labels:
        app.kubernetes.io/name: configmap-sidecar-container
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
        - name: config-volume
          configMap:
            name: color
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
      initContainers:
        - name: alpine
          image: alpine:3
          restartPolicy: Always
          volumeMounts:
            - name: shared-data
              mountPath: /pod-data
            - name: config-volume
              mountPath: /etc/config
          command:
            - /bin/sh
            - -c
            - while true; do echo "$(date) My preferred color is $(cat /etc/config/color)" > /pod-data/index.html;
              sleep 10; done;
Create the Deployment:
kubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-and-sidecar-container.yaml
Check the pods for this Deployment to ensure they are ready (matching by selector):
kubectl get pods --selector=app.kubernetes.io/name=configmap-sidecar-container
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
configmap-sidecar-container-5fb59f558b-87rp7 2/2 Running 0 94s
configmap-sidecar-container-5fb59f558b-ccs7s 2/2 Running 0 94s
configmap-sidecar-container-5fb59f558b-wnmgk 2/2 Running 0 94s
Expose the Deployment (the kubectl tool creates a Service for you):
kubectl expose deployment configmap-sidecar-container --name=configmap-sidecar-service --port=8081 --target-port=80
Use kubectl to forward the port:
# this stays running in the background
kubectl port-forward service/configmap-sidecar-service 8081:8081 &
Access the service:
curl http://localhost:8081
You should see an output similar to:
Sat Feb 17 13:09:05 UTC 2024 My preferred color is blue
Edit the ConfigMap:
kubectl edit configmap color
In the editor that appears, change the value of key color from blue to green. Save your changes.
The kubectl tool updates the ConfigMap accordingly (if you see an error, try again).
Here's an example of how that manifest could look after you edit it:
apiVersion: v1
data:
  color: green
kind: ConfigMap
# You can leave the existing metadata as they are.
# The values you'll see won't exactly match these.
metadata:
  creationTimestamp: "2024-02-17T12:20:30Z"
  name: color
  namespace: default
  resourceVersion: "1054"
  uid: e40bb34c-58df-4280-8bea-6ed16edccfaa
Loop over the service URL for a few seconds.
# Cancel this when you're happy with it (Ctrl-C)
while true; do curl --connect-timeout 7.5 http://localhost:8081; sleep 10; done
You should see the output change as follows:
Sat Feb 17 13:12:35 UTC 2024 My preferred color is blue
Sat Feb 17 13:12:45 UTC 2024 My preferred color is blue
Sat Feb 17 13:12:55 UTC 2024 My preferred color is blue
Sat Feb 17 13:13:05 UTC 2024 My preferred color is blue
Sat Feb 17 13:13:15 UTC 2024 My preferred color is green
Sat Feb 17 13:13:25 UTC 2024 My preferred color is green
Sat Feb 17 13:13:35 UTC 2024 My preferred color is green
Update configuration via an immutable ConfigMap that is mounted as a volume
Note:
Immutable ConfigMaps are especially useful for configuration that is constant and not expected to change over time. Marking a ConfigMap as immutable allows a performance improvement where the kubelet does not watch for changes.
If you do need to make a change, you should plan to either:
- change the name of the ConfigMap, and switch to running Pods that reference the new name
- replace all the nodes in your cluster that have previously run a Pod that used the old value
- restart the kubelet on any node where the kubelet previously loaded the old ConfigMap
An example manifest for an Immutable ConfigMap is shown below.
apiVersion: v1
data:
  company_name: "ACME, Inc." # existing fictional company name
kind: ConfigMap
immutable: true
metadata:
  name: company-name-20150801
Create the Immutable ConfigMap:
kubectl apply -f https://k8s.io/examples/configmap/immutable-configmap.yaml
Below is an example of a Deployment manifest with the immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: immutable-configmap-volume
  labels:
    app.kubernetes.io/name: immutable-configmap-volume
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: immutable-configmap-volume
  template:
    metadata:
      labels:
        app.kubernetes.io/name: immutable-configmap-volume
    spec:
      containers:
        - name: alpine
          image: alpine:3
          command:
            - /bin/sh
            - -c
            - while true; do echo "$(date) The name of the company is $(cat /etc/config/company_name)";
              sleep 10; done;
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            name: company-name-20150801
Create the Deployment:
kubectl apply -f https://k8s.io/examples/deployments/deployment-with-immutable-configmap-as-volume.yaml
Check the pods for this Deployment to ensure they are ready (matching by selector):
kubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
immutable-configmap-volume-78b6fbff95-5gsfh 1/1 Running 0 62s
immutable-configmap-volume-78b6fbff95-7vcj4 1/1 Running 0 62s
immutable-configmap-volume-78b6fbff95-vdslm 1/1 Running 0 62s
The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. You can check this report by viewing the logs for one of the Pods in that Deployment:
# Pick one Pod that belongs to the Deployment, and view its logs
kubectl logs deployments/immutable-configmap-volume
You should see an output similar to:
Found 3 pods, using pod/immutable-configmap-volume-78b6fbff95-5gsfh
Wed Mar 20 03:52:34 UTC 2024 The name of the company is ACME, Inc.
Wed Mar 20 03:52:44 UTC 2024 The name of the company is ACME, Inc.
Wed Mar 20 03:52:54 UTC 2024 The name of the company is ACME, Inc.
Note:
Once a ConfigMap is marked as immutable, it is not possible to revert this change, nor to mutate the contents of the data or the binaryData field.
In order to modify the behavior of the Pods that use this configuration, you will create a new immutable ConfigMap and edit the Deployment to define a slightly different Pod template, referencing the new ConfigMap.
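If you try to mutate the existing ConfigMap anyway, the API server should reject the change. For example (the exact error text can vary between Kubernetes versions):
kubectl patch configmap company-name-20150801 --type merge -p '{"data":{"company_name":"Changed, Inc."}}'
You should see an error similar to:
The ConfigMap "company-name-20150801" is invalid: data: Forbidden: field is immutable when `immutable` is set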
Create a new immutable ConfigMap by using the manifest shown below:
apiVersion: v1
data:
  company_name: "Fiktivesunternehmen GmbH" # new fictional company name
kind: ConfigMap
immutable: true
metadata:
  name: company-name-20240312
kubectl apply -f https://k8s.io/examples/configmap/new-immutable-configmap.yaml
You should see an output similar to:
configmap/company-name-20240312 created
Check the newly created ConfigMap:
kubectl get configmap
You should see an output displaying both the old and new ConfigMaps:
NAME DATA AGE
company-name-20150801 1 22m
company-name-20240312 1 24s
Modify the Deployment to reference the new ConfigMap.
Edit the Deployment:
kubectl edit deployment immutable-configmap-volume
In the editor that appears, update the existing volume definition to use the new ConfigMap.
volumes:
- configMap:
    defaultMode: 420
    name: company-name-20240312 # Update this field
  name: config-volume
You should see the following output:
deployment.apps/immutable-configmap-volume edited
This will trigger a rollout. Wait for all the previous Pods to terminate and the new Pods to be in a ready state.
Monitor the status of the Pods:
kubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume
NAME READY STATUS RESTARTS AGE
immutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 13s
immutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 14s
immutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 15s
immutable-configmap-volume-78b6fbff95-5gsfh 1/1 Terminating 0 32m
immutable-configmap-volume-78b6fbff95-7vcj4 1/1 Terminating 0 32m
immutable-configmap-volume-78b6fbff95-vdslm 1/1 Terminating 0 32m
You should eventually see an output similar to:
NAME READY STATUS RESTARTS AGE
immutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 43s
immutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 44s
immutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 45s
View the logs for a Pod in this Deployment:
# Pick one Pod that belongs to the Deployment, and view its logs
kubectl logs deployment/immutable-configmap-volume
You should see an output similar to the below:
Found 3 pods, using pod/immutable-configmap-volume-5fdb88fcc8-n5jx4
Wed Mar 20 04:24:17 UTC 2024 The name of the company is Fiktivesunternehmen GmbH
Wed Mar 20 04:24:27 UTC 2024 The name of the company is Fiktivesunternehmen GmbH
Wed Mar 20 04:24:37 UTC 2024 The name of the company is Fiktivesunternehmen GmbH
Once all the deployments have migrated to use the new immutable ConfigMap, it is advised to delete the old one.
kubectl delete configmap company-name-20150801
Summary
Changes to a ConfigMap mounted as a Volume on a Pod are available seamlessly after the subsequent kubelet sync.
Changes to a ConfigMap that configures environment variables for a Pod are available after the subsequent rollout for the Pod.
Once a ConfigMap is marked as immutable, it is not possible to revert this change (you cannot make an immutable ConfigMap mutable), and you also cannot make any change to the contents of the data or the binaryData field. You can delete and recreate the ConfigMap, or you can make a new, different ConfigMap. When you delete a ConfigMap, running containers and their Pods maintain a mount point to any volume that referenced that existing ConfigMap.
Cleaning up
Terminate the kubectl port-forward commands in case they are running.
Delete the resources created during the tutorial:
kubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume
kubectl delete service configmap-service configmap-sidecar-service
kubectl delete configmap sport fruits color company-name-20240312
kubectl delete configmap company-name-20150801 # In case it was not handled during the task execution
3 - Configuring Redis using a ConfigMap
This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the Configure a Pod to Use a ConfigMap task.
Objectives
- Create a ConfigMap with Redis configuration values
- Create a Redis Pod that mounts and uses the created ConfigMap
- Verify that the configuration was correctly applied.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.
To check the version, enter kubectl version.
- The example shown on this page works with kubectl 1.14 and above.
- Understand Configure a Pod to Use a ConfigMap.
Real World Example: Configuring Redis using a ConfigMap
Follow the steps below to configure a Redis cache using data stored in a ConfigMap.
First create a ConfigMap with an empty configuration block:
cat <<EOF >./example-redis-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: ""
EOF
Apply the ConfigMap created above, along with a Redis pod manifest:
kubectl apply -f example-redis-config.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml
Examine the contents of the Redis pod manifest and note the following:
- A volume named config is created by spec.volumes[1].
- The key and path under spec.volumes[1].configMap.items[0] exposes the redis-config key from the example-redis-config ConfigMap as a file named redis.conf on the config volume.
- The config volume is then mounted at /redis-master by spec.containers[0].volumeMounts[1].
This has the net effect of exposing the data in data.redis-config from the example-redis-config ConfigMap above as /redis-master/redis.conf inside the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:5.0.4
    command:
      - redis-server
      - "/redis-master/redis.conf"
    env:
      - name: MASTER
        value: "true"
    ports:
      - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
      - mountPath: /redis-master-data
        name: data
      - mountPath: /redis-master
        name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: example-redis-config
        items:
          - key: redis-config
            path: redis.conf
Examine the created objects:
kubectl get pod/redis configmap/example-redis-config
You should see the following output:
NAME READY STATUS RESTARTS AGE
pod/redis 1/1 Running 0 8s
NAME DATA AGE
configmap/example-redis-config 1 14s
Recall that we left the redis-config key in the example-redis-config ConfigMap blank:
kubectl describe configmap/example-redis-config
You should see an empty redis-config key:
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:
kubectl exec -it redis -- redis-cli
Check maxmemory:
127.0.0.1:6379> CONFIG GET maxmemory
It should show the default value of 0:
1) "maxmemory"
2) "0"
Similarly, check maxmemory-policy:
127.0.0.1:6379> CONFIG GET maxmemory-policy
Which should also yield its default value of noeviction:
1) "maxmemory-policy"
2) "noeviction"
Now let's add some configuration values to the example-redis-config ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
Update your example-redis-config.yaml file with the configuration above, and then apply the updated ConfigMap:
kubectl apply -f example-redis-config.yaml
Confirm that the ConfigMap was updated:
kubectl describe configmap/example-redis-config
You should see the configuration values we just added:
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru
Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:
kubectl exec -it redis -- redis-cli
Check maxmemory:
127.0.0.1:6379> CONFIG GET maxmemory
It remains at the default value of 0:
1) "maxmemory"
2) "0"
Similarly, maxmemory-policy remains at the noeviction default setting:
127.0.0.1:6379> CONFIG GET maxmemory-policy
Returns:
1) "maxmemory-policy"
2) "noeviction"
The configuration values have not changed because the Pod needs to be restarted to grab updated values from associated ConfigMaps. Let's delete and recreate the Pod:
kubectl delete pod redis
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml
Now re-check the configuration values one last time:
kubectl exec -it redis -- redis-cli
Check maxmemory:
127.0.0.1:6379> CONFIG GET maxmemory
It should now return the updated value of 2097152 (2mb is 2 × 1024 × 1024 bytes):
1) "maxmemory"
2) "2097152"
Similarly, maxmemory-policy has also been updated:
127.0.0.1:6379> CONFIG GET maxmemory-policy
It now reflects the desired value of allkeys-lru:
1) "maxmemory-policy"
2) "allkeys-lru"
Clean up your work by deleting the created resources:
kubectl delete pod/redis configmap/example-redis-config
What's next
- Learn more about ConfigMaps.
- Follow an example of Updating configuration via a ConfigMap.
4 - Adopting Sidecar Containers
This section is relevant for people adopting the new built-in sidecar containers feature for their workloads.
Sidecar containers are not a new concept, as noted in the blog post. Kubernetes has long allowed you to run multiple containers in a Pod to implement this concept. However, running a sidecar container as a regular container has many limitations that are fixed with the new built-in sidecar containers support.
Kubernetes v1.29 [beta] (enabled by default: true)
Objectives
- Understand the need for sidecar containers
- Be able to troubleshoot issues with the sidecar containers
- Understand options to universally "inject" sidecar containers to any workload
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.
Your Kubernetes server must be at or later than version 1.29. To check the version, enter kubectl version.
Sidecar containers overview
Sidecar containers are secondary containers that run along with the main application container within the same Pod. These containers are used to enhance or extend the functionality of the primary app container by providing additional services or functionality, such as logging, monitoring, security, or data synchronization, without directly altering the primary application code. You can read more in the Sidecar containers concept page.
The concept of sidecar containers is not new, and there are multiple implementations of it. As well as sidecar containers that you, the person defining the Pod, want to run, you can also find that some addons modify Pods - before the Pods start running - so that there are extra sidecar containers. The mechanisms to inject those extra sidecars are often mutating webhooks. For example, a service mesh addon might inject a sidecar that configures mutual TLS and encryption in transit between different Pods.
While the concept of sidecar containers is not new, the native implementation of this feature in Kubernetes is. And as with every new feature, adopting it may present certain challenges.
This tutorial explores the challenges and solutions that can be experienced by end users as well as by authors of sidecar containers.
Benefits of built-in sidecar containers
Using Kubernetes' native support for sidecar containers provides several benefits:
- You can configure a native sidecar container to start ahead of init containers.
- Built-in sidecar containers can be authored to guarantee that they are terminated last. Sidecar containers are terminated with a SIGTERM signal once all the regular containers are completed and terminated. If the sidecar container isn't gracefully shut down, a SIGKILL signal will be used to terminate it.
- With Jobs, when the Pod's restartPolicy is OnFailure or Never, native sidecar containers do not block Pod completion. With legacy sidecar containers, special care is needed to handle this situation.
- Also with Jobs, built-in sidecar containers keep being restarted once they are done, even though regular containers would not be with the Pod's restartPolicy: Never.
See differences from init containers to learn more about it.
Adopting built-in sidecar containers
The SidecarContainers feature gate is in beta state starting from Kubernetes version 1.29 and is enabled by default. Some clusters may have this feature disabled, or have software installed that is incompatible with the feature. When this happens, the Pod may be rejected, or the sidecar containers may block Pod startup, rendering the Pod useless. This condition is easy to detect, as the Pod simply gets stuck on initialization. However, it is rarely clear what caused the problem.
Here are the considerations and troubleshooting steps that one can take while adopting sidecar containers for their workload.
Ensure the feature gate is enabled
As a very first step, make sure that both the API server and the nodes are at Kubernetes version v1.29 or later. The feature will break on clusters where nodes are running earlier versions where it is not enabled.
Note
The feature gate can be enabled on nodes running version 1.28. However, the behavior of built-in sidecar container termination was different in version 1.28, and it is not recommended to adjust a sidecar's behavior to match it. If your only concern is the startup order, nodes running version 1.28 with the feature gate enabled are also acceptable.
You should ensure that the feature gate is enabled for the API server(s) within the control plane and for all nodes.
One way to check whether the feature gate is enabled is to run a command like this:
- For the API server:
kubectl get --raw /metrics | grep kubernetes_feature_enabled | grep SidecarContainers
- For the individual node:
kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics | grep kubernetes_feature_enabled | grep SidecarContainers
If you see something like kubernetes_feature_enabled{name="SidecarContainers",stage="BETA"} 1, it means that the feature is enabled.
Check for 3rd party tooling and mutating webhooks
If you experience issues when validating the feature, it may be an indication that one of the 3rd party tools or mutating webhooks is broken.
When the SidecarContainers feature gate is enabled, Pods gain a new field in their API. Some tools or mutating webhooks might have been built with an earlier version of the Kubernetes API. If tools pass unknown fields as-is, using one of the various patching strategies to mutate a Pod object, this will not be a problem. However, there are tools that strip out unknown fields; if you have those, they must be recompiled with the v1.28+ version of the Kubernetes API client code.
The way to check this is to use the kubectl describe pod command on a Pod that has passed through mutating admission. If any tool stripped out the new field (restartPolicy: Always on an init container), you will not see it in the command output. If you hit an issue like this, please advise the author of the tool or webhook to use one of the patching strategies for modifying objects instead of a full object update.
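A quick way to check a specific Pod is to read the field with JSONPath (substitute your Pod's name; an empty result after injection suggests that something stripped the field):
kubectl get pod <pod-name> -o jsonpath='{.spec.initContainers[*].restartPolicy}'
If the native sidecar survived admission, the output should include Always.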
Note
A mutating webhook may update Pods based on certain conditions, so sidecar containers may work for some Pods and fail for others.
Automatic injection of sidecars
If you are using software that injects sidecars automatically, there are a few possible strategies you can follow to ensure that native sidecar containers can be used. All of the strategies are generally ways to decide whether the Pod that the sidecar will be injected into will land on a node that supports the feature.
As an example, you can follow this conversation in the Istio community. The discussion explores the options listed below.
- Mark Pods that land on nodes supporting sidecars. You can use node labels and node affinity to mark nodes that support sidecar containers, and to have Pods land on those nodes.
- Check node compatibility on injection. During sidecar injection you may use the following strategies to check node compatibility:
  - query the node version and assume the feature gate is enabled on versions 1.29+
  - query the node's Prometheus metrics and check the feature enablement status
  - assume the nodes are running with a supported version skew from the API server
  - there may be other custom ways to detect node compatibility
- Develop a universal sidecar injector. The idea of a universal sidecar injector is to inject the sidecar container both as a regular container and as a native sidecar container, with runtime logic that decides which one will do the work. The universal sidecar injector is wasteful, as it will account for requests twice, but may be considered a workable solution for special cases.
  - One approach is for the native sidecar container to detect the node version on start and exit immediately if the version does not support the sidecar feature.
  - Consider a runtime feature detection design (a shell sketch of this logic follows the list):
    - Define an emptyDir volume so the containers can communicate with each other.
    - Inject an init container, let's call it NativeSidecar, with restartPolicy: Always.
    - NativeSidecar must write a file to the emptyDir volume indicating the first run, and exit immediately with exit code 0.
    - NativeSidecar, on restart (which only happens when native sidecars are supported), checks that the file already exists in the emptyDir volume and changes it, indicating that the built-in sidecar containers are supported and running.
    - Inject a regular container, let's call it OldWaySidecar.
    - OldWaySidecar on start checks for the presence of the file in the emptyDir volume.
      - If the file indicates that NativeSidecar is NOT running, it assumes that the sidecar feature is not supported and works as the sidecar itself.
      - If the file indicates that NativeSidecar is running, it either does nothing and sleeps forever (when the Pod's restartPolicy=Always) or exits immediately with exit code 0 (when the Pod's restartPolicy!=Always).
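Below is a minimal sketch of that detection logic as shell entrypoints. Everything specific here is an assumption for illustration: the /detect mount path, the marker file contents, the /sidecar-work.sh payload, and the POD_RESTART_POLICY variable (which the injector would have to template in from the Pod spec) are not part of any existing injector.
#!/bin/sh
# NativeSidecar entrypoint; injected as an init container with restartPolicy: Always.
if [ -f /detect/marker ]; then
  # We were restarted: that only happens when native sidecar support is active.
  echo "native" > /detect/marker
  exec /sidecar-work.sh  # assumed: the actual sidecar payload
fi
# First run: leave a marker and exit 0. Without native support, an init
# container runs exactly once, so the marker keeps saying "first-run".
echo "first-run" > /detect/marker
exit 0

#!/bin/sh
# OldWaySidecar entrypoint; injected as a regular container.
sleep 5  # crude grace period for NativeSidecar to restart; a real injector needs a sturdier handshake
if grep -q native /detect/marker 2>/dev/null; then
  # The native sidecar is doing the work; stand down.
  if [ "$POD_RESTART_POLICY" = "Always" ]; then
    while true; do sleep 3600; done  # sleep forever so the container is not restarted
  fi
  exit 0
fi
exec /sidecar-work.sh  # no native support detected: act as the legacy sidecar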
What's next
- Learn more about sidecar containers.