Managing Kubernetes clusters with kubeadm

Familiarize yourself with Kubernetes architecture and components. Learn how to run Kubernetes clusters on Linux machines using kubeadm. Freeze cluster version and perform upgrades anytime without service disruption.

Kubernetes cluster architecture

Overview

(Diagram: Kubernetes cluster architecture)

Kubernetes cluster components can be organized into two groups: control plane components and worker node components.

Control plane and worker node components can be deployed on any node of the Kubernetes cluster, but having dedicated nodes for the control plane and for the workers makes management easier.

Worker nodes components

On the worker nodes we have:

  • kubelet:

    • responsible for running the containers of Pods
    • gets its instructions from the control plane (which Pod containers to run, with which configuration…)
    • reports Pod statuses to the control plane through the kube-apiserver component
  • kube-proxy (optional):

    • responsible for creating and maintaining the network rules needed to reach Pods, from inside or outside the cluster (reaching Pods through Services with external IPs...)
  • Container runtime:

    • the software that actually runs the containers (containerd, CRI-O…)
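
Once the cluster is up and running (as we will do below), you can get a feel for the NAT rules kube-proxy maintains on a node. A quick, purely informational peek, assuming the default iptables proxy mode:

# List the entry-point chain kube-proxy creates for Services
# (iptables proxy mode only; output varies per cluster)
sudo iptables -t nat -L KUBE-SERVICES -n | head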

Control plane nodes components

On the control plane nodes we have:

  • kube-apiserver:

    • a REST API used by kubectl, worker nodes and other external components to communicate with the control plane
  • kube-scheduler:

    • decides on which worker node a newly created Pod should be running
    • scheduling decisions are based on factors like:
      • resource requirements
      • affinity and anti-affinity specifications
      • data locality
      • taints and tolerations
  • etcd:

    • a key/value store database for the Kubernetes system
    • everything in the Kubernetes platform (Pods, Services…) is represented as an object stored inside that database
  • kube-controller-manager:

    • runs controller processes that continually monitor some of the Kubernetes components/objects
    • makes the necessary changes so that they meet the desired state defined by users or by the system itself:
      • create service accounts and tokens for new namespaces
      • create Pods to run Job tasks
      • create Pods to meet ReplicaSet replicas number
      • ...
    • each controller is logically a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process
  • cloud-controller-manager:

    • bundles controllers that interact with cloud provider APIs
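
On a kubeadm-built cluster like the one we set up below, these control plane components run as static Pods labelled 'tier=control-plane', so once the cluster is running you can list them with:

# Show the control plane components running as static Pods
# (kubeadm sets the tier=control-plane label on their manifests)
kubectl get pods -n kube-system -l tier=control-plane -o wide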

Configuring control plane and worker nodes

Kubernetes cluster node resources

  • 1 control plane node (2 CPUs, 2 GB of RAM)
  • 2 worker nodes (2 CPUs, 2 GB of RAM each)
  • Ubuntu 22.04 LTS for all nodes
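
A quick sanity check that a node meets these requirements (purely informational commands):

# CPU count, available memory and OS release of the current node
nproc
free -h
head -n 2 /etc/os-release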

Pre-requisites

To be run on control plane nodes and worker nodes (copy/paste ok):

# Disable swap. By default Kubelet won't start when swap is enabled
sudo swapoff -a && \

# To make kubelet start properly and make Kubernetes use swap, 
# have a look at this: 
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#swap-configuration

# Make sure the br_netfilter and overlay kernel
# modules are automatically loaded at boot
echo "overlay" | sudo tee /etc/modules-load.d/k8s.conf && \
echo "br_netfilter" | sudo tee -a /etc/modules-load.d/k8s.conf && \

# Load the overlay and br_netfilter kernel modules
sudo modprobe br_netfilter && \
sudo modprobe overlay && \

# To verify, run:
# lsmod | egrep "overlay|br_netfilter"

# Result:
# overlay               151552  0
# br_netfilter           32768  0
# bridge                311296  1 br_netfilter

# Make sure IPv4 packets forwarding (routing)
# is always enabled on the machine (even after reboot)
echo "net.ipv4.ip_forward = 1" | 
  sudo tee -a /etc/sysctl.d/k8s.conf && \

# Make preceding changes effective
sudo systemctl restart procps.service && \

# To verify, run:
# sysctl net.ipv4.ip_forward

# Result:
# net.ipv4.ip_forward = 1

# Install pre-requisite packages
sudo apt update && sudo apt install -y socat software-properties-common curl

Install a container runtime

We need to install a container runtime that will be responsible for running the containers of our Kubernetes cluster's Pods. Any container runtime implementing the Kubernetes CRI (Container Runtime Interface) standard should work.

We are going to use the CRI-O container runtime. For a list of available versions, have a look at CRI-O releases.

Let's install and configure CRI-O on control plane nodes and worker nodes (copy/paste ok):

# Set the version of CRI-O to install
CRIO_VERSION=v1.29 && \
    
# Get required GPG key for verifying CRI-O packages signature
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/deb/Release.key |
    sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg && \

# Add CRI-O deb packages repository
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/deb/ /" |
    sudo tee /etc/apt/sources.list.d/cri-o.list && \

# Install CRI-O
sudo apt update && sudo apt install -y cri-o && \

# Start CRI-O systemd service
sudo systemctl start crio.service

Note that the cgroups driver used by the container runtime should be the same as the one used by the Kubernetes kubelet we are going to install next. CRI-O uses the systemd cgroups driver by default. To show that setting from the CRI-O configuration, use:

$ crio config | grep cgroup_manager

To override a configuration setting, add a new configuration file inside /etc/crio/crio.conf.d. Use crio config | less to explore available configuration settings and their default values.
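
For example, here is a minimal drop-in that simply pins the systemd cgroup manager explicitly (the '10-cgroup-manager.conf' filename is arbitrary, and CRI-O already defaults to systemd, so this only makes the choice explicit):

# Create a CRI-O drop-in configuration file and reload CRI-O
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-cgroup-manager.conf
[crio.runtime]
cgroup_manager = "systemd"
EOF
sudo systemctl restart crio.service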

Install kubelet, kubeadm and kubectl

For a list of available Kubernetes versions, have a look at kubernetes releases.

We are going to install kubelet, kubeadm and kubectl on the control plane nodes and worker nodes (copy/paste ok):

KUBERNETES_REPO_VERSION=v1.29 && \

# Get required GPG key for verifying Kubernetes packages signature
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_REPO_VERSION/deb/Release.key |
    sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && \

# Add Kubernetes deb packages repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_REPO_VERSION/deb/ /" |
    sudo tee /etc/apt/sources.list.d/kubernetes.list && \

KUBERNETES_INSTALL_VERSION=1.29.9 && \
sudo apt update && sudo apt install -y kubelet=$KUBERNETES_INSTALL_VERSION* kubeadm=$KUBERNETES_INSTALL_VERSION* kubectl=$KUBERNETES_INSTALL_VERSION*

Verify:

$ apt policy kubelet kubeadm kubectl
kubelet:
  Installed: 1.29.9-1.1
  Candidate: 1.29.9-1.1
  Version table:
 *** 1.29.9-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
        100 /var/lib/dpkg/status    
kubeadm:
  Installed: 1.29.9-1.1
  Candidate: 1.29.9-1.1
  Version table:
 *** 1.29.9-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
        100 /var/lib/dpkg/status       
kubectl:
  Installed: 1.29.9-1.1
  Candidate: 1.29.9-1.1
  Version table:
 *** 1.29.9-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
        100 /var/lib/dpkg/status

Because our nodes use systemd as their init system (and therefore as their cgroups manager), the systemd cgroups driver is recommended over kubelet's default cgroupfs driver. Starting with version 1.22, kubeadm sets the kubelet cgroups driver to systemd by default.

The kubelet will therefore use the systemd cgroups driver, the same driver our previously configured CRI-O container runtime uses, so everything is consistent.
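
If you want to double-check this later, once the cluster has been initialized, the kubelet configuration stored by kubeadm in the cluster exposes that setting:

# Should print: cgroupDriver: systemd
kubectl get cm kubelet-config -n kube-system -o yaml | grep cgroupDriver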

Now let's make sure our Kubernetes packages are not upgraded automatically, since we want to perform upgrades manually:

$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

Initialize the cluster on control plane node

Pods networking

The network CIDR we are going to use for the pods of the cluster corresponds to the default network CIDR of the networking plugin we are going to use. Indeed, Kubernetes pod networking is provided by third-party addons/plugins, and cluster nodes won't become Ready until a networking plugin is installed.

Our CRI-O container runtime comes with a default networking configuration that the Kubernetes cluster would use if we didn't install a specific networking addon. That default configuration would make our Kubernetes pods use the following subnet:

(k8s-control)$ cat /etc/cni/net.d/11-crio-ipv4-bridge.conflist | grep subnet
            [{ "subnet": "10.85.0.0/16" }]

Unfortunately, that default configuration is not suitable for clusters with more than one node. For our three-node cluster, we therefore need to install a networking plugin that supports multi-node clusters; we will use Flannel.

To avoid issues with Flannel, we need to disable the default CNI bridge plugin configuration provided by CRI-O:

(k8s-control)$ cd /etc/cni/net.d/
(k8s-control)$ mv 11-crio-ipv4-bridge.conflist 11-crio-ipv4-bridge.conflist.disabled

High availability notes

As we plan to add more control plane nodes in the future (for high availability), we will use the '--control-plane-endpoint' flag, indicating the IP address or domain name of the load balancer(s) that will sit in front of the control plane nodes.

For now, as we don't have load balancers or multiple control plane nodes yet, we will use as that endpoint a domain name ('api-servers.k8s.local') pointing to the IP address of our single control plane node.

To make the future cluster nodes properly resolve that domain name, add the following entry to the '/etc/hosts' file of each machine:

<control_plane_node_ip_address> api-servers.k8s.local
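
For example, using the control plane node IP address that appears later in this guide:

# Adjust the IP address to your own control plane node
echo "172.27.36.172 api-servers.k8s.local" | sudo tee -a /etc/hosts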

After adding more control plane nodes in the future, we will simply need to make that domain name point to the IP address of the load balancer(s) placed in front of the control plane nodes, in order to reach the cluster's API servers.

Cluster initialization

We are going to initialize the Kubernetes cluster using the default Flannel networking CIDR ('10.244.0.0/16') for the cluster's pods and then deploy Flannel.

The '--cri-socket=container_runtime_unix_socket_path' option can also be used when multiple container runtimes are installed, to indicate which one kubeadm should use.
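
For instance, the same initialization command with the CRI socket made explicit (the path below assumes a default CRI-O installation) would look like this:

sudo kubeadm init \
  --pod-network-cidr 10.244.0.0/16 \
  --control-plane-endpoint api-servers.k8s.local \
  --kubernetes-version v1.29.9 \
  --cri-socket unix:///var/run/crio/crio.sock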

Use the following command to initialize the cluster:

(k8s-control)$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --control-plane-endpoint api-servers.k8s.local --kubernetes-version v1.29.9
[init] Using Kubernetes version: v1.29.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'  
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api-servers.k8s.local k8s-control kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.27.36.172]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-control localhost] and IPs [172.27.36.172 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-control localhost] and IPs [172.27.36.172 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.001927 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-control as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-control as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9rrzpt.b53urwsvxkrzqw4i
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join api-servers.k8s.local:6443 --token 9rrzpt.b53urwsvxkrzqw4i \
        --discovery-token-ca-cert-hash sha256:0e70caff86cbda4341e7072756835402dd79b9b9be0940dc81c6cb0fd8956f4a \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join api-servers.k8s.local:6443 --token 9rrzpt.b53urwsvxkrzqw4i \
        --discovery-token-ca-cert-hash sha256:0e70caff86cbda4341e7072756835402dd79b9b9be0940dc81c6cb0fd8956f4a

Now let's create the kubeconfig file required to authenticate and communicate with the cluster:

mkdir -p $HOME/.kube && \
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We can now use 'kubectl' to update or read some of the cluster resources:

(k8s-control)$ kubectl get nodes
NAME          STATUS      ROLES           AGE   VERSION
k8s-control   NotReady    control-plane   41s   v1.29.9

(k8s-control)$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   46s
kube-node-lease   Active   46s
kube-public       Active   46s
kube-system       Active   46s

# The control plane components
(k8s-control)$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-76f75df574-sd4sj              0/1     Pending   0          52s
coredns-76f75df574-w7f54              0/1     Pending   0          52s
etcd-k8s-control                      1/1     Running   0          68s
kube-apiserver-k8s-control            1/1     Running   0          70s
kube-controller-manager-k8s-control   1/1     Running   0          64s
kube-proxy-slpcs                      1/1     Running   0          52s
kube-scheduler-k8s-control            1/1     Running   0          68s

As we can see, the control plane node is not ready and the coredns addon pods are in a pending state. This is because there is no networking plugin installed yet.

Let's install Flannel:

(k8s-control)$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
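
Note that Flannel's default network ('10.244.0.0/16') matches the '--pod-network-cidr' we passed to 'kubeadm init'. If a different pod CIDR had been chosen, the 'net-conf.json' section of the Flannel manifest would need to be edited to match before applying it. A rough sketch, assuming a hypothetical '10.10.0.0/16' pod CIDR:

# Download the manifest, change the pod network, then apply it
curl -fsSLo kube-flannel.yml \
  https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
sed -i 's#10.244.0.0/16#10.10.0.0/16#' kube-flannel.yml
kubectl apply -f kube-flannel.yml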

Let's verify that the node is now ready and the coredns pods are running:

(k8s-control)$ kubectl get nodes
NAME          STATUS      ROLES           AGE    VERSION
k8s-control   Ready       control-plane   5m18s  v1.29.9

(k8s-control)$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-76f75df574-sd4sj              1/1     Running   0          5m25
coredns-76f75df574-w7f54              1/1     Running   0          5m23
(...)

Make worker nodes join the cluster

To get the 'kubeadm join' command that should be run on worker nodes to make them join the cluster, run the following on the control plane node:

(k8s-control)$ kubeadm token create --print-join-command

Then, run the printed join command with sudo on worker nodes to make them join the Kubernetes cluster:

(k8s-worker1)$ sudo kubeadm join api-servers.k8s.local:6443 --token nhunxq.1owm513wkij8as6e --discovery-token-ca-cert-hash sha256:0e70caff86cbda4341e7072756835402dd79b9b9be0940dc81c6cb0fd8956f4a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

We have also made 'k8s-worker2' join the cluster using the same join command.
Let's verify that the newly added worker nodes are now part of the cluster by running the following command on the control plane node:

(k8s-control)$ kubectl get nodes
NAME          STATUS   ROLES           AGE    VERSION
k8s-control   Ready    control-plane   8m6s   v1.29.9
k8s-worker1   Ready    <none>          24s    v1.29.9
k8s-worker2   Ready    <none>          16s    v1.29.9

We have successfully added worker nodes to the cluster.

Let's run some pods and verify that DNS, internet access and communication between pods are working properly:

(k8s-control)$ kubectl run nginx --image=nginx
pod/nginx created

(k8s-control)$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS        AGE     IP           NODE          NOMINATED NODE   READINESS GATES
nginx     1/1     Running   0               7s      10.244.1.2   k8s-worker1   <none>           <none>

(k8s-control)$ kubectl run -it busybox --image=busybox -- sh
If you don't see a command prompt, try pressing enter.
/ #
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local mshome.net
nameserver 10.96.0.10 # IP address of the coredns service
options ndots:5
/ #
/ # nslookup hackerstack.org # DNS resolution is working
Server:         10.96.0.10
Address:        10.96.0.10:53

Non-authoritative answer:
Name:   hackerstack.org
Address: 172.67.200.204
Name:   hackerstack.org
Address: 104.21.36.232
(...)
/ #
/ # nc -vz hackerstack.org 443 # internet access is working
hackerstack.org (104.21.36.232:443) open
/ #
/ # nc -vz 10.244.1.2 80 # communication with the nginx pod is working
10.244.1.2 (10.244.1.2:80) open

Very good, things are working as expected. The '10.96.0.10' IP address used for DNS resolution by pod containers corresponds to the Kubernetes cluster DNS Service, whose backends are the coredns pods:

(k8s-control)$ kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   66m

(k8s-control)$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-76f75df574-sd4sj              1/1     Running   0          35m
(...)
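
You can also confirm that the coredns pods are the backends of that Service by listing its endpoints (the addresses will differ in your cluster):

kubectl get endpoints kube-dns -n kube-system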

Upgrade the cluster

Version constraints

Here are the version constraints of Kubernetes cluster components relative to kubeadm:

  • Kubelet against kubeadm:
    • same version or three versions older
    • Ex: kubeadm at 1.31.x => kubelet at 1.31.x, 1.30.x, 1.29.x or 1.28.x
  • Other Kubernetes components (kube-apiserver, kube-proxy, kube-controller-manager, kube-scheduler) against kubeadm:
    • same version or one version older
    • Ex: kubeadm at 1.31.x => Kubernetes components at 1.31.x or 1.30.x
    • Kubernetes components target version is specified using the kubeadm '--kubernetes-version' flag
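
Before planning an upgrade, it is also useful to check the versions currently installed on a node:

# Currently installed versions (run on any node)
kubeadm version -o short
kubelet --version
kubectl version --client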

Upgrade steps

We start by upgrading the control plane node and then the worker nodes:

  • Control plane node upgrade:

    • drain the node
    • upgrade kubeadm
    • plan the upgrade: kubeadm upgrade plan
    • apply the upgrade: kubeadm upgrade apply
    • upgrade kubelet and kubectl
    • uncordon the node
  • Worker nodes upgrade:

    • drain the node (from control plane node)
    • upgrade kubeadm
    • upgrade kubelet config, kubelet and kubectl: kubeadm upgrade node
    • uncordon the node

Upgrading the control plane node

Here is the status of the cluster nodes before upgrade:

(k8s-control)$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-control   Ready    control-plane   31h   v1.29.9
k8s-worker1   Ready    <none>          31h   v1.29.9
k8s-worker2   Ready    <none>          31h   v1.29.9

We are going to upgrade the control plane node to Kubernetes version 1.30.x.

Add target Kubernetes version package repository

For a list of available Kubernetes versions, have a look at kubernetes releases.

KUBERNETES_REPO_VERSION=v1.30 && \

# Get required GPG key for verifying Kubernetes packages signature
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_REPO_VERSION/deb/Release.key |
    sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && \

# Add Kubernetes deb packages repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_REPO_VERSION/deb/ /" |
    sudo tee /etc/apt/sources.list.d/kubernetes.list
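
With the new repository in place, you can list the exact patch versions available before choosing one:

sudo apt update && apt-cache madison kubeadm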

Drain the node and upgrade kubeadm

# Drain the node
(k8s-control)$ kubectl drain k8s-control --ignore-daemonsets
node/k8s-control cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-mlzcd
node/k8s-control drained

# Verify
(k8s-control)$ kubectl get nodes
NAME          STATUS                     ROLES           AGE    VERSION
k8s-control   Ready,SchedulingDisabled   control-plane   2d7h   v1.29.9
k8s-worker1   Ready                      <none>          2d7h   v1.29.9
k8s-worker2   Ready                      <none>          23h    v1.29.9

# Upgrade kubeadm
KUBERNETES_INSTALL_VERSION=1.30.5 && \
sudo apt update && sudo apt install -y --allow-change-held-packages \
kubeadm=$KUBERNETES_INSTALL_VERSION*

Plan and apply the upgrade

Do the following to upgrade the Kubernetes components (kube-apiserver, kube-proxy, kube-controller-manager, kube-scheduler) to version 1.30.5, and also etcd and CoreDNS to their latest versions.

  • View the upgrade plan
(k8s-control)$ sudo kubeadm upgrade plan v1.30.5
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.29.9
[upgrade/versions] kubeadm version: v1.30.5
[upgrade/versions] Target version: v1.30.5
[upgrade/versions] Latest version in the v1.29 series: v1.30.5

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE          CURRENT   TARGET
kubelet     k8s-control   v1.29.9   v1.30.5
kubelet     k8s-worker1   v1.29.9   v1.30.5
kubelet     k8s-worker2   v1.29.9   v1.30.5

Upgrade to the latest version in the v1.29 series:

COMPONENT                 NODE          CURRENT    TARGET
kube-apiserver            k8s-control   v1.29.9    v1.30.5
kube-controller-manager   k8s-control   v1.29.9    v1.30.5
kube-scheduler            k8s-control   v1.29.9    v1.30.5
kube-proxy                              1.29.9     v1.30.5
CoreDNS                                 v1.11.1    v1.11.3
etcd                      k8s-control   3.5.15-0   3.5.15-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.30.5

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
  • Perform the upgrade
(k8s-control)$ sudo kubeadm upgrade apply v1.30.5
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.30.5"
[upgrade/versions] Cluster version: v1.29.9
[upgrade/versions] kubeadm version: v1.30.5
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.30.5" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2733535561"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-09-28-08-07-04/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-09-28-08-07-04/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-09-28-08-07-04/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1684861658/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.30.5". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade kubelet and kubectl

# Installation
KUBERNETES_INSTALL_VERSION=1.30.5 && \
sudo apt update && sudo apt install -y --allow-change-held-packages \
kubelet=$KUBERNETES_INSTALL_VERSION* kubectl=$KUBERNETES_INSTALL_VERSION* && \

# Ensure new systemd unit files are loaded
sudo systemctl daemon-reload && \

# Restart kubelet
sudo systemctl restart kubelet

Finalize control plane node upgrade

Now let's verify the control plane version:

(k8s-control)$ kubectl get nodes
NAME          STATUS                     ROLES           AGE    VERSION
k8s-control   Ready,SchedulingDisabled   control-plane   2d7h   v1.30.5
k8s-worker1   Ready                      <none>          2d7h   v1.29.9
k8s-worker2   Ready                      <none>          23h    v1.29.9

As we can see from the output of the preceding command, the control plane node 'k8s-control' has successfully been upgraded to Kubernetes version 1.30.5.

We can now uncordon the node to make it schedulable again, and make sure kubeadm, kubelet and kubectl are not automatically upgraded:

kubectl uncordon k8s-control && \
sudo apt-mark hold kubeadm kubelet kubectl

Final verification steps:

(k8s-control)$ kubectl get nodes
NAME          STATUS   ROLES           AGE    VERSION
k8s-control   Ready    control-plane   2d7h   v1.30.5
k8s-worker1   Ready    <none>          2d7h   v1.29.9
k8s-worker2   Ready    <none>          24h    v1.29.9

(k8s-control)$ apt-mark showhold
kubeadm
kubectl
kubelet

Upgrading the worker nodes

Add target Kubernetes version package repository

For a list of available Kubernetes versions, have a look at kubernetes releases.

KUBERNETES_REPO_VERSION=v1.30 && \

# Get required GPG key for verifying Kubernetes packages signature
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_REPO_VERSION/deb/Release.key |
    sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && \

# Add Kubernetes deb packages repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_REPO_VERSION/deb/ /" |
    sudo tee /etc/apt/sources.list.d/kubernetes.list

Drain the worker node and upgrade kubeadm

# Drain the worker node from the control plane node
(k8s-control)$ kubectl drain k8s-worker1 --ignore-daemonsets --force
node/k8s-worker1 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-h7f2q; deleting Pods that declare no controller: default/busybox, default/nginx, default/ubuntu
evicting pod default/ubuntu
evicting pod default/busybox
evicting pod default/nginx
pod/nginx evicted
pod/busybox evicted
pod/ubuntu evicted
node/k8s-worker1 drained

# Upgrade kubeadm on the worker node
KUBERNETES_INSTALL_VERSION=1.30.5 && \
sudo apt update && sudo apt install -y --allow-change-held-packages \
kubeadm=$KUBERNETES_INSTALL_VERSION*

Upgrade kubelet config, kubelet and kubectl

  • Upgrade kubelet config
(k8s-worker1)$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1488552398/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
  • Upgrade kubelet and kubectl
KUBERNETES_INSTALL_VERSION=1.30.5 && \
sudo apt update && sudo apt install -y --allow-change-held-packages \
kubelet=$KUBERNETES_INSTALL_VERSION* kubectl=$KUBERNETES_INSTALL_VERSION* && \

# Ensure new systemd unit files are loaded
sudo systemctl daemon-reload && \

# Restart kubelet
sudo systemctl restart kubelet

Finalize worker node upgrade

Now let's verify the worker node version:

(k8s-control)$ kubectl get nodes
NAME          STATUS                     ROLES           AGE     VERSION
k8s-control   Ready                      control-plane   2d10h   v1.30.5
k8s-worker1   Ready,SchedulingDisabled   <none>          2d10h   v1.30.5
k8s-worker2   Ready                      <none>          26h     v1.29.9

As we can see from the output of the preceding command, the worker node 'k8s-worker1' has successfully been upgraded to Kubernetes version 1.30.5.

We can now uncordon the node to make it schedulable again, and make sure kubeadm, kubelet and kubectl are not automatically upgraded:

(k8s-control)$ kubectl uncordon k8s-worker1
node/k8s-worker1 uncordoned

(k8s-worker1)$ sudo apt-mark hold kubeadm kubelet kubectl
kubeadm set on hold.
kubelet set on hold.
kubectl set on hold.

Final verification steps:

(k8s-control)$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-control   Ready    control-plane   2d10h   v1.30.5
k8s-worker1   Ready    <none>          2d10h   v1.30.5
k8s-worker2   Ready    <none>          27h     v1.29.9

(k8s-worker1)$ apt-mark showhold
kubeadm
kubectl
kubelet

After repeating the previous worker node upgrade steps on the remaining 'k8s-worker2' worker node, our Kubernetes cluster is fully upgraded:

(k8s-control)$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-control   Ready    control-plane   2d23h   v1.30.5
k8s-worker1   Ready    <none>          2d23h   v1.30.5
k8s-worker2   Ready    <none>          39h     v1.30.5