SKS lifecycle management
Control plane and nodepool versions and releases
Minor release support
Minor release support is aligned with the upstream project's version support lifecycle.
A given version is supported for approximately 14 months after its release.
Patch release support
Exoscale supports the latest patch release (1.30.X). If your cluster is facing any issue, Exoscale support may ask you to upgrade to the latest supported version.
Upgrade path
The following upgrade paths are supported:
- From a minor release to another supported minor release, at its latest patch release
- From a patch release to the latest patch release
Nodepools always get the latest template available matching their current minor release, which always corresponds to the latest patch release. For example, if you are running an older 1.30.1 release and 1.30.2 is available but you have not performed the upgrade of the cluster, cycling your nodepool will still get the 1.30.2 template.
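The supported upgrade paths can be sketched as a small shell check. `upgrade_allowed` is an illustrative helper, not part of the exo CLI: an upgrade is allowed when it stays within the same minor release or moves to the next one.

```shell
# Illustrative helper (not an exo command): check whether a direct
# control-plane upgrade between two versions follows a supported path,
# i.e. stays within the same minor release or moves to the next one.
upgrade_allowed() {
  cur_minor=$(echo "$1" | cut -d. -f2)
  new_minor=$(echo "$2" | cut -d. -f2)
  [ "$new_minor" -eq "$cur_minor" ] || [ "$new_minor" -eq $((cur_minor + 1)) ]
}

upgrade_allowed 1.29.6 1.30.2 && echo "1.29.6 -> 1.30.2: supported"
upgrade_allowed 1.27.5 1.30.2 || echo "1.27.5 -> 1.30.2: not supported (skips 1.28 and 1.29)"
```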
Lifecycle of components running inside a customer cluster (Calico, CoreDNS, Konnectivity, Kubeproxy) is fully managed by Exoscale. These components can be upgraded as follows:
- Minor releases without breaking changes and CVE fix releases: at any time
- Major or minor releases with breaking changes: only during a cluster minor or patch release upgrade (triggered by the customer)
Auto-upgrades
You can enable auto-upgrades of the control plane components to the latest patch release available on your current Kubernetes minor release by setting the auto-upgrade parameter to true on your cluster.
For example, you can either set the --auto-upgrade flag when you create a cluster using the CLI (with exo compute sks create), or enable auto-upgrade on an existing cluster with exo compute sks update.
Auto-upgrades work exactly like upgrades triggered manually by you.
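Using the cluster name and zone from the examples below, the two ways to enable auto-upgrade look like this (verify the exact flag placement with the CLI's --help output for your version):

```shell
# Enable auto-upgrade when creating a cluster
exo compute sks create my-test-cluster --zone de-fra-1 --auto-upgrade

# Enable auto-upgrade on an existing cluster
exo compute sks update my-test-cluster --zone de-fra-1 --auto-upgrade
```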
Release schedule
The design goal for SKS is to make versions available within 2 weeks of their upstream release. However, this timeframe can vary depending on the importance of the release or any breaking changes associated with it.
Upgrading a cluster
To upgrade a cluster:
- Upgrade the SKS control plane
- Cycle your nodepools so the nodes are running the latest version
Note
When you need to upgrade, e.g. from 1.27.5 to 1.30.2, you need to perform these 2 steps for every intermediate minor release (1.28 and 1.29):
1. Upgrade the SKS control plane to 1.28.<latest version>
2. Cycle your nodepools so the nodes are running version 1.28.<latest version>
3. Upgrade the SKS control plane to 1.29.<latest version>
4. Cycle your nodepools so the nodes are running version 1.29.<latest version>
5. Upgrade the SKS control plane to 1.30.2
6. Cycle your nodepools so the nodes are running version 1.30.2
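The sequence in this note can be generated mechanically. Below is an illustrative shell helper (not an exo command) that prints the required hops, one minor release at a time:

```shell
# Print the upgrade hops needed to go from one version to another,
# stepping through every intermediate minor release.
upgrade_hops() {
  cur=$(echo "$1" | cut -d. -f2)
  target_minor=$(echo "$2" | cut -d. -f2)
  while [ "$cur" -lt "$target_minor" ]; do
    cur=$((cur + 1))
    if [ "$cur" -eq "$target_minor" ]; then
      echo "upgrade control plane + cycle nodepools to $2"
    else
      echo "upgrade control plane + cycle nodepools to 1.$cur.<latest>"
    fi
  done
}

upgrade_hops 1.27.5 1.30.2
```

For 1.27.5 to 1.30.2 this prints three hops: 1.28.<latest>, 1.29.<latest>, then 1.30.2.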
Finding out your SKS control plane version
To list the current running version:
$ exo compute sks show --zone de-fra-1 my-test-cluster
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ SKS CLUSTER │ │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51 │
│ Name │ my-test-cluster │
│ Description │ my-test-cluster │
│ Zone │ de-fra-1 │
│ Creation Date │ 2024-07-10 09:38:33 +0000 UTC │
│ Endpoint │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version │ 1.29.6 │
│ Service Level │ pro │
│ CNI │ calico │
│ Add-Ons │ exoscale-cloud-controller │
│ State │ running │
│ Nodepools │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
Here the running version is 1.29.6.
List available versions
$ exo compute sks versions
┼─────────┼
│ VERSION │
┼─────────┼
│ 1.30.2 │
│ 1.29.6 │
│ 1.28.11 │
┼─────────┼
The version 1.30.2 is the latest available.
Trigger the upgrade of the cluster control plane
$ exo compute sks upgrade my-test-cluster --zone de-fra-1 1.30.2
✔ Upgrading SKS cluster "my-test-cluster"... 1m57s
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ SKS CLUSTER │ │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51 │
│ Name │ my-test-cluster │
│ Description │ my-test-cluster │
│ Zone │ de-fra-1 │
│ Creation Date │ 2024-07-10 09:38:33 +0000 UTC │
│ Endpoint │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version │ 1.30.2 │
│ Service Level │ pro │
│ CNI │ calico │
│ Add-Ons │ exoscale-cloud-controller │
│ State │ running │
│ Nodepools │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
However, the attached nodepool still has nodes running the previous version:
$ kubectl --kubeconfig my-test-cluster.kubeconfig get node
NAME STATUS ROLES AGE VERSION
pool-ad590-fizsv Ready <none> 14m v1.29.6
pool-ad590-iyqua Ready <none> 14m v1.29.6
pool-ad590-tjxga Ready <none> 14m v1.29.6
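After a control plane upgrade you can script the check for outdated nodes; the kubeconfig path and target version below are taken from this example:

```shell
# Show nodes whose kubelet is not yet on the target version
kubectl --kubeconfig my-test-cluster.kubeconfig get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.nodeInfo.kubeletVersion}{"\n"}{end}' \
  | awk '$2 != "v1.30.2"'
```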
Cycling a node pool
There are several ways to do a nodepool upgrade:
- Scale the nodepool and drain any old nodes
- Drain and destroy nodes one by one
- Create a new nodepool
Scale the nodepool and drain old nodes
The first solution is to scale your nodepool and then drain nodes from it. For example, you could scale your nodepool from 3 nodes to 6 instances:
exo compute sks nodepool scale --zone de-fra-1 my-test-cluster my-nodepool 6
(Note that we double the number of nodes in this guide for demonstration purposes, but you can upgrade your nodes in smaller batches if desired.)
When the new nodes are available, you can use kubectl drain <node name> on your old nodes to evict their pods.
After the nodes are drained, remove them from the nodepool with exo compute sks nodepool evict --zone de-fra-1 my-test-cluster my-nodepool <node name>. You can pass several node names to the command.
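Putting the scale-and-drain steps together, using the node names from the earlier kubectl output (adjust them to your own cluster):

```shell
# Drain each old node so workloads move to the new ones
for node in pool-ad590-fizsv pool-ad590-iyqua pool-ad590-tjxga; do
  kubectl --kubeconfig my-test-cluster.kubeconfig drain "$node" \
    --ignore-daemonsets --delete-emptydir-data
done

# Then evict the drained nodes from the nodepool in one call
exo compute sks nodepool evict --zone de-fra-1 my-test-cluster my-nodepool \
  pool-ad590-fizsv pool-ad590-iyqua pool-ad590-tjxga
```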
Drain and destroy nodes one by one
Another solution is to drain and destroy nodes one by one:
- drain one node using kubectl drain <node name>
- remove the node from your cluster with kubectl delete node <node name>
- destroy the node with exo compute instance delete <node name>
- repeat as needed
A new instance will be automatically created to replace the one that was destroyed.
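The steps above for a single node, using an example node name from earlier in this guide:

```shell
node=pool-ad590-fizsv  # example node name; repeat for each node

# Drain workloads off the node
kubectl --kubeconfig my-test-cluster.kubeconfig drain "$node" \
  --ignore-daemonsets --delete-emptydir-data

# Deregister the node from the cluster
kubectl --kubeconfig my-test-cluster.kubeconfig delete node "$node"

# Destroy the underlying instance; the nodepool replaces it automatically
exo compute instance delete --zone de-fra-1 "$node"
```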
Create a new nodepool
Another method is to create a new nodepool and then drain all nodes from the old nodepool.
Then you can delete the old nodepool using exo compute sks nodepool delete --zone de-fra-1 my-test-cluster my-nodepool.
Exoscale Academy
Are you interested in general rollout and update topics with Kubernetes? Take a look at the free SKS ADVANCED course in our online academy.