Control plane and nodepool versions and releases

Minor release support

Minor release support is aligned with the upstream Kubernetes project's version support lifecycle.

A given version is supported for approximately 14 months after its release.

Patch release support

Exoscale supports the latest patch release of each supported minor release (for example, 1.22.X). If your cluster is facing an issue, Exoscale support may ask you to upgrade to the latest supported version.

Upgrade path

The following upgrade paths are supported:

  • From a minor release to another supported minor and latest patch release
  • From a patch release to the latest patch release

Nodepools always receive the latest template available for their current minor release, which is always the latest patch release. For example, if you are running an older 1.22.2 release and 1.22.4 is available but you have not upgraded the cluster, cycling your nodepool will still get the 1.22.4 template.

The lifecycle of components running inside a customer cluster (Calico, CoreDNS, Konnectivity, kube-proxy) is fully managed by Exoscale. These components can be upgraded as follows:

  • Minor releases without breaking changes and CVE fix releases: at any time
  • Major or minor releases with breaking changes: only during a cluster minor or patch release upgrade (triggered by the customer)

Auto-upgrades

You can enable auto-upgrades of the control plane components to the latest patch release available on your current Kubernetes minor release by setting the auto-upgrade parameter to true on your cluster.

For example, you can either set the --auto-upgrade flag when you create a cluster using the CLI (with exo compute sks create), or enable auto-upgrade on an existing cluster with exo compute sks update.
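For example (a minimal sketch: the cluster name and zone are placeholders, and the exact boolean syntax of the update flag may differ between CLI versions):

$ exo compute sks create my-test-cluster --zone de-fra-1 --auto-upgrade
$ exo compute sks update my-test-cluster --zone de-fra-1 --auto-upgrade=true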

Auto-upgrades behave exactly like upgrades you trigger manually.

Release schedule

The design goal for SKS is to make new versions available within 2 weeks of their upstream release. However, this timeframe can vary depending on the importance of the release or any breaking changes associated with it.

Upgrading a cluster

To upgrade a cluster:

  1. Upgrade the SKS control plane
  2. Cycle your nodepools so the nodes are running the latest version

Finding out your SKS control plane version

To list the current running version:

$ exo compute sks show --zone de-fra-1 my-test-cluster
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│  SKS CLUSTER  │                                                                  │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID            │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51                             │
│ Name          │ my-test-cluster                                                  │
│ Description   │ my-test-cluster                                                  │
│ Zone          │ de-fra-1                                                         │
│ Creation Date │ 2022-08-30 09:38:33 +0000 UTC                                    │
│ Endpoint      │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version       │ 1.21.4                                                           │
│ Service Level │ pro                                                              │
│ CNI           │ calico                                                           │
│ Add-Ons       │ exoscale-cloud-controller                                        │
│ State         │ running                                                          │
│ Nodepools     │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool               │
┼───────────────┼──────────────────────────────────────────────────────────────────┼

Here the running version is 1.21.4.

List available versions

To list the available versions:

$ exo compute sks versions
┼─────────┼
│ VERSION │
┼─────────┼
│ 1.22.1  │
┼─────────┼

The version 1.22.1 is the latest available.

Triggering the upgrade of the cluster control plane

$ exo compute sks upgrade my-test-cluster --zone de-fra-1 1.22.1
 ✔ Upgrading SKS cluster "my-test-cluster"... 1m57s
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│  SKS CLUSTER  │                                                                  │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID            │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51                             │
│ Name          │ my-test-cluster                                                  │
│ Description   │ my-test-cluster                                                  │
│ Zone          │ de-fra-1                                                         │
│ Creation Date │ 2022-08-30 09:38:33 +0000 UTC                                    │
│ Endpoint      │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version       │ 1.22.1                                                           │
│ Service Level │ pro                                                              │
│ CNI           │ calico                                                           │
│ Add-Ons       │ exoscale-cloud-controller                                        │
│ State         │ running                                                          │
│ Nodepools     │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool               │
┼───────────────┼──────────────────────────────────────────────────────────────────┼

However, the attached nodepool still has nodes running the previous version:

$ kubectl --kubeconfig my-test-cluster.kubeconfig get node
NAME               STATUS   ROLES    AGE   VERSION
pool-ad590-fizsv   Ready    <none>   14m   v1.21.4
pool-ad590-iyqua   Ready    <none>   14m   v1.21.4
pool-ad590-tjxga   Ready    <none>   14m   v1.21.4

Cycling a nodepool

There are several ways to upgrade a nodepool:

  1. Scale the nodepool and drain any old nodes
  2. Drain and destroy nodes one by one
  3. Create a new nodepool

Scale the nodepool and drain old nodes

The first solution is to scale your nodepool and then drain the old nodes. For example, you could scale your nodepool from 3 nodes to 6 instances:

$ exo compute sks nodepool scale --zone de-fra-1 my-test-cluster my-nodepool 6

(Note that we double the number of nodes in this guide for demonstration purposes, but you can upgrade your nodes in smaller batches if desired.)

When the new nodes are available, you can use kubectl drain <node name> on your old nodes to remove pods.
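For example, using one of the node names from the output above (a sketch: the --ignore-daemonsets and --delete-emptydir-data flags are typically needed for nodes running DaemonSets or pods with emptyDir volumes, but adjust them to your workloads):

$ kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-fizsv --ignore-daemonsets --delete-emptydir-data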

After the nodes are drained, you can remove them from the nodepool with exo compute sks nodepool evict --zone de-fra-1 my-test-cluster my-nodepool <node name>. You can provide several node names as arguments to the command.
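For example, evicting all three old nodes at once (node names taken from the earlier output):

$ exo compute sks nodepool evict --zone de-fra-1 my-test-cluster my-nodepool pool-ad590-fizsv pool-ad590-iyqua pool-ad590-tjxga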

Drain and destroy nodes one by one

Another solution is to drain and destroy nodes one by one (see the example after this list):

  • drain one node using kubectl drain <node name>
  • remove the node from your cluster with kubectl delete node <node name>
  • destroy the node with exo compute instance delete <node name>
  • repeat as needed

A new instance will be created automatically to replace the one that was destroyed.
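For example, for a single node (a sketch: the node name comes from the earlier output, and exo compute instance delete may prompt for confirmation):

$ kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-fizsv --ignore-daemonsets --delete-emptydir-data
$ kubectl --kubeconfig my-test-cluster.kubeconfig delete node pool-ad590-fizsv
$ exo compute instance delete --zone de-fra-1 pool-ad590-fizsv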

Create a new nodepool

Another method is to create a new nodepool, drain all nodes from the old nodepool, and then delete the old nodepool using exo compute sks nodepool delete --zone de-fra-1 my-test-cluster my-nodepool.
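A minimal sketch, assuming the exo compute sks nodepool add subcommand with a --size flag (check exo compute sks nodepool add --help for any additional required flags, such as the instance type):

$ exo compute sks nodepool add --zone de-fra-1 my-test-cluster my-new-nodepool --size 3

Once the new nodes are ready and the old nodes are drained, delete the old nodepool:

$ exo compute sks nodepool delete --zone de-fra-1 my-test-cluster my-nodepool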

Exoscale Academy

Are you interested in general rollout and update topics with Kubernetes? Take a look at the free SKS ADVANCED course in our online academy.