Control plane and nodepool versions and releases

Minor release support

Kubernetes minor releases (ex 1.2X.0) are supported for 180 days after we publish them. Taking into account the recent release activity of the upstream Kubernetes project, this means we support the current and the previous minor release, since the usual release cycle for minor releases is roughly every 3 months.

Patch release support

Exoscale supports the latest patch release (ex 1.22.X). If your cluster is facing an issue, Exoscale support may ask you to upgrade to the latest supported version.

Upgrade path

The following upgrade paths are supported:

  • From a minor release to another supported minor release, always to its latest patch release
  • From a patch release to the latest patch release

Nodepools always receive the latest template available for their currently running minor release, which means the latest patch release of that minor. For example, if you are running an older 1.21.2 release and 1.21.4 is available but you have not yet upgraded the cluster, cycling your nodepool will pick up the 1.21.4 template.
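
To check which version and template a nodepool is currently running, you can inspect it with the CLI. A minimal sketch, assuming the exo sks nodepool show subcommand and the cluster and nodepool names used in the examples below:

exo --zone de-fra-1 sks nodepool show my-test-cluster my-nodepool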

The lifecycle of components running inside a customer cluster (Calico, CoreDNS, Konnectivity, kube-proxy) is fully managed by Exoscale. These components can be upgraded as follows:

  • Minor releases without breaking changes and CVE fix releases: at any time
  • Major or minor releases with breaking changes: only during a cluster minor or patch release upgrade (triggered by the customer)
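
To verify which versions of these components are currently deployed on a cluster, you can list the container images running in the kube-system namespace. A minimal sketch, assuming the managed components run in kube-system (their usual location) and reusing the kubeconfig from the examples below:

kubectl --kubeconfig my-test-cluster.kubeconfig -n kube-system get pods \
  -o custom-columns='NAME:.metadata.name,IMAGES:.spec.containers[*].image'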

Auto upgrades

You can enable auto upgrades of the control plane components to the latest patch release available on your current Kubernetes minor release by setting the auto-upgrade parameter to true on your cluster.

For example, you can either set the --auto-upgrade flag when you create a cluster using the CLI (with exo sks create), or enable it on an existing cluster with exo sks update.
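
The commands below sketch both cases. The cluster name is illustrative, and the exact flag syntax may vary between CLI versions:

# Enable auto upgrades when creating a cluster
exo --zone de-fra-1 sks create my-test-cluster --auto-upgrade

# Enable auto upgrades on an existing cluster
exo --zone de-fra-1 sks update my-test-cluster --auto-upgrade=true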

Auto upgrades work exactly like upgrades triggered manually by you.

Release schedule

The design goal for SKS is to make new versions available within 2 weeks of their upstream release. This is an objective rather than a guarantee, and the actual delay can vary depending on the importance of the release or any breaking changes associated with it.

Upgrading a cluster

Upgrading a cluster is an operation carried out in 2 steps:

  1. Upgrade the SKS control plane
  2. Cycle your Node Pool(s) to get the nodes running the latest version

Finding out your SKS control plane version

To show the currently running version, issue the following command:

exo --zone de-fra-1 sks show my-test-cluster


┼───────────────┼──────────────────────────────────────────────────────────────────┼
│  SKS CLUSTER  │                                                                  │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID            │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51                             │
│ Name          │ my-test-cluster                                                  │
│ Description   │ my-test-cluster                                                  │
│ Zone          │ de-fra-1                                                         │
│ Creation Date │ 2021-08-30 09:38:33 +0000 UTC                                    │
│ Endpoint      │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version       │ 1.21.4                                                           │
│ Service Level │ pro                                                              │
│ CNI           │ calico                                                           │
│ Add-Ons       │ exoscale-cloud-controller                                        │
│ State         │ running                                                          │
│ Nodepools     │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool               │
┼───────────────┼──────────────────────────────────────────────────────────────────┼

Here the running version is 1.21.4.

List available versions

exo sks versions


┼─────────┼
│ VERSION │
┼─────────┼
│ 1.22.1  │
┼─────────┼

The version 1.22.1 is the latest available.

Trigger the upgrade of the cluster control plane

exo --zone de-fra-1 sks upgrade my-test-cluster 1.22.1

 ✔ Upgrading SKS cluster "my-test-cluster"... 1m57s
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│  SKS CLUSTER  │                                                                  │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID            │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51                             │
│ Name          │ my-test-cluster                                                  │
│ Description   │ my-test-cluster                                                  │
│ Zone          │ de-fra-1                                                         │
│ Creation Date │ 2021-08-30 09:38:33 +0000 UTC                                    │
│ Endpoint      │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version       │ 1.22.1                                                           │
│ Service Level │ pro                                                              │
│ CNI           │ calico                                                           │
│ Add-Ons       │ exoscale-cloud-controller                                        │
│ State         │ running                                                          │
│ Nodepools     │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool               │
┼───────────────┼──────────────────────────────────────────────────────────────────┼

However, the attached node pool still has nodes running the previous version:

kubectl --kubeconfig my-test-cluster.kubeconfig get node

NAME               STATUS   ROLES    AGE   VERSION
pool-ad590-fizsv   Ready    <none>   14m   v1.21.4
pool-ad590-iyqua   Ready    <none>   14m   v1.21.4
pool-ad590-tjxga   Ready    <none>   14m   v1.21.4

Cycling a node pool

There are several ways to perform a node pool upgrade:

  1. Scale the nodepool and drain old nodes
  2. Drain and destroy nodes one by one
  3. Create a new nodepool

Scale the nodepool and drain old nodes

The first solution is to scale your nodepool and then drain nodes from it. For example, you could scale your nodepool from 3 nodes to 6 machines:

exo --zone de-fra-1 sks nodepool scale my-test-cluster my-nodepool 6

Once the new nodes are available, you can run kubectl drain <node name> on your old nodes to remove pods from them.

Once the nodes are drained, you can call exo --zone de-fra-1 sks nodepool evict my-test-cluster my-nodepool <node name>. You can pass several nodes to the evict command if you want to.
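
Using the node names from the example above, one iteration of this batch could look like the sketch below. The drain flags shown are the common ones for nodes running DaemonSets and pods with emptyDir volumes; adjust them to your workloads:

# Move pods off an old node
kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-fizsv \
  --ignore-daemonsets --delete-emptydir-data

# Remove the drained instance from the nodepool
exo --zone de-fra-1 sks nodepool evict my-test-cluster my-nodepool pool-ad590-fizsv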

Note

You don’t have to exactly double your number of nodes; you can also upgrade your nodes in smaller batches if you prefer.

Drain and destroy nodes one by one

Another solution is to drain one node (using kubectl drain <node name> again), remove it from your cluster (using kubectl delete node <node name>), and then destroy the node (using exo vm delete <node name>).

A new machine will be automatically created in order to replace the destroyed one.
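
Put together, replacing a single node could look like this sketch, reusing a node name from the example above:

# Move pods off the node
kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-fizsv --ignore-daemonsets

# Remove the node object from the cluster
kubectl --kubeconfig my-test-cluster.kubeconfig delete node pool-ad590-fizsv

# Destroy the underlying instance; the nodepool replaces it automatically
exo --zone de-fra-1 vm delete pool-ad590-fizsv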

Create a new nodepool

You could also create a brand new nodepool, drain all nodes from the old one, and then delete the old one using exo --zone de-fra-1 sks nodepool delete my-test-cluster my-nodepool.
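
A sketch of the full sequence, assuming the exo sks nodepool add subcommand; the new nodepool name and size are illustrative:

# Create a replacement nodepool on the upgraded cluster
exo --zone de-fra-1 sks nodepool add my-test-cluster my-new-nodepool --size 3

# Once all nodes of the old pool are drained, remove it
exo --zone de-fra-1 sks nodepool delete my-test-cluster my-nodepool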