SKS lifecycle management
Control plane and nodepool versions and releases
Minor release support
Minor release support is aligned with the upstream project's version support lifecycle: a given version is supported for approximately 14 months following its release.
Patch release support
Exoscale supports the latest patch release (e.g. 1.22.X). If your cluster is facing an issue, Exoscale support may ask you to upgrade to the latest supported version.
Upgrade path
The following upgrade paths are supported:
- From a minor release to another supported minor release, always at its latest patch release
- From a patch release to the latest patch release of the same minor release
Nodepools will always get the latest template available matching their current minor release, at the latest patch release. For example, if you are running an older 1.22.2 release and 1.22.4 is available but you have not yet upgraded the cluster, cycling your nodepool will get the 1.22.4 template.
The lifecycle of components running inside a customer cluster (Calico, CoreDNS, Konnectivity, kube-proxy) is fully managed by Exoscale. These components can be upgraded as follows:
- Minor releases without breaking changes and CVE fix releases: at any time
- Major or minor releases with breaking changes: only during a cluster minor or patch release upgrade (triggered by the customer)
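A quick way to inspect these managed components on your side is to list the workloads in the kube-system namespace, assuming you have downloaded the cluster kubeconfig to a file named my-test-cluster.kubeconfig:
$ kubectl --kubeconfig my-test-cluster.kubeconfig get pods -n kube-system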
Auto upgrades
You can enable auto upgrades of the control plane components to the latest patch release available on your current Kubernetes minor release by setting the auto-upgrade parameter to true on your cluster. For example, you can either set the --auto-upgrade flag when you create a cluster using the CLI (with exo compute sks create), or enable it on an existing cluster with exo compute sks update.
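As a minimal sketch, enabling auto upgrades on an existing cluster could look as follows (cluster name and zone are reused from the examples below; --auto-upgrade is a boolean flag):
$ exo compute sks update --zone de-fra-1 my-test-cluster --auto-upgrade=true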
Auto upgrades work exactly like manually triggered upgrades.
Release schedule
The design goal for SKS is to make new versions available within 2 weeks of their upstream release. This is an objective, not a guarantee, and can vary depending on the importance of the release or any breaking changes associated with it.
Upgrading a cluster
Upgrading a cluster is an operation carried out in 2 steps:
- Upgrade the SKS control plane
- Cycle your Node Pool(s) to get the nodes running the latest version
Finding out your SKS control plane version
To list the currently running version, issue the following command:
$ exo compute sks show --zone de-fra-1 my-test-cluster
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ SKS CLUSTER │ │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51 │
│ Name │ my-test-cluster │
│ Description │ my-test-cluster │
│ Zone │ de-fra-1 │
│ Creation Date │ 2022-08-30 09:38:33 +0000 UTC │
│ Endpoint │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version │ 1.21.4 │
│ Service Level │ pro │
│ CNI │ calico │
│ Add-Ons │ exoscale-cloud-controller │
│ State │ running │
│ Nodepools │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
Here the running version is 1.21.4
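You can also query the version directly from the Kubernetes API, assuming you have downloaded the cluster kubeconfig to my-test-cluster.kubeconfig:
$ kubectl --kubeconfig my-test-cluster.kubeconfig version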
List available versions
$ exo compute sks versions
┼─────────┼
│ VERSION │
┼─────────┼
│ 1.22.1 │
┼─────────┼
The version 1.22.1 is the latest available.
Trigger the upgrade of the cluster control plane
$ exo compute sks upgrade my-test-cluster --zone de-fra-1 1.22.1
✔ Upgrading SKS cluster "my-test-cluster"... 1m57s
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ SKS CLUSTER │ │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID │ aaea400a-b9d3-458d-adc7-e1f0b2a2ed51 │
│ Name │ my-test-cluster │
│ Description │ my-test-cluster │
│ Zone │ de-fra-1 │
│ Creation Date │ 2022-08-30 09:38:33 +0000 UTC                                    │
│ Endpoint │ https://aaea400a-b9d3-458d-adc7-e1f0b2a2ed51.sks-de-fra-1.exo.io │
│ Version │ 1.22.1 │
│ Service Level │ pro │
│ CNI │ calico │
│ Add-Ons │ exoscale-cloud-controller │
│ State │ running │
│ Nodepools │ 2bc04c15-5996-46b2-acfb-1ac24aead8e2 | my-nodepool │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
However, the attached node pool still has nodes running the previous version:
$ kubectl --kubeconfig my-test-cluster.kubeconfig get node
NAME STATUS ROLES AGE VERSION
pool-ad590-fizsv Ready <none> 14m v1.21.4
pool-ad590-iyqua Ready <none> 14m v1.21.4
pool-ad590-tjxga Ready <none> 14m v1.21.4
Cycling a node pool
There are several ways to do a node pool upgrade.
- Scale the nodepool and drain old nodes
- Drain and destroy nodes one by one
- Create a new nodepool
Scale the nodepool and drain old nodes
The first solution is to scale your nodepool and then drain the old nodes. For example, you could scale your nodepool from 3 nodes to 6 machines:
exo compute sks nodepool scale --zone de-fra-1 my-test-cluster my-nodepool 6
Once the new nodes are available, you can use kubectl drain <node name> on your old nodes in order to remove pods from them. Once the nodes are drained, you can execute the command exo compute sks nodepool evict --zone de-fra-1 my-test-cluster my-nodepool <node name> (you can provide several nodes as arguments to the command).
Note
You don’t have to exactly double your number of nodes; you can upgrade your nodes in small batches if you prefer, as sketched below.
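As a sketch, the full sequence for one old node could look as follows (the node name is taken from the earlier output; the drain flags are the ones commonly required when DaemonSets and emptyDir volumes are present):
$ kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-fizsv --ignore-daemonsets --delete-emptydir-data
$ exo compute sks nodepool evict --zone de-fra-1 my-test-cluster my-nodepool pool-ad590-fizsv
Repeat for each remaining old node.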
Drain and destroy nodes one by one
Another solution is to drain one node (using kubectl drain <node name> again), remove it from your cluster (using kubectl delete node <node name>), and then destroy the node (using exo compute instance delete <node name>). A new machine will be automatically created to replace the destroyed one.
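A sketch of this sequence for a single node, reusing a node name from the earlier output:
$ kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-tjxga --ignore-daemonsets --delete-emptydir-data
$ kubectl --kubeconfig my-test-cluster.kubeconfig delete node pool-ad590-tjxga
$ exo compute instance delete --zone de-fra-1 pool-ad590-tjxga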
Create a new nodepool
You could also create a brand new nodepool, drain all nodes from the old one, and then delete the old one using exo compute sks nodepool delete --zone de-fra-1 my-test-cluster my-nodepool.
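A sketch of the whole procedure, assuming a new nodepool named my-new-nodepool sized like the old one (check exo compute sks nodepool add --help for the instance type and disk options your workloads need):
$ exo compute sks nodepool add --zone de-fra-1 my-test-cluster my-new-nodepool --size 3
$ kubectl --kubeconfig my-test-cluster.kubeconfig drain pool-ad590-fizsv --ignore-daemonsets --delete-emptydir-data
# ...repeat the drain for the remaining nodes of the old pool, then:
$ exo compute sks nodepool delete --zone de-fra-1 my-test-cluster my-nodepool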