Overview

Warning

We consider the CSI to be in Beta phase. Although it reliably performs its essential functions, missing features and bugs are to be expected.

To create PersistentVolumes (PVs) in SKS, we recommend using the Exoscale CSI Driver addon. CSI stands for “Container Storage Interface”; it is the component Kubernetes needs to provision block storage volumes and manage them as PVs.

The CSI Driver is open source and can be installed manually, but we strongly recommend that all users rely on our SKS addon, which will automatically install and manage the CSI for them.

Enabling persistent storage on an SKS cluster

Warning

Enabling the CSI will only succeed in zones where Block Storage is available.

From the CLI

exo compute sks create \
      --zone ch-gva-2 \
      --nodepool-size <2 or more> \
      --exoscale-csi \
      <cluster name>

From the portal

Navigate to “COMPUTE” > “SKS” > “ADD” and tick the “exoscale-container-storage-interface” option.

Through Terraform

Persistent storage can be enabled through the exoscale_csi property of the exoscale_sks_cluster resource.

Using the CSI

Default StorageClass

If you enable the CSI addon through any of the methods above, a default StorageClass named exoscale-sbs will be created in your cluster. Note that its reclaimPolicy is set to Delete, which means that if you delete a PersistentVolumeClaim (PVC), the backing block storage volume is deleted automatically. If this is not the behaviour you expect, you will need to create and use your own StorageClass with reclaimPolicy set to Retain.
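
If you need a class that retains volumes, a minimal sketch could look like the following. The class name exoscale-sbs-retain is hypothetical, and the provisioner is assumed to be csi.exoscale.com (the driver name used in the PersistentVolume example further below); check the default class, for example with kubectl get storageclass exoscale-sbs -o yaml, for any additional parameters you may need to copy.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: exoscale-sbs-retain   # hypothetical name
provisioner: csi.exoscale.com # assumed to match the installed CSI driver
reclaimPolicy: Retain         # keep the block storage volume when the PVC is deleted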

Creating Volumes

To create a PVC that is backed by a block storage volume, you may now apply a manifest similar to the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: exoscale-sbs

Only sizes that are exact multiples of 1 GiB are permitted, and the requested size must be within the limits of Block Storage.
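
Assuming the manifest above is saved as my-pvc.yaml (a hypothetical file name), you can apply it and check the status of the claim with kubectl:

kubectl apply -f my-pvc.yaml
kubectl get pvc -n my-namespace my-pvc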

Taking Snapshots

The following will create a Block Storage Snapshot of the volume created above:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
  namespace: my-namespace
spec:
  volumeSnapshotClassName: exoscale-snapshot
  source:
    persistentVolumeClaimName: my-pvc
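
You can then check the state of the snapshot and wait until it is reported as ready to use:

kubectl get volumesnapshot -n my-namespace my-snapshot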

Restoring from a Snapshot

It’s possible to create a new volume from the snapshot created above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-from-snapshot
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <volume size>Gi
  dataSource:
    name: my-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: exoscale-sbs
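
The requested storage should be at least as large as the snapshot’s source volume. If in doubt, you can read the size from the snapshot status (restoreSize is part of the VolumeSnapshot API):

kubectl get volumesnapshot -n my-namespace my-snapshot -o jsonpath='{.status.restoreSize}'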

Using an existing Block Storage Volume

If you would like to use an already existing Block Storage volume in your cluster, you will need its UUID, size, and zone. You can retrieve this information with the CLI:

exo compute block-storage list --zone ch-gva-2

Or if you know the name of the volume:

exo compute block-storage show --zone ch-gva-2 <volume name>

Now apply a manifest with the information you found:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-existing-volume-pv
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: exoscale-sbs
  capacity:
    storage: <volume size>Gi
  csi:
    driver: csi.exoscale.com
    volumeHandle: <zone>/<volume UUID>

You may now create a PVC that references this PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: my-namespace
  name: my-pvc
spec:
  storageClassName: exoscale-sbs
  volumeName: my-existing-volume-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <volume size>Gi
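
After applying both manifests, you can inspect the pre-existing volume’s PV and the claim referencing it:

kubectl get pv my-existing-volume-pv
kubectl get pvc -n my-namespace my-pvc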

Limitations

Can I disable the CSI addon?

Unfortunately no, we don’t support disabling SKS addons.

Volumes can be attached to only one node

Kubernetes Deployments were designed for stateless applications. The issue described below can be avoided altogether by using StatefulSets for pods that need a PersistentVolume.

However, if you do create a Deployment with PVCs, moving it from one node to another will require downtime for that Deployment. You may encounter this if a Deployment’s pod is running on node-A and a change requires the pod to be scheduled on node-B instead. By default Kubernetes follows the RollingUpdate strategy: it first creates a new pod on node-B and waits until it is ready before terminating the pod on node-A. Unfortunately, the new pod will stay in the ContainerCreating state indefinitely, because the volume is still attached to node-A and cannot be attached to both nodes at once. If you find yourself in this situation, consider changing the strategy to Recreate: the old pod is terminated first, which allows the volume to be detached and reattached to node-B.
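
As a sketch, such a Deployment (names and image are hypothetical; the claim is the one created earlier) could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
  namespace: my-namespace
spec:
  replicas: 1
  strategy:
    type: Recreate        # terminate the old pod before creating the replacement
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx      # example image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc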

This limitation is not unique to Exoscale.

Online resizing of PersistentVolumes is not supported

Attached volumes cannot be resized. To resize a volume, you need to detach it from the node first.

Resizing Persistent Volume Claims (PVCs) in Kubernetes

Here’s how to expand an Exoscale Block Storage PVC:

  1. Ensure the PVC is unused: Make sure all Deployments using the PVC have been deleted and no longer reference it.

  2. Update PVC Manifest (Expansion Only): Edit your PVC manifest to specify the new, larger size. Downsizing is not supported.

Here’s an example manifest snippet with the updated size:

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi # Update the value here (e.g., for 200 GiB)

  3. Re-apply the Manifest: Apply the updated manifest with the increased size.

  4. Resume PVC Usage: You can now use the resized PVC in new deployments.

Important Note: The actual size increase for the PVC only takes effect when you use it in a new deployment.
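
As an alternative to editing and re-applying the manifest, you can patch the claim directly; the command below is illustrative, so adjust the namespace, claim name, and size to your setup:

kubectl patch pvc my-pvc -n my-namespace -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'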

Self-managed installation

If you wish to set up and manage the CSI yourself, please follow these instructions.