Persistent Volume

We recommend using the Exoscale CSI Driver add-on to create Persistent Volumes (PVs). CSI (short for Container Storage Interface) is a critical component that enables Kubernetes to provision block storage volumes and manage them as PVs.

NOTE
The source code for the Exoscale CSI Driver is open source and can be installed manually. However, we strongly encourage all users to take advantage of our CSI add-on, which will automatically install and manage the CSI Driver for you.

Enabling Persistent Storage

This section gives an overview of enabling persistent storage via the Exoscale Container Storage Interface (CSI). You can activate CSI when creating or updating an SKS cluster, through the Exoscale CLI, the Exoscale Portal, or Terraform. Once enabled, CSI provides a seamless way to deploy and manage volumes within your cluster, giving your workloads reliable, persistent data storage.

WARNING
Enabling the CSI will only succeed in zones where Block Storage is available.

From the CLI

When creating a new cluster:

$ exo compute sks create \
      --zone ch-gva-2 \
      --nodepool-size <2 or more> \
      --exoscale-csi \
      <cluster name>

On an existing cluster:

$ exo compute sks update \
      --zone ch-gva-2 \
      --enable-csi-addon \
      <cluster name>

From the Portal

To enable the CSI on a new cluster, navigate to “COMPUTE” > “SKS” > “ADD” and tick the “exoscale-container-storage-interface” option.

To enable the CSI on an existing cluster, navigate to “COMPUTE” > “SKS” > Select the cluster you would like to update > click the “…” on the top right > “Update Cluster” and tick the “exoscale-container-storage-interface” option.

Through Terraform

Persistent storage can be enabled through the exoscale_csi property of the exoscale_sks_cluster resource.
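
As a minimal sketch (the resource name and cluster settings below are illustrative, and the exact arguments depend on your provider version):

resource "exoscale_sks_cluster" "my_cluster" {
  zone = "ch-gva-2"
  name = "my-cluster"

  # Enables the CSI add-on (see the provider documentation for the
  # full list of supported arguments)
  exoscale_csi = true
}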

Using the CSI

Storage Classes

If you enable the CSI add-on through any of the methods above, a StorageClass named exoscale-sbs will be created in your cluster. Note that its reclaimPolicy is set to Delete, which means that if you delete a PersistentVolumeClaim (PVC), the backing Block Storage volume is deleted automatically. If this is not the behavior you want, create and use your own StorageClass with reclaimPolicy set to Retain.
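
As a sketch, such a custom StorageClass could look like this (the class name is only an example; the provisioner matches the driver name csi.exoscale.com used later in this document):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: exoscale-sbs-retain # example name for a custom class
provisioner: csi.exoscale.com
reclaimPolicy: Retain

You can inspect the default class with kubectl get storageclass exoscale-sbs -o yaml and copy any additional fields (such as volumeBindingMode or parameters) into your own class.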

Creating Volumes

To create a PVC that is backed by a block storage volume you may now apply a manifest similar to the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: exoscale-sbs

Only sizes that are exact multiples of 1 GiB are permitted, and the requested size must be within the limits of Block Storage.
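
For example, assuming the manifest above is saved as my-pvc.yaml (a filename chosen here for illustration), you can apply it and verify that the claim becomes Bound:

$ kubectl apply -f my-pvc.yaml
$ kubectl get pvc my-pvc --namespace my-namespace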

Taking Snapshots

The following will create a Block Storage Snapshot of the volume created above:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
  namespace: my-namespace
spec:
  volumeSnapshotClassName: exoscale-snapshot
  source:
    persistentVolumeClaimName: my-pvc
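
Assuming this manifest is saved as my-snapshot.yaml (an illustrative filename), you can apply it and watch the READYTOUSE column to see when the snapshot is ready:

$ kubectl apply -f my-snapshot.yaml
$ kubectl get volumesnapshot my-snapshot --namespace my-namespace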

Restoring from a Snapshot

It’s possible to create a new volume from the snapshot created above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-from-snapshot
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <volume size>Gi
  dataSource:
    name: my-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: exoscale-sbs

Using an existing Block Storage Volume

If you would like to use an existing Block Storage volume in your cluster, you need to find out its UUID, size, and zone. With the CLI, you can retrieve this information as follows:

$ exo compute block-storage list --zone ch-gva-2

Or if you know the name of the volume:

$ exo compute block-storage show --zone ch-gva-2 <volume name>

Now apply a manifest with the information you found:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-existing-volume-pv
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: exoscale-sbs
  capacity:
    storage: <volume size>Gi
  csi:
    driver: csi.exoscale.com
    volumeHandle: <zone>/<volume UUID>

You may now create a PVC that references this PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: my-namespace
  name: my-pvc
spec:
  storageClassName: exoscale-sbs
  volumeName: my-existing-volume-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <volume size>Gi

Limitations

Can I Disable the CSI Add-on?

No. We do not support disabling SKS add-ons after they have been enabled.

Volumes Can Be Attached to Only One Node

A Block Storage volume can be attached to only one node at a time. Because Kubernetes Deployments are designed for stateless workloads, this limitation creates issues for Deployments that need persistent storage. To address this, consider using StatefulSets for pods requiring a PersistentVolume.
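
For illustration, a StatefulSet can request its storage through volumeClaimTemplates, so that each replica gets its own PVC backed by Block Storage (all names and the container image below are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
  namespace: my-namespace
spec:
  serviceName: my-statefulset
  replicas: 1
  selector:
    matchLabels:
      app: my-statefulset
  template:
    metadata:
      labels:
        app: my-statefulset
    spec:
      containers:
      - name: app
        image: nginx # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: exoscale-sbs
      resources:
        requests:
          storage: 10Gi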

If you use a Deployment with PVCs (short for PersistentVolumeClaim), moving workloads between nodes will require downtime. For example, if your Deployment runs a single pod on node-A and you need to relocate it to node-B, Kubernetes attempts a RollingUpdate by creating a new pod on node-B and waiting for it to become ready before removing the old pod on node-A. However, the new pod remains stuck in the ContainerCreating state, because the volume is still attached to node-A and cannot be attached to both nodes at once.

To resolve this, consider using the Recreate strategy instead. This approach terminates the existing pod first, detaching the volume before creating a new pod on the target node.
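
As a sketch, such a Deployment mounting the PVC created earlier would set the strategy as follows (names and the container image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  strategy:
    type: Recreate # terminate the old pod before starting the new one
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc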

NOTE
This limitation is not unique to Exoscale.

Online Resizing of PersistentVolumes Is Not Supported

Attached volumes cannot be resized while in use. To change a volume’s size, you must first detach it from the node.

Resizing PersistentVolumeClaims (PVCs) in Kubernetes

Follow these steps to expand an Exoscale Block Storage PVC:

  1. Ensure the PVC is unused:
    Remove or delete all Deployments referencing the PVC.

  2. Update the PVC manifest (Expansion Only):
    Edit the PVC to set the new, larger size. Downsizing is not supported.

Here’s an example manifest snippet with the updated size:

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi # Update the value here (e.g., for 200 GiB)

  3. Reapply the Manifest:
    Apply the updated manifest containing the increased size.

  4. Resume PVC Usage:
    Use the resized PVC in new deployments.

NOTE
The size increase only takes effect after the PVC is used in a new deployment.

Self-managed installation

If you wish to set up and manage the CSI yourself, please follow these instructions.