Mountpoint S3 CSI Driver
The Mountpoint for Amazon S3 CSI driver lets you mount S3-compatible object storage buckets as volumes in Kubernetes. Since Exoscale Simple Object Storage (SOS) exposes an S3-compatible API, you can use this driver to mount SOS buckets in your SKS clusters. This is ideal for read-heavy workloads such as data lakes or machine learning training data.
Prerequisites
- An SKS cluster running in the ch-gva-2 zone (or adapt the endpoint for your zone)
- kubectl configured to access your cluster
- Helm installed
- Exoscale CLI installed and configured
- An existing SOS bucket
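If you do not have a bucket yet, you can create one with the Exoscale CLI. The bucket name and zone below are placeholders; check exo storage --help for the exact syntax of your CLI version:
# Create a bucket in the ch-gva-2 zone (adapt name and zone)
exo storage mb sos://my-bucket --zone ch-gva-2
# List your buckets to confirm it exists
exo storage list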
Install the Mountpoint S3 CSI Driver
- Add the Helm Repository
helm repo add aws-mountpoint-s3-csi-driver https://awslabs.github.io/mountpoint-s3-csi-driver
helm repo update
- Create the namespace
kubectl create namespace mountpoint-s3
- Create an IAM Role for SOS Access
The API key used by the CSI driver needs an IAM Role with a policy that grants access to your SOS bucket.
Create the role with a policy allowing full access to your bucket:
exo iam role create mountpoint-s3-role \
  --policy '{
  "default-service-strategy": "deny",
  "services": {
    "sos": {
      "type": "rules",
      "rules": [
        {
          "expression": "parameters.bucket == '\''my-bucket'\''",
          "action": "allow"
        }
      ]
    }
  }
}'
Note
Replace my-bucket with your actual SOS bucket name.
- Create an API key with this role
exo iam api-key create mountpoint-s3-key mountpoint-s3-role
Save the key ID and secret from the output - you’ll need them in the next step.
For more fine-grained access control (read-only, prefix restrictions, etc.), see the IAM Policy Guide.
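As an illustration of a slightly broader policy, a single role can also allow several buckets by widening the expression. This sketch assumes the policy expression language accepts list membership with in; the bucket names are placeholders:
exo iam role create mountpoint-s3-multi-role \
  --policy '{
  "default-service-strategy": "deny",
  "services": {
    "sos": {
      "type": "rules",
      "rules": [
        {
          "expression": "parameters.bucket in ['\''bucket-a'\'', '\''bucket-b'\'']",
          "action": "allow"
        }
      ]
    }
  }
}'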
- Create the Credentials Secret
Create a Kubernetes secret containing your Exoscale API credentials:
kubectl create secret generic exoscale-sos-credentials \
  --namespace mountpoint-s3 \
  --from-literal=key_id='EXOxxxxxxxxxxxxxxxxxxxxxxxx' \
  --from-literal=access_key='your-api-secret-here'
Note
Replace the values with your actual Exoscale API key ID and secret from the previous step.
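If you manage your cluster declaratively, the same secret can be written as a manifest instead of using the imperative command above; the values shown are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: exoscale-sos-credentials
  namespace: mountpoint-s3
type: Opaque
stringData:
  key_id: EXOxxxxxxxxxxxxxxxxxxxxxxxx
  access_key: your-api-secret-here
Avoid committing real credentials to a repository in plain text.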
- Install the Driver
The credentials secret must exist before installing the driver, as credentials are only read at startup.
helm upgrade --install aws-mountpoint-s3-csi-driver \
  aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver \
  --namespace mountpoint-s3 \
  --set awsAccessSecret.name=exoscale-sos-credentials
Note
If you need to update the credentials later, restart the driver pods: kubectl rollout restart daemonset -n mountpoint-s3 s3-csi-node
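If you keep your Helm configuration in version control, the same setting can live in a values file instead of a --set flag. A minimal sketch, using an arbitrary file name:
# mountpoint-s3-values.yaml
awsAccessSecret:
  name: exoscale-sos-credentials
Then install with:
helm upgrade --install aws-mountpoint-s3-csi-driver \
  aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver \
  --namespace mountpoint-s3 \
  -f mountpoint-s3-values.yaml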
Verify the driver is running:
kubectl get pods -n mountpoint-s3
You should see one pod per node in the Running state (the driver runs as a DaemonSet):
NAME                                 READY   STATUS    RESTARTS   AGE
s3-csi-controller-5dbf7f4db5-vbxnm   1/1     Running   0          2m36s
s3-csi-node-6qmd5                    3/3     Running   0          2m36s
s3-csi-node-8hwxm                    3/3     Running   0          2m36s
s3-csi-node-l6wxw                    3/3     Running   0          2m36s
s3-csi-node-rsk84                    3/3     Running   0          2m36s
s3-csi-node-sc7zq                    3/3     Running   0          2m36s
Create a PersistentVolume
The Mountpoint S3 CSI driver only supports static provisioning. This means you must create a PersistentVolume (PV) that references an existing SOS bucket.
- Create a file named sos-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sos-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
    - region ch-gva-2
    - endpoint-url https://sos-ch-gva-2.exo.io
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: my-bucket
Note
Replace my-bucket with your actual SOS bucket name and adapt the zone in region and endpoint-url to match your bucket’s zone. The capacity field is informational only as S3 buckets have no fixed size.
- Apply the manifest
kubectl apply -f sos-pv.yaml
Mount Options
Key mount options for SOS:
| Option | Description |
|---|---|
| endpoint-url https://sos-<zone>.exo.io | Required. The SOS endpoint for your zone. |
| region <zone> | Required. Prevents AWS SDK region auto-detection errors. |
| allow-delete | Optional. Allows deleting files from the mounted bucket. |
| prefix <path>/ | Optional. Mount only a specific prefix (directory) within the bucket. |
The SOS endpoint follows the pattern https://sos-<zone>.exo.io where <zone> is your Exoscale zone (e.g., ch-gva-2, ch-dk-2, de-fra-1, de-muc-1, at-vie-1, at-vie-2, etc.).
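As an example of combining these options, the PV below mounts a bucket read-only for a pod whose container runs as a non-root user. The name sos-pv-readonly is illustrative; read-only, uid, gid and allow-other are standard Mountpoint options, but verify that the driver version you installed passes them through, and pair this PV with a PVC that requests ReadOnlyMany:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sos-pv-readonly
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadOnlyMany
  mountOptions:
    - region ch-gva-2
    - endpoint-url https://sos-ch-gva-2.exo.io
    - read-only          # reject all writes at the mount level
    - uid 1000           # present objects as owned by this non-root UID
    - gid 1000
    - allow-other        # let users other than the mounting user access the mount
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-readonly
    volumeAttributes:
      bucketName: my-bucket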
Create a PersistentVolumeClaim
- Create a file named sos-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sos-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: sos-pv
- Apply the manifest
kubectl apply -f sos-pvc.yaml
Verify the PVC is bound:
kubectl get pvc sos-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
sos-pvc   Bound    sos-pv   100Gi      RWX                           <unset>                 47s
Use the Volume in a Pod
- Create a file named sos-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sos-app
spec:
  containers:
    - name: app
      image: alpine
      command: ["/bin/sh", "-c", "while true; do ls -la /data; sleep 30; done"]
      volumeMounts:
        - name: sos-volume
          mountPath: /data
  volumes:
    - name: sos-volume
      persistentVolumeClaim:
        claimName: sos-pvc
- Apply the manifest
kubectl apply -f sos-pod.yaml
- Verify the pod is running and can access the bucket
kubectl logs sos-app
You should see the contents of your SOS bucket listed.
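To confirm that writes reach the bucket as well, create an object from inside the pod and read it back; the file name is arbitrary:
kubectl exec sos-app -- sh -c 'echo "hello from SKS" > /data/hello.txt'
kubectl exec sos-app -- cat /data/hello.txt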
Advanced: Using Prefixes for Workload Isolation
You can mount multiple PersistentVolumes from a single SOS bucket by using different prefixes.
Each prefix acts as an isolated “directory” within the bucket.
Example: Two Workloads Sharing One Bucket
Create PVs for two different applications:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sos-pv-app1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
    - region ch-gva-2
    - endpoint-url https://sos-ch-gva-2.exo.io
    - prefix app1/data/
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-app1
    volumeAttributes:
      bucketName: shared-bucket
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sos-pv-app2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
    - region ch-gva-2
    - endpoint-url https://sos-ch-gva-2.exo.io
    - prefix app2/data/
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-app2
    volumeAttributes:
      bucketName: shared-bucket
With this configuration:
- sos-pv-app1 only sees objects under app1/data/ in the bucket
- sos-pv-app2 only sees objects under app2/data/ in the bucket
- Objects are isolated between workloads while sharing the same underlying bucket
Note
The prefix must end with a trailing slash (e.g., app1/data/).
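Each of these PVs is then claimed by its own PVC, exactly as in the single-volume case. A sketch for the first application (the claim name is illustrative; the second application is analogous, with volumeName set to sos-pv-app2):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sos-pvc-app1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: sos-pv-app1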
Limitations
Static Provisioning Only
The Mountpoint S3 CSI driver does not support dynamic provisioning. You cannot use a StorageClass to automatically create PVs or buckets. Each PV must be manually created and mapped to an existing bucket and optionally a key prefix.
Object Storage Semantics
SOS is an object store, not a POSIX filesystem, so some operations behave differently than they would on a local filesystem:
- No in-place modifications: Existing files cannot be modified, only overwritten
- No directory rename: Directories cannot be renamed
- No symbolic links: Symlinks are not supported
- No file locking: Concurrent writes may cause conflicts
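You can observe these semantics directly from the sos-app pod used earlier; the file name is arbitrary:
# Creating a new object works
kubectl exec sos-app -- sh -c 'echo "v1" > /data/semantics-test.txt'
# Appending to an existing object fails, since in-place modification is not supported
kubectl exec sos-app -- sh -c 'echo "more" >> /data/semantics-test.txt' || echo "append rejected as expected"
# Deleting works here because the PV was mounted with the allow-delete option
kubectl exec sos-app -- rm /data/semantics-test.txt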
For detailed differences, please refer to the Mountpoint documentation.