How to install Longhorn on Exoscale SKS
Longhorn is a straightforward-to-use storage solution for Kubernetes. It uses the local storage of all (or certain specified) instances and makes it highly available as block storage. Kubernetes Pods can then automatically request storage (called volumes) from Longhorn. Volumes can also be automatically backed up to Exoscale Object Storage.
As a prerequisite for the following documentation, you need:
- An Exoscale SKS cluster (version > 1.20.3)
  - Make sure to use the parameter --kubernetes-version "1.20.5" when creating a cluster
- Access to your cluster via kubectl
- Basic Linux knowledge
If you don’t have access to a SKS cluster yet, follow the Quick Start Guide.
Additionally, for using the backup to S3/Exoscale SOS functionality, you will have to:
- Create an Object Storage bucket, either via the portal or the CLI. You can refer to the Object Storage documentation.
- Create an IAM key allowing access to the created Object Storage bucket, either via the portal or the CLI. You can refer to the IAM documentation.
This guide only provides a quick start with Longhorn storage. Longhorn has many configurable options which must be considered for production environments.
If your cluster is freshly created, wait until all nodes are ready. You can verify this with
kubectl get nodes
Deploy Longhorn using this command (v1.1.0 is a stable release from 18 December 2020):
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.0/deploy/longhorn.yaml
Wait until Longhorn is fully deployed. This will take roughly 3-5 minutes.
You can check the progress with (CTRL+C to exit):
kubectl get pods \
  --namespace longhorn-system \
  --watch
When all pods from the namespace longhorn-system have the status “Running”, Longhorn is fully ready to use.
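Instead of watching the pod list manually, you can also block until all pods are ready. A minimal sketch, assuming a 5-minute timeout is sufficient for your cluster:

```shell
# Wait until every pod in the longhorn-system namespace reports the Ready condition
# (fails after 300 seconds if the deployment is still in progress)
kubectl wait --for=condition=ready pod \
  --all \
  --namespace longhorn-system \
  --timeout=300s
```

The command returns as soon as the last pod becomes ready, so it can also be used in automation scripts.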
Accessing the Longhorn UI
The WebUI provides a way to view information about the cluster (like storage usage). It also enables further configuration of volumes.
The UI is only available from the internal Kubernetes network. As such, you must use port-forwarding to access it:
kubectl port-forward deployment/longhorn-ui 7000:8000 -n longhorn-system
This will open up the local port 7000 and connect it directly to port 8000 of the dashboard container.
Then use this URL to access the interface: http://localhost:7000
Using Longhorn volumes in Kubernetes
Creating a Persistent Volume Claim
To use storage, you need a PersistentVolumeClaim (PVC), which claims storage. It is comparable to a Pod; however, a Pod claims computing resources like CPU and RAM instead of storage.
This YAML manifest defines such a PVC, which requests 2 Gi of storage. The default storage class (longhorn) also replicates the volume 3 times on different nodes in the cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
This will create a volume with an ext4 filesystem (default). Consider this Block-Volume example if your application/database supports RAW devices.
Additionally, use a custom storage class if you want to modify parameters like the replication count, or if you want to enable scheduled snapshots or backups.
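As an illustration, such a custom storage class could look like the following sketch. The class name longhorn-custom and the parameter values are assumptions for this example; numberOfReplicas and staleReplicaTimeout are parameters provided by Longhorn's provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-custom   # hypothetical name for this example
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"        # replicate each volume twice instead of the default 3 times
  staleReplicaTimeout: "2880"  # minutes before a faulty replica is cleaned up
```

A PVC would then select this class by setting storageClassName: longhorn-custom instead of longhorn.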
Linking the Persistent Volume Claim with a Pod
A Pod in the Kubernetes world consists of one or more containers. Such a Pod can mount a PVC by referring to its name (in this case example-pvc).
The following example creates a pod using an Ubuntu container image, and mounts the storage defined above at /data.
apiVersion: v1
kind: Pod
metadata:
  name: pod-test
spec:
  containers:
    - name: container-test
      image: ubuntu
      imagePullPolicy: IfNotPresent
      command:
        - "sleep"
        - "604800"
      volumeMounts:
        - name: volv
          mountPath: /data
  volumes:
    - name: volv
      persistentVolumeClaim:
        claimName: example-pvc
You can save each configuration into a YAML file and apply it with kubectl apply -f yourfilename.yaml.
Check the status of your pod and PVC with kubectl get pod and kubectl get pvc.
The Ubuntu container will sleep for a long time (604800 seconds, i.e. one week), so you can run commands inside it and check its status.
Soon after deploying the example code, you can see in the Longhorn UI that a volume was created when you click on the tab “Volume”. You can also see that the created volume is attached to the Ubuntu pod and replicated 3 times.
To test your storage, you can attach your console to the shell of the pod by executing kubectl exec -it pod-test -- /bin/bash and then write something into the /data directory.
Even when you delete/kill the pod (e.g. via kubectl delete pod pod-test), you can reapply the Pod via kubectl to see that the data written into /data is preserved.
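This persistence check can be sketched as a short shell session. The file name test.txt and the manifest file name pod.yaml are assumptions; substitute the name you saved the Pod manifest under:

```shell
# Write a file into the Longhorn-backed volume
kubectl exec pod-test -- sh -c 'echo "hello longhorn" > /data/test.txt'

# Delete the pod; the volume (and its data) stays behind
kubectl delete pod pod-test

# Recreate the pod from the same manifest (assumed to be saved as pod.yaml)
kubectl apply -f pod.yaml

# The file written before the deletion is still there
kubectl exec pod-test -- cat /data/test.txt
```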
Configuring Backup to Exoscale SOS
Longhorn has the ability to back up your volumes to S3/Exoscale Object Storage. This can be done either manually or on a schedule. It will also automatically detect backups that are already present.
Configuring the access secret
You need the following data (check the links in the Prerequisites chapter):
- An IAM key/secret pair that provides access to this bucket.
- The S3 endpoint of the zone of your bucket. It has the format https://sos-ZONE.exo.io; take care to substitute ZONE with the proper zone.
You can find all zones on our website and with the CLI:
exo zones list
Convert the Zone-URL and the IAM-Pair to base64 (replace DATA in the commands below).
- On Linux/Mac:
echo -n "DATA" | base64
- On Windows (inside PowerShell):
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("DATA"))
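For example, encoding the endpoint of the at-vie-1 zone on Linux/Mac produces the Base64 value that appears in the example manifest:

```shell
# Base64-encode the S3 endpoint of the at-vie-1 zone
echo -n "https://sos-at-vie-1.exo.io" | base64
# aHR0cHM6Ly9zb3MtYXQtdmllLTEuZXhvLmlv
```

Note that the -n flag is important: without it, echo appends a newline that would become part of the encoded value.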
Write the following manifest into a new .yaml file, replacing the 3 values in the fields of the data section accordingly.
apiVersion: v1
kind: Secret
metadata:
  name: exoscale-sos-secret
  namespace: longhorn-system
type: Opaque
data:
  AWS_ENDPOINTS: aHR0cHM6Ly9zb3MtYXQtdmllLTEuZXhvLmlv # zone endpoint in base64 (https://sos-at-vie-1.exo.io)
  AWS_ACCESS_KEY_ID: YourBase64AccessKey # access key for the bucket in base64
  AWS_SECRET_ACCESS_KEY: YourBase64SecretKey # secret key for the bucket in base64
Apply the manifest via kubectl apply -f FILENAME.yaml. It will create a new secret named exoscale-sos-secret in the cluster, which provides Longhorn access to the bucket.
Configuring the Backup Destination in Longhorn
Open up the Longhorn UI and go to the Tab Settings / General. Scroll down to the section Backup.
Fill out Backup Target using this format:
s3://BUCKETNAME@ZONE/ - e.g.
s3://my-cool-bucket@at-vie-1/ for a bucket in Vienna.
Write the following string, which is the name of the created secret, into Backup Target Credential Secret: exoscale-sos-secret
Scroll down and click Save to apply your configuration.
Then browse to the Volume tab; if the site does not show any error, the connection is successful.
If you have already created a volume, you can test backing it up manually by clicking on the respective volume in the Volume tab and then on Create Backup.