Longhorn is a straightforward storage solution for Kubernetes. It uses the local storage of all (or certain specified) instances and makes it highly available as block storage. Kubernetes Pods can then automatically derive storage (called volumes) from Longhorn. Volumes can also be automatically backed up to Exoscale Object Storage.

Prerequisites

As a prerequisite for the following documentation, you need:

  • An Exoscale SKS cluster (version > 1.20.3)
  • Access to your Cluster via kubectl
  • Basic Linux knowledge

If you don’t have access to an SKS cluster yet, follow the Quick Start Guide.

To use the backup to S3/Exoscale SOS functionality, you will additionally need to:

  • Create an Object Storage bucket, either via the portal or the CLI. You can refer to the Object Storage documentation.
  • Create an IAM key allowing access to the created Object Storage bucket, either via the portal or the CLI. You can refer to the IAM documentation.

Note

This guide provides only a quick start into Longhorn storage. Longhorn has many configurable options that must be considered for production environments.

Deploying Longhorn

If your cluster is freshly created, wait until all nodes are ready. You can verify this with kubectl get nodes.

Get the link to the current Longhorn manifest from the Longhorn Docs. Then apply the manifest as follows, replacing VERSION with the release you want to install:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/VERSION/deploy/longhorn.yaml

Wait until Longhorn is fully deployed. This will take roughly 3-5 minutes.

You can check the progress with (CTRL+C to exit):

kubectl get pods \
--namespace longhorn-system \
--watch

When all pods in the namespace longhorn-system have the status “Running”, Longhorn is fully ready to use.
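
If you prefer a one-shot check that blocks until everything is up, you can use kubectl wait instead; a minimal sketch (the timeout value is an arbitrary choice):

kubectl wait pods --all \
--namespace longhorn-system \
--for=condition=Ready \
--timeout=300s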

Accessing the Longhorn UI

The WebUI provides a way to view information about the cluster (like storage usage). It also enables further configuration of volumes.

The UI is only available from the internal Kubernetes network. As such, you must use port-forwarding to access it:

kubectl port-forward deployment/longhorn-ui 7000:8000 -n longhorn-system

This will open up the local port 7000 and connect it directly to port 8000 of the dashboard container.
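
Alternatively, you can forward the UI service instead of the deployment. This sketch assumes the service name longhorn-frontend from the standard Longhorn manifest, whose service port is 80:

kubectl port-forward service/longhorn-frontend 7000:80 -n longhorn-system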

Then use this URL to access the interface: http://127.0.0.1:7000

Using Longhorn volumes in Kubernetes

Creating a Persistent Volume Claim

To use storage, you need a PersistentVolumeClaim (PVC). It is comparable to a Pod: where a Pod claims computing resources like CPU and RAM, a PVC claims storage.

This YAML manifest defines such a PVC, requesting 2 GiB of storage. With the default storage class (longhorn), the volume is replicated 3 times on different nodes in the cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi

Note

This will create a volume with an ext4 filesystem (default). Consider this Block-Volume example if your application/database supports raw devices.
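
As an illustration, here is a minimal sketch of such a block-mode claim. It uses the standard Kubernetes volumeMode: Block field; the claim name is an arbitrary choice:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block # expose a raw block device instead of an ext4 filesystem
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi

A Pod consuming such a claim references it under volumeDevices (with a devicePath) instead of volumeMounts.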

Note

Additionally, use a custom storage class if you want to modify parameters like the replica count, or if you want to enable scheduled snapshots or backups; see the sketch below.
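
For example, a minimal sketch of a custom storage class with a reduced replica count. It assumes Longhorn's driver.longhorn.io provisioner and its numberOfReplicas and staleReplicaTimeout parameters; the class name is an arbitrary choice:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-2-replicas
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2" # keep 2 replicas instead of the default 3
  staleReplicaTimeout: "2880" # minutes before a failed replica is cleaned up

A PVC selects this class via storageClassName: longhorn-2-replicas.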

Linking the Persistent Volume Claim with a Pod

A Pod in the Kubernetes world consists of one or more containers. Such a Pod can mount a PVC by referring to its name (in this case example-pvc).

The following example creates a Pod using an Ubuntu container image, and mounts the storage defined above at /data.

apiVersion: v1
kind: Pod
metadata:
  name: pod-test
spec:
  containers:
  - name: container-test
    image: ubuntu
    imagePullPolicy: IfNotPresent
    command:
      - "sleep"
      - "604800"
    volumeMounts:
    - name: volv
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: example-pvc

You can save each configuration into a YAML file and apply them with kubectl apply -f yourfilename.yaml. Check the status of your pod and PVC with kubectl get pod and kubectl get pvc.
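
For instance, assuming you saved the manifests as pvc.yaml and pod.yaml (hypothetical file names):

kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl get pvc example-pvc
kubectl get pod pod-test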

The Ubuntu container will sleep for a long time, so you can run commands inside of it and check its status.

Soon after deploying the example code, you can see in the Longhorn UI that a volume was created when you click on the “Volume” tab. You can also see that the created volume is attached to the Ubuntu pod and replicated 3 times.

To test your storage, you can attach your console to the shell of the pod by executing kubectl exec -it pod-test -- /bin/bash and then write something into the /data folder.
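
For example (the file name and contents are arbitrary):

kubectl exec -it pod-test -- /bin/bash
# inside the pod: write a test file to the Longhorn volume
echo "hello from longhorn" > /data/test.txt
exit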

Even when you delete the pod (e.g. via kubectl delete pod pod-test), you can reapply the Pod manifest via kubectl and see that the data written into /data is preserved.
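
A quick round trip to verify this, again assuming the Pod manifest is saved as pod.yaml:

kubectl delete pod pod-test
kubectl apply -f pod.yaml
kubectl wait --for=condition=Ready pod/pod-test
# prints the contents written before the pod was deleted
kubectl exec pod-test -- cat /data/test.txt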

Setting a storage class as default

Longhorn comes with a storage class named longhorn (see also the custom storage class example above), which defines the number of replicas, the backup schedule, and further properties, and declares that a requested volume is a Longhorn volume.

You might want to set this (or your custom storage class, if applicable) as the default storage class in your cluster. This way you can also directly provision Helm packages which depend on volumes.

To do so, you can either modify longhorn.yaml and reapply it, or simply issue this command at any time after installing Longhorn:

kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
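
You can verify the change with kubectl get storageclass; the default class is marked with (default) next to its name:

kubectl get storageclass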

Configuring Backup to Exoscale SOS

Longhorn can back up your volumes to S3/Exoscale Object Storage, either manually or on a schedule. It will also automatically detect already present backups.

Configuring the access secret

You need the following data (see the chapter Prerequisites for the relevant links):

  • An IAM key/secret pair that provides access to this bucket.

  • The S3 endpoint of the zone of your bucket. It has the format https://sos-ZONE.exo.io; take care to substitute ZONE with the proper zone.

Note

You can find all zones on our website and with the CLI: exo zones list

Convert the zone URL and the IAM key pair to base64 (replace DATA in the commands below).

  • On Linux/Mac: echo -n "DATA" | base64
  • On Windows (inside Powershell):
    • $pwd =[System.Text.Encoding]::UTF8.GetBytes("DATA")
    • [Convert]::ToBase64String($pwd)
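
For example, encoding the Vienna endpoint yields the value used in the manifest below:

echo -n "https://sos-at-vie-1.exo.io" | base64
# aHR0cHM6Ly9zb3MtYXQtdmllLTEuZXhvLmlv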

Save the following manifest in a new .yaml file, replacing the three values in the data section accordingly.

apiVersion: v1
kind: Secret
metadata:
  name: exoscale-sos-secret
  namespace: longhorn-system
type: Opaque
data:
  # Zone in base64
  AWS_ENDPOINTS: aHR0cHM6Ly9zb3MtYXQtdmllLTEuZXhvLmlv # https://sos-at-vie-1.exo.io
  AWS_ACCESS_KEY_ID: YourBase64AccessKey # access key for the bucket in base64
  AWS_SECRET_ACCESS_KEY: YourBase64SecretKey # secret key for the bucket in base64

Apply the manifest via kubectl apply -f FILENAME.yaml. It will create a new secret named exoscale-sos-secret in the cluster, which provides Longhorn with access to the bucket.
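
You can confirm that the secret exists (without printing its contents) with:

kubectl get secret exoscale-sos-secret --namespace longhorn-system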

Configuring the Backup Destination in Longhorn

Open up the Longhorn UI and go to the Tab Settings / General. Scroll down to the section Backup.

Fill out Backup Target using this format: s3://BUCKETNAME@ZONE/ - e.g. s3://my-cool-bucket@at-vie-1/ for a bucket in Vienna.

Enter the name of the created secret, exoscale-sos-secret, into Backup Target Credential Secret.

Scroll down and click Save to apply your configuration.

Then browse to the Volume tab. If the page does not show any error, the connection is successful.

If you have already created a volume, you can test backing it up manually by clicking on the respective volume in the Volume tab and then on Create Backup.

Longhorn Documentation - https://longhorn.io/docs/

Latest Longhorn Releases - https://github.com/longhorn/longhorn/releases