The Exoscale flexible storage template is designed to let you manage your instance’s disk and storage as you see fit. The template provides:

  • A minimalistic Debian-based template, with no extraneous components and only what is required to get you started (namely, cloud-init and sudo)

  • A template built from scratch using the Debian “netinst” installation media and the ssh-server task (selected via tasksel)

  • UEFI boot mode and a GPT partition table (as opposed to Debian’s stock cloud image), to support disks larger than 2 TB, for example with Exoscale Storage Optimized Instances (both can be verified as sketched after this list)

  • LVM logical partitioning (as opposed to Debian’s stock cloud image), allowing you to manage partitions and filesystems as you see fit
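
Once logged into an instance (see Log In below), you can verify both the boot mode and the partition table type. A quick sketch, assuming the disk appears as /dev/vda as in the examples further down:

# UEFI firmware exposes this directory; it is absent on legacy BIOS boots
$ test -d /sys/firmware/efi && echo 'booted in UEFI mode'

# Show the partition table type; expect "Disklabel type: gpt"
$ fdisk -l /dev/vda | grep 'Disklabel type'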

An (Almost) Standard Debian Template

The flexible storage template is based on Debian’s stock installation configuration and deviates from it only minimally, making it easy to adopt for users accustomed to Debian or Ubuntu.

Deployment

Creating a flexible storage instance is similar to creating any other Exoscale Compute instance. Here is an example with the CLI:

$ exo compute instance create storage \
  --zone <desired-zone> \
  --template 'Exoscale Flexible Storage 10' \
  --disk-size 50 \
  --ssh-key <your-SSH-keypair>
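
If you are unsure of the exact template name or version, you can list the available templates first; a sketch, assuming your CLI version provides the instance-template subcommand:

# List the available templates and filter for the flexible storage ones
$ exo compute instance-template list | grep -i 'flexible storage'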

Log In

After the instance is deployed, log into it via SSH with the CLI:

$ exo compute instance ssh storage
# [output]
debian@storage:~$

Then use sudo to obtain administrative privileges:

debian@storage:~$ sudo -i
# [output]
root@storage:~#

(For the sake of readability, we will omit the root@storage:...# command prompt in the following shell examples.)

Flexible Storage Management

Thanks to the Linux Logical Volume Manager (LVM), and following the Linux Filesystem Hierarchy Standard (FHS) and general best practices, the flexible storage template’s initial partitioning separates a few filesystems into dedicated, ext4-formatted partitions: / (root), /var and /tmp (alongside /boot and the EFI system partition /boot/efi):

# List block devices (disks/partitions)
$ lsblk
# [output]
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                 254:0    0   50G  0 disk
|-vda1              254:1    0  511M  0 part /boot/efi
|-vda2              254:2    0  512M  0 part /boot
`-vda3              254:3    0    9G  0 part
  |-vg.flex-lv.root 253:0    0    5G  0 lvm  /
  |-vg.flex-lv.var  253:1    0    2G  0 lvm  /var
  `-vg.flex-lv.tmp  253:2    0    1G  0 lvm  /tmp

Note that the 50G disk (vda) is not entirely used by the underlying partitions (vda<N>).

Unlike with most other templates, partitions will not automatically grow after the instance’s disk is resized: you can create or resize partitions according to your requirements.
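
The disk itself is resized through the Exoscale API or CLI, not from within the guest. A hedged sketch, assuming your CLI version exposes the resize-disk subcommand (check exo compute instance --help):

# Grow the instance's disk to 100 GB
$ exo compute instance resize-disk storage 100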

For the sake of the example, we will demonstrate both:

  • how to resize the /var partition
  • how to create a new /data partition with the xfs filesystem

Resize the Physical Volume

The first step is to grow the /dev/vda3 partition, corresponding to the LVM Physical Volume (PV):

# Show LVM Volume Groups (VGs)
$ vgs
# [output]
  VG      #PV #LV #SN Attr   VSize   VFree
  vg.flex   1   3   0 wz--n- <9.00g  1.00g

# Grow the LVM-backing disk partition
$ growpart /dev/vda 3
# [output]
CHANGED: partition=3 start=2097152 old: size=18872320 end=20969472 new: size=102760415,end=104857567

# Grow the LVM Physical Volume (PV)
$ pvresize /dev/vda3
# [output]
  Physical volume "/dev/vda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

# Show LVM Volume Groups (VGs)
$ vgs
# [output]
  VG      #PV #LV #SN Attr   VSize   VFree
  vg.flex   1   3   0 wz--n- <49.00g 41.00g

Note the increase of VFree space in the LVM Volume Group (VG).

Resize an Existing Logical Volume/Partition

To resize (or extend) an existing LVM Logical Volume (LV) and partition (with /var as an example):

# Grow the LVM Logical Volume (LV)
$ lvresize -L +10G /dev/mapper/vg.flex-lv.var
# [output]
  Size of logical volume vg.flex/lv.var changed from <2.00 GiB (511 extents) to <12.00 GiB (3071 extents).
  Logical volume vg.flex/lv.var successfully resized.

# Grow the filesystem
$ resize2fs /dev/mapper/vg.flex-lv.var
# [output]
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/vg.flex-lv.var is mounted on /var; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/mapper/vg.flex-lv.var is now 3144704 (4k) blocks long.

# Show the filesystem capacity/usage
$ df -h /var
# [output]
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg.flex-lv.var   12G  217M   12G   2% /var
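
Note that the two steps above can be combined: when passed the --resizefs (-r) flag, lvresize also grows the filesystem carried by the Logical Volume. A minimal sketch, growing the same LV by a further 10 GB in one step:

# Grow the LV and its filesystem in a single step
$ lvresize -r -L +10G /dev/mapper/vg.flex-lv.var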

Create a New Logical Volume/Partition

You can create as many partitions or filesystems as you need. Here is an example with an xfs-formatted /data partition:

# Install XFS utilities
$ apt-get install xfsprogs

# Create the LVM Logical Volume (LV)
$ lvcreate -n lv.data -L 20G vg.flex
# [output]
  Logical volume "lv.data" created.

# Show the LVM Logical Volume (LV) details
$ lvdisplay /dev/mapper/vg.flex-lv.data
# [output (partial)]
  --- Logical volume ---
  LV Path                /dev/vg.flex/lv.data
  LV Name                lv.data
  VG Name                vg.flex
  LV Status              available
  LV Size                20.00 GiB

# Create the filesystem
$ mkfs.xfs -L DATA /dev/mapper/vg.flex-lv.data
# [output (partial)]
meta-data=/dev/mapper/vg.flex-lv.data isize=512    agcount=4, agsize=1310720 blks

# Create the filesystem mountpoint
$ mkdir -p /data

# Mount the filesystem
$ mount /dev/mapper/vg.flex-lv.data /data

# Show the filesystem capacity/usage
$ df -h /data
# [output]
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg.flex-lv.data   20G   53M   20G   1% /data

You may want to add an entry to /etc/fstab so the filesystem is mounted automatically at boot time:

$ grep /data /etc/fstab
# [output]
/dev/mapper/vg.flex-lv.data /data xfs defaults 0 2
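
Keep in mind that resize2fs only handles ext2/3/4 filesystems. Should you later need to grow the xfs-formatted /data volume, a sketch using the names above would be:

# Grow the LVM Logical Volume (LV)
$ lvresize -L +10G /dev/mapper/vg.flex-lv.data

# Grow the xfs filesystem (xfs_growfs operates on the mount point)
$ xfs_growfs /data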