The Exoscale Flexible Storage template aims to empower users to manage their instance’s disk/storage as they deem fit, by providing:

  • a minimalistic Debian-based template, with no extraneous components and only what is required to get you started (namely: cloud-init and sudo)

  • built from scratch using the Debian “netinst” installation media and the ssh-server installation scenario (aka. tasksel)

  • using UEFI boot mode and a GPT partition table (as opposed to Debian’s stock cloud image), so as to support larger-than-2TiB disk images (e.g. with Exoscale Storage Optimized Instances)

  • using LVM logical partitioning (as opposed to Debian’s stock cloud image), so that you can manage partitions and filesystems as you deem fit
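
For example, these properties can be verified from within a running instance (a quick check; it assumes the system disk appears as /dev/vda, as in the examples below, and the output shown is illustrative):

# Confirm the instance booted in UEFI mode (this directory only exists on UEFI systems)
$ [ -d /sys/firmware/efi ] && echo 'UEFI boot'
# [output]
UEFI boot

# Confirm the disk uses a GPT partition table
$ fdisk -l /dev/vda | grep 'Disklabel type'
# [output]
Disklabel type: gpt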

An (almost) standard Debian template

The Exoscale Flexible Storage template is based on, and deviates minimally from, Debian’s stock installation/configuration, making it easy to adopt for users accustomed to Debian/Ubuntu.

Deployment

Creating an Exoscale Flexible Storage instance is similar to creating any other Exoscale Compute instance; for example, using the Exoscale CLI:

$ exo compute instance create storage \
  --zone <desired-zone> \
  --template 'Exoscale Flexible Storage 10' \
  --disk-size 50 \
  --ssh-key <your-SSH-keypair>

Logging in

Once the Compute instance is deployed, log into it via SSH with the Exoscale CLI:

$ exo compute instance ssh storage
# [output]
debian@storage:~$

Then use sudo to obtain administrative (root) privileges:

debian@storage:~$ sudo -i
# [output]
root@storage:~#

(from now on and for the sake of readability, we’ll omit the root@storage:...# command prompt in the shell command examples)

Flexible storage management

Thanks to the Linux Logical Volume Manager (LVM), and in keeping with the Linux Filesystem Hierarchy Standard (FHS) and recommended best practices, the Exoscale Flexible Storage template’s initial partitioning separates a few ext4-formatted system partitions, namely / (root), /var and /tmp (as well as /boot and /boot/efi):

# List block devices (disks/partitions)
$ lsblk
# [output]
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                 254:0    0   50G  0 disk
|-vda1              254:1    0  511M  0 part /boot/efi
|-vda2              254:2    0  512M  0 part /boot
`-vda3              254:3    0    9G  0 part
  |-vg.flex-lv.root 253:0    0    5G  0 lvm  /
  |-vg.flex-lv.var  253:1    0    2G  0 lvm  /var
  `-vg.flex-lv.tmp  253:2    0    1G  0 lvm  /tmp

Note that the 50G disk (vda) space is not entirely used by the underlying partitions (vda<N>).

Unlike with most other templates, partitions will not automatically grow once the instance’s disk is resized, leaving you free to create and resize partitions according to your requirements.
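
Resizing the instance’s disk itself is done through the Exoscale Portal or CLI; for example (a sketch only: the resize-disk subcommand and its exact syntax are an assumption based on recent Exoscale CLI versions, and 100 GB is an illustrative value):

# Grow the instance's disk to 100 GB
$ exo compute instance resize-disk storage 100 --zone <desired-zone>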

For the sake of the example, we will both:

  • resize the /var partition

  • create a new /data partition with the xfs filesystem

Resize the Physical Volume

The first step is to grow the /dev/vda3 partition, corresponding to the LVM Physical Volume (PV):

# Show LVM Volume Groups (VGs)
$ vgs
# [output]
  VG      #PV #LV #SN Attr   VSize   VFree
  vg.flex   1   3   0 wz--n- <9.00g  1.00g

# Grow the LVM-backing disk partition
$ growpart /dev/vda 3
# [output]
CHANGED: partition=3 start=2097152 old: size=18872320 end=20969472 new: size=102760415,end=104857567

# Grow the LVM Physical Volume (PV)
$ pvresize /dev/vda3
# [output]
  Physical volume "/dev/vda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

# Show LVM Volume Groups (VGs)
$ vgs
# [output]
  VG      #PV #LV #SN Attr   VSize   VFree
  vg.flex   1   3   0 wz--n- <49.00g 41.00g

Note the increase of VFree space in the LVM Volume Group (VG).
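
The same can be checked at the Physical Volume (PV) level (output shown for illustration, matching this example’s 50G disk):

# Show LVM Physical Volumes (PVs)
$ pvs
# [output]
  PV         VG      Fmt  Attr PSize   PFree
  /dev/vda3  vg.flex lvm2 a--  <49.00g 41.00g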

Resize an existing Logical Volume/partition

You may then resize (extend) an existing LVM Logical Volume (LV) and its filesystem; for example /var:

# Grow the LVM Logical Volume (LV)
$ lvresize -L +10G /dev/mapper/vg.flex-lv.var
# [output]
  Size of logical volume vg.flex/lv.var changed from <2.00 GiB (511 extents) to <12.00 GiB (3071 extents).
  Logical volume vg.flex/lv.var successfully resized.

# Grow the filesystem
$ resize2fs /dev/mapper/vg.flex-lv.var
# [output]
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/vg.flex-lv.var is mounted on /var; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/mapper/vg.flex-lv.var is now 3144704 (4k) blocks long.

# Show the filesystem capacity/usage
$ df -h /var
# [output]
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg.flex-lv.var   12G  217M   12G   2% /var
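
Note that lvresize can grow both the Logical Volume and its filesystem in a single step thanks to its --resizefs (-r) option, which would replace the two commands above:

# Grow the LVM Logical Volume (LV) and its filesystem in one go
$ lvresize -r -L +10G /dev/mapper/vg.flex-lv.var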

Create a new Logical Volume/partition

You may also create as many partitions/filesystems as you need; for example an xfs-formatted /data partition:

# Install XFS utilities
$ apt-get install xfsprogs

# Create the LVM Logical Volume (LV)
$ lvcreate -n lv.data -L 20G vg.flex
# [output]
  Logical volume "lv.data" created.

# Show the LVM Logical Volume (LV) details
$ lvdisplay /dev/mapper/vg.flex-lv.data
# [output (partial)]
  --- Logical volume ---
  LV Path                /dev/vg.flex/lv.data
  LV Name                lv.data
  VG Name                vg.flex
  LV Status              available
  LV Size                20.00 GiB

# Create the filesystem
$ mkfs.xfs -L DATA /dev/mapper/vg.flex-lv.data
# [output (partial)]
meta-data=/dev/mapper/vg.flex-lv.data isize=512    agcount=4, agsize=1310720 blks

# Create the filesystem mountpoint
$ mkdir -p /data

# Mount the filesystem
$ mount /dev/mapper/vg.flex-lv.data /data

# Show the filesystem capacity/usage
$ df -h /data
# [output]
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg.flex-lv.data   20G   53M   20G   1% /data
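
Note that xfs filesystems are grown with xfs_growfs (also part of xfsprogs) rather than resize2fs, and that it operates on the mounted filesystem; should you later extend the /data partition:

# Grow the LVM Logical Volume (LV)
$ lvresize -L +10G /dev/mapper/vg.flex-lv.data

# Grow the (mounted) xfs filesystem
$ xfs_growfs /data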

You may want to add the new filesystem to /etc/fstab so it is mounted automatically at boot time:

$ grep /data /etc/fstab
# [output]
/dev/mapper/vg.flex-lv.data /data xfs defaults 0 2
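
The entry can be added and tested as follows (a minimal sketch, re-using the exact fstab entry shown above):

# Add the mount entry to /etc/fstab
$ echo '/dev/mapper/vg.flex-lv.data /data xfs defaults 0 2' >> /etc/fstab

# Verify the entry by unmounting the filesystem and re-mounting everything declared in /etc/fstab
$ umount /data
$ mount -a

# Confirm the filesystem is mounted again
$ findmnt /data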