SKS is Exoscale’s managed Kubernetes offering, which consists of:

  • Managed Kubernetes control planes
  • Dynamic nodepool attachment
  • Control plane access management facilities
  • Full API support

SKS creates and manages the lifecycle of Kubernetes clusters in any existing Exoscale zone.

Prerequisites

To interact with Exoscale SKS and the resulting Kubernetes cluster, the following items are required:

  • An active Exoscale account
  • The Exoscale CLI tool, version 1.24 or later, installed on your laptop
  • The kubectl binary on your laptop (see note below)
  • Basic knowledge of Kubernetes clusters and YAML manifest files

Creating a cluster from the CLI

Creating a cluster can be done with a single command: exo sks create creates a control plane. By default, the Calico CNI is deployed on the cluster, and the Exoscale Cloud Controller Manager (CCM) is automatically wired for interaction with the Exoscale IaaS.

Because Kubernetes is a clustered application, the control plane and the nodes need to communicate. The following ingress ports should be open in your security group before starting:

  • 30000 - 32767 TCP from all sources, for NodePort services (used by LoadBalancer services, for example)
  • 10250 TCP with the security group itself as a source, for the kubelet
  • 4789 UDP with the security group itself as a source (if using the default Calico CNI, to allow traffic between the nodes only)

To create such a security group, you can use the CLI:

exo firewall create sks-security-group

exo firewall add sks-security-group -d "NodePort services" -p tcp -P 30000-32767

exo firewall add sks-security-group -d "SKS kubelet" -p tcp -P 10250 -s sks-security-group

exo firewall add sks-security-group -d "Calico traffic" -p udp -P 4789 -s sks-security-group
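
To double-check the resulting rules, you can display the security group (the exo firewall show subcommand is an assumption here; verify with exo firewall --help):

exo firewall show sks-security-group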

Then we can initiate the cluster:

exo sks create my-test-cluster --description "my-test-cluster" \
--nodepool-name "my-test-nodepool" \
--nodepool-size 3 \
--nodepool-security-group "sks-security-group" \
--zone de-fra-1 \
--service-level pro

The above command starts a fully dedicated Kubernetes cluster in your Organization, in the Frankfurt zone, with the pro service level and 3 nodes. We strongly recommend having at least two workers on your SKS clusters, as some components cannot be upgraded safely on single-node clusters.

The Exoscale orchestrator then creates a new instance pool with 3 instances on your behalf. By default, the instances are Medium instances with 50 GB disks attached.

These default values can be changed at creation time with additional exo sks create command flags; run exo sks create --help to get the full list (see the example after the output below).

 ✔ Creating SKS cluster "my-test-cluster"... 1m18s
 ✔ Adding Nodepool "my-test-nodepool"... 6s
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│  SKS CLUSTER  │                                                                  │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID            │ 87149afe-56de-4018-aa8f-46fdec24a483                             │
│ Name          │ my-test-cluster                                                  │
│ Description   │ my-test-cluster                                                  │
│ Zone          │ de-fra-1                                                         │
│ Creation Date │ 2021-02-12 10:48:33 +0000 UTC                                    │
│ Endpoint      │ https://87149afe-56de-4018-aa8f-46fdec24a483.sks-de-fra-1.exo.io │
│ Version       │ 1.20.2                                                           │
│ Service Level │ pro                                                              │
│ CNI           │ calico                                                           │
│ Add-Ons       │ exoscale-cloud-controller                                        │
│ State         │ running                                                          │
│ Nodepools     │ 57168bac-4613-4968-80cc-9d088243b10e | my-test-nodepool          │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
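
For instance, here is a sketch of a create command overriding the default instance size and disk; the --nodepool-instance-type and --nodepool-disk-size flag names (and the standard.large type) are assumptions, so verify the exact spelling with exo sks create --help:

exo sks create my-custom-cluster \
--nodepool-name "my-custom-nodepool" \
--nodepool-size 3 \
--nodepool-instance-type standard.large \
--nodepool-disk-size 100 \
--zone de-fra-1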

You can also add the --auto-upgrade option to the command to enable automatic upgrades of your control plane components to the latest available Kubernetes patch version.
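
If your CLI version supports it, the same setting may also be toggled on an existing cluster, for example with a hypothetical exo sks update my-test-cluster --auto-upgrade (verify with exo sks update --help).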

Managing nodepools

You can at any time create, delete, update, or scale SKS nodepools.

The --nodepool options on exo sks create are optional: you can also add nodepools to an existing cluster by using the exo sks nodepool add command.

Each nodepool can have different characteristics (firewalling configuration, private networks, instance type, disk size, anti-affinity groups…) depending on the applications you want to run on them.

You can retrieve the list of all the available commands and options with the --help flag, for example exo sks --help, exo sks nodepool --help or exo sks nodepool add --help.
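
As an illustration, the following sketch adds a second nodepool to the existing cluster and then scales it; the --size flag and the scale subcommand's argument order are assumptions here, so verify them with the --help flags above:

exo --zone de-fra-1 sks nodepool add my-test-cluster my-second-nodepool \
--size 2 \
--security-group "sks-security-group"

exo --zone de-fra-1 sks nodepool scale my-test-cluster my-second-nodepool 4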

Identifying your cluster and interacting with Kubernetes

First, identify your newly created cluster with the command

exo --zone de-fra-1 sks list

┼──────────────────────────────────────┼─────────────────┼──────────┼
│                  ID                  │      NAME       │   ZONE   │
┼──────────────────────────────────────┼─────────────────┼──────────┼
│ 87149afe-56de-4018-aa8f-46fdec24a483 │ my-test-cluster │ de-fra-1 │
┼──────────────────────────────────────┼─────────────────┼──────────┼

and then list the full details with

exo --zone de-fra-1 sks show my-test-cluster

Labels can be associated with clusters and nodepools to help classify and organize them.

The --label key=value and --nodepool-label options can be passed to the exo sks create command to add labels to your clusters or nodepools. The exo sks nodepool add command also supports --label.

The label options can be repeated to add multiple labels to an entity, as in the sketch below.
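
For example, a sketch labeling both the cluster and its nodepool at creation time (the key=value form for --nodepool-label is an assumption, and the labels themselves are arbitrary):

exo sks create my-labeled-cluster \
--label env=staging --label team=platform \
--nodepool-name "my-labeled-nodepool" \
--nodepool-size 2 \
--nodepool-label workload=web \
--zone de-fra-1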

Kubeconfig

You can then generate a kubeconfig file for your cluster:

exo --zone de-fra-1 sks kubeconfig my-test-cluster kube-admin \
--group system:masters > my-test-cluster.kubeconfig

The kube-admin parameter is the name of your user in the kubeconfig file and in the certificate. You could configure fine-grained permissions by creating a user in Kubernetes and then referencing it in your kubeconfig. You can optionally pass a TTL (in seconds) for the validity period of your kubeconfig’s certificate using the --ttl option; by default, a TTL of 30 days is used. The --group parameter is a list of the Kubernetes groups associated with the generated certificate.

This command returns a kubeconfig file. You can use it by passing it to kubectl with the --kubeconfig flag to send queries to your cluster.
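
As an illustration of fine-grained access, here is a sketch that issues a one-hour kubeconfig for a hypothetical user alice in a developers group, then grants that group read-only access through Kubernetes’ built-in view ClusterRole:

exo --zone de-fra-1 sks kubeconfig my-test-cluster alice \
--group developers --ttl 3600 > alice.kubeconfig

kubectl --kubeconfig my-test-cluster.kubeconfig create clusterrolebinding \
developers-view --clusterrole=view --group=developers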

Installing kubectl

From now on, the interaction with Kubernetes itself must be done with Kubernetes tooling. kubectl, short for Kubernetes Control, is the official command-line tool for Kubernetes and can be installed on Linux, macOS, or Windows.
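
For example, one common way to install the latest stable kubectl on a Linux laptop (see the official Kubernetes documentation for macOS and Windows equivalents):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/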

You can list your Kubernetes worker nodes with:

kubectl --kubeconfig my-test-cluster.kubeconfig get node

NAME               STATUS   ROLES    AGE   VERSION
pool-ad590-fizsv   Ready    <none>   14m   v1.20.2
pool-ad590-iyqua   Ready    <none>   14m   v1.20.2
pool-ad590-tjxga   Ready    <none>   14m   v1.20.2
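
With all nodes Ready, you can optionally verify that the Cloud Controller Manager is wired up by exposing a throwaway deployment through a LoadBalancer service, which should get an external IP provisioned on the Exoscale side (nginx here is just a placeholder image):

kubectl --kubeconfig my-test-cluster.kubeconfig create deployment nginx --image=nginx
kubectl --kubeconfig my-test-cluster.kubeconfig expose deployment nginx --port 80 --type LoadBalancer
kubectl --kubeconfig my-test-cluster.kubeconfig get service nginx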

Your Scalable Kubernetes Service cluster is now fully ready for production.

Instance prefixes

By default, virtual machines managed by a nodepool are named pool-<first 5 chars of the underlying instance pool ID>-<random string>.

This pattern can be customized by passing the --instance-prefix parameter during a nodepool creation. For example, if you pass the string applications to --instance-prefix, pool will be replaced by applications in the virtual machine names.
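
For example, a sketch of a nodepool whose instance names start with applications instead of pool (the --size flag name is an assumption; check exo sks nodepool add --help):

exo --zone de-fra-1 sks nodepool add my-test-cluster applications-nodepool \
--size 2 \
--instance-prefix applications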