Scalable Kubernetes Service (or SKS) is Exoscale’s managed Kubernetes offering, which consists of:

  • Managed Kubernetes control planes
  • Dynamic Nodepool attachment
  • Control plane access management facilities
  • Full API support

SKS creates and manages the lifecycle of Kubernetes clusters in any existing Exoscale zone.

New to Kubernetes?

Learn the basic concepts of Kubernetes and container orchestration with our free SKS Starter Course in the Exoscale Academy.

Prerequisites

To interact with Exoscale SKS and the resulting Kubernetes cluster, the following items are required:

  • An active Exoscale account
  • The Exoscale CLI tool - version 1.24 or above
  • The kubectl binary (see note below)
  • Basic knowledge of Kubernetes clusters and YAML manifest files
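
Assuming the Exoscale CLI and kubectl are already installed, you can quickly verify that both meet the requirements (exo version and kubectl version --client are each tool's standard version command):

exo version
kubectl version --client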

Creating a cluster from the CLI

You can create a cluster with the exo compute sks create command, which provisions a control plane. By default, the Calico CNI is deployed on the cluster, and the Exoscale Cloud Controller Manager (CCM) is automatically configured to interact with the Exoscale IaaS.

Because Kubernetes is a clustered application, the control plane and the nodes need to communicate with each other. Before starting, open the following ingress ports in your security group:

  • 30000 to 32767 TCP from all sources for NodePort and LoadBalancer services
  • 10250 TCP with the security group as a source for the kubelet API

To create a security group with these rules, you can use the CLI:

exo compute security-group create sks-security-group

exo compute security-group rule add sks-security-group \
    --description "NodePort services" \
    --protocol tcp \
    --network 0.0.0.0/0 \
    --port 30000-32767

exo compute security-group rule add sks-security-group \
    --description "SKS kubelet" \
    --protocol tcp \
    --port 10250 \
    --security-group sks-security-group

Depending on which CNI plugin you want to use, you will also need to open additional ports.

If using Calico as the CNI plugin (the default), you need to open:

  • 4789 UDP with the security group as a source for VXLAN communication between nodes

exo compute security-group rule add sks-security-group \
    --description "Calico traffic" \
    --protocol udp \
    --port 4789 \
    --security-group sks-security-group

Then you can initiate the cluster:

exo compute sks create my-test-cluster \
    --zone de-fra-1 \
    --service-level pro \
    --nodepool-name my-test-nodepool \
    --nodepool-size 3 \
    --nodepool-security-group sks-security-group

If using Cilium as the CNI plugin, you need to open:

  • 8472 UDP with the security group as a source for VXLAN communication between nodes
  • 4240 TCP with the security group as a source for network connectivity health API (health-checks)
  • PING (ICMP type 8 & code 0) with the security group as a source for health checks

exo compute security-group rule add sks-security-group \
    --description "Cilium (healthcheck)" \
    --protocol icmp \
    --icmp-type 8 \
    --icmp-code 0 \
    --security-group sks-security-group

exo compute security-group rule add sks-security-group \
    --description "Cilium (vxlan)" \
    --protocol udp \
    --port 8472 \
    --security-group sks-security-group

exo compute security-group rule add sks-security-group \
    --description "Cilium (healthcheck)" \
    --protocol tcp \
    --port 4240 \
    --security-group sks-security-group

Then you can initiate the cluster, specifying Cilium as the CNI plugin with the --cni flag:

exo compute sks create my-test-cluster \
    --zone de-fra-1 \
    --cni cilium \
    --service-level pro \
    --nodepool-name my-test-nodepool \
    --nodepool-size 3 \
    --nodepool-security-group sks-security-group

This exo compute sks create command starts a fully dedicated Kubernetes cluster in your organization in the Frankfurt zone (de-fra-1), with the pro service level and a nodepool of 3 worker nodes.

We strongly recommend having at least two worker nodes in your SKS clusters: some components cannot be upgraded safely on single-node clusters.

The Exoscale orchestrator then creates a new instance pool with 3 instances on your behalf. By default, these are Medium instances with 50 GB disks attached.

These default values can be changed at creation time with additional exo compute sks create command flags; run the command without parameters to get the full list. The creation command shown above produces output similar to the following:

 ✔ Creating SKS cluster "my-test-cluster"... 1m18s
 ✔ Adding Nodepool "my-test-nodepool"... 6s
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│  SKS CLUSTER  │                                                                  │
┼───────────────┼──────────────────────────────────────────────────────────────────┼
│ ID            │ 87149afe-56de-4018-aa8f-46fdec24a483                             │
│ Name          │ my-test-cluster                                                  │
│ Description   │ my-test-cluster                                                  │
│ Zone          │ de-fra-1                                                         │
│ Creation Date │ 2024-07-11 10:48:33 +0000 UTC                                    │
│ Endpoint      │ https://87149afe-56de-4018-aa8f-46fdec24a483.sks-de-fra-1.exo.io │
│ Version       │ 1.29.6                                                           │
│ Service Level │ pro                                                              │
│ CNI           │ calico                                                           │
│ Add-Ons       │ exoscale-cloud-controller                                        │
│ State         │ running                                                          │
│ Nodepools     │ 57168bac-4613-4968-80cc-9d088243b10e | my-test-nodepool          │
┼───────────────┼──────────────────────────────────────────────────────────────────┼

You can also add the --auto-upgrade option to the command to enable automatic upgrades of your control plane components to the latest available Kubernetes patch version.
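
For example, a combined sketch of these options: the --nodepool-instance-type and --nodepool-disk-size flag names (and the standard.large type used here) are assumptions, so confirm them with exo compute sks create --help before running the command:

exo compute sks create my-test-cluster \
    --zone de-fra-1 \
    --service-level pro \
    --auto-upgrade \
    --nodepool-name my-test-nodepool \
    --nodepool-size 3 \
    --nodepool-instance-type standard.large \
    --nodepool-disk-size 100 \
    --nodepool-security-group sks-security-group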

Managing Nodepools

You can create, delete, update, or scale SKS nodepools at any time.

The --nodepool options on exo compute sks create are optional: you can also add nodepools to an existing cluster with the exo compute sks nodepool add command.

Each nodepool can have different characteristics - such as Security Groups, Private Networks, instance type, disk size, Anti-Affinity Groups and so on - depending on the applications you want to run on them.
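
As an illustration only, here is a sketch of adding a second nodepool with a larger instance type to the cluster created above; the --instance-type, --disk-size, and --security-group flag names are assumptions to verify with the --help output mentioned below:

exo compute sks nodepool add my-test-cluster my-larger-nodepool \
    --zone de-fra-1 \
    --size 2 \
    --instance-type standard.large \
    --disk-size 100 \
    --security-group sks-security-group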

You can retrieve the list of all the available commands and options with the --help flag in the CLI, for example:

  • exo compute sks --help

  • exo compute sks nodepool --help

  • exo compute sks nodepool add --help

Identifying your cluster and interacting with Kubernetes

First, identify your newly created cluster with the command:

exo compute sks list

┼──────────────────────────────────────┼─────────────────┼──────────┼
│                  ID                  │      NAME       │   ZONE   │
┼──────────────────────────────────────┼─────────────────┼──────────┼
│ 87149afe-56de-4018-aa8f-46fdec24a483 │ my-test-cluster │ de-fra-1 │
┼──────────────────────────────────────┼─────────────────┼──────────┼

Then list the full details:

exo compute sks show my-test-cluster --zone de-fra-1

Kubeconfig

You can then generate a kubeconfig file for your cluster:

exo compute sks kubeconfig my-test-cluster kube-admin \
    --zone de-fra-1 \
    --group system:masters > my-test-cluster.kubeconfig

The kube-admin parameter is the name of your user in the kubeconfig file and in the certificate. You can configure fine-grained permissions by creating a user in Kubernetes and then referencing it in your kubeconfig. You can optionally pass a TTL (in seconds) for the validity period of your kubeconfig’s certificate with the --ttl option; by default, a TTL of 30 days is used.

The group parameter is a list of the Kubernetes groups associated with the generated certificate. The command writes out a kubeconfig file, which you can pass to kubectl with the --kubeconfig flag to send queries to your cluster.
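
For instance, based on the --ttl option described above, you could generate a short-lived kubeconfig valid for one hour (3600 seconds):

exo compute sks kubeconfig my-test-cluster kube-admin \
    --zone de-fra-1 \
    --group system:masters \
    --ttl 3600 > my-test-cluster-1h.kubeconfig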

Installing kubectl

From now on, the interaction with Kubernetes itself is done with the standard Kubernetes tooling. kubectl is the official command-line tool for Kubernetes and is available for Linux, macOS, and Windows.
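
If kubectl is not installed yet, one way to install it on a Linux amd64 machine follows the upstream Kubernetes documentation (adjust the OS and architecture in the URL for other platforms):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl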

You can list your Kubernetes worker nodes with:

kubectl --kubeconfig my-test-cluster.kubeconfig get node

NAME               STATUS   ROLES    AGE   VERSION
pool-ad590-fizsv   Ready    <none>   14m   v1.29.6
pool-ad590-iyqua   Ready    <none>   14m   v1.29.6
pool-ad590-tjxga   Ready    <none>   14m   v1.29.6

Your Scalable Kubernetes Service cluster is now fully ready for production.
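
As an optional smoke test (not part of the SKS setup itself), you could deploy a sample workload and expose it; with the Exoscale CCM running, a Service of type LoadBalancer is expected to be backed by an Exoscale Network Load Balancer:

kubectl --kubeconfig my-test-cluster.kubeconfig create deployment nginx --image=nginx
kubectl --kubeconfig my-test-cluster.kubeconfig expose deployment nginx --port=80 --type=LoadBalancer
kubectl --kubeconfig my-test-cluster.kubeconfig get service nginx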