Presentation

Scalable Kubernetes Service (SKS) is a managed control plane service for Kubernetes (often abbreviated K8s), offered by Exoscale.

Terminology

  • Instance Pool: a group of similar Compute instances whose lifecycle is managed by the scheduler, created from a set of user-specified instance properties (such as size, template, security groups, et cetera)
  • Nodepool: an instance pool managed by an SKS scheduler, which users can target to assign Kubernetes pods to a specific group of nodes by specifying a nodeSelector spec (see the example after this list)
  • Node: a role assumed by a set of components running on a Compute instance belonging to an instance pool, mapping to a Kubernetes node.
  • Control Plane: a set of components managing the lifecycle of a Kubernetes cluster (TLS certificates, etcd cluster, Kubernetes Master-related components), mapping to a Kubernetes “master”
  • Cluster: a virtual entity encapsulating a control plane and one or more nodepools
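
As a minimal sketch of the nodeSelector mechanism mentioned above, the following pod manifest pins a pod to one nodepool via the node.exoscale.net/nodepool-id label described under Features below (the nodepool ID, pod name, and container image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nodepool-pinned-pod
    spec:
      # Schedule this pod only on nodes belonging to a specific SKS nodepool,
      # using the label propagated by SKS (the ID below is a placeholder).
      nodeSelector:
        node.exoscale.net/nodepool-id: "00000000-0000-0000-0000-000000000000"
      containers:
        - name: app
          image: nginx:stable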

Features

Scalable Kubernetes Service has an expansive feature set:

  • A managed, highly available control plane (Pro version).
  • Automatic scalability of control plane resources (Pro version).
  • Nodepools (like the instance pools that back them) can be grown or shrunk live.
  • Multiple nodepools can be attached to an SKS cluster.
  • Kubernetes services of type LoadBalancer in a SKS cluster can be exposed by a Network Load Balancer, provisioned by the control plane cloud-controller component.
  • SKS clusters can be created/grown/shrunk/destroyed on demand.
  • SKS clusters can be upgraded on demand to a new available version.
  • SKS clusters can be configured to automatically upgrade to the latest Kubernetes patch version.
  • An SKS cluster’s root credentials can be retrieved via an SKS API call. You can also request a kubeconfig file for a user or group that exists in your Kubernetes cluster, and attach specific permissions to it using the Kubernetes RBAC mechanism. SKS authentication and RBAC can also be handled via an OpenID Connect identity provider.
  • Nodepools can be attached to managed private networks so that SKS workloads can communicate with applications running on your private networks.
  • You can add labels to SKS clusters and nodepools.
  • A nodepool’s labels are propagated to its Kubernetes nodes. A node.exoscale.net/nodepool-id label, which refers to the SKS nodepool ID, is also set on each node.
  • SKS nodepools support dedicated hypervisors, so SKS worker machines can run on your dedicated hypervisors.
  • SKS nodepools support GPU-enabled instances.
  • At nodepool creation time, Kubernetes taints can be attached to workers. For example, env=demo:NoSchedule creates a taint with key env, value demo, and effect NoSchedule on each worker of that nodepool (a matching toleration example follows this list).
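
For illustration, a pod that should be allowed to run on workers carrying the env=demo:NoSchedule taint from the example above needs a matching toleration; a minimal sketch, where the pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-tolerating-pod
    spec:
      # Tolerate the env=demo:NoSchedule taint set on the nodepool's workers,
      # so the scheduler may place this pod on them.
      tolerations:
        - key: "env"
          operator: "Equal"
          value: "demo"
          effect: "NoSchedule"
      containers:
        - name: app
          image: nginx:stable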

Architecture

Following is a breakdown of the overall SKS architecture.

Overall SKS Architecture Diagram

Control Plane

You deploy several components when you create a cluster. Some run on the Exoscale side, and some run on your cluster. On the Exoscale side, we run the control plane components listed under Service Level and Support below:

  • etcd
  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • the Exoscale Cloud Controller Manager (CCM)
  • the Konnectivity server (the counterpart of the agent running in your cluster)

We also deploy on your cluster:

  • kube-proxy to forward traffic to pods (using the iptables mode).
  • CoreDNS to manage the cluster DNS. The CoreDNS ConfigMap (named coredns in the kube-system namespace) is never updated by Exoscale after the initial deployment, so you can override it if needed.
  • The Konnectivity agent is also deployed on the cluster.
  • Calico is deployed by default to manage the cluster network. You can also choose to deploy a cluster without CNI plugins.
  • Metrics server is also deployed by default. You can use it to gather pod and node metrics and to auto-scale your deployments (see the example after this list).
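
As an example of what the metrics server enables, a HorizontalPodAutoscaler can scale a deployment based on observed CPU usage; a minimal sketch, where the deployment name, replica bounds, and threshold are placeholders:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: demo-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: demo-deployment   # placeholder: an existing deployment
      minReplicas: 2
      maxReplicas: 10
      metrics:
        # CPU utilization figures come from the metrics server deployed in the cluster.
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80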

Nodepools

A Nodepool is a logical group of Kubernetes workers. When you create a nodepool, you specify:

  • the characteristics of the workers (instance disk size, instance type, firewall or anti-affinity rules, et cetera)
  • the number of workers

The worker instances (virtual machines) are provisioned and join the cluster automatically.

A cluster can have multiple nodepools, each with its own characteristics depending on what you need.

Nodepools are backed by Instance Pools. When a Nodepool is created, a new instance pool named nodepool-<nodepool-name>-<nodepool-id> is also created.

Please note that you cannot interact with this Instance Pool directly: it is managed entirely by Exoscale based on the nodepool state. If you want to scale your nodepool or evict machines from it, target the nodepool, not the Instance Pool.

Exoscale Cloud Controller Manager

The Exoscale Cloud Controller Manager performs several actions on the cluster:

  • Validates nodes. When a node joins the cluster, the controller will enable it and approve the Kubelet server certificate signing request.
  • Manages load balancers. The controller automatically provisions and manages Exoscale Network Load Balancers based on Kubernetes services of type LoadBalancer, providing a highly available load balancing setup for your cluster (a minimal example follows this list).
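
As a sketch of this integration, a plain Kubernetes Service of type LoadBalancer is enough for the controller to provision a Network Load Balancer (the service name, selector, and ports below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
    spec:
      type: LoadBalancer   # the Exoscale CCM provisions a Network Load Balancer for this
      selector:
        app: demo          # placeholder: matches the pods to expose
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP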

When the Exoscale Cloud Controller is enabled, an IAM key is automatically provisioned on your account. You can find more information about this Cloud Controller IAM key (and how to rotate it) in the SKS Certificates and Keys section.

Pricing Tiers

SKS is available in two versions, with the following differences:

  Exoscale SKS           STARTER                        PRO
  Usage                  K8s in development pipelines;  All workloads that need
                         proofs of concept              flexibility and full protection
  API                    yes                            yes
  CLI                    yes                            yes
  Terraform              yes                            yes
  Control plane scaling  no                             yes
  Auto-Upgrade           opt-in                         opt-in
  High Availability      no                             yes
  Backup of etcd         no                             Min. daily
  SLA                    no                             99.95%
  Price                  Free                           See Pricing

Note: Starter clusters with no nodepools attached will be deleted after 30 days of inactivity.

Service Level and Support

With SKS, all components of the control plane are covered by our SLA, including:

  • etcd
  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • the Exoscale Cloud Controller Manager (CCM)

There is no SLA for any node components running inside an SKS cluster. However, each node is covered by the standard compute SLA of 99.95%.

We deploy the following components inside SKS clusters:

  • Calico
  • CoreDNS
  • Konnectivity
  • kube-proxy

These components are not covered by the SKS SLA, as we cannot ensure a clear split of responsibility between parties. We support them on a best-effort basis and provide upgrade tools and operations. User support scope is limited to the components mentioned above.

A temporary kubeconfig may be requested from the user to access their cluster if any of these components needs troubleshooting.

Availability

The SKS offering is currently available in the following Exoscale zones:

  • at-vie-1
  • at-vie-2
  • bg-sof-1
  • ch-dk-2
  • ch-gva-2
  • de-fra-1
  • de-muc-1

Limitations

SKS is available with the following limitations to ensure performance and supportability:

  • Minimum instance size: Small or 2 GB RAM equivalent
  • Minimum instance disk size: 20 GB
  • No cross-zone stretch support: each cluster is local to a single zone only
  • Root credentials have a maximum time-to-live (TTL) of 30 days

See Also

Exoscale Academy

Are you interested in a general introduction to SKS features? Take a look at the free online academy courses: SKS STARTER and SKS ADVANCED.