Presentation

SKS - Scalable Kubernetes Service - is a managed control plane service for Kubernetes (K8S) by Exoscale.

Terminology

  • Instance Pool (IP): a group of similar compute instances whose lifecycle is managed by the scheduler, created from a set of user-specified instance properties (e.g. size, template, security groups…)
  • Node Pool (NP): an IP managed by the SKS scheduler (users can’t modify its properties directly); users can assign K8S pods to a specific group of Nodes by specifying a nodeSelector spec
  • Node: the role assumed by a set of components running on a Compute instance that is a member of an IP, mapping to a K8S Node.
  • Control Plane (CP): set of components managing the lifecycle of a K8S cluster (TLS certificates, etcd cluster, K8S Master-related components), mapping to a K8S “master”
  • Cluster: virtual entity encapsulating a CP and a number of NPs

Features

Scalable Kubernetes Service has the following feature set:

  • Managed, highly available CP (depending on version)
  • An NP can be grown/shrunk live (as the underlying IP can)
  • Multiple NPs can be attached to an SKS cluster
  • K8S services of type LoadBalancer in an SKS cluster can be exposed by a Network Load Balancer (provisioned by the CP cloud-controller component)
  • An SKS cluster can be created/grown/shrunk/destroyed on demand
  • An SKS cluster can be upgraded on demand to a new available version
  • SKS clusters can be configured to be automatically upgraded to the latest available Kubernetes patch version
  • An SKS cluster’s root credentials (kubeconfig) can be retrieved via an SKS API call (the credentials have a TTL of 30 days). You can also request a kubeconfig file for a user or a group that exists in your Kubernetes cluster in order to attach specific permissions using the Kubernetes RBAC mechanism (see the first sketch after this list)
  • NPs can be attached to managed private networks so that SKS workloads can communicate with applications running on your private networks
  • You can add labels to SKS clusters and nodepools
  • Nodepool labels are propagated to the Kubernetes nodes, prefixed with “node.exoscale.net”. For example, a nodepool label env=production becomes “node.exoscale.net/env=production” on the Kubernetes nodes. A “node.exoscale.net/nodepool-id” label referring to the SKS nodepool ID is also set on each node (see the second sketch after this list)
  • SKS nodepools support dedicated hypervisors, meaning SKS worker instances can run on your own dedicated hypervisors
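
As a first sketch, here is how the RBAC mechanism mentioned above can be used: assuming you requested a kubeconfig for a group named sks-viewers (a hypothetical name), a ClusterRoleBinding can grant that group read-only access to the cluster:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: sks-viewers-view
    subjects:
      - kind: Group
        name: sks-viewers                    # hypothetical group name embedded in the requested kubeconfig
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: view                             # built-in Kubernetes read-only ClusterRole
      apiGroup: rbac.authorization.k8s.io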
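
As a second sketch, the label propagation described above can be used to pin pods to a nodepool: assuming a nodepool carries the label env=production, a pod targets it through the prefixed label (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: production-app                   # hypothetical pod name
    spec:
      nodeSelector:
        node.exoscale.net/env: production    # nodepool label env=production, prefixed as described above
      containers:
        - name: app
          image: nginx:stable                # placeholder image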

Architecture

The overall SKS architecture is as described below:

Overall SKS Architecture Diagram

Control Plane

Several components are deployed when you create a cluster: some run on our side, some run on your cluster. On our side, we run the Control Plane components listed in the Service Level and Support section below (etcd, the API server, the scheduler, the controller-manager, and the Exoscale Cloud Controller Manager).

We also deploy on your cluster:

  • kube-proxy, to forward traffic to pods (using the iptables mode).
  • CoreDNS, to manage the cluster DNS. The CoreDNS ConfigMap (named coredns in the kube-system namespace) is never updated by Exoscale after the first deployment, in order to let you override it if needed (see the first sketch after this list).
  • The Konnectivity agent, which maintains the tunnel the Control Plane uses to reach the cluster network.
  • Calico, deployed by default to manage the cluster network. You can also choose to deploy a cluster without a CNI plugin if you want to.
  • The Metrics Server, also deployed by default. You can use it to gather pod and node metrics, and it also allows you to autoscale your deployments (see the second sketch after this list).
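
As a first sketch, overriding CoreDNS could look like the following: the coredns ConfigMap is edited to forward a private zone to your own resolver. The base Corefile shown is a typical CoreDNS default, not necessarily the exact one shipped by Exoscale, and the corp.example. zone and resolver address are assumptions:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
            }
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
        # hypothetical private zone forwarded to a hypothetical internal resolver
        corp.example. {
            forward . 10.0.0.53
        }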
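
As a second sketch, the Metrics Server enables resource-based autoscaling out of the box; a minimal HorizontalPodAutoscaler (the target Deployment name and thresholds are assumptions):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: app-hpa                          # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: app                            # hypothetical Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80         # scale out above 80% average CPU, as reported by the Metrics Server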

Nodepools

A Nodepool is a logical group of Kubernetes workers. When you create a nodepool, you specify the characteristics you want for the Kubernetes workers (instance disk size, instance type, firewall or anti-affinity rules…) and the number of workers; worker virtual machines will then be provisioned and will automatically join the cluster.

A cluster can have multiple nodepools, each with its own characteristics depending on what you want to do.

Nodepools are backed by Instance Pools. When a Nodepool is created, a new Instance Pool named nodepool-<nodepool-name>-<nodepool-id> will be created.

It’s important to understand that you cannot interact with this Instance Pool directly: it is managed entirely by Exoscale according to the Nodepool state. If you want to scale your Nodepool, or evict machines from it, you need to target the Nodepool, not the Instance Pool.

Exoscale Cloud Controller Manager

The Exoscale Cloud Controller Manager performs several actions on the cluster:

  • Validate nodes: when a node joins the cluster, the controller enables it and approves the Kubelet server certificate signing request.
  • Manage Load Balancers: the controller automatically provisions and manages Exoscale Network Load Balancers based on Kubernetes services of type LoadBalancer (see the sketch below). This allows you to have a highly available load-balancing setup for your cluster.
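
A minimal sketch of such a service (name, selector, and ports are assumptions): applying it causes the Cloud Controller Manager to provision an Exoscale Network Load Balancer in front of the nodepool instances.

    apiVersion: v1
    kind: Service
    metadata:
      name: web                              # hypothetical service name
    spec:
      type: LoadBalancer                     # picked up by the Exoscale Cloud Controller Manager
      selector:
        app: web                             # hypothetical pod selector
      ports:
        - port: 80                           # port exposed on the Network Load Balancer
          targetPort: 8080                   # container port receiving the traffic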

When the Exoscale Cloud Controller is enabled, an IAM key is automatically provisioned on your account. You can find more information about this key (and how to rotate it if needed) in this documentation.

Pricing tiers

SKS is available in two tiers with the following differences:

Exoscale SKS        STARTER                              PRO
Usage               For K8S in the development           For all workloads that need
                    pipeline and proof of concepts       flexibility and full protection
API                 yes                                  yes
CLI                 yes                                  yes
Terraform           yes                                  yes
Auto-Upgrade        opt-in                               opt-in
High Availability   no                                   yes
Backup of etcd      no                                   Min. daily
SLA                 no                                   99.95%
Price               Free                                 See pricing

Note: Starter clusters with no nodepools attached will be deleted after 30 days of inactivity.

Service Level and Support

With SKS, all components of the Control Plane are covered by our SLA, including:

  • etcd
  • Apiserver
  • Scheduler
  • Controller-manager
  • CCM

There is no SLA for any node components running inside the SKS cluster. Each Node is however covered by the standard compute SLA of 99.95%.

At the time of writing we deploy the following components inside SKS clusters:

  • Calico
  • CoreDNS
  • Konnectivity
  • kube-proxy

These components are not covered by the SKS SLA, as it is not possible to ensure a clear responsibility split between parties. We support them on a best-effort basis and provide upgrade tools and operations.

User support scope is limited to the components mentioned above.

A temporary kubeconfig may be requested from the user to access their cluster if any of these components needs troubleshooting.

Availability

The SKS offering is currently available in the following Exoscale zones:

  • at-vie-1
  • bg-sof-1
  • ch-dk-2
  • ch-gva-2
  • de-fra-1
  • de-muc-1

Limitations

SKS is available with the following limitations to ensure correct performance and supportability:

  • Minimum instance size: Small or 2 GB RAM equivalent
  • Minimum instance disk size: 20 GB
  • No cross zone stretch support: each cluster is local to a single zone only
  • Root credentials maximum Time To Live of 30 days

See Also