OpenID Connect

Kubernetes natively supports OpenID Connect as an authentication method, adding external (and more granular) access control to your SKS cluster. OpenID Connect is a flavor of OAuth 2 implemented by many identity providers.

At creation time, you can launch an SKS cluster with the OpenID Connect parameters already specified, so that no further configuration is required to access Kubernetes control plane management.

This feature is accessible via the CLI with the following flags:

      --oidc-client-id string                  OpenID client ID
      --oidc-groups-claim string               OpenID JWT claim to use as the user's group
      --oidc-groups-prefix string              OpenID prefix prepended to group claims
      --oidc-issuer-url string                 OpenID provider URL
      --oidc-required-claim string             a key=value pair that describes a required claim in the OpenID Token
      --oidc-username-claim string             OpenID JWT claim to use as the user name
      --oidc-username-prefix string            OpenID prefix prepended to username claims
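
For example, a cluster can be created with OpenID Connect enabled in a single command. The following is a minimal sketch: the cluster name, issuer URL, client ID, and claim names are placeholders to adapt to your identity provider:

# Hypothetical values; substitute your own provider's issuer URL and client ID
exo compute sks create my-cluster \
    --oidc-issuer-url https://accounts.example.com \
    --oidc-client-id my-client-id \
    --oidc-username-claim email \
    --oidc-groups-claim groups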

Neither Exoscale nor Kubernetes currently provides an OpenID Connect Identity Provider. You can use an existing public OpenID Connect Identity Provider or you can run your own Identity Provider, such as Dex or Keycloak.

Labels

SKS clusters and nodepools support labels similarly to Compute instance labels. Labels can be associated with clusters and nodepools to help classify and organize them.

The --label key=value and --nodepool-label options can be passed to the exo compute sks create command to add labels to your clusters or nodepools. The sks nodepool add command also supports --label.

The label options can be repeated to add multiple labels to an entity, as in the sketch below.
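
As a minimal sketch (cluster, nodepool, and label names are hypothetical):

# Two labels on the cluster, one label on its default nodepool
exo compute sks create my-cluster \
    --label env=prod \
    --label team=platform \
    --nodepool-label role=web

# A label on a nodepool added afterwards
exo compute sks nodepool add my-cluster extra-pool --label role=worker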

Labels in the ‘kubernetes.io’ namespace must begin with an allowed prefix:

  • ‘kubelet.kubernetes.io’
  • ‘node.kubernetes.io’

Or be in the specifically-allowed set:

  • ‘beta.kubernetes.io/arch’
  • ‘beta.kubernetes.io/instance-type’
  • ‘beta.kubernetes.io/os’
  • ‘failure-domain.beta.kubernetes.io/region’
  • ‘failure-domain.beta.kubernetes.io/zone’
  • ‘kubernetes.io/arch’
  • ‘kubernetes.io/hostname’
  • ‘kubernetes.io/os’
  • ‘node.kubernetes.io/instance-type’
  • ‘topology.kubernetes.io/region’
  • ‘topology.kubernetes.io/zone’

If you need another label in the ‘kubernetes.io’ namespace, you will need to set it after the nodes are registered with:

kubectl label node NODE-NAME unallowed.kubernetes.io/prefix=true

Managed Compute instance prefixes

By default, Compute instances managed by an SKS nodepool are named pool-<first 5 chars of the underlying instance pool ID>-<random string>.

This pattern can be customized by passing the --instance-prefix parameter when creating or updating a nodepool. For example, if you specify --instance-prefix=applications, the ‘pool’ prefix will be replaced by ‘applications’ in the Compute instance names.
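
As a minimal sketch, assuming a cluster named my-cluster with a nodepool named my-pool:

exo compute sks nodepool update my-cluster my-pool --instance-prefix=applications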

Kubernetes Taints

Node affinity is a property of pods that attracts them to a set of nodes. Kubernetes taints, on the other hand, are the opposite: they allow a node to repel a set of pods.

You can read the official Kubernetes documentation to learn more about taints and tolerations.

To associate taints with your nodepool workers, you can use the --nodepool-taint=KEY=VALUE:EFFECT option.

For instance, use the --nodepool-taint=type=GPU:NoSchedule option.

After a taint is registered on a nodepool worker, only pods with a matching toleration will be scheduled on that worker.

The taint option can be repeated to register multiple taints.
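
As a minimal sketch (cluster, nodepool, pod name, and image are hypothetical), the following creates a cluster whose initial nodepool carries the GPU taint above, then deploys a pod that tolerates it:

exo compute sks create my-cluster \
    --nodepool-name gpu-pool \
    --nodepool-taint=type=GPU:NoSchedule

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload        # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx            # placeholder image
  tolerations:
  - key: "type"             # matches the taint key set above
    operator: "Equal"
    value: "GPU"
    effect: "NoSchedule"
EOF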

Removing kube-proxy from your cluster

Some CNI plugins support replacing kube-proxy with their own, better-optimized implementation.

Please note that if you deploy your SKS clusters with a pre-configured CNI plugin, this is already handled by our SKS orchestrator and does not apply to your situation.

An example of such a replacement is a custom deployment of Cilium with its “strict” kube-proxy replacement mode.

In this specific situation, you would ideally remove kube-proxy before any pods are scheduled in the cluster. To achieve this, remove the kube-proxy DaemonSet from the kube-system namespace:

kubectl -n kube-system delete ds kube-proxy

Then deploy your custom CNI configuration.
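
As a minimal sketch of such a deployment via Helm, using the Cilium kube-proxy replacement mode mentioned above (API-SERVER-HOST and API-SERVER-PORT are placeholders for your cluster's control-plane endpoint; check the values against your Cilium version):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
    --namespace kube-system \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=API-SERVER-HOST \
    --set k8sServicePort=API-SERVER-PORT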

Exoscale Academy

If you are leveraging this advanced topic, you may be interested in a structured troubleshooting approach for SKS. Take a look at the free SKS ADVANCED course in our online academy.