Going further with SKS
OpenID Connect
Kubernetes natively supports OpenID Connect as an authentication method. Built on top of OAuth 2.0 and offered by many identity providers, it adds external and more granular access control to your SKS cluster.
At creation time, an SKS cluster can be launched with the OpenID Connect parameters specified, so that no further configuration is required to access the Kubernetes control plane.
This feature is accessible via the exo command line with the following flags:
--oidc-client-id string OpenID client ID
--oidc-groups-claim string OpenID JWT claim to use as the user's group
--oidc-groups-prefix string OpenID prefix prepended to group claims
--oidc-issuer-url string OpenID provider URL
--oidc-required-claim string a key=value pair that describes a required claim in the OpenID Token
--oidc-username-claim string OpenID JWT claim to use as the user name
--oidc-username-prefix string OpenID prefix prepended to username claims
At this time neither Exoscale nor Kubernetes provides an OpenID Connect Identity Provider. You can use an existing public OpenID Connect Identity Provider or you can run your own Identity Provider, such as Dex, Keycloak, or others.
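As a sketch, a cluster could be created with OIDC enabled like this; the issuer URL and client ID below are placeholders for values registered with your own identity provider:

exo compute sks create oidc-cluster \
  --oidc-issuer-url https://keycloak.example.com/realms/sks \
  --oidc-client-id kubernetes \
  --oidc-username-claim email \
  --oidc-groups-claim groups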
Labels
SKS Clusters and Nodepools support labels, similar to Compute instance labels. Labels can be associated with clusters and Nodepools to help classify and organize them.
The --label key=value and --nodepool-label options can be passed to the exo compute sks create command in order to add labels to your clusters or Nodepools. The sks nodepool add command also supports --label.
The label options can be repeated in order to add multiple labels to an entity.
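For instance, a minimal sketch with hypothetical label keys and values:

exo compute sks create labeled-cluster \
  --label env=prod \
  --label team=platform \
  --nodepool-label role=worker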
Labels in the ‘kubernetes.io’ namespace must begin with an allowed prefix (‘kubelet.kubernetes.io’, ‘node.kubernetes.io’) or be in the specifically allowed set (‘beta.kubernetes.io/arch’, ‘beta.kubernetes.io/instance-type’, ‘beta.kubernetes.io/os’, ‘failure-domain.beta.kubernetes.io/region’, ‘failure-domain.beta.kubernetes.io/zone’, ‘kubernetes.io/arch’, ‘kubernetes.io/hostname’, ‘kubernetes.io/os’, ‘node.kubernetes.io/instance-type’, ‘topology.kubernetes.io/region’, ‘topology.kubernetes.io/zone’).
If you need another label in the ‘kubernetes.io’ namespace, you will need to set it after the nodes are registered with:
kubectl label node NODE-NAME unallowed.kubernetes.io/prefix=true
Managed Compute instance prefixes
By default, Compute instances managed by an SKS Nodepool are named pool-<first 5 chars of the underlying instance pool ID>-<random string>.
This pattern can be customized by passing the --instance-prefix parameter during Nodepool creation or update. For example, if you specify --instance-prefix=applications, pool will be replaced by applications in the Compute instance names.
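As an illustration, assuming an existing cluster and Nodepool named my-cluster and my-nodepool:

exo compute sks nodepool update my-cluster my-nodepool --instance-prefix=applications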
Taints
Node affinity is a property of pods that attracts them to a set of nodes. Taints, on the other hand, are the opposite: they allow a node to repel a set of pods. To discover more about taints and tolerations, check out the official documentation about taints & tolerations.
To associate taints with your Nodepool workers, you can use the --nodepool-taint=KEY=VALUE:EFFECT option, for instance: --nodepool-taint=type=GPU:NoSchedule.
Once a taint is registered on a Nodepool worker, only pods with a matching toleration will be scheduled on that worker.
The taint option can be repeated in order to register multiple taints.
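As a sketch, a cluster with a tainted GPU Nodepool could be created like this; the cluster and Nodepool names are placeholders, and the --nodepool-name and --nodepool-size flags are assumed to be available in your exo CLI version:

exo compute sks create gpu-cluster \
  --nodepool-name gpu-pool \
  --nodepool-size 2 \
  --nodepool-taint=type=GPU:NoSchedule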
Removing kube-proxy from your cluster
Some CNI plugins support replacing kube-proxy with their own, better-optimized internal solution.
NOTE: If you deploy your SKS clusters with a pre-configured CNI plugin, you are NOT concerned by this situation, as it is already handled by our SKS orchestrator.
An example of such a case is a custom deployment of Cilium with “strict” kube-proxy replacement.
In this specific situation, you would ideally remove kube-proxy before any Pods are scheduled in the cluster.
To achieve this, remove the kube-proxy DaemonSet from the kube-system namespace:
kubectl -n kube-system delete ds kube-proxy
Then deploy your custom CNI configuration.
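For instance, a minimal sketch of deploying Cilium with strict kube-proxy replacement via Helm; the chart values vary across Cilium versions, and the API server host and port shown are placeholders for your cluster's control plane endpoint:

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=API-SERVER-HOST \
  --set k8sServicePort=443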