Enabling GPU support in SKS nodes
Exoscale SKS allows you to run GPU-accelerated workloads, such as Machine Learning (ML), data analytics, and video transcoding, on your cluster. This guide walks you through the steps to enable GPU support on Exoscale SKS nodes.
Prerequisites
To follow this guide, you need:

- An Exoscale SKS cluster on the Pro plan.
- An organization with at least one GPU instance type authorized.
- Access to your cluster via `kubectl`.
- Basic Linux knowledge.
If you do not have access to an SKS cluster, follow the Quick Start Guide.
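If your cluster does not have a GPU node pool yet, you can add one with the Exoscale CLI. The sketch below is illustrative, not prescriptive: the cluster name `my-sks-cluster`, node pool name `gpu-workers`, zone, and instance type `gpu2.small` are assumptions; substitute a GPU instance type that is authorized for your organization.

```sh
# Add a GPU node pool to an existing SKS cluster (all names below are examples).
exo compute sks nodepool add my-sks-cluster gpu-workers \
  --zone de-fra-1 \
  --instance-type gpu2.small \
  --size 1
```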
Enabling GPU Support in SKS
To use GPUs in Kubernetes, the NVIDIA Device Plugin is required. The NVIDIA Device Plugin is a DaemonSet that automatically exposes the number of GPUs available on each node of the cluster, allowing Pods to request them as a schedulable resource.
To enable GPU support in Exoscale SKS nodes, you need to deploy the following DaemonSet:
```sh
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/main/nvidia-device-plugin.yml
```
Note

This is a simple static DaemonSet meant to demonstrate the basic features of the `nvidia-device-plugin`.
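Before running workloads, you can check that the plugin Pods are up and that your GPU nodes advertise the `nvidia.com/gpu` resource. The DaemonSet name below assumes the default static manifest, which creates `nvidia-device-plugin-daemonset` in the `kube-system` namespace:

```sh
# Verify that the device plugin is running on every GPU node
# (the DaemonSet name assumes the default static manifest).
kubectl get daemonset -n kube-system nvidia-device-plugin-daemonset

# Confirm that nodes now expose the nvidia.com/gpu resource.
kubectl describe nodes | grep "nvidia.com/gpu"
```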
Running and testing GPU Jobs
With the DaemonSet deployed, NVIDIA GPUs can now be requested by a container using the `nvidia.com/gpu` resource type:
```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
  tolerations:
    # Allow scheduling on GPU nodes tainted with nvidia.com/gpu
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
EOF
```
Once the Pod has completed, check its logs to confirm that the CUDA sample ran on the GPU:

```sh
kubectl logs gpu-pod
```

```
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
```
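Once the test passes, you can delete the test Pod; this does not affect the device plugin DaemonSet:

```sh
# Clean up the completed test Pod.
kubectl delete pod gpu-pod
```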