Kubernetes automatically spans an internal network across all nodes. This means your Pods and containers are reachable via private IP addresses inside the cluster. To route traffic from the outside, in most cases you need a Load Balancer and optionally an Ingress.

Prerequisites

As a prerequisite for the following documentation, you need:

  • An Exoscale SKS cluster
  • Access to your Cluster via kubectl
  • Basic Linux knowledge

If you don’t have access to an SKS cluster yet, follow the Quick Start Guide.

Note

For Network Load Balancers (NLB) to be created automatically, you need the Exoscale Cloud Controller Manager (CCM) add-on installed in your cluster. It is installed automatically when a cluster is created with default settings.

Exposing a single service with an Exoscale Load Balancer

To expose a single Pod or a Deployment, you need to create a service of type LoadBalancer.

The following example creates a deployment named hello, which consists of 3 hello-world Pods/containers; this could be, for example, your web application:

kubectl create deployment hello --image=nginxdemos/hello:plain-text --replicas=3

As no service is defined yet, you can only access the webpage of the containers via their respective internal IPs inside the cluster. Use kubectl get pods -o wide if you want to take a look at the created Pods.
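
To see this for yourself, you can start a temporary Pod inside the cluster and request one of the hello Pods by its internal IP. The IP below is a placeholder; substitute an actual Pod IP from the kubectl get pods -o wide output:

```shell
# Start a throwaway curl Pod and fetch the page served by a hello Pod
# (replace 192.168.x.x with a real Pod IP from your cluster)
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://192.168.x.x
```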

To allow access from outside the cluster, you can expose the deployment with a LoadBalancer:

kubectl expose deployment hello --port=80 --target-port=80 --name=hello-service --type=LoadBalancer

This will create a service named hello-service exposing the deployment hello. --port specifies the port on which the Load Balancer listens, which should be 80 in most cases. --target-port specifies the port the container exposes.

As we specified --type=LoadBalancer, Kubernetes will talk to Exoscale via the CCM (Cloud Controller Manager) and create a real Exoscale Network Load Balancer. To get its IP address, you can use kubectl get svc:

> kubectl get svc -n default
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
hello-service   LoadBalancer   10.102.248.197   194.182.170.158   80:30665/TCP   2m3s
kubernetes      ClusterIP      10.96.0.1        <none>            443/TCP        52m

You can also see your load balancer in the Exoscale portal or show its information using the CLI (with the exo nlb list and exo nlb show commands).

In this case, the web application is now reachable via 194.182.170.158. The LoadBalancer service will automatically route and balance traffic to your Pods.
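
You can verify reachability from any machine with curl (substitute the EXTERNAL-IP reported for your own service):

```shell
# Requests are balanced across the three hello Pods; the plain-text
# response of the hello image includes the name of the serving Pod
curl http://194.182.170.158/
```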

Note

Deleting a cluster does not always delete the Network Load Balancers created by a Kubernetes Service. Make sure to check your Load Balancers in the Exoscale UI or CLI after deleting an SKS cluster, and delete them if necessary. When the annotation “exoscale-loadbalancer-external” is set to true (see the example below), the Load Balancer will never be deleted automatically.

You can also specify this setup as a manifest, which additionally allows you to define annotations for the LoadBalancer service:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: nginxdemos/hello:plain-text
        name: hello
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello
  name: hello-service
  annotations:
    # Uncomment if you want to use an already existing Exoscale LoadBalancer
    #service.beta.kubernetes.io/exoscale-loadbalancer-id: "09191de9-513b-4270-a44c-5aad8354bb47"
    #service.beta.kubernetes.io/exoscale-loadbalancer-external: "true"
    # When multiple Nodepools are attached to your SKS Cluster,
    # you need to specify the ID of the underlying Instance Pool the NLB should forward traffic to
    #service.beta.kubernetes.io/exoscale-loadbalancer-service-instancepool-id: "F0D7A23E-14B8-4A6E-A134-1BFD0DF9A068"
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hello
  type: LoadBalancer

You can find all annotations for the LoadBalancer service in the documentation of the Exoscale CCM.
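
Assuming you saved the manifest above as hello.yaml (the filename is arbitrary), you can apply it and check the resulting service as follows:

```shell
kubectl apply -f hello.yaml
# EXTERNAL-IP shows <pending> until the NLB has been provisioned
kubectl get svc hello-service
```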

Note

A Network Load Balancer can only route to one Instance Pool. If you have multiple Nodepools attached to the SKS cluster, you need to define via an annotation which Instance Pool (underlying one of the attached Nodepools) to route to. When a Kubernetes Service points to a Pod scheduled outside of the attached Instance Pool, it can still be reached, as Kubernetes then automatically handles the internal routing. However, this extra hop may result in non-optimal traffic balancing.
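
To look up the Instance Pool ID for the annotation, you can use the Exoscale CLI; the exact subcommands below assume a current exo CLI version:

```shell
# Show the SKS Nodepools and their underlying Instance Pools
exo compute sks nodepool list
# List Instance Pools directly to obtain the ID
exo compute instance-pool list
```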

Routing with a Loadbalancer Service

When you have multiple services/websites, you can either create additional Network Load Balancers, or (preferably) use an Ingress in addition to the LoadBalancer service, as explained in the next chapter.

Using an Ingress to expose multiple websites

A Network Load Balancer only provides Layer 4 (TCP) routing, and as such cannot distinguish between different hostnames or website paths. To enable the latter (Layer 7 routing), we can point the NLB to a reverse proxy (Ingress) which is deployed on an arbitrary number of nodes.

To define routes, you create Ingress resources. However, they have no effect without an Ingress Controller (a deployed reverse proxy); the next subchapter explains how to install the controller ingress-nginx.

Note

There are multiple solutions for ingress controllers available: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

The next figure shows approximately how routing works with a combination of Network Load Balancer, NGINX Ingress Controller, and deployments for different websites.

Routing using an Ingress

Deploying ingress-nginx controller

Download the current manifest of ingress-nginx for Exoscale (direct link).

To implement the actual routing, the manifest creates a DaemonSet ingress-nginx-controller, which runs an NGINX Pod on all (or some) nodes for routing.

Note

You can also consider searching for kind: DaemonSet in the manifest and replacing it with kind: Deployment (e.g. for large clusters). You would then have to scale the Deployment ingress-nginx-controller yourself.
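
If you switch to a Deployment, scaling it is a single command; the namespace and Deployment name below match the ones created by the ingress-nginx manifest:

```shell
# Only applies after replacing the DaemonSet with a Deployment
kubectl scale deployment ingress-nginx-controller \
  -n ingress-nginx --replicas=3
```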

As this manifest creates a service of type LoadBalancer, depending on your setup, you may want to add custom annotations as shown in the first chapter of this documentation.

Save the manifest, and apply it with kubectl apply -f deploy.yaml.

Wait until the Ingress Controller Pods have started; check via:

kubectl get pods -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx --watch

After a while, it should show Running for the controller:

ingress-nginx-controller-7b78df5bb4-c8zgq   1/1     Running             0          29s

Press CTRL+C to stop the --watch command.

You are now ready to deploy ingress configurations. To get the IP of the created LoadBalancer you can use kubectl get svc -n ingress-nginx, or check your load balancer information using the Exoscale portal or CLI.
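
For scripting, you can also extract just the external IP with a JSONPath query; the service name ingress-nginx-controller matches the one created by the manifest:

```shell
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```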

Creating Ingress rules

You can now use an Ingress rule to connect your applications with the Load Balancer. The following is an example Ingress rule; save it to a file and apply it with kubectl apply -f rules.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations:
    # This is important, it links to the ingress-nginx controller
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: hello
            port: # this is the port of the ClusterIP service
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: other-service
            port:
              number: 80

The example will route the paths /app1 and /app2 to different backend services.

Note that the Ingress must point to an internal ClusterIP service. To run this example, you can take the example deployment from the first chapter and change type: LoadBalancer to type: ClusterIP (make sure the service name matches the backend name hello referenced in the Ingress). The address http://YOURLOADBALANCERIP/app1 will then route to the hello deployment.
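
A minimal way to set this up with kubectl, reusing the hello deployment from the first chapter (the service name must match the backend name hello in the Ingress rule):

```shell
# Expose the hello deployment cluster-internally under the name "hello"
kubectl expose deployment hello --port=80 --target-port=80 \
  --name=hello --type=ClusterIP
# Then test the route through the Ingress (replace the placeholder IP)
curl http://YOURLOADBALANCERIP/app1
```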

You can also specify different hosts, either in the same Ingress ruleset or via separate manifests, as in the following example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: "example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-otherapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: "example-otherapp.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: other-service
            port:
              number: 80

Use kubectl get ingress to list the created rulesets. It will also show the external IP address of the Load Balancer.
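
Before DNS records exist for your hosts, you can test host-based routing by setting the Host header explicitly (replace the placeholder IP with your Load Balancer's external IP):

```shell
curl -H "Host: example.com" http://YOURLOADBALANCERIP/
curl -H "Host: example-otherapp.com" http://YOURLOADBALANCERIP/
```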

The next figure shows how the objects/services of the whole configuration are interrelated.

Link of configurations using Ingress

Exoscale Network Load Balancer with Kubernetes Services - https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md

Description of Exoscale Network Load Balancers - /documentation/compute/network-load-balancer/

Kubernetes LoadBalancer - https://kubernetes.io/docs/concepts/services-networking/service/

Kubernetes Ingress - https://kubernetes.io/docs/concepts/services-networking/ingress/