SKS STARTER
Deploy and run containerized applications with ease. We guide you through the first steps of container orchestration and understanding basic Kubernetes concepts.
SKS Welcome
Welcome
In this SKS Starter training, we cover how Kubernetes (K8s) works and why you would want to use it, and, most importantly, we take a practical approach to K8s to show you the real magic of the technology.
VIDEO
Application Deployment
Application Deployment
Traditional Deployment
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.
Virtualized Deployment
As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server’s CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application. Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization, you can present a set of physical resources as a cluster of disposable virtual machines. Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
Container Deployment
Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own file system, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
VIDEO
Sample Application
Sample Application using Node.js
This sample app demonstration shows how traditional software deployment looks. A little JavaScript spins up a web server and runs the sample app on that server. It demonstrates how easy it is to run web-based applications, but you still need to install and run Node.js beforehand. In addition, you have to have the proper versions of the software components in place and take care of all dependencies (runtime environments, libraries, …). Otherwise, the app will not run, or will not run properly.
This is the source code of our sample app:
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Let’s put the code into action:
The application at work, displaying Hello World!:
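If you want to verify this from a terminal, a request to the running server should return the greeting (a quick check, assuming the app runs locally on port 3000 as defined in the code above):
# In a second terminal, request the root path of the running app
$ curl http://localhost:3000
Hello World!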
Let’s look behind the curtain at how this software approach works; the demo video shows you the significant steps necessary to run the Node.js sample app.
VIDEO
Running Applications
What do I need …
Let’s consider the next steps, based on the Node.js sample app from before.
What do I need to run apps in a container?
- Node.js – in the correct version
- All dependencies of the app (e.g. via the package manager npm)
Possibly … when the application gets bigger …
- A database (and its requirements …)
- A private network
VIDEO
Dependencies
Dealing with Dependencies
Installing Dependencies …
… on Ubuntu
# Install repository
$ curl -fsSL https://deb.nodesource.com/setup_current.x | sudo -E bash -
$ sudo apt-get update
# Install NodeJS
$ sudo apt-get install -y nodejs
# Install Dependencies of the app
$ npm install
Running …
… the application
# Download or Upload app.js to the server somehow…
# Run
$ node app.js
VIDEO
Containers
Containers
VIDEO
Dockerfile
Dockerfile
FROM node:12-alpine
- which other Docker image to base this one on
RUN apk add --no-cache python g++ make
- install additional software
WORKDIR /app
- create and use directory /app inside the container
COPY . .
- 1st parameter: local directory (./ is the directory where the Dockerfile is)
- 2nd parameter: target directory inside the container
- → copy everything from the directory containing the Dockerfile into /app inside the container
RUN npm install
- install the dependencies of the application
CMD ["node", "src/index.js"]
- run the app - just like when running outside a container
VIDEO
VIDEO
Build & Run
Build & Run
docker build
In our example, the first step is to build the hello-world container image with the docker build command.
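A minimal sketch of that command, assuming the Dockerfile from the previous section sits in the current directory and we tag the image hello-world:
# Build the image from the Dockerfile in the current directory and tag it
$ docker build -t hello-world .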
docker run
In our example, the second step is to run the hello-world container with the docker run command.
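A minimal sketch of that command, assuming the app inside the image listens on port 3000 as in our sample app:
# Start a container from the image and map port 3000 to the host
$ docker run -p 3000:3000 hello-world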
VIDEO
Docker Hub
Docker Hub
Using a publicly available repository to manage, store, retrieve, and share (if you want to) your container images is a real added value. In traditional IT scenarios, such a solution has to be custom-built most of the time. In the container world, such solutions are part of the ecosystem. With Docker containers, Docker Hub is a natural choice as the repository for our scenarios.
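As an illustration of this workflow, pushing our image to Docker Hub could look roughly like this (a sketch; the account name is a placeholder, and hello-world is the image we built earlier):
# Log in to Docker Hub
$ docker login
# Tag the local image with your Docker Hub account name
$ docker tag hello-world <your-dockerhub-account>/hello-world:1.0
# Push the image to the registry
$ docker push <your-dockerhub-account>/hello-world:1.0
# Anyone with access can now pull it on another machine
$ docker pull <your-dockerhub-account>/hello-world:1.0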
VIDEO
Docker / Kubernetes
From Docker To Kubernetes
If you want to scale all the container benefits to an entire IT environment, you need additional functions to coordinate, scale, manage, and automate. For example, taking containerized applications from your local machine to your local server can be done without additional help. But if you want to do this with 10, 100, or even 1,000 applications, distributing the workloads and clustering them for higher availability and operational stability to serve the 24/7 demands of today’s customers, you need additional help. For containerized applications, that help is Kubernetes. For a scalable, available, and more flexible infrastructure, that help is Exoscale.
VIDEO
What is it?
What is it?
Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
The name Kubernetes originates from Greek, meaning helmsman or pilot.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both automation and declarative configuration.
Software needed
Software needed
Kubernetes
kubectl
– for accessing clusters
https://kubernetes.io/de/docs/tasks/tools/install-kubectl/
Local Test Cluster
minikube
– tool to create a cluster on your local computer
https://kubernetes.io/de/docs/setup/minikube/
Exoscale Cluster
exoscale-CLI
or exoscale-UI
– used to create HA clusters on Exoscale
https://community.exoscale.com/documentation/compute/quick-start/
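To verify that the local tooling is in place and to bring up a local test cluster, the first commands might look like this (a sketch; the output will differ depending on your versions and environment):
# Check that kubectl is installed
$ kubectl version --client
# Start a local single-node test cluster with minikube
$ minikube start
# Confirm the cluster is reachable
$ kubectl get nodes
# An HA cluster on Exoscale is created via the Exoscale CLI or web UI,
# as described in the quick-start guide linked above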
VIDEO
Imperative vs Declarative
Imperative vs Declarative
“Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both automation and declarative configuration.”
Configuration Differences
What a difference a configuration makes :).
The significant change in this new IT world is that well-trained, long-practiced manual processes become outdated. In the imperative old world, you defined step-by-step instructions and executed, or oversaw the execution of, those configuration steps yourself. The declarative way is to define the desired state and let an intelligent system conduct and supervise the configuration and operation steps.
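The difference becomes tangible with kubectl (a minimal sketch; the deployment name hello and the nginx image are placeholders):
# Imperative: you issue every configuration step yourself
$ kubectl create deployment hello --image=nginx
$ kubectl scale deployment hello --replicas=3
# Declarative: you describe the desired state in a manifest file
# (image, replica count, …) and let Kubernetes converge towards it
$ kubectl apply -f deployment.yaml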
Features
Features
Kubernetes Feature Details
self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
automatic bin packing
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and RAM each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
automated rollouts and rollbacks
You can describe the desired state for your deployed containers using Kubernetes. Furthermore, it can change the actual state to the desired state at a controlled rate; e.g., you can automate Kubernetes to create new containers, remove existing containers and adopt all their resources to the new container.
secret and configuration management
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your setup.
service discovery and load balancing
Kubernetes can expose a container using a DNS name or its IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic to stabilize the deployment.
storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
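As a small taste of the automated rollouts and rollbacks described above, the following commands sketch how a new image version is rolled out and, if necessary, reverted (the deployment and container names hello-world and the v2 tag are assumptions):
# Roll out a new image version at a controlled rate
$ kubectl set image deployment/hello-world hello-world=hello-world:v2
# Watch the rollout progress
$ kubectl rollout status deployment/hello-world
# Revert to the previous version if something goes wrong
$ kubectl rollout undo deployment/hello-world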
Positioning
Positioning
Kubernetes is …
… providing the building blocks for creating developer and infrastructure platforms, while preserving user choice and flexibility where it is essential.
… extensible: because it is not monolithic, users can integrate their own logging, monitoring, alerting, and many other solutions, all of which are optional and pluggable.
Kubernetes is NOT …
… a traditional, all-inclusive PaaS system.
Kubernetes operates at the container level rather than at the hardware level. It provides some generally helpful features common to PaaS offerings, such as deployment, scaling, and load balancing.
… a mere orchestration system.
It eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow:
first do A, then B, then C → imperative
Kubernetes, in contrast, comprises independent, composable control processes that continuously drive the current state
towards the desired state → declarative
Kubernetes does NOT …
… limit the types of applications supported.
Kubernetes aims to support a highly diverse workload, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
… deploy source code, and it does not build your application.
Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organizational culture and preferences as well as technical requirements.
… provide application-level services;
such as middleware, data-processing frameworks, databases, caches, or cluster storage systems as built-in services. Applications access such components through portable mechanisms, and the components themselves can also run on Kubernetes.
… provide or adopt any comprehensive machine management.
These tasks require additional components for system configuration, system management, maintenance, etc.
Basic Commands
Basic Commands
To get a feeling for this new world, let’s have a look at some simple applications of the kubectl command.
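A few typical first commands look like this (a sketch of common read-only queries; the node name is a placeholder and the exact output depends on your cluster):
# Show cluster endpoint information
$ kubectl cluster-info
# List the nodes of the cluster
$ kubectl get nodes
# List pods in all namespaces
$ kubectl get pods --all-namespaces
# Show details of a single node
$ kubectl describe node <node-name>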
VIDEO
Simple Examples
Simple Examples
Let’s look at two examples. First, a simple hello-world container and second, a simple Ubuntu container.
- a simple hello-world container
- a simple ubuntu container
Watch these two examples below in the videos.
VIDEO
VIDEO
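If you want to try these two examples on the command line yourself, a rough equivalent looks like this (a sketch based on the public hello-world and ubuntu images):
# Run the hello-world image as a one-off pod and read its output
$ kubectl run hello-world --image=hello-world --restart=Never
$ kubectl logs hello-world
# Start an interactive ubuntu pod that is removed again on exit
$ kubectl run -it --rm ubuntu --image=ubuntu --restart=Never -- bash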
Basic Concepts
Basic Concepts
The four foundational Kubernetes concepts listed below are essential to run your modern applications on scalable cloud infrastructure.
- PODs
- DEPLOYMENTs
- STATEFULSETs
- SERVICEs
These four concepts are explained in more detail in the videos below.
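As a first hands-on impression of these concepts, the following sketch creates a Deployment, scales it (which creates Pods), and exposes it as a Service (the name hello and the nginx image are placeholders; StatefulSets are typically defined declaratively in a manifest):
# Create a Deployment, which in turn manages its Pods
$ kubectl create deployment hello --image=nginx
# Scale the Deployment to two replicas (two Pods)
$ kubectl scale deployment hello --replicas=2
# Expose the Deployment as a Service on port 80
$ kubectl expose deployment hello --port=80
# Inspect the resulting objects
$ kubectl get deployments,pods,services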