CCE - Certified Container Engineer
Container Benefits
Application Deployment
Observing the application deployment process over time will clearly show the benefits containers bring to the table.
Traditional Deployment
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, suppose multiple applications run on a physical server. In that case, there can be instances where one application would take up most of the resources. As a result, the other applications would underperform. A solution would be to run each application on a different physical server. However, this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.
Virtualized Deployment
As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server’s CPU. Virtualization will enable applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another. Virtualization allows better utilization of resources in a physical server. It will enable better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization, you can present a set of physical resources as a cluster of disposable virtual machines. Each VM is a complete machine running all the components, including its operating system, on top of the virtualized hardware.
Container Deployment
Containers are similar to VMs but have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Like a VM, a container has its own file system, share of CPU, memory, process space, and more. Because they are decoupled from the underlying infrastructure, containers are portable across clouds and OS distributions.
VIDEO
Sample Application
Sample Application using Node.js
This sample app demonstration shows how traditional software deployment looks. A little JavaScript spins up a web server and runs the sample app on the server. It demonstrates how easy it is to run web-based applications, but you must install and run Node.js beforehand. In addition, you have to use the proper versions of the software components and take care of all dependencies (runtime environments, libraries, etc.). Otherwise, the app will not run properly.
This is the source code of our sample app:
const express = require('express')
const app = express()
const port = 3000
app.get('/', (req, res) => {
res.send('Hello World!')
})
app.listen(port, () => {
console.log(`Example app listening at http://localhost:${port}`)
})
Take the code into action:
The application at work, displaying Hello World!:
Look behind the curtain at how this software approach works: the demo video shows you the significant steps necessary to run the Node.js sample app.
VIDEO
Running Applications
What do I need …
Consider the following steps, based on the Node.js sample app from before.
What do I need to run apps in a container?
- Node.js – in the correct version
- All dependencies of the App (e.g. via Package Manager NPM)
Possibly … when the Application gets bigger …
- Database (and its requirements…)
- Private Network
VIDEO
Dependencies
Dealing with Dependencies
Installing Dependencies …
… on Ubuntu
# Install repository
$ curl -fsSL https://deb.nodesource.com/setup_current.x | sudo -E bash -
$ sudo apt-get update
# Install NodeJS
$ sudo apt-get install -y nodejs
# Install Dependencies of the app
$ npm install
Running …
… the application
# Download or Upload app.js to the server somehow…
# Run
$ node app.js
VIDEO
Containers
Containers
VIDEO
Dockerfile
Dockerfile
FROM node:12-alpine
- which Docker image this image is based on
RUN apk add --no-cache python g++ make
- install additional software
WORKDIR /app
- create and use directory /app inside the container
COPY . .
- 1st parameter: local directory (./ is the directory where the Dockerfile is)
- 2nd parameter: target directory inside the container
- → copy everything in the Dockerfile’s directory into the container under /app
RUN npm install
- dependencies for the application
CMD ["node", "src/index.js"]
- run the app - just like when running outside a container
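Putting the instructions above together, the complete Dockerfile reads:

```dockerfile
# Base image with Node.js 12 on Alpine Linux
FROM node:12-alpine
# Additional software needed to build native dependencies
RUN apk add --no-cache python g++ make
# Create and use the directory /app inside the container
WORKDIR /app
# Copy everything next to the Dockerfile into /app
COPY . .
# Install the app's dependencies
RUN npm install
# Run the app, just like outside a container
CMD ["node", "src/index.js"]
```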
VIDEO
VIDEO
Build & Run
Build & Run
docker build
In our example, the first step is to build the hello-world container with the docker build command.
docker run
In our example, the second step is to run the hello-world container with the docker run command.
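Concretely, these two steps might look like this on the command line (a sketch: the image tag hello-world is our choice, and port 3000 matches the sample app):

```shell
# Build an image from the Dockerfile in the current directory, tagged "hello-world"
docker build -t hello-world .
# Run a container from the image, publishing the app's port 3000 to the host
docker run -p 3000:3000 hello-world
```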
VIDEO
Docker Hub
Docker Hub
Using a publicly available repository to manage, store, retrieve, and share (if you want to) your container images is a real added value. In traditional IT scenarios, such a solution has to be custom-built most of the time. In the container world, such solutions are part of the ecosystem. With Docker containers, the Docker Hub is a natural solution as a repository for our scenarios.
VIDEO
Docker - What’s next?
From Docker To Kubernetes
To scale all the container benefits to an entire IT environment, you need additional functions to coordinate, scale, manage, and automate. For example, taking containerized applications from your local machine to your local server can be done without additional help. Still, if you want to do this with 10, 100, or even 1000 applications and start distributing the workloads, clustering them for higher availability and stability in operations to serve the 24/7 demands of today’s customers, you need additional help. This help for containerized applications is Kubernetes, which creates a scalable, available, and more flexible infrastructure on Exoscale.
VIDEO
Kubernetes Benefits
What is it?
Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
The name Kubernetes originates from Greek, meaning helmsman or pilot.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates automation and declarative configuration.
Software needed
Software needed
Kubernetes
kubectl
– for accessing clusters
https://kubernetes.io/de/docs/tasks/tools/install-kubectl/
Local Test Cluster
minikube
– tool to create a cluster on your local computer
https://kubernetes.io/de/docs/setup/minikube/
Exoscale Cluster
exoscale-CLI
or exoscale-UI
– used to create HA-Clusters on Exoscale
https://community.exoscale.com/documentation/compute/quick-start/
VIDEO
Imperative vs Declarative
Imperative vs Declarative
“Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both automation and declarative configuration.”
Configuration Differences
What a difference a configuration makes :).
The significant change in this new IT world is that well-trained and long-practiced processes become outdated. The imperative approach was the old world: you defined step-by-step instructions and executed, or oversaw the execution of, those configuration steps. The declarative way is to define the desired state and let an intelligent system conduct and supervise the configuration and operation steps.
Features
Features
Kubernetes Feature Details
self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
automatic bin packing
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and RAM each container needs. Kubernetes can fit containers onto your nodes to best use your resources.
automated rollouts and rollbacks
Using Kubernetes, you can describe the desired state for your deployed containers. Furthermore, it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers, remove existing containers, and adopt all their resources to the new container.
secret and configuration management
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images or exposing secrets in your setup.
service discovery and load balancing
Kubernetes can expose a container using the DNS name or using its IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic to stabilize the deployment.
storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage or public cloud providers.
Positioning
Positioning
Kubernetes is …
… providing the building blocks for creating developer and infrastructure platforms but preserves user choice and flexibility where it is essential.
… extensible, and lets users integrate their logging, monitoring, alerting, and many more solutions because it is not monolithic, and these solutions are optional and pluggable.
Kubernetes is NOT …
… a traditional, all-inclusive PaaS system. Kubernetes operates at the container level rather than at the hardware level. It provides some generally helpful features standard to PaaS offerings, such as deployment, scaling, and load balancing.
… a mere orchestration system. It eliminates the need for orchestration. The definition of orchestration is executing a defined workflow:
first, do A, then B, then C → imperative
Kubernetes comprises independent, composable control processes that continuously drive the current state:
towards the desired state → declarative
Kubernetes does NOT …
… limit the types of applications supported.
Kubernetes aims to support a highly diverse workload, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
… deploy source code and does not build your application. Organizational cultures determine Continuous Integration, Delivery, and Deployment (CI/CD) workflows, preferences, and technical requirements.
… provide application-level services, such as middleware, data-processing frameworks, databases, caches, or cluster storage systems, as built-in services. Such components can themselves run on Kubernetes and can be accessed by applications through portable mechanisms.
… provide or adopt any comprehensive machine management. This task requires additional components for system configuration, system management, and maintenance.
Basic Commands
Basic Commands
To understand this new world, let’s examine some simple applications of the kubectl command.
VIDEO
Simple Examples
Simple Examples
Let’s look at two examples:
- a simple hello-world container
- a simple ubuntu container
Watch the two examples in the videos below.
VIDEO
VIDEO
Basic Concepts
Basic Concepts
The four foundational Kubernetes concepts listed below are essential to run your modern applications on scalable cloud infrastructure.
- PODs
- DEPLOYMENTs
- STATEFULSETs
- SERVICEs
These four concepts are explained in more detail in the videos below.
VIDEO
VIDEO
VIDEO
VIDEO
Kubernetes Concepts
Kubernetes Concepts
This section covers various concepts and explains how to leverage Kubernetes’ declarative orchestration powers. We show different usage scenarios for initial configurations and necessary update operations in an SKS Cluster, which provides a convenient managed Kubernetes Service for running large-scale containerized workloads.
- Ingress
- Cluster
- Manifests
- Namespaces
- Updates
Ingress
Ingress
We already know the LoadBalancer service in Kubernetes from the SKS starter course. However, LoadBalancer works only on network layer-4; it cannot distinguish between different hostnames and paths and cannot terminate SSL traffic. These additional capabilities are provided in Kubernetes by the Ingress service.
Ingress uses a reverse proxy, like nginx, to handle network layer-7 balancing in the Kubernetes cluster and the often-needed additional, multiple routing paths. If advanced routing demands arise, the Ingress service can provide and handle them.
VIDEO
Ingress Config
Ingress Config
How do you configure an Ingress? The Ingress service consists of the Ingress Controller and the Ingress Configuration (called Ingress in Kubernetes). Kubernetes reads the configuration and deploys it to the controller.
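As an illustration, a minimal Ingress configuration might look like this (a sketch: the host example.com and the Service web-service are placeholder names):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    # Layer-7 routing: requests for this hostname go to the named Service
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```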
The video below shows the flow of the configuration process.
VIDEO
Cluster Structure
Cluster Structure
Let’s take a look at the Kubernetes Cluster and its components. Although it is a complex structure, the beauty of SKS is that we manage the complexity. Kubernetes as a managed service remains a very flexible solution because the possibility of shaping the services with add-ons makes it very customizable.
VIDEO
Manifest Theory
Manifest Theory
The beauty of Kubernetes lies in its descriptive/declarative nature of infrastructure management and operation. Instead of writing a series of single commands on the CLI, you write it down in a stateless manifest; the format used is a .yaml file.
The details of the manifest concept and how to compose them are laid out in the video below.
VIDEO
Manifest Praxis
Manifest Praxis
Let’s examine the actual usage of the manifest concept and how to apply it to a Kubernetes cluster, creating, configuring, and reconfiguring resources with it.
The manifest usage details and how to execute them are in the video below.
VIDEO
Manifest Tricks
Manifest Tricks
Building a manifest from scratch can be tedious. However, you can use many cool tricks to develop your manifests, like automatically creating one with the kubectl command and the --dry-run option or using a pre-built manifest from the Kubernetes documentation.
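For example, a Deployment manifest can be generated without creating any resource (a sketch: the name web and the image nginx are placeholders):

```shell
# --dry-run=client builds the object locally; -o yaml prints it as a manifest
kubectl create deployment web --image=nginx --dry-run=client -o yaml > deployment.yaml
```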
VIDEO
Namespace Theory
Namespace Theory
Namespaces separate components and are a construct to introduce additional structure to your configurations.
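As a sketch, a namespace is itself declared in a manifest, and resources reference it via metadata.namespace (the names team-a and mypod are placeholders):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  # This Pod lives inside the team-a namespace
  namespace: team-a
spec:
  containers:
    - name: web
      image: nginx
```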
The way the construct namespace works is shown in the video below.
VIDEO
Namespace Praxis
Namespace Praxis
How does this separation look in practice, and how can the namespace concept be leveraged for better orchestration?
See the application of this construct in praxis in the video below.
VIDEO
Updates Theory
Updates Theory
The manifest is very versatile for creating and orchestrating Kubernetes structures, as well as updating them. Kubernetes will care for the rest after declaring the new situation and triggering the update. This means following a clearly defined workflow of process steps to update all distributed components and safely removing old versions.
The details are explained in the video below.
VIDEO
Updates Praxis
Updates Praxis
The kubectl command provides rollout and update capabilities for containerized applications and the Kubernetes constructs. Users expect applications to be constantly available, and developers deploy new versions several times daily.
The concept of rolling updates allows an update with zero downtime by incrementally replacing Pods with new ones. With the command
> kubectl rollout ...
you can check your deployment status. You can also undo an unsuccessful rollout and return the deployment to an older version.
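As a sketch, assuming a Deployment named web, the relevant subcommands look like this:

```shell
# Watch the progress of an ongoing rollout
kubectl rollout status deployment/web
# Inspect the recorded revision history
kubectl rollout history deployment/web
# Roll back to the previous revision after an unsuccessful update
kubectl rollout undo deployment/web
```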
The declarative nature of Kubernetes and its ability to track actions make undos and redos possible. This allows you to keep your application in the desired state while extending or fixing it. If necessary, you can change the actual condition in a controlled way to reach the desired state again with zero downtime for your application.
The video below demonstrates this rather complex-sounding process in several possible variations. A more extended sequence was necessary to show all the essential concepts coming together and let you experience the flexibility and stability of a Kubernetes environment.
VIDEO
Kubernetes Storage
Kubernetes Storage
In this section, we cover various aspects of storage in the context of Kubernetes. We show different usage scenarios of temporary and persistent storage supplied with respective volume versions and other storage technologies, as well as the configurations of these variants.
- Volumes
- Persistent Volumes
- Storage Technologies
- Exoscale Block Storage
Volumes
Volumes
Containerized (stateless) applications have the same storage needs as traditional applications. They need all types of storage to share, read, and access information. The volume concept handles attaching temporary storage to a Pod and sharing it among one or more containers.
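As a sketch, a temporary emptyDir volume shared by two containers in one Pod could be declared like this (all names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  containers:
    - name: writer
      image: busybox
      # Writes a file into the shared volume, then keeps the container alive
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
    - name: reader
      image: busybox
      # Can read /data/msg because it mounts the same volume
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    # emptyDir: temporary storage that lives as long as the Pod does
    - name: scratch
      emptyDir: {}
```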
Containers can mount and use different storage types, e.g., temporary volumes, network volumes, and configuration maps. The video below shows how volumes are defined and used in manifests, the configuration mechanism in Kubernetes.
VIDEO
Persistent Volumes
Overview
Kubernetes is a powerful container orchestration platform that abstracts and simplifies the management of containerized applications. One crucial aspect of running applications is data persistence, where Persistent Volumes (PVs) come into play. PVs provide a robust way to manage storage that outlasts the lifecycle of individual Pods. Kubernetes is usually meant to be stateless. If you need to run databases, one recommended way is to use Exoscale’s Managed Database service (DBaaS).
If you want to run stateful workloads in Kubernetes, temporary storage managed by volumes is unsuitable for this application category. Instead, you should use a StatefulSet and, for example, Exoscale’s Block Storage.
A Persistent Volume (PV) in Kubernetes is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using storage classes. PVs are independent of the lifecycle of a Pod that uses storage. They encapsulate implementation details about the storage, including whether it is backed by NFS, iSCSI, cloud provider storage systems, or other storage technologies.
A Persistent Volume Claim (PVC) is a user’s request for storage in Kubernetes, specifying size, access modes, and other needs. It abstracts storage configuration, enabling Kubernetes to match the request with a suitable Persistent Volume (PV). Once bound to a PV, the storage is accessible to the user’s application.
The Persistent Volume Access Mode (PVAM) defines how a volume can be accessed once mounted to a node.
Persistent Volume (PV)
- Definition: A Persistent Volume is a cluster resource, similar to a node, representing a piece of storage. It can be created by an administrator or dynamically through a Storage Class (SC).
- Lifecycle Independence: PVs exist independently of the Pods. They persist even when the Pods using them are deleted, thereby retaining data across Pod restarts and rescheduling.
Provisioning
- Static Provisioning: An administrator manually creates PVs specifying the storage details.
- Dynamic Provisioning: When a Persistent Volume Claim (PVC) is created, Kubernetes uses an SC to automatically provision a PV that fulfills the claim.
Binding
- Kubernetes attempts to find an available PV that matches the PVC’s specified resources and access modes. If found, it binds the PV to the PVC.
Using
- Once a PV is bound to a PVC, pods can mount and use the storage by referencing the PVC.
Reclaiming
When a PVC is deleted, the PV enters a reclaim phase determined by its reclaim policy (Retain, Recycle, Delete):
- Retain: The PV remains in the cluster, retaining its data. It must be manually cleaned up before it can be reused.
- Recycle: The PV data is scrubbed and the volume becomes available again (deprecated in newer versions).
- Delete: The PV and its associated storage are deleted.
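For illustration, a statically provisioned PV with a Retain policy might be declared like this (a sketch: the NFS backend, server, and path are placeholder assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # Retain: keep the volume and its data after the bound PVC is deleted
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/data
```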
Benefits of Persistent Volumes
- Data Persistence: Provide a way to persist data beyond the lifecycle of individual pods, enabling stateful applications.
- Decoupling of Storage and Compute: Independently manage storage and compute resources, allowing for flexible scaling and management.
- Centralized Storage Management: Simplifies the storage administration by abstracting the underlying storage infrastructure.
Use Cases
- Databases: Where data persistence, performance, and integrity are crucial.
- Content Management Systems (CMS): Where multiple instances need access to shared data.
- File Storage: For applications requiring shared access to multiple node files.
In summary, Persistent Volumes in Kubernetes are essential for managing persistent storage in a container orchestration environment. They provide a scalable and flexible solution for stateful applications that need to maintain data across the lifecycle of Pods.
Persistent Volume Claim (PVC)
Definition: A Persistent Volume Claim is a user’s request for storage. It binds to an available PV that meets the claim’s resource requirements.
Coupling with PV: When a PVC is created, Kubernetes looks for a suitable PV or dynamically provisions one based on the SC defined. Once a PV is bound to a PVC, it remains bound until the PVC is deleted.
StorageClass (SC)
- Definition: A SC describes the different types of storage the cluster offers. It allows administrators to define storage classes, such as fast or slow, which PVCs can reference for dynamic provisioning.
- Dynamic Provisioning: This refers to the automatic creation of storage resources when a PVC is made. When a PVC requests storage, Kubernetes uses the specified SC to dynamically provision a PV that matches the claim’s requirements. This process eliminates the need for administrators to manually pre-provision storage, ensuring that storage is allocated on demand according to application needs. This automated approach simplifies resource management and optimizes storage utilization within the cluster.
Persistent Volume Access Modes (PVAM)
PVs have access modes that describe how they can be mounted to a host:
- ReadWriteOnce (RWO): A single node can mount the volume as read-write.
- ReadOnlyMany (ROX): Many nodes can mount the volume as read-only.
- ReadWriteMany (RWX): Multiple nodes can mount the volume as read-write.
- ReadWriteOncePod(RWOP): A single Pod can mount the volume as read-write.
These access modes help define how the storage can be consumed, facilitating different use cases and performance characteristics. Persistent Volumes (PVs) in Kubernetes provide a reliable and consistent way to store data essential for applications running in a cluster. They support various access modes, determining how nodes can mount and access the volume. This allows administrators to choose the proper storage solution based on their application’s requirements.
Here, we describe two common access modes:
- ReadWriteOnce
- ReadWriteMany
ReadWriteOnce (RWO)
The ReadWriteOnce access mode allows a single node to mount the volume as read-write. Only one Pod running on one node can write to the volume at a time.
Advantages
- Performance: This mode offers the best performance since no file-locking requirements exist. The absence of locking mechanisms reduces latency and overhead, making data operations faster and more efficient.
- Simplicity: Ideal for applications that don’t need concurrent write access from multiple nodes, simplifying data consistency management.
Use Cases
- Databases: Suited for database workloads where data consistency and integrity are critical.
- Stateful Applications: Applications where each instance manages its data independently.
- Storage Types: Commonly supported by block storage solutions, which are optimized for high performance and low-latency access.
ReadWriteMany (RWX)
The ReadWriteMany access mode allows multiple nodes to mount the volume as read-write simultaneously. This enables several Pods across different nodes to access and modify the data concurrently.
Advantages
- Shared Workloads: Enables scenarios where multiple application instances must write to a common data set, facilitating easy sharing and collaboration.
- Scalability: Enhances scalability for applications that can benefit from distributed read-write access.
Considerations
- File Locking: This requires file-locking mechanisms to ensure data consistency and integrity when multiple nodes are writing to the volume simultaneously. This can introduce latency and negatively affect performance.
- Complexity: File-locking can complicate application logic and database management, making RWX unsuitable for high-performance, heavily transactional applications.
Use Cases
- Content Management Systems (CMS): Ideal for web applications where content can be modified by multiple users or instances simultaneously.
- Shared File Repositories: Useful for applications that need a shared file system accessible from multiple pods.
- Storage Types: This mode often requires distributed file systems or network-based storage solutions, such as NFS (Network File System), to handle concurrent client access.
Understanding these access modes helps you choose the proper storage backend and optimize the performance based on the specific needs of your applications in a Kubernetes environment.
VIDEO
VIDEO
VIDEO
Persistent Volume Access Modes
Storage
Overview
Kubernetes’s extensible nature allows it to leverage various storage technologies, providing several solutions, especially for persistent volume scenarios. These include both proprietary and open-source options.
- Block Storage offers persistent storage by exposing storage blocks directly to Kubernetes workloads, making it ideal for situations demanding high performance, low latency, and precise I/O control. Moreover, block storage is available from various cloud providers or on-premises systems, and it provides the flexibility and scalability required for stateful applications and databases in Kubernetes environments. It ensures reliable, durable storage that integrates seamlessly with Pods, regardless of deployment location. Block Storage can be attached to a single instance and reattached to others. This is particularly advantageous for StatefulSets using Kubernetes Persistent Volume Claims (PVCs).
- Object Storage is typically integrated into the application, where any compatible S3 library can save static files. Alternatively, it can be mounted as a Persistent Volume Claim (PVC) using tools like Mountpoint. Due to its nature, S3 does not support high IOPS, so it’s best suited for storing static content rather than high-transaction data.
- Local Storage can be mounted for temporary, volatile data. Still, it is unsuitable for data that needs to persist, as it is inherently ephemeral.
- OpenEBS is a leading open-source project that provides cloud-native storage solutions for Kubernetes deployments. Unlike other storage options, OpenEBS integrates seamlessly with Kubernetes, making it a highly regarded cloud-native solution within the CNCF landscape.
- Portworx is a container storage solution tailored for Kubernetes. It focuses on high availability in clustered environments. As host-attached storage, each volume maps directly to the host it is attached to. It utilizes I/O technology and auto-tuning based on the protocol in use.
- Rook stands out as a top-rated open-source storage solution for Kubernetes because of its storage orchestration capabilities. Rook transforms storage volumes into self-scaling, self-healing, and self-managing systems as a production-grade solution, ensuring seamless integration within the Kubernetes environment.
Many more technologies are available. A complete, up-to-date list of supported technologies is on Kubernetes Storage.
Exoscale Block Storage
Overview
Exoscale’s Block Storage offers a robust and distributed block device solution for Exoscale Compute instances, known for its redundancy and reliability. A Volume, a singular storage unit, can be partitioned and formatted to accommodate directories and files. One critical feature of Block Storage is the Snapshot, which captures the state of a volume at a specific moment and allows users to create new volumes based on that state.
Exoscale’s Block Storage provides high-performance volumes over a network, making it an optimal choice for databases. These volumes require integration using a Container Storage Interface (CSI) driver. Block Storage supports the ReadWriteOnce access mode, ensuring that volumes are persistent and can automatically detach and reattach to nodes when a Pod is rescheduled. Additionally, the system supports snapshots, allowing users to capture the state of a volume at any given moment and create new volumes based on that state. This feature underscores the redundancy and reliability that Exoscale’s Block Storage offers.
NOTE! Here, you can find more details on Block Storage.
VIDEO
VIDEO
Kubernetes CSI
Kubernetes CSI
Block Storage can be directly used in Kubernetes via the Exoscale CSI plugin, which you can select to install during SKS cluster creation. Alternatively, you can follow the directions in the CSI’s GitHub repository to install it.
Here is an example using a Pod:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: exoscale-block-storage
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
It creates a single Pod and mounts a Block Storage Volume with 10 gigabytes of storage. Should the Pod be moved to a different node, Kubernetes can also move the Block Storage Volume to the new node.
Here is an example using a StatefulSet, which is commonly used for databases. For each Pod created by the StatefulSet, a Block Storage volume is attached. If a Pod is relocated, it is matched with the Block Storage volume originally created for it.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-server
spec:
  serviceName: "web"
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
        storageClassName: exoscale-block-storage
Check kubectl describe pod PODNAME -n NAMESPACENAME and kubectl get events -n NAMESPACENAME if you encounter problems.
Troubleshooting
Troubleshooting
This section covers various techniques to stay ahead of the game when running workloads with Kubernetes. We keep in mind the cycle of Development, Operations, Monitoring, DevOps, and Debugging in agile environments powered by modern software solutions. We look at the necessary tasks and how to get things done with graphical and CLI tools.
k8slens.dev
kubectl
Errors, Debugging …
Errors, Debugging …
The art of identifying and finding errors in IT systems, whether in code or infrastructure configurations, is much the same today. However, both sides of IT systems challenge us with increased complexity, the price of more flexibility, features, and performance.
Hence, the toolbox to conquer those challenges must be fine-honed to do the trick. The range of tools is vast, and we look at both diametrically opposed ends of the tool spectrum: the graphical user interface and the command line.