Kubernetes

What is Kubernetes?

The creators of Kubernetes describe it as follows: “Kubernetes, also known as K8s*, is an open source system for automating deployment, scaling, and management of containerised applications.” This is often summarised as container orchestration.

The goals have not changed since Google published Kubernetes in 2014. The user base, however, just keeps growing. 2015 saw the establishment of the Cloud Native Computing Foundation (CNCF), tasked with maintaining and advancing Kubernetes.

*The ‘k’ stands for the K in Kubernetes, the figure ‘8’ for the eight letters that follow it, and the ‘s’ for the last letter of the word (K + 8 letters + s, i.e. K(ubernete)s). K8s is a so-called numeronym.

What is Containerisation?

What is a Container?

Containerisation refers to the idea of running applications in so-called containers.

The idea of applications in containers is often explained with the image of a container ship. The standard shipping container was introduced in the 1960s. Instead of transporting goods on small and sometimes specialised ships, a container ship can transport dozens, hundreds or even thousands of standardised shipping containers.

Bear in mind: None of the containers need their own engine or crew, and neither the ship nor the other containers care about the contents of a specific container. Also, thanks to standardisation, very different goods can be loaded, transported and transhipped using very different means of transport.

Containers in IT behave in a similar way. A container in IT is a standardised software package that contains an application and all files required to perform a certain task or run the application (so-called workloads). This includes code, application and system libraries as well as the system configuration. K8s is in charge of orchestrating the workloads that run on the machines of a cluster (its nodes).

Advantages of Containers

  • Containers simplify and optimise internal processes such as development and testing

  • This also simplifies deployment and operation

  • Containers can be deployed (started, stopped) and replicated as often as required

  • Thanks to the flexibility of containers, the company infrastructure is multi-cloud capable

Important: The claim that “IT containers bring down costs because they require fewer IT resources (pay-per-use cloud billing)” does not always hold true in real life. The technological flexibility comes at a price in time and effort. That is not necessarily a negative; it always depends on the context of the specific project, and there are, of course, also positive effects.

Conventional IT

To understand containerisation better, it helps to take a look at ‘conventional’ IT.

In conventional IT, if you want to run applications on physical computers or virtual machines (VMs), you need to install and maintain a large number of libraries on the respective machines, in addition to the required operating system. The many installed programs frequently conflict with one another's often very specific requirements for the system environment. Equally, the settings in the development environment often differ from those in the testing environment and the production system. Add to that the considerable effort that goes into setting up the VMs, the operation of physical computers and the limitations around technological upgrades.

Editorial comment: These days, the challenge lies not only in making applications available across multiple platforms, orchestrated via Kubernetes; development itself has also become much more complex, even though operation gets easier.

Containers and Their Applications

A container is not, as the shipping image above would suggest, something tangible. It is a virtualisation technology for running an application in isolation from all other containers and from the underlying operating system. As opposed to a conventional virtual machine, a container does not come with its own operating system. The ‘containerised’ application is initially only aware of itself and its own libraries. On top of that, it sees whatever the host shares with it, for instance the network or hardware attached to individual machines, such as GPUs in the cloud.

The container engine, which controls the deployment of individual containers, still requires a machine and an operating system with a kernel and libraries, but these are not visible to and mostly irrelevant for the containers. The advantage: while conventional applications require machines that are almost custom-configured, containers use the container engine to abstract the application from hardware and environment. This way, just a few powerful machines can run a multitude of very heterogeneous applications. Scaling also becomes easier because only generic nodes are used instead of customised machines.

Kubernetes in our projects

Mercedes-Benz.io

Multiple Markets, One E-commerce Standard

Mercedes-Benz.io GmbH is expanding its online trade and is relying on Unic expertise. The trading platform realised in this project is built on SAP Commerce and Kubernetes.

Kubernetes – the Helmsman

Earlier (see: What is a Container?), we mentioned the container ship example. If such a large vessel was loaded without a plan and left to drift aimlessly across the oceans, this would surely end in disaster. In IT, Kubernetes is the helmsman who steers the ‘IT container ship’ and the fleet that usually comes with it, which is called a cluster.

A manifest enclosed with the app containers (see: Record of Intent Concept) ensures that copies of an application are distributed across the nodes of a cluster. If a container falls off the ‘IT ship’, a copy is created immediately: a replacement container starts up automatically and takes over the load previously assigned to the failed one.
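
To make this less abstract, here is a minimal sketch of such a manifest, a Deployment that asks Kubernetes to keep three copies of an application running (the name and image are placeholders, not taken from a real project). If one of the pods disappears, the control plane notices the difference between the desired and the actual state and starts a replacement.

```yaml
# Hypothetical Deployment: keep three replicas of a web application running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop                    # placeholder name
spec:
  replicas: 3                      # desired number of copies across the cluster
  selector:
    matchLabels:
      app: webshop
  template:                        # blueprint for every pod that is started
    metadata:
      labels:
        app: webshop
    spec:
      containers:
        - name: webshop
          image: registry.example.com/webshop:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```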

Kubernetes – the Engine Room / K8s Components

Kubernetes itself is a straightforward platform that consists of several components and is predominantly written in the ‘Go’ language.

The heart of K8s is the so-called control plane. It consists of several applications assuming specific roles, which are usually also distributed across several nodes.

  • etcd – A distributed, robust, highly available key-value store containing all data relevant for K8s (configuration, service discovery and scheduling).

  • kube-scheduler – An application that identifies newly created pods (i.e., sets of containers) and selects a node for each of them to run on (a minimal pod sketch follows below).

  • kube-controller-manager – Runs controller processes in charge of various management tasks. It publishes various events and processes them to respond to changes in state, for instance, to identify new nodes or nodes shutting down. The kube-controller-manager also administers access rights for the K8s API (API = interface).

  • cloud-controller-manager – Manages a surrounding cloud (where applicable) to keep the configuration of connected networks and load balancers up to date. It also monitors the lifecycle of individual nodes in the cloud.

Editorial comment: API stands for Application Programming Interface, the part of a software system that is made available to other programs so that they can connect to it. An API enables automated data exchange (machine-to-machine communication) between two systems or applications.
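
As a small, hypothetical illustration of the pod concept mentioned in the kube-scheduler entry above: a pod manifest like the following describes a set of containers (here just one), and the kube-scheduler selects a suitable node for it, guided among other things by the declared resource requests. Name and image are placeholders.

```yaml
# Minimal pod sketch: the kube-scheduler assigns it to a node with enough capacity.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # placeholder name
spec:
  containers:
    - name: hello
      image: nginx:1.25    # placeholder image
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core
          memory: "128Mi"
```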

Worker Nodes

The control plane is of little use without its worker nodes: they run the actual workloads for the users, alongside a few node-level components of Kubernetes itself. For orchestration, the worker nodes get support from the following components:

  • Container Engine – The Docker engine has proven itself as a reliable runtime environment over the last few years. It enables the containerised applications to run independently of the underlying infrastructure and provides tools, for instance, for bundling all application dependencies into a container image.

  • kubelet – Ensures that the containers requested in the manifests (see: Record of Intent Concept) are started and configured correctly, and that they stay that way. The kubelets on the nodes monitor the status of the pods, compare it to the manifests and try to eliminate any differences or compensate for outages (a small example follows after this list). The service also handles the communication between the control plane and its nodes.

  • kube-proxy – A network proxy that enables K8s to dynamically forward external requests arriving at the published endpoints to a suitable container. The service is also in charge of the communication between services within the cluster.

  • Editorial comment on the container engine: There are also other tools, such as containerd, CRI-O and Mirantis Container Runtime. Most of these engines originate from the Linux world. That is why Kubernetes clusters are usually set up in Linux environments. Recent versions of Windows Server, however, also enable the operation and management of Windows containers through Kubernetes.
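
To show the kubelet's reconciliation work in a small, hypothetical sketch: if a container declares a liveness probe, the kubelet on its node keeps checking that probe and restarts the container when it fails, pushing the actual state back towards the state declared in the manifest. The health endpoint and image are assumptions.

```yaml
# Hypothetical pod: the kubelet restarts the container whenever the probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app                           # placeholder name
spec:
  restartPolicy: Always                      # the kubelet restarts failed containers
  containers:
    - name: app
      image: registry.example.com/app:1.0.0  # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz                     # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
```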

Record of Intent Concept

The developers in charge use so-called resources (see: Resources in Kubernetes) to define the most important components needed to make an application available in K8s. The manifests mentioned above, which come in the form of well-structured text files, are indispensable for this. They determine the desired state of an application, the so-called ‘record of intent’.

Resources in Kubernetes

To run an application container with high availability and high scalability in Kubernetes, a manifest is required. In the manifest, developers organise a set of containers into the pods mentioned above and define additional resources that describe to Kubernetes how the pods and their dependencies are to be made available and handled. There are many resource types, of course, so here is an overview of the core ones.

  • Deployment – A Deployment describes how an application is to be distributed across a cluster and how its individual instances are to be run as pods. The Deployment also defines requirements for the lifecycle of a pod or application. To make the best use of the scheduler and to scale well, estimated CPU and RAM resources are listed.

  • Secrets and ConfigMaps – Secrets and ConfigMaps are used to separate the configuration of individual container instances from the actual image. Sensitive and important configuration data as well as passwords are injected directly into the respective containers (see the combined sketch after this list).

  • Services – Services can be defined to group pods behind an endpoint. This means that communication with pods is possible via internal name resolution using previously defined ports. This enables very simple communication between different services.

  • Ingress – The Ingress object and the associated Ingress controller allow external requests to specific addresses and paths to be mapped to the associated services. This enables K8s to forward these requests to the relevant pods for response.
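
The following combined sketch shows how these resources could fit together for a hypothetical application called ‘webshop’ (all names, hosts and ports are placeholders): a ConfigMap keeps configuration out of the image, a Service groups the pods behind one internal name, and an Ingress maps an external host and path to that Service.

```yaml
# Hypothetical configuration kept outside the container image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: webshop-config
data:
  FEATURE_FLAGS: "checkout-v2"     # example key/value injected into the containers
---
# Groups all pods labelled app=webshop behind one cluster-internal name.
apiVersion: v1
kind: Service
metadata:
  name: webshop
spec:
  selector:
    app: webshop
  ports:
    - port: 80                     # port other services use inside the cluster
      targetPort: 8080             # port the containers actually listen on
---
# Maps external requests for shop.example.com to the webshop Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webshop
spec:
  rules:
    - host: shop.example.com       # placeholder host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webshop
                port:
                  number: 80
```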

Interaction of Resources

The manifests described above are sent to the control plane via the API mentioned earlier. From then on, the control plane ensures that the required resources are made available and remain available.

If, for example, the number of simultaneously running pods needs to be increased or a configuration adjustment is necessary, the developers in charge adapt the respective manifest and resubmit the text file. The desired state is updated; Kubernetes takes care of the implementation.
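
A small, hypothetical example of this workflow: to scale the ‘webshop’ application from the earlier sketch from three to five instances, the developers edit a single line in the Deployment manifest and resubmit it; Kubernetes then starts the two missing pods on its own.

```yaml
# Excerpt of the hypothetical webshop Deployment: only the desired state is edited.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop
spec:
  replicas: 5        # previously 3; after resubmitting, K8s starts two more pods
```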

K8s – the Toolbox

The text-based configuration in the manifests described above lends itself to storing the defined objects in version control systems such as ‘git’, which provides a changelog and helps with tracking changes and with rollbacks.

Editing text files usually does not require any special tools, which makes them easy to adapt, even in the browser. The use of well-structured text files allows for a certain degree of automated validation of changes before they are submitted. This also prevents errors in the cluster.

In addition, text-based formats lend themselves to working with templates, as you will find in the Kubernetes package manager Helm (as in ‘the helm of a ship’).
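
A minimal, hypothetical illustration of this templating idea: a Helm chart separates environment-specific values from the templates, and placeholders such as the replica count or the image tag are filled in when the chart is rendered. Chart layout, names and values are assumptions, not taken from a specific project.

```yaml
# values.yaml: values for one environment of a hypothetical chart
replicaCount: 3
image:
  repository: registry.example.com/webshop
  tag: "1.0.0"
```

```yaml
# templates/deployment.yaml (excerpt): placeholders are replaced when Helm renders the chart
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: webshop
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```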

Kustomize, a tool officially supported by Kubernetes, takes a different route and encourages developers to extend and adapt existing manifests with transformers and generators. This allows you to apply environment-specific changes to a general manifest.
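
A hedged sketch of the Kustomize approach (paths and names are assumptions): a kustomization.yaml references existing base manifests and applies environment-specific transformations, for example a name prefix, a different replica count and a newer image tag, without copying or editing the base files.

```yaml
# kustomization.yaml for a hypothetical staging environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base                        # the general manifests described above
namePrefix: staging-               # transformer: prefix all resource names
replicas:
  - name: webshop
    count: 2                       # fewer instances than in production
images:
  - name: registry.example.com/webshop
    newTag: "1.1.0-rc1"            # swap in a release candidate for testing
```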

In the end, all these tools do is generate structured text that is then processed automatically or submitted directly to K8s. It would therefore also be possible to implement a simplified version of these tools with the right visualisation. The possibilities are endless.

Kubernetes – the Automation

One of the many tools to bridge the gap between stored manifests and K8s is Flux.

A group of controllers running in the K8s cluster monitors an external configuration source, usually a git repository. Changes in this location are detected by Flux and applied to the cluster immediately. Tools like Helm and Kustomize can be integrated to transform the code before it is submitted.
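
A simplified sketch of such a Flux setup with recent Flux versions (repository URL, path and intervals are placeholders): one resource tells Flux which git repository to watch, a second one tells it which path in that repository to apply to the cluster and how often to reconcile it.

```yaml
# Watch a git repository for configuration changes (placeholder URL and branch).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/platform-config   # placeholder repository
  ref:
    branch: main
---
# Apply the manifests found under ./clusters/production and keep them reconciled.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform-apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/production
  prune: true                      # remove resources deleted from the repository
```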

Flux is also capable of detecting other changes. If a new container image is published, Flux can detect this and adapt the relevant configuration in a git repository. This would then lead to the adapted code being deployed to the K8s cluster.

Along the way, Flux emits events. These events and the messages from K8s can in turn be picked up by other tools, triggering further automation such as tests.

Advantages of automation: With a solid strategy and careful configuration of the related tools, almost all steps in the lifecycle of a containerised application can be automated. The declarative approach and the high flexibility of Kubernetes lighten the workload of development teams and reduce the effort involved in setting up a stable CI/CD pipeline. This concept, designed for resilience and scalability, enables you to run small and large workloads efficiently and reliably.

Contact for your Digital Solution with Unic

Book an appointment

Are you keen to discuss your digital tasks with us? We would be happy to exchange ideas with you.

Jörg Nölke
Gerrit Taaks
