Containerization in software: revolution in deployment processes
How software containers simplify the delivery and operation of modern solutions
by Paul Heinzlreiter
How do you deliver a new software solution to customers so that it is easy to use and operate? The word container conjures up images of efficient freight transport on large cargo ships and similar logistics chains, made possible by the standardized dimensions of the containers. This image carries over to software development: just as in classic logistics, software containers enable the flexible use of software in a wide variety of system environments. Software can be packaged and executed together with its required libraries behind a defined interface. This approach works for standard software such as web servers or databases as well as for newly developed custom software.
Containerization means that no direct software installation is necessary on the underlying system; only a container runtime needs to be installed. Because a service such as a database is packaged in a container together with the libraries it requires, different versions of the same library can easily be used in different containers without causing conflicts.
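As an illustration, such a package can be described in a Dockerfile. The following is a hypothetical sketch for a small Python web service; the file names `app.py` and `requirements.txt` and the port are assumptions, not taken from a real project:

```dockerfile
# Hypothetical example: package a small Python service together
# with pinned library versions in a single container image.
FROM python:3.12-slim

WORKDIR /app

# The container carries its own dependency versions, so they cannot
# conflict with libraries on the host or in other containers.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The defined interface to the outside world: a single TCP port.
EXPOSE 8000
CMD ["python", "app.py"]
```

Building this file yields an image that runs identically on any machine with a container runtime installed.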
Contents
- Differentiation from virtualization and cloud computing
- Clustered container solutions
- Application scenarios
- Automatic scaling of services
- Management and evaluation of software systems
- Automated build pipelines in software development
- Use in the projects of RISC Software GmbH
- Author
- Read more

Differentiation from virtualization and cloud computing
Virtualization is another common method of isolating software installations from the base system of the executing computer. However, it replicates an entire computer, including virtual hardware and operating system, which leads to significant memory and processor usage as well as longer start times. A container, in contrast, usually delivers only a specific service, such as a web server or a database.
In cloud computing – especially with Infrastructure as a Service – virtualized computers are typically used, which brings with it the limitations mentioned above. The large public cloud providers now also offer specialized runtime environments for containers. Alternatively, it is possible to operate a self-managed, scalable container environment; in this scenario, the container runtime is set up and configured as part of the installation work. The basis is a set of computers that are either available directly at the customer's premises (on premise) or rented in a data center (dedicated hosting).
Clustered container solutions
Docker and Podman are widely used solutions for containerizing software and are typically used to run containers on individual computers. Several containers can also be started and managed together – for example, a database and the web server hosting the web application that displays data from that database.
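Such a combination of database and web server could be described with Docker Compose roughly as follows. This is a hedged sketch: the image name `my-web-app`, the port, and the credential are placeholders, not values from a real deployment:

```yaml
# docker-compose.yml – hypothetical sketch of a web application plus database
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data
  web:
    image: my-web-app:latest       # placeholder application image
    ports:
      - "8080:8080"
    depends_on:
      - db

volumes:
  db-data:
```

A single `docker compose up` then starts and connects both containers together.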
As the limitation to a single computer severely restricts the scalability of a solution, a cluster of computers is usually used to host containers in production environments. This can be achieved with Kubernetes as an execution platform, for example.

When computer clusters are used to run containers, the available capacity can easily be increased by adding compute nodes. An appropriately configured Kubernetes cluster also increases the reliability of the overall system: individual computers can fail as long as sufficient redundant machines are available. The flexible placement of containers on a Kubernetes cluster enables efficient use of the existing hardware, especially when system load fluctuates. Thanks to its technological flexibility, Kubernetes also integrates a wide variety of storage solutions, from local hard disks to network-accessible storage. Even when no cloud servers are to be used, containers allow deployed services such as databases to be replaced flexibly in order to meet new technical requirements easily.
By using Kubernetes, the execution location of the software can also be adapted flexibly. On the one hand, the widespread cloud offerings from Amazon, Microsoft or Google, as well as European clouds, can be used. On the other hand, a Kubernetes cluster can also be installed on local hardware in order to retain maximum control over your own programs and data. The particular advantage of Kubernetes in this context is that an existing software and service setup can be transferred from one infrastructure to another without additional effort or configuration changes. This avoids vendor lock-in (= being tied to a product or provider by switching costs or barriers), which can occur, for example, with the direct use of specific cloud services.
Application scenarios
Automatic scaling of services
Using containers makes it possible to minimize the configuration effort for individual services through preconfigured standard images. In addition, it allows a flexible response to changing requirements. A typical use case is the automatic scaling of services: when an online retailer receives more requests during the Christmas season, for example, additional web servers are started to handle them. Likewise, the number of containers can be reduced again when fewer requests arrive.
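In Kubernetes, this kind of automatic scaling can be expressed declaratively with a HorizontalPodAutoscaler. The following is a sketch under assumed names and thresholds; the deployment name `web-shop` and the replica limits are hypothetical:

```yaml
# Hypothetical autoscaling rule for a web shop deployment
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-shop-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-shop          # assumed deployment name
  minReplicas: 2            # baseline outside peak season
  maxReplicas: 20           # upper bound during peak load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU load
```

Kubernetes then starts or stops web server pods within these bounds as the measured load changes.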
Management and evaluation of software systems
Likewise, complex software architectures that consist of many different components and remain in production for a long time often undergo an evolutionary development of the technologies they use. Containerization and services such as Kubernetes make it possible to swap out individual system components flexibly, without having to install or uninstall software packages or libraries directly on the executing servers. Such direct installations are often problematic because of dependencies on different versions of the same library, especially for software that is in use for several years.
Another field of application is the evaluation of software systems, which provides a sound technical basis for technology decisions. This makes it easy to examine which systems best meet the customer requirements at hand. Especially in data engineering, the question at the start of a project is often which technology best fulfills the customers' requirements. While some solutions can sometimes be ruled out quickly – often on non-technical grounds – a well-founded decision usually requires testing different approaches against the customer's use case. Reasons for this can be special requirements such as the volume of data to be ingested into the system per unit of time, or specific preprocessing steps that are needed.
Automated build pipelines in software development
Containerization in general, and Kubernetes in particular, can make substantial contributions to software development and to rolling out the developed software into operation. In a modern agile software development process, automated tests play a central role, as they primarily protect the development team against accidentally reintroduced implementation errors. To ensure that these tests are run regularly, they are executed automatically, at the latest once the implementation of a piece of functionality is complete. Since a specific test environment is often required for this, it makes sense to run the tests in containers that contain the latest version of the software under development and the required libraries. Additionally required resources such as databases are configured as part of the test setup and automatically run alongside as separate containers. The use of containers thus simplifies the development of high-quality software.
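Such a containerized test stage might look as follows in a GitLab CI pipeline. This is a sketch under assumptions: the Python and PostgreSQL versions, the credential, and the use of `pytest` are placeholders, not details from an actual project:

```yaml
# .gitlab-ci.yml – hypothetical test job running in a container,
# with a database started alongside as a separate service container
test:
  image: python:3.12-slim       # container with the runtime the software needs
  services:
    - postgres:16               # additional resource as its own container
  variables:
    POSTGRES_PASSWORD: test     # placeholder credential for the test database
  script:
    - pip install -r requirements.txt
    - pytest                    # run the automated test suite
```

The CI runner starts both containers for each pipeline run and discards them afterwards, so every test run begins from a clean, reproducible environment.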
Independently of an effective software development process, containerizing the finished software is also desirable for simplified installation at the customer's site, as it greatly simplifies deployment in a wide variety of system environments. It is then sufficient to install a standardized container runtime, and the software can be used directly without any further installation. This is possible because the container already includes all the libraries and dependencies that the new software requires. For example, when using Docker, a software tool developed for a Linux environment can be run on a Windows system without any problems.
Use in the projects of RISC Software GmbH
A Kubernetes cluster was set up for productive operation at the Leichtmetallkompetenzzentrum Ranshofen (LKR). It is used for data analysis tasks and for the simulation of continuous aluminum casting. In addition, the simulation code developed at the LKR as part of the FFG-funded project FReSgO was adapted to run in containers. The Kubernetes setup enables the LKR to react flexibly to new data processing requirements that arise in the course of research activities, with variable processes in the simulation of continuous aluminum casting, data collection in mechanical material treatment, and data analysis. This forms the basis for the further development of the system and the integration of machine learning approaches in the FFG project opt1mus.
A NoSQL database was set up for MOWIS GmbH as part of an earlier project to support the ongoing integration of weather data from various sources. This system was initially designed primarily for batch operation (= transmission or processing of data in blocks). The batch approach applied to both data import and export: the source data was delivered in bulk, and the calculated weather forecasts were likewise delivered in blocks. When an interactive web service interface was added as a new requirement, the aim was to extend the existing system with this functionality without overloading it with the expected number of interactive requests. Modern web service interfaces, however, are expected to respond as quickly as possible, while queries often pass through many layers – from the interface service via an abstraction layer and a business logic layer, often through further abstraction layers, until the data is read from or written to its storage location. The more services are involved, the longer a query takes.
Caches can accelerate such queries by acting as intermediate storage. The Redis in-memory cache stores arbitrarily structured key-value pairs in main memory, and each entry can be assigned an expiry time. On a computer network, Redis can combine the main memory of several computers into one large cache.
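The core idea of a cache with per-entry expiry, as Redis provides it, can be sketched in plain Python. This is an illustrative in-memory stand-in, not the Redis client itself; the weather key and value are made-up examples:

```python
import time

class TTLCache:
    """Minimal key-value cache where each entry carries an expiry time,
    mimicking the behaviour of Redis entries stored with a TTL."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds):
        # Remember the value together with the moment it expires.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # entry has expired; evict it
            return None
        return value

cache = TTLCache()
# Hypothetical key for a cached 2-hour forecast for Linz.
cache.set("forecast:linz:2h", {"temp_c": 21.5}, ttl_seconds=0.05)
print(cache.get("forecast:linz:2h"))  # fresh entry is returned
time.sleep(0.06)
print(cache.get("forecast:linz:2h"))  # expired entry yields None
```

A query layer first asks the cache and only falls through to the slower database when the entry is missing or expired, which is exactly the pattern used to shield the batch-oriented backend from interactive requests.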
Since a Kubernetes cluster had already been set up on the system as part of an earlier expansion, it was an obvious decision to also run the Redis cache for accelerating the weather queries on this cluster. This made rolling out the extended system much easier, even though the new requirements were not yet known at the time of the original system design.
RISC Software GmbH has many years of experience with the containerization of applications, the installation and configuration of Kubernetes-based environments, as well as their use in various scenarios. We will be happy to advise you on the best solution for your project and support you during implementation.
Contact person
Author
DI Paul Heinzlreiter
Senior Data Scientist