Containerization in Software: A Revolution in Deployment Processes
How Software Containers Simplify the Delivery and Operation of Modern Solutions
by Paul Heinzlreiter
How do you deliver a new software solution to customers so that it can be easily used and operated? When we think of containers, we often picture efficient mass freight transport on large cargo ships and similar logistics chains, made possible by standardized container dimensions. The analogy carries over to software development: much like containers in classical logistics, software containers enable the flexible deployment of software in a wide range of system environments. Through a defined interface, software packages can be bundled together with the libraries they require and then executed. This approach applies to standard software packages such as web servers or databases as well as to newly developed custom software.
Containerization eliminates the need to install software directly on the underlying system; only a container runtime needs to be installed. By packaging a service, such as a database, together with the libraries it requires in a container, different versions of the same library can be used in different containers without causing conflicts.
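As an illustration, such a package can be described in a short Dockerfile; the base image, port, and dependency file below are assumptions for a hypothetical Python service, not part of any specific project:

```dockerfile
# Hypothetical example: packaging a small Python web service
# together with the exact library versions it needs
FROM python:3.12-slim

WORKDIR /app

# Libraries are installed inside the image, isolated from the host system
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The service is reachable on a fixed, documented port
EXPOSE 8000
CMD ["python", "app.py"]
```

Building this image produces a self-contained unit that runs identically on any system with a container runtime.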
Content
- Distinction from Virtualization and Cloud Computing
- Clustered Container Solutions
- Use Cases
- Automatic Scaling of Services
- Management and Evaluation of Software Systems
- Automated Build Pipelines in Software Development
- Use in RISC Software GmbH Projects
- Author
Distinction from Virtualization and Cloud Computing
Another common method for isolating software installations from the host system is virtualization. However, this involves simulating an entire computer, including virtual hardware and an operating system, which results in significant memory and CPU usage as well as longer startup times. In contrast, containers typically deliver only a specific service, such as a web server or a database.
In the field of cloud computing, especially Infrastructure as a Service (IaaS), virtualized machines are commonly used, which brings the limitations mentioned above. Large public cloud providers now also offer specialized container runtime environments. Alternatively, a self-managed, scalable container environment can be operated; in this case, the container runtime environment is set up and configured as part of the installation. The basis is a set of computers that are either available on-premises or rented in a data center (dedicated hosting).
Clustered Container Solutions
Popular solutions for containerizing software include Docker and Podman. These are typically used to run containers on individual computers. Multiple containers can also be started and managed together, for example a database and the web server running the web application that displays the database's data.
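The database-plus-web-server example could be sketched, for instance, with Docker Compose; all image names and credentials below are placeholders:

```yaml
# docker-compose.yml — hypothetical sketch: a database and the web
# application displaying its data, started and managed together
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example      # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data
  web:
    image: example.org/webapp:latest  # placeholder application image
    ports:
      - "8080:8080"
    depends_on:
      - db
volumes:
  db-data:
```

A single command then starts or stops both services together, including their shared network and the database's persistent volume.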
Since limiting to a single computer severely restricts scalability, clusters of computers are typically used in production environments to host containers. This can be achieved using platforms like Kubernetes.
When clusters of computers are used to run containers, adding more nodes easily increases the capacity available for container execution. A properly configured Kubernetes cluster also increases the overall system's fault tolerance: individual computers can fail without impacting availability, provided sufficient redundant machines are available. When the system load fluctuates, the flexible placement of containers on a Kubernetes cluster enables efficient use of the available hardware. Kubernetes' technological flexibility also allows the integration of various storage solutions, such as local drives or network-accessible storage, and individual services, such as databases, can be replaced flexibly when new technical requirements arise.
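A minimal sketch of how such redundancy is expressed in Kubernetes, assuming a hypothetical web server workload (the name and replica count are illustrative):

```yaml
# Hypothetical Kubernetes Deployment: three replicas of a web server,
# spread across cluster nodes, so a single node failure does not
# affect availability
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
```

If a node fails, Kubernetes automatically reschedules the affected replicas onto the remaining nodes.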
Kubernetes also makes it possible to flexibly change where the software runs. Both the widespread cloud offerings from Amazon, Microsoft, or Google and European clouds can be used. Alternatively, a Kubernetes cluster can be installed on local hardware to retain maximum control over programs and data. The advantage of Kubernetes here is that an existing software and service setup can be transferred between infrastructures with hardly any additional effort or configuration changes. This prevents the vendor lock-in that can occur with the direct use of specific cloud services.
Use Cases
Automatic Scaling of Services
Using containers keeps the configuration effort for individual services minimal, since pre-configured standard images can be used, and changing requirements can be met flexibly. A typical use case is the automatic scaling of services: when an online retailer receives more requests during the Christmas season, additional web servers are started to handle them, and the number of containers is reduced again when fewer requests come in.
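In Kubernetes, this kind of automatic scaling can be sketched with a HorizontalPodAutoscaler; the target deployment name and the thresholds below are illustrative assumptions:

```yaml
# Hypothetical HorizontalPodAutoscaler: Kubernetes adds web server
# replicas when average CPU utilization rises (e.g. seasonal load)
# and scales back down when requests decrease
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webserver-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webserver      # assumed name of the web server deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```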
Management and Evaluation of Software Systems
In more complex software architectures that consist of many different components and are used over a long period, the technology employed often evolves. Containerization and services like Kubernetes allow system components to be exchanged flexibly without the need to (un)install software packages or libraries directly on the executing servers. This is often problematic due to dependencies on different versions of the same library, especially with software that is used over several years.
Another application area is the evaluation of software systems, which provides a solid technical foundation for technology decisions. It allows for easy examination of which systems best meet customer requirements.
Automated Build Pipelines in Software Development
Containerization in general and Kubernetes in particular can contribute significantly both to software development itself and to the operation of the developed software. In a modern agile development process, automated tests play a central role, protecting the team from unintentionally reintroducing implementation errors. To ensure that these tests run regularly, they are automated and executed after each feature is completed. Since a specific test environment is often required, it makes sense to run the tests in containers that contain the latest version of the software under development together with the necessary libraries.
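As a sketch of what this can look like in practice, a hypothetical GitLab CI job that runs the test suite inside containers (the images, service, and commands are assumptions, not a description of a specific project's pipeline):

```yaml
# Hypothetical CI job: after each push, the automated tests run inside
# a container providing the required test environment
test:
  image: python:3.12-slim        # container with the needed runtime
  services:
    - postgres:16                # dependent service, also containerized
  variables:
    POSTGRES_PASSWORD: example   # placeholder credential
  script:
    - pip install -r requirements.txt
    - pytest                     # run the automated test suite
```

Because the environment is rebuilt from images on every run, the tests always execute against a clean, reproducible setup.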
Use in RISC Software GmbH Projects
For the Leichtmetallkompetenzzentrum Ranshofen (LKR), a Kubernetes cluster was set up for production. This is used for data analysis and simulation tasks in aluminum continuous casting. Additionally, the simulation code developed at LKR as part of the FFG-funded project FReSgO was adapted for execution in containers. The Kubernetes setup allows LKR to respond flexibly to new requirements in its research activities related to variable processes in aluminum continuous casting simulation, data collection in mechanical material processing, and data analysis. This forms the basis for system development and the integration of machine learning approaches in the FFG project opt1mus.
For MOWIS GmbH, a NoSQL database was built in a previous project to support the continuous integration of weather data from various sources. The system was initially batch-oriented, importing and exporting data in bulk. When an interactive web service interface became a requirement, the goal was to add the new functionality without overloading the existing system under the expected increase in interactive requests. Modern web service interfaces are expected to respond quickly, yet queries often pass through multiple layers before data is read or written, and with many services involved, this can take even more time.
Caches can speed up such queries. The in-memory cache Redis stores arbitrarily structured key-value pairs in memory, each with an expiration time, and when run as a cluster it can combine the memory of several computers into one large cache.
Since a Kubernetes cluster had already been set up as part of a previous extension, running the Redis cache on it to speed up weather queries was a logical decision. This significantly simplified deployment, even though the new requirements were not known when the system was originally designed.
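A minimal, hypothetical sketch of running such a Redis cache on an existing Kubernetes cluster (the names, memory limit, and eviction policy are assumptions, not the MOWIS setup):

```yaml
# Hypothetical sketch: a Redis cache deployed on an existing Kubernetes
# cluster so queries can be answered from memory
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-cache
  template:
    metadata:
      labels:
        app: redis-cache
    spec:
      containers:
        - name: redis
          image: redis:7
          # Cap memory use and evict the least recently used keys
          args: ["--maxmemory", "512mb", "--maxmemory-policy", "allkeys-lru"]
          ports:
            - containerPort: 6379
---
# Cluster-internal service so other containers can reach the cache by name
apiVersion: v1
kind: Service
metadata:
  name: redis-cache
spec:
  selector:
    app: redis-cache
  ports:
    - port: 6379
```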
RISC Software GmbH has many years of experience with containerizing applications, installing and configuring Kubernetes-based environments, and their use in various scenarios. We are happy to advise you on the best solution for your project and support you in its implementation.
Author
DI Paul Heinzlreiter
Senior Data Scientist