
Easing Up Patching Using Containers and Kubernetes

Only unused applications don’t receive updates. For everything else, there are always more bugs to resolve, new requirements to address, and newer software to integrate. Patching is the name given to the act of improving software that is already deployed. For example, if a bugfix is developed, a patch is applied to the affected application; if a new feature goes to production, another patch moves the current software to a new version.

Patching has its risks. New versions can introduce new bugs, and incorrect configurations may cause service disruptions. Because of these dangers, an upgrade plan should have a rollback procedure in place so that the system’s previous state can be restored quickly, with minimal or no outage.

Mutable and Immutable Infrastructure

Traditionally, patching involves one or more servers running the application that is about to change. Someone creates a patching plan (a recipe read by a human operator or a script run on the machine) that should be tested beforehand in development and staging environments. If any problem arises, all steps are rolled back to restore the original state.

Mutable Infrastructure

A mutable infrastructure involves a server changing its shape to accommodate new releases via steps, or instructions, that modify the server to apply the new artifact. Steps can include application installation, new dynamic libraries, network mapping, folder configuration, and the deployed artifact itself. Each step moves the server further away from its original shape. If the current deployment needs to be rolled back, every previous step must be completely undone to restore the initial state, which makes reverting changes a strenuous task.

To mitigate problems with a mutable infrastructure, such as upgrade rollbacks, and to reduce human error, automated tools have been created that prepare servers and allow for automatic updates. The best-known tools are Puppet, Red Hat Ansible, and Chef. The idea is similar across all of them: you write recipes, and the tools apply them to the designated servers.
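As a rough illustration of such a recipe (the host group, package, and paths below are hypothetical), an Ansible playbook that installs a runtime and copies a new artifact might look like this:

```
# Hypothetical Ansible playbook: host group, package, and file paths are illustrative.
- hosts: app_servers
  become: true
  tasks:
    - name: Ensure the Java runtime is installed
      apt:
        name: openjdk-8-jre
        state: present

    - name: Copy the new application artifact to the server
      copy:
        src: build/app.jar
        dest: /opt/app/app.jar
```

Each run still mutates the server in place, which is exactly why the rollback story remains hard.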

There are several modules that ease the installation of the most popular software. Still, the server remains mutable and, as such, two supposedly identical environments can drift apart. For example, say someone installed a new version of some software to evaluate it in staging. After a while it was uninstalled, but the installation had also added some dynamic libraries that were not removed during the rollback. From that moment on, the staging environment no longer matches production. Moreover, monitoring or auditing tools are sometimes installed only in production, which can lead to unexpected bugs that were unforeseen in staging.

Immutable Infrastructure

An immutable infrastructure, as the name suggests, involves structures (called images) designed to be replicated in several different environments with no changes to how they were initially set up. Once an image is running, the best practice is not to change what is configured inside it, even though you can. But what if you need to update software? You do so by using the existing image as a base and creating a new immutable version from it. This method solves most of the problems of a mutable infrastructure, allowing development, staging, and production to run the same code and behave exactly as expected. Need to roll back? Just use the previous image instead of the new one.

Stateful applications still carry risks when deployed on an immutable infrastructure. Application data is often not backward compatible, so simply rolling back to a previous image can leave the application unresponsive. You need to take the same care with applications that perform database migrations (such as schema or data transformation changes) as you would with a mutable infrastructure.

Immutable Infrastructure with Docker

How can you apply an immutable infrastructure? One way is to create new virtual machine images every time, with strict access policies to avoid undesired changes. But virtual machines are heavyweight: they take time to boot and have a large memory footprint, as each one runs an entire operating system. So, is there a better alternative?

Solution via Containers

Containers are self-contained units of software that describe the desired state of an application. They are not virtual machines but applications running on a host machine (sometimes a bare-metal machine), separated from one another by a set of rules managed by the operating system kernel. They also have no access to the outside unless configured to. With these rules, it is possible to create network interfaces and storage and also to limit compute resources. A container starts almost instantly and uses much less memory than a virtual machine, as all containers share the host’s kernel.

Docker

Docker is the most prominent container solution available. It is easy to develop with and ubiquitous, as it is available on all major cloud providers and operating systems (including macOS and Windows, with native and virtualized containers).

Images are like the layers of an onion: each layer configures something, and the next one builds on the previous layer to apply a new change. In the end, an application is properly configured using several layers, which can be publicly available. Every image also has a URL, one or more tags, and a unique identifier. New versions receive one or more tag values and a new unique identifier, and previous tags can be overridden, so the latest tag conventionally points to the latest available version. Thus, even if an image loses all of its tags, you can recover it simply by searching for its unique identifier.
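
The Dockerfile shown in Image 1 is reproduced here as plain text (the artifact name and paths are illustrative):

```
# Reconstruction of the Dockerfile described below; artifact name and paths are illustrative.
FROM openjdk:8-jre-alpine
COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```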

Image 1: Dockerfile for Java Applications

The example above shows an image file description. Line 1 gives the upper-level image (an OpenJDK 8 image), line 2 copies a built artifact to an internal folder, and line 3 gives the command Docker runs when the container starts. If you need to deploy a new version of your artifact, you rebuild the image from this file with the new artifact.

Patching with Docker and Kubernetes

So, how do you apply a patch in an immutable infrastructure? In a mutable infrastructure, each piece of software may have its own way to do this with little interruption, also known as hot deployment (e.g., Tomcat parallel deployment). In other words, you can apply as many versions as you want on the same infrastructure, and it works.

On an immutable infrastructure, however, things are different. You can’t change the artifacts; you have to generate a new image for each new artifact. To deploy a new version, the new image must run side by side with the older one, and a load balancer must direct traffic to the older version while the new version is deployed. After deployment, the load balancer then transfers all traffic to the new version.

This is all rather complicated, since you have to handle the machine, network interfaces, and cloud providers for each deployment. But what if there was an automated tool to handle all of your deployments, check availability, and handle outages automatically?

K8s to the Rescue

Kubernetes (K8s) is a container orchestrator that manages all the containers of an infrastructure spanning more than one machine (even hundreds), as well as service disruptions, hardware failures, and any other outage.

K8s also deals with patching and offers standard practices for hot deployment when a new artifact replaces a running one. When performing a rolling update, K8s by default guarantees that at least 75% of the desired instances stay up while at most 25% are being switched over to the new version.
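
As a minimal sketch of what this looks like in a Deployment manifest (the name, image, and replica count are illustrative), the rolling-update strategy can be declared explicitly; patching then amounts to changing the image reference:

```
# Minimal Deployment sketch; name, image, and replica count are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # keep at least 75% of the replicas available
      maxSurge: 25%         # allow up to 25% extra pods during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.1.0  # patching = pointing this at a new image
```

Updating the image field (for example, with kubectl set image) triggers the gradual replacement, and kubectl rollout undo restores the previous version if something goes wrong.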

Docker and K8s Security Risks 

Any software is prone to bugs and security issues, but there are steps you can take to reduce them to a minimum.

Own Your Images

Most Docker images are built on top of publicly available base images such as Ubuntu or Alpine; others provide software like Postgres or Redis. Anyone can publish images, and published images may change over time. An image you have already built keeps using the same upper-level layers, but nothing guarantees you will get the same upper-level image when creating a new build if you reference it by tag (especially a tag like latest). Thus, your image is vulnerable if someone intentionally introduces a security flaw into that upper-level software.
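
One way to hedge against mutable tags is to pin the base image by its content digest instead of a tag; the digest in the sketch below is a placeholder, not a real value.

```
# Pin the base image by digest so later builds cannot silently pick up a changed tag.
# The digest below is a placeholder; use the digest of the image you actually verified.
FROM openjdk:8-jre-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000
COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```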

If you can’t or don’t want to rely on external images and their updates, you can produce your own image (even if it is based on a publicly available one). Those images can be hosted publicly, or privately for a small fee, on Docker Hub. Other container registries are also available for this purpose.

Keep It Simple

Whereas a mutable infrastructure may deploy several artifacts on the same machine, Docker images are oriented toward handling one artifact each. This means you may end up with several different distributions on the same machine, each with its own versions and security updates. When creating a new image, keep it as simple as possible to reduce possible issues: choose a small distribution such as Alpine, and only install the software you need. If you use tools to build your images, delete the tools that are no longer needed after the image is created, or use multi-stage builds, as sketched below.
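
A multi-stage build keeps compilers and build tools out of the final image; in this sketch, the base images, project layout, and artifact name are illustrative.

```
# Multi-stage build sketch: build tools stay in the first stage,
# and only the resulting artifact is copied into the small runtime image.
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

FROM openjdk:8-jre-alpine
COPY --from=build /src/target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```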

Keep Scanning

Docker images can be constantly scanned for security issues with third-party services. These services maintain a vast database of known vulnerabilities and scan your images to verify whether any of them contain a known issue. You need to scan continually, since new issues are discovered all the time: an image that was clean yesterday may have issues today.
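
As one example of such a scan (Trivy is just one of several scanners, and the image name is illustrative):

```
# Scan a local image for known CVEs using Trivy (one example of a scanning tool)
trivy image registry.example.com/my-app:1.1.0
```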

Updates

On the Kubernetes side, you should keep the cluster orchestrator updated. Managed (SaaS) offerings usually keep K8s continually updated with security patches; nevertheless, you should read the service documentation carefully to confirm this.

Countermeasures

You should also deploy countermeasures in case a container is breached. It is crucial to configure K8s properly so that a breach does not propagate beyond the compromised container itself. A helpful solution here is a service mesh such as Istio, which provides a secure communication layer with service discovery, observability, policies, and security. On the security side, Istio can enforce encrypted and authenticated communication between services. On the policy side, Istio can guarantee that applications only reach the services designated for them and nothing else.
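
As a small sketch of the security side (the namespace name is illustrative), a PeerAuthentication resource can require mutual TLS for every workload in a namespace:

```
# Require mutual TLS for all workloads in the namespace; the namespace name is illustrative.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
```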

Conclusion

In this post, we walked through several topics to understand how to patch applications using Docker and Kubernetes. We also explained how an immutable infrastructure smooths deployments and guarantees that every environment uses the same components. Docker and Kubernetes are an excellent combination for applying patches on an immutable infrastructure: Docker provides all you need to create immutable containers, while K8s delivers straightforward, zero-downtime deployments.

Using Kubernetes and Docker does not eliminate security risks, and many things must be addressed to guarantee that your environment is secure. From taking care of Docker images to reducing a container’s reachability to other services, security is a continuous task for anyone working with internal or public services.
