
Your Guide to Patching Immutable Infrastructure

Given the ephemeral nature of containers, you might assume that patching matters far less than it actually does. In fact, just as with more traditional systems, patching both the containers and the underlying management systems is as critical as tending to any other part of your environment.

In this article, we will cover the three main considerations when patching an immutable infrastructure: containers, management, and the ecosystem. We will then discuss some additional elements of patching today and, finally, provide an example of patching a Docker image in general, as well as deploying updated images in managed Kubernetes systems such as Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS).

The Basics

As mentioned above, there are three things you need to deal with when patching an immutable infrastructure:

  • Containers - These are the backbone of your workload: they run your processes and handle your data. Container best practices state that a running container should not be changed, hence the immutable designation. Therefore, the source image itself must be patched.
  • Management - Your management layer, the piece that orchestrates the communication, storage, and availability of your containers, must also be maintained. Keeping this layer up to date ensures that you have not just the latest functionality, but also the latest security patches. An example of this is Kubernetes, which manages your container networking, horizontal and vertical scaling, container image deployment, and, depending on the environment, rolling updates (see the upgrade sketch after this list).
  • Ecosystem - Containerized workloads depend on a variety of supporting systems. Container registries, both public and private, hold the images that you use to deploy new containers, and those images need to be continuously updated as well, both for functionality and for security.
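For example, on a managed offering such as AKS, keeping the management layer current is a short exercise. A minimal sketch, assuming a hypothetical cluster named myAKSCluster in the ContainerRG resource group (the target version is illustrative):

# List the Kubernetes versions the cluster can currently upgrade to
az aks get-upgrades --resource-group ContainerRG --name myAKSCluster --output table

# Upgrade the control plane and nodes to a newer, patched version
az aks upgrade --resource-group ContainerRG --name myAKSCluster --kubernetes-version 1.29.2
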
Challenges

What are the inherent difficulties in patching for containers, management (orchestration), and the ecosystem itself? Below are a few of the challenges associated with a modern cloud containerized environment.

  • There are often many more containers than traditional virtual machines (VMs), so keeping a large number of containers up to date becomes more difficult than in traditional systems.
  • When updating an orchestration layer such as AKS, EKS, or a stand-alone Kubernetes install, it’s important that the running (“live”) containers are not affected and successfully reconnect to your management system.
  • To deploy new containers, public and private registries are very important, but making sure that the images they hold are safe and usable is critical to the security and stability of your workloads.
Patching Process

Despite the differences between a traditional environment and an immutable infrastructure, there are important patching commonalities shared between the two.

Latest vs. Specific Software Versions – Staying on a specific version means you may wait longer for new features and fixes, but most problems are introduced by change. Following a policy of slowly rolling out updates and/or pinning to a specific version (unless an update is security related) can help prevent future problems.
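In Docker terms, this is the difference between tracking a moving tag and pinning a known-good version; a quick sketch (the digest value is a placeholder):

# Tracks whatever 'latest' currently points to; contents can change between pulls
docker pull ubuntu:latest

# Pins a specific release tag, which changes far less often
docker pull ubuntu:22.04

# Pins an exact, immutable image digest
docker pull ubuntu@sha256:{digest}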

Rolling Updates – In addition to controlling rollout speed, controlling which systems get updates, and in what order, is just as important. Pushing updates to all systems at once is generally ill-advised; rolling updates across systems gradually, with time to evaluate each stage, is safer.
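In Kubernetes, for example, a Deployment's rolling-update strategy caps how many containers are replaced at any moment. A minimal sketch, applied via a kubectl heredoc (the deployment name and image are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-container
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # never take down more than one replica at a time
      maxSurge: 1        # allow at most one extra replica during the rollout
  selector:
    matchLabels:
      app: custom-container
  template:
    metadata:
      labels:
        app: custom-container
    spec:
      containers:
      - name: custom-container
        image: custom-container:v1
EOF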

Patching In-Place vs. Replacing Image – Especially in an immutable infrastructure environment, images should be replaced rather than patched directly. The core idea of immutable infrastructure is that you never change a running image; you replace it with an updated version.
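Concretely, the anti-pattern is patching software inside a running container; the immutable approach rebuilds the image and swaps the container out. A sketch with hypothetical names:

# Anti-pattern: mutating a running container in place
docker exec custom-container sh -c 'apt-get update && apt-get upgrade -y'

# Immutable approach: build a new image version, then replace the container
docker build -t custom-container:v2 .
docker stop custom-container && docker rm custom-container
docker run -d --name custom-container custom-container:v2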

Patching Configuration – Just as it is advisable to replace the images themselves rather than patch a live system, it is important to update the configuration that the images are built from, so that any new image is based on the new, updated configuration.
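For a Docker image, that configuration is the Dockerfile: bump the pinned base image there and rebuild, so every future container inherits the patch. A sketch (the tags are illustrative):

# Update the pinned base image in the Dockerfile to a patched release
sed -i 's|FROM ubuntu:22.04|FROM ubuntu:24.04|' Dockerfile

# Rebuild so that new containers are based on the updated configuration
docker build -t custom-container:v2 .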

The Role of CI/CD

Many container environments have their configuration managed as code, which lends itself perfectly to continuous integration and continuous deployment (CI/CD). Just as you test your application code against test cases to make sure no functionality is broken, you will want to do the same for your container infrastructure.

An ideal environment for this would be one that utilizes the following (a pipeline sketch follows the list):

  • Version-Controlled Configuration (Git)
  • Automated Build
  • Configuration and Build Validation
  • Automated Deployments
  • Verification (metrics, etc.)
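As a rough sketch, those stages can be strung together in a single pipeline script; the image name, registry placeholder, and smoke-test command here are all hypothetical:

#!/bin/sh
set -e  # fail the pipeline as soon as any stage fails

# 1. Build: create a new image from the version-controlled Dockerfile
docker build -t custom-container:v2 .

# 2. Validate: smoke-test the freshly built image before it goes anywhere
docker run --rm custom-container:v2 /app/run-smoke-tests.sh

# 3. Publish: push the validated image to the registry
docker tag custom-container:v2 {LoginServer}/custom-container:v2
docker push {LoginServer}/custom-container:v2

# 4. Deploy: roll the new image out and wait for the rollout to finish
kubectl set image deployment custom-container custom-container={LoginServer}/custom-container:v2
kubectl rollout status deployment/custom-container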

Most CI/CD environments can be used, such as Azure DevOps, AWS CodeStar, or Jenkins. In practical terms for patching, this means that an image configuration update kicks off automated builds that are tested and ultimately deployed without lengthy manual steps. Since we are not patching individual pieces of software on already-running virtual machines, we can use this environment to roll out updates as necessary in a controlled fashion.

Release Environments

With an investment in CI/CD and a containerized infrastructure, you can take advantage of atomic deployments (a release either fully succeeds or fully fails) and methodologies such as blue-green or rolling releases.

Tying this into patching: when deploying updated images, redeploying all of the containers at once may not be the best idea, especially if there is an unforeseen error or performance regression. With CI/CD, not only can we catch problems early, but because the pipeline uses release environments (e.g., staging or development), you can test those changes early.

Once your changes have been verified, you can use deployment methodologies such as blue-green or rolling releases, sketched below, to switch to the new environment en masse or gradually roll out the updates.
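As a sketch of the blue-green flavor, run the patched version alongside the current one, then repoint the service selector once it checks out (the manifest, service, and label values are hypothetical):

# Deploy the patched 'green' version alongside the live 'blue' one
kubectl apply -f custom-container-green.yaml

# After verification, repoint the service's selector from blue to green
kubectl patch service custom-container -p '{"spec":{"selector":{"version":"green"}}}'

# If a regression surfaces, rolling back is the same one-line change
kubectl patch service custom-container -p '{"spec":{"selector":{"version":"blue"}}}'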

Patching in Practice

How does practical container patching work? First, we will look at Docker in general and how simple managing container versions can be. After that, we will look at an example of a Kubernetes system that lives in the cloud: specifically, Azure Kubernetes Service (AKS), a managed Kubernetes environment in the Microsoft Azure cloud.

Docker Patching

Updating a Docker image is easy in and of itself, but even more important is verifying the image. Pulling an image from an untrusted registry without verification could leave your environment open to compromise.

Verifying container images within Docker is as easy as setting the DOCKER_CONTENT_TRUST environment variable to 1. With it set, every pulled image is checked by default for a valid signature against the registry's trust data. What are the steps to retrieve an updated image, verify it, and run it? Let's walk through them below:

# Enable Docker Content Trust: only pull signed images from the registry
export DOCKER_CONTENT_TRUST=1

# Stop the existing Ubuntu container
docker stop ubuntu

# Remove all stopped containers
docker container prune

# Pull the latest Ubuntu image (the pull fails if the image is unsigned)
docker pull ubuntu:latest

# Output the signature data for the image
docker trust inspect --pretty ubuntu:latest

# Run a new container from the updated image, named to match the stop command above
docker run -d --name ubuntu ubuntu:latest

These commands can just as easily be used within any CI/CD environment. By verifying signatures and making sure that Docker only pulls validated images, you are far less likely to inadvertently run a malicious image.

Azure Kubernetes Service (AKS)

Docker image verification is easy, but how does this work in an AKS environment? One convenience of AKS is that the cluster autoscaler can be enabled when you create the cluster, giving it headroom when a rolling update temporarily needs extra replicas.
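A minimal sketch of that creation-time flag, assuming a hypothetical cluster named myAKSCluster:

# Create an AKS cluster with the cluster autoscaler enabled from the start
az aks create --resource-group ContainerRG --name myAKSCluster --enable-cluster-autoscaler --min-count 1 --max-count 5

With the cluster in place, the steps below tag an updated image, deploy it to the AKS environment, and verify that it functions as expected.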

# Retrieve the loginServer value to be used within the Docker commands below
# The value comes from the Azure Container Registry (ACR)
az acr list --resource-group ContainerRG --query "[].{acrLoginServer:loginServer}" --output table

# Tag our 'custom-container' Docker image as the updated 'v2' version
docker tag custom-container {LoginServer}/custom-container:v2

# Push the new container version to our ACR
docker push {LoginServer}/custom-container:v2

# Optionally scale up the number of replicas to make sure we have enough
# capacity to deploy the new version via a rolling update
kubectl scale --replicas=3 deployment/custom-container

# Deploy the new version of the image across our environment
kubectl set image deployment custom-container custom-container={LoginServer}/custom-container:v2

# Finally, watch the rollout complete and verify that the service is reachable
kubectl rollout status deployment/custom-container
kubectl get service custom-container

Amazon Elastic Kubernetes Service (EKS)

Much like the AKS example above, let us demonstrate how this same procedure works in an EKS environment. Unlike AKS, EKS does not enable cluster auto-scaling with a single flag: you need to deploy the Kubernetes Cluster Autoscaler into the cluster, along with the IAM permissions it needs to resize your node groups.
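A minimal sketch of that setup, assuming the required IAM permissions are already in place and that you have downloaded the autoscaler manifest from the kubernetes/autoscaler project's AWS examples:

# Deploy the Cluster Autoscaler (manifest from the kubernetes/autoscaler AWS examples)
kubectl apply -f cluster-autoscaler-autodiscover.yaml

With auto-scaling handled, the steps below tag an updated image, deploy it to the EKS environment, and verify that it functions as expected.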

# Authenticate Docker to Amazon ECR (AWS CLI v2); replace {region} and
# {aws_account_id} with your values
aws ecr get-login-password --region {region} | docker login --username AWS --password-stdin {aws_account_id}.dkr.ecr.{region}.amazonaws.com

# Tag our 'custom-container' Docker image as the updated 'v2' version
# (image tags use the registry host name, without an https:// prefix)
docker tag custom-container {aws_account_id}.dkr.ecr.{region}.amazonaws.com/custom-container:v2

# Push the new container version to our Amazon ECR
docker push {aws_account_id}.dkr.ecr.{region}.amazonaws.com/custom-container:v2

# Optionally scale up the number of replicas to make sure we have enough
# capacity to deploy the new version via a rolling update
kubectl scale --replicas=3 deployment/custom-container

# Deploy the new version of the image across our environment
kubectl set image deployment custom-container custom-container={aws_account_id}.dkr.ecr.{region}.amazonaws.com/custom-container:v2

# Finally, watch the rollout complete and verify that the service is reachable
kubectl rollout status deployment/custom-container
kubectl get service custom-container

Tying It All Together

Patching is critical to maintaining a secure and performant environment. Although it may seem less necessary within an immutable infrastructure, there is far more to it than meets the eye: every layer, from the images to the orchestration to the surrounding ecosystem, must be patched and maintained. This could arguably be seen as more complicated than maintaining a traditional server infrastructure, but it is well worth the benefits that an immutable infrastructure confers.

