What is a Linux Container Image?

A Linux® container is a set of one or more processes that are isolated from the rest of the system. All the files necessary to run those processes are provided by a distinct image, which means Linux containers are portable and consistent as they move from development, to testing, and finally to production. This makes them much quicker to use than development pipelines that rely on replicating traditional testing environments. Because of their popularity and ease of use, containers are also an important focus of IT security.

The idea of what we now call container technology first appeared in 2000 as FreeBSD jails, a technology that allows the partitioning of a FreeBSD system into multiple subsystems, or jails. Jails were developed as safe environments that a system administrator could share with multiple users inside or outside of an organization.

In 2001, an implementation of an isolated environment made its way into Linux, by way of Jacques Gélinas’ VServer project. Once this foundation was set for multiple controlled userspaces in Linux, pieces began to fall into place to form what is today’s Linux container.

Very quickly, more technologies combined to make this isolated approach a reality. Control groups (cgroups) are a kernel feature that controls and limits resource usage for a process or group of processes. And systemd, an initialization system that sets up userspace and manages its processes, uses cgroups to gain greater control over these isolated processes. Both of these technologies, while adding overall control for Linux, provided the framework that allows environments to stay separated.

Imagine you’re developing an application. You do your work on a laptop and your environment has a specific configuration. Other developers may have slightly different configurations. The application you’re developing relies on that configuration and is dependent on specific libraries, dependencies, and files. Meanwhile, your business has development and production environments that are standardized with their own configurations and their own sets of supporting files. You want to emulate those environments as much as possible locally, but without all the overhead of recreating the server environments. So, how do you make your app work across these environments, pass quality assurance, and get your app deployed without massive headaches, rewriting, and break-fixing? The answer: containers.

The container that holds your application has the necessary libraries, dependencies, and files, so you can move it through development, testing, and production without nasty side effects. In fact, the contents of a container image can be thought of as an installation of a Linux distribution, complete with RPM packages, configuration files, and so on. But distributing a container image is a lot easier than installing fresh copies of an operating system. Crisis averted; everyone's happy.

That’s a common example, but Linux containers can be applied to many different problems where portability, configurability, and isolation is needed. The point of Linux containers is to develop faster and meet business needs as they arise. In some cases, such as real-time data streaming with Apache Kafka, containers are essential because they're the only way to provide the scalability an application needs. No matter the infrastructure—on-premise, in the cloud, or a hybrid of the two—containers meet the demand. Of course, choosing the right container platform is just as important as the containers themselves.

Red Hat® OpenShift® includes everything needed for hybrid cloud, enterprise container, and Kubernetes development and deployments.

Use containers to build, share, and run your applications

Containers are a critical part of today's IT operations. A container image contains a packaged application, along with its dependencies, and information on what processes it runs when launched.

You create container images by providing a set of specially formatted instructions, either interactively, by making changes to a running container and committing the result, or with a Dockerfile. For example, this Dockerfile creates a container for a PHP web application:

# Start from the Red Hat Universal Base Image 8.1
FROM registry.access.redhat.com/ubi8/ubi:8.1

# Enable the PHP 7.3 module stream, install Apache and PHP,
# then clean the yum cache to keep this layer small
RUN yum --disableplugin=subscription-manager -y module enable php:7.3 \
  && yum --disableplugin=subscription-manager -y install httpd php \
  && yum --disableplugin=subscription-manager clean all

# Copy the application code into Apache's document root
ADD index.php /var/www/html

# Listen on an unprivileged port and loosen permissions so the
# container can run as a non-root user
RUN sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf \
  && sed -i 's/listen.acl_users = apache,nginx/listen.acl_users =/' /etc/php-fpm.d/www.conf \
  && mkdir /run/php-fpm \
  && chgrp -R 0 /var/log/httpd /var/run/httpd /run/php-fpm \
  && chmod -R g=u /var/log/httpd /var/run/httpd /run/php-fpm

# Expose the web server port and switch to a non-root user
EXPOSE 8080
USER 1001

# Start PHP-FPM in the background and Apache in the foreground
CMD php-fpm & httpd -D FOREGROUND

Each instruction in this file adds a layer to the container image. Each layer only adds the difference from the layer below it, and then, all these layers are stacked together to form a read-only container image.
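
To see those layers for yourself, build the image and inspect its history. A minimal sketch, assuming podman is installed and an index.php sits next to the Dockerfile (docker accepts the same commands; the php-demo tag is just an example name):

# Build the image from the Dockerfile above
podman build -t php-demo .

# Each row of the history corresponds to a layer created by one instruction
podman history php-demo

# Run the container, publishing the exposed port on the host
podman run -d -p 8080:8080 php-demo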

How does that work?

You need to know a few things about container images, and it's important to understand the concepts in this order:

  1. Union file systems
  2. Copy-on-write
  3. Overlay file systems
  4. Snapshotters

Union file systems (UnionFS)

A union file system (UnionFS) allows the contents of one file system to be merged with the contents of another, while keeping the "physical" content separate. The result looks like a single, unified file system, even though the data is actually structured in branches. The Linux kernel supports this style of mount through implementations such as OverlayFS.

The idea here is that if you have multiple images with some identical data, instead of having this data copied over again, it's shared by using something called a layer.

[Figure: UnionFS]

Each layer is a file system that can be shared across multiple containers. For example, the httpd base layer is the official Apache image and can be used by any number of containers. Imagine the disk space saved, since every one of those containers reuses the same base layer.

These image layers are always read-only, but when we create a new container from an image, we add a thin writable layer on top of it. This writable layer is where you create, modify, or delete files and make any other changes each container requires.
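
You can watch that writable layer in action. A quick sketch, again assuming podman (the container name web is arbitrary):

# Start a long-running container from the same UBI base image
podman run -d --name web registry.access.redhat.com/ubi8/ubi:8.1 sleep 1000

# Create a file; the write lands in the container's thin writable layer
podman exec web touch /tmp/scratch.txt

# List what changed relative to the read-only image layers
podman diff web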

Copy-on-write

When you start a container, it appears as if the container has an entire file system of its own. It might seem, then, that every container you run needs its own full copy of the file system, consuming a lot of disk space and slowing container startup. It doesn't, because containers don't each need their own copy of the file system.

Containers and images use a copy-on-write mechanism to achieve this. Instead of copying files, the copy-on-write strategy shares the same instance of data among multiple processes and copies the data only when a process needs to modify or write to it. All other processes continue to use the original data. Before any write is performed in a running container, a copy of the file to be modified is placed in the container's writable layer, and that is where the write takes place. Hence the name: copy-on-write.

This strategy optimizes both image disk space usage and the performance of container start times and works in conjunction with UnionFS.
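
The effect is easy to observe: start several containers from one image and compare their sizes. A hedged sketch with podman:

# Three containers, one shared set of image layers
for i in 1 2 3; do
  podman run -d --name cow$i registry.access.redhat.com/ubi8/ubi:8.1 sleep 1000
done

# The SIZE column reflects each container's writable layer only,
# not a full copy of the image filesystem
podman ps --size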

Overlay file system

An overlay sits on top of an existing filesystem, combines an upper and lower directory tree, and presents them as a single directory. These directories are called layers. The lower layer remains unmodified. Each layer adds only the difference (the diff, in computing terminology) from the layer below it, and this unification process is referred to as a union mount.

The lower directory, which holds the image layers, is called lowerdir, and the upper, writable directory is called upperdir. The final overlayed, or unified, view is exposed through a directory called merged.
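
You can reproduce a union mount by hand with nothing but the kernel's OverlayFS. A minimal sketch (run as root; the directory names are arbitrary):

# One read-only lower layer, one writable upper layer, plus the
# work and merged directories OverlayFS requires
mkdir lower upper work merged
echo "hello from the lower layer" > lower/a.txt

mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged

cat merged/a.txt                # the unified view serves the lower layer's file
echo "changed" > merged/a.txt   # the write triggers a copy-up into upperdir
ls upper/                       # a.txt now lives here; lower/a.txt is untouched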

[Figure: Layered file system]

Common terminology consists of these layer definitions:

  • Base layer is where the files of your filesystem are located. In terms of container images, this layer would be your base image.
  • Overlay layer is often called the container layer because all the changes made to a running container, such as adding, deleting, or modifying files, are written to this writable layer. It presents a union view of the Base and Diff layers, with the changes themselves stored in the Diff layer.
  • Diff layer contains all changes made in the Overlay layer. If you write to something that's already in the Base layer, the overlay file system copies the file to the Diff layer and then makes the modification you intended to write. This is copy-on-write.

Snapshotters

Container engines can build, manage, and distribute filesystem changes as layers using graph drivers. But working with graph drivers is complicated and error-prone. Snapshotters are different from graph drivers in that they have no knowledge of images or containers.

Snapshotters work much like Git: they maintain trees and track changes to those trees across commits. A snapshot represents a filesystem state. Snapshots have parent-child relationships that are modeled with a set of directories, and taking a diff between a parent and its child snapshot produces a layer.

The Snapshotter provides an API for allocating, snapshotting, and mounting abstract, layered file systems.
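
To poke at a snapshotter directly, containerd ships a ctr tool that exposes one. A sketch assuming a running containerd with its default overlayfs snapshotter:

# Pulling an image commits each of its layers as a snapshot
ctr images pull docker.io/library/alpine:latest

# Every snapshot represents a filesystem state
ctr snapshots ls

# The tree view shows the parent-child relationships between snapshots
ctr snapshots tree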

Wrap up

You now have a good sense of what container images are and how their layered approach makes containers portable. Next up, I'll cover container runtimes and internals.