In-depth Docker


Date: January 19, 2024 Author: Walter Code

In our series of Docker blogs, we intend to walk you through deploying, managing, and extending Docker. This is the second blog post in the series: the first introduced you to the basics of Docker, and now it's time to talk about configuration management, components, and commands. After that, we will start to use Docker to build containers and services that perform a variety of tasks.

Docker with configuration management

Since Docker was announced, there has been a lot of discussion about where Docker fits alongside configuration management tools like Puppet and Chef. Docker includes an image-building and image-management solution. Modern configuration management tools were, in part, a response to the "golden image" model.

With golden images, you end up with massive and unmanageable image sprawl: large numbers of deployed, complex images in varying states of versioning. As your image use grows, you create randomness and exacerbate entropy in your environment. Images also tend to be heavy and unwieldy. This often forces manual changes, or layers of deviation and unmanaged configuration on top of images, because the underlying images lack the appropriate flexibility.

Compared to traditional image models, Docker is a lot more lightweight: images are layered, and you can quickly iterate on them. There is some legitimate argument to suggest that these attributes alleviate many of the management problems traditional images present.

Docker’s technical components

Docker can be run on any x64 host running a modern Linux kernel; we recommend kernel version 3.10 or later. It has low overhead and can be used on servers, desktops, or laptops. By running it inside a virtual machine, you can also deploy Docker on OS X and Microsoft Windows. It includes:

• A native Linux container format that Docker calls libcontainer.

• Linux kernel namespaces, which provide isolation for filesystems, processes, and networks. 

• Filesystem isolation: each container is its own root filesystem.

• Process isolation: each container runs in its own process environment. 

• Network isolation: separate virtual interfaces and IP addressing between containers. 

• Resource isolation and grouping: resources like CPU and memory are allocated individually to each Docker container using the cgroups, or control groups, kernel feature.

• Copy-on-write: filesystems are created with copy-on-write, meaning they are layered and fast and require limited disk usage. 

• Logging: STDOUT, STDERR, and STDIN from the container are collected, logged, and available for analysis or troubleshooting.

• Interactive shell: You can create a pseudo-tty and attach to STDIN to provide an interactive shell to your container.
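As a sketch, the isolation and shell features above map directly onto `docker run` flags. This example assumes a local Docker daemon and the `ubuntu` image; the container name is our own:

```shell
# Limit the container to 256 MB of RAM and half a CPU core (cgroups),
# and request an interactive shell: -i keeps STDIN open, -t allocates
# a pseudo-tty.
docker run --interactive --tty \
  --memory 256m \
  --cpus 0.5 \
  --name isolation-demo \
  ubuntu /bin/bash

# Inside the container, the namespaces described above are in effect:
# `ps` shows only the container's own processes, the root filesystem
# is the image's own, and the container has its own virtual network
# interface and IP address.
```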

Docker user interfaces 

You can also use a graphical user interface to manage Docker once you've got it installed. Currently, there are a small number of Docker user interfaces and web consoles available in various states of development, including:

• Shipyard – gives you the ability to manage Docker resources, including containers, images, hosts, and more, from a single management interface. It's open source, and the code is available from https://github.com/ehazlett/shipyard.

• Portainer – formerly known as UI for Docker, a web interface that allows you to interact with the Docker Remote API. It's written in JavaScript using the AngularJS framework.

• Kitematic – a GUI for OS X and Windows that helps you run Docker locally and interact with the Docker Hub. It's a free product released by Docker Inc.

Dockerfile commands

  • ADD – copies files from a source on the host into the container's own filesystem at the set destination.
  • CMD – can be used to execute a specific command within the container.
  • ENTRYPOINT – sets a default application to be used every time a container is created from the image.
  • ENV – sets environment variables.
  • EXPOSE – associates a specific port with the container to enable networking between the container and the outside world.
  • FROM – defines the base image used to start the build process.
  • MAINTAINER – defines the full name and email address of the image creator.
  • RUN – the central executing directive for Dockerfiles.
  • USER – sets the UID (or username) that is to run the container.
  • VOLUME – used to enable access from the container to a directory on the host machine.
  • WORKDIR – sets the path in which the command defined with CMD is to be executed.
  • LABEL – allows you to add a label to your Docker image.
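A minimal Dockerfile exercising most of the instructions above might look like this; the base image, package, user, paths, and email address are illustrative choices, not prescriptions:

```dockerfile
# Base image to start the build process from
FROM ubuntu:22.04

# LABEL has superseded MAINTAINER for image metadata
LABEL maintainer="jane@example.com"

# Environment variable available at build time and at runtime
ENV APP_HOME=/opt/app

# Executed during the build
RUN apt-get update && apt-get install -y --no-install-recommends python3

# Copy files from the build context into the image
ADD . ${APP_HOME}

# Mount point exposed to the host
VOLUME ["/opt/app/data"]

# Port the application listens on
EXPOSE 8080

# Run the container as an unprivileged user
USER nobody

# Directory in which ENTRYPOINT/CMD run
WORKDIR ${APP_HOME}

# Default application, with CMD supplying its default arguments
ENTRYPOINT ["python3"]
CMD ["app.py"]
```

You would build this with `docker build -t myapp .` and run it with `docker run myapp`; arguments passed to `docker run` after the image name replace the CMD defaults.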

The Docker filesystem layers

When Docker first starts a container, the initial read-write layer is empty. As changes occur, they are applied to this layer; for example, if you want to change a file, that file is copied from the read-only layer below into the read-write layer. The read-only version of the file still exists but is now hidden underneath the copy.
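A quick way to see this copy-up behaviour, assuming a local Docker daemon and the `ubuntu` image (the container name and modified file are our own choices):

```shell
# Start a container and modify a file that exists in the read-only
# image layer; the file is copied up into the read-write layer first.
docker run --name cow-demo ubuntu \
  /bin/sh -c 'echo "# local change" >> /etc/profile'

# docker diff lists only what landed in the container's read-write
# layer, relative to the image: changed (C), added (A), deleted (D).
docker diff cow-demo
```

Only the touched paths appear in the diff; the rest of the filesystem is still served unchanged from the shared read-only layers, which is why containers start quickly and use limited disk space.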

Docker Compose and Swarm

In addition to solitary containers, we can also run Docker containers in stacks and in clusters, which Docker calls swarms. The Docker ecosystem contains two more tools:

• Docker Compose – allows you to run stacks of containers to represent application stacks; for example, web server, application server, and database server containers running together to serve a specific application. Docker Compose is currently available for Linux, Windows, and OS X. It can be installed directly as a binary, via Docker for Mac or Windows, or via a Python pip package.

• Docker Swarm – allows you to create clusters of containers, called swarms, on which you can run scalable workloads. Swarm has shipped integrated with Docker since Docker 1.12; prior to that, it was a standalone application licensed under the Apache 2.0 license.
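As an illustration of the web/application/database stack mentioned above, a minimal Compose file might look like this; the service names and images (and the hypothetical `my-app:latest` image in particular) are our own, not from the original post:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine          # web server
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: my-app:latest         # application server (hypothetical image)
    environment:
      DB_HOST: db
    depends_on:
      - db
  db:
    image: postgres:15           # database server
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker-compose up -d` in the directory containing this file brings the three containers up together as one stack.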

Conclusion: