Docker and the basics of Linux containers

Docker is a popular tool that allows developers to deploy and run applications across multiple computers regardless of their particular configuration. Docker manages to run the same application on different systems by using containers. Containers pack up applications along with the other pieces they need, such as dependencies and libraries. This way we can be sure the program will run even on machines with different settings than the one used to create it.

What the hell are Linux containers exactly?

Linux containers are sets of processes that run isolated from the rest of the system. Their purpose is to allow us to develop faster and adjust to business changes on the go. Let’s say you are building an application on your laptop, which uses a specific configuration with a specific set of tools installed on your system. Then you pass your code to a QA engineer to run it and test that it behaves as expected. This second person will probably have a different configuration, whilst your application was dependent on your machine’s libraries, dependencies, files etc.

To pass your code through quality assurance properly, you will deploy your app on the QA engineer’s machine using containers that carry the necessary elements your code needs to run.

Docker is not a virtual machine

Docker does not create a whole virtual OS. Instead, it allows applications to share the host’s Linux kernel by spinning up containers. This is quite important since it entails a sharp difference in performance compared to creating virtual machines.

The magic of Docker is pretty much that it runs on the OS of the host computer. Virtualisation lets you host different operating systems on the same hardware. Containerisation, on the other hand, simply runs processes on the same OS. Moreover, virtualisation depends on a hypervisor to emulate hardware, which requires considerably more resources than containerisation because containers don’t need this hardware emulation.

Why is Docker useful in software testing?

In software development there are many reasons why Docker is fabulous, but in software testing I would summarise Docker’s advantages in two: isolation and control. When using containers you don’t need to code customised scripts to deploy your app or to set up its fixtures. You can just create a Docker image with the necessary libraries and dependencies.
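As a minimal sketch of what "an image with the necessary libraries and dependencies" can look like, here is a hypothetical Dockerfile for a Python test environment (the base image, file names and default command are all illustrative assumptions, not something from this article):

```dockerfile
# Hypothetical Dockerfile for a Python application under test
FROM python:3.11-slim          # base image with the interpreter preinstalled
WORKDIR /app
COPY requirements.txt .        # pin the exact libraries the app depends on
RUN pip install -r requirements.txt
COPY . .                       # bundle the application code itself
CMD ["pytest"]                 # run the test suite by default
```

Building this once gives every tester the same environment, regardless of what is installed on their own machine.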

Also, containerisation does not interfere with the software’s functionality, so testers can keep their initial approach to functional testing. With Docker, QA engineers create a predictable and reproducible testing environment. The number of environmental variables we have to test is considerably reduced because we are able to bundle all configuration data into a container. This is pretty attractive for testers, since we can be sure, to a certain extent, that the application behaves during testing much as it would in a production environment.

Docker is also beneficial when we enter the beautiful realm of test automation. Automation is a crucial part of QA and we can get the most out of it with Docker since it sharply reduces infrastructure dependencies. 

Docker basic commands

One cool thing about Docker is that it greatly simplifies working with containers. Therefore, we could claim that there is virtually no technical entry barrier to using it besides some familiarity with the command line.

Here I will go over some basic Docker commands to illustrate how convenient it is to use. For a detailed Docker installation guide you can refer to the official documentation.

Docker pull

With this command you fetch a particular Docker image from a registry (Docker Hub by default) so you can run it on your system.


docker pull <image name>
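For example, pulling a public image from Docker Hub looks like this (the image name and tag here are just an illustration):

```shell
# Pull a specific tagged version of an image from Docker Hub
docker pull nginx:1.25

# Omitting the tag pulls the image tagged "latest" by default
docker pull nginx
```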

Docker images

This command gives you a full list of the images stored on your system; an image being listed does not mean any container is currently running it.


docker images

Docker run

docker run, as its name indicates, runs your application inside a container. The run command tells Docker to create a container from the image you have specified and run a command inside it.


docker run <image name>

Pretty simple, right? You can also add flags and extra arguments to this command to constrain how your code behaves. One thing to highlight here is that the whole process is super fast compared to virtualisation! To see the overwhelming difference in time and performance, just create a virtual machine, run a command and then kill it.
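To give a hedged illustration of flags and arguments (the images, ports and container name below are hypothetical examples, not requirements):

```shell
# Run a web server detached (-d), map host port 8080 to container port 80 (-p),
# and give the container a readable name (--name)
docker run -d -p 8080:80 --name web nginx

# Pass an extra argument: run a one-off command instead of the image's default
docker run ubuntu echo "hello from a container"
```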

Docker ps

As you might have guessed, this command will output a detailed list of all running Docker containers.


docker ps

You can also use the -a flag to list every container you have run. The output will consist of both currently running and exited Docker containers. You can check the STATUS column to see whether a container is still running or has already exited.
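In practice the two variants look like this (the column names in the comment reflect docker ps output format; exact contents depend on what you have run):

```shell
# List only running containers
docker ps

# List all containers, including ones that have already exited
docker ps -a

# Columns include: CONTAINER ID, IMAGE, COMMAND, CREATED, STATUS, PORTS, NAMES
```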

Cleaning up containers

It is good practice to delete your containers once you are done using them. To delete a container, use docker rm, providing the ID of the container to be deleted as an argument.


docker rm <container id>

Removing containers one by one might seem a little cumbersome. I usually delete Docker containers in bundles by stopping them first. I can accomplish this with two commands:

docker stop $(docker ps -a -q) 

This will stop all your running containers. We need the -q flag to output only the containers’ IDs, since the stop command only accepts container IDs.

docker container prune

This will remove all stopped containers.  

A useful habit is to include the --rm flag in your docker run commands. This will automatically remove a container once the process running inside it has finished!
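As a quick sketch (the image and command are illustrative):

```shell
# --rm removes the container automatically when its main process exits,
# so there is nothing left to clean up with docker rm afterwards
docker run --rm ubuntu echo "this container cleans up after itself"
```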


That is all I have for Docker basics. As you can see, Docker is a sharp tool with considerable benefits for development and business. If you feel like getting your hands dirty, go ahead and install Docker on your favourite OS and play with these commands. And if you are curious, run a couple of processes using virtualisation and compare!