Traditional Virtual Machines

Before discussing Docker, let's look at the traditional approach to virtualization, the problems most application developers used to confront with it, and how Docker solves those problems.

In the past (and even now in some places), a developer who wanted a virtualized environment of a Linux machine (for example) to run or test an application had to install and configure a hypervisor on top of the existing infrastructure. The workflow went like this: spin up a new virtual machine (VM) using the hypervisor, install Linux (or any other OS) in that VM, and then install all the required binaries and libraries on that Linux instance. Only after all these installations and configurations could the application run in a Linux environment.

Using this approach for running applications has some drawbacks:

  • You have to spin up a whole new virtual machine and install a (possibly licensed) OS just to run a single application.
  • You have to configure and install all the binaries and libraries for your application manually each time.
  • VMs add a lot of overhead to your computing power.
  • Although VMs are portable across different hypervisors and are completely isolated from the host machine, their large file sizes make them cumbersome to move around.

Another common problem that most configuration managers used to confront is summed up by the phrase: "It works on my machine but not on yours!"

Docker helps us resolve these virtualization issues. Let's start with an overview of Docker's approach to virtualization and look at Docker's different components and how they work.

What is Docker?

Docker is a container-based system. In Docker, applications run in containers. Now the question is: why containers? The answer is simple. In the real world, when a country wants to export goods to another country, they pack them into a container and then ship that container to the other country by road or through a dock.

Within a container, every item is packed so that it is not damaged by small jolts, and everything stays safely in place regardless of where the container is moving.

The same analogy applies to Docker for running and maintaining distributed applications. Docker packs your application into a container with all of its binaries, libraries, and dependencies, and makes it fully isolated from the external environment regardless of where it is running. After building a container for our application, we can ship that container anywhere: to another OS, to a Docker registry (such as Docker Hub), or to the cloud (such as Microsoft Azure Container Service).

If we compare a Docker container with a traditional VM, we see that Docker saves us from needing a guest OS. We don't need any guest OS in order to run or test our application; containers share the host's kernel instead.
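As a quick illustration (a sketch, assuming Docker is already installed), running an application inside a container takes a single command; there is no guest OS installation step anywhere:

```shell
# Pull the ubuntu:16.04 image (if not cached) and run a command inside
# a fresh container -- Docker fetches image layers, not a full OS install:
docker run ubuntu:16.04 echo "Hello from inside a container"

# The container shares the host's kernel, as uname confirms:
docker run ubuntu:16.04 uname -s
```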

Now let's understand some of the building blocks of Docker so that we can work with it efficiently.

Docker, in general, is composed of 5 core components:

  • Docker Image and Dockerfile
  • Docker Daemon
  • Docker Client
  • Docker Host
  • Docker Registry and Docker Hub

We interact with Docker through the Docker Client: we type into it whatever command we wish to run for manipulating containers. The Docker Daemon is the component that manages the Docker images and containers on our local machine. It manages and manipulates containers using the commands it receives from the Docker Client.
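To see this client/daemon split in action, here are a couple of everyday commands (a sketch, assuming a working Docker installation):

```shell
# The client sends this request to the daemon, which reports version
# information for both sides of the connection:
docker version

# The client asks the daemon to list all containers it is managing:
docker ps -a

# Create, start, and attach to a container -- all the real work happens
# in the daemon; the client only relays the command and streams I/O:
docker run -it ubuntu:16.04 /bin/bash
```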

On Windows, you have to install Docker Toolbox, which comes bundled with VirtualBox. It installs a VM with a small Linux kernel sufficient for running the Docker Host. You then configure your shell (Command Prompt, PowerShell, or Bash) as the Docker Client.

Docker containers and the Docker Daemon sit inside the Docker Host, which runs Docker on our system. We use our system's terminal to connect through the Docker Client.
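With Docker Toolbox, the Linux VM acting as the Docker Host is managed with the `docker-machine` tool; a sketch of that workflow looks like this:

```shell
# List the VMs that can act as Docker Hosts:
docker-machine ls

# Start the "default" VM that Docker Toolbox creates:
docker-machine start default

# Print the environment variables that point the Docker Client at the
# VM's daemon, then load them into the current shell session:
docker-machine env default
eval "$(docker-machine env default)"
```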

As we said, applications run in containers. We create a Docker container for our application from a Docker image of that application, and we build that image by writing a Dockerfile. A Dockerfile is the recipe or blueprint of a Docker image: it is composed of the instructions used to build the image.
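For instance, a minimal Dockerfile for a hypothetical Node.js application might look like this (the base image tag, the file name `app.js`, and port 3000 are all assumptions for illustration):

```dockerfile
# Start from an official base image on Docker Hub.
FROM node:6

# Set the working directory and copy the application source into the image.
WORKDIR /usr/src/app
COPY . .

# Install the application's dependencies inside the image.
RUN npm install

# Document the port the app listens on and set the startup command.
EXPOSE 3000
CMD ["node", "app.js"]
```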

After writing a Dockerfile and building a Docker image of our application, we can push that image to a Docker registry to distribute our application either privately or publicly. The default Docker registry is Docker Hub, where we can find different Docker images, both official images and those from third-party sources.
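The build-and-push workflow looks roughly like this (`myusername/myapp` is a placeholder for your own Docker Hub repository name):

```shell
# Build an image from the Dockerfile in the current directory
# and tag it with a repository name and version:
docker build -t myusername/myapp:1.0 .

# Log in to Docker Hub, the default registry:
docker login

# Push the tagged image so others can pull and run it:
docker push myusername/myapp:1.0
```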

Docker officially released "Docker for Windows", which no longer needs VirtualBox or a Toolbox-managed VM for the Linux kernel. It runs a small Alpine Linux-based VM for you on Windows using Hyper-V.