Introduction to Containerized Applications with Docker

Ravindran Kugan
Jun 7, 2021


Containers (Photo by Erwan Hesry on Unsplash)

In the modern software world, enterprise-level applications are expected to run at all times, have minimal downtime, be accessible from anywhere, and be developed and deployed quickly. The way we deploy software has changed over the years, moving from monolithic applications to service-oriented architectures. And in this service-oriented world, containerized applications are all the talk nowadays. So in this article I will explain how we evolved toward containerized applications and give an introduction to the leader in containers, Docker.

A Brief History Lesson

Generation 1

Around five to six years ago, our applications were deployed on physical servers. You may be asking: aren't we still doing the same today, so what has changed? Imagine our application has 3 layers: the client-side layer used by the client, the business logic or backend that does the major processing, and finally a database layer where we store the data. To implement this type of system on a web-based architecture, we used to use 3 individual web servers (physical machines) to host the 3 layers/components. This brought in a lot of disadvantages.

  • Since a whole physical box is given to a single component, a lot of memory and processing power is wasted (a component rarely uses 100% of the physical box's power).
  • We need a separate network connection for each of our servers.
  • We need to buy and maintain an operating system license for each server.
  • The cost is high, as we are buying multiple OS licenses and physical boxes just to host our application.
  • Maintenance is hard, as we need to take care of 3 individual servers.

Even though we were able to host and run applications this way, the wasted capacity of the physical machines was too great, so developers needed a new way to host services over the internet. That is where the next generation comes in: the hypervisor.

Generation 2 (Hypervisor)

Some people call this the VMware generation, but that is a vendor name; it is better called the hypervisor generation. In this generation we still host our application on the web, but instead of having different physical machines for different components, we host all the components on one physical machine with the help of a hypervisor. A hypervisor, also called a virtual machine monitor (VMM), is software that creates and runs virtual machines (VMs). Look at the diagram below.

Hypervisor Simplified

As seen in the diagram above, we take a powerful hardware machine and install a hypervisor on top of it. With the hypervisor we can then create virtual machines that run our applications, and each virtual machine can have its own OS. For the example I gave in Generation 1, we can now implement the whole application on a single machine and still have spare processing power to work with. With the extra capacity we can run proxies or a load balancer for our application as well.

The hypervisor solved the major problem of the previous generation, the wasted capacity of the physical machine, but some problems remain.

  • Licensing costs for the operating systems that we are using (not Linux, of course 😅).
  • There is a lot of management work to be done: we need to patch and update each operating system that runs on a VM.
  • Starting a VM is not a quick process; we need to boot up its operating system, which takes time.
  • If we want to create a new VM, we need a license for the new OS and a lot of configuration work. This is not a two- or three-hour job; it takes a lot of time to create and run a new VM. (VM images can reduce the time to set up a new VM.)

Note: A VM image is a file containing a preconfigured virtual machine, with the OS and other software already set up. VM images can be used to quickly set up a VM in our hypervisor environment. Click here for more information.

We humans always strive for better things, and programmers are no different: they wanted a bit more and set out to get rid of the disadvantages of the hypervisor. For that we jumped to the next generation, containerized applications.

Containerized Applications

In order to rectify the problems of the hypervisor generation, we moved on to the next generation, containerized applications. So what is a container? According to Docker, the leader in containerization, a container is "a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another". In simple terms, containers help us run our components on a single operating system while isolating them from one another (one component does not depend on the health of another). Look at the diagram below.

Container Simplified

As shown in the diagram above, containers sit on top of the operating system, and we create containers that package and run our application components. Having only one OS brings in several advantages (a short sketch follows this list):

  • We solve the major problem of the previous generation: updating, patching, and licensing each operating system individually.
  • With only a single operating system, the development team can work on their application while the IT team doesn't need to worry about the versions and specifications the application needs to run on.
  • The space used to store a separate OS for every VM is freed when we use containers.
  • Containers work on top of an already-running operating system, so we don't waste any additional time booting one up.
  • Containers also make self-healing applications possible: a failed container can be replaced by a fresh one created from the same image.
  • Most cloud platform providers support containers.
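
To make the isolation and startup speed concrete, here is a minimal sketch using the Docker CLI (assuming Docker is installed; the alpine image tag and container names are just illustrative):

```
# Start two isolated containers that share the host's kernel.
# Each gets its own filesystem, process tree and network stack.
docker run -d --name comp-a alpine:3.19 sleep 3600
docker run -d --name comp-b alpine:3.19 sleep 3600

# Both start in seconds; there is no guest OS to boot.
docker ps

# A process listing inside comp-a shows only its own processes.
docker exec comp-a ps
```

If comp-a crashes, comp-b keeps running untouched; that isolation is what the self-healing bullet above relies on.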

Now that I have talked about the path we took to get to containers, let's take a look at the leading container platform, Docker.

Note: The container engine will be explained in the Docker section below.

Docker

Docker was created around the year 2010 as an internal project at the company dotCloud, which later renamed itself Docker, Inc. It was released in 2013 as an open-source project under the Apache 2.0 license. Even though Docker, Inc. is the major player behind Docker, it is still an open-source project and is not owned by Docker, Inc. alone. Docker is written in Google's programming language, Go. Look at the image below for the architecture of Docker.

Docker Simplified

This is how Docker normally works. The client is where we run the client-side commands, which are sent to the Docker daemon on the server side. The Docker daemon does the heavy lifting of building and running containers. The registry is where Docker images are stored. These processes are carried out with the help of the Docker engine (registries and images are explained below).

Note: The actual diagram is a little more complex, and I don't want to confuse newcomers with a lot of information. Also, the Docker client and the Docker daemon can be on separate machines or on the same machine.
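
A quick way to see this client/daemon split on a machine with Docker installed (a sketch; the remote address below is hypothetical):

```
# The client and the daemon report their versions separately,
# because they are two different programs talking over an API.
docker version

# Point the same client at a daemon on another machine.
export DOCKER_HOST=tcp://192.168.1.50:2375   # hypothetical remote daemon
docker ps   # now lists containers running on the remote machine
```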

Docker Engine (Container Engine)

The Docker engine is not the whole Docker project; it is one part of it. As I said before, the components mentioned above, as well as security and orchestration, are all built around the Docker engine. Let's take a look at some of the important components.

Registry

A registry is the place where we store and retrieve the Docker images used to create Docker containers. We can create our own private registry or use a public registry. Some of the most used public registries are:

  • Docker Hub
  • Google Container Registry
  • Amazon EC2 Container Registry

Where we host our images depends on the company's or individual's preference. Registries can also be installed on local machines, with the proper licenses acquired for them as well.

Note: The largest and most used Docker registry on the market is Docker Hub. More than 250 thousand user-created repositories can be accessed from Docker Hub.
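
Here is a sketch of the basic registry workflow (the image tags and the port are illustrative; registry:2 is the official image for running your own registry):

```
# Pull an image from the default public registry, Docker Hub.
docker pull nginx:1.25

# Run a private registry on the local machine.
docker run -d -p 5000:5000 --name my-registry registry:2

# Re-tag the image for the private registry and push it there.
docker tag nginx:1.25 localhost:5000/my-nginx:1.25
docker push localhost:5000/my-nginx:1.25
```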

Images

Some of you must be wondering what this image I speak of is; is it some sort of jpg or mpeg file? I am sorry to disappoint you: an image is a file with the instructions needed to create a container. Images are like templates; we can use them out of the box or change the configuration to our own specifications. This is done by writing a Dockerfile.
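
As a minimal sketch, here is what such a Dockerfile might look like for a hypothetical Node.js service (the base image and file names are illustrative):

```
# Each instruction below creates one cached layer of the image.
# Start from a public base image.
FROM node:20-alpine
WORKDIR /app
# Copy the dependency manifest first, so the npm install layer
# is reused from cache when only the application code changes.
COPY package.json .
RUN npm install
# Copy the rest of the application source.
COPY . .
# The command the container runs on start.
CMD ["node", "server.js"]
```

Building it with the command docker build -t my-app . produces an image that can be pushed to a registry or run as a container.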

Note: Layered images are part of what makes containers faster to build and ship than VMs. If we change some lines in our Dockerfile, only the affected layers are rebuilt, not the entire image.

Orchestration

In a real-life orchestra there is a conductor waving a baton, and the musicians play according to his directions. Likewise, an application we build may consist of many services in individual containers. These containers may need to talk to each other, exchange data, and work together to achieve a bigger goal. Just like a real orchestra, we need something to manage this complex task; that management is orchestration.

In a monolithic application we declare all the dependencies, and which component communicates with which, in one place. But when we use containers, each service is created in an isolated environment. Orchestration is the process that lets us say which service comes first, what its dependencies are, and the route the services should take to achieve the goal (see the Compose sketch after the list below).

The tools that help us with orchestration are known as orchestrators. The two open source orchestrators that are on the market are;

Kubernetes : originally developed by Google, now maintained by the Cloud Native Computing Foundation.

Docker Swarm : provided by the Docker project.
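
As a small sketch of what declaring dependencies looks like in practice, here is a hypothetical Docker Compose file (Compose is Docker's simple single-host orchestration tool; the service and image names are illustrative):

```
# docker-compose.yml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # demo-only credential
  api:
    image: my-api:1.0              # hypothetical application image
    depends_on:
      - db                         # tell the orchestrator: start db first
    ports:
      - "8080:8080"
```

Running docker compose up then starts the services in dependency order; full orchestrators like Kubernetes and Docker Swarm extend the same idea across many machines.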

Note : To read more about orchestration click here.

Now that we know the components of Docker and how each of them works, let's take a look at some finer points of Docker where people might go wrong.

Key points about Docker

  • Persistence of Docker : Some programmers think that Docker is not persistent, meaning that when we stop and restart a container all the changes we made are lost. This is false. Just like virtual machines keep their configuration after a restart, containers keep their configuration and the data in their writable layer across a stop and restart; Docker IS persistent in that sense. (The writable layer is only discarded when the container itself is removed; data that must outlive the container belongs in a volume, as shown in the sketch after this list.)
  • Migration to Docker : Programmers may say that containers are only for new applications and not for legacy applications. This can be true or false depending on the situation. When we jumped from the 1st generation to the 2nd generation (VMs), we could simply migrate. But with containers we build smaller services and combine them to achieve a common task (microservice architecture; an article will be coming soon about this 😉). An architecture built with containers provides much more resilient services than the older generations: if a container is failing, we can easily create a new container from its image and roll it out. We can rewrite and redesign legacy applications to fit this architecture and gain the full features and advantages of containers.
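
A small sketch of the persistence behaviour described above (container and volume names are illustrative):

```
# Changes inside a container survive a stop/start cycle.
docker run -d --name demo alpine:3.19 sleep 3600
docker exec demo sh -c 'echo hello > /data.txt'
docker stop demo
docker start demo
docker exec demo cat /data.txt    # prints: hello

# For data that must outlive the container itself, mount a volume.
docker run -d --name db -v mydata:/var/lib/data alpine:3.19 sleep 3600
docker rm -f db                   # the mydata volume still exists
```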

Open Container Initiative

After Docker, Inc. released the Docker project, another company, CoreOS, tried it out. They had some issues with its architecture, so they made their own modifications and released their own container runtime, naming it rkt ("Rocket" spelled in a cool way). Now there were two different frameworks using two different approaches, so in order to have a standardized framework in the market, the two parties came to an agreement and started the Open Container Initiative in June 2015.

This organization sets the rules and specifications for the containers used in the industry. The OCI moved all the vendor-specific code into the container engine. The OCI's position is that container-based applications should not be dependent on one vendor or on one single platform.

Note: To read more about container based applications click here.

Thanks for reading my long article. Just like some of you, I am also a newbie in the world of Docker; if anything I have mentioned is wrong, please drop me a note.

REFERENCES

This video by Krishantha Dinesh is the major source for my article. Please check it out if you have free time.


Written by Ravindran Kugan

Associate Software Engineer at Virtusa
