Dockerizing Your Hello World Application
Learn how to dockerize your Hello World application in this article by Yogesh Raheja, a certified DevOps and cloud expert with a decade of IT experience.
Docker and containers, in general, are very powerful tools worth exploring. By combining the resource isolation features of the Linux kernel with a union-capable filesystem, Docker allows for the creation of packages called containers, which include everything that is needed to run an application. Containers, like virtual machines, are self-contained, but they virtualize the OS itself, instead of virtualizing the hardware.
In practice, this makes a huge difference. Starting a virtual machine, such as an EC2 instance, takes time. This comes from the fact that in order to start a virtual machine, the hypervisor (that's the name of the technology that creates and runs virtual machines) has to simulate all the motions involved in starting a physical server, loading an operating system, and going through the different run-levels. In addition, virtual machines have a much larger footprint on the disk and in the memory.
With Docker, the added layer is hardly noticeable and the size of the containers can stay very small. In order to better illustrate this, we will first install Docker and explore its basic usage a bit.
Docker in action
To see Docker in action, we will start by installing it on our computer. The installation of Docker is very straightforward; you can follow the instructions found at http://dockr.ly/2iVx6yG to install and start Docker on Mac, Linux, and Windows. Docker provides two offerings: Docker Community Edition (CE) and Docker Enterprise Edition (EE).
We will use Docker CE on a Linux-based CentOS 7.x distribution in this article. If you want to use the same operating system, then follow the instructions available at https://docs.docker.com/install/linux/docker-ce/centos/ to set up Docker locally on your system. When you are done with the installation of Docker CE, verify the installed Docker version using the docker utility:

$ docker --version
Docker version 18.06.1-ce, build e68fc7a
Once Docker is up and running, we can start using it as follows:
1. The first thing that we will do is pull an image from a registry. By default, Docker points to Docker Hub (https://hub.docker.com), which is the official Docker registry from the company Docker Inc. In order to pull an image, we will run the following command:
$ docker pull alpine
Since we did not specify a tag, Docker will use the default latest tag, as follows:

Using default tag: latest
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
2. In a matter of seconds, Docker will download the image called alpine from the registry. This is a minimal Docker image based on Alpine Linux with a complete package index, and it is only 4.41 MB in size:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest 11cd0b38bc3c 2 months ago 4.41 MB
When working with Docker, the size of a container matters. Consequently, working with smaller base images, such as Alpine Linux, is highly recommended.
3. We can now run our container. In order to do this, we will start with the following simple command:
$ docker run alpine echo "Hello World"
Hello World
4. On the surface, not a lot seems to have happened here. However, what really happened behind the scenes is a lot more interesting: Docker loaded the alpine Linux image that we previously pulled, and used the Alpine operating system's echo command to print Hello World. Finally, because the echo command completed, the container was terminated.
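One detail worth knowing: the terminated container is kept on disk until it is removed. Passing the --rm flag to docker run deletes such one-shot containers automatically as soon as their command exits. A minimal sketch (the run_once wrapper is our own, purely illustrative; it assumes the docker CLI is on your PATH):

```shell
# Illustrative helper (not a Docker command): run a one-shot container
# and let Docker remove it automatically when the command exits.
run_once() {
  docker run --rm alpine echo "Hello World"
}

# usage: run_once
```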
Containers can also be used in a more interactive way, as follows:
· We can, for example, start a shell and interact with it using the following command:
$ docker run -it alpine /bin/sh
The -i option means interactive; this allows us to type commands in our container, while the -t option allocates a pseudo-TTY so that we can see what we are typing as well as the output of our commands.
· Containers can also be run in the background using the -d option, which will detach our container from the Terminal:

$ docker run -d alpine sleep 1000
c274537aec04d08c3033f45ab723ba90bcb40240d265851b28f39122199b0600

This command returns the 64-character ID of the container running the alpine image and the sleep 1000 command.
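A note on that long identifier: docker ps abbreviates it to its first 12 characters, and any unambiguous prefix is accepted by commands such as docker stop. A quick shell sketch using the ID from the output above:

```shell
# The full 64-character ID returned by `docker run -d` (from the output above)
full_id=c274537aec04d08c3033f45ab723ba90bcb40240d265851b28f39122199b0600

# `docker ps` displays only the first 12 characters of this ID
short_id=$(printf '%.12s' "$full_id")
echo "$short_id"   # prints c274537aec04
# Any unique prefix of the ID works too, e.g.: docker stop "$short_id"
```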
· We can keep track of the different running containers using the following command:
$ docker ps
This lists each running container, along with its ID, image, command, status, and an automatically generated name (such as friendly_dijkstra).
· Running containers can be stopped using the stop option followed by the container name or ID (adapt the ID and name based on the output of your docker ps command):

$ docker stop c274537aec04
c274537aec04
You can also use the following command:
$ docker stop friendly_dijkstra
friendly_dijkstra
· Stopped containers can be started again with the start option, as follows:

$ docker start friendly_dijkstra
friendly_dijkstra
· Finally, containers can be removed using the rm command, but always stop a container before removing it:

$ docker stop <ID/NAME>
$ docker rm <ID/NAME>

Each command echoes back the ID or name of the container it acted on.
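Those two steps are easy to bundle into a small helper. The remove_container function below is our own sketch, not a Docker command; it assumes the docker CLI is on your PATH:

```shell
# Illustrative helper: stop a container (silencing errors if it is
# already stopped), then remove it. Takes a container ID or name.
remove_container() {
  docker stop "$1" >/dev/null 2>&1
  docker rm "$1"
}

# usage: remove_container friendly_dijkstra
```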
Running simple commands through containers is sometimes useful but, as we know, the real strength of Docker is its ability to handle any code, including our web application. In order to make that happen, we will use another key concept of Docker: a Dockerfile.
Creating our Dockerfile
Dockerfiles are text files, usually co-located with applications, that instruct Docker on how to build a new Docker image. Through the creation of those files, you have the ability to tell Docker which Docker image to start from, what to copy onto the container filesystem, what network port to expose, and so on. You can find the full documentation of the Dockerfile at http://dockr.ly/2jmoZMw. We are going to create a Dockerfile for our Hello World application using the following commands:
$ cd helloworld
$ touch Dockerfile
The first instruction of a Dockerfile is always a FROM instruction. This tells Docker which Docker image to start from. We could use the Alpine image, as we did, but we can also save some time by using an image that has more than just an operating system.
Through Docker Hub, the official Docker registry, Docker provides a number of curated sets of Docker repositories called official images. We know that in order to run our application, we need Node.js and npm. We can use the Docker CLI to look for an official node image. To do this, we will use the docker search command and filter only on official images:

$ docker search --filter=is-official=true node
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
node Node.js is a JavaScript-based platform for s… 6123 [OK]
Alternatively, we can also search for this using our browser. As a result, we would end up with that same image, https://hub.docker.com/_/node/. As the page shows, the image comes in a variety of versions (tags).
Docker images are always made up of a name and a tag, using the syntax name:tag. If the tag is omitted, Docker will default to latest. From the preceding docker pull command, we can see how the output says Using default tag: latest. When creating a Dockerfile, it is best practice to use an explicit tag that doesn't change over time (unlike the latest tag).
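Purely as an illustration of the name:tag convention (Docker parses image references internally; this is not Docker code), the split can be mimicked with plain shell string operations:

```shell
image_ref="node:carbon"
name=${image_ref%%:*}   # text before the colon:  node
tag=${image_ref#*:}     # text after the colon:   carbon
echo "$name $tag"       # prints: node carbon
```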
On the first line of our file, we will add the following:
FROM node:carbon
This will tell Docker that we want to use that specific version of the node image. This means that we won't have to install node or npm. Since we have the OS and runtime binaries needed by our application, we can start looking into adding our application to this image. First, we will want to create a directory on top of the node:carbon image's filesystem, to hold our code. We can do that using the RUN instruction, as follows:
RUN mkdir -p /usr/local/helloworld/
We now want to copy our application files onto the image. We will use the COPY directive to do that:
COPY helloworld.js package.json /usr/local/helloworld/
Make sure that you copy the helloworld.js and package.json files into the helloworld project directory where you are developing the Dockerfile locally. The files are available at https://github.com/yogeshraheja/helloworld/blob/master/helloworld.js and https://github.com/yogeshraheja/helloworld/blob/master/package.json.
We will now use the WORKDIR instruction to set our new working directory to be that helloworld directory:
WORKDIR /usr/local/helloworld/
We can now run the npm install command to download and install our dependencies. Because we won't use that container to test our code, we can just install the npm packages needed for production, as follows:
RUN npm install --production
Our application uses port 3000. We need to make this port accessible to our host. In order to do that, we will use the EXPOSE instruction:
EXPOSE 3000
Finally, we can start our application. For that, we will use the ENTRYPOINT instruction:
ENTRYPOINT [ "node", "helloworld.js" ]
We can now save the file. It should look like the template at https://github.com/yogeshraheja/helloworld/blob/master/Dockerfile. We can now build our new image.
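Assembled from the instructions above, the complete Dockerfile reads as follows:

```dockerfile
FROM node:carbon
RUN mkdir -p /usr/local/helloworld/
COPY helloworld.js package.json /usr/local/helloworld/
WORKDIR /usr/local/helloworld/
RUN npm install --production
EXPOSE 3000
ENTRYPOINT [ "node", "helloworld.js" ]
```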
Back in the Terminal, we will use the docker command, but this time with the build argument. We will also use the -t option to give our image the name helloworld, followed by a dot (.) that indicates the location of our Dockerfile:
$ docker build -t helloworld .
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM node:carbon
carbon: Pulling from library/node
f189db1b88b3: Pull complete
3d06cf2f1b5e: Pull complete
687ebdda822c: Pull complete
99119ca3f34e: Pull complete
e771d6006054: Pull complete
b0cc28d0be2c: Pull complete
9bbe77ca0944: Pull complete
75f7d70e2d07: Pull complete
Digest: sha256:3422df4f7532b26b55275ad7b6dc17ec35f77192b04ce22e62e43541f3d28eb3
Status: Downloaded newer image for node:carbon
 ---> 8198006b2b57
Step 2/7 : RUN mkdir -p /usr/local/helloworld/
 ---> Running in 2c727397cb3e
Removing intermediate container 2c727397cb3e
 ---> dfce290bb326
Step 3/7 : COPY helloworld.js package.json /usr/local/helloworld/
 ---> ad79109b5462
Step 4/7 : WORKDIR /usr/local/helloworld/
 ---> Running in e712a394acd7
Removing intermediate container e712a394acd7
 ---> b80e558dff23
Step 5/7 : RUN npm install --production
 ---> Running in 53c81e3c707a
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN helloworld@1.0.0 No description
up to date in 0.089s
Removing intermediate container 53c81e3c707a
 ---> 66c0acc080f2
Step 6/7 : EXPOSE 3000
 ---> Running in 8ceba9409a63
Removing intermediate container 8ceba9409a63
 ---> 1902103f865c
Step 7/7 : ENTRYPOINT [ "node", "helloworld.js" ]
 ---> Running in f73783248c5f
Removing intermediate container f73783248c5f
 ---> 4a6cb81d088d
Successfully built 4a6cb81d088d
Successfully tagged helloworld:latest
As you can see, each command produces a new intermediate container with the changes triggered by that step. We can now run our newly created image to create a container with the following command:
$ docker run -p 3000:3000 -d helloworld
e47e4130e545e1b2d5eb2b8abb3a228dada2b194230f96f462a5612af521ddc5
Here, we have added the -p option to our command to map the exposed port of our container to a port on our host. There are a few ways to validate that our container is working correctly. We can start by looking at the logs produced by our container (replace the container ID with the output of the previous command):

$ docker logs e47e4130e545e1b2d5eb2b8abb3a228dada2b194230f96f462a5612af521ddc5
Server running
We can also use the docker ps command to see the status of our container:
$ docker ps
This time, the output also shows the port mapping of our helloworld container (0.0.0.0:3000->3000/tcp).
And, of course, we can simply test the application with the curl command:

$ curl localhost:3000
Hello World
Also, if your host has a public IP, then you can even verify the output in the browser at <ip:exposedport>, which in this case is 54.205.200.149:3000.
Finally, kill the container using the docker kill command and container ID:

$ docker kill e47e4130e545
e47e4130e545
Since our image is working correctly, we can commit the code to GitHub:
$ git add Dockerfile
$ git commit -m "Adding Dockerfile"
$ git push
In addition, you can now create an account (for free) on Docker Hub and upload that new image. If you want to give it a try, you can follow the instructions at http://dockr.ly/2ki6DQV.
Having the ability to easily share containers makes a big difference when collaborating on projects. Instead of sharing code and asking people to compile or build packages, you can actually share a Docker image. For instance, this can be done by running the following:
docker pull yogeshraheja/helloworld
Docker will then download the image layers from Docker Hub, just as it did for the alpine image earlier.
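Publishing your own copy works the other way around, with docker tag and docker push. A sketch, where yourusername stands in for your Docker Hub account name, and the push_image helper is our own, purely illustrative:

```shell
# Illustrative helper: tag the local helloworld image under a Docker Hub
# account name, then push it. "$1" is the account name ("yourusername"
# below is a placeholder, not a real account).
push_image() {
  docker tag helloworld "$1/helloworld"
  docker push "$1/helloworld"
}

# usage: push_image yourusername
```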
You can experience the Hello World application exactly the way I see it, no matter what your underlying architecture is. This new way of running applications makes Docker a very strong solution for sharing work or collaborating on projects.
If you found this article interesting, you can explore Effective DevOps with AWS — Second Edition to scale and maintain outstanding performance in your AWS-based infrastructure using DevOps principles. Effective DevOps with AWS — Second Edition will help you to understand how the most successful tech start-ups launch and scale their services on AWS, and will teach you how you can do the same.