What I have learned about Docker in the past few days

In the past few days I have mostly been reading about Docker, trying to understand it better, seeing how I can use it and how I can fit it into my workflow, and I still have a lot to learn.

In my learning journey I depend on reading articles (a lot of articles), checking the Docker documentation, and watching online courses and even YouTube videos, and trust me, not all of them are worth the time I spend, because they are either old or don't cover the situation I'm trying to achieve in my head. I'm not saying they are bad, but everyone knows that Docker's release cycle is fast, and each new version introduces big breaking features that don't work with older versions, so if you don't follow up quickly you will lose track. The same applies to any article you read: if it talks about version 1.11 while you are working with 1.12, you may get lucky and still find the solution to the issue you are facing, or you may not.

Now, enough with this long talk (which is debatable by the way); let's focus on what I have learned in the past few days and how I got what I wanted quickly.

Basically, I wanted to have a small API + database setup based on Docker images: an API service talking to a database service.


Now to do so, you need to create a docker-compose.yml file to define each service, and here we have two: one for the API and one for the database. My API service depends on Node.js, so you can either build your own base image or just pull someone else's image. For me, since I'm learning, I chose to build my own image (and please don't use it, it's built on an old Node.js). I also wanted to make the Dockerfile for the API somewhat cleaner, so that I don't need to rebuild the base image every time I build the API. So rule number one: always try to have a base image that you can reuse all the time. Now that we have the base image for my API, I needed to decide what to do for the DB; there I chose the other option and pulled the official base image.
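I'm not showing my actual base image here, but as a rough sketch, a self-built Node.js base image could look something like the following (the Ubuntu version and the way Node.js is installed are assumptions for illustration, not what I actually used):

    # Sketch of a self-built Node.js base image (illustrative only)
    FROM ubuntu:16.04

    # Install Node.js and npm from the distro packages; this gives an old
    # Node.js, which is exactly why I said not to reuse my image
    RUN apt-get update \
        && apt-get install -y nodejs npm \
        && rm -rf /var/lib/apt/lists/* \
        && ln -sf /usr/bin/nodejs /usr/bin/node

Once an image like this is built and tagged (for example as my-node-base), every API build can simply start from it instead of reinstalling Node.js each time.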

Now that we have agreed on the base images, let's see how to build our own image for the API first.

Let's see first what my Dockerfile does, step by step (a sketch of the full file comes right after the list):


  1. Exposed port 3000 from the image.
  2. Chose the directory /app as the working directory.
  3. Created an environment variable NODE_ENV.
  4. Copied the package.json file into the /app directory.
  5. Ran npm install to install all the packages inside the image.
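Put together, the Dockerfile looks roughly like this (a sketch reconstructed from the five steps above; the base image name and the NODE_ENV value are assumptions):

    # API Dockerfile, reconstructed from the steps listed above.
    # The base image name and the NODE_ENV value are assumptions.
    FROM my-node-base:latest

    # 1. Expose port 3000 from the image
    EXPOSE 3000
    # 2. Use /app as the working directory
    WORKDIR /app
    # 3. Environment variable (value assumed)
    ENV NODE_ENV development
    # 4. Copy package.json into /app
    COPY package.json /app
    # 5. Install the dependencies inside the image
    RUN npm install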

As we can see, the content is simple and easy to understand. Now for the first part of the docker-compose.yml, which covers the API service.

It's just a simple file where I define some other information and the volumes that I will share between my image and my computer.

I'm not going to explain it in much detail, but you should notice the command part, where I run node server.js to start my API, and the ports part, where I map port 3000 on my host to port 3000 in the container.
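Based on that description, this first part could look something like the following (a sketch only; the service name, the build path and the volume path are assumptions):

    # Sketch of the API part of docker-compose.yml (paths and names assumed)
    version: "2"
    services:
      api:
        # built from the Dockerfile shown earlier
        build: ./api
        command: node server.js
        ports:
          - "3000:3000"
        volumes:
          # share the source code between my computer and the container
          - ./api:/app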

Running docker-compose up --build worked like a charm, but of course the API would hit some problems since I hadn't set up the database image yet. Luckily for me, I had a Vagrant box running for the database, because I need to make sure each part works like a charm before I build the next one.

With docker-compose you can scale a service and run more than one container from the same image using the scale command, so I thought to myself: why not, let's try it and see how we can scale this simple API service. What I thought this would result in is several API containers running side by side from the same image.

So I ran the command docker-compose scale api=4 and it failed. So rule number two: never map host ports if you want to scale your service. The solution to my problem was to remove the ports part from my docker-compose.yml file, but then I had no idea which port would be mapped from my host to the container, because each time I would get a different random port. After searching and reading again, I found that the only way to make it work is to put a load balancer in front of my containers and make it the main communication channel between me and them: all requests go to the load balancer, and it forwards them to one of the API containers.
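In compose terms, the fix from rule number two is just dropping the host port mapping from the API service (a sketch, continuing the assumed service definition from above, only the api service shown):

    services:
      api:
        build: ./api
        command: node server.js
        # no "ports:" mapping any more: with a fixed "3000:3000" mapping,
        # docker-compose scale api=4 fails because every container would try
        # to bind host port 3000; expose keeps the port reachable only
        # inside the Docker network
        expose:
          - "3000"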

After some searching I found out that I can use an HAProxy image as my load balancer, so I updated the docker-compose.yml file. The changes are listed below, followed by a sketch of the resulting file.

The things that changed are:

  1. Added a new environment variable TCP_PORTS, which tells HAProxy that the communication with my API service will be over TCP, not HTTP.
  2. Mapped the local ports 80 and 443 to port 3000 in the load balancer container, so any request over port 80 or 443 is redirected to port 3000, which is the port my API service listens on.
  3. Mounted /var/run/docker.sock into the load balancer container, so it can talk to the Docker daemon and keep track of the containers it balances.
  4. Created a new network so that both services are on the same network and can communicate without any problem.
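Putting those changes together, the file could look roughly like this (a sketch; I'm assuming the dockercloud/haproxy image, since the TCP_PORTS variable and the docker.sock mount match the way it is configured, and the service and network names are placeholders):

    # Sketch of the updated docker-compose.yml with a load balancer in front
    version: "2"
    services:
      api:
        build: ./api
        command: node server.js
        environment:
          # tell HAProxy to forward port 3000 as plain TCP
          - TCP_PORTS=3000
        expose:
          - "3000"
        networks:
          - back-tier
      lb:
        # load balancer image assumed; TCP_PORTS and the docker.sock
        # mount match how dockercloud/haproxy is configured
        image: dockercloud/haproxy
        links:
          - api
        ports:
          # host ports 80 and 443 both end up on HAProxy's TCP port 3000
          - "80:3000"
          - "443:3000"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        networks:
          - back-tier
    networks:
      back-tier:
        driver: bridge

With this setup the load balancer is the only published entry point, so the API containers can be scaled freely without fighting over host ports.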

Running docker-compose scale api=4 now worked like a charm and scaled the service as requested. So rule number three: read about the tools you are using and check what options they provide.

Now the last part is adding the database service, which is not a big deal, just one more service added to the docker-compose.yml file, so our infrastructure becomes the load balancer in front, the API containers behind it, and a database service next to them.

If you look closely you will notice that I added the database to the load balancer too, and that's because in my head I didn't want to deal with the database directly, but then I remembered that I do need to access it via Sequel Pro, so the final docker-compose.yml is sketched below.

As you can notice, we linked both the API and the database to the load balancer service.
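Reconstructed from that description, the final file could look something like this (a sketch only; MySQL is assumed from the post's tags and the Sequel Pro mention, the names, ports and password are placeholders, and this is one plausible wiring, not my exact file):

    # Sketch of the final docker-compose.yml: API + database behind the load balancer
    version: "2"
    services:
      api:
        build: ./api
        command: node server.js
        environment:
          - TCP_PORTS=3000
        expose:
          - "3000"
        networks:
          - back-tier
      db:
        # database engine assumed to be MySQL
        image: mysql:5.7
        environment:
          - MYSQL_ROOT_PASSWORD=secret
          # forward the MySQL port as TCP through the load balancer as well
          - TCP_PORTS=3306
        expose:
          - "3306"
        networks:
          - back-tier
      lb:
        image: dockercloud/haproxy
        links:
          - api
          - db
        ports:
          - "80:3000"
          - "443:3000"
          # published so Sequel Pro on the host can reach the database
          - "3306:3306"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        networks:
          - back-tier
    networks:
      back-tier:
        driver: bridge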

Now, even if you try to scale the database the command itself will not complain, but you should not do that. So rule number four: scaling database servers is not a good idea if you haven't configured a cluster. Yes, you do need a cluster for your databases so that the data is distributed to all instances.

PS :

  1. Please note that this is by no means a production-grade infrastructure; it was for learning and experimenting only.
  2. Yes, I built my API with Node.js, but this does not mean you can't do it with PHP (for example a Laravel app); the API here is just a service and has nothing to do with the programming language.
  3. If you want to read more about dockerize you can check https://github.com/jwilder/dockerize
  4. English is not my first language, so if you notice any language mistakes please let me know so I can fix them.