
Manager: I cloned the project and followed the Readme but it doesn't work because of an error about a missing dependency.

Developer: Hmmm weird πŸ€” yet, it works on my computer πŸ€·πŸΎβ€β™‚οΈ

If you have been a developer for a while, you have probably had this kind of discussion with your manager or teammates. We usually build applications through many releases, and for each release, we need to share the code with our peers. Unfortunately, at this point, several problems often arise:

  • Incompatibility between operating systems: you code the project on macOS, but the production environment runs on Ubuntu.
  • Even on the same OS, an update can break a library used in the project, leaving it in a non-working state.
  • Even when everything works as expected, the steps to run the project are sometimes tedious.
  • It is hard to keep a history of working code ready to run in production.

Above are some problems Docker solves very well. Adding Docker Compose to orchestrate several Docker containers makes it even more powerful.

Prerequisites

To follow this tutorial, you need Docker installed and working. If it is not installed, follow this link to install it for your operating system.

We will take the Node.js project we built in this tutorial to generate a PDF and make it run through Docker Compose. Here are the steps:

  1. Create a Docker image of the project
  2. Run a Docker container of a Mongo database
  3. Run a Docker container of the project
  4. Make them communicate with each other
  5. Use Docker Compose to manage them easily

Setup the project

Clone the project from this URL and follow the Readme file to make it work. If you don't have a Mongo database, you can follow this tutorial to install one; otherwise, just continue the tutorial.

git clone https://github.com/tericcabrel/blog-tutorials.git

cd blog-tutorials/node-webapp-pdf

yarn install

cp .env.example .env
nano .env

yarn start

If everything is configured as expected, the project will be up and running.

Create a docker image of the project

Building a Docker image of the project makes it agnostic of the operating system, meaning we should expect the same behavior on any operating system that runs the project through Docker.

Build the project

We use TypeScript in the project, but Node.js runs only JavaScript files, so we need to transpile our .ts files to .js, which is straightforward: run yarn tsc.

Note: If you get TypeScript errors from files located in the node_modules folder, open your tsconfig.json file, then add the following option inside compilerOptions:

{
  "compilerOptions": {
    "skipLibCheck": true
  }
}

Now, you have a folder called build containing the .js files. But if you pay attention, the views directory containing the Handlebars templates is missing. This is because the tsc command only handles .ts files and ignores the others. Since this folder is necessary for the project, we have to copy it into the build folder with a shell command:

cp -r src/views build

Here is the summary to build our project for production:

# update tsconfig.json to set "skipLibCheck"

yarn tsc

cp -r src/views build
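To avoid retyping these two steps, you could combine them into a build script in package.json; the script name build here is only a suggestion, and it assumes typescript is listed in the project's dev dependencies:

```json
{
  "scripts": {
    "build": "tsc && cp -r src/views build"
  }
}
```

Then yarn build performs both steps at once.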

Build the docker image

At the root project directory, create a file named Dockerfile. We will write the instructions to build an image. Open the file, then add the code below:

# Lightweight Node.js 14 image based on Alpine Linux
FROM mhart/alpine-node:14

# Create the application directory
RUN mkdir -p /home/app

WORKDIR /home/app

# Copy the compiled code, static assets, and dependency manifests
COPY build ./build
COPY public ./public
COPY package.json yarn.lock ./

# Install production dependencies only, exactly as pinned in the lockfile
RUN yarn install --frozen-lockfile --production

# Port the application listens on
EXPOSE 4500

ENTRYPOINT ["node", "build/index.js"]
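Before building the image, it also helps to keep unnecessary files out of the build context, so builds stay fast and local secrets never end up in an image. A minimal .dockerignore for this project's layout might look like this (the entries are suggestions, not requirements):

```
node_modules
.git
.env
src
*.log
```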

Save the file, then run the command below to build the image. The general form of the docker build command is:

docker build -t <username>/<image_name>:<image_tag> <context_path>

We can add other options; check the documentation to learn more. In my case, the command is:

docker build -t tericcabrel/node-webapp:v1 .

Once completed, run docker image ls to view the list of docker images:

Docker image of the application

Run a container from the image

Let's start a container from our Docker image in an interactive way to view the logs by running this command:

docker run -it -p 4500:4500 --name node_pdf --rm tericcabrel/node-webapp:v1

Oops, we got an error saying we can't connect to the database host called undefined. This is because we read the database credentials from a .env file that has not been copied into the Docker image. This was intentional: Docker provides a way to pass environment variables when starting a container. Let's start our container with the .env file:

docker run -it -p 4500:4500 --name node_pdf --rm --env-file .env tericcabrel/node-webapp:v1

Oops, we got an error saying we can't connect to the database host 127.0.0.1:27017. This happens because the Docker container can't reach the MongoDB instance installed on the host: inside the container, 127.0.0.1 refers to the container itself, not the host machine.
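As a side note, Docker Desktop on macOS and Windows exposes the host machine to containers under the special name host.docker.internal, so pointing DB_HOST at it would work there. Since that name is not available on a stock Linux setup, we will use a Docker network instead, which works everywhere:

```
# .env — works on Docker Desktop (macOS/Windows) only, not portable to Linux
DB_HOST=host.docker.internal
```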

Create a network to host containers

We need to run a Docker container from the MongoDB image, then run a container for our app. But that alone still won't work, because the containers aren't on the same network by default.

So we will start by creating a network, then start our two containers inside it:

docker network create node-webapp-network

docker run -d --network node-webapp-network --name mongodb mongo:4.4.6

docker run -it -p 4500:4500 --network node-webapp-network --name node_pdf --rm --env-file .env tericcabrel/node-webapp:v1

The app still can't connect to the database for two reasons:

  • Since the two containers are on the same network, using 127.0.0.1 as the database host doesn't work: inside the app container, it points to the container itself. We should use the MongoDB container's name instead.
  • Our .env file contains values for the database username and password, but we didn't set them when starting the MongoDB container.

Update database credentials

To set values for the username and password, we will set two environment variables, MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, when starting the container:

# Display running container to get the container ID of mongodb
docker ps

# Kill the running container
docker container kill <mongodb_container_id>

# Remove all stopped containers
docker container prune -f

# Run the container with database credentials
docker run -d --network node-webapp-network -e MONGO_INITDB_ROOT_USERNAME=app_user -e MONGO_INITDB_ROOT_PASSWORD=app_password  --name mongodb mongo:4.4.6

Now, let's edit our .env file to update the database credentials with these values:

DB_HOST=mongodb
DB_PORT=27017
DB_USER=app_user
DB_PASS=app_password
DB_NAME=admin
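Assuming the application assembles a standard MongoDB connection string from these variables (the exact format depends on the project's database code), the resulting URI would look like this:

```
mongodb://app_user:app_password@mongodb:27017/admin
```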

Now, start a container for our app:

docker run -it -p 4500:4500 --network node-webapp-network --name node_pdf --rm --env-file .env tericcabrel/node-webapp:v1
Successfully running the Docker container alongside the MongoDB container

Hoooooraaay πŸŽ‰ πŸŽ‰ πŸŽ‰

Note: Another way to make two containers communicate with each other is to create a link between them. However, links are a legacy feature whose use is discouraged, and they may eventually be removed.

Use Docker Compose to manage containers

As you can see, we did many things to make our app work, all in an imperative way, which requires knowing every command and option and remembering which container to start before another.
Fortunately, Docker Compose is here to simplify the work for us. We will create a docker-compose.yml file that defines what we want, and Docker Compose will do the work. This is a declarative approach. To learn more about the difference between declarative and imperative styles, check this link.

Create a docker-compose.yml and add the code below:

version: "3"
services:
  nodeapp:
    container_name: node_pdf
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .env
    ports:
      - "4500:4500"
    links:
      - mongodb
    depends_on:
      - mongodb
    environment:
      WAIT_HOSTS: mongodb:27017
    networks:
      - node-webapp-network
  mongodb:
    container_name: mongodb
    image: mongo:4.4.6
    volumes:
      - ~/mongo:/data/db
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=app_user
      - MONGO_INITDB_ROOT_PASSWORD=app_password
      - MONGO_INITDB_DATABASE=admin
    networks:
      - node-webapp-network
networks:
  node-webapp-network:
    driver: bridge
Docker Compose configuration file for the project
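One caveat: depends_on only waits for the mongodb container to start, not for MongoDB to be ready to accept connections, and the WAIT_HOSTS variable only has an effect if the image embeds a wait script such as docker-compose-wait, which our Dockerfile does not. A self-contained alternative is a Compose healthcheck; this is a sketch assuming the mongo shell bundled in the mongo:4.4.6 image:

```yaml
  mongodb:
    image: mongo:4.4.6
    healthcheck:
      # Ping the server; succeeds only once MongoDB accepts connections
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
```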

The instructions in this file produce the same result as the commands we ran manually with Docker. Build and run with the following commands:

docker-compose build

docker-compose up
Build and run the project with Docker Compose

That's it!

Now you can share your project, and the only thing someone has to do is run the two previous commands, and the project is running.

Find the final source code of this tutorial here.

I hope you found it interesting and see you at the next tutorial πŸ˜‰.