To deploy a Node.js application in production, you can execute the code directly and use a process manager like PM2 to monitor it. A better approach is to build a Docker image and start a container from that image.
With Docker, you can tag your image and revert to a previous working image when the current one has a critical bug, and developers can run the application without worrying about the OS and configuration settings.
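For example, a rollback can be as simple as starting a container from an older tag. The version tags below are hypothetical, and the image name node-app is the one we build later in this post; these commands require a running Docker daemon:

```shell
# Tag each release so you can identify it later
docker build -t node-app:1.1.0 .

# If 1.1.0 ships a critical bug, start a container from the previous working tag
docker run -d -p 4500:4500 --name node-rest-api node-app:1.0.0
```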
We will see how to build a Docker image of a Node.js application; if you are interested in running the application with PM2 instead, I wrote a post about it.
You need these tools installed on your computer to follow this tutorial.
Setup the project
We will use the Node.js REST API we built in a previous article.
Let's clone the project locally:
git clone https://github.com/tericcabrel/blog-tutorials

cd blog-tutorials/node-rest-api-swagger
You can follow the instructions in the README to run the project locally. Here, we will run the application after building the Docker image.
Write the Dockerfile
We will take advantage of the Docker multi-stage build to:
- Make the image build independent of the host operating system
- Reduce the size of the Docker image
At the project root directory, create a file called Dockerfile and add the code below:
FROM mhart/alpine-node:16 as builder

RUN mkdir -p /app
WORKDIR /app

COPY . .

RUN yarn install
RUN yarn tsc

FROM mhart/alpine-node:16 as app

ENV NODE_ENV=production

RUN mkdir -p /app
WORKDIR /app

COPY --chown=node:node --from=builder /app/package.json /app
COPY --chown=node:node --from=builder /app/build/ /app

RUN yarn install --frozen-lockfile --production

EXPOSE 4500

ENTRYPOINT ["node", "index.js"]
This file has two stages, builder and app, both using Node Alpine 16 as the base image.

In the first stage, builder, we copy the project files, install the dependencies, and compile the TypeScript code into the build folder.

In the second stage, app, we copy the package.json and the content of the build folder generated in the previous stage. We also set the ownership of the files to the user node (automatically created in the Node Alpine image) and install only the production dependencies. Finally, we define the command to run when the container starts.
Run the command below to build the image:
docker build -t node-app .
View the image details by running docker image ls.
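If you want to see how each Dockerfile instruction contributes to the final size, Docker can also list the layers of the image. These commands assume the node-app image built above and require a running Docker daemon:

```shell
# List images matching our tag, with their sizes
docker image ls node-app

# Show the size of each layer in the image
docker history node-app:latest
```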
Test the Docker image
Let's run the image and verify the application works as expected. Since the Node project interacts with MongoDB, we need to start a MongoDB container with the command below:
docker network create node-app-network

docker run -d \
  --network node-app-network \
  -e MONGO_INITDB_ROOT_USERNAME=app_user \
  -e MONGO_INITDB_ROOT_PASSWORD=app_password \
  --name mongodb \
  mongo:5.0
Create a file named .env that will contain the environment variables to inject when starting a Docker container from our application image. Note that DB_HOST is set to mongodb, the name of the MongoDB container started earlier; it is resolvable because both containers are attached to node-app-network.
HOST=http://localhost
PORT=4500
DB_HOST=mongodb
DB_PORT=27017
DB_USER=app_user
DB_PASS=app_password
DB_NAME=test
Run the command below to start a container of the project:
docker run -it --rm \
  -p 4500:4500 \
  --network node-app-network \
  --name node-rest-api \
  --env-file .env \
  node-app:latest
Open your browser and navigate to http://localhost:4500/documentation
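If you prefer the terminal, you can also check that the container responds with curl. This assumes the application serves the documentation page over plain HTTP on port 4500, as configured above:

```shell
# Expect an HTTP 200 response from the documentation page
curl -i http://localhost:4500/documentation
```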
Reduce the size using Esbuild
After building the application, we still need the production dependencies to run it, which is why the Docker image contains a node_modules folder.
I use a tool called Dive to explore the content of a Docker image. We can see the node_modules folder takes 191MB of space.
Esbuild can bundle the application and its dependencies into a single JavaScript file, so we no longer need to install the production dependencies in the final stage. Here is what the Dockerfile now looks like:
FROM mhart/alpine-node:16 as builder

RUN mkdir -p /app
WORKDIR /app

COPY . .

RUN yarn install
RUN npx esbuild ./src/index.ts --bundle --platform=node --outfile=build/index.js

FROM mhart/alpine-node:16 as app

ENV NODE_ENV=production

RUN mkdir -p /app
WORKDIR /app

COPY --chown=node:node --from=builder /app/build/index.js /app

EXPOSE 4500

ENTRYPOINT ["node", "index.js"]
Build a new image:
docker build -t node-app .
We reduced the image size by almost three times. Start the container again and verify that the application works.
Caveat: One of the drawbacks of bundling everything into a single file is that we don't get a human-readable stack trace for an error or a log message. To fix that, you can generate source maps and package them in the Docker image, but this solution is not recommended because it slows down the application.
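For reference, here is what that source-map variant could look like, using esbuild's --sourcemap flag at build time and Node's --enable-source-maps flag at runtime. This is a sketch of the approach the caveat above advises against for production:

```shell
# Generate build/index.js plus build/index.js.map
npx esbuild ./src/index.ts --bundle --platform=node --sourcemap --outfile=build/index.js

# Ask Node.js to rewrite stack traces using the source map
node --enable-source-maps build/index.js
```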
In an upcoming post, I will show the recommended approach to fixing this issue.
When building a Docker image, take advantage of multi-stage builds, and use Esbuild to bundle your application into a single file when the size is critical (e.g., running on an AWS Lambda function).
Use the tool Dive to explore the content of the Docker image.
You can find the source code in the GitHub repository.