Build a deployment pipeline for MoleculerJS using Docker, GitHub Actions and Fly.io

My learning journey on how I built a deployment pipeline for MoleculerJS using Docker, GitHub Actions and Fly.io.

Written on: 1 Sep 2023

Last year, I was working on a project together with a couple of friends as part of the NUS Google Developer Student Club. We opted to use MoleculerJS, a microservice framework, despite not being very familiar with the microservices architecture. One of the biggest challenges we faced was figuring out how to deploy our application.

As one of the most senior members on the team, I proposed that we maintain our codebase as a monorepo and keep the deployment process as simple as possible. Given the complexity of the project and the fact that we had to learn the microservices paradigm, I thought it best for the rest of the team to focus on development with MoleculerJS instead of worrying about how to run Docker containers.

To host our application, I proposed Fly.io, and the team agreed. Personally, I've been a long-time fan and user of Fly.io. Ever since Heroku removed their hobby plan, I've been looking for a free hosting service for my projects, and Fly.io is a great alternative to Heroku. While there are many ways of deploying your applications, one of the recommended ways is to use a Dockerfile. Fly.io does not use Docker containers under the hood, but a Docker-based approach is great for portability and reproducibility – you can create a Dockerfile for local development and testing, then reuse the same Dockerfile to deploy to Fly.io with barely any changes. Fly.io also has a great CLI tool and documentation that make it really easy to get started.

In this article, I will not be sharing how to use MoleculerJS. Instead, I will be sharing how I built a deployment pipeline for MoleculerJS + TypeScript using Docker, GitHub Actions and Fly.io.

Prerequisites

Some knowledge of Docker, Docker Compose and GitHub Actions is required.

Goals

In summary, the expectations I had for the deployment pipeline were:

  1. One command to start the entire application locally – it should just work
  2. As few configuration files as possible – if anybody wants to make adjustments, it should be easy to do so
  3. The build and execution process should be as similar as possible across environments

Local deployment with Docker and Docker Compose

Before I begin, here's a quick overview of the project structure. I've simplified it for brevity.

```
.
├── .github/
│   └── workflows/
│       └── fly-deploy.yml
├── dockerfiles/
│   ├── main.Dockerfile
│   └── prisma-migrator.Dockerfile
├── src/
│   └── services/
│       ├── user_service/
│       │   └── user.service.ts
│       └── auth_service/
│           └── auth.service.ts
├── prisma/
│   ├── migrations/
│   └── schema.prisma
├── package.json
├── package-lock.json
├── moleculer.config.json
├── tsconfig.json
├── docker-compose.yml
└── fly.toml
```

Understanding how to run MoleculerJS

MoleculerJS provides its own quickstart deployment tool, Moleculer Runner. Given the names of the services to run – as arguments or environment variables – Moleculer Runner will automatically find the services and start them for you. In particular, you can point it at your services simply by defining the environment variables SERVICEDIR and SERVICES. More configuration options can be found in the documentation.
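To make the discovery behaviour concrete, here's a rough sketch in TypeScript of how a runner could resolve service files from those two variables. This is purely illustrative – resolveServices is a made-up name, and Moleculer Runner's real logic is more involved:

```typescript
// Illustrative sketch: how SERVICEDIR and SERVICES could drive service
// discovery. The real runner walks the filesystem; here the directory
// listing is passed in so the sketch is self-contained.
function resolveServices(
  serviceDir: string,
  services: string | undefined,
  filesInDir: string[] // stand-in for fs.readdirSync(serviceDir)
): string[] {
  // Every *.service.js file in SERVICEDIR is a candidate
  const candidates = filesInDir.filter((f) => f.endsWith(".service.js"));
  // Without SERVICES, load everything; with it, load only the named ones
  const wanted = services?.split(",").map((s) => s.trim());
  return candidates
    .filter((f) => !wanted || wanted.some((w) => f.startsWith(w)))
    .map((f) => `${serviceDir}/${f}`);
}

console.log(resolveServices("./dist/services", "user", ["user.service.js", "auth.service.js"]));
// → ["./dist/services/user.service.js"]
```

The key takeaway is that the same image can run any service just by changing SERVICEDIR (and optionally SERVICES) in the environment – which is exactly what the Docker setup below exploits.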

Composing multiple services in separate Docker containers

Docker Compose allows you to define the various containers and their configurations in a single file. One of the ways of deploying would be to create separate Dockerfiles for each service and specify them accordingly in the docker-compose.yml file.

However, since I was working in a monorepo, I kept it simple and built all the services (written in TypeScript) with one Dockerfile.

```dockerfile
# main.Dockerfile
FROM node:16.20.0-alpine AS builder
WORKDIR /build
COPY ./src ./src
COPY package*.json ./
COPY tsconfig.json ./
COPY ./prisma ./prisma
# Install everything (the TypeScript compiler is a devDependency),
# build, then prune dev dependencies out of node_modules
RUN npm ci --silent
RUN npm run build
RUN npm prune --omit=dev

FROM node:16.20.0-alpine
WORKDIR /app
COPY --from=builder /build/dist ./dist
COPY --from=builder /build/node_modules ./node_modules
COPY package*.json ./
COPY ./prisma ./prisma
COPY moleculer.config.json ./
# No CMD is defined, as each service needs a different start command
```

Those familiar with Docker will notice that I did not define a CMD in the Dockerfile. The CMD defines the command that is executed when the container starts. I left it out because the team would start the services with different commands depending on the environment and other factors. Instead, I defined the start command for each service in the command property of the docker-compose.yml file.

```yaml
services:
  user-service:
    build:
      context: .
      dockerfile: dockerfiles/main.Dockerfile
    command: npm start
    environment:
      - SERVICEDIR=./dist/services/user_service
  auth-service:
    build:
      context: .
      dockerfile: dockerfiles/main.Dockerfile
    command: npm start
    environment:
      - SERVICEDIR=./dist/services/auth_service
```

Realising that I didn't need to define a CMD in the Dockerfile itself was a key moment for me when building the deployment pipeline. Before that, I struggled to figure out how to start the different services – each with its own environment and variables – from a single setup. If you are stuck on the same problem, I hope this helps you out!
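For context, the npm start command in the compose file assumes a start script in package.json that hands off to Moleculer Runner, along these lines (illustrative – your actual script names and flags may differ):

```json
{
  "scripts": {
    "build": "tsc",
    "start": "moleculer-runner"
  }
}
```

Because the runner reads SERVICEDIR from the environment, the exact same start script works for every service – the compose file only varies the environment variable.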

Running Prisma

Prisma is a really popular ORM for the NodeJS ecosystem and my team used it for our project. However, one issue I faced was that we needed to run the command prisma migrate deploy on every deployment, which introduced an additional step to the deployment process.

Initially, I thought of using the pre scripts in the package.json file: before each service started via the start script, a prestart script could run and deploy the migrations. However, this would mean the migrations being deployed multiple times in our multi-service architecture.
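Concretely, the idea was to lean on npm's lifecycle hooks, which automatically run a preX script before any script X – something like this (illustrative):

```json
{
  "scripts": {
    "prestart": "prisma migrate deploy",
    "start": "moleculer-runner"
  }
}
```

Since every service container runs npm start from the same image, each one would trigger prestart, and the migration command would run once per service rather than once per deployment.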

Solution: Using a separate Docker container to run Prisma

The solution I came up with was to use a separate Docker container to run Prisma. I would define the migration command in a separate Dockerfile and have that container run before all the other services. This is the docker-compose.yml file I came up with.

```yaml
services:
  user-service:
    build:
      context: .
      dockerfile: dockerfiles/main.Dockerfile
    command: npm start
    environment:
      - SERVICEDIR=./dist/services/user_service
    depends_on:
      prisma-migrator:
        condition: service_completed_successfully
  auth-service:
    build:
      context: .
      dockerfile: dockerfiles/main.Dockerfile
    command: npm start
    environment:
      - SERVICEDIR=./dist/services/auth_service
    depends_on:
      prisma-migrator:
        condition: service_completed_successfully
  prisma-migrator:
    build:
      context: .
      dockerfile: dockerfiles/prisma-migrator.Dockerfile
```

This is the prisma-migrator.Dockerfile:

```dockerfile
# prisma-migrator.Dockerfile
FROM node:16.20.0-alpine
WORKDIR /app
COPY ./prisma ./prisma
RUN npm i prisma@4.15.0
# migrate deploy applies the committed migrations non-interactively
CMD npx prisma migrate deploy
```

The depends_on property in the docker-compose.yml file ensures that the prisma-migrator container runs before the other services, and the condition: service_completed_successfully setting ensures that the other services only start if the prisma-migrator container exits successfully. Overall, the migrations are deployed only once and we have a reliable way of controlling the startup order of the containers.

Putting it all together

Simply running docker-compose up will start the build process and deploy migrations to the database before finally starting up the services. Having the entire local development setup available in one command was really helpful, as it allowed the team to reproduce the environment easily without having to know the ins and outs of Docker.

Deploying to Fly.io

Deploying to Fly.io is quite similar to launching the containers locally with Docker Compose. The only difference is that the deployment process is defined in a fly.toml file instead of the docker-compose.yml file. fly.toml is the default configuration file used when deploying applications; you can read more about it in the Fly.io documentation.

Defining the Docker image

Firstly, I needed to define the Docker image that Fly.io would use to deploy the application. This is done via the image property in the fly.toml file; I pointed it at the image built from main.Dockerfile. Images can be published to the Fly.io registry or to any other registry. I chose the Fly.io registry as it was the simplest option.

```toml
[build]
  image = "registry.fly.io/my-app" # not the real app name
```

Defining separate processes

Fly.io allows multiple processes to be run in separate VMs within a single Fly application. Each process uses the Docker image built from main.Dockerfile and runs a separate service based on the command given to it. This is done by defining the [processes] property and specifying the command to be run for each process.

```toml
[processes]
  auth = "npm start -- ./dist/services/auth_service/auth.service.js"
  user = "npm start -- ./dist/services/user_service/user.service.js"
```

Running Prisma

Similar to the local deployment, I needed to run any new prisma migrations before I started up my services. Thankfully, Fly.io has a release_command configuration that allows you to run a command before all the processes within the application are deployed.

```toml
[deploy]
  release_command = "npx prisma migrate deploy"
```

Overall fly.toml file

Here is the overall fly.toml file I came up with. Note that I've deliberately left out some key properties, such as networking and environment variables, for brevity.

```toml
app = "my-app"

[build]
  image = "registry.fly.io/my-app"

[processes]
  auth = "npm start -- ./dist/services/auth_service/auth.service.js"
  user = "npm start -- ./dist/services/user_service/user.service.js"

[deploy]
  release_command = "npx prisma migrate deploy"
```

The Glue – GitHub Actions

GitHub Actions is a CI/CD tool that lets you automate your deployment workflow. By defining the actions and the events that trigger them, I can automate the deployment to Fly.io.

```yaml
name: fly-deploy
on:
  push:
    branches:
      - "main"
env:
  FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      # Authenticate Docker against the Fly.io registry so the push succeeds
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl auth docker
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          file: ./dockerfiles/main.Dockerfile
          push: true
          tags: registry.fly.io/my-app
  deploy:
    runs-on: ubuntu-latest
    needs: publish
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - working-directory: .
        run: flyctl deploy -c fly.toml --remote-only
```

The workflow I've defined will build and publish our application image to the Fly.io registry. Once the publish job completes successfully, the deploy job will run and deploy the application. The deployment and running of the services are defined in the fly.toml file which we covered earlier.

Conclusion

Building this pipeline was not easy. It took me hours of combing through documentation along with numerous trial-and-error attempts to get it right. It may not necessarily be the best way to deploy a MoleculerJS application, but it worked really well for me and my team. If you are stuck building a similar deployment pipeline, I hope that some of the techniques I've shared will help you out. Don't be afraid to experiment and try new things – you never know what you'll discover! Thanks for reading!

– Josh