This repository demonstrates a production-grade CI/CD workflow utilizing Docker for containerization, Travis CI for continuous integration, and AWS Elastic Beanstalk for deployment.
The React application is kept intentionally simple, consisting only of the default React opening page. This allows us to focus solely on understanding the complete workflow of a production-grade CI/CD pipeline.
- React: A simple front-end web application.
- Docker: Used to containerize the application for consistent development and production environments.
- Travis CI: Handles automated testing and deployment in a continuous integration/continuous deployment (CI/CD) pipeline.
- AWS Elastic Beanstalk: Deploys the containerized React app for production use.
By keeping the React application simple, we avoid spending time on UI development. Instead, this project emphasizes the workflow and infrastructure behind building, testing, and deploying a React application using Docker and AWS services.
- Install Node.js: run the following command (on Debian/Ubuntu-based systems):

  ```bash
  sudo apt install nodejs
  ```

- Verify the installation: run the following command to confirm which version of Node.js is installed:

  ```bash
  node -v
  ```

- Change into the project directory: the commands that follow must be run from inside the project directory:

  ```bash
  cd your-project-name
  ```
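Note: if you don't already have a React project locally, the default starter page used in this repository can be generated with create-react-app; a minimal sketch, where `your-project-name` is just a placeholder:

```bash
# Generate the default React starter app (the project name is a placeholder)
npx create-react-app your-project-name
```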
- Create Dockerfile for Development:
  - File Name: `Dockerfile.dev`
  - Base Image: `node:16-alpine`
  - Configuration Steps:

    ```dockerfile
    # Use a base image with Node.js and Alpine
    FROM node:16-alpine

    # Switch to a non-root user for better security
    USER node

    # Create the application directory and set it as the working directory
    RUN mkdir -p /home/node/app
    WORKDIR /home/node/app

    # Copy package.json and install dependencies (set ownership to the node user)
    COPY --chown=node:node ./package.json ./
    RUN npm install

    # Copy the rest of the project files (set ownership to the node user)
    COPY --chown=node:node ./ ./

    # Set the command to start the application
    CMD ["npm", "start"]
    ```
- Build Docker Image:
  - Command: `docker build -f Dockerfile.dev -t your-image-name .`
  - Note: Replace `your-image-name` with your desired image name.
- Run Docker Container:
  - Command: `docker run -p 3000:3000 your-image-name`
  - Note: Use the `-p` flag to map a port on your local machine to a port inside the container.
- Check Development Server:
  - URL: `http://localhost:3000`
  - Note: Ensure that the development server is accessible at this URL.
- Remove Duplicate Dependencies:
  - Explanation: Dependencies are installed inside the image when it is built, so the copy in your local `node_modules` folder is a duplicate and can be removed.
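A minimal cleanup sketch, assuming you want to delete the local copy (the image already contains its own from the build):

```bash
# Remove the locally installed dependencies; the image already contains them
rm -rf node_modules
```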
- Handling Source Code Changes:
  - Issue: Changes to the source code won't automatically be reflected in the browser, because the image contains a snapshot of the code taken at build time.
  - Solution: Use Docker volumes to mount your source code into the container and see real-time changes without rebuilding the image. Instead of copying entire directories, set up a reference to the local file system.
  - Docker Run Command Syntax:
    - Add a `-v` flag to map a folder outside the container to a folder inside the container.
    - Example: `docker run -it -p 3000:3000 -v /home/node/app/node_modules -v $(pwd):/home/node/app <image id>`
- Important Notes:
  - The `-v` flag with a colon sets up a mapping between a local directory and a container directory. The first `-v /home/node/app/node_modules` argument has no colon: it marks the container's `node_modules` folder so that it is not overwritten by the mapping.
  - `$(pwd)` prints the path to the current directory, but it does not work in all terminal types (e.g., Windows Command Prompt).
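If `$(pwd)` isn't supported by your shell, the current directory can be referenced differently; a sketch of equivalents (the `<image id>` placeholder is unchanged):

```shell
# Windows Command Prompt: %cd% expands to the current directory
docker run -it -p 3000:3000 -v /home/node/app/node_modules -v %cd%:/home/node/app <image id>

# PowerShell: ${PWD} expands to the current directory
docker run -it -p 3000:3000 -v /home/node/app/node_modules -v ${PWD}:/home/node/app <image id>
```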
- Verify Container Changes:
  - Action: Make changes to the source code and verify that they are reflected in the browser when using Docker volumes.
    - With the correct run syntax, the project starts up as expected.
    - Changes to the local file system are automatically reflected inside the running Docker container.
    - React's automatic refresh feature updates the page when code changes are made.
- Automate with Docker Compose: Docker Compose streamlines development by replacing long `docker run` commands: port mappings and volume mounts are declared once, and the whole setup starts with a single command.
  - Create a `docker-compose.yml` File:

    ```yaml
    version: '3'
    services:
      web:
        build:
          context: .
          dockerfile: Dockerfile.dev
        ports:
          - "3000:3000" # map port 3000 on the local machine to port 3000 inside the container
        volumes:
          - /home/node/app/node_modules # no mapping for the node_modules folder inside the container
          - .:/home/node/app # map the current directory (pwd in the run command) to the app folder inside the container
    ```

  - Command to Run Docker Compose: `docker-compose up`
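When you're done, the containers Compose created can be stopped and removed in one step:

```bash
# Stop the services and remove the containers created by docker-compose up
docker-compose down
```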
- Keep the `COPY` Command in the Dockerfile:
  - Reason: Although volume mapping eliminates the need to copy the source code during development, it's good to leave the `COPY . .` instruction in place for future use cases, such as production or non-Docker-Compose scenarios.
- Use Docker Compose for Simplified Commands:
  - Reason: Docker Compose avoids long, complex `docker run` commands by managing port mappings and volumes for you.
- Container Snapshot Issue:
  - Explanation: When the image is built, it captures a snapshot of the code. Any changes made to your test files after the container starts won't be reflected unless the image is rebuilt or volumes are used.
- Solution - Using Docker Volumes:
  - Command: `docker run -it -v $(pwd):/home/node/app your-image-id npm run test`
  - Explanation: By attaching a volume, changes made to your local test files are reflected inside the running container, so you can rerun the tests with updated code. (The mapping targets `/home/node/app`, matching the working directory in `Dockerfile.dev`.)
- Create a Separate Service for Testing:
  - Update `docker-compose.yml`:

    ```yaml
    version: '3'
    services:
      web:
        build:
          context: .
          dockerfile: Dockerfile.dev
        ports:
          - "3000:3000"
        volumes:
          - /home/node/app/node_modules
          - .:/home/node/app
      test:
        stdin_open: true
        build:
          context: .
          dockerfile: Dockerfile.dev
        volumes:
          - /home/node/app/node_modules
          - .:/home/node/app
        command: ["npm", "run", "test"]
    ```

  - Explanation: This Docker Compose setup creates two services:
    - `web` for running the development server.
    - `test` for running the test suite with live file changes using volumes.
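As a side note (not part of the original setup), if you only want to execute the test suite once instead of keeping it running as a service, Compose can run it as a one-off container:

```bash
# Run the test service once and remove its container when it exits
docker-compose run --rm test
```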
- Run Docker Compose:
  - Command: `docker-compose up --build`
  - Explanation: This command builds both services (`web` and `test`), enabling live code changes with automatic test re-runs when files are modified.
- Lack of Interactivity in the Test Suite:
  - Problem: Docker Compose doesn't provide full terminal interactivity (e.g., you can't press `p`, `t`, or `q` to filter tests).
  - Solution: Use `docker exec` to interact with a running container.
- Interactive Test Execution:
  - Command: `docker exec -it <container_id> npm run test`
  - Explanation: Use this command to interact with the test suite after the container has been created, allowing full control (e.g., filtering test cases with `p`, `t`, and `q`); a worked sequence follows below.
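In practice, you first look up the container id; a typical sequence (the id below is made up) looks like this:

```bash
# List running containers and note the id of the web or test container
docker ps

# Attach an interactive test process to it (replace the id with your own)
docker exec -it a1b2c3d4e5f6 npm run test
```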
- Docker Volumes Enable Live Changes:
  - Explanation: By attaching volumes, you can run tests with live code changes without needing to rebuild the container.
- Two Approaches for Running Tests:
  - Use `docker run -it` for full interactivity.
  - Use Docker Compose to manage both services (`web` and `test`), with automatic test re-runs via volume mapping.
- Best Practice: For interactivity with tests, use `docker exec` to attach to the running container.
In this part, we will walk through how to configure a production-ready Docker setup for a React application using multi-stage builds with Node.js and NGINX. This allows us to build the app with Node.js and serve the static files using NGINX, which is better suited for production environments.
- Build Phase:
  - Use Node Alpine: Use the official Node.js Alpine image as the base to build the React app.
  - Install Dependencies: Use `npm install` to install all necessary dependencies from the `package.json` file.
  - Build the Production Files: Run `npm run build` to generate the optimized static files for production.
- Run Phase:
  - Use NGINX: The second phase uses NGINX to serve the built static files.
  - Copy Files: Copy the files from the first phase (the build phase) into the NGINX server's root directory.
```dockerfile
# ---- First Phase: Build ----
# Use the official Node.js 16 image with Alpine Linux as the base image.
FROM node:16-alpine as builder

# Set the working directory inside the container.
WORKDIR /home/node/app

# Copy only the package.json file to the working directory.
COPY package.json .

# Install all the dependencies.
RUN npm install

# Copy the rest of the application code to the working directory.
COPY . .

# Build the React application for production.
RUN npm run build

# ---- Second Phase: Run ----
# Use the official Nginx image to serve the built files.
FROM nginx
EXPOSE 80

# Copy the built files from the first phase (builder) to Nginx's default serving directory.
COPY --from=builder /home/node/app/build /usr/share/nginx/html
```
- Build the Docker Image:
  - Command: `docker build -t your-image-name .`
  - Explanation: This command builds the production Docker image using the Dockerfile. Replace `your-image-name` with the desired name for your image.
- Run the Container:
  - Command: `docker run -p 8080:80 your-image-name`
  - Explanation: The `-p` flag maps port `8080` on the local machine to port `80` inside the container. NGINX serves the production-ready React application on `localhost:8080`.
- Build Phase:
  - Install Dependencies: We first install the dependencies from `package.json` because they are required to run `npm run build`.
  - Run Build: The `npm run build` command generates a `build` folder containing the production-ready static files. This is the folder we ultimately care about.
- Run Phase:
  - NGINX: In the second phase, we switch to an NGINX image and copy the contents of the `build` folder from the first phase into NGINX's serving directory, `/usr/share/nginx/html`.
  - Multi-Stage Build: The key advantage of the multi-stage build is that only the production-ready files (the `build` directory) are copied into the final image; everything else from the first phase is discarded. This keeps the image size minimal.
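To see the size difference yourself, you can compare the final image with the Node.js image used in the build phase (`your-image-name` is a placeholder):

```bash
# List the production image and the build-phase base image with their sizes
docker images your-image-name
docker images node:16-alpine
```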
- Multi-Stage Build:
  - Why Multi-Stage? The idea is to use one Docker image to build the app and a second, lighter image (like NGINX) to serve the static files. This reduces the final image size.
  - Optimized for Production: By using NGINX, a high-performance web server, we ensure the application is ready for production workloads.
- NGINX:
  - Purpose: NGINX serves the static files (HTML, CSS, JavaScript) generated by the React build process.
  - Port Configuration: We map port `80` inside the container (NGINX's default port) to port `8080` on the host machine.
- Test the Setup:
  - URL: `http://localhost:8080`
  - Explanation: After running the container, navigate to this URL in your browser. You should see the default React "Welcome to React" page, indicating that the production setup is working.
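As a quick terminal check (not part of the original steps), you can confirm NGINX is responding:

```bash
# Request only the response headers; expect an HTTP 200 status from nginx
curl -I http://localhost:8080
```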
- Development Workflow Overview:
  - We've now successfully set up Docker containers to handle:
    - Running `npm run start` for development.
    - Running `npm run test` for development.
    - Running `npm run build` for production environments.
  - Now that our Docker setup is complete, it's time to implement a Continuous Integration/Continuous Deployment (CI/CD) pipeline that will:
    - Use GitHub as the repository hosting service.
    - Use Travis CI for running automated tests and managing deployment.
    - Use AWS Elastic Beanstalk for production deployment.
  - The flow we are going to build: push a feature branch to GitHub, merge it into master, let Travis CI run the tests, and have Travis deploy the app to AWS Elastic Beanstalk.
- Services Overview:
  - GitHub: We'll use GitHub to manage our source code and development process, with feature branches for new work and the master branch for deployment.
    - Assumption: You are familiar with GitHub, including creating branches, making commits, and pushing code.
    - Requirement: If you don't have a GitHub account, create one at GitHub Signup.
  - Travis CI: We'll integrate Travis CI, a Continuous Integration service that automatically runs our tests and deploys the app to AWS once code is merged into the master branch.
    - Assumption: No prior experience with Travis CI is required; we'll walk through everything step by step.
    - Sign up: If you don't already have a Travis CI account, head over to Travis CI and sign up with your GitHub account.
  - AWS Elastic Beanstalk: We'll use AWS to deploy and host the application.
    - Note: AWS may require a credit card to sign up. If you don't have or prefer not to use AWS, that's okay; the steps for deploying Dockerized apps to other cloud providers like Google Cloud or Digital Ocean are quite similar, so you can still follow along to understand the process.
- GitHub Setup:
  - Create a GitHub Repository: Push your existing project to GitHub.

    ```bash
    git init
    git remote add origin <your-repo-url>
    git add .
    git commit -m "Initial commit"
    git push -u origin master
    ```

  - Explanation: These commands initialize a git repository, add your remote GitHub URL, stage all files, commit them, and push to the master branch.
- Travis CI Configuration:
  - Add Travis CI to Your Repository:
    - Go to Travis CI and log in using your GitHub account.
    - Enable your repository on the Travis CI dashboard.
  - Create a `.travis.yml` File:
    - Add the following configuration to enable continuous integration:

      ```yaml
      language: node_js
      node_js:
        - "12"
      services:
        - docker

      # Build the development image and run the tests before deploying
      script:
        - docker build -f Dockerfile.dev -t your-app-name .
        - docker run -e CI=true your-app-name npm run test

      # Deploy to AWS Elastic Beanstalk
      deploy:
        provider: elasticbeanstalk
        region: "us-west-2"
        app: "your-app-name"
        env: "YourApp-env"
        bucket_name: "elasticbeanstalk-us-west-2-your-bucket"
        bucket_path: "your-app-name"
        on:
          branch: master
      ```

    - Explanation: This configuration tells Travis CI to use Node.js and Docker, build the development image (the production image has no Node runtime to run tests), run the test suite with `CI=true` so the test runner exits instead of waiting in watch mode, and deploy to AWS when changes are pushed to the master branch.
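One thing this file does not show is authentication: the Elastic Beanstalk provider also needs AWS credentials. A common pattern (an assumption here, not from the original notes) is to store `AWS_ACCESS_KEY` and `AWS_SECRET_KEY` as hidden environment variables in the Travis repository settings and reference them in the deploy block:

```yaml
deploy:
  provider: elasticbeanstalk
  # ... region, app, env, and bucket settings as above ...
  access_key_id: $AWS_ACCESS_KEY       # read from Travis CI environment variables
  secret_access_key: $AWS_SECRET_KEY   # never commit raw credentials to the repository
```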
- AWS Elastic Beanstalk Deployment:
  - Create an AWS Elastic Beanstalk Environment:
    - Go to the AWS Management Console and search for Elastic Beanstalk.
    - Create a new application and environment.
    - Choose Docker as the platform and set up your environment.
  - Configure AWS Credentials:
    - Install the AWS CLI and configure it with your credentials for local management: `aws configure`
    - Explanation: Travis CI itself authenticates with the AWS access key and secret you provide in its settings (referenced from `.travis.yml`), so make sure those credentials are in place for deployment.
- Testing Your CI/CD Pipeline:
  - Push a Feature Branch:
    - Create a new feature branch: `git checkout -b feature-branch`
    - Make changes, commit, and push the branch to GitHub (see the sketch below).
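    For example (the branch name and commit message are placeholders):

    ```bash
    # Stage and commit your changes, then push the feature branch to GitHub
    git add .
    git commit -m "Describe your change"
    git push origin feature-branch
    ```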
  - Merge to Master:
    - Once the feature is complete, merge your feature branch into master:

      ```bash
      git checkout master
      git merge feature-branch
      git push origin master
      ```

    - Outcome: Travis CI will automatically run the tests and deploy the app to AWS when the code is pushed to the master branch.
- Best Practices:
- Use Feature Branches: Always develop new features on a separate branch and only merge into master when ready for deployment.
- Automated Testing: Ensure your tests are running correctly in the Travis CI pipeline before deployment.
- Monitor Deployment: Keep track of your deployments through AWS Elastic Beanstalk to ensure that everything is running smoothly in production.