Codebase for the Health Equity Tracker, Satcher Health Leadership Institute, Morehouse School of Medicine.
Prompted by the COVID-19 pandemic, the Health Equity Tracker was created in 2020 to aggregate up-to-date demographic data from the hardest-hit communities. The Health Equity Tracker aims to give a detailed view of health outcomes by race, ethnicity, sex, socioeconomic status, and other critical factors. Our hope is that it will help policymakers understand what resources and support affected communities need in order to improve their outcomes.
To contribute to this project:
- Fork the repository on GitHub.
- On your development machine, clone your forked repo and add the official repo as a remote (see the sketch after this list).
  - Tip: our development team keeps the remote name `origin` for the original repo, and uses a different name for our forked remote, like `josh`, `ben`, or `eric`.
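A minimal sketch of that setup, assuming the official repo lives at github.com/SatcherInstitute/health-equity-tracker and using `josh` as the example name for the fork remote:

```sh
# Clone your fork (replace <your-username> with your GitHub username)
git clone https://github.com/<your-username>/health-equity-tracker.git
cd health-equity-tracker

# The clone's default remote is called "origin"; rename it to your personal name
# and point "origin" at the official repo, matching the team convention above.
git remote rename origin josh
git remote add origin https://github.com/SatcherInstitute/health-equity-tracker.git
```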
When you're ready to make changes:
- Pull the latest changes from the official repo.
  - Tip: If your official remote is named `origin`, run `git pull origin main`.
- Create a local branch, make changes, and commit to your local branch. Repeat until changes are ready for review (see the sketch after this list).
- [Optional] Rebase your commits so you have a few commits with clear commit messages.
- Push your branch to your remote fork, use the GitHub UI to open a pull request (PR), and add reviewer(s).
- Push new commits to your remote branch as you respond to reviewer comments.
- Note: once a PR is under review, don't rebase changes you've already pushed to the PR. This can confuse reviewers.
- When ready to submit, use the "Squash and merge" option (found under the submit button dropdown options). This maintains linear history and ensures your entire PR is merged as a single commit, while being simple to use in most cases. If there are conflicts, pull the latest changes from main, merge them into your PR, and try again.
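A condensed sketch of that loop, using `origin` for the official repo and `josh` for the fork remote as in the tips above (the branch name is illustrative):

```sh
git checkout main
git pull origin main                # sync with the official repo
git checkout -b my_feature_branch   # create a local working branch
# ...edit files, then commit...
git add .
git commit -m "Describe the change"
git push josh my_feature_branch     # push to your fork, then open a PR in the GitHub UI
```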
Read more about the forking workflow here.
Note that there are a few downsides to "Squash and merge":
- The official repo will not show commits from collaborators if the PR is a collaborative branch.
- Working off the same branch or a dependent branch duplicates commits on the dependent branch and can cause repeated merge conflicts. To work around this, if you have a PR `my_branch_1` and you want to start work on a new PR that depends on `my_branch_1`, you can do the following (see the sketch after this list):
  - Create a new local branch `my_branch_2` based on `my_branch_1`. Continue to develop on `my_branch_2`.
  - If `my_branch_1` is updated (including by merging changes from main), switch to `my_branch_2` and run `git rebase -i my_branch_1` to incorporate the changes into `my_branch_2` while maintaining the branch dependency.
  - When review is done, squash and merge `my_branch_1`. Don't delete `my_branch_1` yet.
  - From your local client, switch to the main branch and pull from main to update the local main branch with the squashed change.
  - From your local client, run `git rebase --onto main my_branch_1 my_branch_2`. This tells git to move all the commits between `my_branch_1` and `my_branch_2` onto main. You can now delete `my_branch_1`.
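Put together, the dependent-branch flow sketched above looks roughly like this (branch names as above, `origin` as the official remote):

```sh
git checkout -b my_branch_2 my_branch_1   # new branch based on my_branch_1
# ...develop on my_branch_2...

# If my_branch_1 is updated while my_branch_2 is still in progress:
git checkout my_branch_2
git rebase -i my_branch_1

# After my_branch_1 is squash-merged:
git checkout main
git pull origin main                      # pick up the squashed commit
git rebase --onto main my_branch_1 my_branch_2
git branch -D my_branch_1                 # now safe to delete
```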
For details on "Squash and merge" see here
The frontend consists of:

- `health-equity-tracker/frontend/`: A React app that contains all code and static resources needed in the browser (html, JS, CSS, images). This app was bootstrapped with Create React App. Documentation on Create React App can be found here.
- `health-equity-tracker/frontend_server/`: A lightweight server that serves the React app as static files and forwards data requests to the data server.
- `health-equity-tracker/data_server/`: A data server that responds to data requests by serving data files that have been exported from the data pipeline.
In addition, we have a Storybook project that also lives in `health-equity-tracker/frontend/`. Storybook is a library that allows us to explore and develop UI components in isolation. Stories for each UI component are contained in the same directory as the component, in a subfolder called "storybook". The current main branch version of Storybook can be seen here: https://het-storybook.netlify.app
The frontend React App runs in different environments. We use configuration files (`frontend/.env.prod`, `frontend/.env.staging`, etc.) to control settings in different environments. These include things like the data server URL and logging settings. These can be overridden for local development using a `frontend/.env.development` file.
Switch to the `frontend/` directory, then install dependencies using NPM.

Note: you will need a compatible version of Node.js and NPM installed locally; see the "engines" field in `frontend/package.json` for the required version of each. It's recommended to use Node Version Manager (`nvm`) if you need to have multiple versions of Node.js / NPM installed on your machine, though team members have also had success with Homebrew.
cd frontend && npm install
If you encounter errors during install that mention `gyp`, that refers to a Node.js native addon build tool that is required for some modules. Follow the instructions on the gyp GitHub repo for installation and setting up required dependencies (e.g. Python and certain build tools like Xcode Command Line Tools for OS X).
Since the frontend is a static site that just connects to an API for data requests, most frontend development happens independently of server-side changes. If you're only changing client-side behavior, you only need to run the React app. The simplest way to do this is to connect the frontend to the test website server. First, copy `frontend/.env.example` into `frontend/.env.development`. This file is already set up to point to the test website server.
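For example, from the project root:

```sh
cp frontend/.env.example frontend/.env.development
```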
To start a local development server, switch to the `frontend/` directory and run:
npm run start:development
The site should now be visible at http://localhost:3000. Any changes to source code will cause a live reload of the site.
Note: you can also run `npm start` without a `.env.development` file. This will read environment variables from your terminal.
Note: when new environment variables are added, be sure to update the `.env.example` file so developers can reference it for their own `.env.development` files.
Environment variables in `frontend/.env.development` can be tweaked as needed for local development.

The `REACT_APP_BASE_API_URL` can be changed for different setups:
- You can deploy the frontend server to your own GCP project
- You can run the frontend server locally (see below)
- You can run Docker locally (see below)
- You can set it to an empty string or remove it to make the frontend read files from the `/public/tmp` directory. This allows testing behavior by simply dropping local files into that directory.
You can also force specific dataset files to be read from the `/public/tmp` directory by setting the `REACT_APP_FORCE_STATIC` environment variable to a comma-separated list of filenames. For example, `REACT_APP_FORCE_STATIC=my_file1.json,my_file2.json` would force `my_file1.json` and `my_file2.json` to be served from `/public/tmp` even if `REACT_APP_BASE_API_URL` is set to a real server URL.
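As an illustration, a `frontend/.env.development` set up for purely local data might contain something like this (the filenames are just the examples above):

```sh
# Empty base URL: read data files from frontend/public/tmp instead of a remote server
REACT_APP_BASE_API_URL=
# Or keep a real server URL and force only specific files to come from /public/tmp
# REACT_APP_FORCE_STATIC=my_file1.json,my_file2.json
```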
If you need to run the frontend server locally to test server-side changes:
Copy `frontend_server/.env.example` into `frontend_server/.env.development`, and update `DATA_SERVER_URL` to point to a specific data server URL, similar to above.
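For example, from the project root:

```sh
cp frontend_server/.env.example frontend_server/.env.development
# then edit DATA_SERVER_URL in frontend_server/.env.development
```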
To run the frontend server locally, navigate to the `frontend_server/` directory and run:
node -r dotenv/config server.js dotenv_config_path=.env.development
This will start the server at http://localhost:8080. However, since it mostly serves static files from the `build/` directory, you will either need to
- run the frontend React app separately and set the `REACT_APP_BASE_API_URL` to `http://localhost:8080` (see above), or
- go to the `frontend/` directory and run `npm run build:development`, then copy the `frontend/build/` directory to `frontend_server/build/` (see the sketch after this list).
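A sketch of the second option, run from the project root:

```sh
cd frontend && npm run build:development && cd ..
rm -rf frontend_server/build              # clear any previous copy
cp -r frontend/build frontend_server/build
```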
Similarly to the frontend React app, the frontend server can be configured for local development by changing environment variables in `frontend_server/.env.development`. Copy `frontend_server/.env.example` to get started.
If you need to test Dockerfile changes or run the frontend in a way that more closely mirrors the production environment, you can run it using Docker. This will build both the frontend React app and the frontend server.
Run the following commands from the root project directory:
- Build the frontend Docker image:
docker build -t <some-identifying-tag> -f frontend_server/Dockerfile . --build-arg="DEPLOY_CONTEXT=development"
- Run the frontend Docker image:
docker run -p 49160:8080 -d <some-identifying-tag>
- Navigate to http://localhost:49160.
When building with Docker, changes will not automatically be applied; you will need to rebuild the Docker image.
Refer to Deploying your own instance with terraform for instructions on deploying the frontend server to your own GCP project.
To run unit tests, switch to the `frontend/` directory and run:
npm test
This will run tests in watch mode, so you can keep the tests running while developing.
To run e2e tests, switch to the `frontend/` directory and run:
npm run e2e
This will use the Playwright test runner to launch the React app if needed, and then confirm routing/rendering is working as expected. These tests are run on GitHub pull request commits.
To run url tests, switch to the `frontend/` directory and run:
npm run url
This will use the Playwright test runner to launch the React app if needed, and then confirm all outgoing links are returning successful responses. This runs weekly on GitHub.
Storybook is currently not actively maintained and may not function properly.
To run storybook locally, switch to the `frontend/` directory and run:
npm run storybook:development
Storybook local development also uses `frontend/.env.development` for configuration. However, Storybook environment variables must start with `STORYBOOK_` instead of `REACT_APP_`. Most environment variables have an equivalent `STORYBOOK_` version.
To create a "production" build do:
npm run build:${DEPLOY_CONTEXT}
This will use the frontend/.env.${DEPLOY_CONTEXT}
file for environment variables and outputs bundled files in the frontend/build/
directory. These are the files that are used for hosting the app in production environments.
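For example, assuming a `build:staging` script mirrors `build:development`:

```sh
npm run build:staging   # reads frontend/.env.staging and writes to frontend/build/
```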
Note: this is a one-way operation. Once you `eject`, you can't go back!
Don't do this unless there's a strong need to. See https://create-react-app.dev/docs/available-scripts/#npm-run-eject for further information.
- Install Cloud SDK (Quickstart)
- Install Terraform (Getting started)
- Install Docker Desktop (Get Docker)
gcloud config set project <project-id>
Unit tests can be run using pytest. Running pytest will recursively look for and execute test files.
pip install pytest
pytest
To test from the packaged version of the ingestion library, run `pip install -e python/ingestion` before testing.
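For example, from the project root (the `python/tests` path is the one referenced in the dependency steps below):

```sh
pip install -e python/ingestion   # optional: test against the packaged ingestion library
pytest python/tests               # or plain `pytest` to discover tests recursively
```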
- Create a virtual environment in your project directory, for example:
python3 -m venv .venv
- Activate the venv:
source .venv/bin/activate
- Install pip-tools and other packages as needed:
pip install pip-tools
To test a Cloud Run service triggered by a Pub/Sub topic, run
gcloud pubsub topics publish projects/<project-id>/topics/<your_topic_name> --message "your_message" --attribute=KEY1=VAL1,KEY2=VAL2
See Documentation for details.
Most python code should go in the `/python` directory, which contains packages that can be installed into any service. Each sub-directory of `/python` is a package with an `__init__.py` file, a `setup.py` file, and a `requirements.in` file. Shared code should go in one of these packages. If a new sub-package is added:
- Create a folder `/python/<new_package>` (see the sketch after this list). Inside, add:
  - An empty `__init__.py` file
  - A `setup.py` file with options: `name=<new_package>`, `package_dir={'<new_package>': ''}`, and `packages=['<new_package>']`
  - A `requirements.in` file with the necessary dependencies
- For each service that depends on `/python/<new_package>`, follow instructions at Adding an internal dependency.
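A minimal sketch of that scaffolding, using a hypothetical package name `new_package` and a placeholder dependency:

```sh
mkdir -p python/new_package
touch python/new_package/__init__.py

cat > python/new_package/setup.py <<'EOF'
from setuptools import setup

setup(
    name='new_package',
    package_dir={'new_package': ''},
    packages=['new_package'],
)
EOF

# List the package's dependencies (pandas is just an example)
echo "pandas" > python/new_package/requirements.in
```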
To work with the code locally, run `pip install ./python/<package>` from the root project directory. If your IDE complains about imports after changing code in `/python`, re-run `pip install ./python/<package>`.
Note: generally this should only be done for a new service. Otherwise, please add python code to the `python/` directory.
When adding a new root-level python directory, be sure to update `.github/workflows/linter.yml` to ensure the directory is linted and type-checked.
- Add the dependency to the appropriate `requirements.in` file.
  - If the dependency is used by `/python/<package>`, add it to the `/python/<package>/requirements.in` file.
  - If the dependency is used directly by a service, add it to the `<service_directory>/requirements.in` file.
- For each service that needs the dependency (for deps in `/python/<package>`, this means every service that depends on `/python/<package>`):
  - Run `cd <service_directory>`, then `pip-compile requirements.in`, where `<service_directory>` is the root-level directory for the service. This will generate a `requirements.txt` file.
  - Run `pip install -r requirements.txt` to ensure your local environment has the dependencies, or run `pip install <new_dep>` directly. Note, you'll first need to have followed the Python environment setup described above.
- Update the `requirements.txt` for unit tests (see the sketch after this list):
  pip-compile python/tests/requirements.in -o python/tests/requirements.txt
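For instance, adding a hypothetical dependency to the shared ingestion package and updating a service that uses it might look like:

```sh
# 1. Add the dependency (requests is only an example) to the package's requirements.in
echo "requests" >> python/ingestion/requirements.in

# 2. Regenerate requirements.txt for each service that depends on python/ingestion
#    (replace <service_directory> with the service's root-level directory)
cd <service_directory>
pip-compile requirements.in
pip install -r requirements.txt
cd ..

# 3. Update the unit test requirements
pip-compile python/tests/requirements.in -o python/tests/requirements.txt
```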
If a service adds a dependency on `/python/<some_package>`:
- Add `-r ../python/<some_package>/requirements.in` to the `<service_directory>/requirements.in` file. This will ensure that any deps needed for the package get installed for the service.
- Follow step 2 of Adding an external dependency to generate the relevant `requirements.txt` files.
- Add the line `RUN pip install ./python/<some_package>` to `<service_directory>/Dockerfile` (see the sketch after this list).
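As a concrete sketch, for a service that depends on `python/ingestion` (the service directory name `my_service` is hypothetical):

```sh
# Pull the package's deps into the service's requirements
echo "-r ../python/ingestion/requirements.in" >> my_service/requirements.in
cd my_service && pip-compile requirements.in && cd ..

# And in my_service/Dockerfile, add the line:
#   RUN pip install ./python/ingestion
```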
- Install Docker
- Install Docker Compose
- Set environment variables
- PROJECT_ID
- GCP_KEY_PATH (See documentation on creating and downloading keys.)
- DATASET_NAME
- GCS_LANDING_BUCKET
- GCS_MANUAL_UPLOADS_BUCKET
- MANUAL_UPLOADS_DATASET
- MANUAL_UPLOADS_PROJECT
- EXPORT_BUCKET
From inside the `airflow/dev/` directory:
- Build the Docker containers: `make build`
- Stand up the multi-container environment: `make run`
- At the UI link below, you should see the list of DAGs pulled from the `dags/` folder. These files will automatically update the Airflow webserver when changed.
- To run them manually, select the desired DAG, toggle it to `On`, and click `Trigger Dag`.
- When finished, turn down the containers: `make kill`
More info on Apache Airflow in general.
To upload to BigQuery from your local development environment, use these setup directions with an experimental Cloud project. This may be useful when iterating quickly if your Cloud Run ingestion job isn’t able to upload to BigQuery for some reason such as JSON parsing errors.
Before deploying, make sure you have installed Terraform and a Docker client (e.g. Docker Desktop). See Set up above.
1. Edit the `config/example.tfvars` file and rename it to `config/terraform.tfvars`.
2. Log in to gcloud: `gcloud auth application-default login`
3. Log in to docker: `gcloud auth configure-docker`
4. Build and push docker images: `./scripts/push_images`
5. Set up your cloud environment with terraform:
   pushd config
   terraform apply --var-file digest.tfvars
   popd
6. Configure the airflow server:
   pushd airflow
   ./upload-dags.sh
   ./update-environment-variables.sh
   popd
To redeploy, e.g. after making changes to a Cloud Run service, repeat steps 4-5. Make sure you run the docker commands from your base project dir and the terraform commands from the `config/` directory.
Terraform doesn't automatically diff the contents of cloud run services, so simply calling `terraform apply` after making code changes won't upload your new changes. This is why Steps 4 and 5 are needed above. Here is an alternative:

Use `terraform taint` to mark a resource as requiring redeploy, e.g. `terraform taint google_cloud_run_service.ingestion_service`.
You can then set the `ingestion_image_name` variable in your tfvars file to `<your-ingestion-image-name>` and `gcs_to_bq_image_name` to `<your-gcs-to-bq-image-name>`. Then replace Step 5 above with just `terraform apply`. Step 4 is still required.
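A sketch of that alternative flow, run from the `config/` directory (the resource name is the one from the example above):

```sh
terraform taint google_cloud_run_service.ingestion_service   # mark the service for redeploy
terraform apply                                               # recreates the tainted resource
```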
- Go to Cloud Console.
- Search for Composer.
- A list of environments should be present. Look for `data-ingestion-environment`.
- Click into the details, and navigate to the environment configuration tab.
- One of the properties listed is Airflow web UI link.
All files in the `airflow/dags` directory will be uploaded to the test airflow environment. Please only put DAG files in this directory.