Generates perceptual diffs of a git repository as you make pull requests.
System Requirements
Download and install the following:
- Python 2 w/ pip installed
- Node w/ yarn installed
- Redis
- PostgreSQL
- PhantomJS
Create databases:
Run psql postgres to open the interactive terminal, then run these commands:
CREATE DATABASE garnish_db;
CREATE USER garnish_user WITH LOGIN PASSWORD 'garnish';
This will create a new database and user used by the app.
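If you need to point the app's Django settings at this database yourself, the entry could look like the following sketch (the host and port are assumptions for a default local PostgreSQL install; the project's actual settings module may already handle this):

```python
# settings.py sketch: connect Django to the database created above.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'garnish_db',
        'USER': 'garnish_user',
        'PASSWORD': 'garnish',
        'HOST': 'localhost',   # assumption: local default install
        'PORT': '5432',        # assumption: default PostgreSQL port
    }
}
```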
Python and Node.js Packages
$ pip install -r requirements.txt
$ yarn install
Run Docker:
$ docker build . -t <IMAGE_ORG>/<IMAGE_NAME>:<IMAGE_TAG>
$ docker run -it <IMAGE_ORG>/<IMAGE_NAME>:<IMAGE_TAG> bash
This setup covers three testing configurations:
- Production, the live hosted environment
- A VM replica of production, for testing how the deployment configuration may behave in production
- Local development, for running the basic application without production in mind
You may create a file named HookCatcher/HookCatcher/user_settings.py (in the same directory as manage.py) that instantiates all of these environment variables as normal Python strings. Define the following environment variables:
GIT_REPO='YOUR_GITHUB_USERNAME/YOUR_GITHUB_REPO'
Your Github personal access token:
GIT_OAUTH='YOU_AUTH_ID_HERE'
The name of the directory in the Git repository that stores the state representation JSON files. See this folder for an example:
STATES_FOLDER='NAME_OF_YOUR_STATES_FOLDER'
Set which screen capture tools and resolutions you want. Add the screenshot configuration file to the root of this directory. See this file for an example:
SCREENSHOT_CONFIG='PATH_TO_YOUR_CONFIG_FILE'
Specify the port that is running Redis (defaults to 6379):
REDIS_PORT='REDIS_PORT_NUMBER'
Specify the port that is running PostgreSQL (defaults to 5432):
POSTGRES_PORT='POSTGRES_PORT_NUMBER'
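For example, a complete user_settings.py might look like the following sketch (every value is a placeholder):

```python
# HookCatcher/HookCatcher/user_settings.py -- all values are placeholders
GIT_REPO = 'your-github-username/your-repo'
GIT_OAUTH = 'your-personal-access-token'
STATES_FOLDER = 'states'            # folder of state JSON files in the target repo
SCREENSHOT_CONFIG = 'config.json'   # path to your screenshot configuration file
REDIS_PORT = '6379'                 # default Redis port
POSTGRES_PORT = '5432'              # default PostgreSQL port
```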
$ brew install kubernetes-cli
$ brew install kubernetes-helm
$ helm init
Connect to the production cluster:
gcloud container clusters get-credentials health-inspector --zone us-central1-f --project health-inspector-182716
To find this command in the Cloud Console yourself:
- Log into console.cloud.google.com and go to the Health Inspector project
- Click "Kubernetes Engine" on the left hand dropdown menu
- Click "Kubernetes clusters" (should be default)
- Select "health-inspector" under the Kubernetes clusters list
- Click the "Connect" button on the top
- You will be presented with a gcloud command-line command to copy and run
Add storages to your INSTALLED_APPS:
INSTALLED_APPS = (
...
'storages',
...
)
Define these variables in your settings.py file:
import os

# Leverage object file storage in an s3-compatible bucket
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.getenv('AWS_STORAGE_BUCKET_NAME')
AWS_S3_ENDPOINT_URL = os.getenv('AWS_S3_ENDPOINT_URL')
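To sanity-check that files actually land in the bucket, you can exercise Django's storage API from a Django shell (an illustrative snippet, not part of the app; the file name is arbitrary):

```python
# Save a small test file through the configured storage backend.
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

path = default_storage.save('sanity-check.txt', ContentFile(b'hello'))
print(default_storage.url(path))  # the URL should point at AWS_S3_ENDPOINT_URL
```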
Create the following tags:
s3:
AWS_ACCESS_KEY_ID: <SECRET_READ_BELOW>
AWS_SECRET_ACCESS_KEY: <SECRET_READ_BELOW>
AWS_STORAGE_BUCKET_NAME: "health-inspector"
AWS_S3_ENDPOINT_URL: "https://storage.googleapis.com"
To find the values of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY:
- Log into console.cloud.google.com and go to the Health Inspector project
- Click "Storage" on the left hand dropdown menu
- Click "Settings" (NOT default)
- Select "Interoperability" from the tabs on top
- Click the "Create a new key" button at the bottom
- Copy the Access Key to "AWS_ACCESS_KEY_ID" and Secret to "AWS_SECRET_ACCESS_KEY"
1. Edit the values under the image: tag in /chart/hookcatcher/values.yaml to match the name of the Docker build:
org: <IMAGE_ORG>
name: <IMAGE_NAME>
tag: <IMAGE_TAG>
2. Build and push the Docker image:
$ docker build . -t <IMAGE_ORG>/<IMAGE_NAME>:<IMAGE_TAG>
$ docker push <IMAGE_ORG>/<IMAGE_NAME>:<IMAGE_TAG>
3. Deploy the chart:
$ helm upgrade --install <NAME_OF_DEPLOYMENT> <PATH_TO_CHART_DIRECTORY> --debug
$ kubectl get pods
When all of these pods show READY 1/1, they are healthy and ready to go. Congrats!
Check the ingress for the external address:
$ kubectl get ing
Try checking the logs of the pod:
$ kubectl logs --follow=true <POD_NAME>
Inspect the pod's events and configuration:
$ kubectl describe pod <POD_NAME>
Or open a shell inside the pod:
$ kubectl exec -it <POD_NAME> bash
Go to this link for instructions on downloading a VM and minikube.
$ minikube start
$ eval $(minikube docker-env)
$ helm init
Then configure storages in INSTALLED_APPS, the settings.py storage variables, and the s3 tags exactly as described in the production setup above; the steps (including finding AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Cloud Console) are identical.
Edit the values under the image: tag in /chart/hookcatcher/values.yaml to match the name of the Docker build:
org: <IMAGE_ORG>
name: <IMAGE_NAME>
tag: <IMAGE_TAG>
$ docker build . -t <IMAGE_ORG>/<IMAGE_NAME>:<IMAGE_TAG>
Because of eval $(minikube docker-env) above, the image is built inside minikube's Docker daemon, so there is no need to push it.
$ helm upgrade --install <NAME_OF_DEPLOYMENT> <PATH_TO_CHART_DIRECTORY> --debug
- Open a new window and start Redis by running the command:
$ redis-server
- From the root, navigate into the HookCatcher directory:
$ cd HookCatcher/
- Open however many more windows you like and start a Redis Queue worker in each:
$ python manage.py rqworker default
- Run the migrations to set up the database (this only needs to be run the first time):
$ python manage.py migrate
- To start the server, run:
$ python manage.py runserver (port)
NOTE: port defaults to 8000
To view the site, enter the following URL into your browser: http://127.0.0.1:8000/
NOTE: make sure DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage' is not set if you have been publishing to production recently.
From the root of this directory, use the following Django commands.
NOTE: you must run redis-server before you run the auto-screenshot command
$ python manage.py auto-screenshot <Github Pull Request Number>
$ python manage.py simpleGetScreenshot <URL> <Image Name>
$ python manage.py simpleGetDiff <Image Name 1> <Image Name 2> <Resulting Diff Name>
- The application parses a Github API payload to store metadata about the recently submitted pull request.
NOTE: Make sure to have a folder in the target repository that defines the states you wish to capture. The path to this folder should be defined by the STATES_FOLDER environment variable. A state can simply be a JSON file with the page URL path, a unique name to identify the state, and a comprehensive description. For example:
{
"url": "/user/#/signin",
"name": "Login Page",
"description": "View of the login page when a user first visits the site."
}
Please use a single JSON file for each state, but feel free to define as many states as you would like in this folder.
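As an illustration, reading such a folder can be as simple as the following sketch (a hypothetical helper, not the app's actual loader):

```python
import glob
import json
import os

def load_states(states_folder):
    # Each *.json file defines one state with 'url', 'name',
    # and 'description' keys, as in the example above.
    states = []
    for path in sorted(glob.glob(os.path.join(states_folder, '*.json'))):
        with open(path) as f:
            states.append(json.load(f))
    return states
```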
Devs: these processes are defined by this file
- Open the web application and navigate to the pull request of the repository of interest. There, you will be prompted with two textboxes to enter the host domain URLs of the head and base branches that pertain to this pull request.
- This will schedule the rest of the processes, including taking screenshots of all the states for the head and base branches, and then creating a perceptual difference of these two versions.
Devs: these processes are defined by this file
- For a granular test to see how the screenshotting procedure is functioning, you can generate images for a particular state defined by this file.
- This script relies on the screenshot configurations in this configuration file to know all the screen sizes and browsers to use for screenshotting.
- If the specified browser is Chrome, Puppeteer drives a headless Chromium browser in the background that you can view here. You can also use node to isolate this script and test Puppeteer's functionality with the following command:
$ node puppeteer.js --url=<URL> --imgName=<IMAGE_FILE> --imgWidth=<IMAGE_WIDTH> --imgHeight=<IMAGE_HEIGHT>
- For a granular test of the pixel-by-pixel visual diffing, we leverage ImageMagick. Provide two existing images to compare and the name of the new diff image to generate the visual regression (a minimal sketch follows this list).
Devs: these processes are defined by this file
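As promised above, here is a minimal sketch of the ImageMagick comparison, assuming ImageMagick's compare tool is installed; the file names are placeholders, and the app's own wrapper for this is the simpleGetDiff command:

```python
import subprocess

# `compare` exits nonzero when the images differ, so use call, not check_call.
subprocess.call([
    'compare',
    '-metric', 'AE',   # AE = absolute error, the count of differing pixels
    'base.png',        # screenshot from the base branch
    'head.png',        # screenshot from the head branch
    'diff.png',        # output image highlighting the regions that changed
])
```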