test(snapshot): major improvements #15350

Merged: 20 commits, Jun 28, 2024
Changes from 17 commits
8 changes: 4 additions & 4 deletions .eslintignore
@@ -28,9 +28,9 @@ robot-server/**
 shared-data/python/**
 hardware-testing/**

-# app-testing don't format the json protocols
-app-testing/files
-# app testing don't format the snapshots
-app-testing/tests/__snapshots__
+# analyses-snapshot-testing don't format the json protocols
+analyses-snapshot-testing/files
+# don't format the snapshots
+analyses-snapshot-testing/tests/__snapshots__
 opentrons-ai-server/package
 opentrons-ai-server/api/storage/index/
@@ -1,11 +1,11 @@
-# This workflow runs lint on pull requests that touch anything in the app-testing directory
+# This workflow runs lint on pull requests that touch anything in the analyses-snapshot-testing directory

-name: 'app-testing lint'
+name: 'analyses-snapshot-testing lint'

 on:
   pull_request:
     paths:
-      - 'app-testing/**'
+      - 'analyses-snapshot-testing/**'

 concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -17,7 +17,7 @@ defaults:

 jobs:
   lint:
-    name: 'app-testing lint'
+    name: 'analyses-snapshot-testing lint'
     timeout-minutes: 5
     runs-on: 'ubuntu-latest'
     steps:
@@ -29,20 +29,20 @@ jobs:
         with:
           python-version: '3.12'
           cache: 'pipenv'
-          cache-dependency-path: app-testing/Pipfile.lock
+          cache-dependency-path: analyses-snapshot-testing/Pipfile.lock
       - name: Setup
         id: install
-        working-directory: ./app-testing
+        working-directory: ./analyses-snapshot-testing
         run: make setup
       - name: black-check
         if: always() && steps.install.outcome == 'success' || steps.install.outcome == 'skipped'
-        working-directory: ./app-testing
+        working-directory: ./analyses-snapshot-testing
         run: make black-check
       - name: ruff
         if: always() && steps.install.outcome == 'success' || steps.install.outcome == 'skipped'
-        working-directory: ./app-testing
+        working-directory: ./analyses-snapshot-testing
         run: make ruff-check
       - name: mypy
         if: always() && steps.install.outcome == 'success' || steps.install.outcome == 'skipped'
-        working-directory: ./app-testing
+        working-directory: ./analyses-snapshot-testing
         run: make mypy
34 changes: 17 additions & 17 deletions .github/workflows/analyses-snapshot-test.yaml
@@ -3,12 +3,12 @@ name: Analyses Snapshot Test
 on:
   workflow_dispatch:
     inputs:
-      TARGET:
-        description: 'Target branch or tag'
+      ANALYSIS_REF:
+        description: 'Branch or tag that provides the analysis output at test runtime'

Review comment (Contributor) on lines +6 to +7: nit: Add something like "what ref you want to compare snapshot results to".

         required: true
         default: 'edge'
-      TEST_SOURCE:
-        description: 'Target for the test code'
+      SNAPSHOT_REF:
+        description: 'Branch or tag that provides the snapshot and test code at test runtime'

Review comment (Contributor): nit: "what ref you want to use to run code and generate snapshots with". This is something that has always confused me every time I go to run something, so being a bit verbose would be helpful here. If you can, provide a link to the README.

         required: true
         default: 'edge'
   schedule:
@@ -28,57 +28,57 @@ jobs:
     timeout-minutes: 15
     runs-on: ubuntu-latest
     env:
-      TARGET: ${{ github.event.inputs.TARGET || github.head_ref || 'edge' }}
-      TEST_SOURCE: ${{ github.event.inputs.TEST_SOURCE || github.head_ref || 'edge' }}
+      ANALYSIS_REF: ${{ github.event.inputs.ANALYSIS_REF || github.head_ref || 'edge' }}
+      SNAPSHOT_REF: ${{ github.event.inputs.SNAPSHOT_REF || github.head_ref || 'edge' }}

     steps:
       - name: Checkout Repository
         uses: actions/checkout@v4
         with:
-          ref: ${{ env.TEST_SOURCE }}
+          ref: ${{ env.SNAPSHOT_REF }}

       - name: Docker Build
-        working-directory: app-testing
+        working-directory: analyses-snapshot-testing
         run: make build-opentrons-analysis

       - name: Set up Python 3.12
         uses: actions/setup-python@v5
         with:
           python-version: '3.12'
           cache: 'pipenv'
-          cache-dependency-path: app-testing/Pipfile.lock
+          cache-dependency-path: analyses-snapshot-testing/Pipfile.lock

       - name: Setup Python Dependencies
-        working-directory: app-testing
+        working-directory: analyses-snapshot-testing
         run: make setup

       - name: Run Test
         id: run_test
-        working-directory: app-testing
+        working-directory: analyses-snapshot-testing
         run: make snapshot-test

       - name: Upload Report
         if: '!cancelled()'
         uses: actions/upload-artifact@v4
         with:
           name: test-report
-          path: app-testing/results/
+          path: analyses-snapshot-testing/results/

       - name: Handle Test Failure
         if: failure()
-        working-directory: app-testing
+        working-directory: analyses-snapshot-testing
         run: make snapshot-test-update

       - name: Create Snapshot update Request
         id: create-pull-request
         if: failure()
         uses: peter-evans/create-pull-request@v5
         with:
-          commit-message: 'fix(app-testing): snapshot failure capture'
-          title: 'fix(app-testing): snapshot failure capture'
+          commit-message: 'fix(analyses-snapshot-testing): snapshot failure capture'
+          title: 'fix(analyses-snapshot-testing): ${{ env.ANALYSIS_REF }} snapshot failure capture'
           body: 'This PR is an automated snapshot update request. Please review the changes and merge if they are acceptable or find your bug and fix it.'

y3rsh marked this conversation as resolved.

-          branch: 'app-testing/${{ env.TARGET }}-from-${{ env.TEST_SOURCE}}'
-          base: ${{ env.TEST_SOURCE}}
+          branch: 'analyses-snapshot-testing/${{ env.ANALYSIS_REF }}-from-${{ env.SNAPSHOT_REF}}'
+          base: ${{ env.SNAPSHOT_REF}}

       - name: Comment on PR
         if: failure() && github.event_name == 'pull_request'
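The `env` block in this workflow resolves each ref with GitHub Actions' `||` fallback chain: the manual `workflow_dispatch` input wins, then the PR head ref, then the literal `'edge'`. A rough Python illustration of that resolution order (a hypothetical helper for explanation only, not part of this PR):

```python
def resolve_ref(*candidates: "str | None", default: str = "edge") -> str:
    """Return the first non-empty candidate, mirroring the expression
    ${{ github.event.inputs.X || github.head_ref || 'edge' }}."""
    for candidate in candidates:
        if candidate:  # Actions treats an empty/undefined value as falsy here
            return candidate
    return default

# Manual run with no input, triggered from a PR branch:
print(resolve_ref(None, "my-feature-branch"))  # -> my-feature-branch
# Scheduled run: no input, no head ref, so the default applies:
print(resolve_ref(None, None))                 # -> edge
```

This is why the scheduled nightly run always compares `edge` against `edge`, while a PR run uses the PR's own branch for both refs unless overridden.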
File renamed without changes.
24 changes: 19 additions & 5 deletions app-testing/Makefile → analyses-snapshot-testing/Makefile
@@ -48,29 +48,43 @@ teardown:

 .PHONY: format-readme
 format-readme:
-	yarn prettier --ignore-path .eslintignore --write app-testing/**/*.md
+	yarn prettier --ignore-path .eslintignore --write analyses-snapshot-testing/**/*.md

 .PHONY: install-pipenv
 install-pipenv:
 	python -m pip install -U pipenv

+ANALYSIS_REF ?= edge
+PROTOCOL_NAMES ?= all
+OVERRIDE_PROTOCOL_NAMES ?= all
+
+export ANALYSIS_REF
+export PROTOCOL_NAMES
+export OVERRIDE_PROTOCOL_NAMES
+
 .PHONY: snapshot-test
 snapshot-test:
+	@echo "ANALYSIS_REF is $(ANALYSIS_REF)"
+	@echo "PROTOCOL_NAMES is $(PROTOCOL_NAMES)"
+	@echo "OVERRIDE_PROTOCOL_NAMES is $(OVERRIDE_PROTOCOL_NAMES)"
 	python -m pipenv run pytest -k analyses_snapshot_test -vv

 .PHONY: snapshot-test-update
 snapshot-test-update:
+	@echo "ANALYSIS_REF is $(ANALYSIS_REF)"
+	@echo "PROTOCOL_NAMES is $(PROTOCOL_NAMES)"
+	@echo "OVERRIDE_PROTOCOL_NAMES is $(OVERRIDE_PROTOCOL_NAMES)"
 	python -m pipenv run pytest -k analyses_snapshot_test --snapshot-update

-TARGET ?= edge
 CACHEBUST := $(shell date +%s)

 .PHONY: build-opentrons-analysis
 build-opentrons-analysis:
-	@echo "Building docker image for $(TARGET)"
-	@echo "If you want to build a different version, run 'make build-opentrons-analysis TARGET=<version>'"
+	@echo "Building docker image for $(ANALYSIS_REF)"
+	@echo "The image will be named opentrons-analysis:$(ANALYSIS_REF)"
+	@echo "If you want to build a different version, run 'make build-opentrons-analysis ANALYSIS_REF=<version>'"
 	@echo "Cache is always busted to ensure latest version of the code is used"
-	docker build --build-arg OPENTRONS_VERSION=$(TARGET) --build-arg CACHEBUST=$(CACHEBUST) -t opentrons-analysis:$(TARGET) citools/.
+	docker build --build-arg OPENTRONS_VERSION=$(ANALYSIS_REF) --build-arg CACHEBUST=$(CACHEBUST) -t opentrons-analysis:$(ANALYSIS_REF) citools/.

 .PHONY: generate-protocols
 generate-protocols:
File renamed without changes.
File renamed without changes.
40 changes: 40 additions & 0 deletions analyses-snapshot-testing/README.md
@@ -0,0 +1,40 @@
# Analyses Generation and Snapshot Testing

## Setup

1. Follow the instructions in [DEV_SETUP.md](../DEV_SETUP.md)
1. `cd analyses-snapshot-testing`
1. Use pyenv to install Python 3.12 and set it as the local version for this directory
1. `make setup`
1. Have Docker installed and ready

## Concepts

- If working locally, the branch you have checked out provides the test code and snapshots you are working with.
- In CI this is the `SNAPSHOT_REF`: the branch or tag that provides the test code and snapshots that the generated analyses are compared against.
- The `ANALYSIS_REF` is the branch or tag that you want analyses generated from.

## Running the tests locally

- Compare the current branch snapshots to analyses generated from the edge branch
  - `make build-opentrons-analysis ANALYSIS_REF=edge` builds a docker image named and tagged `opentrons-analysis:edge`
    - this pulls the latest edge every time it builds!
  - `make snapshot-test ANALYSIS_REF=edge`
    - This runs the test. The test:
      - Spins up a container from the `opentrons-analysis:edge` image (`ANALYSIS_REF=edge` specifies the image to use)
      - Generates analyses as `.json` files for all protocols defined in [protocols.py](./automation/data/protocols.py) and [protocols_with_overrides.py](./automation/data/protocols_with_overrides.py)
      - Compares the generated analyses to the snapshots in the [./tests/\_\_snapshots\_\_/](./tests/__snapshots__/) directory

## Updating the snapshots

- Assuming you have already built the `opentrons-analysis:edge` image:
  - `make snapshot-test-update ANALYSIS_REF=edge`
    - This updates the snapshots in the [./tests/\_\_snapshots\_\_/](./tests/__snapshots__/) directory with the analyses generated from the edge branch

## Running the tests against specific protocols

> We omit `ANALYSIS_REF=edge` here because it is the default in the Makefile.

- `make snapshot-test PROTOCOL_NAMES=Flex_S_v2_19_Illumina_DNA_PCR_Free OVERRIDE_PROTOCOL_NAMES=none`
- `make snapshot-test PROTOCOL_NAMES=none OVERRIDE_PROTOCOL_NAMES=Flex_X_v2_18_NO_PIPETTES_Overrides_BadTypesInRTP`
- `make snapshot-test PROTOCOL_NAMES="Flex_S_v2_19_Illumina_DNA_PCR_Free,OT2_S_v2_18_P300M_P20S_HS_TC_TM_SmokeTestV3" OVERRIDE_PROTOCOL_NAMES=none`
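The `PROTOCOL_NAMES` and `OVERRIDE_PROTOCOL_NAMES` variables above each accept `all`, `none`, or a comma-separated list of protocol names. A minimal Python sketch of that parsing convention (illustrative only; the real selection logic lives in `ProtocolRegistry._what_protocols`):

```python
def select_names(raw: str, defined: "list[str]") -> "list[str]":
    """'all' -> every defined name, 'none' -> nothing,
    otherwise a comma-separated, whitespace-tolerant list of names."""
    key = raw.strip().lower()
    if key == "all":
        return list(defined)
    if key == "none":
        return []
    return [name.strip() for name in raw.split(",")]

defined = [
    "Flex_S_v2_19_Illumina_DNA_PCR_Free",
    "OT2_S_v2_18_P300M_P20S_HS_TC_TM_SmokeTestV3",
]
# The quoted comma-separated form from the examples above:
print(select_names("Flex_S_v2_19_Illumina_DNA_PCR_Free, OT2_S_v2_18_P300M_P20S_HS_TC_TM_SmokeTestV3", defined))
```

Unknown names are not silently dropped by the real registry; the `getattr` lookup raises, which surfaces typos immediately.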
54 changes: 54 additions & 0 deletions analyses-snapshot-testing/automation/data/protocol_registry.py
@@ -0,0 +1,54 @@
from typing import Optional

from automation.data.protocol import Protocol
from automation.data.protocol_with_overrides import ProtocolWithOverrides
from automation.data.protocols import Protocols
from automation.data.protocols_with_overrides import ProtocolsWithOverrides

ALL_PROTOCOLS = "all"


class ProtocolRegistry:
    def __init__(self, protocol_names: str = ALL_PROTOCOLS, override_protocol_names: str = ALL_PROTOCOLS) -> None:
        self.protocols: Protocols = Protocols()
        self.protocols_with_overrides: ProtocolsWithOverrides = ProtocolsWithOverrides()
        self.protocol_names = protocol_names
        self.override_protocol_names = override_protocol_names
        self.protocols_to_test: Optional[list[Protocol]] = self._what_protocols()

    def _what_protocols(self) -> Optional[list[Protocol]]:
        protocols_to_test: list[Protocol] = []

        if self.protocol_names.lower() == ALL_PROTOCOLS:
            protocols_to_test.extend(self.all_defined_protocols())
        elif self.protocol_names.lower() == "none":
            pass
        else:
            for protocol_name in [x.strip() for x in self.protocol_names.split(",")]:
                protocol: Protocol = getattr(self.protocols, protocol_name)  # raises AttributeError for unknown names
                protocols_to_test.append(protocol)

        if self.override_protocol_names.lower() == ALL_PROTOCOLS:
            protocols_to_test.extend(self.all_defined_protocols_with_overrides())
        elif self.override_protocol_names.lower() == "none":
            pass
        else:
            for protocol_with_overrides__name in [x.strip() for x in self.override_protocol_names.split(",")]:
                protocol_with_overrides: ProtocolWithOverrides = getattr(
                    self.protocols_with_overrides, protocol_with_overrides__name
                )  # raises AttributeError for unknown names
                if protocol_with_overrides.protocols is not None:
                    protocols_to_test.extend(protocol_with_overrides.protocols)
        if protocols_to_test == []:
            return None
        return protocols_to_test

    def all_defined_protocols(self) -> list[Protocol]:
        return [getattr(self.protocols, prop) for prop in dir(self.protocols) if "__" not in prop]

    def all_defined_protocols_with_overrides(self) -> list[Protocol]:
        protocols_with_overrides = [
            getattr(self.protocols_with_overrides, prop) for prop in dir(self.protocols_with_overrides) if "__" not in prop
        ]
        # Flatten the list of lists into a single list of protocols
        return [protocol for protocol_with_overrides in protocols_with_overrides for protocol in protocol_with_overrides.protocols]
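The registry discovers protocols by reflection: `dir()` enumerates the attributes of a container object whose attributes are the protocol definitions, and `getattr()` fetches each one, filtering out dunder names. A minimal runnable sketch of that pattern (using a hypothetical stand-in class, not the real `Protocols`):

```python
class FakeProtocols:
    # Stand-in for automation.data.protocols.Protocols: one attribute per protocol.
    proto_a = "analysis-of-a"
    proto_b = "analysis-of-b"

def all_defined(container: object) -> list:
    # Same filter as ProtocolRegistry.all_defined_protocols: keep every
    # attribute whose name contains no double underscore.
    return [getattr(container, prop) for prop in dir(container) if "__" not in prop]

# dir() returns attribute names sorted alphabetically, so the order is stable.
print(all_defined(FakeProtocols()))  # -> ['analysis-of-a', 'analysis-of-b']
```

One consequence of this design: adding a protocol to the suite is just adding an attribute to the `Protocols` class, with no separate registration list to keep in sync.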