Testing

The code deployed to prod.cms.va.gov undergoes numerous tests before merging, and automated tests are run before deployment.

The automated test suite for cms.va.gov is defined in the tests.yml file and is run using the Task tool, allowing the same command to be used for local development, CI (via Tugboat) and for DEV, STAGING and PROD environments.

Task is installed by the install_task.sh script.
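For reference, Taskfile definitions follow go-task's version 3 YAML schema. The entry below is a hypothetical sketch of the shape such a task might take, not an excerpt from the actual tests.yml:

```yaml
# Hypothetical sketch of a Taskfile entry (go-task v3 schema);
# see tests.yml for the real task definitions.
version: '3'

tasks:
  va/tests/example:
    desc: Run an example check.
    cmds:
      - echo "running example check"
```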

Goals

To adopt a strong test driven culture, the testing tools must:

  1. Run the same tests in multiple environments with minimal configuration and a single command.
  2. Allow developers to define tests to include in the suite and to write the tests.
  3. Provide feedback to developers as quickly as possible and make test output as readable and accessible as possible. (e.g. GitHub issue comments with failure reasons)

Scope

To avoid entanglement, tests should, when possible, stay within their own area of concern; practice separation of concerns as much as possible. There are three areas of concern.

  1. CMS - This covers logging in, editing, and publishing content. Its boundary of concern ends at the GraphQL endpoints.
  2. Front-end - This is the Metalsmith build that creates the HTML front-end from the content accessed at the GraphQL endpoints of the CMS.
  3. Content - This covers making sure that menu links and other links within content work. Section 508 (accessibility) testing is also part of content testing.

Entanglement should be avoided because it forces people on one team to spend time solving issues outside their area of concern. For example, developers chasing down a mis-entered content link is not a good use of time. End-to-end coverage should be achieved, when possible, by each team providing coverage for its own area of concern.

VA.gov CMS Test Suite

Always refer to the file tests.yml for the canonical list of required tests that are included in the automated testing system and must pass before merge or deployment.

There are 3 main types of tests:

  1. Static Tests: Static tests are run by git pre-commit hooks: developers cannot commit code if any of these tests fail. Static tests require only the source code; no active database is needed.

    See the hooks/pre-commit file for the exact command run before git commit.

    Each static test should also be run by a corresponding GitHub Action and block PR merges on failure. GitHub Actions are added and edited in the GitHub workflows directory. When adding a new GitHub Action, our preferred process to minimize technical debt and maintenance is the following:

    1. When possible, use a well-supported action from the open-source community. The reviewdog organization on GitHub is often a good place to start looking.
    2. If the Action cannot meet our requirements without modifications, resolve in this order:
       1. Modify the existing Action for configurability and attempt to contribute the modification upstream.
       2. If the contribution is not accepted or greater modifications are needed, create a new Action in a repo under the DSVA GitHub organization. If possible, try to contribute this new Action upstream under the reviewdog space.

    Existing tests:

    1. va/tests/phpcs - "PHP CodeSniffer" tests ensure coding standards are met.
    2. CodeQL - Automated vulnerability scanning
    3. ESlint - JavaScript linting
    4. PHPCS - PHP linting
    5. StyleLint (modules) - CSS/Sass style checks for custom Drupal module code
    6. StyleLint (themes) - CSS/Sass style checks for custom Drupal theme code
    7. PHPStan - Static code analysis
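    As an illustration of the first preference above, a workflow using a community reviewdog action might look roughly like the following sketch (action versions and inputs are assumptions and should be checked against the action's own documentation and the repository's actual workflows):

```yaml
# Hypothetical workflow sketch using a reviewdog community action;
# see the .github/workflows directory for the real workflows.
name: ESLint
on: pull_request
jobs:
  eslint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: reviewdog/action-eslint@v1
        with:
          reporter: github-pr-review
```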
  2. WEB Integration Tests ("WEB" here refers to the decoupled front-end content-build repo)

    1. Behat Decoupled.feature runs a content build and tests for content changes.

    The long term goal is to run all of the WEB project's tests in our test suite, but more work is needed in the WEB codebase to make that possible.

  3. Functional Tests

    1. va/tests/phpunit - The CMS PHPUnit Tests include a number of functional tests, including creating media, testing GraphQL, performance and security. See the tests/phpunit folder to see all the PHPUnit tests.

      Utilizing the DrupalTestTraits library with PHPUnit gives developers the ability to bootstrap Drupal and write tests in PHP without an abstraction layer provided by Gherkin. PHPUnit is the preferred tool to write tests due to its speed of execution.

      Run all tests:

      ddev phpunit
      

      Run a specific test with the "path" argument: the path can be a specific test file or a directory containing tests.

      ddev phpunit-run {Path-to-test}
      
      ddev phpunit-run docroot/modules/contrib/config_split/tests/src/Kernel/ConfigSplitCliServiceTest.php
      

      Run a specific test:

      ddev phpunit-run {Path-to-test} --filter {test-function-name}
      
      ddev phpunit-run docroot/modules/contrib/config_split/tests/src/Kernel/ConfigSplitCliServiceTest.php --filter testGrayAndBlackListExport
      

      Run a group of PHPUnit tests:

      ddev phpunit-run . --group security
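      To illustrate the DrupalTestTraits style described above, a minimal test might look like the following sketch. The class and values are illustrative, not taken from the actual suite; this cannot run outside a bootstrapped Drupal site.

```php
<?php

namespace tests\phpunit;

use weitzman\DrupalTestTraits\ExistingSiteBase;

/**
 * Hypothetical example of an ExistingSite test; names are illustrative.
 *
 * @group example
 */
class ExamplePageTest extends ExistingSiteBase {

  /**
   * Creates a node against the running site and checks its label.
   */
  public function testPageCreation(): void {
    // createNode() is provided by DrupalTestTraits and the node is
    // cleaned up automatically when the test finishes.
    $node = $this->createNode([
      'type' => 'page',
      'title' => 'Example page',
    ]);
    $this->assertSame('Example page', $node->label());
  }

}
```

      The @group annotation is what makes group-filtered runs like ddev phpunit-run . --group security possible.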
    2. va/tests/behat - The Behat test suite includes:

      1. Content-Edit-Web-Rebuild test:

        This test is critical: it ensures the CMS does not break the WEB build.

        See tests/behat/features/content.feature

      2. Permissions Test:

        See tests/behat/features/perms.feature

      3. Drupal Spec Tests: The DST tool enforces the desired structure of the Drupal site by generating Gherkin feature files. See the tests/behat/drupal-spec-tool folder for all of the tests and for more information on managing the Drupal Spec Tool and VA's Spec Tool doc.
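      The generated feature files use ordinary Gherkin syntax. The fragment below is only an illustration of that shape (step wording, tags, and table contents are hypothetical, not copied from an actual DST feature):

```gherkin
# Hypothetical illustration only; see tests/behat/drupal-spec-tool
# for the real generated features.
@dst
Feature: Content model
  In order to keep the site structure stable
  As a developer
  I want the Drupal configuration to match the spec.

  Scenario: Content types
    Then exactly the following content types should exist
      | Name | Machine name | Description          |
      | Page | page         | An ordinary web page |
```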

      Run a specific behat test with the --name or --tags options:

      ddev behat --tags=dst
    3. va/tests/cypress - The Cypress test suite includes end-to-end behavioral and accessibility tests.

      To run a specific test:

      node_modules/.bin/cypress run --spec "tests/cypress/integration/behavioral/content_release.feature"

      To run and debug cypress tests in a web UI, run the following commands from the project root on your local machine (not within ddev):

      npm run test:cypress:interactive

      You will see a window with a list of tests. Just click on the name of any test to run it within a browser.

Running Tests

The main way to run tests is the ./bin/task --taskfile=tests.yml command.

Run ./bin/task --help for more information.

NOTE: The bin directory is automatically included in the $PATH for all Composer commands, including Task itself.

See Composer Paths for more information on Composer and $PATH.

Local Testing with ddev: ddev test

This project is configured to work with ddev out of the box.

ddev commands are listed in ddev help. There are some helper commands that map to shell commands:

ddev Command           Shell Command
ddev task              ./bin/task
ddev test              ./bin/task --taskfile=tests.yml
ddev test va/deploy    ./bin/task --taskfile=tests.yml va/deploy
ddev web-build         composer va:web:build
ddev phpunit           ./bin/task --taskfile=tests.yml va/tests/phpunit
ddev phpstan           bin/phpstan analyze
ddev behat             cd /app/tests/behat && /app/bin/behat

NOTES:

  • Any arguments passed to the ddev command are passed through to the composer command.
  • Any composer command can be run inside a ddev container after you run ddev ssh.

Limit tests to run

You can add an argument to filter the tests to run:

# Run the entire test suite.
./bin/task --taskfile=tests.yml

# Run `va/tests/phpunit` only
./bin/task --taskfile=tests.yml va/tests/phpunit

GitHub Integration

The Task tool also integrates with GitHub through ReviewDog, providing pass/fail commit status for each test listed in tests.yml, and posting errors as comments on the commit's page on GitHub.com.

Branch Enforcement Rules

All of the tests in tests.yml are required to pass before a Pull Request can be merged. This is enforced by GitHub.com and is configurable: See the Branches section of the repository's Settings.

GitHub comment with the output from a failed test.

If an individual test fails, the Task tool creates a comment on the commit with the failed test results. The test results are also logged in Tugboat.

GitHub Statuses API

The API used by Yaml Tests and GitHub for testing code is called the "Statuses API": https://developer.github.com/v3/repos/statuses/

It stores test results attached to the commit, based on SHA.

Yaml-tasks reads the SHA of the git repository, runs the tests, and sends the state to the GitHub Statuses API, which displays it to users.

What you end up seeing is something like this:

GitHub Commit Statuses, some failing, some passing.

NOTE: The GitHub API stores this information attached to the Commit, not to the PR.

This means if you open a second PR with the same commits, the commit status AND the commit comments will show in both pull requests.

Composer, config.bin-dir, and $PATH

Composer automatically loads the directory bin into the PATH of any composer command or script. More accurately, it includes the directory set in the config.bin-dir section of composer.json.

This means you only have to include the script name when referring to them in composer.json or in tests.yml.

For example, if you wanted to create a composer special-tests command as an alias for yaml-tasks but with a different file and with a filter, add this to composer.json:

{
  "scripts": {
    "special-tests": [
      "which yaml-tasks",
      "yaml-tasks myuniquetests --file=custom.yml"
    ]
  }
}

Or, if you want to run drush or npm (or any other script in the bin dir) as a test, just call the script name:

# tasks.yml example that runs commands from the project's ./bin directory.
example/drush/status: drush status
example/drush/version: drush --version
example/npm/which: which npm
example/npm/version: npm --version

The which npm command helps you find out which file is actually being run.

In this project's case, which npm would print /path/to/va.gov-cms/bin/npm.
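This shadowing behavior can be demonstrated without this repository at all. The sketch below builds a throwaway bin directory (paths are illustrative) and shows that whichever directory comes first in $PATH wins:

```shell
#!/bin/sh
# Demonstrate that a project-local bin directory earlier in $PATH
# shadows any globally installed command of the same name.
demo_bin=$(mktemp -d)

# Create a fake "npm" shim, as a project's ./bin directory might.
printf '#!/bin/sh\necho "project-local npm"\n' > "$demo_bin/npm"
chmod +x "$demo_bin/npm"

# With the demo directory prepended, `which` resolves to the shim,
# and running the command executes the shim instead of the real npm.
PATH="$demo_bin:$PATH" which npm
PATH="$demo_bin:$PATH" npm    # prints "project-local npm"
```

Composer does the equivalent prepending automatically for the directory named in config.bin-dir whenever it runs a script.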

Fortify security scans

Fortify scans are run manually.

About Drupal Security Team Coverage

When a module is covered by the Drupal Security Team, it means that the team will receive reports of vulnerabilities from the Drupal community and the general public and will work with the maintainer to fix the module and coordinate the advisory release.

Symfony and other libraries not hosted on Drupal.org are all out of scope for the Drupal Security Team, though the security team will occasionally work with these projects' security teams to coordinate releases or help test. Symfony has an active security team and a process for advisories (see https://symfony.com/blog/category/security-advisories).

Composer libraries don't have any defined process or advisories; therefore, this scan offers an additional layer of scrutiny.

Excluded directories

Drupal 8 core and contributed modules covered by the Drupal Security Team were not included in the scan:

  ./docroot/core/**/*
  ./docroot/includes/**/*
  ./docroot/modules/contrib/**/*
  ./docroot/themes/contrib/**/*
  ./docroot/profiles/**/*
  ./docroot/scripts/**/*

Included Vendor Libraries

Vendor libraries are third-party open source packages included by Drupal core and modules to add functionality. For example, Drupal 9 includes the Symfony open source project, which in turn may include libraries from other open source projects. Symfony has an active security team monitoring security and posting advisories (see https://symfony.com/blog/category/security-advisories).

Whether these third-party libraries are secure involves multiple factors (and has no definitive answer): project lifetime, maintenance status, frequency/size of major changes, number of maintainers, maintainers' skill with security topics, security of the project's own dependencies, security surface area (does the project deal with user actions, data, sessions, external systems, etc.), security architecture and threat model, code quality, documentation, etc.

PHPStan

PHPStan performs static analysis on the codebase and reports issues such as references to unknown/undeclared properties, incorrect argument types in function calls, functions that are too long or too complex, etc.

Magic Properties and Other Annoyances

Developing with Drupal idiomatically tends to conflict with PHPStan.

For instance, you might type code like $node->field_address->zip_code. If $node is declared as or implied to be a \Drupal\node\Entity\Node object, then PHPStan will look for a property named $field_address on \Drupal\node\Entity\Node. $node might also be interpreted as \Drupal\node\NodeInterface, or \Drupal\Core\Entity\EntityInterface, or any of several other interfaces. But functionality for accessing fields via constructs like $node->field_address is implemented via "magic properties," to which PHPStan does not and cannot have access. As a consequence, PHPStan will view the use of these magic properties as errors.

To permit both idiomatic Drupaling and good static analysis, we simply allowlist errors that arise from this sort of use.

This can be done by adding new expressions to the parameters.ignoreErrors array in phpstan.neon.

parameters:
  ...
  ignoreErrors:
    - '#Access to an undefined property Drupal\\node\\NodeInterface::\$field_address\.#'

This is hardly ideal, but we are optimistic that entity bundle classes will permit us to remove this sort of hack.

Baseline

It sometimes happens that a developer will duplicate or repeat some code within our codebase and then find, much to their surprise, that PHPStan throws an error for the new code while seeming to ignore the old code. There are historical reasons for this.

PHPStan was integrated into the codebase after a substantial amount of development had already occurred. Ordinarily, a PHPStan error would prevent code from being approved and merged. But running PHPStan initially revealed a couple of hundred issues, almost all having to do with magic properties and other Drupal idioms. Rather than break the build for days or weeks to eliminate these issues, we opted instead to generate a baseline and fail only builds that introduced new code issues.

A PHPStan baseline is simply a list of existing errors. We maintain the baseline in our codebase (see phpstan-baseline.neon) to prevent these historical errors from interfering with our CI/CD processes.
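When an entry becomes stale, the baseline can be regenerated rather than edited by hand; PHPStan supports this directly via its --generate-baseline flag, which rewrites phpstan-baseline.neon from the current error set (the exact invocation in this repository's wrapper scripts may differ):

```shell
# Regenerate phpstan-baseline.neon from the current error set.
# (Run from the project root; the path to the phpstan binary may differ.)
bin/phpstan analyze --generate-baseline
```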

This does have drawbacks, though; it can be confusing to have the same code in two places and see one instance trigger a PHPStan error and the other seem to slip through. (And depending on where the addition is made, the error message may be misleading and point to old code and not the new code!) And, if the issue is corrected (or the code removed) in the future, the baseline must be altered to match the new error set, so removing technical debt is slightly penalized by a maintenance burden.

But, all things considered, this seems to be the least painful way of managing static analysis.

How Do I...?

View the verbose output of the test runs on a PR?

Follow one of two approaches, depending on the type of the test.

If the test name begins with "va/tests/...", e.g. "va/tests/cypress"

This test is run on Tugboat.

Find the PR that contains the links to the Tugboat environment:

Tugboat PR Comment

Click the link under Dashboard (SOCKS must be enabled to access Tugboat).

Once in the Tugboat instance dashboard, scroll down to the Preview Build Log and click "See Full Log".

Preview Build Log

This will give you a scrollable view of all of the logged information output since the Tugboat environment was (re)built, including all test output. Unfortunately, and extremely frustratingly, 1) it will autoscroll to the bottom until all tests have completed, and 2) the text is not searchable.

If you need to find some particular string, select all and copy it to an IDE/text editor/whatever.

Otherwise, just scroll back to find the failed test, and go from there.

Otherwise...

This test is run by a GitHub Action. Click the "Details" link in the row of the failed test. This should take you to a view of the run details, which should contain the logged information.
