Testing

Software testing has been around for decades, and it has been proven to:

  • Reduce the number of bugs and regressions
  • Increase project velocity (in the long run)
    • Improve the accuracy of scheduling estimates
    • Save time and money
  • Increase user trust and satisfaction

You should use automated testing. Do not fall prey to common rationalizations and excuses relating to insufficient time, money, or resources. Time spent developing tests is repaid tenfold.

Quality is the ally of schedule and cost, not their adversary. If we have to sacrifice quality to meet schedule, it’s because we are doing the job wrong from the very beginning.

-- James A. Ward

The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten.

-- Karl Wiegers

That being said, two important pitfalls should be acknowledged:

  1. It is possible to do automated testing incorrectly such that it is too expensive. See Why Test Automation Costs Too Much.
  2. It is possible to write automated tests that have little value.

To avoid these pitfalls, follow the best practices outlined in the sections below.

Resources

Types of testing

You should use the following types of automated tests:

Functional testing

Functional tests should be written with Behat, using a Behavior Driven Development (BDD) methodology.

See the Behat section below for more details.

Unit testing

Unit testing is the practice of automatically testing the individual components of a program: a test program provides inputs to each component and checks the outputs. Unit tests should be highly granular and completely independent, and running them should be very fast.

PHPUnit should be used for unit testing. See the PHPUnit section below for more details.
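
For illustration only, a minimal sketch of such a test is shown below. The Calculator class is invented for this example and simply stands in for a real project component; the sketch also assumes a PHPUnit version that provides the namespaced PHPUnit\Framework\TestCase base class.

<?php

use PHPUnit\Framework\TestCase;

// Invented class used only for this example; it stands in for real project code.
class Calculator {

  public function add($a, $b) {
    return $a + $b;
  }

}

// A granular, independent unit test: it exercises a single component in
// isolation, needs no database or network, and runs in milliseconds.
class CalculatorTest extends TestCase {

  public function testAdd() {
    $calculator = new Calculator();
    $this->assertEquals(5, $calculator->add(2, 3));
  }

}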

Test directory structure

This directory contains all of the project's tests, grouped by testing technology. For configuration related to the builds that actually run these tests, see the build directory.

tests
├── behat - contains all Behat tests
│    ├── features
│    │   ├── bootstrap
│    │   └── Example.feature
│    ├── behat.yml - contains behat configuration common to all behat profiles.
│    └── integration.yml - contains behat configuration for the integration profile, which is used to run tests on the integration environment.
├── jmeter  - contains all JMeter tests
└── phpunit - contains all PHPUnit tests

Executing tests

Before attempting to execute any tests, verify that Composer dependencies are installed by running composer install in the project root.

Each test type can be executed either directly or via a corresponding Phing target. Phing will execute the tests with the default values defined in your project's YAML configuration files (project.yml). Examples:

  • ./blt.sh tests:all
  • ./blt.sh tests:behat
  • ./blt.sh tests:phpunit

To execute the tests directly (without Phing), see the following examples:

  • ./vendor/bin/behat -c tests/behat/local.yml tests/behat/features/Examples.feature
  • ./vendor/bin/phpunit tests/phpunit/BLTTest.php

For more information on the commands, run:

  • ./vendor/bin/phpunit --help
  • ./vendor/bin/behat --help

Behat

The high-level purpose of BDD is to create a strong connection between business requirements and the actual tests. Behat tests should mirror ticket acceptance criteria as closely as possible.

Consequently, proper Behat tests are written in business domain language. A test should be comprehensible to stakeholders and represent clear business value. It should represent typical user behavior; it need not exhaustively cover all possible scenarios.

See referenced materials for more information on BDD best practices.

Configuration

Configuration for the BLT Behat commands is stored in the behat array in build/core/phing/build.yml. Any of these values can be overridden by defining new values in build/custom/phing/build.yml. E.g., if you wish to scan only tests/behat/features for Behat tests, you could add the following to build/custom/phing/build.yml:

behat:
  config: ${repo.root}/tests/behat/local.yml
  profile: local
  # If true, `drush runserver` will be used for executing tests.
  run-server: false
  # This is used for ad-hoc creation of a server via `drush runserver`.
  server-url: http://127.0.0.1:8888
  # If true, PhantomJS GhostDriver will be launched with Behat.
  launch-phantom: true
  # An array of paths with behat tests that should be executed.
  paths:
    # - ${docroot}/modules
    # - ${docroot}/profiles
    - ${repo.root}/tests/behat
  tags: '~ajax'

Behat's own configuration is defined in the following files:

  • tests/behat/behat.yml
  • tests/behat/example.local.yml
  • tests/behat/local.yml

Best practices

  • Behat tests must be behavioral, i.e., they must use business domain language.
  • Each test should be isolated; it should not depend on conditions created by another test. In practice, this means:
    • Resetting the testing environment via CI after each test suite run
    • Defining explicit cleanup tasks in features (see the sketch after this list)
  • @todo add examples of good and bad features
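
As a rough sketch of the cleanup point above: a Behat context can use an @AfterScenario hook to remove anything a scenario created. The tracking array and the deletion step below are illustrative placeholders; the real logic is project-specific.

<?php

use Behat\Behat\Context\Context;
use Behat\Behat\Hook\Scope\AfterScenarioScope;

// Minimal sketch of a context that cleans up after each scenario so that
// scenarios remain isolated from one another.
class FeatureContext implements Context {

  // IDs of any content created during the current scenario (illustrative).
  protected $createdNodeIds = [];

  /**
   * @AfterScenario
   */
  public function cleanUpCreatedContent(AfterScenarioScope $scope) {
    foreach ($this->createdNodeIds as $nid) {
      // Delete the node here, e.g. via Drush or the Drupal API (project-specific).
    }
    $this->createdNodeIds = [];
  }

}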

Common mistakes

  • Writing Behat tests that do not use business domain language.
  • Writing tests that are not sufficiently isolated. Interdependent tests lose much of their value!
  • Writing tests that exhaustively cover all scenarios rather than representing a typical scenario.
  • Writing Behat tests where a unit test should be used instead.

Resources

PHPUnit

Project-level, functional PHPUnit tests are included in tests/phpunit. Any PHPUnit tests that affect specific modules or application-level features should be placed in the same directory as that module, not in this directory.

Best practices

  • Tests should not contain any control statements (conditionals or loops)
  • Be careful to make both positive and negative assertions of expectations (see the sketch after this list)
  • @todo add examples of good and bad tests
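
As a partial sketch of the first two points, reusing the invented Calculator class from the unit testing section above (and again assuming a PHPUnit version that provides the namespaced TestCase base class):

<?php

use PHPUnit\Framework\TestCase;

class CalculatorAssertionsTest extends TestCase {

  // Discouraged: the loop and conditional obscure which case failed and can
  // themselves contain bugs.
  public function testAddWithControlStatements() {
    $calculator = new Calculator();
    foreach ([[2, 3, 5], [10, -4, 6]] as $case) {
      if ($case[2] !== NULL) {
        $this->assertEquals($case[2], $calculator->add($case[0], $case[1]));
      }
    }
  }

  // Preferred: straight-line code that asserts both what the result should be
  // and what it should not be.
  public function testAdd() {
    $calculator = new Calculator();
    $this->assertEquals(5, $calculator->add(2, 3));
    $this->assertNotEquals(6, $calculator->add(2, 3));
  }

}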

Common mistakes

  • Writing unit tests that are not independent
  • Making unit tests too large. Tests should be small and granular.
  • Asserting only positive conditions. Negative assertions should also be made.

Resources