
Enable reviewOnFail for axe.run() #2593

Open
StephenEsser opened this issue Oct 25, 2020 · 10 comments
Labels
core Issues in the core code (lib/core) feat New feature or enhancement

Comments

@StephenEsser

StephenEsser commented Oct 25, 2020

A reviewOnFail option was added in #2186. This option works with axe.configure, but does not work with axe.run().

Expectation: The reviewOnFail option should be honored when passed as a rule configuration to axe.run()

Actual: axe.run() does not honor/acknowledge the reviewOnFail option. reviewOnFail can only be enabled by running axe.configure().

Motivation: Axe rules occasionally need to be configurable for test steps that walk through a functional workflow. A test step might disclose content with an accessibility issue that needs review. Rather than configuring the rule for the entire test run via axe.configure, it is only necessary/wanted to configure the rule for that one step in the test, similar to enabled/disabled.

Requesting that axe.run() accept reviewOnFail as an option for rules.

const options = { rules: { 'color-contrast': { reviewOnFail: true } } }; // Currently rules can only be enabled/disabled

axe.run(context, options);
@straker
Contributor

straker commented Oct 27, 2020

Thanks for the suggestion. Right now the only option available for rules in axe.run is the enabled option, so we'll have to discuss if we want to expand that.

@straker straker added the feat New feature or enhancement label Oct 27, 2020
@WilcoFiers
Contributor

I'm not quite sure I understand what you're trying to accomplish with this. The reviewOnFail option was added not to update existing rules, but to make it a little easier for folks to create entirely new rules, so this is not a use case we anticipated. If I understand you right, you occasionally test scenarios where the widget is not complete enough to come up with an accessible name. Can you explain what you would do with this review issue once it is returned?

@WilcoFiers WilcoFiers added the info needed More information or research is needed to continue label Oct 28, 2020
@StephenEsser
Author

StephenEsser commented Oct 28, 2020

I'll add a little more background information. For now, I think the global reviewOnFail (axe.configure) is going to work for our use case, but I think this feature is worth discussing a little more.

I work on a team that designs reusable web components and an automation testing infrastructure. Our testing infrastructure is built using WebdriverIO and integrates axe accessibility testing and screenshot testing. Our automation tests are often used to run full functional application workflows and test for accessibility (and screenshot changes) at different stages in the test run.

An example of that might look something like this:

Note: Terra.validates.accessibility() runs axe.run().

describe('Form', () => {
  it('should fill out and submit a form', () => {
    // Navigate to the example form in the web browser
    browser.url('/example-form.html');

    // Run axe (and usually take a screenshot for regression testing) to verify the default form passes accessibility.
    Terra.validates.accessibility();

    // Focus an input.
    browser.click('#input');

    // Type into the input.
    browser.keys('Foo Bar');

    // Run axe (and usually take a screenshot for regression testing) to verify the focused input with text passes accessibility.
    Terra.validates.accessibility();

    // Submit the form.
    browser.click('#submit');

    // Run axe (and usually take a screenshot for regression testing) to verify the submitted form passes accessibility.
    // This step may also test the accessibility of a form that failed validation on submit and is displaying error indicators (text, red outlines, etc..).
    Terra.validates.accessibility();
  });
});

These tests often involve disclosing content or progressing through workflows, and at various stages of the workflow we want to take a screenshot and validate accessibility. Accessibility violations fail the test run: an assertion is run on the result of axe.run, and if the violations array contains any results we fail the test and log the violations to the test output.
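That assertion step can be sketched roughly as follows. This is a minimal illustration, not code from Terra or axe-core; the helper name is hypothetical, and `results` stands for the object that axe.run() resolves with:

```javascript
// Hypothetical sketch of the assertion described above (not part of
// axe-core, WebdriverIO, or Terra). `results` is an axe.run() result.
function assertAccessible(results) {
  // Any entry in `violations` fails the test step outright.
  if (results.violations.length > 0) {
    const ids = results.violations.map(v => v.id).join(', ');
    throw new Error(`Accessibility violations: ${ids}`);
  }
  // Entries in `incomplete` (e.g. rules marked reviewOnFail) are logged
  // as warnings instead of failing the run.
  results.incomplete.forEach(r => console.warn(`Needs review: ${r.id}`));
}
```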

Fast forward: axe occasionally introduces new rules in minor version bumps, and these new rules generally result in violations. For this reason we lock axe to a specific version and update only after investigating the changes. This is where reviewOnFail has come in handy. When axe introduces new rules, we can mark them as reviewOnFail. This lets us run the rules without reporting them as violations that fail the test; instead, they are reported as "incomplete" and logged to the test output at the end of the test run. This allows us to upgrade axe without breaking existing tests, while still reporting and tracking the rules flagged as incomplete. (New rules eventually become hard failures, but start out as warnings for a smoother transition.)

reviewOnFail also comes in handy when adopting new tag standards. When a new tag is adopted each of the rules can be marked as reviewOnFail to report the new rule failures as warnings to the test output without failing the test run. After the adoption period has matured, the new rules can be changed to full violations.

Axe allows individual rules to be completely disabled per axe run:

axe.run({ rules: { 'color-contrast': { enabled: false } } });

Rather than disabling the rule completely, marking it for review would allow the incomplete rules to be reported at the end of the test run:

axe.run({ rules: { 'color-contrast': { reviewOnFail: true } } });
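Until something like this exists in axe.run, the effect can be approximated in userland by post-processing the results object: demote violations for review-marked rules into the incomplete array before asserting. A rough sketch under that assumption (the helper and its parameters are inventions for illustration, not axe-core API):

```javascript
// Userland approximation of a per-run reviewOnFail (not axe-core API):
// after axe.run() resolves, move violations for the listed rule ids
// into `incomplete` so they are reported as warnings, not failures.
function demoteToReview(results, reviewRuleIds) {
  const review = new Set(reviewRuleIds);
  return {
    ...results,
    violations: results.violations.filter(v => !review.has(v.id)),
    incomplete: results.incomplete.concat(
      results.violations.filter(v => review.has(v.id))
    )
  };
}
```

A test harness could call this between axe.run(context, options) and its assertion on results.violations.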

@WilcoFiers
Contributor

Thank you for adding this information. This makes sense to me now. I don't think I've ever seen axe-core used that way, but it is very cool. I'll keep the issue open as a reminder for us to look at this. Making reviewOnFail available in axe.run() might not be the best way to solve this; as I mentioned, it wasn't designed for that, but I understand the use case. We have some other ideas on how to address this, but nothing too concrete at the moment.

@WilcoFiers WilcoFiers added core Issues in the core code (lib/core) and removed info needed More information or research is needed to continue labels Nov 11, 2020
@1Copenut
Contributor

I agree with the use case(s) outlined here and see a lot of benefit in allowing reviewOnFail in the axe.run options. I work on design system components that might fail an axe check because of local state, or because they employ alternative keyboard-navigation patterns like roving tabindex.

@Capocaccia
Contributor

@straker - This issue is still open, but the API docs say reviewOnFail is a valid API option.

https://www.deque.com/axe/core-documentation/api-documentation/#parameters-1

However, the type is missing from the API typings. Can you clarify whether the reviewOnFail configuration option is enabled in the API?

@straker
Contributor

straker commented Jun 27, 2022

@Capocaccia reviewOnFail is available as a configuration option for axe rules. That is, when creating a new rule you can pass it as an option:

axe.configure({
  rules: [
    {
      id: 'new-rule',
      reviewOnFail: true
      // ...
    }
  ]
})

reviewOnFail is not available for axe.run.
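Given that reviewOnFail is only honored through axe.configure, one possible per-step workaround is to toggle the configuration around a single run and restore defaults afterwards with axe.reset(). A sketch of that pattern (the helper name is hypothetical; the axe object is passed in so the pattern can be exercised against a stub, but with the real library it would be the axe-core module):

```javascript
// Hypothetical per-step workaround: temporarily mark rules as
// reviewOnFail via axe.configure(), run once, then restore defaults
// with axe.reset(). `axe` is injected to keep the sketch testable.
async function runWithReview(axe, context, options, reviewRuleIds) {
  axe.configure({
    rules: reviewRuleIds.map(id => ({ id, reviewOnFail: true }))
  });
  try {
    return await axe.run(context, options);
  } finally {
    axe.reset(); // drop the temporary configuration
  }
}
```

Note that this mutates global axe configuration for the duration of the run, so it is only safe when axe runs are not interleaved.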

@Capocaccia
Contributor

Capocaccia commented Jun 27, 2022

@straker - Thanks for the update. By that logic, the type for Rule is out of date, as the reviewOnFail option is missing.

https://github.com/dequelabs/axe-core/blob/develop/axe.d.ts#L224

I tried to submit a PR for this, but it appears this repo is closed to outside contributors.

@straker
Contributor

straker commented Jun 27, 2022

@Capocaccia Were you trying to make a PR directly using the file edit flow in GitHub? If so, you'll need to fork the repo, make your changes in your fork, and then open a PR from the fork.

@Capocaccia
Contributor

Thanks, I'll make a PR from my fork.
