
Visual regression testing #1207

Closed
kirovboris opened this issue Feb 2, 2017 · 30 comments

@kirovboris
Collaborator

@DevExpress/testcafe Let's discuss the functionality of layout testing.

I suggest using the following approach. It may look over-engineered, but in my view it is quite flexible.

Screenshot provider

I suggest providing the ability to plug in any screenshot comparison library.
We ship two options out of the box: per-pixel comparison and perceptual hash comparison.

To avoid working with huge binary images, we can calculate a perceptual hash.
In this case we don't create a screenshot file and compare only the hash values.
To build the diff images, the user should run the local tests in per-pixel mode.

We can also provide the ability to take screenshots in black-and-white or ultra-contrast mode. I think these settings can be passed to the chosen library via layoutTestOptions.

So we create:

  1. A pixel-perfect provider with the options:
{
    mode: 'color' | 'black_and_white',
    threshold: Number
}
  2. A perceptual hash provider with the options:
{
    threshold: Number
    // or
    exactMatching: true | false
    // depends on the implementation
}
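
For illustration, here is a minimal sketch of what the perceptual hash option could boil down to: an 8x8 average hash compared by Hamming distance. The helper names are hypothetical, the downscaling is naive nearest-neighbour sampling, and only the pngjs package is assumed; a real provider would use a proper imaging library.

const fs      = require('fs');
const { PNG } = require('pngjs');

// Compute a 64-bit average hash: sample the image down to 8x8,
// convert to grayscale and compare each pixel to the mean brightness.
function averageHash (pngPath) {
    const png  = PNG.sync.read(fs.readFileSync(pngPath));
    const gray = [];

    for (let y = 0; y < 8; y++) {
        for (let x = 0; x < 8; x++) {
            const srcX = Math.floor(x * png.width / 8);
            const srcY = Math.floor(y * png.height / 8);
            const idx  = (srcY * png.width + srcX) * 4;

            gray.push((png.data[idx] + png.data[idx + 1] + png.data[idx + 2]) / 3);
        }
    }

    const avg = gray.reduce((a, b) => a + b, 0) / gray.length;

    return gray.map(v => (v > avg ? '1' : '0')).join('');
}

// Two screenshots are considered equal if their hashes differ
// in at most `threshold` bits (or in zero bits with exactMatching).
function hashesMatch (hashA, hashB, { threshold = 5, exactMatching = false } = {}) {
    let dist = 0;

    for (let i = 0; i < hashA.length; i++) {
        if (hashA[i] !== hashB[i])
            dist++;
    }

    return exactMatching ? dist === 0 : dist <= threshold;
}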

Run options

For the run function, we add options for screenshot comparison:

{
    etalonsPath: <relative_or_absolute_path>,
    layoutTestProvider: <provider_name>,
    layoutTestOptions: {...},
    updateEtalons: true | false
}

Accordingly, we add similar options to the CLI:
--etalons-path, --layout-provider, --update-etalons.
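
As a sketch of how these options might look in the programmatic API: the createTestCafe and runner calls below are the existing TestCafe API, while the layout-testing keys are only the options proposed in this issue and are not recognized by any released version.

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost', 1337, 1338);
    const runner   = testcafe.createRunner();

    const failedCount = await runner
        .src('tests/layout-test.js')
        .browsers('chrome')
        .run({
            // Proposed layout-testing options (not implemented):
            etalonsPath:        './etalons',
            layoutTestProvider: 'pixel-perfect',
            layoutTestOptions:  { mode: 'color', threshold: 0.1 },
            updateEtalons:      false
        });

    console.log('Failed tests:', failedCount);
    await testcafe.close();
})();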

Test API

We provide two different ways to test layout.

  1. We look for an etalon automatically, and the user just calls the t.checkLayout() method.

checkLayout() searches for the test's screenshot according to the index of the call in the test body, starting from 0.

test('testName', async t => {
    await t
        .click('#el')
        .checkLayout() //looking for .../<testName>/<workerName>/1.png        
        .type('.inputClass', 'value')
        .checkLayout() //looking for .../<testName>/<workerName>/2.png
        ......
        .click(...)
        .checkLayout() //looking for .../<testName>/<workerName>/<N>.png
});
  2. We provide a testController.Image constructor and an .equalLayout() assertion.
    That means the user decides on their own how to store artifacts and etalons; we only use the comparison logic from the provider.

E.g.:

 test('testName', async t => {
     await t.expect(new t.Image(<image_path>)).equalLayout();
 });

Also, we should resize the browser window to the screenshot size.

Screenshots storage

Every provider implements its own mechanism for storing artifacts.
For the per-pixel provider, we store screenshots in the same way as the current screenshots directory. We should also create the difference files in that directory.

Etalons are taken from the path specified in etalonsPath for the programmatic API or in the --etalons-path parameter for the CLI.
For hash comparison, we write key-value pairs to a .json file.

{
    '<testName1><workerName1><etalonId1>': '3c3e0e1a3a1e1e2e',
    ...
    '<testNameN><workerNameN><etalonIdN>': 'ac3e0e1a3a1e1e2F'
}
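
A minimal sketch of how such a store could be read and updated, assuming a hypothetical hashes.json file in the screenshots directory and the key format described above:

const fs   = require('fs');
const path = require('path');

// Hypothetical location of the hash store inside the screenshots directory.
const STORE_PATH = path.join('screenshots', 'hashes.json');

function readStore () {
    return fs.existsSync(STORE_PATH) ? JSON.parse(fs.readFileSync(STORE_PATH, 'utf8')) : {};
}

// Key format from the proposal: '<testName><workerName><etalonId>'.
// Returns true if the hash matches the stored etalon; creates the etalon on the first run.
function checkOrCreateEtalonHash (testName, workerName, etalonId, actualHash) {
    const store = readStore();
    const key   = `${testName}${workerName}${etalonId}`;

    if (!(key in store)) {
        store[key] = actualHash;
        fs.writeFileSync(STORE_PATH, JSON.stringify(store, null, 4));
        return true;
    }

    return store[key] === actualHash;
}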

Updating etalons

As soon as the first screenshot differs from its etalon, we run a web application with a GUI for managing etalons.
The service runs only if --update-etalons was passed to the CLI or updateEtalons to the runOptions of the programmatic API.
In this case, the report contains only the path to the difference file.

As an alternative, we could just output the paths to the artifacts, etalons and difference files, but that does not look convenient.

@inikulin
Contributor

inikulin commented Feb 2, 2017

I see the following ways for API simplification:

  1. Automatically generate etalons on first run
  2. It would be nice to have the comparison tools work out of the box. Reading this proposal, I have no idea which one to choose.

Overall, can we reduce the API to a single method?

@VasilyStrelyaev
Collaborator

Let's not use the word etalon for base or baseline images. There is no such word as etalon. It's even underlined by spell checkers.

@inikulin added the TYPE: enhancement label and removed the TYPE: proposal label on Apr 26, 2017
@inikulin added this to the Planned milestone on Apr 26, 2017
@mjhea0

mjhea0 commented Apr 26, 2017

@VasilyStrelyaev is there a development branch for this?

@kirovboris
Collaborator Author

After discussion, we decided to simplify layout testing in the following way:

We look for an etalon automatically, and the user just calls the t.checkLayout() method.
t.checkLayout() searches for the test's screenshot according to the index of the call in the test body, starting from 0.

test('testName', async t => {
    await t
        .click('#el')
        .checkLayout() //looking for .../<testName>/<workerName>/1.png        
        .type('.inputClass', 'value')
        .checkLayout() //looking for .../<testName>/<workerName>/2.png
        ......
        .click(...)
        .checkLayout() //looking for .../<testName>/<workerName>/<N>.png
});

If an etalon does not exist, it is created.

For a test run, you can specify the path to screenshots with --screenshots and the path to etalons with --etalons.
The path to etalons can be relative or absolute.

In the screenshot directory, we generate a file with meta information about the screenshot comparison.
If a test fails, we output the path to this file in the report.

Afterward, you can run testcafe etalon-manager <path-to-diff-file>. It launches an application for managing screenshots and etalons and shows the differences in screenshots for failed tests.

@AlexanderMoskovkin
Contributor

is there a development branch for this?

@mjhea0 We haven't started work on this feature yet.
We usually work on a feature in our own fork and make a pull request from there; we don't typically create new branches in the upstream repository.

@mjhea0

mjhea0 commented Apr 27, 2017

Thoughts on getting this into the next release? Or at the very least #1357?

@AlexanderMoskovkin
Contributor

We plan to fix #1357 in this release iteration. We will be able to provide you with an alpha version as soon as the fix is ready.

@mjhea0

mjhea0 commented May 1, 2017

Rough example of how I'm doing the visual regression tests: https://github.com/mjhea0/testcafe-visual-regression

@AlexanderMoskovkin
Contributor

@mjhea0 Thanks for the link, it looks interesting

@ttiurani

Any news on this since May? I'm evaluating TestCafe for a new project, and layout testing is a crucial deciding factor on TestCafe vs a Selenium based stack. I'm guessing I'm not alone.

@AlexanderMoskovkin
Contributor

Hi @ttiurani,

We haven't implemented this functionality yet but we keep this feature in mind and we've already made some spike solutions. Now we are planning our next releases so I can't provide you with estimates for this feature so far. Stay with us and we'll notify you about our progress.

@sunainajain

waiting on this .....

@mdstaff

mdstaff commented Apr 25, 2018

Hi all, I'm a bit curious whether a decision was made between the "pixel perfect" and "perceptual hash" methods for implementing this feature. I've been looking at this interesting repository from Amex: https://github.com/americanexpress/jest-image-snapshot and it looks promising.

@ttiurani

ttiurani commented Apr 26, 2018

Just as a note on this: we actually ended up doing image testing against our Storybook rather than with TestCafe, and we are very happy with our decision. We're using Storyshots image snapshots:

https://github.com/storybookjs/storybook/tree/next/addons/storyshots/storyshots-puppeteer

I would argue that for a lot of cases visual testing the Storybook stories is the right thing to do, and then TestCafe is only used for integration testing.

@kirovboris
Collaborator Author

@mdstaff, @ttiurani Thanks for the idea, we'll check this out.

@cbou

cbou commented Sep 8, 2018

Any news on this issue? I would like to use TestCafe, but I need visual regression...

@VasilyStrelyaev VasilyStrelyaev changed the title Layout testing Visual regression testing Feb 27, 2019
@vladnauto

We also want to implement visual regression in our project, and it would be great to be able to deal with dynamic content similarly to how it is done in Applitools,
https://applitools.com/tutorials/selenium-javascript.html#part-5-use-advanced-ai-tools-to-work-with-real-world-scenarios
where you can ignore a region,
or in the WebdriverIO visual regression service,
https://webdriver.io/blog/2019/05/18/visual-regression-for-v5.html
where you can provide a list of selectors to hide before the test.
I assume the latter can be done easily by setting display: none or visibility: hidden on each of those elements; see the sketch below.
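
Such hiding is already scriptable with TestCafe's existing ClientFunction API. A minimal sketch (the page URL and selectors are illustrative only):

import { ClientFunction } from 'testcafe';

// Hide every element matched by the given selectors before taking a screenshot,
// so dynamic content (ads, timestamps, animations) does not affect the comparison.
const hideElements = ClientFunction(selectors => {
    selectors.forEach(selector => {
        document.querySelectorAll(selector).forEach(el => {
            el.style.visibility = 'hidden';
        });
    });
});

fixture('Visual test').page('https://example.com');

test('stable screenshot', async t => {
    await hideElements(['.ad-banner', '.live-timestamp']);
    await t.takeScreenshot('stable/home.png');
});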

@AndreyBelym added the AREA: server, SYSTEM: driver, SYSTEM: runner and SYSTEM: screenshot capturing labels on Feb 10, 2020
@btkostner

For what it's worth, I was able to get something working with the native takeElementScreenshot action and the pixelmatch package. Most of the code is taken from the jest-image-snapshot package, but if you would like to take a look, it's available in one of my repositories. It is currently running in CI on Linux and Windows (not macOS, due to a different issue).

If this looks promising and similar to what we want, I would be happy to start work on a branch.
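
A minimal sketch of that pattern, assuming the pixelmatch and pngjs packages and TestCafe's screenshot base path set to ./screenshots (the paths, selector and helper name are illustrative; the baseline is created on the first run, and both images must have the same dimensions for pixelmatch):

import fs from 'fs';
import path from 'path';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

const SCREENSHOTS_DIR = './screenshots'; // the --screenshots base path

async function compareWithBaseline (t, selector, name) {
    const actualRelPath = path.join('actual', `${name}.png`);

    await t.takeElementScreenshot(selector, actualRelPath);

    const actualPath   = path.join(SCREENSHOTS_DIR, actualRelPath);
    const baselinePath = path.join(SCREENSHOTS_DIR, 'baseline', `${name}.png`);

    // Create the baseline on the first run.
    if (!fs.existsSync(baselinePath)) {
        fs.mkdirSync(path.dirname(baselinePath), { recursive: true });
        fs.copyFileSync(actualPath, baselinePath);
        return;
    }

    const actual   = PNG.sync.read(fs.readFileSync(actualPath));
    const baseline = PNG.sync.read(fs.readFileSync(baselinePath));
    const diff     = new PNG({ width: baseline.width, height: baseline.height });

    // Count mismatched pixels and write a diff image next to the screenshots.
    const mismatched = pixelmatch(
        baseline.data, actual.data, diff.data,
        baseline.width, baseline.height,
        { threshold: 0.1 }
    );

    fs.writeFileSync(path.join(SCREENSHOTS_DIR, `${name}.diff.png`), PNG.sync.write(diff));

    await t.expect(mismatched).eql(0, `Element "${selector}" differs from its baseline`);
}

fixture('Visual regression').page('https://example.com');

test('header looks the same', async t => {
    await compareWithBaseline(t, '#header', 'header');
});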

@Dmitry-Ostashev
Contributor

@btkostner We are happy to hear that you have found your own way of testing visual regression. Your approach doesn't seem to require changes in the TestCafe repository. I suppose you could publish your solution as your own separate package.

@benmonro
Contributor

@miherlosev any update on this? it was high priority, but what happened?

@AndreyBelym
Contributor

Our attempts to create a prototype revealed that this feature requires far more resources and time than we initially thought. Since some third-party visual testing solutions already exist, we decided to switch focus to other big features such as multi-window testing. We will consider this feature for the next roadmap.

@VasilyStrelyaev removed the AREA: docs label on Sep 28, 2020
@AndreyBelym added the FREQUENCY: level 2 label and added and then removed the STATE: Stale label on Oct 19, 2020
@dmitry-kurmanov

Third-party solutions, just FYI:
https://www.npmjs.com/package/devextreme-screenshot-comparer
https://github.com/tacoss/testcafe-blink-diff

@miherlosev removed their assignment on Dec 17, 2021
@AdrienLemaire

@dmitrykurmanov can you confirm that devextreme-screenshot-comparer is a tool made by DevExpress?
I cannot find the repo in https://github.com/orgs/DevExpress/repositories, Google doesn't return anything other than the npm package, and the package's readme/package.json does not mention a repo source.
Not sure where I should go if I have issues or questions regarding this package.

@alexfi1in
Contributor

The repo is private and is intended for internal use only. No support is provided for it.


@miherlosev
Collaborator

miherlosev commented Jun 19, 2023

Hi folks,

At present, we support NativeAutomation mode only for Chromium-based browsers. Later, we are going to support other browsers. That said, we do not expect to implement it in the near future.
You can use implementations shared by the community: testcafe-blink-diff (suitable for local comparison), @percy/testcafe (integration with https://percy.io/ image comparison infrastructure).
If these are not suitable, you can create your own image comparison approach with the takeScreenshot action and one of the image comparison libraries: resemble, pixelmatch, blink-diff, etc.
Also, you can do it yourself and create a PR. We will be happy to review it.
