Visual regression testing #1207
I see the following ways to simplify the API:
Overall, can we reduce the API to a single method?
Let's not use the word
@VasilyStrelyaev is there a development branch for this?
After discussion, we decided to simplify layout testing in the following way: we look for an etalon (reference screenshot) automatically, and the user should just call

```js
test('testName', async t => {
    await t
        .click('#el')
        .checkLayout()                // looks for .../<testName>/<workerName>/1.png
        .type('.inputClass', 'value')
        .checkLayout()                // looks for .../<testName>/<workerName>/2.png
        // ...
        .click(/* ... */)
        .checkLayout();               // looks for .../<testName>/<workerName>/<N>.png
});
```

If an etalon doesn't exist, it should be created. For a test run, you can specify the path to the screenshots. In the screenshot directory we generate a file with meta information about the screenshot comparison. Afterward you can run
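The "file with meta information" is not specified anywhere in the thread. As an illustration only, such a per-comparison record could look like the following sketch; every field name here is an assumption, not part of any actual TestCafe API.

```javascript
// Hypothetical shape of a per-comparison metadata record written next to
// the screenshots. All field names are assumptions for illustration.
function buildComparisonMeta(testName, workerName, index, passed, diffRatio) {
  return {
    testName,
    workerName,
    screenshot: `${testName}/${workerName}/${index}.png`,
    passed,                          // did the comparison succeed?
    diffRatio,                       // fraction of mismatched pixels
    comparedAt: new Date().toISOString()
  };
}

const meta = buildComparisonMeta('testName', 'worker-1', 1, false, 0.03);
console.log(JSON.stringify(meta, null, 2));
```

A record like this would be enough for a later tool to re-run or review failed comparisons without re-executing the tests.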
@mjhea0 We haven't started to work on this feature yet.
Thoughts on getting this into the next release? Or at the very least #1357?
We've planned to fix #1357 in this release iteration. We will be able to provide you with an alpha version as soon as the fix is ready.
Rough example of how I'm doing the visual regression tests: https://github.com/mjhea0/testcafe-visual-regression
@mjhea0 Thanks for the link, it looks interesting.
Any news on this since May? I'm evaluating TestCafe for a new project, and layout testing is a crucial deciding factor between TestCafe and a Selenium-based stack. I'm guessing I'm not alone.
Hi @ttiurani, we haven't implemented this functionality yet, but we keep this feature in mind and have already made some spike solutions. We are currently planning our next releases, so I can't provide you with estimates for this feature yet. Stay with us and we'll notify you about our progress.
Waiting on this...
Hi all, I'm a bit curious whether a decision was made between the "pixel perfect" and "perceptual hash" methods to implement this feature. I've been looking at this interesting repository from Amex: https://github.com/americanexpress/jest-image-snapshot and it looks promising.
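For reference, the "perceptual hash" approach mentioned above boils down to reducing an image to a tiny fingerprint and comparing fingerprints instead of raw pixels. Below is a minimal average-hash (aHash) sketch; it assumes the image has already been grayscaled and downscaled to an 8x8 grid, which real libraries handle for you.

```javascript
// Minimal average-hash (aHash) sketch. Input: 64 grayscale values (0-255)
// from a downscaled 8x8 image. Real perceptual-hash libraries also handle
// image decoding, resizing and grayscaling.
function averageHash(pixels) {
  const mean = pixels.reduce((a, b) => a + b, 0) / pixels.length;
  // One bit per pixel: 1 if the pixel is brighter than the mean.
  return pixels.map(p => (p > mean ? '1' : '0')).join('');
}

// Hamming distance between two hashes; a small distance means similar images.
function hammingDistance(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) d++;
  return d;
}

// Checkerboard-like test image and a copy with one brightened pixel.
const imageA = Array.from({ length: 64 }, (_, i) => (i % 2 ? 200 : 50));
const imageB = imageA.slice();
imageB[0] = 210; // small local change: 50 -> 210 flips one hash bit

const distance = hammingDistance(averageHash(imageA), averageHash(imageB));
console.log(distance); // 1
```

The appeal for visual testing is that a one-pixel change yields a distance of 1 rather than an outright failure, so a threshold on the distance tolerates anti-aliasing noise that per-pixel comparison would flag.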
Just as a note on this: we actually ended up doing image testing against our Storybook and not with TestCafe, and are very happy with our decision. We're using Storyshots image snapshots: https://github.com/storybookjs/storybook/tree/next/addons/storyshots/storyshots-puppeteer I would argue that in a lot of cases, visual testing the Storybook stories is the right thing to do, with TestCafe used only for integration testing.
Any news on this issue? I would like to use TestCafe, but I need visual regression...
We also want to implement visual regression in our project, and it would be great to be able to handle dynamic content similarly to how it is done in Applitools.
For what it's worth, I was able to get something to work using the native
If this looks promising and similar to what we want, I would be happy to start work on a branch.
@btkostner We are happy to hear that you found your own way of testing visual regression. Your approach doesn't seem to require a change in the TestCafe repository. I suppose you could publish your solution as a separate package.
@miherlosev any update on this? It was high priority, but what happened?
Our attempts to create a prototype revealed that this feature requires far more resources and time than we initially thought. Since third-party visual testing solutions exist, we decided to switch focus to other big features like multi-window testing. We will consider this feature for the next roadmap.
@dmitrykurmanov can you confirm that devextreme-screenshot-comparer is a tool made by DevExpress?
The repo is private and is intended for internal use only. No support is provided for it.
Coverage for view request: https://stackoverflow.com/questions/72867706/visual-testing-how-can-i-assess-the-code-coverage
Hi folks, at present we support NativeAutomation mode only for Chromium-based browsers. We are going to support other browsers later. That said, we do not expect to implement this feature in the near future.
@DevExpress/testcafe Let's discuss the functionality of layout testing.
I suggest the following approach. Perhaps it looks over-engineered, but in my view it is pretty flexible.
Screenshot provider
I suggest providing the possibility to use any screenshot-comparison library.
We ship two options out of the box: per-pixel comparison and perceptual hash comparison.
To avoid working with huge binary images, we can calculate a perceptual hash. In this case we don't create a screenshot file and compare only the hash values.
To build the diff images, the user should run the local tests in per-pixel mode.
We can also provide the possibility of taking screenshots in black-and-white or ultra-contrast mode. I think this can be passed to the chosen library as `layoutTestOptions`.
So we create:
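To make the "any comparison library" idea concrete, a provider could be a small object exposing a single `compare` method that the engine calls and acts upon. The following per-pixel provider is a hypothetical sketch; the interface, names, and `threshold` option are illustrations, not the proposed API.

```javascript
// Hypothetical screenshot-provider interface: the engine would only call
// provider.compare(actual, etalon, options) and act on the result.
// Buffers here are plain arrays of RGBA bytes; real code would decode PNGs.
const perPixelProvider = {
  name: 'per-pixel',
  compare(actual, etalon, options = {}) {
    if (actual.length !== etalon.length)
      return { equal: false, mismatchRatio: 1 };

    let mismatched = 0;
    for (let i = 0; i < actual.length; i += 4) {   // RGBA stride
      if (actual[i] !== etalon[i] ||
          actual[i + 1] !== etalon[i + 1] ||
          actual[i + 2] !== etalon[i + 2])
        mismatched++;
    }
    const mismatchRatio = mismatched / (actual.length / 4);
    const threshold = options.threshold ?? 0;      // allowed mismatch fraction
    return { equal: mismatchRatio <= threshold, mismatchRatio };
  }
};

// Two 2x1-pixel "images": the second differs in one pixel's red channel.
const a = [255, 0, 0, 255,  0, 255, 0, 255];
const b = [254, 0, 0, 255,  0, 255, 0, 255];
const result = perPixelProvider.compare(a, b);
console.log(result); // { equal: false, mismatchRatio: 0.5 }
```

A perceptual-hash provider would implement the same `compare` shape, which is what lets the rest of the engine stay agnostic about the comparison strategy.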
Run options
For the `run` function, we add options for screenshot comparison. Accordingly, we add similar options to the CLI interface: `--etalons-path`, `--layout-provider`, and `--update-etalons`.
Test API
We provide two different ways to test layout:
- the `t.checkLayout()` method: `.checkLayout()` searches for the test's screenshot according to the index of its call within the test body, starting with `0`;
- the `testController.Image` constructor and the `.equalLayout()` assertion. This means the user decides on their own how to store artifacts and etalons; we just use the comparison logic from the provider.
E.g.:
Also, we should resize the browser window up to the screenshot size.
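The index-based lookup that `checkLayout()` implies can be reduced to a per-test counter that yields the next numbered etalon path. This is a sketch under the proposal's "starting with 0" convention; the helper name and directory layout are assumptions.

```javascript
// Sketch of the index-based etalon lookup implied by checkLayout():
// each call within a single test consumes the next numbered screenshot.
function createLayoutChecker(testName, workerName, baseDir = 'screenshots') {
  let index = 0;  // the proposal indexes screenshots starting with 0
  return () => `${baseDir}/${testName}/${workerName}/${index++}.png`;
}

const nextEtalon = createLayoutChecker('testName', 'worker-1');
console.log(nextEtalon()); // screenshots/testName/worker-1/0.png
console.log(nextEtalon()); // screenshots/testName/worker-1/1.png
```

Keeping the counter per test (not global) is what makes the mapping stable when tests are added, reordered, or run concurrently on different workers.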
Screenshots storage
Every provider implements a mechanism for storing artifacts.
For the per-pixel provider, we store screenshots similarly to the current screenshots directory; we should also create the difference files in that directory.
Etalons are taken from the path specified in `etalonsPath` in the programmatic API, or from the `--etalons-path` parameter in the CLI. For hash comparison, we write key-value pairs to a `.json` file.
Updating of etalons
As soon as the first screenshot differs from its etalon, we run a web application with a GUI for managing etalons.
The service runs only if `--update-etalons` was passed to the CLI or `updateEtalons` was set in the `runOptions` of the programmatic API. In this case, we output only the path to the difference file in the report.
As an alternative, we could just output paths to the artifacts, etalons, and difference files, but that does not look convenient.
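The update flag and the hash store described above can be combined into one small decision function: look the key up, update when missing or when updates are enabled, otherwise pass or fail. The key format and helper name below are assumptions for illustration, not the proposed API.

```javascript
// Hypothetical key/value etalon store for the hash-comparison provider:
// keys identify a checkLayout() call, values are stored perceptual hashes.
function checkAgainstStore(store, key, actualHash, updateEtalons) {
  const etalonHash = store[key];
  if (etalonHash === undefined || (etalonHash !== actualHash && updateEtalons)) {
    store[key] = actualHash;   // create the etalon, or overwrite it on update
    return { status: 'updated' };
  }
  return { status: etalonHash === actualHash ? 'passed' : 'failed' };
}

const store = { 'testName/worker-1/1': 'a1b2' };   // parsed from the .json file
console.log(checkAgainstStore(store, 'testName/worker-1/1', 'a1b2', false).status); // 'passed'
console.log(checkAgainstStore(store, 'testName/worker-1/1', 'ffff', false).status); // 'failed'
console.log(checkAgainstStore(store, 'testName/worker-1/2', 'c3d4', false).status); // 'updated'
```

Serializing `store` back to the `.json` file after a run with `updateEtalons` enabled would complete the update flow without any binary artifacts.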