
OSOE-770: Add support for structured html-validate output #738

Closed
wants to merge 29 commits into from

Conversation

AydinE
Contributor

@AydinE AydinE commented Apr 8, 2024

Hi @DemeSzabolcs

Before I continue along this path I wanted to check with you first on my current approach:
See also: Lombiq/UI-Testing-Toolbox#354

Currently, to keep it backwards compatible and avoid breaking old tests, I have not changed the default output formatter. Because of this, however, the user has to manually set the formatter before use.
See: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/compare/issue/[OSOE-770](https://lombiq.atlassian.net/browse/OSOE-770)?expand=1#diff-2af6fb34d863dcd5368561a878f904cf43ec23c5e9f70feb218a89f36bee2845R26

configuration.HtmlValidationConfiguration.HtmlValidationOptions.ResultFileFormatter =
    HtmlValidateFormatter.Names.Json;

If the JSON formatter is set, the new validationResult.GetParsedErrorsAsync() can now be used instead of the traditional validationResult.GetErrorsAsync(). I've added some checks that throw specific exceptions to tell the user about the issue if they try to use validationResult.GetParsedErrorsAsync() without setting the formatter to JSON first.

After this, the errors are mapped to the new HtmlValidationError object; see:
https://github.com/Lombiq/UI-Testing-Toolbox/pull/354/files/701f9017ecc72947a76501ae9df2f76d7d713b33#diff-befb2930039e072827e787f53aab0f5ae1adda39f5ca02961606e36460738dbc
This way, certain rule IDs (or anything else) can be excluded more neatly than before.
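The rule-ID-based exclusion described above could look roughly like this. This is a minimal sketch: the `HtmlValidationError` shape and the `ExcludeRules` helper are assumptions for illustration, not the actual toolbox API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical minimal shape of the parsed error object; the real
// HtmlValidationError in the linked PR may carry different members.
public sealed record HtmlValidationError(string RuleId, string Message);

public static class HtmlValidationErrorFilter
{
    // Drop every error whose rule ID is in the given exclusion set.
    public static IEnumerable<HtmlValidationError> ExcludeRules(
        IEnumerable<HtmlValidationError> errors,
        ISet<string> excludedRuleIds) =>
        errors.Where(error => !excludedRuleIds.Contains(error.RuleId));
}
```

With structured errors, exclusions become simple LINQ filters instead of string matching on the plain-text report.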

Let me know your thoughts and/or any changes you'd like to see.

@DemeSzabolcs
Member

I see, thanks.

Currently to make sure that it's backwards compatible and does not break old tests I have not changed the default output formatter.

Is there a chance that it would break something somehow?

Because if not, I think it should be the default output formatter, based on this part of the issue description:

Keep the simple text output too, but this output should from then on be the recommended way. Change all built-in features to use that.

@AydinE
Contributor Author

AydinE commented Apr 8, 2024

I see, thanks.

Currently to make sure that it's backwards compatible and does not break old tests I have not changed the default output formatter.

Is there a chance that it would break something somehow?

Because if not, I think it should be the default output formatter, based on this part of the issue description:

Keep the simple text output too, but this output should from then on be the recommended way. Change all built-in features to use that.

@DemeSzabolcs I would of course change all the references on our side to use the new formatted output, but I meant more in the sense of external projects that rely on this for testing; those would break with this update, right?

@DemeSzabolcs
Member

I would of course change all the references on our side to use the new formatted output, but I meant more in the sense of external projects that rely on this for testing; those would break with this update, right?

If you can set it as the default inside the https://github.com/Lombiq/UI-Testing-Toolbox project, then I think not, because these modules are not updated automatically anywhere.

Other projects would still use the old version. And we usually have submodule/Orchard Core update to-dos, where the developer will update the module and also fix the tests based on the breaking changes.

@AydinE
Contributor Author

AydinE commented Apr 9, 2024

Alright @DemeSzabolcs, I have set JSON as the default and changed the tests to use the new parsed output. Let me know if anything else is needed.

@DemeSzabolcs
Member

Okay, I can't think of anything else right now. Let me know when it's ready for a review.

@AydinE
Contributor Author

AydinE commented Apr 10, 2024

@DemeSzabolcs In that case, I think it's ready for a review.

@DemeSzabolcs
Member

Okay, thank you! I will do a review in the upcoming days.

@DemeSzabolcs
Member

This run was flaky: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8619854435?pr=738
(screenshot)

I downloaded the failure dump:
(screenshot)

And it failed with this message:
(screenshot)

@DemeSzabolcs
Member

@AydinE
Contributor Author

AydinE commented Apr 20, 2024

@AydinE There is a static code analyzer warning: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8738561440/job/23977863448?pr=738#step:6:162

Hey @DemeSzabolcs, indeed. I was actually waiting for feedback from you on the notes I left before fixing everything up.


@AydinE
Contributor Author

AydinE commented Apr 23, 2024

Hey @DemeSzabolcs, while trying to finish up this task and running tests locally, I've run into an issue, and I am wondering if you have the same one or know how to potentially fix it.
When I run the tests, the Chrome instance used for the test does not seem to be disposed. This happens whether I run the tests through the IDE or via the CLI with dotnet test (headless); it actually locks up my PC completely if I try to run the full suite of tests. If I run them one by one, I have to run taskkill to get rid of the leftover processes.

This actually shouldn't happen, because of the call to _context.Scope?.Dispose(); here, which should also call _driver?.Dispose(); under the hood.
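The expected disposal chain can be sketched like this. The names are illustrative only; the real types live in the UI Testing Toolbox, and only the Dispose calls are taken from the comment above.

```csharp
using System;

// Rough sketch of the ownership chain: disposing the scope should also
// dispose the driver it owns, which terminates the browser process.
public sealed class UITestScope : IDisposable
{
    private readonly IDisposable _driver; // e.g. the Selenium ChromeDriver

    public UITestScope(IDisposable driver) => _driver = driver;

    // Mirrors _context.Scope?.Dispose() cascading to _driver?.Dispose().
    public void Dispose() => _driver?.Dispose();
}
```

If Chrome processes linger even though this chain runs, the driver's own Dispose is apparently not killing the browser process on that machine.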

Because of this, a full test run now takes about 31 minutes on a relatively powerful PC that I have here at home. I am wondering if this issue is also happening for you and the rest of the team, and also perhaps in the pipeline?

I'd love to get your thoughts on this.

Edit:
Just for some extra info: the first couple of groups of tests go smoothly and quickly, but later tests run very slowly, because old instances of Chrome keep the CPU at 100% as more tests run:

(screenshot)

@DemeSzabolcs
Member

Hey @DemeSzabolcs, while trying to finish up this task and running tests locally, I've run into an issue, and I am wondering if you have the same one or know how to potentially fix it. When I run the tests, the Chrome instance used for the test does not seem to be disposed. This happens whether I run the tests through the IDE or via the CLI with dotnet test (headless); it actually locks up my PC completely if I try to run the full suite of tests. If I run them one by one, I have to run taskkill to get rid of the leftover processes.

This actually shouldn't happen, because of the call to _context.Scope?.Dispose(); here, which should also call _driver?.Dispose(); under the hood.

Because of this, a full test run now takes about 31 minutes on a relatively powerful PC that I have here at home. I am wondering if this issue is also happening for you and the rest of the team, and also perhaps in the pipeline?

I'd love to get your thoughts on this.

Edit: Just for some extra info: the first couple of groups of tests go smoothly and quickly, but later tests run very slowly, because old instances of Chrome keep the CPU at 100% as more tests run:

(screenshot)

I see. I don't have this issue, but I recommend checking this out; it might help: https://github.com/Lombiq/UI-Testing-Toolbox/blob/43eba31b50a2273b2c7ae4fe1406e901137877fc/Lombiq.Tests.UI/Docs/Troubleshooting.md?plain=1#L15


This pull request has merge conflicts. Please resolve those before requesting a review.

@AydinE
Contributor Author

AydinE commented Apr 23, 2024

I see. I don't have this issue, but I recommend checking this out; it might help: https://github.com/Lombiq/UI-Testing-Toolbox/blob/43eba31b50a2273b2c7ae4fe1406e901137877fc/Lombiq.Tests.UI/Docs/Troubleshooting.md?plain=1#L15

It's not an aborted test; this happens for successfully run tests, for every single test, on multiple machines, in multiple IDEs, and via the CLI too.

@AydinE
Contributor Author

AydinE commented Apr 30, 2024

Hey @DemeSzabolcs ,

I am having a bit of a hard time understanding why builds in the pipeline either fail or, even when they succeed, sometimes report that a test might be "flaky".

The issue is always with the same test: dd8c6e5#diff-2af6fb34d863dcd5368561a878f904cf43ec23c5e9f70feb218a89f36bee2845R20. Weirdly though, no matter how hard I try, I cannot replicate this issue locally (two different machines, different IDEs), and I've even tried running the exact command that the pipeline seems to run through the CLI:
dotnet test --configuration Debug --nologo --logger 'trx;LogFileName=test-results.trx' --logger 'console;verbosity=detailed' --verbosity quiet test/Lombiq.OSOCE.Tests.UI/Lombiq.OSOCE.Tests.UI.csproj
None of these attempts ever produce the same issues as the pipeline.

Is there any way to more closely reproduce the pipeline environment, or do you have any ideas why this specific test might be failing in the pipeline but not locally?


This pull request has merge conflicts. Please resolve those before requesting a review.

Temporarily disable this test to see if the build will succeed without it

github-actions bot commented May 1, 2024

This pull request has merge conflicts. Please resolve those before requesting a review.

@AydinE
Contributor Author

AydinE commented May 2, 2024

Hey @DemeSzabolcs ,

Did you see my message? #738 (comment)
Any thoughts?

@DemeSzabolcs
Member

Hey @DemeSzabolcs ,

Did you see my message? #738 (comment) Any thoughts?

For some reason no, thanks for pinging! I will check it out today.

@DemeSzabolcs
Member

DemeSzabolcs commented May 2, 2024

@AydinE
So the flakiness indication works this way: a test is retried up to three times on the CI. If it fails zero times, there is no warning; if it fails once or twice but passes on a later attempt, there will be a flakiness warning (and a generated UI test failure dump, so you can see what the problem is). If a test fails all three times, the workflow fails too and the test is considered failed.

Some flakiness is okay. If a test only fails once, it's not the same test every time, and only some runs have a flaky test while most runs have none, that's acceptable.

Of course if it fails 3 times in a row, then the workflow will fail and that indicates that there is a real problem.

If you could not reproduce it locally, for debugging this I would recommend looking into the failure dumps and build logs.

I checked out the failed run of UIKitShowcasePageShouldBeCorrect and this was in the test output log:

2024-05-01 08:48:24.5374 - The test failed with the following exception: Lombiq.Tests.UI.Exceptions.HtmlValidationAssertionException: Invalid JSON, make sure to set the OutputFormatter to JSON. Check the HTML validation report in the failure dump for details.
 ---> System.Text.Json.JsonException: Invalid JSON, make sure to set the OutputFormatter to JSON.

You can download it from here: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8907281677/artifacts/1463356716

So it's the same problem as I mentioned here: #738 (comment)

@AydinE
Contributor Author

AydinE commented May 2, 2024

I checked out the failed run of UIKitShowcasePageShouldBeCorrect and this was in the test output log:

@DemeSzabolcs
Yeah, that is the weird part; the error is: Invalid JSON, make sure to set the OutputFormatter to JSON.
However, as mentioned, that makes no sense, because the default output is now set to JSON, and in one of the latest commits I even specifically set it to JSON for that specific test; same issue.

Weirdly enough, this is not the only test that uses this parsed-error method, and none of the other tests seem to have the same issue. So I am just wondering if there is anything different about the tests running in the pipeline compared to running them locally, because it's hard to debug if I can't replicate the issue.

EDIT: In the last run, the test that has the issue is disabled, just to see if any of the other tests have the same problem; as we can see, they run fine.

@DemeSzabolcs
Member

I checked out the failed run of UIKitShowcasePageShouldBeCorrect and this was in the test output log:

@DemeSzabolcs Yeah, that is the weird part; the error is: Invalid JSON, make sure to set the OutputFormatter to JSON. However, as mentioned, that makes no sense, because the default output is now set to JSON, and in one of the latest commits I even specifically set it to JSON for that specific test; same issue.

Weirdly enough, this is not the only test that uses this parsed-error method, and none of the other tests seem to have the same issue. So I am just wondering if there is anything different about the tests running in the pipeline compared to running them locally, because it's hard to debug if I can't replicate the issue.

EDIT: In the last run, the test that has the issue is disabled, just to see if any of the other tests have the same problem; as we can see, they run fine.

The difference between the local and CI runs could be that the CI run uses Ubuntu: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/blob/dev/.github/workflows/build-and-test.yml#L17

I would also try comparing the failing test to the other ones where you needed to change the configuration, to see if there is any difference.

Also, the failure dump says that the error is here: https://github.com/Lombiq/UI-Testing-Toolbox/pull/354/files#diff-67847fe58db929ae56cd24e4ba914c28af919cdf0272fd3cbf193c995d2666aeR37-R40

So I guess there is a special case in this test on the CI where it enters the if. Maybe the method should be modified to avoid this flakiness.
You could also debug it on the CI, by the way: instead of throwing inside the if, you could try printing out the output value to see what the problem is.

Other than this, I don't have any idea.
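The print-instead-of-throw debugging idea could look roughly like this. This is a hypothetical helper for illustration, not the toolbox's actual code; only JsonDocument.Parse and the general approach are assumed.

```csharp
using System;
using System.Text.Json;

public static class ValidationReportDebug
{
    // Instead of throwing on invalid JSON, parse defensively and surface the
    // raw validator output in the log, so a CI-only failure shows what
    // html-validate actually emitted on that machine.
    public static bool TryParseReport(string output, out JsonDocument document)
    {
        try
        {
            document = JsonDocument.Parse(output);
            return true;
        }
        catch (JsonException)
        {
            Console.WriteLine("html-validate output was not valid JSON: " + output);
            document = null;
            return false;
        }
    }
}
```

Logging the raw output in the CI run would reveal whether the formatter setting is being ignored there or the validator writes something else (e.g. a warning banner) before the JSON.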


github-actions bot commented May 3, 2024

This pull request has merge conflicts. Please resolve those before requesting a review.

@AydinE AydinE closed this May 3, 2024
@AydinE AydinE deleted the issue/OSOE-770 branch May 3, 2024 17:54