Test DGPv2 example projects #3821
Conversation
- Convert the existing DGPv1 examples to demonstrate DGPv2.
- Import [Dokkatoo examples](https://github.com/adamko-dev/dokkatoo/tree/v2.4.0/examples).

Part of KT-71346

To keep the PR smaller, tests to ensure these projects work as expected will be added in a following PR.
Test the DGPv2 examples using golden testing. The example projects are executed using Gradle TestKit, and the generated Dokka output is compared against data committed into the directory `./dokka-integration-tests/gradle/src/testExampleProjects/expectedData/`.

KT-71346
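As a rough illustration of the approach (not the actual test code from this PR; the task name, output path, and helper names below are assumptions), a TestKit-based golden comparison could look roughly like this:

```kotlin
import org.gradle.testkit.runner.GradleRunner
import java.io.File
import kotlin.test.assertEquals

// Sketch only: run one example project and diff its Dokka output against committed data.
// "dokkaGenerate" and "build/dokka/html" are assumptions about the DGPv2 defaults.
fun verifyExampleProject(exampleProjectDir: File, expectedDataDir: File) {
    GradleRunner.create()
        .withProjectDir(exampleProjectDir)
        .withArguments("dokkaGenerate", "--stacktrace")
        .build() // throws if the build fails

    val actualDataDir = exampleProjectDir.resolve("build/dokka/html")

    // Golden comparison: every committed file must be reproduced with identical content.
    expectedDataDir.walk()
        .filter { it.isFile }
        .forEach { expected ->
            val actual = actualDataDir.resolve(expected.relativeTo(expectedDataDir).path)
            assertEquals(expected.readText(), actual.readText(), "mismatch in ${expected.relativeTo(expectedDataDir)}")
        }
}
```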
This is the main test class to review. I'm adding a comment to highlight it because the GitHub UI is hiding the diff.
For some time we have had the idea of adding such tests (which compare HTML), so that's nice. But I'm a bit afraid that if we have some expected HTML changes, the process of updating the data will probably be cumbersome.

It would be nice to understand: how hard will it be to maintain those examples?
It seems the GH UI is lagging pretty heavily for me due to the test fixture; rebasing the fixture to the first (or last) commit and making a stacked branch will probably reduce the lag.
```kotlin
.substringAfter("// begin-report-data", "could not find 'begin-report-data'")
.substringBefore("// end-report-data", "could not find 'end-report-data'")

return Json.decodeFromString<ConfigurationCacheReportData>(reportData)
```
Looking at this CC check I have a couple of thoughts:

- Could we just `ignoreUnknownKeys` and only read the `totalProblemCount`? Otherwise this will break if the CC report JSON format changes.
- Do we really want to parse this JSON at all? What is the benefit compared to just failing the test?
- Does this CC check ever actually run for a build that fails due to a CC serialization error? I think the test is going to fail prematurely with a build failure or an assertion failure before the `totalProblemCount` assertion runs.
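For illustration, the lenient-parsing variant suggested above could look roughly like this (a sketch assuming kotlinx.serialization; `CcReportSummary` is a hypothetical minimal model, not the existing `ConfigurationCacheReportData` class):

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

// Hypothetical minimal model: only the field the assertion actually needs.
@Serializable
data class CcReportSummary(val totalProblemCount: Int)

// ignoreUnknownKeys keeps the parser working if unrelated fields in the
// configuration cache report JSON are added, removed, or renamed.
private val lenientJson = Json { ignoreUnknownKeys = true }

fun parseTotalProblemCount(reportData: String): Int =
    lenientJson.decodeFromString<CcReportSummary>(reportData).totalProblemCount
```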
> Could we just `ignoreUnknownKeys` and only read the `totalProblemCount`? Otherwise this will break if the CC report JSON format changes

Yes, we could add `ignoreUnknownKeys`, but at the moment it's simple to parse everything. We could add it in the future if necessary.
The CC test here is a 'soft launch'. In the future I want to specifically read the JSON CC report data and test that the registered inputs are as expected.
> Do we really want to parse this JSON at all? What is the benefit compared to just failing the test?

The difference with checking the CC report JSON is that we can be more certain about whether CC fails. Just parsing the console output can lead to false positives and uncertain results. We can also get and report more specific information about failures, which might not be present in the console output.
> Does this CC check ever actually run for a build that fails due to a CC serialization error? I think the test is going to fail prematurely with a build failure or an assertion failure before the `totalProblemCount` assertion runs

IIRC there are some CC problems that are reported but don't fail the build. But I can't remember them right now...
For me this check with JSON parsing doesn't look very useful, as the build should definitely fail if CC is not supported for some reason while CC is enabled. I feel like it brings more harm (maintainability, possible format changes, etc.) than benefit (which is unknown to me).

E.g.:

> We can also get and report more specific information about failures, which might not be present in the console output.

It's always possible to run the test locally anyway; we will need that to fix/investigate the issue. Of course it's not an option for TC, but I think that we could then just store the CC reports somewhere - IMO, this will be easier to maintain and will bring more benefit (as we could read the whole report).
> there are some CC problems that are reported, but don't fail the build

Are these going to be reflected in the `totalProblemCount`? In case they are, I still wonder whether specifying `org.gradle.configuration-cache.problems=fail` and `org.gradle.configuration-cache.max-problems=0` (see the sketch below) would achieve failing the build on CC errors.

> report more specific information about failures

Would it be possible to zip the entire `build/reports/configuration-cache` directory into the GH/TC artifacts to be able to inspect the full Gradle report?

> test the registered inputs are as expected.

I want to highlight that this might bring a maintenance cost where some CC inputs are going to change due to changes like the KGP version, which are unlikely to be of interest to Dokka.
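For reference, the two switches mentioned above are plain Gradle properties; a sketch of a `gradle.properties` fragment for the test runs could look like this (with `problems=fail` any reported problem should already fail the build, so the `max-problems` line may be redundant):

```properties
org.gradle.configuration-cache=true
# Treat every configuration cache problem as a build failure instead of a warning.
org.gradle.configuration-cache.problems=fail
# Lower the problem threshold from the default.
org.gradle.configuration-cache.max-problems=0
```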
```kotlin
projectDir.walk()
    .filter { it.name == "settings.gradle.kts" || it.name == "settings.gradle" }
    .forEach { p ->
        val repoLine = p.useLines { it.firstOrNull { l -> l.trim() == "repositories {" } }
```
This implementation looks somewhat fragile because it relies on seeking the `repositories {` substring. Also what is the use-case for calling this function, but then doing nothing if the repositories are not specified in the test project?

Can we simplify this implementation to:

- Find and assert a `settings.gradle(.kts)` file exists
- Inject `pluginManagement` and `dependencyResolutionManagement` blocks with the necessary repositories
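For illustration, the injected content could look roughly like the following `settings.gradle.kts` fragment (a sketch; the local repository path is a placeholder for wherever the under-test DGP build is published):

```kotlin
// settings.gradle.kts - sketch of blocks a test fixture could inject.
pluginManagement {
    repositories {
        // The DGP version under test is only published to a local directory.
        maven(url = "file:///path/to/dev-maven-repo")
        gradlePluginPortal()
        mavenCentral()
    }
}

dependencyResolutionManagement {
    repositories {
        maven(url = "file:///path/to/dev-maven-repo")
        mavenCentral()
    }
}
```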
> This implementation looks somewhat fragile because it relies on seeking the `repositories {` substring. Also what is the use-case for calling this function, but then doing nothing if the repositories are not specified in the test project?

If the repositories aren't correctly updated, then I would expect the test to fail. The version of DGP is not published to a remote repository, and is only available in a local directory.

> Can we simplify this implementation to:
>
> - Find and assert a `settings.gradle(.kts)` file exists

Some of the example projects (e.g. the composite build example) have multiple `settings.gradle` files, so they need to be discovered and updated dynamically. We could try to manually specify this per project, but since the tests should fail without a correct repository, doing an automatic find/replace seems fine to me.

> - Inject `pluginManagement` and `dependencyResolutionManagement` blocks with the necessary repositories

Do you mean appending multiple `pluginManagement {}` and `dependencyResolutionManagement {}` blocks at the end of the file?

That could work for `dependencyResolutionManagement {}`, because Gradle allows repeated blocks, but for `pluginManagement {}` Gradle will fail if there are multiple blocks, so `pluginManagement { repositories {} }` has to be updated in-place.

Maybe an init plugin could help? But I'm not sure if that could work with the composite build example.
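If the init-script route were explored, an untested sketch could look like the following (assuming `beforeSettings` runs for the settings of each build in the composite, which would need verification; the repository path is again a placeholder):

```kotlin
// init.gradle.kts - untested sketch for injecting repositories without patching settings files.
beforeSettings {
    pluginManagement {
        repositories {
            maven(url = "file:///path/to/dev-maven-repo") // locally published DGP under test
            gradlePluginPortal()
            mavenCentral()
        }
    }
    dependencyResolutionManagement {
        repositories {
            maven(url = "file:///path/to/dev-maven-repo")
            mavenCentral()
        }
    }
}
```

Such a script would be passed to each test build with Gradle's `--init-script` (`-I`) option.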
My main argument for not doing this substring search/replace is that I find reasoning about such code cumbersome if the behavior doesn't match my expectation of what the code should be doing. If you want to keep the substring search/replace, I would advise using an explicit marker instead:

```
repositories {
    <repositories_marker>
}
```

which will explicitly fail to evaluate if it wasn't processed.

> pluginManagement { repositories {} } has to be updated in-place.

Can we avoid specifying `pluginManagement` to avoid having to patch it later?

> work with the composite build example

Have you tried it out? I would expect init scripts to apply to all project hierarchies in the composite build.
> My main argument for not doing this substring search/replace is that I find reasoning about such code cumbersome if the behavior doesn't match my expectation of what the code should be doing. If you want to keep the substring search/replace, I would advise using an explicit marker instead:
>
> `repositories { <repositories_marker> }`
>
> which will explicitly fail to evaluate if it wasn't processed

The start of the `repositories {` block is already a marker.

Introducing non-standard tags harms the effectiveness of the projects as examples. As a user I would be confused by seeing `<repositories_marker>`.

> > `pluginManagement { repositories {} }` has to be updated in-place.
>
> Can we avoid specifying `pluginManagement` to avoid having to patch it later?

No.

> > work with the composite build example
>
> Have you tried it out? I would expect init scripts to apply to all project hierarchies in the composite build

I haven't tried.
I like that this PR expands on E2E testing in Dokka; however, is it really feasible to test the generated HTML output in the long term? IMHO, at the E2E level, HTML is more of an implementation detail. I imagine that if someone implements a major Dokka redesign in the future, they will have to regenerate these HTML files after every UI change.
For some context, Dokka's output formats don't change very often. I've used a similar sort of direct file comparison testing in Dokkatoo for over a year, and I haven't had any instability. We can work on making the reference files easier to update, but that's something I'd rather look at if/when it becomes a problem (https://xkcd.com/1205/).
In the coming days and months we will have HTML changes: related to the nav bar, ToC, search, and maybe more. Let's do an experiment (a thought experiment or a real one).
I don't think that we should merge this PR without some way to update `expectedData`.

E.g. #3821 (comment)
I still want to challenge the idea that we should test HTML output: but then, is there a point in testing HTML output that doesn't really change? In general, I would expect E2E testing to rather be carried out using a UI testing framework like Selenium.
There is already a way to update the expected data.

Yes, there is a point. We want to make sure that changes in DGP/AGP/Gradle/KGP don't affect the HTML output.

It takes a few minutes to do.
Let's see how it goes with maintaining such tests...

Also, let's add information on how to investigate failures and update the expected data to https://github.com/Kotlin/dokka/blob/master/dokka-integration-tests/README.md before merging.
Test the DGPv2 examples using golden testing.

The example projects are executed using Gradle TestKit. The generated Dokka output is compared against data committed into the directory `./dokka-integration-tests/gradle/src/testExampleProjects/expectedData/`.

To re-emphasise, everything in `testExampleProjects/` is test data, and these are the remaining modified files:

KT-71346