Design proposal: move the current functionality of Kover Gradle Plugin to Kotlin Gradle Plugin #608
* It is here to stay
* We would love to maintain it in its current form and have a clear vision for future development (#608) with a straightforward migration plan
Additionally: consider adding the ability to exclude source sets or Android build variants in the build file of the corresponding project, so that even if the entire project is being built, the user can limit the contents of the report.
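For reference, a sketch of what such a per-project filter can look like. The names below follow the current Kover 0.8 DSL and are only an assumption about how the proposal might surface this; they may differ between versions:

```kotlin
// build.gradle.kts of the corresponding project (sketch, DSL names assumed)
kover {
    currentProject {
        sources {
            // exclude a source set from instrumentation and reports
            excludedSourceSets.add("integrationTest")
        }
    }
}
```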
Suggestions from #600: Use
This all sounds great. I especially like the idea of merging the concept of Kover variants with Kotlin plugin standards. This seems to be in the same spirit as what Detekt did recently when they made a one-to-one mapping with Kotlin source sets. I'm guessing that since Kover relies on code execution, it will be per target instead of per source set (so we won't get "common" Kover tasks, or if we do, they would just aggregate all the tasks for the targets in the "common" tree).
Filtering at the source set level is still questionable, because it is quite difficult to distinguish classes from different source sets: they are all combined into one directory of compiled class files, which is why Kotlin compilations are used now. To ensure they can be distinguished, Kover would need to be part not just of KGP but also of the Kotlin compiler, and this is a more complex task that requires a more thoughtful design.
The prototype has been implemented. You can apply the plugin by specifying it in the `settings.gradle.kts` file:

```kotlin
plugins {
    id("org.jetbrains.kotlinx.kover.aggregation") version "0.8.2"
}
```

There is no need to apply it anywhere else; all uses of the regular Kover Gradle Plugin should be removed. This aggregation plugin works differently: report generation tasks are created only in the root project, and settings are specified only in the settings file.

To instrument the tests, you need to enable coverage measurement. This can be done by adding a CLI argument or by adding to the settings file:

```kotlin
kover {
    enableCoverage()
}
```

The key difference is that running the Kover report tasks does not automatically trigger the tests; the test tasks whose coverage should be measured must run in the same build. In order not to restart tests in different builds, use the build cache.
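For illustration, the two snippets above combined into one minimal `settings.gradle.kts`. This is a sketch: the report task name in the comment is assumed from the regular Kover plugin and is not confirmed in this comment:

```kotlin
// settings.gradle.kts: minimal sketch of the aggregation plugin setup
plugins {
    id("org.jetbrains.kotlinx.kover.aggregation") version "0.8.2"
}

kover {
    // Instrument test tasks in every build of this project tree;
    // alternatively, enable coverage per build via a CLI property.
    enableCoverage()
}

// With coverage enabled, tests and the report are requested together, e.g.:
//   ./gradlew test koverXmlReport
// Running the report task alone produces a report only from tests
// that were executed in that same build (or restored from the build cache).
```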
Hi @shanshin, sorry for the long delay. I am happy to try to be an early adopter of the new plugin and provide feedback. I tried it, and here are my initial experiences.

First, let me explain my project setup a bit. I couldn't apply the settings plugin by id. Given my usual technique of applying plugins programmatically, I found that the plugin class is not accessible (not public).

Reading through your previous comment, I had some questions and concerns about task ordering and task dependencies. I understand in general why the shift was made to Kover tasks not depending directly on test tasks: it gives the user more flexibility over what to record coverage of. I might like that idea, though I don't fully understand it and have some concerns. I felt a little lost trying to figure out what I need to do in order to ensure proper task ordering (do I need to manually make Kover report tasks run after the test tasks?).

I also felt unsure about how up-to-date checking would work here. In the previous plugin, I think that if a test source set changes, it will invalidate the test and therefore invalidate the Kover tasks. Here, I am unsure what exactly happens. I would guess that since the Kover tasks no longer depend directly on the test tasks, the Kover tasks have to always run in every build. This is unusual for a Gradle task, which is supposed to have up-to-date checking. But if I am guessing correctly, maybe the thinking is that if all other tasks are up-to-date and don't run, then the Kover task will have nothing to do anyway. But then again, what if I have two modules and one has an up-to-date test and the other doesn't? Will Kover just aggregate in the results from the invalidated module and ignore the one that is up-to-date? I have more questions like this, but I'll stop here and see if you can point me towards some documentation or answer first.

And for additional context, while I rely heavily on Gradle's task up-to-date checking as well as Gradle's configuration cache, I do not use the build cache at all.

In the end, I was able to successfully generate a coverage report for one of my module's tests. It looked good. I rely on the validation tasks for my normal workflow, so I'll be able to try this out more thoroughly once those are ready.

In summary:
@mgroth0, thanks for the feedback!
Indeed. Only the application by ID was considered, so there was not even an idea to make it public.
I will clarify this point a little:
This is not entirely correct: the Kover tasks still depend on the test tasks; the only difference is that running the Kover tasks does not automatically trigger the tests.
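In Gradle terms, this behavior corresponds to task ordering rather than a hard dependency. A minimal sketch with hypothetical task names, not Kover's actual wiring:

```kotlin
// build.gradle.kts (illustration only, not the actual Kover implementation)
tasks.register("coverageReport") {
    // Ordering: if both tasks end up in the task graph,
    // the report runs after the tests...
    mustRunAfter(tasks.named("test"))
    // ...but there is no dependsOn(test), so requesting
    // "coverageReport" alone does not schedule the test task.
}
```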
As described above, the Kover tasks depend on the test tasks as before; therefore, the up-to-date checks work as before, and if neither the tests nor the source code have changed, then the report will not be regenerated. The only difference you should consider for up-to-date checking is that it now depends on which compilation and test tasks are being run.
It will be added in the next release :)
I don't know, to be honest. The biggest clue I have is that if I take the parts out of my settings script that define exclusive content for repositories, then the plugin can be resolved by id.
I think I am having trouble reading and understanding this. My understanding is that you now use task ordering rather than a task dependency (see the Gradle docs on ordering tasks).
I will describe the situation that confuses me. First consider this hypothetical scenario: there are two modules, A and B, and both are compiled and tested together with a Kover report in one build.

Now, consider the same scenario, but in which module A is already up-to-date while module B has changed. So in that scenario, the concern I have is how would Kover know to still include the output from module A? You said that the Kover tasks depend on the test tasks. But in this scenario, module A is up-to-date, so even though its test task is selected as part of the build, both its test and compilation tasks will be skipped, not started. So then I wonder, will Kover still know to include the results from A, or will it now just include B?
It seems that due to the complexity of Gradle, we have slight differences in terminology.

If I write "the task is running", I mean that the task is explicitly called from the console, or that a task being executed triggers the execution of this task (through an explicit dependency between tasks, in some cases of implicit dependency, or in case of dependency resolution). The tasks being executed are sorted, forming a so-called task graph.

As a result of the task execution, the task may have one of several outcomes. For example, when recompiling without changing the source code, the compilation task executes anyway, but it has an outcome of `UP-TO-DATE`. You write "the task is running" to mean a task that finished with an outcome of `SUCCESS`.
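To make the distinction concrete, here is a toy example in a plain Gradle build, unrelated to Kover itself:

```kotlin
// build.gradle.kts: a task that is always in the task graph when requested,
// but does no work on repeat runs.
tasks.register("stamp") {
    val out = layout.buildDirectory.file("stamp.txt")
    outputs.file(out)
    doLast { out.get().asFile.writeText("stamped") }
}
// First `gradlew stamp` executes the action (outcome SUCCESS).
// A second run prints ":stamp UP-TO-DATE": the task is still "running"
// in the task-graph sense, but its action is skipped.
```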
The task outcomes are a bit mixed up here.
Yes, that's right.
No, compilation tasks still produce inputs for the Kover tasks. The Kover settings plugin is written in such a way that it uses inputs only from those tasks that are in the task execution graph.

Summarizing, Kover tasks are insensitive to the up-to-date checks of one or more compilation and test tasks. You can verify this by reproducing your use case: classes and coverage from the up-to-date module will still be present in the report.

Kover guarantees the stability of reports regardless of the up-to-date status of compilation and test tasks.
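A rough sketch of how a settings plugin can express this kind of wiring. This illustrates the approach only and is not the actual Kover code:

```kotlin
// settings.gradle.kts (illustrative sketch)
gradle.taskGraph.whenReady {
    // Only test tasks that are actually scheduled in this build
    // contribute fresh coverage data.
    val scheduledTests = allTasks.filterIsInstance<org.gradle.api.tasks.testing.Test>()
    // A Kover-like plugin would collect the binary coverage files these
    // tasks produce; files produced by earlier builds remain on disk,
    // so up-to-date tasks do not drop out of the report.
    println("Tests in this build: ${scheduledTests.map { it.path }}")
}
```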
You're completely right about us having different terminology, and thank you for taking the time to sort this all out! It's important to emphasize, from what you said, that Gradle itself isn't always consistent with its own terminology, like "task dependency".
That's the main thing I needed to hear, and I have no evidence that this guarantee doesn't hold true. I was just concerned because I couldn't understand it.
Once the next version with verification tasks is available, I'll try to include a test specifically addressing these concerns when I set up the whole configuration I usually use. I'm guessing the test will pass, but it will be nice to confirm.
Noting that this seems like the most important change: my understanding is that users will need to manually select the test tasks that they want to include. So they might run the test tasks together with the report task in the same build.
Abstract
Calculating code coverage for applications written in Kotlin is basic functionality closely related to the language itself.
Currently, this functionality is supplied separately as a Gradle plugin that is in no way tied to the Kotlin compiler being used.
Enabling coverage measurement requires a number of additional actions: applying a plugin and configuring build scripts through extensions unrelated to Kotlin.
The coverage settings have nothing to do with the Kotlin settings, which makes them less discoverable.
For a smoother Kotlin experience, we decided to move the coverage collection DSL into the Kotlin Gradle Plugin.
Motivation
Disadvantages of the current approach:

* the need to add a `kover` dependency when an `implementation` dependency has already been declared - this complicates the configuration for builds with a large number of subprojects (see the sketch below)

Among the additional advantages, integration with the Kotlin Gradle Plugin will make it possible to add native target instrumentation more smoothly.
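For example, with the current plugin, a project that aggregates coverage has to duplicate its project dependencies in the `kover` configuration. A sketch of the existing DSL:

```kotlin
// build.gradle.kts of the aggregating project (current approach,
// Kover Gradle Plugin applied to both projects)
dependencies {
    implementation(project(":core"))
    // The same project must be declared again so that its coverage
    // is merged into this project's reports:
    kover(project(":core"))
}
```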
Continued development of Kover Gradle Plugin
We will continue to maintain the current plugin and also add some small but convenient functionality.
Planned changes:
However, the development of big new features within this plugin will be suspended, and the DSL will no longer be redesigned.
Products and naming
The functionality of coverage measurement settings will move to the Kotlin Gradle Plugin and the Kotlin Maven Plugin.

The word *Kover* will denote a variety of tools that will be published for general use, designed to work with the coverage of applications written in Kotlin. Such tools will solve highly specialized tasks, and it is assumed that they will be used in custom pipelines: called from the CLI, from scripts, or from other build systems.
Examples of these tools:
Development milestones