Design proposal: move the current functionality of Kover Gradle Plugin to Kotlin Gradle Plugin #608

Open
shanshin opened this issue May 15, 2024 · 10 comments
Assignees
Labels
Feature Feature request issue type S: in progress Status: implementing or design in process

Comments

@shanshin
Collaborator

Abstract

Calculating code coverage for applications written in Kotlin is basic functionality closely related to the language itself.

Currently, this functionality is shipped separately as a Gradle plugin that has no connection to the Kotlin compiler in use.
Enabling coverage measurement requires a number of extra steps: applying a plugin and configuring build scripts through extensions unrelated to Kotlin.

The Kotlin coverage settings are separate from the rest of the Kotlin configuration, which makes them less discoverable.

For a smoother Kotlin experience, we have decided to move the coverage-collection DSL into the Kotlin Gradle Plugin.

Motivation

Disadvantages of the current approach:

  • the complexity of the concepts used in the Kover Gradle Plugin: it introduces its own concepts of artifacts, coverage measurement options, and report variants, plus its own rules for combining variants and merging reports. These concepts do not fit well with Kotlin and look like an artificial pile-up
  • after creating a minimal Kotlin application, it is not obvious that you need a third-party utility to measure coverage
  • some Kover plugin settings duplicate existing settings; for example, a kover dependency must be added even when an implementation dependency has already been declared, which complicates configuration for builds with many subprojects
  • it is unobvious that Kover must be configured as a separate Gradle plugin unrelated to Kotlin
  • the difficulty of supporting two Gradle plugins in the IDE: it is easier to support one Kotlin Gradle Plugin than two plugins with different versions
  • the current plugin DSL is specific to Java or JaCoCo in many respects
  • complicated integration with the Kotlin compiler
  • the different lifecycles of the Kover Gradle Plugin and the Kotlin Gradle Plugin potentially complicate delivering support for new compiler features

As an additional advantage, integration with the Kotlin Gradle Plugin will make it possible to add native-target instrumentation more smoothly.

Continued development of Kover Gradle Plugin

We will continue to maintain the current plugin and also add some small but convenient functionality.

Planned changes:

However, the development of new big features within this plugin will be suspended, and the DSL will no longer be redesigned.

Products and naming

The functionality of coverage measurement settings will move to the Kotlin Gradle Plugin and the Kotlin Maven Plugin.

The word Kover will denote a set of publicly published tools for working with the coverage of applications written in Kotlin.
These tools will solve narrowly specialized tasks, and it is assumed that they will be used in custom pipelines: called from CLI scripts or from other build systems.

Examples of these tools:

  • JVM agent
  • generating human-readable reports
  • verification of coverage values
  • working with the Kotlin code coverage export format

Development milestones

  • finalize the tasks specified in the section above
  • develop a temporary Kover settings plugin as a prototype for configuring coverage in the Kotlin Gradle Plugin, to test new ideas
  • after the design stabilizes and the main user cases collected from settings-plugin feedback are handled, begin integration with the Kotlin Gradle Plugin
  • complete the integration with the Kotlin Gradle Plugin, aligning workflows and lifecycles; new HTML and export reports
  • implement coverage measurement support in the Kotlin compiler
@shanshin shanshin added Feature Feature request issue type S: untriaged Status: issue reported but unprocessed labels May 15, 2024
@shanshin shanshin self-assigned this May 15, 2024
qwwdfsad added a commit that referenced this issue May 16, 2024
* It is here to stay
* We would love to maintain it in its current form and have a clear vision for future development (#608) with a straightforward migration plan
@shanshin
Collaborator Author

shanshin commented Jun 6, 2024

Additionally: consider adding the ability to exclude source sets or Android build variants in the build file of the corresponding project, so that even if the entire project is built, the user can limit the contents of the report.

@shanshin
Collaborator Author

shanshin commented Jun 7, 2024

Suggestion from #600: use DomainObjectCollection, DomainObjectSet, etc. for managed object collections.
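
The suggestion could look roughly like this. A sketch only: the extension class and property names are hypothetical, not Kover's actual API; only the use of Gradle's `DomainObjectSet` is the point.

```kotlin
import javax.inject.Inject
import org.gradle.api.DomainObjectSet
import org.gradle.api.model.ObjectFactory

// Sketch: exposing a managed collection as a DomainObjectSet lets consumers
// react lazily (e.g. via all { ... }) to elements that are added later,
// instead of iterating over a plain List snapshot at configuration time.
// "ReportVariantsExtension" and "variantNames" are illustrative names.
abstract class ReportVariantsExtension @Inject constructor(objects: ObjectFactory) {
    val variantNames: DomainObjectSet<String> =
        objects.domainObjectSet(String::class.java)
}
```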

shanshin added a commit that referenced this issue Jun 24, 2024
shanshin added a commit that referenced this issue Jun 24, 2024
shanshin added a commit that referenced this issue Jun 24, 2024
@shanshin shanshin added S: in progress Status: implementing or design in process and removed S: untriaged Status: issue reported but unprocessed labels Jun 24, 2024
shanshin added a commit that referenced this issue Jun 26, 2024
@mgroth0

mgroth0 commented Jun 27, 2024

This all sounds great. I especially like the idea of merging the concept of Kover Variants with Kotlin plugin standards. This seems to be in the same spirit as what Detekt did recently when they made a one-to-one mapping with Kotlin source sets.

I'm guessing that since Kover relies on code execution it will be per target instead of per source set (so we won't get "common" kover tasks, or if we do they would just aggregate all the tasks for the targets in the "common" tree).

@shanshin
Collaborator Author

> This all sounds great. I especially like the idea of merging the concept of Kover Variants with kotlin plugin standards. This seems like it is in the same spirit of detekt/detekt#6973.

> I'm guessing that since Kover relies on code execution it will be per target instead of per source set (so we won't get "common" kover tasks, or if we do they would just aggregate all the tasks for the targets in the "common" tree).

Filtering at the source set level is still questionable: it is quite difficult to distinguish classes from different source sets, because they are all combined into one directory of compiled class files, which is why Kotlin compilations are currently used.

To distinguish them reliably, it is necessary to be part not just of KGP but also of the Kotlin compiler, and this is a more complex task that requires a more thoughtful design.

@shanshin
Collaborator Author

@mgroth0

The prototype has been implemented.
It would be very useful for us to get initial feedback on this plugin: what functionality is missing, and what would you like to add (besides verification)?

You can apply the plugin by specifying in the settings.gradle[.kts] file:

```kotlin
plugins {
    id("org.jetbrains.kotlinx.kover.aggregation") version "0.8.2"
}
```

There is no need to apply it anywhere else; all uses of the id("org.jetbrains.kotlinx.kover") plugin must be removed.

This aggregation plugin works differently.

Report generation tasks are created only in the root project, settings are made only in the settings.gradle.kts file.

To instrument the tests, you need to enable coverage measurement. This can be done by adding the CLI argument -Pkover, or by writing in settings.gradle.kts:

```kotlin
kover {
    enableCoverage()
}
```

The key difference is that running the koverHtmlReport or koverXmlReport tasks does not automatically run the tests. Only those classes that were compiled within the same build will be included in the report, and coverage will be taken only from the tests that ran.
This way you can control yourself which coverage and which classes are included in the report, by starting different compilation and test tasks.

To avoid restarting tests across different builds, use the build cache.
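
Putting the steps above together, a minimal settings.gradle[.kts] could look like this (a sketch assembled from the snippets in this comment):

```kotlin
// settings.gradle.kts — minimal sketch combining the snippets above
plugins {
    id("org.jetbrains.kotlinx.kover.aggregation") version "0.8.2"
}

kover {
    enableCoverage() // equivalent to passing -Pkover on the command line
}
```

A typical invocation then runs the tests and the report in one build, for example ./gradlew test koverHtmlReport.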

@mgroth0

mgroth0 commented Aug 27, 2024

Hi @shanshin sorry for the long delay. I am happy to try to be an early adopter of the new plugin and provide feedback.

I tried it, and here are my initial experiences.

First, let me explain my project setup a bit. I couldn't apply the settings plugin by ID with id("org.jetbrains.kotlinx.kover.aggregation") version "0.8.2" in my own project. I am not sure exactly why, but I should emphasize that I never apply settings plugins this way. I write my own build logic plugin which applies all of my other plugins programmatically. This point is not an issue I think you should worry about, as it's likely a problem with my project, but it explains part of my motivation for what I did.

Given my usual technique of applying plugins programmatically, I found the plugin class KoverSettingsGradlePlugin so I could apply it with plugins.apply(KoverSettingsGradlePlugin::class) in my settings script (in the future, I would do this directly inside my own settings plugin). Here I came across the first issue: KoverSettingsGradlePlugin is internal. I had to use reflection to get a reference to KoverSettingsGradlePlugin before I was finally able to apply the plugin. I have never had this issue before, so I believe it is conventional and practical to make plugin classes public.

Reading through your previous comment, I had some questions and concerns about task ordering and task dependencies. I understand in general why kover tasks were changed to not depend directly on test tasks: to give the user more flexibility over what to record coverage of. I might like that idea, though I don't fully understand it and have some concerns.

I felt a little lost trying to figure out what I need to do in order to ensure proper task ordering (do I need to manually make kover report tasks mustRunAfter all the tasks I am recording coverage of?).

I also felt unsure about how up-to-date checking would work here. In the previous plugin, I think that if a test source set changes, it will invalidate the test and therefore invalidate the kover tasks. Here, I am unsure what exactly happens. I would guess that since the kover tasks no longer depend directly on the test tasks, that the kover tasks have to always run in every build. This is unusual for a gradle task, which is supposed to have up-to-date checking. But if I am guessing correctly, maybe the thinking is that if all other tasks are up-to-date and don't run, then the kover task will have nothing to do anyway. But then again, what if I have two modules and one has an up-to-date test and the other doesn't? Will kover just aggregate in the results from the invalidated module and ignore the one that is up-to-date? I have more questions like this, but I'll stop here and see if you can point me towards some documentation or answer first.

And for additional context, while I rely heavily on Gradle's task up-to-date checking as well as Gradle's configuration cache, I do not use the build cache at all.

In the end, I was able to successfully generate a coverage report for one of my module's tests. It looked good. I rely on the validation tasks for my normal workflow, so I'll be able to try this out more thoroughly once those are ready.

In summary:

  1. Make KoverSettingsGradlePlugin public
  2. Document how task ordering will work
  3. Document how up-to-date checking will work
  4. (as you know) Add verification tasks

@shanshin
Collaborator Author

@mgroth0, thanks for the feedback!

> Make KoverSettingsGradlePlugin public

Indeed. Only application by ID was considered, so the idea of making the class public never even came up.
But I still wonder why you couldn't apply it by ID from your plugin (this may be useful for other users taking the same approach).


> Document how task ordering will work

I will clarify this point a little:

> shift to making kover tasks not depend directly on test tasks

This is not entirely correct: the Kover tasks still depend on the test tasks; the only difference is that running the Kover tasks does not automatically trigger the tests (mustRunAfter is used instead of dependsOn).
So if you run the tests and the Kover task in any order, the report will be generated only after all scheduled tests have completed.
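
The difference can be sketched with a toy task (illustrative names only, not Kover's actual task wiring):

```kotlin
// build.gradle.kts — sketch of ordering without triggering
tasks.register("toyCoverageReport") {
    // Ordering only: if "test" is also scheduled in this build, this task
    // runs after it; but requesting "toyCoverageReport" alone does NOT
    // schedule "test" (unlike dependsOn("test"), which would).
    mustRunAfter("test")
    doLast { println("generate report from whatever coverage data exists") }
}
```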


> Document how up-to-date checking will work

As described above, the Kover tasks depend on the test tasks as before; therefore, the up-to-date checks work as before, and if neither the tests nor the source code have changed, the report will not be regenerated.

The only difference you should consider for up-to-date checking is that it now depends on which compilation and test tasks are run.
As indicated in the documentation, only those classes whose compilation task was started are included in the reports. Therefore, if you change the Gradle invocation and add a call to a new compilation (or test) task, the previous report becomes irrelevant, and Kover will generate a fresh report or look up a cache entry for the new set of executed tasks.


> (as you know) Add verification tasks

It will be added in the next release :)

@mgroth0

mgroth0 commented Aug 30, 2024

> But I still wonder why you couldn't apply it by ID from your plugin (this may be useful for other users using the same approach)

I don't know, to be honest. The biggest clue I have is that if I take the parts out of my settings script that define exclusive content for repositories, then org.jetbrains.kotlinx.kover.aggregation loads just fine by ID. Upon further testing, this problem is neither Kover-specific nor specific to any repository. It seems that if I have exclusiveContent for any repository, I can't load any plugins by ID any more (except my own, which are in their own local Maven repo). This is a separate issue I should probably raise with the Gradle devs at some point; I just haven't got around to reporting it because I have not been loading plugins by ID anyway.


> this is not entirely correct, the Kover tasks still depend on the test tasks, the only difference is that running the Kover tasks does not automatically trigger the tests (using mustRunAfter instead of dependsOn).

> As described above, the Kover tasks depend on the test tasks as before

I think I am having trouble reading and understanding this. My understanding is that you now use mustRunAfter instead of dependsOn. If we stick with a strict definition of a task dependency, I would say that mustRunAfter is not a dependency; rather, it is just a hint to Gradle for task ordering. So when you say "Kover tasks depend on the test tasks as before", I just want to make sure we are on the same page, because I think there are no task dependencies any more, strictly speaking.

(See the Gradle docs for mustRunAfter:

> For each supplied task, this action adds a task 'ordering', and does not specify a 'dependency' between the tasks)


> therefore, the up-to-date checks work as before, and if neither the tests nor the source code have changed, then the report will not be regenerated

I will describe the situation that confuses me.

First consider this hypothetical scenario:

  • we have a project with module A and module B
  • kover uses regular task dependencies (dependsOn). A regular task dependency implies an information flow from the dependency tasks to the dependent tasks. So the kover task will use the output of :A:test and :B:test as input. Of course, this is complicated because that output is generated by instrumentation added by the kover plugin, but put that aside for a moment
  • we run kover once and get a report
  • now we edit module B
  • we run kover again. It uses the new output from module B and the stored output from module A. Because module A is up-to-date, its tests don't re-run; kover just reuses the old results, which are still valid

Now consider the same scenario, but using mustRunAfter instead of dependsOn. The important thing about mustRunAfter is that it doesn't create a dependency, and it allows the kover task to run even if the other task doesn't. This typically means that the other task doesn't produce input required for kover.

So in that scenario, the concern I have is how would kover know to still include the output from module A? You said:

> As indicated in the documentation, only those classes for which the compilation task was started are included in the reports.

But in this scenario, module A is up-to-date, so even though its test task is selected as part of the build, both its test and compilation tasks will be skipped, not started. So then I wonder: will kover still know to include the results from A, or will it now include only B?

@shanshin
Collaborator Author

shanshin commented Aug 30, 2024

It seems that, due to the complexity of Gradle, we have slight differences in terminology.

When I write "the task is running", I mean that the task is explicitly invoked from the console, or that another executing task triggers its execution (through an explicit dependency between tasks, in some cases an implicit dependency, or during dependency resolution).

The tasks being executed are sorted, forming a so-called task graph.

As a result of its execution, a task may have one of several outcomes.

For example, when recompiling without changing the source code, the compilation task executes anyway, but it has the outcome UP-TO-DATE (in some cases FROM-CACHE): the task is executed, but its actions are not, because the outputs were saved earlier (or restored from the cache for the FROM-CACHE outcome).
*you can see all the executed tasks and their outcomes in the build log using the -i or --info flags

You write as if an executed task is a task with the outcome EXECUTED; however, in many places in the Gradle documentation, a task with any outcome is considered to be executed.

> But in this scenario, module A is up-to-date so even though its test task is selected as part of the build, both its test and compilation tasks will be skipped, not started.

The task outcomes are a bit confused here.
When you re-run the compilation without changing the source code, the outcome of the task execution is UP-TO-DATE (in some cases FROM-CACHE).

> For each supplied task, this action adds a task 'ordering', and does not specify a 'dependency' between the tasks

Yes, that's right. However, this only says that mustRunAfter itself does not add a dependency.
There are other ways to define dependencies between tasks, for example, using the output of one task as the input of another (a so-called implicit dependency); that's why mustRunAfter is used here.
Funnily enough, in this chapter the use of mustRunAfter is called an explicit dependency.
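
An implicit dependency of the kind mentioned above can be sketched like this (toy task names; this illustrates the general Gradle mechanism, not Kover's actual wiring):

```kotlin
// build.gradle.kts — sketch of an implicit (producer/consumer) dependency
val producer = tasks.register("producer") {
    val out = layout.buildDirectory.file("data.txt")
    outputs.file(out)
    doLast { out.get().asFile.writeText("coverage data") }
}

tasks.register("consumer") {
    // Consuming the producer's outputs as inputs makes Gradle schedule
    // "producer" automatically — no explicit dependsOn is needed.
    inputs.files(producer)
    doLast { println(inputs.files.singleFile.readText()) }
}
```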

> This typically means that the other task doesn't produce input required for kover.

No, compilation tasks still produce inputs for the Kover tasks. The Kover settings plugin is written so that it uses inputs only from those tasks that are in the task execution graph.


Summarizing, Kover tasks are insensitive to the up-to-date checks of individual compilation and test tasks.

You can verify this by reproducing your use case: classes and coverage from module A will be present in the report.
Then you can re-execute the same command with a complete re-run of all actions of all executed tasks by adding the --rerun-tasks flag.

Kover guarantees the stability of reports regardless of the up-to-date status of compilation and test tasks.

@mgroth0

mgroth0 commented Aug 30, 2024

You're completely right about us having different terminology, and thank you for taking the time to sort this all out! It's important to emphasize from what you said that Gradle itself isn't always consistent with its own terminology, like "task dependency".

> Kover guarantees the stability of reports regardless of the up-to-date status of compilation and test tasks.

That's the main thing I needed to hear, and I have no evidence that this guarantee doesn't hold. I was just concerned because I couldn't understand it.

> You can verify this

Once the next version with verifications is available, I'll try to include a test specifically addressing these concerns when I set up my usual full configuration. I'm guessing the test will pass, but it will be nice to confirm.

> The Kover settings plugin is written in such a way that it uses inputs only for those tasks that are in the task execution graph.

Noting this seems like the most important change: my understanding is that users will need to manually select the test tasks they want to include. So they might run :a:test :b:test koverHtmlReport or just test koverHtmlReport, whereas previously koverHtmlReport alone was enough. Now running the kover task on its own won't be enough. Hope I got this right.
