Add pipeline support #7

Merged: 22 commits merged from pipeline-support into master on Aug 25, 2020

Conversation

@XiongKezhi (Contributor) commented Aug 5, 2020

Target pipeline:

publishChecks(
    name: "check", 
    detailsURL: "ci.jenkins.io", 
    status: "COMPLETED",
    conclusion: "SUCCESS",
    output: [
        title: "output",
        summary: "summary",
        text: "text",
        annotations: [
            [path: "src/main/foo.java", startLine: 0, endLine: 0, annotationLevel: "WARNING", message: "foo test"],
            [path: "src/main/bar.java", startLine: 1, endLine: 1, annotationLevel: "WARNING", message: "bar test"]
        ],
        images: [
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"],    
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"]
        ]
    ],
    actions: [
        [label: "foo label", description: "foo action", identifier: "foo identifier"],
        [label: "bar label", description: "bar action", identifier: "bar identifier"]
    ]
)

  • Implement publish checks step
  • Add tests

@XiongKezhi XiongKezhi added the enhancement New feature or request label Aug 5, 2020
@XiongKezhi XiongKezhi self-assigned this Aug 5, 2020
@XiongKezhi XiongKezhi requested a review from a team August 6, 2020 15:18

codecov bot commented Aug 6, 2020

Codecov Report

Merging #7 into master will increase coverage by 7.54%.
The diff coverage is 88.88%.

@@             Coverage Diff              @@
##             master       #7      +/-   ##
============================================
+ Coverage     83.33%   90.87%   +7.54%     
- Complexity       49       69      +20     
============================================
  Files            10       11       +1     
  Lines           210      274      +64     
  Branches          6        6              
============================================
+ Hits            175      249      +74     
+ Misses           35       23      -12     
- Partials          0        2       +2     
Impacted Files Coverage Δ Complexity Δ
...ava/io/jenkins/plugins/checks/api/ChecksImage.java 90.00% <75.00%> (-10.00%) 5.00 <2.00> (ø)
...enkins/plugins/checks/steps/PublishChecksStep.java 88.13% <88.13%> (ø) 16.00 <16.00> (?)
...ins/plugins/checks/BuildStatusChecksPublisher.java 53.33% <100.00%> (+53.33%) 1.00 <0.00> (+1.00)
...io/jenkins/plugins/checks/api/ChecksPublisher.java 100.00% <100.00%> (ø) 1.00 <0.00> (ø)
...ins/plugins/checks/api/ChecksPublisherFactory.java 85.00% <100.00%> (+11.31%) 9.00 <1.00> (+3.00)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4dac20b...16a3d3f.

private final String rawDetails;
@SuppressFBWarnings(value = "NP_NONNULL_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR",
justification = "Empty constructor used by stapler")
public class ChecksAnnotation extends AbstractDescribableImpl<ChecksAnnotation> implements Serializable {
Member:

Is it worth separating our pipeline models from our API models?

So that they don't get polluted with all the setters etc required by stapler?

private ChecksConclusion conclusion;
private LocalDateTime startedAt = LocalDateTime.now();
private LocalDateTime completedAt = LocalDateTime.now();
private ChecksOutput output;
Member:

I really think we should flatten this down so that users don't have to do:

publishChecks(
    name: "check", 
    detailsURL: "ci.jenkins.io", 
    status: "COMPLETED",
    conclusion: "SUCCESS",
    output: [
        title: "output",
        summary: "summary",
        text: "text",
        annotations: [
            [path: "src/main/foo.java", startLine: 0, endLine: 0, annotationLevel: "WARNING", message: "foo test"],
            [path: "src/main/bar.java", startLine: 1, endLine: 1, annotationLevel: "WARNING", message: "bar test"]
        ],
        images: [
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"],    
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"]
        ]
    ]
)

Instead, a simple usage would be:

publishChecks(
  name: "MyCheck",
  conclusion: "success",
  summary: "Everything is OK!"
)

i.e. it would be good to start from a users perspective, and look at some scenarios of what they want to use it for.

Most users probably won't use annotations / images.
They will just want to set a status with some messaging, I would expect.

Contributor (author):

Yeah, I didn't consider the user scenarios and was afraid that too many fields would bloat this class, so I only flattened the first level, ChecksDetails.

Following your example, one pipeline usage I can think of now is that users may want to publish checks to indicate the stages more precisely (especially their customized ones).

@XiongKezhi (Contributor, author):

For now, I think it's OK not to provide annotations and images, but it would be better to provide a new action object for the pipeline step: after we support the re-run request, users could create a re-run action in their pipelines. What do you think? @timja
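
To make that concrete, a hypothetical usage might look like the sketch below. The actions parameter follows the target pipeline at the top of this PR, but the "rerun" identifier is only an assumption about what the plugin would eventually recognize.

// Hypothetical sketch: publish a check that offers a re-run action.
// The "rerun" identifier is an assumption; the real identifier would be
// defined once re-run requests are actually handled by the plugin.
publishChecks(
    name: "integration-tests",
    status: "COMPLETED",
    conclusion: "FAILURE",
    actions: [
        [label: "Re-run", description: "Re-run the integration tests", identifier: "rerun"]
    ]
)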

timja (Member) commented Aug 7, 2020

Hmm, not sure how useful actions are: they send a webhook event to the app, and in order to do anything with it you need a Jenkins plugin...

@XiongKezhi (Contributor, author):

Hmm, not sure how useful actions are: they send a webhook event to the app, and in order to do anything with it you need a Jenkins plugin...

But we'll need to handle the webhook event for the re-run request anyway. There is an identifier for each kind of action, so we can tell users to use the re-run identifier if they want to add the action.

timja (Member) commented Aug 7, 2020

Hmm, not sure how useful actions are: they send a webhook event to the app, and in order to do anything with it you need a Jenkins plugin...

But we'll need to handle the webhook event for the re-run request anyway. There is an identifier for each kind of action, so we can tell users to use the re-run identifier if they want to add the action.

If there's nothing we can do about the webhook event, we'd just have to ignore it.

We could have another step that waits for a GitHub checks event, where you pass the identifier and we attach a pause action to the build...

But that's more than a v1; I would add that in a follow-up. If we could get an MVP first, that would be great.
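
To sketch that idea (the waitForChecksAction step name and its parameters are purely hypothetical and not part of this PR):

// Purely hypothetical follow-up, not implemented here: pause the build until
// GitHub delivers a requested_action webhook whose identifier matches the one
// we published with the check.
publishChecks(
    name: "deploy-gate",
    status: "COMPLETED",
    conclusion: "NEUTRAL",
    actions: [[label: "Deploy", description: "Deploy to staging", identifier: "deploy"]]
)
def action = waitForChecksAction(identifier: "deploy")  // would attach a pause action to the build
echo "Received action: ${action}"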

ianfixes commented Aug 11, 2020

Joining the discussion here via Gitter, I'll briefly summarize some of the stuff I ran into since posting on JENKINS-51225.

  1. Jenkins can run tests both on the source and target branch of a pull request (i.e. the branch by itself, and then the result of merging the source branch with the target branch). This means that the value you have for GIT_COMMIT might not be one that GitHub has ever heard of. We hack around this by running git log --pretty='%H %h %ae' -n 3 and picking the first commit that isn't authored by nobody@nowhere (see the sketch below).
  2. Assuming that you've now found the "true" commit of a merged PR branch, you are now in the uncomfortable position of getting actual static analysis errors but not being sure whether the line numbers have shifted due to the merge! So in this case, we still report the overall conclusion of the test (pass, fail, neutral) but summarize the results in a table instead of annotating what might be the wrong line.
  3. Now that you're annotating the source branch and the merged branch in parallel, note that GitHub will refuse to report 2 different sets of annotations with the same name (the second will overwrite the first) -- so you'll want to alter the report name for uniqueness. (We append (PR) to the report name if it's a pull request branch).
  4. Now that you're reporting 2 sets of results properly, you may notice that there can be thousands of annotations. GitHub's API wants you to split these into batches of 50. Instead of reporting every possible batch of annotations, we've found that having 50 errors is more than enough for an engineer to work on -- we only report the first 50. But to do that intelligently, we need to make sure that the most severe errors are promoted to the top of that list.
  5. Now that you're reporting the most severe errors, you might notice that you are reporting on files that were not actually part of the pull request! So in the case of a PR test, we have the advantage of being able to tell what files had changed between source and target branch: we use a tip from JENKINS-54285 with the addition of git fetch origin ${env.CHANGE_TARGET}:refs/remotes/origin/${env.CHANGE_TARGET} to ensure that we have the proper local data. This gives us the list of files that have actually changed, and we prioritize annotations in this change set before all others.
  6. Finally, we have derived incredible value from overriding the severity of reports that we get from other tools. For example, we may be trying out a new static analysis tool but figuring out what settings to use. Or we may want to report errors as warnings to allow a transition period to a new set of engineering best practices. Or we may even want to try out new, stricter settings for a tool we already use.

Our implementation of this is centered around uploading annotations in the structure that GitHub expects; this general-purpose function enforces the naming (PR or not PR) and prioritization of annotations based on source file and severity. We then have a supplementary set of functions to translate from common report formats (eslint, rubocop, hadolint, etc) into GitHub format -- each of which accepts a closure for adjusting the severity of an annotation entry. With that, we have everything we need to support a lot of interesting use cases.
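
As a concrete illustration of item 1, a scripted-pipeline sketch of the commit-resolution hack might look roughly like this; the helper name and the exact filtering are our own conventions, not part of this plugin:

// Sketch: resolve the "true" commit of a merged PR branch. GIT_COMMIT may point
// at the locally created merge commit, which GitHub has never seen, so pick the
// newest commit that was not authored by nobody@nowhere (the synthetic merge author).
def resolveTrueCommit() {
    def log = sh(script: "git log --pretty='%H %h %ae' -n 3", returnStdout: true).trim()
    def line = log.readLines().find { !it.endsWith('nobody@nowhere') }
    return line ? line.split(' ')[0] : env.GIT_COMMIT
}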

So with that said, I really like the flexibility being proposed here:

publishChecks(
    name: "check", 
    detailsURL: "ci.jenkins.io", 
    status: "COMPLETED",
    conclusion: "SUCCESS",
    output: [
        title: "output",
        summary: "summary",
        text: "text",
        annotations: [
            [path: "src/main/foo.java", startLine: 0, endLine: 0, annotationLevel: "WARNING", message: "foo test"],
            [path: "src/main/bar.java", startLine: 1, endLine: 1, annotationLevel: "WARNING", message: "bar test"]
        ],
        images: [
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"],    
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"]
        ]
    ]
)

Although the detailsURL and status should not be up to the user to supply -- Jenkins knows what those values are. Also, I would break up output into the 3 things that are genuinely up to the user: title, description, and a set of annotations. The summary (as indicated above) may need to be constructed from the annotations in the case of a pull request branch.
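
In other words, something along these lines; the flattened field layout below is only an illustration of that suggestion, not syntax the step currently accepts:

// Illustrative only: detailsURL and status supplied by Jenkins itself,
// output reduced to the parts a user genuinely owns.
publishChecks(
    name: "my check (PR)",
    conclusion: "SUCCESS",
    title: "Static analysis",
    summary: "3 warnings in changed files",
    annotations: [
        [path: "src/main/foo.java", startLine: 10, endLine: 10, annotationLevel: "WARNING", message: "unused import"]
    ]
)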

Sorry this was long, and I accept in advance that my use cases very likely don't represent "most users" 😄

Ask me anything about how this is working for our team so far!

@XiongKezhi (Contributor, author):

@ianfixes Thanks very much for sharing the issues you ran into in your implementation and for the feedback! 👏

We have run into many of these problems as well; here are some of the solutions we implemented:

For the true commit of a PR, our implementation is based on the GitHub Branch Source Plugin, which is an implementation of the Branch API, so resolving the true commit of the PR isn't a hard problem after that.

For the limit on annotations, the library we depend on handles that for us: https://github.com/hub4j/github-api/blob/master/src/main/java/org/kohsuke/github/GHCheckRunBuilder.java#L138

For test and static analysis warnings reports, we have implemented the consumers in other plugins; you can find them in our blog post: https://www.jenkins.io/blog/2020/08/03/github-checks-api-plugin-coding-phase-2/. We only report the new issues of the PR compared with the target branch, which resolves the problem of "files that were not actually part of the pull request".

@ianfixes:

Thanks XiongKezhi! I figured that you would have run into a few of these and resolved them.

After thinking about it, I like the idea of this syntax:

// specify everything manually
publishChecks(
    name: "my check", 
    conclusion: "SUCCESS",
    output: [
        title: "output",
        summary: "summary",
        text: "text",
        annotations: [
            [path: "src/main/foo.java", startLine: 0, endLine: 0, annotationLevel: "WARNING", message: "foo test"],
            [path: "src/main/bar.java", startLine: 1, endLine: 1, annotationLevel: "WARNING", message: "bar test"]
        ],
        images: [
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"],    
            [alt: "foo alt", imageUrl: "foo url", caption: "foo image"]
        ]
    ]
)

Or for using a built-in converter/tool to read a report from a different format (if I'm reading the page you referred to correctly):

// "formatIssues" style borrowed from "recordIssues"
publishChecks(
    name: "my check", 
    conclusion: "SUCCESS",
    output: formatIssues(tools: [checkStyle(pattern: 'target/checkstyle-result.xml'),
                         spotBugs(pattern: 'target/spotbugsXml.xml'),
                         pmdParser(pattern: 'target/pmd.xml'),
                         cpd(pattern: 'target/cpd.xml')],
                         qualityGates: [[threshold: 1, type: 'TOTAL', unstable: true]])
)

@XiongKezhi (Contributor, author):

@ianfixes Is there any reason that you don't want to use warnings-ng to help publish tool reports but want to convert them by hand?

@XiongKezhi (Contributor, author):

Strange, the PR passes my local tests, while the CI says:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project checks-api: Compilation failure
[ERROR] /home/runner/work/checks-api-plugin/checks-api-plugin/src/main/java/io/jenkins/plugins/checks/api/ChecksImage.java:[79,36] cannot find symbol
[ERROR]   symbol:   variable imageUrl
[ERROR]   location: class io.jenkins.plugins.checks.api.ChecksImage

And there is no line 79 at all in ChecksImage.java.


ianfixes commented Aug 14, 2020

Is there any reason that you don't want to use warnings-ng to help publish tool reports

Perhaps the biggest reason is that we've found working with plugins to be terrible, especially for scripted pipelines. (We have to wait for third parties to fix bugs or implement feature requests, the experience may not work in Blue Ocean, there is no guarantee that the plugin won't be abandoned, etc.) And in many cases, the functionality is trivial enough for us to just implement ourselves in a script library -- such that we have full control over it. So the tendency is to choose plugins that "do one thing and do it well", then wrap them to suit our needs.

In the case of publishing reports specifically, as I mentioned above, sometimes we want to adjust the severity of the annotations that a tool provides. For example, we may want to soft-launch some style guidelines such that errors are reduced to warnings for some grace period. Or we want to test out a proposed linting tool in much the same way. Or we want the ability to introduce our own logic to mute certain errors.

Our feeling is that the tools are there to help us, and not the other way around. So we leave room to tailor the reports to our own needs.
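
To make the severity-override pattern concrete, here is a minimal sketch of the kind of closure we pass to our converters; the rubocopToAnnotations helper is a made-up name for illustration, and only the closure pattern is the point:

// Sketch: demote errors to warnings during a grace period before publishing.
// "rubocopToAnnotations" is a hypothetical converter from a shared library;
// the closure receives each annotation map and may adjust its level.
def annotations = rubocopToAnnotations(readFile('rubocop-report.json')) { annotation ->
    if (annotation.annotationLevel == 'FAILURE') {
        annotation.annotationLevel = 'WARNING'   // soft-launch new style rules
    }
    return annotation
}
publishChecks(
    name: "rubocop (PR)",
    conclusion: "SUCCESS",
    output: [title: "RuboCop", summary: "Style check (grace period)", annotations: annotations]
)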

@XiongKezhi XiongKezhi marked this pull request as ready for review August 17, 2020 15:24
@uhafner (Member) left a comment:

I think it would help if we added the help texts (and the Snippet Generator configuration).

I think it would also be helpful to have a real integration test that creates a sample pipeline using the step. Then we can see in the console log whether the null publisher is correctly invoked in the API plugin.
Examples: https://github.com/jenkinsci/warnings-ng-plugin/blob/master/plugin/src/test/java/io/jenkins/plugins/analysis/warnings/steps/StepsITest.java
(If this is too much work for now we can also add such a test to the GitHub checks plugin).

}

@DataBoundSetter
public void setTitle(final String title) {
Member:

I think it would make sense to add config.jelly and *-help.html files for the fields (you already have the text in the JavaDoc of the ChecksOutput classes). Then the online documentation (Snippet Generator and Online Reference) has some more details. You can open the Snippet Generator using the "Pipeline Syntax" link on a job page.

@NonNull
@Override
public String getDisplayName() { // it's pipeline step, so where does the name shown?
return "Publish customized checks to SCM platforms";
Member:

The text is shown in the Snippet Generator and Online Reference (and should be part of Messages)

Member:

Optional (since we have no I18n yet in the whole project): it would make sense to create a Messages.properties file and add all UI labels (example: https://github.com/jenkinsci/forensics-api-plugin/blob/master/src/main/resources/io/jenkins/plugins/forensics/miner/Messages.properties). Then you can replace these strings with the constant in the generated Messages class (example: https://github.com/jenkinsci/forensics-api-plugin/blob/master/src/main/java/io/jenkins/plugins/forensics/miner/ForensicsJobAction.java#L34).

But we can postpone that part until after the GSoC work.

@XiongKezhi XiongKezhi requested a review from uhafner August 22, 2020 10:37
@uhafner (Member) left a comment:

Looks good to me! Just some minor things.

@XiongKezhi XiongKezhi merged commit dd34cb7 into master Aug 25, 2020
@XiongKezhi XiongKezhi deleted the pipeline-support branch August 25, 2020 07:50
XiongKezhi added a commit that referenced this pull request on Oct 25, 2020: Bump actions/cache from v1 to v2.1.0 (…s/cache-v2.1.0)

Labels: enhancement (New feature or request)
4 participants