
ci-signal-reporter, automatically generate weekly ci signal report #2443

Closed
leonardpahlke opened this issue Feb 22, 2022 · 16 comments

Labels: kind/feature, lifecycle/rotten, needs-priority

@leonardpahlke (Member) commented Feb 22, 2022

Which part of the ci-signal-tool does this affect?

  • 🟢 ci-signal-report-cli

Summary: Generate a weekly ci signal report which can be manually shared in Slack & k/dev.

Current situation

Currently, the weekly ci signal report is generated manually; some inputs are supplied by this tool. This can be fully automated. See previous ci signal reports for a template reference.

Desired outcome

The ci signal report tool should enable creating the entire weekly ci signal report.

Benefits: Preparing the weekly ci signal report takes some time each week, which could be better spent. In addition, establishing a clear structure via the ci signal reporting tool would help ensure the quality of the report.

Concerns: The report sometimes contains references to specific events, such as release cuts, and topic-specific information or similar. These must be added dynamically.

/kind feature

@leonardpahlke (Member Author)

/assign @voigt

@leonardpahlke (Member Author)

I think we need to get the GitHub information from the new project board before we can make progress here. There is an open issue, ref #2396, and a Slack thread.

@leonardpahlke (Member Author) commented Mar 16, 2022

@voigt #2454 has been merged; I think we are good to start working on this issue. Ref the k/dev Google group for CI Signal report examples.

@leonardpahlke (Member Author)

/milestone v1.24

k8s-ci-robot added this to the v1.24 milestone Mar 18, 2022
@leonardpahlke (Member Author)

cc @RobertKielty

@leonardpahlke (Member Author)

/unassign @voigt
/assign @RobertKielty

k8s-ci-robot assigned RobertKielty and unassigned voigt Apr 12, 2022
@RobertKielty (Member) commented May 3, 2022

After one-on-one talks with @leonardpahlke and @hh (thank you both for your input and guidance), I propose the following next steps for this work:

  1. Configure the ci-signal-reporter to run its report as a Periodic Prow Job on the k8s Prow instance (initially once per week; frequency may increase towards the end of a release). A sketch of such a job config follows this list.

  2. The report will need some tweaks to target release-specific GH project boards; @leonardpahlke has already started to take a look at that.

  3. Have this PR ready for review by KubeCon Valencia so that we can meet up, discuss further, and solicit feedback from the community.
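
For illustration, a minimal sketch of what such a periodic ProwJob config could look like (the job name, image location, and schedule below are assumptions, not agreed values):

```yaml
# Hypothetical periodic ProwJob for the ci-signal report.
# Name, image, and schedule are placeholders.
periodics:
  - name: ci-signal-weekly-report
    cron: "0 9 * * 1"  # weekly, Monday 09:00 UTC
    decorate: true
    spec:
      containers:
        - image: gcr.io/k8s-staging-releng/ci-signal-report:latest  # assumed image location
          command:
            - /ci-signal-report  # assumed binary path inside the image
```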

/cc @BenTheElder @wojtek-t @kubernetes/sig-testing @kubernetes/sig-release

@leonardpahlke (Member Author)

/milestone v1.25

@RobertKielty (Member)

I have done work on this in my local dev env to configure a Prow Job to run the report automatically, using pj-on-kind.sh to test in a local environment.

I reached out to SIG Testing on Slack to find out how best to inject the required GITHUB_AUTH token into the container where the report runs. @BenTheElder, super helpful as ever, let me know that I would need to speak to the github-management team on the specific question of GH tokens. We have reached out to them on Slack. Relevant docs are here: https://github.com/kubernetes/community/tree/master/github-management
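
For context, Prow jobs usually receive such tokens through presets in the Prow config that inject a Kubernetes secret into matching jobs. A rough sketch of that pattern (the preset label, secret name, and env var name are assumptions pending the github-management discussion):

```yaml
# Hypothetical Prow preset; label, secret, and env var names are placeholders.
presets:
  - labels:
      preset-ci-signal-github-token: "true"
    env:
      - name: GITHUB_AUTH_TOKEN
        valueFrom:
          secretKeyRef:
            name: ci-signal-github-token  # secret to be provisioned with github-management
            key: token
```

A periodic job would then opt in by setting the `preset-ci-signal-github-token: "true"` label.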

Additionally, Ben pointed out that I could run this report as a GitHub Action, which I will also look at today from a learning POV: https://docs.github.com/en/actions/learn-github-actions
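
A rough sketch of the GitHub Actions variant, for comparison (the workflow file, schedule, and entry point are assumptions):

```yaml
# .github/workflows/ci-signal-report.yml (hypothetical)
name: ci-signal-report
on:
  schedule:
    - cron: "0 9 * * 1"  # weekly, Monday 09:00 UTC
  workflow_dispatch: {}  # allow manual runs as well
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: "1.18"
      - name: Run the ci-signal report
        env:
          GITHUB_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # may need a dedicated token with wider scopes
        run: go run ./cmd/ci-reporter  # assumed entry point
```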

During discussions with @leonardpahlke at KubeCon, we both expressed ambitions to make use of this report to track stats such as Mean Time to Resolve Reported Flakes and Reported Flake Count per Release Cycle, per SIG, etc. As we consume the report, we want to encourage the SIG Release Team to suggest stats that the community would find useful for identifying resource gaps on the project from the e2e test maintenance POV.

I have added this issue to the agenda of the upcoming meeting to solicit input from SIG Release (and Release Engineering) on the best course of implementation (ProwJob vs. GitHub Action), and to start thinking about analytics we could mine both within a release cycle and across release cycles.

@RobertKielty (Member)

Discussed next steps on this at the SIG Release meeting today, Tuesday May 31, 2022.

Thanks to everyone who provided feedback. Ref: SIG Release Meeting Tuesday May 31, 2022

@RobertKielty (Member)

Next Steps ...

  • Create a Cloud Build that builds a container image that runs the ci-report
  • Create a GitHub Action that makes use of that image to run the report periodically
  • Make use of sig-release-developed tools to push the report results up to a repo

  1. Cloud Build
    Images maintained in the kubernetes/release repo are kept in the images folder along with their build mechanisms.
    There are two mechanisms for building out images, one using a Makefile and one that does not. I will assess both for this use case and pick the best approach in consultation with the community. A rough sketch follows below.
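
As a starting point, a rough cloudbuild.yaml sketch for the image build (the registry, tag substitution, and Dockerfile path are assumptions):

```yaml
# Hypothetical cloudbuild.yaml; registry and paths are placeholders.
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - gcr.io/k8s-staging-releng/ci-signal-report:$_GIT_TAG
      - -f
      - images/ci-signal-report/Dockerfile  # assumed location under the images folder
      - .
images:
  - gcr.io/k8s-staging-releng/ci-signal-report:$_GIT_TAG
substitutions:
  _GIT_TAG: latest  # normally set by the build trigger
```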

@leonardpahlke (Member Author)

Another option would be to let the report run via the https://github.com/kubernetes-sigs/testgrid-json-exporter project.

See diagram (scheduled reports, cron job)
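
If the report ran next to testgrid-json-exporter, the scheduling piece could be a plain Kubernetes CronJob. A minimal sketch (name, image, and secret are assumptions):

```yaml
# Hypothetical CronJob; name, image, and secret are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ci-signal-weekly-report
spec:
  schedule: "0 9 * * 1"  # weekly, Monday 09:00 UTC
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: gcr.io/k8s-staging-releng/ci-signal-report:latest  # assumed image
              envFrom:
                - secretRef:
                    name: ci-signal-github-token  # assumed secret holding the GitHub token
```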

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Sep 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned Nov 6, 2022
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the triage message above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
