
Testgrid should show version of binary fetched by --extract flag #19839

Closed
mborsz opened this issue Nov 4, 2020 · 13 comments
Labels
area/config: Issues or PRs related to code in /config
area/prow: Issues or PRs related to prow
area/testgrid
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
sig/scalability: Categorizes an issue or PR as relevant to SIG Scalability.
sig/testing: Categorizes an issue or PR as relevant to SIG Testing.

Comments

@mborsz (Member) commented Nov 4, 2020

What would you like to be added:
I would like to see a new column header in testgrid showing the actual Kubernetes version being tested when the --extract= option is used in the test.

The testgrid config has a column_header field that can be used for this purpose (https://godoc.org/github.com/GoogleCloudPlatform/testgrid/pb/config#TestGroup), but it's not possible to use it in our prowjobs, as they use annotation-based testgrid configuration (like testgrid-dashboards, testgrid-num-failures-to-alert).

This is a feature request to add a new annotation that can be used to add job-version to the testgrid view.
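
For illustration, such an annotation would sit next to the existing testgrid annotations on a prowjob. A minimal sketch, assuming a hypothetical annotation name testgrid-column-header (no such annotation exists today; the dashboard value is only illustrative):

  annotations:
    testgrid-dashboards: sig-scalability-gce
    # Hypothetical new annotation: surface a finished.json metadata key
    # (here job-version) as a testgrid column header.
    testgrid-column-header: job-version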

Why is this needed:
In #19838 the ci-kubernetes-e2e-gce-scale-performance test started continuously testing the same, stale k8s version (v1.20.0-beta.0.54+2729b8e3751434), while the commit shown in testgrid kept progressing:

[screenshot of the testgrid view]

See https://k8s-testgrid.appspot.com/sig-scalability-gce#gce-master-scale-performance

We found this issue by accident, while I was debugging why kubernetes/kubernetes#96117 hadn't been rolled out to our tests. I would like to see a column that makes it clear we are testing the same version continuously, which would be more meaningful than master's commit in this context.

@MushuEE (Contributor) commented Nov 6, 2020

Would it be better to migrate this issue to https://github.com/GoogleCloudPlatform/testgrid?

@MushuEE (Contributor) commented Nov 6, 2020

Does https://github.com/GoogleCloudPlatform/testgrid/blob/master/config.md#column-headers enable you to do what you are looking for?

Within the finished.json I see:

  "metadata": {
    "repo-commit": "eca53507be3d9008ff6d4cb776c02733a1ea898b", 
    "repos": {
      "k8s.io/kubernetes": "master", 
      "k8s.io/perf-tests": "master"
    }, 
    "infra-commit": "ab50c5702", 
    "repo": "k8s.io/kubernetes", 
    "job-version": "v1.20.0-beta.1.129+eca53507be3d90", 
    "pod": "820efc68-1f88-11eb-a4fe-eec30b56844c", 
    "revision": "v1.20.0-beta.1.129+eca53507be3d90"
  }

Do any of those work?
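
Per the linked config.md, the native TestGrid config for this would look roughly like the following (a sketch only, not verified against this job's actual test group definition):

  test_groups:
  - name: ci-kubernetes-e2e-gce-scale-performance
    column_header:
    - configuration_value: job-version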

@mborsz (Member, Author) commented Nov 6, 2020

Yes, column_header with job-version works just fine, but there is no way to use testgrid's column_header feature through the annotation-based config we have in this repo. As mentioned in the first comment, this is a feature request to add column_header support to the annotation converter:

The testgrid config has a column_header field that can be used for this purpose (https://godoc.org/github.com/GoogleCloudPlatform/testgrid/pb/config#TestGroup), but it's not possible to use it in our prowjobs, as they use annotation-based testgrid configuration (like testgrid-dashboards, testgrid-num-failures-to-alert).

This is a feature request to add a new annotation that can be used to add job-version to the testgrid view.

Would it be better to migrate this issue to https://github.com/GoogleCloudPlatform/testgrid?

Correct me if I'm wrong, but it looks like the prowjob annotation -> testgrid config converter is specific to this repo:

func applySingleProwjobAnnotations(c *configpb.Configuration, pc *prowConfig.Config, j prowConfig.JobBase, jobType prowapi.ProwJobType, repo string, dc *yamlcfg.DefaultConfiguration) error {

/cc @michelle192837

@MushuEE (Contributor) commented Nov 6, 2020

@mborsz I see, I misunderstood. Thanks

@michelle192837 (Contributor) commented

Yup, translating Prow jobs to TestGrid config is specific to this repo, though it's also fine to file the issue in https://github.com/GoogleCloudPlatform/testgrid (a lot of the config marshalling code is over there).

With respect to this ask, we can think about it, but at some point configuration starts getting away from fields that are easy to add in Configurator. Column headers, for instance, are not a simple scalar field; they are a repeated field that should be a oneof, and I'm wary about trying to manage that or similar things in configuration.

That said, feel free to add this particular case to https://github.com/kubernetes/test-infra/blob/master/testgrid/cmd/configurator/prow.go. We also might have a relatively lightweight solution for more complicated configs in a bit, but I need to test if it works with Prow-based TestGrid config too.
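
A minimal sketch of what that addition could look like inside applySingleProwjobAnnotations. The annotation name testgrid-column-header is hypothetical, and testGroup stands in for whatever *configpb.TestGroup variable the function actually builds; only TestGroup.ColumnHeader and TestGroup_ColumnHeader.ConfigurationValue come from the real testgrid config proto:

  // Hypothetical annotation: expose a finished.json metadata key
  // (e.g. "job-version") as a column header for this test group.
  if v, ok := j.Annotations["testgrid-column-header"]; ok {
          testGroup.ColumnHeader = append(testGroup.ColumnHeader, &configpb.TestGroup_ColumnHeader{
                  // configuration_value selects the metadata key whose
                  // value testgrid shows at the top of each column.
                  ConfigurationValue: v,
          })
  }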

@BenTheElder added the area/config, area/prow, and area/testgrid labels on Jan 6, 2021
@fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 6, 2021
@fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 6, 2021
@fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@spiffxp (Member) commented Aug 11, 2021

/reopen
/sig testing
/sig scalability
related:

@k8s-ci-robot added the sig/testing and sig/scalability labels on Aug 11, 2021
@k8s-ci-robot reopened this on Aug 11, 2021
@k8s-ci-robot (Contributor) commented

@spiffxp: Reopened this issue.

In response to this:

/reopen
/sig testing
/sig scalability
related:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
