
Support publishing CI artifacts to S3 #16659

Closed. Wants to merge 3 commits.

Conversation

@rifelpet (Member) commented Jul 6, 2024

@k8s-ci-robot (Contributor):

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot added labels: do-not-merge/work-in-progress, cncf-cla: yes, size/L (Jul 6, 2024)
@k8s-ci-robot k8s-ci-robot requested a review from johngmyers July 6, 2024 01:57
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from rifelpet. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@@ -34,6 +36,7 @@ import (
const (
defaultJobName = "pull-kops-e2e-kubernetes-aws"
defaultGCSPath = "gs://k8s-staging-kops/pulls/%v/pull-%v"
defaultS3Path = "s3://k8s-infra-kops-ci-results/pulls/%v/pull-%v"

@rifelpet rifelpet changed the title Add s3-publish-ci make target Support publishing CI artifacts to S3 Jul 6, 2024
@rifelpet rifelpet force-pushed the s3-stage-location branch from 85e8356 to 35c1d03 Compare July 6, 2024 02:39
@k8s-ci-robot (Contributor):

@rifelpet: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kops-e2e-k8s-aws-calico-k8s-infra
Commit: 35c1d03
Required: true
Rerun command: /test pull-kops-e2e-k8s-aws-calico-k8s-infra

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@upodroid (Member) commented Jul 6, 2024

We want kubetest2-kops to dynamically create a temporary S3 bucket when the presubmit is running, similar to this code:
https://github.com/kubernetes/kops/blob/master/tests/e2e/kubetest2-kops/gce/gcs.go

I made the IAM changes, but they allow presubmits to write to our CI bucket, which isn't ideal (existing behaviour we should change).
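The dynamic-bucket idea could be sketched roughly as below. The helper name and naming scheme are hypothetical, not from the PR; the actual bucket create/delete calls (e.g. via the AWS SDK, mirroring the GCS helper linked above) are left as comments:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// tempBucketName derives a unique, S3-legal bucket name for a presubmit run.
// Hypothetical helper: kubetest2-kops would create a bucket with this name
// at job start (s3 CreateBucket) and delete it during cleanup, mirroring
// tests/e2e/kubetest2-kops/gce/gcs.go for GCS.
func tempBucketName(jobID string) string {
	suffix := make([]byte, 4)
	rand.Read(suffix) // random suffix avoids collisions between runs
	// Bucket names must be lowercase and 3-63 characters long.
	name := fmt.Sprintf("kops-ci-%s-%x", jobID, suffix)
	if len(name) > 63 {
		name = name[:63]
	}
	return name
}

func main() {
	fmt.Println(tempBucketName("16659"))
}
```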

@rifelpet (Member, Author) commented Jul 7, 2024

Yes, I see the migrated job on your PR passed, using k8s-staging-kops:

https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kops/16648/pull-kops-e2e-k8s-aws-calico-k8s-infra/1809653969442574336

I believe this means we could migrate all presubmit jobs to use this, but you're right: giving presubmit jobs write permissions to the same bucket used for post-merge artifacts like version markers is not great.

We have the k8s-infra-kops-ci-results bucket created in kubernetes/k8s.io#2678. Is that in the correct project, and can we use it for either presubmit artifacts or version markers, and then revoke presubmit write permissions to whichever bucket we choose for version markers?

@upodroid (Member) commented Jul 7, 2024

For now, let's retain the existing behaviour, since AWS presubmits have been writing to the k8s-staging-kops bucket, and let's try to switch to dynamic buckets on AWS.

As for gs://k8s-infra-kops-ci-results, I don't want to use this bucket and we should delete it.

@@ -271,6 +274,15 @@ gcs-publish-ci: gsutil version-dist-ci
echo "${GCS_URL}/${VERSION}" > ${UPLOAD}/${LATEST_FILE}
gsutil -h "Cache-Control:private, max-age=0, no-transform" cp ${UPLOAD}/${LATEST_FILE} ${GCS_LOCATION}

.PHONY: s3-publish-ci
s3-publish-ci: VERSION := ${KOPS_CI_VERSION}+${GITSHA}
s3-publish-ci: version-dist-ci
We do also have hack/upload, which might make this easier?

cmd.SetDir(b.KopsRoot)
exec.InheritOutput(cmd)
if err := cmd.Run(); err != nil {
return nil, err
}

// Get the full path (including subdirectory) that we uploaded to
// It is written by gcs-publish-ci to .build/upload/latest-ci.txt
// It is written by the *-publish-ci make tasks to .build/upload/latest-ci.txt
It might be nice to spell out the two tasks here (assuming we can't combine them), to make it more explicit and to help with searching...

@justinsb (Member) left a comment:
This actually LGTM. We're probably holding this PR for now (given that the test-infra one is closed), but I think a good reason to upload artifacts to S3 would be to cut down on bandwidth and related costs for testing. But I guess they aren't that large....

@upodroid (Member) commented Jul 19, 2024

We do need to ship this to fix the AWS presubmits publishing to the k8s-staging-kops bucket. We carried over that practice when the jobs were migrated to the community cluster.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 16, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor):

@k8s-triage-robot: Closed this PR.

In response to this:


/close


Labels
  • cncf-cla: yes (the PR's author has signed the CNCF CLA)
  • do-not-merge/work-in-progress (the PR should not merge because it is a work in progress)
  • lifecycle/rotten (the PR has aged beyond stale and will be auto-closed)
  • size/L (the PR changes 100-499 lines, ignoring generated files)