Create S3 buckets in AWS regions #3595
/assign @jaypipes @BobyMCbobs
Maybe put the code within
From the data these are the regions: us-west-2
The PR addresses the Terraform for the S3 buckets.
@BobyMCbobs we need a project to run this in, to create the buckets.
@jaypipes working with Adolfo to run the mirroring.
@arnaud > We should do a one-off sync copy from GCS to S3... 10 buckets is going to take a while. If we do the one sync, then later use the mirror to push these options, it's going to be easier, as we already have a copy of the source of truth over S3 so we don't waste time. Once we have a single S3 bucket, we just need to copy to the others.
@arnaud > There is one permission to be added to do the one-off sync.
When we have that first bucket, we can try some additional stuff that @BenTheElder is writing, so we can redirect everything to that one bucket... some Prow jobs running inside AWS so we can test that.
Not technically a blocker... between Caleb and Arnaud.
Arnaud is going to create another issue; let's be sure it's on the board.
@hh From https://kubernetes.slack.com/archives/CCK68P2Q2/p1650407982934869, I'm now not sure if we need to create a new issue.
Update: @jaypipes and I have been pairing on setting up the accounts in an organized manner and are setting up a new account for the registry buckets to go into. The structure is root/Kubernetes (OU)/registry.k8s.io (OU)/[email protected] (account). Currently in the middle of sorting IAM access to this account for provisioning the buckets and the IAM role for accessing them. Ticket for updating docs to reflect the org/account structuring: #3668
From the meeting on April 27:
Bucket update: Tried to do an initial password (re)set for the [email protected] AWS account. Will be pairing with @jaypipes when next available to complete it.
Buckets created in #3693
Add the eu-west-1 AWS region: #3699
@BobyMCbobs @jaypipes Can we get the full list of the bucket names and the ARNs of those buckets?
@ameukam, the bucket names and ARNs are as follows (this is me figuring out the ARNs, so let me know if it works OK). cc @Riaankl
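For anyone double-checking the ARNs: S3 bucket ARNs have no region or account component, so each one can be derived from the bucket name alone. A minimal sketch, using a hypothetical bucket name rather than one of the real buckets:

```shell
# S3 bucket ARNs follow the fixed, region-less form:
#   arn:aws:s3:::<bucket-name>
bucket="prod-registry-k8s-io-us-west-2"   # hypothetical bucket name
arn="arn:aws:s3:::${bucket}"
echo "${arn}"
```

The same pattern applies to every bucket in the list, whichever region it lives in.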
@BobyMCbobs Thanks!!! I took the liberty of editing out the AWS account IDs to avoid potential attacks.
I don't think there's anything particularly sensitive about the AWS account ID, @ameukam :)
I think we can consider this done. @BobyMCbobs Now that the buckets are up, what are the blockers to syncing the blobs from k8s.gcr.io to all those buckets? (see: #3623)
Hey all, if this is done, who has access to these buckets and how do we govern that? We really, really need to move this forward ASAP. What's blocking?
xref: #3807 for a possible option to allow CI jobs to access AWS without storing creds, once bootstrapped via workload identity.
Alternatively, if someone who currently has access can at least do a one-off sync (#3623) of the current contents, we can make progress on rolling out "traffic to AWS" in registry.k8s.io while we finish figuring out the ongoing automated sync.
#3666: "bulk sync existing image layers to these s3 layers as a starting point (from GCS/GCR)". @ameukam can provide pointers on rclone from GCR's GCS to S3.
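A one-off sync of the kind described above could be driven by rclone's `copy` subcommand. This is only a sketch: the rclone remote names and both bucket names below are hypothetical placeholders, not the project's real configuration, and the script echoes the command rather than running it so it can be reviewed first.

```shell
#!/bin/sh
# Sketch of a one-off blob copy from the GCS source bucket to one of the
# new S3 buckets. Remote and bucket names are hypothetical placeholders.
SRC="gcs:example-registry-source"          # hypothetical GCS remote:bucket
DST="s3:registry-k8s-io-example-us-west-2" # hypothetical S3 remote:bucket

# --checksum skips objects whose checksums already match, so the copy is
# safe to re-run; --transfers raises parallelism for the bulk transfer.
# Drop the leading `echo` to actually execute the copy.
echo rclone copy --checksum --transfers 32 "$SRC" "$DST"
```

Because `rclone copy` only adds or updates objects at the destination, re-running the same command later picks up anything written to the source in the meantime.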
/cc @jaypipes
Depends on cncf-infra/aws-infra#4
/sig k8s-infra
/area infra
/area release-eng
/priority critical-urgent
/milestone v1.25
/close
Follow-up:
- cncf-infra/aws-infra#4
- #3623
@ameukam: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Why close this if it is not accessible yet?
The purpose of this issue was to make sure the buckets exist, whether private or public. Anonymous access to them is a different goal, and I think we should direct future questions to cncf-infra/aws-infra#4 (currently, admin access to those buckets is handled by the CNCF).
`${prefix}registry-k8s-io-${gcs_bucket_name}-${region}`
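The `${prefix}registry-k8s-io-${gcs_bucket_name}-${region}` template expands with ordinary shell (or Terraform) variable substitution. A sketch with hypothetical values for the three variables:

```shell
# Hypothetical values plugged into the naming template from this thread.
prefix="prod-"
gcs_bucket_name="images"
region="us-west-2"
bucket="${prefix}registry-k8s-io-${gcs_bucket_name}-${region}"
echo "${bucket}"
```

One bucket name is produced per region in the list, with the prefix distinguishing environments.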