Serve container image layers from AWS by default (make exception when clients are from Google) #143
Comments
SGTM on principle.
Yeah:
The main detail that needs settling is how we handle routing non-AWS users to AWS. @ameukam suggested perhaps we should just go ahead and switch to CloudFront.
@BenTheElder switching to CloudFront sounds like a good quick win, let's do that and leave the other suggestion for a longer time frame. (Switching to CloudFront sounds like a reversible choice.)
Are we switching layer serving to CloudFront, or https://registry.k8s.io/ itself? The first option is straightforward but doesn't cut the GCP bill. To tell the truth, I'm not sure how the second option helps the GCP bill either.
Perhaps we're thinking of using AWS (and CloudFront) to serve the lot, and not use GCP at all?
The blobs are the vast majority of the bandwidth spend. The registry itself just serves JSON and 302s.
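To illustrate that split (a minimal sketch under assumptions, not archeio's actual code): manifest requests are answered with small JSON documents, while blob requests get a 302 redirect to wherever the layer bytes live, so the registry itself never pays for layer bandwidth. The paths and URL below are placeholders.

```go
package main

import (
	"net/http"
	"strings"
)

func main() {
	http.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		// Blob requests: 302 to the backing object store (placeholder URL);
		// the registry never streams the layer bytes itself.
		if strings.Contains(r.URL.Path, "/blobs/") {
			http.Redirect(w, r, "https://example-layer-store.s3.amazonaws.com"+r.URL.Path, http.StatusFound)
			return
		}
		// Manifest and tag-list requests: small JSON responses served directly.
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"note": "placeholder manifest JSON"}`))
	})
	http.ListenAndServe(":8080", nil)
}
```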
@ameukam was suggesting migrating to CloudFront for layer serving instead of regionalizing to S3 ourselves (which we currently do by mapping a known AWS client IP's region to the nearest-serving S3 region). We either have to do that or otherwise update how we regionalize to work for non-AWS users, as a prerequisite to "default layer serving to Amazon". As Tim said, serving content blobs is the only expensive part. The option to regionalize by assigning a default S3 bucket per Cloud Run region is potentially less work than spinning up CloudFront, depending on who's working on it. It doesn't require new infra, but the mapping would take some thought.
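For anyone following along, here is a rough sketch of the current regionalization idea under assumptions; the CIDR entries, bucket URLs, and function names are placeholders rather than the project's real data (which is generated from AWS's published ip-ranges.json). A client IP that matches a known AWS range gets the nearest-serving S3 bucket, and anything else falls through to the existing default backend.

```go
package main

import (
	"fmt"
	"net/netip"
)

type regionCIDR struct {
	prefix netip.Prefix
	region string
}

// Example entries only; the real project generates these tables from
// AWS's published ip-ranges.json.
var awsRanges = []regionCIDR{
	{netip.MustParsePrefix("3.5.140.0/22"), "ap-northeast-2"},
	{netip.MustParsePrefix("52.92.128.0/17"), "us-west-2"},
}

// Placeholder bucket endpoints keyed by AWS region.
var blobBucketByRegion = map[string]string{
	"ap-northeast-2": "https://example-blobs-ap-northeast-2.s3.amazonaws.com",
	"us-west-2":      "https://example-blobs-us-west-2.s3.amazonaws.com",
}

// awsBlobEndpointForIP returns the S3 endpoint nearest to a known AWS client
// IP, or ok=false when the IP isn't in a known AWS range (in which case the
// current code keeps serving from the default GCP-backed backend).
func awsBlobEndpointForIP(ip netip.Addr) (string, bool) {
	for _, rc := range awsRanges {
		if rc.prefix.Contains(ip) {
			if bucket, ok := blobBucketByRegion[rc.region]; ok {
				return bucket, true
			}
		}
	}
	return "", false
}

func main() {
	ip := netip.MustParseAddr("3.5.141.7")
	if bucket, ok := awsBlobEndpointForIP(ip); ok {
		fmt.Println("redirect blob requests to:", bucket)
	} else {
		fmt.Println("not a known AWS client; use the default backend")
	}
}
```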
Ah, right. Then for layers we either put CloudFront in front of the layer store, or keep redirecting clients straight to regional S3 buckets (picking a default bucket for clients we can't place by IP).
Serving directly from S3 for clients inside AWS has benefits (that mainly accrue to the client) - for example, they can use a gateway-type VPC endpoint for image pulls and avoid using the public internet. Switching away might merit a notification, so that people who relied on this property know they no longer can.
This option is the one we need to go with. We don't want to deal with specific use cases that will increase our operational burden. For users with specific requirements, we will suggest running a local mirror.
OK; I do think we should announce the change, though. We don't need to add a wait period, because we already told people not to rely on implementation details.
We have to do this not just in case S3 is offline (which seems unlikely anyhow), but for the more common problem that async layer population hasn't happened yet; synchronous promotion to AWS has not landed in the image promoter / release process. This fallback is already implemented: https://github.com/kubernetes/registry.k8s.io/blob/main/cmd/archeio/docs/request-handling.md
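A rough sketch of that fallback, with illustrative names and URLs (not archeio's actual identifiers): before redirecting to S3, check whether the blob has actually been populated there, and send the client to the upstream registry if it hasn't.

```go
package main

import (
	"fmt"
	"net/http"
)

// blobExists does a cheap HEAD request against the candidate S3 URL.
// (A real service would also cache these results rather than issue a
// HEAD per pull.)
func blobExists(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// redirectURL picks S3 when the blob is already populated there,
// otherwise falls back to the upstream registry backend.
func redirectURL(s3URL, upstreamURL string) string {
	if blobExists(s3URL) {
		return s3URL
	}
	return upstreamURL
}

func main() {
	// Placeholder URLs for illustration only.
	s3 := "https://example-blobs-us-east-2.s3.amazonaws.com/blobs/sha256/abc123"
	upstream := "https://upstream-registry.example.com/v2/pause/blobs/sha256:abc123"
	fmt.Println("302 ->", redirectURL(s3, upstream))
}
```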
I think people here are conflating sticking CloudFront in front of the entire service, which I do not agree with and which had not been suggested previously, with sticking CloudFront in front of the layer store. The former doesn't make technical sense when registry.k8s.io itself serves nothing* but redirects. We should look at CloudFront for the layer hosting.
I amended #143 (comment) to clarify.
Also, in the future we'll want to do different cost routing (say we start to also use Fastly or Azure), which is easier to do if it's just updating the redirect logic.
/retitle Serve container image layers from AWS by default (make exception when clients are from Google)
That's an interesting point, though our stance so far has very much been that users should not depend on implementation details like which backing store serves the blobs.
This sort of detail is what prevented us from redirecting k8s.gcr.io and bringing our costs down immediately; we cannot dig ourselves back into that hole. If anything, we should make "breaking" changes for those depending on implementation details more often (e.g. perhaps renaming the buckets) to underline the point that they're just implementation details and we will use whatever we can fund.
BTW https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/#why-isn-t-there-a-stable-list-of-domains-ips-why-can-t-i-restrict-image-pulls is a better link for “don't depend on any implementation details”.
A similar change in the wider community and how it was communicated: https://support.hashicorp.com/hc/en-us/articles/11239867821203?_ga=2.46340071.1359745362.1675131001-690834462.1675131001
PRs are ready: #147 and kubernetes/k8s.io#4739
kubernetes/k8s.io#4741 promoted the image. The last step is updating prod. This change is safe, because even if we misconfigured a default URL we will detect the content as not available on AWS and fall back to the upstream registry on AR. The runtime logic diff is pretty small; most of the diff is in refactoring the cloud IP management and updating the runtime deployment configs to map Cloud Run region to a default S3 region (for clients where we cannot detect a known region based on IP). Will follow up with a prod deployment PR shortly. Sandbox is running smoothly.
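Sketching the "default S3 region per Cloud Run region" part under assumptions: the region pairs, bucket URLs, and environment variable below are placeholders, since the real mapping lives in the runtime deployment configs mentioned above rather than in code like this.

```go
package main

import (
	"fmt"
	"os"
)

// defaultS3ByCloudRunRegion pairs each Cloud Run region with a nearby S3
// bucket endpoint (placeholder values, not the production configuration).
var defaultS3ByCloudRunRegion = map[string]string{
	"us-west1":        "https://example-blobs-us-west-1.s3.amazonaws.com",
	"europe-west4":    "https://example-blobs-eu-central-1.s3.amazonaws.com",
	"asia-northeast1": "https://example-blobs-ap-northeast-1.s3.amazonaws.com",
}

// defaultBlobEndpoint returns the S3 endpoint for clients we couldn't place
// by IP, based on which Cloud Run region is handling the request (assumed
// here to be exposed to the app via an illustrative environment variable).
func defaultBlobEndpoint() (string, bool) {
	region := os.Getenv("CLOUD_RUN_REGION") // illustrative variable name
	endpoint, ok := defaultS3ByCloudRunRegion[region]
	return endpoint, ok
}

func main() {
	if endpoint, ok := defaultBlobEndpoint(); ok {
		fmt.Println("default blob endpoint:", endpoint)
	} else {
		fmt.Println("unknown Cloud Run region; fall back to upstream registry")
	}
}
```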
kubernetes/k8s.io#4742: this is deployed.
AFAICT the simple per-Cloud-Run-region regionalizing approach is working well, based on logs etc. For example, pulling from the California Bay Area, I am redirected to the GCP us-west2 Artifact Registry (Los Angeles) and the AWS us-west-1 S3 bucket (N. California). We can revisit CloudFront later, but I don't think we need to rush. We might want to consider adding more S3 regions, notably South America, where we have Cloud Run / Artifact Registry but no AWS presence: kubernetes/k8s.io#4739 (comment)
Our current logic is to default to Google for traffic not from AWS.
We should update the logic to default to AWS if not Google.
This will directly address our top two priorities from our meeting last week.
Our main logic for handling redirects is here:
https://github.com/kubernetes/registry.k8s.io/blob/main/cmd/archeio/app/handlers.go#L123-L131
I'm suggesting the following or similar:
We will need to create a net/cidrs/gcp similar to main/pkg/net/cidrs/aws. It should be nearly the same code, with minor changes to main/pkg/net/cidrs/aws/internal/ranges2go/genrawdata.sh, swapping out the AWS ranges for GCP ranges.
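To make the proposal concrete, here is a rough sketch of the flipped default under assumptions: clients whose IP falls in a published GCP range keep getting the existing GCP-backed registry, and everyone else is redirected to the AWS layer store. The two CIDRs stand in for the proposed net/cidrs/gcp package (generated from Google's published cloud ranges, analogous to the existing aws package), and the URLs and function names are placeholders, not the actual handlers.go code.

```go
package main

import (
	"fmt"
	"net/netip"
)

// Placeholder CIDR set standing in for the proposed net/cidrs/gcp package;
// the real package would be generated from Google's published cloud ranges,
// the same way the aws package is generated from ip-ranges.json.
var gcpRanges = []netip.Prefix{
	netip.MustParsePrefix("34.64.0.0/10"),
	netip.MustParsePrefix("35.192.0.0/12"),
}

func isGCPClient(ip netip.Addr) bool {
	for _, p := range gcpRanges {
		if p.Contains(ip) {
			return true
		}
	}
	return false
}

// blobRedirect decides where to 302 a blob request: GCP clients stay on the
// upstream (GCP-backed) registry, everyone else defaults to the AWS S3 layer
// store. URLs are placeholders for illustration.
func blobRedirect(clientIP netip.Addr, digestPath string) string {
	const awsBlobURL = "https://example-blobs-us-east-2.s3.amazonaws.com"
	const upstreamBlobURL = "https://upstream-registry.example.com"
	if isGCPClient(clientIP) {
		return upstreamBlobURL + digestPath
	}
	return awsBlobURL + digestPath
}

func main() {
	fmt.Println(blobRedirect(netip.MustParseAddr("35.200.1.1"), "/blobs/sha256/abc123"))
	fmt.Println(blobRedirect(netip.MustParseAddr("198.51.100.7"), "/blobs/sha256/abc123"))
}
```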