403 Access Denied while pulling registry.k8s.io/kube-state-metrics in eu-central-1 on AWS #4214
Comments
I'm on mobile, but the first thing we should check is whether the regional S3 bucket is correctly configured (which I don't have access to) |
Thank you for the report. This is a good place to report this. |
Working on it |
@Riaankl #4118 seems to be about writing to the bucket / replication, but this is about reading? What am I missing? FWIW without auth I ran:
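Presumably something along these lines (the exact tag here is an assumption, not taken from the thread):

```sh
# Print the image manifest without authenticating; --verbose logs the HTTP
# requests, which shows which endpoints serve each response.
crane manifest --verbose registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
```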
This gave me GCP-only hosting as I'm not in AWS, but it allowed me to see what layers are used for this image. Then manually downloading a layer from one of the current implementation detail buckets, the one we should be seeing for eu-central-1 users:
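Roughly like this (the bucket URL and key layout match the ones cited later in the thread; the digest is the first layer from the manifest below):

```sh
# Fetch one layer blob straight from the eu-central-1 bucket, no credentials.
curl -fL -o layer.tar.gz \
  "https://prod-registry-k8s-io-eu-central-1.s3.dualstack.eu-central-1.amazonaws.com/containers/images/sha256:0a602d5f6ca3de9b0e0d4d64e8857e504ec7a8c47f1ec617d82a81f6c64b0fe8"
```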
With no auth issue 🤔 registry.k8s.io should not require authentication ... 🤔 @ritvikgautam would it be possible for you to run the crane command in this environment and share the output? The docker error message above unfortunately doesn't tell us what endpoint served this error, though based on:
It appears to be one of the S3 buckets. |
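One way to see which backend a blob request lands on (a sketch assuming the standard OCI distribution blob path that registry.k8s.io serves; the digest is the config digest from the manifest below):

```sh
# Ask registry.k8s.io for the blob without following the redirect and print
# the Location header, which names the bucket/endpoint chosen for this client.
curl -s -o /dev/null -D - \
  "https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:ec6e2d871c544073e0d0a2448b23f98a1aa47b7c60ae9d79ac5d94d92ea45949" \
  | grep -i '^location:'
```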
@BenTheElder, my bad. I was referring to Caleb's comment in the PR about the 403 issue, not the PR itself. |
This is not related. |
There are only 3 content digests in this image (two layers and the config), and all of them fetch fine without auth directly from the eu-central-1 bucket. Given:
It should have been while trying to fetch the config blob, but that fetches fine locally with curl, unauthenticated. All three should be going through the same API flow and redirects. This is the manifest:

{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 2228,
"digest": "sha256:ec6e2d871c544073e0d0a2448b23f98a1aa47b7c60ae9d79ac5d94d92ea45949"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 801012,
"digest": "sha256:0a602d5f6ca3de9b0e0d4d64e8857e504ec7a8c47f1ec617d82a81f6c64b0fe8"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 11156867,
"digest": "sha256:68ad17e1eab7fdb4ef2e7eb00885d2b12aeaf8365095eaf7e37e8cb22e4bda27"
}
]
}

@BobyMCbobs maybe worth running through https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
That object should be https://prod-registry-k8s-io-eu-central-1.s3.dualstack.eu-central-1.amazonaws.com/containers/images/sha256:ec6e2d871c544073e0d0a2448b23f98a1aa47b7c60ae9d79ac5d94d92ea45949, though it happened on more than one image in the OP. |
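For reference, a quick way to check all three digests against the regional bucket directly (a sketch using the bucket/key pattern above; a healthy setup returns 200 for each):

```sh
BUCKET=https://prod-registry-k8s-io-eu-central-1.s3.dualstack.eu-central-1.amazonaws.com
for d in \
  sha256:ec6e2d871c544073e0d0a2448b23f98a1aa47b7c60ae9d79ac5d94d92ea45949 \
  sha256:0a602d5f6ca3de9b0e0d4d64e8857e504ec7a8c47f1ec617d82a81f6c64b0fe8 \
  sha256:68ad17e1eab7fdb4ef2e7eb00885d2b12aeaf8365095eaf7e37e8cb22e4bda27; do
  # Print the HTTP status for each blob; no credentials involved.
  curl -s -o /dev/null -w "%{http_code}  $d\n" "$BUCKET/containers/images/$d"
done
```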
This is peculiar to me. The HEAD check should defer to k8s.gcr.io, given the lack of a particular blob. Separately, I am performing a manual sync right now to ensure that everything is up to date. |
GET and HEAD are proxied identically. AFAICT if we are getting a 403 it's an auth restriction somewhere in AWS. It shouldn't be any of the GCR fallbacks, you can hit them all with no auth and I can't find evidence that anything changed there. |
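A minimal way to compare the two verbs end to end (assuming the same /v2/ blob path as above; both should resolve to the same backend and status):

```sh
DIGEST=sha256:ec6e2d871c544073e0d0a2448b23f98a1aa47b7c60ae9d79ac5d94d92ea45949
URL=https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/$DIGEST
# HEAD, following redirects, printing the final status and URL.
curl -sIL -o /dev/null -w 'HEAD %{http_code} %{url_effective}\n' "$URL"
# GET, following redirects, discarding the body.
curl -sL -o /dev/null -w 'GET  %{http_code} %{url_effective}\n' "$URL"
```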
@BenTheElder Here's the output of the crane command from the environment:
|
Thank you! This is very strange ...

curl https://prod-registry-k8s-io-eu-central-1.s3.dualstack.eu-central-1.amazonaws.com/containers/images/sha256%3Aec6e2d871c544073e0d0a2448b23f98a1aa47b7c60ae9d79ac5d94d92ea45949
{"architecture":"amd64","config":{"Hostname":"","Domainname":"","User":"nobody","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"8080/tcp":{},"8081/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt"],"Cmd":null,"Image":"sha256:d35e7be12f94022e0c53d095efb1b646847a5720f04815320d3aac51e20a25da","Volumes":null,"WorkingDir":"/","Entrypoint":["/kube-state-metrics","--port=8080","--telemetry-port=8081"],"OnBuild":null,"Labels":null},"container":"51d40723584de5ff7fa1697a90ca9752e490fbcd5f4148124ee150c7098fef73","container_config":{"Hostname":"51d40723584d","Domainname":"","User":"nobody","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"8080/tcp":{},"8081/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt"],"Cmd":["/bin/sh","-c","#(nop) ","EXPOSE 8080 8081"],"Image":"sha256:d35e7be12f94022e0c53d095efb1b646847a5720f04815320d3aac51e20a25da","Volumes":null,"WorkingDir":"/","Entrypoint":["/kube-state-metrics","--port=8080","--telemetry-port=8081"],"OnBuild":null,"Labels":{}},"created":"2022-08-24T16:44:22.744035043Z","docker_version":"20.10.17","history":[{"created":"1970-01-01T00:00:00Z","author":"Bazel","created_by":"bazel build ..."},{"created":"2022-08-24T16:44:21.805033518Z","created_by":"/bin/sh -c #(nop) COPY file:6fa688f274e1c78ca58b46ab2cb9ab7d4b208aa3f8380a30dcfcdc29ea267ab7 in / "},{"created":"2022-08-24T16:44:22.481603829Z","created_by":"/bin/sh -c #(nop) USER nobody","empty_layer":true},{"created":"2022-08-24T16:44:22.615739065Z","created_by":"/bin/sh -c #(nop) ENTRYPOINT [\"/kube-state-metrics\" \"--port=8080\" \"--telemetry-port=8081\"]","empty_layer":true},{"created":"2022-08-24T16:44:22.744035043Z","created_by":"/bin/sh -c #(nop) EXPOSE 8080 8081","empty_layer":true}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:c456571abc85581a0ac79dbfe2b13d71d8049c24042db7be14838a55499e4ffd","sha256:c0024e78de05f4a736c74b5df94a8b030f03376bb7551c9fe9e56e9c51eebe45"]}}

Curling works as expected, and your client is even able to download the layers it seems, but not this config blob for some reason. |
It seems like this has to be either an ACL issue on the S3 bucket or something in that AWS account / environment, maybe (?) |
I think this is potentially an environment-related issue. I just tried this on a non-company AWS account in eu-central-1 and it worked fine. I don't think there is any proxy or firewall at the instance level, but there could be policies defined at the VPC level. I'll have to check with other folks to confirm this tomorrow and will update here when I have the confirmation. Big thanks to all for jumping in and helping pinpoint the cause! Sorry for the false alarm. |
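If it does turn out to be VPC-level, one common culprit is an S3 gateway/interface endpoint with a restrictive policy attached, which surfaces as 403 AccessDenied for any bucket outside its allow-list. A sketch of how to check (region and query fields are illustrative):

```sh
# List S3 VPC endpoints in eu-central-1 and dump their attached policies.
aws ec2 describe-vpc-endpoints \
  --region eu-central-1 \
  --filters Name=service-name,Values=com.amazonaws.eu-central-1.s3 \
  --query 'VpcEndpoints[].{Id:VpcEndpointId,Type:VpcEndpointType,Policy:PolicyDocument}' \
  --output json
```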
Please let us know if you have reason to suspect it's not a company policy and instead an upstream bug 😅 |
@BenTheElder: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
We are getting a 403 Access Denied message when trying to pull registry.k8s.io/kube-state-metrics/kube-state-metrics in eu-central-1 (Frankfurt) on AWS. We are facing this issue only in eu-central-1; it works fine in 5 other regions on AWS with the same configurations.

From this doc, I understand our request to pull this image originating from eu-central-1 on AWS could be getting redirected to a nearby repository. This would explain why we're facing this only in this particular region on AWS. (Sorry if this isn't the right place to report this.)

Also referencing the issue opened at kube-state-metrics: prometheus-community/helm-charts#2421
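A minimal repro sketch for the original failure (the tag here is an assumption; the 403 only appears from the affected eu-central-1 environment):

```sh
# Pull attempt that fails with 403 Access Denied in the affected environment.
docker pull registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
```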