podman push twice results in two different digests #6496
Comments
A friendly reminder that this issue had no activity for 30 days.
bump
@mtrmac PTAL
Thanks for your report. Yes, that is almost certainly containers/image#733. You can verify that by using …
Also seeing this behavior in podman 2.0.3
```
$ podman build -t docker.io/jdockter/test:test -f test.Dockerfile
$ podman push docker.io/jdockter/test:test
```
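Pushing the same tag a second time and checking what the registry reports makes the mismatch visible. A minimal sketch, assuming skopeo is available and reusing the image name above:

```
$ podman push docker.io/jdockter/test:test
$ skopeo inspect docker://docker.io/jdockter/test:test | grep '"Digest"'
$ podman push docker.io/jdockter/test:test
$ skopeo inspect docker://docker.io/jdockter/test:test | grep '"Digest"'
# On affected versions the two Digest values differ even though nothing changed
```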
I have this issue (I think) as well. I'm building with podman 2.0.3 using … Furthermore, when doing: … podman apparently "corrects" the layer media type, and the server ends up with the tags on different images. I can see this using …
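One way to compare the layer media types the registry holds for each tag is to fetch the raw manifests. A sketch, assuming jq is installed and using hypothetical tag names:

```
$ skopeo inspect --raw docker://docker.io/example/test:tag1 | jq '.layers[].mediaType'
$ skopeo inspect --raw docker://docker.io/example/test:tag2 | jq '.layers[].mediaType'
# Docker layers report application/vnd.docker.image.rootfs.diff.tar.gzip,
# OCI layers report application/vnd.oci.image.layer.v1.tar+gzip
```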
@mtrmac Seems we are not making progress on this?
I'm seeing the same behaviour with podman 2.1.1. Here are my debugging details: https://gist.github.com/rbo/2bcae948fe5e278fc68d12c365d20af1 (I don't want to make too much noise here.) It looks to me like it changes the manifest version. First push …
@vrothberg This is blocked on a PR that you opened many months ago. Any progress?
I guess you are referring to containers/image#733? That's just an issue. I haven't been working on this, but I believe @nalind is looking into it.
@nalind Any thoughts on this?
This is fixed now.
@vrothberg I can't seem to follow the breadcrumbs. Which version of podman includes the fix?
Apologies. Podman v3.0 should include the fix. At least, I couldn't reproduce locally, and @nalind fixed a bug in the blob-info cache that went into v3.0.
I know this might be a long shot, but I've seen this issue using 3.0.1, where I have an image built locally with one hash; I then push it, and in the repo it gets a different hash, ending up with multiple digests:
My workaround so far is to delete the local image and pull again, which pulls only one hash.
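A sketch of that workaround, with a hypothetical image name standing in for the real one:

```
# Drop the local copy and re-pull so the local record holds a single manifest
$ podman rmi docker.io/example/app:latest
$ podman pull docker.io/example/app:latest
$ podman image inspect --format '{{.Digest}}' docker.io/example/app:latest
```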
That's what I'd expect if we pulled an image using a named reference that corresponded to a manifest list -- in those cases we save both the manifest list and the manifest of the arch-specific image in the record for the local copy of the image. If that image is then tagged with another name, that name gets attached to the same record, which still has two manifests in it. Those manifests will have different digests because their contents are different.

I guess it could also happen if we pushed that same image back to the registry without help from the blob info cache (which aims to let us reuse blobs that we know are already present in the registry), and we had to compress the image's layers again, and that yielded different results due to differences in the versions of the compression libraries used to compress them. That would produce a new manifest that described the same arch-specific image's configuration and layer blobs, and if you then re-pulled the image, you'd get another manifest added to the image's record.

If neither of those describes how you got there, please describe which commands you used, so that I can try to reproduce it here and figure out what's going on. Thanks!
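To tell these cases apart, one could check how many digests the local image record carries and whether the remote reference resolves to a manifest list. A sketch with a hypothetical image name, assuming skopeo and jq are available:

```
# Digests attached to the local image record
$ podman image inspect --format '{{.Digest}}' docker.io/example/app:latest
$ podman image inspect --format '{{.RepoDigests}}' docker.io/example/app:latest

# Media type of the manifest the registry serves for this reference
$ skopeo inspect --raw docker://docker.io/example/app:latest | jq '.mediaType'
# application/vnd.docker.distribution.manifest.list.v2+json -> manifest list
# application/vnd.docker.distribution.manifest.v2+json      -> single image
```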
/kind bug
Description
If I attempt to push the same tag to docker.io twice in a row, I get a different digest.
If I remove the blob cache `/var/lib/containers/cache/blob-info-cache-v1.boltdb`, then the correct digest is calculated.

Steps to reproduce the issue:
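A minimal reproduction consistent with the description above, using a hypothetical image name (--digestfile records the digest of each push):

```
$ podman build -t docker.io/example/test:latest -f test.Dockerfile
$ podman push --digestfile /tmp/digest1 docker.io/example/test:latest
$ podman push --digestfile /tmp/digest2 docker.io/example/test:latest
$ diff /tmp/digest1 /tmp/digest2
# On affected versions the two digest files differ
```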
Describe the results you received:
The digest is different for images that are stored in the image cache.

Describe the results you expected:
The digest should be the same regardless of its cache state.

Additional information you deem important (e.g. issue happens only occasionally):
Deleting the cache prior to any `podman push` resolves the problem (a sketch follows at the end of this report).

Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
This appears to be similar to the workaround for: containers/image#733
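That workaround, sketched with the cache path from this report and a hypothetical image name:

```
# Remove the blob-info cache so stale digest mappings are not reused
$ rm /var/lib/containers/cache/blob-info-cache-v1.boltdb
$ podman push docker.io/example/test:latest
```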