
Parallel pushes fail with invalid written size #2706

Closed
tonistiigi opened this issue Oct 10, 2018 · 3 comments · Fixed by #5379

@tonistiigi
Member

Description

The push handler does not work properly when multiple pushes of the same blob run concurrently. It appears that both pushes write to the same location, and one of them fails with an error reporting that twice as many bytes were written as the descriptor specifies.

This is not limited to two competing pushes sending the same data to the same location: it is quite easy to hit when pushing a single manifest list, since the sub-images can easily contain duplicate layers. It is also possible to hit by pushing multiple images that share some layers to the same repo.

Steps to reproduce the issue:

  1. Push one or more images, or a manifest list, containing identical layers

Output of containerd --version:

v1.1.3
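
A minimal sketch of the scenario using containerd's remotes API (illustrative only, not code from containerd itself; the registry ref is a placeholder and auth configuration is omitted): it pushes the same blob from two goroutines through a single pusher. On affected versions the two writers appear to track the same upload, so one Commit fails reporting roughly twice desc.Size bytes written.

package main

import (
	"bytes"
	"context"
	"fmt"
	"sync"

	"github.com/containerd/containerd/remotes"
	"github.com/containerd/containerd/remotes/docker"
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// pushBlob pushes one blob through the given pusher. When two goroutines
// do this for the same descriptor, affected versions end up writing to
// the same upload, and one Commit fails with a size mismatch.
func pushBlob(ctx context.Context, pusher remotes.Pusher, desc ocispec.Descriptor, blob []byte) error {
	w, err := pusher.Push(ctx, desc)
	if err != nil {
		return err // e.g. "already exists" once the blob is on the registry
	}
	defer w.Close()
	if _, err := w.Write(blob); err != nil {
		return err
	}
	return w.Commit(ctx, desc.Size, desc.Digest)
}

func main() {
	ctx := context.Background()
	// Placeholder registry/ref; credentials are omitted for brevity.
	resolver := docker.NewResolver(docker.ResolverOptions{})
	pusher, err := resolver.Pusher(ctx, "registry.example.com/repo:latest")
	if err != nil {
		panic(err)
	}

	blob := bytes.Repeat([]byte("x"), 1024)
	desc := ocispec.Descriptor{
		MediaType: ocispec.MediaTypeImageLayerGzip,
		Digest:    digest.FromBytes(blob),
		Size:      int64(len(blob)),
	}

	// Two parallel pushes of the same blob, as happens for shared layers.
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := pushBlob(ctx, pusher, desc, blob); err != nil {
				fmt.Println("push failed:", err)
			}
		}()
	}
	wg.Wait()
}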
@dmcgowan dmcgowan added this to the 1.3 milestone Oct 17, 2018
@tiborvass

I just hit this one.

@hairyhenderson
Contributor

I seem to be hitting this occasionally when running a buildx build like:

$ docker buildx build \
	--platform linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7,windows/amd64 \
	--push .

A sample failure log is here: https://github.com/hairyhenderson/gomplate/runs/610612798?check_suite_focus=true

This isn't consistent, but happens roughly 25-30% of the time. Usually triggering a new build will resolve it.

(@tonistiigi pointed me to this in Slack)

@hairyhenderson
Contributor

hairyhenderson commented Apr 23, 2020

There is definitely an identical layer shared between all variants of the image:

$ docker buildx imagetools inspect hairyhenderson/gomplate:slim | grep -B2 arm/v
  Name:      docker.io/hairyhenderson/gomplate:slim@sha256:ed7b70e4c99b4f4d3f5866f7b7818305f10fde522eec3e739a630dc9f7e31575
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v6
--
  Name:      docker.io/hairyhenderson/gomplate:slim@sha256:781407a27dcb8db689e2ea98e9e4c0a91c6ae9c66ba24b1ff6d48b43a363bca8
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7
$ docker buildx imagetools inspect hairyhenderson/gomplate:slim@sha256:ed7b70e4c99b4f4d3f5866f7b7818305f10fde522eec3e739a630dc9f7e31575
{
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "schemaVersion": 2,
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "digest": "sha256:f82dcf37ebc8bcb1976c46fc98208f1a6081617617d16e94d2448822e35cd7a6",
      "size": 1866
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:263e0689b0e87ec3c01c7efae5f3efca0495d430a883d9ccff5ca7b549f5db61",
         "size": 131688
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:10d6f462236a489a0ebba1615aef8e3187a8b99ff9f759d9bf325649348d783d",
         "size": 6547127
      }
   ]
}
$ docker buildx imagetools inspect hairyhenderson/gomplate:slim@sha256:781407a27dcb8db689e2ea98e9e4c0a91c6ae9c66ba24b1ff6d48b43a363bca8
{
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "schemaVersion": 2,
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "digest": "sha256:dd37c2f01bcee100af6bbccd3a9b93ab9dabb07351ab02dad9115a6ff03d6a56",
      "size": 1866
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:263e0689b0e87ec3c01c7efae5f3efca0495d430a883d9ccff5ca7b549f5db61",
         "size": 131688
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:e01a906d72e11dbaecacef1317a54c206c0352f345bb734dbc636356feabd7f2",
         "size": 6543238
      }
   ]
}

The common layer here is sha256:263e0689b0e87ec3c01c7efae5f3efca0495d430a883d9ccff5ca7b549f5db61, and I suspect it's the /etc/ssl/certs/ca-certificates.crt file that I copy from a common stage.

Note that this isn't unique to arm/v6 and arm/v7 - all of the variants contain this layer, so presumably there's a higher chance of hitting this bug.
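
As a quick way to confirm the overlap, here is a small illustrative sketch that intersects the layer digests of two manifests. The filenames are hypothetical placeholders for the raw manifest JSON shown above, e.g. saved via docker buildx imagetools inspect --raw.

package main

import (
	"encoding/json"
	"fmt"
	"os"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// layerDigests returns the set of layer digests in an image manifest;
// the Docker v2 and OCI manifest formats share these JSON field names.
func layerDigests(path string) (map[string]bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var m ocispec.Manifest
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	set := make(map[string]bool, len(m.Layers))
	for _, l := range m.Layers {
		set[l.Digest.String()] = true
	}
	return set, nil
}

func main() {
	// Hypothetical files holding the two per-platform manifests above.
	a, err := layerDigests("armv6.json")
	if err != nil {
		panic(err)
	}
	b, err := layerDigests("armv7.json")
	if err != nil {
		panic(err)
	}
	for d := range a {
		if b[d] {
			fmt.Println("shared layer:", d) // expect sha256:263e0689...
		}
	}
}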
