Unable to build alpine:edge containers for armv7 #1946
I don't quite understand what you are reporting. Dynamic binaries built in Alpine have a dependency on the musl dynamic loader (ld-musl); static ones do not. This is all expected. What is the "time64 change/regression" that you are mentioning?
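As context for the static-vs-dynamic distinction above, here is one quick way to tell which case a given binary falls into. This is an illustrative sketch, not from the thread; a dynamically linked musl binary carries a `PT_INTERP` header naming the loader (e.g. `/lib/ld-musl-armhf.so.1`), while a static one has no interpreter at all.

```shell
# Distinguish static from dynamically linked binaries by checking
# for a program-interpreter entry in the ELF program headers.
check_linkage() {
  if readelf -l "$1" 2>/dev/null | grep -q 'Requesting program interpreter'; then
    echo "$1: dynamic"
  else
    echo "$1: static"
  fi
}

check_linkage /bin/sh
```

On an Alpine armv7 build, a dynamic result would show the interpreter as `/lib/ld-musl-armhf.so.1`.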
This may help explain more: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#musl_1.2 So the issue: the buildkit release binaries for armv7 are no longer capable of building alpine:edge.
So IIUC, binaries built with the new musl do not work inside Docker older than 19.03.9 (May 2020) on 32-bit architectures. Is there any other case affected, or anything we should do on the BuildKit side? Do the release binaries also have issues, and should we build them with the older musl for some time?
/cc @ncopa
Was debugging docker/buildx#511 and it looks like the current state of the armv7 release is quite problematic, but I don't understand the full extent of the issue. I'm also not quite sure whether this is related to this issue, but it fails in a similar manner and also appears on armv7.

First, the buildkit release binaries are not even built in Alpine; they are built into static binaries. I can reproduce the issue in docker/buildx#511, but only with a native fallback to armv7; under qemu emulation everything works fine. Also, with … If I run …

So it looks like something is very wrong in the seccomp inside the runc we build in the Dockerfile. Or maybe the one that ships with Docker doesn't apply seccomp properly, and that's why it works at all outside buildkit. @AkihiroSuda @tiborvass @justincormack

edit: I tried the static runc binaries from https://download.docker.com/linux/static/stable/armhf/ and none of them work either on my test node. Neither buildkit nor …
The most recent potential change on 32-bit is the 64-bit time (time64) changes, which are in Debian but might not be in the libseccomp version being used; these updates need to land in a number of places. If you can work out from strace where it fails, that would help.
Are you able to verify the seccomp profile that's set in the OCI spec of the container?
Do you mean the binaries are defunct (see docker-library/docker#260)? I think that could be related to the switch to using the AWS nodes (arm64, but building arm32 on them), although I think Stefan looked into it and found that there were issues before that for armhf.
OK, so that's time64 by the look of it. You need to make sure that the seccomp profile includes these syscalls and that the libseccomp library version knows about them.
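For illustration, the time64 syscalls in question are the `*_time64` variants Linux added for 32-bit architectures, which musl 1.2 now calls. A seccomp allowlist entry covering them might look like the fragment below (a partial, assumed list in the shape of Docker's default-profile JSON; the full set is longer):

```json
{
  "names": [
    "clock_gettime64",
    "clock_getres_time64",
    "clock_nanosleep_time64",
    "futex_time64",
    "ppoll_time64",
    "pselect6_time64",
    "utimensat_time64"
  ],
  "action": "SCMP_ACT_ALLOW"
}
```

Note that listing the names in the profile is not sufficient on its own: the libseccomp version used to compile the filter must also know how to resolve these names to syscall numbers on armv7.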
fix in #1955
I've hit an interesting issue trying to build an alpine:edge container on armv7. I believe this is because of the musl change in alpine:edge and time64.
The Dockerfile can be trivial, e.g.:
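The original snippet is not preserved here, but a minimal Dockerfile along these lines should be enough to trigger it (this exact content is an assumption; any RUN step that executes a binary linked against musl 1.2 will hit the time64 syscalls):

```dockerfile
# Assumed minimal reproducer: the RUN step executes a musl 1.2 binary,
# which makes time64 syscalls that an outdated seccomp setup blocks.
FROM alpine:edge
RUN date
```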
I then run `buildkitd` in one xterm and `./bin/buildctl build --frontend=dockerfile.v0 --local context=. --local dockerfile=../` in the other. I've narrowed my problem down to buildkitd.
The good news: if built just right, this works fine.
The bad news: it's sort of a pain.
How to make it work
From an armv7 host:
This binary works. However, you'll have to copy `/lib/ld-musl-armhf.so.1` from the container to your host for it to run.

I tried to apply the static linking logic used in the Dockerfile:

```
go build -ldflags "-extldflags '-static'" -o ./buildkitd -tags "osusergo netgo static_build seccomp" ./cmd/buildkitd
```

This binary does not work. I think I can work around this for now by building a Frankenstein container that takes moby/buildkit:master and adds my binary and the ld-musl-armhf.so.1. I think the real fix will take some toolchain/linker know-how that might go beyond me.
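That workaround container could look something like the sketch below (the local file paths are assumptions; the idea is just to overlay the locally built dynamic buildkitd and the musl loader it needs onto the upstream image):

```dockerfile
# Hypothetical "Frankenstein" image: upstream buildkit plus the
# locally built dynamic buildkitd and its musl dynamic loader.
FROM moby/buildkit:master
COPY buildkitd /usr/bin/buildkitd
COPY ld-musl-armhf.so.1 /lib/ld-musl-armhf.so.1
```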