
Unable to build alpine:edge containers for armv7 #1946

Closed · doanac opened this issue Jan 15, 2021 · 11 comments · Fixed by #1955 or moby/moby#42056

@doanac commented Jan 15, 2021

I've hit an interesting issue trying to build an alpine:edge container on armv7. I believe this is because of the musl 1.2 (time64) change in alpine:edge.

The Dockerfile can be trivial, e.g.:

```dockerfile
FROM alpine:edge
# apk add will fail
RUN apk add git
```

I then run buildkitd in one xterm and, in the other, `./bin/buildctl build --frontend=dockerfile.v0 --local context=. --local dockerfile=../`.

I've narrowed my problem down to buildkitd.

The good news: if built just right, this works fine.
The bad news: it's sort of a pain.

How to make it work

From an armv7 host:

```sh
docker run --rm -it alpine:edge
apk add go git
git clone https://github.com/moby/buildkit
cd buildkit
go build -o buildkit ./cmd/buildkitd
```

This binary works. However, you'll have to copy /lib/ld-musl-armhf.so.1 from the container to your host for it to run.
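
A quick way to confirm the interpreter dependency (a sketch, not from the original report; assumes the `buildkit` binary from the steps above):

```sh
# Print the ELF header summary; a dynamically linked musl binary lists
# /lib/ld-musl-armhf.so.1 as its interpreter, which a glibc host lacks.
file ./buildkit
```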

I tried to apply the static linking logic used in the Dockerfile: `go build -ldflags "-extldflags '-static'" -o ./buildkitd -tags "osusergo netgo static_build seccomp" ./cmd/buildkitd`. This binary does not work.

I think I can work around this for now by building a Frankenstein container that takes moby/buildkit:master and adds my binary and the ld-musl-armhf.so.1. I think the real fix will take some toolchain/linker know-how that might go beyond me.
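
A minimal sketch of such a Frankenstein image, assuming the dynamically linked binary and the loader copied out of the alpine container sit next to the Dockerfile (file names here are illustrative):

```dockerfile
# Overlay the locally built binary and the musl loader it needs
# on top of the upstream image.
FROM moby/buildkit:master
COPY buildkit /usr/bin/buildkitd
COPY ld-musl-armhf.so.1 /lib/ld-musl-armhf.so.1
```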

@tonistiigi (Member)

I don't quite understand what you are reporting. Dynamic binaries built in alpine have a dependency on musl-ld, static ones do not. This is all expected. What is the "time64 change/regression" that you are mentioning?

@doanac (Author) commented Jan 16, 2021

This may help explain more - https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#musl_1.2

So the issue is: the released buildkit binaries for armv7 are no longer capable of building alpine:edge.

@sergey-safarov

This is an issue with Docker / libseccomp blocking unknown (new) syscalls.

@tonistiigi (Member)

So if I understand correctly, binaries built with the new musl do not work inside Docker older than 19.03.9 (May 2020) on 32-bit arches. Is there any other case affected, or anything we should do on the BuildKit side? Do the release binaries also have issues, and should we build them with the older musl for some time?

@thaJeztah (Member)

/cc @ncopa

@tonistiigi (Member) commented Jan 26, 2021

Was debugging docker/buildx#511 and it looks like the current state of the armv7 release is quite problematic, but I don't understand the full extent of the issue. I'm also not quite sure whether this is related to this issue, but it fails in a similar manner and also appears on armv7.

First, the buildkit release binaries are not even built in alpine. They are built as static binaries in golang:buster. These static binaries are later used inside a minimal alpine environment (but I also reproduced this on an Ubuntu host).

I can reproduce the issue in docker/buildx#511, but only with a native fallback to armv7; under QEMU emulation everything works fine. Also, with RUN --security=insecure the issue does not appear, so that points to seccomp as well (a sketch of such a run follows).
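
For reference, a rough sketch of what a run with the insecure entitlement looks like (flag spellings as in buildkitd/buildctl; RUN --security=insecure additionally requires a Dockerfile frontend that supports it):

```sh
# Start the daemon with the entitlement enabled...
buildkitd --allow-insecure-entitlement security.insecure &

# ...and grant it to the individual build, which lifts the default
# seccomp confinement for RUN --security=insecure steps.
buildctl build --frontend=dockerfile.v0 \
  --allow security.insecure \
  --local context=. --local dockerfile=.
```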

If I run buildkitd on the host directly, I see the issue with the static buildkit-runc binary. If I use the dynamic runc binary that ships with the Docker Ubuntu packages, then everything works fine. I rebuilt buildkit-runc from the exact same commit and still hit the same issue.

So it looks like something is very wrong with the seccomp handling in the runc we build in the Dockerfile. Or maybe the one that ships with Docker doesn't apply seccomp properly, and that's why it works at all outside BuildKit.

@AkihiroSuda @tiborvass @justincormack

edit: I tried the static runc binaries from https://download.docker.com/linux/static/stable/armhf/ and none of them work on my test node either. Neither buildkit nor docker run.

@justincormack (Contributor)

The most recent potential changes on 32-bit are the 64-bit time changes, which are in Debian but might not be in the libseccomp version being used; these updates need to land in a bunch of places. If you can work out from strace where it fails, that would help (one way to attach is sketched below).
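
One way to get such a trace (a sketch, not from the thread): attach strace from the host, so the container's seccomp profile still applies to the traced process (the pgrep pattern is illustrative):

```sh
# Shell 1: reproduce the failure inside a container.
docker run --rm -it alpine:edge apk add git

# Shell 2 (host): attach to the process; a syscall blocked by seccomp
# shows up as an immediate errno return instead of entering the kernel.
sudo strace -f -p "$(pgrep -n apk)"
```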

@tonistiigi (Member) commented Jan 26, 2021

@justincormack

Running sleep:

```
brk(NULL)                               = 0x438000
brk(0x459000)                           = 0x459000
clock_nanosleep_time64(CLOCK_REALTIME, 0, {tv_sec=2, tv_nsec=0}, 0xbeb23c80) = 2
sleep: write(7, "sleep: ", 7)
```

On apt update I see:

```
getpid()                                = 9
clock_gettime64(CLOCK_MONOTONIC, 0xbeacab90) = -1 ENETDOWN (Network is down)
```

@thaJeztah (Member)

> Or maybe the one that ships with Docker doesn't apply seccomp properly, and that's why it works at all outside BuildKit.

Are you able to verify the seccomp profile that's set in the OCI spec of the container?

> edit: I tried the static runc binaries from https://download.docker.com/linux/static/stable/armhf/ and none of them work on my test node either. Neither buildkit nor docker run.

Do you mean the binaries are defunct (see docker-library/docker#260)? I think that could be related to the switch to using the AWS nodes (arm64, but building arm32 on them), although I think Stefan looked into it and found that there were issues before that for armhf.

@justincormack (Contributor)

OK, so that's time64 by the look of it. You need to make sure that the seccomp profile includes these syscalls and that the libseccomp library version knows about them (a quick check for both is sketched below).
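
A rough way to check both sides (a sketch, assuming libseccomp's scmp_sys_resolver tool is installed, and using moby's default profile as the example):

```sh
# Can the installed libseccomp resolve the time64 names for 32-bit ARM?
# A library that predates time64 fails to resolve them.
scmp_sys_resolver -a arm clock_gettime64

# Does the profile list them? Count the time64 entries in moby's
# default seccomp profile.
curl -s https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json \
  | grep -c time64
```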

@tonistiigi (Member)

fix in #1955
