
Provide multi-arch images #83

Closed
galderz opened this issue Jul 29, 2020 · 21 comments

@galderz
Member

galderz commented Jul 29, 2020

Trying to use the image on aarch64 and it's not working:

[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildStep] podman run -v /home/g/downloads/code-with-quarkus/target/code-with-quarkus-1.0.0-SNAPSHOT-native-image-source-jar:/project:z --env LANG=C --userns=keep-id --rm quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 -J-Duser.language=en -J-Dfile.encoding=UTF-8 --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -H:+JNI -jar code-with-quarkus-1.0.0-SNAPSHOT-runner.jar -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:-IncludeAllTimeZones -H:EnableURLProtocols=http --no-server -H:-UseServiceLoaderFeature -H:+StackTrace code-with-quarkus-1.0.0-SNAPSHOT-runner
{"msg":"exec container process `/opt/graalvm/bin/native-image`: Exec format error","level":"error","time":"2020-07-29T10:03:36.000781782Z"}

The image descriptor is wrongly assuming all images are for amd64, see here.

Since GraalVM offers aarch64 downloads, should we provide complementary images for that arch?
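
For anyone hitting the same error: a quick way to confirm the mismatch is to ask the container engine which architecture the image was built for. A sketch, using the tag from the log above; the expected output is an inference from the amd64-only descriptor:

podman image inspect --format '{{.Os}}/{{.Architecture}}' quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11
# expected to report linux/amd64 even on an aarch64 host, hence the "Exec format error"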

@cescoffier
Member

That would be great!

@cescoffier
Member

@galderz which images should we provide for ARM?
For example, does it make sense for s2i and tooling?

@galderz
Member Author

galderz commented Aug 18, 2020

I'd start just with the GraalVM and Mandrel images (the equivalent of ubi-quarkus-native-image and ubi-quarkus-mandrel).

How much are the others used? centos-quarkus-maven, ubi-quarkus-native-s2i and ubi-quarkus-native-binary-s2i?

If you have any ideas on the naming I'd love to hear them too :)

@cescoffier can you assign this to me?

@cescoffier
Member

it's all yours @galderz :-)

The others are not widely used and I don't see a use case for s2i on ARM.

@debu999

debu999 commented Jan 9, 2022

It's been a while; has there been any traction on this?
To run our apps on arm64 (i.e. aarch64) we do the following:
Create a custom builder container on a Mac mini, which has an arm64 CPU.

Dockerfile.graalvmaarch64

FROM ghcr.io/graalvm/graalvm-ce:latest AS build
WORKDIR /project
RUN gu install native-image
VOLUME ["/project"]
ENTRYPOINT ["native-image"]

To build the builder image we use:

docker build -f Dockerfile.graalvmaarch64 -t doogle999/quarkus-build-aarch64 .

Then, to build the Quarkus app for aarch64, we point the native build at the same container:

quarkus.native.builder-image=doogle999/quarkus-build-aarch64:latest
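
For reference, that property is typically passed alongside a container build, something like the following (assuming the standard native Maven profile of a generated Quarkus project):

./mvnw package -Pnative -Dquarkus.native.container-build=true -Dquarkus.native.builder-image=doogle999/quarkus-build-aarch64:latest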

Once the app is built, to create an image out of the native-compiled app we use the following Dockerfile.native:

Dockerfile.native (ubi-minimal aarch64 image)

FROM redhat/ubi8-minimal
WORKDIR /work/
RUN chown 1001 /work \
    && chmod "g+rwx" /work \
    && chown root:root /work
COPY --chown=root:root target/*-runner /work/application


EXPOSE 8080
USER root

CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
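
The runtime image is then built and run as usual, something like this (the tag below is just an example):

docker build -f Dockerfile.native -t my-app-native .
docker run -i --rm -p 8080:8080 my-app-native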

Can we have the following images with arm64 support?

  1. quay.io/quarkus/quarkus-distroless-image
  2. quay.io/quarkus/quarkus-micro-image

@cescoffier
Member

The problem is that at the moment, we do not have the hardware to make progress on this task. This should be fixed in the next few months.

@debu999

debu999 commented Jan 10, 2022

Sure, no issues. We are already using Quarkus with arm64. I guess with more Macs coming with arm64 and cloud providers offering arm64 hardware, it would be good to add these for new folks. Overall, everyone is able to make things work with workarounds; hopefully we get it natively documented and supported soon.

Btw: already shared at https://www.nevernull.io/blog/building-a-native-java-application-for-arm64-with-quarkus/

As time passes I am becoming a bigger fan of Quarkus, so I'm just raising this to help the framework gain more acceptance over time. It's definitely a framework to bank on.

Thanks for getting it on top of Java for the community.

@jorsol

jorsol commented Jan 27, 2022

Also, ideally, these should be multi-arch images, not different images per arch, something like the ubi8 ones:

docker manifest inspect --verbose registry.access.redhat.com/ubi8-minimal:8.5

@bentaljaard

bentaljaard commented Jan 27, 2022

Hi guys, as part of the Quarkus IoT community we have built a native ARM builder image that makes use of qemu-user-static to allow us to build ARMv8 images on x86 machines: https://github.com/qiot-project/qiot-ubi-multiarch-builder. Please let us know if we can help with this.
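
For anyone trying the same approach locally, the usual prerequisite is registering the qemu-user-static binfmt handlers on the x86 host, something like the following (a sketch using the commonly published multiarch helper image):

# Register qemu binfmt handlers so arm64 binaries can execute on an x86 host
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# Sanity check: run an arm64 image under emulation
docker run --rm --platform linux/arm64 registry.access.redhat.com/ubi8/ubi-minimal uname -m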

@galderz
Member Author

galderz commented Apr 25, 2022

Thx all for the feedback. We've recently updated this repo to use Cekit 4.1.0, so we should now be able to produce multi-arch images (we needed cekit/cekit#761).

I was exploring how multi-arch images work in docker and ended up executing the same command @jorsol suggested above:

$ docker manifest inspect --verbose registry.access.redhat.com/ubi8/ubi-minimal:8.5
...
				"architecture": "amd64",
				"os": "linux"
...
				"architecture": "arm64",
				"os": "linux"
...
				"architecture": "ppc64le",
				"os": "linux"
...
				"architecture": "s390x",
				"os": "linux"

Our immediate targets are linux/amd64 and linux/aarch64, so I'll be focusing on those.

Thx all for the links and references to previous efforts. I'm currently trying to figure out how to hook the right tooling for each arch into cekit.

@galderz
Member Author

galderz commented Apr 25, 2022

FYI, I'm testing things out in galderz#3. Still WIP

@galderz
Member Author

galderz commented Apr 26, 2022

I've reached a bit of a crossroads, which I'm going to let simmer for a few days. Here's a summary of where I've got to so far:

galderz#3 should have everything required from the cekit perspective, but the issue is that even if an arm64 platform is passed to docker, x86 layers keep being built.

I added the QEMU and Buildx steps required to build on CI, as hinted by Marek in this Zulip chat stream, but that alone won't be enough. In essence, cekit needs to call docker buildx in order to pass in platforms, but even aliasing docker build to docker buildx is not enough because cekit does not use the docker CLI.

Instead, cekit uses the docker Python client, which does not support buildkit. Assuming we stick with the docker builder, the simplest workaround appears to be this one: hacking cekit so that if platforms are passed in, it uses the docker CLI instead of the docker Python client to build. A more complex alternative would be for the cekit docker layer to switch to buildkit.

More potential workarounds exist, such as using the podman or buildah builders with cekit instead of docker. However, each of these might have its own can of worms lurking.

So, the simplest thing right now might be to hack cekit as explained above.
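
For reference, the kind of invocation cekit would ultimately have to delegate to looks roughly like this (a sketch; the builder name and image tag are placeholders):

# One-off: create and select a buildx builder backed by buildkit
docker buildx create --name multiarch --use
# Build both target platforms from a single Dockerfile and push the multi-arch manifest
docker buildx build --platform linux/amd64,linux/arm64 -t quay.io/example/builder-image:test --push .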

@galderz
Member Author

galderz commented Apr 26, 2022

One other workaround, as suggested by Severin, might be to use cekit only to generate the Dockerfile and then just use these buildx GH action steps, assuming the file is generated in the same folder in which docker build . is called.

@galderz
Member Author

galderz commented Apr 28, 2022

We've made good progress with Severin's workaround, see galderz#3. After applying the suggestion above, with a few tweaks, I was able to build multi-arch images on CI (see CI run for galderz@aee10c2).

Then I tried to re-enable the tests, but the aarch64 ones fail, not due to an error but because the image runs very slowly. This is because they're running emulated on top of an x86 VM. The image build messages appear but trickle in very slowly, e.g.

2022-04-27T16:43:28.9702182Z       ========================================================================================================================
2022-04-27T16:43:28.9702485Z       [1/7] Initializing...                                                                                   (63.5s @ 0.16GB)
2022-04-27T16:43:28.9702859Z        Version info: 'GraalVM 22.0.0.2 Java 17 CE'

It takes ~1m for the first native image build message to appear, vs 4 seconds for x86 (snippet taken from here).

I'm not sure how to fix this. To keep the tests, we would need to split them (and possibly the builds too) so they run on an aarch64 VM. Or should we remove/disable the tests for aarch64?

@edeandrea

Just throwing my 2 cents in. I'm not sure what cekit is, so I can't comment on that, but I am using the buildx builder in a GitHub Action to build JVM multi-arch images.

In there I am also manually constructing the manifest lists using the docker manifest command, mainly because I don't want to build and push the images in a single step. I want to build all the images first and, once they all build successfully, push them.
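
For anyone following the same route, the manual manifest-list assembly looks roughly like this (a sketch with placeholder image names):

# Combine separately pushed per-arch images into one manifest list
docker manifest create example/app:1.0 --amend example/app:1.0-amd64 --amend example/app:1.0-arm64
docker manifest push example/app:1.0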

I've also just contributed some changes to the quarkus-container-image-docker extension to use this as well (quarkusio/quarkus#25589).

I haven't been able to work on the native side yet, but everything is stubbed out until this issue is solved.

@galderz
Member Author

galderz commented Jul 29, 2022

I've unassigned myself since @cescoffier is working on a more comprehensive solution (maybe #200?).

To reiterate, please reply to this Stack Overflow question when this is complete.

@easen-amp

Posting as this might help someone else out...

As a workaround in multi-stage Docker builds, instead of

FROM quay.io/quarkus/quarkus-micro-image:1.0
...

use

# Replicating https://github.com/quarkusio/quarkus-images/blob/main/quarkus-micro-image.yaml
FROM registry.access.redhat.com/ubi8/ubi-minimal as ubi
FROM registry.access.redhat.com/ubi8/ubi-micro:latest
COPY --from=ubi /usr/lib64/libgcc_s.so.1 /usr/lib64/libgcc_s.so.1
COPY --from=ubi /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so.6
COPY --from=ubi /usr/lib64/libz.so.1 /usr/lib64/libz.so.1
...

The end result is the same, but the ubi-minimal and ubi-micro images both have arm64 variants.

@cescoffier
Member

I've pushed the new images, which are all multi-arch.

@edeandrea

Hey @cescoffier circling back to this. When you say you've pushed new images, are those new images

quay.io/quarkus/ubi-quarkus-mandrel-builder-image and quay.io/quarkus/quarkus-micro-image:2.0?

@cescoffier
Member

Yes

@edeandrea

Sweet. I will then tackle getting this into superheroes after Devoxx next week. Not sure I want to tackle that at 2pm on a Friday before being gone for a week :)

What could possibly go wrong? :D
