Multi platform images #2273
base: main
Conversation
🦙 MegaLinter status:
| Descriptor | Linter | Files | Fixed | Errors | Elapsed time |
|---|---|---|---|---|---|
| ✅ BASH | bash-exec | 5 | | 0 | 0.01s |
| ✅ BASH | shellcheck | 5 | | 0 | 0.09s |
| ✅ BASH | shfmt | 5 | 0 | 0 | 0.37s |
| ✅ COPYPASTE | jscpd | yes | | no | 3.42s |
| ✅ DOCKERFILE | hadolint | 123 | | 0 | 17.86s |
| ✅ JSON | eslint-plugin-jsonc | 23 | 0 | 0 | 2.28s |
| ✅ JSON | jsonlint | 21 | | 0 | 0.19s |
| ✅ JSON | v8r | 23 | | 0 | 14.3s |
| ✅ MAKEFILE | checkmake | 1 | | 0 | 0.01s |
| MARKDOWN | markdownlint | 255 | 0 | 254 | 28.05s |
| ✅ MARKDOWN | markdown-link-check | 255 | | 0 | 5.89s |
| ✅ MARKDOWN | markdown-table-formatter | 255 | 0 | 0 | 29.08s |
| ✅ OPENAPI | spectral | 1 | | 0 | 1.41s |
| PYTHON | bandit | 200 | | 61 | 3.18s |
| ✅ PYTHON | black | 200 | 0 | 0 | 4.78s |
| ✅ PYTHON | flake8 | 200 | | 0 | 1.93s |
| ✅ PYTHON | isort | 200 | 0 | 0 | 0.84s |
| ✅ PYTHON | mypy | 200 | | 0 | 10.74s |
| ✅ PYTHON | pylint | 200 | | 0 | 14.4s |
| PYTHON | pyright | 200 | | 319 | 22.8s |
| ✅ PYTHON | ruff | 200 | 0 | 0 | 0.5s |
| ✅ REPOSITORY | checkov | yes | | no | 35.64s |
| ✅ REPOSITORY | git_diff | yes | | no | 0.37s |
| REPOSITORY | grype | yes | | 1 | 10.34s |
| ✅ REPOSITORY | secretlint | yes | | no | 11.58s |
| ✅ REPOSITORY | trivy | yes | | no | 25.56s |
| ✅ REPOSITORY | trivy-sbom | yes | | no | 1.07s |
| ✅ SPELL | cspell | 665 | | 0 | 26.81s |
| ✅ SPELL | lychee | 335 | | 0 | 3.6s |
| ✅ XML | xmllint | 3 | 0 | 0 | 0.37s |
| ✅ YAML | prettier | 160 | 0 | 0 | 4.92s |
| ✅ YAML | v8r | 102 | | 0 | 161.76s |
| ✅ YAML | yamllint | 161 | | 0 | 1.58s |
See detailed report in MegaLinter reports
It isn't the best for us, but I think that what is done elsewhere to work around this is, instead of loading (
Internal contributors can use push because they have access to credentials, but I wouldn't like forks to be able to push to the Docker Hub MegaLinter images ^^
Of course, but the fork would need to log in to the registry to be able to do that, and they couldn't.
Force-pushed from cdc6de6 to 14f6821
@echoix I have done the rebase and I am working on this PR, to see if I can at least clear up the doubt about which linters will be compatible with arm64 and which will not. My idea is to create from
Yep, that's a good idea. Last weekend I happened to look at the status again and saw that there has been some progress on the Docker issue in the last few weeks, on Docker's side, but it was only just released and isn't a complete fix. I think it will start to work easily one day, without manually playing with manifests :)
Once we get the infrastructure sorted out correctly, my plan is that we could start with only one flavor, even without all linters included, and ramp up from there. That will leave some time to sort things out. What is still bugging me is that I haven't found a way that would allow us to test the flavors quickly in CI. It's a bit risky to ship untested images.
Using too much memory?
We can maybe take a look at why the two PowerShell jobs are taking more than an hour to build...
@echoix there's probably a blocking prompt somewhere
Maybe... but even the raw logs don't contain anything other than waiting for a runner to pick up the job.
I see
During the execution we didn't get any more output unless we had that section opened while it ran, and for the last 10 minutes (where I had some sections open), no new log lines were output. In the gear menu there's a link to see the raw logs, but it wasn't available yet. We wanted other CI jobs to run, and they were blocked there.
Now, we can see what was running:
On many of the cancelled tasks, we see that there wasn't any output for a long time, more than an hour. I see that the rest of the image was building fine; the amd64 steps were practically finished inside of a minute (started at 20:17:11, and 20:18:03 was the second-to-last step for amd64), even when interleaving steps for both platforms together (since the emulation for arm64 takes longer, it will always finish last, but still). There's probably some crash or some really long emulated step for it to not have any output in an hour.
Force-pushed from af6a7ed to c0d5dbe
Force-pushed from 9b31731 to 5e9c6a1
Not sure about your PowerShell issue, but I've been trying to do something similar. We have an internal requirement that causes us to rebuild most tools (so we can patch CVEs before the upstream project), and I noticed that many of the tools which don't support native arm can be easily compiled. For example, any of the Go binaries (of which there are many) can be built using the following Dockerfile snippet:

```dockerfile
# syntax=docker/dockerfile:1.5-labs
FROM --platform=$BUILDPLATFORM scratch as gitleaks-download
ADD --chown=65532 https://github.com/gitleaks/gitleaks.git#v8.16.2 /gitleaks

FROM --platform=$BUILDPLATFORM golang:alpine as build-gitleaks
COPY --link --from=gitleaks-download --chown=65532:65532 /gitleaks /gitleaks
WORKDIR /gitleaks
USER 65532
ENV GOCACHE=/gitleaks/.cache
ARG TARGETARCH
ENV GOARCH=$TARGETARCH
RUN --mount=type=cache,id=gopkg-${BUILDARCH},sharing=locked,target=/go/pkg,uid=65532 \
    --mount=target=/gitleaks/.cache,id=go-${BUILDARCH},sharing=shared,type=cache,uid=65532 \
    ## Patch CVEs
    ## END Patch
    go mod tidy \
    && go mod vendor
RUN --mount=target=/gitleaks/.cache,id=gitleaks-cache,sharing=shared,type=cache,uid=65532 \
    CGO_ENABLED=0 go build -mod=vendor -o /tmp/gitleaks

FROM scratch as gitleaks-fs
COPY --link --from=build-gitleaks /tmp/gitleaks /usr/bin/gitleaks
RUN ["/usr/bin/gitleaks", "--help"]

FROM scratch as gitleaks
COPY --link --from=gitleaks-fs /
```

This will use the native CPU to build both binaries, then check them using the target architecture (the final FROM just cleans up the output layers if you build that target). If you don't want to use
I like your pattern! One of my concerns with the multi-arch images is build time. Ideally we would use published, pre-compiled versions for the correct architecture, and only fall back to compiling when needed.

Some of the problems where we might need help:

- Finding a way to build and test multi-arch images in CI. On the surface, almost everything is there, but in practice we hit a gray spot with Docker that wasn't quite resolved last time I looked: we can build multi-arch images, but cannot load them. It seems we need to push to a registry, then pull one image into the local Docker daemon to run it (see the sketch after this comment). We would like to avoid the old manifest-based approach of publishing individual images with tags per arch, plus a third tag that is only a manifest linking the two images.
- Is there a way to run an image of another arch through QEMU to run a test in the container? (Like making sure it runs; for tools that aren't static, single-file binaries the way the Go-based linters are, making sure they can actually start.)
- Lastly, what is missing before we can start enabling linters for other arches (once the minimal pushing of the images can be done; pushing directly to the registry is a workaround) is to really finish parsing the YAML descriptor files so the default behaviour is followed when overrides aren't used (to allow for exceptions that accommodate each linter), and then write out the Dockerfiles (with that build.py script, we usually know what Dockerfile we want to be output). I assume that's out of your specialty. 😃
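A rough sketch of that push-then-pull workaround; the throwaway registry, image name, and tag below are illustrative assumptions, not anything MegaLinter currently uses:

```bash
# Illustrative only: local registry, image name, and tag are assumptions.
# Start a throwaway registry inside the CI job.
docker run -d --name tmp-registry -p 5000:5000 registry:2

# Build both platforms; `--load` cannot import a multi-platform result into
# the local daemon, so push straight to the local registry instead.
# (A docker-container buildx builder may need host networking to reach localhost:5000.)
docker buildx build \
  --platform linux/amd64,linux/arm64/v8 \
  -t localhost:5000/megalinter-test:ci \
  --push \
  .

# Pull the arm64 variant back through QEMU and make sure it can at least start
# (assumes the image contains uname, e.g. an alpine-based image).
docker run --rm --platform linux/arm64/v8 \
  --entrypoint uname localhost:5000/megalinter-test:ci -m
```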
@waterfoul, I forgot to ask about the chown handling in your example. I know about Linux permission modes like 755, 655, 777, 400, etc., but is 65532 a permission or a user? What does it solve, and what were the drawbacks when you implemented this? I know that we have had, and might still have, issues especially with some node modules that download with high user IDs that cause problems, and the workaround of changing the owner to root is another source of problems, especially when the files created by MegaLinter, mounted from the computer of a user running it locally, become undeletable. Is this something that you and your team have encountered and solved?
I understand the concern, but there are quite a few things you can leverage to shrink the image if you build it yourself. This example doesn't do those (yet) since I'm still working on shrinking our internal image. What is the longest build time that would be acceptable?
BuildKit already does this for you if the environment is configured correctly. Try using the flag "--platform linux/amd64,linux/arm64/v8", which should automatically use a local QEMU to run the arm layers, and the final tagged image should already contain both images with the manifest wired up correctly. If you want to build them separately, you can run both builds with different --export-cache/--import-cache (or --cache-from/--cache-to with docker buildx build) registries and no outputs. You can then add flags to import both separate registries for the final build, and it should import the caches and merge the images (roughly like the sketch below).
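A sketch of that separate-builds-with-shared-cache flow; the registry, cache references, and tag are placeholders, and this assumes a buildx builder with QEMU registered:

```bash
# Placeholders throughout: registry, cache refs, and tag are assumptions.
# Per-architecture builds that only export build cache (no image output).
docker buildx build --platform linux/amd64 \
  --cache-to type=registry,ref=registry.example.com/megalinter-cache:amd64,mode=max \
  --output type=cacheonly .

docker buildx build --platform linux/arm64/v8 \
  --cache-to type=registry,ref=registry.example.com/megalinter-cache:arm64,mode=max \
  --output type=cacheonly .

# Final multi-platform build imports both caches, so each platform's layers
# are reused, and pushes a single tag whose manifest list covers both.
docker buildx build --platform linux/amd64,linux/arm64/v8 \
  --cache-from type=registry,ref=registry.example.com/megalinter-cache:amd64 \
  --cache-from type=registry,ref=registry.example.com/megalinter-cache:arm64 \
  -t registry.example.com/megalinter:multiarch --push .
```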
Yes and no. I'll take a look. I have a pretty broad set of skills at this point, so I might be able to help (I was a full-stack app dev before I moved into DevSecOps). There are quite a few things I've found in the Dockerfile build process that I'd like to adjust, so I'll take a look at that.
We have a policy that we only run as root when explicitly necessary; we even do this when building, since any process running in a build is just running inside a container. If you run everything as root, a poisoned binary can be used to escape the build process and potentially be used for nefarious purposes like installing remote shells inside the build infrastructure. While difficult, it's not impossible, and running as a non-root user thwarts 99% of attempts, since you need root to access most of what could be used to escape the container (including most Docker engine exploits). 65532 is just a generic user ID we used for the MegaLinter build. Making the container run under a non-root user would have issues (including those you listed, and others) which could be solved, but building as that user in a separate stage whose output is a static binary has no impact on the final image, except for the owner ID set on the binary. I'd like to eventually help you tackle the final user, but that should be a separate effort.
This is an exciting thread to see! For when the time comes, I have been very slowly making progress on setting a non-root user and running the container as a non-root user in #1985.
@waterfoul I see you have a lot of experience and ideas, so it makes perfect sense for you to continue this PR if you want to.
Can this announcement be a wake-up call for this long-awaited new feature? :p
But we need to manage to do all this with 14 GB of storage and 7 GB of RAM...
@echoix if you need more space I could ask @oxsecurity :)
I'm not sure yet, but at least they're fast. That kind of fast: OSGeo/grass#3395 (comment). To speed up serial-only tests like that, unoptimized, the raw CPU speed and disk access speed must count for something.
@echoix @nvuillam I just merged main into this branch to get it back up to date. What do we do to unlock it?
/build
I want to unlock this; what do we do, @echoix? cc @nvuillam
This PR is probably the oldest PR in MegaLinter's history 🥲
We continue the discussion from #1553