Add multi-stage dockerfile #556
Conversation
This is very interesting! This does remove several of the benefits of having the split container image build pipeline, but I also get the desire to have a faster iterative loop. This also gives me an interesting idea on how to maintain the incremental cache on build-image powered Docker builds, without it running into and invalidating incremental changes on local builds and vice versa (which I've also run into a lot). I'll give it a shot on my end, and we can compare and contrast approaches.
What benefits are you thinking of? That's essentially what this gives you: you can stop the build at any of the intermediate stages.
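As a rough illustration of stopping at an intermediate stage with `docker build --target` (the stage and image names here are placeholders, not necessarily the ones used in this Dockerfile):

```shell
# Stop the build at a named intermediate stage; "builder" is an
# illustrative stage name.
docker build --target builder -t myproject-build .

# With no --target, every stage runs and the final runtime image is produced.
docker build -t myproject .
```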
I'm also going to have a go at adding a
Okay, I should clarify a few things. In general I'm largely in agreement with you; this PR is just currently divergent because I started with the simplest image that could work, but we should align this with what we currently have.
This should be changed so it’s one and the same.
Well, like the above, we should just have the current make commands use this dockerfile for their builds, so if you're someone who's using Make, there are zero changes for you. I will also again note that this project will (and does) have a higher number of potential contributors on Windows, being a gamedev-related project, and since you can already develop cross-platform without make because of cargo, I think it makes sense that Unix is also not a requirement for building an image, especially for once-off or infrequent contributors.
The image is currently ~38MB according to GCR; again, the reason for Debian was just that I know glibc works with Rust there with no changes. Changing to distroless is no problem.
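A hedged sketch of what that final-stage swap might look like, assuming a build stage named builder and an illustrative binary path (neither is taken from this PR):

```dockerfile
# Final stage only: copy the release binary out of the build stage into a
# distroless base instead of Debian. Stage name and paths are illustrative.
FROM gcr.io/distroless/cc
COPY --from=builder /app/target/release/myproject /usr/local/bin/myproject
ENTRYPOINT ["/usr/local/bin/myproject"]
```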
This one I'm confused about: this multi-stage image removes the need for the build image, because that's essentially what all the stages before the final one are, build images that prepare and compile the binary. We can have more stages that add extra info or args for compiling different platforms. I'll also just note separately that we've run into strange issues with the build image step that we haven't been able to report because we haven't found anything actionable or concrete (also, some of them I thought were just me, but I have seen other people report them recently), things like running into illegal instruction kills on macOS or the build just stopping with no error output on WSL.
In theory, we could pull the linux binary from the image -- but then it does diverge from how we build the Windows binary (although we build the macOS binary in a different way as well). The build-image is handy because it encapsulates all the tooling we use - mdbook, htmltest, gcloud, etc -- having that all in one place makes a lot of tasks very easy to implement -- so if we have a multi-stage docker image, I don't think we can ever get away from maintaining multiple images to keep them all in sync (this is why I've generally stayed away from multi-stage images where you are publishing both binaries and container images).
That is a 100% valid point for sure. I see other infrastructure projects (K8s, etc.) use Make + Docker and require Windows users to use WSL, but I expect it's a much smaller number of Windows contributors in those parts. Slightly tangential point on this then - I was digging through cargo extensions and came across: If we moved from Make to this (or another cargo-based task runner extension), I'm also wondering if that would potentially move Windows users off of the need for WSL, which will hopefully fix much of the issues? Since people will have cargo installed anyway, installing a cargo plugin doesn't seem like an onerous ask if it helps with cross-platform development. WDYT?
Please do report - even if it's just a screen grab of what happened in a "Strange things that happen on macOS" and a "Strange things that happen on Windows" issue, so we can collate and see if we can troubleshoot.
Why would it diverge? If we set up the build stage of the docker image for cross-compilation, we can have the same image build a Windows, Linux, and macOS binary, with the only thing "changing" being the target that you provide as an argument to the build.
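A minimal sketch of that idea, assuming an illustrative TARGET build argument; the per-target toolchain setup (cross linkers, Windows/macOS SDKs) is elided:

```dockerfile
# Build stage parameterised by the Rust target triple; toolchain setup for
# non-Linux targets is omitted here.
FROM rust:1 AS builder
ARG TARGET=x86_64-unknown-linux-gnu
WORKDIR /app
COPY . .
RUN rustup target add "${TARGET}" && \
    cargo build --release --target "${TARGET}"
```

It would then be invoked with something like `docker build --build-arg TARGET=x86_64-pc-windows-gnu --target builder .`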
I'm also not quite sure I understand why we can't do the same here? There's no reason we can't make the
Yeah, I'm totally in favour of this. I think since you have to have cargo anyway, asking someone to run one of these isn't an onerous ask. There are a few other ones I know of that you might want to have a look at. I have no real preference myself (except maybe a slight preference for Earthly because I think it's a cool new technology), so whichever tool you find that works best for you is fine with me.
That seems like a complicated multi-stage Docker image, but it does seem possible. The macOS build image is rather complicated, hence it was easier to just use it for mac builds -- I wonder whether using it as a base image would work, or if its defaults would overwrite anything else you would want to do. Maybe a silly question - but do build steps cache between runs? Every time I've tried them I found they didn't, but maybe I've done it wrong? (Docs seem to indicate that they do?)
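(For what it's worth, completed stages do cache between local runs as ordinary layer cache; a minimal sketch, with an illustrative tag:)

```shell
# First build populates the layer cache for every stage.
docker build -t myproject:dev .

# Re-running after a source-only change reuses the cached dependency layers;
# only stages whose inputs changed are rebuilt.
docker build -t myproject:dev .
```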
I think I'd have to see an example of more than building a linux container, to be honest. I'm not sure it would be a good iterative loop, because there is only one level of caching (docker). Also, I'd like to see how you would automate pulling the binary out of the docker image (I'm assuming

I'm trying to work out how you would run the agones integration test? How would you get the kubernetes and gcloud credentials into the container? If you wanted to copy them in, you would have to run the

If you want to do the equivalent of
To be clear, I'm not proposing we use docker for anything but making a container containing the build tools, and building the final artifacts. Multi-stage Docker is not a replacement for make; it's a replacement for having two separate docker images and trying to connect them together with make like we have now. We'd still need make or another command runner for actually running things like k8s or gcloud. This would just make it so that
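One hedged way to handle the earlier question about pulling the compiled binary back out of the image (image, stage, and path names are illustrative):

```shell
# Build up to the stage that produced the binary, then copy the artifact
# out of a temporary container onto the host.
docker build --target builder -t myproject-builder .
id=$(docker create myproject-builder)
docker cp "${id}":/app/target/release/myproject ./myproject
docker rm "${id}"
```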
That makes sense - I think it's going to get messy, given the variety of artifacts we end up generating. I think it will take building out a replacement for several targets that we currently have in Make right now to be able to see if it's really viable or not -- as each time I think through a target, I end up getting a little stuck, or run into wrinkles I'm not quite sure are going to work out. But, working through it, here are a few ways it might work (in my head at least), and the ramifications therein:

Rust + General build images

We have three active images - one for Rust + Container building (Dockerfile in root of project), and a second one in
Mac binary builder:
Running through each of the targets,
This feels like the wrong approach, as you lose functionality that's very useful.

Build step entirely replaces
Build Succeeded 🥳

Build Id: 4c910dc8-c358-4a25-8126-e7bad6e4a8d4

The following development images have been built, and will exist for the next 30 days:

To build this version:
Fixes #553, at least for me. This is much faster on both macOS and Windows (it also removes the WSL requirement on Windows, as you can now build with `docker build .`), and thanks to `cargo-chef` it has much better caching. This doesn't touch the CI infra, as I figured it would be better to build up the support needed first. @markmandel is there anything else we need to include in the image other than the binary?
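For reference, a minimal sketch of the cargo-chef multi-stage pattern described above; stage names, base images, and the binary name are illustrative rather than the exact contents of the Dockerfile in this PR:

```dockerfile
FROM rust:1 AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
# Produce a recipe that captures only the dependency graph.
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Dependencies build in their own layer, so it stays cached until the
# manifests change.
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release

FROM debian:bullseye-slim
COPY --from=builder /app/target/release/myproject /usr/local/bin/myproject
ENTRYPOINT ["/usr/local/bin/myproject"]
```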