cosa/mantle integration #163
So I think this proposal has interesting overlap with the discussion in #52. What I was proposing there is that we keep including the tools from mantle inside the coreos-assembler container, but we just pull them from another location (i.e. another build system takes care of them). Carrying this discussion further:

- Option A: this proposal (#163). Split mantle out from COSA. We build mantle and create a container. The build system and local devs handle grabbing both the mantle container and the coreos-assembler container to perform all the actions they need to do.
- Option B: split the build of mantle out from COSA (i.e. #52). The coreos-assembler container build pulls in pre-built binaries from mantle.

The advantages listed in the description:
I think we can still achieve this goal with option B by just using a separate verb in CoreOS Assembler.
With option B the number of rebuilds of the container would increase. For the prod side of things this shouldn't be a big deal, because the container build should be automated anyway. For the local dev side, right now we check rpm dependencies on the host, and we could extend that to check that the kola/ore/plume software is above a defined minimum release version. WDYT?
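A host-side check along those lines could be sketched roughly as follows. This is a hypothetical sketch: the minimum version, and the assumption that `kola version` prints a parseable version string as its last field, are both guesses rather than anything from the actual cosa scripts.

```shell
#!/bin/sh
# Hypothetical extension of the host dependency check to mantle tools.
# The minimum version and the "kola version" output format are assumptions.

min_kola_version="0.12.0"

# version_ge A B: succeed if dotted version A >= B (uses GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check_kola() {
    have=$(kola version 2>/dev/null | awk '{print $NF}')
    if [ -z "$have" ]; then
        echo "error: kola not found on host" >&2
        return 1
    fi
    version_ge "$have" "$min_kola_version" ||
        { echo "error: kola $have < $min_kola_version" >&2; return 1; }
}
```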
With option B the container size would be slightly reduced from what it is today, because we'd only include the binaries built from mantle/ rather than its full source tree. We'd still ship the mantle binaries themselves, so it's not as much of a reduction, but then again you only need one container \o/ :) Of course with option B (similar to option A) we'd need to figure out building the mantle pieces somewhere stable and adding them into the coreos-assembler container.
I'm personally still a fan of keeping them separate. Having the mantle container could also be useful in its own right. On the CL side we could probably replace the CL Jenkins job that rebuilds mantle every half hour if there are changes (@bgilbert @dm0- thoughts?). Since mantle is all Golang and statically linked, the container should also be pretty small (assuming a multistage build).
I disagree. Instead of a container which does one thing, we'd have a container which does a few things. In general we ought to separate out things if we have the chance. I don't like bundling things just because we can.
Sure it's automated, but that's still just burning CPU cycles for no reason. If we avoid extra logic for dep checking for local dev that's a win in my book. Keep everything as simple as possible, otherwise things have a tendency to explode in complexity.
Building mantle into a container automatically would be pretty trivial: just set up a Dockerfile and hook the repo up to Quay. Just building the artifacts straight would probably be harder.
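A multistage build along those lines might look like this. It's only a sketch: the base images, the `./build` invocation, and the output paths are illustrative guesses, not taken from the mantle repo.

```dockerfile
# Illustrative multistage build for the mantle tools.
# Base images, build invocation, and output paths are assumptions.
FROM registry.fedoraproject.org/fedora:latest AS builder
RUN dnf -y install golang git && dnf clean all
COPY . /src
WORKDIR /src
RUN ./build kola ore plume

# mantle's tools are static Go binaries, so the final image
# needs no Go toolchain and stays small.
FROM registry.fedoraproject.org/fedora-minimal:latest
COPY --from=builder /src/bin/ /usr/local/bin/
```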
But by that same logic we'd need a new container for everything we want to do within CoreOS Assembler: a container for rpm-ostree/ostree, a container for the image build, a container for generating an ostree-in-container piece. I feel like part of the point of CoreOS Assembler is the gluing together of everything you need to build/test/release.
I'd argue assembling the image is coupled enough to be bundled. If we do want a higher-level abstraction for chaining everything together, I'm down, but it should chain the different stages (i.e. via multiple docker invocations). This would then be mirrored in the CI pipeline. It also simplifies things like getting credentials into COSA, since (at least for FCOS) it shouldn't need any credentials to build everything. The mantle container will still need them, but at least then it will be clear which parts need which creds.
Yeah, the idea in my head at least is that we have different stages of the pipeline call into coreos-assembler to do small pieces of the work. For example:
But when you're doing things locally you'd just run
I agree on roughly the same workflow (i.e. stages 1-5) but still think mantle ought to be its own container. We've talked in #75 about having a top-level entry point script that handles launching podman/docker. What if we had that script be the common bit, and it used different containers for different steps if it needed to? In your example, steps 1-3 would use the assembler container, step 4 would use the mantle container, and step 5 might also use the mantle container (e.g. for ore), or some other container, or might even just use the aws CLI.
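As a rough sketch of that entry-point idea, the script could map each verb to a container image and launch it. The verb names and image locations here are hypothetical (only `quay.io/coreos-assembler/coreos-assembler` is a real image name; the mantle image is invented for illustration):

```shell
#!/bin/sh
# Hypothetical entry-point sketch: one wrapper that picks a container
# image per verb. Verb names and the mantle image tag are illustrative.

# Map a verb to the container image that should run it.
container_for() {
    case "$1" in
        fetch|build|buildextend-*) echo "quay.io/coreos-assembler/coreos-assembler" ;;
        kola|ore|plume)            echo "quay.io/coreos/mantle" ;;
        *) echo "unknown verb: $1" >&2; return 1 ;;
    esac
}

run_stage() {
    image=$(container_for "$1") || return 1
    # podman could equally be docker here
    podman run --rm -v "$PWD":/srv --workdir /srv "$image" "$@"
}
```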
I think we're mostly splitting hairs here. We could use the mantle container for steps 4/5 (and others), but the question is: is it better than, similar to, or worse than including it? If the answer is similar or worse, then we should just include it in the assembler container. If the answer is better, then let's create a separate mantle container.
We could do that, but that script is managed in the coreos-assembler repo, so IMHO might as well include mantle in the container :) Either way, I think either of these solutions works fine; we're both just trying to find the best solution. Also, we can pivot on the choice we make now if it proves to be the wrong one.
I'm not inherently opposed to multiple containers, but it (along with an external container-launching entrypoint) rather directly conflicts with my current workflow of doing most of my day-to-day work inside a single container.
We could also build it in a separate container and just build COSA. I agree that part of the awesomeness of COSA is that it's "context" aware. So e.g.
@cgwalters The issue is the need for nested containers, right? @jlebon If anything I think we'd want to. I really don't like integrating everything when it's not necessary.
Yes, I think that would be optimal, and it fits precisely into what I was proposing in #52. Upstream container runtimes can use an external image as a build stage: https://docs.docker.com/develop/develop-images/multistage-build/#use-an-external-image-as-a-stage
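Using the external-image-as-a-stage approach from that link, pulling pre-built mantle binaries into the assembler image could look roughly like this. The mantle image name and binary paths are assumptions for illustration only:

```dockerfile
# Pull pre-built mantle binaries from a separately built image.
# The mantle image name and binary locations are hypothetical.
FROM quay.io/coreos/mantle:latest AS mantle

FROM registry.fedoraproject.org/fedora:latest
# ... install the rest of coreos-assembler's dependencies here ...
COPY --from=mantle /usr/bin/kola /usr/bin/ore /usr/bin/plume /usr/bin/
```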
Possible fix in #297.
Repurposing this one to be about mantle/cosa in general. I still actually lean towards merging the projects, though it's hard to do nicely without destroying git history AFAIK.
Without commenting on the merits of merging the projects: note that the parents of a git merge don't need to have a common ancestor. |
Per the 2020.02.25 Cabal, we decided to move forward with evaluating the feasibility of this.
We're going to merge it into the coreos-assembler repo. coreos/coreos-assembler#163
Basically it doesn't make sense to separate building, testing, and uploads. There are too many entangled problem domains, among them:
- How to run qemu
- Parsing build schemas
- Uploading to one AWS region, running tests, then replicating

etc. Closes: coreos#163
Basically it doesn't make sense to separate building, testing, and uploads. There are too many entangled problem domains, among them:
- How to run qemu
- Parsing build schemas
- Uploading to one AWS region, running tests, then replicating

etc. This merges the https://github.com/coreos/mantle project into coreos-assembler. Closes: coreos#163
As per discussion, we should look into moving the mantle bits to their own container. They aren't used to build the image; only for uploading, releasing, testing, etc.
Splitting them out would have the following advantages:
There are of course a few disadvantages: