mantle: use mantle rpm #297
Conversation
```
# Mantle suite
#FEDORA mantle-kola
#FEDORA mantle-kolet
#FEDORA mantle-plume
```
do we need plume? cc @sayanchowdhury
We planned to use plume for COSA too, so yes. During the discussion we had on call, plume needed to have the feature to upload the image as a local disk image for FCOS, that is not implemented but is kind of trivial-to-medium issue.
this seems to cut down on our container size by over a GB too:
Seems sane to me! I assume the size reduction is from dropping the golang compiler.
Nice, I like the size reduction! (Though this comes at the expense of course of not getting changes in as fast as we might want.)
> This means you can't use the […]

How frequently is the mantle RPM built?
That is one drawback of this change.
Right now it's manual 😢 Though we are looking to solve the problem there with all of our rpm content over in coreos/fedora-coreos-tracker#84. I should probably start making some progress on that.
That would also happen though if we built this container in OSBS.
```diff
@@ -49,3 +49,9 @@ ignition

# for grub install when creating images without anaconda
grub2-pc

# Mantle suite
#FEDORA mantle-kola
```
This isn't Fedora specific.
The rpm doesn't exist in EL right now, so I was thinking it would be better to not include it?
I guess other changes I've made mean we don't support both ways anyway, so I should just force it on both and require us to get the rpm in.
sound good?
It'd totally break RHCOS to drop this though.
(And yeah, the additional RPM requirement is another big pain point sadly...)
applying hold label until we can get an installable mantle rpm for EL7 COSA builds
@dustymabe pointed me at this EPEL build of the mantle package. However, we are trying to build the EL7 COSA without any EPEL bits.

My proposal is to merge this change as is and operate the RHCOS pipeline in a "split-brain" approach, where we build RHCOS with an EL7 COSA, but we run any kola tests […].

@cgwalters @darkmuggle WDYT?
Needs a rebase ... other than that 👍
I'm still not sold on this on a conceptual level. It both means that we might end up testing with old versions of mantle (or need to constantly be updating the RPM), and thus miss new tests, and that we'll build it differently when binding in source vs. when not. Finally, we'll still need to ship the go compiler when we add support for building RPMs on the fly (like […]).
Yeah, I can say for sure this will be a pain. Unless of course we deploy and own our own "build from git master" rdgo-like system. (edit: this subthread was covered here: #297 (comment))
If we add the rdgo system, ensure the artifacts are built the same way both in koji and with rdgo, use rdgo for the developer workflow, and can ensure the mantle rpm is kept up to date (i.e. automated), then I'm fine with that. But I don't want two separate build paths, and I don't want to miss tests because we forgot to update an rpm.
Isn't […]? The other conversation item that stood out is needing newer Kola bits.

From my point of view, the solution to keep everyone happy would be to have different containers for the use-case. I'm thinking: […]
With multi-stage builds we can fix the Kola issue. In rough pseudo-code (YMMV): […]

Thoughts?
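The multi-stage idea above could look something like the following. This is a hedged sketch only: the base images, paths, and the `./build` invocation are assumptions for illustration, not taken from this PR.

```dockerfile
# Hypothetical sketch of the multi-stage approach: compile mantle in a
# throwaway builder stage, then copy only the resulting binaries into
# the final image so the golang toolchain never ships in the container.
FROM registry.fedoraproject.org/fedora:latest AS builder
RUN dnf install -y golang git
RUN git clone https://github.com/coreos/mantle /mantle && \
    cd /mantle && \
    ./build kola kolet   # assumed build entry point; adjust to the real one

FROM registry.fedoraproject.org/fedora:latest
# Only the built binaries are carried over; the compiler stays behind,
# which is where the >1 GB size reduction discussed above would come from.
COPY --from=builder /mantle/bin/ /usr/lib/coreos-assembler/
```

The trade-off discussed in this thread still applies: the builder stage always compiles from git master, so it keeps tests fresh at the cost of a compile on every container build.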
For those interested in my multi-stage idea, the PR against this branch is over at dustymabe#1
Right. But also install newer stuff in it the same way it would be installed if you did a full build. If we switch to using RPMs for mantle before having a way of building the rpm locally then we get divergence between developer and release workflows.
Right. Currently we pull in the latest mantle every time the container is built and build it directly (i.e. not as an RPM). If we don't have RPMs being automatically built for mantle and switch to them, we'll get lag between when new tests are added, when the mantle rpm is built, and when that new rpm is included in the container.

The dual container idea wouldn't solve the lag issue, since it's inherent to using RPMs unless the RPMs are automatically built. If we did go down the dual container route, we'd need to be very careful to ensure we build things the same way in both.
That's not enough for me at least, because I need to work on rpm-ostree too, which pulls in both C and Rust (plus cmake, autotools, etc., plus additional stuff for the test suite). It's just easier for me to have a dev workflow that involves installing the coreos-assembler components inside my existing dev container.
Right, I think what we do is have the base container built, and then install the developer toolchains that are needed, but do the builds serially.
How far do we go though? Is it […]?
whew.. so much discussion when I leave for a day :)

I think we are going to have to have ways to get rpms built in a more continuous fashion. This isn't just a problem for the mantle codebase (kola), but for many more of our tools, which we are developing at a fast pace as well. Obviously building kola every time we build the container is one way to keep it up to date, but as Colin mentioned this doesn't scale well to all the build tools and FCOS components that will need to be built, which is why I think solving it more generically is going to be something we want to do. This would be something like coreos/fedora-coreos-tracker#84, which obviously is just a concept right now, so no real solution at this moment.

So where are we right now? We have a few options:

1 - wait for coreos/fedora-coreos-tracker#84 to get implemented and leave it the way it is for now (this has implications for multi-arch, as this was one case where it was going to be nice to be able to just install the rpm for the right architecture)

As for […]: should we discuss this in another forum? We do have the community meeting tomorrow, but I don't know if that is the right place.
This means we can stop building it during the container build. It also means we get multi-arch by default. The rpm is built against:

- x86_64, ppc64le, aarch64, i686, s390x, armv7hl
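With the rpm available, the clone-and-compile step in the container build could reduce to a plain package install. The package names below come from the manifest change in this PR; the exact install line is a sketch, not the merged implementation.

```dockerfile
# Sketch: instead of cloning and compiling mantle during the container
# build, install the prebuilt rpms. dnf resolves the rpm matching the
# container's architecture, which is how multi-arch comes "by default".
RUN dnf install -y mantle-kola mantle-kolet mantle-plume
```

The lag concern raised in this thread is the flip side: this line only ever installs whatever rpm was last built, so new kola tests land in the container only after a new rpm build.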
rebased on latest master
closing this for now. Talked with @ajeddeloh @darkmuggle @cgwalters in IRC. We're going to wait until we make hacking on rpms + building rpms more continuously happen before we start using the mantle rpm here.