
mantle: use mantle rpm #297

Closed
wants to merge 1 commit into from

Conversation

dustymabe
Member

This means we can stop building mantle during the container build.

It also means we get multi-arch by default. The rpm is built for:
- x86_64, ppc64le, aarch64, i686, s390x, armv7hl (see the sketch below)
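
A rough sketch of the install step this enables, assuming a dnf-based container build (the actual change only adds the mantle subpackages to the deps list; this block is illustration, not the diff):

    # Sketch only: install the prebuilt mantle subpackages from the Fedora repos
    # instead of cloning and compiling mantle during the container build.
    RUN dnf install -y mantle-kola mantle-kolet mantle-plume \
        && dnf clean all

Because the packages are built in Fedora for each of the architectures listed above, the same install step works unchanged on non-x86_64 builders.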

@dustymabe dustymabe requested a review from ajeddeloh January 25, 2019 21:24
# Mantle suite
#FEDORA mantle-kola
#FEDORA mantle-kolet
#FEDORA mantle-plume
Member Author

do we need plume? cc @sayanchowdhury

Member

We planned to use plume for COSA too, so yes. During the discussion we had on the call, plume needed the ability to upload the image as a local disk image for FCOS; that is not implemented yet, but it is a trivial-to-medium issue.

@dustymabe
Member Author

this seems to cut down on our container size by over a GB too:

[dustymabe@media coreos-assembler (upstreammaster *%>)]$ sudo podman images | grep coreos-assembler
localhost/coreos-assembler                  latest   f8ca10933f4d   28 minutes ago      1.91 GB
quay.io/coreos-assembler/coreos-assembler   latest   262941aa9836   28 hours ago        3.08 GB

@cgwalters
Member

Seems sane to me! I assume the size reduction comes from dropping the golang compiler.

@jlebon jlebon left a comment
Member

Nice, I like the size reduction! (Though of course this comes at the expense of not getting changes in as fast as we might want.)

@ajeddeloh
Contributor

This means you can't use the ./bottlecap --dev mode to make changes to mantle and have them picked up inside the container. This is esp. useful for writing kola tests.

How frequently is the mantle RPM built?

@dustymabe
Member Author

dustymabe commented Jan 27, 2019

This means you can't use the ./bottlecap --dev mode to make changes to mantle and have them picked up inside the container. This is esp. useful for writing kola tests.

That is one drawback of this change.

How frequently is the mantle RPM built?

Right now it's manual 😢 Though we are looking to solve that problem for all of our rpm content over in coreos/fedora-coreos-tracker#84. I should probably start making some progress on that.

@cgwalters
Member

It also means we get multi-arch by default.

That would also happen though if we built this container in OSBS.

@@ -49,3 +49,9 @@ ignition

# for grub install when creating images without anaconda
grub2-pc

# Mantle suite
#FEDORA mantle-kola
Member

This isn't Fedora specific.

Member Author

The rpm doesn't exist in EL right now, so I was thinking it would be better not to include it?

I guess other changes I've made mean we don't support both ways anyway, so I should just force it on both and require us to get the rpm in.

Member Author

sound good?

Member

It'd totally break RHCOS to drop this though.

Member

(And yeah, the additional RPM requirement is another big pain point sadly...)

@dustymabe dustymabe added the hold waiting on something label Jan 29, 2019
@dustymabe
Member Author

applying hold label until we can get an installable mantle rpm for EL7 COSA builds

@miabbott
Member

miabbott commented Feb 4, 2019

applying hold label until we can get an installable mantle rpm for EL7 COSA builds

@dustymabe pointed me at this EPEL build of the mantle RPM that we could use in the EL7 COSA - https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2019-514a53d87b

However, we are trying to build the EL7 COSA without any EPEL bits.

My proposal is to merge this change as is and operate the RHCOS pipeline in a "split-brain" approach, where we build RHCOS with an EL7 COSA, but we run any kola tests using a Fedora COSA.

@cgwalters @darkmuggle WDYT?

@ashcrow
Member

ashcrow commented Feb 5, 2019

Needs a rebase ... other than that 👍

@ajeddeloh
Contributor

I'm still not sold on this on a conceptual level. It means both that we might end up testing with old versions of mantle (or need to constantly be updating the RPM), and thus miss new tests, and that we'll build it differently when binding in source vs. when not. Finally, we'll still need to ship the go compiler when we add support for building RPMs on the fly (like cros_workon in the CL SDK). I don't see many advantages other than shaving a little bit off the image build time (mantle takes <5 min on a laptop when building all targets). Just because we can do something with RPMs doesn't mean we always should.

@cgwalters
Member

cgwalters commented Feb 5, 2019

(or need to constantly be updating the RPM)

Yeah, I can say for sure this will be a pain, unless of course we deploy and own our own "build from git master" rdgo-like system. (edit: this subthread was covered here #297 (comment))

@ajeddeloh
Contributor

If we add the rdgo system, ensure the artifacts are built the same way both in koji and with rdgo, use rdgo for the developer workflow, and can ensure the mantle rpm is kept up to date (i.e. automated), then I'm fine with that. But I don't want two separate build paths, and I don't want to miss tests because we forgot to update an rpm.

@darkmuggle
Contributor

darkmuggle commented Feb 5, 2019

Isn't bottlecap --dev an anti-pattern for containers? The value in it seems to be to take the current coreos-assembler container (which takes a while to build) and then quickly install newer stuff in it?

The other conversation item that stood out is needing newer Kola bits?

From my point of view, the solution to keep everyone happy would be to have different containers for each use case. I'm thinking:

  • base container for the dependency and RPM-based
  • dev container that is based on the base container, w/ GoLang

With multi-stage builds we can fix the Kola issue. In rough pseudo code (YMMV), the Dockerfile could look something like https://github.com/darkmuggle/coreos-assembler/blob/PR297/Dockerfile#L1-L12; a rough sketch of the idea is below.

For Dockerfile.dev, we base it on the base container, then install GoLang and modify bottlecap --dev to use the developer build.
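
A rough sketch of the idea, purely for illustration (the base image, paths, and mantle build invocation here are placeholders, not the actual Dockerfile from the linked branch):

    # Stage 1 (builder): has the Go toolchain and builds mantle from source.
    FROM registry.fedoraproject.org/fedora:29 AS mantle-builder
    RUN dnf install -y golang git && dnf clean all
    RUN git clone https://github.com/coreos/mantle /src/mantle
    # The exact build invocation depends on mantle's own build script.
    RUN cd /src/mantle && ./build kola kolet plume

    # Stage 2 (base): the shipped coreos-assembler image, with no Go compiler.
    FROM registry.fedoraproject.org/fedora:29
    # ...install the rest of the coreos-assembler dependencies here...
    COPY --from=mantle-builder /src/mantle/bin/ /usr/lib/coreos-assembler/

    # Dockerfile.dev would then start FROM the base image above, add GoLang,
    # and let bottlecap --dev rebuild mantle in place.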

Thoughts?

@darkmuggle
Contributor

For those interested in my multi-stage idea, the PR against this branch is over at dustymabe#1.

@ajeddeloh
Contributor

The value in it seems to be to take the current coreos-assembler container (which takes a while to build) and then quickly install newer stuff in it?

Right. But also install newer stuff in it the same way it would be installed if you did a full build. If we switch to using RPMs for mantle before having a way of building the rpm locally then we get divergence between developer and release workflows.

The other conversation item that stood out is needing newer Kola bits?

Right. Currently we pull in the latest mantle every time the container is built and build it directly (i.e. not as an RPM). If we switch to RPMs for mantle without having them automatically built, we'll get lag between when new tests are added, when the mantle rpm is built, and when that new rpm is included in the container.

The dual container idea wouldn't solve the lag issue since it's inherent to using RPMs unless the RPMs are automatically built.

If we did go down the dual container route, we'd need to be very careful to ensure we build things the same way in both.

@cgwalters
Member

* dev container that is _based_ on the base container, w/ GoLang

That's not enough for me, at least, because I need to work on rpm-ostree too, which pulls in both C and Rust (plus cmake, autotools, etc., plus additional stuff for the test suite). It's just easier for me to have a dev workflow that involves installing the coreos-assembler components inside my existing dev container.

@darkmuggle
Contributor

* dev container that is _based_ on the base container, w/ GoLang

That's not enough for me, at least, because I need to work on rpm-ostree too, which pulls in both C and Rust (plus cmake, autotools, etc., plus additional stuff for the test suite). It's just easier for me to have a dev workflow that involves installing the coreos-assembler components inside my existing dev container.

Right, I think what we do is have the base container built, and then install the developer toolchains that are needed, but do the builds serially.

@cgwalters
Member

and then install the developer toolchains that are needed, but do the builds serially.

How far do we go, though? Is it yum builddep mantle rpm-ostree ... and what if I want to hack on, say, systemd and drop that into overrides? If we take this out to yum builddep *, it's going to get enormous.

@dustymabe
Member Author

whew.. so much discussion when I leave for a day :)

I think we are going to need ways to get rpms built in a more continuous fashion. This isn't just a problem for the mantle codebase (kola), but for many more of our tools, which we are developing at a fast pace as well. Obviously, building kola every time we build the container is one way to keep it up to date, but as Colin mentioned this doesn't scale well to all the build tools and FCOS components that will need to be built, which is why I think solving it more generically is something we want to do. That would be something like coreos/fedora-coreos-tracker#84, which is obviously just a concept right now, so there is no real solution at this moment.

So where are we right now?

We have a few options:

1 - Wait for coreos/fedora-coreos-tracker#84 to get implemented and leave things the way they are for now (this has implications for multi-arch, since this was one case where it was going to be nice to just install the rpm for the right architecture).
2 - Merge this now and work towards coreos/fedora-coreos-tracker#84 in the interim. Some pain will be involved if new tests get added that we want to include.

As for cros_workon, I think it would be useful to have sources easily rebuilt and tested in the target FCOS we are building, and I don't see why we can't build the rpm locally, but it will require builddeps to be downloaded. Not a big deal, but something to call out.

Should we discuss this in another forum? We do have the community meeting tomorrow, but I don't know if that is the right place.

@dustymabe
Member Author

rebased on latest master

@dustymabe
Member Author

closing this for now.

Talked with @ajeddeloh @darkmuggle @cgwalters in IRC. We're going to wait until we have a more continuous way to hack on and build rpms before we start using the mantle rpm here.

Labels
hold waiting on something

8 participants