consider separating host bits from build root bits #75
We could further "productize" that model by more formally having this one git repo support being built in two ways: as a package (rpm/ebuild/whatever) and as a container (with the package going into the container, along with its deps).
If I'm understanding you correctly, we currently have the thing being built explicitly split out, right? Do you disagree with that?
I think it'd make sense to also include a […], e.g.
One downside at least is that your scripts and your container are now decoupled, so if a build script needs a new package (or needs to revert a package), it's more work to get it into the build environments and keep them synced.
I haven't tried yet, but that does have some host dependencies, right? (e.g. rpm-ostree). Fedora should not be a requirement for the best developer experience (rpm-ostree isn't packaged by many other distros); no second-class developer host OS citizens! Additionally, things like recreating […]
I do and I don't. Yes, we have the thing being built split out, but how we invoke the tools to build it is just as important and should be split out too. I'd argue things like the […]
Agreed, but I expect that to happen much less frequently.
I don't follow?
Right. And short of this use case... there's not really a compelling reason to do so that I can think of. However, at least rpm is packaged for Debian, and so is libsolv. That just leaves librepo, I think.
I think this is part of my motivation for starting the conversation in #52 (comment).
So I do run it in a container, not necessarily the container. Let's keep in mind that I could also, on my Silverblue host, pull (looks up Gentoo docker images... hmm, I'm confused, do I want this one?) How does […]
Currently, the [1]
And that's a broken link.
Sure, but I don't think that's a reasonable expectation for all users.
Right, but that makes it significantly harder to experiment with doing things outside the current virt-install method (to be clear, I'm not saying we shouldn't use virt-install, but rather that we shouldn't lock ourselves to it just yet, especially in the project's infancy). Pulling it into a separate repo, or even moving it to fedora-coreos-config (not sure how I feel about that), would make it much more flexible. There are also several things I want to experiment with where I'd want to make that script more substantial, for example: using Ignition to create partitions and filesystems instead of anaconda, or manually configuring grub (kickstart might allow this, not sure yet, but it doesn't look obvious). Basically, experimenting with not using kickstart becomes incredibly painful if this is part of the container. As for the gentoo container: I dunno, might need an emerge-webrsync or something first?
OK, I get it now. Definitely interesting to support Ignition for this instead. One thing I'd like to avoid is us generating loopback devices in the container itself, since they're not namespaced in the kernel and are easy to leak. The filesystem developers also use loopback devices a lot for testing, but they'll tell you not to do it in "production". So if we're doing things inside a VM instead, I think that's better. Now that means Ignition inside a VM. We could simply download a Container Linux disk image, just like we do for Anaconda, and provide it an Ignition config that does the install and contains a different Ignition config to make filesystems? Maybe for now... we define a very high level, inflexible […]
And translate that to either Kickstart or Ignition? I'd be totally fine changing the default config to not use LVM for now in aid of this too.
We use loopback devices for the Ignition blackbox tests and I wholeheartedly agree they're utterly awful to deal with... but... we do use them (successfully) to build CL today. So I'm torn on it /shrug. What I really want is for loopback devices to be good. As for how using Ignition to generate images would work, I'm not 100% sure yet; it's still in the "idea rattling around in my head" phase. There are definitely parts of the process it can't do and shouldn't ever want to do (like deploying an ostree). It's probably a good idea to start with a list of things the image creator needs to do (e.g. partition, create fs's, deploy a tree, install bootloaders, etc.) and figure out what we want to do for each step. As for the high-level spec WRT LVM: I think we should drop that in favor of plain partitions regardless, since that was the conclusion from the fcos tracker discussion.
I don't follow? I'm thinking of a system that looks more like this (could be run from a VM launched from the assembler container, or using loopback devices on the assembler container): […]
There's a big difference between those two cases though; in the VM case, it's code outside the VM that needs to create the initial disk image, and that needs to be a specific size. Although, I guess we could just create a 100GB (or some other large size) but thinly provisioned qcow2, then after […]. This problem is really exactly the same as what's being solved with the magical […]. So... we could go with […].
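For concreteness, here is one way the "large but thinly provisioned" idea could look; the file names and sizes are placeholders and not part of any existing build script, and compacting with virt-sparsify afterwards is just one option for getting the size back down:

```sh
# Placeholder names/sizes; this only illustrates the "big but thin" approach.
qemu-img create -f qcow2 scratch.qcow2 100G   # allocates almost nothing up front

# ... boot the VM against this disk; partition, mkfs, deploy the tree ...

# Afterwards the image can be compacted so only the written blocks remain:
virt-sparsify --compress scratch.qcow2 coreos.qcow2
```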
I should have been more clear; I was imagining the script being copied to the VM (which has the necessary tools like Ignition, ostree, etc.).
I was thinking of having a separate disk attached; the VM can boot off its own disk, but doesn't install there. That seems like more work than it's worth.
Exactly, I just want something more flexible than kickstart running in the VM so we can do things it doesn't support. To be clear, and to jump back on topic a bit: I'm not sure this is a path we want to go down. I'd like to experiment with it, and bundling the build scripts makes that harder, since it requires rebuilding the container or having rpm-ostree on the host.
I definitely don't rebuild the container each time I want to make a change. I just do […]. It should also work to turn the existing official container into a pet/dev container by just bind mounting in […]. If you're talking about running things on the host... well, we're going to battle about that 😄. I avoid running things on the host as much as possible: on my dev desktop, servers, everywhere.
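As a rough illustration of that pet/dev flow (the image name, paths, and mount destination below are assumptions, not documented interfaces), one could mount a host checkout of the scripts over the installed copy and hack without rebuilding the image:

```sh
# Sketch only: image name, host paths, and destinations are assumptions.
podman run --rm -ti --privileged \
    -v "$PWD":/srv \
    -v "$HOME/src/coreos-assembler/src":/usr/lib/coreos-assembler:ro \
    coreos-assembler bash
# Inside the container, the scripts from the host checkout are what run,
# so edits on the host take effect without rebuilding the container image.
```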
Pet containers (while neat and useful) shouldn't be a requirement for development. Neither should any one distro.
Thoughts on bind mounting that in by default (in the docs) and adding an option to automatically run make/make install when you run […]? What advantage is there to bundling the build script?
Not sure if I totally agree with this. Let's break it apart:
I don't disagree with that but I don't think we need to bend over backwards to make sure this works on every distro that exists.
Agree, but pet containers (or rather, containers in general) sure make it easier to run on distros that might otherwise have a hard time with the situation above, right?
If we do that, we need to ship our build dependencies (e.g. […]).
Right, weird distro-isms (e.g. differences with things like /dev/shm being a symlink, ancient packages, etc) aren't something we should bend over backwards for, but things like not having fedora-specific packages (e.g. rpm-ostree) should not make your development experience worse/slower.
That'd be a giant workaround rather than addressing the root problems. We can require a Fedora environment to do a build (i.e. coreos-assembler), but we shouldn't require a Fedora environment to work on the build process, which also uses a Fedora environment (which in turn runs Fedora in a VM :D). I shouldn't need 3 Fedora installs to build an image. Two is iffy enough in my book.
I'm fine with that. Also, if we go the route in #52, then we could drop some of that, yes? What advantage is there to bundling the build script?
If we split out the code, this repository would contain... the […]? I don't quite understand how this would help you immediately; how would your workflow be different if […]?
FWIW, I'd be a big fan of having the host container have everything needed to assemble the OS, but none of the instructions, then bind mounting the configs (from the fcos config repo) and the build scripts (from wherever those would live, if they don't live in the fcos config repo).
I think we have different opinions on what coreos-assembler should be. I think it ought to be the host to run the build process on but not include the scripts, configs, etc. So I don't think it would even need a new submodule for the scripts, since they wouldn't be included, just like how the fcos configs aren't a submodule. I think (correct me if I'm wrong) you think it ought to be a tool that includes the build scripts where you just point it at the config and it spits out the image/tree.
If it were in a different git repo (and bind-mounted in) I could make changes there and commit them without needing to build a new container each time. I can do that now, sorta, with some extra work (as you suggested earlier, by bind mounting in the src directory and running […]).
I guess I don't understand […]. I often find tools that I want to use that aren't in Fedora. I've got a few options: 1) build from source, 2) create an rpm and try to get it into Fedora, 3) create an rpm, make a copr, and pull from that. Is this any different than that case?
so let me count:
I know we are working on how we do the 2nd part; anaconda just happens to be what we use right now.
Sure, but as @cgwalters pointed out:
I'm talking about when you want to work on the build process itself, not just working on the OS.
I think you're conflating the "install" that generates the disk image that would actually be installed with the installer that actually runs on bare metal. I'm just talking about image generation here.
Still trying to drill down on this. Maybe we should grab each other on IRC. If […]
1 & 2 can be the same container. I'm just hacking around inside the coreos-assembler container. What I'd really like is for us to make it so that rebuilding the container is really lightweight, though, which is why I opened #52. If we made rebuilding the container lightweight and also made it easy to bind mount stuff in and hack, would it help?
Ahh, OK. Yeah, I was, but I'll drop that tangent so we don't lose focus.
Sure.
I don't think that matters? I still want the coreos-assembler container to have it; I just want it so that, as long as you're not changing the tools themselves (i.e. making changes to the fcos build scripts but not rpm-ostree itself), you don't need to update the whole container. The problem is that right now that's cumbersome, and so the alternative path is "do it on your host" (or a pet container, but let's not dive into that). I want to fix that. I also want to be able to work directly on the repo with the script, rather than work in the container and then copy my changes over.
I am not fundamentally opposed to changes here, though I struggle with the number of git repositories we have already, particularly since we're going to be creating another repo with the Jenkins pipelines at least, I'd guess. I don't see the scripts as entangled with the "container" much today, and that's a good thing. There's not much magic that lives in […]. But of course, today we already have a separate git repo with scripts: that's mantle. I'm not quite sure what you're envisioning hacking on, but given you already have a mantle dev environment, you could continue using that, and then whatever new tools would just get aggregated into here?
I think that's inevitable. We could employ some sort of git-wrangler like […]
Were you referring to cmd-build.sh or build.sh?
I wouldn't call mantle scripts; it's more testing and release tools (except maybe cork, which we don't use here). It's also post-build stuff; it doesn't impact what actually gets built.
Just like how we split out an easily machine-readable `deps.txt`, do the same for our build dependencies. However as part of adding a new developer flow to this container, do ship those dependencies. Ref: coreos#75
We (@cgwalters, @dustymabe, @jlebon, and I) discussed this some in […].
I'm almost certainly missing some things, please chime in with what I missed. Above is my (hopefully) impartial summary; below is my opinion. For fcos and rhcos we need to do weird things with how we build the images: things like installing grub to both EFI and BIOS, having custom grub configs, etc. These are things which anaconda does not support right now (not sure if it would make sense to upstream that), so that means we're going to need to encode those steps in a build script. The work to implement features like automatic rollback is mostly work in the OS tree itself (i.e. the bits handled by rpm-ostree) and in the build scripts, to do things like install custom grub configs. This means the build scripts and the contents of the configs are inherently tied. The build scripts and the build environment are also tied, but not as strongly; the main dependency there is just having the tools available or up to date. Separating the build scripts into either the config repo or their own repo would also make it easier for fcos to charge ahead and experiment with different ways of constructing the OS while not impacting rhcos until those features are ready to be picked up (since the build scripts would belong to the rhcos config, rather than the shared coreos-assembler container).
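For concreteness, the "weird things" mentioned above are roughly of this shape; the device and directory paths are placeholders and these invocations are illustrative, not taken from the actual build scripts:

```sh
# Illustrative placeholders for the kind of steps a build script would encode:
# install a BIOS bootloader into the disk's MBR/boot gap...
grub2-install --target=i386-pc /dev/vda
# ...and an EFI bootloader into the EFI system partition as well.
grub2-install --target=x86_64-efi --efi-directory=/boot/efi --removable
```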
One angle I started thinking about this from is: how are we different from any other container for which people want to do development conveniently? One answer to that is that the 90% container application case is to be "pure" Go/Rust/Ruby or whatever, so building the container is basically just an app build, and if you're doing things right you're caching your dependencies. For us, we have a huge list of dependencies installed via […]. One angle we could take on this, then, is to split out a separate layer with our dependencies that we build separately.
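A minimal sketch of that layering idea, assuming a deps.txt-style file with one package per line and '#' comments; the split into two layers is the proposal being discussed here, not how the container is currently built:

```sh
# Layer 1 (changes rarely): install only the external dependencies.
sed 's/#.*//' deps.txt | xargs --no-run-if-empty dnf -y install

# Layer 2 (changes often): install the build scripts themselves on top.
make install
```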
OK, I think I've got at least a little something that should make "hacking" on configs and the (uncompiled) build scripts a little easier. First off, set some env variables that represent the locations of your git repos on your filesystem (a sketch of what this might look like follows below):
Then set an alias that has a pinch of magic in it
Then you can just run the alias. The magic I mentioned above is this: each of these basically says, "if the variable is set, then insert a volume mount for it". @ajeddeloh WDYT? Should I update the README with something like this in it?
Also note that the volume mounts are read-only, which I like: in case something in the container goes off the rails, it doesn't delete your repos from your host.
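The actual env-variable and alias snippets were lost from this transcript; below is a hedged reconstruction of what such a setup might look like (written as a shell function), using `${VAR:+...}` expansions for the "insert a volume mount only if the variable is set" magic. The variable names, image name, and mount destinations are assumptions.

```sh
# Reconstruction sketch; variable names, image, and paths are assumptions.
# Paths must not contain spaces for the unquoted expansions below to work.
export COREOS_ASSEMBLER_GIT=$HOME/src/coreos-assembler
export COREOS_ASSEMBLER_CONFIG_GIT=$HOME/src/fedora-coreos-config

cosa() {
    podman run --rm -ti --privileged -v "$PWD":/srv \
        ${COREOS_ASSEMBLER_GIT:+-v ${COREOS_ASSEMBLER_GIT}/src:/usr/lib/coreos-assembler:ro} \
        ${COREOS_ASSEMBLER_CONFIG_GIT:+-v ${COREOS_ASSEMBLER_CONFIG_GIT}:/srv/src/config:ro} \
        coreos-assembler "$@"
}
```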
While it's true that today the scripts are basically installed "literally", this would box us into that. That's probably fine, but worth noting.
Yeah, this wouldn't work for anything compiled. We also have to consciously put things into […]
Just like how we split out an easily machine-readable `deps.txt`, do the same for our build dependencies. However as part of adding a new developer flow to this container, do ship those dependencies. Ref: #75
I want to shy away from things that diverge from the normal flow for development. In the CL SDK we have the idea of […]. New proposal (mostly deals with #1): the container ships with built versions of all the tooling (including build scripts) installed. These tools are built inside the container as part of its build process. If you don't need to make changes to the build process, it's just ready to go. The container also bind mounts in sources for all the tooling. A global option […]. There is a little bit of a "chicken and egg" problem here, in that if you need to change something about […]. Thoughts? This doesn't impose restrictions on where the projects live either (build scripts can be in coreos-assembler or fedora-coreos-config or wherever).
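A sketch of how that global option might work as an entrypoint wrapper; the variable name (COSA_DEV_BUILD) and the source path are invented for illustration and are not existing coreos-assembler interfaces:

```sh
#!/bin/bash
# Hypothetical entrypoint wrapper; COSA_DEV_BUILD and the source path are
# invented names, not existing coreos-assembler interfaces.
set -euo pipefail

if [ "${COSA_DEV_BUILD:-0}" = 1 ] && [ -d /srv/src/coreos-assembler ]; then
    # Rebuild/reinstall the tooling from the bind-mounted checkout before
    # running the requested command; otherwise use what shipped in the image.
    make -C /srv/src/coreos-assembler install
fi

exec "$@"
```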
Seems OK, but I'd also say that since it requires bind mounting in source which one cloned externally, we could have a convention for […]
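That convention could be as simple as a small helper that clones the relevant repos next to each other and prints the variables to export; the script, directory layout, and variable names here are assumptions, not anything the project defines:

```sh
#!/bin/bash
# Hypothetical helper; directory layout and variable names are assumptions.
set -euo pipefail

SRCDIR=${1:-$HOME/src}
mkdir -p "$SRCDIR"

for repo in coreos-assembler fedora-coreos-config; do
    [ -d "$SRCDIR/$repo" ] || git clone "https://github.com/coreos/$repo" "$SRCDIR/$repo"
done

echo "export COREOS_ASSEMBLER_GIT=$SRCDIR/coreos-assembler"
echo "export COREOS_ASSEMBLER_CONFIG_GIT=$SRCDIR/fedora-coreos-config"
```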
With the merging of #182 I'm closing this since the core problem was addressed. |
There are two main parts of coreos-assembler: the OS container and the build scripts. For CL these are actually separate, and the src directory gets bind-mounted into the chroot. Separating them (e.g. into separate repos) here would have a few advantages: