
gf-oemid: Support replacing oemid #177

Merged: 1 commit merged into coreos:master on Oct 18, 2018

Conversation

cgwalters (Member)

Today we only build a qemu image. I plan to work on
"build postprocessing" tools so that one can take an existing
build and extend it with more image types.

This is prep for that, by allowing us to replace the qemu image's
oemid.

jlebon (Member) commented Oct 17, 2018

> I plan to work on "build postprocessing" tools so that one can take an existing build and extend it with more image types.

How does this relate to the buildcfg.yaml idea in #80? Because I really like that it strongly connects them all as one "compose" vs. doing it as a secondary step.

cgwalters (Member, Author) commented Oct 17, 2018

> How does this relate to the buildcfg.yaml idea in #80? Because I really like that it strongly connects them all as one "compose" vs. doing it as a secondary step.

Right. So...I would like to do "progressive builds" - like where we first build the ostree, and possibly rebase an existing system to it to test. And/or we then next build the qemu.qcow2, and in the same build container boot it and run basic sanity checks.

I don't want to upload AMIs to every region on every build, or build an installer ISO if the basic boot in qemu test fails.

This also relates to an idea I had around adding a "tags" toplevel element to builds.json, so that one could promote a tested build: e.g. extend it to all AWS regions, then create a tag.
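To make that concrete, a hypothetical sketch of what a "tags" toplevel element in builds.json might look like (the key names and build IDs here are purely illustrative, not an agreed-upon schema):

```json
{
  "builds": ["29.20181018.0", "29.20181017.0"],
  "tags": {
    "tested": "29.20181018.0",
    "all-aws-regions": "29.20181017.0"
  }
}
```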

cgwalters (Member, Author) commented Oct 17, 2018

The other thing is that #80 implies a new config file which all the tools would need to parse, and that quickly gets into the language issue. I feel like we still should do #80 in terms of at least defining a "baseline" - people interested in bare metal systems may never want a -qemu.qcow2 etc.

lucab (Contributor) commented Oct 18, 2018

It could be that I'm just echoing the comments above in a different form, but I'm wary of trying to replace/reuse bits across images/OEMs. In CL we have an intermediate neutral GPT-disk image (coreos_production_image.bin) that is used internally in the SDK as the pristine baseline image to produce all other OEM images.

cgwalters (Member, Author)

> In CL we have an intermediate neutral GPT-disk image

Right, there's also a pristine image in c-a. But right now the only difference between our images is the oem.id. If we were to do something like the OEM partition then clearly this would need to change. But since we're not doing that now...

jlebon (Member) commented Oct 18, 2018

> I don't want to upload AMIs to every region on every build, or build an installer ISO if the basic boot in qemu test fails.

That makes sense to me, though we could just make that part of the build process. I.e. after building the qcow2, sanity check it boots, and then continue on to do the derivatives. Though I admit it's a bit odd to have build be that smart.

Anyway, this patch looks sensible enough on its own merits.
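The gating idea discussed above (only build derivative images once the basic qemu boot check passes) could be sketched as a trivial shell skeleton; every command here is an echo stub standing in for real build steps, which are not defined in this thread:

```shell
set -e

# Placeholder stubs; real versions would invoke the actual build tooling.
build_ostree()  { echo "built ostree commit"; }
build_qcow2()   { echo "built qemu qcow2"; }
boot_sanity()   { echo "qemu boot sanity check passed"; }  # would boot the qcow2 and verify it
build_derivs()  { echo "building derivatives: AMI, installer ISO"; }

build_ostree
build_qcow2
if boot_sanity; then
  build_derivs    # only reached when the boot check succeeds
else
  echo "boot check failed; skipping derivative images" >&2
  exit 1
fi
```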

@jlebon jlebon merged commit 4a579a8 into coreos:master Oct 18, 2018
dustymabe (Member)

👍

@cgwalters cgwalters mentioned this pull request Oct 24, 2018