Teach installer to specify OS version through oscontainer image URLs #281
Is this AMI-specific? Why would the AMI lag, but the libvirt image be current?
This would effectively give users two knobs for the same thing:

- the boot image (AMI / libvirt image) used to launch nodes, and
- the oscontainer image URL those nodes are updated to afterward.

Do we need both of those knobs? If AMI lag isn't too great, it seems like we could stick with just the AMI as the knob, and let post-install tooling like the machine API handle subsequent modifications. If the AMI is just an OSTree-capable stub (i.e. without some packages needed to run the installer/cluster), then can we just remove the AMI/libvirt-image knob completely and automatically pull an OSTree-capable stub without giving the user a choice (they could still choose their target OSTree)? How does this all fit in with release tracks (tested, etc.) and automatic upgrades? Can the installer continue to ignore those (and leave them to other operators to handle or not)?
I think the long-term plan is that only some (regularly updated, but "outdated") snapshots of RHCOS AMIs will be made public. So the installer needs to be able to work from a public RHCOS AMI and bring it up to date. Libvirt images are available internally just for development. (And I don't doubt we'll keep pushing out up-to-date private RHCOS AMIs as well.)
Yeah, the AMI should be fully capable of joining the cluster, seeing as it's just an older version of RHCOS. I guess the way I was thinking about it is that we should care less about the AMI used to bootstrap the cluster and more about the oscontainer we want to install, since that's actually what will run the cluster. It seems like that setting should belong in the MachineConfig. /cc @ashcrow to keep me honest

Related issue on the OS side: openshift/os#307

I think big picture we want to always pivot early on in the installation. I am working on redoing the os build process to use pivot.

Ok. So are folks comfortable removing the AMI knob (and the libvirt image knob?) and replacing it (them) with oscontainer image URLs? The installer would just pick something sane for the initial image, and callers wanting a specific eventual OS would need to override the default oscontainer image URL. I guess you could point that at a local (caching) registry if you wanted to efficiently launch multiple nodes. And once the cluster comes up, maybe the machine-config operator could point new nodes at image streams in the cluster's own registry.
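To make that concrete, here is a minimal sketch of what such a knob might look like in an install config. The `osImageURLOverride` name is borrowed from the issue description below; the surrounding structure is illustrative only, not the installer's actual schema:

```yaml
# Hypothetical install-config excerpt. The installer would default the
# oscontainer URL to something sane; callers could override it, e.g.
# pointing at a local caching registry for efficient multi-node launches.
apiVersion: v1
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1
osImageURLOverride: registry.example.com/rhcos/oscontainer@sha256:abc123
```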
Yes, though we will have changes that require updating those (in fact, the recent partitioning switch is one), but those should become infrequent. Let's say we limit ourselves to only respinning the AMI once every 6 months or so.

/cc @mrguitar

Yeah, that sounds great! So just to be explicit here, do we agree that the installer should take care of the initial pivot? It seems like leaving it up to the MCO would be more disruptive to the cluster than doing it upfront when not much is installed yet?

I think doing it upfront would be ideal. I wouldn't make it a hard requirement though.
Yes, although since #119 we are no longer handling workers in the installer, so this would just be for masters (and bootstrap?).
I'm also concerned about agility without an AMI knob. A really old AMI may not be capable of running a modern master. If the installer pivots masters early, we don't have to worry about that.
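As a sketch of what an early, installer-driven pivot could look like, here is a hypothetical Ignition fragment (shown in YAML for readability, though Ignition itself consumes JSON) that runs the pivot tool as a one-shot unit before the kubelet starts. The unit wiring, binary path, and pull spec are all assumptions for illustration, not an agreed design:

```yaml
# Hypothetical Ignition fragment: pivot to the target oscontainer
# once, early in boot, before the node joins the cluster.
ignition:
  version: "2.2.0"
systemd:
  units:
    - name: early-pivot.service
      enabled: true
      contents: |
        [Unit]
        Description=Pivot to the target RHCOS oscontainer
        Before=kubelet.service

        [Service]
        Type=oneshot
        # Path and argument form are assumptions about the pivot tool.
        ExecStart=/usr/bin/pivot registry.example.com/rhcos/oscontainer@sha256:abc123

        [Install]
        WantedBy=multi-user.target
```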
Hmm, I'm not very familiar with the k8s cluster API, though are there hooks somewhere where we could pivot the workers before going any further?
My personal take on how we should implement "pivoting" a slightly out-of-date starting image is by doing the pivot in Ignition in the initramfs, before the system boots completely. That being said, I started a recent mailing-list thread about the priority of doing such work. I don't think it is something we have prioritized for our first deliverable of this new platform. Please respond to the mail so we can discuss further.

Hmm. I understand that we want to enable e.g. Ignition disks so that people can customize the FS layout etc., and that will require a lot more of libostree in the initramfs. But today there's only a tiny piece of ostree there. Pulling in the full pivot code would require the whole container runtime plus rpm-ostree, which... is a lot more.

Agree it would add more complexity to the initramfs, but it would solve some problems (i.e. Ignition run twice). I still think I'd rather see it there, since if it's in the base it will definitely benefit anyone else wanting similar functionality. Of course this is a future goal, so subject to change based on what needs pop up.

I said the above. Though at this point I don't think we should focus on an early pivot.
Since we last discussed this, pipeline v2 landed, the installer uses it, and installer releases are likely to pin to a RHCOS version. (Will the installer also pin to release payloads?)

That was my understanding. Then the container image URL from the release payload would be used as the OSImageURL.

We have #732 in the merge queue to pin release images now. What do we need to do to get that wired up to the pivoting code? If we're not handling pivots in Ignition, what should the installer be doing to get these pivots to happen? Note that we no longer touch worker nodes at all; we just push in machine-set configs like this, and the cluster-API providers (like this) create the workers for us. We just use a pointer Ignition there; the real Ignition is served by the machine-config operator. It looks like the daemon already has a mechanism for this?
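For reference, a pointer Ignition config of the kind mentioned here is just a small stub that chains to the machine-config server. A minimal sketch, with the caveat that the endpoint URL and spec version are assumptions rather than details from this thread:

```yaml
# Pointer Ignition sketch (YAML rendering; Ignition actually consumes
# the JSON equivalent). The worker fetches its real config from the
# machine-config server at boot.
ignition:
  version: "2.2.0"
  config:
    append:
      - source: https://machine-config-server.example:22623/config/worker
```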
I think you're looking for @cgwalters
BTW, this issue duplicates openshift/machine-config-operator#183. Let's try to standardize on some terminology to reduce confusion: the "image" term is ambiguous between VM and container images, and "release images" correspondingly could refer to the CVO release payload or to the RHCOS releases. My proposal is:

- "bootimage": the VM/cloud disk images (AMI, libvirt/OpenStack qcow2, etc.)
- "release payload": the CVO release image

Then an important bridge between the two is the "oscontainer". A RHCOS build also contains an oscontainer with the same content. I think the release payload should define the oscontainer.
Does it not do this now? Maybe the oscontainer repo needs an entry for that?
And once we get it, how does that info get from the release payload into the machine configs (or wherever it needs to go to make pivots happen)?
It doesn't, for a few reasons. For RHCOS today we have a "unified" build process that does all of the bootimages (OpenStack/libvirt/AMI) and generates the oscontainer. The way the oscontainer is constructed today needs privileges, and generating the bootimages also hard-requires KVM. To simplify the model, we now support only needing /dev/kvm. My thought is that we have a push to the release payload from the RHCOS pipeline.
The strawman in this issue was that it's a ConfigMap that gets merged into the configs. |
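A rough sketch of what that strawman ConfigMap could look like (the name, namespace, and key here are illustrative guesses, not a settled interface):

```yaml
# Hypothetical ConfigMap carrying the desired oscontainer pull spec;
# the machine-config controller would merge this value into rendered
# MachineConfigs as OSImageURL.
apiVersion: v1
kind: ConfigMap
metadata:
  name: machine-config-osimageurl
  namespace: openshift-machine-config-operator
data:
  osImageURL: registry.example.com/rhcos/oscontainer@sha256:abc123
```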
Just to elaborate, the current plan is:

- the RHCOS pipeline pushes the oscontainer into the release payload, and
- the release payload -> CVO -> MCC path delivers that URL to the nodes.
Yeah, might be better at this point to close this issue and keep discussions there. Eating my words from this morning, I'm not sure the installer actually needs to do anything to get this wired up, since it's going through the release payload -> CVO -> MCC. (Well, there might be changes needed to the installer once we add some RHCOS build tagging so it only selects the latest passed builds, but that's not what this ticket was originally opened for.)

Or a payload CI just fetches the latest oscontainer digest from a well-known location (like the installer currently does for AMIs) and bakes it into the release payload when the tests pass?

That's a better phrasing, yes. When I said "push" I was more thinking "submit a PR", which gets e2e tested like any other PR.
So should the installer be doing this? Or is there an oscontainer repo hooked up to Prow that can do it? If you want the installer to do it, are you pushing these anywhere besides registry.svc.ci.openshift.org, or do you want us pinning to that (I don't think its retention period is very long)? @smarterclayton, can we hook the installer up to the operator image translation somehow? Maybe this should go in machine-config if there's no Prow-wired oscontainer repo?
/close

In favor of openshift/machine-config-operator#183
@wking: Closing this issue.
To get in the payload you would push to the openshift image stream on api.ci (and eventually the OCP one). That's the only way of getting into the payload we had planned to support from releases. You push when you're confident your push works against latest (the same contract as PR merges).
As mentioned in #267, the latest public RHCOS AMI will likely not be the latest RHCOS we want. Right now, MachineConfigs hardcode `://dummy` as the `OSImageURL`: https://github.com/openshift/machine-config-operator/blob/a91ab5279755f87f0953f294e9add7584761a489/pkg/controller/template/render.go#L208. This should instead be the URL to the oscontainer image the installer would like to have installed. In more concrete terms: we could e.g. hardcode it for now as is currently done in `ami.go` for AMIs until we're ready to just always pick the latest, but also have something analogous to `ec2AMIOverride`, e.g. `osImageURLOverride`? The MCD will then immediately update the node to the desired oscontainer upon startup.
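For illustration, the field in question would look roughly like this in a MachineConfig. The `osImageURL` field name is the one discussed above; the object name, pull spec, and surrounding values are placeholders:

```yaml
# Sketch of a MachineConfig whose osImageURL points at a real
# oscontainer pull spec instead of the hardcoded ://dummy value.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 00-worker
spec:
  osImageURL: registry.example.com/rhcos/oscontainer@sha256:abc123
  config:
    ignition:
      version: "2.2.0"
```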