
Upgrade to Catalyst 4 and don't use repo snapshots for stage1 #2115

Merged
merged 8 commits into main from chewi/catalyst-4 on Jul 15, 2024

Conversation

chewi
Contributor

@chewi chewi commented Jul 11, 2024

Upgrade to Catalyst 4 and don't use repo snapshots for stage1

Split out from #2093.

Although Catalyst 4 still only has release candidates, Gentoo has been using it for its releases for years. Catalyst 3 was masked a while ago, so we should not linger on this version.

Catalyst 4 has totally changed the way repositories are handled. It only works when the name of the directory containing the repository matches the configured name of that repository. This was not the case for us, with the coreos repository residing in the coreos-overlay directory. We wanted to move and rename our repositories anyway, but this is a big change, so we'll do that separately. For now, this just renames coreos to coreos-overlay.
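As a rough illustration (paths abbreviated, assuming the usual Portage metadata files), the name declared inside the repository now has to agree with the directory it lives in:

    # after this change, directory name and declared repo name agree
    $ cat src/third_party/coreos-overlay/profiles/repo_name
    coreos-overlay
    $ cat src/third_party/coreos-overlay/metadata/layout.conf
    repo-name = coreos-overlay
    ...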

This also stops using repo snapshots for stage1, which avoids the need to fix up the coreos repo name in the seed with a hook script and also renders all the existing hook scripts obsolete. This approach is in line with what Gentoo does.

How to use

Simply enter a recent SDK and run sudo ./bootstrap_sdk. Note that this will automatically modify the SDK container to rename the coreos repo and upgrade Catalyst, so you will need to recreate the container to go back to an older branch.
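For example, a minimal run looks roughly like this (container helper and flags as usually used with this repo; shown as a sketch only):

    # from a checkout of the scripts repo on this branch
    ./run_sdk_container -t     # enter a recent SDK container
    sudo ./bootstrap_sdk       # rebuilds the SDK stages with Catalyst 4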

Testing done

I have manually rebuilt the SDK and am currently also rebuilding it in Jenkins using 4012.0.0 as the seed.

  • Changelog entries added in the respective changelog/ directory (user-facing change, bug fix, security fix, update)
  • Inspected CI output for image differences: /boot and /usr size, packages, list files for any missing binaries, kernel modules, config files, etc.

@chewi
Contributor Author

chewi commented Jul 12, 2024

I have made associated documentation changes in flatcar/flatcar-website#351.


github-actions bot commented Jul 12, 2024

Test report for 4027.0.0+nightly-20240710-2100 / amd64 arm64

Platforms tested: qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _oem.go:199: Couldn_t reboot machine: machine __1638ad52-13d2-4488-b4c8-24d5659202d9__ failed basic checks: some systemd units failed:"
    L3: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:593: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine 1638ad52-13d2-4488-b4c8-24d5659202d9 console_"
    L7: " "

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _harness.go:593: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine d2110af5-7505-499d-952d-c51cb4f6acde console_"
    L3: " "

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.partition_on_boot_disk 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (3) ❌ Failed: qemu_uefi-arm64 (1, 2)

                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _locksmith.go:183: [0] ssh unreachable or system not ready: failure checking if machine is running: systemctl is-system-running returned stdout: ____, stderr: ____, err: dial tcp 10.0.0.5:22: c?onnect: connection refused, systemctl list-jobs returned stdout: ____, stderr: ____, err: dial tcp 10.0.0.5:22: connect: connection refused [1] ssh unreachable or system not ready: failure checking if? machine is running: systemctl is-system-running returned stdout: ____, stderr: ____, err: dial tcp 10.0.0.5:22: connect: connection refused, systemctl list-jobs returned stdout: ____, stderr: ____, e?rr: dial tcp 10.0.0.5:22: connect: connection refused [2] ssh unreachable or system not ready: failure checking if machine is running: systemctl is-system-running returned stdout: ____, stderr: ____, ?err: dial tcp 10.0.0.5:22: connect: connection refused, systemctl list-jobs returned stdout: ____, stderr: ____, err: dial tcp 10.0.0.5:22: connect: connection refused_"
    L2: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _locksmith.go:183: [0] ssh unreachable or system not ready: failure checking if machine is running: systemctl is-system-running returned stdout: ____, stderr: ____, err: dial tcp 10.0.0.17:22: ?connect: connection refused, systemctl list-jobs returned stdout: ____, stderr: ____, err: dial tcp 10.0.0.17:22: connect: connection refused [1] ssh unreachable or system not ready: context canceled ?[2] ssh unreachable or system not ready: context canceled_"
    L3: " "

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll-pcr-noupdate 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _tpm.go:327: machine __baad69e1-ddaa-4180-8501-6360b5402e3a__ failed basic checks: some systemd units failed:"
    L3: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:593: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine baad69e1-ddaa-4180-8501-6360b5402e3a console_"
    L7: " "

ok cl.tpm.root-cryptenroll-pcr-withupdate 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (2) ❌ Failed: qemu_update-arm64 (1)

                Diagnostic output for qemu_update-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: Created symlink /etc/systemd/system/locksmithd.service ??? /dev/null."
    L3: "update.go:328: Triggering update_engine"
    L4: "update.go:347: Rebooting test machine"
    L5: "update.go:328: Triggering update_engine"
    L6: "update.go:347: Rebooting test machine"
    L7: "update.go:350: reboot failed: machine __6aaddb8a-301b-4439-b097-8a8fc5cb7cca__ failed basic checks: some systemd units failed:"
    L8: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L9: "status: "
    L10: "journal:-- No entries --"
    L11: "harness.go:593: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine 6aaddb8a-301b-4439-b097-8a8fc5cb7cca console_"
    L12: " "

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.devicemapper-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.docker-btrfs-compat 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.28.7.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.7.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.7.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.7.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.7.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.7.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

@chewi
Contributor Author

chewi commented Jul 15, 2024

I ran a two-phase build in Jenkins, and it looks good. It doesn't show the changes in the SDK, but the production image is essentially unchanged apart from packages moving from ::coreos to ::coreos-overlay.
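Concretely, the entries in the package listings only change their repository suffix, along these lines (package name purely illustrative):

    -coreos-base/update_engine::coreos
    +coreos-base/update_engine::coreos-overlay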

Good to go then? I don't think it needs a changelog entry as it's not user-facing.

@ader1990
Contributor

As it affects the full package names in the end-user image, maybe it is worth a changelog entry? Packages change from ::coreos to ::coreos-overlay.

@chewi
Contributor Author

chewi commented Jul 15, 2024

As it affects the full package names in the end-user image, maybe it is worth a changelog entry? Packages change from ::coreos to ::coreos-overlay.

It's going to change to ::flatcar-overlay in the PR after this one, but I guess I should still add this in the meantime and update it later.

chewi added 8 commits July 15, 2024 14:20
This is needed by Catalyst 4.

Signed-off-by: James Le Cuirot <[email protected]>
This is needed by Catalyst 4.

Signed-off-by: James Le Cuirot <[email protected]>
This hasn't been needed for a while, since Gentoo started handling
Python modules according to PEP 517. We now need util-linux with Python
support for Catalyst 4, and this hack erroneously causes the module to
be installed under /usr/lib64 rather than /usr/lib.

Signed-off-by: James Le Cuirot <[email protected]>
Catalyst 4 has totally changed the way repositories are handled. It only
works when the name of the directory containing the repository matches
the configured name of that repository. This was not the case for us,
with the coreos repository residing in the coreos-overlay directory. We
wanted to move and rename our repositories anyway, but this is a big
change, so we'll do that separately. For now, this just renames coreos to
coreos-overlay.

Catalyst 4 also ingests the main repository snapshot as a squashfs
rather than a tarball. It features a utility to generate such a
snapshot, but it doesn't fit Flatcar well, particularly because it
expects each ebuild repository to reside at the top level of its own git
repository. It was very easy to call tar2sqfs manually though.
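As a sketch, the manual conversion just pipes a tar of the repository into tar2sqfs (paths illustrative):

    tar -C /path/to/main-repo -cf - . | tar2sqfs main-repo.sqfs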

Signed-off-by: James Le Cuirot <[email protected]>
This is what upstream Gentoo does. They would previously update the
entire seed, but this took a long time. Our seeds are much bigger, so we
kept repo snapshots to build stage1 against these instead. The new
method of only rebuilding packages with changed sub-slots is a good
compromise and removes the need to write stage1 hooks that selectively
catch the repository up.

This also avoids some conflicts by adding the `--ignore-world` option.
Gentoo seeds have nothing in @world. We have much more, but none of that
is needed for stage1.

This continues to exclude cross-*-cros-linux-gnu/* as that is not needed
for stage1. It now also excludes dev-lang/rust, because it is never a
DEPEND, so it would not break other packages in this way. It may fail to
run due to a sub-slot change in one of its own dependencies, but it is
also unlikely to be needed in stage1 and it is not configured to use the
system LLVM. If need be, we could improve the behaviour of Portage's
@changed-subslot to respect `--with-bdeps`.
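In spirit, the seed update now boils down to an emerge call along these lines (purely illustrative; the real invocation is assembled by Catalyst):

    emerge --update --ignore-world \
        --exclude 'cross-*-cros-linux-gnu/* dev-lang/rust' \
        @changed-subslot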

In my testing, it was unable to handle an SDK from 17 months ago, but
one from 7 months ago did work. In practice, we will always use a much
more recent one, which is far more likely to work.

Signed-off-by: James Le Cuirot <[email protected]>
The changes to support Catalyst 4 are not backwards compatible and we
need a seamless transition for builds in CI.

Signed-off-by: James Le Cuirot <[email protected]>
Contributor

@ader1990 ader1990 left a comment


LGTM, thanks!

@chewi chewi merged commit 7c23476 into main Jul 15, 2024
1 check failed
@chewi chewi deleted the chewi/catalyst-4 branch July 15, 2024 15:10