
Standalone containerd #4757

Merged · 3 commits · Dec 16, 2021
Conversation

brandond (Member)

Proposed Changes

Build and ship standalone containerd

Types of Changes

packaging

Verification

ls -la /var/lib/rancher/k3s/data/current/bin/* | grep containerd
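One hedged way to extend that verification is to confirm the containerd entry is now a regular file rather than a symlink back into the multicall bundle. The `binary_kind` helper below is our own illustration, not part of K3s; the real path to test would be `/var/lib/rancher/k3s/data/current/bin/containerd`.

```shell
#!/bin/sh
# Hypothetical helper: classify a bin/ entry as a standalone binary or a
# multicall-style symlink.
binary_kind() {
  if [ -L "$1" ]; then
    echo "multicall symlink -> $(readlink "$1")"
  else
    echo "standalone binary"
  fi
}

# Demo against a scratch dir standing in for the k3s bin directory.
dir=$(mktemp -d)
touch "$dir/containerd"       # shipped as a real file after this PR
ln -s containerd "$dir/ctr"   # multicall-style entry
binary_kind "$dir/containerd" # -> standalone binary
binary_kind "$dir/ctr"        # -> multicall symlink -> containerd
rm -rf "$dir"
```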

Linked Issues

User-Facing Change

The embedded containerd binary is no longer part of the K3s multicall bundle.

Further Comments

@brandond brandond requested a review from a team as a code owner December 15, 2021 23:33
```diff
-# CGO_ENABLED=0 "${GO}" build -tags "$TAGS" -ldflags "$VERSIONFLAGS $LDFLAGS $STATIC" -o bin/containerd ./cmd/containerd/

 echo Building k3s
 CGO_ENABLED=1 "${GO}" build -tags "$TAGS" -ldflags "$VERSIONFLAGS $LDFLAGS $STATIC" -o bin/k3s ./cmd/server/main.go
```
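For readers unfamiliar with the flag soup in that build line, the sketch below shows the general mechanism: `-ldflags "-X"` overrides a package-level string variable at link time. The `main.version` path and the version string are illustrative, not the actual contents of k3s's `VERSIONFLAGS`.

```shell
#!/bin/sh
# Illustrative only: how version stamping via -ldflags works in general.
VERSION="v1.23.0-example"
VERSIONFLAGS="-X main.version=${VERSION}"
# A real invocation would look something like:
echo go build -ldflags "${VERSIONFLAGS}" -o bin/k3s ./cmd/server/main.go
```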
Member

It's great to move to k3s instead of the existing containerd multicall binary arrangement we have now.

That being said, rancher/rancher#35723 will pretty much immediately be broken by this change (as will any other downstream consumer of K3s that doesn't realize we're making a change like this).

brandond (Member, Author) · Dec 15, 2021

From what @ibuildthecloud said a while back, apparently containerd has (or had at some point) some internal checks as to what the actual filename of the running binary is, so the main multicall entrypoint had to be named containerd. I don't know that we've tried unwinding this at any point, but if we can keep this change even after re-adding it, that would be great.
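The argv[0] behavior described above can be sketched as follows. This is a generic illustration of multicall dispatch (the technique used by tools like BusyBox), not containerd's actual internal check; the entrypoint names just mirror this discussion.

```shell
#!/bin/sh
# Multicall dispatch sketch: choose behavior based on the name the binary
# was invoked as (argv[0]).
dispatch() {
  case "$(basename "$1")" in
    containerd) echo "containerd entrypoint" ;;
    k3s)        echo "k3s entrypoint" ;;
    *)          echo "unknown entrypoint" ;;
  esac
}

dispatch /var/lib/rancher/k3s/data/current/bin/containerd  # containerd entrypoint
dispatch ./k3s                                             # k3s entrypoint
```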

You are correct that it will break that rancher PR, if they try to update to 1.23 at any point. We'll need to coordinate that.

Member

Yeah, I'm aware of the tribal knowledge around why we use containerd rather than k3s; I'm just trying to call out that we may accidentally break downstream consumers of the project. I think we can manage that on our end (i.e. Rancher projects that consume K3s in "advanced" manners), but there may be others out there affected by this. Perhaps we should release-note it? But then again, it doesn't affect the user... hmm

Contributor

i like the release note idea

Contributor

I'm not too worried about breaking an implied contract, namely where and what the executables are in the Docker container image.

```diff
-PKG_CRICTL="github.com/kubernetes-sigs/cri-tools"
+PKG_CRICTL="github.com/kubernetes-sigs/cri-tools/pkg"
 PKG_K8S_BASE="k8s.io/component-base"
 PKG_K8S_CLIENT="k8s.io/client-go/pkg"
```
Member

Are we going to backport these changes to other K3s branches or is this going to be a 1.23+ thing?

If not, maybe we should make these specific changes/optimizations in a separate PR that is then backported to the rest of the branches.

brandond (Member, Author) · Dec 15, 2021

I think I would like to change this, and the move of the other VERSION_ definitions out to versions.sh, across all the branches, just for consistency's sake. I can move them to a separate commit that can be cherry-picked.
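A hypothetical shape for the consolidated versions.sh: the `VERSION_` variable names follow the convention mentioned above, but the pinned values below are placeholders, not the real ones.

```shell
#!/bin/sh
# Hypothetical versions.sh fragment: pin component versions in one place so
# every branch can cherry-pick the same commit. Values are placeholders.
VERSION_RUNC="${VERSION_RUNC:-v1.0.x-placeholder}"
VERSION_ROOT="${VERSION_ROOT:-v0.x-placeholder}"
echo "runc=${VERSION_RUNC} k3s-root=${VERSION_ROOT}"
```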

brandond (Member, Author)

Moved out to a separate commit for ease of cherry-picking

```diff
@@ -8,7 +8,8 @@ fi

 . ./scripts/version.sh

-MAX_BINARY_SIZE=61000000
+# Try to keep the K3s binary under 60 megabytes
+MAX_BINARY_SIZE=$((60 * 1024 * 1024))
```
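The arithmetic expansion and a bare literal are equivalent; the readable form just makes the intended 60 MiB obvious:

```shell
#!/bin/sh
# The expression evaluates to a fixed byte count at script run time.
MAX_BINARY_SIZE=$((60 * 1024 * 1024))
echo "$MAX_BINARY_SIZE"   # 62914560, i.e. exactly 60 MiB
```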
Member

Can we not just fill this in as a constant?

brandond (Member, Author) · Dec 15, 2021

Where would you see it living? The only place we use it is in this script to fail CI if we've bloated things too much.

Member

I mean fill it in statically - i.e. set to 62914560 instead of evaluating it every time the script is run. This is a nit though.

brandond (Member, Author) · Dec 16, 2021

Ah. I figured it was easier on us as maintainers in this format; I don't off the top of my head know how many MiB 62914560 bytes is.

Member

k3s binary dist/artifacts/k3s size 63610880 exceeds max acceptable size of 62914560 bytes

Maybe time to let bloat set this to 64 MB? 🤣

Member

It definitely is more readable, but IMO it could be a comment.

Either way, it doesn't block anything; I was just looking to see whether that's something we might want to do.

Regardless, I think the value needs to be bumped anyway, because CI appears to be failing due to it being too low.

brandond (Member, Author)

CI should pass now that I've rebased on top of the k3s-root update. I also bumped it to 64MB for additional headroom.

@brandond brandond force-pushed the standalone_containerd branch from aa4a541 to 4591dc9 Compare December 16, 2021 00:31
@brandond brandond requested a review from Oats87 December 16, 2021 00:32
brandond (Member, Author)

CI is currently failing because this commit violates the size limit if merged without #4758 landing first. Updating k3s-root gets us those precious kbytes back.

k3s binary dist/artifacts/k3s size 63610880 exceeds max acceptable size of 62914560 bytes

Oats87 commented Dec 16, 2021

CI is currently failing because this commit violates the size limit if merged without #4758 landing first. Updating k3s-root gets us those precious kbytes back.

k3s binary dist/artifacts/k3s size 63610880 exceeds max acceptable size of 62914560 bytes

How much space is saved with the k3s-root PR?

I (personally) am not opposed to increasing the binary size limit to something like 64 MB, especially if we're going to decrease size quickly afterwards with the other PR.

brandond (Member, Author)

How much space is saved with the k3s-root PR?

Just did the math at #4758 (comment)

Oats87 commented Dec 16, 2021

So, 63610880 − 1376256 = 62234624, which leaves us with about 680 KB until we pass the threshold. I don't think that's reasonable.

We know that this PR/enhancement is going to change the K3s binary size in a non-trivial manner, so I don't see why we don't just bump our upper limit from 60 to 64 MB.
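The headroom math above checks out (all values in bytes; 62914560 is the 60 MiB limit quoted in the CI failure message):

```shell
#!/bin/sh
NEW_SIZE=$((63610880 - 1376256))    # projected size after the k3s-root savings
HEADROOM=$((62914560 - NEW_SIZE))   # room left under the old 60 MiB limit
echo "$NEW_SIZE bytes, $HEADROOM bytes (~664 KiB) headroom"
```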

* Move runc and k3s-root versions into versions.sh
* Remove commented-out cruft from build script
* Other minor tweaks

Signed-off-by: Brad Davidson <[email protected]>
@brandond brandond force-pushed the standalone_containerd branch from 4591dc9 to b34f781 Compare December 16, 2021 17:37
Signed-off-by: Brad Davidson <[email protected]>
Signed-off-by: Brad Davidson <[email protected]>
@brandond brandond force-pushed the standalone_containerd branch from b34f781 to 8380426 Compare December 16, 2021 17:49
brandond (Member, Author)

cgroupv2 flaked; merging
