# RFE: Constrained deployment scenarios (Edge) #2000
## Comments
xref: kubernetes-sigs/kind#485

Infrequently I check KIND against Docker for Mac's minimum settings (~1 GB RAM, 0.5 GB swap, 1 CPU, IIRC); we could definitely do better. One resource that doesn't get much attention for cheap / underpowered environments (and tbh I don't really expect it to, but...) is disk I/O.

xref: kubernetes-sigs/kind#845
We used to add a lot of caching, which is totally unnecessary if you bin-smash etcd into the api-server. Also, init routines are the devil.
@timothysc there is a use case for completely stateless clusters: edge clusters that connect to a ... just thinking out loud.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
had a discussion with @timothysc about this today. i tried googling what people are doing and found some interesting results: ...

also in the past few years i've seen more users mentioning that the RPI with kubeadm just works for them, transparently. what i'd like to gather more details about here is the following: ...
I did some home experiments over the summer, on different versions of the Raspberry Pi (from 0 to 4). Earlier I had been using Kubernetes 1.10 on the Raspberry Pi 2, so that was my baseline, from years ago.

My conclusions:

- The Raspberry Pi 0W and the original 1 were OK for running containers on, but not really for Kubernetes (at all).
- The Raspberry Pi 2 runs much better with k3s, mostly due to the available 860M of memory: not enough to run k8s.
- The Raspberry Pi 3 and 4 run regular k8s, mostly out of the box. I made it harder by using ...

More details: I made my own custom Linux distribution, and then did my own custom Kubernetes distribution...

Community: these people have some great resources about running k8s@home, but usually at "rack scale": https://github.com/Raspbernetes

- "Building a Raspberry Pi Kubernetes Cluster", from the book "Kubernetes: Up and Running".
- This thesis is still the best motivation that I have seen of why you should build a Raspberry Pi cluster.
- And this is of course the original distribution, less needed now when everyone supports ARM, but anyway.
There are a number of optimizations that could be done if we start to entertain:

- Bin-smashing control plane components into a single component and removing caching and overhead (agent-worker model). This is a massive amount of work, but I think it could be done in stages, starting with etcd<>apiserver. There is really no reason we can't bundle them into a single binary, simplify the deployment, and remove a large amount of overhead.
- A hard audit of kubelet overhead, to reduce startup costs.
This is what k3s is doing: a busybox type of binary. That, and moving from etcd to sqlite, are the biggest wins.
I think we need to stay true to Kubernetes for a number of reasons. sqlite may be fine for small environments, but it violates the core reason why etcd was chosen: CP (consistent, partition-tolerant) storage. So I think there is a lot of low-hanging fruit that we can tackle.
Here was the difference in installed size:

- k8s (arm64): 487M rootfs + 997M images

So it is 50% of the OS, but 25% of the total... The 40M k3s binary is because it is self-extracting. With k3s being accepted into the CNCF, it is now a question of deployment form factor rather than a fork. There was originally a similar project in minikube (called "localkube") that served the same purpose.

Anyway, that's how I ended up selecting k3s for the Raspberry Pi 2, but k8s for the Raspberry Pi 3. As mentioned, k3s also has lots of "batteries included" that could be looked upon as "inspiration" for k8s. Having similar features in e.g. kubeadm would lessen the need for ...
@afbjorklund thank you for the useful details. you seem to have a lot of experience with running Kubernetes on RPI. as someone who is not an RPI user, looking at this: ... and WRT the kubeadm system-reqs, i can see one can buy a $35 board (...). usability of kubeadm vs k3s aside, i'm wondering how much picking k3s over kubeadm here is the result of budget vs "i have some old boards lying around that i want to run a cluster on". also something that i'm very curious about: can you explain why you needed to run Kubernetes on RPI, other than experimentation? do you have real-world examples from other users?
kubeadm has the principle of doing a minimal viable cluster with only the required components, and deviating from stock k8s as little as possible. instead of deviating from a component default value, we actually try to go to the component maintainers and push changes in the defaults, if the defaults are not sane.
Thank you! There are of course lots of other little ARM details, and also it has changed over the Kubernetes years... I recommend the summary from Alex Ellis: https://www.raspberrypi.org/blog/five-years-of-raspberry-pi-clusters/
I got the feeling that kubeadm sort of "gave up" on raspberry pi (around 1.10), and that k3s has revitalized it again. I did some presentations on it last year: https://boot2podman.github.io/2019/02/19/containers-without-docker.html
Well, as I was trying to explain (and link to) above, for me it is not so much the need, but that it can be done. I know that Rancher has a lot of customers running lots of small clusters, and they talked about it at KubeCon. The actual reason I started was that Ops wouldn't allow Docker for security reasons, and Dev were running Swarm. So I started with running Kubernetes on virtual machines, and then built my own physical cluster at a Hackathon...
I guess I meant more "addons" than actual code or configuration changes. But maybe that is outside the scope, and better handled by someone else. As far as I can tell, Rancher (now SUSE) is doing the right thing: their distribution is certified and patches go upstream. If you look at the latest projects from this year, I gave up on tinkering with podman and k3s and just went with docker and k8s. But now the summer is over, and the projects are all "done". Most likely I will be doing something different this Hacktober.
You don't want to be running the Raspberry Pi Zero W (or 1), unless you want to build your own images... When it comes to the Raspberry Pi 2, 3, and 4, they all cost the same ($35), so there are "other factors". I went with the model 3B for "arm64", as a trade-off. Probably around $250 for a four-node cluster?

I think that kubeadm should keep armv7 and 1 GB of memory as a minimum, to exclude the older models. Even running on armv7 with 1 GB will be a compromise, and might require enabling swap (on the master), along the lines of the sketch below.
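A minimal sketch of that swap compromise, assuming a recent kubeadm config version; the `failSwapOn` kubelet field and the `Swap` preflight check are real, while the file name and layout here are just illustrative:

```yaml
# kubeadm-config.yaml (sketch): tolerate swap on a 1 GB armv7 control plane.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The kubelet refuses to start while swap is active unless this is disabled.
failSwapOn: false
```

The node would then be initialized with something like `kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap`, since the preflight check would otherwise abort.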
I would probably just leave the "single binary" and "etcd alternatives" up to k3s, since it is already available? Staying with stock k8s doubles the requirements (to 2 CPU and 2 GB), but makes it easier for the maintainers, by not having a custom distro.
the interest was reduced around that time, mostly due to the lack of CI resources. if we were able to get a way to request ARM VMs, that would have been great, and we could have said that ARM is truly supported. nowadays we have ARM (the company) approaching Kubernetes looking into running clusters in CI, to more officially claim the support. the same goes for IBM and their hardware. i guess we could also try QEMU and see how that goes, but instead we prefer the vendors to take action here and own their support.
something that Alex's blog does not talk about is all the weird bugs we saw reported by users running kubeadm / k8s on RPI.
this is the usual case that i've seen. K8s on RPI was more of a hobby project, not that that is an argument to not support the use case.
Rancher has interesting claims in this area. Edge is very broad and minimal device specs vary a lot. the other day i emailed some experts and i was told the kubeadm system requirements are quite normal for an Edge / IoT machine in terms of business ideas these days...
i think we are yet to see Rancher patches in Kubernetes. there might be contributions here and there, but here are some stats: ...
that is quite reasonable.
yes. it is unlikely we can get support for control-plane nodes on 512 MB of RAM.
it may work. it depends... from what i've seen, once you schedule some work on a 1 GB RAM control-plane node, the control-plane components start crashlooping, and that's not really kubeadm's fault at that point. k8s needs optimizations in some areas and nobody is sending contributions for those.
after some recent feedback that i got, i'm leaning towards -1 on the bin-smashing idea: not because k3s already exists, but rather because i'm against the practice in general, and it has to be proven that the maintenance burden is really justified. if that is what the community and users want, we can enable this custom homebrew bin-smashed distribution under a kubernetes-sigs repository, and this SIG can help with the governance. my only requirements there would be to not fork anything, and to first provide, in the proposal phase, some stats / numbers on how much of an improvement the idea is. also, i don't see the kubeadm maintainers wanting to maintain something like that, so if the same community and users don't step up to maintain it, i don't see it happening.
Having some CI machines with arm64 would be great; it would expand the "coverage" beyond amd64?

I have found the qemu arm support to be a bit buggy. It mostly works, but there are some strange crashes:
* boot2podman/boot2podman#18 (comment)

I made a similar request for minikube, but no takers so far (though more cloud providers have arm now):
* kubernetes/minikube#6280 (comment)

But I don't know anything about the kubeadm project CI, maybe it is better than the minikube project CI...
Yes, afaik the CI is still amd64-only.
**Do not forget the CNI-plugin**

On a small system some CNI plugins must be avoided, e.g. ...
FWIW, Rancher's kine project (which is a shim for the etcd API) now includes an example of how to set up kubeadm in external etcd mode and use mysql as the storage backend: ...
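For illustration, pointing kubeadm at kine uses the standard external-etcd mode. A sketch, assuming kine is already running on the node and listening on its conventional etcd port; the endpoint address, config version, and absence of TLS here are assumptions:

```yaml
# kubeadm-config.yaml (sketch): treat a local kine process as "external etcd".
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    # kine serves the etcd v3 API in front of mysql/postgres/sqlite,
    # so the API server talks to it exactly as it would talk to etcd.
    endpoints:
      - http://127.0.0.1:2379
```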
i experimented a little with building a minimal binary of kubeadm + all shipped k8s components, and it was around 200 MB. that did not include etcd + coredns. it is technically possible to build an image that is à la hyperkube, and a binary that exposes all components as subcommands, but there has not been recent demand for k8s to host such a project. k3s seems to fit most users, but it's still considered partly a fork. we can revisit in the future.
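To make the "all components as subcommands" idea concrete, the dispatch itself is the easy part. A minimal busybox-style sketch in Go; the component names are real, but the `run*` stubs stand in for the actual component entry points:

```go
// multicall.go (sketch): one binary, many components, selected by
// argv[0] (symlink name) or by the first subcommand.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Each component would expose its real entry point here; these are stubs.
var components = map[string]func(args []string) error{
	"kube-apiserver":          runAPIServer,
	"kube-controller-manager": runControllerManager,
	"kube-scheduler":          runScheduler,
	"kubelet":                 runKubelet,
}

func main() {
	name := filepath.Base(os.Args[0]) // invoked via symlink, busybox-style
	args := os.Args[1:]
	run, ok := components[name]
	if !ok && len(args) > 0 { // fall back to "multicall <component> ..."
		run, ok = components[args[0]]
		args = args[1:]
	}
	if !ok {
		fmt.Fprintf(os.Stderr, "unknown component %q\n", name)
		os.Exit(1)
	}
	if err := run(args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func runAPIServer(args []string) error         { return fmt.Errorf("stub") }
func runControllerManager(args []string) error { return fmt.Errorf("stub") }
func runScheduler(args []string) error         { return fmt.Errorf("stub") }
func runKubelet(args []string) error           { return fmt.Errorf("stub") }
```

The size win comes from the components sharing one binary image and one copy of the common libraries, as hyperkube did and k3s does, not from the dispatch itself.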
## Original issue description

**Is this a BUG REPORT or FEATURE REQUEST?**

FEATURE REQUEST
Currently we do not advertise the minimum target environments where we support and test kubeadm. The purpose of this issue is to: ...
**Versions**

**kubeadm version** (use `kubeadm version`): ...

**Environment**: Constrained ARM-like environments, <2 GB of memory and limited CPU.
**What happened?**

k8s-edge deployment scenarios with full control. a.k.a. kubernetes-powered toaster.
**What you expected to happen?**

We should have CI and it should just work.
/cc @wojtek-t - to help trace the bloat and weird perf issues.