determine and document resource requirements #485
Comments
/assign
I am trying
Will actually have to follow up and sample the usage over time; the lowest settings on Docker Desktop / macOS appear to be above our lower bound 🙃
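One way such sampling could look, as a rough sketch rather than an agreed procedure: it assumes the default cluster name (so node containers match the `kind-` name prefix) and relies only on `docker stats`.

```bash
#!/usr/bin/env bash
# Sample CPU / memory of the kind node containers once a minute for an hour
# and append to a log, so idle usage can be charted afterwards.
# Assumes the default cluster name, so containers match the "kind-" prefix.
for _ in $(seq 1 60); do
  date >> kind-usage.log
  docker stats --no-stream \
    --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}' \
    $(docker ps --filter name=kind- --format '{{.Names}}') >> kind-usage.log
  sleep 60
done
```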
We are using both CircleCI machine and remote_docker environments, which are 2 CPU, to test Istio, and it seems to be working quite well. Since CircleCI doesn't allow more CPUs, it would be good to keep this as a baseline.
Besides, if the ARM64 bugs are fixed, I would hope kind will run on a Raspberry Pi; k8s can run there just fine.
A single node should work with considerably less than this; however, the rest of what we can do performance-wise is mostly bound by Kubernetes / CRI / ... CNI is probably the last place where we have room for squeezing this lower, and we're working on that. It may regress some in the future due to the components we don't control, but keeping everything as low as we can is a high priority 👍
As far as I know ARM64 works but requires building the images yourself. Currently it will be painful to cross-build those because of getting Kubernetes loaded, but that is being worked on at low priority. There is some limited ARM64 CI working now from the OpenLab folks.
These will shift a bit with the updated CNI configuration (they should be lower), but I will remeasure. Still need to document them as well.
Since we've approached the limits of what we can reduce from kind's end alone, some experimentation with making upstream Kubernetes lighter: https://github.com/BenTheElder/kubernetes/tree/experiment. If we go forward with this change upstream, then kind will support leveraging it immediately.
I've improved that prototype with the goals of:
So far that more or less works, and I've created a provisional PR upstream. At this point I think the next step is a KEP; expect more on this in the near future :-) We may need to slightly adjust what else we ship, though (e.g. currently we are missing the metrics APIs), but will continue to push for lightweight clusters overall. I think we can lighten some other things at the same time to make room without adding much overhead.
#932 plus the recent containerd build infra and upgrades should reduce the memory overhead per pod.
/help
@BenTheElder: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @BenTheElder
The image can be left as-is actually, as it is talking about building the Kubernetes image, which takes more resources. It would be helpful to determine exactly how many resources a typical kind cluster uses in a repeatable fashion and keep this documented.
And for that, I will have to perform some experiments and report back, right? Do you suggest I check multiple times?
That's a good idea. I think the most important thing is that we write down how we determined this somewhere, so we can come back and verify what it's currently at :-)
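For the repeatable part, a possible shape for such a write-up is sketched below; the cluster name, settle time, and output file are placeholders rather than anything kind prescribes, and it assumes a kind version that points kubectl at the new cluster automatically.

```bash
#!/usr/bin/env bash
# Repeatable idle-usage measurement: create a default single-node cluster,
# wait for the node to be Ready, let things settle, snapshot usage, tear down.
set -euo pipefail

CLUSTER=resource-test

kind create cluster --name "${CLUSTER}"
kubectl wait --for=condition=Ready node --all --timeout=180s

sleep 300  # settle time before sampling; adjust as needed

docker stats --no-stream \
  --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}' \
  "${CLUSTER}-control-plane" | tee "${CLUSTER}-usage.txt"

kind delete cluster --name "${CLUSTER}"
```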
On it. Will notify once I am done.
Different hardware and system configurations can behave differently; not sure where to benchmark memory & time.
FWIW I ran an experiment on my kid's 2-core i5 with 8GB of RAM dedicated to Docker and was able to run a four-node kind cluster with no issues, including scheduling 10+ pods (14 if you include CNI) and sonobuoy. Meanwhile, pushing to ten nodes on a massive server with 48 cores failed because of etcd. So it sounds like the most important tweak for kind may be running etcd in memory when running a large number of nodes.
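A sketch of that etcd-in-memory tweak, under the assumption (true for recent kind node containers) that /tmp inside the node is a tmpfs, so pointing etcd's data dir there keeps it in memory; the cluster name and worker count are illustrative, and etcd data is lost if the node container restarts.

```bash
# Write a multi-node config whose ClusterConfiguration patch keeps etcd on
# tmpfs (/tmp inside the node container), then create the cluster from it.
cat > etcd-in-memory.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  etcd:
    local:
      dataDir: /tmp/etcd
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF

kind create cluster --name etcd-in-memory --config etcd-in-memory.yaml
```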
Just wanted to share that AWS publishes the maximum pods you can run on an EC2 instance for EKS, which you can roughly map to an amount of CPU and memory per pod. Found that here: https://stackoverflow.com/questions/57970896/pod-limit-on-node-aws-eks. Not sure if it's helpful for finding the right numbers for kind, but figured why not share.
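Rough arithmetic along those lines, taking the published EKS cap of 29 pods for a 2 vCPU / 8 GiB m5.large as an illustrative data point (the cap itself comes from ENI limits, not CPU or memory):

```bash
# 2 vCPU / 8 GiB instance capped at 29 pods -> implied per-pod budget.
echo "$(( 8 * 1024 / 29 )) MiB of memory per pod"   # ~282 MiB
echo "$(( 2000 / 29 )) millicores of CPU per pod"   # ~68 m
```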
What would you like to be documented: A more accurate lower bound on resources when using kind with Docker Desktop. Currently we suggest 4GB / 4 CPU, which, while probably accurate for building Kubernetes, should be more than we need to run a node.
https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster
Why is this needed: We don't want to overstate requirements dramatically and scare off potential users :-)
We'll need to do some testing to determine the threshold. It should be reduced at HEAD; it might also be interesting to check whether that is true 🙃
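While testing thresholds, the allotment Docker Desktop currently gives its VM can be recorded alongside each run; both fields below are standard `docker info` output.

```bash
# Print the CPU count and memory (bytes) the Docker VM currently has,
# so each measurement can be annotated with the settings it ran under.
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'
```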