
Can't create multiple single-node clusters (10th cluster creation fails) #1388

Closed
davrodpin opened this issue Mar 9, 2020 · 6 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@davrodpin

What happened:

Can't create the 10th single-node control-plane cluster. The first nine (9) work just fine.

What you expected to happen:

I was expecting to create hundreds of single-node clusters to perform a specific scale test using kubefed.

How to reproduce it (as minimally and precisely as possible):

Execute the command below and the last cluster creation will always fail:

```shell
for i in $(seq 1 10); do kind create cluster --name "kind-${i}"; done
```
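A variant of the loop (a sketch, using kind's `-v` verbosity flag) stops at the first failure and keeps a per-cluster log for debugging:

```shell
# Sketch: same loop, but stop on the first failure and keep verbose logs.
# Assumes kind is on PATH; exits quietly otherwise.
command -v kind >/dev/null || { echo "kind not installed"; exit 0; }
for i in $(seq 1 10); do
  kind create cluster --name "kind-${i}" -v 3 >"kind-${i}.log" 2>&1 || {
    echo "cluster kind-${i} failed; see kind-${i}.log"
    break
  }
done
```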

Anything else we need to know?:

Link to logs from the creation of all ten clusters, plus a retry of the 10th cluster with the verbosity flag on:

https://gist.github.com/davrodpin/c7885d7cc4498f2475b9ccabe91e90f7

Environment:

  • kind version: (use kind version):
kind v0.7.0 go1.13.5 linux/amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
docker info
Client:
 Debug Mode: false

Server:
 Containers: 12
  Running: 10
  Paused: 0
  Stopped: 2
 Images: 9
 Server Version: 19.03.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.3.0-29-generic
 Operating System: Ubuntu 19.10
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 62.5GiB
 Name: <name>
 ID: XISS:Y5XP:B5OL:ZU77:PSFA:3SFQ:7XFQ:WGID:DSKG:A5FN:5CFF:VJDB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="19.10 (Eoan Ermine)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 19.10"
VERSION_ID="19.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=eoan
UBUNTU_CODENAME=eoan
@davrodpin added the kind/bug label Mar 9, 2020
@BenTheElder (Member) commented Mar 9, 2020

> I was expecting to create hundreds of single-node clusters to perform a specific scale test using kubefed.

These clusters still have the real overhead of running Kubernetes. Despite doing what we can to make them lighter than the average cluster, they require non-zero resources. Your host may not have enough:

  • cpu
  • memory
  • disk I/O
    ...
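One way to see which of these the host is short on (a sketch, assuming Docker is installed and the default storage location) is to snapshot per-container usage and disk headroom while the clusters come up:

```shell
# Sketch: spot-check host pressure while clusters are running.
# Assumes Docker is installed; exits quietly otherwise.
command -v docker >/dev/null || { echo "docker not installed"; exit 0; }

# Live CPU / memory usage of every running container (kind nodes included).
docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'

# Disk headroom for image/container storage.
df -h /var/lib/docker 2>/dev/null || df -h /
```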

@BenTheElder (Member) commented Mar 9, 2020

As a rule of thumb, kubeadm recommends at least 2 GB of RAM and 2 CPUs per node (to also leave room for workloads). kind can run in under 1 GB / 1 CPU (and we've tweaked some things to help with that), but it depends a bit on your workload.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#before-you-begin

#485
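The rule of thumb above can be turned into a rough capacity estimate for the host (a sketch; the 2 GB / 2 CPU thresholds come from the kubeadm guideline, and kind itself can run lighter):

```shell
# Sketch: estimate how many more clusters might fit on this host,
# assuming ~2 GB RAM and 2 CPUs per single-node cluster.
mem_avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
cpus=$(nproc)
by_mem=$(( mem_avail_kb / (2 * 1024 * 1024) ))  # clusters that fit in RAM
by_cpu=$(( cpus / 2 ))                          # clusters that fit in CPUs
max=$(( by_mem < by_cpu ? by_mem : by_cpu ))
echo "rough headroom: ${max} clusters"
```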

@BenTheElder added the kind/support label and removed the kind/bug label Mar 9, 2020
@davrodpin (Author)

Hi @BenTheElder,

The machine I am trying to create the clusters on has an Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (8 cores / 16 threads). The total amount of RAM is 64GB.

Also, I don't have any workloads running yet. Just trying to create the clusters.

@BenTheElder (Member) commented Mar 10, 2020 via email

@davrodpin (Author)

I will keep trying and reopen/update this issue in case I find anything. Thanks, @BenTheElder

@BenTheElder (Member)

I would add that I would certainly like things to be cheaper; we're engaged upstream where we can, but it's somewhat limited.
