podCIDR allocation is not working as expected #5231
@sohnaeo I'm not sure this is a bug. It looks like you're probably using Calico here. Calico assigns blocks of IPs to a node, and when a block fills up, it assigns another block from the 10.242.64.0/21 pool. All the IPs here are from that range, so I don't see what the problem is.
Thanks for the quick reply. I dug into this further, and it seems the IP addresses given to pods are managed by the chosen CNI IPAM plugin. Calico's IPAM plugin doesn't respect the values in Node.Spec.PodCIDR and instead manages its own per-node allocations. In our private network we can't use BIRD (BGP) and have to rely on static routes, so we need to know exactly which routes to add on the control planes and nodes. But with Calico's current behavior, any node can host pods from the whole 10.242.64.0/21 range. We would like podCIDR to be honored so that each node only runs pods from the CIDR assigned to it, for example: /usr/local/bin/kubectl get nodes node1 -ojsonpath='{.spec.podCIDR}'
I fixed this issue by hacking network_plugin/calico/templates/cni-calico.conflist.j2 (FROM/TO around the {% else %} block). Is it possible to provide an option to use "host-local" for etcd as well? Could I raise a PR for this?
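For context, since the exact FROM/TO diff was not captured above: Calico's CNI plugin can delegate address assignment to the standard "host-local" IPAM plugin, and the special value "usePodCidr" tells it to allocate from the node's Node.Spec.PodCIDR. A minimal sketch of what the ipam section of cni-calico.conflist.j2 might look like after such a change (the surrounding Jinja conditions are omitted; this is an illustration, not the exact kubespray template):

```json
{
  "ipam": {
    "type": "host-local",
    "subnet": "usePodCidr"
  }
}
```

With host-local IPAM, Calico stops carving its own IP blocks and every pod on a node gets an address from that node's podCIDR, which is what makes static routes predictable.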
I'm not sure this is a supported way for Calico to operate here. Maybe you should switch to flannel, which respects the node podCIDR allocations.
We can't use Flannel for security reasons, since it is an overlay network; we have to use Calico as a layer 3 network. We also can't run BIRD/BGP, which is why we need to add static routes so pods are reachable on the nodes their podCIDR is allocated to.
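To illustrate the static-route approach: once each node's podCIDR is known (from `kubectl get nodes -o jsonpath='{.spec.podCIDR}'`), every other host needs one route per node, sending that node's podCIDR via its internal IP. A minimal sketch that generates the `ip route` commands; the node-to-IP mapping below is illustrative (only node1 and node3's IPs appear in the inventory in this issue, the rest are hypothetical):

```shell
# Emit one static route per node: pod traffic for a node's podCIDR
# is routed via that node's internal IP. The mapping is illustrative.
while read -r node nodeip cidr; do
  echo "ip route add ${cidr} via ${nodeip}"
done <<'EOF'
node1 10.3.0.1 10.242.64.0/24
node3 10.3.0.3 10.242.65.0/24
EOF
```

The emitted commands would then be run (or templated into a routing config) on each host that needs to reach those pods.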
Encountered a similar issue. Thanks for the hack @sohnaeo. Edit: for me the hack you provided didn't work.
+1
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale |
Problem:
**Pods are not getting IPs from the podCIDR assigned to their nodes**
1- Checkout master branch
2- Create the inventory, changing only 3 variables:
a) Change the etcd deployment type to host
b) Change the pod subnets and service addresses:
kube_service_addresses: 10.242.0.0/21
kube_pods_subnet: 10.242.64.0/21
kube_network_node_prefix: 24
3- Once the cluster is up, check the podCIDR assigned to each node:
/usr/local/bin/kubectl get nodes node1 -ojsonpath='{.spec.podCIDR}'
node1-->10.242.64.0/24
node3-->10.242.65.0/24
node4-->10.242.66.0/24
node5-->10.242.67.0/24
4- kubectl apply -f nginx.yml with 6 replicas:
nginx-5754944d6c-8kzhj 1/1 Running 0 66m 10.242.70.2 node5
nginx-5754944d6c-b2tvh 1/1 Running 0 66m 10.242.67.3 node4
nginx-5754944d6c-dj4qq 1/1 Running 0 66m 10.242.66.1 node3
nginx-5754944d6c-wbhdb 1/1 Running 0 66m 10.242.70.3 node5
nginx-5754944d6c-x7gdq 1/1 Running 0 66m 10.242.66.2 node3
nginx-5754944d6c-z9vcv 1/1 Running 0 66m 10.242.67.2 node4
Look at the above: pods are getting IPs from ranges that were not assigned to their hosts (e.g. node5's podCIDR is 10.242.67.0/24, but its pods got 10.242.70.2 and 10.242.70.3). This was working fine in Kubernetes 1.9.
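The mismatch above can be checked mechanically by testing whether a pod IP falls inside its node's assigned podCIDR. A small self-contained shell sketch (the helper names `ip_to_int` and `in_cidr` are mine, not from kubespray; the addresses are taken from the report above):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Return success if the IP (arg 1) lies inside the CIDR (arg 2).
in_cidr() {  # usage: in_cidr 10.242.70.2 10.242.67.0/24
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# node5's podCIDR is 10.242.67.0/24, but its pod got 10.242.70.2:
in_cidr 10.242.70.2 10.242.67.0/24 && echo inside || echo outside   # prints "outside"
```

Running this over each pod/node pair from the `kubectl get pods -o wide` output above shows which pods were allocated outside their node's podCIDR.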
Environment: master branch
Cloud provider or hardware configuration: AWS
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Version of Ansible (ansible --version):
ansible 2.7.12
config file = /home/farhan/workspaces/kubespray-orignal/ansible.cfg
configured module search path = ['/home/farhan/workspaces/kubespray-orignal/library']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
**Kubespray version (commit)** (git rev-parse --short HEAD): 86cc703
Network plugin used: default
Copy of your inventory file:
[all]
node1 ansible_host=13.x.x.x ip=13.211.170.14 # ip=10.3.0.1 etcd_member_name=etcd1
node2 ansible_host=3.x.x.x ip=3.104.120.158 # ip=10.3.0.2 etcd_member_name=etcd2
node3 ansible_host=13.x.x.x ip=13.210.80.241 # ip=10.3.0.3 etcd_member_name=etcd3
[kube-master]
node1
[etcd]
node5
[kube-node]
node2
node3
node4
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr