
Pod IP is allocated outside of Node's podCIDR #2592

Closed
squat opened this issue May 3, 2019 · 5 comments

squat commented May 3, 2019

I launched a three-node vanilla Typhoon cluster on bare metal and launched no extra pods.
When I run kubectl get pods -n kube-system -o yaml, I notice that my pods have IPs like:

  • 10.2.56.1
  • 10.2.180.1
  • etc

These pod IPs are outside of the podCIDRs for any node in the cluster, which are:

  • 10.2.1.0/24
  • 10.2.2.0/24
  • 10.2.3.0/24
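
For reference, here is a rough sketch of how to compare the two sets of addresses (assumes kubectl access to the cluster; the custom-columns labels are just names I picked):

# Pod IPs and the nodes they are scheduled on
kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,IP:.status.podIP,NODE:.spec.nodeName

# podCIDR assigned to each node by Kubernetes
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR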

The default Calico IPPool is 10.2.0.0/16:

apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    creationTimestamp: "2019-05-02T14:16:38Z"
    generation: 1
    name: default-ipv4-ippool
    resourceVersion: "210"
    selfLink: /apis/crd.projectcalico.org/v1/ippools/default-ipv4-ippool
    uid: e62c714f-6ce4-11e9-bb92-5600020480e3
  spec:
    blockSize: 24
    cidr: 10.2.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

But the blocks Calico actually allocates from do not seem to align with what each node resource says is allocated to it.

Is it expected for Calico to ignore the podCIDR on the node resources? How can I make Calico respect the assigned podCIDR?

Expected Behavior

I expected pod IPs to always be contained by the podCIDR shown in the node resource.

Current Behavior

Pod IPs are allocated from blocks outside of the node's podCIDR.

Steps to Reproduce (for bugs)

  1. launch a Typhoon cluster
  2. kubectl get pods -o wide
  3. kubectl get nodes -o yaml

Your Environment

Three node bare metal Typhoon cluster.
Kubernetes v1.14.0
Calico v3.6.1

Any help is appreciated. Thanks!

caseydavenport commented May 3, 2019

@squat this is actually expected behavior.

The IP addresses given to pods are managed by the chosen CNI IPAM plugin. Calico's IPAM plugin doesn't respect the values given to Node.Spec.PodCIDR, and instead manages its own per-node blocks in order to provide better overall IP address space usage (as well as a number of other IPAM features which require a more flexible allocation scheme).

You can, however, use a different IPAM plugin if the Node.Spec.PodCIDR blocks are important to you. Is there a reason you need the addresses to align with what is on the Node?
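
For example, here is a minimal sketch of the ipam section of the Calico CNI config (in a manifest-based install this typically lives in the cni_network_config key of the calico-config ConfigMap) switched to host-local IPAM driven by the node's podCIDR. The usePodCidr value is Calico's documented hook for this, but verify it against the docs for your Calico version and install method:

"ipam": {
    "type": "host-local",
    "subnet": "usePodCidr"
}

With host-local IPAM, each node assigns pod IPs strictly from its own Node.Spec.PodCIDR, at the cost of the more flexible block handling described above.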

squat commented May 4, 2019

@caseydavenport ah got it. Thanks!

It’s not strictly important for the CIDRs to actually match the podCIDRs on the node (though it is a little confusing as a user); rather, I’d like to know whether there is a way to determine the subnet that Calico reserves for a given node. Is this captured in a Calico custom resource, for example?

@caseydavenport

Yeah, you can use kubectl get blockaffinities to show the mapping between node and CIDR that Calico is using. Note that these aren't intended to be modified directly; Calico dynamically allocates and removes them as needed, so modifying them by hand would require a lot of care to avoid breaking things.
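
For example (a sketch; the spec field names assume the crd.projectcalico.org/v1 BlockAffinity resource, which you can confirm with -o yaml):

kubectl get blockaffinities -o custom-columns=NAME:.metadata.name,NODE:.spec.node,CIDR:.spec.cidr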

squat commented May 4, 2019

Perfect. There’s no need to modify them, only to identify the node<->CIDR mapping, so this is totally sufficient.

I could have sworn that a few months ago the CIDRs allocated by Calico matched those allocated by the node controller. Did this behavior change recently? Or was it a happy coincidence?

@caseydavenport

> I could have sworn that a few months ago the CIDRs allocated by Calico matched those allocated by the node controller. Did this behavior change recently?

Yeah, the default IPAM behavior changed in v3.6.
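
If you want to confirm which Calico version a given cluster is running, one way (assuming the standard calico-node DaemonSet in kube-system) is to check the container images:

kubectl -n kube-system get daemonset calico-node -o jsonpath='{.spec.template.spec.containers[*].image}'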
