No topology key found on hw nodes #400
Comments
Having the same issue; it seems that the wrong node is selected.
Seeing these logs:
When updating manually the
As per the hint in the linked issue, perhaps this can be easily solved by setting allowedTopologies on the StorageClass? That is, assuming the StorageClass has an allowedTopologies selector that accurately matches hcloud Nodes only, then we can be sure the Kubernetes scheduler won't try to schedule a Pod with hcloud PVC attachment(s) on non-hcloud nodes. This only solves the issue for Kube; I have no idea about Swarm/Nomad.
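For illustration, a sketch of what such a StorageClass could look like. The topology key `csi.hetzner.cloud/location` and the listed locations are assumptions here and should be verified against the `csinode` objects in your own cluster:

```yaml
# Sketch only: restricts provisioning/scheduling to topologies reported by the
# hcloud driver. Key and location values are assumed, not taken from this issue.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hcloud-volumes-topology
provisioner: csi.hetzner.cloud
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: csi.hetzner.cloud/location
        values:
          - fsn1
          - nbg1
```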
This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.
Due to a bug in the scheduler, a node with no driver instance might be picked and the volume gets stuck in pending, as the "no capacity -> reschedule" recovery is never triggered [[0]](kubernetes/kubernetes#122109), [[1]](kubernetes-csi/external-provisioner#544).

- See #400

Co-authored-by: lukasmetzner <[email protected]>
Co-authored-by: Julian Tölle <[email protected]>
Great to see such a thorough and satisfying conclusion/solution here! 👍
(never sure if you are sarcastic or not)

You can check the updated docs to learn more about it: https://github.com/hetznercloud/csi-driver/tree/main/docs/kubernetes#integration-with-root-servers

We ended up going with the

The necessary label is automatically added by hcloud-cloud-controller-manager if the customer is running that in their cluster.
I'm impressed that my customary acerbic wit has left such an indelible mark ;) I wasn't being sarcastic at all! I had also tripped over the corresponding stuff in cluster-autoscaler - hence being impressed with the thoroughness of the fix here! (and of course that the fix took a similar shape to how I proposed also leaves me feeling additionally chuffed xD)
### ⚠️ Removed Feature from v2.10.0

We have reverted a workaround for an upstream issue in the Kubernetes scheduler where nodes without the CSI Plugin (e.g. Robot servers) would still be considered for scheduling, but creating and attaching the volume then fails with no automatic reconciliation of this error. Due to variations in the CSI specification implementation, these changes disrupted Nomad clusters, requiring us to revert them. We are actively working on placing this workaround behind a feature flag, allowing Kubernetes users to bypass the upstream issue.

This affects you if you have set the Helm value `allowedTopologyCloudServer` in v2.10.0. If you are affected by the Kubernetes upstream issue, we will provide a fix in the next minor version v2.11.0.

Learn more about this in [#400](#400) and [#771](#771).

### Bug Fixes

- reverted NodeGetInfo response as it breaks Nomad clusters (#776)

Co-authored-by: releaser-pleaser <>
Hi,

We encountered compatibility issues with Nomad clusters due to differences in CSI Spec implementations, which led us to revert our recent changes. We’ve now released v2.10.1 to address this. Moving forward, we’ll implement a feature flag to reintroduce this workaround, scheduled for release in v2.11.0.

We apologize for any inconvenience this may have caused.

Best regards,
We are reintroducing a feature originally present in v2.10.0 to prevent pods from getting stuck in the `pending` state in clusters with non-cloud nodes. This feature is now optional and can be enabled via the Helm Chart. By default, it remains disabled to avoid compatibility issues with Nomad clusters, which have a different CSI spec implementation. Learn more about it in #400.
v2.11.0 got released with the new feature flag
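For reference, a minimal sketch of how enabling the flag could look in the chart's values, assuming it is the `allowedTopologyCloudServer` value mentioned in the release notes above (check the chart's `values.yaml` for the exact name and default):

```yaml
# values.yaml sketch -- assumption: a boolean Helm value named allowedTopologyCloudServer
# that re-enables the topology workaround for Kubernetes clusters with Robot servers.
allowedTopologyCloudServer: true
```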
In clusters with hardware nodes, a new PVC and its workload can be stuck in `Pending` state if they are scheduled without nodeAffinity.

Steps to reproduce:

- hardware nodes are labeled with `instance.hetzner.cloud/is-root-server=true` as mentioned in the README (sketched below)
- the StorageClass uses `volumeBindingMode: WaitForFirstConsumer`
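A minimal sketch of the relevant part of such a labeled hardware node (the node name is hypothetical; the label is the one from the README):

```yaml
# Sketch only: a hardware (Robot) node carrying the opt-out label so that
# hcloud-csi-node is not scheduled onto it.
apiVersion: v1
kind: Node
metadata:
  name: robot-node-1   # hypothetical node name
  labels:
    instance.hetzner.cloud/is-root-server: "true"
```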
Expected Behaviour:
hcloud-csi-controller should provide the desired / required topology constraints to the k8s scheduler, which then schedules the pod on a node fulfilling the topology requirements.
As the hardware node does not run csi-driver and cannot mount hetzner cloud volumes, the workload should not be scheduled there.
Observed Behaviour:

Pod and PVC remain in `Pending` state. `csi-provisioner` of the CSI Controller deployment logs this Error:

More Info:
- Tested with csi-driver 2.1.1 as well as 2.2.0, together with csi-provisioner 3.4.0
- `hcloud-csi-node` does not run on the hw node
- `csinode` object for the node lists no driver:
- `csinode` object of the virtual node looks ok:
- `csinode` object of the hardware node does not have a driver and therefore no topology key, as the node intentionally runs no `hcloud-csi-node` pod due to the `nodeAffinity` (an illustrative comparison is sketched below):
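The original `kubectl` output is not preserved in this thread; purely as an illustration of the difference (names and IDs are made up), such a `csinode` comparison could look like this:

```yaml
# Illustrative sketch only, not the output from this issue.
# Cloud node: the hcloud driver is registered and reports a topology key.
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: cloud-node-1            # hypothetical virtual node
spec:
  drivers:
    - name: csi.hetzner.cloud
      nodeID: "12345678"        # hypothetical Hetzner Cloud server ID
      topologyKeys:
        - csi.hetzner.cloud/location
---
# Hardware node: no hcloud-csi-node pod, hence no driver and no topology key.
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: robot-node-1            # hypothetical hardware node
spec:
  drivers: []
```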
Theory
It seems we are hitting this Issue in csi-provisioner.
As the hardware node has no csi-driver pod and therefore no driver or topology key listed, the csi-provisioner breaks. It is trying to build the preferred topology to pass to the scheduler, but as the hardware node has no topology key, the csi-provisioner fails. Pod and PVC cannot finish scheduling and remain in `Pending` state forever.

Workaround
This issue can be avoided by making sure the object that uses the PVC (StatefulSet, Pod, etc.) cannot be scheduled on the hardware node in the first place. This can be done by specifying a `nodeAffinity` (see the sketch below):
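The original manifest is not included in this thread; a minimal sketch, assuming the hardware nodes carry the `instance.hetzner.cloud/is-root-server` label mentioned above, could look like this in the Pod template:

```yaml
# Sketch only: keep the workload off labeled root-server (hardware) nodes.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: instance.hetzner.cloud/is-root-server
              operator: DoesNotExist   # only schedule on nodes without the label
```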
Proposed Solution
The external-provisioner Issue lists a few possible solutions on the csi-driver side, such as running the csi-driver on all nodes, including hardware nodes.
CSI-controller would then need to be aware of which nodes are virtual or hardware when providing the topology preferences to the k8s scheduler.