What steps did you take and what happened:
1. Created a new workload cluster with a MachineHealthCheck watching the worker nodes on the cluster (example sketched below).
2. Deleted the workload cluster.
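For reference, a minimal sketch of the kind of MachineHealthCheck used in step 1, written against the v1alpha3 Go types; the name, namespace, selector labels, and timeout are illustrative assumptions, not values taken from this report:

```go
package example

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// workerMHC is a hypothetical MachineHealthCheck that watches the worker
// Machines of a "workload" cluster; all concrete values are assumptions.
var workerMHC = clusterv1.MachineHealthCheck{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "workload-workers-unhealthy",
		Namespace: "default",
	},
	Spec: clusterv1.MachineHealthCheckSpec{
		// Ties the check to the workload cluster.
		ClusterName: "workload",
		// Matches only the worker Machines.
		Selector: metav1.LabelSelector{
			MatchLabels: map[string]string{"nodepool": "workload-workers"},
		},
		// A Machine whose Node stays NotReady for 5 minutes is unhealthy.
		UnhealthyConditions: []clusterv1.UnhealthyCondition{
			{
				Type:    corev1.NodeReady,
				Status:  corev1.ConditionFalse,
				Timeout: metav1.Duration{Duration: 5 * time.Minute},
			},
		},
	},
}
```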
What happened:
The workload cluster, along with the MachineHealthCheck, gets deleted; however, the cache set up by the capi-controller keeps trying to list nodes on the (now gone) workload cluster. The capi-controller logs are flooded with the following:
E0413 19:26:49.844602 1 reflector.go:153] pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:105: Failed to list *v1.Node: Get <workload-cluster-api-server>:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp: lookup <workload-cluster-api-server> no such host
What did you expect to happen:
The cache should be stopped when the Cluster/MachineHealthCheck is deleted.
Anything else you would like to add:
We seem to start the cache here with a context Done channel, but that channel never closes (a sketch of what stopping it could look like is below).
Restarting the capi-controller pod will, of course, clear the error.
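To make the expectation concrete, here is a minimal, hedged sketch of the bookkeeping that would let the cache actually stop: keep a cancel function per workload cluster and call it when the Cluster/MachineHealthCheck is deleted, so the Done channel passed to the cache really closes. The cacheTracker type, the startFor/stopFor helpers, and the clusterCache interface are illustrative assumptions, not the actual CAPI code.

```go
package example

import (
	"context"
	"sync"
)

// clusterCache stands in for whatever per-cluster cache type is started; only
// the stop-channel Start signature matters for this sketch.
type clusterCache interface {
	Start(stop <-chan struct{}) error
}

// cacheTracker remembers how to stop each workload cluster's cache.
type cacheTracker struct {
	mu      sync.Mutex
	cancels map[string]context.CancelFunc // keyed by cluster namespace/name
}

func newCacheTracker() *cacheTracker {
	return &cacheTracker{cancels: map[string]context.CancelFunc{}}
}

// startFor starts the cache with a Done channel that can actually close.
// Passing something like context.TODO().Done() instead (the behaviour the
// report describes) means the channel never closes and the reflectors keep
// listing nodes on a cluster that no longer exists.
func (t *cacheTracker) startFor(clusterKey string, c clusterCache) {
	ctx, cancel := context.WithCancel(context.Background())

	t.mu.Lock()
	t.cancels[clusterKey] = cancel
	t.mu.Unlock()

	go c.Start(ctx.Done()) //nolint:errcheck
}

// stopFor is what should run when the Cluster/MachineHealthCheck is deleted:
// cancelling the context closes the Done channel and the cache shuts down.
func (t *cacheTracker) stopFor(clusterKey string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if cancel, ok := t.cancels[clusterKey]; ok {
		cancel()
		delete(t.cancels, clusterKey)
	}
}
```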
Environment:
Cluster-api version: 0.3.3
Minikube/KIND version: 0.7.0
Kubernetes version (use kubectl version): 1.17.3
/kind bug