The issue here is that scale.getStatus().getReplicas() / scale.getSpec().getReplicas() will return null, not 0, when scaling to zero. That will have to be accounted for in HasMetadataOperation.
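A minimal sketch of that null handling (the helper below is illustrative, not the actual fabric8 internals):

```java
// Hypothetical null-safe replica comparison for the scale wait: when a
// Deployment is scaled to 0, Kubernetes drops .status.replicas entirely,
// so getReplicas() yields null and must be treated as equivalent to 0.
static boolean reachedDesiredReplicas(Integer statusReplicas, int desiredReplicas) {
    int actual = statusReplicas == null ? 0 : statusReplicas;
    return actual == desiredReplicas;
}
```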
As a sidenote: from a quick read of the scaling code in HasMetadataOperation, it seems to hardcode the expectation that .spec.replicas and .status.replicas are always used. But that is not always the case: a CRD's scale subresource can map replicas to different fields. For example, one of the Strimzi resources uses a mapping along these lines (a hypothetical excerpt with placeholder paths):
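```yaml
# Hypothetical CRD excerpt: the scale subresource lets a CRD map replica
# counts to arbitrary fields, so .spec.replicas / .status.replicas cannot
# be assumed. The paths below are placeholders, not the Strimzi definition.
subresources:
  scale:
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.readyReplicas
```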
I do not know how many other CRDs like that exist. I do not really have a use case for this right now, and I know it can be worked around by changing the resource itself, so I did not want to open an issue for it just like that. But I think this is a limitation, and if you want, I can open an issue to track it.
Describe the bug
From the 6.6.0 release, it does not seem to be possible anymore to scale down Deployments to 0 with the wait flag set to true. My guess is that this is caused by #4976, which removed some of the Deployment-specific scale-down logic, and the wait never completes. Possibly because when spec.replicas is set to 0, .status.replicas is not set in the Deployment resource (Kube 1.26.0).

I guess one can work around it by not setting wait to true when scaling to 0, but this worked fine in 6.5.1, so if nothing else it is a backward compatibility issue.

Fabric8 Kubernetes Client version
6.6.0
Steps to reproduce
Use the following code to reproduce it:
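A minimal sketch of such a reproducer (the namespace, Deployment name, and image are placeholder assumptions):

```java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ScaleToZeroRepro {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Create a trivial Deployment to scale (names/image are placeholders)
            Deployment deployment = new DeploymentBuilder()
                    .withNewMetadata()
                        .withName("scale-test")
                    .endMetadata()
                    .withNewSpec()
                        .withReplicas(1)
                        .withNewSelector()
                            .addToMatchLabels("app", "scale-test")
                        .endSelector()
                        .withNewTemplate()
                            .withNewMetadata()
                                .addToLabels("app", "scale-test")
                            .endMetadata()
                            .withNewSpec()
                                .addNewContainer()
                                    .withName("main")
                                    .withImage("nginx")
                                .endContainer()
                            .endSpec()
                        .endTemplate()
                    .endSpec()
                    .build();
            client.apps().deployments().inNamespace("default").resource(deployment).create();

            // Scaling up with wait=true completes on both 6.5.1 and 6.6.0 ...
            client.apps().deployments().inNamespace("default").withName("scale-test").scale(3, true);

            // ... but scaling to zero with wait=true never returns on 6.6.0
            client.apps().deployments().inNamespace("default").withName("scale-test").scale(0, true);
        }
    }
}
```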
With Fabric8 6.5.1 it will complete just fine. With 6.6.0 it will get stuck at scale(0, true).

Expected behavior
The same behavior as with 6.5.1.
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
other (please specify in additional context)
Environment
Linux
Fabric8 Kubernetes Client Logs
No response
Additional context
Kubernetes 1.26.0