Invalid replica state for HPA #3132
Comments
I had a look at this and the most immediate solution I can think of is to have a default for the replicas field.
I've checked with the K8s team and it seems to be an HPA API requirement. @astefanutti @tadayosi would you see any harm in setting a default for the replicas field?
Yes, it seems KEDA also requires the replicas field to be set. I agree. In the short term, before a long-term solution is found, we could also advocate documenting that, for HPA to work, users have to set the replicas explicitly on the integrations. I'd rather not patch Camel K here and there to accommodate how each autoscaler interprets the scale sub-resource specification and derives its own requirements.
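As a rough illustration of what such documentation could show (the Integration name `hello` and the source name are hypothetical, and the manifest is trimmed), the replicas value can be set explicitly in the Integration spec:

```yaml
# Excerpt of an Integration manifest with an explicit replica count (hypothetical example)
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: hello          # hypothetical name
spec:
  replicas: 1          # explicit initial replica count, exposed through the scale sub-resource
  sources:
  - name: Hello.java   # route source omitted for brevity
```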
Agreed. Let's turn this into a documentation request then. Thanks for the feedback!
For many cases defaulting to 1 seems reasonable, though it might be preferable for the operator to set the value dynamically.
The defaulting proposed above is at the CRD level, as in https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting. It has the advantage that it avoids the operator touching the spec block, which should be owned by users, but has the "disadvantage" that it's static. I'm quoting "disadvantage" because I could almost argue it's an advantage, i.e., having a sensible, consistent default, rather than a dynamic one that must accommodate the interpretation / implementation of each auto-scaler.
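As a hedged sketch of what that CRD-level default could look like, following the structural-schema defaulting linked above (this excerpt is hypothetical and heavily trimmed, not the actual Camel K CRD):

```yaml
# Hypothetical excerpt of the Integration CRD with a static default for spec.replicas
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: integrations.camel.apache.org
spec:
  # group, names, scope omitted for brevity
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1        # static default applied by the API server on creation
    subresources:
      scale:
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
```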
This issue has been automatically marked as stale due to 90 days of inactivity.
The Integration does not have the initial replicas state required by HPA.
Simple deploy:
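A minimal sketch of such a deployment, assuming a hypothetical Integration named `hello` (the reporter's actual manifest is not reproduced here); note that `spec.replicas` is not set:

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: hello            # hypothetical name
spec:                    # no spec.replicas set here
  sources:
  - name: Hello.java
    content: |
      import org.apache.camel.builder.RouteBuilder;

      public class Hello extends RouteBuilder {
        @Override
        public void configure() {
          from("timer:tick?period=1000").log("Hello");
        }
      }
```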
Integration OK:
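This can be verified, for instance, with (hypothetical name as above):

```sh
kubectl get integration hello   # the Integration should report the Running phase
# or, with the Camel K CLI:
kamel get
```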
HPA spec:
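A sketch of an HPA targeting the Integration's scale sub-resource (names and thresholds are hypothetical; on older clusters `autoscaling/v2beta2` may be needed instead of `autoscaling/v2`):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: camel.apache.org/v1
    kind: Integration
    name: hello
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```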
The HPA cannot find the replicas field:
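One way to observe this (the exact condition messages vary by Kubernetes version) is to inspect the HPA status conditions:

```sh
kubectl describe hpa hello-hpa
# The AbleToScale / ScalingActive conditions typically report that the current
# replica count could not be obtained from the Integration's scale sub-resource,
# because spec.replicas was never set on the Integration.
```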
The workaround is to force any scale operation:
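For example (hypothetical Integration name), any explicit scale operation populates the replica count:

```sh
kubectl scale integration hello --replicas=1
# or, if the short name is registered on the cluster:
# kubectl scale it hello --replicas=1
```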
Then the HPA starts working:
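The HPA status can then be checked again (hypothetical names as above):

```sh
kubectl get hpa hello-hpa
# the current replica count and metrics should now be reported
```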
Version: Camel K 1.8.2