Here is a rollout in a Suspended state, but with four ReplicaSets scaled to two replicas each:
The expectation is that in a steady state (Suspended), only two ReplicaSets (active and preview) should be scaled above 0.
I ran a diff against the last three revisions (12, 11, 9) of the ReplicaSet. I'm not sure what happened to ReplicaSet revision 10. Notice that the only differences are in metadata and status. The ReplicaSet spec is the same, which means the pod template is the same. However, the bug is that the ReplicaSet hash names are not the same.
This implies that the same pod template may be producing different pod template hashes.
During this time, we know from talking to the user that the rollout's spec.template.spec was changed only to modify resource requests/limits to equivalent values (e.g. 2000m -> '2'). I suspect the underlying issue is that when we call controller.ComputeHash(), it does not consider these values to be the same, resulting in different pod template hashes.
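To illustrate the suspicion, here is a minimal sketch (not the controller's actual code) using apimachinery's `resource.Quantity`, the type behind requests/limits. The two spellings compare as equal, but their parsed internals can differ, which is roughly what a deep, struct-walking hash like `controller.ComputeHash()` ends up seeing:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	a := resource.MustParse("2000m")
	b := resource.MustParse("2")

	// Semantically the same amount of CPU.
	fmt.Println(a.Cmp(b) == 0) // true

	// But the parsed internal representation (mantissa/scale, cached
	// string) differs between the two spellings; the exact layout varies
	// by apimachinery version. A hash that walks struct fields sees two
	// different values here.
	fmt.Printf("%#v\n%#v\n", a, b)
}
```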
We confirmed the pod template hash is sensitive to these resource-notation differences. The solution is to remarshal the object to normalize it before computing the pod template hash.
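Here is a minimal sketch of that normalization, assuming a JSON round trip (the `remarshalQuantity` helper is hypothetical, not the actual fix): `Quantity` serializes as its canonical string, so `2000m` marshals as `"2"`, and equivalent inputs parse back into identical structs that hash identically.

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// remarshalQuantity round-trips a quantity through its JSON form. Marshaling
// emits the canonical string ("2000m" -> "2"), so two semantically equal
// inputs come back structurally identical.
func remarshalQuantity(q resource.Quantity) resource.Quantity {
	raw, err := json.Marshal(q)
	if err != nil {
		panic(err) // sketch only; real code should propagate the error
	}
	var out resource.Quantity
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	return out
}

func main() {
	a := resource.MustParse("2000m")
	b := resource.MustParse("2")

	// After the round trip the two values print (and hash) identically.
	fmt.Printf("%#v\n%#v\n", remarshalQuantity(a), remarshalQuantity(b))
}
```

In the controller, the round trip would cover the whole object (the pod template) rather than a single quantity, but the normalization mechanism is the same.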