Allow retrying rollout without scaling down canary pods #1807
Comments
This issue is stale because it has been open 60 days with no activity.

Hi, I hope this can still be done. I think it would be a useful improvement :)

I see a bit of overlap between this feature and #1779

Hmm, it seems like that is covering something different (minimum pods during a rollout). With this ticket I'd like a way to reuse pods for the new version that are already running if we want to retry the rollout.

This issue is stale because it has been open 60 days with no activity.

Please don't close this :)

This issue is stale because it has been open 60 days with no activity.
Summary
Right now, Argo Rollouts provides `abortScaleDownDelaySeconds` to keep the canary pods scaled up for a period of time after an aborted deployment. I think this was designed to provide enough time for the traffic split to take effect and for all traffic to be sent back to the stable version.

However, it would also be useful if retrying the rollout could reuse those scaled-up pods. Let's say you abort a rollout with `abortScaleDownDelaySeconds: 600` or longer, so the canary pods stay around for quite a while after aborting. If you retry the deployment, Rollouts reuses the ReplicaSet but starts at the beginning of the canary steps, which means that all but one of the canary pods get scaled down, and then the pods get scaled back up again as it progresses through the steps.

Note that my rollout steps use `setCanaryScale`, so perhaps this is only an issue when using that, but it would be nice if `setCanaryScale` (and `setWeight`, if this isn't already the case) would only increase the number of pods as necessary (not decrease). So once the deployment is retried, it would see that there were already enough pods and just stay at the higher number until the canary steps needed more.
Message from the maintainers:

Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.