Summary
When Applications or Namespaces are terminating, Applications that are syncing and can make no further progress should time out.
Motivation
We have a model where we use namespaces as ephemeral environments for our stack. We are using Apps in Any Namespace, and things have been working well since the transition to Argo. Our rollout of Argo has been very smooth, so much so that most users haven't actually needed to log in to it :).
One issue is that we delete these environments by operating on the namespace directly, either with kubectl or with k9s. Users are very familiar with deleting the namespace this way to blow away their environment.
These environments are also created using imperative code (at the moment), not an ApplicationSet.
Anyway, we have an issue where, maybe 2-3% of the time, a namespace is deleted during a sync and the Application then becomes stuck. The only solution seems to be to manually terminate all syncs (or remove all finalizers), roughly as in the sketch below.
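For context, the manual workaround looks something like this: stripping the finalizers from the stuck Application with client-go's dynamic client so deletion can complete. This is only an illustration of the manual fix, not a recommended pattern; the app name "my-app" and the "argocd" namespace are placeholders.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (placeholder setup for the sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	appGVR := schema.GroupVersionResource{
		Group:    "argoproj.io",
		Version:  "v1alpha1",
		Resource: "applications",
	}

	// Strip all finalizers from the stuck Application so namespace/app
	// deletion can proceed. "my-app" and "argocd" are placeholders.
	patch := []byte(`[{"op": "remove", "path": "/metadata/finalizers"}]`)
	_, err = client.Resource(appGVR).Namespace("argocd").Patch(
		context.TODO(), "my-app", types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("finalizers removed")
}
```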
Proposal
I'm a bit shaky on the specifics and full consequences of any particular proposal, but my initial guess would be to listen for delete events on Applications. If an Application is being terminated and is currently syncing, cancel the sync.
Then just let the rest of the code handle things as normal.
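A minimal sketch of that check, using pared-down stand-ins for Argo CD's types (the real application controller's structures and phase constants will differ, and `maybeCancelSync` is a hypothetical helper, not an existing function):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Application is a pared-down stand-in for Argo CD's Application type;
// only the fields relevant to this sketch are included.
type Application struct {
	ObjectMeta metav1.ObjectMeta
	Status     struct {
		OperationState *OperationState
	}
}

// OperationState mirrors the phase of a running sync operation.
type OperationState struct {
	Phase string // e.g. "Running", "Terminating", "Succeeded"
}

// maybeCancelSync is the core of the proposal: when an Application has a
// deletion timestamp (i.e. it is terminating) and a sync is still running,
// move the operation into a terminating phase and let the normal
// reconciliation path clean up from there.
func maybeCancelSync(app *Application) bool {
	if app.ObjectMeta.DeletionTimestamp == nil {
		return false // not being deleted; nothing to do
	}
	op := app.Status.OperationState
	if op == nil || op.Phase != "Running" {
		return false // no sync in flight
	}
	op.Phase = "Terminating" // request cancellation of the running sync
	return true
}

func main() {
	// Simulate an Application that is being deleted mid-sync.
	app := &Application{}
	now := metav1.Now()
	app.ObjectMeta.DeletionTimestamp = &now
	app.Status.OperationState = &OperationState{Phase: "Running"}
	if maybeCancelSync(app) {
		fmt.Println("sync cancellation requested; phase is now", app.Status.OperationState.Phase)
	}
}
```

The controller's existing termination path would then clean up the operation and allow finalization to proceed, which is what "let the rest of the code handle things" means here.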