Currently, I assume there are three states of an actions-runner-controller deployment:

1. New CRDs + new controller
2. New CRDs + old controller
3. Old CRDs + new controller
1 and 2 are normal. 1 is the "most" normal, as that's the state you end up in after a successful upgrade. 2 is also normal, as we intend to upgrade CRDs before upgrading the controller.
Currently, we only cover case 1 in our acceptance test suite, so I'd like to add 2 and 3.
The addition of 3 is especially important: we don't test 3 at all today, and I believe that's why we ended up seeing issues like #427, #467, #468, and so on, which happened when someone upgraded the controller without upgrading the CRDs first.
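The three states above could be enumerated as a small test matrix. This is only a sketch under assumptions: the `matrix` helper and the manifest/chart names in the comments are hypothetical, not the real acceptance-suite layout.

```shell
#!/bin/sh
# Sketch: enumerate the CRD/controller version-skew combinations to test.
# Each pair is (CRD set, controller version); names are illustrative only.
matrix() {
  printf '%s\n' \
    "new-crds new-controller" \
    "new-crds old-controller" \
    "old-crds new-controller"
}

# For each combination, a real run would do roughly:
#   kubectl apply -f "crds-${crds}.yaml"      # install that CRD set (hypothetical file name)
#   helm upgrade --install arc "chart-${ctrl}" # deploy that controller (hypothetical chart name)
#   run_acceptance_tests                       # hypothetical helper asserting runners still work
matrix | while read -r crds ctrl; do
  echo "case: ${crds} + ${ctrl}"
done
```

Driving all cases from one loop keeps the suite honest: a new skew scenario is one more line in the matrix, not a new copy of the test steps.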
The idea for 3 is that we want the controller to fail gracefully when CRDs are outdated. If CRDs are outdated, it should just log errors and keep running, without e.g. taking down the entire K8s cluster or the controller deployment, until the CRDs are finally upgraded.
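The "case 3" assertion could be checked roughly like this. This is a hedged sketch: the namespace and deployment names are assumptions, and the kubectl commands in the comments would need a live cluster, so the runnable part below is only a pure-logic stand-in for the pod-status check.

```shell
#!/bin/sh
# Sketch: with outdated CRDs, the controller should stay up and only log
# errors. A real check against a cluster would look something like:
#
#   kubectl -n actions-runner-system rollout status \
#     deploy/controller-manager --timeout=120s        # names are assumptions
#   kubectl -n actions-runner-system get pods \
#     -o jsonpath='{..waiting.reason}' | grep -qv CrashLoopBackOff
#
# Pure-logic stand-in: classify a container waiting reason the way the
# acceptance check would.
classify() {
  case "$1" in
    CrashLoopBackOff) echo "broken" ;;
    *)                echo "ok" ;;
  esac
}

classify "CrashLoopBackOff"   # -> broken
classify ""                   # -> ok
```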
There may be a potential enhancement to the controller to print useful, friendly error messages that help the user notice they missed upgrading the CRDs. But that may be difficult to implement, and it's another story in any case.
mumoshu changed the title from "Improve acceptance test to cover more controller and CRD compatibility" to "Improve acceptance test to cover more controller and CRD compatibility matrix" on May 21, 2021.