Enable feature flag disabled error reporting for AzureMachinePool, AzureManagedCluster #2207
Comments
@CecileRobertMichon: Guidelines: please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
/unassign @meghanajangi
Please feel free to reassign if you pick this up again.
/assign @Prajyot-Parab |
Copied from kubernetes-sigs/cluster-api#6331
User Story
As a user, I would like to get transparent errors about feature flags on object creation.
Detailed Description
To get clear errors about feature flag status, we can enable the webhooks for these objects even when the associated feature flag is not enabled. The open question is whether enabling the webhooks in this way violates users' expectations of how feature flags behave.
All we're enabling here is reporting the status of the feature flag through the webhook, so IMO this should not be a problem.
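As a rough sketch of what such a webhook-based check could look like (assuming the Cluster API feature-gate helpers in sigs.k8s.io/cluster-api/feature; the stand-in type, group/resource, and error wording below are placeholders, not the actual CAPZ code):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/cluster-api/feature"
)

// machinePoolStandIn is a placeholder for the real API type; only the fields
// the sketch needs are included.
type machinePoolStandIn struct {
	Name string
}

// validateFeatureGate returns a clear, user-facing error when the MachinePool
// feature flag is disabled. A webhook's ValidateCreate/ValidateUpdate would run
// this check before any other validation, so the user sees why the object was
// rejected instead of an opaque webhook failure.
func validateFeatureGate(m *machinePoolStandIn) error {
	if !feature.Gates.Enabled(feature.MachinePool) {
		return apierrors.NewForbidden(
			schema.GroupResource{Group: "cluster.x-k8s.io", Resource: "machinepools"},
			m.Name,
			fmt.Errorf("can only be created if the MachinePool feature flag is enabled"),
		)
	}
	return nil
}

func main() {
	// With the flag disabled, this prints a readable "forbidden" error that
	// names the feature flag.
	if err := validateFeatureGate(&machinePoolStandIn{Name: "example"}); err != nil {
		fmt.Println(err)
	}
}
```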
Additional Details
Currently there are two patterns in Cluster API for dealing with webhooks and feature flags (both are sketched below):
Pattern 1: Don't start the webhook
MachinePools, ClusterResourceSet
Error:
Pattern 2: Start the webhook but refuse object creation and updates.
Error:
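Concretely, the difference between the two patterns shows up at webhook setup time in the manager's main.go. A minimal sketch of both (assuming the usual controller-runtime/Cluster API conventions; the package paths, type names, and SetupWebhookWithManager call follow the common kubebuilder pattern and may not match the current code exactly):

```go
package webhooksetup

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"

	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
	"sigs.k8s.io/cluster-api/feature"
)

var setupLog = ctrl.Log.WithName("setup")

// setupWebhooksPattern1 only registers the webhook when the feature flag is
// enabled. With the flag off, a create attempt typically fails with an opaque
// webhook error, because the configured endpoint is never served.
func setupWebhooksPattern1(mgr ctrl.Manager) {
	if feature.Gates.Enabled(feature.MachinePool) {
		if err := (&expv1.MachinePool{}).SetupWebhookWithManager(mgr); err != nil {
			setupLog.Error(err, "unable to create webhook", "webhook", "MachinePool")
			os.Exit(1)
		}
	}
}

// setupWebhooksPattern2 always registers the webhook; the flag check lives
// inside ValidateCreate/ValidateUpdate (as in the sketch above), so the user
// gets a readable error that names the disabled feature flag.
func setupWebhooksPattern2(mgr ctrl.Manager) {
	if err := (&expv1.MachinePool{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "MachinePool")
		os.Exit(1)
	}
}
```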
Steps to implement
Both patterns block usage of the feature, but the error message from Pattern 2 is readable and tells users exactly what happened. As a user, I would prefer to see this error so that I understand what changes I need to make to the system.
To do this, we would need to:
An example PR that fixes this in Cluster API: kubernetes-sigs/cluster-api#6348
/good-first-issue
/kind cleanup