"cannot set blockOwnerDeletion" when using OpenShift as management cluster #4880
Comments
This appears to relate to the OwnerReferencesPermissionEnforcement admission controller (https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement). It affects other objects as well.
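To make the interaction concrete (a minimal sketch, not taken from the issue; names and UID are placeholders): CAPI controllers set owner references with `blockOwnerDeletion: true` on the objects they own, and with OwnerReferencesPermissionEnforcement enabled the API server only accepts that if the requester can `update` the `finalizers` subresource of the referenced owner.

```yaml
# Sketch of the kind of ownerReference a CAPI controller sets on an owned
# object. Setting blockOwnerDeletion: true is what triggers the
# OwnerReferencesPermissionEnforcement check against clusters/finalizers.
metadata:
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1alpha4
    kind: Cluster
    name: example-cluster                       # placeholder
    uid: 00000000-0000-0000-0000-000000000000   # placeholder
    controller: true
    blockOwnerDeletion: true
```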
I have worked around this problem by adding the finalizers sub-resources to the manager's role, i.e.

```yaml
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - clusters
  - clusters/status
  - clusters/finalizers
```

and the same for machinedeployments, machinehealthchecks, machinepools, machines, and machinesets.
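The specific permission the admission controller checks before allowing `blockOwnerDeletion` to be set is `update` on the owner's `finalizers` subresource, so a minimal rule covering just the Cluster case might look like the sketch below (verb set assumed, not quoted from the issue):

```yaml
# Minimal sketch: grants the permission OwnerReferencesPermissionEnforcement
# requires before a requester may set blockOwnerDeletion on an ownerReference
# that points at a cluster.x-k8s.io Cluster.
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - clusters/finalizers
  verbs:
  - update
```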
Do we know when the finalizer enforcement was introduced? cc @fabriziopandini @randomvariable
/milestone v0.4
Is it possible that non-OpenShift clusters don't normally have OwnerReferencesPermissionEnforcement enabled, and OpenShift clusters aren't typically used as CAPI management clusters? i.e. it's latent, but sufficiently uncommon that you just don't see it often?
We have e2e tests on all the latest versions of Kubernetes. If this feature needs to be explicitly enabled, we probably haven't caught it yet.
ooh, interesting, let me find out.
The default list of admission controllers, as of v1.21, does not include OwnerReferencesPermissionEnforcement. I do think this is because OpenShift turns on some additional ones by default, and it has caught out some controllers in k/k as recently as April.

Kubebuilder nowadays defaults to adding the finalizer permissions; we probably started before that was included in the templating. Makes sense to add the same annotations.
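For testing on a vanilla cluster, a minimal sketch of a kind configuration that enables the plugin, assuming kind's `kubeadmConfigPatches` mechanism (kubeadm passes the listed plugins in addition to the built-in defaults):

```yaml
# Sketch: kind cluster config that turns on OwnerReferencesPermissionEnforcement
# on the API server, mimicking OpenShift's stricter defaults.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      enable-admission-plugins: "OwnerReferencesPermissionEnforcement"
```

A management cluster created from such a config should reproduce the forbidden `blockOwnerDeletion` update whenever the controller's ClusterRole lacks `finalizers` rules like the ones above.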
/lifecycle active
What steps did you take and what happened:
Attempted to create a cluster using clusterctl v0.4.0 and the openstack:v0.4.0-beta.0 infrastructure provider, with OpenShift 4.9 as the management cluster.
capi-controller-manager fails with the "cannot set blockOwnerDeletion" error quoted in the title.
What did you expect to happen:
Success.
Anything else you would like to add:
This appears to be the same as #3274, which was using AWS in place of OpenStack. It wouldn't appear to be a CAPO issue.
Environment:
- Minikube/KIND version: N/A (OpenShift 4.9 Nightly)
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`): RHCOS
/kind bug