
Missing regex validation upon cluster creation leaves the cluster unable to provision or delete #3874

Closed
mkarroqe opened this issue Aug 23, 2023 · 9 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@mkarroqe

/kind bug

What steps did you take and what happened:
When creating a cluster whose name contains a . character, no warning is generated that the name violates Azure's resource-name pattern. This leaves the cluster stuck in a failed provisioning state:

During creation, the only relevant output in the capz logs was the routine "Reconciling AzureManagedControlPlane" messages:

I0822 21:00:17.659085       1 azuremanagedmachinepool_controller.go:192] controllers.AzureManagedMachinePoolReconciler.Reconcile "msg"="AzureManagedControlPlane is not initialized" "AzureManagedMachinePool"={"name":"mp8cl5m","namespace":"default"} "controller"="azuremanagedmachinepool" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="AzureManagedMachinePool" "kind"="AzureManagedMachinePool" "name"="mp8cl5m" "namespace"="default" "ownerCluster"="test.cluster.name" "reconcileID"="419e9f39-a664-4b2d-b4e0-9853e4d1edd4" "x-ms-correlation-request-id"="9dc5fb5e-2027-415d-84db-03ef1646b4bf"
I0822 21:00:29.698002       1 azuremanagedcontrolplane_controller.go:190] controllers.AzureManagedControlPlaneReconciler.Reconcile "msg"="WARNING, You're using deprecated functionality: Using Azure credentials from the manager environment is deprecated and will be removed in future releases. Please specify an AzureClusterIdentity for the AzureManagedControlPlane instead, see: https://capz.sigs.k8s.io/topics/multitenancy.html " "AzureManagedControlPlane"={"name":"test.cluster.name","namespace":"default"} "cluster"="test.cluster.name" "controller"="azuremanagedcontrolplane" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="AzureManagedControlPlane" "kind"="AzureManagedControlPlane" "name"="test.cluster.name" "namespace"="default" "reconcileID"="9053f1b3-4a2a-4407-a5ea-b6227d48f4ea" "x-ms-correlation-request-id"="a6d4c1f8-d0e7-486a-9b1e-8e20544930c2"
I0822 21:00:29.698476       1 azuremanagedcontrolplane_controller.go:224] controllers.AzureManagedControlPlaneReconciler.reconcileNormal "msg"="Reconciling AzureManagedControlPlane" "AzureManagedControlPlane"={"name":"test.cluster.name","namespace":"default"} "controller"="azuremanagedcontrolplane" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="AzureManagedControlPlane" "name"="test.cluster.name" "namespace"="default" "reconcileID"="9053f1b3-4a2a-4407-a5ea-b6227d48f4ea" "x-ms-correlation-request-id"="a6d4c1f8-d0e7-486a-9b1e-8e20544930c2"

When attempting to delete, the deletion also fails, and only then does the following error appear in the capz logs:

Invalid input: autorest/validation: validation failed: parameter=resourceName constraint=Pattern value="test.cluster.name" details: value doesn't match pattern ^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$
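For context, that pattern accepts 1 to 63 characters, starting and ending with an alphanumeric, with only alphanumerics, hyphens, and underscores in between, so any dot in the name fails it. Below is a minimal standalone Go sketch checking a candidate name against that same pattern; the validateClusterName helper name is illustrative, not taken from the CAPZ codebase:

```go
package main

import (
	"fmt"
	"regexp"
)

// aksNamePattern is the exact pattern from the autorest/validation error above:
// either a single alphanumeric, or an alphanumeric followed by up to 61
// alphanumerics/hyphens/underscores and a closing alphanumeric (63 chars max).
var aksNamePattern = regexp.MustCompile(`^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`)

// validateClusterName is an illustrative helper, not actual CAPZ code.
func validateClusterName(name string) error {
	if !aksNamePattern.MatchString(name) {
		return fmt.Errorf("cluster name %q does not match pattern %s", name, aksNamePattern)
	}
	return nil
}

func main() {
	fmt.Println(validateClusterName("test.cluster.name")) // fails: dots are not allowed
	fmt.Println(validateClusterName("test-cluster-name")) // passes: <nil>
}
```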

Full error when deleting:

kubectl logs deploy/capz-controller-manager -n capz-system manager | grep test.cluster.name | grep err

I0822 21:09:23.700758       1 azuremanagedcontrolplane_controller.go:270] controllers.AzureManagedControlPlaneReconciler.reconcileDelete "msg"="Reconciling AzureManagedControlPlane delete" "AzureManagedControlPlane"={"name":"test.cluster.name","namespace":"default"} "controller"="azuremanagedcontrolplane" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="AzureManagedControlPlane" "name"="test.cluster.name" "namespace"="default" "reconcileID"="85735ebc-4722-4c62-8598-aa1f6598b449" "x-ms-correlation-request-id"="e827e76b-f717-497f-a3de-ce0877167500"
E0822 21:09:23.701328       1 controller.go:326]  "msg"="Reconciler error" "error"="error deleting AzureManagedControlPlane default/test.cluster.name: failed to delete AzureManagedControlPlane service managedcluster: failed to delete resource test.cluster.name/test.cluster.name (service: managedcluster): containerservice.ManagedClustersClient#Delete: Invalid input: autorest/validation: validation failed: parameter=resourceName constraint=Pattern value=\"test.cluster.name\" details: value doesn't match pattern ^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$" "AzureManagedControlPlane"={"name":"test.cluster.name","namespace":"default"} "controller"="azuremanagedcontrolplane" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="AzureManagedControlPlane" "name"="test.cluster.name" "namespace"="default" "reconcileID"="85735ebc-4722-4c62-8598-aa1f6598b449"

What did you expect to happen:
I expected there to be an error when creating the cluster, preventing me from attempting to provision in the first place.

Anything else you would like to add:
I have drafted some code changes to add a validation check for this when the cluster is created; I will push the PR up shortly. A sketch of the idea is below.
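One plausible shape for such a check, assuming it runs in a create-time validation path such as the AzureManagedControlPlane webhook. The function name, field path, and error message here are assumptions for illustration, not the actual drafted PR:

```go
// Sketch of a create-time name check; the surrounding webhook wiring is
// omitted, and this is not the author's actual change.
package webhooksketch

import (
	"regexp"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

var aksNamePattern = regexp.MustCompile(`^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`)

// validateName rejects names Azure would refuse later at provision or delete
// time, surfacing the problem when the resource is first created instead.
func validateName(name string) *field.Error {
	if !aksNamePattern.MatchString(name) {
		return field.Invalid(field.NewPath("metadata", "name"), name,
			"must be 1-63 characters, contain only alphanumerics, '-' and '_', and start and end with an alphanumeric")
	}
	return nil
}
```

Returning a field.Error from a create webhook would make the apply fail immediately with a descriptive message instead of leaving the cluster stuck.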

Environment:

  • cluster-api-provider-azure version: v1.8.5
  • Kubernetes version (use kubectl version): 1.26.3
  • OS (e.g. from /etc/os-release): macOS Ventura 13.3.1
@k8s-ci-robot added the kind/bug label Aug 23, 2023
@mboersma
Contributor

/priority backlog

@k8s-ci-robot added the priority/backlog label Aug 24, 2023
@CecileRobertMichon
Contributor

@mkarroqe thanks for opening this issue. There is some previous discussion in #1674 that you might find relevant.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 26, 2024
@mboersma
Contributor

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jan 29, 2024
@dtzar
Contributor

dtzar commented Apr 4, 2024

Related to #4699.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jul 3, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 2, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned Sep 1, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
