
panic in scalesets.go: boot diagnostics #3200

Closed
mweibel opened this issue Feb 27, 2023 · 3 comments · Fixed by #3201
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@mweibel
Contributor

mweibel commented Feb 27, 2023

/kind bug


What steps did you take and what happened:
I upgraded from a previous version to 1.7.2. The capz-controller-manager pod now crashes with a nil pointer dereference:

panic: runtime error: invalid memory address or nil pointer dereference [recovered]
       panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1a61c32]

goroutine 758 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
       /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:118 +0x1f4
panic({0x2001e60, 0x3947280})
       /usr/local/go/src/runtime/panic.go:884 +0x212
sigs.k8s.io/cluster-api-provider-azure/azure/services/scalesets.(*Service).validateSpec(0xc001867380, {0x26a7ed8?, 0xc001867500?})
       /workspace/azure/services/scalesets/scalesets.go:393 +0xab2
sigs.k8s.io/cluster-api-provider-azure/azure/services/scalesets.(*Service).Reconcile(0xc001867380, {0x26a7ed8?, 0xc001867470?})
       /workspace/azure/services/scalesets/scalesets.go:85 +0xcb
sigs.k8s.io/cluster-api-provider-azure/exp/controllers.(*azureMachinePoolService).Reconcile(0xc0018673e0, {0x26a7ed8?, 0xc0014f6180?})
       /workspace/exp/controllers/azuremachinepool_reconciler.go:68 +0x1b5
sigs.k8s.io/cluster-api-provider-azure/exp/controllers.(*AzureMachinePoolReconciler).reconcileNormal(0xc000784dc0, {0x26a7ea0?, 0xc0014bfb00?}, 0xc001ed75e0, 0xc000a77880)
       /workspace/exp/controllers/azuremachinepool_controller.go:290 +0x25e
sigs.k8s.io/cluster-api-provider-azure/exp/controllers.(*AzureMachinePoolReconciler).Reconcile(0xc000784dc0, {0x26a7ed8?, 0xc001a6b980?}, {{{0xc000896100?, 0xe?}, {0xc0008960f0?, 0x7?}}})
       /workspace/exp/controllers/azuremachinepool_controller.go:253 +0xd66
sigs.k8s.io/cluster-api-provider-azure/pkg/coalescing.(*reconciler).Reconcile(0xc0004b3400, {0x26a7ed8?, 0xc001a6b8c0?}, {{{0xc000896100?, 0x10?}, {0xc0008960f0?, 0x40dc87?}}})
       /workspace/pkg/coalescing/reconciler.go:109 +0x3ee
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x26a7e30?, {0x26a7ed8?, 0xc001a6b8c0?}, {{{0xc000896100?, 0x22200e0?}, {0xc0008960f0?, 0x404554?}}})
       /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:121 +0xc8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0002c8820, {0x26a7e30, 0xc000a62100}, {0x20b6820?, 0xc000543460?})
       /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:320 +0x33c
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0002c8820, {0x26a7e30, 0xc000a62100})
       /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
       /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
       /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:230 +0x333

What did you expect to happen:
No panic.

Anything else you would like to add:
We have nothing configured for boot diagnostics.
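Based on the stack trace, `validateSpec` in `scalesets.go:393` appears to dereference the boot diagnostics settings without checking whether they are set. A minimal sketch of the guard the fix needs is below; the type and field names (`BootDiagnostics`, `DiagnosticsProfile`, `StorageAccountURI`) are illustrative, not the actual CAPZ API types.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the scale set spec types;
// the real definitions live in azure/services/scalesets.
type BootDiagnostics struct {
	StorageAccountURI string
}

type ScaleSetSpec struct {
	DiagnosticsProfile *BootDiagnostics // nil when nothing is configured
}

// validateSpec sketches the missing guard: check the pointer before
// dereferencing, instead of assuming boot diagnostics are always set.
func validateSpec(spec *ScaleSetSpec) error {
	if spec.DiagnosticsProfile == nil {
		return nil // nothing configured, so nothing to validate
	}
	if spec.DiagnosticsProfile.StorageAccountURI == "" {
		return fmt.Errorf("boot diagnostics enabled but no storage account URI set")
	}
	return nil
}

func main() {
	// Mirrors this issue: no boot diagnostics configured at all.
	fmt.Println(validateSpec(&ScaleSetSpec{})) // prints <nil> instead of panicking
}
```

With the nil check in place, an unset `DiagnosticsProfile` simply skips validation rather than crashing the whole reconcile loop.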

Environment:

  • cluster-api-provider-azure version: 1.7.2
  • Kubernetes version (from kubectl version): 1.23
  • OS (e.g. from /etc/os-release): linux
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Feb 27, 2023
@mweibel
Contributor Author

mweibel commented Feb 27, 2023

As a user, I also wonder why boot diagnostics are enabled by default, since that changes the behaviour from before the feature was implemented.

mweibel added a commit to helio/cluster-api-provider-azure that referenced this issue Feb 27, 2023
@nawazkh
Member

nawazkh commented Feb 27, 2023

I wonder (as a user), too, why the boot diagnostics are enabled by default since that changes behaviour from before the feature got implemented.

Boot diagnostics have been enabled ever since PR #901; they were made optional with the introduction of #2401. Are you looking for the reasoning prior to #901?

@mweibel
Contributor Author

mweibel commented Feb 28, 2023

@nawazkh I somehow thought the default was disabled before, but looking at the code and the dates, it seems I misremembered. Never mind then, and thanks for linking the right PRs.
