
KubeadmControlPlane crashes in GetWorkloadCluster if remote.RestConfig is nil #2754

Closed
sadysnaat opened this issue Mar 23, 2020 · 4 comments · Fixed by #2757
Labels:
  • area/control-plane: Issues or PRs related to control-plane lifecycle management
  • kind/bug: Categorizes issue or PR as related to a bug.
  • priority/critical-urgent: Highest priority. Must be actively worked on as someone's top priority right now.
Milestone: v0.3.3
Assignee: sedefsavas

Comments

@sadysnaat (Contributor)

What steps did you take and what happened:
Try deploying a cluster using release 0.3.2; the KubeadmControlPlane controller panics with a nil pointer dereference:

E0323 15:13:08.749266       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 340 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x16ac540, 0x2703060)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x16ac540, 0x2703060)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
sigs.k8s.io/cluster-api/controlplane/kubeadm/internal.(*Management).GetWorkloadCluster(0xc0003056e0, 0x1b085e0, 0xc0000d2060, 0xc00064c810, 0xb, 0xc00064c7f0, 0xb, 0xc000638000, 0x1, 0x1, ...)
	/workspace/controlplane/kubeadm/internal/cluster.go:73 +0xb6
sigs.k8s.io/cluster-api/controlplane/kubeadm/controllers.(*KubeadmControlPlaneReconciler).updateStatus(0xc0006bb020, 0x1b085e0, 0xc0000d2060, 0xc00025e280, 0xc000718780, 0x100000000000000, 0x1ad9ce0)
	/workspace/controlplane/kubeadm/controllers/kubeadm_control_plane_controller.go:333 +0x488
sigs.k8s.io/cluster-api/controlplane/kubeadm/controllers.(*KubeadmControlPlaneReconciler).Reconcile.func1(0xc00046dca8, 0xc00046dc98, 0xc0006bb020, 0x1b085e0, 0xc0000d2060, 0xc00025e280, 0xc000718780, 0x1b185e0, 0xc0003af7a0, 0xc0002f4870)
	/workspace/controlplane/kubeadm/controllers/kubeadm_control_plane_controller.go:175 +0xea
sigs.k8s.io/cluster-api/controlplane/kubeadm/controllers.(*KubeadmControlPlaneReconciler).Reconcile(0xc0006bb020, 0xc000765ed0, 0xb, 0xc000765eb0, 0xe, 0xc000474c00, 0x0, 0x1ac7e20, 0xc0006991e0)
	/workspace/controlplane/kubeadm/controllers/kubeadm_control_plane_controller.go:194 +0x707
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0001e40c0, 0x170d900, 0xc0000c8060, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0001e40c0, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0001e40c0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000573af0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000573af0, 0x3b9aca00, 0x0, 0x1, 0xc0006be0c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000573af0, 0x3b9aca00, 0xc0006be0c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x328
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1e0 pc=0x151fea6]

goroutine 340 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x16ac540, 0x2703060)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
sigs.k8s.io/cluster-api/controlplane/kubeadm/internal.(*Management).GetWorkloadCluster(0xc0003056e0, 0x1b085e0, 0xc0000d2060, 0xc00064c810, 0xb, 0xc00064c7f0, 0xb, 0xc000638000, 0x1, 0x1, ...)
	/workspace/controlplane/kubeadm/internal/cluster.go:73 +0xb6
sigs.k8s.io/cluster-api/controlplane/kubeadm/controllers.(*KubeadmControlPlaneReconciler).updateStatus(0xc0006bb020, 0x1b085e0, 0xc0000d2060, 0xc00025e280, 0xc000718780, 0x100000000000000, 0x1ad9ce0)
	/workspace/controlplane/kubeadm/controllers/kubeadm_control_plane_controller.go:333 +0x488
sigs.k8s.io/cluster-api/controlplane/kubeadm/controllers.(*KubeadmControlPlaneReconciler).Reconcile.func1(0xc00046dca8, 0xc00046dc98, 0xc0006bb020, 0x1b085e0, 0xc0000d2060, 0xc00025e280, 0xc000718780, 0x1b185e0, 0xc0003af7a0, 0xc0002f4870)
	/workspace/controlplane/kubeadm/controllers/kubeadm_control_plane_controller.go:175 +0xea
sigs.k8s.io/cluster-api/controlplane/kubeadm/controllers.(*KubeadmControlPlaneReconciler).Reconcile(0xc0006bb020, 0xc000765ed0, 0xb, 0xc000765eb0, 0xe, 0xc000474c00, 0x0, 0x1ac7e20, 0xc0006991e0)
	/workspace/controlplane/kubeadm/controllers/kubeadm_control_plane_controller.go:194 +0x707
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0001e40c0, 0x170d900, 0xc0000c8060, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0001e40c0, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0001e40c0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000573af0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000573af0, 0x3b9aca00, 0x0, 0x1, 0xc0006be0c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000573af0, 0x3b9aca00, 0xc0006be0c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x328

What did you expect to happen:
No panic. The assignment to restConfig.Timeout should be done after the error check; as written, the code dereferences restConfig before checking err:

restConfig, err := remote.RESTConfig(ctx, m.Client, clusterKey)
restConfig.Timeout = 30 * time.Second // panics here: restConfig is nil when err is non-nil
if err != nil {
	return nil, err
}
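
A minimal sketch of the suggested reordering, checking err before the first dereference of restConfig (this issue is marked as fixed by #2757):

restConfig, err := remote.RESTConfig(ctx, m.Client, clusterKey)
if err != nil {
	return nil, err
}
// restConfig is only dereferenced once err is known to be nil.
restConfig.Timeout = 30 * time.Second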

Anything else you would like to add:
No

Environment:

  • Cluster-api version: 0.3.2
  • Minikube/KIND version: kind v0.7.0 go1.13.6 darwin/amd64
  • Kubernetes version: (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release):
    Darwin Deepaks-MacBook-Pro.local 19.3.0 Darwin Kernel Version 19.3.0: Thu Jan 9 20:58:23 PST 2020; root:xnu-6153.81.5~1/RELEASE_X86_64 x86_64

/kind bug

@k8s-ci-robot added the kind/bug label Mar 23, 2020
@vincepri (Member)

/priority critical-urgent
/milestone v0.3.3

@fabriziopandini @sedefsavas any of you want to take this one? Should be a pretty easy fix + unit test

@k8s-ci-robot added the priority/critical-urgent label Mar 23, 2020
@k8s-ci-robot added this to the v0.3.3 milestone Mar 23, 2020
@vincepri (Member)

/area control-plane

@k8s-ci-robot added the area/control-plane label Mar 23, 2020
@sedefsavas

/assign

@vincepri added and then removed the area/clusterctl label Mar 23, 2020
sedefsavas pushed a commit to sedefsavas/cluster-api that referenced this issue Mar 23, 2020
@sedefsavas

I am unable to add a unit test for GetWorkloadCluster() because it uses the controller-runtime client, so I just added the fix.
