
clusterctl fails with a nil pointer if no controlPlaneRef set in Cluster #8603

Closed
tobiasgiese opened this issue May 4, 2023 · 5 comments · Fixed by #8604
Labels
kind/bug Categorizes issue or PR as related to a bug. triage/accepted Indicates an issue or PR is ready to be actively worked on.

@tobiasgiese
Member

What steps did you take and what happened?

Usually this should never happen. During a migration from a non-CAPI-managed cluster to a CAPI-managed cluster I forgot to add the control plane reference in the Cluster object. This results in a nil pointer dereference, because the following code expects at least one control plane to be present:

// Adds control plane
controlPlane, err := external.Get(ctx, c, cluster.Spec.ControlPlaneRef, cluster.Namespace)
if err == nil {
	addControlPlane(cluster, controlPlane, tree, options)
}

To reproduce, we can simply remove the controlPlaneRef from the Cluster and run clusterctl describe cluster:

> kubectl patch cluster capi-quickstart --type='json' -p='[{"op": "remove", "path": "/spec/controlPlaneRef"}]'
cluster.cluster.x-k8s.io/capi-quickstart patched
> clusterctl describe cluster capi-quickstart -n default --grouping=false --show-conditions=all
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x96bdca]

goroutine 1 [running]:
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetAnnotations(0x1b4dd80?)
	k8s.io/apimachinery@…/pkg/apis/meta/v1/unstructured/unstructured.go:408 +0x4a
sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree.getAnnotation(...)
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree/annotations.go:125
sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree.getBoolAnnotation({0x21628c0?, 0x0?}, {0x1e45082?, 0x28?})
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree/annotations.go:130 +0x47
sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree.IsGroupingObject(...)
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree/annotations.go:71
sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree.ObjectTree.Add({{0x21629d8, 0xc000703380}, {{0x7fff1fa45472, 0x3}, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0}, ...}, ...)
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree/tree.go:123 +0x346
sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree.Discovery.func1({0x21628c0?, 0x0?}, 0xc0001cd500)
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree/discovery.go:107 +0x114
sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree.Discovery({0x21451d0, 0xc000126008}, {0x21502b8?, 0xc00035c700}, {0x7fff1fa45447, 0x7}, {0x7fff1fa45434, 0xf}, {{0x7fff1fa45472, 0x3}, ...})
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/tree/discovery.go:124 +0x6fd
sigs.k8s.io/cluster-api/cmd/clusterctl/client.(*clusterctlClient).DescribeCluster(0x0?, {{{0x0, 0x0}, {0x0, 0x0}}, {0x7fff1fa45447, 0x7}, {0x7fff1fa45434, 0xf}, {0x7fff1fa45472, ...}, ...})
	sigs.k8s.io/cluster-api/cmd/clusterctl/client/describe.go:91 +0x218
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.runDescribeCluster(0x0?, {0x7fff1fa45434, 0xf})
	sigs.k8s.io/cluster-api/cmd/clusterctl/cmd/describe_cluster.go:154 +0x1d8
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.glob..func6(0x3194820?, {0xc000800870?, 0x5?, 0x5?})
	sigs.k8s.io/cluster-api/cmd/clusterctl/cmd/describe_cluster.go:105 +0x2d
github.com/spf13/cobra.(*Command).execute(0x3194820, {0xc0008007d0, 0x5, 0x5})
	github.com/spf13/cobra@…/command.go:916 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0x3197340)
	github.com/spf13/cobra@…/command.go:1044 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@…/command.go:968
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.Execute()
	sigs.k8s.io/cluster-api/cmd/clusterctl/cmd/root.go:105 +0x25
main.main()
	sigs.k8s.io/cluster-api/cmd/clusterctl/main.go:27 +0x17
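
The first frame of the trace is the interesting one: GetAnnotations is invoked on a nil *unstructured.Unstructured and dereferences its receiver. A minimal standalone Go program reproduces the same panic; this only illustrates the mechanism and is not code taken from clusterctl:

package main

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

func main() {
	// Typed nil pointer, matching the receiver in the first stack frame above.
	var obj *unstructured.Unstructured
	// GetAnnotations reads fields of its receiver, so calling it on nil panics
	// with the same "invalid memory address or nil pointer dereference".
	_ = obj.GetAnnotations()
}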

What did you expect to happen?

At least an error stating that no control plane reference is set. Alternatively, clusterctl describe cluster could simply show that no valid KCP exists instead of panicking.
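
For reference, here is a minimal sketch of the kind of guard that would avoid the panic, assuming the discovery code simply checks the reference before looking up the control plane. The actual change is in #8604 and may differ from this:

// Sketch only: skip the control plane lookup when no reference is set,
// and never hand a nil object to the tree.
if cluster.Spec.ControlPlaneRef != nil {
	controlPlane, err := external.Get(ctx, c, cluster.Spec.ControlPlaneRef, cluster.Namespace)
	if err == nil && controlPlane != nil {
		addControlPlane(cluster, controlPlane, tree, options)
	}
}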

Cluster API version

main branch

Kubernetes version

No response

Anything else you would like to add?

/

Label(s) to be applied

/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 4, 2023
@killianmuldoon
Contributor

/triage accepted

We should try to catch nil pointers in all cases, IMO.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 4, 2023
@aniruddha2000
Contributor

Would like to work on this.
/assign

@tobiasgiese
Member Author

Would like to work on this. /assign

Sorry, didn't see that 😞 I was already implementing the fix while creating the issue.

@aniruddha2000 aniruddha2000 removed their assignment May 4, 2023
@aniruddha2000
Contributor

No worries! I would definitely like to see the solution once it's there

@tobiasgiese
Member Author

No worries! I would definitely like to see the solution once it's there

It's already there, see #8604

/assign
