
panic when exec "pd health" with tls #8237

Closed
Lily2025 opened this issue Jun 3, 2024 · 4 comments · Fixed by #8239
Labels: affects-8.1 (This bug affects the 8.1.x (LTS) versions), severity/major, type/bug (The issue is confirmed as a bug)

Comments

Lily2025 commented Jun 3, 2024

Bug Report

What did you do?

1. Deploy a cluster with tiup.
2. Run `pd health` with TLS:

   `tiup ctl:nightly pd -u https://pd1-peer:2379 health --cacert /root/.tiup/storage/cluster/clusters/tidbcluster/tls/ca.crt --cert /root/.tiup/storage/cluster/clusters/tidbcluster/tls/client.crt --key /root/.tiup/storage/cluster/clusters/tidbcluster/tls/client.pem`

   Output: `Starting component ctl: /root/.tiup/components/ctl/v8.2.0-alpha-nightly/ctl pd -u https://pd1-peer:2379 health --cacert /root/.tiup/storage/cluster/clusters/tidbcluster/tls/ca.crt --cert /root/.tiup/storage/cluster/clusters/tidbcluster/tls/client.crt --key /root/.tiup/storage/cluster/clusters/tidbcluster/tls/client.pem`

What did you expect to see?

The command should succeed.

What did you see instead?

The command panics:

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x98 pc=0x11afa05]

goroutine 1 [running]:
github.com/tikv/pd/tools/pd-ctl/pdctl/command.showHealthCommandFunc(0xc000599800, {0x14c4cf6?, 0x4?, 0x14c4bae?})
	/workspace/source/pd/tools/pd-ctl/pdctl/command/health_command.go:33 +0x25
github.com/spf13/cobra.(*Command).execute(0xc000599800, {0xc00034af00, 0x8, 0x8})
	/root/go/pkg/mod/github.com/spf13/[email protected]/command.go:987 +0xaa3
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000bb800)
	/root/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
	/root/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039
github.com/tikv/pd/tools/pd-ctl/pdctl.MainStart({0xc00012c0b0, 0x9, 0x9})
	/workspace/source/pd/tools/pd-ctl/pdctl/ctl.go:105 +0x228
main.main()
	/workspace/source/pd/tools/pd-ctl/main.go:67 +0x3cf
```
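For context, the trace shows a nil pointer dereference inside `showHealthCommandFunc`. The following is a minimal, hypothetical sketch of that class of bug and the guard that avoids it; the names (`pdClient`, `newClientWithTLS`, `showHealth`) are illustrative only and do not claim to match pd-ctl's actual code or the fix in #8239.

```go
// Illustrative sketch: a command handler dereferencing a client that was
// never initialized (e.g. because TLS setup failed), and the nil guard
// that turns the SIGSEGV into a readable error.
package main

import (
	"errors"
	"fmt"
)

type client struct{ addr string }

func (c *client) getHealth() (string, error) { return "ok", nil }

// newClientWithTLS stands in for the initialization path that can fail
// (or be skipped) when the TLS flags are passed.
func newClientWithTLS(addr, ca, cert, key string) (*client, error) {
	if ca == "" || cert == "" || key == "" {
		return nil, errors.New("incomplete TLS config")
	}
	return &client{addr: addr}, nil
}

var pdClient *client // may stay nil if initialization failed or was skipped

func showHealth() error {
	// Without this guard, pdClient.getHealth() panics with a nil pointer
	// dereference, which is the failure mode in the reported trace.
	if pdClient == nil {
		return errors.New("PD client is not initialized; check the -u/--pd address and TLS flags")
	}
	out, err := pdClient.getHealth()
	if err != nil {
		return err
	}
	fmt.Println(out)
	return nil
}

func main() {
	// Simulate the failure: initialization fails, pdClient stays nil,
	// and the handler must not dereference it.
	if cli, err := newClientWithTLS("https://pd1-peer:2379", "", "", ""); err == nil {
		pdClient = cli
	}
	if err := showHealth(); err != nil {
		fmt.Println("error:", err)
	}
}
```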

What version of PD are you using (pd-server -V)?

"binary_version": "v8.2.0-alpha-63-g19c9852",
"git_hash": "19c9852decda4cb49a2319b453c4f01c6a26014f"
Lily2025 added the type/bug label on Jun 3, 2024

Lily2025 commented Jun 3, 2024

/severity major

okJiang (Member) commented Jun 3, 2024

@JmPotato why do we need to check whether the cluster information was fetched successfully before initializing the PD client? The PD client already supports service discovery. 🤔

JmPotato (Member) commented Jun 3, 2024

> @JmPotato why do we need to check whether the cluster information was fetched successfully before initializing the PD client? The PD client already supports service discovery. 🤔

Because even in interactive mode we can use -u/--pd to point each command at a different cluster, and each PD client can only connect to a single cluster, it's necessary to check for every command whether the client needs to be reinitialized.
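To make that reasoning concrete, here is a hedged sketch of such a per-command reinitialization check; the helper names (`cachedAddr`, `cachedCli`, `ensureClient`, `dial`) are hypothetical and not pd-ctl's actual API.

```go
// Sketch: rebuild the client only when the target PD address changes,
// since a client instance is bound to a single cluster.
package main

import "fmt"

type pdClient struct{ addr string }

func dial(addr string) (*pdClient, error) { return &pdClient{addr: addr}, nil }

var (
	cachedAddr string
	cachedCli  *pdClient
)

// ensureClient is called once per command: in interactive mode the user can
// pass a different -u/--pd for each command, so the cached client is reused
// only when the requested address matches the one it was built for.
func ensureClient(addr string) (*pdClient, error) {
	if cachedCli != nil && cachedAddr == addr {
		return cachedCli, nil // same cluster, reuse the existing client
	}
	cli, err := dial(addr)
	if err != nil {
		return nil, err // keep the old client untouched; caller handles the error
	}
	cachedAddr, cachedCli = addr, cli
	return cli, nil
}

func main() {
	for _, addr := range []string{"https://pd1-peer:2379", "https://pd1-peer:2379", "https://pd2-peer:2379"} {
		cli, err := ensureClient(addr)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println("using client for", cli.addr)
	}
}
```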

ti-chi-bot bot added a commit that referenced this issue Jul 29, 2024
ref #7300, close #8237

fix panic when call pd-ctl cluster with tls

Signed-off-by: ti-chi-bot <[email protected]>
Signed-off-by: okJiang <[email protected]>

Co-authored-by: okJiang <[email protected]>
Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com>