
v3.3.0-rc.0 endpoint health --cluster with auth requires password input twice #9094

Closed
OPSTime opened this issue Jan 4, 2018 · 13 comments

@OPSTime commented Jan 4, 2018

$ /usr/local/etcd/etcdctl --user root endpoint health --cluster
Password:
Password:
http://192.168.0.82:2379 is healthy: successfully committed proposal: took = 2.133151ms
http://192.168.0.81:2379 is healthy: successfully committed proposal: took = 2.089219ms
http://192.168.0.83:2379 is healthy: successfully committed proposal: took = 2.37011ms
$ /usr/local/etcd/etcdctl --user root endpoint health
Password:
127.0.0.1:2379 is healthy: successfully committed proposal: took = 2.230373ms

@gyuho commented Jan 4, 2018

What's the etcd version?

@OPSTime commented Jan 4, 2018

etcdctl version: 3.3.0
API version: 3.3

@gyuho commented Jan 4, 2018

The 3.3.0-rc.0 version string was wrong. What's the output of:

etcd --version
ETCDCTL_API=3 etcdctl version

@OPSTime commented Jan 4, 2018

$ /usr/local/etcd/etcd --version
etcd Version: 3.3.0
Git SHA: f7a395f
Go Version: go1.9.2
Go OS/Arch: linux/amd64

$ ETCDCTL_API=3 /usr/local/etcd/etcdctl version
etcdctl version: 3.3.0
API version: 3.3

@OPSTime commented Jan 4, 2018

Another issue, about restore:

$ rm -rf /data/etcd/intranet-test.*
$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/83.snapshot.db --data-dir /data/etcd/intranet-test.data
$ sudo systemctl start etcd

log:

Jan 04 15:50:05 node3 systemd[1]: Starting Etcd Server...
Jan 04 15:50:05 node3 etcd[9237]: Loading server configuration from "/usr/local/etcd/conf/etcd.conf"
Jan 04 15:50:05 node3 etcd[9237]: etcd Version: 3.3.0
Jan 04 15:50:05 node3 etcd[9237]: Git SHA: f7a395f
Jan 04 15:50:05 node3 etcd[9237]: Go Version: go1.9.2
Jan 04 15:50:05 node3 etcd[9237]: Go OS/Arch: linux/amd64
Jan 04 15:50:05 node3 etcd[9237]: setting maximum number of CPUs to 2, total number of available CPUs is 2
Jan 04 15:50:05 node3 etcd[9237]: the server is already initialized as member before, starting as etcd member...
Jan 04 15:50:05 node3 etcd[9237]: listening for peers on http://192.168.0.83:2380
Jan 04 15:50:05 node3 etcd[9237]: pprof is enabled under /debug/pprof
Jan 04 15:50:05 node3 etcd[9237]: listening for client requests on 192.168.0.83:2379
Jan 04 15:50:05 node3 etcd[9237]: listening for client requests on 127.0.0.1:2379
Jan 04 15:50:05 node3 etcd[9237]: name = node3
Jan 04 15:50:05 node3 etcd[9237]: data dir = /data/etcd/intranet-test.data
Jan 04 15:50:05 node3 etcd[9237]: member dir = /data/etcd/intranet-test.data/member
Jan 04 15:50:05 node3 etcd[9237]: dedicated WAL dir = /data/etcd/intranet-test.wal.data
Jan 04 15:50:05 node3 etcd[9237]: heartbeat = 100ms
Jan 04 15:50:05 node3 etcd[9237]: election = 1000ms
Jan 04 15:50:05 node3 etcd[9237]: snapshot count = 10000
Jan 04 15:50:05 node3 etcd[9237]: advertise client URLs = http://192.168.0.83:2379
Jan 04 15:50:05 node3 etcd[9237]: starting member c4cf1eeedaaed948 in cluster 8f718bddbdb82c07
Jan 04 15:50:05 node3 etcd[9237]: c4cf1eeedaaed948 became follower at term 0
Jan 04 15:50:05 node3 etcd[9237]: newRaft c4cf1eeedaaed948 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Jan 04 15:50:05 node3 etcd[9237]: c4cf1eeedaaed948 became follower at term 1
Jan 04 15:50:05 node3 etcd[9237]: restore compact to 6
Jan 04 15:50:05 node3 etcd[9237]: simple token is not cryptographically signed
Jan 04 15:50:05 node3 etcd[9237]: started HTTP pipelining with peer 736948b5f5c09617
Jan 04 15:50:05 node3 etcd[9237]: started HTTP pipelining with peer 8cd04f584be13fc0
Jan 04 15:50:05 node3 etcd[9237]: starting peer 736948b5f5c09617...
Jan 04 15:50:05 node3 etcd[9237]: started HTTP pipelining with peer 736948b5f5c09617
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (writer)
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (writer)
Jan 04 15:50:05 node3 etcd[9237]: started peer 736948b5f5c09617
Jan 04 15:50:05 node3 etcd[9237]: added peer 736948b5f5c09617
Jan 04 15:50:05 node3 etcd[9237]: starting peer 8cd04f584be13fc0...
Jan 04 15:50:05 node3 etcd[9237]: started HTTP pipelining with peer 8cd04f584be13fc0
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (stream MsgApp v2 reader)
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (writer)
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (stream Message reader)
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (writer)
Jan 04 15:50:05 node3 etcd[9237]: started peer 8cd04f584be13fc0
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (stream MsgApp v2 reader)
Jan 04 15:50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (stream Message reader)
Jan 04 15:50:05 node3 etcd[9237]: added peer 8cd04f584be13fc0
Jan 04 15:50:05 node3 etcd[9237]: starting server... [version: 3.3.0, cluster version: to_be_decided]
Jan 04 15:50:05 node3 etcd[9237]: peer 736948b5f5c09617 became active
Jan 04 15:50:05 node3 etcd[9237]: established a TCP streaming connection with peer 736948b5f5c09617 (stream Message reader)
Jan 04 15:50:05 node3 etcd[9237]: established a TCP streaming connection with peer 736948b5f5c09617 (stream MsgApp v2 reader)
Jan 04 15:50:05 node3 etcd[9237]: c4cf1eeedaaed948 [term: 1] received a MsgHeartbeat message with higher term from 736948b5f5c09617 [term: 81]
Jan 04 15:50:05 node3 etcd[9237]: c4cf1eeedaaed948 became follower at term 81
Jan 04 15:50:05 node3 etcd[9237]: tocommit(288025) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
Jan 04 15:50:05 node3 bash[9237]: panic: tocommit(288025) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
Jan 04 15:50:05 node3 systemd[1]: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 04 15:50:05 node3 systemd[1]: Failed to start Etcd Server.
Jan 04 15:50:05 node3 systemd[1]: Unit etcd.service entered failed state.
Jan 04 15:50:05 node3 systemd[1]: etcd.service failed.
Jan 04 15:50:06 node3 systemd[1]: etcd.service holdoff time over, scheduling restart.
Jan 04 15:50:06 node3 systemd[1]: Starting Etcd Server...
Jan 04 15:50:06 node3 etcd[9253]: Loading server configuration from "/usr/local/etcd/conf/etcd.conf"
Jan 04 15:50:06 node3 etcd[9253]: etcd Version: 3.3.0
Jan 04 15:50:06 node3 etcd[9253]: Git SHA: f7a395f
Jan 04 15:50:06 node3 etcd[9253]: Go Version: go1.9.2
Jan 04 15:50:06 node3 etcd[9253]: Go OS/Arch: linux/amd64
Jan 04 15:50:06 node3 etcd[9253]: setting maximum number of CPUs to 2, total number of available CPUs is 2
Jan 04 15:50:06 node3 etcd[9253]: the server is already initialized as member before, starting as etcd member...
Jan 04 15:50:06 node3 etcd[9253]: listening for peers on http://192.168.0.83:2380
Jan 04 15:50:06 node3 etcd[9253]: pprof is enabled under /debug/pprof
Jan 04 15:50:06 node3 etcd[9253]: listening for client requests on 192.168.0.83:2379
Jan 04 15:50:06 node3 etcd[9253]: listening for client requests on 127.0.0.1:2379
Jan 04 15:50:06 node3 etcd[9253]: recovered store from snapshot at index 1
Jan 04 15:50:06 node3 etcd[9253]: restore compact to 6
Jan 04 15:50:06 node3 etcd[9253]: name = node3
Jan 04 15:50:06 node3 etcd[9253]: data dir = /data/etcd/intranet-test.data
Jan 04 15:50:06 node3 etcd[9253]: member dir = /data/etcd/intranet-test.data/member
Jan 04 15:50:06 node3 systemd[1]: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 04 15:50:06 node3 systemd[1]: Failed to start Etcd Server.
Jan 04 15:50:06 node3 systemd[1]: Unit etcd.service entered failed state.
Jan 04 15:50:06 node3 systemd[1]: etcd.service failed.
Jan 04 15:50:06 node3 systemd[1]: etcd.service holdoff time over, scheduling restart.
Jan 04 15:50:06 node3 systemd[1]: Starting Etcd Server...
Jan 04 15:50:06 node3 etcd[9262]: Loading server configuration from "/usr/local/etcd/conf/etcd.conf"
Jan 04 15:50:06 node3 etcd[9262]: etcd Version: 3.3.0
Jan 04 15:50:06 node3 etcd[9262]: Git SHA: f7a395f
Jan 04 15:50:06 node3 etcd[9262]: Go Version: go1.9.2
Jan 04 15:50:06 node3 etcd[9262]: Go OS/Arch: linux/amd64
Jan 04 15:50:06 node3 etcd[9262]: setting maximum number of CPUs to 2, total number of available CPUs is 2
Jan 04 15:50:06 node3 etcd[9262]: the server is already initialized as member before, starting as etcd member...
Jan 04 15:50:06 node3 etcd[9262]: listening for peers on http://192.168.0.83:2380
Jan 04 15:50:06 node3 etcd[9262]: pprof is enabled under /debug/pprof
Jan 04 15:50:06 node3 etcd[9262]: listening for client requests on 192.168.0.83:2379
Jan 04 15:50:06 node3 etcd[9262]: listening for client requests on 127.0.0.1:2379
Jan 04 15:50:06 node3 etcd[9262]: recovered store from snapshot at index 1
Jan 04 15:50:06 node3 etcd[9262]: restore compact to 6
Jan 04 15:50:06 node3 etcd[9262]: name = node3
Jan 04 15:50:06 node3 etcd[9262]: data dir = /data/etcd/intranet-test.data
Jan 04 15:50:06 node3 etcd[9262]: member dir = /data/etcd/intranet-test.data/member
Jan 04 15:50:06 node3 etcd[9262]: dedicated WAL dir = /data/etcd/intranet-test.wal.data
Jan 04 15:50:06 node3 etcd[9262]: heartbeat = 100ms
Jan 04 15:50:06 node3 etcd[9262]: election = 1000ms
Jan 04 15:50:06 node3 etcd[9262]: snapshot count = 10000
Jan 04 15:50:06 node3 etcd[9262]: advertise client URLs = http://192.168.0.83:2379
Jan 04 15:50:06 node3 etcd[9262]: restarting member c4cf1eeedaaed948 in cluster 8f718bddbdb82c07 at commit index 0
Jan 04 15:50:06 node3 etcd[9262]: c4cf1eeedaaed948 state.commit 0 is out of range [1, 1]
Jan 04 15:50:06 node3 bash[9262]: panic: c4cf1eeedaaed948 state.commit 0 is out of range [1, 1]
Jan 04 15:50:06 node3 bash[9262]: goroutine 1 [running]:
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf(0xc4201a6e20, 0x100c936, 0x2b, 0xc42004c900, 0x4, 0x4)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog/pkg_logger.go:75 +0x16d
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*raft).loadState(0xc420216200, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:1349 +0x1c7
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.newRaft(0xc4201e3590, 0xc42001a1d8)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:342 +0xed8
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.RestartNode(0xc4201e3590, 0x0, 0x0)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/node.go:219 +0x43
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.restartNode(0xc42024b0e9, 0x5, 0x0, 0x0, 0x0, 0x0, 0xc4200fb900, 0x1, 0x1, 0xc4200fb800, ...)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/raft.go:448 +0x54a
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.NewServer(0xc42024b0e9, 0x5, 0x0, 0x0, 0x0, 0x0, 0xc4200fb900, 0x1, 0x1, 0xc4200fb800, ...)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/server.go:390 +0x282e
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed.StartEtcd(0xc420169b00, 0xc4201f4c00, 0x0, 0x0)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed/etcd.go:184 +0x814
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.startEtcd(0xc420169b00, 0x6, 0xfeb202, 0x6, 0x1)
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/etcd.go:186 +0x73
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.startEtcdOrProxyV2()
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/etcd.go:103 +0x14dd
Jan 04 15:50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.Main()
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/main.go:46 +0x3f
Jan 04 15:50:06 node3 bash[9262]: main.main()
Jan 04 15:50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/etcd/main.go:28 +0x20
Jan 04 15:50:06 node3 systemd[1]: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 04 15:50:06 node3 systemd[1]: Failed to start Etcd Server.
Jan 04 15:50:06 node3 systemd[1]: Unit etcd.service entered failed state.
Jan 04 15:50:06 node3 systemd[1]: etcd.service failed.
Jan 04 15:50:06 node3 systemd[1]: etcd.service holdoff time over, scheduling restart.

Configuration: /usr/local/etcd/conf/etcd.conf

name: "node3"
data-dir: /data/etcd/intranet-test.data
wal-dir: /data/etcd/intranet-test.wal.data
snapshot-count: 10000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 8589934592
listen-peer-urls: http://192.168.0.83:2380
listen-client-urls: http://192.168.0.83:2379,http://127.0.0.1:2379
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: http://192.168.0.83:2380
advertise-client-urls: http://192.168.0.83:2379
discovery:
discovery-fallback: "proxy"
discovery-proxy:
discovery-srv:
initial-cluster: node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380
initial-cluster-token: "wpbch1bi7yebkdWWfoemlqxyjbwrqt"
initial-cluster-state: existing
strict-reconfig-check: false
enable-v2: false
enable-pprof: true
proxy: "off"
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
ca-file:
cert-file:
key-file:
client-cert-auth: false
trusted-ca-file:
auto-tls: false
peer-transport-security:
ca-file:
cert-file:
key-file:
peer-client-cert-auth: false
trusted-ca-file:
auto-tls: false
debug: false
log-package-levels: etcdmain=DEBUG,etcdserver=DEBUG
log-output: default
force-new-cluster: false

@gyuho commented Jan 4, 2018

@lyddragon Thanks. Is there any easy way to reproduce that panic locally?

@OPSTime commented Jan 4, 2018

$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root --endpoints 192.168.0.83:2379 snapshot save /tmp/83.snapshot.db

$ systemctl stop etcd
$ rm -rf /data/etcd/*
$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/83.snapshot.db --data-dir /data/etcd/intranet-test.data
$ sudo systemctl start etcd

Configuration: /usr/local/etcd/conf/etcd.conf

https://github.com/coreos/etcd/blob/master/etcd.conf.yml.sample

name: "node3"
data-dir: /data/etcd/intranet-test.data
wal-dir: /data/etcd/intranet-test.wal.data
snapshot-count: 10000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 8589934592
listen-peer-urls: http://192.168.0.83:2380
listen-client-urls: http://192.168.0.83:2379,http://127.0.0.1:2379
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: http://192.168.0.83:2380
advertise-client-urls: http://192.168.0.83:2379
discovery:
discovery-fallback: "proxy"
discovery-proxy:
discovery-srv:
initial-cluster: node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380
initial-cluster-token: "wpbch1bi7yebkdWWfoemlqxyjbwrqt"
initial-cluster-state: existing
strict-reconfig-check: false
enable-v2: false
enable-pprof: true
proxy: "off"
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
ca-file:
cert-file:
key-file:
client-cert-auth: false
trusted-ca-file:
auto-tls: false
peer-transport-security:
ca-file:
cert-file:
key-file:
peer-client-cert-auth: false
trusted-ca-file:
auto-tls: false
debug: false
log-package-levels: etcdmain=DEBUG,etcdserver=DEBUG
log-output: default
force-new-cluster: false

systemd configuration:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/usr/local/etcd/
User=etcd
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/local/etcd/etcd --config-file /usr/local/etcd/conf/etcd.conf"
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

@gyuho changed the title from "/usr/local/etcd/etcdctl --user root endpoint health --cluster,need to enter twice password" to "v3.3.0-rc.0 endpoint health --cluster with auth requires password input twice" on Jan 4, 2018
@OPSTime commented Jan 4, 2018

After using snapshot restore on this member, why doesn't it rejoin the cluster?

sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/83.snapshot.db --name node3 --initial-cluster node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380 --initial-cluster-token wpbch1bi7yebkdWWfoemlqxyjbwrqt --initial-advertise-peer-urls=http://192.168.0.83:2380 --data-dir /data/etcd/intranet-test.data
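
One thing worth checking here, as a hedged sketch rather than a confirmed diagnosis: the etcd.conf above sets a dedicated wal-dir, while etcdctl snapshot restore writes the WAL under --data-dir unless --wal-dir is given, so the restored WAL may not end up where the server looks for it. A restore that also targets the configured WAL directory would look like:

# Sketch only: same flags as the command above, plus --wal-dir pointing at the
# dedicated WAL directory named in etcd.conf.
sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/83.snapshot.db --name node3 --initial-cluster node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380 --initial-cluster-token wpbch1bi7yebkdWWfoemlqxyjbwrqt --initial-advertise-peer-urls=http://192.168.0.83:2380 --data-dir /data/etcd/intranet-test.data --wal-dir /data/etcd/intranet-test.wal.data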

@OPSTime commented Jan 5, 2018

The new member can't join the cluster.

Use the snapshot to restore node1, node2, and node3:

on node1

$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user 'root:111111' get 0 'z'
k1
1
k2
2
k3
3
k4
4
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot save /tmp/s.db
$ scp /tmp/s.db node2:/tmp
$ scp /tmp/s.db node3:/tmp

on node1, node2, and node3

$ systemctl stop etcd
$ rm -rf /data/etcd/*

on node1

$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/s.db --name node1 --initial-cluster node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380 --initial-cluster-token wpbch1bi7yebkdWWfoemlqxyjbwrqt --initial-advertise-peer-urls=http://192.168.0.81:2380 --data-dir /data/etcd/intranet-test.data
$ systemctl start etcd

on node2

$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/s.db --name node2 --initial-cluster node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380 --initial-cluster-token wpbch1bi7yebkdWWfoemlqxyjbwrqt --initial-advertise-peer-urls=http://192.168.0.82:2380 --data-dir /data/etcd/intranet-test.data
$ systemctl start etcd

on node3

$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/s.db --name node3 --initial-cluster node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380 --initial-cluster-token wpbch1bi7yebkdWWfoemlqxyjbwrqt --initial-advertise-peer-urls=http://192.168.0.83:2380 --data-dir /data/etcd/intranet-test.data
$ systemctl start etcd

on node1

$ ETCDCTL_API=3 /usr/local/etcd/etcdctl member add node4 --peer-urls='http://192.168.0.80:2380'

on node4

$ cat /usr/local/etcd/conf/etcd.conf
......
initial-cluster-state: existing
......
$ systemctl start etcd
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user 'root:111111' member list
47040c1ea30ca295, started, node4, http://192.168.0.80:2380, http://192.168.0.80:2379
736948b5f5c09617, started, node1, http://192.168.0.81:2380, http://192.168.0.81:2379
8cd04f584be13fc0, started, node2, http://192.168.0.82:2380, http://192.168.0.82:2379
c4cf1eeedaaed948, started, node3, http://192.168.0.83:2380, http://192.168.0.83:2379
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user 'root:111111' endpoint status --cluster
http://192.168.0.80:2379, 47040c1ea30ca295, 3.3.0, 20 kB, false, 7, 11
http://192.168.0.81:2379, 736948b5f5c09617, 3.3.0, 20 kB, true, 7, 11
http://192.168.0.82:2379, 8cd04f584be13fc0, 3.3.0, 20 kB, false, 7, 11
http://192.168.0.83:2379, c4cf1eeedaaed948, 3.3.0, 20 kB, false, 7, 11
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user 'root:111111' endpoint health --cluster
http://192.168.0.81:2379 is healthy: successfully committed proposal: took = 3.253438ms
http://192.168.0.82:2379 is healthy: successfully committed proposal: took = 3.751166ms
http://192.168.0.83:2379 is healthy: successfully committed proposal: took = 3.486096ms
http://192.168.0.80:2379 is healthy: successfully committed proposal: took = 4.078175ms

But there is no data:

$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user 'root:111111' get 0 'z'

or

$ cat /usr/local/etcd/conf/etcd.conf
......
initial-cluster-state: existing
......
$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/s.db --name node4 --initial-cluster node1=http://192.168.0.81:2380,node2=http://192.168.0.82:2380,node3=http://192.168.0.83:2380,node4=http://192.168.0.80:2380 --initial-cluster-token wpbch1bi7yebkdWWfoemlqxyjbwrqt --initial-advertise-peer-urls=http://192.168.0.80:2380 --data-dir /data/etcd/intranet-test.data

etcd fails to start:

$ systemctl start etcd

@OPSTime commented Jan 8, 2018

--cluster with auth requires password input twice

$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root endpoint health
Password:
127.0.0.1:2379 is healthy: successfully committed proposal: took = 1.176983ms
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root endpoint health --cluster
Password:
Password:
http://10.20.6.81:2379 is healthy: successfully committed proposal: took = 1.876926ms
http://10.20.6.82:2379 is healthy: successfully committed proposal: took = 3.685239ms
http://10.20.6.83:2379 is healthy: successfully committed proposal: took = 3.491436ms
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root endpoint status
Password:
127.0.0.1:2379, 736948b5f5c09617, 3.3.0, 20 kB, true, 5, 30
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root endpoint status --cluster
Password:
Password:
http://10.20.6.81:2379, 736948b5f5c09617, 3.3.0, 20 kB, true, 5, 33
http://10.20.6.82:2379, 8cd04f584be13fc0, 3.3.0, 20 kB, false, 5, 34
http://10.20.6.83:2379, c4cf1eeedaaed948, 3.3.0, 20 kB, false, 5, 35

@hexfusion commented

--cluster with auth requires password input twice

FWIW I can reproduce with v3.3.0-rc.0 and v3.3.0-rc.1; going to explore a bit.

@hexfusion commented

But if I pass the password directly with --user root:pass, it works. Perhaps this simply masks the issue.
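
For reference, the workaround above looks like the following (a sketch; root:111111 is the placeholder password already used elsewhere in this thread):

# Supplying user:password inline skips the interactive prompt entirely, which
# hides the duplicate prompt rather than fixing it.
ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root:111111 endpoint health --cluster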

mitake added a commit to mitake/etcd that referenced this issue Jan 12, 2018
Current etcdctl endpoint health --cluster asks for the password twice if auth
is enabled. This is because the command creates two client instances:
one for checking endpoint health and another for getting cluster
members with MemberList(). The latter client doesn't need to be
authenticated because MemberList() is a public RPC. This commit makes
the latter client an unauthenticated one.

Fix etcd-io#9094
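
To illustrate the point in the commit message, a minimal sketch (the endpoints and user are reused from this thread; it assumes MemberList really is served without authentication in 3.3, as the message states):

# Listing members should work without credentials, since MemberList() is a
# public RPC per the commit message above.
ETCDCTL_API=3 /usr/local/etcd/etcdctl --endpoints http://192.168.0.81:2379 member list

# With the fix applied, only the health-check client authenticates, so a single
# password prompt should remain here.
ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root endpoint health --cluster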
@mitake commented Jan 12, 2018

@lyddragon thanks for reporting the problem. I created a PR to fix this: #9136. Could you try it?

gyuho pushed a commit that referenced this issue Jan 12, 2018

Fix #9094