v3.3.0-rc.0 snapshot restore panic on raft #9096

Closed
gyuho opened this issue Jan 4, 2018 · 8 comments

gyuho commented Jan 4, 2018

#9094 (comment)
#9094 (comment)
#9094 (comment)

$ rm -rf /data/etcd/intranet-test.*
$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/83.snapshot.db --data-dir /data/etcd/intranet-test.data
$ sudo systemctl start etcd
$ ETCDCTL_API=3 /usr/local/etcd/etcdctl --user root  --endpoints 192.168.0.83:2379 snapshot save /tmp/83.snapshot.db

$ systemctl stop etcd
$ rm -rf /data/etcd/*
$ sudo -u etcd ETCDCTL_API=3 /usr/local/etcd/etcdctl snapshot restore /tmp/83.snapshot.db --data-dir /data/etcd/intranet-test.data
$ sudo systemctl start etcd
50:05 node3 systemd[1]: Starting Etcd Server...
50:05 node3 etcd[9237]: Loading server configuration from "/usr/local/etcd/conf/etcd.conf"
50:05 node3 etcd[9237]: etcd Version: 3.3.0
50:05 node3 etcd[9237]: Git SHA: f7a395f
50:05 node3 etcd[9237]: Go Version: go1.9.2
50:05 node3 etcd[9237]: Go OS/Arch: linux/amd64
50:05 node3 etcd[9237]: setting maximum number of CPUs to 2, total number of available CPUs is 2
50:05 node3 etcd[9237]: the server is already initialized as member before, starting as etcd member...
50:05 node3 etcd[9237]: listening for peers on http://192.168.0.83:2380
50:05 node3 etcd[9237]: pprof is enabled under /debug/pprof
50:05 node3 etcd[9237]: listening for client requests on 192.168.0.83:2379
50:05 node3 etcd[9237]: listening for client requests on 127.0.0.1:2379
50:05 node3 etcd[9237]: name = node3
50:05 node3 etcd[9237]: data dir = /data/etcd/intranet-test.data
50:05 node3 etcd[9237]: member dir = /data/etcd/intranet-test.data/member
50:05 node3 etcd[9237]: dedicated WAL dir = /data/etcd/intranet-test.wal.data
50:05 node3 etcd[9237]: heartbeat = 100ms
50:05 node3 etcd[9237]: election = 1000ms
50:05 node3 etcd[9237]: snapshot count = 10000
50:05 node3 etcd[9237]: advertise client URLs = http://192.168.0.83:2379
50:05 node3 etcd[9237]: starting member c4cf1eeedaaed948 in cluster 8f718bddbdb82c07
50:05 node3 etcd[9237]: c4cf1eeedaaed948 became follower at term 0
50:05 node3 etcd[9237]: newRaft c4cf1eeedaaed948 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
50:05 node3 etcd[9237]: c4cf1eeedaaed948 became follower at term 1
50:05 node3 etcd[9237]: restore compact to 6
50:05 node3 etcd[9237]: simple token is not cryptographically signed
50:05 node3 etcd[9237]: started HTTP pipelining with peer 736948b5f5c09617
50:05 node3 etcd[9237]: started HTTP pipelining with peer 8cd04f584be13fc0
50:05 node3 etcd[9237]: starting peer 736948b5f5c09617...
50:05 node3 etcd[9237]: started HTTP pipelining with peer 736948b5f5c09617
50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (writer)
50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (writer)
50:05 node3 etcd[9237]: started peer 736948b5f5c09617
50:05 node3 etcd[9237]: added peer 736948b5f5c09617
50:05 node3 etcd[9237]: starting peer 8cd04f584be13fc0...
50:05 node3 etcd[9237]: started HTTP pipelining with peer 8cd04f584be13fc0
50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (stream MsgApp v2 reader)
50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (writer)
50:05 node3 etcd[9237]: started streaming with peer 736948b5f5c09617 (stream Message reader)
50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (writer)
50:05 node3 etcd[9237]: started peer 8cd04f584be13fc0
50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (stream MsgApp v2 reader)
50:05 node3 etcd[9237]: started streaming with peer 8cd04f584be13fc0 (stream Message reader)
50:05 node3 etcd[9237]: added peer 8cd04f584be13fc0
50:05 node3 etcd[9237]: starting server... [version: 3.3.0, cluster version: to_be_decided]
50:05 node3 etcd[9237]: peer 736948b5f5c09617 became active
50:05 node3 etcd[9237]: established a TCP streaming connection with peer 736948b5f5c09617 (stream Message reader)
50:05 node3 etcd[9237]: established a TCP streaming connection with peer 736948b5f5c09617 (stream MsgApp v2 reader)
50:05 node3 etcd[9237]: c4cf1eeedaaed948 [term: 1] received a MsgHeartbeat message with higher term from 736948b5f5c09617 [term: 81]
50:05 node3 etcd[9237]: c4cf1eeedaaed948 became follower at term 81
50:05 node3 etcd[9237]: tocommit(288025) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
50:05 node3 bash[9237]: panic: tocommit(288025) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
50:05 node3 systemd[1]: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
50:05 node3 systemd[1]: Failed to start Etcd Server.
50:05 node3 systemd[1]: Unit etcd.service entered failed state.
50:05 node3 systemd[1]: etcd.service failed.
50:06 node3 systemd[1]: etcd.service holdoff time over, scheduling restart.
50:06 node3 systemd[1]: Starting Etcd Server...
50:06 node3 etcd[9253]: Loading server configuration from "/usr/local/etcd/conf/etcd.conf"
50:06 node3 etcd[9253]: etcd Version: 3.3.0
50:06 node3 etcd[9253]: Git SHA: f7a395f
50:06 node3 etcd[9253]: Go Version: go1.9.2
50:06 node3 etcd[9253]: Go OS/Arch: linux/amd64
50:06 node3 etcd[9253]: setting maximum number of CPUs to 2, total number of available CPUs is 2
50:06 node3 etcd[9253]: the server is already initialized as member before, starting as etcd member...
50:06 node3 etcd[9253]: listening for peers on http://192.168.0.83:2380
50:06 node3 etcd[9253]: pprof is enabled under /debug/pprof
50:06 node3 etcd[9253]: listening for client requests on 192.168.0.83:2379
50:06 node3 etcd[9253]: listening for client requests on 127.0.0.1:2379
50:06 node3 etcd[9253]: recovered store from snapshot at index 1
50:06 node3 etcd[9253]: restore compact to 6
50:06 node3 etcd[9253]: name = node3
50:06 node3 etcd[9253]: data dir = /data/etcd/intranet-test.data
50:06 node3 etcd[9253]: member dir = /data/etcd/intranet-test.data/member
50:06 node3 systemd[1]: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
50:06 node3 systemd[1]: Failed to start Etcd Server.
50:06 node3 systemd[1]: Unit etcd.service entered failed state.
50:06 node3 systemd[1]: etcd.service failed.
50:06 node3 systemd[1]: etcd.service holdoff time over, scheduling restart.
50:06 node3 systemd[1]: Starting Etcd Server...
50:06 node3 etcd[9262]: Loading server configuration from "/usr/local/etcd/conf/etcd.conf"
50:06 node3 etcd[9262]: etcd Version: 3.3.0
50:06 node3 etcd[9262]: Git SHA: f7a395f
50:06 node3 etcd[9262]: Go Version: go1.9.2
50:06 node3 etcd[9262]: Go OS/Arch: linux/amd64
50:06 node3 etcd[9262]: setting maximum number of CPUs to 2, total number of available CPUs is 2
50:06 node3 etcd[9262]: the server is already initialized as member before, starting as etcd member...
50:06 node3 etcd[9262]: listening for peers on http://192.168.0.83:2380
50:06 node3 etcd[9262]: pprof is enabled under /debug/pprof
50:06 node3 etcd[9262]: listening for client requests on 192.168.0.83:2379
50:06 node3 etcd[9262]: listening for client requests on 127.0.0.1:2379
50:06 node3 etcd[9262]: recovered store from snapshot at index 1
50:06 node3 etcd[9262]: restore compact to 6
50:06 node3 etcd[9262]: name = node3
50:06 node3 etcd[9262]: data dir = /data/etcd/intranet-test.data
50:06 node3 etcd[9262]: member dir = /data/etcd/intranet-test.data/member
50:06 node3 etcd[9262]: dedicated WAL dir = /data/etcd/intranet-test.wal.data
50:06 node3 etcd[9262]: heartbeat = 100ms
50:06 node3 etcd[9262]: election = 1000ms
50:06 node3 etcd[9262]: snapshot count = 10000
50:06 node3 etcd[9262]: advertise client URLs = http://192.168.0.83:2379
50:06 node3 etcd[9262]: restarting member c4cf1eeedaaed948 in cluster 8f718bddbdb82c07 at commit index 0
50:06 node3 etcd[9262]: c4cf1eeedaaed948 state.commit 0 is out of range [1, 1]
50:06 node3 bash[9262]: panic: c4cf1eeedaaed948 state.commit 0 is out of range [1, 1]
50:06 node3 bash[9262]: goroutine 1 [running]:
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf(0xc4201a6e20, 0x100c936, 0x2b, 0xc42004c900, 0x4, 0x4)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog/pkg_logger.go:75 +0x16d
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*raft).loadState(0xc420216200, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:1349 +0x1c7
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.newRaft(0xc4201e3590, 0xc42001a1d8)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:342 +0xed8
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.RestartNode(0xc4201e3590, 0x0, 0x0)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/node.go:219 +0x43
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.restartNode(0xc42024b0e9, 0x5, 0x0, 0x0, 0x0, 0x0, 0xc4200fb900, 0x1, 0x1, 0xc4200fb800, ...)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/raft.go:448 +0x54a
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.NewServer(0xc42024b0e9, 0x5, 0x0, 0x0, 0x0, 0x0, 0xc4200fb900, 0x1, 0x1, 0xc4200fb800, ...)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/server.go:390 +0x282e
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed.StartEtcd(0xc420169b00, 0xc4201f4c00, 0x0, 0x0)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed/etcd.go:184 +0x814
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.startEtcd(0xc420169b00, 0x6, 0xfeb202, 0x6, 0x1)
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/etcd.go:186 +0x73
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.startEtcdOrProxyV2()
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/etcd.go:103 +0x14dd
50:06 node3 bash[9262]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.Main()
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/main.go:46 +0x3f
50:06 node3 bash[9262]: main.main()
50:06 node3 bash[9262]: /home/gyuho/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/etcd/main.go:28 +0x20
50:06 node3 systemd[1]: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
50:06 node3 systemd[1]: Failed to start Etcd Server.
50:06 node3 systemd[1]: Unit etcd.service entered failed state.
50:06 node3 systemd[1]: etcd.service failed.
50:06 node3 systemd[1]: etcd.service holdoff time over, scheduling restart.

gyuho commented Jan 4, 2018

@lyddragon Did you upgrade from a previous version, or did this happen on a fresh v3.3.0-rc.0 cluster? I haven't been able to reproduce it yet.


OPSTime commented Jan 5, 2018

No upgrade, but I used an old snapshot to restore.


gyuho commented Jan 5, 2018

old snapshot to restore.

The snapshot was from a v3.3.0-rc.0 server, and you downloaded it using the v3.3.0-rc.0 etcdctl, right?

gyuho self-assigned this Jan 6, 2018

gyuho commented Jan 6, 2018

Finally reproduced; I will try to fix it shortly.
Auth was not related. It seems snapshot restore has some bugs.

@lyddragon Basically, we are doing the following (sketched in shell after the list):

  1. set up 3-node cluster (A, B, C)
  2. get snapshot of A
  3. shut down A
  4. restore A from the snapshot
  5. restart A with the same configuration and --initial-cluster-state existing
  6. panic
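
A rough sketch of those steps, assuming a hypothetical three-member cluster; host names, member names, and paths below are placeholders, not values from the report above:

# 1-2. healthy 3-node cluster (A, B, C); take a snapshot from A
ETCDCTL_API=3 etcdctl --endpoints http://hostA:2379 snapshot save /tmp/a.snapshot.db

# 3. shut down member A
systemctl stop etcd

# 4. restore only A's data directory from the snapshot
ETCDCTL_API=3 etcdctl snapshot restore /tmp/a.snapshot.db \
  --name nodeA \
  --initial-cluster nodeA=http://hostA:2380,nodeB=http://hostB:2380,nodeC=http://hostC:2380 \
  --initial-advertise-peer-urls http://hostA:2380 \
  --data-dir /var/lib/etcd-restored

# 5. restart A with its previous configuration, pointing at the restored
#    data dir and using --initial-cluster-state existing
systemctl start etcd

# 6. A panics as soon as it receives a heartbeat from the current leader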

Just curious, what is your use case for this?

If you want to destroy member A completely, I would remove it from the cluster and then add it back.


OPSTime commented Jan 7, 2018

The member add failed; see my other issue.


OPSTime commented Jan 7, 2018

"v3.3.0-rc.0 endpoint health --cluster with auth requires password input twice #9094"


gyuho commented Jan 7, 2018

@xiang90 I double-checked our code and don't think it's a bug.

Reproducible e2e test case:

// TestCtlV3SnapshotRestoreMultiCluster ensures that restoring one member from snapshot
// does not panic when rejoining the cluster (fix https://github.com/coreos/etcd/issues/9096).
func TestCtlV3SnapshotRestoreMultiCluster(t *testing.T) {
	testCtl(t, snapshotRestoreMultiClusterTest, withCfg(configNoTLS), withQuorum())
}

func snapshotRestoreMultiClusterTest(cx ctlCtx) {
	if err := ctlV3Put(cx, "foo", "bar", ""); err != nil {
		cx.t.Fatalf("ctlV3Put error (%v)", err)
	}

	fpath := filepath.Join(os.TempDir(), "test.snapshot")
	defer os.RemoveAll(fpath)
	if err := ctlV3SnapshotSave(cx, fpath); err != nil {
		cx.t.Fatalf("snapshotTest ctlV3SnapshotSave error (%v)", err)
	}

	// shut down first member, restore, restart from snapshot
	if err := cx.epc.procs[0].Close(); err != nil {
		cx.t.Fatalf("failed to close (%v)", err)
	}

	ep, ok := cx.epc.procs[0].(*etcdServerProcess)
	if !ok {
		cx.t.Fatalf("expected *etcdServerProcess")
	}

	newDataDir := filepath.Join(os.TempDir(), "snap.etcd")
	os.RemoveAll(newDataDir)
	defer os.RemoveAll(newDataDir)
	ep.cfg.dataDirPath = newDataDir

	if err := spawnWithExpect(append(
		cx.PrefixArgs(),
		"snapshot", "restore", fpath,
		"--name", ep.cfg.name,
		"--initial-cluster", ep.cfg.initialCluster,
		"--initial-cluster-token", ep.cfg.initialToken,
		"--initial-advertise-peer-urls", ep.cfg.purl.String(),
		"--data-dir", newDataDir),
		"membership: added member"); err != nil {
		cx.t.Fatalf("failed to restore (%v)", err)
	}
	for i := range ep.cfg.args {
		if ep.cfg.args[i] == "--data-dir" {
			ep.cfg.args[i+1] = newDataDir
			break
		}
	}
	ep.cfg.args = append(ep.cfg.args, "--initial-cluster-state", "existing")

	var err error
	ep.proc, err = spawnCmd(append([]string{ep.cfg.execPath}, ep.cfg.args...))
	if err != nil {
		cx.t.Fatalf("failed to spawn etcd (%v)", err)
	}

	// will error "read /dev/ptmx: input/output error" if process panicked
	if err = ep.waitReady(); err != nil {
		cx.t.Fatalf("failed to start from snapshot restore (%v)", err)
	}
}
../bin/etcd-4461: 2018-01-07 09:19:50.258428 I | raft: ca50e9357181d758 became follower at term 2
../bin/etcd-4461: 2018-01-07 09:19:50.258446 C | raft: tocommit(9) is out of range [lastIndex(3)]. Was the raft log corrupted, truncated, or lost?
../bin/etcd-4402: 2018-01-07 09:19:50.258503 I | rafthttp: established a TCP streaming connection with peer ca50e9357181d758 (stream MsgApp v2 reader)
../bin/etcd-4461: panic: tocommit(9) is out of range [lastIndex(3)]. Was the raft log corrupted, truncated, or lost?
../bin/etcd-4461: 
../bin/etcd-4461: goroutine 116 [running]:
../bin/etcd-4461: github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf(0xc420161fa0, 0x102290f, 0x5d, 0xc421070140, 0x2, 0x2)
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog/pkg_logger.go:75 +0x16d
../bin/etcd-4461: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*raftLog).commitTo(0xc4200e00e0, 0x9)
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/log.go:191 +0x15c
../bin/etcd-4461: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*raft).handleHeartbeat(0xc4201fe200, 0x8, 0xca50e9357181d758, 0x5ac8aa22f1eb4c8f, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:1195 +0x54
../bin/etcd-4461: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.stepFollower(0xc4201fe200, 0x8, 0xca50e9357181d758, 0x5ac8aa22f1eb4c8f, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:1141 +0x439
../bin/etcd-4461: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*raft).Step(0xc4201fe200, 0x8, 0xca50e9357181d758, 0x5ac8aa22f1eb4c8f, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/raft.go:869 +0x1465
../bin/etcd-4461: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*node).run(0xc4201e2540, 0xc4201fe200)
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/node.go:323 +0x113e
../bin/etcd-4461: created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.RestartNode
../bin/etcd-4461: 	/home/gyuho/go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/node.go:223 +0x321

3.2, 3.3, and master all panic, and it's expected.

The restored member joins the cluster with commit index 3 (equal to the number of members in the cluster), because the snapshot file carries no information about the revision or any other raft fields from the previous cluster. So if the other peers have advanced their indexes and the newly joined member does not become the leader, the leader asks it to commit a future index (tocommit(9) vs. lastIndex(3) in the trace above), and it panics.

I think we just need to document this clearly: snapshot restore only supports creating a fresh cluster, not adding a restored member to an existing cluster.


gyuho commented Jan 8, 2018

@lyddragon Closing because this is expected.

snapshot restore creates new etcd data directories; all members should restore using the same snapshot. Restoring overwrites some snapshot metadata (specifically, the member ID and cluster ID); the member loses its former identity. This metadata overwrite prevents the new member from inadvertently joining an existing cluster. Therefore, in order to start a cluster from a snapshot, the restore must start a new logical cluster.

https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster

Snapshot restore creates a fresh cluster, so a restored member cannot join the existing cluster unless all the other members are also restored from the same snapshot file.
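
For reference, a minimal sketch of the supported workflow, with placeholder host names, member names, token, and paths (not values from this report): every member of the new cluster is restored from the same snapshot file, then all members are started together as a fresh logical cluster.

# run on every member host, substituting that member's own --name and peer URL,
# using the same snapshot file and the same new --initial-cluster-token
ETCDCTL_API=3 etcdctl snapshot restore /tmp/backup.snapshot.db \
  --name node1 \
  --initial-cluster node1=http://host1:2380,node2=http://host2:2380,node3=http://host3:2380 \
  --initial-cluster-token restored-cluster \
  --initial-advertise-peer-urls http://host1:2380 \
  --data-dir /var/lib/etcd-restored

# then start etcd on every member with --initial-cluster-state new, pointing at
# the restored --data-dir; a single restored member cannot rejoin the old cluster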

gyuho closed this as completed Jan 8, 2018