clientv3/integration: add TestBalancerUnderServerShutdownWatch #8758
Conversation
/cc @jpbetz can you give this a review?
```go
// TestBalancerUnderServerFailureWatch expects that the watch client
// switches its endpoints when the member of the pinned endpoint fails.
func TestBalancerUnderServerFailureWatch(t *testing.T) {
```
UnderServerShutdown?
"failure" is too generic: it could mean a network partition, server shutdown, etc.
```go
	lead := clus.WaitLeader(t)

	// close unnecessary clients
	clus.Client(lead).Close()
```
just start with no clients. create one client to do the put.
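A minimal sketch of that suggestion, assuming the integration package's ClusterConfig exposes a SkipCreatingClient option (an assumption, not shown in this PR):

```go
// start the cluster with no pre-created clients, then create only the
// clients the test actually needs (SkipCreatingClient is an assumed option)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{
	Size:               3,
	SkipCreatingClient: true,
})
defer clus.Terminate(t)
```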
```go
	cli.SetEndpoints(eps...)

	wch := cli.Watch(context.Background(), "foo", clientv3.WithCreatedNotify())
	<-wch
```
select on this. we need to have a timeout.
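A sketch of the suggested select; the 10-second bound is illustrative, not from the PR:

```go
select {
case <-wch:
case <-time.After(10 * time.Second): // illustrative bound
	t.Fatal("took too long to receive watch-created notification")
}
```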
```go
	defer cli.Close()
	cli.SetEndpoints(eps...)

	wch := cli.Watch(context.Background(), "foo", clientv3.WithCreatedNotify())
```
make both watch key and value consts.
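For example (watchKey and watchVal are hypothetical names):

```go
// hypothetical names for the shared test key/value
const (
	watchKey = "foo"
	watchVal = "bar"
)
```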
```go
	// switch to others when eps[lead] fails
	for ev := range wch {
		received := false
		for _, e := range ev.Events {
```
why do we need this loop when we only put one key?
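One reading of the suggestion, as a sketch: since only one key is put, a single receive with a direct check would do.

```go
// sketch: only one key is put, so expect exactly one event on it
ev := <-wch
if len(ev.Events) != 1 || string(ev.Events[0].Kv.Key) != "foo" {
	t.Fatalf("expected one event on %q, got %+v", "foo", ev.Events)
}
```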
```go
	// writes to eps[lead+1]
	for {
		_, err = clus.Client((lead+1)%3).Put(context.Background(), "foo", "bar")
```
give a timeout here instead of using context.Background.
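A sketch of the bounded Put; the 5-second timeout is illustrative:

```go
// bound each Put attempt instead of using context.Background
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
_, err = clus.Client((lead+1)%3).Put(ctx, "foo", "bar")
cancel()
```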
```go
	select {
	case <-donec:
	case <-time.After(5 * time.Second): // enough time for leader election, balancer switch
```
how long is the election timeout?
10ms for the integration package.
i mean the election timeout for the etcdserver. i want to understand why we wait 5 seconds here, not 10s or 20s.
```go
		t.Fatal(err)
	}
	defer cli1.Close()
	cli1.SetEndpoints(eps...) // add all eps
```
// add all eps to the list, so that when the original pinned one fails the client can switch to other available eps.
```go
	lead := clus.WaitLeader(t)

	// pin eps[lead]
	cli1, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[lead]}})
```
rename cli1 to watchCli.
```go
	clus.Members[lead].Terminate(t)

	// writes to eps[lead+1]
	cli2, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[(lead+1)%3]}})
```
rename cli2 to putCli.
lgtm after resolving all the comments. leaving to @jpbetz for final approval.
```go
	// writes to eps[lead+1]
	cli2, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[(lead+1)%3]}})
	if err != nil {
		t.Fatal(err)
```
add context to the error.
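For example (the message wording is illustrative):

```go
if err != nil {
	t.Fatalf("failed to create client to eps[%d] %q: %v", (lead+1)%3, eps[(lead+1)%3], err)
}
```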
```go
		if err == rpctypes.ErrTimeout || err == rpctypes.ErrTimeoutDueToLeaderFail {
			continue
		}
		t.Fatal(err)
```
add context to the error. only logging the error message itself is not super helpful.
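A sketch of what that could look like (wording is illustrative):

```go
// say what the test was doing when the unexpected error occurred
t.Fatalf("Put to eps[%d] failed with unexpected error: %v", (lead+1)%3, err)
```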
Force-pushed from c3e9d2e to 41042ea.
Current Watch integration tests haven't covered the balancer switch behavior under server failures. Signed-off-by: Gyu-Ho Lee <[email protected]>
/lgtm Test logic looks correct. I don't have any concerns. Thanks!
SemaphoreCI is unrelated (gRPC issue again). So merging.
Reference: #8711