
Clarification on WatchChan behavior. #8188

Closed
SuhasAnand opened this issue Jun 29, 2017 · 19 comments

@SuhasAnand

From the clientv3 godoc it's not clear whether the watch response will have a non-nil error in every failure scenario.

For example, consider the following code:

  rch := client.Watch(
    watchCtx, "/your/prefix", clientv3.WithPrefix(),
  )
  for wresp := range rch {
    err := wresp.Err()
    if err != nil {
      return err
    }
    for _, ev := range wresp.Events {
      err := doSomething(ev)
      if err != nil {
        return err
      }
    }
  }
  fmt.Println("I should never be printed")

In the godoc I see:

// Canceled is used to indicate watch failure.
// If the watch failed and the stream was about to close, before the channel is closed,
// the channel sends a final response that has Canceled set to true with a non-nil Err().

So is it fair to expect that, in the above code, I should never see the line fmt.Println("I should never be printed") being executed?

@xiang90
Contributor

xiang90 commented Jun 29, 2017

if you do not cancel the watch, yes.

@xiang90 xiang90 closed this as completed Jun 29, 2017
@SuhasAnand
Author

@xiang90 Could you explain more, i.e. when can a watch get canceled (let's assume the user will never cancel it)? What happens if the leader dies, or if the client loses its connection to the etcd cluster: would that cause the watch to get canceled? In other words, what can cause the watch to get canceled other than a user-invoked cancel?

I am asking because I am seeing this issue very frequently at a scale of ~2-4M constant K/V updates.

@xiang90 xiang90 reopened this Jun 29, 2017
@xiang90
Contributor

xiang90 commented Jun 29, 2017

I am seeing this issue very frequently at a scale of ~2-4M constant K/V updates.

what issue?

@SuhasAnand
Author

SuhasAnand commented Jun 29, 2017

what issue?

The watch channel closes without any error, without cancel being invoked, and while the etcd cluster is fine.

@xiang90
Contributor

xiang90 commented Jun 29, 2017

@SuhasAnand Can you somehow reproduce it? If so, please provide a script to us and we can help you on it.

@SuhasAnand
Author

SuhasAnand commented Jun 29, 2017

Can you somehow reproduce it? If so, please provide a script to us and we can help you on it.

Sure.

@xiang90 Meanwhile, could you please help answer the following?

Could you explain more, i.e. when can a watch get canceled (let's assume the user will never cancel it)? What happens if the leader dies, or if the client loses its connection to the etcd cluster: would that cause the watch to get canceled? In other words, what can cause the watch to get canceled other than a user-invoked cancel?

@SuhasAnand
Author

@xiang90 This is what I have observed:

// Canceled is used to indicate watch failure.
// If the watch failed and the stream was about to close, before the channel is closed,
// the channel sends a final response that has Canceled set to true with a non-nil Err().
Canceled bool

Based on the above godoc I inferred that, just before the WatchChan is closed, Canceled is always set; hence I would only need to check for Canceled, and only if it is set would there be a non-nil error. In reality that is not so: the WatchChan can be closed without Canceled being set, so IMO we should change the godoc to reflect this, because in my case the WatchChan did fail but Canceled was not set. (ISSUE 1)

  rch := client.Watch(
    watchCtx, "/your/prefix", clientv3.WithPrefix(),
  )
  for wresp := range rch {
    if wresp.Canceled {
      err := wresp.Err()
      if err != nil {
        return err
      }
    }
    for _, ev := range wresp.Events {
      err := doSomething(ev)
      if err != nil {
        return err
      }
    }
  }
  fmt.Println("I should never be printed")
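
For reference, a minimal sketch of a more defensive loop given the behavior above: check Err() on every response rather than only when Canceled is set, and treat the channel closing without a prior error as a failure as well (errWatchClosed is a hypothetical sentinel, doSomething is the same placeholder as above):

  rch := client.Watch(
    watchCtx, "/your/prefix", clientv3.WithPrefix(),
  )
  for wresp := range rch {
    // Err() is non-nil on compaction, cancellation, etc., even if Canceled
    // is not set on this particular response.
    if err := wresp.Err(); err != nil {
      return err
    }
    for _, ev := range wresp.Events {
      if err := doSomething(ev); err != nil {
        return err
      }
    }
  }
  // The channel closed without a final error response; do not assume success.
  return errWatchClosed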

On further analysis, I saw that in a scaled run where there are several (~2M) KV pairs associated with the same lease, upon lease expiry grpc-proxy (there are several issues with using the grpc proxy; more on that later) / etcd will send around ~2M DEL watch response notifications, which will close the channel (with Canceled not being set) and with Err set to: error="rpc error: code = ResourceExhausted desc = grpc: received message larger than max (140103271 vs. 4194304)"
(ISSUE 2) IMO this is a bug, i.e. TTL expiry on ~2M keys should not kill the watcher.

@xiang90
Contributor

xiang90 commented Jul 3, 2017

error="rpc error: code = ResourceExhausted desc = grpc: received message larger than max (140103271 vs. 4194304)"

This is a bug. gRPC probably has an upper bound on message size; etcd does not try to respect that.
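
As a client-side workaround at the gRPC layer, the per-call receive limit can be raised when dialing. A minimal sketch, assuming the connection to etcd is established directly via grpc-go (the 16 MiB figure is only illustrative):

  // Raise the gRPC receive limit for all calls on this connection.
  conn, err := grpc.Dial(
    "127.0.0.1:2379",
    grpc.WithInsecure(),
    grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16*1024*1024)),
  )
  if err != nil {
    return err
  }
  defer conn.Close()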

@heyitsanthony
Contributor

This also blocks #7624 if implemented using a keyspace-wide watch. The watch messages will need a fragmentation flag (probably with semantics like limit and more from Range) to indicate that the events for a single revision are split over multiple messages.

@mangoslicer
Contributor

@heyitsanthony

I'm interested in working on #7624. If no one has started working on this blocking issue, I can implement your suggested solution. Would the implementation involve:

  1. Adding a limit field to clientv3.watchRequest and a public Limit field to pb.WatchRequest
  2. Adding a More field to mvcc.WatchResponse and to pb.WatchResponse
  3. Setting some reasonable limit based on the gRPC max
  4. Modifying the watchableStore's notify method to send all the events in segments according to the limit (see the sketch below).

Also, a limit field that is settable by users might not be that useful, since there isn't going to be a case where the user wants to walk away with only the first N events. All the events must be sent over, so the limit seems more like an implementation detail rather than an option. The only scenario where I can imagine a user-settable limit being useful is if, for some reason, a user only wants N messages to be sent at a time.
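
A rough sketch of the segmentation idea from point 4, splitting one large event slice into limit-sized responses; the types and the chunkEvents helper are hypothetical stand-ins for the proposed More field, not the existing API:

  // Illustrative only: hypothetical stand-ins for pb.WatchResponse with More.
  type event struct{ Key, Value string }
  type response struct {
    Events []event
    More   bool // proposed field: more events for this revision follow
  }

  // chunkEvents splits one large event slice into limit-sized responses,
  // setting More on every chunk except the last.
  func chunkEvents(evs []event, limit int) []response {
    var out []response
    for start := 0; start < len(evs); start += limit {
      end := start + limit
      if end > len(evs) {
        end = len(evs)
      }
      out = append(out, response{
        Events: evs[start:end],
        More:   end < len(evs),
      })
    }
    return out
  }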

@heyitsanthony
Contributor

@mangoslicer #7624 is a separate issue from this; it shouldn't need any modifications to the RPC messages.

Thinking about this issue more: limit semantics with some max of n events would be something different from what's needed for the fix. Probably Limit would just drop events beyond n and set More, but never deliver those extra events.

The fix here would involve a fragment boolean flag when creating the watch that will enable splitting up the large revisions (indicated with fragment set on the watch response when fragmented). This will also need client-side work to piece the results together.
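
An illustrative sketch of the client-side piecing-together this would require; the Fragment field and types are hypothetical placeholders for the proposed protocol, not the existing API:

  // Illustrative only: hypothetical response shape for the proposed fragment flag.
  type fragResponse struct {
    Events   []string // stand-in for the real event type
    Fragment bool     // true when more fragments of the same revision follow
  }

  // reassemble merges fragmented responses back into single logical responses.
  func reassemble(in <-chan fragResponse) <-chan fragResponse {
    out := make(chan fragResponse)
    go func() {
      defer close(out)
      var buf []string
      for r := range in {
        buf = append(buf, r.Events...)
        if r.Fragment {
          continue // wait for the remaining pieces of this revision
        }
        out <- fragResponse{Events: buf}
        buf = nil
      }
    }()
    return out
  }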

@eparis
Contributor

eparis commented Jul 13, 2017

https://github.com/grpc/grpc-go/pull/1165/files#diff-e1550a73f5d25064c8b586ec68d81a64R107

A change merged into grpc quite recently (May 22) caused things to start failing for me. I can no longer do etcdctl get /prefix --prefix --keys-only with a large number of keys returned.

The old value was just "slightly" larger: https://github.com/grpc/grpc-go/pull/1165/files#diff-e1550a73f5d25064c8b586ec68d81a64L105

@heyitsanthony
Contributor

@eparis can you please open another issue for this? The fix for etcdctl get shouldn't need any API changes, since there's already pagination in the range RPC. Thanks!
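
For reference, a minimal sketch of paginating a large prefix with the existing range RPC in clientv3, fetching pageLimit keys per request (the prefix and page size are illustrative):

  key := "/prefix"
  end := clientv3.GetPrefixRangeEnd(key)
  const pageLimit = 1000
  for {
    resp, err := client.Get(ctx, key,
      clientv3.WithRange(end),
      clientv3.WithLimit(pageLimit),
      clientv3.WithSort(clientv3.SortByKey, clientv3.SortAscend),
      clientv3.WithKeysOnly(),
    )
    if err != nil {
      return err
    }
    for _, kv := range resp.Kvs {
      fmt.Println(string(kv.Key))
    }
    if !resp.More {
      break
    }
    // Resume just after the last key returned in this page.
    key = string(resp.Kvs[len(resp.Kvs)-1].Key) + "\x00"
  }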

@heyitsanthony heyitsanthony added this to the unplanned milestone Jul 18, 2017
@heyitsanthony heyitsanthony self-assigned this Jul 18, 2017
@mangoslicer
Contributor

@heyitsanthony

Did you already start working on the fix? Or is this issue still up for grabs?

@heyitsanthony
Contributor

heyitsanthony commented Jul 19, 2017

@mangoslicer haven't started; have at it

@mangoslicer
Contributor

@heyitsanthony I put up a PR, can you take a look?

@dmokhov

dmokhov commented Oct 27, 2017

@xiang90 Do I understand correctly that the WatchChan is closed when the corresponding watchGrpcStream is closed (e.g. due to ErrNoLeader), and that to resume watching on a key I have to Close() the clientv3.Client, create another one with client := clientv3.New(), and call client.Watch()?

Calling client.Watch() with the same parameters (namely, the same context) without re-creating the client will result in picking the closed watchGrpcStream from the watcher's streams map, since the map key will be the same (ctxKey := fmt.Sprintf("%v", ctx)).

Source: this part of clientv3 code:
https://github.com/coreos/etcd/blob/v3.2.9/clientv3/watch.go#L257-L288

So, am I right?

@dmokhov

dmokhov commented Nov 9, 2017

No, the code won't pick the closed watchGrpcStream, because (*watcher).closeStream deletes it from the streams map.

Thank you.
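
For anyone hitting this later, a minimal sketch of resuming a watch on the same client after the channel closes, assuming the last observed revision is tracked so the new watch can pick up where the old one stopped:

  var lastRev int64
  for {
    // A fresh context per attempt avoids reusing a canceled one.
    wctx, cancel := context.WithCancel(context.Background())
    opts := []clientv3.OpOption{clientv3.WithPrefix()}
    if lastRev > 0 {
      // Resume just after the last revision we processed.
      opts = append(opts, clientv3.WithRev(lastRev+1))
    }
    rch := client.Watch(wctx, "/your/prefix", opts...)
    for wresp := range rch {
      if err := wresp.Err(); err != nil {
        break // e.g. compaction; handle or log as appropriate
      }
      for _, ev := range wresp.Events {
        lastRev = ev.Kv.ModRevision
        // process ev ...
      }
    }
    cancel()
    // The channel closed; loop around and re-establish the watch.
  }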

@gyuho
Contributor

gyuho commented Feb 7, 2018

error="rpc error: code = ResourceExhausted desc = grpc: received message larger than max (140103271 vs. 4194304)"

This is fixed in 3.3. The default limit was 4 MiB on the gRPC side; it is now unlimited by default (configurable).

Also, we clarified the watch behavior in the godoc: https://godoc.org/github.com/coreos/etcd/clientv3#Watcher.

Moving proxy watch fragment discussion to a separate issue.
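
For completeness, a minimal sketch of how a 3.3+ client can bound this itself via clientv3.Config (the 64 MiB value is only an example):

  cli, err := clientv3.New(clientv3.Config{
    Endpoints:   []string{"127.0.0.1:2379"},
    DialTimeout: 5 * time.Second,
    // Raise the client-side receive limit above the old 4 MiB gRPC default.
    MaxCallRecvMsgSize: 64 * 1024 * 1024,
  })
  if err != nil {
    return err
  }
  defer cli.Close()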
