Pod get with watch loses events #41
Comments
I haven't seen this. I'll try to repro later tonight. Thanks for the report @hekike.
@hekike I'm having trouble reproducing. What version is your kube api? Can you post the snippet that comes after the one you posted above? E.g., something like:
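(The snippet from the original comment isn't preserved in this thread; the sketch below is a rough reconstruction of the kind of event-handling code being asked about. It assumes the watch response is available as a readable stream named `podStream`; that name and the line-by-line parsing are illustrative assumptions, not the library's documented API.)

```js
// Hedged sketch: consume the watch response as line-delimited JSON events.
// `podStream` is assumed to be the readable stream returned by the watch
// request (e.g. a get with { qs: { watch: true } }).
const readline = require('readline');

const lines = readline.createInterface({ input: podStream });

lines.on('line', (line) => {
  const event = JSON.parse(line);
  // event.type is ADDED, MODIFIED, or DELETED; event.object is the Pod.
  console.log('watch event:', event.type, event.object.metadata.name);
});

lines.on('close', () => {
  // The server (or a proxy in between) closed the connection;
  // no further events will arrive on this stream.
  console.log('watch stream ended');
});
```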
@silasbw my only thought is that we are hitting an issue with there not being a keep-alive on the underlying connection.
Hmm, maybe. I wouldn't expect keep-alive to matter because we're only sending a single request (the watch request) on that connection. @hekike one possibility is that the kube apiserver is intentionally closing the HTTP connection according to its watch request timeout (the apiserver's `--min-request-timeout` setting). Are you using the default value for that?
@silasbw Yes, I'm using the default one and I would like to keep it open for much longer, basically forever. Shouldn't we add re-connect logic to watch? But in this case a reconnect would fire duplicate events.
@hekike we should consider re-connecting, but I'm not sure I know a good solution. If we wanted to implement re-connection logic, what would be a good general-purpose API? If we want something that says "give me all the events via a stream", I worry it would be complicated to support the "all" part. Isn't there this race?

1. We are watching over an open connection.
2. The connection drops (for example, the apiserver times it out).
3. We start re-connecting.
4. An event occurs before the new watch connection is established.
Implementing something that ensures we don't miss the event in step 4 seems challenging. If we communicated re-connection attempts, application-specific logic could deal with potentially losing events, but then the API is "give me most of the events (via a stream?) and let me know when I might have missed some". At that point, I'm not sure there's much benefit. Thoughts?
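To make the race concrete, here is a minimal sketch of a naive reconnect loop. `openWatchStream` and `handle` are hypothetical helpers, not part of kubernetes-client, and tracking `resourceVersion` this way only narrows the window rather than closing it, since the apiserver can reject an old version.

```js
// Hypothetical sketch of a naive reconnect loop. `openWatchStream(rv)` is an
// assumed helper that starts a pod watch (optionally from a resourceVersion)
// and returns a stream of parsed watch events; `handle` is the application's
// event handler.
let lastResourceVersion;

function watchPods() {
  const stream = openWatchStream(lastResourceVersion);

  stream.on('data', (event) => {
    lastResourceVersion = event.object.metadata.resourceVersion;
    handle(event);
  });

  stream.on('end', () => {
    // Anything that happens between here and the next successful watch
    // (step 4 above) is only recovered if the apiserver still accepts
    // lastResourceVersion; otherwise those events are silently missed.
    setTimeout(watchPods, 1000);
  });
}

watchPods();
```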
Yes, you are right, it's not an easy one. But the current "watch until you can" behavior is also not very good. It's misleading. Would it be crazy to create two watch streams in the background (re-connect them frequently, but staggered so one is always up) and always switch to the live one? Something like the concept of blue-green deployments? It's still not bulletproof, but it would probably solve the connection timeout issues.
I like the blue-green approach but I think it would be challenging to get right. How would we synchronize the two streams? With resourceVersions? Can you provide a more complete example of what you're implementing? I wonder if there's a higher-level abstraction we could provide that would be useful for your implementation. For example, if you'd like to cache the state of objects locally, or be notified when objects change, we could write an abstraction on top of watching that does automatic reconnects and provides a different API (e.g., automatically updates a cache to read from, or emits an event whenever an object changes).
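As a hedged sketch of one possible shape for that kind of abstraction: `listPods` and `watchPods` below are hypothetical helpers (a list request and a watch stream), not the library's actual API, and the wrapper re-lists after every disconnect so callers never deal with reconnects directly.

```js
const { EventEmitter } = require('events');

// Informer-style wrapper sketch: list once to seed a cache, then watch and
// keep the cache up to date, re-listing after every disconnect.
// `listPods()` is assumed to resolve to a PodList; `watchPods(rv)` is assumed
// to return a stream of parsed watch events starting at resourceVersion rv.
class PodCache extends EventEmitter {
  constructor() {
    super();
    this.pods = new Map(); // pod name -> Pod object
  }

  async start() {
    const list = await listPods();
    this.pods.clear();
    for (const pod of list.items) this.pods.set(pod.metadata.name, pod);
    this.emit('synced', this.pods);

    const stream = watchPods(list.metadata.resourceVersion);
    stream.on('data', (event) => {
      const name = event.object.metadata.name;
      if (event.type === 'DELETED') this.pods.delete(name);
      else this.pods.set(name, event.object);
      this.emit('change', event);
    });
    // Any disconnect triggers a full re-list, which papers over missed events.
    stream.on('end', () => setTimeout(() => this.start(), 1000));
  }
}
```

Usage would then be along the lines of `cache.on('change', event => ...)` plus reading current state from `cache.pods`, which is roughly the shape of the "informer" helpers in other Kubernetes clients.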
You are right, these are hard questions. I'm still thinking about a proper solution. In my use case I solved it by re-fetching the pod endpoint periodically and maintaining the state locally, but that's not really the point of the watch API. For me it was enough to know the running pods in the namespace instead of knowing the exact changes. We can figure out a complex solution for this, but it's also up to you what the scope of this library should be. Maybe it would be enough to add a notice to the README that watch doesn't live forever and should be used carefully. What do you think?
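For reference, a minimal sketch of that kind of polling workaround, assuming a hypothetical `listPods()` helper that resolves to the namespace's pod list; the 30-second interval and the "Running" filter are illustrative.

```js
// Sketch of the polling workaround described above: re-fetch the pod list
// periodically instead of relying on a long-lived watch.
let runningPods = new Set();

setInterval(async () => {
  try {
    const list = await listPods(); // hypothetical helper
    runningPods = new Set(
      list.items
        .filter((pod) => pod.status.phase === 'Running')
        .map((pod) => pod.metadata.name)
    );
  } catch (err) {
    console.error('failed to refresh pod list', err);
  }
}, 30 * 1000); // refresh every 30 seconds
```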
Good suggestion about noting that in the README.md. Another thing we could do is add some examples: links to real applications using kubernetes-client, and some toy examples in this repo. A toy example illustrating useful ways to leverage watching (and handling disconnects) could be helpful.
I use watch in the following format:
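(The exact snippet isn't preserved in this thread; the sketch below is a rough stand-in that opens the same kind of watch request directly against the apiserver, e.g. through `kubectl proxy` on localhost:8001, rather than the library call that was originally posted.)

```js
const http = require('http');

// Stand-in sketch: open a pod watch against the Kubernetes API. The URL and
// namespace are placeholders; the original issue used the kubernetes-client
// library's get-with-watch call instead of raw http.
const req = http.get(
  'http://localhost:8001/api/v1/namespaces/default/pods?watch=true',
  (res) => {
    res.on('data', (chunk) => {
      // Line-delimited JSON watch events (ADDED / MODIFIED / DELETED).
      process.stdout.write(chunk);
    });
    res.on('end', () => {
      // This is the point where the watch "stops listening to events".
      console.log('watch connection closed by the server');
    });
  }
);

req.on('error', (err) => console.error('watch request failed', err));
```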
After a while it stops listening to events.
Do you have an idea?
Thanks!