1.17.0 breaking pubsub #25
Another related issue: googleapis/python-pubsub#74
I was seeing the same issue in all our production apps, and had to revert to v1.16.0 for them to start working again.
Most likely caused by 2b103b6, but possibly 14f1f34. @plamut Could you try reproducing the issue at each of these commits to track down which one is the problem?
@busunkim96 I checked and 2b103b6 indeed seems to be the culprit. Installing that version of
Lowering priority as discussed in the weekly meeting. Pub/Sub is pinning the api-core version as a workaround.
Wondering if this is addressed in 1.19?
@kunduanqb Not yet, see #30 |
Closes #25. This PR adds the ability to disable automatically pre-fetching the first item of a stream returned by `*-Stream` gRPC callables. This hook will be used in Pub/Sub to fix the [stalled stream issue](googleapis/python-pubsub#93), while also not affecting Firestore, since the default behavior is preserved.

I realize the fix is far from ideal, but it's the least ugly among the approaches I tried, e.g. somehow passing the flag through `ResumableBidiRpc` (it's a messy rabbit hole).

On the Pub/Sub side, monkeypatching the generated SubscriberClient will be needed, but it's a (relatively) clean one-liner:

```patch
diff --git google/cloud/pubsub_v1/gapic/subscriber_client.py google/cloud/pubsub_v1/gapic/subscriber_client.py
index e98a686..1d6c058 100644
--- google/cloud/pubsub_v1/gapic/subscriber_client.py
+++ google/cloud/pubsub_v1/gapic/subscriber_client.py
@@ -1169,6 +1169,8 @@ class SubscriberClient(object):
             default_timeout=self._method_configs["StreamingPull"].timeout,
             client_info=self._client_info,
         )
+        # TODO: explain this monkeypatch!
+        self.transport.streaming_pull._prefetch_first_result_ = False

         return self._inner_api_calls["streaming_pull"](
             requests, retry=retry, timeout=timeout, metadata=metadata
```

If/when we merge this, we should also release it, and then we can add `!= 1.17.0` to the `google-api-core` version pin in PubSub.

### PR checklist
- [x] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/python-api-core/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
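To illustrate why pre-fetching stalls Pub/Sub's StreamingPull but is harmless for Firestore, here is a minimal sketch (not google-api-core's actual implementation) of a stream wrapper that optionally pulls the first response eagerly. The class name `PrefetchingStreamIterator` and the `prefetch_first` flag are hypothetical, standing in for the `_prefetch_first_result_` hook this PR adds:

```python
class PrefetchingStreamIterator:
    """Wraps a response iterator, optionally pre-fetching the first item.

    Pre-fetching surfaces server-side errors immediately at call time, which
    some clients rely on. But on a bidirectional stream like StreamingPull,
    the server sends nothing until it receives a request, so an eager
    `next()` here blocks forever -- the stalled-stream symptom.
    """

    def __init__(self, wrapped, prefetch_first=True):
        self._wrapped = wrapped
        self._stored_first_result = None
        self._has_stored_first = False
        if prefetch_first:
            # Eagerly pull the first item; errors are raised here rather
            # than later during iteration.
            self._stored_first_result = next(self._wrapped)
            self._has_stored_first = True

    def __iter__(self):
        return self

    def __next__(self):
        if self._has_stored_first:
            # Hand back the item we pre-fetched in __init__.
            self._has_stored_first = False
            return self._stored_first_result
        return next(self._wrapped)


# Either way, callers see the same sequence of items:
eager = PrefetchingStreamIterator(iter([1, 2, 3]), prefetch_first=True)
assert list(eager) == [1, 2, 3]

# With prefetch disabled, nothing is pulled from the underlying stream
# until the caller iterates -- the behavior Pub/Sub needs.
lazy = PrefetchingStreamIterator(iter([1, 2, 3]), prefetch_first=False)
assert list(lazy) == [1, 2, 3]
```

The key difference is only *when* the first `next()` happens, which is why the default (prefetch on) can be preserved for Firestore while Pub/Sub opts out.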
I'm seeing a customer report this for 1.22.1. Does this look familiar, or is it an unrelated thing? cc @plamut Their dependencies:
Error:
@pradn While the status code is the same (UNKNOWN), the reason seems different here (the original issue's reason was "Exception iterating requests!"). Can you check whether "stream removed" is something that can be triggered by the server when it decides to terminate a stream for some reason? I don't recall seeing this before, nor any part of the hand-written code that would "remove" the stream.
I don't think "stream removed" errors come from the server side. I also see this issue in other libraries, like in Node. There seems to be no resolution yet.
So, has there been any resolution to the "stream removed" error? I'm getting it on a long-running Windows 10 Pub/Sub subscriber in a while(true) loop. I'm using the latest core, 1.26.1; do I need to go back to 1.17?
I am running into the same issue as Marcus. The core is at 1.26.2 |
Hi all,
Just a heads-up: 1.17.0 appears to break the current pubsub client. Lots of time was spent debugging this; hopefully it will help some other poor bastards out there:
Pinning to 1.16.0 in requirements.txt fixes the issue.
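For reference, the workaround described above amounts to a one-line pin in `requirements.txt` (a sketch using the version reported in this thread; once a fixed release exists, an exclusion pin like the `!= 1.17.0` mentioned later would be less restrictive):

```
# requirements.txt -- pin below the release that broke streaming pull
google-api-core==1.16.0
```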