
Bug: Active Streams Silently Killed after connectionTimeout #752

Open

lapumb opened this issue Dec 11, 2024 · 3 comments

@lapumb commented Dec 11, 2024

Any active streams are silently killed, without notifying their listeners, once the first gRPC call is made after the ClientChannel's connectionTimeout has elapsed.

This feels like a bug to me.

gRPC Package Version Info:

  grpc:
    dependency: "direct main"
    description:
      name: grpc
      sha256: "5b99b7a420937d4361ece68b798c9af8e04b5bc128a7859f2a4be87427694813"
      url: "https://pub.dev"
    source: hosted
    version: "4.0.1"

Platform: Linux

Repro steps

  1. Listen to a server --> client stream
  2. Wait the ChannelOptions.connectionTimeout amount of time
  3. Make a new / different gRPC call
  4. The server detects the previously-active stream as 'cancelled'
  5. The client continues to listen to the previously-active stream and never gets notified of cancellation (onDone)

A minimal sketch of these steps is below; to reproduce in full, run the examples in https://github.com/lapumb/grpc-stream-example.
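Roughly, the repro looks like this. `ExampleClient`, `subscribe`, `ping`, and the request types are hypothetical stand-ins for the generated stub and RPCs in the linked repo; the exact names there may differ.

```dart
import 'package:grpc/grpc.dart';

Future<void> main() async {
  final channel = ClientChannel(
    'localhost',
    port: 50051,
    options: const ChannelOptions(
      credentials: ChannelCredentials.insecure(),
      // Keep this short so the repro doesn't take long.
      connectionTimeout: Duration(seconds: 30),
    ),
  );
  final client = ExampleClient(channel);

  // 1. Listen to a server --> client stream.
  client.subscribe(SubscribeRequest()).listen(
        (msg) => print('Received: $msg'),
        onError: (e) => print('Stream error: $e'), // never fires
        onDone: () => print('Stream done'),        // never fires either
      );

  // 2. Wait longer than ChannelOptions.connectionTimeout.
  await Future.delayed(const Duration(seconds: 35));

  // 3. Make a new / different gRPC call. The server now reports the old
  //    stream as cancelled, but the listener above is never notified.
  await client.ping(PingRequest());
}
```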

Expected result: If the server detects a stream cancellation, the client (listener) should be notified via a stream error or the onDone event.

Actual result: The server ends the stream and the client continues to listen indefinitely.

Details

It appears that streams that are active when the connection timeout occurs stay in a working, data-receiving state from the client's perspective until a new gRPC call is made after the timeout.

To get around this, I have had to implement heartbeats in my streams so my client can detect when it has actually stopped receiving data altogether; a sketch of that watchdog is below.
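A minimal sketch of such a watchdog, assuming the server emits a message or heartbeat at least once per `timeout` interval (`withWatchdog` is a hypothetical helper, not part of this package):

```dart
import 'dart:async';

/// Wraps [source] and raises a TimeoutException if no event arrives within
/// [timeout]. This relies on the server sending periodic heartbeat messages
/// on the same stream, so silence can be distinguished from a dead channel.
Stream<T> withWatchdog<T>(Stream<T> source, Duration timeout) {
  final controller = StreamController<T>();
  late final StreamSubscription<T> sub;
  Timer? watchdog;

  void resetWatchdog() {
    watchdog?.cancel();
    watchdog = Timer(timeout, () {
      // No data (or heartbeat) for the whole interval: assume the stream
      // was silently killed and surface it to the listener.
      controller.addError(TimeoutException('No data within $timeout'));
      controller.close();
      sub.cancel();
    });
  }

  sub = source.listen(
    (event) {
      resetWatchdog();
      controller.add(event);
    },
    onError: controller.addError,
    onDone: () {
      watchdog?.cancel();
      controller.close();
    },
  );

  controller.onCancel = () {
    watchdog?.cancel();
    sub.cancel();
  };

  resetWatchdog();
  return controller.stream;
}
```

Usage: listen to `withWatchdog(client.subscribe(request), const Duration(minutes: 2))` instead of the raw response stream, so the listener at least gets a TimeoutException when the channel dies silently.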

@BenJacques

I'm experiencing this issue as well. It's causing enough of a headache that I'm considering ripping streams out of my app altogether and hosting a client and a server on both ends of the connection for bidirectional communication.

@chazh commented Dec 16, 2024

I spent some time looking into this based on the code provided and noticed that the stream does not actually report back any information about the connection being lost or disconnected.

I added an onConnectionStateChanged listener to the ClientChannel that just prints out the state; a sketch of that listener is below, followed by what was reported in the logs.
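A minimal sketch of that listener, assuming `channel` is the ClientChannel created in the repro:

```dart
// Log every connection state transition on the channel.
channel.onConnectionStateChanged.listen((state) {
  print('Client Connection state changed: $state');
});
```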

Client Connection state changed: ConnectionState.connecting
Client Connection state changed: ConnectionState.ready
Received message: 0 at 2024-12-13 16:35:23.335838
--------------------------------------------------------------------------------
It has been 0:00:30.000000, the connection timeout should have taken effect!
Try pinging the server (by typing 's') to see if the connection is still active.
--------------------------------------------------------------------------------
Sending ping...
Client Connection state changed: ConnectionState.idle
Client Connection state changed: ConnectionState.connecting
Client Connection state changed: ConnectionState.ready
No messages received in the last 2 minutes
No messages received in the last 2 minutes

I would have expected the connection state to change to a disconnecting or failure state (e.g. ConnectionState.transientFailure) at this point.

@lapumb (Author) commented Dec 18, 2024

Maybe a better way to phrase my problem: is this a limitation of gRPC itself, or a bug in this package? I am very curious whether other language implementations behave the same way.

@lapumb lapumb changed the title Active Streams versus connectionTimeout Bug: Active Streams Silently Killed after connectionTimeout Jan 7, 2025