
GRPC client stream onMessage and onClose executes on different threads #15187

Closed
Andro999b opened this issue Feb 19, 2021 · 7 comments · Fixed by #15212
Labels
area/grpc gRPC kind/bug Something isn't working
Comments

@Andro999b
Contributor

Andro999b commented Feb 19, 2021

When calling a client gRPC service method from blocking code (no matter whether the client is blocking or async), the stream close is executed on a worker thread, but the on-message event is executed on the event-loop thread. As a result, the client stream is usually closed before it is able to read any message.

Expected behavior
onMessage and onClose are executed in the correct order.

To Reproduce
https://drive.google.com/file/d/16xdsSuSV4V9pQAJt_HDYoppgqsAGOGoX/view?usp=sharing

Steps to reproduce the behavior:
The archive contains a single unit test that reproduces the incorrect behavior.
For more details you will need to debug the class io.grpc.stub.ClientCalls.BlockingResponseStream.QueuingListener to see the threads.
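The race described above can be modeled without gRPC at all. Below is a minimal, library-free Java sketch (class and variable names are hypothetical, not from the reproducer): onMessage is re-dispatched to a single-threaded "event loop" executor while onClose runs directly on the caller thread, so under load the close is observed before the message.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderingRace {

    static List<String> run() throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        List<String> events = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(2);

        // Keep the "event loop" busy for a moment, like a loaded Vert.x context.
        eventLoop.submit(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        });

        // onMessage is re-dispatched to the event loop and has to wait...
        eventLoop.submit(() -> { events.add("onMessage"); done.countDown(); });

        // ...but onClose runs right here, on the caller thread.
        events.add("onClose");
        done.countDown();

        done.await(5, TimeUnit.SECONDS);
        eventLoop.shutdown();
        return events;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // [onClose, onMessage] -- the close overtakes the message
    }
}
```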

Environment (please complete the following information):

  • Output of uname -a or ver: Linux andrii-PC 5.4.0-65-generic #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Output of java -version: openjdk version "11.0.9" 2020-10-20
  • GraalVM version (if different from Java): OpenJDK Runtime Environment GraalVM CE 20.3.0 (build 11.0.9+10-jvmci-20.3-b06)
  • Quarkus version or git rev: 1.11.3
  • Build tool (ie. output of mvnw --version or gradlew --version): maven 3.6.3 or gradle 6.5.1
@Andro999b Andro999b added the kind/bug Something isn't working label Feb 19, 2021
@quarkus-bot

quarkus-bot bot commented Feb 19, 2021

/cc @cescoffier, @michalszynkiewicz

@cescoffier
Member

@Andro999b I just looked at the reproducer, and it seems to work (I bumped the version to 999-SNAPSHOT).

I've tried with and without the @Blocking annotation on the service.

Did I miss something?

@Andro999b
Contributor Author

Andro999b commented Feb 19, 2021

@cescoffier Please look at the comment that I left in the test. It's very tricky to reproduce because it relies on thread execution order.
Basically, to see it you will need to debug lines io.grpc.stub.ClientCalls:694 (onClose) and io.grpc.stub.ClientCalls:688 (onMessage) to see that they execute on different threads. On my machine onClose usually executes first.

@Andro999b
Contributor Author

@cescoffier I also rechecked this with 999-SNAPSHOT; the issue is still valid.

@Andro999b
Contributor Author

Andro999b commented Feb 19, 2021

@cescoffier I have played a bit with my reproducer and found out that it does not reproduce the bug. But I think I have actually found the problem:
In IOThreadClientInterceptor there is a ForwardingClientCall that overrides onMessage and executes it inside the Vert.x context (the event-loop thread), but IOThreadClientInterceptor doesn't override onClose, which means onClose is executed on the caller thread. So if at some point there is enough workload on the event-loop thread, onMessage may be delayed long enough to be executed right after onClose. That is why I observe the issue in my app, which runs a lot of additional stuff, but not in the test. I think IOThreadClientInterceptor should also override the onClose method and run it on the event-loop thread.
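A minimal, library-free sketch of the proposed fix (all names here are hypothetical; the real change would live in Quarkus' IOThreadClientInterceptor, wrapping gRPC's ClientCall.Listener): wrap the listener so that both onMessage and onClose are re-dispatched onto the same single-threaded context. Because a single-threaded executor runs tasks in submission order, onClose can no longer overtake an earlier onMessage.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderedListener {

    /** Minimal stand-in for io.grpc.ClientCall.Listener. */
    interface Listener {
        void onMessage(String message);
        void onClose(String status);
    }

    /**
     * Wraps a listener so every callback runs on the given executor.
     * Dispatching BOTH callbacks (not just onMessage) is the key point:
     * a single-threaded executor preserves submission order.
     */
    static Listener onContext(Executor context, Listener delegate) {
        return new Listener() {
            @Override public void onMessage(String message) {
                context.execute(() -> delegate.onMessage(message));
            }
            @Override public void onClose(String status) {
                // Without this override, onClose would run on the caller
                // thread and could be observed before the queued onMessage.
                context.execute(() -> delegate.onClose(status));
            }
        };
    }

    static List<String> run() throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        List<String> events = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(2);

        Listener listener = onContext(eventLoop, new Listener() {
            @Override public void onMessage(String m) { events.add("onMessage"); done.countDown(); }
            @Override public void onClose(String s) { events.add("onClose"); done.countDown(); }
        });

        // Simulate a busy event loop, then deliver a message followed by close.
        eventLoop.execute(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        });
        listener.onMessage("hello");
        listener.onClose("OK");

        done.await(5, TimeUnit.SECONDS);
        eventLoop.shutdown();
        return events;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // [onMessage, onClose] regardless of load
    }
}
```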

@cescoffier
Member

Thanks! Fancy a PR? It looks like you pinpointed the issue.

@Andro999b
Contributor Author

Will try to make a fix tomorrow.
