Recently our Netty backends have fully switched to reactive-streams-powered processing (see #3337).
Even raw requests and responses are now represented as Subscribers and Publishers, which had to be implemented manually for the Future and Loom backends. The implementation is rather naive:
Request Body
FileWriterSubscriber requests exactly one next chunk after writing the current chunk.
Similarly for InputStreamBody, where we load data using our SimpleSubscriber before exposing it to the logic as an InputStream.
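The "write, then request exactly one more" pattern can be sketched as follows. This is a minimal illustration against the JDK Flow API, not tapir's actual FileWriterSubscriber; the class name and demand strategy here are assumptions based on the description above.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;

// Hypothetical sketch: a subscriber that writes each chunk to a file and
// only then requests the next one, keeping at most one chunk in flight.
class OneAtATimeFileSubscriber implements Flow.Subscriber<byte[]> {
    private final OutputStream out;
    private Flow.Subscription subscription;
    final CompletableFuture<Void> done = new CompletableFuture<>();

    OneAtATimeFileSubscriber(Path target) throws IOException {
        this.out = Files.newOutputStream(target);
    }

    @Override public void onSubscribe(Flow.Subscription s) {
        this.subscription = s;
        s.request(1); // ask for the first chunk only
    }

    @Override public void onNext(byte[] chunk) {
        try {
            out.write(chunk);          // write the current chunk...
            subscription.request(1);   // ...then request exactly one more
        } catch (IOException e) {
            subscription.cancel();
            done.completeExceptionally(e);
        }
    }

    @Override public void onError(Throwable t) {
        done.completeExceptionally(t);
    }

    @Override public void onComplete() {
        try {
            out.close();
            done.complete(null);
        } catch (IOException e) {
            done.completeExceptionally(e);
        }
    }
}
```

Note that this one-chunk-at-a-time demand is exactly what the benchmark should stress: it never batches requests, so every chunk pays a full request/response round trip through the pipeline.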
Response Body
FileRangePublisher reads in chunks and manually serializes requests using an AtomicBoolean
InputStreamPublisher works similarly; it also uses repeated blocking {} calls to read from an InputStream
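The AtomicBoolean-serialized publisher pattern described above can be sketched like this. Again, this is a simplified illustration using the JDK Flow API under assumed names, not tapir's actual FileRangePublisher or InputStreamPublisher; in particular, a production drain loop would re-check demand after releasing the flag to avoid missed signals under concurrent requests.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a publisher that reads an InputStream in fixed-size
// chunks, using an AtomicBoolean to ensure only one "drain" runs at a time.
class ChunkedInputStreamPublisher implements Flow.Publisher<byte[]> {
    private final InputStream in;
    private final int chunkSize;

    ChunkedInputStreamPublisher(InputStream in, int chunkSize) {
        this.in = in;
        this.chunkSize = chunkSize;
    }

    @Override public void subscribe(Flow.Subscriber<? super byte[]> sub) {
        sub.onSubscribe(new Flow.Subscription() {
            private final AtomicLong demand = new AtomicLong();
            private final AtomicBoolean draining = new AtomicBoolean(); // serializes reads
            private volatile boolean terminated;

            @Override public void request(long n) {
                demand.addAndGet(n);
                drain();
            }

            @Override public void cancel() { terminated = true; }

            private void drain() {
                // Only one caller at a time may read and emit; re-entrant
                // calls (e.g. request() from within onNext) just bump demand.
                if (!draining.compareAndSet(false, true)) return;
                try {
                    while (!terminated && demand.get() > 0) {
                        byte[] buf = new byte[chunkSize];
                        int read = in.read(buf); // blocking read
                        if (read < 0) {
                            terminated = true;
                            sub.onComplete();
                            return;
                        }
                        demand.decrementAndGet();
                        sub.onNext(Arrays.copyOf(buf, read));
                    }
                } catch (IOException e) {
                    terminated = true;
                    sub.onError(e);
                } finally {
                    draining.set(false);
                }
            }
        });
    }
}
```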
We'd like to measure how fast these implementations are, possibly compared to the previous implementation (1.9.1) and to the streaming-based Netty backends like cats and zio, which work in a different way:
Publisher from the request is converted into a fs2.Stream or ZStream using library code
The obtained stream is then written to a file, also using library code
This issue has been created as a bookmark after noticing that these elements of our new backends may impact performance. Let's start by specifying exactly what we want to measure and compare.