MaxContentLength support (streaming) #3319
Conversation
ffb2afd to e61e396 (compare)
@Mergifyio update
✅ Branch has been successfully updated
}

private def nettyRequestBytes(serverRequest: ServerRequest): F[Array[Byte]] = serverRequest.underlying match {
  case req: FullHttpRequest   => monad.delay(ByteBufUtil.getBytes(req.content()))
- case _: StreamedHttpRequest => toStream(serverRequest).compile.to(Chunk).map(_.toArray[Byte])
+ case _: StreamedHttpRequest => toStream(serverRequest, maxBytes = None).compile.to(Chunk).map(_.toArray[Byte]) // TODO
TODO? :)
It seems there's a half-done maxContentLength implementation for Netty, in …
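As an aside (not part of the diff): a rough sketch of how the maxBytes parameter threaded through toStream could be enforced on an fs2 byte stream. The exception type below is a hypothetical placeholder, and the helper name is illustrative only.

```scala
import cats.effect.IO
import fs2.Stream

// Hypothetical exception type (the name actually used by the PR may differ).
final case class StreamMaxLengthExceededException(maxBytes: Long)
    extends RuntimeException(s"Request body exceeded $maxBytes bytes")

// Count bytes chunk by chunk; fail the stream as soon as the running total
// crosses maxBytes, otherwise re-emit the chunk unchanged.
def limitBytes(in: Stream[IO, Byte], maxBytes: Long): Stream[IO, Byte] =
  in.chunks
    .mapAccumulate(0L)((seen, chunk) => (seen + chunk.size, chunk))
    .flatMap { case (seen, chunk) =>
      if (seen > maxBytes) Stream.raiseError[IO](StreamMaxLengthExceededException(maxBytes))
      else Stream.chunk(chunk)
    }
```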
}

private def nettyRequestBytes(serverRequest: ServerRequest): RIO[Env, Array[Byte]] = serverRequest.underlying match {
  case req: FullHttpRequest   => ZIO.succeed(ByteBufUtil.getBytes(req.content()))
- case _: StreamedHttpRequest => toStream(serverRequest).run(ZSink.collectAll[Byte]).map(_.toArray)
+ case _: StreamedHttpRequest => toStream(serverRequest, maxBytes = None).run(ZSink.collectAll[Byte]).map(_.toArray) // TODO
that's for the next PRs?
Yes, coming soon :) #3337
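For the ZIO variant above, an analogous rough sketch (again not from the PR; the exception type and helper name are hypothetical placeholders) of how maxBytes could be applied to a ZStream of bytes:

```scala
import zio.stream.ZStream

// Hypothetical exception type (the name actually used by the PR may differ).
final case class StreamMaxLengthExceededException(maxBytes: Long)
    extends RuntimeException(s"Request body exceeded $maxBytes bytes")

// Track the running byte count per chunk; fail the stream once it exceeds
// maxBytes, otherwise pass the chunk through unchanged.
def limitBytes(in: ZStream[Any, Throwable, Byte], maxBytes: Long): ZStream[Any, Throwable, Byte] =
  in.chunks
    .mapAccum(0L)((seen, chunk) => (seen + chunk.size, (seen + chunk.size, chunk)))
    .flatMap { case (seen, chunk) =>
      if (seen > maxBytes) ZStream.fail(StreamMaxLengthExceededException(maxBytes))
      else ZStream.fromChunk(chunk)
    }
```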
Nice (& tedious) work, thanks :)
That current netty-specific maxContentLength support is actually an interesting problem to discuss. If we remove it entirely in favor of the new per-endpoint solution, we will lose two features: 1) the global scope, and 2) limiting the response body.
Ideally, we want both: a global setting in …, plus the per-endpoint one. But we should definitely have one solution, so let's keep the "new" one only. I can't see limiting the response size as a useful feature (you can do it in the business logic anyway).
Started as a solution to #3056 for Netty, but addresses more backends in general.
This PR introduces the possibility to add a MaxContentLength attribute to any endpoint. When set, decodingRequestBody in supported backends prevents loading too much into memory, and fails if the limit is exceeded. As a result, DefaultExceptionHandler returns HTTP 413 Payload Too Large.

Streaming only
This is only a part of the full feature. The PR covers support for streaming request bodies, excluding: …

In a follow-up PR, I'll add: endpoint.maxContentLength(x) instead of setting an attribute.
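A minimal sketch of what attaching the limit to an endpoint might look like, assuming the AttributeKey-based attribute API and a MaxContentLength wrapper as described above; the wrapper's definition and the exact way the key is constructed are assumptions, not the PR's final API:

```scala
import sttp.tapir._

// Wrapper for the limit as described above (its exact definition in the PR is assumed).
final case class MaxContentLength(value: Long)

// Attach the attribute; a supporting backend reads it while decoding the request
// body and replies with HTTP 413 Payload Too Large once the limit is exceeded.
val upload: PublicEndpoint[Array[Byte], Unit, String, Any] =
  endpoint.post
    .in("upload")
    .in(byteArrayBody)
    .out(stringBody)
    .attribute(AttributeKey[MaxContentLength], MaxContentLength(1024L * 1024L))
```

Once the follow-up mentioned above lands, endpoint.maxContentLength(x) would replace the manual attribute call.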