HTTPResponseCompressor's API doesn't support selective compression #26
CC @kjessup
Yeah, the current API is definitely limited. Unfortunately, that limitation basically comes from the fact that there doesn't appear to be a really wonderful set of API choices that work nicely. The most obvious alternative API design is to follow the pattern established in the protocol upgrader, which is that you'd have a constructor like this:

```swift
public init(_ shouldCompress: (HTTPRequestHead, HTTPResponseHead) -> Bool)
```

This would, for each response head, ask whether the response itself should be compressed. This is a strict improvement over the current model, but it has the disadvantage of requiring the user to do some awkward state coordination if they can't determine whether they should compress based purely on the header block. It's unquestionably an improvement over the current API, but it's not necessarily a great fit.

More generally, this handler is not a good fit for many projects. It has a bunch of other limitations:
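To make the shape of that proposal concrete, here's a minimal sketch of the predicate-based design. The `RequestHead`/`ResponseHead` structs below are hypothetical stand-ins for NIO's `HTTPRequestHead`/`HTTPResponseHead`, reduced to just the fields a predicate might inspect; `SelectiveCompressor` is an illustrative name, not a real NIO type.

```swift
// Hypothetical stand-ins for NIO's HTTPRequestHead / HTTPResponseHead,
// reduced to just the fields a predicate might inspect.
struct RequestHead {
    var uri: String
    var headers: [String: String]
}

struct ResponseHead {
    var headers: [String: String]
}

// Sketch of the proposed predicate-based API: the handler would invoke
// `shouldCompress` once per response head and skip compression on `false`.
struct SelectiveCompressor {
    let shouldCompress: (RequestHead, ResponseHead) -> Bool

    init(_ shouldCompress: @escaping (RequestHead, ResponseHead) -> Bool) {
        self.shouldCompress = shouldCompress
    }
}

// Example predicate: only compress textual content types.
let compressor = SelectiveCompressor { _, response in
    (response.headers["content-type"] ?? "").hasPrefix("text/")
}

let request = RequestHead(uri: "/index.html", headers: ["accept-encoding": "gzip"])
let html = ResponseHead(headers: ["content-type": "text/html"])
let png = ResponseHead(headers: ["content-type": "image/png"])

print(compressor.shouldCompress(request, html)) // true
print(compressor.shouldCompress(request, png))  // false
```

The awkwardness described above shows up when the predicate can't decide from the two header blocks alone, e.g. when the decision depends on the body size, which isn't known yet for chunked responses.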
While we can design a better API for this handler, it's not clear to me that we should. In general for web frameworks I'd say that compression of resources is something the framework itself should be handling, and that it should not be part of the channel pipeline. Frameworks should support offline compression of static resources into multiple compressed formats such that those compressed responses can be served from disk, and should allow compression of dynamic resources based on meta-knowledge of the resource itself. This handler is not so much a general solution to that problem as it is an example, and something that can be used in situations where you are confident that all resources can reasonably be compressed (e.g. you only ship JSON responses).
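The "offline compression of static resources" idea can be sketched like this: compress each asset ahead of time into several variants, then at request time pick a variant based on `Accept-Encoding`. Everything here (function name, file-extension convention, the simplified header parsing that ignores q-values) is illustrative, not a real NIO or framework API.

```swift
import Foundation

// Pick a precompressed on-disk variant of `path` based on the client's
// Accept-Encoding header. `available` models the files present on disk.
func selectVariant(path: String, acceptEncoding: String,
                   available: Set<String>) -> (path: String, encoding: String?) {
    // Very simplified Accept-Encoding parsing: split on commas and
    // drop any ";q=..." parameters. Real parsing must honour q-values.
    let offered = acceptEncoding
        .split(separator: ",")
        .map { $0.split(separator: ";")[0].trimmingCharacters(in: .whitespaces) }

    // Prefer brotli over gzip when the client accepts it and a
    // precompressed file actually exists.
    for encoding in ["br", "gzip"] where offered.contains(encoding) {
        let ext = encoding == "br" ? "br" : "gz"
        let candidate = "\(path).\(ext)"
        if available.contains(candidate) {
            return (candidate, encoding)
        }
    }
    return (path, nil) // fall back to the uncompressed original
}

let onDisk: Set<String> = ["/static/app.js", "/static/app.js.gz", "/static/app.js.br"]
let chosen = selectVariant(path: "/static/app.js",
                           acceptEncoding: "gzip, deflate, br",
                           available: onDisk)
print(chosen.path)            // "/static/app.js.br"
print(chosen.encoding ?? "-") // "br"
```

The compression work happens at build/deploy time, so serving the response costs no CPU beyond a dictionary lookup, which is exactly why this belongs in the framework rather than the pipeline.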
I don't have a great idea for an API either. Instead of a … with … and if they want to only selectively compress … or something along those lines. So I agree we probably want to take this out of the pipeline.
I think that raises a follow-on question for me: should we just be providing an encapsulation for compression in general? At this stage we're basically just wrapping zlib, so should we lean into that and just provide a useful API for wrapping zlib?
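One possible shape for such a standalone compression API, sketched here purely as a discussion aid: a small streaming protocol that a zlib-backed type would conform to. The `Compressor` protocol and `IdentityCompressor` below are hypothetical; the identity implementation stands in for a real zlib-backed one so the sketch stays self-contained.

```swift
// A minimal streaming-compression surface: feed chunks, then flush.
// A real implementation would wrap zlib's deflate stream.
protocol Compressor {
    mutating func compress(_ input: [UInt8]) -> [UInt8]  // feed a chunk
    mutating func finish() -> [UInt8]                    // flush trailing bytes
}

// Placeholder conformance: passes bytes through unchanged, standing in
// for a zlib-backed compressor in this sketch.
struct IdentityCompressor: Compressor {
    mutating func compress(_ input: [UInt8]) -> [UInt8] { input }
    mutating func finish() -> [UInt8] { [] }
}

// A framework could then drive compression itself, outside the pipeline:
var c: Compressor = IdentityCompressor()
var out = c.compress(Array("hello ".utf8))
out += c.compress(Array("world".utf8))
out += c.finish()
print(String(decoding: out, as: UTF8.self)) // "hello world"
```

With something like this, the channel handler becomes a thin user of the API rather than the only way to get at the compression machinery.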
I haven't been following this closely, but IMO compressing and decompressing channels are useful for many verbose HTTP APIs, e.g. WebDAV or anything JSON. This is separate to compressing (and storing/caching) large resources like say PDFs or images. |
@helje5 Definitely agreed, we're mostly just trying to work out what the set of primitives NIO provides should be, vs what frameworks should be handling themselves. I see the questions as follows:
My current set of answers is:
But I don't hold any of these positions strongly, which is why I'm interested in kicking the idea around here and seeing what others think. I'm happy to be swayed on any of those answers.
The more you think about it, the harder it gets. There are also the API issues around TE vs Content-Encoding (and the effects of the latter, e.g. potentially different ETags, ranges, etc.). My feeling is that you would definitely want compression handlers in NIO. But to make it work, I think you would need to be able to create stacked pipelines? So that you can create a new one which is just active for the current request and add the compression handler if necessary. So I wonder whether the API discussion is more an issue with the current "pipeline API" (e.g. in Node you can just freely nest streams).
All of this is why I don't think any complex implementation of this belongs in a channel handler. It just requires too much knowledge that is present at the framework level and that would have to be communicated to NIO, all of this just so that NIO can call … It seems more generally useful to have it be easy for frameworks to call …
This is definitely an approach that can be taken. It wouldn't be hard to write a channel handler that creates a small sub-channel for each request/response. This is different to what we do in HTTP/2, because HTTP/2 fans out one channel to many sub-channels rather than one to one, but it's not hard to do. But it seems to me that what we're doing here is middleware. Almost all web frameworks today are shipping some form of middleware API, and the vast majority of them do not put their middleware onto the event loop. One certainly could, but I don't know if NIO should be pushing that as the expected design pattern. A web framework that wants to build things this way could certainly be built (/me looks meaningfully at µExpress), but I don't know if NIO should be shipping that.
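For readers unfamiliar with the middleware pattern being contrasted with channel handlers here, a hedged sketch: a middleware wraps the "next" handler and can transform the response on the way out, which is where compression usually lives in web frameworks. All types and names below are illustrative, not from NIO or any specific framework, and the "compression" is faked by setting a header only.

```swift
struct Request { var path: String; var headers: [String: String] }
struct Response { var headers: [String: String]; var body: [UInt8] }

// A middleware receives the request plus the next handler in the chain,
// and may rewrite the response on the way back out.
typealias Middleware = (Request, (Request) -> Response) -> Response

// Fake compression middleware: decides per response, using knowledge
// (content type) that a pipeline-level handler wouldn't naturally have.
let compression: Middleware = { request, next in
    var response = next(request)
    if (request.headers["accept-encoding"] ?? "").contains("gzip"),
       (response.headers["content-type"] ?? "").hasPrefix("text/") {
        response.headers["content-encoding"] = "gzip"
        // A real middleware would deflate response.body here.
    }
    return response
}

// The application handler the middleware wraps.
let app: (Request) -> Response = { _ in
    Response(headers: ["content-type": "text/plain"], body: Array("hi".utf8))
}

let response = compression(Request(path: "/", headers: ["accept-encoding": "gzip"]), app)
print(response.headers["content-encoding"] ?? "identity") // "gzip"
```

Because the middleware runs inside the framework, the selective-compression decision is just ordinary application code, with no state coordination across pipeline handlers.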
After further discussion with @helje5, I think a more useful construction may be this:
How do people feel about this approach?
Sounds quite good, but we need to assess how many allocations etc. that'll cause per request.
That's actually a fair point. So maybe some "should-I-do-my-job callback" API is the better way to go.
The current HTTPResponseCompressor API doesn't support selective compression:
https://github.com/apple/swift-nio/blob/ed28803a78b187e202c27a62c7a497fbe9cfbbd7/Sources/NIOHTTP1/HTTPResponseCompressor.swift#L59-L63
It's a duplex handler because it needs to pull the `accept-encoding` header out of the request in order to compress the response. The problem is that if the client says it supports compression, the response compressor will then always compress. The only way that I can see to not compress a certain response is to remove the response compressor just before sending a response and then to immediately create a new compressor that then needs to be added to the pipeline. And that will also only work without pipelining (or with the pipelining helper enabled). Don't think the current API is good enough, tbh.