Add auto compression filter #513
base: master
Conversation
General question: I was always under the impression (though I couldn't quickly find a good source to cite) that just-in-time brotli compression was discouraged because of its higher compression times. Is this not a problem here? Otherwise, blindly following the client's preferred encoding could be the wrong thing to do.
@fahrradflucht interesting point, that's something I hadn't thought of. Looking at this blog post by Cloudflare, where they benchmarked their zlib implementation against brotli, it does look like brotli might not be the best choice for on-the-fly compression. That being said, combined with #472 I'd still consider this an MVP. I've been planning to put up an RFC for more fine-tuned compression filters; that RFC would ideally give the user a tool to avoid blindly following the client's preferred encoding.
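The negotiation being discussed can be sketched without any dependencies. This is only an illustration of picking an encoding from an `Accept-Encoding` header by quality value, restricted to a server-supported list; the function name and preference behavior are assumptions, not warp's actual implementation:

```rust
// Minimal sketch of Accept-Encoding negotiation (illustrative only, not warp's code).
// Returns the supported encoding with the highest quality value, or None.
fn pick_encoding(accept_encoding: &str, supported: &[&str]) -> Option<String> {
    let mut best: Option<(String, f32)> = None;
    for part in accept_encoding.split(',') {
        let mut pieces = part.trim().split(';');
        let name = pieces.next().unwrap_or("").trim().to_ascii_lowercase();
        // Parse an optional quality value, e.g. "br;q=0.8"; default is 1.0.
        let q: f32 = pieces
            .find_map(|p| p.trim().strip_prefix("q="))
            .and_then(|v| v.parse().ok())
            .unwrap_or(1.0);
        if q > 0.0 && supported.contains(&name.as_str()) {
            if best.as_ref().map_or(true, |(_, bq)| q > *bq) {
                best = Some((name, q));
            }
        }
    }
    best.map(|(name, _)| name)
}

fn main() {
    // The client marks brotli with a lower quality value, so gzip wins.
    assert_eq!(
        pick_encoding("br;q=0.8, gzip", &["gzip", "br"]),
        Some("gzip".to_string())
    );
    // No overlap with the server's supported encodings: send uncompressed.
    assert_eq!(pick_encoding("zstd", &["gzip", "br"]), None);
}
```

A server that wants to avoid expensive just-in-time brotli could simply leave `"br"` out of its `supported` list while still honoring the rest of the client's preferences.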
cc @seanmonstar for review, because I can't figure out how to add a reviewer 🤔
@ParkMyCar rebase needed, there are conflicting files
@jxs maybe you could review this?
@fahrradflucht While I haven't tested this implementation, generally speaking brotli at a lower compression setting (like 4) compresses slightly better than gzip and is slightly faster as well, so it's generally acceptable. Also, brotli is useful when the site/application is behind a CDN that can cache the compressed content (so not every user has to pay the price for compression).
IMO, it would be nice to add a "threshold" to this implementation, where compression is only performed on items larger than the threshold. Compressing tiny objects usually isn't worth the time it takes. Just a thought.
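The suggested threshold could look something like the following. This is a hypothetical sketch, not part of warp's API: the constant, function name, and the content-type skip list are all invented for illustration.

```rust
// Hypothetical threshold check (not warp's API): only compress bodies
// above a minimum size, and skip formats that are already compressed.
const MIN_COMPRESS_BYTES: usize = 1024;

fn should_compress(body_len: usize, content_type: &str) -> bool {
    // Tiny payloads: compressed output plus headers can be as large as
    // the original, so skip them.
    if body_len < MIN_COMPRESS_BYTES {
        return false;
    }
    // Already-compressed formats gain little from a second pass.
    !matches!(content_type, "image/png" | "image/jpeg" | "application/zip")
}

fn main() {
    assert!(!should_compress(200, "text/html"));
    assert!(should_compress(64 * 1024, "text/html"));
    assert!(!should_compress(64 * 1024, "image/png"));
}
```

The exact cutoff would be worth benchmarking; many servers use something in the 256 byte to 1 KiB range.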
Would it make sense to have the
What is the status of this PR? Is there anything of note that would prevent me from using it as-is?
Hey folks, sorry it's been a long time since I've been able to look at this. The diff definitely needs to be rebased on top of the latest master, but the majority of changes pertain to the compression module, so rebasing shouldn't be too bad. If I find some time I can do it, but if someone else wants to open a new PR with rebased and/or modified changes, they should feel free 🙂

@nicksrandall I agree, being able to specify some parameters, maybe around content size or compression level, would be nice. There should probably be an RFC or an API proposal before work begins, though, as I can imagine a few different APIs for that. If you're interested in designing that, I'd suggest looking at what other web frameworks do too.

@kaj I think that would be a great idea! Although that would require a decent amount of changes to the existing compression filters, since currently they're all built as

@novacrazy This PR is largely a "wrapper" around some work from #472, which was added as part of the v0.2.3 release. What's stopping you from using this PR as-is is the requirement of an
2077: gzip content negotiation r=dwerner a=dwerner This PR brings in content-negotiation using warp's `compression` feature. This should be considered an interim fix until something like seanmonstar/warp#513 lands. Co-authored-by: Daniel Werner <dan@casperlabs.io>
Disable response compression until `Accept-Encoding` headers are properly evaluated, and the compression can be chosen based on the HTTP request. This relies on seanmonstar/warp#513 being implemented.
This PR introduces a filter `warp::compression::auto()` which will pick a compression algorithm based on the value in the `Accept-Encoding` header. If there is no `Accept-Encoding` header, then no compression is applied.

This also fixes the bug where having two compression filters would put two `Accept-Encoding` headers in the response, and the request would fail. While not recommended, double compressing a response is possible and shown in the example.

Note: This work is dependent on a PR for hyperium/headers that enables reading quality-value syntax from the `Accept-Encoding` header; as such, it cannot be landed until that PR is merged.

This is follow-up work from: #472
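Based on the description, usage might look roughly like this. This is a sketch only: `warp::compression::auto()` comes from this unmerged PR and is not on crates.io, and the route and handler here are invented for illustration.

```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // Hypothetical usage of the filter proposed in this PR: `auto()` would
    // pick gzip, brotli, etc. from the request's Accept-Encoding header,
    // and apply no compression if the header is absent.
    let hello = warp::path("hello")
        .map(|| "Hello, compressed world!")
        .with(warp::compression::auto());

    warp::serve(hello).run(([127, 0, 0, 1], 3030)).await;
}
```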