
Add auto compression filter #513

Open · wants to merge 4 commits into master

Conversation

@ParkMyCar (Contributor) commented Mar 30, 2020

This PR introduces a filter warp::compression::auto() which will pick a compression algorithm based on the value in the Accept-Encoding header. If there is no Accept-Encoding header, then no compression is applied.

This also fixes the bug where stacking two compression filters would put two Content-Encoding headers on the response, causing the request to fail. While not recommended, double-compressing a response is possible and is shown in the example.

Note: this work depends on a PR for hyperium/headers that enables reading quality-value syntax from the Accept-Encoding header; as such, it cannot land until that PR is merged.

This is follow-up work from #472.
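For illustration, a minimal usage sketch of the proposed filter, assuming the API lands as described in this PR (`warp::compression::auto()` is not part of a released warp version):

```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // `warp::compression::auto()` is the filter proposed in this PR: it picks
    // a compression algorithm from the request's Accept-Encoding header and
    // applies no compression when the header is absent.
    let route = warp::path("hello")
        .map(|| "Hello, compressed world!")
        .with(warp::compression::auto());

    warp::serve(route).run(([127, 0, 0, 1], 3030)).await;
}
```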

@fahrradflucht commented Apr 13, 2020

General question: I was always under the impression (though I couldn't quickly find a good source to cite) that just-in-time brotli compression was discouraged because of its higher compression times. Is this not a problem here? Otherwise, blindly following the client's preferred encoding could be the wrong thing to do.

@ParkMyCar (Contributor, author)

@fahrradflucht interesting point; that's something I hadn't thought of. Looking at this blog post by Cloudflare, where they benchmark their zlib implementation against brotli, it does look like brotli might not be the best choice for on-the-fly compression.

That being said, combined with #472 I'd still consider this an MVP. I've been planning to put up an RFC for more fine-tuned compression filters; that RFC would ideally give users a tool to avoid blindly following the client's preferred encoding.

@ParkMyCar (Contributor, author)

cc @seanmonstar for review, because I can't figure out how to add a reviewer 🤔

@fahrradflucht fahrradflucht mentioned this pull request Apr 25, 2020
@seanmonstar seanmonstar self-requested a review May 19, 2020 16:36
@ernestas-poskus

@ParkMyCar please rebase; there are conflicting files.

@ernestas-poskus

@jxs maybe you could review this?

@nicksrandall

@fahrradflucht While I haven't tested this implementation, generally speaking brotli at a lower compression setting (like 4) achieves slightly better compression than gzip and is slightly faster as well, so it's generally acceptable. Also, brotli is useful when the site/application is behind a CDN which can cache the compressed content (so not every user has to pay the price for compression).
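To make the trade-off concrete, here is a standalone sketch of capping brotli at a low quality level using the async-compression crate (warp uses that crate internally but does not expose the level; the crate choice and level 4 here are purely illustrative, and the crate needs its "tokio" and "brotli" features enabled):

```rust
use async_compression::{tokio::bufread::BrotliEncoder, Level};
use tokio::io::{AsyncReadExt, BufReader};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let body = vec![b'a'; 16 * 1024]; // stand-in for a response body

    // Cap brotli at a low quality level (4) so on-the-fly compression stays
    // cheap; the highest levels are better reserved for ahead-of-time
    // compression of cacheable assets.
    let mut encoder = BrotliEncoder::with_quality(BufReader::new(&body[..]), Level::Precise(4));
    let mut compressed = Vec::new();
    encoder.read_to_end(&mut compressed).await?;

    println!("{} bytes -> {} bytes", body.len(), compressed.len());
    Ok(())
}
```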

@nicksrandall

IMO, it would be nice to add a "threshold" to this implementation so that compression is only performed on bodies larger than the threshold. Compressing tiny objects usually isn't worth the time it takes. Just a thought; a rough sketch of the idea follows below.
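A rough sketch of the threshold idea (the constant, helper name, and 1 KiB value are made up for illustration; nothing like this exists in warp today):

```rust
/// Hypothetical helper: skip compression for small bodies, where the CPU
/// cost outweighs the byte savings. 1 KiB is an arbitrary starting point.
const MIN_COMPRESS_BYTES: u64 = 1024;

fn should_compress(content_length: Option<u64>) -> bool {
    match content_length {
        // Compress only when the body is known to be large enough.
        Some(len) => len >= MIN_COMPRESS_BYTES,
        // Unknown length (e.g. a streaming body): err on the side of compressing.
        None => true,
    }
}

fn main() {
    assert!(!should_compress(Some(200)));
    assert!(should_compress(Some(4096)));
    assert!(should_compress(None));
}
```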

@kaj commented Jan 19, 2021

Would it make sense to have the compression::auto() filter decompress responses that are pre-compressed with an encoding the request doesn't accept? That would make it easier to store static files precompressed in the binary, which would be nice.
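For context, a sketch of the pre-compressed static file case kaj describes (the file name is an assumption; this only serves clients that accept `br`, which is exactly why the decompression fallback would help):

```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // Embed a brotli-precompressed asset in the binary and serve it with
    // Content-Encoding set. Clients that don't accept `br` would need the
    // decompression fallback discussed above.
    let precompressed = warp::path("index.html").map(|| {
        warp::reply::with_header(
            include_bytes!("index.html.br").to_vec(),
            "content-encoding",
            "br",
        )
    });

    warp::serve(precompressed).run(([127, 0, 0, 1], 3030)).await;
}
```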

@novacrazy

What is the status of this PR? Is there anything of note that would prevent me from using it as-is?

@ParkMyCar (Contributor, author)

Hey folks, sorry it's been a long time since I've been able to look at this. The diff definitely needs to be rebased on top of the latest master, but the majority of changes pertain to the compression module, so rebasing shouldn't be too bad. If I find some time I can do it, but if someone else wants to open a new PR with rebased and/or modified changes, they should feel free 🙂

@nicksrandall I agree, being able to specify some parameters, maybe around content size or compression level, would be nice. There should probably be an RFC or an API proposal before work begins, though, as I can imagine a few different APIs for it. If you're interested in designing that, I'd suggest looking at what other web frameworks do too.

@kaj I think that would be a great idea! It would require a decent number of changes to the existing compression filters, though, since currently they're all built as fn(body) -> Response. Not saying it can't be done.

@novacrazy This PR is largely a "wrapper" around work from #472, which was added as part of the v0.2.3 release. What's stopping you from using this PR as-is is the requirement for an Accept-Encoding header. Browsers specify the compression encodings they support, and their preferences, via this header, but unfortunately the headers crate doesn't support it yet. I put up a PR to add that support, but it's still open as well.
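For readers who need content negotiation before that lands, here is a hand-rolled sketch of picking an encoding from a raw Accept-Encoding value, including q-values (the function and its behavior are illustrative only, not part of warp or headers):

```rust
use std::cmp::Ordering;

/// Pick the supported encoding with the highest q-value from a raw
/// Accept-Encoding header value, e.g. "gzip;q=0.8, br".
fn pick_encoding(accept_encoding: &str, supported: &[&str]) -> Option<String> {
    accept_encoding
        .split(',')
        .filter_map(|part| {
            let mut pieces = part.trim().splitn(2, ';');
            let name = pieces.next()?.trim().to_ascii_lowercase();
            // Parse an optional ";q=0.8" parameter, defaulting to 1.0.
            let q = pieces
                .next()
                .and_then(|p| p.trim().strip_prefix("q="))
                .and_then(|v| v.parse::<f32>().ok())
                .unwrap_or(1.0);
            (supported.contains(&name.as_str()) && q > 0.0).then(|| (name, q))
        })
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap_or(Ordering::Equal))
        .map(|(name, _)| name)
}

fn main() {
    let header = "gzip;q=0.8, br;q=1.0, identity;q=0.5";
    assert_eq!(pick_encoding(header, &["gzip", "br"]).as_deref(), Some("br"));
}
```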

@jxs added the waiting-on-author label Jun 3, 2021
casperlabs-bors-ng bot added a commit to casper-network/casper-node that referenced this pull request Sep 16, 2021
Merge #2077

2077: gzip content negotiation r=dwerner a=dwerner

This PR brings in content-negotiation using warp's `compression` feature. This should be considered an interim fix until something like seanmonstar/warp#513 lands.


Co-authored-by: Daniel Werner <dan@casperlabs.io>
tobiemh added a commit to surrealdb/surrealdb that referenced this pull request Jul 29, 2022
Disable response compression until `Accept-Encoding` headers are properly evaluated, and the compression can be chosen based on the HTTP request. This relies on seanmonstar/warp#513 being implemented.
Labels: waiting-on-author — awaiting some action (such as code changes) from the PR or issue author.

7 participants