Express returns a non-compliant HTTP/206 response when gzip is enabled #185
Comments
Off the top of my head, I can tell a couple of cases where this change might break applications that depend on genomics data and need to handle huge files, such as https://github.com/igvteam/igv.js/, which make ranged requests for compressed content. The decompression is handled by the browser, and the final receiver gets the bytes of the payload that correspond to the uncompressed form. This has always been like this. For a lot of developers, intuition & usability >>> RFC compliance. Am I misinterpreting something?

PS: Not everyone uses Next.js or related Azure services. Some of us have to serve GBs of files that can be compressed fairly well on request (1-10 MB chunks), such as genomic data, and CANNOT compute the total length of the compressed file ahead of time for performance reasons. That means those who do use compression for large files that would benefit from this library will have to drop it or filter these files out. How these half-baked requirements make their way into an RFC is really appalling sometimes.
Is it possible to add a config option such as `app.use(compression({ enforceRFC7233: true }))`, which would default to the current behavior?
Hi @IbrahimTanyalcin, you can make any rule you like to determine whether the response should be compressed or not using the `filter` option.
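As a minimal sketch of such a filter rule (assuming the `compression` middleware's documented `filter` option, which receives `(req, res)` and returns a boolean), one could skip compression whenever the client sent a `Range` header, so ranged requests are always served from the identity-encoded bytes:

```javascript
// Sketch of a custom filter for the compression middleware: skip compression
// whenever the client sent a Range header, so Express can answer the ranged
// request with byte-accurate, uncompressed content instead of a bad 206.
function shouldCompress(req, res) {
  if (req.headers['range'] !== undefined) {
    return false; // ranged request: serve identity-encoded bytes
  }
  return true; // otherwise allow compression
}

// Wiring it up would look like:
//   app.use(compression({ filter: shouldCompress }))
```

In a real setup you would likely fall back to `compression.filter(req, res)` (the module's default, which also checks `Content-Type`) instead of returning `true` unconditionally.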
@dougwilson I understand that I can pass a filter function, but that means dropping compression support for exactly the plain-text file types (fasta etc.) that would benefit the most from it. The benefit of compressing a, say, 300 kB Next.js bundle is nothing compared to the benefit of compressing a 10 MB chunk of a 5 GB genomic sequence file. Am I wrong in my reasoning? It would be so nice if we could devise a solution that wouldn't break the apps of people like me and would also allow @mscbpi and others to achieve what they want.
I'm not sure I understand. If you return `false` from the filter, the response simply won't be compressed.
In genomics we deal with large files served in ranged chunks.

The RFC requires the server to know the gzipped size beforehand so that the `Content-Range` header can be computed. I think a solution that wouldn't break backwards compatibility, without resorting to the `filter` option, would be ideal.
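For reference, RFC 7233 defines the satisfied-range form of this header as `bytes first-last/complete`, and allows `*` in place of the complete length when it is unknown. A tiny illustrative helper (names are my own, not from the thread):

```javascript
// Build an RFC 7233 Content-Range value: "bytes first-last/complete".
// When the complete length is unknown (as with on-the-fly gzip, where the
// total compressed size cannot be computed ahead of time), the RFC permits
// the form "bytes first-last/*" instead of a concrete total.
function contentRange(first, last, complete) {
  return `bytes ${first}-${last}/${complete === undefined ? '*' : complete}`;
}
```

The hard part described in the thread remains: even with the `/*` form, the `first` and `last` offsets must refer to positions in the *compressed* representation, which a streaming gzip middleware cannot know in advance.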
OK, here is some more info. In my route file for serving static content, I have something like:

```js
app.use('/static', compression(), _static, function (req, res, next) {
  // downstream middleware
  ...
```
I also dug into the request headers sent for such large fasta files and compared them to those for regular js/html/css files. It turns out the client logic already requests the large genomic files without compression, so in my case this change wouldn't seem to break the behavior with large files, as they are already not compressed. I was wrong. (There might still be other genomics apps that do not send the correct encoding request header and expect it to work, though 🤷)
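To make the client side concrete, here is a hypothetical sketch of how a genomics viewer of the kind mentioned above fetches a file in fixed-size chunks, each request carrying its own `Range` header (the helper name and chunk size are illustrative, not taken from igv.js):

```javascript
// Hypothetical sketch: build the Range request header for the n-th
// fixed-size chunk of a large file. Range byte positions are inclusive,
// so a 1 MiB chunk starting at 0 ends at byte 1048575.
function rangeHeader(chunkIndex, chunkSize) {
  const start = chunkIndex * chunkSize;
  const end = start + chunkSize - 1;
  return `bytes=${start}-${end}`;
}

// A client would then issue something like:
//   fetch(url, { headers: { Range: rangeHeader(2, 1048576) } })
```

If the server compresses such a response, the byte offsets the client asked for no longer line up with the bytes it receives, which is exactly the mismatch this issue describes.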
Hi @IbrahimTanyalcin, thanks a lot. This is not just Microsoft Azure's point of view/interpretation of the standard (though they may themselves be wrong): other resources and support requests lead to the same observation.
@mscbpi yes, you are correct; I never objected to the RFC. However, from a technical standpoint it is very costly to compute the total zipped size and adjust the `Content-Range` header accordingly. I wish a provisional header existed for this.
CDNs (mostly Azure Front Door) use HTTP range requests to retrieve data from the origin when caching is enabled.

Express should ignore the `Range:` request header when `gzip` compression is in place, since Express is unable to respond with a compliant `HTTP/206` answer whose computed `Content-Range` header takes the compressed data length into account.

A fair, compliant workaround is, in this very case of compression where computing `Content-Range` values would be too complex, to ignore the client's `Range:` header and answer the whole compressed content in an `HTTP/200` response.

Handling `Range:` headers is optional, so answering an `HTTP/200` is OK. Answering an `HTTP/206` with wrong `Content-Range` values is not OK.

Meanwhile, another workaround is to disable compression and let the CDN handle it, or to disable CDN caching; however, it would be fair to expect Express to return a compliant HTTP response in any case.
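The workaround described above could be sketched as a small Express-style middleware (hypothetical, not part of the `compression` module): when the client both accepts gzip and asks for a range, drop the `Range` header before the rest of the stack sees it, so the response becomes a full `HTTP/200`. Since handling `Range` is optional per RFC 7233, this stays compliant.

```javascript
// Hypothetical middleware sketch: ignore the client's Range header whenever
// the response may be gzip-compressed, forcing a full HTTP/200 instead of a
// non-compliant HTTP/206 with a wrong Content-Range.
function ignoreRangeWhenCompressing(req, res, next) {
  const acceptEncoding = req.headers['accept-encoding'] || '';
  if (/\bgzip\b/.test(acceptEncoding) && req.headers['range'] !== undefined) {
    delete req.headers['range']; // downstream handlers now see an unranged request
  }
  next();
}

// It would be mounted ahead of compression():
//   app.use(ignoreRangeWhenCompressing);
//   app.use(compression());
```

This trades bandwidth (the whole file is sent) for correctness; for the multi-gigabyte files discussed earlier, the filter-based approach of skipping compression on ranged requests may be the better trade-off.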
References:

- RFC 7233
- Details and highlighting of the Express behavior: https://github.com/DanielLarsenNZ/nodejs-express-range-headers