Support brotli compression on the server side #142334
Conversation
@@ -8,6 +8,7 @@
import { Server, Request } from '@hapi/hapi';
import HapiStaticFiles from '@hapi/inert';
const Brok = require('brok');
import Brok from 'brok';
The linter doesn't like it
packages/core/http/core-http-server-internal/src/http_server.ts:55:18 - error TS2307: Cannot find module 'brok' or its corresponding type declarations.
55 import Brok from 'brok';
Not sure how to fix it.
There are two things here.
- each dependency of a package must be declared in its Bazel build file.
- apparently this package doesn't provide the proper typings.
I'll try to push to your PR to fix that today if I can find the time.
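For reference, the usual way to silence TS2307 for an untyped package is a small ambient declaration. A minimal sketch (the file path and the plugin typing are illustrative, not necessarily what the fix below actually does):

```ts
// e.g. in a local declaration file such as typings/brok.d.ts (path is illustrative)
declare module 'brok' {
  import type { Plugin } from '@hapi/hapi';

  // brok ships no types of its own, so expose it loosely as a hapi plugin
  const Brok: Plugin<Record<string, unknown>>;
  export default Brok;
}
```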
done in bf59350
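Zooming out, the server-side change boils down to conditionally registering the plugin against the hapi server. A rough sketch (simplified; the config shape and the brok option names are assumptions based on the brok README, not the exact PR code):

```ts
import { Server } from '@hapi/hapi';
// brok has no bundled type declarations, hence the require
const Brok = require('brok');

// hypothetical shape mirroring the server.compression.brotli.* settings discussed below
interface CompressionConfig {
  enabled: boolean;
  brotli: { enabled: boolean; quality: number };
}

async function setupCompression(server: Server, config: CompressionConfig) {
  if (config.enabled && config.brotli.enabled) {
    await server.register({
      plugin: Brok,
      options: {
        compress: { quality: config.brotli.quality },
      },
    });
  }
}
```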
@pgayvallet do you happen to know if cloud's infrastructure (i.e., everything that sits between the browser and the actual Kibana server) supports brotli?
@dgieselaar no, I don't (though given this is content compression and doesn't touch the payload, I would assume it should work).
@pgayvallet you're probably right, I'm confusing it with environments where compression happens at the edge.
LGTM
Pinging @elastic/kibana-core (Team:Core)
Code looks good, but let's wait to do a performance comparison before merging. I've kicked off two builds but haven't had a chance to compare the results:
@rudolf looking at the two reports, it looks like both builds have very similar results. Do you share my observations?
Is it possible that hapi is very picky about how the Accept-Encoding header is formed?
Good find, this seems rather brittle!
Updated elastic/kibana-load-testing#323 and ran a new performance build based on that:
Well, 50th percentile response time seems to be slightly slower while 99th percentile is slightly faster 🤷
This is a slightly unexpected result 🤔
@rudolf do those numbers serve as just a sanity check, or are the results significant? I saw some discussion around the numbers being unstable. Can we move forward with this, or do we need to run additional benchmarks?
@dgieselaar I think we can just say that these numbers show it doesn't make a significant difference, positive or negative. That means we're fine to merge this into main/8.6.
@rockdaboot These are generic performance tests and you can check out the requests made for each of the scenarios here: https://github.com/elastic/kibana-load-testing/tree/main/src/test/scala/org/kibanaLoadTest/scenario
Thanks @rudolf! Just one more question about the benchmarks. Are the client(s) running inside the same region, or even inside the same network, as Kibana?
@elasticmachine merge upstream
I think these are run on the same server using dedicated hardware. @dmlemeshko do we have any docs about the infrastructure?

For completeness' sake, I worked with @dmlemeshko to pull CPU and memory usage from these builds. We don't (yet) have a way to distinguish between builds based on the metrics, but we used timestamps to identify the brotli build. In the graph below the brotli build is in the center and the other two builds are
https://kibana-stats.elastic.dev/app/dashboards#/view/7b3eb890-457d-11ed-9db6-5d970c9cc752
@rudolf so, from our last weekly's discussions, for
We are using a bare metal EX-62 machine provided by Hetzner:
Both ES/Kibana and the Gatling runner are hosted on this machine, which obviously might have some impact (ES and Gatling can battle for resources), but for now we stick to this setup. If you are interested in the ES/Kibana server arguments, you can find them in the FTR config that we use for runs. I will add it to the kibana-load-testing repo docs and make sure it is always up-to-date.
@rockdaboot thanks again for the initial PR, I'm taking over to add the config flag and polish the details :)
LGTM! I just added some minor comments for my own education
@@ -386,6 +386,11 @@ Specifies an array of trusted hostnames, such as the {kib} host, or a reverse
proxy sitting in front of it. This determines whether HTTP compression may be used for responses, based on the request `Referer` header.
This setting may not be used when <<server-compression, `server.compression.enabled`>> is set to `false`. *Default: `none`*

`server.compression.brotli.enabled`::
Should we add documentation about `quality` as well?
Whoops, I forgot to add the review comment for that one. Actually no 😅: I added this setting with the intent of keeping it internal (undocumented) for now, as it was mostly to allow us to eventually perform perf testing on cloud environments by tweaking the `quality` value.
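For the curious, keeping a setting internal typically just means the config schema gains the key without a corresponding docs entry. A rough sketch of what the shape could look like with `@kbn/config-schema` (the defaults and nesting here are illustrative, not the PR's actual values):

```ts
import { schema } from '@kbn/config-schema';

// Illustrative only: the real schema lives in core's http config.
const compressionConfig = schema.object({
  enabled: schema.boolean({ defaultValue: true }),
  brotli: schema.object({
    // intentionally undocumented, per the comment above
    enabled: schema.boolean({ defaultValue: false }),
    quality: schema.number({ defaultValue: 3, min: 0, max: 11 }),
  }),
});
```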
it(`supports brotli compression`, async () => {
  await supertest
    .get('/app/kibana')
Nit: should we explicitly test it with an API? Just in case /app and /api are served differently (as we do today with bundles)?
Yeah, all these `compression.ts` tests are based on the postulate that application pages and API endpoints are served the same way. Seems like a good idea to duplicate the tests against an 'API' endpoint.
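Such a duplicated test could look roughly like this (a sketch assuming the same supertest agent is in scope as in the snippet above; `/api/status` is only an example endpoint, not necessarily what the PR uses):

```ts
it(`supports brotli compression for API responses`, async () => {
  await supertest
    .get('/api/status') // example JSON API route
    .set('accept-encoding', 'br')
    .expect(200)
    .expect('content-encoding', 'br');
});
```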
💚 Build Succeeded
To update your PR or re-run it, just comment with:
cc @pgayvallet
Background
While investigating high latency in the Kibana server during response serialization, I realized that Kibana uses `Content-Encoding: gzip` even if the browser supports `br` (brotli compression). Since brotli is known to often use less CPU time with an even better compression ratio than gzip, I gave it a spin.
FYI, our data is JSON encoded.
Results from console testing
Compression of a 41492998-byte JSON file. This data size is quite common when generating flamegraphs.
The result for gzip is roughly the same as what I saw in the browser's developer console.
Compression level 3 seemed to be a good compromise between CPU usage and compression ratio (I also tested other levels).
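The console test described above can be reproduced with Node's built-in zlib along these lines (a sketch; the file name is a placeholder and the resulting numbers will depend on the payload):

```ts
import { readFileSync } from 'fs';
import { gzipSync, brotliCompressSync, constants } from 'zlib';

// placeholder input file; any large JSON payload will do
const input = readFileSync('flamegraph.json');

const gz = gzipSync(input);
const br = brotliCompressSync(input, {
  params: {
    [constants.BROTLI_PARAM_QUALITY]: 3, // quality 3, as discussed above
    [constants.BROTLI_PARAM_SIZE_HINT]: input.length,
  },
});

console.log(`original: ${input.length} bytes`);
console.log(`gzip:     ${gz.length} bytes`);
console.log(`brotli-3: ${br.length} bytes`);
```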
Browser Support of brotli
All modern browsers support the `br` Content-Encoding. Older browsers will only offer `gzip` and `deflate`.
Details on this: https://caniuse.com/?search=brotli
Results from the MVP Universal Profiling cluster
The brotli compression shaves off ~40% from the "white gap" on the right side (serialization + compression).
The compressed network payload goes down from 6.7MB to 3.9MB, ~40% less data to transfer (seen in the browser dev console under "network").
gzip: (screenshot omitted)
brotli: TBD
Conclusion
Support for the `br` Content-Encoding has the potential to strongly reduce CPU usage and moderately lower network bandwidth for Kibana in cases where larger amounts of compressible data are transferred. This may have a positive effect on cloud costs.
The risk of introducing downsides seems relatively low.