When proxying HTTP (reading requests), the proxy starts allocating memory for every request, causing GC pressure.
This is a common case: we want to know what we are proxying, to gather metrics and provide more info to users...
In this benchmark, GC takes approx. 5% of CPU time (2nd block to the right).
`RUN=BenchmarkRespBody1k run_bench http http`
There are two sources of allocations: for each response body it allocates a 32 KiB buffer (the body size is irrelevant; I'm using 1 KiB in the benchmark), and it would allocate another 32 KiB buffer if there was a request body.
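To make the per-request cost visible, here is a small, self-contained sketch (not from the Martian code base) that measures the allocation with testing.AllocsPerRun. The struct{ io.Reader } wrapper hides strings.Reader's WriteTo method so io.Copy takes the same buffered slow path it takes for a real response body:

```go
package main

import (
	"fmt"
	"io"
	"strings"
	"testing"
)

// discardWriter deliberately does not implement io.ReaderFrom, so io.Copy
// cannot take a fast path and must allocate its own intermediate buffer
// (32 KiB in the current stdlib).
type discardWriter struct{}

func (discardWriter) Write(p []byte) (int, error) { return len(p), nil }

// allocsPerDrain reports how many heap allocations draining a 1 KiB body
// with io.Copy performs per run. The wrapper hides WriteTo so the slow,
// buffer-allocating path is exercised, mimicking a plain net/http body.
func allocsPerDrain() float64 {
	r := strings.NewReader("")
	body := strings.Repeat("x", 1024)
	var src io.Reader = struct{ io.Reader }{r}
	var dst io.Writer = discardWriter{}
	return testing.AllocsPerRun(100, func() {
		r.Reset(body)
		io.Copy(dst, src)
	})
}

func main() {
	// Each drain allocates at least once: the fresh 32 KiB copy buffer.
	fmt.Println("allocs per 1 KiB drain:", allocsPerDrain())
}
```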
We could fix that by porting Martian to fasthttp or some other "zero alloc" HTTP package.
Typically they offer an easy migration path and a 10x perf gain.
After some more investigation, I decided to try to fix it.
Using another library has a bunch of drawbacks I'd like to avoid:
- breaking compatibility
- new bugs / stability / SECURITY concerns
- it needs to be done and tested...
The main issue is the allocations in io.copyBuffer(): since the buffer is large, it quickly triggers GC.
I sent a patch to fix it: golang/go#57205.
The other issue is known and has been investigated by Olivier Poitrey (wow): golang/go#37809.
It seems there is no easy fix for it, and it's much less of an issue for us.
The reviewer sent another patch that moves the io.Copy buffer pool optimization from the http package directly into io! The scope is much larger, but the issue is now fixed at its core. It's currently marked as waiting-release for Go 1.21.
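Until a release with that optimization ships, callers can also sidestep the allocation entirely by leaning on io.Copy's fast-path interfaces: when src implements io.WriterTo or dst implements io.ReaderFrom, no intermediate buffer is allocated at all. A quick stdlib-only check:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"testing"
)

// allocsCopyToBuffer drains a 1 KiB reader into a bytes.Buffer.
// strings.Reader implements io.WriterTo and bytes.Buffer implements
// io.ReaderFrom, so io.Copy delegates to them and never allocates the
// 32 KiB intermediate buffer.
func allocsCopyToBuffer() float64 {
	src := strings.NewReader(strings.Repeat("x", 1024))
	var dst bytes.Buffer
	dst.Grow(2048) // pre-size so the buffer never grows during the runs
	return testing.AllocsPerRun(100, func() {
		src.Seek(0, io.SeekStart)
		dst.Reset()
		io.Copy(&dst, src)
	})
}

func main() {
	fmt.Println("allocs per fast-path copy:", allocsCopyToBuffer()) // 0
}
```

For a proxy this is rarely a drop-in fix (the destination is usually a net.Conn-backed writer without ReaderFrom semantics for the body), but it explains why the allocation only bites on some code paths.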