Tune http.Client's Transport's MaxIdleConns/MaxIdleConnsPerHost #127
Comments
I'll try to get to this very soon. Nice research @0xdevalias, thank you!
Came here to ask about this; I am looking for a simple way to control the number of connections I am establishing via the command line.
Did some quick testing on Windows and there is indeed a speedup. The default values from the current code were used as a baseline.
And after setting the following values:

```go
client.client = &http.Client{
	Timeout:       opt.Timeout,
	CheckRedirect: redirectFunc,
	Transport: &http.Transport{
		Proxy:               proxyURLFunc,
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 100,
		TLSClientConfig: &tls.Config{
			InsecureSkipVerify: opt.InsecureSSL,
		},
	}}
```
So with default parameters it takes around 17 seconds for a 5000-entry wordlist, and with both parameters set to 100 we are at around 15 seconds. So I think we should use this optimization and hope we will not kill many servers with it :)
I created a PR over here:
In doing some work on another project, I've learnt far more than I ever expected to about Go's `http.Client` and how its defaults may not be ideal. We're already setting timeouts, which is good, but the defaults around idle connection reuse are pretty woeful. These are our current settings:
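(Roughly the same client as in the comment above, just without the idle-connection fields set — a reconstruction based on that snippet, so details may differ from the actual code:)

```go
// Reconstructed sketch of the current settings; derived from the tuned
// snippet quoted earlier, minus MaxIdleConns/MaxIdleConnsPerHost.
client.client = &http.Client{
	Timeout:       opt.Timeout,
	CheckRedirect: redirectFunc,
	Transport: &http.Transport{
		Proxy: proxyURLFunc,
		TLSClientConfig: &tls.Config{
			InsecureSkipVerify: opt.InsecureSSL,
		},
	}}
```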
Some refs:
Looking at the source for net/http/transport.go we can see that `MaxIdleConns` is 100, but `DefaultMaxIdleConnsPerHost` is only 2:
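(Paraphrased from Go's net/http/transport.go; the exact fields vary a little between Go versions:)

```go
// DefaultTransport is the default implementation of Transport.
var DefaultTransport RoundTripper = &Transport{
	Proxy: ProxyFromEnvironment,
	DialContext: (&net.Dialer{
		Timeout:   30 * time.Second,
		KeepAlive: 30 * time.Second,
	}).DialContext,
	MaxIdleConns:          100,
	IdleConnTimeout:       90 * time.Second,
	TLSHandshakeTimeout:   10 * time.Second,
	ExpectContinueTimeout: 1 * time.Second,
}

// DefaultMaxIdleConnsPerHost is the default value of Transport's
// MaxIdleConnsPerHost.
const DefaultMaxIdleConnsPerHost = 2
```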
What this means in practical terms is that for every 100 connections we open to a webserver, we close 98 of them, which leaves a lot of sockets sitting in our kernel in the `TIME_WAIT` state, and also means we need to pay the overhead of establishing a new TCP (and/or TLS) handshake on those 98 closed sockets.

I'm not 100% on this, but I think the best setting would be to set `MaxIdleConns` == `MaxIdleConnsPerHost` == the number of goroutines/'threads' specified, along the lines of the sketch below.
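(A minimal sketch, reusing the names from the snippet above and assuming the configured worker count is available as `opt.Threads` — the real option name may differ:)

```go
client.client = &http.Client{
	Timeout:       opt.Timeout,
	CheckRedirect: redirectFunc,
	Transport: &http.Transport{
		Proxy: proxyURLFunc,
		// Keep one idle (keep-alive) connection per worker goroutine,
		// so sockets are reused between requests rather than closed.
		// opt.Threads is assumed here to be the goroutine count.
		MaxIdleConns:        opt.Threads,
		MaxIdleConnsPerHost: opt.Threads,
		TLSClientConfig: &tls.Config{
			InsecureSkipVerify: opt.InsecureSSL,
		},
	}}
```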
Then if we make sure that we set keep-alive on requests to the server, where supported, we should get a lot more reuse per socket, which should mean faster gobusting.

For this to work, you also have to ensure you fully read/close the response body:
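(A generic sketch of the pattern, not gobuster's actual code — the body has to be drained to EOF and closed before the transport will return the connection to its idle pool:)

```go
resp, err := client.Do(req)
if err != nil {
	return err
}
defer resp.Body.Close()

// Read whatever is left of the body; an un-drained body prevents
// the underlying TCP connection from being reused.
if _, err := io.Copy(ioutil.Discard, resp.Body); err != nil {
	return err
}
```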
It looks as though the entire body is already being read, in these areas, so this should be fine:
There are a LOT more little things that can be tuned in the `http.Client`'s `Transport`, but I'm not sure if/what would make as much of a difference as this.

Making use of keep-alive was one of the techniques discussed in this research:

If anyone wants to benchmark the speed/resource difference before/after, I'd be pretty interested to see just how much of an improvement it makes, but the above video claims about a 400% increase from reusing the connections (and something like 6000% if you want to use HTTP pipelining..).
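(For anyone who wants to try, a rough timing harness could look something like this — the target URL and request count are made up for illustration:)

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"time"
)

// timeRequests issues n GET requests with the given client and returns the
// total wall-clock time, draining each body so connections can be reused
// when the transport allows it.
func timeRequests(client *http.Client, url string, n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue
		}
		io.Copy(ioutil.Discard, resp.Body)
		resp.Body.Close()
	}
	return time.Since(start)
}

func main() {
	tuned := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			MaxIdleConns:        100,
			MaxIdleConnsPerHost: 100,
		},
	}
	fmt.Println("default:", timeRequests(http.DefaultClient, "http://127.0.0.1:8080/", 5000))
	fmt.Println("tuned:  ", timeRequests(tuned, "http://127.0.0.1:8080/", 5000))
}
```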