Apache Bench can fill up ipvs service proxy in seconds #544
I actually don't see any TIME_WAIT sockets on the physical host, but I do see a ton in the containers. I made 6 replicas this time, all still on the same host, node6. Below is another picture of where the numbers are when the requests stop being answered again. The limit still seems to be 14000, just divided among the containers now. Below is the number of TIME_WAIT sockets retrieved from each container.
Just wanted to point out that I am not using DSR, which makes me wonder why there is an accumulation of TIME_WAIT connections; in my case, shouldn't the LVS be able to see all packets sent both ways?
TIME_WAIT is part of the TCP standard. The state should linger for two minutes (depending on how the connection was shut down), and IPVS also keeps the state so it can forward stray packets.
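For reference, one way to count sockets in TIME_WAIT, assuming iproute2's ss is available on the host or inside the container:

$ ss -tan state time-wait | wc -l   # subtract 1 for the header line ss prints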
I've done some research on what is going on, and it turns out there is a legitimate problem with IPVS. IPVS is not reusing ports the way it is supposed to, and thus the ephemeral ports are exhausted, depending on the ephemeral port range (net.ipv4.ip_local_port_range). Setting net.ipv4.vs.conntrack=0 in sysctl somehow solves the reuse problem, but it breaks NodePort (and probably other things), so I don't believe that is the solution. I don't know whether only CentOS 7 is affected or this is a broader problem, but I imagine many other engineering teams using IPVS as a service proxy will eventually run into this limitation.
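A quick way to check the two values mentioned above (the exact numbers will vary per system):

$ sysctl net.ipv4.ip_local_port_range   # bounds the client-side ports available
$ sysctl net.ipv4.vs.conntrack          # 0 avoids the reuse problem here, but breaks NodePort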
We have been investigating the problem and have come to the following conclusions:
Thank you for looking into this. We aren't utilizing BGP or ECMP yet, so a load balancer will add all nodes regardless. For example, disabling conntrack won't affect a pod hitting a service IP to reach another pod on a different node? And will session affinity like clientip still work? EDIT: EDIT2: the setting appears to solve everything, even NodePort!
The special sauce for me seems to be: net.ipv4.vs.conntrack=1
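A minimal sketch of persisting such a setting across reboots (the file name is illustrative, and net.ipv4.vs.* only exists once the ip_vs module is loaded):

# /etc/sysctl.d/90-ipvs.conf
net.ipv4.vs.conntrack = 1

$ sysctl --system   # reload all sysctl configuration files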
Our tests showed that disabling reuse with 'net.ipv4.vs.conn_reuse_mode=0' interferes with scaling. When adding more pods in a high-traffic scenario, the traffic sticks to the old, overloaded pods, and when scaling down, traffic is sent to pods that no longer exist.
Please read this excellent comment on a referred issue: moby/moby#35082 (comment). Be aware that a stream of connects from a single source may not be the common case in real life; it is more likely that you have few connections each from very many sources. You may be tuning your system to handle a case that only exists in your lab, and while doing so you tweak parameters that are standard and are there for a reason. The result may be that your app becomes more unstable in real life, where the network is less reliable, while performing excellently in your lab, which is probably a LAN.
One of the suggestions was to set --notrack on the host:
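The exact rule isn't preserved in the thread; a typical raw-table NOTRACK rule looks like the following (the port is illustrative only):

$ iptables -t raw -A PREROUTING -p tcp --dport 30530 -j CT --notrack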
This causes issues with non-local pod communication, AFAIK. Also, for reference, the "one second delay" article, which explains the problem and provides some solutions: https://marc.info/?l=linux-virtual-server&m=151743061027765&w=2
I have the same confusion: why does IPVS drop a SYN packet that hits an IPVS connection in TIME_WAIT state if that connection uses Netfilter connection tracking (conntrack=1)?
@neeseius we have set conn_reuse_mode to 0 in the latest build; could you test whether you are experiencing the same problem with
@roffe I tried your image in our setup and it seems to solve the problem! When running with
No, that was the only change.
This does appear to solve the problem in my testing, even when scaling up and down. I know we toyed with these parameters before, but it interfered with scaling. However, I noticed this is new: Is that what made the difference?
v0.2.3 released with IPVS throughput fixes |
I don't understand how the last two can be used at the same time, given what the kernel docs say about
so by setting
And this is the main problem I see with this, since setting it to zero basically disables
Must be a typo in the docs; the kernel does not seem to check whether conn_reuse_mode is 0 when expiring nodest connections: https://github.com/torvalds/linux/blob/master/net/netfilter/ipvs/ip_vs_core.c#L1982
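The two knobs under discussion can be inspected together on a host with ip_vs loaded:

$ sysctl net.ipv4.vs.conn_reuse_mode net.ipv4.vs.expire_nodest_conn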
Otherwise, will meet issue cloudnativelabs/kube-router#544
[ Upstream commit f0a5e4d7a594e0fe237d3dfafb069bb82f80f42f ]

YangYuxi is reporting that connection reuse is causing one-second delay when SYN hits existing connection in TIME_WAIT state. Such delay was added to give time to expire both the IPVS connection and the corresponding conntrack. This was considered a rare case at that time but it is causing problem for some environments such as Kubernetes.

As nf_conntrack_tcp_packet() can decide to release the conntrack in TIME_WAIT state and to replace it with a fresh NEW conntrack, we can use this to allow rescheduling just by tuning our check: if the conntrack is confirmed we can not schedule it to different real server and the one-second delay still applies but if new conntrack was created, we are free to select new real server without any delays.

YangYuxi lists some of the problem reports:

- One second connection delay in masquerading mode: https://marc.info/?t=151683118100004&r=1&w=2
- IPVS low throughput: kubernetes/kubernetes#70747
- Apache Bench can fill up ipvs service proxy in seconds: cloudnativelabs/kube-router#544
- Additional 1s latency in `host -> service IP -> pod`: kubernetes/kubernetes#90854

Fixes: f719e3754ee2 ("ipvs: drop first packet to redirect conntrack")
Co-developed-by: YangYuxi <[email protected]>
Signed-off-by: YangYuxi <[email protected]>
Signed-off-by: Julian Anastasov <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
I am not sure if I have something configured wrong, but here is my CentOS 7 physical node and kube-router agent setup:
[ipvsadm package]
$ rpm -q ipvsadm
ipvsadm-1.27-7.el7.x86_64
[kube router process and options]
$ ps -ocommand= -C kube-router
/usr/local/bin/kube-router --run-router=true --run-firewall=true --run-service-proxy=true --kubeconfig=/etc/kubernetes/kube-router.kubeconfig --hostname-override=node6 --enable-overlay=true
[service]
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-svc NodePort 172.30.176.114 80:30530/TCP 7h
[ipvs]
$ ipvsadm -ln | head -n 1
IP Virtual Server version 1.2.1 (size=4096)
[ipvs service]
$ ipvsadm -ln | grep -A1 30530
TCP 10.200.1.146:30530 rr
-> 172.32.9.68:80 Masq 1 0 0
If I use Apache Bench with TCP keep-alive, all is swell and absurdly fast, posting over 10,000 requests per second, and ipvsadm will show stats like those below during such a test:
$ ipvsadm -ln | grep -A1 30530
TCP 10.200.1.146:30530 rr
-> 172.32.9.68:80 Masq 1 0 757
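For reference, keep-alive in Apache Bench is enabled with the -k flag, so the fast run above corresponds to something like:

$ ab -k -c 100 -n 20000 http://node6:30530/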
However, if I run the same test without keep-alive, then "InActConn" jumps to 14000 within a few seconds. Up until that point things are very fast, but after that the virtual server completely hangs and stops responding to requests until "InActConn" drops back below 14000. This happens whether I run Apache Bench on the node itself against the cluster IP, or from a random server against the NodePort.
--- ipvs
$ ipvsadm -ln | grep -A1 30530
TCP 10.200.1.146:30530 rr
-> 172.32.9.68:80 Masq 1 0 14115
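The "InActConn" counter above can be watched live during a test, e.g.:

$ watch -n1 'ipvsadm -ln | grep -A1 30530'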
--- apache bench output
$ ab -c 100 -n 20000 http://node6:30530/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking node6 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests
Server Software: Apache/2.4.34
Server Hostname: node6
Server Port: 30530
Document Path: /
Document Length: 2512 bytes
Concurrency Level: 100
Time taken for tests: 63.914 seconds
Complete requests: 20000
Failed requests: 0
Total transferred: 55860000 bytes
HTML transferred: 50240000 bytes
Requests per second: 312.92 [#/sec] (mean)
Time per request: 319.569 [ms] (mean)
Time per request: 3.196 [ms] (mean, across all concurrent requests)
Transfer rate: 853.50 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 311 456.6 10 1005
Processing: 1 7 4.5 8 36
Waiting: 0 7 4.5 8 36
Total: 2 318 453.1 19 1010
Percentage of the requests served within a certain time (ms)
50% 19
66% 21
75% 1003
80% 1004
90% 1004
95% 1005
98% 1005
99% 1006
100% 1010 (longest request)