After the connection from htc to hts is terminated, hts does not handle the loss of the incoming connection. It keeps waiting inside its inner loop for the tunnel to come back, which means hts never closes its connection to its upstream device, port, or pipe. This has implications when hts is itself connected to something stateful, such as ssh.
Steps to reproduce:

In this example, hts is forwarding to an ssh server. Run hts in this configuration:

```
$ hts -D 5 -F sshserver:22 listenaddress:8080
```

1. Connect htc to hts with a simple redirect to stdout. The ssh server that hts connects to will respond by sending its initial connection string, which you will see on stdout:

   ```
   $ htc -s -P httpproxyserver:8888 htsserver:8080
   SSH-2.0-dropbear
   ������-��_m���curve25519-sha256,[email protected],d, <etc>
   ```

2. Hit CTRL-C to break out of htc.
3. Try again from step 1. The upstream ssh server will now not respond, because hts never closed its connection when htc terminated.
hts does detect the disconnect at step 2, as you can tell from the log. Below, CTRL-C was hit to terminate htc at 1043333:

You can see that hts detects that htc was terminated; however, it never breaks out of its inner loop and keeps polling, which causes it to maintain its connection to the ssh server, which is never reset.
This seems to be related to handle_input() (common.h:154), which ignores EAGAIN errors, and to tunnel_read_request() (tunnel.c:802), which explicitly sets errno to EAGAIN when it detects that a connection has been terminated. This forces hts to treat a terminated connection the same way as a poll timeout.
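To make the failure mode concrete, here is a minimal sketch of the pattern described above. This is not httptunnel's actual code; the function names and structure are illustrative only, assuming a tunnel socket driven by a poll loop:

```c
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Illustrative only: a reader that, like the tunnel_read_request()
   behavior described above, maps EOF (read() returning 0, i.e. the
   peer closed the connection) onto EAGAIN, which is also what a poll
   timeout with no data produces. */
ssize_t sketch_read_request (int fd, unsigned char *req)
{
  ssize_t n = read (fd, req, 1);
  if (n == 0)
    {
      errno = EAGAIN;   /* dead tunnel disguised as "no data yet" */
      return -1;
    }
  return n;
}

/* The caller then cannot distinguish a dead tunnel from an idle one:
   EAGAIN means "try again later", so the loop never exits and the
   upstream (e.g. ssh) connection is never closed. */
void sketch_inner_loop (int tunnel_fd)
{
  struct pollfd pfd = { .fd = tunnel_fd, .events = POLLIN };
  unsigned char req;

  for (;;)
    {
      if (poll (&pfd, 1, 1000) <= 0)
        continue;   /* genuine timeout: fine to retry */
      if (sketch_read_request (tunnel_fd, &req) == -1)
        {
          if (errno == EAGAIN)
            continue;   /* EOF looks identical to a timeout: spins forever,
                           since a closed socket stays readable */
          break;        /* a distinct errno (e.g. ECANCELED) would break
                           out here and allow cleanup */
        }
      /* ... dispatch req ... */
    }
}
```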
The following patch seems to correct the problem:
```diff
--- tunnel.c.orig  2023-08-26 10:55:56.287510606 -0300
+++ tunnel.c  2023-08-26 10:55:19.755465737 -0300
@@ -823,11 +823,12 @@
       if (tunnel_is_client (tunnel)
           && tunnel_in_connect (tunnel) == -1)
         return -1;
-      errno = EAGAIN;
+      /*errno = EAGAIN; Why EAGAIN? This seems to treat a connection closed on the other side the same way as a missed poll*/
+      errno = ECANCELED;
       return -1;
     }
   *request = req;
   tunnel->in_total_raw += n;
   log_annoying ("request = 0x%x (%s)", req, REQ_TO_STRING (req));
```
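For completeness, here is a sketch of the caller-side filtering that makes the EAGAIN choice harmful and the ECANCELED change effective. Again, this is an illustration, not the actual handle_input() from common.h; the signature is invented:

```c
#include <errno.h>

/* Invented signature, in the spirit of the handle_input() described
   at common.h:154: EAGAIN is swallowed as a non-event, while any
   other errno propagates so the server can tear down its upstream
   (ssh) side. */
int sketch_handle_input (int (*reader)(void *), void *data)
{
  int n = reader (data);

  if (n == -1 && errno == EAGAIN)
    return 0;   /* ignored: with the old code, EOF lands here too */
  return n;     /* with the patch, ECANCELED reaches the caller and
                   the inner loop can finally terminate */
}
```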
I haven't made this a pull request because I'm not sure of the ramifications this will have on temporary proxy issues. The same fix could be applied when the tunnel output is detected as closed, but that requires some changes to the way tunnel padding occurs, since it is during tunnel padding that the closed output gets detected.