Getting MaxListenersExceededWarning and then port7777 shutting down #36
It connects and then it immediately shuts down after this MaxListenersExceededWarning. The exact same process was working before. Thoughts on how to fix?
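For context, MaxListenersExceededWarning is Node.js reporting that more than ten listeners were attached to a single EventEmitter; it is a warning, not an error, and by itself it does not stop the process. Below is a minimal sketch (not 7777's actual code; the `tunnel` emitter is hypothetical) of how to trace where such a warning comes from and raise the limit:

```ts
import { EventEmitter } from "node:events";

// Print the full stack trace of process warnings such as
// MaxListenersExceededWarning, to see which emitter triggers them.
// (Running node with --trace-warnings achieves the same thing.)
process.on("warning", (warning) => {
  console.warn(warning.name, warning.message);
  console.warn(warning.stack);
});

// Hypothetical emitter standing in for whatever accumulates listeners.
const tunnel = new EventEmitter();

// Node warns once more than 10 listeners are registered on one emitter.
// If the listeners are legitimate (e.g. one per forwarded connection),
// the limit can be raised; this silences the warning but does not fix a leak.
tunnel.setMaxListeners(50);
```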
Hi @kimberlymcm, sorry for the delay. Regarding the MaxListenersExceededWarning: that alone doesn't bring us closer to solving this 🤔 It's weird that the container shuts down immediately. Would you be able to open the AWS Console and see if there are logs for the container? You can open the 7777 ECS Cluster in "ECS" or follow this link: https://us-east-1.console.aws.amazon.com/ecs/home?region=eu-west-1#/clusters/7777Cluster/tasks (change the region in the URL before opening it). You could check the logs there, as well as the "Stopped reason".
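If clicking through the console is a hassle, here is a minimal sketch of pulling the stopped tasks and their stopped reason with the AWS SDK for JavaScript v3; the cluster name comes from the link above, while the region and credentials are assumptions to adjust:

```ts
import {
  ECSClient,
  ListTasksCommand,
  DescribeTasksCommand,
} from "@aws-sdk/client-ecs";

// Assumed region; use the one 7777 was installed in.
const ecs = new ECSClient({ region: "eu-west-1" });

async function printStoppedReasons(): Promise<void> {
  // Recently stopped tasks in the 7777 cluster.
  const { taskArns = [] } = await ecs.send(
    new ListTasksCommand({ cluster: "7777Cluster", desiredStatus: "STOPPED" })
  );
  if (taskArns.length === 0) {
    console.log("No recently stopped tasks found.");
    return;
  }
  const { tasks = [] } = await ecs.send(
    new DescribeTasksCommand({ cluster: "7777Cluster", tasks: taskArns })
  );
  for (const task of tasks) {
    // stoppedReason is the same "Stopped reason" shown in the console.
    console.log(task.taskArn, task.lastStatus, task.stoppedReason);
  }
}

printStoppedReasons().catch(console.error);
```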
I experience the same thing, and I found an environment variable set on the Fargate task (in the same console view as the logs mentioned above) that configures a two-hour TTL.
And indeed it quits exactly after 2 hours, printing a shutdown message in my console when the time is up.
For me this is totally fine, since it means I can't leave it running 24/7 by accident; having the option to adjust it to a different value would be nice, though.
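For anyone who wants to check whether their deployment has the same variable, a minimal sketch of dumping the task definition's environment with the AWS SDK for JavaScript v3; the task definition name "7777" and the region are assumptions, so use whatever the ECS console shows:

```ts
import { ECSClient, DescribeTaskDefinitionCommand } from "@aws-sdk/client-ecs";

// Assumed region and task definition family name.
const ecs = new ECSClient({ region: "eu-west-1" });

async function printTaskEnvironment(): Promise<void> {
  const { taskDefinition } = await ecs.send(
    new DescribeTaskDefinitionCommand({ taskDefinition: "7777" })
  );
  for (const container of taskDefinition?.containerDefinitions ?? []) {
    console.log(container.name);
    for (const variable of container.environment ?? []) {
      // The TTL-like variable discussed above should show up here,
      // reportedly with a value of 7200 (seconds, i.e. 2 hours).
      console.log(`  ${variable.name} = ${variable.value}`);
    }
  }
}

printTaskEnvironment().catch(console.error);
```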
Even with the latest version this is still an issue; it sometimes shuts down after as little as 10 minutes. Can this be solved, please? It's quite useless if I'm getting disconnected all the time.
Hey @slootjes, could you check the logs and info I described in my comment above? That might help us understand what is going wrong.
I just see a single log line, nothing else.
I'm sorry, I just want to make sure we are talking about the same thing. In your previous comment you talked about the 7777 CLI output, not the CloudWatch logs of the container running in AWS, correct? (I'm confused by the "as I stated before" and the fact that you shared a different log output.) But the new log line seems to be from CloudWatch, correct? So I assume the 7777 container does not log anything else at all? Not even a log showing that your connection was established?
That's why I'm asking you to check in the AWS console to see the status of the container. If the new log line you shared is actually coming from CloudWatch logs, it means the server runs correctly and it's not the server that is disconnecting; it is the CLI or the network in between. I'm sorry you're having such troubles. Could you, by any chance, have a firewall set up? (I've seen issues with firewalls on Windows, though TBH that wouldn't explain why it stops working at some point.) As a last resort, I would try reinstalling 7777 (remember to set the correct region/profile). (FYI, if things get too desperate and you stop using 7777, please send us an email so that we can refund the purchase.)
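For completeness, a minimal sketch of fetching the container's recent CloudWatch logs with the AWS SDK for JavaScript v3; the exact log group name used by 7777 is not stated in this thread, so the sketch simply searches for groups whose name contains "7777":

```ts
import {
  CloudWatchLogsClient,
  DescribeLogGroupsCommand,
  FilterLogEventsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

// Assumed region; use the one 7777 was installed in.
const logs = new CloudWatchLogsClient({ region: "eu-west-1" });

async function printRecentContainerLogs(): Promise<void> {
  // List all log groups and keep the ones that look like 7777's.
  const { logGroups = [] } = await logs.send(new DescribeLogGroupsCommand({}));
  const candidates = logGroups.filter((g) => g.logGroupName?.includes("7777"));

  for (const group of candidates) {
    console.log(`--- ${group.logGroupName} ---`);
    const { events = [] } = await logs.send(
      new FilterLogEventsCommand({
        logGroupName: group.logGroupName!,
        startTime: Date.now() - 2 * 60 * 60 * 1000, // last 2 hours
      })
    );
    for (const event of events) {
      console.log(new Date(event.timestamp ?? 0).toISOString(), event.message);
    }
  }
}

printRecentContainerLogs().catch(console.error);
```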
There seem to be 2 issues: the fixed 2-hour TTL shutting the tunnel down, and random disconnects that happen well before those 2 hours.
Since I'm running 7777 from Docker for security reasons, I cannot uninstall it. I'm running the latest published version; I updated it by pulling the latest image. I really like 7777 and would love to keep using it.
Ah, the --ttl option looks very useful, thank you! The random resets happened 3 times today, without reaching the 2 hours. I set up 7777 manually in my AWS account. I don't think that is the problem, as one time today I did hit the 7200-second timeout, so the setup itself is stable; it's just getting a connection error somewhere, randomly.
Can I help debug this? I really want to use 7777, but after the 4th random disconnect it's becoming hard to work with. Edit: it seems to happen when I'm idle for a while (though it can happen after as little as a few minutes), i.e. when I'm not using the tunnel.
@slootjes yes maybe a new issue could be good. When you mention "idle", do you mean the laptop/PC goes to sleep? 🤔 that could be it maybe? |
I mean that the tunnel isn't actively used, i.e. no queries are being run against the database. Maybe it needs some kind of keep-alive or similar? Possibly relevant (as stated before): I'm running 7777 from Docker.
@slootjes that is a very good point! I just released v1.1.14 and implemented an SSH keep-alive: every 5 seconds, a packet is sent to try and keep the SSH tunnel alive. Could you update and let me know if it helps?
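For reference, this is roughly how an SSH keep-alive is configured with the Node.js ssh2 library; a minimal sketch under the assumption that 7777 tunnels over ssh2 or something similar, with placeholder host and credentials:

```ts
import { readFileSync } from "node:fs";
import { Client } from "ssh2";

const conn = new Client();

conn
  .on("ready", () => {
    console.log("SSH tunnel established");
  })
  .on("error", (err) => {
    console.error("SSH connection error:", err.message);
  })
  .connect({
    host: "bastion.example.com",        // placeholder bastion host
    port: 22,
    username: "ec2-user",               // placeholder user
    privateKey: readFileSync("/path/to/key"),
    keepaliveInterval: 5_000,           // send a keep-alive packet every 5 seconds
    keepaliveCountMax: 3,               // give up after 3 unanswered keep-alives
  });
```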
@mnapoli I've pulled the updated container and will report back, thanks!
@mnapoli this seems to do the trick, it's stable now. Thanks a lot for your amazing support!
That's awesome news, thanks for testing!