
[Bug] Node Connection Issues (~600 nodes) in v0.23.0-alpha12 #1966

Closed
3 of 4 tasks
nadongjun opened this issue Jun 4, 2024 · 8 comments
Labels
bug Something isn't working

Comments

@nadongjun
Contributor

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

To verify whether issue #1656 persists in v0.23.0-alpha12, a connection test was conducted. When connecting 600 tailscale nodes to a headscale server running v0.23.0-alpha12, the following error occurs frequently and some nodes go offline after connecting. There was no CPU or memory overload.

  • Error log: ERR update not sent, context cancelled error="context deadline exceeded" node.id=xxxx

Expected Behavior

All 600 tailscale nodes should connect successfully to the headscale server and operate stably without error logs.

Steps To Reproduce

  1. Prepare seven AWS EC2 instances (type: t2.medium).
  2. Deploy the headscale server in a container on one instance.
  3. Deploy 100 tailscale containers on each of the remaining six instances (600 in total).
  4. Connect each tailscale container to the headscale server.
  5. Check the error logs and connection status.

Environment

- OS: Linux/Unix, Amazon Linux
- Headscale version: v0.23.0-alpha12
- Tailscale version: v1.66.4

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Anything else?

headscale_log_2024-06-03.txt
headscale_node_list.txt

Attached are the container logs of the headscale server under test and the node list from the attempt to connect approximately 600 nodes.

Based on these logs, it appears that issue #1656 persists in v0.23.0-alpha12.

nadongjun added the bug label Jun 4, 2024
@kradalby
Collaborator

kradalby commented Jun 5, 2024

Did you verify that there was a problem with the connections between nodes, or are you saying that you do not expect any errors?

kradalby added this to the v0.23.0 milestone Jun 5, 2024
@nadongjun
Contributor Author

Did you verify that there was a problem with the connections between nodes, or are you saying that you do not expect any errors?

I verified that there are two issues in the latest version:

(1) When 600 users join a single Headscale server, the error "ERR update not sent, context cancelled..." occurs in Headscale.

(2) Some of the joined 600 users are in an offline status when checked with headscale node list.

There are no issues with connections between users who are in an online status.

@kradalby
Collaborator

kradalby commented Jun 6, 2024

A t2.medium sounds a bit optimistic; it is unclear whether it is too small for headscale or for the test clients.

The error mentioned would mean one or more of the following (a minimal sketch of this send path follows the list):

  • The node has gone away and is not taking the update
  • The node is reconnecting and the update is being sent to the "closed" version
  • The node did not accept the message fast enough
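
A minimal, self-contained Go sketch of the kind of context-bounded send that can produce this error; the StateUpdate type, the per-node channel, and the 3-second timeout here are assumptions for illustration, not headscale's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // StateUpdate is a hypothetical stand-in for whatever payload headscale
    // pushes to a connected node.
    type StateUpdate struct{ Message string }

    // sendUpdate tries to hand an update to a node's channel but gives up once
    // the context deadline passes. If the node's receiver has gone away, is
    // reconnecting, or is too slow to drain the channel, the select falls
    // through to ctx.Done() and the caller sees "context deadline exceeded".
    func sendUpdate(ctx context.Context, ch chan<- StateUpdate, u StateUpdate) error {
        ctx, cancel := context.WithTimeout(ctx, 3*time.Second) // hypothetical timeout
        defer cancel()

        select {
        case ch <- u:
            return nil
        case <-ctx.Done():
            return fmt.Errorf("update not sent, context cancelled: %w", ctx.Err())
        }
    }

    func main() {
        // An unbuffered channel with no reader simulates a node that went away.
        ch := make(chan StateUpdate)
        err := sendUpdate(context.Background(), ch, StateUpdate{Message: "netmap"})
        fmt.Println(err) // update not sent, context cancelled: context deadline exceeded
    }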

The problem here might be either that the Headscale machine does not have enough resources to maintain all of the connections, or that the VMs running hundreds of clients do not have enough resources to run them all.

The machine used in #1656 is significantly larger; it is probably a bit overspecced for the new alpha.
Have you tried the same with 0.22.3 (latest stable)? It is a lot less efficient, so it might struggle more on a t2.medium.

kradalby removed this from the v0.23.0 milestone Jun 6, 2024
@jwischka

jwischka commented Jun 6, 2024

Another important question is whether you are running SQLite or Postgres. If SQLite, try enabling WAL, or switch to Postgres. This sounds like it could be a concurrency issue.
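
For reference, a minimal Go sketch of turning on WAL for an SQLite file; the mattn/go-sqlite3 driver and the ./db.sqlite path are assumptions, and headscale's own configuration may expose this differently:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver works
    )

    func main() {
        // Path to the SQLite database file; adjust to your deployment.
        db, err := sql.Open("sqlite3", "./db.sqlite")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Switch the journal mode to write-ahead logging so readers no longer
        // block writers; this helps when many nodes hit the database at once.
        var mode string
        if err := db.QueryRow("PRAGMA journal_mode=WAL;").Scan(&mode); err != nil {
            log.Fatal(err)
        }
        log.Printf("journal_mode is now %q", mode) // expected: "wal"
    }

The WAL journal mode is stored in the database file itself, so setting it once while the server is stopped carries over to later connections.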

@nadongjun
Contributor Author

I am currently using SQLite (without the WAL option). I will rerun the same tests on a higher-performance instance using Postgres.

@kradalby
Collaborator

Please try with WAL first.

@kradalby
Collaborator

WAL on by default for SQLite is coming in #1985.

I will close this issue as it is more of a performance/scaling thing than a bug. We have a couple of hidden tuning options, which together with WAL might be good content for a "performance" or "scaling" guide in the future.

@dustinblackman

dustinblackman commented Aug 12, 2024

Using Postgres, I'm experiencing the same issue here on alpha 12 in a network of ~30 nodes, with a handful of ephemeral nodes coming in and out through the day. I've seen both regular users on laptops and machines in the cloud able to connect to Headscale, but then unable to reach any other node in the network. Headscale outputs the same errors as stated at the beginning of the issue, though while digging through the new map session logic I'm unsure whether the error and the issue are related. If I were to guess, something is hanging in

case update, ok := <-m.ch:

I had the problem with a laptop connecting to a remote machine, so I ran tailscale down && tailscale up on the remote machine, which fixed the problem. I'm betting there is an issue with connection recovery in the notifier, either to the node or to the database. I'll dig through the logs later in the evening.
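
For context, a rough sketch of the receive-loop shape that line suggests; the mapSession type and its fields are hypothetical, not headscale's actual implementation:

    package main

    import (
        "context"
        "fmt"
    )

    // StateUpdate stands in for whatever payload is streamed to a node.
    type StateUpdate struct{ Message string }

    // mapSession is a hypothetical stand-in for a per-node streaming session.
    type mapSession struct {
        ctx context.Context
        ch  chan StateUpdate
    }

    // serve drains updates until the channel is closed (for example, when the
    // node reconnects and a new session replaces this one) or the session
    // context ends. Anything still sent to the old channel after that point is
    // never received, and the sender eventually times out as described above.
    func (m *mapSession) serve() {
        for {
            select {
            case update, ok := <-m.ch:
                if !ok {
                    return // channel closed: stale session, stop consuming
                }
                fmt.Println("streaming to node:", update.Message)
            case <-m.ctx.Done():
                return
            }
        }
    }

    func main() {
        m := &mapSession{ctx: context.Background(), ch: make(chan StateUpdate, 1)}
        m.ch <- StateUpdate{Message: "netmap"}
        close(m.ch) // simulate the old session being torn down during a reconnect
        m.serve()   // prints the buffered update, then returns on the closed channel
    }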
