Describe the bug
Peer IP validation doesn't account for the fact that a node's public IP is SNATed.
To Reproduce
We create a cluster of nodes in a GCP project, each with a public IP address assigned.
Then we nominate one of them to be the boot node and configure all the others to use it through its public IP address.
However, such a setup doesn't work: the peers refuse to accept the connection due to a mismatch between the signed IP and the expected IP.
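For reference, a minimal sketch of such a configuration in the node's config.json, assuming the first box from the example below (public IP 34.28.59.183) is the boot node; the peer key is a placeholder and 24567 is nearcore's default network port:

```json
{
  "network": {
    "boot_nodes": "ed25519:<boot node peer key>@34.28.59.183:24567"
  }
}
```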
Expected behavior
We should be able to use any of our GCP nodes as a boot node and add it to the list through its public IP, regardless of whether the peers run in the same VPC.
Work around
Boot nodes inside our GCP project have to be configured through their private IP addresses.
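A minimal sketch of the workaround, again with a placeholder peer key: the boot_nodes entry points at the boot node's private address instead of its public one:

```json
{
  "network": {
    "boot_nodes": "ed25519:<boot node peer key>@10.128.0.89:24567"
  }
}
```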
Version:
nearcore master branch
rust version - not applicable
docker - not applicable
mainnet/testnet/forknet
Additional context
The public IP is not owned by the box, but there’s a (S)NATing router in the way.
Outgoing traffic from the node uses its private IP, which is then translated by the router to the associated public IP. Corresponding incoming traffic to this public address is translated by the router back to the proper private address. This is normal network behaviour of any IPv4 SNAT router.
Note: In real scenarios, the same public IP address can be shared by many boxes behind the same router. In that case the router has to track the combination of source and destination addresses plus source and destination ports. More advanced routers track the connection state, since some protocols, like FTP, can open inbound connections, but this is irrelevant to the issue.
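As an illustration, this is roughly what the router has to track for a single outbound connection, using the first box's addresses from the example below; the remote peer 203.0.113.10 and the ephemeral port are hypothetical:

```
outbound: 10.128.0.89:50312  -> 203.0.113.10:24567   (as sent by the node)
          34.28.59.183:50312 -> 203.0.113.10:24567   (after SNAT by the router)
reply:    203.0.113.10:24567 -> 34.28.59.183:50312   (as it arrives at the router)
          203.0.113.10:24567 -> 10.128.0.89:50312    (after de-NAT, as seen by the node)
```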
For example, suppose we create two GCP nodes in the same VPC, both with a public IP assigned.
Say the first box has private IP 10.128.0.89 and public IP 34.28.59.183, and the second box has private IP 10.128.0.78 and public IP 34.170.29.107.
We open a terminal on the first box and run the following command:
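(The exact command is not preserved in this copy of the report; a plausible reconstruction is a continuous ping to the second box's public IP:)

```
ping 34.170.29.107
```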
Then we open a terminal on the second box and run the following command:
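(Again a plausible reconstruction: capture the incoming ICMP traffic on the second box with tcpdump:)

```
sudo tcpdump -n -i any icmp
```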
Then we see the following output on the second box:
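(Illustrative output for the reconstructed commands above; the timestamps, id, and seq values are hypothetical:)

```
13:05:42.153729 IP 34.28.59.183 > 10.128.0.78: ICMP echo request, id 1821, seq 1, length 64
13:05:43.155214 IP 34.28.59.183 > 10.128.0.78: ICMP echo request, id 1821, seq 2, length 64
```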
You can see that the public destination address 34.170.29.107 has been translated, or de-NATed if you wish, to the private address 10.128.0.78.