Client cannot handle DNS updates #144
Comments
You're totally right here. Hostname-to-IP resolution is handled by the Java mechanisms, which means that once a hostname is looked up, the address is cached for the duration of the JVM uptime. Custom DNS lookups are possible but require additional infrastructure, like providing a DNS address. Although I like the failover (and DNS lookup), I see that it has natural limits. It's true that these failovers can be handled on the DNS level. However, I also see the following points:
We should collect our thoughts and discuss these.
Just checked: the netty DNS resolver is part of netty 4.1/5.0, not 4.0. The JVM allows tweaking DNS caching via properties. Update: Using system properties works only if no security manager is in use. When a security manager is used, the following properties need to be set before the first TCP connection is initiated (otherwise they are ignored):
This means not caching the resolved SocketAddress.
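To make the property discussion above concrete, here is a minimal sketch of how the JVM's positive DNS cache could be lowered before the first connection. The property names `networkaddress.cache.ttl` (security property) and `sun.net.inetaddr.ttl` (system property) are standard JVM settings, but which one takes effect depends on whether a security manager is installed, as noted above.

```java
import java.security.Security;

public class DnsCacheConfig {

    // Must run before the first hostname lookup / TCP connection,
    // otherwise the JVM may already have cached the resolved address.
    public static void disablePositiveDnsCaching() {
        // Security property; consulted by the InetAddress cache.
        Security.setProperty("networkaddress.cache.ttl", "0");
        // System property fallback, used when the security property is not set.
        System.setProperty("sun.net.inetaddr.ttl", "0");
    }

    public static void main(String[] args) {
        disablePositiveDnsCaching();
        // ... create the Redis/Disque client and connect afterwards
    }
}
```

A TTL of 0 disables positive caching entirely; a small positive value is usually a safer middle ground than no caching at all.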
@pulse00 Could you give it a try by subclassing
@pulse00 Perhaps https://github.com/mp911de/spinach/wiki/SocketAddress-Supplier-API can help you. Just pushed the code which allows updating the cluster view; no more relying on DNS once the initial connect is done.
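For readers who cannot follow the linked wiki page, the general idea behind a socket-address supplier is sketched below. The class and method names are illustrative only and are not the actual spinach API; the point is simply that the hostname is re-resolved on every (re)connect instead of once at client creation.

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.function.Supplier;

// Illustrative only; not the actual spinach SocketAddress-Supplier API.
public class ReResolvingSocketAddressSupplier implements Supplier<SocketAddress> {

    private final String hostname;
    private final int port;

    public ReResolvingSocketAddressSupplier(String hostname, int port) {
        this.hostname = hostname;
        this.port = port;
    }

    @Override
    public SocketAddress get() {
        // new InetSocketAddress(...) resolves the hostname at construction time,
        // so every reconnect triggers a fresh lookup (subject to JVM DNS caching).
        return new InetSocketAddress(hostname, port);
    }
}
```

Wiring such a supplier into the client means a reconnect after a DNS failover picks up the new IP address, provided the JVM DNS cache TTL is low enough.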
Cool, thanks. We'll give that a try and let you know if it solved the issue.
To cope with the Java DNS resolution settings, I would change the code to not cache the resolved SocketAddress.
The SocketAddress obtained by getResolvedAddress is no longer cached. This is to allow dynamic DNS updates at the time of establishing a connection. This change introduces the possibility to change the Redis connection point details because the connection point details are obtained directly from the RedisURI argument.
The SocketAddress obtained by getResolvedAddress is no longer cached. This is to allow dynamic DNS updates at the time of establishing a connection. This change introduces the possibility to change the Disque connection point details because the connection point details are obtained directly from the DisqueURI argument. Reference: mp911de/lettuce#144
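As a rough illustration of the change described in the commit messages above (simplified, not the actual lettuce/spinach code), resolving the address from the connection point on every connect instead of caching it looks roughly like this; `ConnectionPoint` here is a stand-in for the host/port details carried by a `RedisURI`/`DisqueURI`:

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;

// Hypothetical connection point carrying host and port, standing in for a RedisURI/DisqueURI.
class ConnectionPoint {
    final String host;
    final int port;

    ConnectionPoint(String host, int port) {
        this.host = host;
        this.port = port;
    }
}

class ChannelInitializerSketch {

    private final ConnectionPoint connectionPoint;

    ChannelInitializerSketch(ConnectionPoint connectionPoint) {
        this.connectionPoint = connectionPoint;
    }

    // Called for the initial connect and for every reconnect attempt.
    // The result is intentionally not stored in a field, so a DNS change
    // between reconnects is picked up here.
    SocketAddress getResolvedAddress() {
        return new InetSocketAddress(connectionPoint.host, connectionPoint.port);
    }
}
```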
@pulse00 any update on this topic from your side?
@mp911de sorry for the delay - we'll be testing failover scenarios in our infrastructure over the next couple of days; I hope I can give you some feedback then.
np, thx for the update.
Hi Marc, we are taking up this issue again. Since spring-data-redis still relies on lettuce 3.3, we cannot switch to the latest release of lettuce 4.0 including your change. Hence we subclassed it ourselves. I'll keep you posted as soon as we have conducted our outage test.
The fix is (will be) part of the 3.4 release (see 7a7cb47). Give Spring Data Redis with the 3.4-SNAPSHOT build a try.
Thanks Marc, we will do that.
I was planning to use 3.4.1, but just noticed this "Lettuce 3.4"-labeled issue was still open. I think this issue is already fixed, merged, and part of 3.4.1.Final.
Thanks for the updates, and I'm looking forward to having 3.4.2 😄
We're running a Disque cluster on AWS. Today we've tested a complete failure of a single node. The setup looks like this:

- node1.example.com running one Disque node
- node2.example.com running another Disque node
- client.example.com running the client (in our case a spinach client)

Both nodes have been clustered using `disque cluster meet ...`.

Now the outage test looked like this:

1. node1.example.com
2. node1.example.com (gets a new IP address on AWS)
3. `disque cluster meet node1.example.com 7711` on node2.example.com
4. node2.example.com
5. node2.example.com (gets a new IP address on AWS)
6. `disque cluster meet node2.example.com 7711` on node1.example.com

From the Disque perspective this test was successful; however, spinach could not handle the DNS change:
I'm creating this issue in this repository because I think the underlying `ConnectionWatchdog` handles the failover scenario.