
ldrelay does not know about unintentional data change in redis backend. #56

Closed
lukasmrtvy opened this issue Feb 28, 2019 · 1 comment

lukasmrtvy commented Feb 28, 2019

We are currently using Azure Redis in the Basic tier (no data persistence) for DEV purposes.
Data stored in Redis without persistence (persistence is an enterprise feature available in a higher tier) can be lost due to infrastructure changes:

Redis Data Persistence: The Premium tier allows you to persist the cache data in an Azure Storage account. In a Basic/Standard cache, all the data is stored only in memory. If there are underlying infrastructure issues there can be potential data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss. Azure Cache for Redis offers RDB and AOF (coming soon) options in Redis persistence. For more information, see How to configure persistence for a Premium Azure Cache for Redis.

When this happens, the feature flags are lost and ldrelay knows nothing about the change -> this results in the errors described in #51.

Of course a restart helps and the feature flags are loaded again, but this is just a "quickfix".
Is it possible to have ldrelay check the data in Redis constantly? Or should I consider the enterprise tier of Redis (with persistent data) as the better approach? (It is 10x more expensive than the Basic tier.)

Thanks

bwoskow-ld commented

Hi lukasmrtvy,

First of all, apologies for the delayed response to this issue. It slipped past us somehow.

We understand your request to mean that, upon LD Relay querying Redis for a feature flag and detecting that the flag isn't found, Relay should reconnect to LaunchDarkly to reacquire the latest flag state. This is a non-trivial change to make because it is quite different from how the Relay is architected.

As you pointed out, currently the only way to stop errors like this when using a non-persistent Redis cache would be to restart LD Relay. This is because LD Relay establishes its streaming connection with LaunchDarkly when initializing and at that time it reacquires the complete flag state. Outside of initializing the streaming connection, LD Relay typically only receives events representing changes in flag states.
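Given that startup-only resync behavior, one stopgap (outside of Relay itself) would be an external watchdog that notices when the LaunchDarkly data disappears from Redis and forces a restart. Below is a minimal sketch, assuming the go-redis client and that the flag data lives under a "launchdarkly:features" key; that key name, the address, and the polling interval are all assumptions and should be checked against your Relay/SDK version and prefix configuration:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/go-redis/redis/v8"
)

func main() {
	// Hypothetical watchdog: periodically check whether the key the
	// LaunchDarkly store keeps its flags under still exists. The key name
	// ("launchdarkly:features") is an assumption; adjust it to the prefix
	// your Relay/SDK is configured with.
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ctx := context.Background()

	for {
		n, err := client.Exists(ctx, "launchdarkly:features").Result()
		if err != nil {
			log.Printf("redis check failed: %v", err)
		} else if n == 0 {
			// The flag data is gone (e.g. Azure recycled the non-persistent
			// cache). Exit non-zero so a supervisor (systemd, Kubernetes,
			// Docker restart policies, ...) can restart LD Relay, which will
			// repopulate Redis from its streaming connection on startup.
			log.Println("LaunchDarkly flag data missing from Redis")
			os.Exit(1)
		}
		time.Sleep(30 * time.Second)
	}
}
```

How the exit actually triggers a Relay restart depends on how you run the two processes; in Kubernetes, for example, the same check could back a liveness probe on the Relay pod instead.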

Based on the current architecture, our suggestion is to either use a persistent data store, or not connect LD Relay to Redis at all and use the in-memory store within LD Relay. The first option would obviously be preferable, as multiple LD Relay instances could share persistent state; however, it would cost more money. The second option would be less costly, but would mean that each of your LD Relay instances (I'm not sure how many you have) maintains its own in-memory data store.
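For illustration, here is roughly what the two options look like in ld-relay.conf terms; the second option is simply the absence of a [redis] section. The section and field names below follow the Relay README of that era and should be treated as assumptions (verify them against the version you run), and the key and host values are placeholders:

```
[main]
streamUri = "https://stream.launchdarkly.com"
baseUri = "https://app.launchdarkly.com"

[environment "production"]
sdkKey = "sdk-00000000-0000-0000-0000-000000000000"

# Option 1: shared persistent store -- keep this section and point it at a
# Redis tier that has persistence enabled.
# [redis]
# host = "your-cache.redis.cache.windows.net"
# port = 6379

# Option 2: in-memory store -- omit the [redis] section entirely.
```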

LaunchDarklyCI pushed a commit that referenced this issue on Nov 7, 2019: [ch21033] Separate private and public metrics (…and-public-metric-views)