Describe the bug
We deployed a change that uses the SDK to get the list of a user's feature flags, and it seems to kill our production GKE pods. We had been using `.variation()` with no issues, but when we started using `allFlagsState()` with `allValues()`, all hell broke loose.
Wanted to see if the LD crew has any ideas as to what caused this.
```ts
const isFeatureBlahFlagEnabled = async (accountId: string): Promise<boolean> => {
  const isBlahFlag = await ldClient.variation('blah.flag', { key: accountId }, false)
  return isBlahFlag
}

const getAllFlags = async (accountId: string): Promise<LDFlagSet> => {
  const user = {
    key: accountId,
  }
  const ldFlagsState = await ldClient.allFlagsState(user)
  const allFlags = await ldFlagsState.allValues()
  return allFlags
}

const getFlags = async (accountId: string): Promise<Flags> => {
  await flagClientReady() // here we use LaunchDarkly.init and await waitForInitialization
  const isBlahEnabled = await isFeatureBlahFlagEnabled(accountId)
  const allFlags = await getAllFlags(accountId)
  return {
    isBlahEnabled,
    allFlags,
  }
}
```
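For reference, the SDK's TypeScript definitions declare `allValues()` on `LDFlagsState` as a synchronous method returning `LDFlagSet`, so the `await` on it above should be a harmless no-op. A minimal sketch of the same call pattern, assuming the same initialized `ldClient` and user shape as in the snippet above:

```ts
// Sketch, assuming an ldClient that has already been initialized as above.
// allFlagsState() resolves to an LDFlagsState; allValues() is synchronous,
// so no await is needed on it.
const getAllFlagsSketch = async (accountId: string): Promise<LDFlagSet> => {
  const state = await ldClient.allFlagsState({ key: accountId })
  return state.allValues()
}
```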
Inside `getFlags`, the only new line is `const allFlags = await getAllFlags(accountId)`, which calls `allFlagsState` and `allValues`. `isFeatureBlahFlagEnabled()`, which calls `.variation()`, has been there and working for a couple of months now.
[Screenshot: a look at the pods and their CPU utilization.]
To reproduce
Unfortunately I was not able to reproduce this on my local machine; I don't see any memory issues in the local environment.
Expected behaviour
Able to return the user's list of feature flags without killing the pod.
Logs
Nothing in the logs gave us any clue as to what was happening; the upstream requests seem to just stop.
SDK version
"launchdarkly-node-server-sdk": "^5.9.2"
Language version, developer tools
Node.js 12.13.0
OS/platform
On GKE.
Additional context
Uses `RedisFeatureStore()` before initializing the client (a minimal sketch of this setup follows below).
I saw a couple of "[LaunchDarkly] User key is blank. Flag evaluation will proceed, but the user will not be stored in LaunchDarkly" warnings, but those appeared before all hell broke loose.
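For context, here is a minimal sketch of the setup described above, assuming the Redis store bundled with the v5 SDK; the SDK key, Redis connection options, and the `flagClientReady` wrapper are placeholders, not our real configuration:

```ts
import * as LaunchDarkly from 'launchdarkly-node-server-sdk'

// RedisFeatureStore ships with the v5 SDK; the options here are placeholders.
const store = LaunchDarkly.RedisFeatureStore(
  { host: 'localhost', port: 6379 }, // Redis connection options (placeholder)
  30                                 // cache TTL in seconds (placeholder)
)

const ldClient = LaunchDarkly.init('sdk-key-placeholder', { featureStore: store })

// Roughly what flagClientReady() does in the snippet above.
const flagClientReady = async (): Promise<void> => {
  await ldClient.waitForInitialization()
}
```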
Just want some feedback on what's happening, whether anyone else has experienced this, and where to go from here.
To look into this issue further, we recommend that you file a support request by emailing [email protected]. This issue tracker is intended for SDK bug reports and feature requests, and this issue sounds more like a support request.
The support team has familiarized themselves with this GitHub issue and is ready to assist you once you email them.