This repository has been archived by the owner on May 30, 2024. It is now read-only.

.allFlagsState().allValues() causes GKE pods to crash #163

Closed
prayashm97 opened this issue Nov 19, 2019 · 2 comments

Comments

@prayashm97

Describe the bug
We deployed a change that uses the SDK to get the list of a user's feature flags, and it seems to kill our production GKE pods. We had been using .variation() with no issues, but when we started using allFlagsState() with allValues(), all hell broke loose.

Wanted to see if the LD crew had any ideas as to what caused this.

```ts
const isFeatureBlahFlagEnabled = async (accountId: string): Promise<boolean> => {
  const isBlahFlag = await ldClient.variation(
    'blah.flag',
    { key: accountId },
    false
  )
  return isBlahFlag
}

const getAllFlags = async (accountId: string): Promise<LDFlagSet> => {
  const user = {
    key: accountId,
  }
  const ldFlagsState = await ldClient.allFlagsState(user)
  // allValues() returns a plain object synchronously, so this await is a no-op
  const allFlags = await ldFlagsState.allValues()
  return allFlags
}

const getFlags = async (accountId: string): Promise<Flags> => {
  await flagClientReady() // calls LaunchDarkly.init and awaits waitForInitialization (sketched below)
  const isBlahEnabled = await isFeatureBlahFlagEnabled(accountId)
  const allFlags = await getAllFlags(accountId)
  return {
    isBlahEnabled,
    allFlags,
  }
}
```
  1. Inside getFlags, the only new line is const allFlags = await getAllFlags(accountId), which calls allFlagsState and allValues. isFeatureBlahFlagEnabled(), which calls .variation(), has been there and working for a couple of months now.
  2. A look at the pods and their CPU utilization:
    [screenshot: pod CPU utilization]
  3. Nothing in the logs gave us any clue as to what was happening; the upstream requests just seem to stop.
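
For reference, a minimal sketch of what the flagClientReady helper might look like, based only on the comment in the snippet above; the singleton pattern and the LD_SDK_KEY environment variable are assumptions, not code from this issue:

```ts
import * as LaunchDarkly from 'launchdarkly-node-server-sdk'

let ldClient: LaunchDarkly.LDClient | undefined

// Hypothetical helper; the real flagClientReady is not shown in this issue.
const flagClientReady = async (): Promise<LaunchDarkly.LDClient> => {
  if (!ldClient) {
    // Create the client once and reuse it; each init() opens its own
    // connection to LaunchDarkly.
    ldClient = LaunchDarkly.init(process.env.LD_SDK_KEY as string)
  }
  // Resolves once the client has received its initial flag data
  // (rejects if initialization fails).
  await ldClient.waitForInitialization()
  return ldClient
}
```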

To reproduce
Unfortunately I was not able to reproduce this on my local machine; I don't see any memory issues in the local environment.

Expected behaviour
Able to return the user's list of feature flags without killing the pod.

Logs
Nothing in the logs gave us any clue as to what was happening; the upstream requests just seem to stop.

SDK version
"launchdarkly-node-server-sdk": "^5.9.2"

Language version, developer tools
Node.js 12.13.0

OS/platform
On GKE.

Additional context

  1. Uses RedisFeatureStore() before initializing the client (see the sketch after this list).
  2. I saw a couple of “[LaunchDarkly] User key is blank. Flag evaluation will proceed, but the user will not be stored in LaunchDarkly” warnings, but this was before all hell broke loose.
  3. Just want some feedback on what's happening, whether anyone else has experienced this, and where to go from here.
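
Regarding item 1, here is a hedged sketch of wiring up the built-in RedisFeatureStore in SDK 5.x; the connection options, cache TTL, and key prefix are illustrative assumptions, not values from this deployment:

```ts
import * as LaunchDarkly from 'launchdarkly-node-server-sdk'

// Redis-backed feature store; the SDK reads flag data from Redis and
// keeps a short-lived in-memory cache in front of it.
const store = LaunchDarkly.RedisFeatureStore(
  { host: 'localhost', port: 6379 }, // options passed through to the redis client
  30,                                // in-memory cache TTL, in seconds
  'launchdarkly'                     // prefix for keys stored in Redis
)

const ldClient = LaunchDarkly.init(process.env.LD_SDK_KEY as string, {
  featureStore: store,
})
```

One relevant difference between the two call paths: variation() evaluates a single flag, while allFlagsState() has to read and evaluate every flag in the store on each call.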
@bwoskow-ld
Member

Hi @prayashm97,

To look into this issue further, we recommend that you file a support request by emailing [email protected]. This issue tracker is intended for SDK bug reports and feature requests, and this issue sounds more like a support request.

The support team has familiarized themselves with this GitHub issue and is ready to assist you once you email them.

Cheers,
Ben

@prayashm97
Author


Thanks Ben! Will do! 😄

LaunchDarklyCI pushed a commit that referenced this issue on Dec 11, 2019: minor fixes to config validation messages + add comment