
[redis-cluster] All nodes turned to master #5431

Closed
NadavkOptimalQ opened this issue Feb 9, 2021 · 14 comments
Labels
stale 15 days without activity

Comments

@NadavkOptimalQ

Which chart:
redis-cluster-4.2.3

Describe the bug
We have a cluster of 12 nodes - 6 masters and 6 slaves.
We had some errors in our system, and when I looked at Redis I saw that some data was missing.
I wanted to compare each master with its slave to see whether the data exists on either of them, but all the nodes are masters.
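For context, I checked the role of each node with something along these lines (pod names assumed from the chart's default naming with fullnameOverride=redis-cluster, and no auth since usePassword is false):

for i in $(seq 0 11); do
  echo -n "redis-cluster-$i: "
  kubectl exec redis-cluster-$i -- redis-cli role | head -1   # first line of ROLE is "master" or "slave"
done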

To Reproduce
I'm not really sure how to reproduce this. Some of our pods were restarted a few times, so it might be related to that.

Expected behavior
The cluster should have 6 masters and 6 slaves.

Version of Helm and Kubernetes:

  • Output of helm version:
version.BuildInfo{Version:"v3.4.2", GitCommit:"23dd3af5e19a02d4f4baa5b2f242645a1a3af629", GitTreeState:"dirty", GoVersion:"go1.15.5"}
  • Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-19T08:38:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"41d24ec9c736cf0bdb0de3549d30c676e98eebaf", GitTreeState:"clean", BuildDate:"2021-01-18T09:12:27Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Additional context
Logs from a pod that was restarted:

redis-cluster 12:01:32.18
redis-cluster 12:01:32.18 Welcome to the Bitnami redis-cluster container
redis-cluster 12:01:32.18 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis-cluster
redis-cluster 12:01:32.19 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis-cluster/issues
redis-cluster 12:01:32.19
redis-cluster 12:01:32.19 INFO  ==> ** Starting Redis setup **
redis-cluster 12:01:32.20 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
redis-cluster 12:01:32.21 INFO  ==> Initializing Redis
redis-cluster 12:01:32.22 INFO  ==> Setting Redis config file
Changing old IP 10.240.3.94 by the new one 10.240.3.94
Changing old IP 10.240.2.72 by the new one 10.240.2.72
Changing old IP 10.240.1.168 by the new one 10.240.1.168
Changing old IP 10.240.3.235 by the new one 10.240.3.235
Changing old IP 10.240.2.190 by the new one 10.240.2.190
Changing old IP 10.240.0.59 by the new one 10.240.0.59
Changing old IP 10.240.1.130 by the new one 10.240.1.130
Changing old IP 10.240.3.103 by the new one 10.240.3.103
Changing old IP 10.240.1.246 by the new one 10.240.1.246
Changing old IP 10.240.3.243 by the new one 10.240.3.243
Changing old IP 10.240.1.115 by the new one 10.240.1.115
Changing old IP 10.240.2.230 by the new one 10.240.2.230
redis-cluster 12:01:32.40 INFO  ==> ** Redis setup finished! **

1:C 09 Feb 2021 12:01:32.424 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 09 Feb 2021 12:01:32.424 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 09 Feb 2021 12:01:32.424 # Configuration loaded
1:M 09 Feb 2021 12:01:32.425 * Node configuration loaded, I'm 5f5c1430794e69ffb15938051e983f6784f0fbd3
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 6.0.9 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in cluster mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

1:M 09 Feb 2021 12:01:32.425 # Server initialized
1:M 09 Feb 2021 12:01:32.425 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
1:M 09 Feb 2021 12:01:32.426 * Ready to accept connections
1:M 09 Feb 2021 12:01:34.427 # Cluster state changed: ok

Output of CLUSTER NODES:

73fe721fbf1b2eb8e52106df07b9e29f324fa3b3 10.240.3.243:6379@16379 master - 0 1612877027000 10 connected
5f5c1430794e69ffb15938051e983f6784f0fbd3 10.240.3.94:6379@16379 master - 0 1612877025000 1 connected 0-2730
89f1ded99add132ec3289c233d9a1574c9b5d635 10.240.2.190:6379@16379 master - 0 1612877029150 5 connected 10923-13652
ee8127d17e0f94998dc363f6266a4fc30cedc2a4 10.240.1.168:6379@16379 master - 0 1612877026000 3 connected 5461-8191
cb9d641070653ff63c1aff00b12e9500daf51d58 10.240.1.130:6379@16379 master - 0 1612877025000 7 connected
567cd56df4adf195bbb98f1782b2ec7bbb868010 10.240.2.230:6379@16379 master - 0 1612877028146 12 connected
67f1375b9c702393ce56da176590feb98e9b34e3 10.240.1.246:6379@16379 master - 0 1612877026000 9 connected
c03b88d982895675cda415fe513759d81d32d98f 10.240.3.235:6379@16379 master - 0 1612877026000 4 connected 8192-10922
b4f765329b1e9f48e1c3369a7fae8ed8967b879f 10.240.0.59:6379@16379 master - 0 1612877026000 6 connected 13653-16383
5dbb21eec2023ca22129fc7875b306c57e88a02a 10.240.3.103:6379@16379 master - 0 1612877026643 8 connected
441685af873e4ccb3ca983345d4439ff089d5c99 10.240.2.72:6379@16379 master - 0 1612877027143 2 connected 2731-5460
715869caa51f8e80ec6ccee7f6d8f0cd90ee277b 10.240.1.115:6379@16379 myself,master - 0 1612877026000 11 connected
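A quick tally of the flags column confirms there are no replicas left; every node reports master and only six of them hold hash slots (pod name illustrative):

kubectl exec redis-cluster-0 -- redis-cli cluster nodes | awk '{print $3}' | sort | uniq -c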

Logs from a pod that was not restarted:


1:C 12 Jan 2021 08:56:40.059 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 12 Jan 2021 08:56:40.059 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 12 Jan 2021 08:56:40.059 # Configuration loaded
1:M 12 Jan 2021 08:56:40.060 * Node configuration loaded, I'm 715869caa51f8e80ec6ccee7f6d8f0cd90ee277b
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 6.0.9 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in cluster mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

1:M 12 Jan 2021 08:56:40.061 # Server initialized
1:M 12 Jan 2021 08:56:40.061 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
1:M 12 Jan 2021 08:56:40.061 * Ready to accept connections
1:M 12 Jan 2021 08:56:40.354 # Address updated for node 441685af873e4ccb3ca983345d4439ff089d5c99, now 10.240.2.110:6379
1:M 12 Jan 2021 08:56:42.129 # Cluster state changed: ok
1:M 12 Jan 2021 08:57:05.698 * FAIL message received from cb9d641070653ff63c1aff00b12e9500daf51d58 about 73fe721fbf1b2eb8e52106df07b9e29f324fa3b3
1:M 12 Jan 2021 08:58:03.224 # Address updated for node 73fe721fbf1b2eb8e52106df07b9e29f324fa3b3, now 10.240.3.243:6379
1:M 12 Jan 2021 08:58:03.270 * Clear FAIL state for node 73fe721fbf1b2eb8e52106df07b9e29f324fa3b3: master without slots is reachable again.
1:M 12 Jan 2021 08:58:23.386 # Address updated for node 67f1375b9c702393ce56da176590feb98e9b34e3, now 10.240.1.246:6379
1:M 12 Jan 2021 08:58:44.230 * FAIL message received from 441685af873e4ccb3ca983345d4439ff089d5c99 about 5dbb21eec2023ca22129fc7875b306c57e88a02a
1:M 12 Jan 2021 08:59:17.101 # Address updated for node 5dbb21eec2023ca22129fc7875b306c57e88a02a, now 10.240.3.103:6379
1:M 12 Jan 2021 08:59:17.173 * Clear FAIL state for node 5dbb21eec2023ca22129fc7875b306c57e88a02a: master without slots is reachable again.
1:M 12 Jan 2021 08:59:40.357 * FAIL message received from 67f1375b9c702393ce56da176590feb98e9b34e3 about cb9d641070653ff63c1aff00b12e9500daf51d58
1:M 12 Jan 2021 08:59:41.047 # Address updated for node cb9d641070653ff63c1aff00b12e9500daf51d58, now 10.240.1.130:6379
1:M 12 Jan 2021 08:59:41.121 * Clear FAIL state for node cb9d641070653ff63c1aff00b12e9500daf51d58: master without slots is reachable again.
1:M 12 Jan 2021 09:00:02.687 * Marking node b4f765329b1e9f48e1c3369a7fae8ed8967b879f as failing (quorum reached).
1:M 12 Jan 2021 09:00:02.687 # Cluster state changed: fail
1:M 12 Jan 2021 09:00:08.595 # Address updated for node b4f765329b1e9f48e1c3369a7fae8ed8967b879f, now 10.240.0.59:6379
1:M 12 Jan 2021 09:00:28.537 * FAIL message received from 67f1375b9c702393ce56da176590feb98e9b34e3 about c03b88d982895675cda415fe513759d81d32d98f
1:M 12 Jan 2021 09:00:35.789 * Clear FAIL state for node b4f765329b1e9f48e1c3369a7fae8ed8967b879f: is reachable again and nobody is serving its slots after some time.
1:M 12 Jan 2021 09:01:33.540 # Address updated for node c03b88d982895675cda415fe513759d81d32d98f, now 10.240.3.235:6379
1:M 12 Jan 2021 09:01:33.568 * Clear FAIL state for node c03b88d982895675cda415fe513759d81d32d98f: is reachable again and nobody is serving its slots after some time.
1:M 12 Jan 2021 09:01:33.568 # Cluster state changed: ok
1:M 12 Jan 2021 09:01:52.878 * FAIL message received from b4f765329b1e9f48e1c3369a7fae8ed8967b879f about ee8127d17e0f94998dc363f6266a4fc30cedc2a4
1:M 12 Jan 2021 09:01:52.878 # Cluster state changed: fail
1:M 12 Jan 2021 09:01:53.941 # Address updated for node ee8127d17e0f94998dc363f6266a4fc30cedc2a4, now 10.240.1.168:6379
1:M 12 Jan 2021 09:02:18.862 * FAIL message received from 89f1ded99add132ec3289c233d9a1574c9b5d635 about 441685af873e4ccb3ca983345d4439ff089d5c99
1:M 12 Jan 2021 09:02:21.179 # Address updated for node 441685af873e4ccb3ca983345d4439ff089d5c99, now 10.240.2.72:6379
1:M 12 Jan 2021 09:02:28.631 * Clear FAIL state for node ee8127d17e0f94998dc363f6266a4fc30cedc2a4: is reachable again and nobody is serving its slots after some time.
1:M 12 Jan 2021 09:02:35.793 # Address updated for node 5f5c1430794e69ffb15938051e983f6784f0fbd3, now 10.240.3.94:6379
1:M 12 Jan 2021 09:02:51.695 * Clear FAIL state for node 441685af873e4ccb3ca983345d4439ff089d5c99: is reachable again and nobody is serving its slots after some time.
1:M 12 Jan 2021 09:02:51.695 # Cluster state changed: ok
@javsalgar
Contributor

Hi,

Thank you for using Bitnami. Could you provide the values that you used for deploying the chart?

@NadavkOptimalQ
Author

## String to partially override common.names.fullname template (will maintain the release name)
##
nameOverride:

## String to fully override common.names.fullname template
##
fullnameOverride: redis-cluster

## Redis Cluster settings
##
cluster:
  ## Enable the creation of a Job that will execute the proper command to create the Redis Cluster.
  ##
  init: true
  ## Number of Redis nodes to be deployed
  ##
  nodes: 12
  ## Parameter to be passed as --cluster-replicas to the redis-cli --cluster create
  ## 1 means that we want a replica for every master created
  ##
  replicas: 1

  ## This section allows to update the Redis cluster nodes.
  ##
  update:
    ## Setting this to true a hook will add nodes to the Redis cluster after the upgrade.
    ## currentNumberOfNodes is required
    ##
    addNodes: false
    ## Set to the number of nodes that are already deployed
    ##
    currentNumberOfNodes: 12
    ## When using external access set the new externalIPs here as an array
    ##
    newExternalIPs: []

redis:
  ## Custom command to override image cmd
  ##
  command: []

  ## Custom args for the custom command:
  ##
  args: []

  # Whether to use AOF Persistence mode or not
  # It is strongly recommended to use this type when dealing with clusters
  #
  # ref: https://redis.io/topics/persistence#append-only-file
  # ref: https://redis.io/topics/cluster-tutorial#creating-and-using-a-redis-cluster
  useAOFPersistence: "no"

  # Redis port
  port: 6379

  ## Additional Redis configuration for the nodes
  ## ref: https://redis.io/topics/config
  ##
  configmap: |
    appendonly no
    protected-mode no
    repl-diskless-sync yes
    save ""   


  ## Redis resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: "6000Mi"
      cpu: "300m"
    limits:
      memory: "7000Mi"
      cpu: "900m"

  ## Node labels for Redis pods assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: 
    nodepooltype: highmem

networkPolicy:
  ## Specifies whether a NetworkPolicy should be created
  ##
  enabled: false

serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  ##
  name:

rbac:
  ## Specifies whether RBAC resources should be created
  ##
  create: false

  role:
    ## Rules to create. It follows the role specification
    # rules:
    #  - apiGroups:
    #    - extensions
    #    resources:
    #      - podsecuritypolicies
    #    verbs:
    #      - use
    #    resourceNames:
    #      - gce.unprivileged
    rules: []

## Redis pod Security Context
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
  ## sysctl settings
  ##
  ## Uncomment the setting below to increase the net.core.somaxconn value
  ##
  sysctls:
  # - name: net.core.somaxconn
  #   value: "10000"

## Containers Security Context
##
containerSecurityContext:
  enabled: true
  runAsUser: 1001
  ## sysctl settings
  ##
  ## Uncomment the setting below to increase the net.core.somaxconn value
  ##
  sysctls:
  # - name: net.core.somaxconn
  #   value: "10000"

## Use password authentication
##
usePassword: false
## Redis password
## Defaults to a random 10-character alphanumeric string if not set and usePassword is true
## ref: https://github.com/bitnami/bitnami-docker-redis#setting-the-server-password-on-first-run
##
 
## Labels for all the deployed objects
##
commonLabels:
  name: redis-cluster
  env: porky
  app: redis-cluster

## Redis Service properties for standalone mode.
##
service:
  port: 6379

  ## Provide any additional annotations which may be required. This can be used to
  ## set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  
  labels: 
    name: redis-cluster
    env: porky
    app: redis-cluster

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  
## Update strategy, can be set to RollingUpdate or onDelete by default.
## https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
##
statefulset:
  updateStrategy: RollingUpdate
  ## Partition update strategy
  ## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions

## Prometheus Exporter / Metrics
##
metrics:
  enabled: false

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}

  ## Extra arguments for Metrics exporter, for example:
  ## extraArgs:
  ##   check-keys: myKey,myOtherKey
  ##
  extraArgs: {}

  ## Metrics exporter pod Annotation and Labels
  ##
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9121"
  podLabels: {}

  # Enable this if you're using https://github.com/coreos/prometheus-operator
  serviceMonitor:
    enabled: true
    ## Specify a namespace if needed
    ##
    #namespace:

  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    enabled: false
    additionalLabels: {}
    namespace: ""
    rules:
    # These are just examples rules, please adapt them to your needs.
    # Make sure to constraint the rules to the current postgresql service.
     - alert: RedisDown
       expr: redis_up{service="{{ template "common.names.fullname" . }}-metrics"} == 0
       for: 2m
       labels:
         severity: error
       annotations:
         summary: Redis instance {{ "{{ $instance }}" }} down
         description: Redis instance {{ "{{ $instance }}" }} is down.
     - alert: RedisMemoryHigh
       expr: >
          redis_memory_used_bytes{service="{{ template "common.names.fullname" . }}-metrics"} * 100
          /
          redis_memory_max_bytes{service="{{ template "common.names.fullname" . }}-metrics"}
          > 90
       for: 2m
       labels:
         severity: error
       annotations:
         summary: Redis instance {{ "{{ $instance }}" }} is using too much memory
         description: Redis instance {{ "{{ $instance }}" }} is using {{ "{{ $value }}" }}% of its available memory.
     - alert: RedisKeyEviction
       expr: increase(redis_evicted_keys_total{service="{{ template "common.names.fullname" . }}-metrics"}[5m]) > 0
       for: 1s
       labels:
         severity: error
       annotations:
         summary: Redis instance {{ "{{ $instance }}" }} has evicted keys
         description: Redis instance {{ "{{ $instance }}" }} has evicted {{ "{{ $value }}" }} keys in the last 5 minutes.

  ## Metrics exporter pod priorityClassName
  # priorityClassName: {}
  service:
    type: ClusterIP
    ## Use serviceLoadBalancerIP to request a specific static IP,
    ## otherwise leave blank
    ##
    loadBalancerIP:
    annotations: {}
    labels: {}


## Sysctl InitContainer
## used to perform sysctl operation to modify Kernel settings (needed sometimes to avoid warnings)
##
sysctlImage:
  enabled: false
  command: []
  registry: docker.io
  repository: bitnami/minideb
  tag: buster
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  pullSecrets:
  #   - myRegistryKeySecretName
  mountHostSys: false
  resources: {}
  # resources:
  #   requests:
  #     memory: 128Mi
  #     cpu: 100m

@javsalgar
Contributor

Hi,

We've been investigating this issue in the past, and we added several fixes to avoid this situation. Could you confirm that the issue did not occur at initial startup? I suspect it could be related to the node restarts.

@NadavkOptimalQ
Author

Yes, the cluster was up for 27 days and we didn't come across this issue before.

Could this be related to the Redis version? The cluster was using Redis 6.0.9.
I recreated the cluster yesterday (I had to, it's our production cluster) and specified version 6.0.10. I haven't had any issues so far, but it might come up again.
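The version can presumably be pinned through the chart's image.tag value, e.g. (the exact Bitnami tag suffix may differ):

helm upgrade redis-cluster bitnami/redis-cluster --reuse-values --set image.tag=6.0.10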

@javsalgar
Contributor

Please let us know if the incident happens again, along with the reason for the shutdown. We want to avoid issues like this and find out the root cause.

@NadavkOptimalQ
Author

Hi
Just got this to happen again in our staging environment.
After some issues, I recreated the cluster from scratch.

Then I ran this:
helm upgrade --timeout 600s --set "fullnameOverride=redis-cluster,usePassword=false,cluster.nodes=6,cluster.update.addNodes=true,cluster.update.currentNumberOfNodes=8" bitnami/redis-cluster

This added two new nodes, then some of the other nodes in the cluster started restarting and they all changed to master :(

I'm not sure this is 100% reproducible, but if you can't reproduce it I'll be happy to try to get it to happen again.

@javsalgar
Contributor

Hi,

We are unable to reproduce it. Could it be that something crashed during the initial startup? Do you see any crashes before the nodes turned to master?
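For example, something along these lines should show the restart counts and the last logs of a crashed container (label selector assumed from the chart's default labels):

kubectl get pods -l app.kubernetes.io/name=redis-cluster
kubectl logs <pod-name> --previous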

@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Apr 14, 2021
@OpsPita

OpsPita commented Apr 16, 2021

Hey, I also got this problem in my staging environment.

@github-actions github-actions bot removed the stale 15 days without activity label Apr 17, 2021
@javsalgar
Contributor

javsalgar commented Apr 19, 2021

Hi,

Could you provide more details about the chart version you are using and the Kubernetes platform? Do the logs show anything meaningful?
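For reference, the deployed chart version and platform details can be pulled with something like:

helm list -n <namespace>
kubectl version
kubectl get nodes -o wide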

@github-actions

github-actions bot commented May 5, 2021

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label May 5, 2021
@github-actions

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@naveedsyed1746

[image attachment]
All nodes are turning to master in prod.

@juan131
Contributor

juan131 commented Dec 20, 2021

@naveedsyed1746 Let's continue the conversation at #5418
