Custom config may reset when operator updates dynamic flags #245

Closed
ianhhhhhhhhe opened this issue Aug 16, 2023 · 1 comment

Comments

ianhhhhhhhhe commented Aug 16, 2023

Describe the bug (required)

The service was deployed via the operator with the following config:

nebula:
  logRotate:
    rotate: 5
    size: 200M
  graphd:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: graphd
              topologyKey: kubernetes.io/hostname
            weight: 100
    config:
      auth_type: password
      enable_authorize: "true"
      max_sessions_per_ip_per_user: "300"
      minloglevel: "2"
      session_idle_timeout_secs: "120"
      system_memory_high_watermark_ratio: "0.9"
      v: "2"
    env:
      - name: TZ
        value: Asia/Shanghai
    image: vesoft/nebula-graphd
    logStorage: 50Gi
    nodeSelector:
      nebula: ""
    replicas: 3
    resources:
      limits:
        cpu: "8"
        memory: 16Gi
      requests:
        cpu: "8"
        memory: 16Gi
    sidecarContainers:
      - command:
          - sh
          - -ce
          - |-
            version=3.5.0
            wget -O /usr/local/bin/nebula-console https://ghproxy.com/github.com/vesoft-inc/nebula-console/releases/download/v$version/nebula-console-linux-amd64-v$version
            chmod a+x /usr/local/bin/nebula-console
            while true; do find logs/ -size +1048576k -type f -delete;sleep 1h; done
        image: alpine:edge
        name: nebula-console
        volumeMounts:
          - mountPath: /logs
            name: graphd-log
            subPath: logs
  imagePullPolicy: IfNotPresent
  metad:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: metad
              topologyKey: kubernetes.io/hostname
            weight: 100
    config:
      minloglevel: "2"
      v: "2"
    dataStorage: 100Gi
    env:
      - name: TZ
        value: Asia/Shanghai
    image: vesoft/nebula-metad
    logStorage: 50Gi
    nodeSelector:
      nebula: ""
    replicas: 3
    resources:
      limits:
        cpu: "8"
        memory: 16Gi
      requests:
        cpu: "8"
        memory: 16Gi
    sidecarContainers:
      - command:
          - sh
          - -ce
          - while true; do find logs/ -size +1048576k -type f -delete;sleep 1h; done
        image: alpine:edge
        name: clean-logs
        volumeMounts:
          - mountPath: /logs
            name: metad-log
            subPath: logs
  schedulerName: default-scheduler # default-scheduler, nebula-scheduler
  storageClassName: local-path-retain
  storaged:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: storaged
              topologyKey: kubernetes.io/hostname
            weight: 100
    config:
      minloglevel: "2"
      v: "2"
    dataStorage: 100Gi
    env:
      - name: TZ
        value: Asia/Shanghai
    image: vesoft/nebula-storaged
    logStorage: 50Gi
    nodeSelector:
      nebula: ""
    replicas: 3
    resources:
      limits:
        cpu: "8"
        memory: 16Gi
      requests:
        cpu: "8"
        memory: 16Gi
    sidecarContainers:
      - command:
          - sh
          - -ce
          - while true; do find logs/ -size +1048576k -type f -delete;sleep 1h; done
        image: alpine:edge
        name: clean-logs
        volumeMounts:
          - mountPath: /logs
            name: storaged-log
            subPath: logs
  version: v3.5.0

But metad's config and storaged's config were still the default values, as were the config YAML files inside the pods that the operator generated.

Your Environments (required)

  • OS: k8s
  • tag: v1.4.2, v1.5.0

How To Reproduce (required)

Steps to reproduce the behavior

  1. Use the operator to deploy all of the services.
  2. Change the config YAML by adding minloglevel=3 and v=3 to the config field (a sketch follows this list).
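
A minimal sketch of step 2 applied to the metad section of the spec above (the storaged section gets the same change). Both minloglevel and v are glog flags that can be changed at runtime, so presumably both are covered by the operator's dynamic-flags path, which is what triggers the reset:

metad:
  config:
    minloglevel: "3"   # was "2"
    v: "3"             # was "2"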

Optional Solution

Add another config entry that is not included in v1alpha1.DynamicFlags, as sketched below.
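
A minimal sketch of that workaround for the storaged section, assuming rocksdb_block_cache (a real storaged flag, used here purely as an illustration) is not listed in v1alpha1.DynamicFlags; any flag outside that list should behave the same way:

storaged:
  config:
    minloglevel: "3"
    v: "3"
    # Hypothetical static entry: a flag absent from v1alpha1.DynamicFlags
    # should force the operator to re-render the full config file instead
    # of only pushing dynamic flags in place, so the custom values survive.
    rocksdb_block_cache: "1024"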

@ianhhhhhhhhe changed the title from "[bug] Costum config may reset when operator update dynamic flags" to "Costum config may reset when operator update dynamic flags" on Aug 21, 2023
@MegaByte875 (Contributor) commented:
#250
