
An incomplete cr.spec.kubernetesConfig.redisSecret causes nil-pointer dereference inside operator #286

Closed
hoyhbx opened this issue Jun 4, 2022 · 0 comments · Fixed by #293
Labels
bug Something isn't working

Comments

hoyhbx (Contributor) commented Jun 4, 2022

What version of redis operator are you using?

redis-operator version: We are using redis-operator built from HEAD; the issue also reproduces with the latest release.

Does this issue reproduce with the latest release?
Yes, the problem reproduces with quay.io/opstree/redis-operator:v0.10.0

What operating system and processor architecture are you using (kubectl version)?

kubectl version Output
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

What did you do?

To reproduce, apply the CR file below:

cr_cluster.yaml
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisCluster
metadata:
  name: test-cluster
spec:
  clusterSize: 3
  kubernetesConfig:
    image: quay.io/opstree/redis:v6.2.5
    imagePullPolicy: IfNotPresent
    redisSecret:
      name: igxkfdfzwl
    resources:
      limits:
        cpu: 101m
        memory: 128Mi
      requests:
        cpu: 101m
        memory: 128Mi
  redisExporter:
    enabled: true
    image: quay.io/opstree/redis-exporter:1.0
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  redisFollower:
    livenessProbe:
      tcpSocket: null
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

Observe that the operator crashes and restarts. Below is the operator crash log:

kubectl logs deployment.apps/redis-operator -n redis-operator output
1.6542940573701448e+09	INFO	controller_redis	Redis statefulset get action failed	{"Request.StatefulSet.Namespace": "default", "Request.StatefulSet.Name": "test-cluster-leader"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x141516d]

goroutine 344 [running]:
redis-operator/k8sutils.getEnvironmentVariables({0x17259a9, 0x7}, 0x0, 0xc0003be9d1, 0xc0006ab380, 0x0, 0xc0003be9d1, 0x0, 0x0)
/workspace/k8sutils/statefulset.go:419 +0x66d
redis-operator/k8sutils.generateContainerDef({0xc0000ff140, 0x0}, {{0xc00069e800, 0x1c}, {0xc0003be7d0, 0xc}, 0xc0006ab360, {0xc0006b03c0, 0x22}, {0xc0003be820, ...}, ...}, ...)
/workspace/k8sutils/statefulset.go:215 +0xcd
redis-operator/k8sutils.generateStatefulSetsDef({{0xc0000ff140, 0x13}, {0x0, 0x0}, {0xc0003be7c0, 0x7}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...)
/workspace/k8sutils/statefulset.go:133 +0x145
redis-operator/k8sutils.CreateOrUpdateStateFul({_, _}, {{0xc0000ff140, 0x13}, {0x0, 0x0}, {0xc0003be7c0, 0x7}, {0x0, 0x0}, ...}, ...)
/workspace/k8sutils/statefulset.go:62 +0x1e7
redis-operator/k8sutils.RedisClusterSTS.CreateRedisClusterSetup({{0x1724f41, 0x6}, 0x0, 0x0, 0x0, 0x0}, 0xc0005af8c0)
/workspace/k8sutils/redis-cluster.go:155 +0x685
redis-operator/k8sutils.CreateRedisLeader(0xc0005af8c0)
/workspace/k8sutils/redis-cluster.go:108 +0xf1
redis-operator/controllers.(*RedisClusterReconciler).Reconcile(0xc00079f290, {0xc000623e90, 0x155c060}, {{{0xc0003be7c0, 0x1666c40}, {0xc0003be7b0, 0x30}}})
/workspace/controllers/rediscluster_controller.go:69 +0x316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0000f40b0, {0x1940918, 0xc000623e90}, {{{0xc0003be7c0, 0x1666c40}, {0xc0003be7b0, 0x413a94}}})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114 +0x26f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0000f40b0, {0x1940870, 0xc0007b8a00}, {0x15b1740, 0xc0002ce240})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311 +0x33e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0000f40b0, {0x1940870, 0xc0007b8a00})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x357

What did you expect to see?
The operator does not crash and (ideally) rejects the invalid CR input.

What did you see instead?
Redis-operator crashed with the aforementioned CR YAML file.

Possible root cause
Neither key nor name is required under the field cr.spec.kubernetesConfig.redisSecret according to the CRD #L155-L163 inside the operator repo.
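One way to harden this at the API level would be to mark both fields as required in the CRD's OpenAPI validation, so an incomplete redisSecret is rejected at admission time rather than reaching the reconciler. The fragment below is an illustrative sketch, not the repo's actual schema; the surrounding structure is assumed:

```yaml
# Hypothetical excerpt of the RedisCluster CRD schema under
# spec.kubernetesConfig — field names mirror the CR above.
redisSecret:
  type: object
  properties:
    name:
      type: string
    key:
      type: string
  required:
    - name
    - key
```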

The operator crashes with a nil-pointer dereference when only one of the two fields, key and name, is specified. The dereference happens in k8sutils/statefulset.go where the secret reference is expanded into the container environment, roughly (surrounding struct reconstructed for context):

SecretKeyRef: &corev1.SecretKeySelector{
	LocalObjectReference: corev1.LocalObjectReference{
		Name: *secretName,
	},
	Key: *secretKey,
},
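A complementary defensive fix would be to check both pointers before dereferencing and surface an error instead of panicking. The sketch below is a minimal, self-contained illustration of that guard; the types and function names are hypothetical stand-ins, not the operator's actual code:

```go
package main

import "fmt"

// redisSecret is a simplified stand-in for the operator's secret reference,
// whose name and key fields are optional (pointer-typed) per the CRD.
type redisSecret struct {
	Name *string
	Key  *string
}

// buildSecretEnv is a hypothetical guard: it returns an error instead of
// dereferencing a nil Name or Key, which is where the reported panic occurs.
func buildSecretEnv(s redisSecret) (name, key string, err error) {
	if s.Name == nil || s.Key == nil {
		return "", "", fmt.Errorf("redisSecret requires both name and key to be set")
	}
	return *s.Name, *s.Key, nil
}

func main() {
	n := "igxkfdfzwl"
	// Only name is set, key is missing: the unguarded code would panic here.
	if _, _, err := buildSecretEnv(redisSecret{Name: &n}); err != nil {
		fmt.Println("rejected:", err)
	}

	k := "password"
	name, key, _ := buildSecretEnv(redisSecret{Name: &n, Key: &k})
	fmt.Println(name, key)
}
```

With this guard the reconciler can record the error on the CR's status and requeue, rather than crashing the whole operator pod.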

@hoyhbx hoyhbx added the bug Something isn't working label Jun 4, 2022
@hoyhbx hoyhbx changed the title An incomplete cr.spec.kubernetesConfig.redisSecret causes nil-pointer reference inside operator An incomplete cr.spec.kubernetesConfig.redisSecret causes nil-pointer dereference inside operator Jun 4, 2022
@iamabhishek-dubey iamabhishek-dubey linked a pull request Jun 28, 2022 that will close this issue