
Not able to create cluster : spec.storageConfig.additionalVolumes[0].pvcSpec: Required value #1371

Open
gk-fschubert opened this issue Jul 18, 2024 · 0 comments
Labels
bug Something isn't working

gk-fschubert commented Jul 18, 2024

What happened?

I wanted to get started with k8ssandra using a relatively simple configuration:
three Kubernetes clusters, where one hosts the operator (control plane) and the other two host the data planes.

I've followed the documentation:

  • added the helm repo
  • installed the k8ssandra-operator on the operator node in controlPlane mode
  • installed the k8ssandra-operator on the two data nodes with controlPlane=False
  • added the kubeconfig as secret
  • created the clientConfig CRD
  • created the k8ssandraCluster CRD
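
For reproducibility, the steps above roughly correspond to these commands (a sketch based on the standard k8ssandra-operator Helm chart; repo URL and the controlPlane value come from the chart's documentation, namespace names are illustrative):

```shell
# Add the k8ssandra Helm repository
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update

# On the control-plane cluster (controlPlane defaults to true)
helm install k8ssandra-operator k8ssandra/k8ssandra-operator \
  --namespace k8ssandra-operator --create-namespace

# On each data-plane cluster
helm install k8ssandra-operator k8ssandra/k8ssandra-operator \
  --namespace k8ssandra-operator --create-namespace \
  --set controlPlane=false
```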

The issue is that the operator always prints this error:
Error: CassandraDatacenter.cassandra.datastax.com "jenslab" is invalid: spec.storageConfig.additionalVolumes[0].pvcSpec: Required value
As far as I understood the CRD docs (v1.17.0), additionalVolumes is optional.
And even if I do declare additionalVolumes, the error is the same, except for the index at which the issue is raised:
Error: CassandraDatacenter.cassandra.datastax.com "jenslab" is invalid: spec.storageConfig.additionalVolumes[3].pvcSpec: Required value

Is there something wrong with my CRD definition?

apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test-database
  namespace: k8ssandra-operator
spec:
  reaper: 
  stargate:
    size: 1
    heapSize: 256M
  cassandra:
    serverVersion: "4.0.1"
    config:
      jvmOptions:
        heapSize: 512M
    networking:
      hostNetwork: false
    datacenters:
      - metadata:
          name: jenslab
        k8sContext: jens-lab
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: do-block-storage
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
      - metadata:
          name: lab
        k8sContext: lab
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: default
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
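
For reference, when additionalVolumes is declared, each entry is expected to carry its own pvcSpec; a minimal sketch of one entry (field names assumed from the cass-operator CassandraDatacenter API, all values illustrative):

```yaml
storageConfig:
  additionalVolumes:
    - name: extra-data            # illustrative volume name
      mountPath: /var/lib/extra   # illustrative mount path
      pvcSpec:                    # the field the validation error asks for
        storageClassName: do-block-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
```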

Environment

  • K8ssandra Operator version:
    Operator image: cr.k8ssandra.io/k8ssandra/k8ssandra-operator:v1.17.0
$ kubectl describe deployment k8ssandra-operator -n k8ssandra-operator
Name:                   k8ssandra-operator
Namespace:              k8ssandra-operator
CreationTimestamp:      Thu, 18 Jul 2024 14:54:31 +0200
Labels:                 app.kubernetes.io/instance=k8ssandra-operator
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=k8ssandra-operator
                        app.kubernetes.io/part-of=k8ssandra-k8ssandra-operator-k8ssandra-operator
                        control-plane=k8ssandra-operator
                        helm.sh/chart=k8ssandra-operator-1.17.0
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: k8ssandra-operator
                        meta.helm.sh/release-namespace: k8ssandra-operator
Selector:               app.kubernetes.io/instance=k8ssandra-operator,app.kubernetes.io/name=k8ssandra-operator,app.kubernetes.io/part-of=k8ssandra-k8ssandra-operator-k8ssandra-operator,control-plane=k8ssandra-operator
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=k8ssandra-operator
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=k8ssandra-operator
                    app.kubernetes.io/part-of=k8ssandra-k8ssandra-operator-k8ssandra-operator
                    control-plane=k8ssandra-operator
                    helm.sh/chart=k8ssandra-operator-1.17.0
  Service Account:  k8ssandra-operator
  Containers:
   k8ssandra-operator:
    Image:      cr.k8ssandra.io/k8ssandra/k8ssandra-operator:v1.17.0
    Port:       9443/TCP
    Host Port:  0/TCP
    Command:
      /manager
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      WATCH_NAMESPACE:           (v1:metadata.namespace)
      K8SSANDRA_CONTROL_PLANE:  true
      SERVICE_ACCOUNT_NAME:      (v1:spec.serviceAccountName)
      OPERATOR_NAMESPACE:        (v1:metadata.namespace)
    Mounts:
      /controller_manager_config.yaml from manager-config (rw,path="controller_manager_config.yaml")
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   manager-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      k8ssandra-operator-manager-config
    Optional:  false
   cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  k8ssandra-operator-webhook-server-cert
    Optional:    false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   k8ssandra-operator-594dfb764 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set k8ssandra-operator-594dfb764 to 1
  • Kubernetes version information:
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2
  • Kubernetes cluster kind:
    • managed cluster on Azure (AKS) and DigitalOcean

┆Issue is synchronized with this Jira Story by Unito
┆Issue Number: K8OP-8

@gk-fschubert gk-fschubert added the bug Something isn't working label Jul 18, 2024