Wrong dataPool referenced when creating pv with topologyconstrainedPools #2828

Closed · sbskas opened this issue Jan 27, 2022 · 7 comments · Fixed by #2830
Labels: bug (Something isn't working), component/rbd (Issues related to RBD)

sbskas (Contributor) commented Jan 27, 2022

Using erasure-coded data pools in topologyConstrainedPools as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: block-topology-ec
provisioner: ceph.rbd.csi.ceph.com
parameters:
    clusterID: test

    imageFormat: "2"
    imageFeatures: layering


    pool: default-replicated
    dataPool: default-erasure

    topologyConstrainedPools: |
      [
        {
          "poolName": "pool-replicated-zone1",
          "dataPool": "pool-erasure-zone1",
          "domainSegments": [
            {
              "domainLabel": "zone",
              "value": "zone1"
            }
          ]
        },
        {
          "poolName": "pool-replicated-zone2",
          "dataPool": "pool-erasure-zone2",
          "domainSegments": [
            {
              "domainLabel": "zone",
              "value": "zone2"
            }
          ]
        },
        {
          "poolName": "pool-replicated-zone3",
          "dataPool": "pool-erasure-zone3",
          "domainSegments": [
            {
              "domainLabel": "zone",
              "value": "zone3"
            }
          ]
        }
      ]

volumeBindingMode: WaitForFirstConsumer

And with the nodes properly labeled (topology.kubernetes.io/zone: zone1/2/3), deploying with topology-constrained pools works OK.
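
For reference, the zone labels would have been applied along these lines (the node name here is a placeholder):

kubectl label node worker-zone3 topology.kubernetes.io/zone=zone3

The RBD image itself lands in the zone-local pools, as rbd info shows: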

rbd -p pool-replicated-zone3 info csi-vol-386734ac-7edd-11ec-a2c5-3a73ec2b3d1a | grep pool
	data_pool: pool-erasure-zone3
	features: layering, data-pool

However, the dataPool referenced in the PV is set to the default data pool from the StorageClass, not the zone-local one:

kubectl get pv pvc-5d721f53-6cb3-4b19-bb1b-3a72ff55bf89 -o yaml |grep -i -e pool  -e handle
      dataPool: default-erasure
      journalPool: repldefault
      pool: pool-replicated-zone3
    volumeHandle: 0001-0009-rook-ceph-0000000000000005-386734ac-7edd-11ec-a2c5-3a73ec2b3d1a

sbskas (Contributor, issue author) commented Jan 27, 2022

The problem seems to be here:

func buildCreateVolumeResponse(req *csi.CreateVolumeRequest, rbdVol *rbdVolume) *csi.CreateVolumeResponse {
	volumeContext := req.GetParameters()
	volumeContext["pool"] = rbdVol.Pool
	volumeContext["journalPool"] = rbdVol.JournalPool
	volumeContext["imageName"] = rbdVol.RbdImageName

It doesn't update volumeContext["dataPool"].
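
A minimal sketch of the kind of fix, assuming rbdVol carries the topology-selected data pool in a DataPool field (the field name is an assumption, mirroring the fields copied above):

// Sketch only: propagate the selected data pool into the volume context,
// guarded so replicated-only pools don't get an empty "dataPool" entry.
if rbdVol.DataPool != "" {
	volumeContext["dataPool"] = rbdVol.DataPool
}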

@sbskas sbskas changed the title Wrong dataPool referenced when creating pv with topologyconstrainedPool Wrong dataPool referenced when creating pv with topologyconstrainedPools Jan 27, 2022
@nixpanic nixpanic added bug Something isn't working component/rbd Issues related to RBD labels Jan 31, 2022

Madhu-1 (Collaborator) commented Jan 31, 2022

@sbskas is it causing any functional issue, or is it just that wrong data is shown in the PV object?

sbskas (Contributor, issue author) commented Jan 31, 2022

No functionality is impacted as far as I know; only the recorded information is wrong.
I don't know how Rook behaves when the data in the PV is wrong.

Madhu-1 (Collaborator) commented Jan 31, 2022

No functionality is impacted as far as I know; only the recorded information is wrong. I don't know how Rook behaves when the data in the PV is wrong.

No problem with Rook. Rook doesn't do anything with this data; it's only for cephcsi.

sbskas (Contributor, issue author) commented Jan 31, 2022

So no functionality is impacted as far as I can see; the PV dataPool is just wrong.
You cannot rely on the information from kubectl get pv.
You have to run rbd info against the image to get the correct information.
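
In practice that means cross-checking by hand, e.g. (image name and pool taken from the rbd output and volumeHandle above):

rbd -p pool-replicated-zone3 info csi-vol-386734ac-7edd-11ec-a2c5-3a73ec2b3d1a | grep data_pool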

Madhu-1 (Collaborator) commented Jan 31, 2022

Yes, I just wanted to understand whether there is any functional problem. As your PR fixes the problem, we can take that in.

humblec (Collaborator) commented Jan 31, 2022

So no functionality is impacted as far as I can see; the PV dataPool is just wrong. You cannot rely on the information from kubectl get pv. You have to run rbd info against the image to get the correct information.

Yeah, this needs to be fixed. Thanks for catching it @sbskas 👍
