The PV could be created with a newly deployed cinder-csi; more than 1.5 hours later, creating a PV fails with: failed to provision volume with StorageClass "csi-sc-cinderplugin": timed out waiting for the condition #535

Closed
darcyllingyan opened this issue Mar 17, 2019 · 10 comments


darcyllingyan commented Mar 17, 2019

Hi,
Initially I installed the cinder-csi driver and the PVC could be created and bound. Several hours or more later, when I tried to create a pod with a CSI-provisioned volume again, it always failed with: ProvisioningFailed 49s (x4 over 11m) csi-cinderplugin_csi-provisioner-cinderplugin-0_9498d1df-459b-11e9-8537-36bea033277e failed to provision volume with StorageClass "csi-sc-cinderplugin": timed out waiting for the condition.

The logs are below.

# kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
csi-attacher-cinderplugin-0                  2/2     Running   0          3d22h
csi-nodeplugin-cinderplugin-4n6sc            2/2     Running   0          3d22h
csi-nodeplugin-cinderplugin-6c5x4            2/2     Running   0          3d22h
csi-nodeplugin-cinderplugin-rfkz6            2/2     Running   0          3d22h
csi-nodeplugin-cinderplugin-wpb8g            2/2     Running   0          3d22h
csi-provisioner-cinderplugin-0               2/2     Running   0          3d22h

# cat pvc-csi.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "csi-sc-cinderplugin"

# kubectl get pvc
NAME                   STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
csi-pvc-cinderplugin   Pending                                                                        csi-sc-cinderplugin   25m

# kubectl describe pvc csi-pvc-cinderplugin
Events:
  Type       Reason                Age                  From                                                                                  Message
  ----       ------                ----                 ----                                                                                  -------
  Normal     Provisioning          5m15s (x7 over 26m)  csi-cinderplugin_csi-provisioner-cinderplugin-0_9498d1df-459b-11e9-8537-36bea033277e  External provisioner is provisioning volume for claim "default/csi-pvc-cinderplugin"
  Warning    ProvisioningFailed    111s (x7 over 22m)   csi-cinderplugin_csi-provisioner-cinderplugin-0_9498d1df-459b-11e9-8537-36bea033277e  failed to provision volume with StorageClass "csi-sc-cinderplugin": timed out waiting for the condition
  Normal     ExternalProvisioning  60s (x102 over 26m)  persistentvolume-controller                                                           waiting for a volume to be created, either by external provisioner "csi-cinderplugin" or manually created by system administrator

# kubectl get sc
NAME                       PROVISIONER                    AGE
cinder                     kubernetes.io/cinder           47h
cinder-az-nova (default)   kubernetes.io/cinder           47h
cinder-az-nova-xfs         kubernetes.io/cinder           47h
cinder-xfs                 kubernetes.io/cinder           47h
csi-sc-cinderplugin        csi-cinderplugin               47h
csi-sc-cinderplugin-nova   csi-cinderplugin               115m
local-storage              kubernetes.io/no-provisioner   47h

Below is the external-provisioner.log
external_provisioner.log

The external-attacher.log
external-attacher.log
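
For reference, the sidecar logs above can be collected along these lines; the -c container names are assumptions based on the usual manifests, so adjust them to your deployment:

# kubectl logs -n kube-system csi-provisioner-cinderplugin-0 -c csi-provisioner
# kubectl logs -n kube-system csi-attacher-cinderplugin-0 -c csi-attacher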

The cinder_csi-provisioner log:

I0317 14:39:10.909183       1 driver.go:56] Driver: cinder.csi.openstack.org version: 1.0.0
I0317 14:39:10.909330       1 driver.go:88] Enabling controller service capability: LIST_VOLUMES
I0317 14:39:10.909335       1 driver.go:88] Enabling controller service capability: CREATE_DELETE_VOLUME
I0317 14:39:10.909340       1 driver.go:88] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0317 14:39:10.909344       1 driver.go:88] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0317 14:39:10.909348       1 driver.go:88] Enabling controller service capability: LIST_SNAPSHOTS
I0317 14:39:10.909353       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_WRITER
I0317 14:39:10.909376       1 driver.go:110] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0317 14:39:10.909628       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
E0317 15:53:57.461244       1 utils.go:77] GRPC error: Cannot delete the volume "0176c31c-5689-4d54-bbfa-99dcd72e3231", it's still attached to a node
E0317 15:56:29.773616       1 utils.go:77] GRPC error: Cannot delete the volume "6f1b0e00-ce49-4e1d-8780-ef2764a98cbf", it's still attached to a node
E0318 01:01:11.044757       1 utils.go:77] GRPC error: Cannot delete the volume "ed5bde86-ef5a-48e9-ba45-00e9f395ce04", it's still attached to a node

The cinder_csi-attacher log.

I0317 14:39:10.709975       1 driver.go:56] Driver: cinder.csi.openstack.org version: 1.0.0
I0317 14:39:10.710119       1 driver.go:88] Enabling controller service capability: LIST_VOLUMES
I0317 14:39:10.710125       1 driver.go:88] Enabling controller service capability: CREATE_DELETE_VOLUME
I0317 14:39:10.710130       1 driver.go:88] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0317 14:39:10.710134       1 driver.go:88] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0317 14:39:10.710138       1 driver.go:88] Enabling controller service capability: LIST_SNAPSHOTS
I0317 14:39:10.710143       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_WRITER
I0317 14:39:10.710149       1 driver.go:110] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0317 14:39:10.710403       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
E0318 01:29:54.885628       1 utils.go:77] GRPC error: Volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd" failed to be attached within the alloted time
E0318 01:30:09.710929       1 utils.go:77] GRPC error: Volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd" failed to be attached within the alloted time
E0318 01:30:27.790058       1 utils.go:77] GRPC error: Volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd" failed to be attached within the alloted time
E0318 01:30:39.207716       1 utils.go:77] GRPC error: Volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd" failed to be attached within the alloted time
E0318 01:30:54.239216       1 utils.go:77] GRPC error: Volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd" failed to be attached within the alloted time

Notes:

1. When the cinder-csi driver was first deployed, the PVC could be created successfully; several hours or days later, when I tried to create a new PVC, it reported the error. After deleting a PVC that had been created successfully and creating it again, it still could not be created.

2. If I delete the cinder-csi-plugin and reinstall it, then create the PVC, the PVC is created successfully.

3. The 1.5 hours is not an exact figure; the time until provisioning starts failing differs between clusters (see the token-lifetime check below).
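
One pattern worth flagging (an assumption on my part, not something confirmed in this thread): the roughly 70–90 minute failure window is close to the default Keystone token lifetime of 3600 seconds. If the plugin's OpenStack session were not re-authenticating after its token expires, every Cinder call would start failing about an hour after the pods come up. The actual token lifetime on a given cloud can be read from the expires field of:

# openstack token issue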

Can you help identify the root cause? After several days of investigating I have no idea what is wrong. Thanks very much!

Thanks

darcyllingyan commented Mar 18, 2019

Does anyone have suggestions? Thanks.

Thanks
Darcy

darcyllingyan commented

Hi @adisky,
Thanks for the response. Below is the cinder container deployed with the csi-provisioner:

# docker ps|grep csi-provisioner
fc2f0835c401        bcmt-registry:5000/cinder-csi-plugin@sha256:ae6be6d8c3c27eac12580c02f6943c2ae9d588e50da6c64c2701ffed6edef164                            "/bin/cinder-csi-p..."   16 hours ago        Up 16 hours                             k8s_cinder_csi-provisioner-cinderplugin-0_kube-system_81a7f87d-4989-11e9-87cf-fa163ed94ef7_0

# docker logs fc2f0835c401
I0318 14:24:16.990310       1 driver.go:56] Driver: cinder.csi.openstack.org version: 1.0.0
I0318 14:24:16.990440       1 driver.go:88] Enabling controller service capability: LIST_VOLUMES
I0318 14:24:16.990446       1 driver.go:88] Enabling controller service capability: CREATE_DELETE_VOLUME
I0318 14:24:16.990450       1 driver.go:88] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0318 14:24:16.990454       1 driver.go:88] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0318 14:24:16.990458       1 driver.go:88] Enabling controller service capability: LIST_SNAPSHOTS
I0318 14:24:16.990462       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_WRITER
I0318 14:24:16.990468       1 driver.go:110] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0318 14:24:16.990692       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
E0318 14:24:17.896211       1 utils.go:77] GRPC error: Cannot delete the volume "49d8f7ff-50f1-4234-9b72-284ee502120f", it's still attached to a node
E0318 14:24:17.902135       1 utils.go:77] GRPC error: Cannot delete the volume "ec5806d8-6351-4290-b246-b6021a17503a", it's still attached to a node
E0318 14:24:17.907503       1 utils.go:77] GRPC error: Cannot delete the volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd", it's still attached to a node
E0318 14:24:17.992839       1 utils.go:77] GRPC error: Cannot delete the volume "730509f8-3c4c-48f0-8884-8bae700cac26", it's still attached to a node
E0318 14:24:33.106835       1 utils.go:77] GRPC error: Cannot delete the volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd", it's still attached to a node
E0318 14:24:33.130396       1 utils.go:77] GRPC error: Cannot delete the volume "ec5806d8-6351-4290-b246-b6021a17503a", it's still attached to a node
E0318 14:25:03.295083       1 utils.go:77] GRPC error: Cannot delete the volume "4b9d0bd8-68f9-486c-be9f-b6ed774609fd", it's still attached to a node
E0318 14:25:03.306262       1 utils.go:77] GRPC error: Cannot delete the volume "ec5806d8-6351-4290-b246-b6021a17503a", it's still attached to a node
E0318 15:45:07.322016       1 utils.go:77] GRPC error: Cannot delete the volume "670b75db-4f0a-4738-b883-7425ef67d547", it's still attached to a node
E0318 15:45:09.225669       1 utils.go:77] GRPC error: Cannot delete the volume "f1729308-6b2c-41d2-a426-59a106873571", it's still attached to a node
E0318 15:45:25.052077       1 utils.go:77] GRPC error: Cannot delete the volume "8c44b5dc-22e2-466b-938f-ad01994fdc4a", it's still attached to a node
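
The repeated "still attached" errors can be cross-checked against the OpenStack side; for example, using one of the volume IDs from the log above:

# openstack volume show 4b9d0bd8-68f9-486c-be9f-b6ed774609fd -c status -c attachments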


darcyllingyan commented Mar 19, 2019

Hi @adisky,

Below is my step-by-step procedure:

  1. I pulled all the images needed for the cinder-csi-plugin with docker and pushed them to my local registry:
# docker images|grep csi
bcmt-registry:5000/csi/node-driver-registrar                                v1.0.1              fc693d017953        6 days ago          116 MB
bcmt-registry:5000/csi/csi-snapshotter                                      v1.0.1              7d5a186e47e1        6 days ago          118 MB
bcmt-registry:5000/csi/csi-provisioner                                      v1.0.1              c559e3777ddc        6 days ago          123 MB
bcmt-registry:5000/csi/csi-attacher                                         v1.0.1              5d15505d8198        6 days ago          119 MB
bcmt-registry:5000/docker.io/k8scloudprovider/cinder-csi-plugin             latest              11dfac77b67b        10 days ago         312 MB
  2. Download the cinder-csi-plugin manifests:
# ls
csi-attacher-cinderplugin.yaml  csi-nodeplugin-cinderplugin.yaml  csi-provisioner-cinderplugin.yaml  csi-secret-cinderplugin.yaml       csi-snapshotter-rbac.yaml
csi-attacher-rbac.yaml          csi-nodeplugin-rbac.yaml          csi-provisioner-rbac.yaml          csi-snapshotter-cinderplugin.yaml
  3. Apply all the YAML files; all the cinder-csi-plugin pods start:
# kubectl get pod -n kube-system
NAME                                                  READY   STATUS    RESTARTS   AGE
csi-attacher-cinderplugin-0                           2/2     Running   0          47m
csi-nodeplugin-cinderplugin-bfhnb                     2/2     Running   0          47m
csi-provisioner-cinderplugin-0                        2/2     Running   0          47m
csi-snapshotter-cinder-0                              2/2     Running   0          47m
  4. Create the storage class:
# kubectl get sc
NAME                       PROVISIONER                    AGE
cinder                     kubernetes.io/cinder           25h
cinder-az-nova (default)   kubernetes.io/cinder           25h
cinder-az-nova-xfs         kubernetes.io/cinder           25h
cinder-xfs                 kubernetes.io/cinder           25h
csi-sc-cinderplugin        csi-cinderplugin               25h
local-storage              kubernetes.io/no-provisioner   25h
  5. Create the pvc.yaml:
# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin555
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin
# kubectl apply -f pvc.yaml 
persistentvolumeclaim/csi-pvc-cinderplugin555 created

# kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
cnet1-cnet-db                Bound    pvc-c3703b29-4a10-11e9-a9fd-fa163e961a4d   2Gi        RWO            csi-sc-cinderplugin   40m
csi-pvc-cinderplugin         Bound    pvc-37cc0e48-4984-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   17h
csi-pvc-cinderplugin-test1   Bound    pvc-e3854cb9-4945-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   24h
csi-pvc-cinderplugin222      Bound    pvc-0d0f4089-4a11-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   38m
csi-pvc-cinderplugin333      Bound    pvc-8b7d2be0-4a11-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   34m
csi-pvc-cinderplugin444      Bound    pvc-c4b72dd1-4a14-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   11m
csi-pvc-cinderplugin555      Bound    pvc-60c5c525-4a16-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   20s

  6. All the PVCs could be created across many attempts, but when I tried to create a new simple PVC more than 70 minutes later, the timeout error was reported (a way to timestamp this is sketched after this list).

  7. Now if I kubectl delete pvc csi-pvc-cinderplugin555 and create it again, it cannot be created due to the timeout error.

  8. Only if I delete the cinder-csi-plugin (kubectl delete -f cinder-csi-plugin) and reinstall it (kubectl apply -f cinder-csi-plugin) can it be used again, but after about 70 minutes it times out again.
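
As a sketch of how to timestamp the onset of the failure, the claim's events can be watched while a fresh PVC sits in Pending (the field-selector syntax is assumed to be supported on this cluster version):

# kubectl get events --field-selector involvedObject.name=csi-pvc-cinderplugin555 -w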

Thanks
Darcy

darcyllingyan commented

Hi,
It now seems the timeout error can appear even before 70 minutes have passed.
Just now I created a new PVC, csi-pvc-cinderplugin555, and it was bound.

# kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
cnet1-cnet-db                Bound    pvc-c3703b29-4a10-11e9-a9fd-fa163e961a4d   2Gi        RWO            csi-sc-cinderplugin   40m
csi-pvc-cinderplugin         Bound    pvc-37cc0e48-4984-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   17h
csi-pvc-cinderplugin-test1   Bound    pvc-e3854cb9-4945-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   24h
csi-pvc-cinderplugin222      Bound    pvc-0d0f4089-4a11-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   38m
csi-pvc-cinderplugin333      Bound    pvc-8b7d2be0-4a11-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   34m
csi-pvc-cinderplugin444      Bound    pvc-c4b72dd1-4a14-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   11m
csi-pvc-cinderplugin555      Bound    pvc-60c5c525-4a16-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   20s

Now, when I try to create a new PVC, csi-pvc-cinderplugin666, the timeout error is reported.

# kubectl get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
cnet1-cnet-db                Bound     pvc-c3703b29-4a10-11e9-a9fd-fa163e961a4d   2Gi        RWO            csi-sc-cinderplugin   59m
csi-pvc-cinderplugin         Bound     pvc-37cc0e48-4984-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   17h
csi-pvc-cinderplugin-test1   Bound     pvc-e3854cb9-4945-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   25h
csi-pvc-cinderplugin222      Bound     pvc-0d0f4089-4a11-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   57m
csi-pvc-cinderplugin333      Bound     pvc-8b7d2be0-4a11-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   54m
csi-pvc-cinderplugin444      Bound     pvc-c4b72dd1-4a14-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   30m
csi-pvc-cinderplugin555      Bound     pvc-60c5c525-4a16-11e9-a9fd-fa163e961a4d   1Gi        RWO            csi-sc-cinderplugin   19m
csi-pvc-cinderplugin666      Pending                                                                        csi-sc-cinderplugin   4m28s

# kubectl describe pvc csi-pvc-cinderplugin666
Events:
  Type       Reason                Age                   From                                                                                  Message
  ----       ------                ----                  ----                                                                                  -------
  Warning    ProvisioningFailed    117s                  csi-cinderplugin_csi-provisioner-cinderplugin-0_71d56d0e-4a0f-11e9-868c-663f1f51a46c  failed to provision volume with StorageClass "csi-sc-cinderplugin": timed out waiting for the condition
  Normal     Provisioning          102s (x2 over 5m21s)  csi-cinderplugin_csi-provisioner-cinderplugin-0_71d56d0e-4a0f-11e9-868c-663f1f51a46c  External provisioner is provisioning volume for claim "default/csi-pvc-cinderplugin666"
  Normal     ExternalProvisioning  6s (x22 over 5m21s)   persistentvolume-controller                                                           waiting for a volume to be created, either by external provisioner "csi-cinderplugin" or manually created by system administrator
Mounted By:  <none>

  1. The sc.yaml:
# cat csi-sc-cinder.yaml.j2
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-sc-cinderplugin
provisioner: csi-cinderplugin
  2. The pvc.yaml:
# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin666
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin

I just use nginx for the pod YAML (PS: I mainly test by frequently creating PVCs).

# cat test.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx 
spec:
  containers:
  - image: bcmt-registry:5000/cbur/cburm:1.8-14-692
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
      - mountPath: /var/lib/www/html
        name: csi-data-cinderplugin 
  volumes:
  - name: csi-data-cinderplugin
    persistentVolumeClaim:
      claimName: csi-pvc-cinderplugin
      readOnly: false

darcyllingyan commented

I have confirmed that the Cinder volume can be created via the dashboard or the in-tree provisioner; the create-volume timeout issue seems to exist only when creating volumes with the external provisioner.
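
One way to confirm whether the external provisioner's CreateVolume call is reaching Cinder at all is to grep its sidecar log for the call (the container name is again an assumption):

# kubectl logs -n kube-system csi-provisioner-cinderplugin-0 -c csi-provisioner | grep -i createvolume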

darcyllingyan commented

After contacting the lab administrator, neither he nor I can be sure whether the following volume-creation request came from CSI:

2019-03-20 07:10:41.409 57649 INFO cinder.api.openstack.wsgi [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] POST https://10.75.225.36:13776/v3/e91a6003e90b444dbde067bf5d739344/volumes
2019-03-20 07:10:41.411 57649 INFO cinder.api.v3.volumes [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] Create volume of 1 GB
2019-03-20 07:10:41.412 57649 INFO cinder.volume.api [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] Availability Zones retrieved successfully.
2019-03-20 07:10:41.431 57649 WARNING cinder.volume.api [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] Task 'cinder.volume.flows.api.create_volume.ExtractVolumeRequestTask;volume:create' (052d3c7f-845c-476a-8745-7863c15f1032) transitioned into state 'FAILURE' from state 'RUNNING'
1 predecessors (most recent first):
  Flow 'volume_create_api'
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api Traceback (most recent call last):
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api     result = task.execute(**arguments)
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/api/create_volume.py", line 444, in execute
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api     group)
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/api/create_volume.py", line 337, in _extract_availability_zone
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api     raise exception.InvalidInput(reason=msg)
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api InvalidInput: Invalid input received: Availability zone 'zone2' is invalid.
2019-03-20 07:10:41.431 57649 ERROR cinder.volume.api
2019-03-20 07:10:41.435 57649 WARNING cinder.volume.api [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] Task 'cinder.volume.flows.api.create_volume.ExtractVolumeRequestTask;volume:create' (052d3c7f-845c-476a-8745-7863c15f1032) transitioned into state 'REVERTED' from state 'REVERTING'
2019-03-20 07:10:41.437 57649 WARNING cinder.volume.api [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] Flow 'volume_create_api' (5c960838-68bd-4c6d-b559-784ae3f835ee) transitioned into state 'REVERTED' from state 'RUNNING'
2019-03-20 07:10:41.438 57649 INFO cinder.api.openstack.wsgi [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] https://10.75.225.36:13776/v3/e91a6003e90b444dbde067bf5d739344/volumes returned with HTTP 400
2019-03-20 07:10:41.439 57649 INFO eventlet.wsgi.server [req-eb15d369-17ae-40ed-8582-317bdee43f76 b10ea67e3f2c44128b15c1d412b724bd e91a6003e90b444dbde067bf5d739344 - default default] 172.17.1.19 "POST /v3/e91a6003e90b444dbde067bf5d739344/volumes HTTP/1.1" status: 400  len: 451 time: 0.0544291
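
If this 400 really does correspond to the CSI request, the immediate cause is plain from the traceback: Cinder has no volume availability zone named 'zone2'. The zones Cinder actually serves can be listed with:

# openstack availability zone list --volume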

ramineni (Contributor) commented

@darcyllingyan Are any of your nodes in zone2? Could you give me the zone annotations of all your nodes?

darcyllingyan commented

@ramineni,
Thanks for the response.
I have four nodes in the cluster.

darcy-1903-cws-01    zone1
darcy-1903-cws-02    zone1
darcy-1903-cwes-01   zone2
darcy-1903-we-01     zone2
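
For completeness, these zone assignments can be read straight from the node labels, assuming the cluster uses the failure-domain.beta.kubernetes.io/zone label that was standard at the time:

# kubectl get nodes -L failure-domain.beta.kubernetes.io/zone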

I'm still in contact with the lab administrator, and I think the zone issue shouldn't be caused by CSI (the administrator pulled the log based only on the timestamp, so it may not be the external-provisioner volume request), since "availability: nova" is defined in the storage class.

# cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin-nova
provisioner: csi-cinderplugin
parameters:
  availability: nova
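
Since this class pins availability to nova, a quick sanity check is to create a 1 GB volume in that zone by hand and confirm Cinder accepts it (az-test is a throwaway name):

# openstack volume create --size 1 --availability-zone nova az-test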

darcyllingyan briefly retitled the issue on Mar 21, 2019 to flag a suspected Cinder-CSI ReAuth issue, then reverted the title the same day.
darcyllingyan commented

Closing for now; I will report back after making further progress.

Zanooda commented Dec 15, 2023

@darcyllingyan How did you resolve this issue? I'm fighting something similar right now.
