[cinder-csi-plugin] filesystem of a larger volume created from a snapshot/volume is not expanded #1539
@alibo Please share the logs after raising verbosity to 5. Add --v=5 here: https://github.com/kubernetes/cloud-provider-openstack/blob/master/manifests/cinder-csi-plugin/cinder-csi-controllerplugin.yaml#L106 . We do pass the larger size requested in the PVC to the create-volume call, so the same should reach the Cinder backend, which should ideally create the volume at the larger size if supported. Do you see any errors?
It's true: Cinder creates a larger volume, and when it's attached to the node the block device has the right size. But when it's mounted by cinder-csi-plugin, its filesystem (ext4 in this case) is not resized to match.
I can also share the logs if needed.
Yes, logs would be helpful. A general question is how the filesystem resize is expected to happen in this path.
@alibo I'm not sure I understood the problem correctly. A cloned volume is an altogether new volume, created at the increased size already (not an existing volume whose size is changed). Volume expansion is needed when the size changes after creation, which is not the case here. So there shouldn't be a need for NodeExpandVolume to be called: the new volume is created at the larger size and can be attached to the node as-is.
@jichenjc IMHO, it depends on how we see the issue; I think the maintainers of this project can decide :)

@ramineni OK, let me give you an example and share the related logs:
PVC (1Gi):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"source-pvc","namespace":"monitoring"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"block-storage-standard"}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: cinder.csi.openstack.org
creationTimestamp: "2021-05-25T06:10:49Z"
finalizers:
- kubernetes.io/pvc-protection
name: source-pvc
namespace: monitoring
resourceVersion: "37061644"
selfLink: /api/v1/namespaces/monitoring/persistentvolumeclaims/source-pvc
uid: 637f9cf2-0abd-4924-bf46-4d6bc344a509
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: block-storage-standard
volumeMode: Filesystem
volumeName: pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
phase: Bound

PV (1Gi):

apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: cinder.csi.openstack.org
creationTimestamp: "2021-05-25T06:10:50Z"
finalizers:
- kubernetes.io/pv-protection
name: pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
resourceVersion: "37061641"
selfLink: /api/v1/persistentvolumes/pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
uid: e0df3c80-5d22-4eeb-8210-a34f4667b8fe
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: source-pvc
namespace: monitoring
resourceVersion: "37061613"
uid: 637f9cf2-0abd-4924-bf46-4d6bc344a509
csi:
driver: cinder.csi.openstack.org
fsType: ext4
volumeAttributes:
storage.kubernetes.io/csiProvisionerIdentity: 1621922914419-8081-cinder.csi.openstack.org
volumeHandle: 7b4d2908-b022-47e2-91ef-4429fde91f26
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.cinder.csi.openstack.org/zone
operator: In
values:
- nova
persistentVolumeReclaimPolicy: Delete
storageClassName: block-storage-standard
volumeMode: Filesystem
status:
phase: Bound

Cinder:

openstack volume show pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
+--------------------------------+---------------------------------------------------------+
| Field | Value |
+--------------------------------+---------------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2021-05-25T06:10:50.000000 |
| description | Created by OpenStack Cinder CSI driver |
| encrypted | False |
| id | 7b4d2908-b022-47e2-91ef-4429fde91f26 |
| migration_status | None |
| multiattach | False |
| name | pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509 |
| os-vol-host-attr:host | hostgroup@okd-replica-3-standard#okd-replica-3-standard |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | bff078a1683d405d8e3f27d22d26df12 |
| properties | cinder.csi.openstack.org/cluster='kubernetes' |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | okd-block-storage-standard |
| updated_at | 2021-05-25T06:10:50.000000 |
| user_id | c92a8b81a2fc445081d40b94384e0cd0 |
+--------------------------------+---------------------------------------------------------+
/prometheus $ df -h | grep /dev/vdf
/dev/vdf 975.9M 2.5M 957.4M 0% /prometheus
/prometheus $ echo "test" > /prometheus/test.txt
/prometheus $ cat /prometheus/test.txt
test
Cloned PVC manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cloned-pvc
spec:
accessModes:
- ReadWriteOnce
dataSource:
kind: PersistentVolumeClaim
name: source-pvc
resources:
requests:
storage: 2Gi
storageClassName: block-storage-standard
PVC (2Gi):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"cloned-pvc","namespace":"monitoring"},"spec":{"accessModes":["ReadWriteOnce"],"dataSource":{"kind":"PersistentVolumeClaim","name":"source-pvc"},"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"block-storage-standard"}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: cinder.csi.openstack.org
creationTimestamp: "2021-05-25T10:26:55Z"
finalizers:
- kubernetes.io/pvc-protection
name: cloned-pvc
namespace: monitoring
resourceVersion: "37196636"
selfLink: /api/v1/namespaces/monitoring/persistentvolumeclaims/cloned-pvc
uid: 4f5fe6f8-e2ae-49a9-8767-e613b4728982
spec:
accessModes:
- ReadWriteOnce
dataSource:
apiGroup: null
kind: PersistentVolumeClaim
name: source-pvc
resources:
requests:
storage: 2Gi
storageClassName: block-storage-standard
volumeMode: Filesystem
volumeName: pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
phase: Bound

PV (2Gi):

apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: cinder.csi.openstack.org
creationTimestamp: "2021-05-25T10:26:59Z"
finalizers:
- kubernetes.io/pv-protection
name: pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
resourceVersion: "37196634"
selfLink: /api/v1/persistentvolumes/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
uid: ed5637ba-125d-4e80-a26b-ec9935ae61bc
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: cloned-pvc
namespace: monitoring
resourceVersion: "37196597"
uid: 4f5fe6f8-e2ae-49a9-8767-e613b4728982
csi:
driver: cinder.csi.openstack.org
fsType: ext4
volumeAttributes:
storage.kubernetes.io/csiProvisionerIdentity: 1621937257931-8081-cinder.csi.openstack.org
volumeHandle: 17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.cinder.csi.openstack.org/zone
operator: In
values:
- nova
persistentVolumeReclaimPolicy: Delete
storageClassName: block-storage-standard
volumeMode: Filesystem
status:
phase: Bound

Cinder:

+--------------------------------+---------------------------------------------------------+
| Field | Value |
+--------------------------------+---------------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2021-05-25T10:26:58.000000 |
| description | Created by OpenStack Cinder CSI driver |
| encrypted | False |
| id | 17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 |
| migration_status | None |
| multiattach | False |
| name | pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 |
| os-vol-host-attr:host | hostgroup@okd-replica-3-standard#okd-replica-3-standard |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | bff078a1683d405d8e3f27d22d26df12 |
| properties | cinder.csi.openstack.org/cluster='kubernetes' |
| replication_status | None |
| size | 2 |
| snapshot_id | None |
| source_volid | 7b4d2908-b022-47e2-91ef-4429fde91f26 |
| status | available |
| type | okd-block-storage-standard |
| updated_at | 2021-05-25T10:26:59.000000 |
| user_id | c92a8b81a2fc445081d40b94384e0cd0 |
+--------------------------------+---------------------------------------------------------+
I0525 10:26:55.503759 1 controller.go:1332] provision "monitoring/cloned-pvc" class "block-storage-standard": started
I0525 10:26:55.504274 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"monitoring", Name:"cloned-pvc", UID:"4f5fe6f8-e2ae-49a9-8767-e613b4728982", APIVersion:"v1", ResourceVersion:"37196597", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "monitoring/cloned-pvc"
I0525 10:26:59.040417 1 controller.go:1439] provision "monitoring/cloned-pvc" class "block-storage-standard": volume "pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" provisioned
I0525 10:26:59.040464 1 controller.go:1456] provision "monitoring/cloned-pvc" class "block-storage-standard": succeeded
I0525 10:26:59.060545 1 controller.go:1332] provision "monitoring/cloned-pvc" class "block-storage-standard": started
I0525 10:26:59.060585 1 controller.go:1341] provision "monitoring/cloned-pvc" class "block-storage-standard": persistentvolume "pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" already exists, skipping
I0525 10:26:59.060600 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"monitoring", Name:"cloned-pvc", UID:"4f5fe6f8-e2ae-49a9-8767-e613b4728982", APIVersion:"v1", ResourceVersion:"37196597", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
I0525 10:26:55.513919 1 utils.go:100] GRPC call: /csi.v1.Controller/CreateVolume
I0525 10:26:55.513977 1 utils.go:101] GRPC request: name:"pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" capacity_range:<required_bytes:2147483648 > volume_capabilities:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"csi.storage.k8s.io/pv/name" value:"pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" > parameters:<key:"csi.storage.k8s.io/pvc/name" value:"cloned-pvc" > parameters:<key:"csi.storage.k8s.io/pvc/namespace" value:"monitoring" > parameters:<key:"type" value:"okd-block-storage-standard" > volume_content_source:<volume:<volume_id:"7b4d2908-b022-47e2-91ef-4429fde91f26" > > accessibility_requirements:<requisite:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > preferred:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > >
I0525 10:26:55.514208 1 controllerserver.go:44] CreateVolume: called with args {Name:pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 CapacityRange:required_bytes:2147483648 VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[csi.storage.k8s.io/pv/name:pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 csi.storage.k8s.io/pvc/name/cloned-pvc csi.storage.k8s.io/pvc/namespace:monitoring type:okd-block-storage-standard] Secrets:map[] VolumeContentSource:volume:<volume_id:"7b4d2908-b022-47e2-91ef-4429fde91f26" > AccessibilityRequirements:requisite:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > preferred:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:26:59.039580 1 controllerserver.go:135] CreateVolume: Successfully created volume 17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 in Availability Zone: nova of size 2 GiB
I0525 10:26:59.039666 1 utils.go:106] GRPC response: volume:<capacity_bytes:2147483648 volume_id:"17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4" content_source:<volume:<volume_id:"7b4d2908-b022-47e2-91ef-4429fde91f26" > > accessible_topology:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > >
/prometheus $ df -h | grep vdf
/dev/vdf 975.9M 2.5M 957.4M 0% /prometheus
/prometheus $ cat /prometheus/test.txt
test

Cinder:

openstack volume show pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 -c status -c size -c name
+--------+------------------------------------------+
| Field | Value |
+--------+------------------------------------------+
| name | pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 |
| size | 2 |
| status | in-use |
+--------+------------------------------------------+

Inside node:

sudo df -h | grep pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
/dev/vdf 976M 2.6M 958M 1% /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount
sudo lsblk | grep pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
vdf 252:80 0 2G 0 disk /var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount
I0525 10:33:55.750287 1 utils.go:100] GRPC call: /csi.v1.Node/NodeStageVolume
I0525 10:33:55.750319 1 utils.go:101] GRPC request: volume_id:"17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4" publish_context:<key:"DevicePath" value:"/dev/vdf" > staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1621937257931-8081-cinder.csi.openstack.org" >
I0525 10:33:55.750444 1 nodeserver.go:336] NodeStageVolume: called with args {VolumeId:17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 PublishContext:map[DevicePath:/dev/vdf] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1621937257931-8081-cinder.csi.openstack.org] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:33:55.916020 1 utils.go:100] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0525 10:33:55.916066 1 utils.go:101] GRPC request:
I0525 10:33:55.916146 1 nodeserver.go:454] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0525 10:33:55.916169 1 utils.go:106] GRPC response: capabilities:<rpc:<type:STAGE_UNSTAGE_VOLUME > > capabilities:<rpc:<type:EXPAND_VOLUME > > capabilities:<rpc:<type:GET_VOLUME_STATS > >
I0525 10:33:55.917564 1 utils.go:100] GRPC call: /csi.v1.Node/NodeGetVolumeStats
I0525 10:33:55.917590 1 utils.go:101] GRPC request: volume_id:"f0b9b361-8e01-4e10-8888-e07a2f1c5a76" volume_path:"/var/lib/kubelet/pods/08b39976-4c6c-4a89-8bf3-ecc025a146fa/volumes/kubernetes.io~csi/pvc-4bc15865-c68f-4269-9bbb-ec10f5e9f4ab/mount"
I0525 10:33:55.917648 1 nodeserver.go:462] NodeGetVolumeStats: called with args {VolumeId:f0b9b361-8e01-4e10-8888-e07a2f1c5a76 VolumePath:/var/lib/kubelet/pods/08b39976-4c6c-4a89-8bf3-ecc025a146fa/volumes/kubernetes.io~csi/pvc-4bc15865-c68f-4269-9bbb-ec10f5e9f4ab/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:33:55.917717 1 utils.go:106] GRPC response: usage:<available:1003843584 total:1023303680 used:2682880 unit:BYTES > usage:<available:65513 total:65536 used:23 unit:INODES >
I0525 10:33:56.157852 1 mount.go:171] Found disk attached as "virtio-17fb58ba-bcf0-4dc2-b"; full devicepath: /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b
I0525 10:33:56.157970 1 mount_linux.go:405] Attempting to determine if disk "/dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b])
I0525 10:33:56.221151 1 mount_linux.go:408] Output: "DEVNAME=/dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b\nTYPE=ext4\n", err: <nil>
I0525 10:33:56.221230 1 mount_linux.go:298] Checking for issues with fsck on disk: /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b
I0525 10:33:56.580565 1 mount_linux.go:394] Attempting to mount disk /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount
I0525 10:33:56.580694 1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t ext4 -o defaults /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount)
I0525 10:33:56.634902 1 utils.go:106] GRPC response:
I0525 10:33:56.964651 1 utils.go:100] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0525 10:33:56.964690 1 utils.go:101] GRPC request:
I0525 10:33:56.964743 1 nodeserver.go:454] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0525 10:33:56.964766 1 utils.go:106] GRPC response: capabilities:<rpc:<type:STAGE_UNSTAGE_VOLUME > > capabilities:<rpc:<type:EXPAND_VOLUME > > capabilities:<rpc:<type:GET_VOLUME_STATS > >
I0525 10:33:56.975079 1 utils.go:100] GRPC call: /csi.v1.Node/NodePublishVolume
I0525 10:33:56.975142 1 utils.go:101] GRPC request: volume_id:"17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4" publish_context:<key:"DevicePath" value:"/dev/vdf" > staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount" target_path:"/var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1621937257931-8081-cinder.csi.openstack.org" >
I0525 10:33:56.975335 1 nodeserver.go:50] NodePublishVolume: called with args {VolumeId:17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 PublishContext:map[DevicePath:/dev/vdf] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount TargetPath:/var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1621937257931-8081-cinder.csi.openstack.org] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:33:57.356742 1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount /var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount)
I0525 10:33:57.360361 1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t ext4 -o bind,remount,rw /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount /var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount)

Please let me know if you need any more information :)
TL;DR:
Someone with node access has to run resize2fs manually in such cases, which is not desirable at all.
@m-yosefpor You probably didn't read my comment above.
@lingxiankong I just changed the name of the volume:
- name: data
persistentVolumeClaim:
claimName: cloned-pvc # <-- it was `source-pvc`

But the issue is here: kubelet doesn't know the new volume was created from another volume with a different size, so it just compares the requested storage in the specs of the PV and the PVC. As you can see in the example I mentioned, both the PV and the PVC have the same size, so kubelet doesn't call NodeExpandVolume.
One solution is setting the requested storage of the new PV to the size of the source PV, but that sounds like a hacky workaround to me, since the requested storage in the PV should reflect the size of the block volume, not the filesystem.
The CSI spec says the CSI driver should decide whether to reject the request or resize the filesystem by itself: container-storage-interface/spec#452
@alibo I tried to replace the pod directly, but got an error, so I deleted the pod and recreated a new one.
A weird thing happened: I got 2G on the first try, then kept getting 1G afterwards (deleting everything and redoing it).
And from the CSI spec, it seems we need to fix this in our driver. I need to test more and will provide additional comments.
@jichenjc I've changed the pod's spec in the deployment. You get 2G on the first try if the filesystem of the source volume has not been created yet (it usually happens when the pod has not been created for the first time, see step 2 in my example scenario). When such a volume is cloned, the driver realizes no filesystem exists and formats the whole (larger) device.
OK, I can reproduce what you mentioned above by using a Deployment (previously I was using a bare Pod).
As you pointed out, kubelet is only checking the PV/PVC sizes, and since the PV definitely comes from Cinder, in our sample it's already 2G.
@jichenjc It seems the PV controller in Kubernetes overwrites it during binding. In the issue above, they said they're planning to fix this in CSI drivers using the new mount-utils package (already imported in #1341 and #1440): the merged PR resizes the volume in NodeStageVolume when needed, using code similar to what now lives in the mount-utils package.
Thanks for the detailed info, this is really helpful ~ @alibo
@jichenjc We've added a workaround on our side. One thing I suggest considering is, unlike an unconditional resize in NodeStageVolume, resizing only volumes created from another volume or a snapshot:

volume, err := ns.Cloud.GetVolume(volumeID)
if err != nil {
	// could not look the volume up in Cinder; fail the stage operation
	return err
}
if volume.SourceVolID != "" || volume.SnapshotID != "" {
	r := mountutil.NewResizeFs(ns.Mount.Mounter().Exec)
	if needResize, err := r.NeedResize(devicePath, stagingTarget); !needResize || err != nil {
		// nothing to do (filesystem already fills the device), or the check failed
		return err
	}
	if _, err := r.Resize(devicePath, stagingTarget); err != nil {
		// the underlying resize2fs/xfs_growfs call failed; surface the error
		return err
	}
}

In this case, the driver resizes only volumes that have a source volume or snapshot, and leaves freshly created volumes alone.
Makes sense to me. Let's see whether anyone wants to submit a PR, then we can continue the discussion there, or I can submit one after my busy days.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
If a new PVC is created from a Snapshot or another volume and its requested size is greater than the source volume/snapshot, its filesystem won't be resized.
What you expected to happen:
The filesystem of the new/dest volume should be expanded up to block device size.
How to reproduce it:
Source PVC:
Cloned PVC + increased its requested storage:
Anything else we need to know?:
It can be expanded by increasing the requested storage of the cloned PVC after it's created, but I think the easier option for a normal k8s user is expanding it implicitly in the NodePublishVolume or NodeStageVolume method if required.

Environment: