Add handling for the case where one node is deployed per vApp #138

Open
wants to merge 3 commits into base: main
Conversation


@Negashev Negashev commented Feb 9, 2023

Description

When each node is deployed in its own vApp, the vApp name and the node name are the same. Set vAppName accordingly in vcloud-csi-configmap:

  vcloud-csi-config.yaml: |+
    vcd:
      host: VCD_HOST
      org: ORG
      vdc: OVDC
      vAppName: NODE_PER_VAPP
    clusterid: CLUSTER_ID
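To show where this snippet lives, here is a minimal sketch of the full ConfigMap. The data key and field names come from the snippet above; the `kube-system` namespace and the inline comments are assumptions to adapt to your deployment, and VCD_HOST, ORG, OVDC, NODE_PER_VAPP, and CLUSTER_ID are placeholders to substitute for your environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vcloud-csi-configmap
  namespace: kube-system   # assumption: match the namespace of your CSI deployment
data:
  vcloud-csi-config.yaml: |+
    vcd:
      host: VCD_HOST            # VCD endpoint URL
      org: ORG                  # VCD organization
      vdc: OVDC                 # organization VDC
      vAppName: NODE_PER_VAPP   # with one node per vApp, this equals the node name
    clusterid: CLUSTER_ID
```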

Checklist

  • tested locally
  • updated any relevant dependencies
  • updated any relevant documentation

Testing Done

controller

I0309 08:28:52.296082       1 driver.go:57] Driver: [named-disk.csi.cloud-director.vmware.com] Version: [main-branch]
I0309 08:28:52.296114       1 driver.go:68] Adding volume capability [SINGLE_NODE_WRITER]
I0309 08:28:52.296116       1 driver.go:68] Adding volume capability [SINGLE_NODE_READER_ONLY]
I0309 08:28:52.296118       1 driver.go:68] Adding volume capability [MULTI_NODE_READER_ONLY]
I0309 08:28:52.296120       1 driver.go:68] Adding volume capability [MULTI_NODE_SINGLE_WRITER]
I0309 08:28:52.296121       1 driver.go:68] Adding volume capability [MULTI_NODE_MULTI_WRITER]
I0309 08:28:52.296125       1 driver.go:81] Enabling node service capability: [STAGE_UNSTAGE_VOLUME]
I0309 08:28:52.296127       1 driver.go:81] Enabling node service capability: [GET_VOLUME_STATS]
I0309 08:28:52.296129       1 driver.go:98] Enabling controller service capability: [CREATE_DELETE_VOLUME]
I0309 08:28:52.296132       1 driver.go:98] Enabling controller service capability: [LIST_VOLUMES]
I0309 08:28:52.296135       1 driver.go:98] Enabling controller service capability: [PUBLISH_UNPUBLISH_VOLUME]
I0309 08:28:52.296203       1 cloudconfig.go:58] Unable to get refresh token: [open /etc/kubernetes/vcloud/basic-auth/refreshToken: no such file or directory]
I0309 08:28:52.296229       1 cloudconfig.go:91] Using username/secret based credentials.
I0309 08:28:52.296247       1 auth.go:44] Using VCD OpenAPI version [36.0]
I0309 08:28:52.892622       1 client.go:185] Client is sysadmin: [false]
I0309 08:28:52.892641       1 main.go:146] Using ClusterID [] from env since config has an empty string
I0309 08:28:52.892645       1 driver.go:113] Driver setup called
I0309 08:28:52.892649       1 driver.go:122] Skipping RDE CSI section upgrade as invalid RDE: []
I0309 08:28:52.892782       1 driver.go:194] Listening for connections on address: "//var/lib/csi/sockets/pluginproxy/csi.sock"
I0309 08:28:53.394110       1 driver.go:168] GRPC call: [/csi.v1.Identity/Probe]: [&csi.ProbeRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.394138       1 identity.go:33] Probe: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:28:53.395349       1 driver.go:168] GRPC call: [/csi.v1.Identity/GetPluginInfo]: [&csi.GetPluginInfoRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.395364       1 identity.go:38] GetPluginInfo: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:28:53.395949       1 driver.go:168] GRPC call: [/csi.v1.Identity/GetPluginCapabilities]: [&csi.GetPluginCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.396065       1 identity.go:48] GetPluginCapabilities: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:28:53.396613       1 driver.go:168] GRPC call: [/csi.v1.Controller/ControllerGetCapabilities]: [&csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.396630       1 controller.go:357] ControllerGetCapabilities: called with args [csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.397092       1 driver.go:168] GRPC call: [/csi.v1.Controller/ControllerGetCapabilities]: [&csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.397118       1 controller.go:357] ControllerGetCapabilities: called with args [csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.510360       1 driver.go:168] GRPC call: [/csi.v1.Identity/Probe]: [&csi.ProbeRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.510393       1 identity.go:33] Probe: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:28:53.510804       1 driver.go:168] GRPC call: [/csi.v1.Identity/GetPluginInfo]: [&csi.GetPluginInfoRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.510820       1 identity.go:38] GetPluginInfo: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:28:53.511088       1 driver.go:168] GRPC call: [/csi.v1.Identity/GetPluginCapabilities]: [&csi.GetPluginCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.511104       1 identity.go:48] GetPluginCapabilities: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:28:53.511442       1 driver.go:168] GRPC call: [/csi.v1.Controller/ControllerGetCapabilities]: [&csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:28:53.511457       1 controller.go:357] ControllerGetCapabilities: called with args [csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]

node

I0309 08:32:44.471878       1 driver.go:57] Driver: [named-disk.csi.cloud-director.vmware.com] Version: [main-branch]
I0309 08:32:44.471930       1 driver.go:68] Adding volume capability [SINGLE_NODE_WRITER]
I0309 08:32:44.471933       1 driver.go:68] Adding volume capability [SINGLE_NODE_READER_ONLY]
I0309 08:32:44.471935       1 driver.go:68] Adding volume capability [MULTI_NODE_READER_ONLY]
I0309 08:32:44.471939       1 driver.go:68] Adding volume capability [MULTI_NODE_SINGLE_WRITER]
I0309 08:32:44.471941       1 driver.go:68] Adding volume capability [MULTI_NODE_MULTI_WRITER]
I0309 08:32:44.471946       1 driver.go:81] Enabling node service capability: [STAGE_UNSTAGE_VOLUME]
I0309 08:32:44.471950       1 driver.go:81] Enabling node service capability: [GET_VOLUME_STATS]
I0309 08:32:44.471953       1 driver.go:98] Enabling controller service capability: [CREATE_DELETE_VOLUME]
I0309 08:32:44.471955       1 driver.go:98] Enabling controller service capability: [LIST_VOLUMES]
I0309 08:32:44.471957       1 driver.go:98] Enabling controller service capability: [PUBLISH_UNPUBLISH_VOLUME]
I0309 08:32:44.472589       1 cloudconfig.go:58] Unable to get refresh token: [open /etc/kubernetes/vcloud/basic-auth/refreshToken: no such file or directory]
I0309 08:32:44.472623       1 cloudconfig.go:91] Using username/secret based credentials.
I0309 08:32:44.492134       1 auth.go:44] Using VCD OpenAPI version [36.0]
I0309 08:32:45.536693       1 client.go:185] Client is sysadmin: [false]
I0309 08:32:45.536750       1 main.go:146] Using ClusterID [] from env since config has an empty string
I0309 08:32:45.536764       1 driver.go:113] Driver setup called
I0309 08:32:45.536781       1 driver.go:118] Skipping RDE CSI section upgrade as upgradeRde flag is false
I0309 08:32:45.537543       1 driver.go:194] Listening for connections on address: "//csi/csi.sock"
I0309 08:32:45.746071       1 driver.go:168] GRPC call: [/csi.v1.Identity/GetPluginInfo]: [&csi.GetPluginInfoRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:32:45.746114       1 identity.go:38] GetPluginInfo: called with args [{XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}]
I0309 08:32:46.787640       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetInfo]: [&csi.NodeGetInfoRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:11.682576       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:11.682611       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:11.683991       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetVolumeStats]: [&csi.NodeGetVolumeStatsRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", VolumePath:"/var/lib/kubelet/pods/f0e01227-1fbd-4d84-81f8-107deb81c685/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", StagingTargetPath:"", XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:11.684112       1 node.go:376] NodeGetVolumeStats called with req: &csi.NodeGetVolumeStatsRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", VolumePath:"/var/lib/kubelet/pods/f0e01227-1fbd-4d84-81f8-107deb81c685/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", StagingTargetPath:"", XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:49.668587       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeUnpublishVolume]: [&csi.NodeUnpublishVolumeRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", TargetPath:"/var/lib/kubelet/pods/f0e01227-1fbd-4d84-81f8-107deb81c685/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:49.670000       1 node.go:343] Attempting to unmount pod mount dir [/var/lib/kubelet/pods/f0e01227-1fbd-4d84-81f8-107deb81c685/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount].
time="2023-03-09T08:34:49Z" level=info msg="unmount command" cmd=umount path="/var/lib/kubelet/pods/f0e01227-1fbd-4d84-81f8-107deb81c685/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount"
I0309 08:34:50.168982       1 node.go:348] NodeUnpublishVolume successful for disk [pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574] at mount dir [/var/lib/kubelet/pods/f0e01227-1fbd-4d84-81f8-107deb81c685/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount]
I0309 08:34:50.224866       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:50.224903       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:50.225725       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeUnstageVolume]: [&csi.NodeUnstageVolumeRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount", XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:50.227166       1 node.go:195] Path [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount] is not mounted. Hence assuming already unmounted.
I0309 08:34:54.642644       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:54.642685       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:54.645302       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:54.645336       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:54.645843       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:54.645868       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:54.694157       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeStageVolume]: [&csi.NodeStageVolumeRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", PublishContext:map[string]string{"diskID":"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", "diskUUID":"6000c296-fd65-217a-373a-48cd1ac7df6d", "filesystem":"ext4", "vmID":"ocean-worker-gg97r"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount", VolumeCapability:(*csi.VolumeCapability)(0xc0000aa480), Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:582e1d48-6419-4242-beb0-b6ed02347622", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"1675958001585-8081-named-disk.csi.cloud-director.vmware.com", "storageProfile":"FAST"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:54.694215       1 node.go:58] NodeStageVolume: called with args [csi.NodeStageVolumeRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", PublishContext:map[string]string{"diskID":"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", "diskUUID":"6000c296-fd65-217a-373a-48cd1ac7df6d", "filesystem":"ext4", "vmID":"ocean-worker-gg97r"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount", VolumeCapability:(*csi.VolumeCapability)(0xc0000aa480), Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:582e1d48-6419-4242-beb0-b6ed02347622", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"1675958001585-8081-named-disk.csi.cloud-director.vmware.com", "storageProfile":"FAST"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:54.860852       1 node.go:433] CSI node plugin rescanned the scsi host [host0] successfully
I0309 08:34:55.029069       1 node.go:433] CSI node plugin rescanned the scsi host [host1] successfully
I0309 08:34:55.630204       1 node.go:433] CSI node plugin rescanned the scsi host [host2] successfully
I0309 08:34:56.218261       1 node.go:433] CSI node plugin rescanned the scsi host [host3] successfully
I0309 08:34:56.218461       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:00:07.1-ata-1] => [/dev/sr0]
I0309 08:34:57.756551       1 node.go:486] Encountered error while processing file [/dev/sr0]: [exit status 1]
I0309 08:34:57.756580       1 node.go:487] Please check if the `disk.enableUUID` parameter is set to 1 for the VM in VC config.
I0309 08:34:57.756604       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0] => [/dev/sda]
I0309 08:34:57.759697       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0-part1] => [/dev/sda1]
I0309 08:34:57.762090       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0-part2] => [/dev/sda2]
I0309 08:34:57.764581       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0-part3] => [/dev/sda3]
I0309 08:34:57.766895       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0] => [/dev/sdb]
I0309 08:34:57.769312       1 node.go:508] Obtained matching disk [/dev/sdb]
I0309 08:34:57.770768       1 node.go:155] Mounting device [/dev/sdb] to folder [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount] of type [ext4] with flags [[rw]]
time="2023-03-09T08:34:57Z" level=info msg="attempting to mount disk" fsType=ext4 options="[rw defaults]" source=/dev/sdb target=/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount
time="2023-03-09T08:34:57Z" level=info msg="mount command" args="-t ext4 -o rw,defaults /dev/sdb /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount" cmd=mount
I0309 08:34:57.775425       1 node.go:162] Mounted device [/dev/sdb] at path [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount] with fs [ext4] and options [[rw]]
I0309 08:34:57.775446       1 node.go:165] NodeStageVolume successfully staged at [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount] for device [/dev/sdb]
I0309 08:34:57.776131       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:57.776163       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:57.782495       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:57.782524       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:57.783074       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:57.783112       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:57.784984       1 driver.go:168] GRPC call: [/csi.v1.Node/NodePublishVolume]: [&csi.NodePublishVolumeRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", PublishContext:map[string]string{"diskID":"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", "diskUUID":"6000c296-fd65-217a-373a-48cd1ac7df6d", "filesystem":"ext4", "vmID":"ocean-worker-gg97r"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount", TargetPath:"/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", VolumeCapability:(*csi.VolumeCapability)(0xc000272280), Readonly:false, Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:582e1d48-6419-4242-beb0-b6ed02347622", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"1675958001585-8081-named-disk.csi.cloud-director.vmware.com", "storageProfile":"FAST"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:34:57.785138       1 node.go:214] NodePublishVolume: called with args csi.NodePublishVolumeRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", PublishContext:map[string]string{"diskID":"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", "diskUUID":"6000c296-fd65-217a-373a-48cd1ac7df6d", "filesystem":"ext4", "vmID":"ocean-worker-gg97r"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount", TargetPath:"/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", VolumeCapability:(*csi.VolumeCapability)(0xc000272280), Readonly:false, Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:582e1d48-6419-4242-beb0-b6ed02347622", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"1675958001585-8081-named-disk.csi.cloud-director.vmware.com", "storageProfile":"FAST"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:34:57.785302       1 node.go:275] Ensured that dir [/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount] exists.
I0309 08:34:57.787966       1 node.go:297] Mounting dir [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount] to folder [/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount] with flags [[rw]]
time="2023-03-09T08:34:57Z" level=info msg="mount command" args="-o bind /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount /var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount" cmd=mount
time="2023-03-09T08:34:57Z" level=info msg="mount command" args="-o remount,rw /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount /var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount" cmd=mount
I0309 08:34:57.789965       1 node.go:304] Mounted dir [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount] at path [/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount] with options [[rw]]
I0309 08:34:57.789986       1 node.go:306] NodeStageVolume successfully staged at [/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount] for host dir [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/46eb3fa211567578f78bd6e36b2770b54c8a41a1c6eb90319e7e4e25c1449d6d/globalmount]
I0309 08:36:11.336961       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetCapabilities]: [&csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:36:11.337003       1 node.go:357] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0309 08:36:11.338115       1 driver.go:168] GRPC call: [/csi.v1.Node/NodeGetVolumeStats]: [&csi.NodeGetVolumeStatsRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", VolumePath:"/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", StagingTargetPath:"", XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]
I0309 08:36:11.338642       1 node.go:376] NodeGetVolumeStats called with req: &csi.NodeGetVolumeStatsRequest{VolumeId:"pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574", VolumePath:"/var/lib/kubelet/pods/2f82df96-c822-4e09-a74b-b7462c018292/volumes/kubernetes.io~csi/pvc-9fbc9f74-b17d-41a6-a8e9-14dd6f558574/mount", StagingTargetPath:"", XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}

disk

Issue

If applicable, please reference the relevant issue

Fixes #



VApp and node names are the same
@vmwclabot
Member

@Negashev, you must sign our contributor license agreement before your changes are merged. Click here to sign the agreement. If you are a VMware employee, read this for further instruction.

@vmwclabot
Member

@Negashev, we have received your signed contributor license agreement. The review is usually completed within a week, but may take longer under certain circumstances. Another comment will be added to the pull request to notify you when the merge can proceed.

@vmwclabot
Member

@Negashev, VMware has rejected your signed contributor license agreement. The merge can not proceed until the agreement has been resigned. Click here to resign the agreement. Reject reason:

VMware is not able to accept your contribution at this time

@ymo24
Contributor

ymo24 commented Feb 14, 2023

Hi, could you re-edit this PR to fill in the pull-request template? Please make sure to attach a testing screenshot.

@vmwclabot
Member

@Negashev, we have received your signed contributor license agreement. The review is usually completed within a week, but may take longer under certain circumstances. Another comment will be added to the pull request to notify you when the merge can proceed.

@vmwclabot
Member

@Negashev, VMware has rejected your signed contributor license agreement. The merge can not proceed until the agreement has been resigned. Click here to resign the agreement. Reject reason:

VMware is not able to accept your contribution at this time

@vmwclabot
Member

@Negashev, we have received your signed contributor license agreement. The review is usually completed within a week, but may take longer under certain circumstances. Another comment will be added to the pull request to notify you when the merge can proceed.

@vmwclabot
Member

@Negashev, VMware has rejected your signed contributor license agreement. The merge can not proceed until the agreement has been resigned. Click here to resign the agreement. Reject reason:

VMware is not able to accept your contribution at this time.

@Negashev
Author

Negashev commented May 9, 2023

@ymo24

Hello! @vmwclabot is blocking my PR.

@vmwclabot
Member

@Negashev, we have received your signed contributor license agreement. The review is usually completed within a week, but may take longer under certain circumstances. Another comment will be added to the pull request to notify you when the merge can proceed.

@vmwclabot
Member

@Negashev, VMware has rejected your signed contributor license agreement. The merge can not proceed until the agreement has been resigned. Click here to resign the agreement. Reject reason:

VMware is not able to accept your contribution at this time. Apologies.
