controller.yaml is inconsistently using volumes to mount csi.sock - PR with potential fix #213
Comments
@frittentheke thx for the issue and fix.
@leakingtapan ... thanks for getting back to me this quickly!
for both kubelet and kube-apiserver.
Reading https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md#recommended-mechanism-for-deploying-csi-drivers-on-kubernetes over and over again, I am quite convinced that for the controller pod all that is required is a common, shared emptyDir for the driver to place its csi.sock socket in and for the helpers (csi-provisioner, ...) to talk to.
I looked at the cluster-driver-registrar example. I am convinced too. It's much cleaner this way for the controller manifest. @frittentheke thx for sending out the fix
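The shared-emptyDir pattern discussed above can be sketched roughly like this (a minimal illustration, not the exact contents of controller.yaml; container names and mount paths are assumptions):

```yaml
# Sketch: controller pod sharing csi.sock between the driver and a sidecar
# via an emptyDir. Only containers in this pod need the socket, so no
# hostPath is required.
spec:
  containers:
    - name: ebs-plugin            # the CSI driver writes csi.sock here
      volumeMounts:
        - name: socket-dir
          mountPath: /var/lib/csi/sockets/pluginproxy/
    - name: csi-provisioner       # helper sidecar talks to the driver over csi.sock
      volumeMounts:
        - name: socket-dir
          mountPath: /var/lib/csi/sockets/pluginproxy/
  volumes:
    - name: socket-dir
      emptyDir: {}
```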
BTW, I noticed you are using a t3.medium instance; this will require #178 to be implemented, since Nitro instances use NVMe for EBS volumes
@leakingtapan thanks for accepting my PR. Yeah, NVMe-type devices on Nitro instances are currently still an issue, right. The idea stated in the corresponding issue is great: do NOT rely on any udev mapping of devices. Container Linux already does things differently, and an EBS CSI driver should be fully independent of how the OS handles its device naming / mapping.
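For illustration, the udev-independent lookup hinted at above is possible because EBS NVMe controllers report the volume ID (hyphen removed) as their serial number. A Python sketch of the idea follows; the driver itself is written in Go, and the sysfs path layout used here is an assumption about Linux NVMe devices:

```python
import os

def nvme_serial(volume_id: str) -> str:
    """EBS NVMe devices expose the volume ID with the hyphen removed as
    the controller serial number, e.g. "vol-0abc123" -> "vol0abc123"."""
    return volume_id.replace("-", "", 1)

def find_device(volume_id: str, sys_block="/sys/class/block"):
    """Scan sysfs for a block device whose NVMe serial matches the EBS
    volume ID, instead of trusting udev-created device names.
    Returns the /dev path, or None if no device matches."""
    wanted = nvme_serial(volume_id)
    for dev in os.listdir(sys_block):
        serial_path = os.path.join(sys_block, dev, "device", "serial")
        try:
            with open(serial_path) as f:
                if f.read().strip() == wanted:
                    return "/dev/" + dev
        except OSError:
            continue  # not an NVMe device, or no serial attribute
    return None
```

This makes the mapping a property of the device itself rather than of whatever names the OS (or udev rules) happened to assign.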
/kind bug
What happened?
When using the provided deployment YAML files to set up the controller, the driver-registrar container crashes.
What you expected to happen?
The full pod to start up successfully and then to register the CSI driver with the kubelet.
How to reproduce it (as minimally and precisely as possible)?
Simply apply https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/1b8d9d76b5ad775845aacd7533ce28309a72e03a/deploy/kubernetes/controller.yaml
Anything else we need to know?:
I attempted a fix - see #212
Environment
AWS region: eu-central-1
EC2 type: t3.medium
OS: Container Linux
Kubernetes version (use kubectl version): 1.13.3 // 1.14.0-alpha.3
Driver version: 0.3.0-alpha - "latest"