run kubernetes csi-test suite on ceph-csi #250
csi-test's plugin sanity suite runs by acting like the Kubernetes driver services (provisioner/attacher/snapshotter, etc.); in other words, it sends the RPCs that invoke CSI plugin functionality across the supported calls. Hence it does not need a Kubernetes setup to run, nor the Kubernetes sidecars to function; it only needs a running plugin instance to send the RPCs to. As a result, it also does not pollute the CO system it is running on (in case it is used in such a manner). So, to run this against ceph-csi without mocking any interface in the ceph-csi plugins, we need the following prerequisites satisfied:
To run it in the above environment, the following parameters were used:
```yaml
---
clusterID: rookcluster
pool: replicapool
imageFormat: "2"
imageFeatures: layering
```
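For context, a parameters file like the one above can be handed to csi-sanity when pointing it at a running plugin socket. Below is a minimal sketch of a Pod doing that; the image name, socket path, and ConfigMap are placeholders assumed for illustration, and the flag names should be double-checked against the csi-test version in use:

```yaml
# Hypothetical Pod running csi-sanity against the rbdplugin socket.
# Assumed: a container image with the csi-sanity binary (placeholder
# name), the plugin socket exposed on the host, and the parameters
# file above mounted from a ConfigMap named "sanity-params".
apiVersion: v1
kind: Pod
metadata:
  name: csi-sanity
spec:
  restartPolicy: Never
  containers:
    - name: csi-sanity
      image: csi-sanity:placeholder   # hypothetical image name
      args:
        - --csi.endpoint=unix:///plugin/csi.sock
        - --csi.testvolumeparameters=/config/parameters.yaml
      volumeMounts:
        - name: socket-dir
          mountPath: /plugin
        - name: params
          mountPath: /config
  volumes:
    - name: socket-dir
      hostPath:
        path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
        type: Directory
    - name: params
      configMap:
        name: sanity-params
```

Because the suite only talks gRPC to the socket, the same invocation works on a developer machine with no cluster at all, which is the property described above.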
NOTE: The current pass rate is about 50%, with some low-hanging fruit we can fix to improve it.
Thoughts on integration:
Other frameworks that test CSI plugin functionality from a Kubernetes standpoint also help, but to test ceph-csi-specific internals, IMO we need to start here and dig down into the code for other edge conditions, rather than use broader integration test frameworks to achieve the same.
Here are the pod and RBAC manifests used against the v1.0.0 rbdplugin for a test.
As part of a broader CI possibility, I played with running minikube on TravisCI, and here is an initial run that sets up rook on a minikube instance in TravisCI. The files and code are here, and we can possibly start from this and see how to run the csi-test suite on the same next.
CircleCI config to run minikube: https://github.com/ceph/cn/blob/master/.circleci/config.yml
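For comparison, here is a minimal sketch of what such a CircleCI job could look like; the tool versions, download URLs, and the --vm-driver=none approach are assumptions based on the common pattern of that era, not copied from the linked config:

```yaml
# Hypothetical .circleci/config.yml that boots a single-node cluster
# with minikube's "none" driver, as a base for running csi-test later.
version: 2
jobs:
  minikube:
    machine: true
    steps:
      - checkout
      - run:
          name: install kubectl and minikube
          command: |
            curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl
            curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
            chmod +x kubectl minikube
            sudo mv kubectl minikube /usr/local/bin/
      - run:
          name: start minikube without a VM
          command: sudo -E minikube start --vm-driver=none
      - run:
          name: wait for the node to become Ready
          command: |
            until kubectl get nodes 2>/dev/null | grep -q ' Ready'; do sleep 5; done
workflows:
  version: 2
  ci:
    jobs:
      - minikube
```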
@ShyamsundarR - any updates?
I am not on this task. I provided information based on the runs and setup that I did to test this out, to aid adoption into the e2e framework. I agree this is a priority, as it helps keep the CSI plugin code quality higher, and it should be tracked as such.
@humblec @ShyamsundarR @poornimag @nixpanic would anyone like to take this up for the next release?
Not sure which release this will be ready for, but I'm working on this a little now. Environment for local testing:
This should allow for easy integration into the CentOS CI or other CI environments.
@nixpanic how about running this as a Kubernetes Job in E2E testing? We should be able to download the csi-sanity binary and connect to the socket where the CSI driver is running; see the sketch after the link below.
More details are in https://kubernetes.io/blog/2020/01/08/testing-of-csi-drivers/
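A rough sketch of that Job idea, assuming a placeholder download URL for the csi-sanity binary and a hostPath-exposed socket; none of this is taken from an existing manifest:

```yaml
# Hypothetical Job that fetches the csi-sanity binary and runs it
# against the driver socket, per the suggestion above. The download
# URL is a placeholder; substitute a real csi-test release asset.
apiVersion: batch/v1
kind: Job
metadata:
  name: csi-sanity
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: csi-sanity
          image: alpine:3   # any small image with wget works
          command:
            - sh
            - -c
            - |
              wget -O /usr/local/bin/csi-sanity https://example.com/csi-sanity
              chmod +x /usr/local/bin/csi-sanity
              csi-sanity --csi.endpoint=unix:///plugin/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /plugin
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: Directory
```

Running it as a Job (rather than a long-lived Pod) gives a clean pass/fail exit status that the E2E harness can check.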
Yes, we need to run csi-test with cephcsi, as long as we cover the features with it, to make sure we return proper error codes and to make sure we are not missing any other valid or invalid cases.
@nixpanic do we need this in the release-3.0.0 milestone, or can we move it out?
@nixpanic can this be done in 3.1.0, or do we need to move it out to the next release?
Moving this out of release 3.1.0.
@yati1998 is this something you would like to explore?
@humblec sure, I will take a look at this.
I'm moving this out of 3.4 in favour of #2015
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
/assign @humblec
Avoid extra RPC calls, as the request will be reconciled again to update the LastSyncTime in the status. The image needs to be promoted only once, not on every reconcile. fixes: #250 Co-authored-by: Madhu Rajanna <[email protected]> Signed-off-by: yati1998 <[email protected]>
We need to integrate ceph-csi with csi-test to test the CSI API capabilities of the driver.