WIP: Basic tests to verify setup and Gluster volume #73
base: master
Conversation
Two tests are available:
- Setup test: verifies the expected number of pods and their Running status.
- CSI and Gluster volume test: creates a 1GiB Gluster volume using a Persistent Volume Claim and runs an nginx pod to verify the mount and the persisted data.

TODO:
- Dependencies need to be installed (`pytest` and `kubernetes`)
- Sleeps in tests need to be removed
- Internal modules need to be refactored
- Integrate pytest with tox and nose, similar to the project https://github.com/gluster/libgfapi-python
- Deploy these test scripts to the master node
- Create the infrastructure for running the tests

How I tested:
- Copied `/root/.kube/config` from one of the nodes (kube1) to my local machine (`/home/aravinda/.kube/config`)
- Ran the tests from the local machine:

```
cd gcs
pytest e2e
```

Signed-off-by: Aravinda VK <[email protected]>
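For context, a minimal sketch of what the setup test boils down to, assuming the `kubernetes` Python client and a local kube config; the file and function names here are hypothetical, not the PR's actual code:

```python
# e2e/test_setup.py -- a sketch only; names and structure are assumptions
from kubernetes import client, config


def get_core_api():
    """Load the local kube config and return a CoreV1Api client."""
    config.load_kube_config()
    return client.CoreV1Api()


def test_gcs_pods_are_running():
    """All pods in the gcs namespace should reach the Running phase."""
    api = get_core_api()
    pods = api.list_namespaced_pod("gcs").items
    assert pods, "no pods found in the gcs namespace"
    for pod in pods:
        assert pod.status.phase == "Running", (
            "pod %s is in phase %s" % (pod.metadata.name, pod.status.phase)
        )
```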
cnt = 0
expected = 3
for pod in pods:
    if pod.metadata.namespace == "gcs" and pod.metadata.name.startswith("gluster-kube"):
Please make the namespace a variable.
ack
op_cnt_expected = 1
for pod in pods:
    if pod.metadata.namespace == "gcs":
        if pod.metadata.name.startswith("etcd-operator"):
pod names can be variables.
op_cnt += 1
continue

if pod.metadata.name.startswith("etcd-"):
pod names can be variables.
nodeplugins_cnt += 1

if pod.metadata.name.startswith("csi-provisioner-glusterfsplugin"):
    provisioner_cnt += 1
pod names can be variables.
return client.CoreV1Api()


def gluster_pods_verify(pods):
expected pod count needs to be the input
We can have a generic function which takes namespace, pod name, and expected count as input.
good idea. ack
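A minimal sketch of such a generic helper, assuming the pod objects returned by the `kubernetes` Python client (the function name is hypothetical):

```python
def verify_pod_count(pods, namespace, name_prefix, expected):
    """Count pods in the given namespace whose names start with
    name_prefix and assert that the count matches the expected value."""
    count = sum(
        1 for pod in pods
        if pod.metadata.namespace == namespace
        and pod.metadata.name.startswith(name_prefix)
    )
    assert count == expected, (
        "expected %d %s* pods in %s, found %d"
        % (expected, name_prefix, namespace, count)
    )
```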
@@ -0,0 +1,19 @@
--- |
Can't we make use of existing examples?
From where? Sorry, I didn't understand this comment.
Currently we have example templates in https://github.com/gluster/gcs/blob/master/deploy/examples.
Can we make use of them?
Got it. Let me check if I can use it. In that example, the volume claim and pod creation are in a single YAML file.
- Also, application pods will change when we introduce more tests; we can reuse the volume claim examples.
- Once we start running application tests inside pods, those examples will look complicated for users.
We need to carefully consider the trade-off:
- Using the examples makes sure they keep working
- It creates interdependencies that may be non-obvious & potentially complicate the examples.
- I'm not sure the test pods, whatever they end up being, will be particularly useful as "examples".
Perhaps a compromise would be to have a hybrid approach where we run the examples, ensuring they continue to work, but put the bulk of the custom test pods w/ the e2e?
@@ -0,0 +1,13 @@
--- |
Same here: if a separate pod and PVC template is needed, we can separate it out in the examples folder.
Looking good.
metadata:
  name: gv1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-csi
Should be provided as spec.storageClassName: glusterfs-csi
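For reference, a sketch of building the same claim with the `kubernetes` Python client, setting the storage class in the spec rather than via the beta annotation (the defaults shown here are assumptions, not the PR's values):

```python
from kubernetes import client


def make_pvc(name="gv1", size="1Gi", storage_class="glusterfs-csi"):
    """Build a PVC that uses spec.storageClassName instead of the
    deprecated volume.beta.kubernetes.io/storage-class annotation."""
    return client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],
            storage_class_name=storage_class,
            resources=client.V1ResourceRequirements(
                requests={"storage": size},
            ),
        ),
    )
```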
# FIXME: Remove sleep
time.sleep(100)

# Read the data again to see if it is persisted
👍
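One way to drop the fixed sleep is to poll the pod status until it is Running or a timeout expires; a sketch only, with a hypothetical helper name and timeout values:

```python
import time


def wait_for_pod_running(api, namespace, name, timeout=300, interval=5):
    """Poll the pod until it reaches the Running phase or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        pod = api.read_namespaced_pod(name, namespace)
        if pod.status.phase == "Running":
            return pod
        time.sleep(interval)
    raise TimeoutError("pod %s not Running after %ds" % (name, timeout))
```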