[RFE] Add storage control plane related scenarios to ocp wrapper #427
Comments
Related to #426
Can you define what you mean by scenario?
@mfleader "scenario" is just a generic term for a new workload, like the ones alongside these: https://github.com/cloud-bulldozer/kube-burner/tree/master/cmd/kube-burner/ocp-config
Do the pods exist before the scenario starts? From my initial work, it seems you cannot mount a PVC volume into an existing pod by patching it.
No. Think of it as the workload creating everything itself. Attaching a volume to an existing pod isn't a valid use case; usually you create the PV/PVC first, then a pod that mounts the volume. Since we're interested in dynamic provisioning here, we only need to create the PVCs, not the PVs. You may also have to create a StorageClass that supports dynamic provisioning; you'll have to do that analysis.
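For reference, a minimal sketch of that ordering with stock Kubernetes objects (the names, size, image, and StorageClass below are placeholders, not anything this repo defines): the PVC requests storage from a StorageClass, the provisioner dynamically creates and binds a PV, and the pod mounts the claim.

```yaml
# Illustrative only: a PVC that triggers dynamic provisioning via a
# StorageClass, and a pod that mounts the resulting volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: perf-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-csi   # must support dynamic provisioning
---
apiVersion: v1
kind: Pod
metadata:
  name: perf-test-pod
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: perf-test-pvc
```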
Should this create a bunch of PVCs and pods (like 100 each) and then reclaim the PVCs and delete the pods, or should it create one of each and then reclaim and delete them over N iterations?
Our current workloads create everything and take measurements during the scale-up, then delete everything as an unmeasured "garbage collection" phase (though we are working towards measuring GC as well). If you simply follow that pattern, we can compare the added effect/overhead of PVC mounting against what we already understand reasonably well (e.g. node-density with added PVCs). Creating a single pod usually doesn't consume enough resources for us to measure, but we could get there by using churn. Figuring out whether churn is beneficial here, and what settings to use, could be a second phase; see the sketch below.
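If churn does turn out to be useful, kube-burner already exposes job-level churn knobs. A rough sketch of how a PVC-density job might enable them (the job name, template paths, and all values here are invented for illustration; check the kube-burner docs for the exact fields):

```yaml
# Rough sketch, not a working config: scale up PVC+pod pairs, then churn
# a fraction of the namespaces to generate measurable delete/recreate load.
jobs:
  - name: pvc-density
    jobIterations: 100          # scale-up phase: 100 PVC+pod pairs
    namespacedIterations: true
    objects:
      - objectTemplate: templates/pvc.yml
        replicas: 1
      - objectTemplate: templates/pod.yml
        replicas: 1
    churn: true                 # second phase: delete and recreate a slice
    churnPercent: 10            # fraction of namespaces churned per cycle
    churnDuration: 10m
    churnDelay: 1m
```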
Should this live in …
Yes: "PV/PVC related scenarios on OCP wrapper ..."
Just to outline, a few points to consider while adding a new workload: …
With regard to potential dynamic storage provisioners, the obvious ones seem to be the OpenShift plugins for these platforms. Are there other platforms we want to test that would need a dynamic storage provisioner not already available as one of those plugins?
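For illustration, a dynamically provisioning StorageClass differs across platforms mainly in its provisioner field. A hypothetical AWS example (the name and parameters are made up; the CSI driver names in the comment are the usual per-platform ones):

```yaml
# Hypothetical example: the provisioner is the per-platform CSI driver,
# e.g. ebs.csi.aws.com (AWS), disk.csi.azure.com (Azure),
# pd.csi.storage.gke.io (GCP), cinder.csi.openstack.org (OpenStack),
# csi.vsphere.vmware.com (vSphere).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: perf-test-sc
  annotations:
    # standard annotation marking the cluster default; a workload could
    # check for this before assuming dynamic provisioning is available
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  type: gp3            # provisioner-specific parameter
```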
How do the other workloads 'scale'?
How are the labels supposed to be specified?
How do you verify the count of objects that have been created?
These seem like a good and sufficient handful of provisioners for our workload. I also like the idea of parameterizing the …
https://github.com/cloud-bulldozer/kube-burner/blob/master/pkg/burner/create.go#L200
https://github.com/cloud-bulldozer/kube-burner/blob/master/pkg/burner/create.go#L207
https://github.com/cloud-bulldozer/kube-burner/blob/master/pkg/burner/utils.go#L82
Regarding the mechanics of the other workloads, I would suggest exploring the codebase and playing around with it on a cluster to understand the common patterns and functionality across workloads/jobs. This isn't something others can entirely help with; it requires some research of your own.
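To make those pointers concrete, here is a hypothetical sketch of the kind of metadata kube-burner stamps onto the objects it creates and then selects on when verifying object counts. The exact label keys should be checked against the linked lines in create.go and utils.go; these are illustrative.

```yaml
# Hypothetical sketch: labels of the kind kube-burner attaches to each
# created object (verify the exact keys against the linked create.go lines).
# Count verification then lists objects matching these labels and compares
# the result against jobIterations x replicas.
metadata:
  name: pvc-density-1
  labels:
    kube-burner-job: pvc-density    # job that created the object
    kube-burner-uuid: <run-uuid>    # unique per kube-burner run
```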
I feel like we might care about the ratio of PVC claim size to the cluster/node total (or free) storage, but I'm unsure how to implement that currently.
Yes, it's good to set a smaller value as the default and expose the claim size as a configurable option to the end user. I see the current value is set to …
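One way this could look, assuming kube-burner's Go-templated object files and per-object inputVars (claimSize and the file paths are invented names for illustration):

```yaml
# templates/pvc.yml -- sketch of a PVC template with a configurable size
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-{{.Iteration}}-{{.Replica}}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{.claimSize}}
```

```yaml
# job config excerpt: small default, overridable by the end user
objects:
  - objectTemplate: templates/pvc.yml
    replicas: 1
    inputVars:
      claimSize: 256Mi
```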
Is your feature request related to a problem? Please describe.
It would be good to have some scenarios that create PVCs, bind them to PVs, mount them in pods and finally reclaim them.
Describe the solution you'd like
PV/PVC related scenarios on the OCP wrapper, with appropriate checks that a StorageClass exists in case dynamic provisioning is required.