
[RFE] Add storage control plane related scenarios to ocp wrapper #427

Closed
smalleni opened this issue Aug 12, 2023 · 21 comments · Fixed by #465
Labels: enhancement (New feature or request)

smalleni (Contributor) commented Aug 12, 2023

Is your feature request related to a problem? Please describe.

It would be good to have some scenarios that create PVCs, bind them to PVs, mount them in pods and finally reclaim them.

Describe the solution you'd like

PV/PVC-related scenarios in the OCP wrapper, with appropriate checks that a StorageClass exists in case dynamic provisioning is required.
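
For example, a single iteration of such a scenario might create something like the following pair of objects (names, sizes, and the image are placeholders, not a prescribed implementation):

```yaml
# Illustrative only: one PVC relying on dynamic provisioning, plus one pod mounting it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-sc   # omit to fall back to the cluster default StorageClass
  resources:
    requests:
      storage: 256Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
```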


smalleni added the enhancement label Aug 12, 2023
smalleni (Contributor, Author):

Related to #426

mfleader (Contributor):

Can you define what you mean by scenario?

afcollins (Contributor):

@mfleader "scenario" is just a generic term for a new workload, like one alongside those here: https://github.com/cloud-bulldozer/kube-burner/tree/master/cmd/kube-burner/ocp-config

mfleader (Contributor):

Do the pods exist before the scenario starts?

From my initial work, it seems like you cannot mount a PVC volume by patching an existing pod.

afcollins (Contributor) commented Sep 14, 2023

No. Think of it as creating everything for the workload. Attaching a volume to an existing pod isn't a valid use case; usually you create the PV/PVC first, then a pod that mounts the volume.

Since we're interested in dynamic provisioning here, we only need to create the PVCs and not the PVs. You may have to create a StorageClass that supports dynamic provisioning; you'll have to do that analysis.
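
For reference, a StorageClass that enables dynamic provisioning would look roughly like this; the provisioner name is platform-specific (ebs.csi.aws.com below is the AWS EBS CSI driver, used only as an example):

```yaml
# Example only: a StorageClass backed by a CSI driver that supports dynamic provisioning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-dynamic-sc
provisioner: ebs.csi.aws.com              # platform-specific CSI driver
volumeBindingMode: WaitForFirstConsumer   # provision/bind only once a pod is scheduled
reclaimPolicy: Delete                     # delete the PV when the PVC is reclaimed
```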

mfleader (Contributor):

Should this create a bunch of PVCs and pods (say, 100 each) and then reclaim the PVCs and delete the pods, or should it create one of each and then reclaim and delete them, over N iterations?

afcollins (Contributor):

Our current workloads create everything and take measurements during the scale-up, then delete everything as an unmeasured "garbage collection", but we are working towards measuring GC as well.

If you simply follow that pattern, then we can compare the added effect/overhead of PVC mounting on top of what we already understand reasonably well (e.g. node-density with an added PVC).

Usually creating a single pod doesn't consume enough resources for us to measure, but we could achieve that by using churn. Figuring out whether churn is beneficial here, and what settings to use, could be a second phase.
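
To make that pattern concrete, here is a rough sketch (not the final workload definition) of what such a kube-burner job could look like: create a PVC and a pod per iteration, measure the scale-up, and let the wrapper's usual end-of-run garbage collection clean up. Field names follow kube-burner's job configuration as I understand it; template paths and counts are placeholders.

```yaml
# Sketch of a possible pvc-density job; garbage collection of the created namespaces
# is left to the wrapper's end-of-run GC rather than configured here.
jobs:
  - name: pvc-density
    jobIterations: 100        # e.g. 100 namespaces, each with one PVC and one pod
    namespace: pvc-density
    namespacedIterations: true
    waitWhenFinished: true    # wait for pods (and bound PVCs) before the job ends
    churn: false              # churn could be enabled as a second phase
    objects:
      - objectTemplate: templates/pvc.yml
        replicas: 1
      - objectTemplate: templates/pod.yml
        replicas: 1
```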

mfleader (Contributor):

Should this live in cmd/kube-burner/ocp-config or in a different directory?

mfleader mentioned this issue Sep 19, 2023
afcollins (Contributor):

Yes: "PV/PVC related scenarios on OCP wrapper ..."

vishnuchalla (Collaborator) commented Sep 19, 2023

Just to outline a few points to consider while adding a new workload:

  • Is the workload configurable, and does it follow the same pattern as the other workloads from a scalability standpoint?
  • Are the newly added objects created as expected with kube-burner-specific labels? We also need to verify the count of created objects (see the label sketch right after this list).
  • Check and verify that the measurements are calculated as expected and only consider the resources that are part of this newly added workload.
  • Finally, make sure the resources added in this new scenario are properly garbage collected at the end of the workload.
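
On the second point, the created objects should carry kube-burner's per-run labels, roughly as in the snippet below (label names are assumed from the kube-burner code; the values come from the run). Those labels are what the object-count verification and the garbage collection select on.

```yaml
# Approximate shape of the labels kube-burner stamps onto every object it creates.
metadata:
  labels:
    kube-burner-job: pvc-density    # job name
    kube-burner-uuid: <run-uuid>    # per-run UUID used to select, count, and GC the objects
```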

mfleader (Contributor) commented Sep 20, 2023

With regards to potential dynamic storage provisioners, the obvious ones seem to be these OpenShift plugins for these platforms.

Are there other platforms that we want to test that would need a dynamic storage provisioner not already available as one of the above plugins?

mfleader (Contributor):

How do the other workloads 'scale'?

mfleader (Contributor):

How are the labels supposed to be specified?

mfleader (Contributor):

How do you verify the count of objects that have been created?

vishnuchalla (Collaborator):

> With regards to potential dynamic storage provisioners, the obvious ones seem to be these OpenShift plugins for these platforms.
>
> Are there other platforms that we want to test that would need a dynamic storage provisioner not already available as one of the above plugins?

That seems like a good, manageable set of provisioners for our workload. I also like the idea of parameterizing STORAGE_PROVISIONER so the user has the flexibility to specify it based on their requirements and infrastructure.
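
As a sketch of what that parameterization could look like (STORAGE_CLASS and CLAIM_SIZE are placeholder variable names, not final flags or env vars), the PVC objectTemplate could read the storage class and claim size from kube-burner template variables:

```yaml
# Hypothetical parameterized PVC template; Iteration/Replica come from kube-burner's
# templating, while STORAGE_CLASS and CLAIM_SIZE would be user-supplied inputs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-{{.Iteration}}-{{.Replica}}
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: {{.STORAGE_CLASS}}
  resources:
    requests:
      storage: {{.CLAIM_SIZE}}    # e.g. default to something small like 256Mi
```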

vishnuchalla (Collaborator):

> How do you verify the count of objects that have been created?

https://github.com/cloud-bulldozer/kube-burner/blob/master/pkg/burner/utils.go#L82

vishnuchalla (Collaborator):

Regarding the mechanics of the other workloads, I would suggest exploring the codebase and playing around with it on a cluster to understand the common patterns and functionality across workloads/jobs. This is not something others can entirely help with; it requires some research of your own.

Docs: https://cloud-bulldozer.github.io/kube-burner/v1.7.8/

mfleader (Contributor):

I feel like we might care about the ratio of the PVC claim size to the cluster's (or node's) total or free storage, but I'm unsure how to implement that currently.

vishnuchalla (Collaborator):

Yes, it's good to set a smaller value as the default and expose the claim size as a configurable option to the end user. I see the current value is set to 1GB, which is too high.
cc: @afcollins
