
Enable churn for our node-density tests #93

Open · jtaleric wants to merge 3 commits into main from node-density-churn
Conversation

@jtaleric (Contributor) commented Jul 23, 2024

Type of change

  • Refactor
  • New feature
  • Bug fix
  • Optimization
  • Documentation Update

Description

Enabling churn (default: false) for our node-density tests in kube-burner-ocp.
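For reference, churn in kube-burner is driven by per-job settings. A sketch of what enabling it for node-density might look like, assuming the field names mirror the upstream kube-burner job config (the values here are illustrative, not the PR's defaults):

```yaml
# Illustrative kube-burner job fragment; values are examples only.
jobs:
  - name: node-density
    jobIterations: 200
    churn: true                       # this PR exposes churn (default: false)
    churnDuration: 10m                # how long the churning phase runs
    churnPercent: 10                  # percentage of iterations churned per round
    churnDelay: 2m                    # wait between churn rounds
    churnDeletionStrategy: default    # "default" deletes namespaces; "gvr" deletes objects
```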

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.

@jtaleric jtaleric requested review from a team as code owners July 23, 2024 20:25
@jtaleric jtaleric force-pushed the node-density-churn branch from d85a6a3 to d80b722 on July 24, 2024 11:16
@jtaleric jtaleric requested a review from rsevilla87 July 24, 2024 11:41
@jtaleric (Contributor, Author)

@rsevilla87 / @vishnuchalla any feedback?

@rsevilla87 (Member) left a comment

@jtaleric, when using churnDeletionStrategy: default, kube-burner will remove the node-density namespace. Maybe we can consider using gvr to have better control over how the workload behaves?

@rsevilla87 rsevilla87 self-requested a review July 24, 2024 12:39
@rsevilla87 (Member) commented Jul 24, 2024

Also, another thing to keep in mind: the workload usually creates just one namespace (in standard node-density, at least), and churnPercent is calculated from the number of created namespaces. So for standard node-density, the churning stage will just remove that one namespace. Is that what we're pursuing with this change?
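To make the concern above concrete, a hypothetical config illustrating the edge case (field names from the upstream kube-burner job config; values illustrative):

```yaml
# With standard node-density, everything lands in one namespace, so any
# churnPercent >= 1 still resolves to deleting that single namespace per round.
jobs:
  - name: node-density
    jobIterations: 100            # iterations create pods, not namespaces
    namespacedIterations: false   # one shared namespace (illustrative)
    churn: true
    churnPercent: 10              # 10% of 1 namespace still rounds up to 1
```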

@jtaleric (Contributor, Author)

Let me know if these changes address your concerns @vishnuchalla / @rsevilla87 . Thanks!

@afcollins (Contributor)

I believe the concern is that the churn logic in kube-burner uses namespaces as the unit of deletion.

We could achieve the same effect by setting iterationsPerNamespace=1, as we do when using churn on the cni and heavy workloads. Since using churn with these workloads changes the workload as we understand it, changing iterationsPerNamespace may be fine for this purpose.

I think we could add these flags at this time to make them available in kube-burner-ocp.
I created kube-burner/kube-burner#673 to track that refactoring.
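The workaround described above can be sketched as a config tweak; iterationsPerNamespace is the upstream kube-burner knob, and the values here are illustrative:

```yaml
# Sketch: make each iteration its own namespace so the namespace-based churn
# logic deletes one iteration's worth of objects at a time.
jobs:
  - name: node-density
    jobIterations: 200
    namespacedIterations: true
    iterationsPerNamespace: 1   # 1 iteration == 1 namespace
    churn: true
    churnPercent: 10            # now churns 10% of 200 namespaces per round
```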

@jtaleric (Contributor, Author)

> I believe the concern is that the churn logic in kube-burner uses namespaces as the unit of deletion. […]

@afcollins I was under the impression that moving to gvr vs. default for the churn strategy would help with this?

> churnDeletionStrategy: Churn deletion strategy to apply, default or gvr (default churns namespaces; gvr churns objects within namespaces). Type: String. Default: default

@rsevilla87 / @vishnuchalla, can someone confirm the difference between gvr and default?

@afcollins (Contributor)

Oh! I see now that, yes, the gvr deletion strategy does not blindly delete the entire namespace at once. But I don't see that it tracks the resources that make up an iteration; it still deletes a namespace as a unit, just in a different way.

@vishnuchalla (Contributor)

> @afcollins I was under the impression that moving to gvr vs. default for the churn strategy would help with this? […] can someone confirm the difference between gvr and default?

@jtaleric gvr deletes all the individual resources in a namespace with the configured QPS and Burst, instead of directly deleting the namespace (which would be similar to oc delete ns namespace, i.e. the default strategy). So it still deletes the entire namespace, just in a less intensive way.
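A hedged sketch of pairing the gvr strategy with client-side rate limits (qps and burst are kube-burner job-level fields; values illustrative):

```yaml
# Sketch: "gvr" deletes the objects inside each churned namespace one by one,
# paced by the client-side rate limits, rather than issuing a single
# namespace deletion.
jobs:
  - name: node-density
    qps: 20       # client queries per second
    burst: 20     # client burst allowance
    churn: true
    churnDeletionStrategy: gvr
```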

Regarding iterations: yes, with the latest commit in this PR we can achieve that behavior by treating each namespace as an iteration (similar to other workloads). Hopefully that serves your purpose. If not, and if you strongly wish to have all the iterations within a single namespace, then we might have to think about something like kube-burner/kube-burner#673. But before that, I would like to understand why we want all the iterations in a single namespace.

@jtaleric (Contributor, Author)

> @jtaleric gvr deletes all the individual resources in a namespace with configured QPS and Burst […] I would like to understand why do we want all the iterations in a single namespace?

For my very narrowly focused work, having 1 iteration == 1 namespace does suffice.

github-actions (bot)

This pull request has become stale and will be closed automatically within 7 days.

@github-actions github-actions bot added the stale label Oct 27, 2024
@github-actions github-actions bot closed this Nov 3, 2024
@vishnuchalla vishnuchalla reopened this Nov 3, 2024
@github-actions github-actions bot removed the stale label Nov 4, 2024
@memodi commented Nov 7, 2024

@jtaleric thanks for this PR. I think this would be very helpful to exercise some of the network observability components, like flowlogs-pipeline.

Joe Talerico added 3 commits December 16, 2024 15:21

  • Enabling churn (default: false) for our node-density tests in kube-burner-ocp
  • Updating node-density based on feedback
  • Adding a missing env var

Signed-off-by: Joe Talerico <[email protected]>
Labels: none yet
Projects: none yet
5 participants