WaitForPodsReady: a mode where jobs don't block the queue head #610

Closed
3 tasks done
Tracked by #636
ahg-g opened this issue Mar 6, 2023 · 8 comments

ahg-g (Contributor) commented Mar 6, 2023

What would you like to be added:
A mode of operation for WaitForPodsReady where jobs don't block the head of the queue, but are still suspended if their pods aren't ready after a while.

Why is this needed:
Blocking the queue until a Job is ready guarantees all-or-nothing scheduling, but it is slow at scale. Consider the case where a large number of jobs are waiting to be scheduled and suddenly lots of resources become available (e.g., a large job finishes, releasing a significant amount of resources).
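For illustration, one way such a mode could be surfaced to users is an extra flag next to the existing waitForPodsReady settings. The struct below is only a hypothetical sketch; the field names (in particular BlockAdmission) are assumptions, not the final API:

package config

import "time"

// WaitForPodsReady keeps the existing enable/timeout settings and adds a
// flag that controls whether the scheduler blocks the head of the queue
// while waiting for an admitted workload's pods to become ready.
type WaitForPodsReady struct {
    Enable         bool          // turn PodsReady-based handling on
    Timeout        time.Duration // how long pods may stay not-ready before the workload is suspended and requeued
    BlockAdmission bool          // true keeps the current blocking behavior; false lets other workloads keep being admitted
}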

Completion requirements:

This enhancement requires the following artifacts:

  • Design doc
  • API change
  • Docs update

The artifacts should be linked in subsequent comments.

ahg-g added the kind/feature label on Mar 6, 2023
alculquicondor (Contributor) commented Apr 12, 2023

We would probably start by optimistically admitting every workload and then set some kind of backoff when resources are unavailable.

Should the backoff be per flavor?

Note: not expecting an answer... just dumping my current open questions :)
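To make the "backoff per flavor" idea concrete, here is a rough, hypothetical sketch (not Kueue's actual code) of tracking an exponential backoff per resource flavor, so a flavor on which pods repeatedly fail to become ready is not retried immediately:

package scheduler

import (
    "sync"
    "time"
)

type flavorBackoff struct {
    mu       sync.Mutex
    until    map[string]time.Time // flavor name -> do not retry before this time
    failures map[string]int       // flavor name -> consecutive failures
}

func newFlavorBackoff() *flavorBackoff {
    return &flavorBackoff{
        until:    map[string]time.Time{},
        failures: map[string]int{},
    }
}

// recordFailure doubles the wait for the flavor, capped at five minutes.
func (b *flavorBackoff) recordFailure(flavor string) {
    b.mu.Lock()
    defer b.mu.Unlock()
    b.failures[flavor]++
    delay := time.Duration(1<<uint(b.failures[flavor])) * time.Second
    if delay > 5*time.Minute {
        delay = 5 * time.Minute
    }
    b.until[flavor] = time.Now().Add(delay)
}

// canTry reports whether the flavor's backoff window has expired.
func (b *flavorBackoff) canTry(flavor string) bool {
    b.mu.Lock()
    defer b.mu.Unlock()
    return time.Now().After(b.until[flavor])
}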

KunWuLuan (Member) commented:

Hi, I think we can do two more things to help solve the problem:

  1. continue admitting more workloads until there are no more resources in the cohort
  2. requeue the current workload if it waits too long for its pods to become ready (a rough sketch follows below)

What do you think? @alculquicondor @ahg-g
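A hypothetical sketch of suggestion 2 (the names are illustrative, not Kueue's actual API): decide whether an admitted workload should be suspended and requeued because its pods have not all become ready within a configured timeout.

package scheduler

import "time"

func shouldRequeue(admittedAt time.Time, allPodsReady bool, timeout time.Duration) bool {
    if allPodsReady {
        return false
    }
    // The workload has been admitted for longer than the timeout without all
    // of its pods becoming ready, so it should go back to the queue.
    return time.Since(admittedAt) > timeout
}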

KunWuLuan (Member) commented:

/assign

trasc (Contributor) commented Apr 19, 2023

Hi @KunWuLuan,

2. requeue the current workload if it waits too long for pods ready

is done in #599 / #689.

I think what we need to do is investigate the effect of dropping the s.cache.WaitForPodsReady(ctx) call in

if !s.cache.PodsReadyForAllAdmittedWorkloads(ctx) {
    log.V(5).Info("Waiting for all admitted workloads to be in the PodsReady condition")
    // Block admission until all currently admitted workloads are in
    // PodsReady condition if the waitForPodsReady is enabled
    if err := workload.UnsetAdmissionWithCondition(ctx, s.client, e.Obj, "Waiting", "waiting for all admitted workloads to be in PodsReady condition"); err != nil {
        log.Error(err, "Could not update Workload status")
    }
    s.cache.WaitForPodsReady(ctx)
    log.V(5).Info("Finished waiting for all admitted workloads to be in the PodsReady condition")
}

and continue from there.
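As a rough, illustrative sketch (not a proposed patch, and reusing the identifiers from the snippet above): once the blocking call is dropped, the workload would still get its admission unset with a "Waiting" condition, but the scheduler would move on instead of stalling the whole loop.

if !s.cache.PodsReadyForAllAdmittedWorkloads(ctx) {
    log.V(5).Info("Not all admitted workloads are in the PodsReady condition; not admitting this workload yet")
    if err := workload.UnsetAdmissionWithCondition(ctx, s.client, e.Obj, "Waiting", "waiting for all admitted workloads to be in PodsReady condition"); err != nil {
        log.Error(err, "Could not update Workload status")
    }
    // No s.cache.WaitForPodsReady(ctx) here: instead of blocking the head of
    // the queue, the scheduler continues with the next workload.
}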

KunWuLuan (Member) commented:

Hi @trasc, I think you are right. 💯

Moreover, maybe we can add a switch to let users choose whether to block admission while waiting for pods to become ready.
If it is false, we just skip all of these checks in

if s.waitForPodsReady {
    if !s.cache.PodsReadyForAllAdmittedWorkloads(ctx) {
        log.V(5).Info("Waiting for all admitted workloads to be in the PodsReady condition")
        // Block admission until all currently admitted workloads are in
        // PodsReady condition if the waitForPodsReady is enabled
        if err := workload.UnsetAdmissionWithCondition(ctx, s.client, e.Obj, "Waiting", "waiting for all admitted workloads to be in PodsReady condition"); err != nil {
            log.Error(err, "Could not update Workload status")
        }
        s.cache.WaitForPodsReady(ctx)
        log.V(5).Info("Finished waiting for all admitted workloads to be in the PodsReady condition")
    }
}

Then the other jobs can continue being admitted until resources are exhausted (a rough sketch of the switch follows below). WDYT?

@alculquicondor If you have time, you can also participate in the discussion, which will be of great help. Thank you very much. 😆 👍
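A minimal sketch of the proposed switch, assuming a hypothetical blockAdmission field on the scheduler's waitForPodsReady settings (not an actual Kueue field name):

// Only run the blocking check when the user has opted into blocking
// admission; otherwise the scheduler keeps admitting other workloads until
// the cohort's resources are exhausted.
if s.waitForPodsReady && s.blockAdmission {
    if !s.cache.PodsReadyForAllAdmittedWorkloads(ctx) {
        // ... same blocking logic as quoted above ...
    }
}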

alculquicondor (Contributor) commented:

@KunWuLuan thanks for your feedback. I currently have limited availability as I'm attending KubeCon; I'll get back to this thread next week.
If you have some time, feel free to review the open PRs listed above.

But in general, this feature should be optional.

alculquicondor (Contributor) commented:

/close
Fixed in #708

k8s-ci-robot (Contributor) commented:

@alculquicondor: Closing this issue.

In response to this:

/close
Fixed in #708

