
KEP for TTL-after-finished controller #2552

Merged: 1 commit into kubernetes:master on Aug 23, 2018

Conversation

@janetkuo (Member) commented Aug 16, 2018

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. labels Aug 16, 2018
@janetkuo (Member, Author) commented:

/assign @tnozicka @kow3ns @enisoc

cc @kubernetes/sig-apps-pr-reviews

@kow3ns (Member) commented Aug 16, 2018:

/approve

@kow3ns (Member) commented Aug 16, 2018:

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 16, 2018
Because the Job controller depends on Pods existing in order to work correctly. In
Job validation, `ttlSecondsAfterFinished` of its pod template shouldn't be set, to
prevent users from breaking their Jobs. Users should set TTL seconds on a Job,
not on the Pods owned by a Job.

will job validation prevent this? how does this work with podgc today? does that mess up jobs?

@janetkuo (Member, Author) replied Aug 17, 2018:

When we introduce TTLSecondsAfterFinished to Pods, we will implement Job validation so that a Job won't create Pods with TTLSecondsAfterFinished set, and therefore won't break those Jobs. Once TTLSecondsAfterFinished is implemented for Pods, it can replace PodGC.


Are there other higher level resources that should make sure to validate this since PodTemplateSpec is fairly common?

w.r.t. validation I guess this is exclusive with restartPolicy: Always, often used with workload controllers.

@liggitt (Member) commented Aug 22, 2018:

it's common for higher level resources to call generic pod spec validation, then add their own restrictions on top:

Job makes sure the pod spec specifies restart policy of "never":
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/batch/validation/validation.go#L121-L127

ReplicationController makes sure the pod spec specifies restart policy of "always":
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/validation/validation.go#L3969-L3972

agree that pod spec validation should only allow this to be set on a pod spec for restart policies that make sense
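The layering liggitt describes (generic pod spec validation, with each workload controller adding its own restrictions on top) could be sketched as follows. All type and function names here are hypothetical simplifications for illustration, not the actual Kubernetes validation code:

```go
package main

import "fmt"

// PodSpec is a minimal stand-in for the real Kubernetes pod spec.
type PodSpec struct {
	RestartPolicy           string // "Always", "OnFailure", or "Never"
	TTLSecondsAfterFinished *int32 // hypothetical Pod-level TTL field discussed above
}

// validatePodSpec sketches the generic rule: a Pod-level TTL only makes
// sense for restart policies under which a Pod can actually finish.
func validatePodSpec(spec PodSpec) []string {
	var errs []string
	if spec.TTLSecondsAfterFinished != nil && spec.RestartPolicy == "Always" {
		errs = append(errs, "ttlSecondsAfterFinished may not be set when restartPolicy is Always")
	}
	return errs
}

// validateJobPodTemplate layers the Job-specific restrictions on top:
// a Job's pod template must use OnFailure/Never and must not carry its
// own TTL, so the Job controller's Pods are never deleted out from under it.
func validateJobPodTemplate(template PodSpec) []string {
	errs := validatePodSpec(template)
	if template.RestartPolicy == "Always" {
		errs = append(errs, "restartPolicy must be OnFailure or Never for a Job")
	}
	if template.TTLSecondsAfterFinished != nil {
		errs = append(errs, "ttlSecondsAfterFinished may not be set on a Job's pod template")
	}
	return errs
}

func main() {
	ttl := int32(60)
	fmt.Println(validateJobPodTemplate(PodSpec{RestartPolicy: "Never", TTLSecondsAfterFinished: &ttl}))
}
```

The real validation code returns structured errors with precise field paths rather than plain strings; strings keep the sketch short.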

@janetkuo (Member, Author) replied:

Thanks for the advice. Added validation for restart policy.

time.

Mitigations:
* In Kubernetes, it's required to run NTP on all nodes ([#6159][]) to avoid time

does that ensure clocks are correct inside the controller manager container?

@janetkuo (Member, Author) replied:

Clocks aren't always correct, but the difference should be small. We can also add validation to disallow non-zero TTLs shorter than 1 minute.
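The mitigation suggested here (rejecting very small non-zero TTLs, which are the values most affected by clock skew between machines) could look something like the following sketch. `validateTTL` is a hypothetical helper, not part of the KEP:

```go
package main

import (
	"errors"
	"fmt"
)

// validateTTL sketches the suggested clock-skew mitigation: a TTL of 0
// (delete immediately) is allowed, but other values under one minute
// are rejected because skew could dominate such short windows.
func validateTTL(ttlSeconds int32) error {
	if ttlSeconds != 0 && ttlSeconds < 60 {
		return errors.New("ttlSecondsAfterFinished must be 0 or at least 60 seconds")
	}
	return nil
}

func main() {
	fmt.Println(validateTTL(30))   // rejected: non-zero and under one minute
	fmt.Println(validateTTL(3600)) // accepted
}
```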

This can be promoted to beta when it satisfies users' need for cleaning up
finished resource objects, without regressions.

This will be promoted to GA once it has spent a sufficient amount of time as beta


From the previous discussions on the google doc:

Metadata is used by all resources, but not all resources are finishable (only Pods/Jobs), so we want to implement this for Pods/Jobs first. If users like this feature and more use cases come from custom resources, we can move it to metadata. To avoid the overhead, we can either remove this alpha field from spec and add a field in metadata, or generalize by keeping it in the spec of each finishable resource.

I am not sure I want us promoting something to beta/GA if we know that we will replace it with a generic mechanism. We should flesh out the path here a bit more in that regard.

@janetkuo (Member, Author) replied Aug 21, 2018:

This will only be promoted to beta after this decision is finalized, i.e., whether or not we replace it with a generic mechanism. Adding this feature as alpha first is essential to help us gather feedback before deciding whether to generalize it.

@tnozicka replied Aug 22, 2018:

> This will only be promoted to beta after this decision is finalized, i.e. whether we decide to replace it with a generic mechanism or not to do it. Adding this feature as alpha first is essential to help us gather feedback before deciding to generalize it or not.

I agree we should try this as alpha to gather feedback. I was asking to formalize what you just said as beta graduation criteria in the proposal.

@janetkuo (Member, Author) replied:

Thanks, updated graduation criteria.

@tnozicka left a comment:

nits, few questions, looks good overall

higher-level resource (e.g. CronJob for Jobs or Job for Pods), or owned by some
other resources, it's difficult for the users to clean them up automatically,
and those Jobs and Pods can accumulate and overload a Kubernetes cluster very
easily. Even if we can avoid the overload issue by implementing a cluster-wide


> avoid the overload issue by implementing a cluster-wide (global) resource quota

I was told yesterday we already have it with `count/*` quota. The non-terminated count for Pods applies only to the specific `count/pods` quota. Apologies if I got you confused by my comment on the google doc preceding this.

@janetkuo (Member, Author) replied:

We currently have ResourceQuota, which sets quota restrictions per namespace. Is there a per cluster / global quota available today?


yeah, seems like we have only quota per namespace; the best you can do to protect etcd/the cluster now is that the cluster object count is bounded by the per-namespace quota multiplied by the number of namespaces.


```go
type JobSpec struct {
	// ttlSecondsAfterFinished limits the lifetime of a Job that has finished
	// execution. +optional
	TTLSecondsAfterFinished *int32
	// ... other fields elided in this hunk ...
}
```


nit: s/ttlSecondsAfterFinished/TTLSecondsAfterFinished (applies globally)

A Member replied:

doc is supposed to reference the serialized name so the generated doc matches what people see in the API


Thanks for the clarification. I wasn't aware the API is an exception to the Go convention that every doc comment should begin with the name of the type.


1. Check its `.status.conditions` to see if it has finished (`Complete` or
`Failed`). If it hasn't finished, do nothing.
1. Otherwise, if the Job has finished, check if the Job's
`.spec.ttlSecondsAfterFinished` field is set. Do nothing if the TTL field is unset.


Should this be logically the first step? Does it make sense to check the condition if TTLSecondsAfterFinished isn't set?

@janetkuo (Member, Author) replied:

Since both steps merely check fields of the Job, the order is interchangeable. We don't need to do any further computation if either the Job hasn't finished or its TTL isn't set.

@tnozicka replied Aug 24, 2018:

I think there is an important distinction here. .spec.ttlSecondsAfterFinished is the field that allows this controller to process those jobs. This is what discriminates if a Job is added to a queue or not and we should be very explicit about this. No other action should be taken on a Job if this field is not set. Logically it doesn't make sense to check a status for a Job if you shouldn't be managing it at all. Controllers/code can evolve and we should structure it well from the start even though now the read-only actions don't cause issues.

@janetkuo (Member, Author) replied:

To clarify, Jobs are only added to the workqueue (for cleanup) when they have (1) finished and (2) a TTL set. No action is taken unless both conditions are met.
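The gate described in this thread, where a Job is enqueued for cleanup only when both conditions hold, can be sketched with hypothetical simplified types (not the actual controller code):

```go
package main

import "fmt"

// Minimal stand-ins for the real Job API types.
type JobCondition struct {
	Type   string // "Complete" or "Failed"
	Status string // "True" or "False"
}

type Job struct {
	TTLSecondsAfterFinished *int32
	Conditions              []JobCondition
}

// isFinished reports whether the Job has a Complete or Failed
// condition with status True.
func isFinished(j Job) bool {
	for _, c := range j.Conditions {
		if (c.Type == "Complete" || c.Type == "Failed") && c.Status == "True" {
			return true
		}
	}
	return false
}

// needsCleanup is the gate discussed above: a Job is enqueued for TTL
// cleanup only if it has a TTL set and has finished. Checking the TTL
// first means unmanaged Jobs are skipped without inspecting status.
func needsCleanup(j Job) bool {
	return j.TTLSecondsAfterFinished != nil && isFinished(j)
}

func main() {
	ttl := int32(3600)
	done := Job{TTLSecondsAfterFinished: &ttl, Conditions: []JobCondition{{Type: "Complete", Status: "True"}}}
	fmt.Println(needsCleanup(done)) // prints true
}
```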

@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 22, 2018
@janetkuo (Member, Author) commented:

Thanks for the review. Rebased and addressed comments. PTAL

@kow3ns (Member) commented Aug 23, 2018:

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 23, 2018
@calebamiles (Contributor) commented:

/approve

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: calebamiles, kow3ns

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 23, 2018
@k8s-ci-robot k8s-ci-robot merged commit 9805a93 into kubernetes:master Aug 23, 2018
@smarterclayton (Contributor) commented:

@stevekuznetsov as someone who has actually implemented a variant of this for a CI cluster, can you comment about your use case here and give some concrete feedback?
