handle the restartPolicy for play kube and generate kube #7671
Conversation
Force-pushed from dfaf5c5 to 1687af6
case v1.RestartPolicyNever:
	ctrRestartPolicy = libpod.RestartPolicyNo
default: // Default to Always
	ctrRestartPolicy = libpod.RestartPolicyAlways
Is this the default? I remember the upstream issue mentioning OnFailure was the default for Kube. @haircommander Would you happen to know this?
[root@k8s-m1 ~]# kubectl explain pod.spec.restartPolicy
KIND:     Pod
VERSION:  v1

FIELD:    restartPolicy <string>

DESCRIPTION:
     Restart policy for all containers within the pod. One of Always, OnFailure,
     Never. Default to Always. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy

Default to Always.
@mheon, if you are referring to my issue, then I took this info from this doc:

--restart-policy=policy
Set the systemd restart policy. The restart-policy must be one of: “no”, “on-success”, “on-failure”, “on-abnormal”, “on-watchdog”, “on-abort”, or “always”. The default policy is on-failure.

But now I see that this was the wrong doc: it describes the systemd restart policy, not the Kubernetes one. In Kubernetes the default restart policy is indeed Always. I have just updated the issue.
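To make the mapping under discussion concrete, here is a minimal sketch of the full Kubernetes-to-libpod translation. The helper name, package name, and import paths are illustrative assumptions rather than the PR's actual code; the sketch also assumes libpod exposes a RestartPolicyOnFailure constant alongside the two constants that appear in the diff above.

```go
package kubeutils

import (
	"github.com/containers/podman/v2/libpod" // assumed import path for the podman v2 tree
	v1 "k8s.io/api/core/v1"
)

// kubeRestartPolicyToLibpod is a hypothetical helper showing how a pod-level
// Kubernetes restartPolicy could be translated into a libpod restart policy.
// An unset value falls through to Always, matching the Kubernetes default.
func kubeRestartPolicyToLibpod(policy v1.RestartPolicy) string {
	switch policy {
	case v1.RestartPolicyNever:
		return libpod.RestartPolicyNo
	case v1.RestartPolicyOnFailure:
		return libpod.RestartPolicyOnFailure
	default: // v1.RestartPolicyAlways or "" (Kubernetes defaults to Always)
		return libpod.RestartPolicyAlways
	}
}
```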
/approve

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mheon, zhangguanzhang. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
test/e2e/play_kube_test.go (outdated)
Labels:        make(map[string]string),
Annotations:   make(map[string]string),
Name:          defaultPodName,
RestartPolicy: "",
We may want the default here to be never, as that matches the behavior the tests were written to expect.
got it
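As an illustration of that suggestion, here is a rough sketch of what the test's default pod template could look like with the restart policy defaulting to Never. The trimmed-down Pod struct, the getDefaultPod helper, and the "testPod" value for defaultPodName are assumptions for this sketch, not a quote of test/e2e/play_kube_test.go.

```go
package integration

// Illustrative sketch only; the real pod template in test/e2e/play_kube_test.go
// has more fields, and "testPod" for defaultPodName is an assumed value.
const defaultPodName = "testPod"

// Pod is a trimmed-down template used to render test YAML.
type Pod struct {
	Name          string
	RestartPolicy string
	Labels        map[string]string
	Annotations   map[string]string
}

// getDefaultPod returns the template with the restart policy defaulting to
// "Never", matching the behavior the existing tests expect.
func getDefaultPod() *Pod {
	return &Pod{
		Name:          defaultPodName,
		RestartPolicy: "Never",
		Labels:        make(map[string]string),
		Annotations:   make(map[string]string),
	}
}
```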
Force-pushed from 1687af6 to af08051
Could you make sure that the restart policy is also written to the yaml file if set?
A difficulty here is that Podman's restartPolicy is container-level, while Kubernetes's is pod-level. What do we do when they differ?
Restart policy for pods would be a fun RFE; I might get ambitious and code that up at some point.
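For the generate kube direction, a hedged sketch of the reverse mapping is shown below. It assumes the simplest resolution of the container-level vs. pod-level mismatch raised above, namely letting a single container's policy decide; the helper name and that design choice are illustrative, not necessarily what the PR implements.

```go
package kubeutils

import (
	"github.com/containers/podman/v2/libpod" // assumed import path, as in the earlier sketch
	v1 "k8s.io/api/core/v1"
)

// libpodRestartPolicyToKube is a hypothetical helper for the generate kube
// direction. Because Podman tracks restart policy per container while the
// Kubernetes field is per pod, this naive sketch lets one container's policy
// (e.g. the first container's) decide; containers that disagree would
// silently lose their setting.
func libpodRestartPolicyToKube(policy string) v1.RestartPolicy {
	switch policy {
	case libpod.RestartPolicyNo, "":
		return v1.RestartPolicyNever
	case libpod.RestartPolicyOnFailure:
		return v1.RestartPolicyOnFailure
	default: // always, unless-stopped, or anything else
		return v1.RestartPolicyAlways
	}
}
```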
Force-pushed from 6599559 to df68d47
Signed-off-by: zhangguanzhang <[email protected]>
Force-pushed from df68d47 to f0ccac1
It has been done.
/retest
CI had an error, but no reason was given 😥. @vrothberg @TomSweeneyRedHat PTAL
Restarted the job. Thanks for the ping!
Thanks!
Thanks @zhangguanzhang
Fixes: #7656
Signed-off-by: zhangguanzhang <[email protected]>