disable disk eviction by default #293
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: BenTheElder.
```
@@ -169,6 +181,11 @@ kind: JoinConfiguration
# no-op entry that exists solely so it can be patched
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "1"
```
cc @neolit123 @dashpole for some reason cluster up fails when this is `memory.available: "0"`. I can't find any reason that should be invalid yet though.
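For context (a hedged aside, not part of the diff above): kubelet `evictionHard` thresholds are ordinary resource quantities or percentages, so `"1"` is just a 1-byte memory threshold that effectively never triggers, not a special "disabled" flag. A minimal sketch of the two value forms, using the kubelet defaults quoted later in this thread:

```yaml
# Hedged sketch (not the PR's patch): each evictionHard signal takes either an
# absolute quantity or a percentage. "1" means "evict when less than 1 byte of
# memory is available", which in practice never fires.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "1"      # absolute quantity form; the kubelet default is "100Mi"
  imagefs.available: "15%"   # percentage form, shown for comparison (kubelet default)
```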
i cannot find where in the source 0 would be a special case.
also this seems undocumented.
we're now not setting this key, but this is still... curious
I also confirmed that a default cluster from this has configz like:
(pretty printed after the fact)
/hold
one concern here.
is this cluster passing conformance; i.e. do we have conformance tests that rely on hard eviction?
if we merge this PR we are solving some strange use case that i think we need more evidence about, but hopefully we are not breaking something else.
If this solves Mitar's problem the next thing to consider is smaller thresholds rather than none.
We do not have conformance depending on this. I don't think you could make a reasonable test for that very easily. Eviction behavior is supposed to be tuneable.
It does solve a problem for me and now I can run kind on my laptop.
Awesome! Now the question is which resource and should we adjust this back a bit... the defaults are:

```yaml
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
```
See my comment in #55.
So specifically we probably just need the following:

```yaml
# config.yaml
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "0%"
```
I think this example config is invalid YAML.
fixed, it was missing a `|`
(branch force-pushed from 7001e02 to 43dd01d, then from 43dd01d to ffb0a9f)
/hold cancel
hm, so between this and the other thread - what are the technical details on why the kubelet is "failing", and what are the conditions it fails under? i saw the user's host node has a lot of GB of images. i.e. @mitar was claiming that he has enough disk space on the host:
yet it seems to me we were passing the default thresholds.
/lgtm
lgtm
So the problem is that having a percentage as a limit might not be really reasonable for an end-user laptop. For example, I have a 2 TB drive, so 10% is 200 GB. So if I have 190 GB of free disk space (which I would claim is enough for my intended work with kind), Kubernetes would see it as under the limit.

I would say that I agree with setting the limits to 0. It is not like you are losing any data when running kind, so I do not think we should worry about available disk space. If it can run, it can. Otherwise it fails. And that is it.

Moreover, I can imagine that this can also help on CI if things there become close to full as well. One does not expect that free space on your CI worker influences whether your CI run fails or succeeds (unless it is actually out of disk space).
Those thresholds are intended for use on dedicated VMs with reserved resources; kind runs on hosts that have lots of extra noise / resource usage on them. Basically @mitar's comment above :-) #293 (comment)
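A minimal sketch (not the PR's exact change), assuming the v1alpha2 config format and kubeadmConfigPatches mechanism shown earlier in the thread, of how a user could zero out just the disk-related thresholds while leaving the kubelet's memory default in place:

```yaml
# config.yaml -- hedged sketch, not this PR's diff: disable only the
# disk-related hard-eviction thresholds; memory eviction keeps the default.
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"
```

As noted above, setting `memory.available` itself to `"0"` made cluster bring-up fail, so the memory default is left untouched here.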
/hold cancel
This won't automerge due to the dual Netlify checks we somehow have now. I think I know how this happened (one we don't control), and will follow up... Manual merge in the meantime.
hopefully resolves #55
with this PR the default configuration for kind nodes simply won't have hard eviction.