
Question: will kind delete unused images? #658

Closed
WeihanLi opened this issue Jun 26, 2019 · 13 comments
Labels
kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt.
kind/support Categorizes issue or PR as a support question.

Comments

@WeihanLi

Will kind remove unused images? If not, can I delete the images manually?

What should be cleaned up or changed:

Unused Docker images that Kubernetes no longer needs.

For example, with a deployment using revisionHistoryLimit: 0, once a new image is in use the previous image should be deleted.

Why is this needed:

If these unused Docker images are not cleaned up, disk usage grows rapidly.

@WeihanLi added the kind/cleanup label Jun 26, 2019
@BenTheElder
Member

If you delete a kind cluster, all resources on the host are released, including the disk space used by the nodes.
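
For example (assuming the default cluster name "kind"; pass --name if you created the cluster with a different one):

```bash
# Tears down the cluster's node container(s) and frees the disk space they used.
kind delete cluster --name kind
```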

@WeihanLi
Author

I want to delete those unused images without deleting the kind cluster. I tried docker exec kind-control-plane docker images, but got an error like unknown command docker:

OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"docker\": executable file not found in $PATH": unknown

@BenTheElder
Member

I want to delete those unused images without deleting the kind cluster. I tried docker exec kind-control-plane docker images, but got an error like unknown command docker

Ah. So imageGC is disabled by default with kind but you could turn it on with a kubeadm config patch.
The reason this is off is to avoid GCing the core images under disk pressure and because we don't know how much disk you intend to keep free on the host.

For a production cluster with a dedicated VM / machine this would be a terrible idea, but for kind we expect that you'll run this on your workstation, and the kubelet will see how much space is free there, which may be a much lower percentage than the eviction threshold assumed for a production setup.

We will probably ease this at some point to some non-zero but small threshold (?) ... it needs thought and experimentation / data.

Now as to why docker exec kind-control-plane docker images didn't work: confusingly, around kind v0.3.0 we switched to using CRI (containerd) inside the nodes for a few reasons, so instead you need docker exec kind-control-plane crictl images (or crictl inspecti <image> for details on a single image). We have crictl already configured so you don't need to specify the CRI endpoint.

More docs for crictl here 😅 https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
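
For example (assuming the default node name kind-control-plane; the image reference below is just a placeholder):

```bash
# List images known to containerd on the node (the crictl counterpart of
# `docker images`); the CRI endpoint is pre-configured inside kind nodes.
docker exec kind-control-plane crictl images

# Remove a specific image by name or ID once nothing is using it.
docker exec kind-control-plane crictl rmi docker.io/library/nginx:latest
```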

@BenTheElder added the kind/support label Jun 26, 2019
@WeihanLi
Author

OK, thanks @BenTheElder

@BenTheElder
Member

Is this an issue you're actively encountering, or a hypothetical? Image content storage is somewhat deduped, but...

Ideally using kind you never need to docker exec yourself, but it is of course available for debugging etc. 😅

@WeihanLi
Author

I just hit the problem yesterday... Disk usage reached 100% and some apps stopped working as before. After cleaning up some disk files, removing Docker images, and rebooting, everything worked again.

@WeihanLi
Author

I guess I should configure something related to imageGC in the kubelet.

@WeihanLi
Author

Ah. So imageGC is disabled by default with kind but you could turn it on with a kubeadm config patch.
The reason this is off is to avoid GCing the core images under disk pressure and because we don't know how much disk you intend to keep free on the host.

Are there more detailed docs? @BenTheElder

@tao12345666333
Member

tao12345666333 commented Jun 26, 2019

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
# patch the generated kubeadm config with some extra settings
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  metadata:
    name: config
  imageGCHighThresholdPercent: 90
  evictionHard:
    nodefs.available: "0%"
    nodefs.inodesFree: "0%"
    imagefs.available: "70%"
```

You can edit your config file like this. I will open a PR for this ASAP (probably this evening or tomorrow; I just finished KubeCon today and am on my way home).

EDIT: double-check the indentation; I'm typing this on my phone and formatting is hard right now.
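
Save it as e.g. kind-config.yaml (any file name works) and pass it when creating the cluster; the settings only take effect at cluster creation:

```bash
# Create a cluster using the config above (file name is arbitrary).
kind create cluster --config kind-config.yaml
```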

@WeihanLi
Author

Thanks for your help @tao12345666333

@tao12345666333
Member

Opened a PR for this: #663

@RonakPrajapatiPanamax

I guess I should configure something related to imageGC in the kubelet.

Did you use the "imageGC with kubelet" configuration patches? @BenTheElder, can we use the kubelet config patch above? I want to keep node disk usage at 50%; beyond that, all application images should be evicted.

@BenTheElder I see your comment on the other linked documentation PR that was raised. So is it possible to use this configuration in such a way that core images do not get evicted?

I just want some automated way of configuring this, so that after 50% of node disk usage, all my application-level images older than 24 hours are deleted.

@BenTheElder
Member

BenTheElder commented May 30, 2023

So is it possible to use this configuration in such a way that core images do not get evicted?

The built-in GC is not capable of doing this; it can only prevent GC of the "pause" image, and everything else is at risk.

The current discussion in SIG node is to have cri-o and containerd implement something like containerd/containerd#7944 instead of Kubernetes becoming aware of this.

Depending on your use case, this might be safe already. If you're connected to the internet and don't mind kind re-pulling public images (which will be slightly larger than the ones we pre-load), you can configure the GC at 50% of disk.
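
A rough sketch of what that could look like with current kind config versions (the exact thresholds are only an illustration, not a recommendation):

```bash
# Sketch: create a cluster whose kubelet runs image GC once disk usage passes
# 50% (the low threshold must be below the high one). v1alpha4 is the config
# version for newer kind releases; older releases used v1alpha3 as in the
# earlier comment.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  imageGCHighThresholdPercent: 50
  imageGCLowThresholdPercent: 40
EOF
```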

So after 50% of node disk usage, all my application-level images older than 24 hours should be deleted.

Kubernetes's image GC cannot do this. It can be configured to trigger at 50%, but it will not look at image age and doesn't distinguish the type of image; it's very rudimentary.

You could write a custom tool to implement this against the containerd or CRI APIs, or script executing ctr/crictl on the nodes.
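
For instance, something as simple as this run from the host (the node name, the 50% threshold, and crictl rmi --prune support in your node image are all assumptions, and it does not implement the 24-hour age filter):

```bash
#!/usr/bin/env bash
set -euo pipefail

NODE="${1:-kind-control-plane}"   # kind node container name (assumed default)
THRESHOLD=50                      # prune once root fs usage exceeds this percent

# Current usage of the node's root filesystem as an integer percentage.
usage=$(docker exec "$NODE" df --output=pcent / | tail -1 | tr -dc '0-9')

if [ "$usage" -ge "$THRESHOLD" ]; then
  echo "disk usage ${usage}% >= ${THRESHOLD}%, pruning unused images on ${NODE}"
  # Removes every image not referenced by a pod (needs a newer crictl).
  docker exec "$NODE" crictl rmi --prune
else
  echo "disk usage ${usage}% < ${THRESHOLD}%, nothing to do"
fi
```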

FWIW: your node(s) are all sharing the host's disk space, so that's also something to watch out for. Also, on some Linux environments the kubelet cannot see filesystem stats in the containerized environment, like #2524 (comment)
