crictl images prune #399
And is there any workaround for now to accomplish the same thing with a combination of commands?
There is a cleanup function in the cri-o repo: https://github.com/kubernetes-sigs/cri-o/blob/master/test/helpers.bash#L301
@feiskyer I tried the referenced function. It seems to try to delete all images, not just unused ones. Of course, `crictl images -q | xargs -n 1 crictl rmi 2>/dev/null` works as a workaround, since `crictl rmi` fails on images that are still in use.
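The workaround above can be sketched as a short script (my own sketch, not from the thread; it assumes `crictl` is on `$PATH` and relies on `crictl rmi` failing for in-use images):

```shell
#!/bin/sh
# Attempt to remove every image the runtime knows about.
# `crictl rmi` refuses to delete an image still used by a
# container, so those failures are silenced and the image stays.
for img in $(crictl images -q); do
    crictl rmi "$img" 2>/dev/null || true
done
```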
@steven-sheehy Yep, there is no such single command yet, as it's not part of CRI. My concern with doing this is that it may break the kubelet container lifecycle: e.g. kubelet pulls an image before creating the container, and if the prune happens in between, the container creation may fail. So I think it's better to also consider image pull time when pruning, but that information is not included in CRI.
I think the main use case of a prune command would be to be run manually by a user, or after a helm upgrade as part of continuous deployment, to free up space. So it's most likely not run often enough to encounter the scenario you mention. And if it does happen to run between image pull and execution, won't the kubelet just try the pull and execute again after the back-off period?
This is possible to implement in crictl: we can list all images, then remove the ones not used by any container. There might be some race condition, but that should be rare, and we should get eventual consistency.
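The listing step can be sketched as a set difference (a generic sketch with hypothetical `list_all_images` / `list_used_images` helpers standing in for the real CRI calls; actual code would read `crictl images -q` and the images referenced by `crictl ps -a`):

```shell
#!/bin/sh
# Hypothetical helpers standing in for the CRI calls.
list_all_images()  { printf 'sha256:aaa\nsha256:bbb\nsha256:ccc\n'; }
list_used_images() { printf 'sha256:bbb\n'; }

# Unused = all images minus images referenced by any container.
unused=""
for img in $(list_all_images); do
    list_used_images | grep -q -x "$img" || unused="$unused $img"
done
echo "Would remove:$unused"   # prints: Would remove: sha256:aaa sha256:ccc
```

Any image pulled between the listing and the removal is the race mentioned above; since `rmi` fails on in-use images, the failure mode is benign and the node converges on the next run.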
Kubernetes keeps unused images on the node and considers image locality during scheduling, so I'm not sure whether pruning them is always desirable.
You guys are right, there is some race condition between the kubelet, the container runtime, and my workaround prune command above. I've since switched to containerd 1.2.0, and right after a helm upgrade I perform the prune; now the cluster can neither terminate nor start some pods. This would be expected, except that it never resolves itself by re-pulling the images. The errors appear in `kubectl describe pod` and in the runtime and kubelet logs:
```shell
journalctl -fu containerd
journalctl -fu kubelet
```
@Random-Liu Should I open an issue with containerd? You guys may not recommend pruning, but this issue didn't occur with CRI-O, and I think the cluster should have eventual consistency, as you mentioned.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
As part of CRC we disable some operators (monitoring, machine-config, etc.), but the images are always present on the node, since there is currently no knob on the installer side to stop those operators from starting (iirc). The overall disk size has increased: until 4.2 our final disk size was around 2 GB, but with 4.3 it is around 3 GB, because all the other images are added as part of the CVO payload. `crictl images` will list all images, even the ones in use, but `crictl rmi` will only be able to remove the unused images and will error out on the others. The crictl version used in RHCOS is `0.1.0`, which doesn't have the fix for kubernetes-sigs/cri-tools#399 yet, so we are using kubernetes-sigs/cri-tools#399 (comment) as a workaround.
A command like `docker system prune`:

```shell
sudo crictl ps -a | grep -v Running | awk '{print $1}' | xargs sudo crictl rm && sudo crictl rmi --prune
```
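Unpacked and annotated (same commands as above; note that `crictl rmi --prune` needs a reasonably recent crictl, and the header line of `crictl ps -a` also survives the `grep -v Running`, so `crictl rm` may print one harmless error for it):

```shell
#!/bin/sh
# 1. List all containers, drop the running ones, keep the first
#    column (the container ID), and remove those containers...
sudo crictl ps -a | grep -v Running | awk '{print $1}' | xargs sudo crictl rm
# 2. ...then remove every image no longer referenced by a container.
sudo crictl rmi --prune
```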
I just switched to CRI-O and crictl, and I'm trying to find an equivalent of the docker command that deletes unused images to clean up disk space. In docker, I would just run

```shell
docker image prune -a -f
```

to do this. I can't seem to find the equivalent in crictl, so is there one, and if not can one be added? Something like `crictl images prune`, or a new flag on the existing command, `crictl rmi --prune`?

crictl: v1.12.0
crio: 1.11.7
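For later readers: the answer the thread converges on is the `--prune` flag on `crictl rmi` (not present in the v1.12.0 above; flag availability depends on your crictl release), which is the closest counterpart to the docker command:

```shell
# docker equivalent: docker image prune -a -f
# crictl counterpart (requires a crictl release with the --prune flag):
sudo crictl rmi --prune
```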