Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Configuration
Packer Version: 1.9.4
Packer Configuration:
Expected Behavior
Over time, the data stored in /var/lib/containerd may grow in size, but inode usage should not be exhausted while a large share of the partition's space is still free.
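For context, on an ext4 filesystem the inode pool is sized once at creation time from a bytes-per-inode ratio, so a partition holding many small files can run out of inodes long before it runs out of space. A quick way to compare the two, assuming the containerd partition is ext4 (the device name in the tune2fs line is illustrative, not taken from this build):

```bash
# Compare space usage vs. inode usage on the containerd partition
df -h /var/lib/containerd
df -i /var/lib/containerd

# ext4 fixes its inode count when the filesystem is created; inspect it
# (replace the device with whatever backs /var/lib/containerd on your node)
sudo tune2fs -l /dev/nvme1n1p1 | grep -i inode
```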
Actual Behavior
Our EKS worker nodes were unable to deploy new images. Upon investigation, we found that the /var/lib/containerd partition still had 22% of its space free according to df -h, but inodes were at 100% used according to df -i.
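When df -i reports 100% while df -h still shows free space, a common next step is to find which directories hold the most files. A sketch using GNU du's --inodes option (coreutils 8.22+); the snapshotter path below assumes containerd's default overlayfs snapshotter:

```bash
# Rank the top-level directories under /var/lib/containerd by inode usage
sudo du --inodes -x -d1 /var/lib/containerd | sort -n | tail

# Count entries under the overlayfs snapshotter (default snapshotter path)
sudo find /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs -xdev | wc -l
```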
Steps to Reproduce
I'm not sure how to reproduce this. We have been running these nodes for about three weeks, and we only noticed the issue today. For background: after running into disk space issues with the standard build, I bumped the size of the EBS volume to 100 GB in the .pkvars file and adjusted the partitioning script to allocate more space to /var/lib/containerd. Here is the relevant portion of my partition-disks.sh file:
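The script excerpt itself did not come through with this report, so the following is purely a hypothetical sketch of the kind of change being described; the device name, label, and sizing are assumptions, not the values actually used:

```bash
# Hypothetical sketch only -- not the actual partition-disks.sh from this build.
DISK=/dev/nvme1n1   # assumed device for the 100 GB data volume
parted -s "$DISK" mklabel gpt
parted -s "$DISK" mkpart containerd ext4 0% 100%
# Note: mkfs.ext4's default bytes-per-inode ratio fixes the inode pool here;
# a smaller -i value (or an explicit -N) allocates more inodes in the same space.
mkfs.ext4 -L CONTAINERD "${DISK}p1"
mkdir -p /var/lib/containerd
mount "${DISK}p1" /var/lib/containerd
```

If inode exhaustion recurs even after cleanup, the inode count is the lever to look at: it cannot be raised on an existing ext4 filesystem without growing or recreating it, so it has to be set generously at mkfs time.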
Important Factoids
I can't think of any important factoids other than what I stated above. The things that are 'custom' about my build are that the disk has been bumped to 100 GB from 64 GB, and the partition script was changed to give more space to /var/lib/containerd. Other than that, my AMI should look very similar to others'.
References
#0000