Nested image pull inside QM container id failing #278
I can think of workarounds; the question is what the correct solution is. Shall we add it to the QM setup?
Resolve containers#278 Signed-off-by: Yariv Rachmani <[email protected]>
PR #280 does not resolve this one:

```
Error: failed to get new shm lock manager: failed to create 2048 locks in /libpod_lock: read-only file
```

inside qm while pulling the ffi-tools image. One more detail: it happens on a TF AWS instance and is not reproduced in a c9s VM.
Inside the container, is /dev/shm read-only? Looks like something is set up incorrectly.
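A quick way to answer that from inside the container is to parse `/proc/mounts` directly; a minimal sketch (generic Linux, not QM-specific — the `mount_mode` helper name is illustrative):

```shell
# Print the read/write flag (rw or ro) for a given mount point.
# /proc/mounts fields: device mountpoint fstype options dump pass
# Usage: mount_mode /dev/shm
mount_mode() {
    awk -v mp="$1" '$2 == mp {
        n = split($4, opts, ",")
        for (i = 1; i <= n; i++)
            if (opts[i] == "ro" || opts[i] == "rw") { print opts[i]; exit }
    }' /proc/mounts
}

mount_mode /dev/shm
```

If this prints `ro` inside qm, the tmpfs really is mounted read-only and the podman lock-manager error above follows directly.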
OK, as I suspected, it is an issue with the AWS and qm setup. I removed the /var/qm partition to verify the root cause, and added a parameter for the add-disk part (default is no). The error is still valid; will check with Testing Farm: https://artifacts.dev.testing-farm.io/c209b61e-87bb-4027-986d-3037cdc19854/work-ffiai68i_jy/log.txt
Reproduced in Testing Farm on a reserved machine (`podman images`).
@rhatdan After every qm restart it returns to this status.
Certainly looks like a bug. What does the qm.service show for the podman command?
Sure @rhatdan
@dougsland

`--read-only-tmpfs=false`
Yes, I see, although I have to admit that name is confusing.
Yes, I agree, and internally in the code it is labeled ReadWriteTmpfs. Most users should never touch that flag. The basic idea of the flag is to allow users to configure the system in a way where the processes within the container can write nowhere, or only to volumes mounted into the container.
Found this issue; adding `VolatileTmp=true`.
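As a sketch of what that fix could look like (the drop-in path and file name are assumptions based on Quadlet conventions; the actual QM unit may differ), a Quadlet drop-in for the qm container might be:

```ini
# /etc/containers/systemd/qm.container.d/volatile-tmp.conf  (path is an assumption)
[Container]
# Mount a fresh writable tmpfs over the container's temporary
# directories instead of inheriting a read-only setup.
VolatileTmp=true
```

This keeps the rest of the container definition intact and only layers the tmpfs behavior on top.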
Resolve #278 Signed-off-by: Yariv Rachmani <[email protected]>
With podman 4.7 and up, inside a c9s deployment:

```shell
podman exec -it qm bash
bash-5.1# podman run -it quay.io/centos-sig-automotive/ffi-tools:latest
```

returns the following error:

```
Error: writing blob: storing blob to file "/var/tmp/container_images_storage738866264/1": write /var/tmp/container_images_storage738866264/1: no space left on device
```
```
podman exec -it qm bash -c "df -kh"
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        23G  5.1G   18G  23% /
tmpfs            64M     0   64M   0% /dev
tmpfs           887M     0  887M   0% /tmp
/dev/vda2        23G  618M   22G   3% /var
tmpfs           887M  140K  887M   1% /run
tmpfs           887M     0  887M   0% /run/lock
tmpfs           887M     0  887M   0% /var/tmp
tmpfs           355M  9.7M  346M   3% /etc/hosts
shm              63M   84K   63M   1% /dev/shm
tmpfs           887M  8.0M  879M   1% /var/log/journal
```
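For context, podman stages pulled layers under /var/tmp by default (the `container_images_storage*` directory in the error above), so a small tmpfs there fills up during the pull. One possible workaround, sketched here under the assumption that redirecting the staging directory is acceptable for QM, is to point it at persistent storage via containers.conf:

```ini
# containers.conf fragment (e.g. /etc/containers/containers.conf inside qm)
[engine]
# Stage image pulls on the /var partition instead of the tmpfs on /var/tmp.
# The directory name is illustrative; it must exist and be writable.
image_copy_tmp_dir = "/var/lib/containers/tmp"
```

Setting the `TMPDIR` environment variable for the pull has a similar effect; whether either is the *correct* solution for the QM setup is the open question here.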
```
cat /etc/redhat-release
CentOS Stream release 9
```

Host rpms:

```
qm-0.6.0-1.20231113152056710175.main.7.gbdc6b1f.el9.noarch
podman-4.8.0~dev-1.20231115215527013754.main.2457.ec2e533a2.el9.x86_64
```

Host kernel (`uname -r`):

```
5.14.0-383.el9.x86_64
```
The CI FFI gate is failing because of this.