squashfs error when finishing build #37

Closed
loomy opened this issue Jun 24, 2023 · 16 comments

loomy commented Jun 24, 2023

the tool unsquashfs4-avm-le can't unpack the image:

FATAL ERROR:Data queue size is too large
    Filesystem on build/original/firmware/var/tmp/filesystem.image is xz compressed (4:0)

from the call

+ /home/matze/workspace/freetz/freetz-ng/tools/unsquashfs4-avm-le -no-progress -exit-on-error -dest build/original/filesystem build/original/firmware/var/tmp/filesystem.image
+ grep -v '^$'
+ sed -e 's/^/    /g'
    FATAL ERROR:Data queue size is too large
    Filesystem on build/original/firmware/var/tmp/filesystem.image is xz compressed (4:0)
+ STATUS=1
+ '[' 1 -gt 0 ']'
+ error 1 'modunsqfs: Error in build/original/firmware/var/tmp/filesystem.image'

tool linkage:

$ ldd tools/unsquashfs4-avm-le 
	linux-vdso.so.1 (0x00007ffe68174000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4e4ea15000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4e4e8c6000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f4e4e8aa000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4e4e6b8000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f4e4eb7a000)
builduser@12dce50e2872:

It works with my local unsquashfs but not with the one in tools/. This only happens with the Docker build;
if I build on my host, the tool works fine.

Running the container on Manjaro: Linux 12dce50e2872 5.15.114-2-MANJARO #1 SMP PREEMPT Sun Jun 4 10:32:43 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

any hints?

@pfichtner (Owner)

> any hints?

Not at the moment; this needs further investigation.
So if I understand correctly: unsquashfs4-avm-le works when running on your machine but not when running inside Docker? If so, this seems to be related to the Docker environment; otherwise it could be a general problem with unsquashfs4-avm-le.


loomy commented Jun 24, 2023

correct.

builduser@3f014853baa5:~$ tools/unsquashfs4-avm-le -l build/original/firmware/var/tmp/filesystem.image
Filesystem on build/original/firmware/var/tmp/filesystem.image is xz compressed (4:0)
FATAL ERROR:Data queue size is too large
builduser@3f014853baa5:~$ exit
exit
[matze@f006hge freetz-ng]$ tools/unsquashfs4-avm-le -l build/original/firmware/var/tmp/filesystem.image
Filesystem on build/original/firmware/var/tmp/filesystem.image is xz compressed (4:0)
Parallel unsquashfs: Using 16 processors
10779 inodes (12104 blocks) to write

squashfs-root
squashfs-root/bin
...

docker Version: Docker version 24.0.0, build 98fdcd769b

docker info:

containerd version: 1677a17964311325ed1c31e2c0a3589ce6d5c30d.m
 runc version: 
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.114-2-MANJARO
 Operating System: Manjaro Linux
 OSType: linux
 Architecture: x86_64

Docker is running in swarm mode, but Freetz is running as a plain container, not as a service.

@pfichtner (Owner)

So a first guess is a problem with loop devices inside the Docker container.
Could you please try running the Docker container in privileged mode by starting it with --privileged=true and test whether this fixes the issue?
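
For illustration, a privileged start could look something like this (just a sketch; the workspace mount and image name are taken from the docker run command that appears later in this thread):

docker run --rm -it --privileged=true -v $PWD:/workspace pfichtner/freetz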

pfichtner self-assigned this Jun 25, 2023
pfichtner added the bug (Something isn't working) label Jun 25, 2023

loomy commented Jul 2, 2023

Sorry, I was on holiday.
Tried it with privileged mode; same error.

@pfichtner (Owner)

Could you please provide a .config file to reproduce the problem?


loomy commented Jul 3, 2023

I don't think it's related to the config. I tried several models with the basic config; same error.
But here it is:

https://pastebin.com/FEYPnu9w

I already tried Docker in non-swarm mode; no difference.

@pfichtner (Owner)

I build images for the 3270, 3370, 7570, 7390, and 7490 once a week inside Docker containers from this image (the host is Ubuntu).
So I'm just checking whether I can successfully build an image with the provided config.

@pfichtner (Owner)

Can't reproduce, since the compile fails regardless of the gcc version selected 🙄

---> library/popt ... preparing ... configuring ... building ... building ... done.
---> package/rrdtool ... preparing ... configuring ... building ... building ... done.
---> package/openssl ... preparing ... configuring ... building ... building ... 
mips-linux-uclibc-gcc: error: unrecognized command line option '--quiet'
make[3]: *** [Makefile:687: apps/app_rand.o] Error 1
make[3]: *** Waiting for unfinished jobs....
mips-linux-uclibc-gcc: error: unrecognized command line option '--quiet'
make[3]: *** [Makefile:695: apps/apps.o] Error 1
make[2]: *** [Makefile:174: all] Error 2
make[1]: *** [make/pkgs/openssl/openssl.mk:94: source/target-mips_gcc-4.9.4_uClibc-0.9.33.2-nptl/openssl-1.1.1u/libssl.so.1.1] Terminated
make: *** [Makefile:46: envira] Terminated


pwFoo commented Aug 17, 2023

Hi @pfichtner,
any solution here? I read about missing kernel headers (generic? arm? ...). But maybe the headers for the host kernel version are needed for the cross-compile?

@pfichtner (Owner)

No, no solution so far; I haven't invested time into this lately.
What exactly do you think the problem could be? Unfortunately I didn't understand.
I will set up a Manjaro VM at the weekend, in which I can hopefully fix the problem. I suspect the problem depends on the host system.

@pfichtner (Owner)

Reproducible in a Manjaro VM. Stay tuned.

@pfichtner (Owner)

The error is caused by restricted default prlimits (NOFILE, "max number of open files") in Manjaro's Docker containers.

host

RESOURCE   DESCRIPTION                              SOFT       HARD UNITS
NOFILE     max number of open files           1073741816 1073741816 files

docker

RESOURCE   DESCRIPTION                              SOFT       HARD UNITS
NOFILE     max number of open files                 1024     524288 files
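
To see what a container actually gets, the limits can be checked from inside it, for example (a sketch; it assumes the image's default entrypoint can be overridden with a shell, which may not hold for every setup):

docker run --rm pfichtner/freetz sh -c 'ulimit -Sn; ulimit -Hn'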

One possible solution is to include the ulimits when starting the container.
docker run --rm -it --ulimit nofile=262144:262144 -v $PWD:/workspace pfichtner/freetz

The error could be reproduced and was gone after increasing the container's ulimit.
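
Alternatively (not mentioned in this thread, just another option), the limit can be raised for all containers by setting default ulimits in the Docker daemon's /etc/docker/daemon.json and restarting the daemon:

    {
      "default-ulimits": {
        "nofile": {
          "Name": "nofile",
          "Soft": 262144,
          "Hard": 262144
        }
      }
    }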

pfichtner added a commit that referenced this issue Aug 19, 2023
@pfichtner (Owner)

Added some hints relating to this issue to the README with 073fc68


loomy commented Aug 21, 2023

thx!

pfichtner added the documentation (Improvements or additions to documentation) label and removed the bug (Something isn't working) label Dec 14, 2023

fda77 commented Dec 14, 2023

From my Fedora machine, on which I usually compile:

$ prlimit
RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                 unlimited unlimited bytes
CPU        CPU time                           unlimited unlimited seconds
DATA       max data size                      unlimited unlimited bytes
FSIZE      max file size                      unlimited unlimited bytes
LOCKS      max number of file locks held      unlimited unlimited locks
MEMLOCK    max locked-in-memory address space   8388608   8388608 bytes
MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
NICE       max nice prio allowed to raise             0         0
NOFILE     max number of open files                1024    524288 files
NPROC      max number of processes                17337     17337 processes
RSS        max resident set size              unlimited unlimited bytes
RTPRIO     max real-time priority                     0         0
RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
SIGPENDING max number of pending signals          17337     17337 signals
STACK      max stack size                       8388608 unlimited bytes

$ ulimit -n
1024

$ ulimit -n -H
524288

This is like the "docker" output in #37 (comment), so NOFILE seems to be okay.

A further comment from @fda77 was marked as off-topic.
