Big memory usage in Debian vs Ubuntu when mounting 120 DwarFS images #154
Hi! Am I understanding correctly that your use case is to run 120 separate instances? Have you accessed any files in the mounted file system images yet? Or is this just after you've mounted them? Maybe you can elaborate on your use case and why you're running that many instances? Depending on how big the underlying DwarFS images are, and also depending on the access pattern, I'd expect each instance to use up to the configured cache size.
I'm setting up my live SSD to have videos and PDFs in modules. There are no files opened on Debian or Ubuntu, just mounted. Also, the cache size is set to 32 MB, which is even less than the default.
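Given the numbers mentioned in this thread (120 mounts, 32 MB cache each), a rough upper bound on the combined block-cache footprint can be worked out directly. This is only a crude back-of-the-envelope sketch; actual RSS also includes per-process overhead, which is exactly what differs between the two distributions here.

```shell
# Worst-case combined block-cache footprint (upper bound only):
# 120 instances, each capped at a 32 MiB cache.
instances=120
cachesize_mib=32
echo "$((instances * cachesize_mib)) MiB"
```

Note that both observed totals (418 MB on Ubuntu, 1.69 GB on Debian) are below this bound, so the caches alone don't explain the difference.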
Why? Also, what "modules"?
I made a fork of Tomas-M's linux-live scripts; it's based on that. Modules/bundles are just what I call any squashfs/dwarfs files that are added to the root (/) filesystem. The "why" is because I want to help Flash Drives For Freedom on the software side: https://flashdrivesforfreedom.org/ I think this filesystem could be helpful in hiding data for them, because the officials are not likely to have DwarFS.
I'm not convinced that this is a good idea. Why not just use proper encryption? You'll even get write support and, more importantly, your data is actually safe. All that aside, why does it have to be 120 "modules" then? What's the benefit over just one?
It's because I'm trying to organize the media collection going into this. I don't want everything to be one blob, because that blob will also have to fit onto 16/32/64 GB sticks. It's a mess as it is. I have write support using overlayfs to save changes; I use this filesystem as my daily driver, so everything is just saved to the changes and home folders. Second, I think the best they're going to get is security through obscurity. Booting a live USB with media, without complicated things, is the best I can do.
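The overlayfs setup described above can be sketched roughly as follows. All paths here are hypothetical (the real linux-live layout differs), and the mount command is printed rather than executed, since an actual overlay mount needs root and existing directories:

```shell
# Sketch only: hypothetical paths, printing the command instead of
# executing it (a real overlay mount requires root).
lower=/memory/bundles    # read-only dwarfs/squashfs mounts (hypothetical)
upper=/memory/changes    # writable layer persisted on the USB stick
work=/memory/work        # overlayfs scratch dir, same fs as upper
echo mount -t overlay overlay \
  -o "lowerdir=$lower,upperdir=$upper,workdir=$work" /mnt/root
```

The key constraint is that `upperdir` and `workdir` must live on the same writable filesystem, which is why live systems keep both on the persistence partition.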
Coming back to the original issue, I don't see why the same binary would use different amounts of memory under the same circumstances. The only thing that comes to my mind right now would be different stack sizes. Would you mind running
on both machines and posting the output?
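The exact command did not survive in this transcript. Since the previous comment suspects different stack sizes, one plausible check (my assumption, not necessarily the original command) is the shell's resource-limit builtin:

```shell
# Assumption: the elided command may have been a stack-size check.
# Prints the soft per-process stack limit in KiB (or "unlimited").
ulimit -s
```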
Yeah, those look pretty much identical to me. I still have no idea where the difference in RSS comes from. One thing you could try is to just use the
Nothing really changed with RAM usage on Debian 12. Here is the kernel config diff between the Debian and Ubuntu kernels, using this command: diff -aur config-6.1.0-10-amd64 config-6.2.0-24-generic. I think looking at the kernel config may give more insight into why this is happening.
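A full config diff is noisy; narrowing it to FUSE-related options is one way to focus the comparison. The snippet below uses two tiny stand-in files so it is self-contained; on a real system you would grep the `/boot/config-*` files instead, as shown in the comment:

```shell
# Sketch: compare FUSE-related options between two kernel configs.
# Sample data stands in for the real /boot/config-* files; on a live
# system you would run something like:
#   grep -E 'CONFIG_(FUSE|CUSE)' /boot/config-$(uname -r)
cat > config-debian <<'EOF'
CONFIG_FUSE_FS=m
CONFIG_VMAP_STACK=y
EOF
cat > config-ubuntu <<'EOF'
CONFIG_FUSE_FS=m
CONFIG_VMAP_STACK=y
EOF
grep CONFIG_FUSE config-debian > fuse-debian.txt
grep CONFIG_FUSE config-ubuntu > fuse-ubuntu.txt
diff fuse-debian.txt fuse-ubuntu.txt && echo "FUSE options identical"
```

In both distributions fuse is built as a module (`CONFIG_FUSE_FS=m`), so the config difference, if any, likely lies elsewhere.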
Do you notice the same thing with other programs? Or other fuse drivers?
Are you using the binaries or building from source? Same options? Same compiler?
I was using the static binary at the time. I've since moved back to squashfs for my live OS, and I'm also using Ubuntu now. From what I can remember, this bug is most likely a Debian bug, but I'm not sure. The best bet for someone testing this is to make 100+ DwarFS images and mount them all on Debian to test the RAM usage. Hope this helps.
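The suggested reproduction could look roughly like the loop below. It only prints the commands (a dry run), since actually running it needs `mkdwarfs`, `dwarfs`, and a source tree to pack; all paths are hypothetical:

```shell
# Dry-run sketch of the reproduction: create and mount 120 images,
# then compare RSS between Debian and Ubuntu. Commands are echoed,
# not executed; /srv/media/part$i and the mount points are made up.
for i in $(seq 1 120); do
  echo "mkdwarfs -i /srv/media/part$i -o /srv/img$i.dwarfs"
  echo "mkdir -p /mnt/d$i && dwarfs /srv/img$i.dwarfs /mnt/d$i"
done | head -n 4
```

Dropping the `head -n 4` (kept here just to keep the preview short) and the `echo`s would produce the full 120-mount setup on each machine.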
Once I have the ITP done I will consider testing this.
The ideas outlined in #219 will likely help with this issue by allowing multiple images to be mounted in the same process.
So I noticed that Debian uses a ton more RAM running the same 120 dwarfs than Ubuntu. Ubuntu is at 418 MB, but Debian is at 1.69 GB. My guess is the Debian kernel is the problem, because fuse is a kernel module there and not built into the kernel vmlinuz image.
Here is the Debian RAM usage:
Here is the Ubuntu RAM usage:
The Debian kernel is 6.1.0-10-amd64 and the Ubuntu kernel is 6.2.0-25-generic. I hope this helps.
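The per-distribution totals above can be reproduced by summing the RSS of all dwarfs processes. The snippet uses sample values piped in so it is self-contained; the commented `ps` line is what you would run on the live system:

```shell
# Sum RSS (KiB) across all dwarfs processes. On a live system:
#   ps -o rss= -C dwarfs | awk '{s+=$1} END {printf "%.0f MiB\n", s/1024}'
# The printf below stands in for real ps output (three sample values).
printf '14336\n14336\n14336\n' |
  awk '{s+=$1} END {printf "%.0f MiB\n", s/1024}'
```

Running the `ps` variant on both machines with the same 120 mounts is a quick way to confirm whether the difference really sits in the dwarfs processes themselves rather than in kernel-side memory.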