lfs_file_sync (lfs_file_close) failure, lfs_dir_commitattr returning LFS_ERR_NOSPC; #478
Hi @Karlhead, sorry for the late response. The fact that NOSPC is coming from lfs_dir_commitattr suggests the metadata for these files is not fitting in a single metadata block. One option is to increase the block size to a multiple of the block device's block size: 1KiB, 2KiB, 4KiB, etc. With 8M blocks this would also slightly improve the allocator performance.
Hi @geky, no worries. I have no custom attributes attached to the files, but as you pointed out, the metadata is not fitting in the block anyway. Thanks! If you have the time: how would 8M blocks slightly improve the allocator performance?
Huh, I don't know what I meant by "8M", maybe that was a typo. With larger (1/2/4KiB) blocks, the performance of the allocator slightly improves because there are fewer blocks in the filesystem. When the allocator runs, it doesn't actually read each block, but it needs to read the metadata referencing the blocks. So fewer blocks == less metadata == faster allocator. The tradeoff is that the filesystem may waste more space. LittleFS has inline files, but if your file is larger than the inline size (cache size), it will use full blocks for the file. Other filesystems do something similar for similar reasons:
Thanks!
Hello, I can reproduce a similar scenario on my system.
This happens on a cleanly formatted partition, so I can exclude power-loss related bugs. To trigger the problem I copy a folder from my PC to my ARM target over a Samba connection, and I always see the problem. If I use a sector size of 1024 (as @geky suggested) I do not see the problem. Is this a real solution? The find -ls output is attached. Thanks for the support and for this great project. Paolo
Hmm, do you have any custom attributes? If so, how many bytes of custom attributes do you have on each file? In theory, if the size of the file name + custom attributes for a single file is < 1/2 the block size, you shouldn't see this. The filesystem should split metadata blocks until, in the worst case, each file gets its own metadata block. It's possible there is a bug that leads to the filesystem not splitting metadata blocks when it needs to. Other info that would help:
Hello @geky,
gdb --args lfs /dev/mmcblk3p3 /data2 -f
(gdb) backtrace
I hope this helps.
One other small piece of information: if the files/folders are written by smbd (the Samba daemon), I see the problem.
I tried adding a mutex around all FUSE functions, so each call into lfs is serialized.
Today I tried downgrading littlefs while keeping the same littlefs_fuse.
In my case, commit 0d4c0b1 introduces the problem; I tried different versions. If I revert this commit, I do not see the problem anymore. I do not understand the internals of littlefs well, so reverting may not be the real solution.
Now I'm able to reproduce the problem very easily on a Linux PC. In the same folder as the generated lfs binary, I put the following shell script in a file, for example go.sh.
It simply creates a 256K image and formats it. It then creates … If a mkdir or touch command fails, it stops with a message. I run go.sh. If I now manually type … Now also …
I continue to see this issue with version 2.4.1. Has there been any progress in resolving it? Additional information:
Hello,
I'm facing some issues which I'm having a hard time understanding. I've been using lfs for some time now without experiencing this issue before, but lately it has surfaced more than once.
I'm downloading data into several files, one at a time: around 10 files with approx. 80 KB of data in each. The first 9 files are successfully filled with data and closed correctly, but when I try to close the last file, I get the LFS_ERR_NOSPC error code returned from the lfs_file_close function, and the same thing happens if I call lfs_file_sync before lfs_file_close. The problem persists until I re-format the filesystem.
As I'm debugging the lfs_dir_commitattr function, I can see that off + dsize is larger than end, resulting in the following return:
dsize = 16
commit->block = 7712
commit->off = 490
commit->begin = 0
commit->end = 504
if (commit->off + dsize > commit->end) {
return LFS_ERR_NOSPC;
}
However, there is no way that my device is actually out of space. There is a total of 8388608 blocks (512 B each) at lfs's disposal.
When calling lfs_fs_size, I can verify that only 1960 blocks are in use and 8386648 blocks are left.
Any help would be greatly appreciated.