lfs_file_write following lfs_file_sync is really heavy operation #564
I've studied this a bit further and found that the lfs_file_write following lfs_file_sync seems to copy the last partial block of the file to a new block. Since the block size is 128 kB, this is a really heavy operation. I've attached a log of a simple test case I did. The test does:
As can be seen in the log, the write following lfs_file_sync is small when the file size is small or a bit over 131072 bytes, and gets progressively heavier on every sync towards the 128 kB limit. At worst the write takes more than 2500 ms! I'm contemplating calling the sync only when the file size is a bit over a multiple of 128 kB (minus 8 bytes of overhead?), but that sounds like a bit of a kludge. If there is any way to improve this in LittleFS, that would be great!
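One way to sketch the kludge floated above is a small helper that only honours a periodic sync once the file has grown into a new block since the last sync, so the partial block copied by the next write stays small. This is a hypothetical sketch of mine, not littlefs API; `BLOCK` and `should_sync` are made-up names:

```python
# Sketch of the workaround: only sync once the file has crossed into a
# new ~128 kB block since the last sync. BLOCK is the nominal block
# size; the real data-per-block is slightly less once the file system's
# per-block pointers are accounted for.
BLOCK = 128 * 1024

def should_sync(size, last_synced_size, block=BLOCK):
    """True once the write position has crossed a block boundary
    since the last sync."""
    return size // block > last_synced_size // block
```

With a 128 kB block this fires roughly once per block of new data, instead of on every periodic flush.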
I also tried setting the metadata_max configuration (added by #502) to 4 kB, but it did not affect the "write after sync" performance.
Just found a comment in another issue that explains this behaviour. I'm +1 for either of the suggested improvements, since the current situation is quite difficult with a large block size (= NAND chips). For now I've implemented the lfs_file_sync call when going over to a new block, but I think that may waste blocks (I haven't confirmed whether that is the case).
Hi @petri-lipponen-suunto
@petri-lipponen-suunto, sorry for not responding, this flew under my radar. The proposal in #344 (comment) is in the works, but it is unfortunately part of a larger set of changes which will take some time to land.
I'm curious if they figured it out, because the correct alignment is unfortunately excruciatingly complicated. Calling sync after the first 128 KiB works, but the next block will contain a pointer and hold only 128 KiB − 4 B of data. The math gets real ugly and is explained a bit more in DESIGN.md. But let me emphasize: the math gets real ugly. To avoid any wasted blocks, you need to sync when […], where […].

For @petri-lipponen-suunto, this may have gone unnoticed because the sync point ends up pretty close to just 128 KiB. Overshooting would result in copying only a couple of extra words in each block, but it does end up with more block erases than are necessary.
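The per-block capacities involved can be computed directly from the layout described in this comment and DESIGN.md: block 0 holds no pointers, and block i (for i ≥ 1) holds ctz(i)+1 pointers of w/8 bytes each, so block 1 holds exactly the 128 KiB − 4 B mentioned above. The helpers below are a sketch of mine, not littlefs API:

```python
def ctz(i):
    # Count trailing zeros of a positive integer.
    return (i & -i).bit_length() - 1

def block_capacity(i, B=128 * 1024, w=32):
    # Block 0 holds no pointers; block i >= 1 holds ctz(i)+1 pointers,
    # each w/8 bytes, leaving that much less room for file data.
    if i == 0:
        return B
    return B - (w // 8) * (ctz(i) + 1)

def boundary_sizes(nblocks, B=128 * 1024, w=32):
    # File sizes at which a block is exactly full: the cheap points to
    # sync, since no partial block has to be copied afterwards.
    sizes, total = [], 0
    for i in range(nblocks):
        total += block_capacity(i, B, w)
        sizes.append(total)
    return sizes
```

For example, with B = 128 KiB and w = 32, block 1 holds 131068 bytes of data (one pointer) and block 2 holds 131064 (two pointers), so the cheap sync points drift slightly below each 128 KiB multiple.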
@geky nice, I'll try to
@geky I tried this. Something must be wrong, because the value of off is never anywhere close to 0. Is w8 32*8 or 32? I tried both; it makes no difference.
For anyone looking at this: the equation is wrong, it should be:
@geky I can't make this work with your maths. The offset is never zero (to be expected, I think, since there's some kind of accounting at the beginning of the block?). But the equation for file size in your DESIGN.md, which you would expect to give the maximum file size for a given number of blocks, does not seem to work when you plug it back into the offset equation. I wrote the following python script:
which gives:
which seems reasonable, but then I would expect this to yield an offset at the end of the block, and it does not:

so something is messed up somewhere.
Ok, I think I have it figured out. The offset is never zero: it is always the offset of the next write point in the block, so towards the end of the block it grows until it reaches the block size, at which point the next write address is in the next block, past the pointer allocations. So really you need to sync at the point where the offset goes past the end of the block and into the next one. I'm sure you can calculate this from knowing the actual file size and the actual allocated number of blocks, but it's not super easy, since blocks are not necessarily contiguous. So I think the thing to do is: when the amount of data exceeds what is left in the block (block size minus current offset), write only that amount of data and then call sync.
Ah, you're right. My bad, it's been a while since I've looked at this math. The above equations for […]. You would want to either subtract […]
Note also that block 0 is a special case.
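Putting these observations together (including the block 0 special case), the block index and offset for a given file size can also be found by simply walking the per-block capacities, sidestepping the closed-form math entirely. Again a sketch of mine, not littlefs API:

```python
def locate(size, B=128 * 1024, w=32):
    # Walk the blocks of the file: block 0 has no pointers (the
    # special case), while block i >= 1 gives up (ctz(i)+1) * w/8
    # bytes of its capacity to back-pointers.
    i = 0
    while True:
        if i == 0:
            cap = B
        else:
            ctz = (i & -i).bit_length() - 1
            cap = B - (w // 8) * (ctz + 1)
        if size < cap:
            return i, size  # this byte lands at this offset of block i
        size -= cap
        i += 1
```

In this formulation the offset returned is indeed never the block size: the moment a block fills up, the write point moves to offset 0 of the next block, past its pointers, which matches the behaviour described above.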
I have code that saves data from streaming sensors for later retrieval. The backend is a 1 Gb NAND flash with a block size of 128 kB (MT29). I've been experimenting with regularly flushing the file to make sure it is up to date in case something goes wrong, and have bumped into the following issue:
When the device has saved a largish amount of data to the flash, the lfs_file_write following lfs_file_sync takes a really long time (seconds). I've attached a log demonstrating the issue; it shows the sync when about 40 kB has been written into the file. The sync itself is quite fast, but the lfs_file_write after the sync (lines 15-984) seems to consist of reading 80 kB and writing 40 kB, which is obviously a very heavy operation in 256-byte pieces over SPI.
I'm quite confused about this behaviour and would like to know more: is this my misunderstanding of how littlefs operates, or is there a bug? I would not think that the FS needs to copy the whole file when flushing it...
I'm currently using LittleFS v2.4.0 (from the git tag), but I noticed this already a version or two ago.
write_and_sync.log