
crashing when writing a 1MB file after reaching the size of two blocks #200

Open
friguimahdi opened this issue Jun 4, 2019 · 8 comments
Labels: needs investigation (no idea what is wrong)

Comments

@friguimahdi

Hello, I'm working with littlefs on an STM32L4 with a QSPI flash device. At first the primary tests were fine and everything worked as intended, so I wanted to push it a little further and play with larger files.
I'm trying to write a 1 MB image to the QSPI flash, but the write crashes partway through. Here is my call stack:
[screenshot: call_stack]

I changed my configuration (block_size) several times and ran some tests, and I noticed that my file size can't exceed the size of two blocks; on reaching it I get that crash. I can't figure out why I'm seeing this strange behavior.
Thank you for your help.

@lsilvaalmeida

Did you check which assert is failing? That can help you debug it.

@friguimahdi
Author

Hello @lsilvaalmeida, and thank you for your reply.
Here's the assert that fails:
LFS_ASSERT(head >= 2 && head <= lfs->cfg->block_count)
which is located in the lfs_ctz_extend function.
When it fails, head is 4294967295 and block_count is 256, so I guess something is wrong with that head value.
Any suggestions?

@lsilvaalmeida

That means your head is pointing to -1 (4294967295 = 0xFFFFFFFF), which is strange. Check whether head already has this value at the start of the call, or find where its value changes.
Also, what are your cache and block sizes?
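For reference, those sizes live in littlefs v2's struct lfs_config. A hedged sketch of a configuration matching the values reported later in this thread (cache_size 256, block_size 32768, block_count 256); the read/prog/lookahead/block_cycles values and the user_* callback names are placeholders for the user's QSPI driver glue, not values from this issue:

```c
#include "lfs.h"  /* requires the littlefs sources to build */

/* hypothetical QSPI driver callbacks; any functions matching the
   prototypes in lfs.h will do */
extern int user_read(const struct lfs_config *c, lfs_block_t block,
                     lfs_off_t off, void *buffer, lfs_size_t size);
extern int user_prog(const struct lfs_config *c, lfs_block_t block,
                     lfs_off_t off, const void *buffer, lfs_size_t size);
extern int user_erase(const struct lfs_config *c, lfs_block_t block);
extern int user_sync(const struct lfs_config *c);

const struct lfs_config cfg = {
    .read  = user_read,
    .prog  = user_prog,
    .erase = user_erase,
    .sync  = user_sync,

    .read_size      = 1,      /* minimum read granularity (assumed) */
    .prog_size      = 1,      /* minimum program granularity (assumed) */
    .block_size     = 32768,  /* 0x8000, as reported in this thread */
    .block_count    = 256,    /* 256 * 32 KiB = 8 MiB device */
    .cache_size     = 256,    /* must divide block_size */
    .lookahead_size = 32,     /* assumed */
    .block_cycles   = 500,    /* assumed */
};
```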

@friguimahdi
Author

friguimahdi commented Jun 10, 2019

Hello @lsilvaalmeida, sorry for the late response.

My cache_size is 256 and my block_size is 32768 (0x8000).

The head value only becomes strange when the file reaches the size of two blocks. Here is a screenshot showing where it changes:

[Screenshot from 2019-06-10 09-48-13]

head takes the value 4294967295 right after lfs_bd_read returns. I tried to trace where it changes inside lfs_bd_read, but I didn't find any operation in that function that could do it; the value only appears after the return. Even after head = lfs_fromle32(head) it doesn't change.

I should mention that if I increase the block size so that file_size < 2*block_size, everything is fine; it crashes only when my file size exceeds the size of two blocks.

@friguimahdi
Author

friguimahdi commented Jun 10, 2019

While checking whether there was a related issue, I think I found one: #6.
I tried the proposed solution and changed the lfs_ctz, lfs_popc, and lfs_npw2 implementations, but the problem persists. I'm using the SW4STM32 IDE.

@friguimahdi
Author

I'm using FatFs to read a 1 MB image from an SD card and then writing it to the QSPI flash using littlefs.
Could FatFs be causing this strange behavior?

@friguimahdi
Author

It's not caused by FatFs. I tried writing a large file generated in a for loop (no FatFs involved), and it still crashes, though not at the head assert. It's still in the lfs_ctz_extend function, but just before the head assert: when it goes into lfs_bd_read, my user_read function returns an error (after a timeout in the QSPI driver).
When the crash happens, block = 4294967295 (the same value head had before).
Increasing cache_size and block_size resolves the problem. I can't figure out what's causing this.

@friguimahdi
Author

friguimahdi commented Jun 12, 2019

I tested the same code with littlefs v1.7 and it works just fine.
It might be something related to the cache_size and my QSPI driver.

@geky added the needs investigation (no idea what is wrong) label Aug 7, 2019
3 participants