Ruff fails with memory allocation of X bytes failed #8147
I also just found out that using
Hi! Is this with the check command? A stack trace from a debug build (e.g. clone the repository and build ruff from source) would be helpful.
Hi, yeah, it is the check command; the full invocation looks like this:
I noticed that yesterday too. It can happen if the caches are incompatible. I'm surprised you run into this because the ruff version is part of the cache key: different ruff versions should never use the same cache. At least, that's how it's supposed to work. Can you try clearing the cache and running ruff again? Do I understand it correctly that you use a pre-built ruff version (you're not building ruff yourself)?
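For illustration only, here is a hedged sketch of what a version-keyed cache layout means in practice. The helper name is invented, not Ruff's real API, but the resulting directory structure matches the `.ruff_cache/0.1.13`, `.ruff_cache/0.1.14` listing that appears later in this thread.

```rust
use std::path::{Path, PathBuf};

// Each ruff version gets its own subdirectory under the cache root, so two
// different versions should never deserialize each other's files.
// `version_cache_dir` is a made-up helper, not Ruff's real API.
fn version_cache_dir(cache_root: &Path, ruff_version: &str) -> PathBuf {
    cache_root.join(ruff_version)
}

fn main() {
    let dir = version_cache_dir(Path::new(".ruff_cache"), "0.1.14");
    println!("{}", dir.display()); // .ruff_cache/0.1.14
}
```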
Executing that helped, thanks. I'll come back if I run into the problem again.
Thanks @harpener. Glad to hear that it helped. Yes, please come back if you run into the problem again. I'll keep this issue open for now to see if other users run into this problem as well.
I also ran into this problem. I was not switching between Ruff versions (all stable, on ruff 0.1.2) but was running Ruff over and over. At some seemingly random point I ran ruff and got the memory allocation error described above.
Thank you for reporting -- we'll investigate.
I suspect that we read an incompatible cache for whatever reason, which makes the deserializer try to allocate an absurdly large amount of memory.
I had the same problem with bincode 1.3.3 when I changed the cache format (from storing BTreeMap<String, Item> to Vec). This was "fixed" in the 1.3.1 release (bincode-org/bincode#336), but I can still reproduce the problem with version 1.3.3. Maybe this is already fixed in 2.0.0, but I haven't tested it yet.
New bincode issue: bincode-org/bincode#676
So it is not really a bug in bincode; by default it does not limit the allocation size, which can become huge in some situations (broken cache, changed serialized structure). To fix the problem, a limit needs to be set (https://docs.rs/bincode/latest/bincode/config/trait.Options.html#method.with_limit), and invalid cache files should probably also be deleted.
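For reference, a minimal sketch of the suggested `with_limit` guard with bincode 1.x; `CacheFile` here is a hypothetical stand-in, not Ruff's real cache type:

```rust
use bincode::Options;
use serde::Deserialize;

// Hypothetical stand-in for the serialized cache payload.
#[derive(Debug, Deserialize)]
struct CacheFile {
    entries: Vec<(String, u64)>,
}

// Refuse any single allocation above 10 MiB while deserializing; a corrupt
// length prefix then produces a size-limit error instead of an attempt to
// allocate billions of bytes.
fn read_cache(bytes: &[u8]) -> bincode::Result<CacheFile> {
    bincode::options()
        // `bincode::options()` defaults to varint encoding; these two calls
        // restore the behaviour of the plain `bincode::deserialize` helper.
        .with_fixint_encoding()
        .allow_trailing_bytes()
        .with_limit(10 * 1024 * 1024)
        .deserialize(bytes)
}
```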
Thanks. I prefer identifying the root cause of this rather than hiding it with a size limit. Ruff shouldn't load incompatible caches, nor is the cache ever-growing.
Things I tried without success so far:
I managed to capture a backtrace of the crash:
Full trace here: https://gist.github.com/mrcljx/c0aa363ba32b16709d4d87a7c82278ae
@mrcljx oh nice, thanks for capturing the stacktrace. Unfortunately, I'm still surprised that it tries to load a version-incompatible cache. Or is it that we have an incompatible cache for some other reason? Anyone running into this issue and working on an open source project: could you share the GitHub commit and upload your cache directory?
I have the same issue; here's what my cache looks like:
I looked at the file sizes and there is nothing out of the ordinary: the biggest file is 1.1 MB and the total size is 10 MB.
Is this happening in an open source project or a project where you would feel comfortable sharing the given repository state and cache directory content? It would help us analyse the issue.
Sadly, I'm not comfortable sharing this information. One thing that comes to mind: I have ruff running from two different virtual envs on the same repo; maybe that could explain why the cache goes into a bad state.
I started getting this issue with Ruff running as a File Watcher sorting imports in PyCharm (so it runs on single files each time the user hits Ctrl+S). The output:
Environment:
The code is not open source, so unfortunately I can't share it.
Our team is working with a fairly large repo and we're getting the same issue. Again, running
I am also running into the same issue, but in my case, since inside
Interesting. The
this is the entire output at the moment:
```
❯ ls /Users/mattia.baldari/project/.ruff_cache/
0.1.13        0.1.14        CACHEDIR.TAG
```
I had encountered the same error on ruff 0.1.13. My command:
I am using ruff globally, not for one particular project.
@rumbarum would you be able to share the repro file and the state of your cache directory so that we could try to reproduce locally?
@MichaReiser
@rumbarum What would be useful for us is to have a project with a cache directory state that allows us to reproduce the error. What we need for this is:
@MichaReiser
Thanks @rumbarum. I suspect that we run into some sort of concurrency issue that corrupts the cache. I'll play a bit with file watchers to see if I can corrupt my local cache.
FYI, my ruff config:
Ruff 0.1.3. And if no code is included in the cache file (I only see paths), I will share my current cache.
I currently have this error, so I am reporting here in the hope it is useful.
Running with ... I tried copying (...). The project is open source. I have made a branch from the checkout I currently see the issue in, including a zipped cache from the existing environment (sad) and the new environment (happy) in the root of the repo. Something else that might be relevant: in the current state of that branch, there is a formatting issue that black and ruff disagree on. Given that I can't even reproduce the error by copying the entire failing project directory, I am not sure if checking out the project will help, but maybe the cache will be useful. I will leave the broken checkout untouched in case there are any other diagnostics that could be done with it.
Thanks @GDYendell, that's amazing. I tried to reproduce locally with a ruff build from the v0.1.14 tag but failed to reproduce. I believe the reason is that our cache uses the project root as the cache key. Would you mind sharing the absolute path of your project root? I think that should allow me to "force" ruff to use your cache data. If that doesn't work, then I probably have to look at the bincode files manually... I guess I can try to call decode on the files you provided. Let me give this a try.
One possibility is that writing the cache file fails partway through (see lines 168 to 176 in b3dc565).
We could try to delete the file when writing fails, but there's still a risk that the process dies in the middle of writing.
The absolute path to the broken checkout is
I created a small script that deserializes the cache and prints the current offset. This can be useful to investigate where the file is corrupt. The problematic cache file is
The next import entry (index 17) starts at offset 8181, but the length field is only one byte instead of 8 (I can tell because what starts right after it is the name
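The script itself isn't included in the thread, but a rough sketch of such an offset dump could look like the following, assuming the cache file is a flat sequence of bincode-encoded entries (`CacheEntry` is a placeholder for Ruff's real type):

```rust
use std::fs::File;
use std::io::{BufReader, Seek};

use serde::Deserialize;

// Placeholder for one cache entry; the real layout lives in Ruff.
#[derive(Debug, Deserialize)]
struct CacheEntry {
    path: String,
}

// Decode entries one by one and print the byte offset each one starts at;
// the first failing entry pinpoints where the file is corrupt. Hitting the
// end of the file also ends the loop with a (harmless) EOF error.
fn dump_offsets(path: &str) -> std::io::Result<()> {
    let mut reader = BufReader::new(File::open(path)?);
    for index in 0.. {
        let offset = reader.stream_position()?;
        match bincode::deserialize_from::<_, CacheEntry>(&mut reader) {
            Ok(entry) => println!("entry {index} at offset {offset}: {}", entry.path),
            Err(err) => {
                println!("decode failed for entry {index} at offset {offset}: {err}");
                break;
            }
        }
    }
    Ok(())
}
```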
I suspect that we run into some form of race and the file only gets written partially. This is possible because of the way we write the cache file. I think a good first step to fix this is to write the cache to a temporary file and then rename it once writing has succeeded. That doesn't just help with potential races; it also protects against corrupted cache files in case the ruff process dies in the middle of writing the cache (for whatever reason).
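A minimal sketch of that write-to-a-temporary-file-then-rename idea (not the actual fix that landed in Ruff):

```rust
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

// Readers only ever see the previous complete cache file or the new complete
// one, never a half-written file, because the rename swaps the target in a
// single step on the same filesystem.
fn write_cache_atomically(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    // Invented temp-file naming: the process id keeps concurrent ruff runs
    // (e.g. editor-triggered ones) from clobbering each other's temp files.
    let mut tmp_name = path.as_os_str().to_os_string();
    tmp_name.push(format!(".tmp.{}", std::process::id()));
    let tmp_path = PathBuf::from(tmp_name);

    {
        let mut file = fs::File::create(&tmp_path)?;
        file.write_all(bytes)?;
        file.sync_all()?; // flush to disk before the rename makes it visible
    }
    fs::rename(&tmp_path, path)
}
```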
Thank you all for your help and reproductions! Your reproductions and error reports allowed me to reproduce the bug locally, and I now have a fix ready for review.
Hi guys,
For some time now, ruff has been failing with the error "memory allocation of 7453010365139656813 bytes failed" when I run it on a specific file with the --force-exclude option. Without that option it works fine, but it then also runs on excluded files. Checking the whole project works fine, but I also use ruff in file watchers in PyCharm, and that is where it fails. I am using a custom config with some file exclusions, so to avoid ruff reports for excluded files I need the --force-exclude option. However, the file watcher has not been working properly for me; not from the start, but for some time now, could be weeks or months. It only happens in one of my bigger projects, so ruff does not seem broken, maybe the project is just demanding somehow. It does not bother me much, because ruff is so fast that I can run it on the whole project instantly. :-)

Python: 3.8
Ruff: 0.0.292

Do you have any idea what could be causing this? Thanks