Queue watcher: memory error during tiff->nii conversion #5
OK, yes, I do believe this scan size is already very close to maxing out memory. The transfer server should use very little memory; for example, the chunk size of data it holds in memory is …
OK, that's good that the server and queue aren't interfering with one another. Beyond periodically restarting the machine, I think it would be worthwhile to remove some of the crap running in the background that we don't need. The biggest hog was little Chrome processes (even though Chrome wasn't open, because Chrome). First things first, I think we should get Chrome off that thing and just use Firefox.
Is it common for users to have images that are this big? This one is ~25 GB. Is it too annoying or clunky to pre-compute how much memory the array will take up beforehand, and if it's over some threshold, either skip that image or split it up into several sub-stacks, to avoid crashing the pipeline?
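Pre-computing the footprint should be cheap, since TIFF metadata exposes shape and dtype without loading any pixel data. A minimal sketch, assuming `tifffile` is available; `estimate_nbytes` and the threshold value are hypothetical names/values for illustration, not part of the pipeline:

```python
# Sketch: estimate in-memory array size from TIFF metadata before loading.
# `estimate_nbytes` and MEM_THRESHOLD are hypothetical, not from main.py.
from glob import glob

import numpy as np
import tifffile

def estimate_nbytes(tiff_paths):
    """Sum the in-memory size of a collection of TIFF stacks without reading pixels."""
    total = 0
    for path in tiff_paths:
        with tifffile.TiffFile(path) as tif:
            series = tif.series[0]  # shape and dtype come from metadata only
            total += int(np.prod(series.shape)) * np.dtype(series.dtype).itemsize
    return total

MEM_THRESHOLD = 25 * 1024**3  # ~25 GB; placeholder, tune to the machine

paths = sorted(glob("/data/scan/*.tif"))  # hypothetical location
if estimate_nbytes(paths) > MEM_THRESHOLD:
    print("too big: skip, or split into sub-stacks")
```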
The size Alex is using here is exactly the same size I use for my anat scans. Ashley has files that are even bigger, so she uses the "split" flag in her user config file so that main.py runs convert_tiff_collections_to_nii_split instead of convert_tiff_collections_to_nii. This does what you suggest; the only annoying part is that she needs to re-merge the pieces later on sherlock. Checking the expected memory size and calling the split version when it's above a threshold makes sense. Or we can just make users set that flag. I would want to set the threshold above this scan's size (25 GB), since I personally haven't had it crash on me and would prefer to avoid the extra complication of splitting and merging. An alternative is to save as .h5 instead of .nii, which supports writing one or several slices at a time into the file on disk, so it would work seamlessly for any file size. I suppose it comes down to your question of how often this is actually an issue. I don't know if anyone other than Ashley uses really big files, and if they do, maybe they can just set the flag. But I'm open to changes!
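The automatic fallback could be a thin wrapper around the two existing converters, reusing `estimate_nbytes` from the sketch above. This assumes both functions take a single directory argument; their real signatures in main.py may differ:

```python
# Sketch: pick a converter based on estimated memory footprint.
# The converter signatures are assumed; check main.py for the real ones.
SPLIT_THRESHOLD = 25 * 1024**3  # bytes; above the ~25 GB scan discussed here

def convert_with_fallback(directory, tiff_paths):
    if estimate_nbytes(tiff_paths) > SPLIT_THRESHOLD:
        # what the "split" flag currently opts in to manually
        convert_tiff_collections_to_nii_split(directory)
    else:
        convert_tiff_collections_to_nii(directory)
```

And for the .h5 alternative, h5py lets you create the dataset on disk first and stream slices into it, so peak memory stays at roughly one slice regardless of total size. A sketch, assuming one TIFF per slice:

```python
import h5py
import tifffile

def tiffs_to_h5(tiff_paths, out_path):
    """Write a stack of TIFFs into one HDF5 dataset, one slice at a time."""
    first = tifffile.imread(tiff_paths[0])
    shape = (len(tiff_paths),) + first.shape
    with h5py.File(out_path, "w") as f:
        dset = f.create_dataset("data", shape=shape, dtype=first.dtype, chunks=True)
        dset[0] = first
        for i, path in enumerate(tiff_paths[1:], start=1):
            dset[i] = tifffile.imread(path)  # only one slice in memory at a time
```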
Most recently this happened on @AlexYkHao's data from 20220417. I've seen it before, and it seems to happen only when the server and queue watcher are running at the same time, and also when a big anatomical scan is being converted.
From dataflow_log_20220417-212808.txt