dcm2niix on a cluster #104
Hi, maybe I'm wrong, but I guess the performance is limited by I/O. A cluster is generally tuned for tasks with high computational loads; it may not do as well with I/O-heavy tasks. You might observe a similar result if you try to decompress an archive containing many small files on a cluster: it will probably be slower than on your local computer, where the I/O resources are exclusively yours.
One other thought: it might be worth talking to your acquisition team about adjusting the DICOM output, and to the vendor about streamlining their implementation of DICOM. From the user's perspective it is really convenient that Philips allows fMRI data to be saved either as a single 4D file or as thousands of separate images. You may want to see whether selecting between these two options makes a difference to your conversion times. It is possible that dcm2niix can be improved to read the 4D files more efficiently. However, it is certainly the case that from the developer's perspective the current Philips 4D implementation is extremely laborious to decode, and streamlining this at the source could make everyone's life easier.
Thank you so much for all the help! I will discuss these ideas with our cluster and MR acquisition team, and post any solutions we come up with. Much appreciated!
@pspec: I suggest you download and build the latest version (13-June-2017). One challenge with DICOM files is that we do not know the size of the header until we have parsed it. In the past, I simply loaded the entire file into RAM. The new version loads your DICOM file in 1 MB segments. This is faster for huge 4D Philips datasets (e.g. a 270 MB DICOM file has a ~8 MB header), in particular on systems with slow disk access (e.g. clusters). Note that we still eventually need to load the whole image and write it to disk. In practice, I think this new version ends up being around 15% faster. The value "MaxBufferSz" in nii_dicom.cpp controls the size of the cache, with a 1 MB default. You can reinstate the old behavior by compiling with the "-dmyLoadWholeFileToReadHeader" directive.
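To make the buffering idea concrete, here is a minimal C++ sketch, not the actual dcm2niix code: it scans a file for the PixelData tag (7FE0,0010) in 1 MB chunks instead of reading the whole file into memory, which is roughly the behavior the new MaxBufferSz cache enables. The names kMaxBufferSz and findPixelDataOffset are illustrative; a real DICOM parser walks the tag structure and handles transfer syntaxes rather than pattern-matching raw bytes.

```cpp
// Sketch only: find the byte offset of the PixelData tag (7FE0,0010) by
// reading the file in fixed-size chunks, rather than loading it all to RAM.
#include <cstdio>
#include <cstring>
#include <vector>

constexpr size_t kMaxBufferSz = 1 << 20; // 1 MB chunk, mirroring the default cache size

// Returns the absolute byte offset of the PixelData tag, or -1 if not found.
long findPixelDataOffset(const char *path) {
    const unsigned char tag[4] = {0xE0, 0x7F, 0x10, 0x00}; // (7FE0,0010), little-endian
    FILE *f = std::fopen(path, "rb");
    if (!f) return -1;
    std::vector<unsigned char> buf(kMaxBufferSz + 3); // +3 bytes of overlap between chunks
    long chunkStart = 0;  // absolute file offset of buf[0]
    size_t carry = 0;     // bytes carried over from the previous chunk
    for (;;) {
        size_t n = std::fread(buf.data() + carry, 1, kMaxBufferSz, f);
        if (n == 0) break;
        size_t total = carry + n;
        for (size_t i = 0; i + 4 <= total; ++i) {
            if (std::memcmp(buf.data() + i, tag, 4) == 0) {
                std::fclose(f);
                return chunkStart + static_cast<long>(i);
            }
        }
        // Keep the last 3 bytes so a tag split across a chunk boundary is still found.
        carry = total < 3 ? total : 3;
        std::memmove(buf.data(), buf.data() + total - carry, carry);
        chunkStart += static_cast<long>(total - carry);
    }
    std::fclose(f);
    return -1;
}

int main(int argc, char **argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s file.dcm\n", argv[0]);
        return 1;
    }
    long off = findPixelDataOffset(argv[1]);
    std::printf("PixelData tag at byte offset %ld\n", off);
    return off < 0 ? 1 : 0;
}
```

The point of the sketch is only that header parsing touches a small prefix of the file, so a bounded read buffer avoids pulling a multi-hundred-megabyte 4D file through a slow cluster filesystem before conversion even begins.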
Hello, any advice for running dcm2niix on an HPC cluster? I'm finding it to be drastically slower on the cluster than running locally. I'm trying to convert multi-band fMRI datasets acquired on a Philips machine. Each fMRI run is approximately 20,000 DICOM files.
I typically request 1 node with 16 GB of physical memory when submitting dcm2niix jobs to the cluster, and it takes about 20-30 minutes for a set of 20,000 DICOMs. Running it locally takes about 2 minutes for the same set. I have tried both internal compression and pigz, but it still takes very long regardless.
Thanks for the help!