
Overlay segmentations #248

Open · wants to merge 2 commits into base: master
Conversation

JuanPabloMontoya271
Copy link

Enabled segmentation overlay for NIFTI, NRRD, and VASP.

Currently it only supports single-label segmentations loaded from the GUI Inspector.

Next steps:

  • Add multi-label segmentations
  • Add a Segmentation Transfer Function to modify the colors of the labels
  • Add DICOM segmentations
[Image: segmentation overlay screenshot]

Data retrieved from TotalSegmentator Dataset

@mlavik1
Owner

mlavik1 commented May 10, 2024

Hi!
Thanks a lot for this PR. I'll see if I can test it sometime this weekend :)

Update: Might need a few more days (these weeks have been a bit busy)

@MichaelOvens
Contributor

Segmentation overlay would be a great addition to this project, but I'm not sure about the VolumeLoader class: it seems much heavier than it needs to be. There are already established functions and classes for loading volumes, so why do we need a new one? It's also unclear from the name that this class doesn't just load volumes; it loads segmentations.

I'm wondering if a better approach would be to load the segmentation volume as a separate dataset, then expose an inspector field on the Volume Rendered Object where it could be dropped and a button pressed to apply the dataset as a segmentation. That would copy the segmentation's _DataTex to the base object's _SegmentationTex and then destroy the segmentation object. The advantage is that we could re-use all of the existing import pathways; the disadvantage is that the segmentation would have to be rendered before being applied. But that might not be a bad thing, as it would allow visual inspection of the segmentation before application.
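The suggested data flow could be sketched roughly like this in plain Python (illustrative names only; the actual project works with Unity textures such as _DataTex, not Python lists):

```python
# Hypothetical sketch of the suggested workflow: load the segmentation as a
# regular dataset, copy its voxel data into the base object's segmentation
# channel, then discard the temporary segmentation object.

class VolumeDataset:
    """Minimal stand-in for a volume dataset: a flat list of voxel values."""
    def __init__(self, voxels):
        self.data = list(voxels)   # stand-in for the _DataTex contents
        self.segmentation = None   # stand-in for the _SegmentationTex contents

def apply_segmentation(base, seg):
    """Copy the segmentation dataset's voxels onto the base dataset."""
    if len(seg.data) != len(base.data):
        raise ValueError("segmentation must match the base volume's dimensions")
    base.segmentation = list(seg.data)  # copied, so `seg` can be destroyed

ct = VolumeDataset([10, 55, 200, 128])
labels = VolumeDataset([0, 1, 1, 0])   # one label ID per voxel
apply_segmentation(ct, labels)
print(ct.segmentation)  # [0, 1, 1, 0]
```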

@mlavik1
Owner

mlavik1 commented Aug 24, 2024

Hi @JuanPabloMontoya271 and @MichaelOvens
Thanks to both of you for your contributions here (code, suggestions, etc.).

I think what @MichaelOvens suggested could work well. I've made a prototype implementation, and plan to finalise that and make a PR soon. That can also be used for PET/CT datasets, such as this one: https://drive.google.com/file/d/1FOmLZGQBVLp66DcDYAjWs5qxoW0zo30f/view?usp=sharing
(source: https://www.aliza-dicom-viewer.com/download/datasets)

I could let the user load two datasets, and then link them together. Then in the shader I'll have a secondary volume and secondary transfer function. Later we could also implement multiple transfer functions for each segmentation label. And as an optimisation, it might be possible to combine the two datasets together in one texture, to make texture fetch faster. Lots of fun things to work on!
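The single-texture optimisation mentioned above might look roughly like this (a toy Python sketch; in practice the two volumes would occupy separate channels of one 3D texture):

```python
# Toy sketch: interleave the primary density volume and the secondary volume
# (PET values, or segmentation label IDs) so one "fetch" returns both values.

def pack_volumes(density, secondary):
    """Combine two same-sized volumes into one list of (density, secondary)
    pairs, mimicking a two-channel texture."""
    if len(density) != len(secondary):
        raise ValueError("volumes must have the same dimensions")
    return list(zip(density, secondary))

packed = pack_volumes([10, 55, 200], [0, 3, 3])
d, label = packed[1]  # a single lookup yields both values
print(d, label)       # 55 3
```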

Questions:

  • Do you have any extra sample datasets that you could share with me? I'll try out the TotalSegmentator dataset as suggested in the PR. Other suggestions are welcome too!
  • If you have any examples from other software that I could use as a reference, I'd greatly appreciate any suggestions!

@mlavik1
Owner

mlavik1 commented Aug 28, 2024

Hi again! So I've made a PR based on what we discussed above: #264
I needed to implement support for PET/CT rendering, and it turned out that can be implemented quite similarly to segmentations. So the implementation supports both PET/CT and segmentations. It uses a secondary dataset and transfer function. For PET/CT the TF will map density values to colour/alpha. For segmentations the TF is internally used to map segmentation label ID to colour.
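For the segmentation case, the TF acts as a lookup table from label ID to colour, roughly like this (hypothetical values and names, not the project's actual TF format):

```python
# Toy lookup table: segmentation label ID -> RGBA colour (0-255 per channel).
# Label 0 (background) maps to a fully transparent colour.
LABEL_TF = {
    0: (0, 0, 0, 0),       # background: invisible
    1: (255, 0, 0, 128),   # label 1: semi-transparent red
    2: (0, 255, 0, 128),   # label 2: semi-transparent green
}

def sample_label_tf(label_id):
    """Look up the colour for a label, defaulting to fully transparent."""
    return LABEL_TF.get(label_id, (0, 0, 0, 0))

print(sample_label_tf(1))  # (255, 0, 0, 128)
```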

There are also two render modes for segmentations:

  • Overlay: Draw segmentations on top of volume
  • Isolate: Only render the part of the volume that overlaps with a segmentation label
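A toy per-sample compositing rule for the two modes could be sketched like this (illustrative Python; the real logic lives in the shader):

```python
def blend(top, bottom, alpha):
    """Integer alpha blend of one colour channel (alpha in 0..255)."""
    return (top * alpha + bottom * (255 - alpha)) // 255

def shade(base_rgba, label_rgba, mode):
    """Toy compositing rule for one sample in the two render modes."""
    la = label_rgba[3]
    if mode == "overlay":
        # Draw the segmentation colour on top of the volume sample.
        r, g, b = (blend(l, v, la)
                   for l, v in zip(label_rgba[:3], base_rgba[:3]))
        return (r, g, b, max(base_rgba[3], la))
    if mode == "isolate":
        # Keep the volume sample only where a segmentation label is present.
        return base_rgba if la > 0 else (0, 0, 0, 0)
    raise ValueError("unknown mode: " + mode)

print(shade((100, 100, 100, 255), (255, 0, 0, 128), "overlay"))  # (177, 49, 49, 255)
print(shade((100, 100, 100, 255), (0, 0, 0, 0), "isolate"))      # (0, 0, 0, 0)
```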

Next I'll see if I can add a feature that lets you set a separate TF for each segmentation label. That should be fairly easy, since we can just stack the TFs on top of one another in the generated Texture2D.
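The stacking idea could be sketched like this: each label gets one row in a 2D table, and a sample is looked up by (label ID, normalised density). Hypothetical Python with a tiny TF resolution:

```python
WIDTH = 4  # TF resolution (tiny, just for the example)

# One TF row per label: row index = label ID, column = density bucket.
stacked_tf = [
    [(0, 0, 0, 0)] * WIDTH,                       # label 0: transparent
    [(255, 0, 0, a) for a in (0, 64, 128, 255)],  # label 1: red alpha ramp
]

def sample_stacked_tf(label_id, density_norm):
    """Look up a colour by label ID (row) and normalised density (column)."""
    col = min(int(density_norm * WIDTH), WIDTH - 1)
    return stacked_tf[label_id][col]

print(sample_stacked_tf(1, 0.9))  # (255, 0, 0, 255)
```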

Thanks to both of you for suggestions!
I'll likely merge this soon, but let me know if you notice anything that looks wrong, or if you have any further suggestions.

@JuanPabloMontoya271
Author

Thank you, that sounds like a good plan. I had also been experimenting with PET/CT; it will be a good addition.

Let me know if there is something I can help with.

@mlavik1
Owner

mlavik1 commented Aug 28, 2024

> Thank you, that sounds like a good plan. I had also been experimenting with PET/CT, it will be a good addition.
>
> Let me know if there is something I can help with.

Ok, that's nice! You might have more experience with that than me then, since this was my first experiment with PET/CT. So please let me know if I've overlooked something important.
And thanks, you've already helped a lot! (both with the initial implementation here and the dataset you recommended).
I'll happily add you to the CREDITS.md, if you don't mind? UPDATE: I see you did that already in this PR, so I just added your name to the CREDITS.md in my branch as well :)

@JuanPabloMontoya271
Author

Thank you for adding me to the credits :)
