MONAI Versatile Imaging SegmenTation and Annotation
(We're seeking collaborators. If your institution is interested, please fill out the survey: https://forms.office.com/r/RedPQc9fmw)
- Overview
- MONAI VISTA Training and Fine-Tuning
- MONAI VISTA with MONAI Label
- Video Demo
- Community
- License
- Reference
MONAI Meetup presentation at MIDL 2023
MONAI VISTA provides domain-specific workflows for building and utilizing foundation models for medical image segmentation. It leverages state-of-the-art deep learning technology to establish a new collaborative approach for developing robust and versatile segmentation models and applications.
This repository hosts the ongoing effort of building MONAI VISTA and is currently under active development.
This section covers the MONAI Label integration and sample apps. The integration is a server-client system that facilitates interactive medical image segmentation with VISTA via the sample 3D Slicer plugin.
MONAI VISTA models are integrated via MONAI Label. Install MONAI Label locally and run it with your preferred visualization tool; the stable versions listed are the viewers currently tested and supported with the latest release of MONAI Label.
Refer to the MONAI Label installation page for details. For milestone releases, users can install from PyPI with the command:
pip install monailabel
For Docker and GitHub installation, refer to the MONAI Label GitHub repository.
Built on MONAI Label, MONAI VISTA is packaged as an app. The app provides example models for both interactive and "everything" segmentation of medical images, highlighting the prompt-based segmentation experience: class prompts and point-click prompts drive segmentation with the latest deep learning architectures (e.g., the Segment Anything Model (SAM)) for multiple lung, abdominal, and pelvic organs. Interactive tools, including control points and class-prompt checkboxes, are provided through the viewer plugins.
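As a rough illustration of what a point prompt looks like on the client side, the sketch below assembles foreground/background click coordinates into a request payload. The `build_point_prompt` helper and the `"foreground"`/`"background"` key names follow MONAI Label's DeepEdit-style convention and are assumptions here; the exact schema used by the VISTA model may differ.

```python
import json

def build_point_prompt(foreground, background=None):
    """Assemble point-prompt parameters for an interactive segmentation
    request. Each point is an (x, y, z) voxel coordinate. Key names are
    an assumption modeled on MONAI Label's DeepEdit convention."""
    return {
        "foreground": [list(p) for p in foreground],
        "background": [list(p) for p in (background or [])],
    }

# One positive click inside the organ, one negative click outside it:
params = build_point_prompt(foreground=[(120, 85, 42)], background=[(30, 30, 10)])
print(json.dumps(params))
```

In the 3D Slicer plugin these coordinates come from the control-point tools; the class-prompt checkboxes would populate an analogous class list instead of point lists.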
Get the monaivista app with:
# Clone MONAI VISTA repo
git clone git@github.com:Project-MONAI/VISTA.git
# the sample monaivista app is in the monailabel folder
cd VISTA/monailabel
For more details on the monaivista app, see the sample-app page.
The interactive annotation experience with prompt-based segmentation models requires integration with medical image viewers. MONAI VISTA and MONAI Label support multiple open-source viewers, such as 3D Slicer and OHIF.
Example of 3D Slicer integration:
3D Slicer is a free, open-source software platform for visualization, processing, segmentation, registration, and analysis of 3D images and meshes. It is a mature and well-tested viewer for radiology studies and algorithms.
To use MONAI Label with 3D Slicer, you'll need to download and install 3D Slicer. MONAI Label supports the stable and preview versions of 3D Slicer, version 5.0 or higher. For more information on installing 3D Slicer, check out the 3D Slicer documentation.
The plugin needs to be added in developer mode. Please follow the steps below.
- Clone the MONAI VISTA repository:
git clone git@github.com:Project-MONAI/VISTA.git
- Find the plugin folder:
plugins/slicer/MONAILabel
- Open 3D Slicer: Go to Edit -> Application Settings -> Modules -> Additional Module Paths
- Add New Module Path: <FULL_PATH>/plugins/slicer/MONAILabel (You can drag the slicer/MONAILabel folder to the module panel.)
- Restart 3D Slicer
Prepare some sample data to start with:
Download the MSD pancreas dataset as the sample dataset using the monailabel API. The task is the volumetric (3D) segmentation of the pancreas from CT images. The dataset is from the 2018 MICCAI Medical Segmentation Decathlon (MSD) challenge.
monailabel datasets --download --name Task07_Pancreas --output .
Specify the sample app and the sample dataset path in the following command:
monailabel start_server --app monaivista --studies ./Task07_Pancreas/imagesTs --conf models vista_point_2pt5
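Before opening the viewer, you can confirm the server is up from Python. The sketch below builds the URL of MONAI Label's `/info/` endpoint, which reports the loaded app and models; the host and port are assumptions matching the defaults of `monailabel start_server` (a local server on port 8000).

```python
import json
import urllib.request

# Assumption: local MONAI Label server on the default port 8000.
MONAILABEL_SERVER = "http://127.0.0.1:8000"

def info_url(server: str) -> str:
    """Build the URL of the MONAI Label /info/ endpoint."""
    return server.rstrip("/") + "/info/"

# With the server from the command above running, this would list the
# available models (e.g., vista_point_2pt5):
# with urllib.request.urlopen(info_url(MONAILABEL_SERVER)) as resp:
#     print(json.load(resp).get("models", {}))
```

The same REST API is what the 3D Slicer plugin talks to when you connect it to the server.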
- Open 3D Slicer and the MONAI VISTA-Label plugin.
- Connect to the monailabel server and start annotating!
Join the conversation on Twitter @ProjectMONAI or join our Slack channel.
Ask and answer questions on MONAI VISTA's GitHub discussions tab.
The model is licensed under the Apache 2.0 license.
The current model is trained and developed based on the Segment Anything Model (SAM). Check the third-party license for reference.
We greatly appreciate the authors of Segment Anything and TotalSegmentator for releasing their work under a permissive license to the community.
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv preprint arXiv:2304.02643},
year={2023}
}
@article{wasserthal2022totalsegmentator,
title={TotalSegmentator: robust segmentation of 104 anatomical structures in CT images},
author={Wasserthal, Jakob and Meyer, Manfred and Breit, Hanns-Christian and Cyriac, Joshy and Yang, Shan and Segeroth, Martin},
journal={arXiv preprint arXiv:2208.05868},
year={2022}
}
This integration is based on MONAI Label:
@article{diaz2022monai,
title={MONAI Label: A framework for AI-assisted interactive labeling of 3D medical images},
author={Diaz-Pinto, Andres and Alle, Sachidanand and Nath, Vishwesh and Tang, Yucheng and Ihsani, Alvin and Asad, Muhammad and P{\'e}rez-Garc{\'\i}a, Fernando and Mehta, Pritesh and Li, Wenqi and Flores, Mona and others},
journal={arXiv preprint arXiv:2203.12362},
year={2022}
}