DISCONTINUATION OF PROJECT

This project will no longer be maintained by Intel. Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project. Intel no longer accepts patches to this project.

Brain Tumor Segmentation (BraTS) with Intel® Distribution of OpenVINO™ toolkit

Details
Target OS: Ubuntu* 18.04 LTS
Programming Language: Python* 3.6
Time to Complete: 30-40 min

[Image: brain-tumor-segmentation-OpenVINO]

What it does

This reference implementation applies the U-Net architecture to segment brain tumors from raw MRI scans. The application plots the segmented brain tumor matter and calculates the Dice coefficient between the ground truth and the predicted result.

Requirements

Hardware

  • 6th to 8th Generation Intel® Core™ processor with Iris® Pro graphics or Intel® HD Graphics

Software

  • Ubuntu* 18.04 LTS
    Note: We recommend using a 4.14+ Linux* kernel with this software. Run the following command to determine your kernel version:

    uname -a
    
  • OpenCL™ Runtime Package

  • Intel® Distribution of OpenVINO™ toolkit 2020 R3

  • Matplotlib

How it works

The application uses MRI scans as the input data source. The model's results are used to calculate the Dice coefficient and to plot the predicted segmentation of the brain tumor matter.

[Image: Architecture Diagram]

The Dice coefficient (the standard metric for the BraTS dataset used in the application) for our model is about 0.82-0.88. Menze et al. reported that expert neuroradiologists manually segmented these tumors with a cross-rater Dice score of 0.75-0.85, meaning that the model's predictions are on par with those of expert physicians. The MRI brain scans below highlight brain tumor matter segmented using deep learning.

[Image: brain-tumor-segmentation]
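For reference, the Dice coefficient between a predicted mask and its ground truth can be computed with NumPy as below. This is a minimal sketch of the standard formula; the application's own implementation may differ in details such as smoothing terms.

import numpy as np

def dice_coefficient(pred, truth):
    # Dice = 2 * |A ∩ B| / (|A| + |B|) for binary masks A and B
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # By convention, two empty masks count as a perfect match
    return 2.0 * intersection / total if total else 1.0

# Example with two random binary masks (real masks come from the model)
a = np.random.rand(128, 128) > 0.5
b = np.random.rand(128, 128) > 0.5
print(dice_coefficient(a, b))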

What is U-Net?

The U-Net architecture is used to create deep learning models for segmenting nerves in ultrasound images, lungs in CT scans, and even interference in radio telescopes.

U-Net is designed like an auto-encoder. It has an encoding path (“contracting”) paired with a decoding path (“expanding”) which gives it the “U” shape. However, in contrast to the autoencoder, U-Net predicts a pixelwise segmentation map of the input image rather than classifying the input image as a whole. For each pixel in the original image, it asks the question: “To which class does this pixel belong?” This flexibility allows U-Net to predict different parts of the tumor simultaneously.

[Image: U-Net]
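To make the encoder/decoder structure concrete, here is a deliberately tiny U-Net-style model in tf.keras. It illustrates the architecture only; the bundled unet_model_for_decathlon.hdf5 has its own depth, filter counts, and input shape (see the IntelAI/unet repository for the actual training code).

from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the original U-Net paper
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def tiny_unet(input_shape=(128, 128, 1)):
    inputs = layers.Input(shape=input_shape)

    # Contracting ("encoding") path
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = conv_block(p2, 64)

    # Expanding ("decoding") path with skip connections
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)

    # Pixelwise prediction: one sigmoid output per pixel
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = tiny_unet()
model.summary()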

Setup

Get the code

Clone the reference implementation:

sudo apt-get update && sudo apt-get install git
git clone https://github.com/intel-iot-devkit/brain-tumor-segmentations.git

Install the Intel® Distribution of OpenVINO™ toolkit

Refer to Install Intel® Distribution of OpenVINO™ toolkit for Linux* for instructions on how to install and set up the Intel® Distribution of OpenVINO™ toolkit.

You will need the OpenCL™ Runtime Package if you plan to run inference on the GPU. It is not mandatory for CPU inference.

Other dependencies

NumPy

NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.

Matplotlib

Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications.
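As an illustration of how these two libraries work together, the following stand-alone snippet plots a slice next to two masks, similar in spirit to the figures the application produces. The arrays are random stand-ins, not real MRI data.

import numpy as np
import matplotlib.pyplot as plt

mri = np.random.rand(128, 128)           # stand-in MRI slice
truth = np.random.rand(128, 128) > 0.7   # stand-in ground-truth mask
pred = np.random.rand(128, 128) > 0.7    # stand-in predicted mask

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, img, title in zip(axes, [mri, truth, pred],
                          ["MRI", "Ground truth", "Prediction"]):
    ax.imshow(img, cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.tight_layout()
plt.show()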

Which model to use

This application uses a pre-trained model (unet_model_for_decathlon.hdf5), which is provided in the /resources directory. The model is trained on the Task01_BrainTumour.tar dataset from the Medical Segmentation Decathlon, made available under the CC BY-SA 4.0 license. Instructions on how to train the model can be found at https://github.com/IntelAI/unet/tree/master/2D

To install the dependencies of the reference implementation and to optimize the pre-trained model, run the following command:

cd <path_to_the_Brain_Tumor_Segmentation_OpenVINO_directory>
./setup.sh

What input to use

The application uses MRI scans from Task01_BrainTumour.h5, which is provided in the /resources directory.
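If you want to inspect the file before running the application, h5py (an assumption here; it is not listed among the dependencies above) can list the datasets it contains without assuming any particular key names:

import h5py

def describe(name, obj):
    # Print shape and dtype for each dataset; groups are skipped
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

with h5py.File("../resources/Task01_BrainTumour.h5", "r") as f:
    f.visititems(describe)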

Set up the environment

You must configure the environment to use the Intel® Distribution of OpenVINO™ toolkit one time per session by running the following command:

source /opt/intel/openvino/bin/setupvars.sh

Note: This command needs to be run only once in the terminal where the application will be executed. If the terminal is closed, run it again in the new session.
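One quick way to confirm the environment is set in the current terminal is to check for one of the variables the script exports (INTEL_OPENVINO_DIR is used here on the assumption that the 2020.x setupvars.sh exports it):

import os

# Empty/placeholder output means setupvars.sh has not been sourced in this shell
print(os.environ.get("INTEL_OPENVINO_DIR", "OpenVINO environment not set"))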

Run the Application

cd <path_to_the_Brain_Tumor_Segmentation_OpenVINO_directory>/application

To see a list of the various options:

./brain_tumor_segmentation.py -h

You can specify which target device to run on by using the -d command-line argument followed by one of the values CPU, GPU, HDDL, or MYRIAD.
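Under the hood, device selection in an OpenVINO 2020.x Python application boils down to the device_name passed to IECore.load_network. The following minimal sketch (file paths and the dummy input are placeholders, not the application's actual code) shows the pattern:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="saved_model.xml", weights="saved_model.bin")
# device_name may be "CPU", "GPU", "MYRIAD", or "HDDL"
exec_net = ie.load_network(network=net, device_name="CPU")

# net.inputs is the 2020.x name for the input map (renamed in later releases)
input_blob = next(iter(net.inputs))
shape = net.inputs[input_blob].shape
# Inference on a zero-filled dummy tensor of the expected shape
result = exec_net.infer({input_blob: np.zeros(shape, dtype=np.float32)})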

Running on the CPU

Although the application runs on the CPU by default, this can also be explicitly specified through the -d CPU command-line argument:

./brain_tumor_segmentation.py -r ../results/ -m ../resources/output/IR_models/FP32/saved_model.xml -d CPU --data_file ../resources/Task01_BrainTumour.h5

Running on the integrated GPU

  • To run on the integrated Intel® GPU with floating point precision 32 (FP32), use the -d GPU command-line argument:

    ./brain_tumor_segmentation.py -r ../results/ -m ../resources/output/IR_models/FP32/saved_model.xml -d GPU --data_file ../resources/Task01_BrainTumour.h5
    

FP32: Single-precision floating-point arithmetic uses 32 bits to represent numbers: 1 bit for the sign, 8 bits for the exponent, and 23 bits for the fraction.

  • To run on the integrated Intel® GPU with floating point precision 16 (FP16), use the following command:

    ./brain_tumor_segmentation.py -r ../results/ -m ../resources/output/IR_models/FP16/saved_model.xml -d GPU --data_file ../resources/Task01_BrainTumour.h5
    

FP16: Half-precision floating-point arithmetic uses 16 bits: 1 bit for the sign, 5 bits for the exponent, and 10 bits for the fraction. The short NumPy demonstration after this list makes the difference in precision tangible.
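A two-line NumPy check shows a value that FP32 can represent but FP16 cannot:

import numpy as np

value = 1.0 + 2.0 ** -12    # the extra bit fits in FP32's 23-bit fraction
print(np.float32(value))    # 1.0002441
print(np.float16(value))    # 1.0 -- FP16's 10-bit fraction cannot hold it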

Running on the Intel® Neural Compute Stick 2

To run on the Intel® Neural Compute Stick 2, use the -d MYRIAD command-line argument.

./brain_tumor_segmentation.py -r ../results/ -m ../resources/output/IR_models/FP16/saved_model.xml -d MYRIAD --data_file ../resources/Task01_BrainTumour.h5

Note: The Intel® Neural Compute Stick 2 can only run FP16 models. The model that is passed to the application, through the -m <path_to_model> command-line argument, must be of data type FP16.

Running on the Intel® Movidius™ Vision Processing Unit (VPU)

To run on the Intel® Movidius™ Vision Processing Unit (VPU), use the -d HDDL command-line argument:

./brain_tumor_segmentation.py -r ../results/ -m ../resources/output/IR_models/FP16/saved_model.xml -d HDDL --data_file ../resources/Task01_BrainTumour.h5

Note: The Intel® Movidius™ VPU can only run FP16 models. The model that is passed to the application, through the -m <path_to_model> command-line argument, must be of data type FP16.