A DenseBlock-Unet for Retinal Blood Vessel Segmentation
Notice: the project structure was updated on 9th June!
You can find the old version in the `old` branch.
This model is inspired by DenseNet and @orobix/retina-unet. I replaced the plain Conv2d blocks with DenseBlocks and obtained better results. The DenseBlock structure is shown below; it maximizes reuse of the extracted features. If you want further information, please read the DenseNet paper and code.
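Below is a minimal sketch of what such a dense block can look like with the Keras functional API. The number of layers, growth rate, and BN/ReLU ordering are illustrative assumptions, not the exact configuration used in this repository.

```python
# Sketch of a dense block (Keras functional API). Layer count, growth
# rate, and ordering are illustrative assumptions; see the DenseNet
# paper and this repo's model code for the actual configuration.
from keras.layers import BatchNormalization, Activation, Conv2D, concatenate

def dense_block(x, num_layers=4, growth_rate=12):
    """Each conv layer sees the concatenation of all previous feature
    maps, so extracted features are reused throughout the block."""
    features = [x]
    for _ in range(num_layers):
        y = concatenate(features) if len(features) > 1 else features[0]
        y = BatchNormalization()(y)
        y = Activation('relu')(y)
        y = Conv2D(growth_rate, (3, 3), padding='same')(y)
        features.append(y)
    return concatenate(features)

# Usage inside a U-Net encoder stage (illustrative):
# x = dense_block(inputs, num_layers=4, growth_rate=12)
```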
Trained with the 40 images of the DRIVE dataset and the DenseBlock-Unet model. Results on the DRIVE database:
Methods | AUC ROC on DRIVE |
---|---|
Liskowski | 0.9790 |
Retina-Unet | 0.9790 |
VesselNet | 0.9841 |
The project structure is based on my own DL_Segmention_Template. The difference between this project and the template is the metric module in perception/metric/. For more information about the structure, please see the readme in DL_Segmention_Template.
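For reference, the AUC ROC values reported above can be computed pixel-wise. The following is a small sketch using scikit-learn, not necessarily the implementation in perception/metric/; the `fov_mask` argument is a hypothetical way to restrict the evaluation to the field of view.

```python
# Sketch of pixel-wise AUC ROC evaluation using scikit-learn.
# Illustrative stand-in for the metric module in perception/metric/.
import numpy as np
from sklearn.metrics import roc_auc_score

def vessel_auc(y_true, y_pred, fov_mask=None):
    """AUC between ground-truth vessel masks and predicted
    probabilities, optionally restricted to the field of view."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    if fov_mask is not None:
        keep = np.asarray(fov_mask).ravel() > 0
        y_true, y_pred = y_true[keep], y_pred[keep]
    return roc_auc_score((y_true > 0.5).astype(int), y_pred)
```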
You can find the model parameters in configs/segmention_config.json.
Run main_trainer.py once first; this creates the data route in the experiment directory. Put your data there, then run main_trainer.py again to train a model.
The model was trained on the DRIVE dataset on my own desktop (Intel i7-7700HQ, 24 GB RAM, GTX 1050 2 GB) within 30 minutes. The dataset and a pretrained model can be found here. Chinese users can download here.
If you want to test your own image, put it in (VesselNet)/test/origin, change the img_type of the predict settings in configs/segmention_config.json (a sketch is shown below), and run main_test.py to get your result. The result is written to (VesselNet)/test/result.
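The predict setting can also be edited programmatically. The nesting used below ("predict" -> "img_type") is an assumption about the layout of segmention_config.json; check the actual file before relying on it.

```python
# Sketch: point the predict settings at your image type before running
# main_test.py. The key nesting is an assumption about the config layout.
import json

with open('configs/segmention_config.json') as f:
    cfg = json.load(f)

cfg['predict']['img_type'] = 'jpg'   # e.g. 'png', 'tif', 'jpg'

with open('configs/segmention_config.json', 'w') as f:
    json.dump(cfg, f, indent=2)
```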
First of all, I chose 48x48-pixel patches to train the model (see the patch-sampling sketch below). This patch size limits how deep the model can be, so in the future I want to try 96x96 and 128x128 patches.
Second, an attention-based U-Net and DeepLab-v3+ are also worth trying.
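For illustration, here is a rough sketch of 48x48 patch sampling. The uniform-random strategy and the absence of a field-of-view check are simplifications and may differ from the actual training pipeline.

```python
# Sketch of random 48x48 patch sampling from a fundus image and its
# vessel mask. Uniform random sampling with no FOV check is an
# illustrative simplification.
import numpy as np

def sample_patches(image, mask, patch_size=48, n_patches=1000, seed=0):
    rng = np.random.RandomState(seed)
    h, w = image.shape[:2]
    patches, labels = [], []
    for _ in range(n_patches):
        y = rng.randint(0, h - patch_size + 1)
        x = rng.randint(0, w - patch_size + 1)
        patches.append(image[y:y + patch_size, x:x + patch_size])
        labels.append(mask[y:y + patch_size, x:x + patch_size])
    return np.stack(patches), np.stack(labels)
```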