You can find the latest version of our paper here.
Hirofumi Kobayashi @liilii_tweet
Ahmet Can Solak @_ahmetcansolak
Joshua Batson @thebasepoint
Loic A. Royer @loicaroyer
We propose a general framework for solving inverse problems in the presence of noise that requires no signal prior, no noise estimate, and no clean training data. The only assumptions are that the forward model is available and differentiable, and that the noise exhibits statistical independence across different measurement dimensions. We build upon the theory of 'J-invariant' functions (Batson & Royer 2019) and show how self-supervised denoising à la Noise2Self is a special case of learning a noise-tolerant pseudo-inverse of the identity. We demonstrate our approach by showing how a convolutional neural network can be trained in a self-supervised manner to deconvolve images and surpass classical inversion schemes such as Lucy-Richardson deconvolution in image quality.
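To make the approach concrete, here is a minimal sketch of one training step, assuming a PyTorch setup; it is not the code from this repository, and net, psf, and noisy are placeholders for a CNN, a point-spread function, and a noisy measurement. The network sees the measurement with a random subset of pixels hidden (the J-invariance trick from Noise2Self), its output is pushed through the known forward model (here, convolution with the PSF), and the loss is evaluated only on the hidden pixels so the network cannot simply reproduce the noise.

import torch
import torch.nn.functional as F

def forward_model(x, psf):
    # Known, differentiable forward model: blur the estimate with the PSF.
    # x and psf are 4D tensors (batch, channels, height, width); assumes an odd-sized PSF.
    return F.conv2d(x, psf, padding=psf.shape[-1] // 2)

def ssi_training_step(net, optimizer, noisy, psf, mask_prob=0.01):
    # Hide a random subset of pixels from the network input (J-invariance).
    mask = (torch.rand_like(noisy) < mask_prob).float()
    estimate = net(noisy * (1.0 - mask))       # candidate deconvolved image
    reblurred = forward_model(estimate, psf)   # re-apply the forward model
    # Score the re-blurred estimate only on the hidden pixels, so the network
    # cannot learn to copy the (independent) noise of the measurement.
    loss = (mask * (reblurred - noisy) ** 2).sum() / mask.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()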
$ git clone https://github.com/royerlab/ssi-code
$ cd ssi-code
We recommend that you create a dedicated conda environment for SSI:
$ conda create --name ssi python=3.7
$ conda activate ssi
$ pip install -r requirements.txt
# (For CUDA 10.0)
$ pip install cupy-cuda100
# (For CUDA 10.1)
$ pip install cupy-cuda101
# (For CUDA 10.2)
$ pip install cupy-cuda102
If you are using a conda environment:
$ conda install cudatoolkit=CUDA_VERSION
If you are NOT using a conda environment, make sure the CUDA drivers matching your chosen cupy package are properly installed on your system.
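For example, if your system has CUDA 10.2 (adjust to your actual CUDA version):
$ conda install cudatoolkit=10.2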
You can find the demos in the ssi/demo/demo2D.py and ssi/demo/demo3D.py files.
You can run the demos with:
$ python -m ssi.demo.demo2D
$ python -m ssi.demo.demo3D
This should go fast if your GPU is reasonably recent.
Once done, a napari window will open so you can compare the images. Please note that due to the stochastic nature of CNN training, the very small amount of training data we use, and possibly an incomplete understanding on our part of how best to train these networks, we occasionally observe failed runs.
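If you want to inspect images outside of the demos, opening them in napari by hand looks roughly like the following sketch, where noisy and restored stand for arrays you have loaded yourself:

import napari

viewer = napari.Viewer()
viewer.add_image(noisy, name='noisy input')
viewer.add_image(restored, name='SSI restoration')
napari.run()  # blocks until the viewer window is closed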
Things to observe: Varying the number of iterations for Lucy-Richardson (LR) lets you explore the trade-off between sharpness and noise reduction; LR struggles to achieve both at once. In particular, you can see that the Spectral Mutual Information (SMI) drops dramatically at low iteration counts (particularly for the 'drosophila' image), while the PSNR moves in the opposite direction. That is because, although noise reduction is good at low iteration counts, you lose fidelity in the high frequencies of the image. Why? LR reconstructs images by recovering the low frequencies first and then slowly refining the higher ones -- and that is when trouble arises and noise gets amplified. Different comparison metrics quantify different aspects of image similarity; SMI (introduced in this paper) is good at telling us whether images are dissimilar (or similar) in the frequency domain. Our approach, Self-Supervised Inversion (SSI), achieves a better trade-off in comparison.
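To explore this trade-off outside the demos, a rough illustration using scikit-image could look like the sketch below; it is not part of this repository, and clean, noisy_blurred, and psf are placeholder arrays you would supply yourself.

from skimage.restoration import richardson_lucy
from skimage.metrics import peak_signal_noise_ratio

for n_iter in (2, 5, 10, 20, 50, 100):
    # Third argument: number of Lucy-Richardson iterations.
    deconvolved = richardson_lucy(noisy_blurred, psf, n_iter)
    psnr = peak_signal_noise_ratio(clean, deconvolved,
                                   data_range=float(clean.max() - clean.min()))
    # Few iterations: smooth and low-noise, but poor high-frequency fidelity.
    # Many iterations: sharper high frequencies, but amplified noise.
    print(f"{n_iter:4d} iterations -> PSNR {psnr:.2f} dB")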
You can also try other images with:
$ python -m ssi.demo.demo2D characters
We recommend trying the following images: 'drosophila' (default), 'usaf', and 'characters'.
Feedback, pull-requests, and ideas to improve this work are very welcome and will be duly acknowledged.
Image Deconvolution via Noise-Tolerant Self-Supervised Inversion. Hirofumi Kobayashi, Ahmet Can Solak, Joshua Batson, Loic A. Royer. arXiv 2020.
BSD 3-Clause