
Learning Multi-Instance Sub-pixel Point Localization

ACCV 2020

In this work, we propose a novel approach that allows for the end-to-end learning of multi-instance point detection with inherent sub-pixel precision capabilities. To infer unambiguous localization estimates, our model relies on three components: the continuous prediction capabilities of offset-regression-based models, the finer-grained spatial learning ability of a novel continuous heatmap matching loss function introduced to that effect, and the prediction sparsity ability of count-based regularization. We demonstrate state-of-the-art sub-pixel localization accuracy on molecule localization microscopy and checkerboard detection, and improved sub-frame event detection performance in sport videos.

Both PyTorch and TensorFlow implementations of our loss function are available.

[figure: loss function]
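The exact formulation of the continuous heatmap matching loss lives in the repository's loss implementations; the following is only a minimal PyTorch sketch of the idea, assuming the model outputs sub-pixel point coordinates with confidences. Predicted points and ground-truth points are both rendered as Gaussian mixtures on a reference grid and compared with an L2 penalty, so gradients flow to the continuous coordinates. All names (render_gaussians, heatmap_matching_loss, sigma) are this sketch's own, not the repo's API.

```python
import torch

def render_gaussians(points, weights, grid_size, sigma=1.0):
    # points: (N, 2) sub-pixel (y, x) coordinates in pixels; weights: (N,)
    H, W = grid_size
    ys = torch.arange(H, dtype=points.dtype).view(H, 1).expand(H, W)
    xs = torch.arange(W, dtype=points.dtype).view(1, W).expand(H, W)
    grid = torch.stack([ys, xs], dim=-1)                      # (H, W, 2)
    d2 = ((grid[None] - points[:, None, None]) ** 2).sum(-1)  # (N, H, W)
    bumps = torch.exp(-d2 / (2 * sigma ** 2))                 # one Gaussian per point
    return (weights[:, None, None] * bumps).sum(0)            # (H, W) heatmap

def heatmap_matching_loss(pred_pts, pred_conf, gt_pts, grid_size, sigma=1.0):
    pred_map = render_gaussians(pred_pts, pred_conf, grid_size, sigma)
    gt_map = render_gaussians(gt_pts, torch.ones(len(gt_pts)), grid_size, sigma)
    return ((pred_map - gt_map) ** 2).mean()
```

Because both heatmaps are differentiable functions of the continuous coordinates, the comparison provides finer-grained spatial gradients than matching against a fixed, discretized target grid.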


(Section 3.3) Detection Sparsity (Supplemental Video Generation) [TensorFlow (1.13)]

To produce the convergence videos (NoRegularizer.mp4 and WithRegularizer.mp4), set whether to use the counting regularization (line 7 of main.py) and the name of your video (line 7 of main.py and line 4 of saveVideo.py). Then run:

python main.py
python saveVideo.py
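The regularizer itself is implemented in main.py (TensorFlow 1.13). As a minimal illustration of the idea only, not the repository's exact formulation, a count-based penalty can be written in a few lines of PyTorch; the names count_regularizer and pred_conf are placeholders of this sketch:

```python
import torch

def count_regularizer(pred_conf, true_count):
    # pred_conf: (N,) detection confidences; true_count: ground-truth number of points
    soft_count = pred_conf.sum()                   # differentiable "soft" count
    return (soft_count - float(true_count)) ** 2   # penalize count mismatch
```

Penalizing the gap between the summed confidences and the true point count discourages many low-confidence duplicate detections, which is what produces the sparser convergence behavior shown below.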

Convergence WITHOUT regularization


Convergence WITH regularization


(Section 4.1) Single Molecule Localization Microscopy [PyTorch (1.3)]

- [Deep-Storm Benchmark & Training Data] by Nehme et al. (see demo 1 for the data)

- [Ground-Truth Positions] from Sage et al. (also included in our repo as data/GT.csv)

- [Assessment Tool] by Sage et al.

Create the dataset as in Nehme et al.:

create_dataset.py

To train our model:

python pytoch_Main.py

Before running the script, copy all the benchmark data provided by Nehme et al. (demo 1) into a directory named "benchmark data".

To obtain the final metrics, use the tool provided by Sage et al. (CompareLocalization.jar); the ground-truth positions are given in data/GT.csv.
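For a quick sanity check of the ground-truth file before feeding it to the jar tool, the positions can be inspected with pandas. This is only a sketch; it makes no assumption about the exact column layout of data/GT.csv beyond it being a readable CSV:

```python
import pandas as pd

gt = pd.read_csv("data/GT.csv")   # ground-truth emitter positions
print(gt.shape)                   # number of localizations and columns
print(gt.head())                  # inspect the column layout
```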

[figure: loss function]


(Section 4.2) Checkerboard Corner Detection [PyTorch (1.3)]

- [Test Dataset Request] The test dataset has to be requested from the authors of ROCHADE: Robust Checkerboard Advanced Detection for Camera Calibration.

- [ROCHADE Benchmarks] available upon request to the authors; [OCamCalib Benchmarks]; [OpenCV Benchmarks]

- [Training Dataset] The training dataset can be provided upon request.

For all dataset-related functionalities:

create_dataset.py, create_downsampled_set.py

To train our model:

python pytoch_main_ours.py
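The model follows the offset-regression scheme described in the paper: each grid cell predicts a confidence and a continuous in-cell offset, and an absolute sub-pixel corner coordinate is recovered as cell index plus offset. Below is a minimal PyTorch decoding sketch; the output shapes, the (dy, dx) layout, and the name decode_corners are assumptions of this sketch, not the actual interface of pytoch_main_ours.py:

```python
import torch

def decode_corners(conf, offsets, threshold=0.5):
    # conf: (H, W) corner confidences; offsets: (2, H, W) in-cell (dy, dx) in [0, 1)
    idx = torch.nonzero(conf > threshold)        # (K, 2) integer cell indices
    ys, xs = idx[:, 0], idx[:, 1]
    dy = offsets[0, ys, xs]
    dx = offsets[1, ys, xs]
    # integer cell index + continuous offset = absolute sub-pixel coordinate
    return torch.stack([ys.float() + dy, xs.float() + dx], dim=1)  # (K, 2)
```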

To train the other deep learning benchmark:

python pytoch_main_HM_benchmark.py

All the evaluations can be done using:

pytoch_eval_synthetic.py, pytoch_eval_uEye.py, pytoch_eval_GoPro.py, pytoch_eval_reprojection_uEye.py, pytoch_eval_reprojection_GoPro.py.
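The reprojection-based evaluations calibrate a camera from the detected corners and report the reprojection error. The actual pipeline lives in pytoch_eval_reprojection_uEye.py and pytoch_eval_reprojection_GoPro.py; the snippet below is only a self-contained sketch of that style of evaluation using OpenCV's cv2.calibrateCamera, with an assumed board geometry, image size, and synthetically projected detections so it runs stand-alone:

```python
import numpy as np
import cv2

board_rows, board_cols, square = 6, 9, 1.0           # assumed board geometry
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square

# Synthetic detections for demonstration: project the board through a known
# camera. In practice, substitute the sub-pixel corners predicted by the model.
K_true = np.array([[800., 0., 640.], [0., 800., 512.], [0., 0., 1.]])
objpoints, imgpoints = [], []
for tz in (5.0, 6.0, 7.0):
    rvec = np.array([0.1, -0.1, 0.05]) * tz           # varied poses per view
    tvec = np.array([-4.0, -2.5, tz])
    pts, _ = cv2.projectPoints(objp, rvec, tvec, K_true, None)
    objpoints.append(objp)
    imgpoints.append(pts.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (1280, 1024), None, None)
print("RMS reprojection error (pixels):", rms)
```

More accurate sub-pixel corners directly lower this RMS error, which is why reprojection error is a natural end-to-end metric for corner detectors.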

[figure: loss function]


(Section 4.3) Sub-frame Temporal Event Detection [PyTorch (1.3)]

- [Benchmark Code and Dataset] by McNally et al.

To run our model for all downsampling rates and splits:

bash multi_run_steps.sh
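In the temporal setting, the same offset-regression idea applies along the time axis: per frame, the model is assumed to output an event confidence and a continuous in-frame temporal offset, so an event time is frame index plus offset. A minimal PyTorch decoding sketch (decode_event_times and the output format are this sketch's assumptions, not the repository's actual interface):

```python
import torch

def decode_event_times(conf, offsets, threshold=0.5):
    # conf, offsets: (T,) per-frame confidences and in-frame offsets in [0, 1)
    idx = torch.nonzero(conf > threshold).squeeze(1)  # frames that fire
    return idx.float() + offsets[idx]                 # event times, in frame units

conf = torch.tensor([0.1, 0.9, 0.2, 0.8])
offsets = torch.tensor([0.0, 0.3, 0.0, 0.7])
print(decode_event_times(conf, offsets))  # tensor([1.3000, 3.7000])
```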

To run the dense classification benchmark:

bash dense_original_run_steps.sh

By setting the flag on line 27 of upsampling_original_train.py and on line 32 of upsampling_original_eval.py, one can run the naive upsampling benchmark (False) or the frame interpolation benchmark (True) with the following command:

bash upsampling_original_run_steps.sh

Note that for the frame interpolation benchmark, the videos first need to be downsampled and then upsampled using the frame interpolation model from Bao et al. To do so, use "Golf_upsample_video.py" in conjunction with their official implementation [git].

[figure: loss function]
