
Commit

Merge pull request #11 from NicolaBernini/cnn_20190428_1842_1
CNN 20190428 1842 1 - Added D2 Net
NicolaBernini authored Apr 29, 2019
2 parents 1d928f2 + 4e35142 commit e5e7246
Showing 2 changed files with 17 additions and 1 deletion.
2 changes: 2 additions & 0 deletions CNN/ComputerVision/d2_net.ipynb
@@ -0,0 +1,2 @@
# Analysis of D2-Net: A Trainable CNN for Joint Description and Detection of Local Features

[D2-Net: A Trainable CNN for Joint Description and Detection of Local Features](https://dsmn.ml/files/d2-net/d2-net.pdf)

# Abstract and Intro

> In this work we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions.

- Pixel-level correspondence is a fundamental task underlying many relevant problems in geometric computer vision, such as Visual Odometry, Structure from Motion, Localization, Mapping, and Optical Flow.

## Traditional Approaches

- Rely on manually engineered feature detection and description algorithms
- Consist of a three-step pipeline:

1. Feature Detection
2. Feature Description
3. Feature Matching

## Proposed Approach

- Relies on automatically learned features (no manual engineering)
- Consists of a two-step pipeline:

1. Joint Feature Detection and Description
2. Feature Matching

## Feature Description

Feature description is a mapping like

$$ f(u, v, I_{N}) \rightarrow s \in S $$

with
- $u, v$ : feature image coordinates
- $I_{N}$ : neighborhood of the feature center, representing its appearance (e.g. a $W \times H$ patch)
- $S$ : mixed spatial and semantic space, resulting from the Cartesian product of the $W \times H$ spatial domain and a $C$-dimensional semantic domain that depends on the feature description

**NOTE**:
- $S$ is essentially a $W \times H \times C$ tensor space, i.e. the bread and butter of deep neural networks, hence of CNNs as well
- At the same time, it also represents the output space of a manually engineered feature detection and description pipeline

## Feature Matching

Feature matching is typically performed with an approximate nearest-neighbor search in a combined spatial and semantic space, i.e. something like the space $S$ described above.

16 changes: 15 additions & 1 deletion CNN/ComputerVision/readme.md
@@ -5,6 +5,7 @@ The CNN are at the core of NN which can achieve SOTA on the following types of C

- Recognition
- Detection
- Features
- Segmentation
- Localization
- Mapping
@@ -17,7 +18,6 @@ The CNN are at the core of NN which can achieve SOTA on the following types of C
- Y2019
- [Summary](Optimal_Approach_for_Image_Recognition_using_Deep_Convolutional_Architecture.ipynb)

Work in progress



@@ -27,3 +27,17 @@ Work in progress



# Features

[D2-Net: A Trainable CNN for Joint Description and Detection of Local Features](https://dsmn.ml/files/d2-net/d2-net.pdf)
- Y2019 (CVPR 2019)
- [Summary](d2_net.ipynb)







Work in progress
