This repo contains the details for "Contrastive Learning-based Place Descriptor Representation for Cross-modality Place Recognition".
Our experiments were tested on Ubuntu 20.04 with Python 3.8 and PyTorch 1.13.6.
- Build environment:

```shell
conda create -n tmnet python=3.8
conda activate tmnet
pip install -r requirements.txt
```
We conduct the image-to-point-cloud place recognition based on KITTI dataset and KITTI-360 dataset.
- The KITTI dataset

The data used for the experiment can be downloaded from here.

Folder structure:

```
data
├── sequences
│   └── 00
│       ├── image_2
│       ├── velodyne
│       └── poses.txt
└── ...
```
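As a quick sanity check on the KITTI layout above, each sequence's `poses.txt` can be parsed into pose matrices. The sketch below assumes the standard KITTI odometry pose format (12 space-separated floats per line forming a row-major 3x4 transform); the function name is ours for illustration and is not part of this repo.

```python
import numpy as np

def parse_kitti_poses(text):
    """Parse KITTI odometry poses.txt content.

    Each non-empty line holds 12 space-separated floats that form a
    row-major 3x4 transform matrix. Returns a list of (3, 4) arrays.
    """
    poses = []
    for line in text.strip().splitlines():
        vals = np.array(line.split(), dtype=np.float64)
        poses.append(vals.reshape(3, 4))
    return poses

# Example: a single identity-pose line
sample = "1 0 0 0 0 1 0 0 0 0 1 0"
print(parse_kitti_poses(sample)[0].shape)  # (3, 4)
```

In practice you would call this on `open("data/sequences/00/poses.txt").read()` after downloading the data.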
- The KITTI-360 dataset

The data used can be downloaded from here.

Folder structure:

```
data
├── data_2d_raw
│   ├── 2013_05_28_drive_0002_sync
│   │   ├── image_00
│   │   └── ...
│   └── ...
├── data_3d_raw
│   ├── 2013_05_28_drive_0002_sync
│   │   ├── image_00
│   │   └── ...
│   └── ...
└── data_poses
    ├── 2013_05_28_drive_0002_sync
    │   ├── image_00
    │   └── ...
    └── ...
```
We evaluate our image-to-point-cloud place recognition method on the unseen KITTI test sequence.