Contrastive Learning-based Place Descriptor Representation for Cross-modality Place Recognition

This repo contains the details for our paper "Contrastive Learning-based Place Descriptor Representation for Cross-modality Place Recognition".

🔑 Setup

Our experiments were tested on Ubuntu 20.04 with Python 3.8 and PyTorch 1.13.6.

  • Build the environment:
    conda create -n tmnet python=3.8
    conda activate tmnet
    pip install -r requirements.txt
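
As an optional sanity check (our suggestion, not a step from the original instructions), you can verify that the installed PyTorch matches the tested version and can see a GPU:

    # Confirm the PyTorch version and CUDA visibility
    import torch

    print(torch.__version__)          # expected: the tested 1.13.x build
    print(torch.cuda.is_available())  # True if a usable GPU is visible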
    

📚 Dataset

We conduct image-to-point-cloud place recognition on the KITTI and KITTI-360 datasets.

  • The KITTI dataset. The data used for the experiments can be downloaded from here. (A minimal loading sketch for both dataset layouts follows this list.)

    Folder structure:

    data  
    ├── sequences  
    │   └── 00  
    │       ├── image_2  
    │       ├── velodyne  
    │       └── poses.txt  
    └── ...
    
  • The KITTI-360 dataset. The data can be downloaded from here.

    Folder structure:

    data  
    ├── data_2d_raw    
    │   ├── 2013_05_28_drive_0002_sync  
    │   │   ├── image_00  
    │   │   └── ...  
    │   └── ...  
    ├── data_3d_raw  
    │   ├── 2013_05_28_drive_0002_sync  
    │   │   ├── velodyne_points  
    │   │   └── ...  
    │   └── ...  
    └── data_poses  
        ├── 2013_05_28_drive_0002_sync  
        │   ├── poses.txt  
        │   └── ...  
        └── ...
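
As a rough sketch of how these layouts are typically read (the helper names and paths below are our placeholders, not code from this repo), assuming the standard KITTI velodyne binary format (float32 x, y, z, reflectance) and the standard pose text formats:

    import numpy as np

    # KITTI: each velodyne scan is a raw float32 binary of (x, y, z, reflectance) points.
    def load_kitti_scan(bin_path):
        return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

    # KITTI odometry-style poses.txt: one flattened 3x4 pose matrix (12 floats) per line.
    def load_kitti_poses(pose_path):
        return np.loadtxt(pose_path).reshape(-1, 3, 4)

    # KITTI-360 data_poses/.../poses.txt: a frame index followed by a
    # flattened 3x4 rigid-body transform (13 values per line).
    def load_kitti360_poses(pose_path):
        raw = np.loadtxt(pose_path)
        return raw[:, 0].astype(int), raw[:, 1:].reshape(-1, 3, 4)

    # Placeholder paths; adjust to where you placed the downloaded data.
    scan = load_kitti_scan("data/sequences/00/velodyne/000000.bin")
    poses = load_kitti_poses("data/sequences/00/poses.txt")
    print(scan.shape, poses.shape)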
    

💡 Visualization

Our image-to-point-cloud place recognition on an unseen KITTI test sequence:

