
VTE

[Paper, ICCV 2021]

Poster

poster

Setup

Getting Started

  • Install PyTorch and its dependencies:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

  • Clone this repo:
git clone https://github.com/kregmi/VTE.git
cd VTE
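
Once the environment is set up, a quick sanity check along these lines should run (this snippet is illustrative, not part of the repository):

```python
# Verify that PyTorch is importable and that the CUDA 10.2 build sees a GPU.
import torch

print(torch.__version__)           # PyTorch version installed by conda
print(torch.version.cuda)          # CUDA version the build was compiled against
print(torch.cuda.is_available())   # True if a compatible GPU and driver are present
```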

Code

GeoTemporalFeatureLearning:

Code for geo-temporal feature learning. The `encoder` module contains the 2D CNN backbone, and the `temporalAttention` module implements the transformer-based attention.
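
The sketch below is a minimal, illustrative approximation of this structure, not the repository's implementation: a ResNet-18 backbone (an assumed choice) encodes individual frames, and a transformer encoder attends over the resulting per-frame features. The feature dimension, head count, and layer count are likewise assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FrameEncoder(nn.Module):
    """2D CNN backbone mapping each video frame to a feature vector.
    ResNet-18 is an assumed choice; the repo's encoder may differ."""
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet18()                 # no pretrained weights by default
        # Drop the classification head; keep the global-average-pooled features.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.proj = nn.Linear(512, feat_dim)

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = frames.flatten(0, 1)                     # (B*T, 3, H, W)
        x = self.cnn(x).flatten(1)                   # (B*T, 512)
        return self.proj(x).view(b, t, -1)           # (B, T, feat_dim)

class TemporalAttention(nn.Module):
    """Transformer-based attention over the sequence of frame features."""
    def __init__(self, feat_dim=512, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, feats):                        # feats: (B, T, feat_dim)
        return self.transformer(feats)               # temporally attended features

if __name__ == "__main__":
    frames = torch.randn(2, 8, 3, 224, 224)          # 2 clips of 8 frames each
    feats = FrameEncoder()(frames)
    out = TemporalAttention()(feats)
    print(out.shape)                                 # torch.Size([2, 8, 512])
```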

  • Training/Testing the model

Training is done in two stages:

1. Train the `encoder` module using `main.py`, then save the encoder features using `test_bdd.py` (a feature-caching sketch follows this list).
2. Train the `temporalAttention` module using `train.py`, then evaluate the trained model with `eval.py`.
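
A rough illustration of the hand-off between the two stages (everything here, including the paths and the feature format, is assumed rather than taken from the repo; `FrameEncoder` refers to the sketch above):

```python
# Illustrative feature-caching step between the two stages (not the repo's code):
# run the trained frame encoder over each clip and save its features so the
# temporalAttention stage can train on them without re-running the CNN.
import os
import torch

encoder = FrameEncoder()                     # trained in stage one (see sketch above)
encoder.eval()

os.makedirs("features", exist_ok=True)       # assumed output directory
dummy_clips = [torch.randn(1, 8, 3, 224, 224)]   # stand-in for the BDD video loader

with torch.no_grad():
    for clip_id, frames in enumerate(dummy_clips):
        feats = encoder(frames)              # (1, T, feat_dim)
        torch.save(feats.cpu(), f"features/clip_{clip_id:06d}.pt")
```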

Additional instructions are provided in each module's README.

TrajectorySmoothingNetwork:

Code to train/test the trajectory smoothing network.

  • Training/Testing the model
`train.py` contains the training code.
`eval.py` contains the testing code.
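
The README does not describe the smoothing architecture, so the following is only a plausible sketch: a small bidirectional GRU that maps a noisy sequence of 2D GPS points to a refined sequence. Every layer choice and the training objective shown here are assumptions, not the repository's actual network.

```python
import torch
import torch.nn as nn

class TrajectorySmoother(nn.Module):
    """Illustrative trajectory smoothing network (not the repo's architecture):
    maps a noisy sequence of 2D GPS points to a refined sequence."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # smoothed (lat, lon) per step

    def forward(self, traj):                   # traj: (B, T, 2) noisy GPS points
        h, _ = self.rnn(traj)
        return self.head(h)                    # (B, T, 2) smoothed trajectory

if __name__ == "__main__":
    noisy = torch.randn(4, 30, 2)              # 4 trajectories, 30 points each
    smoother = TrajectorySmoother()
    smoothed = smoother(noisy)
    # A simple training objective would regress the output to the ground-truth GPS.
    loss = nn.functional.mse_loss(smoothed, torch.randn_like(smoothed))
    print(smoothed.shape, loss.item())
```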

Dataset

The BDD videos are downloaded from the official BDD website.

The corresponding Google Street View images are downloaded using the Google Cloud Platform.

Please contact us for additional instructions on obtaining the dataset used in this work.

Citation

If you find our work useful for your research, please cite:

  • Video Geo-Localization Employing Geo-Temporal Feature Learning and GPS Trajectory Smoothing, ICCV 2021, pdf, bibtex

Questions

Please contact: [email protected]
