Official dataset development kit for CODa. We strongly recommend using this repository to visualize the dataset and understand its contents.
We have tested this devkit in the following environments:
- Python==3.8, 3.9
- Ubuntu 20.04
- gcc==9.4.0
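As a quick sanity check, the short sketch below prints the local Python, OS, and gcc versions so they can be compared against the tested configurations above. It is only an illustration and assumes gcc is available on the PATH.

```python
import platform
import subprocess
import sys

print("Python:", sys.version.split()[0])   # tested with 3.8 and 3.9
print("OS:", platform.platform())          # tested on Ubuntu 20.04
try:
    # The first line of `gcc --version` reports the compiler version (tested with 9.4.0).
    print(subprocess.check_output(["gcc", "--version"], text=True).splitlines()[0])
except FileNotFoundError:
    print("gcc not found on PATH")
```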
Run the following command to download the development kit.
git clone [email protected]:ut-amrl/coda-devkit.git
We use conda for all package management in this repository. To install all library dependencies, create the conda environment using the following command.
conda env create -f environment.yml
Then activate the environment as follows. Further library usage instructions are documented in the GETTING_STARTED section.
conda activate coda
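To confirm the environment is active before running any devkit scripts, a minimal check like the one below works; it relies only on the CONDA_DEFAULT_ENV variable that conda sets on activation.

```python
import os

# `conda activate coda` exports CONDA_DEFAULT_ENV with the active environment name.
env = os.environ.get("CONDA_DEFAULT_ENV")
assert env == "coda", f"Expected the 'coda' environment to be active, found: {env}"
print("Active conda environment:", env)
```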
The GETTING_STARTED documentation describes how to download CODa programmatically and use the visualization scripts. To download individual files from the tiny and small splits of CODa, go to the Texas Data Repository.
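For reference, the Texas Data Repository is a Dataverse instance, so its standard native API can be used to list the individual files published under CODa's DOI. The sketch below is only an illustration using the generic Dataverse API (it is not a devkit script) and assumes the `requests` package is installed; the devkit's own download scripts are documented in GETTING_STARTED.

```python
import requests

SERVER = "https://dataverse.tdl.org"
DOI = "doi:10.18738/T8/BBOQMV"

# Query the latest published version of the dataset by its persistent identifier.
resp = requests.get(f"{SERVER}/api/datasets/:persistentId", params={"persistentId": DOI})
resp.raise_for_status()

# Each entry describes one downloadable file; individual files can then be
# fetched from {SERVER}/api/access/datafile/<id>.
for entry in resp.json()["data"]["latestVersion"]["files"][:10]:
    datafile = entry["dataFile"]
    print(datafile["filename"], datafile.get("filesize", "?"), "bytes")
```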
The DATA_REPORT documentation describes the contents and file structure of CODa. It is important to read this before using the dataset.
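As a simple way to cross-check a local copy against the layout described in DATA_REPORT, the snippet below lists the top-level entries of a downloaded CODa root. The CODA_ROOT path is a hypothetical placeholder for wherever you extracted the dataset.

```python
from pathlib import Path

# Hypothetical location of a downloaded CODa copy; adjust to your setup.
CODA_ROOT = Path("./data/coda")

for entry in sorted(CODA_ROOT.iterdir()):
    print("dir " if entry.is_dir() else "file", entry.name)
```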
To run the 3D object detection models from CODa, refer to CODa's sister GitHub repository: coda-models. This repo provides the benchmarks, pretrained weights, and model training configurations to reproduce the results in our paper.
If you use our dataset or the tools, we would appreciate it if you cite both our paper and dataset.
@misc{zhang2023robust,
  title={Towards Robust Robot 3D Perception in Urban Environments: The UT Campus Object Dataset},
  author={Arthur Zhang and Chaitanya Eranki and Christina Zhang and Ji-Hwan Park and Raymond Hong and Pranav Kalyani and Lochana Kalyanaraman and Arsh Gamare and Arnav Bagad and Maria Esteva and Joydeep Biswas},
  year={2023},
  eprint={2309.13549},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}

@data{T8/BBOQMV_2023,
  author = {Zhang, Arthur and Eranki, Chaitanya and Zhang, Christina and Hong, Raymond and Kalyani, Pranav and Kalyanaraman, Lochana and Gamare, Arsh and Bagad, Arnav and Esteva, Maria and Biswas, Joydeep},
  publisher = {Texas Data Repository},
  title = {{UT Campus Object Dataset (CODa)}},
  year = {2023},
  version = {DRAFT VERSION},
  doi = {10.18738/T8/BBOQMV},
  url = {https://doi.org/10.18738/T8/BBOQMV}
}
The following table is necessary for this dataset to be indexed by search engines such as Google Dataset Search.
| property | value |
|---|---|
| name | UT CODa: UT Campus Object Dataset |
| alternateName | UT Campus Object Dataset |
| url | https://github.com/ut-amrl/coda-devkit |
| sameAs | https://amrl.cs.utexas.edu/coda/ |
| sameAs | https://dataverse.tdl.org/dataset.xhtml?persistentId=doi:10.18738/T8/BBOQMV |
| description | The UT Campus Object Dataset is a large-scale, multiclass, multimodal, egocentric urban robot dataset collected by a robot operated by human operators under a variety of weather, lighting, and viewpoint variations. We publicly release this dataset and pretrained models to help advance egocentric perception and navigation research in urban environments. |
| provider | |
| license | |