Throughout the documentation, we refer to the MAED root folder as $ROOT. All the datasets listed below should be placed in, or symlinked into, $ROOT/data.
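If the datasets already live elsewhere on disk, symlinking them into $ROOT/data avoids copying. A minimal sketch; the `/datasets/...` source paths below are placeholders, not paths MAED defines:

```shell
# Placeholder paths -- substitute your own dataset locations.
ROOT="$PWD/MAED"                 # your MAED checkout, called $ROOT in these docs
mkdir -p "$ROOT/data"
# Symlink instead of copying (ln -sf replaces a stale link if one exists):
ln -sf /datasets/3dpw "$ROOT/data/3dpw"
ln -sf /datasets/insta_variety "$ROOT/data/insta_variety"
```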

Data Preparation

1. Download Datasets

You should first download the datasets used in MAED.

  • InstaVariety

Download the preprocessed tfrecords provided by the authors of Temporal HMR.

Directory structure:

insta_variety
|-- train
|   |-- insta_variety_00_copy00_hmr_noS5.ckpt-642561.tfrecord
|   |-- insta_variety_01_copy00_hmr_noS5.ckpt-642561.tfrecord
|   `-- ...
`-- test
    |-- insta_variety_00_copy00_hmr_noS5.ckpt-642561.tfrecord
    |-- insta_variety_01_copy00_hmr_noS5.ckpt-642561.tfrecord
    `-- ...

The original InstaVariety is saved in tfrecord format, which is not directly usable in PyTorch. You can run this script, which extracts the frames of every tfrecord and saves them as JPEG files.

Directory structure after extraction:

insta_variety_img
`-- train
    |-- insta_variety_00_copy00_hmr_noS5.ckpt-642561.tfrecord
    |   |-- 0
    |   |-- 1
    |   `-- ...
    |-- insta_variety_01_copy00_hmr_noS5.ckpt-642561.tfrecord
    |   |-- 0
    |   |-- 1
    |   `-- ...
    `-- ...
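The extracted layout can be sanity-checked with a short script. This is an illustrative sketch only; the helper name `check_extracted` is ours, not part of MAED:

```python
from pathlib import Path


def check_extracted(root):
    """Count extracted videos (numbered subfolders) per tfrecord directory.

    Returns a dict mapping "split/tfrecord_name" to the number of numbered
    video folders it contains, following the layout shown above.
    """
    counts = {}
    for split in ("train", "test"):
        split_dir = Path(root) / split
        if not split_dir.is_dir():
            continue  # e.g. only `train` was extracted
        for rec_dir in sorted(split_dir.iterdir()):
            if rec_dir.is_dir():
                videos = [d for d in rec_dir.iterdir()
                          if d.is_dir() and d.name.isdigit()]
                counts[f"{split}/{rec_dir.name}"] = len(videos)
    return counts
```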

  • MPI-INF-3DHP

Download the dataset using the bash script provided by the authors. We will be using the standard cameras only, so the wall and ceiling cameras are not needed. Then, run the script from the official VIBE repo to extract frames from the videos.

Directory structure:

mpi_inf_3dhp
|-- S1
|   |-- Seq1
|   `-- Seq2
|-- S2
|   |-- Seq1
|   `-- Seq2
|-- ...
`-- util

  • Human 3.6M

Human 3.6M is not an open dataset at the moment, so it is optional in our training code. However, Human 3.6M has a non-negligible effect on the final performance of MAED.

Once you have obtained access to the Human 3.6M dataset, you can refer to the script from the official SPIN repository to preprocess it.

Directory structure:

human3.6m
|-- annot
|-- dataset_extras
|-- S1
|-- S11
|-- S5
|-- S6
|-- S7
|-- S8
`-- S9

  • 3DPW

Directory structure:

3dpw
|-- imageFiles
|   |-- courtyard_arguing_00
|   |-- courtyard_backpack_00
|   |-- ...
`-- sequenceFiles
    |-- test
    |-- train
    `-- validation
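The per-sequence annotations under `sequenceFiles` are Python pickles. A hedged loading sketch; the helper name is ours, and the `latin1` encoding is what is typically needed for pickles written under Python 2:

```python
import pickle


def load_3dpw_sequence(path):
    """Load one 3DPW sequence file (a Python pickle).

    The official 3DPW pickles were written with Python 2, so `latin1`
    encoding is usually required when reading them under Python 3.
    """
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")
```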

  • Penn Action

Directory structure:

pennaction
|-- frames
|   |-- 0000
|   |-- 0001
|   |-- ...
`-- labels
    |-- 0000.mat
    |-- 0001.mat
    `-- ...

  • PoseTrack

Directory structure:

posetrack
|-- images
|   |-- train
|   |-- val
|   `-- test
`-- posetrack_data
    `-- annotations
        |-- train
        |-- val
        `-- test
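The PoseTrack annotations are stored as JSON files, one per sequence, under each split. A minimal loader sketch (the function name is ours, not part of MAED):

```python
import json
from pathlib import Path


def load_split_annotations(root, split):
    """Load every per-sequence annotation JSON for one PoseTrack split.

    Returns {sequence_name: parsed_json} for the files under
    posetrack_data/annotations/<split>/.
    """
    ann_dir = Path(root) / "posetrack_data" / "annotations" / split
    return {p.stem: json.loads(p.read_text())
            for p in sorted(ann_dir.glob("*.json"))}
```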

  • MPII

Directory structure:

mpii
|-- 099992483.jpg
|-- 099990098.jpg
`-- ...

  • COCO

Directory structure:

coco2014-all
|-- COCO_train2014_000000000001.jpg
|-- COCO_train2014_000000000002.jpg
`-- ...

  • LSPET

Directory structure:

lspet
|-- im00001.jpg
|-- im00002.jpg
`-- ...

2. Download Annotations (pt format)

Download annotation data for MAED from Google Drive and move the whole directory to $ROOT/data.

3. Download SMPL data

Download SMPL data for MAED from Google Drive and move the whole directory to $ROOT/data.

It's Done!

After downloading all the datasets and annotations, the directory structure of $ROOT/data should look like:

$ROOT/data
|-- insta_variety
|-- insta_variety_img
|-- 3dpw
|-- mpii3d
|-- posetrack
|-- pennaction
|-- coco2014-all
|-- lspet
|-- mpii
|-- smpl_data
|   |-- J_regressor_extra.npy
|   `-- ...
`-- database
    |-- insta_train_db.pt
    |-- 3dpw_train_db.pt
    |-- lspet_train_db.pt
    `-- ...
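The final layout can be verified with a small script before starting training. This checker is our own sketch, not part of MAED, and it only tests for the top-level entries listed above:

```python
from pathlib import Path

# Top-level entries expected under $ROOT/data, as listed above.
EXPECTED = [
    "insta_variety", "insta_variety_img", "3dpw", "mpii3d", "posetrack",
    "pennaction", "coco2014-all", "lspet", "mpii", "smpl_data", "database",
]


def missing_entries(data_root):
    """Return the expected entries that are absent from $ROOT/data."""
    root = Path(data_root)
    return [name for name in EXPECTED if not (root / name).exists()]
```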