From 9c51284db22b5e2f4cb8abd79949eb70892077ec Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9F=B3=E5=B8=85?=
Date: Fri, 9 Sep 2022 16:02:59 +0800
Subject: [PATCH 1/5] Issue 239 and 189 solution

---
 docs/install.md            | 20 +++++++++++---
 docs/preprocess_dataset.md | 56 ++++++++++++++++++++++++--------------
 2 files changed, 51 insertions(+), 25 deletions(-)

diff --git a/docs/install.md b/docs/install.md
index 662c6739..67d09736 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -2,10 +2,11 @@

-- [Requirements](#requirements)
-- [Prepare environment](#prepare-environment)
-- [Install MMHuman3D](#install-mmhuman3d)
-- [A from-scratch setup script](#a-from-scratch-setup-script)
+- [Installation](#installation)
+  - [Requirements](#requirements)
+  - [Prepare environment](#prepare-environment)
+  - [Install MMHuman3D](#install-mmhuman3d)
+  - [A from-scratch setup script](#a-from-scratch-setup-script)

@@ -54,6 +55,12 @@ conda install pytorch=1.8.0 torchvision cudatoolkit=10.2 -c pytorch

 **Important:** Make sure that your compilation CUDA version and runtime CUDA version match. Besides, for RTX 30 series GPU, cudatoolkit>=11.0 is required.

+To make sure that you installed the right PyTorch version, check that you get `True` when running the following commands:
+```python
+import torch
+torch.cuda.is_available()
+```
+If you get `False`, install a PyTorch build that matches your CUDA version.

 d. Install PyTorch3D from source.

@@ -150,6 +157,11 @@ cd mmdetection
 pip install -r requirements/build.txt
 pip install -v -e .
 ```
+To check that mmdet is compatible with your PyTorch installation, make sure the following import runs without errors:
+```python
+from mmdet.apis import inference_detector, init_detector
+```
+If it raises an error, check your PyTorch and mmcv versions and install compatible ones.
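+To see at a glance which versions need to agree, you can also run a short diagnostic script. The snippet below is only a debugging sketch, not part of the MMHuman3D tooling:
+```python
+import torch
+import mmcv
+
+# The CUDA version PyTorch was compiled against must match the runtime CUDA
+# version, and mmcv must be a build made for that same PyTorch/CUDA pair.
+print('torch:', torch.__version__)
+print('torch built with CUDA:', torch.version.cuda)
+print('CUDA available:', torch.cuda.is_available())
+print('mmcv:', mmcv.__version__)
+```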
 - mmpose (optional)

 ```shell

diff --git a/docs/preprocess_dataset.md b/docs/preprocess_dataset.md
index 481a6ec4..bee42f58 100644
--- a/docs/preprocess_dataset.md
+++ b/docs/preprocess_dataset.md
@@ -4,26 +4,35 @@

-- [Datasets for supported algorithms](#datasets-for-supported-algorithms)
-- [Folder structure](#folder-structure)
-  * [AGORA](#agora)
-  * [COCO](#coco)
-  * [COCO-WholeBody](#coco-wholebody)
-  * [CrowdPose](#crowdpose)
-  * [EFT](#eft)
-  * [GTA-Human](#gta-human)
-  * [Human3.6M](#human36m)
-  * [Human3.6M Mosh](#human36m-mosh)
-  * [HybrIK](#hybrik)
-  * [LSP](#lsp)
-  * [LSPET](#lspet)
-  * [MPI-INF-3DHP](#mpi-inf-3dhp)
-  * [MPII](#mpii)
-  * [PoseTrack18](#posetrack18)
-  * [Penn Action](#penn-action)
-  * [PW3D](#pw3d)
-  * [SPIN](#spin)
-  * [SURREAL](#surreal)
+- [Data preparation](#data-preparation)
+  - [Overview](#overview)
+  - [Datasets for supported algorithms](#datasets-for-supported-algorithms)
+  - [Folder structure](#folder-structure)
+    - [AGORA](#agora)
+    - [AMASS](#amass)
+    - [COCO](#coco)
+    - [COCO-WholeBody](#coco-wholebody)
+    - [CrowdPose](#crowdpose)
+    - [EFT](#eft)
+    - [GTA-Human](#gta-human)
+    - [Human3.6M](#human36m)
+    - [Human3.6M Mosh](#human36m-mosh)
+    - [HybrIK](#hybrik)
+    - [LSP](#lsp)
+    - [LSPET](#lspet)
+    - [MPI-INF-3DHP](#mpi-inf-3dhp)
+    - [MPII](#mpii)
+    - [PoseTrack18](#posetrack18)
+    - [Penn Action](#penn-action)
+    - [PW3D](#pw3d)
+    - [SPIN](#spin)
+    - [SURREAL](#surreal)
+    - [VIBE](#vibe)
+    - [FreiHand](#freihand)
+    - [EHF](#ehf)
+    - [FFHQ](#ffhq)
+    - [ExPose](#expose)
+    - [Stirling](#stirling)

 ## Overview

@@ -131,7 +140,8 @@ DATASET_CONFIGS = dict(

 ## Datasets for supported algorithms

-For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`. As such, use this command with the listed `dataset-names`:
+For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`.
+As such, use this command with the listed `dataset-names`:

 ```bash
 python tools/convert_datasets.py \
@@ -188,6 +198,10 @@ mmhuman3d
 ├── mpii_train.npz
 └── pw3d_test.npz
 ```
+Note that, to avoid generating npz files every iteration during training, please create a cache directory linked with the preprocessed files. To do so, run the following command:
+```
+ln -s data/cache data/preprocessed_datasets
+```

 For SPIN training, the following datasets are required:

 - [COCO](#coco)

From 616c8f6c0e41ecdf3f654ab31eb1d7d6e94184d5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9F=B3=E5=B8=85?=
Date: Wed, 14 Sep 2022 10:37:31 +0800
Subject: [PATCH 2/5] create a blank cache folder instead of a soft link

---
 docs/preprocess_dataset.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/preprocess_dataset.md b/docs/preprocess_dataset.md
index bee42f58..e48e83ee 100644
--- a/docs/preprocess_dataset.md
+++ b/docs/preprocess_dataset.md
@@ -198,9 +198,9 @@ mmhuman3d
 ├── mpii_train.npz
 └── pw3d_test.npz
 ```
-Note that, to avoid generating npz files every iteration during training, please create a cache directory linked with the preprocessed files. To do so, run the following command:
+Note that, to avoid generating npz files every iteration during training, please create a blank cache directory. To do so, run the following command:
 ```
-ln -s data/cache data/preprocessed_datasets
+mkdir data/cache
 ```

 For SPIN training, the following datasets are required:

 - [COCO](#coco)

From 250fd80159d89cd2056670ebe9a30683dd3213b4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9F=B3=E5=B8=85?=
Date: Fri, 16 Sep 2022 18:22:51 +0800
Subject: [PATCH 3/5] cache combined with cache config to speed up data reading

---
 docs/preprocess_dataset.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/preprocess_dataset.md b/docs/preprocess_dataset.md
index e48e83ee..d35dc323 100644
--- a/docs/preprocess_dataset.md
+++ b/docs/preprocess_dataset.md
@@ -200,8 +200,9 @@ mmhuman3d
 ```
-Note that, to avoid generating npz files every iteration during training, please create a blank cache directory. To do so, run the following command:
+Note that, to avoid generating npz files every iteration during training, please create a cache directory initialized with the preprocessed files. To do so, run the following command:
 ```
-mkdir data/cache
+cp -r data/preprocessed_datasets data/cache
 ```
+Also, rememeber to use the *_cache.py config during training.

 For SPIN training, the following datasets are required:

 - [COCO](#coco)

From c58fd0666ad341fd279ac44d6fd8b6fc8328e028 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9F=B3=E5=B8=85?=
Date: Thu, 22 Sep 2022 15:23:10 +0800
Subject: [PATCH 4/5] lint test

---
 docs/conf.py               | 3 +--
 docs/preprocess_dataset.md | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/conf.py b/docs/conf.py
index aabb7d4d..0f03f200 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -11,9 +11,8 @@
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
 import os
-import sys
-
 import pytorch_sphinx_theme
+import sys

 sys.path.insert(0, os.path.abspath('..'))

diff --git a/docs/preprocess_dataset.md b/docs/preprocess_dataset.md
index d35dc323..9d0a9d8d 100644
--- a/docs/preprocess_dataset.md
+++ b/docs/preprocess_dataset.md
@@ -202,7 +202,7 @@ Note that, to avoid generating npz files every iteration during training, please
 ```
 cp -r data/preprocessed_datasets data/cache
 ```
-Also, rememeber to use the *_cache.py config during training.
+Also, remember to use the *_cache.py config during training.

 For SPIN training, the following datasets are required:

 - [COCO](#coco)

From 4941e4fae8ee76ba8fa31b9f2a58ebf76590670e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9F=B3=E5=B8=85?=
Date: Thu, 22 Sep 2022 16:19:45 +0800
Subject: [PATCH 5/5] lint testing

---
 docs/conf.py               | 3 ++-
 docs/preprocess_dataset.md | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/conf.py b/docs/conf.py
index 0f03f200..aabb7d4d 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -11,9 +11,10 @@
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
 import os
-import pytorch_sphinx_theme
 import sys
+
+import pytorch_sphinx_theme

 sys.path.insert(0, os.path.abspath('..'))

diff --git a/docs/preprocess_dataset.md b/docs/preprocess_dataset.md
index 9d0a9d8d..0537a77e 100644
--- a/docs/preprocess_dataset.md
+++ b/docs/preprocess_dataset.md
@@ -140,7 +140,7 @@ DATASET_CONFIGS = dict(

 ## Datasets for supported algorithms

-For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`. 
+For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`.
 As such, use this command with the listed `dataset-names`:

 ```bash
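Taken together, the cache workflow these docs end up describing amounts to the following shell session. This is a sketch: the config path `configs/hmr/resnet50_hmr_pw3d_cache.py` is an illustrative assumption, so substitute the `*_cache.py` variant of your own experiment's config.

```shell
# Copy the preprocessed npz files into the cache directory,
# so they are not regenerated at every training run.
cp -r data/preprocessed_datasets data/cache

# Train with a *_cache.py config, which reads data from data/cache.
python tools/train.py configs/hmr/resnet50_hmr_pw3d_cache.py
```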