From 5ec8a02105b6aaec6afe643920fe88a618684fe3 Mon Sep 17 00:00:00 2001 From: icedwater Date: Mon, 16 Sep 2024 19:11:06 +0800 Subject: [PATCH] Updated README.md. - included updated environment and setup instructions - shifted data collection part into detail/summary blocks --- README.md | 59 +++++++++++++++++++++++++++++-------------------------- 1 file changed, 31 insertions(+), 28 deletions(-) diff --git a/README.md b/README.md index 5e73d7f..8c76ec6 100644 --- a/README.md +++ b/README.md @@ -29,6 +29,7 @@ If you find this code useful in your research, please cite: ## Getting started This code was developed on `Ubuntu 20.04 LTS` with Python 3.7, CUDA 11.7 and PyTorch 1.13.1. +The current `requirements.txt` was set up with Python 3.9, CUDA 11.3, PyTorch 1.12.1. ### 1. Setup environment @@ -46,12 +47,10 @@ This codebase shares a large part of its base dependencies with [GMD](https://gi Setup virtual env: ```shell -python3 -m venv .env_condmdi -source .env_condmdi/bin/activate -pip uninstall ffmpeg -pip install spacy -python -m spacy download en_core_web_sm -pip install git+https://github.com/openai/CLIP.git +python3 -m venv .env_condmdi # pick your preferred name here +source .env_condmdi/bin/activate # and use that name in place of .env_condmdi +pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113 +pip install -r requirements.txt # updated to include spacy and clip configuration ``` Download dependencies: @@ -78,36 +77,40 @@ bash prepare/download_recognition_unconstrained_models.sh ### 2. Get data There are two paths to get the data: -(a) **Generation only** with pretrained text-to-motion model without training or evaluating - -(b) **Get full data** to train and evaluate the model. - +
<details>
  <summary>(a) **Generation only** with a pretrained text-to-motion model, without training or evaluating</summary>

  #### a. Generation only (text only)

  **HumanML3D** - Clone HumanML3D, then copy the data dir to our repository:

  ```shell
  cd ..
  git clone https://github.com/EricGuo5513/HumanML3D.git
  unzip ./HumanML3D/HumanML3D/texts.zip -d ./HumanML3D/HumanML3D/
  cp -r HumanML3D/HumanML3D diffusion-motion-inbetweening/dataset/HumanML3D
  cd diffusion-motion-inbetweening
  cp -a dataset/HumanML3D_abs/. dataset/HumanML3D/
  ```
</details>
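Not part of the original instructions, but a quick check from the repository root can confirm the copy landed where the data loaders expect it (paths taken from the commands above):

```shell
# Optional sanity check, run from the repository root (illustrative only).
ls dataset/HumanML3D/texts | wc -l   # should report several thousand per-motion text files
ls dataset/HumanML3D                 # should also list the files overlaid from dataset/HumanML3D_abs
```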
+
<details>
  <summary>(b) **Get full data** to train and evaluate the model</summary>

  #### b. Full data (text + motion capture)

  **HumanML3D** - Follow the instructions in [HumanML3D](https://github.com/EricGuo5513/HumanML3D.git),
  then copy the resulting dataset to our repository:

  **[Important!]**
  Following GMD, the representation of the root joint has been changed from relative to absolute. Therefore, when setting up HumanML3D,
  run GMD's version of `motion_representation.ipynb` and `cal_mean_variance.ipynb` instead to get the absolute-root data. These files are
  available in `./dataset/HumanML3D_abs/` (one possible way to run them is sketched after this block).

  ```shell
  cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D
  ```
</details>
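The note above leaves the mechanics of running GMD's notebooks to the reader. A minimal sketch follows; it assumes the HumanML3D repository is cloned as a sibling directory, that its own preparation steps (raw pose data etc.) have already been completed, and that `jupyter` is available in this environment — none of this is prescribed by the original README, so adapt paths as needed.

```shell
# Sketch only: overlay the absolute-root notebooks onto the HumanML3D repo and execute them there.
# The directory layout and the use of nbconvert are assumptions, not part of the original instructions.
cp dataset/HumanML3D_abs/motion_representation.ipynb ../HumanML3D/
cp dataset/HumanML3D_abs/cal_mean_variance.ipynb ../HumanML3D/
cd ../HumanML3D
jupyter nbconvert --to notebook --execute --inplace motion_representation.ipynb
jupyter nbconvert --to notebook --execute --inplace cal_mean_variance.ipynb
cd ../diffusion-motion-inbetweening   # back to this repository for the copy step in (b)
```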
### 3. Download the pretrained models