# Create facial UV-texture dataset

## Run the source code

- Prepare a dataset project directory that contains an `images` subfolder.
- Put the original facial images into the `images` subfolder.
- Modify the configuration and then run the following script to create the facial UV-texture dataset.

```bash
sh run_ffhq_uv_dataset.sh  # Please refer to this script for detailed configuration
```
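If it helps, the expected project layout can be prepared with a few lines of Python. This is only a sketch: the source image directory and the `*.jpg` extension are assumptions, and only the `images` subfolder name comes from the pipeline itself.

```python
from pathlib import Path
import shutil

# Hypothetical source directory holding your raw facial photos (adjust as needed).
src_images = Path("/path/to/raw_faces")
# The dataset project directory used by the pipeline.
proj_data_dir = Path("../examples/dataset_examples")

# The pipeline expects an "images" subfolder inside the project directory.
images_dir = proj_data_dir / "images"
images_dir.mkdir(parents=True, exist_ok=True)

# Copy the original facial images into the "images" subfolder.
for img_path in src_images.glob("*.jpg"):
    shutil.copy(img_path, images_dir / img_path.name)
```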

## Details of each step

### Step 0 - Preparation

```bash
proj_data_dir=../examples/dataset_examples
checkpoints_dir=../checkpoints
topo_assets_dir=../topo_assets
```

- Put the original facial images into the `images` subfolder of the dataset project.
- The checkpoints and topology assets can be downloaded from here (a quick sanity-check sketch follows this list).
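Before launching the pipeline, it can help to verify that the downloaded files are in place. The sketch below only checks the checkpoint files that the commands in Steps 1-3 reference plus the presence of the `images` and `topo_assets` directories; it does not know the full contents of the asset download.

```python
from pathlib import Path

checkpoints_dir = Path("../checkpoints")
topo_assets_dir = Path("../topo_assets")
proj_data_dir = Path("../examples/dataset_examples")

# Checkpoint files referenced by the commands in Steps 1-3.
required_ckpts = [
    "e4e_model/e4e_ffhq_encode.pt",
    "dlib_model/shape_predictor_68_face_landmarks.dat",
    "dpr_model/trained_model_03.t7",
    "stylegan_model/stylegan2-ffhq-config-f.pkl",
    "styleflow_model/modellarge10k.pt",
    "styleflow_model/expression_direction.pt",
    "exprecog_model/FacialExpRecognition_model.t7",
]

missing = [p for p in required_ckpts if not (checkpoints_dir / p).is_file()]
if missing:
    print("Missing checkpoints:", *missing, sep="\n  ")
if not topo_assets_dir.is_dir():
    print("Topology assets directory not found:", topo_assets_dir)
if not any((proj_data_dir / "images").glob("*")):
    print("No images found in", proj_data_dir / "images")
```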

### Step 1 - Inversion

```bash
cd ./DataSet_Step1_Inversion
python run_e4e_inversion.py \
    --proj_data_dir ${proj_data_dir} \
    --e4e_model_path ${checkpoints_dir}/e4e_model/e4e_ffhq_encode.pt \
    --shape_predictor_model_path ${checkpoints_dir}/dlib_model/shape_predictor_68_face_landmarks.dat
```
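The `--shape_predictor_model_path` argument indicates that the inversion script uses dlib's 68-point landmark model, presumably for FFHQ-style face alignment before e4e encoding. A minimal sketch of loading that predictor and detecting landmarks on one image (the image path is an assumption):

```python
import dlib

predictor_path = "../checkpoints/dlib_model/shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()       # HOG-based face detector
predictor = dlib.shape_predictor(predictor_path)  # 68-point landmark model

# Assumed example image inside the dataset project's "images" subfolder.
img = dlib.load_rgb_image("../examples/dataset_examples/images/example.png")
for det in detector(img, 1):                      # upsample once to catch small faces
    shape = predictor(img, det)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(len(landmarks), "landmarks detected")
```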

### Step 2 - Detect attributes of the inverted faces

```bash
cd ../DataSet_Step2_Det_Attributes
python run_dpr_light.py \
    --proj_data_dir ${proj_data_dir} \
    --dpr_model_path ${checkpoints_dir}/dpr_model/trained_model_03.t7
python run_ms_api_attr.py \
    --proj_data_dir ${proj_data_dir}
```
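The DPR model estimates the lighting of each face; in the original DPR work the lighting is a 9-dimensional second-order spherical-harmonics (SH) vector. Assuming the same convention, the sketch below shades a sphere with a hypothetical SH vector, a common way to visualise such lighting estimates (the coefficients here are made up, not outputs of the pipeline):

```python
import numpy as np

def sh_basis(normals):
    """Second-order real spherical-harmonics basis evaluated at unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x ** 2 - y ** 2),
    ], axis=1)  # (N, 9)

# Hypothetical 9-dim lighting vector standing in for a DPR-style estimate.
sh_coeffs = np.array([0.7, 0.1, 0.2, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0])

# Shade a sphere to visualise the lighting as a grayscale image.
size = 256
u, v = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
mask = u ** 2 + v ** 2 <= 1.0
z = np.sqrt(np.clip(1.0 - u ** 2 - v ** 2, 0.0, None))
normals = np.stack([u[mask], v[mask], z[mask]], axis=1)
img = np.zeros((size, size))
img[mask] = np.clip(sh_basis(normals) @ sh_coeffs, 0.0, None)
```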

### Step 3 - StyleGAN-based facial image editing

```bash
cd ../DataSet_Step3_Editing
python run_styleflow_editing.py \
    --proj_data_dir ${proj_data_dir} \
    --network_pkl ${checkpoints_dir}/stylegan_model/stylegan2-ffhq-config-f.pkl \
    --flow_model_path ${checkpoints_dir}/styleflow_model/modellarge10k.pt \
    --exp_direct_path ${checkpoints_dir}/styleflow_model/expression_direction.pt \
    --exp_recognition_path ${checkpoints_dir}/exprecog_model/FacialExpRecognition_model.t7 \
    --edit_items delight,norm_attr,multi_yaw
```
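This step edits the inverted latent codes so that each face is delit, attribute-normalised, and rendered at multiple yaw angles; the heavy lifting is done by the StyleFlow conditional normalizing flow together with the expression direction loaded above. As a much simpler, purely illustrative stand-in for that machinery, the sketch below applies a linear direction to a W+ latent (all tensors are random placeholders, not the real checkpoints):

```python
import numpy as np

# Random placeholders standing in for an inverted W+ latent (18 x 512 for StyleGAN2-FFHQ)
# and an attribute direction such as the expression direction; neither is the real data.
w_plus = np.random.randn(18, 512).astype(np.float32)
direction = np.random.randn(18, 512).astype(np.float32)
direction /= np.linalg.norm(direction)

def edit_latent(w, d, strength):
    """Simplified linear latent edit: move the W+ code along an attribute direction."""
    return w + strength * d

# e.g. push the latent towards a more neutral expression.
w_edited = edit_latent(w_plus, direction, strength=-1.5)
```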

### Step 4 - UV-texture extraction, correction & completion

```bash
cd ../DataSet_Step4_UV_Texture
python run_unwrap_texture.py \
    --proj_data_dir ${proj_data_dir} \
    --ckp_dir ${checkpoints_dir} \
    --topo_dir ${topo_assets_dir}
```

- Use our trained Deep3D model to predict the 3D shapes of the multi-view facial images.
- Extract facial textures from the multi-view facial images.
- Perform texture correction & completion to robustly generate high-quality UV-texture maps (a toy sketch of the unwrapping idea follows).
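The actual unwrapping rasterises the Deep3D-predicted mesh and blends the textures of the multi-view images before correction and completion. As a toy illustration of the core idea only, the sketch below samples image colours at projected vertex positions and splats them to the vertices' UV coordinates (all inputs are random placeholders):

```python
import numpy as np

def splat_vertex_colors(image, proj_xy, uv, uv_size=512):
    """Toy texture unwrapping: sample image colours at projected vertex positions
    and splat them to the vertices' UV coordinates (nearest pixel, no rasterisation)."""
    h, w, _ = image.shape
    tex = np.zeros((uv_size, uv_size, 3), dtype=np.float32)
    px = np.clip(proj_xy[:, 0].round().astype(int), 0, w - 1)
    py = np.clip(proj_xy[:, 1].round().astype(int), 0, h - 1)
    tu = np.clip((uv[:, 0] * (uv_size - 1)).round().astype(int), 0, uv_size - 1)
    tv = np.clip(((1.0 - uv[:, 1]) * (uv_size - 1)).round().astype(int), 0, uv_size - 1)
    tex[tv, tu] = image[py, px]
    return tex

# Placeholder inputs: one facial image, projected mesh vertices, and their UV coordinates.
image = np.zeros((256, 256, 3), dtype=np.float32)
proj_xy = np.random.rand(5000, 2) * 255  # (N, 2) pixel positions of the fitted mesh vertices
uv = np.random.rand(5000, 2)             # (N, 2) UV coordinates from the topology assets
texture = splat_vertex_colors(image, proj_xy, uv)
```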