This document explains how to create photorealistic renders with DermSynth3D using Unity as the renderer instead of PyTorch3D.
The render creation process consists of four main steps:
- Raw data processing: in this step, a Python script (process_raw_data.py) is used to extract the information required to create renders from the raw data and then arrange and save the information in a way that facilitates the render creation process using Unity.
- FBX file creation: in this step, Blender is used to create FBX files of the modified 3DBodyTex scans (i.e. 3DBodyTex meshes with modified texture maps), to facilitate scan import into Unity.
- Render creation: in this step, Unity is used to create renders of the modified 3DBodyTex scans.
- Render post-processing: in this step, a Python script (add_2d_bkg_to_unity_renders.py) is used to add backgrounds to the renders created in step 3.
| Name | Why Required |
| --- | --- |
| Blender | To convert 3DBodyTex scan OBJ files into FBX files, to facilitate import of scans into Unity |
| Unity | To create renders |
| Python | To process raw data and post-process renders created using Unity |
Follow the instructions in datasets to download the 3DBodyTex scans (OBJ and MTL files) required to create renders.
In addition, follow the instructions in usage to generate the synthetic data and metadata using the DermSynth3D pipeline.
The (sub)folders in ./data should then look like this:
```
DermSynth3D_private/
├── data/
│   ├── blended_lesions/            # the `save_dir` key in configs/blend.yaml
│   │   ├── gen_008/                # folder containing images, targets and metadata
│   │   │   └── data.csv
│   │   ├── ...
│   │   └── gen_355/
│   │       └── data.csv
│   ├── 3dbodytex_dataset/          # folder containing the meshes, texture maps, and texture masks
│   │   ├── 355-m-scape039/
│   │   │   ├── model_highres_0_normalized_debug.png
│   │   │   └── model_highres_0_normalized_mask.png
│   │   ├── ...
│   └── processed_raw_data/         # folder containing the processed renders
│       ├── 0.jpg
│       ├── ...
│       └── 7475.jpg
└── processed_raw_data/             # folder containing the metadata for Unity and texture maps after pre-processing
    ├── 008-f-run-blended/
    │   ├── all_params.csv
    │   └── model_highres_0_normalized.png
    ├── ...
```
Once you have a folder structure that resembles the one above, run the script process_raw_data.py to:
- read the raw data CSV files
- extract and save the information as CSV files in a format that is easier for Unity to read.
More specifically, the script extracts the following information:
- The location (x-, y- and z-coordinates) of the point light in each scene, and the x-, y- and z-coordinates of the point at which the light is aimed
- The location (x-, y- and z-coordinates) of the camera in each scene, and the x-, y- and z-coordinates of the point at which the camera is aimed
- The ID of the background that should be added to each render
Note that all of these coordinates are in the PyTorch3D coordinate system, which differs from Unity's.
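For reference, the mapping between the two conventions can be sketched as below. This assumes PyTorch3D's world convention (+X left, +Y up, +Z into the screen, right-handed) and Unity's (+X right, +Y up, +Z forward, left-handed), under which the two frames differ only by a flip of the x-axis; the function name is ours and is not part of the repository's scripts.

```python
def pytorch3d_to_unity(x, y, z):
    """Map a point from PyTorch3D world coordinates to Unity world coordinates.

    PyTorch3D uses a right-handed frame (+X left, +Y up, +Z into the screen);
    Unity uses a left-handed frame (+X right, +Y up, +Z forward). Under these
    conventions, negating the x-coordinate maps one frame to the other.
    """
    return (-x, y, z)

# Example: a camera at (1.0, 1.5, 2.0) in PyTorch3D lands at (-1.0, 1.5, 2.0) in Unity.
print(pytorch3d_to_unity(1.0, 1.5, 2.0))
```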
The script assumes that folders and files follow the structure and naming convention shown in the Raw Data section above. For each modified 3DBodyTex scan it creates a new folder and saves the CSV file of extracted information and a renamed copy of the modified texture map in this folder; the texture map is renamed so that the 3DBodyTex scan OBJ, MTL and modified texture map all share the same base name. All of the folders, CSV files and modified texture maps created by the script are stored in the processed_raw_data folder.
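A minimal sketch of what this per-scan processing amounts to is shown below. This is not the actual process_raw_data.py; the column names (`cam_x`, `light_at_z`, `bkg_id`, etc.) are illustrative placeholders for the camera/light coordinates, look-at points and background ID described above.

```python
import csv
import shutil
from pathlib import Path

def process_scan(raw_csv: Path, texture_map: Path, out_root: Path, scan_name: str) -> Path:
    """Hypothetical sketch of one scan's processing: create a per-scan folder,
    write the extracted parameters as all_params.csv, and save a renamed copy
    of the modified texture map so the OBJ, MTL and texture map share a base name."""
    scan_dir = out_root / scan_name
    scan_dir.mkdir(parents=True, exist_ok=True)

    with raw_csv.open() as f:
        rows = list(csv.DictReader(f))

    # Keep only the columns Unity needs (names here are illustrative, not the
    # actual schema): camera/light positions, their look-at points, background ID.
    wanted = [f"{p}_{a}" for p in ("cam", "cam_at", "light", "light_at") for a in "xyz"]
    wanted += ["bkg_id"]
    with (scan_dir / "all_params.csv").open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=wanted)
        writer.writeheader()
        for row in rows:
            writer.writerow({k: row.get(k, "") for k in wanted})

    # Rename the texture map to match the scan's OBJ/MTL base name.
    shutil.copy(texture_map, scan_dir / "model_highres_0_normalized.png")
    return scan_dir
```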
The 3DBodyTex scan meshes are OBJ files. However, importing OBJ files into Unity is more error-prone than importing FBX files, so the 3DBodyTex scans are first converted into FBX files using Blender.
To convert the `.obj` scans to `.fbx`:
- Copy the OBJ and MTL files of each 3DBodyTex scan to the corresponding folder created during the raw data processing. For example, copy the OBJ and MTL files of 3DBodyTex scan 008-f-run to the 008-f-run folder.
- Import the OBJ file into Blender.
- Export the scan as an FBX file, ensuring the correct export settings (see Figure below) are selected.
For more information about converting OBJ files to FBX files using Blender, see this video.
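If you have many scans, the Blender steps above can also be run headlessly instead of through the UI. The sketch below only builds a command line for such a run; the `blender` executable name is an assumption (adjust the path for your install), and the bpy operators referenced in the expression (`wm.obj_import`, `export_scene.fbx`) match recent Blender releases (3.2+) — older versions use `import_scene.obj` for the import instead.

```python
import shlex

def blender_obj_to_fbx_cmd(obj_path: str, fbx_path: str, blender: str = "blender") -> str:
    """Build a headless Blender command that imports an OBJ and exports it as FBX.

    The operators used (wm.obj_import, export_scene.fbx) follow Blender 3.2+;
    verify the export settings against the Figure above before batch-converting.
    """
    expr = (
        "import bpy; "
        # Start from an empty scene so only the scan ends up in the FBX.
        "bpy.ops.wm.read_factory_settings(use_empty=True); "
        f"bpy.ops.wm.obj_import(filepath={obj_path!r}); "
        f"bpy.ops.export_scene.fbx(filepath={fbx_path!r}, path_mode='COPY', embed_textures=True)"
    )
    return " ".join([blender, "--background", "--python-expr", shlex.quote(expr)])

print(blender_obj_to_fbx_cmd("008-f-run/model_highres_0_normalized.obj",
                             "008-f-run/model_highres_0_normalized.fbx"))
```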
The Unity perception package creates binary masks showing where the scans are in the renders; these masks are required later to add backgrounds to the renders. Render creation uses:
- the Unity perception package
- the C# scripts MoveCamera.cs and MoveLight.cs
1. Complete steps 1 and 2 of this tutorial to set up a new project in Unity and download the Unity perception package.
2. Create an empty scene in Unity.
3. Add a camera and a light to the scene.
4. Add a Perception Camera component to the camera. See step 3 of the tutorial above for more information on how to do this.
5. Modify the Scheduled Capture properties of the camera to:
   - Simulation Delta Time: 0.01
   - Start at Frame: 99
   - Frames Between Captures: 99
6. Change the camera's Field of View to 30 and its Near Clipping Plane to 0.01.
7. Change the output image matrix size to 512x512 pixels (Game View>>Free Aspect>>+).
8. Change the folder where renders will be saved (Edit>>Project Settings>>Perception>>Solo Endpoint>>Base Path).
9. Add the C# scripts MoveCamera.cs and MoveLight.cs to the Assets folder in Unity.
10. Add MoveCamera.cs to the camera you added to the scene in step 3.
11. Add MoveLight.cs to the light you added to the scene in step 3.
12. Add a folder containing a modified 3DBodyTex scan FBX file to the Assets folder.
13. Open the newly added folder and add the model_highres_0_normalized FBX file to the scene.
14. Set up the ground-truth label for the FBX file. See step 4 of the tutorial above for more information on how to do this.
15. Add the all_params.csv file from the folder added to the Assets folder in step 12 to the MoveCamera component of the camera and the MoveLight component of the light.
16. Click the play button to run the simulation, then click it again to stop the simulation once all renders have been created. Renders and binary masks showing where the scans are in the renders will be saved in the folder specified in step 8.
17. Delete the model_highres_0_normalized FBX file from the scene.
18. Delete the folder added to the Assets folder in step 12.
19. Repeat steps 12 to 18 (except step 15) for each remaining folder containing a modified 3DBodyTex scan FBX file.
For detailed instructions on how to set up the Unity perception package, see this video.
MoveCamera.cs and MoveLight.cs read the coordinates in the CSV files of extracted information and move the camera and light to these coordinates.
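The C# scripts themselves are not reproduced here, but their CSV-reading logic can be sketched in Python. This assumes a hypothetical all_params.csv layout with one scene per row and per-prefix position and look-at columns (`cam_x`, `cam_at_x`, `light_x`, ...); the actual column names in the repository's files may differ.

```python
import csv
import io

def read_placements(csv_text: str, prefix: str):
    """Yield (position, look_at) tuples for the given prefix ('cam' or 'light'),
    mirroring what MoveCamera.cs / MoveLight.cs do with the CSV each frame:
    read one row, move the object to `position`, and point it at `look_at`.
    Column names are illustrative, not taken from the actual scripts."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        position = tuple(float(row[f"{prefix}_{axis}"]) for axis in "xyz")
        look_at = tuple(float(row[f"{prefix}_at_{axis}"]) for axis in "xyz")
        yield position, look_at
```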
After the raw data pre-processing and the Unity render creation are complete, each render still needs the background specified for it in the raw metadata.
To add these backgrounds, run the Python script add_2d_bkg_to_unity_renders.py, which combines the background images with the renders and binary masks created using Unity and the Unity perception package, and then saves the resulting images as PNG files.
The Figure below shows an overview of this process.
The script assumes that folders and files have the structure and naming convention shown below. It creates two new folders (renders and masks) in each modified 3DBodyTex scan folder (e.g. 008-f-run-blended in the Figure below) and saves the post-processed images as PNG files in renders and renamed copies of the binary masks in masks. The name of each post-processed image is the unique ID listed in the corresponding CSV file of raw data.
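Per image, the compositing itself reduces to a masked blend. The numpy sketch below shows the idea, not the actual script's implementation: wherever the binary mask is non-zero the render is kept, elsewhere the background shows through.

```python
import numpy as np

def add_background(render: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Paste the rendered scan over a 2D background using the binary mask
    produced by the Unity perception package.

    render, background: (H, W, 3) uint8 images of the same size;
    mask: (H, W) array that is non-zero where the scan is visible.
    """
    visible = (mask > 0)[..., None]  # (H, W, 1) boolean, broadcast over RGB
    return np.where(visible, render, background).astype(np.uint8)
```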