Training with dataset THuman2.0 #2

Open
CastoHu opened this issue Nov 26, 2022 · 4 comments
CastoHu commented Nov 26, 2022

Hi,

Thanks for your new idea. I see that you trained the PIFu method on the THuman2.0 dataset. I tried to train it myself, but the resulting SDF values are all 0. I think this may be because the meshes in THuman2.0 are not watertight. I wonder how you used the THuman2.0 and Twindom datasets to train the PIFu method.

Looking forward to your reply.

Thanks.
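
A quick diagnostic for the watertightness suspicion above is to load one scan with trimesh and check it directly. This is only a sketch; the mesh path is a placeholder:

```python
import numpy as np
import trimesh

# Placeholder path to one THuman2.0 scan.
mesh = trimesh.load("thuman2_scan.obj", force="mesh")
print("watertight:", mesh.is_watertight)  # may be False for raw scans

# trimesh's containment test assumes a closed surface; on a non-watertight
# mesh it can label nearly every query point as outside (occ all 0).
pts = np.random.uniform(mesh.bounds[0], mesh.bounds[1], size=(1000, 3))
print("inside fraction:", mesh.contains(pts).mean())
```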

fengq1a0 (Owner) commented Nov 26, 2022

@CastoHu

1. Try using https://github.com/sxyu/sdf instead of trimesh to compute occ; it supports non-watertight meshes. Note that PIFu uses occupancy (occ), not SDF. https://github.com/sxyu/sdf may crash when computing SDF, but it works well for computing occ.
2. Another solution is to generate watertight meshes with a traditional remeshing algorithm. You can then supervise the network with the generated meshes at very little loss of precision.

We used solution 1 in our experiments.
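
For reference, a minimal sketch of computing occupancy labels with pysdf (the package built from https://github.com/sxyu/sdf) instead of trimesh; the mesh path and query points are placeholders, and this is only one way to wire it up:

```python
import numpy as np
import trimesh
from pysdf import SDF  # pip install pysdf (built from sxyu/sdf)

mesh = trimesh.load("thuman2_scan.obj", force="mesh")   # placeholder path
f = SDF(mesh.vertices, mesh.faces)

points = np.random.uniform(-1.0, 1.0, size=(5000, 3))   # example query points
occ = f.contains(points)                                 # boolean inside/outside

# Use occ.astype(np.float32) as the 0/1 occupancy target for PIFu.
# Avoid f(points) (signed distances), which as noted above can crash.
```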

CastoHu (Author) commented Nov 28, 2022


Thanks for your reply.

Today I tried to compute occ using the method you suggested. Since PIFu trains on a mix of uniform sampling and surface sampling, I used the sampling method from the source code and used pyrender to render the sample points. I found that the human meshes in the THuman2.0 dataset are too small, which usually leads to mistakes after training, and the method you provided also evaluates occupancy at sampled points. I wonder how you solved this problem?

Looking forward to your reply.
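
For reference, a rough sketch of the uniform-plus-surface sampling being discussed, using trimesh for surface sampling and pysdf for the occupancy labels; the sample counts, sigma, and the mesh path are illustrative placeholders, not values from either repository:

```python
import numpy as np
import trimesh
from pysdf import SDF

mesh = trimesh.load("thuman2_scan.obj", force="mesh")  # placeholder path
f = SDF(mesh.vertices, mesh.faces)

sigma = 0.05                       # std of the normal offset; tune to the mesh scale
n_surface, n_uniform = 4000, 1000  # illustrative counts

# Surface samples perturbed by Gaussian noise.
surface_pts, _ = trimesh.sample.sample_surface(mesh, n_surface)
surface_pts = surface_pts + np.random.normal(scale=sigma, size=surface_pts.shape)

# Uniform samples inside the mesh's bounding box.
b_min, b_max = mesh.bounds
uniform_pts = np.random.uniform(b_min, b_max, size=(n_uniform, 3))

samples = np.concatenate([surface_pts, uniform_pts], axis=0)
labels = f.contains(samples).astype(np.float32)  # 1 inside, 0 outside
```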

fengq1a0 (Owner) commented

@CastoHu
Just resize the mesh, I think. In fact, I don't know what went wrong in your project; everything worked well when I trained my PIFu.

In my opinion, three things matter: 1. the rendered image; 2. the projection matrix (calib); 3. the mesh (the points). If the image and the calib are right, the mesh should be at the right size and in the right place. So I think you should check all three.

What do you mean by "the human mesh in the THuman2.0 dataset is too small and usually makes mistakes after training"? Is everything right with your image and calib? Check the intermediate result (xyz) of the model: look at "xyz" between "self.projection" and "self.index" in the "query" function, and draw those points on the image without using pyrender.
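
As a sanity check for the image/calib/points consistency described above, one option is to project mesh points with the calib and scatter them over the rendered image, without going through pyrender. This sketch assumes the PIFu convention that calib maps world points to normalized [-1, 1] image coordinates; the file paths are placeholders:

```python
import numpy as np
import trimesh
import matplotlib.pyplot as plt
from PIL import Image

img = np.array(Image.open("render_0000.png"))  # placeholder rendered image
calib = np.load("calib_0000.npy")              # placeholder 4x4 calib matrix

mesh = trimesh.load("thuman2_scan.obj", force="mesh")
idx = np.random.choice(len(mesh.vertices), 2000, replace=False)
pts = mesh.vertices[idx]

# Homogeneous projection, as in the "xyz" checked inside query().
homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
xyz = (calib @ homo.T).T                       # xy should land in [-1, 1]

h, w = img.shape[:2]
u = (xyz[:, 0] * 0.5 + 0.5) * w                # map [-1, 1] -> pixels
v = (xyz[:, 1] * 0.5 + 0.5) * h
# Depending on the y-axis convention you may need v = (-xyz[:, 1] * 0.5 + 0.5) * h.

plt.imshow(img)
plt.scatter(u, v, s=1, c="r")                  # the dots should cover the person
plt.show()
```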

CastoHu (Author) commented Dec 1, 2022


What I actually mean is that the human body coordinates in the THuman2.0 dataset are normalized, whereas the data in the rp dataset used in the source code are at their original scale. I tried a few things:

1. After sampling the surface of the original THuman meshes, I scaled the point coordinates and then added an offset so that the human body occupies a good proportion of the uniformly sampled space. However, the error obtained during training is almost 0.
2. I shrank the uniform sampling space to roughly the scale of the human body and adjusted the standard deviation used when adding the normal offset to the surface samples. However, the generated OBJ file contains only a very small number of points, and the resulting mesh is unacceptable.

I don't know how you solved the sampling problem. If possible, could you please briefly share the modifications you made to PIFu?
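
For reference, a minimal sketch of the "just resize the mesh" suggestion from the previous reply: rescale a normalized THuman2.0 scan so the body matches the scale the sampling and calib expect. The target height and the paths are assumptions, not values from this repository:

```python
import trimesh

mesh = trimesh.load("thuman2_scan.obj", force="mesh")  # placeholder path

# Center the scan at the origin and scale its height to a target value
# (e.g. 180 if the rest of the pipeline works in centimetres).
target_height = 180.0
b_min, b_max = mesh.bounds
center = (b_min + b_max) / 2.0
scale = target_height / (b_max[1] - b_min[1])  # assumes y is the up axis

mesh.apply_translation(-center)
mesh.apply_scale(scale)
mesh.export("thuman2_scan_resized.obj")

# Apply the same translation/scale to the calib (or rendering setup) so the
# image, calib, and sample points stay consistent.
```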
