
Question about texture generation with model.obj #5

Open · JackeyLa5 opened this issue Dec 18, 2024 · 6 comments

@JackeyLa5

First of all, I would like to express my appreciation for your excellent work! It’s really impressive.
While testing, I encountered an issue with the generated model. Specifically, the file assets/models/f9/f96a044e1b5c4d7680c1b47db07df12f/model.obj already has a texture applied to it. I am attempting to follow the method described in the paper, where a photo from a specific viewpoint is used as a prompt to generate the texture. To do this, I replaced the existing texture image assets/models/f9/f96a044e1b5c4d7680c1b47db07df12f/model.png.
[image attached]
However, the result has not been ideal.
[image attached]
Could you please guide me on whether I’ve made a mistake in the process or if there's something else I should be considering? Any help would be greatly appreciated.

Thank you in advance!

@XinYu-Andy
Member

Hello, "model.png" is only used to render a specific view for inference. If you want to condition on a specific view image of your own (e.g., a photo), you need to replace the "rendered view" with your image (see this line) and provide a pose (see this line).
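
To make those two substitutions concrete, here is a minimal sketch (an editor's illustration, not the repo's actual code; the file names and the `rendered_view`/`pose` variables are hypothetical stand-ins for whatever the linked lines actually use):

```python
import numpy as np
from PIL import Image

# Load the custom conditioning photo and its camera pose (hypothetical files).
custom_view = np.asarray(Image.open("my_front_view.png").convert("RGB"))
custom_pose = np.load("my_front_view_pose.npy")  # assumed (4, 4) camera extrinsic

# Inside the inference script, instead of rendering the conditioning view:
# rendered_view = renderer.render(mesh, pose)   # original behaviour (names assumed)
rendered_view = custom_view                      # use the custom photo instead
pose = custom_pose                               # and supply its matching pose
```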

@JackeyLa5
Author

I may not have expressed my intent clearly.

I want to generate a textured model of a chair that matches a front-facing photo of the chair, given an untextured chair model. The process should work as demonstrated in the paper.
[image: figure from the paper]

I don’t have a uv_map_gt. Instead, I replaced the file assets/models/f9/f96a044e1b5c4d7680c1b47db07df12f/model.png with the front-facing photo of the chair. However, the result I obtained looks strange.

Could you let me know if I’ve done something wrong or missed any key steps?

@XinYu-Andy
Member

Oh, I understand what you mean. The codebase currently only supports rendering a condition view (e.g., a front view) and then using that view image (not the uv_map_gt, and not "model.png") to synthesize the whole texture.
As I said, if we want to use a custom image, we also need to know the pose of that image (i.e., this line). Do you know the pose of your image? Can you show me what your image looks like?
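
For readers unsure what "pose" means here: it is the camera extrinsic of the conditioning image, not a property of the mesh. As a rough sketch, assuming an OpenCV-style convention (camera looks down its own +Z axis; the convention this codebase actually uses would need to be checked against texgen_test.py), a front view is an identity rotation plus a translation that places the object in front of the camera, not a bare identity matrix:

```python
import numpy as np

def front_view_pose(distance: float = 2.0) -> np.ndarray:
    """World-to-camera extrinsic for a camera facing the object's front.

    Assumes an OpenCV-style camera looking down +Z, placed at
    (0, 0, -distance) in world space. Purely illustrative, since the
    convention used by this codebase is not confirmed in the thread.
    """
    pose = np.eye(4)       # identity rotation: the view is frontal
    pose[2, 3] = distance  # translate the object 'distance' units ahead of the camera
    return pose
```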

@JackeyLa5
Author

Thank you for your reply. I was just making a simple attempt, as I am interested in generating textures from custom images.
Can the pose of a custom image be understood as the pose of the object in the image relative to the model itself? If the custom image is the front view shown in the paper, can the pose be taken as an identity matrix? Where should I specify my custom image and this pose without modifying the code in texgen_test.py? For example, what parameters should be specified in texgen_test.yaml when running launch.py?

@XinYu-Andy
Member

> Thank you for your reply. I was just making a simple attempt, as I am interested in generating textures from custom images. Can the pose of a custom image be understood as the pose of the object in the image relative to the model itself? If the custom image is the front view shown in the paper, can the pose be taken as an identity matrix? Where should I specify my custom image and this pose without modifying the code in texgen_test.py? For example, what parameters should be specified in texgen_test.yaml when running launch.py?

If the custom image's shape is not aligned with the mesh, it may be hard to process. If we have a perfectly aligned single-view image, we need to estimate its pose so that we can do texture warping in our code. My practice is to warp the image onto the UV space to get an incomplete texture map, and then use our network for inpainting. In our benchmark, we know the poses of the testing images.
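
Since the thread suggests no warping utility ships with the repo, here is a heavily simplified sketch of that warp-then-inpaint preprocessing in plain numpy/trimesh (an editor's illustration, not the maintainer's code: it splats per-vertex colors instead of rasterizing faces, omits the occlusion test a real implementation needs, and assumes the mesh carries per-vertex UVs):

```python
import numpy as np
import trimesh
from PIL import Image

def warp_view_to_uv(mesh_path, image_path, pose, K, uv_size=1024):
    """Back-project one posed RGB view onto the mesh's UV texture.

    pose : (4, 4) world-to-camera extrinsic (OpenCV-style, +Z forward).
    K    : (3, 3) camera intrinsics.
    Returns an (uv_size, uv_size, 4) RGBA array; alpha marks observed texels.
    """
    mesh = trimesh.load(mesh_path, force="mesh", process=False)
    image = np.asarray(Image.open(image_path).convert("RGB"))
    h, w = image.shape[:2]

    verts = mesh.vertices  # (V, 3) world-space vertices
    uvs = mesh.visual.uv   # (V, 2) per-vertex UVs in [0, 1] (assumed present)

    # Project every vertex into the image plane.
    cam = pose[:3, :3] @ verts.T + pose[:3, 3:4]  # (3, V) camera-space points
    pix = K @ cam
    with np.errstate(divide="ignore", invalid="ignore"):
        px, py = pix[0] / pix[2], pix[1] / pix[2]

    visible = (cam[2] > 0) & (px >= 0) & (px < w) & (py >= 0) & (py < h)
    # NOTE: a real implementation must also z-test against a rendered depth
    # map so occluded / back-facing vertices are not textured.

    texture = np.zeros((uv_size, uv_size, 4), dtype=np.uint8)
    for v in np.where(visible)[0]:
        tx = int(uvs[v, 0] * (uv_size - 1))
        ty = int((1.0 - uvs[v, 1]) * (uv_size - 1))  # flip V: image rows grow downward
        texture[ty, tx, :3] = image[int(py[v]), int(px[v])]
        texture[ty, tx, 3] = 255                     # mark texel as observed
    return texture
```

The alpha channel would then serve as the known/unknown mask when handing the incomplete texture map to the inpainting network the maintainer describes.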

@journey-zhuang

Following up on this question: is there any provided code for processing an input front-view custom image to get the projected incomplete UV map? If not, what tools or methods should I use to produce data that meets the input requirements? Thank you for your assistance!
