Image/text-to-mesh is achieved by combining our method with 3D generation methods: we first obtain dense meshes from a 3D generation method and use them as input to our method.
Note that the shape quality of these dense meshes must be high enough. Feed-forward 3D generation methods often produce poor results because their shape quality is insufficient. We suggest using results from SDS-based pipelines (like DreamCraft3D) as the input to MeshAnything, as they produce better shape quality.
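A minimal sketch of the hand-off described above — converting a dense mesh from a 3D generation pipeline into point-cloud-with-normals input — might look like the following. The (N, 6) points+normals layout, the normalization convention, and the output file name are assumptions for illustration; check the MeshAnything repo docs for the exact expected format.

```python
# Sketch: prepare pc_normal-style input from a dense generated mesh.
# Assumes an (N, 6) layout of xyz + normals saved as .npy -- verify
# against the repo's actual input spec before using.
import numpy as np
import trimesh

def mesh_to_pc_normal(mesh_path: str, n_points: int = 8192) -> np.ndarray:
    mesh = trimesh.load(mesh_path, force="mesh")

    # Normalize vertices to a unit cube centered at the origin
    # (a common preprocessing assumption, not confirmed by the repo).
    verts = mesh.vertices - mesh.vertices.mean(axis=0)
    verts /= np.abs(verts).max()
    mesh = trimesh.Trimesh(vertices=verts, faces=mesh.faces, process=False)

    # Sample surface points and take the normal of each sampled face.
    points, face_idx = trimesh.sample.sample_surface(mesh, n_points)
    normals = mesh.face_normals[face_idx]

    return np.concatenate([points, normals], axis=1)  # shape (N, 6)

if __name__ == "__main__":
    pc = mesh_to_pc_normal("dense_mesh.obj")     # e.g., DreamCraft3D output
    np.save("pc_normal_input.npy", pc)           # hypothetical file name
```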
Is that not possible right now?
The supported input types are mesh and pc_normal, but this shows text and image as inputs.
Maybe I'm missing something.