
support for >800 faces #11

Open
yosun opened this issue Jun 25, 2024 · 10 comments

Comments

@yosun commented Jun 25, 2024

seriously.

@buaacyw (Owner) commented Jun 26, 2024

It takes more computational resources to support more faces.

@yosun (Author) commented Jun 26, 2024

It's useless at 800 faces: when you want to convert a bad 3D-scanned mesh to an "artist-made" one, you start with a high-poly mesh.

@buaacyw (Owner) commented Jun 26, 2024

This is an academic project; you can't expect it to go from nothing to industry use overnight, right?

@matbee-eth commented

> This is an academic project; you can't expect it to go from nothing to industry use overnight, right?

Is there any type of continued pretraining or finetuning we could do to achieve > 800?

@yosun (Author) commented Jun 27, 2024

> This is an academic project; you can't expect it to go from nothing to industry use overnight, right?

Well, you're certainly great at marketing, even if your research is basically a toy project.

I don't understand why people are excited by the results when existing procedural mesh cleanup techniques are cheaper, faster, and more useful than this.

@buaacyw (Owner) commented Jun 27, 2024

> > This is an academic project; you can't expect it to go from nothing to industry use overnight, right?
>
> Is there any type of continued pretraining or finetuning we could do to achieve > 800?

Hi! Thanks for your interest. As with LLMs, simply extend the length of the positional-encoding vector and train the model on a mesh dataset containing meshes with more than 800 faces.
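As a rough illustration of the positional-encoding half of that suggestion (a minimal NumPy sketch, not MeshAnything's actual code: the tokens-per-face count, embedding width, and the interpolation trick are assumptions borrowed from common LLM context-extension practice), one way to stretch a learned positional-embedding table to cover more faces before fine-tuning is per-dimension linear interpolation:

```python
import numpy as np

def extend_positional_table(pe: np.ndarray, new_len: int) -> np.ndarray:
    """Stretch a learned positional-embedding table of shape (old_len, dim)
    to new_len rows via per-dimension linear interpolation ("positional
    interpolation", the trick used to extend LLM context windows).
    The stretched table would then be fine-tuned on longer sequences."""
    old_len, dim = pe.shape
    old_x = np.linspace(0.0, 1.0, old_len)
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.stack([np.interp(new_x, old_x, pe[:, d]) for d in range(dim)], axis=1)

# Hypothetical numbers: 9 tokens per face and a 512-dim embedding are
# placeholders, not the model's real configuration.
tokens_per_face, dim = 9, 512
pe_800 = np.random.randn(800 * tokens_per_face, dim)   # stand-in for a trained table
pe_1600 = extend_positional_table(pe_800, 1600 * tokens_per_face)
```

The other option the comment describes (allocating a longer table and training directly on a dataset of >800-face meshes) needs no interpolation; the interpolated table just gives fine-tuning a warmer start than randomly initialized extra positions.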

@buaacyw (Owner) commented Jun 27, 2024

> > This is an academic project; you can't expect it to go from nothing to industry use overnight, right?
>
> Well, you're certainly great at marketing, even if your research is basically a toy project.
>
> I don't understand why people are excited by the results when existing procedural mesh cleanup techniques are cheaper, faster, and more useful than this.

May I ask which specific methods are referred to by 'existing procedural mesh cleanup techniques'? I will make sure to study them thoroughly.

@yosun (Author) commented Jul 10, 2024

I'm not sure at what level to start the discussion.

Here is a very old demo (no, it's not the gen-AI InstantMesh from Tencent): any mesh to smooth quads, from 10 years ago: https://github.com/wjakob/instant-meshes

Since then, there have been many advances in procedural, non-AI conversion of messy meshes to smooth quads (computationally cheap: milliseconds on a consumer GPU)...

In this context, it would be great to see how the results of your solution compare to existing procedural mesh solutions.

Hopefully, by sharing some old-school techniques, we can get sooner to the common industry use case: generated meshes with 800+ faces being usable in your solution.

@matbee-eth commented Jul 10, 2024

> > > This is an academic project; you can't expect it to go from nothing to industry use overnight, right?
> >
> > Is there any type of continued pretraining or finetuning we could do to achieve > 800?
>
> Hi! Thanks for your interest. As with LLMs, simply extend the length of the positional-encoding vector and train the model on a mesh dataset containing meshes with more than 800 faces.

Curious if you've outlined a strategy for doing this?

Perhaps we could do some mass scraping of free 3D models. I have 3-gigabit internet, 100 TB of disk space, and experience in scraping.

@buaacyw (Owner) commented Jul 11, 2024

> > > > This is an academic project; you can't expect it to go from nothing to industry use overnight, right?
> > >
> > > Is there any type of continued pretraining or finetuning we could do to achieve > 800?
> >
> > Hi! Thanks for your interest. As with LLMs, simply extend the length of the positional-encoding vector and train the model on a mesh dataset containing meshes with more than 800 faces.
>
> Curious if you've outlined a strategy for doing this?
>
> Perhaps we could do some mass scraping of free 3D models. I have 3-gigabit internet, 100 TB of disk space, and experience in scraping.

Thanks for reaching out, but I think Objaverse-XL already has enough 3D models for training.
