
Quality and GPU memory usage issues #71

Open

Linkersem opened this issue Sep 12, 2024 · 2 comments

@Linkersem

Hello @Snosixtyboo @ameuleman, my device is an RTX 4090 with 24 GB of VRAM.

First, when using the SIBR viewer to view my trained model (the model file is about 4 GB), I found that GPU memory usage is about 22 GB. If this scales with model size, a larger scene will exceed 24 GB of VRAM, yet the paper says you can render files around 88 GB in size, which confuses me a little.

Second, I have a dataset captured from a large scene, denser than the sample data you provide. After training, I found that details are not reproduced as well as with 3DGS; in particular, some text is very blurry.
I analyzed this in a previous issue and found that re-triangulating the scene after splitting it into chunks produces a very sparse point cloud, which makes some areas hard to reconstruct. I therefore replaced the triangulated data with the original COLMAP point cloud (sketched below). However, this leads to a lot of white fog in the final trained model.
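
To make that substitution concrete, here is a minimal, hypothetical sketch of converting COLMAP's text-format `points3D.txt` into a PLY point cloud for initialization. The file paths and the exact initialization format the training pipeline expects are my assumptions; only the standard COLMAP `points3D.txt` layout (POINT3D_ID, X, Y, Z, R, G, B, ERROR, TRACK[]) is taken as given.

```python
# Hypothetical sketch: convert COLMAP's text-format points3D.txt into a
# binary PLY point cloud to use as initialization instead of the sparse
# re-triangulated per-chunk points. Paths are assumptions, not the
# repository's documented layout.
import struct

def read_points3D_txt(path):
    """Parse COLMAP's points3D.txt into parallel (xyz, rgb) lists."""
    xyz, rgb = [], []
    with open(path, "r") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            elems = line.split()
            # Layout: POINT3D_ID, X, Y, Z, R, G, B, ERROR, TRACK[]...
            xyz.append(tuple(float(v) for v in elems[1:4]))
            rgb.append(tuple(int(v) for v in elems[4:7]))
    return xyz, rgb

def write_ply(path, xyz, rgb):
    """Write a minimal binary little-endian PLY with positions and colors."""
    header = (
        "ply\nformat binary_little_endian 1.0\n"
        f"element vertex {len(xyz)}\n"
        "property float x\nproperty float y\nproperty float z\n"
        "property uchar red\nproperty uchar green\nproperty uchar blue\n"
        "end_header\n"
    )
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        for (x, y, z), (r, g, b) in zip(xyz, rgb):
            f.write(struct.pack("<fffBBB", x, y, z, r, g, b))

# Example paths below are assumptions for illustration.
xyz, rgb = read_points3D_txt("sparse/0/points3D.txt")
write_ply("sparse/0/points3D.ply", xyz, rgb)
```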

What could be causing this, and how can it be resolved?

@ameuleman
Collaborator

Hi,

  1. The viewer loads only part of the hierarchy into GPU memory, following the --budget parameter. Nodes are loaded into and evicted from GPU memory as the user moves through the scene (a sketch of this behavior follows below).
  2. See the discussion in issue #61.
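
To illustrate point 1, here is a minimal, hypothetical sketch of budget-limited streaming of hierarchy nodes. It is not the actual SIBR viewer code; the node layout and the priority heuristic (relevance approximated by distance to the camera) are illustrative assumptions. Only the existence of a --budget parameter is taken from the reply above.

```python
# Hypothetical sketch of budget-limited streaming of hierarchy nodes.
# Not the SIBR viewer's actual implementation; node structure and the
# distance-based priority heuristic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    center: tuple           # world-space center of the node's bounding volume
    size_bytes: int         # GPU footprint of this node's Gaussians
    resident: bool = False  # currently resident in GPU memory?

def priority(node, cam_pos):
    """Closer nodes are more relevant; lower value = higher priority."""
    return sum((c - p) ** 2 for c, p in zip(node.center, cam_pos))

def update_residency(nodes, cam_pos, budget_bytes):
    """Keep the highest-priority nodes resident until the budget is
    exhausted; evict the rest. A real viewer would do this incrementally
    rather than re-sorting every frame."""
    used = 0
    for node in sorted(nodes, key=lambda n: priority(n, cam_pos)):
        if used + node.size_bytes <= budget_bytes:
            node.resident = True   # upload to GPU if not already resident
            used += node.size_bytes
        else:
            node.resident = False  # free this node's GPU memory
    return used

# Usage example with made-up node sizes:
nodes = [Node(0, (0.0, 0.0, 0.0), 512 << 20), Node(1, (50.0, 0.0, 0.0), 256 << 20)]
used = update_residency(nodes, cam_pos=(1.0, 0.0, 0.0), budget_bytes=600 << 20)
```

In a scheme like this, peak GPU usage would be bounded by the budget plus fixed overheads (framebuffers, CUDA context) rather than by the total model size on disk.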

@Linkersem
Author

Hello, thank you for your reply!

  1. I understand what you said, but is there a quantitative indicator for this? For example, if I have a 4 GB model, how much GPU memory do I need? That 4 GB model crashes on my 2060 with 8 GB of VRAM but can be browsed normally on the 4090 with 24 GB, while a 500 MB model renders normally on the 2060. I would like a more specific rule, such as the relationship between GPU memory and model size.

  2. I will continue to discuss this in Weird result #61

Thank you again for your patient reply!
