I have a machine with 64 GB of RAM, and something during sample baking eats up so much memory that the whole process grinds to a near-halt. In Task Manager I can see that memory is entirely consumed by Unity, and the Memory Compression/pagefile processes dominate the processing time.
The first few seconds progress at around a hundred samples per second; after that it drops to 10–20 seconds per sample (basically back down to good ol' offline baking speeds).
To reproduce, take any chunked photogrammetry mesh. In my case it's 75 chunks with 5 LOD levels each (see the pull request that allows for that) and a total of 150k vertices.
Subsets of the mesh (e.g., with only 20 chunks active) work fine and bake at full GPU-rendering speed; a sketch of how I toggle the subset is below.
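For reference, a minimal sketch of the subset test, assuming the chunks are children of a single parent object (the `ChunkSubsetToggle` name and `activeChunkCount` field are just made up for illustration):

```csharp
using UnityEngine;

// Hypothetical helper: activates only the first N chunk roots under this
// object so partial bakes can be compared against the full mesh.
public class ChunkSubsetToggle : MonoBehaviour
{
    public int activeChunkCount = 20; // subsets this size still bake at full speed

    void Awake()
    {
        for (int i = 0; i < transform.childCount; i++)
            transform.GetChild(i).gameObject.SetActive(i < activeChunkCount);
    }
}
```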
What output resolution are you baking at? The bake process works by rendering a shadow map, so the per-sample rendering itself shouldn't be that resource-intensive, but I have noticed freezes when outputting huge files. I'm not sure if there's a way to throttle the GPU or something, as it seems to just happily accept as much work as you throw at it, even if that means completely locking up your PC. I'll check the code again and see if I can at least reduce the memory impact.
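If unbounded command submission turns out to be the culprit, one throttling option is a periodic blocking readback as a CPU-GPU sync point. This is only a sketch of the idea, not the actual bake code; `BakeOneSample` and `syncInterval` are stand-in names:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: cap how much rendering work can pile up in the driver's queue
// by forcing a sync every N samples.
public static class ThrottledBake
{
    const int syncInterval = 64; // hypothetical batch size; would need tuning

    public static void Run(RenderTexture target, int sampleCount)
    {
        for (int i = 0; i < sampleCount; i++)
        {
            BakeOneSample(target, i); // placeholder for the real shadow-map render

            if (i % syncInterval == syncInterval - 1)
            {
                // Blocking readback drains queued GPU work before
                // submitting more samples, bounding memory in flight.
                AsyncGPUReadback.Request(target).WaitForCompletion();
            }
        }
    }

    static void BakeOneSample(RenderTexture target, int index)
    {
        // Stand-in for the per-sample render described above.
    }
}
```

Even a plain `GL.Flush()` at the same interval might help, though the blocking readback gives a harder guarantee that the queue has actually drained.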