Ray bending and mesh extraction #4
Hi Kurt,

I commented out the ray bending visualization code in line 530 because it generates large files (10-100 MB per frame), but it should just work if you uncomment it. It visualizes the rays, both the initial straight rays and the bent rays, as lines in an .obj file. It doesn't visualize the geometry of the reconstruction; it only depends on the ray bending/deformation itself, nothing else.

I haven't looked into extracting surface geometry (e.g. as meshes) at all. You can take a look at this issue here: yenchenlin/nerf-pytorch#2

Ideally, one would want a single "template" mesh that is then only deformed across time, i.e. all the deformed versions of the mesh would be in correspondence. That won't be trivial to do; you would need some nearest-neighbor correspondence estimation, for example. Instead, what you will in all likelihood get is one mesh per timestep, with each mesh having completely different edges/connectivity/topology.

If you try some form of marching cubes, then viewpoints (at the same timestep) shouldn't matter, since marching cubes doesn't depend on viewpoints.

Best regards,
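For readers who want to reproduce that kind of output without digging into train.py: below is a minimal, hedged sketch of writing straight and bent rays as line primitives into an .obj file. The function name, argument shapes, and file layout are assumptions for illustration, not the repository's actual implementation around line 530.

```python
def export_rays_as_obj(origins, straight_ends, bent_points, path):
    """Write straight rays and bent rays as .obj line primitives.

    Assumed shapes (illustrative only):
      origins       (N, 3) ray origins
      straight_ends (N, 3) endpoints of the unbent rays
      bent_points   (N, S, 3) bent sample positions along each ray
    """
    with open(path, "w") as f:
        vertex_count = 0
        # each straight ray becomes a two-vertex line segment
        for o, e in zip(origins, straight_ends):
            f.write(f"v {o[0]} {o[1]} {o[2]}\n")
            f.write(f"v {e[0]} {e[1]} {e[2]}\n")
            f.write(f"l {vertex_count + 1} {vertex_count + 2}\n")  # .obj indices are 1-based
            vertex_count += 2
        # each bent ray becomes a polyline through its bent sample positions
        for samples in bent_points:
            for p in samples:
                f.write(f"v {p[0]} {p[1]} {p[2]}\n")
            line = " ".join(str(vertex_count + i + 1) for i in range(len(samples)))
            f.write(f"l {line}\n")
            vertex_count += len(samples)
```

The resulting file can be opened in a viewer that supports .obj line elements (Blender imports them as edges) to inspect how the bending deforms the straight rays.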
Hi,

I tried exporting the mesh from the trained model for the example sequence given here. I followed yenchenlin/nerf-pytorch to implement the code snippet given below. I used the first ray bending latent vector, hoping I would retrieve the mesh for the first time-step, and then applied marching cubes to obtain the .ply object. However, the output mesh does not have the expected topology. I tried changing the threshold for marching cubes as well. Could you please help me with this? I am new to NeRFs.

Thanks and regards,

```python
import sys

import numpy as np
import torch

import train  # train.py from this repository

sys.stdout.flush()
print('Args:')
parser = train.config_parser()
input = 'experiments/example_sequence'

# the original snippet is truncated here; the call that loads the trained
# network and the remaining return values are missing
(render_kwargs_train, render_kwargs_test, start, grad_vars, load_weights_into_network,
 *_) = ...

bds_dict = {
    "near": 2.,
    "far": 6.,
}
render_kwargs_test.update(bds_dict)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
N, chunk = 255, 1024 * 64
t = np.linspace(-1.2, 1.2, N + 1)  # query grid covers x, y, z in (-1.2, 1.2)

sigma = []
with torch.no_grad():
    ...  # the per-chunk density queries that fill `sigma` are missing in the original
density = torch.cat(sigma, dim=0).detach().cpu().numpy().squeeze()

import mcubes
vertices, triangles = mcubes.marching_cubes(density.reshape(256, 256, -1), threshold)  # `threshold` is set elsewhere (not shown)
```
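One detail the snippet above does not address, noted here as a hedged aside: `mcubes.marching_cubes` returns vertices in voxel-index coordinates, so they usually need to be rescaled into the world-space interval that was sampled before exporting. A minimal continuation of the snippet, assuming PyMCubes output and the trimesh library (which this repository does not necessarily use) for writing the .ply:

```python
import numpy as np
import trimesh  # assumed available; any library with PLY export would do

grid = density.reshape(256, 256, -1)   # same grid as passed to marching_cubes
grid_min, grid_max = -1.2, 1.2         # the interval sampled by np.linspace above

# marching-cubes vertices are in voxel indices (0 .. res-1); map back to world space
scale = (grid_max - grid_min) / (np.array(grid.shape) - 1)
vertices_world = vertices * scale + grid_min

trimesh.Trimesh(vertices_world, triangles).export("timestep_000.ply")
```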
Hi Uchitha,
I haven't tried extracting the geometry and so can't really help with
this code snippet. You could post a screenshot of the geometry you get.
One thing I noticed is that you query in x,y,z in (-1.2,+1.2). You might
want to use the bounding box from here:
https://github.com/facebookresearch/nonrigid_nerf/blob/ad24a1bb4a6c46968ac58ebe5cc24906bb63394c/free_viewpoint_rendering.py#L178
Other than that, Fig. 4 from Unbiased4D gives an impression of the
quality you can expect: https://arxiv.org/pdf/2206.08368.pdf
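As an illustration of that suggestion (not code from the repository): a minimal sketch of building the query grid from a scene bounding box instead of a fixed (-1.2, 1.2) cube. `min_point` and `max_point` are hypothetical stand-ins for whatever the linked code in free_viewpoint_rendering.py provides.

```python
import numpy as np
import torch

# hypothetical bounding-box corners; in practice take them from the values
# loaded around the linked line in free_viewpoint_rendering.py
min_point = np.array([-0.4, -0.5, 2.0])
max_point = np.array([ 0.4,  0.5, 4.0])

resolution = 256
xs = np.linspace(min_point[0], max_point[0], resolution)
ys = np.linspace(min_point[1], max_point[1], resolution)
zs = np.linspace(min_point[2], max_point[2], resolution)

# (resolution**3, 3) world-space positions to feed to the density network in chunks
query_points = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)
query_points = torch.from_numpy(query_points).float()
```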
Assuming the network is trained correctly (e.g. novel-view renderings look recognizable), it might be that the thresholds are not good or that the core of the scene sits somewhere inside that cube: the bounding box is rather generously large, and most of it is never usefully supervised. That's all I can come up with.
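A small, hedged follow-up on the threshold point (not from the thread itself): one way to sanity-check the iso-value is to look at the distribution of queried densities and sweep a few candidates. `density` refers to the grid from the snippet earlier in this thread.

```python
import numpy as np
import mcubes

grid = density.reshape(256, 256, -1)  # density grid from the earlier snippet

# see what value range the network actually produces inside the box
print("density percentiles:", np.percentile(grid, [50, 90, 99, 99.9]))

# sweep a few candidate iso-values and check how much geometry each produces
for threshold in [5.0, 10.0, 25.0, 50.0]:
    vertices, triangles = mcubes.marching_cubes(grid, threshold)
    print(f"threshold={threshold}: {len(vertices)} vertices, {len(triangles)} triangles")
```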
Hi,
I am experimenting with your code and am interested in the ray bending visualisations and mesh generation.
I see this is commented out in train.py around line 530. Do you have any insight into extracting meshes at certain timepoints and viewpoints?
Cheers,
Kurt