This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Ray bending and mesh extraction #4

Open · kurtjcu opened this issue Jan 19, 2021 · 5 comments

@kurtjcu commented Jan 19, 2021

Hi,
I am experimenting with your code and am interested in the ray bending visualisations and mesh generation.
I see this is commented out in train.py around line 530. Do you have any insight into extracting meshes at certain timepoints and viewpoints?

Cheers,
Kurt

@edgar-tr (Contributor)

Hi Kurt,

I commented out the ray bending visualization code around line 530 because it generates large files (10-100 MB per frame), but it should just work if you uncomment it. It visualizes the rays, both the initial straight rays and the bent rays, as lines in an .obj file. It doesn't visualize the geometry of the reconstruction; it only depends on the ray bending/deformation itself, nothing else.
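
For illustration, here is a minimal sketch of what "rays as lines in an .obj file" can look like. This is not the exact helper from the repository (`store_ray_bending_mesh_visualization`); the function name, array names, and shapes are assumptions:

```python
# Minimal sketch (not the repository's helper): dump straight and bent rays as
# polylines in an .obj file. straight_pts and bent_pts are assumed to be
# arrays/lists of shape (num_rays, num_samples, 3).
def write_rays_as_obj(straight_pts, bent_pts, filename="rays.obj"):
    with open(filename, "w") as f:
        vertex_count = 0
        for ray_set in (straight_pts, bent_pts):
            for ray in ray_set:
                # one "v x y z" entry per sample point along the ray
                for x, y, z in ray:
                    f.write("v {} {} {}\n".format(x, y, z))
                # one "l" polyline element connecting this ray's samples (1-based indices)
                indices = range(vertex_count + 1, vertex_count + len(ray) + 1)
                f.write("l " + " ".join(str(i) for i in indices) + "\n")
                vertex_count += len(ray)
```

Each ray becomes one `l` polyline element over its sampled `v` vertices, so the straight and bent versions can be compared side by side in any .obj viewer.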

I haven't looked into extracting surface geometry (e.g. as meshes) at all. You can take a look at this issue: yenchenlin/nerf-pytorch#2. Ideally, one would want a single "template" mesh that is then only deformed across time, i.e. all the deformed versions of the mesh are in correspondence. That won't be trivial to do; you would need some nearest-neighbor correspondence estimation, for example. Instead, what you will in all likelihood get is one mesh per timestep, with each mesh having completely different edges/connectivity/topology. If you try some form of marching cubes, then viewpoints (at the same timestep) shouldn't matter, since marching cubes doesn't depend on viewpoints.
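
To make the nearest-neighbor idea concrete, here is a rough sketch (not code from this repository; the use of scipy and the array shapes are assumptions) of estimating correspondences between a per-timestep mesh and a chosen reference mesh:

```python
# Rough sketch of nearest-neighbor correspondence estimation between two meshes
# (not from this repository; scipy usage and shapes are assumptions).
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_correspondences(reference_vertices, timestep_vertices):
    """reference_vertices: (M, 3) array, timestep_vertices: (K, 3) array.
    Returns, for each timestep vertex, the index of (and distance to) the
    closest reference vertex."""
    tree = cKDTree(np.asarray(reference_vertices))
    distances, indices = tree.query(np.asarray(timestep_vertices))
    return indices, distances
```

This only gives crude per-vertex matches; it won't handle large deformations or topology changes well, which is why a proper template mesh that is deformed across time would be the nicer (but harder) option.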

Best regards,
Edgar

@anona-R commented Jan 30, 2023

Hi,

I tried exporting the mesh from the trained model for the example sequence given here. I followed yenchenlin/nerf-pytorch to implement the code snippet below. I used the first ray bending latent vector, hoping to retrieve the mesh for the first time step, and then applied marching cubes to obtain the .ply object. However, the resulting mesh does not have the expected topology. I also tried changing the threshold for marching cubes. Could you please help me with this? I am new to NeRFs.

Thanks and regards,
Uchitha

```python
import os
import sys

import matplotlib.pyplot as plt
import numpy as np
import torch

import free_viewpoint_rendering
import train

sys.stdout.flush()
basedir = './experiments/example_sequence/logs'
expname = ''
config = os.path.join(basedir, expname, 'config.txt')

print('Args:')
print(open(config, 'r').read())

parser = train.config_parser()
ft_str = ''
args = parser.parse_args('--config {} '.format(config) + ft_str)

input_folder = 'experiments/example_sequence'

(render_kwargs_train, render_kwargs_test, start, grad_vars, load_weights_into_network,
 checkpoint_dict,
 get_training_ray_bending_latents,
 load_llff_dataset,
 raw_render_path,
 render_convenient,
 convert_rgb_to_saveable,
 convert_disparity_to_saveable,
 convert_disparity_to_jet,
 convert_disparity_to_phong,
 store_ray_bending_mesh_visualization,
 to8b) = free_viewpoint_rendering._setup_nonrigid_nerf_network(input_folder)

bds_dict = {"near": 2., "far": 6.}
render_kwargs_test.update(bds_dict)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
NeRF = render_kwargs_train['network_fine']

# Regular (N+1)^3 grid of 3D query points covering the assumed bounding box.
N, chunk = 255, 1024 * 64
t = np.linspace(-1.2, 1.2, N + 1)
query_points = np.stack(np.meshgrid(t, t, t), -1).astype(np.float32)
flat = torch.from_numpy(query_points.reshape([-1, 3])).to(device)
query_fn = render_kwargs_train['network_query_fn']


def get_training_additional_pixel_information(checkpoint="latest"):
    # Load the per-frame ray bending latent codes and use the first one,
    # i.e. query the deformation of the first training time step.
    latents_path = os.path.join(basedir, checkpoint + ".tar")
    training_latent_vectors = torch.load(latents_path)["ray_bending_latent_codes"]
    training_latent_vector = training_latent_vectors[0].to(device=device)
    additional_pixel_information = {
        # one copy of the latent per query point in a chunk (chunk = 256 * 256)
        "ray_bending_latents": training_latent_vector.reshape(1, 64).expand(256 * 256, 64),
    }
    return additional_pixel_information


sigma = []
with torch.no_grad():
    for i in range(0, flat.shape[0], chunk):
        pts = flat[i:i + chunk, None, :]
        viewdirs = None  # the density query does not need view directions
        detailed_output = False
        additional_pixel_information = get_training_additional_pixel_information()
        raw = query_fn(pts, viewdirs, additional_pixel_information, NeRF, detailed_output)
        sigma.append(raw[..., -1])  # last channel is the raw density

density = torch.cat(sigma, dim=0).detach().cpu().numpy().squeeze()
plt.hist(np.maximum(0, density), log=True)
plt.savefig('density.png')
plt.show()

import mcubes
import trimesh

threshold = 0.5
vertices, triangles = mcubes.marching_cubes(density.reshape(256, 256, -1), threshold)
print('done', vertices, triangles)
mesh = trimesh.Trimesh(vertices, triangles)
mesh.export('0017.ply')
```
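
For reference, the same density query could be repeated with each training latent code to get one mesh per timestep, as suggested earlier in this thread. This is an untested sketch that reuses the names `query_fn`, `NeRF`, `flat`, `chunk`, `N`, `basedir`, `device`, and `threshold` from the snippet above:

```python
# Untested sketch: one marching-cubes mesh per timestep, reusing the setup above.
import mcubes
import trimesh

latents = torch.load(os.path.join(basedir, "latest.tar"))["ray_bending_latent_codes"]

for t_idx, latent in enumerate(latents):
    latent = latent.to(device)
    sigma_per_step = []
    with torch.no_grad():
        for i in range(0, flat.shape[0], chunk):
            pts = flat[i:i + chunk, None, :]
            pixel_info = {
                # one copy of this timestep's latent per query point in the chunk
                "ray_bending_latents": latent.reshape(1, -1).expand(pts.shape[0], -1),
            }
            raw = query_fn(pts, None, pixel_info, NeRF, False)
            sigma_per_step.append(raw[..., -1])
    density = torch.cat(sigma_per_step, dim=0).cpu().numpy().reshape(N + 1, N + 1, N + 1)
    verts, tris = mcubes.marching_cubes(np.maximum(density, 0.0), threshold)
    trimesh.Trimesh(verts, tris).export("mesh_t{:04d}.ply".format(t_idx))
```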

@edgar-tr (Contributor) commented Jan 30, 2023 via email

@anona-R commented Jan 31, 2023

Thank you for the info on the bounding box and expected output. I changed the bounding box to suit the example sequence, but I still only get a cube-shaped object as the mesh.
The final output mesh looks like this:

[Image: NR_NeRF_mesh2, the exported cube-shaped mesh]

@edgar-tr (Contributor) commented Jan 31, 2023 via email
