This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Blurry results on moving parts #2

Open
phongnhhn92 opened this issue Jan 18, 2021 · 6 comments

Comments

@phongnhhn92

Hi,
I have been testing NR-NeRF on the ayush sequence from the CVD method.
I followed the instructions to get the poses using COLMAP. Then I modified the config to train on this sequence as follows:

dataset_type = llff
datadir = data/ayush/

rootdir = experiments/
expname =  ayush

debug = False

ray_bending = simple_neural # None, simple_neural
ray_bending_latent_size = 32

factor = 4

offsets_loss_weight = 0.2
divergence_loss_weight = 3
rigidity_loss_weight = 1
use_viewdirs = False
approx_nonrigid_viewdirs = True

lrate_decay = 250000
N_iters = 200000
i_video = 200000
i_testset = 200000

N_samples = 64
N_importance = 64
N_rand = 1024
chunk = 32768
netchunk = 65536

train_block_size = 0
test_block_size = 0

precrop_iters = 0
precrop_frac = 0.5

raw_noise_std = 1e0
bd_factor = minmax

After training, I get these results:

[rendered result omitted]

I use factor = 4 to reduce the training time. I think the background looks nice, but the waving hands are blurry. I read your paper, and it seems we need to carefully tune the weights of the three losses? My question is: how do I tune those weights to get plausible results?

@edgar-tr
Contributor

Hi,

this looks like the scene has become fully static. The losses require some fiddling for each scene, which just means that you train it a number of times with different weights and tweak them until you hit the best performance. I'm working on a slightly improved version that should be a bit easier to train, but I'll probably only put that up in a couple of weeks. For now, you can try lowering all three weights by, say, a factor of 10. The background should eventually become unstable (you'll know what I mean once it happens) and the hand should get better. It could also be that the motion of the hand is too fast, especially since the input images seem to have some motion blur.

If that doesn't work, try turning all three of them off; the reconstruction should be okay, but novel view synthesis will be bad. Additionally, the hand might be too thin, so you might need to increase N_samples and N_importance to, say, 256 and 128, respectively. Note that this will slow down training quite a bit, maybe by a factor of four. If all of that fails, the finger motion in the input might be too detailed for the method. But first, just try relaxing the regularizers.
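For concreteness, starting from the config above, a factor-of-10 reduction of the regularizer weights combined with the larger sample counts would look roughly like this (just a starting point, not values tuned for this scene):

offsets_loss_weight = 0.02
divergence_loss_weight = 0.3
rigidity_loss_weight = 0.1

N_samples = 256
N_importance = 128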

Hope that helps,
Edgar

@phongnhhn92
Author

Hi,
Thanks for your comment! I am training a new model based on your suggestions (more sample points, smaller weights on the three non-rigid losses). However, I don't understand the intuition behind lessening the three regularization loss terms.
Aren't they supposed to help with the non-rigid parts? If I decrease those terms, doesn't your method become rigid NeRF? In that case, the hand would be expected to be less accurate for view synthesis. Am I wrong?

@edgar-tr
Contributor

The three regularization losses help constrain the problem, especially for novel view synthesis, where the data term alone doesn't constrain the learned model enough. Very roughly speaking, the ray bending is free to do whatever it wants, which leads to undesirable, unstable novel view synthesis results if it isn't regularized. The regularization losses push the model to be more rigid; in the limit where their weights go to infinity, our method becomes the same as rigid NeRF. You can see their effect if you turn all three of them off and look at the novel view results.
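To make the structure concrete, here is a minimal, hypothetical sketch of how the weights from the config scale their respective terms on top of the photometric data term. This is not the repository's actual implementation, and the function and tensor names are placeholders; the precise regularizer definitions are in the paper. The sketch only illustrates why larger weights push the solution toward a rigid NeRF.

import torch

# Illustrative sketch only, not NR-NeRF's actual code: the exact regularizer
# definitions live in the paper/repository. This just shows how the config
# weights scale the three terms relative to the photometric data term.
def total_loss(rgb_pred, rgb_gt, offsets, divergence, rigidity,
               offsets_loss_weight=0.2,
               divergence_loss_weight=3.0,
               rigidity_loss_weight=1.0):
    # Data term: reconstruction error on the rendered rays.
    data_term = torch.mean((rgb_pred - rgb_gt) ** 2)
    # Offsets term: penalizes large bending offsets, i.e. prefers small deformations.
    offsets_term = torch.mean(torch.norm(offsets, dim=-1))
    # Divergence term: penalizes deformations that are not volume-preserving.
    divergence_term = torch.mean(torch.abs(divergence))
    # Rigidity term: regularizes the per-point rigidity scores
    # (see the paper for the exact formulation).
    rigidity_term = torch.mean(rigidity)
    # Larger regularizer weights push the model toward rigid NeRF;
    # with zero weights, the ray bending is left unconstrained.
    return (data_term
            + offsets_loss_weight * offsets_term
            + divergence_loss_weight * divergence_term
            + rigidity_loss_weight * rigidity_term)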

@Time-Lord12th

Could you please tell me how to get the poses using COLMAP? Thanks a lot.

@edgar-tr edgar-tr mentioned this issue Aug 9, 2021
@edgar-tr
Contributor

edgar-tr commented Aug 9, 2021

Could you please tell me how to get the poses using COLMAP? Thanks a lot.

Hi, which part are you stuck on? There's an explanation of how to integrate COLMAP into NR-NeRF here: https://github.com/facebookresearch/nonrigid_nerf#installation and an explanation of how to actually run COLMAP here: https://github.com/facebookresearch/nonrigid_nerf#preprocess

@edgar-tr
Contributor

edgar-tr commented Aug 9, 2021

Also, can you please open a new issue for this? I tried transferring your comment into a new issue but couldn't figure out how.
