Unable to use newly trained models #5

Open
kunalchelani opened this issue Jun 23, 2021 · 0 comments

Comments


I am trying to train the coarsenet to see if it can learn to produce images from very sparse point clouds. Here is my series of steps:

  1. I load the pretrained coarsenet model ('wts/depth_sift_rgb/coarsenet.model.npz') and a suitable annotations file (containing very few examples).
  2. Providing a low percentage of points to be used (between 5 and 10), I let it run for some iterations (around 10000) without changing any other default hyper-parameters.
  3. To check the results, I run demo_5k.py after changing the coarsenet model to the one obtained above. First, I run into the following error, which I think occurs because the saved weights do not contain the batch-norm means and variances (e.g. 'ec0_mn', 'ec0_vr') for each layer, as provided in the pre-trained weights. If anyone has successfully used modified weights directly, that would be really helpful.
    [screenshot omitted: error_invsfm]
    I try to circumvent the above error by explicitly loading the coarsenet model with batchnorm set to 'set' and then doing the following:
C.load(sess, cnet_input)
for k in C.bn_outs:
    C.weights[k] = C.bn_outs[k]

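If the root cause is that the fine-tuned checkpoint is missing the batch-norm statistics keys, another option may be to merge them into the weight dict at save time rather than patching them in at load time. This is only a sketch under that assumption; `save_with_bn_stats` and the toy shapes are hypothetical, with only the key names ('ec0_mn', 'ec0_vr') taken from the error above:

```python
import os
import tempfile

import numpy as np

# Hypothetical sketch (not the repo's actual API): merge the batch-norm
# moving statistics into the weight dict before saving, so that keys such
# as 'ec0_mn' (moving mean) and 'ec0_vr' (moving variance) exist when the
# demo script later loads the .npz checkpoint.
def save_with_bn_stats(weights, bn_stats, path):
    merged = dict(weights)   # keep the trainable weights
    merged.update(bn_stats)  # add 'ec0_mn', 'ec0_vr', ... alongside them
    np.savez(path, **merged)
    return merged

# Toy stand-ins for the real tensors; shapes are illustrative only.
weights = {'ec0_w': np.zeros((3, 3, 8, 8), dtype=np.float32)}
bn_stats = {'ec0_mn': np.zeros(8, dtype=np.float32),
            'ec0_vr': np.ones(8, dtype=np.float32)}

path = os.path.join(tempfile.gettempdir(), 'coarsenet_finetuned.npz')
merged = save_with_bn_stats(weights, bn_stats, path)
loaded = np.load(path)
print(sorted(loaded.files))  # ['ec0_mn', 'ec0_vr', 'ec0_w']
```

The point is just that np.load on the resulting .npz then exposes the batch-norm keys alongside the layer weights, so the demo's loader would not hit a missing-key error.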
Running with these weights gives bogus results (uniformly grey images, as shown below). I wonder how the weights could have shifted so much in just a few iterations, starting from pre-trained weights that give great results.
[output image omitted: depth_sift_rgb_test_100 0, a uniformly grey result]

I tried with the example annotation file as well, and without the sparsity constraints, but it led to similar results. So basically, if I start from the pretrained weights, the model seems to degrade very quickly even when using the same training data. I think I might be doing something wrong, as these results are strange. Has anyone been successful in training on additional data starting from the pre-trained weights?
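One way to quantify "how much the weights have shifted" after the few thousand iterations is a per-key relative change between the two checkpoints. This is a hypothetical diagnostic, not part of the repo; the key name and toy values are illustrative:

```python
import numpy as np

# Hypothetical diagnostic: relative L2 drift of each shared weight key
# between the pre-trained and fine-tuned checkpoints. Drift near 0 means
# the layer barely moved; values much larger than 1 suggest fine-tuning
# has destroyed the pre-trained solution.
def weight_drift(pretrained, finetuned):
    drift = {}
    for k in pretrained:
        if k in finetuned:
            base = np.linalg.norm(pretrained[k]) + 1e-8  # avoid divide-by-zero
            drift[k] = float(np.linalg.norm(finetuned[k] - pretrained[k]) / base)
    return drift

# Toy example: a layer whose weights all grew by 10%.
pre = {'ec0_w': np.ones((3, 3))}
fin = {'ec0_w': 1.1 * np.ones((3, 3))}
print(round(weight_drift(pre, fin)['ec0_w'], 3))  # 0.1
```

Run over the two real .npz files (e.g. via np.load), this would show whether all layers moved uniformly (which could point to a too-large learning rate) or only a few did.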
