
Issue with testing model on custom data #1

Open
pavan4 opened this issue Aug 13, 2018 · 16 comments

pavan4 commented Aug 13, 2018

Hello,

I have been playing around with your code and was successfully able to test the model on the provided test data.

However, I would like to test it on custom RGB-D data that follows the same structure as ScanNet.

How do I create the .sdf.ann and .image files for a scene? The test set has them pre-generated, but I could not find the method used to generate those files anywhere.

I did try renaming my RGB and depth images to match one of the scenes (scene0779_00): 0.jpg, 20.jpg, ... for RGB; 0.png, 20.png, ... for depth; and 0.txt, 20.txt, ... for poses. Doing so resulted in predictions for the toilet, sink, etc. from the original scene rather than my custom scene, because, I believe, the 3D data is still being loaded from the .image file.


zzxmllq commented Sep 4, 2018

@pavan4 Hi, how did you get test.py working? I find there are no 2D .jpg/.png images, poses, and so on. Should I run prepare_2d_data.py and apply for the ScanNet v2 data? Thanks for your help!

pavan4 changed the title from "Issue with testing it on custom data" to "Issue with testing model on custom data" on Sep 11, 2018

pavan4 commented Sep 11, 2018

@zzxmllq You have to fetch the data from ScanNet and use the scripts in the prepare_data folder. You have to download the entire ScanNet dataset, or do some reverse engineering to fetch only one scene if you want to test a single scene. I tested it with scene0779_00 from the ScanNet v2 dataset.


zzxmllq commented Sep 18, 2018

@pavan4 Thanks! I get it.

@edith-langer

@pavan4 I am also interested in using my own data. Have you ever found out how to use 3DMV to do that?


pavan4 commented Nov 14, 2018

@edith-langer tl;dr I haven't been able to do it. The way it is set up does not seem straightforward.

Longer version:

  1. You can use the provided test data to test on ScanNet; follow the steps I commented above to replicate the results of the paper.

  2. If you just want to test the provided trained model on custom data, you either have to figure out the exact format used to generate the .sdf.ann and .image files, loaded at the line of code below, or alternatively figure out the outputs of load_scene and load_scene_image_info_multi, produce matching values yourself, and pass them down to the model.

    scene_file = os.path.join(opt.data_path_3d, scene_name + '.sdf.ann')

  3. If you want to train on custom data, you have to convert it to the file format of the ScanNet project and pass it through the scripts in
    prepare_data to generate the training data.
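For anyone attempting step 2, a tiny helper can at least confirm whether a scene directory has the expected per-scene inputs before running test.py. This is my own sketch: only the `.sdf.ann` path construction comes from the code line above, the `.image` suffix comes from this discussion, and the function name `missing_scene_inputs` is hypothetical.

```python
import os

def missing_scene_inputs(data_path_3d, scene_name):
    """Return the per-scene input files the test script expects
    but that are absent. Suffixes taken from this thread:
    '<scene>.sdf.ann' and '<scene>.image'."""
    expected = [scene_name + '.sdf.ann', scene_name + '.image']
    return [f for f in expected
            if not os.path.isfile(os.path.join(data_path_3d, f))]

# Lists whichever of the two files are absent from the given directory.
print(missing_scene_inputs('.', 'scene0779_00'))
```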

@edith-langer

Thank you @pavan4 for your answer!
Were you able to inspect the labeled reconstruction? The only file I get after applying 3DMV to one of the test scenes is sceneXXXX_00.bin in an output folder. How can I use that binary file?


pavan4 commented Nov 21, 2018

Yes, you can visualize the data from the generated .bin file using this:

import struct
import numpy as np

def read_array_from_file(filename):
    # Header: three unsigned 64-bit ints (the grid dimensions),
    # followed by one unsigned byte per voxel.
    with open(filename, 'rb') as f:
        sz0, sz1, sz2 = struct.unpack('QQQ', f.read(struct.calcsize('QQQ')))
        vals = np.frombuffer(f.read(), dtype=np.uint8)
    # np.resize pads or truncates if the byte count does not match exactly.
    return np.resize(vals, (sz0, sz1, sz2))

I can't remember where I got this structure, but it worked.
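For completeness, the inverse operation implied by that reader makes the binary layout explicit: three unsigned 64-bit dimensions, then the raw voxel bytes. This writer is my own sketch, not from the 3DMV code; the name `write_array_to_file` is hypothetical.

```python
import struct
import numpy as np

def write_array_to_file(arr, filename):
    """Write a 3D uint8 volume in the layout the reader above expects:
    three 'Q' (uint64) dimensions, then one byte per voxel."""
    arr = np.ascontiguousarray(arr, dtype=np.uint8)
    with open(filename, 'wb') as f:
        f.write(struct.pack('QQQ', *arr.shape))
        f.write(arr.tobytes())
```

A round trip through read_array_from_file should return the same grid.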

You can use a 3D plot to plot the volume, but you should be aware that this is a huge volume and you pretty much can't see anything.

I used marching cubes to simplify and visualize it like so (there might be more efficient methods; this is just one):

import numpy as np
import visvis as vv
from skimage.measure import marching_cubes_lewiner

masks = read_array_from_file('./output/scene0779_00_masks.bin')
verts, faces, normals, values = marching_cubes_lewiner(masks)
vv.mesh(np.fliplr(verts), faces, normals, values)
vv.use().Run()
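If visvis is not available, one alternative (my own sketch, not from this thread) is to dump the marching-cubes output to a Wavefront .obj file, which MeshLab opens directly; `write_obj` is a hypothetical helper name.

```python
def write_obj(path, verts, faces):
    """Write vertices and triangular faces to a minimal Wavefront .obj.
    OBJ face indices are 1-based, hence the +1."""
    with open(path, 'w') as f:
        for v in verts:
            f.write('v {} {} {}\n'.format(v[0], v[1], v[2]))
        for tri in faces:
            f.write('f {} {} {}\n'.format(tri[0] + 1, tri[1] + 1, tri[2] + 1))

# e.g. write_obj('scene0779_00.obj', verts, faces) with the
# verts/faces returned by marching cubes above.
```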

@BonJovi1

Hi @pavan4
Thanks a lot for sharing the code to visualise the generated .bin files. I just had a couple of quick questions, I'd be grateful if you could answer them.

  1. I was able to successfully test the pre-trained model on scene0708_00. When I visualise it using marching cubes, I get something like this:
    [screenshot of the marching-cubes visualisation]

Is this what you mean by "you should be aware that this is huge volume and pretty much can't see anything"? Is there any way I could zoom into it, like we do in MeshLab? I wasn't able to find a way to visualise the .bin file in MeshLab. Any suggestions for that?

  2. For training, we require the .h5 files present in 3dmv_scannet_v2_train. Unfortunately, those .h5 files cover every scene in the dataset, so in order to train the network I'd need to download all the scenes, and I don't have 1 TB of space. Is there any simpler way to train the network, maybe on a smaller subset of ScanNet or another dataset?

Thanks a lot, and apologies for the newbie questions,
Abhinav

@BonJovi1

Hi @edith-langer @pavan4
Sorry for the spam. I found a script that converts the .bin file into an .obj file! link to script
I ran it and visualised the .obj file, and got something like this.

But it isn't showing any semantic segmentation, just the voxels in black and white. Could you please take a look at the last if __name__ == '__main__' part in the script? Do I need some sort of semantic voxel labelling or other crucial information?
[screenshot of the voxel mesh without semantic colors]

Thank you so much,
Abhinav

@saurabheights

@BonJovi1 Hi, this is Saurabh; you contacted me about this issue over email. Please add information here on how you got it resolved, as we discussed. It will help future users.

@Maple-3d

Hi @pavan4!
Have you continued to study this paper? I have the same problem as you: I cannot generate my own .ann/.image data. Have you solved this problem since?

@saurabheights

@Maple-3d and for any future users.

This is my discussion with Pavan over email. I am copy-pasting it (with minor edits) because it has been too long since I worked on this, and though I am free now, I would prefer to spend my time on the things I currently work on. Please note that the opinions here about not using ScanNet are from my personal experience. If you are looking at this dataset for research purposes and this is the first 3D dataset you are using, it is the wrong choice. Start small and move to ScanNet only when you need to produce SOTA results for research publications/companies.

Email conversation below. Not sure if it will be as helpful to you. Expect to spend a few days understanding every file in this dataset. Some links here might be helpful.

pavan4 - The semantic segmentation isn't there; the voxels are all grey. How exactly do I get the semantic voxel labelling for this? Do I need some extra information on the semantics of the scene, and if so, would you happen to know where to get it? In your code, you've mentioned 'semantic_AllKnownVoxels.bin', but I wasn't able to find any file like that...

me - This semantic_AllKnownVoxels.bin is something I produced for another task; ignore it.

https://github.com/ScanNet/ScanNet#data-organization

Download ScanNet data for one scene, i.e. one room only, but download everything in that one scene.

See the data-organization link above.
_vh_clean_2.labels.ply <-- this should be the labels for training (if I remember correctly); open it in MeshLab (Ubuntu; probably available for Windows too) to visualize.
There are .bin files generated by Prof. Dr. Angela Dai from these .ply files to train 3DMV, available for download in the 3DMV repo's README.md, which you can visualize using VolumetricGridVisualizer.py.

For testing:
VolumetricGridVisualizer.py
https://github.com/saurabheights/3DMV/blob/colab-branch/3dmv/VolumetricGridVisualizer.py#L211
Print voxel[x,y,z]; are the values non-zero? If so, they should get colored.

https://github.com/saurabheights/3DMV/blob/colab-branch/3dmv/VolumetricGridVisualizer.py#L214
Make sure is_semantic_else_scan is true.
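The non-zero check suggested above can be scripted (a sketch of mine; the volume is assumed to be the array returned by the .bin reader shared earlier in this thread, and `count_labeled` is a hypothetical name):

```python
import numpy as np

def count_labeled(voxels):
    """Number of non-zero (i.e. labeled) voxels; if this is 0,
    the visualizer has nothing to color."""
    return int(np.count_nonzero(voxels))

# Dummy grid: one labeled voxel out of 64.
grid = np.zeros((4, 4, 4), dtype=np.uint8)
grid[1, 2, 3] = 5
print(count_labeled(grid))  # 1
```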

Sorry if the info is not clear or not enough; it has been over a year since I even looked into this, and it's quite late here for me to dive further.

[P.S. - I would very strongly recommend that you ignore this dataset. If you are doing research, working with a 1.3 TB dataset is not the right way; the idea matters, not the dataset. This was one of my biggest mistakes during my master's, so take caution. Training takes multiple days, and downloading the dataset is another huge headache (it takes days; get the URLs from download_scannet and use a multi-file downloader). Google Colab won't be able to help you at all. Furthermore, please don't use my repo; it will make you lose more time and cause more havoc, since the code is heavily changed for my specific work.]

pavan - Thank you so much for your reply!
Yes, I had checked the value of 'is_semantic_else_scan' and it was true! That's when I realised that there was nothing wrong with your code, but something wrong with my visualisation. So I converted the .obj file to a .ply file and it worked!

And yes, I could also have printed voxel[x,y,z] and checked that it was non-zero; thanks for that idea too!

For training, they have provided all the train scenes in .h5 format, and each .h5 file corresponds to EVERY 2D scene in the dataset. So in order to train the network, we'd need all the data, which as you said is 1.3 TB, and we don't have that much space on our college servers. I think I'll just take your advice and ignore this dataset. You're right: the idea matters, not the dataset!

I just had one last query:
I selected one scene and downloaded everything for it. But unfortunately, we can only do that for the train scenes. For test scenes, we only get 4 files, and they don't include the ground truth:
/scene0707_00
- scene0707_00.sens
- scene0707_00.txt
- scene0707_00_vh_clean.ply
- scene0707_00_vh_clean_2.ply
Do you know of any ways we could get the ground truth for test scenes? How could I evaluate the predictions that we get from the network?

Once again, I'm much obliged to you for the detailed reply; thanks a lot. And it's amazing that you remember this much even after a year; I forget my own code in just two months!

me - My bad, it's the .h5 files. Note that these .h5 files are not the 1.3 TB; it's the .sens files etc. that take most of the space. The .h5 files, if I am correct, are available in the 3DMV repository by Prof. Dr. Angela Dai. Regarding "each .h5 file corresponds to EVERY 2D scene in the dataset": no, they do not. The .h5 files contain only good 2D scenes with enough voxels in the 30x30x60 voxel space, with enough of them labeled (I don't remember this properly; check the research paper for details on how they filtered the ScanNet dataset). Note: this is important if you want to generate your own training data. Basically, from the 3D data you have, you filter out regions with too many incomplete voxels, which can happen when the user does not cover the whole region properly during scanning. Next, you should not use regions with no objects other than the floor; there is not much information there for the network to learn from.

Next, if you don't need 2D images, you should just download the .h5 files, which are much smaller, and skip the raw ScanNet data. Note: you will still have to download ScanNet for the test set (only the files necessary for testing, whether that is .sens or .ply, I am not sure / do not remember), but that will be much smaller, around 100-200 GB.

"Do you know of any ways we could get the ground truth for test scenes? How could I evaluate the predictions that we get from the network?" <- OK, this is bad. When you train a model, you train on the training set and validate on the validation set. The validation set basically tells you at what point your model starts overfitting, and is used for finding the best hyperparameters.

Test set - Here you only get the output; there may be an evaluation server run by Angela Dai where you submit your results (Google for this). If your results are really good, you submit a paper and present at a conference, etc. This is why the ground truth of the test set is never made public: if you had access to it, you could just train on it (cheat the system) and get better results than others.

============== end =============

Finally, just a short note. 3DMV uses 30x30x60 volumes, which requires some preprocessing of your data. I would suggest you try ScanComplete by Prof. Dr. Angela Dai. It is amazing work, applicable to both scan completion and semantic segmentation, and the best part is you don't have to do much preprocessing to a specific size (30x30x60). I haven't used it, but I found the paper impressive and a good leap from 3DMV.


demul commented Sep 7, 2021

@edith-langer have you figured out how to create the '.sdf.ann' and '.image' files?
Thanks

@edith-langer

> @edith-langer have you figured out how to create the '.sdf.ann' and '.image' files?
> Thanks

Sorry, I cannot help you. I didn't use 3DMV in the end.


HaFred commented May 26, 2023

Hi @pavan4, did you come across the error below?

RuntimeError: Error(s) in loading state_dict for Sequential:
	Missing key(s) in state_dict: "0.0.weight", "0.0.bias", "2.weight", "2.bias", "2.running_mean", "2.running_var", "3.weight", "6.0.0.3.bias", "6.0.0.4.running_mean", "6.0.0.4.running_var", "6.0.0.7.bias", "6.0.0.7.running_mean", "6.0.0.7.running_var", "8.0.0.0.weight", "8.0.0.1.weight", "8.0.0.1.bias", "8.0.0.1.running_mean", "8.0.0.1.running_var", "8.0.0.2.weight", "8.0.0.3.weight", "8.0.0.3.bias", "8.0.0.4.weight", "8.0.0.4.bias", "8.0.0.4.running_mean", "8.0.0.4.running_var", "8.0.0.5.weight", "8.0.0.6.weight", "8.0.0.7.weight", "8.0.0.7.bias", "8.0.0.7.running_mean", "8.0.0.7.running_var", "8.2.weight", "9.0.0.0.weight", "9.0.0.1.weight", "9.0.0.1.bias", "9.0.0.1.running_mean", "9.0.0.1.running_var", "9.0.0.2.weight", "9.0.0.3.weight", "9.0.0.3.bias", "9.0.0.4.weight", "9.0.0.4.bias", "9.0.0.4.running_mean", "9.0.0.4.running_var", "9.0.0.5.weight", "9.0.0.6.weight", "9.0.0.7.weight", "9.0.0.7.bias", "9.0.0.7.running_mean", "9.0.0.7.running_var", "9.2.weight", "10.0.0.0.weight", "10.0.0.1.weight", "10.0.0.1.bias", "10.0.0.1.running_mean", "10.0.0.1.running_var", "10.0.0.2.weight", "10.0.0.3.weight", "10.0.0.3.bias", "10.0.0.4.weight", "10.0.0.4.bias", "10.0.0.4.running_mean", "10.0.0.4.running_var", "10.0.0.5.weight", "10.0.0.6.weight", "10.0.0.7.weight", "10.0.0.7.bias", "10.0.0.7.running_mean", "10.0.0.7.running_var", "10.2.weight", "11.0.0.0.weight", "11.0.0.1.weight", "11.0.0.1.bias", "11.0.0.1.running_mean", "11.0.0.1.running_var", "11.0.0.2.weight", "11.0.0.3.weight", "11.0.0.3.bias", "11.0.0.4.weight", "11.0.0.4.bias", "11.0.0.4.running_mean", "11.0.0.4.running_var", "11.0.0.5.weight", "11.0.0.6.weight", "11.0.0.7.weight", "11.0.0.7.bias", "11.0.0.7.running_mean", "11.0.0.7.running_var", "11.2.weight", "12.0.0.0.weight", "12.0.0.1.weight", "12.0.0.1.bias", "12.0.0.1.running_mean", "12.0.0.1.running_var", "12.0.0.2.weight", "12.0.0.3.weight", "12.0.0.4.weight", "12.0.0.4.bias", "12.0.0.5.weight", "12.0.0.5.bias", "12.0.0.5.running_mean", 
"12.0.0.5.running_var", "12.0.0.6.weight", "12.0.0.7.weight", "12.0.0.8.weight", "12.0.0.8.bias", "12.0.0.8.running_mean", "12.0.0.8.running_var", "12.2.weight", "13.0.0.0.weight", "13.0.0.1.weight", "13.0.0.1.bias", "13.0.0.1.running_mean", "13.0.0.1.running_var", "13.0.0.2.weight", "13.0.0.3.weight", "13.0.0.3.bias", "13.0.0.4.weight", "13.0.0.4.bias", "13.0.0.4.running_mean", "13.0.0.4.running_var", "13.0.0.5.weight", "13.0.0.6.weight", "13.0.0.7.weight", "13.0.0.7.bias", "13.0.0.7.running_mean", "13.0.0.7.running_var", "13.2.weight", "14.0.0.0.weight", "14.0.0.1.weight", "14.0.0.1.bias", "14.0.0.1.running_mean", "14.0.0.1.running_var", "14.0.0.2.weight", "14.0.0.3.weight", "14.0.0.3.bias", "14.0.0.4.weight", "14.0.0.4.bias", "14.0.0.4.running_mean", "14.0.0.4.running_var", "14.0.0.5.weight", "14.0.0.6.weight", "14.0.0.7.weight", "14.0.0.7.bias", "14.0.0.7.running_mean", "14.0.0.7.running_var", "14.2.weight", "15.0.0.0.weight", "15.0.0.1.weight", "15.0.0.1.bias", "15.0.0.1.running_mean", "15.0.0.1.running_var", "15.0.0.2.weight", "15.0.0.3.weight", "15.0.0.3.bias", "15.0.0.4.weight", "15.0.0.4.bias", "15.0.0.4.running_mean", "15.0.0.4.running_var", "15.0.0.5.weight", "15.0.0.6.weight", "15.0.0.7.weight", "15.0.0.7.bias", "15.0.0.7.running_mean", "15.0.0.7.running_var", "15.2.weight", "16.0.0.0.weight", "16.0.0.1.weight", "16.0.0.1.bias", "16.0.0.1.running_mean", "16.0.0.1.running_var", "16.0.0.2.weight", "16.0.0.3.weight", "16.0.0.4.weight", "16.0.0.4.bias", "16.0.0.5.weight", "16.0.0.5.bias", "16.0.0.5.running_mean", "16.0.0.5.running_var", "16.0.0.6.weight", "16.0.0.7.weight", "16.0.0.8.weight", "16.0.0.8.bias", "16.0.0.8.running_mean", "16.0.0.8.running_var", "16.2.weight", "17.0.0.0.weight", "17.0.0.1.weight", "17.0.0.1.bias", "17.0.0.1.running_mean", "17.0.0.1.running_var", "17.0.0.2.weight", "17.0.0.3.weight", "17.0.0.3.bias", "17.0.0.4.weight", "17.0.0.4.bias", "17.0.0.4.running_mean", "17.0.0.4.running_var", "17.0.0.5.weight", "17.0.0.6.weight", 
"17.0.0.7.weight", "17.0.0.7.bias", "17.0.0.7.running_mean", "17.0.0.7.running_var", "17.2.weight", "18.0.0.0.weight", "18.0.0.1.weight", "18.0.0.1.bias", "18.0.0.1.running_mean", "18.0.0.1.running_var", "18.0.0.2.weight", "18.0.0.3.weight", "18.0.0.3.bias", "18.0.0.4.weight", "18.0.0.4.bias", "18.0.0.4.running_mean", "18.0.0.4.running_var", "18.0.0.5.weight", "18.0.0.6.weight", "18.0.0.7.weight", "18.0.0.7.bias", "18.0.0.7.running_mean", "18.0.0.7.running_var", "18.2.weight", "19.0.0.0.weight", "19.0.0.1.weight", "19.0.0.1.bias", "19.0.0.1.running_mean", "19.0.0.1.running_var", "19.0.0.2.weight", "19.0.0.3.weight", "19.0.0.3.bias", "19.0.0.4.weight", "19.0.0.4.bias", "19.0.0.4.running_mean", "19.0.0.4.running_var", "19.0.0.5.weight", "19.0.0.6.weight", "19.0.0.7.weight", "19.0.0.7.bias", "19.0.0.7.running_mean", "19.0.0.7.running_var", "19.2.weight", "20.0.0.0.weight", "20.0.0.1.weight", "20.0.0.1.bias", "20.0.0.1.running_mean", "20.0.0.1.running_var", "20.0.0.2.weight", "20.0.0.3.weight", "20.0.0.4.weight", "20.0.0.4.bias", "20.0.0.5.weight", "20.0.0.5.bias", "20.0.0.5.running_mean", "20.0.0.5.running_var", "20.0.0.6.weight", "20.0.0.7.weight", "20.0.0.8.weight", "20.0.0.8.bias", "20.0.0.8.running_mean", "20.0.0.8.running_var", "20.2.weight", "21.0.0.0.weight", "21.0.0.1.weight", "21.0.0.1.bias", "21.0.0.1.running_mean", "21.0.0.1.running_var", "21.0.0.2.weight", "21.0.0.3.weight", "21.0.0.3.bias", "21.0.0.4.weight", "21.0.0.4.bias", "21.0.0.4.running_mean", "21.0.0.4.running_var", "21.0.0.5.weight", "21.0.0.6.weight", "21.0.0.7.weight", "21.0.0.7.bias", "21.0.0.7.running_mean", "21.0.0.7.running_var", "21.2.weight", "22.0.0.0.weight", "22.0.0.1.weight", "22.0.0.1.bias", "22.0.0.1.running_mean", "22.0.0.1.running_var", "22.0.0.2.weight", "22.0.0.3.weight", "22.0.0.3.bias", "22.0.0.4.weight", "22.0.0.4.bias", "22.0.0.4.running_mean", "22.0.0.4.running_var", "22.0.0.5.weight", "22.0.0.6.weight", "22.0.0.7.weight", "22.0.0.7.bias", "22.0.0.7.running_mean", 
"22.0.0.7.running_var", "22.2.weight", "23.0.0.0.weight", "23.0.0.1.weight", "23.0.0.1.bias", "23.0.0.1.running_mean", "23.0.0.1.running_var", "23.0.0.2.weight", "23.0.0.3.weight", "23.0.0.3.bias", "23.0.0.4.weight", "23.0.0.4.bias", "23.0.0.4.running_mean", "23.0.0.4.running_var", "23.0.0.5.weight", "23.0.0.6.weight", "23.0.0.7.weight", "23.0.0.7.bias", "23.0.0.7.running_mean", "23.0.0.7.running_var", "23.2.weight", "24.0.0.0.weight", "24.0.0.1.weight", "24.0.0.1.bias", "24.0.0.1.running_mean", "24.0.0.1.running_var", "24.0.0.2.weight", "24.0.0.3.weight", "24.0.0.4.weight", "24.0.0.4.bias", "24.0.0.5.weight", "24.0.0.5.bias", "24.0.0.5.running_mean", "24.0.0.5.running_var", "24.0.0.6.weight", "24.0.0.7.weight", "24.0.0.8.weight", "24.0.0.8.bias", "24.0.0.8.running_mean", "24.0.0.8.running_var", "24.2.weight", "25.0.0.0.weight", "25.0.0.1.weight", "25.0.0.1.bias", "25.0.0.1.running_mean", "25.0.0.1.running_var", "25.0.0.2.weight", "25.0.0.3.weight", "25.0.0.3.bias", "25.0.0.4.weight", "25.0.0.4.bias", "25.0.0.4.running_mean", "25.0.0.4.running_var", "25.0.0.5.weight", "25.0.0.6.weight", "25.0.0.7.weight", "25.0.0.7.bias", "25.0.0.7.running_mean", "25.0.0.7.running_var", "25.2.weight", "26.0.weight". 
	Unexpected key(s) in state_dict: "0.2.weight", "0.0.0.0.weight", "0.0.0.1.weight", "0.0.0.1.bias", "0.0.0.1.running_mean", "0.0.0.1.running_var", "0.0.0.2.weight", "0.0.0.3.weight", "0.0.0.3.bias", "0.0.0.4.weight", "0.0.0.4.bias", "0.0.0.4.running_mean", "0.0.0.4.running_var", "0.0.0.5.weight", "0.0.0.6.weight", "0.0.0.7.weight", "0.0.0.7.bias", "0.0.0.7.running_mean", "0.0.0.7.running_var", "1.0.0.0.weight", "1.0.0.1.weight", "1.0.0.1.bias", "1.0.0.1.running_mean", "1.0.0.1.running_var", "1.0.0.2.weight", "1.0.0.3.weight", "1.0.0.3.bias", "1.0.0.4.weight", "1.0.0.4.bias", "1.0.0.4.running_mean", "1.0.0.4.running_var", "1.0.0.5.weight", "1.0.0.6.weight", "1.0.0.7.weight", "1.0.0.7.bias", "1.0.0.7.running_mean", "1.0.0.7.running_var", "1.2.weight", "2.0.0.0.weight", "2.0.0.1.weight", "2.0.0.1.bias", "2.0.0.1.running_mean", "2.0.0.1.running_var", "2.0.0.2.weight", "2.0.0.3.weight", "2.0.0.4.weight", "2.0.0.4.bias", "2.0.0.5.weight", "2.0.0.5.bias", "2.0.0.5.running_mean", "2.0.0.5.running_var", "2.0.0.6.weight", "2.0.0.7.weight", "2.0.0.8.weight", "2.0.0.8.bias", "2.0.0.8.running_mean", "2.0.0.8.running_var", "2.2.weight", "3.0.0.0.weight", "3.0.0.1.weight", "3.0.0.1.bias", "3.0.0.1.running_mean", "3.0.0.1.running_var", "3.0.0.2.weight", "3.0.0.3.weight", "3.0.0.3.bias", "3.0.0.4.weight", "3.0.0.4.bias", "3.0.0.4.running_mean", "3.0.0.4.running_var", "3.0.0.5.weight", "3.0.0.6.weight", "3.0.0.7.weight", "3.0.0.7.bias", "3.0.0.7.running_mean", "3.0.0.7.running_var", "3.2.weight", "6.0.0.5.bias", "6.0.0.5.running_mean", "6.0.0.5.running_var", "6.0.0.8.weight", "6.0.0.8.bias", "6.0.0.8.running_mean", "6.0.0.8.running_var". 
size mismatch for 4.0.0.0.weight: copying a param with shape torch.Size([32, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 16, 2, 2]).
	size mismatch for 4.0.0.1.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.1.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.1.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.2.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.3.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 16, 3, 3]).
	size mismatch for 4.0.0.3.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.4.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.4.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.4.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.4.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.5.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 4.0.0.6.weight: copying a param with shape torch.Size([128, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 16, 1, 1]).
	size mismatch for 4.0.0.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 4.0.0.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 4.0.0.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 4.0.0.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 4.2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 5.0.0.0.weight: copying a param with shape torch.Size([32, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 64, 1, 1]).
	size mismatch for 5.0.0.1.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.1.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.1.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.2.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.3.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 16, 3, 3]).
	size mismatch for 5.0.0.3.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.4.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.4.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.4.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.4.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.5.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 5.0.0.6.weight: copying a param with shape torch.Size([128, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 16, 1, 1]).
	size mismatch for 5.0.0.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 5.0.0.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 5.0.0.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 5.0.0.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 5.2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 6.0.0.0.weight: copying a param with shape torch.Size([32, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 64, 1, 1]).
	size mismatch for 6.0.0.1.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.1.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.1.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.2.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.3.weight: copying a param with shape torch.Size([32, 32, 1, 5]) from checkpoint, the shape in current model is torch.Size([16, 16, 3, 3]).
	size mismatch for 6.0.0.4.weight: copying a param with shape torch.Size([32, 32, 5, 1]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.4.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.5.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 6.0.0.6.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64, 16, 1, 1]).
	size mismatch for 6.0.0.7.weight: copying a param with shape torch.Size([128, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 6.2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 7.0.0.0.weight: copying a param with shape torch.Size([32, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 64, 1, 1]).
	size mismatch for 7.0.0.1.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.1.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.1.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.2.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.3.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 16, 3, 3]).
	size mismatch for 7.0.0.3.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.4.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.4.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.4.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.4.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.5.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for 7.0.0.6.weight: copying a param with shape torch.Size([128, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 16, 1, 1]).
	size mismatch for 7.0.0.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 7.0.0.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 7.0.0.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 7.0.0.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
	size mismatch for 7.2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).

I guess the pre-trained model from the readme is different from the one you were using. Could you kindly share your scannet5_model2d.pth? Thank you very much.
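The size-mismatch dump above suggests the checkpoint was trained with wider layers (32/128 channels) than the model being built (16/64 channels). A minimal, framework-free sketch of how one might diagnose such a mismatch before calling `load_state_dict`: the helper name `diff_state_dicts` is hypothetical, and with real PyTorch state dicts you would first convert each one to `{key: tuple(tensor.shape)}`.

```python
# Hypothetical helper: compare a checkpoint's keys/shapes against the current
# model's, reproducing the three categories PyTorch reports (missing keys,
# unexpected keys, size mismatches). Shapes are plain tuples so this sketch
# runs without PyTorch installed.

def diff_state_dicts(checkpoint_shapes, model_shapes):
    """Return (missing, unexpected, mismatched) between two {key: shape} dicts."""
    missing = sorted(k for k in model_shapes if k not in checkpoint_shapes)
    unexpected = sorted(k for k in checkpoint_shapes if k not in model_shapes)
    mismatched = sorted(
        (k, checkpoint_shapes[k], model_shapes[k])
        for k in checkpoint_shapes.keys() & model_shapes.keys()
        if checkpoint_shapes[k] != model_shapes[k]
    )
    return missing, unexpected, mismatched

# Toy example mirroring the error above: checkpoint width 32 vs model width 16.
ckpt = {"4.0.0.1.weight": (32,), "0.2.weight": (32,)}
model = {"4.0.0.1.weight": (16,), "0.0.weight": (16,)}
missing, unexpected, mismatched = diff_state_dicts(ckpt, model)
print(missing)      # ['0.0.weight']
print(unexpected)   # ['0.2.weight']
print(mismatched)   # [('4.0.0.1.weight', (32,), (16,))]
```

If every key matches but shapes differ uniformly (as here), the checkpoint almost certainly comes from a differently configured network rather than a corrupted file.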

@TyroneLi

TyroneLi commented Aug 9, 2023


Hi, check this repo's data folder; I used theirs and it worked:
https://github.com/daveredrum/Pointnet2.ScanNet
