How to prepare the data for trainN.py? #22
Comments
1: Yes, we used images from the left camera and the right camera.
I see, thank you for the answers! @JiaxiongQ
@JiaxiongQ Question 2: If yes, is the sparse ground truth only for the left camera?
Yes, because we only generated surface normals from the depth of the left camera.
Thank you!
Hi @JiaxiongQ ,
I probably need to change `sparse.shape` ... EDIT: When changing the shape with ...
Sorry, I don't know why this would happen, but you can use `torch.permute()` to change the dimension order and make all the inputs have shape (B, C, 256, 512).
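A minimal sketch of that kind of permute (the tensor names and the (B, H, W, C) starting layout are assumptions for illustration only):

```python
import torch

# Hypothetical RGB batch loaded channels-last, e.g. straight from numpy/PIL:
rgb = torch.rand(4, 256, 512, 3)            # (B, H, W, C)
rgb = rgb.permute(0, 3, 1, 2).contiguous()  # -> (B, C, H, W) == (4, 3, 256, 512)

# A single-channel sparse depth map stored as (B, H, W) gets its channel
# dimension back with unsqueeze instead of permute:
sparse = torch.rand(4, 256, 512)            # (B, H, W)
sparse = sparse.unsqueeze(1)                # -> (B, 1, 256, 512)

print(rgb.shape, sparse.shape)
```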
Hi @JiaxiongQ ,
I didn't change anything else. After using ...
Full code of that function: ...
Now changed to ... and new error: ...
This code is mainly for KITTI; you should modify it and just ensure that the file names can be matched.
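A minimal sketch of that kind of file-name matching (the folder layout here is only assumed from the discussion in this thread, not taken from the repo, so adjust the patterns to your own structure):

```python
import os
from glob import glob

# Pair RGB, sparse depth and generated normal ground truth per drive by
# matching basenames; only frames present in all three folders are kept.
def matched_samples(depth_root='data_depth_velodyne/train', normal_root='gt/out/train'):
    samples = []
    for drive in sorted(glob(os.path.join(depth_root, '*_sync'))):
        name   = os.path.basename(drive)
        rgb    = {os.path.basename(p): p for p in glob(os.path.join(drive, 'image_02/data/*.png'))}
        sparse = {os.path.basename(p): p for p in glob(os.path.join(drive, 'proj_depth/velodyne_raw/image_02/*.png'))}
        normal = {os.path.basename(p): p for p in glob(os.path.join(normal_root, name, 'image_02/*.png'))}
        for fname in sorted(set(rgb) & set(sparse) & set(normal)):
            samples.append((rgb[fname], sparse[fname], normal[fname]))
    return samples

print(len(matched_samples()), 'matched training samples')
```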
Hi @JiaxiongQ ,
Regarding Q2, the raw KITTI data overview page provides a download script that fetches all of the raw data at once.
I met the same problem with a dimension mismatch; I had to change a lot to fit it to the synthetic dataset. You could debug the program step by step and change the dimension order to fix it. The recommended dimension order in PyTorch is (B, C, H, W). Hope this helps.
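A small helper along those lines (purely illustrative, not from the repo) for checking the (B, C, H, W) order step by step:

```python
import torch

# Fail loudly if a tensor is not in (B, C, H, W) order with the expected
# spatial size, so the mismatch is caught before it reaches the network.
def check(name, t, channels=None, h=256, w=512):
    assert t.dim() == 4, f'{name}: expected 4 dims (B, C, H, W), got {tuple(t.shape)}'
    b, c, hh, ww = t.shape
    if channels is not None:
        assert c == channels, f'{name}: expected {channels} channels, got {c}'
    assert (hh, ww) == (h, w), f'{name}: expected spatial size ({h}, {w}), got ({hh}, {ww})'
    print(f'{name}: {tuple(t.shape)} OK')

check('rgb',    torch.rand(2, 3, 256, 512), channels=3)
check('sparse', torch.rand(2, 1, 256, 512), channels=1)
```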
Q1: Yes, we use images from the left camera and the right camera;
Q2: We didn't find such a link, we just downloaded the dataset one by one;
Q3: It is flexible how you organize the files, you just need to make sure that all the images correspond;
Q4: No, you just need to use the one DCU to train surface normals. The synthetic data is used to improve the quality of the surface normals; the download link is in README.md;
Q5: The whole training process takes 15 epochs on 3 GPUs (1080 Ti).
About Q2, you can find it in the tool sets on the KITTI homepage; someone provides a script to download them all.
Hi, thanks for this amazing repo. @JiaxiongQ
I'm trying to get `trainN.py` and `nomalLoader.py` to work in order to train the first NN. This is what I understood so far that I need in order to train:

1. Download `data_depth_velodyne`, which is the sparse Lidar dataset.
2. Download `data_depth_annotated`, which is the ground-truth (dense) Lidar dataset.
3. Use the second repo in order to generate the ground-truth normals from the ground-truth dense Lidar dataset.
4. Download ALL the RGB KITTI images from all the categories (City | Residential | Road | Campus | Person | Calibration). Is there a link to download all of them at once instead of downloading one by one?

Question 1: Do I need to extract all the RGB images one by one into `data_depth_velodyne/train/..*sync/`, i.e. add `image_02` and `image_03` folders to each of the sync folders? (This is implied by your code.)

Question 2: Is there a way to download all the RGB images in one shot instead of clicking one by one and extracting them one by one into all the folders?

In `nomalLoader.py` the function `dataloader(filepath)` returns 3 variables, `left_train, normalS_train, normal_gts`, which are:

a. `left_train` - the RGB KITTI image folders `data_depth_velodyne/train/..*sync/image_02 & 03/data`.
b. `normalS_train` - the sparse Lidar folders `data_depth_velodyne/train/..*sync/proj_depth/velodyne_raw/image_02 & 03/`.
c. `normal_gts` - the folder which has all the normals I generated from the dense GT: `data_depth_annotated/*_sync/proj_depth/groundtruth/image_02 & image_03` -> `gt/out/train/*_sync/image_02 & image_03`, or should it all be in `gt/out/train/*_sync/`? Because in the code there isn't anything about concatenating `image_02 & image_03`.

Question 3: Please look at c.; I asked there about the ground-truth normals.

Question 4: When and where is the synthetic data used? Do we use it also in `trainN.py`? Do we use it in all 3 NNs?

Question 5: How many epochs are recommended to train for?
Other than that, thank you. It took me so many hours just to get to the point where I understand how to get the data ready (and I'm still trying). I'll definitely add a guide on how to prepare the data for training after this post, so others can save many hours understanding the process.
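For reference, a minimal sketch of where one frame's files would live under the layout described above (the drive and frame names here are illustrative examples only, not values taken from the repo):

```python
# Hypothetical example frame: one KITTI drive and one frame index, shown only
# to illustrate the folder layout described in a., b. and c. above.
drive, frame = '2011_09_26_drive_0001_sync', '0000000005.png'

expected_paths = {
    'rgb_left':    f'data_depth_velodyne/train/{drive}/image_02/data/{frame}',
    'rgb_right':   f'data_depth_velodyne/train/{drive}/image_03/data/{frame}',
    'sparse_left': f'data_depth_velodyne/train/{drive}/proj_depth/velodyne_raw/image_02/{frame}',
    'normal_gt':   f'gt/out/train/{drive}/image_02/{frame}',
}

for kind, path in expected_paths.items():
    print(f'{kind:12s} {path}')
```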