why does the implementation not use data normalization / zero-center? #12

Open
pzz2011 opened this issue Dec 11, 2017 · 4 comments

pzz2011 commented Dec 11, 2017

No description provided.

pzz2011 changed the title from “why does the implementation use data normalization / zero-center?” to “why does the implementation not use data normalization / zero-center?” on Dec 11, 2017
twtygqyy (Owner) commented

pzz2011 commented Dec 23, 2017

@twtygqyy I didn't find any info about 0-1 normalization in the code... :-)


pzz2011 commented Dec 23, 2017

@twtygqyy Another question here: the generated .h5 file for the 291 PNGs (13 MB) is about 14 GB.

If I want to use 800 PNGs (1000x800) to generate the .h5 file, it causes an OOM. In fact, even when I use 300 PNGs (1000x800), generate_train.m requires > 128 GB of memory, causing an OOM again, and it writes a > 50 GB .h5 file to my disk.

Any advice? Thanks.

twtygqyy (Owner) commented

@pzz2011 `image = im2double(image(:, :, 1));` will do the trick: `im2double` converts a uint8 image to double and rescales the pixel values to [0, 1]. Regarding the OOM issue, it is better to split the training set into multiple .h5 files and modify the dataloader to load them one by one.
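
A minimal sketch of what that dataloader change might look like, assuming a PyTorch `Dataset` backed by `h5py` and shards that hold `data`/`label` arrays of shape (N, C, H, W) — the shard naming pattern and dataset keys here are assumptions for illustration, not taken from this repo:

```python
import glob

import h5py
import torch
import torch.utils.data as data


class MultiHdf5Dataset(data.Dataset):
    """Hypothetical dataset spanning several .h5 shards instead of one
    huge file. Each shard is assumed to hold 'data' and 'label' arrays
    of shape (N, C, H, W)."""

    def __init__(self, pattern):
        # e.g. pattern = "train_*.h5" after splitting the training set
        self.paths = sorted(glob.glob(pattern))
        # Record cumulative patch counts so a global index can be mapped
        # to (shard, local offset) without holding every shard in memory.
        self.offsets = [0]
        for path in self.paths:
            with h5py.File(path, "r") as hf:
                self.offsets.append(self.offsets[-1] + hf["data"].shape[0])

    def __len__(self):
        return self.offsets[-1]

    def __getitem__(self, index):
        # Locate the shard that owns this global index.
        shard = 0
        while index >= self.offsets[shard + 1]:
            shard += 1
        local = index - self.offsets[shard]
        # h5py reads only the requested slice from disk, so a single
        # patch is resident in memory at a time.
        with h5py.File(self.paths[shard], "r") as hf:
            x = torch.from_numpy(hf["data"][local]).float()
            y = torch.from_numpy(hf["label"][local]).float()
        return x, y


# Usage: loader = data.DataLoader(MultiHdf5Dataset("train_*.h5"),
#                                 batch_size=128, shuffle=True)
```

Opening the shard per item keeps memory flat at the cost of repeated file opens; caching one open handle per worker is a common refinement.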
