Final convolution layer 5_3 and not 5_4? #7

sdeck51 opened this issue Feb 24, 2017 · 2 comments

sdeck51 commented Feb 24, 2017

Hello!

I've been reading the paper as well as your code to get a good grasp of how to do image segmentation with ConvNets. I was wondering why, on line 86 of FCN.py, you set the final conv layer to 5_3 instead of 5_4? Also, to clarify for FCN in general: we're changing the fully connected layers into convolution layers as well, which is what you're doing from lines 87 to 108, and then afterwards we work backwards, "deconvolving" the image, which it appears you do three times?

Thanks.

@shekkizh (Owner)

Hi,
I don't really have a justification for the choice of conv5_3 over conv5_4. I would assume either choice would work fine, but keep in mind that with another conv layer you use additional memory.

And yes, in FCNs we cleverly replace the fc layers with conv layers that use 1x1 kernels.

Quoting Yann LeCun: "In Convolutional Nets, there is no such thing as 'fully-connected layers'. There are only convolution layers with 1x1 convolution kernels and a full connection table."
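
To make the idea concrete, here is a minimal sketch of a fully connected layer rewritten as a 1x1 convolution. It uses the TF 2.x eager API and made-up shapes for illustration, not the actual code in FCN.py:

```python
import tensorflow as tf

# A 1x1 convolution applies the same weight matrix to the channel vector at every
# spatial location, which is exactly what a fully connected layer does to one
# flattened vector, except that the spatial grid is preserved.
batch, h, w, in_ch, num_classes = 1, 7, 7, 4096, 21

features = tf.random.normal([batch, h, w, in_ch])        # e.g. "fc7"-style activations
weights = tf.random.normal([1, 1, in_ch, num_classes])   # 1x1 kernel plays the role of the dense weight matrix
bias = tf.zeros([num_classes])

# Per-location class scores; the output keeps the h x w grid instead of a single vector.
scores = tf.nn.conv2d(features, weights, strides=[1, 1, 1, 1], padding="SAME") + bias
print(scores.shape)  # (1, 7, 7, 21)
```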

The deconv/conv-transpose operations are there to up-sample the predictions back to the input size.
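
A rough sketch of one up-sampling step, again with illustrative shapes and the TF 2.x API rather than the repo's actual layers:

```python
import tensorflow as tf

# A stride-2 transpose convolution roughly doubles the spatial resolution of the
# coarse score map; stacking such layers brings it back to the input resolution.
coarse = tf.random.normal([1, 7, 7, 21])    # coarse per-class scores
kernel = tf.random.normal([4, 4, 21, 21])   # [height, width, out_channels, in_channels]

upsampled = tf.nn.conv2d_transpose(
    coarse,
    kernel,
    output_shape=[1, 14, 14, 21],   # target size after one 2x up-sampling step
    strides=[1, 2, 2, 1],
    padding="SAME",
)
print(upsampled.shape)  # (1, 14, 14, 21)
```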

@varungupta31

@sdeck51 #94 (comment)
