I've been reading the paper, along with your code, to get a good grasp of how to do image segmentation using ConvNets. I was wondering why, on line 86 of FCN.py, you take the features from conv5_3 instead of conv5_4? Also, to clarify how FCNs work in general: we're changing the fully connected layers into convolution layers as well, which is what you're doing on lines 87 to 108, and then afterwards you start working backwards, "deconvolving" the image, which it appears you do three times?
Thanks.
Hi,
I don't really have a justification for the choice of conv5_3 over conv5_4. I would assume either choice works fine, but keep in mind that with another conv layer you use additional memory.
And yes, in FCNs we replace the fc layers with conv layers that use 1x1 kernels.
Quoting Yann LeCun: "In Convolutional Nets, there is no such thing as 'fully-connected layers'. There are only convolution layers with 1x1 convolution kernels and a full connection table."
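To make that concrete, here is a minimal sketch of the fc-to-1x1-conv idea (not the exact code in FCN.py; the shapes and the class count are made up for illustration):

```python
import tensorflow as tf

# Features coming out of the last pooled conv block of a VGG-style net,
# e.g. a 7x7x512 map for a 224x224 input.
features = tf.random.normal([1, 7, 7, 512])

# "fc6" becomes a 7x7 conv that collapses the spatial dims, and every
# later "fc" layer becomes a 1x1 conv. With a larger input image, these
# layers simply produce a coarse spatial grid of scores instead of a
# single vector, which is what makes the net fully convolutional.
fc6_as_conv = tf.keras.layers.Conv2D(4096, kernel_size=7, activation='relu')
fc7_as_conv = tf.keras.layers.Conv2D(4096, kernel_size=1, activation='relu')
score_layer = tf.keras.layers.Conv2D(21, kernel_size=1)  # 21 classes, e.g. PASCAL VOC

x = fc6_as_conv(features)   # (1, 1, 1, 4096)
x = fc7_as_conv(x)          # (1, 1, 1, 4096)
scores = score_layer(x)     # (1, 1, 1, 21) per-location class scores
```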
The deconv/conv-transpose operations then up-sample the predictions back to the input size.
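Roughly, the upsampling side looks like the sketch below (again illustrative, not the actual layers in FCN.py): each stride-2 conv transpose doubles the resolution, the last one restores the input size, and in the FCN-8s scheme skip connections from pool4/pool3 are fused in between.

```python
import tensorflow as tf

# Coarse class-score map (input_size / 32 after VGG's five poolings).
coarse_scores = tf.random.normal([1, 7, 7, 21])

# Three up-sampling ("deconv") steps, mirroring the FCN-8s layout:
# two stride-2 transposed convs (combined with pool4/pool3 skips in the
# real network) followed by a stride-8 transposed conv back to full size.
up1 = tf.keras.layers.Conv2DTranspose(21, kernel_size=4, strides=2, padding='same')
up2 = tf.keras.layers.Conv2DTranspose(21, kernel_size=4, strides=2, padding='same')
up_final = tf.keras.layers.Conv2DTranspose(21, kernel_size=16, strides=8, padding='same')

x = up1(coarse_scores)   # (1, 14, 14, 21)
x = up2(x)               # (1, 28, 28, 21)
logits = up_final(x)     # (1, 224, 224, 21) per-pixel class scores
```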