
SpatialAveragePooling supports padding, ceil mode and more #365

Merged
1 commit merged into torch:master on Dec 14, 2015

Conversation

fmassa
Contributor

@fmassa fmassa commented Sep 6, 2015

Generalizes SpatialAveragePooling to support free zero padding and ceil mode.

When using padding or ceil mode, the effective number of elements in a region might be different from kW*kH. The :count_exclude_padding() method considers only the number of elements in the pooling region for the averaging, whereas :count_include_padding() always divides by kW*kH.
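As an illustrative sketch of the two counting modes described above (this is not the PR's actual code; the function and parameter names are made up here), the divisor for one pooling region could be computed like this:

```python
def region_divisor(h_start, w_start, kH, kW, H, W, padH, padW,
                   count_include_pad):
    """Divisor used when averaging one pooling region.

    (H, W) is the unpadded input size; a region may start at a negative
    offset because of zero padding.
    """
    if count_include_pad:
        return kH * kW  # always divide by the full kernel area
    # exclude-pad mode: count only elements inside the unpadded input
    h = min(h_start + kH, H) - max(h_start, 0)
    w = min(w_start + kW, W) - max(w_start, 0)
    return h * w
```

For a 3x3 kernel with padding 1 on a 5x5 input, the top-left corner region covers only 2x2 real elements, so exclude-pad mode divides by 4 while include-pad mode divides by 9.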

@szagoruyko has a cuda version of this PR.

@szagoruyko
Member

The name count_include_padding is not good for a function; can you rename it to something like setCountIncludePad?

@soumith
Member

soumith commented Sep 11, 2015

this looks pretty good! squash and ready to push?

@fmassa
Contributor Author

fmassa commented Sep 11, 2015

Just squashed the commits. Is everyone OK with the naming of the new methods?

@soumith
Member

soumith commented Sep 11, 2015

naming looks okay. will wait for sergey to make the cunn changes.

@hughperkins
Contributor

This PR is useful, e.g. it's a prerequisite for neural-style to work on OpenCL/clnn in certain configurations.

@hughperkins
Contributor

Hey! 2016 is nearly upon us ;-) Anything outstanding that needs to be addressed prior to merging this?

@fmassa
Contributor Author

fmassa commented Nov 11, 2015

I think we have addressed all the remaining problems with the CUDA version, but I haven't written all the unit tests for it. I'll finish the CUDA PR tomorrow.

@hughperkins
Contributor

Cool. Thank you Francisco :-)


By default, the output of each pooling region is divided by the fixed value
`kW*kH`. If using padding or `ceil` mode, the number of elements in a region
can be smaller than `kW*kW`. To switch between different division factors,
Member


kW*kW should be kW*kH

Contributor Author


Thanks for pointing out this typo; I'm fixing it.
Also, I should probably slightly reformulate this part of the documentation, because I no longer divide by the fixed value kW*kH in every case. Indeed, for some degenerate cases (where the end position of the pooling region falls outside the padded image), I only count the number of elements inside the padded image, which can be smaller than kW*kH.

@soumith
Member

soumith commented Dec 1, 2015

@fmassa whenever you have time, can you bring this to a conclusion? No real hurry, but I'd definitely like to see this and its CUDA version in.

@fmassa
Contributor Author

fmassa commented Dec 1, 2015

@soumith The only thing missing is that it gives different results from the cudnn implementation for the ExcludePad case (which currently seems to be buggy, though...). Did you have a chance to look into this cudnn issue? cc @szagoruyko
I'll give a third pass through both PRs today.

@hughperkins
Contributor

@fmassa: If I understand correctly, the issue is that the behavior doesn't quite match the cudnn behavior, and you don't want people to start using it, only for it to become hard to fix the behavior later?

So, can we do one or more of the following options?

  • document clearly that the reference implementation for this is the cudnn implementation, and that wherever a backwards-incompatible discrepancy is discovered in the future, the cudnn convention will be followed, and/or
  • add asserts for the corner cases we've found, so they won't be allowed to execute (but other cases will be allowed through), and/or
  • add an option 'allow_unstable_api = false' that removes such asserts from the execution path, so those cases do run (but only with an explicit decision by the caller to use something unstable)

(maybe just the first option, on its own, is sufficient?)
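The third option above could look roughly like this (a minimal sketch; the class, the allow_unstable_api flag, and the chosen corner-case condition are all hypothetical, invented here for illustration):

```python
class AvgPool:
    """Toy pooling wrapper with an opt-in flag for unstable corner cases."""

    def __init__(self, kW, kH, allow_unstable_api=False):
        self.kW, self.kH = kW, kH
        self.allow_unstable_api = allow_unstable_api

    def check_padding(self, padW, padH):
        # hypothetical corner case whose behavior might later change
        # to match the cudnn convention
        unstable = padW * 2 >= self.kW or padH * 2 >= self.kH
        if unstable and not self.allow_unstable_api:
            raise ValueError(
                "padding >= half the kernel size is unstable; "
                "pass allow_unstable_api=True to opt in")
```

The point is simply that the unstable path stays reachable, but only behind an explicit opt-in from the caller.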

@fmassa
Contributor Author

fmassa commented Dec 1, 2015

@soumith I just went through both PRs again and I think this one is good. The setCountExcludePad case looks good to me as well, so I would say this is the intended behaviour.
I just fixed a small condition on the assert w.r.t. the input size. The same should be fixed in the CUDA PR, see https://github.com/torch/cunn/pull/134/files#r46350006. Also, I made some small comments on the CUDA PR here and here.
Tell me if you think this is good so that I can squash the commits.
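For reference, the floor- vs ceil-mode output size that drives these input-size checks can be sketched as follows (my own sketch of the standard pooling-shape formula, not the PR's exact code):

```python
import math

def pooled_size(size, k, pad, stride, ceil_mode):
    """Output length along one dimension of a padded pooling layer."""
    num = size + 2 * pad - k
    out = (math.ceil(num / stride) if ceil_mode else num // stride) + 1
    # in ceil mode, drop the last region if it would start entirely
    # past the padded input
    if ceil_mode and (out - 1) * stride >= size + pad:
        out -= 1
    return out
```

For a 5-element dimension with k=3, pad=1, stride=3, ceil mode would naively give 3 outputs, but the last region would start at offset 6, outside the padded extent, so it is dropped and both modes agree on 2.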

@soumith
Member

soumith commented Dec 14, 2015

Can you squash the commits? This is good to go; will merge ASAP.

…ision

Generalizes SpatialAveragePooling. When using padding or ceil mode, the number of elements in a region might be different from kW*kH. The :count_exclude_padding() method considers only the number of elements in the pooling region for the averaging, whereas :count_include_padding() always divides by kW*kH.
@fmassa
Contributor Author

fmassa commented Dec 14, 2015

@soumith just squashed the commits.

soumith added a commit that referenced this pull request Dec 14, 2015
SpatialAveragePooling supports padding, ceil mode and more
@soumith soumith merged commit cbeeb0d into torch:master Dec 14, 2015
@soumith
Member

soumith commented Dec 14, 2015

Thanks. now @szagoruyko has to finish his asap.

@hughperkins
Contributor

Cooooll.... :-) Just in time for Christmas :-D

@hughperkins
Contributor

(cuda version ported/merged into clnn master hughperkins/clnn@6e4976c )

@fmassa fmassa deleted the avepool_pad branch December 15, 2015 06:40