
SpatialAveragePadding padding and ceil kernels #134

Merged: 1 commit, Dec 15, 2015
Conversation

szagoruyko
Member

cuda part for torch/nn#365

}
if(COUNT_INCLUDE_PAD)
Contributor

I think this condition is inverted. When you include the padding region, you should divide by pool_size.

Contributor

Actually, as written, both branches are the same. I think you should replace the ((hend - hstart) * (wend - wstart)) with kernel_h*kernel_w.

@fmassa
Contributor

fmassa commented Sep 7, 2015

I made some inline comments. Could you add some tests for the new cases?

@hughperkins
Contributor

The tests are missing a test for :ceil() mode?

@szagoruyko
Member Author

@hughperkins e040398

@hughperkins
Contributor

Ah :-) Right :-)

@hughperkins
Contributor

Hmmm, it looks like this compares with the CPU version? Does that mean there is a CPU implementation of ceil average pooling to compare with?

@fmassa
Contributor

fmassa commented Sep 24, 2015

@hughperkins yes, there is a CPU implementation, check torch/nn#365.
I've written all the tests needed for both PRs to be merged. But there is still a problem with the GPU implementation when count_include_pad is false, and I couldn't figure out why (in my tests, count_include_pad always behaves as if it were true, even though we set it to false).

@fmassa
Contributor

fmassa commented Sep 24, 2015

(Note that I'm comparing it against my fork, with the corrections mentioned in this PR: e040398.)

@hughperkins
Contributor

(Ported to clnn, in a branch: https://github.com/hughperkins/clnn/compare/master...avgpool?expand=1 ; pending merge of torch/nn#365.)

@fmassa
Contributor

fmassa commented Nov 12, 2015

I think that after https://github.com/szagoruyko/cunn/pull/2 is merged, this is fine to go.

@hughperkins
Contributor

Looks like this is ready to be merged right? :-)

@fmassa
Contributor

fmassa commented Nov 17, 2015

@hughperkins @szagoruyko I was comparing this implementation against cudnn, and it seems that in the case where setCountExcludePad is true there are some boundary differences. This case is not yet officially present in the cudnn bindings, though, and I haven't had a look at it (I'm still using CUDNN R2).

@hughperkins
Contributor

Can we at least merge in the nn version? Even though this isn't supported in cunn yet, it is supported in clnn, but I can't merge it into clnn master until it is present in nn master.

@fmassa
Contributor

fmassa commented Nov 17, 2015

The thing is, both the nn and cunn implementations are equivalent, so if there is a case in which cunn does not behave like cudnn, then it's also going to be the case for nn.
I think what we all want is to have an equivalent implementation everywhere (and as cudnn is becoming the standard for CUDA modules, we need to give the same results as it does).
I'll try to have a look at the cudnn R3 version of setCountExcludePad, to see what they are actually computing, and I'll come back to you soon (hopefully today).

@hughperkins
Contributor

Ok... I think for my own usage I'm not using setCountExcludePad. Perhaps we could split this PR into two: one for ceil mode, and one for setCountExcludePad? An easy way to do that could be to just add an assert that doesn't allow anyone to use setCountExcludePad for now, perhaps? (Or to rename it to unstable_unsupported_setCountExcludePad?)

@fmassa
Contributor

fmassa commented Nov 17, 2015

Hum, yeah, splitting the PR is a possibility... I'll have a look at it today when I have some time.

@szagoruyko
Member Author

This PR is almost ready to be merged; the only problem is the difference between it and the COUNT_EXCLUDE_PADDING mode of cudnn, which we believe to be a bug in cudnn. Will update as soon as we find out more.

@hughperkins
Contributor

@szagoruyko Mmmkay. What if we add something like assert(COUNT_EXCLUDE_PADDING == 0) for now? Then we can make a later pull request to enable COUNT_EXCLUDE_PADDING. (I'm happy to submit the appropriate code change to disable COUNT_EXCLUDE_PADDING for now.)


- luaL_argcheck(L, nInputCols >= kW && nInputRows >= kH, 2, "input image smaller than kernel size");
+ luaL_argcheck(L, nInputCols >= kW - padW && nInputRows >= kH - padH, 2, "input image smaller than kernel size");
Contributor

I think this should be nInputCols >= kW - 2 * padW, and the same for H.
Currently, this condition doesn't allow a 1x1 input to be used with kW=kH=3 and padW=padH=1 (which I think should be valid).

@szagoruyko
Member Author

fixed Francisco's comments and squashed

soumith added a commit that referenced this pull request Dec 15, 2015
SpatialAveragePadding padding and ceil kernels
@soumith soumith merged commit 7506462 into torch:master Dec 15, 2015
@soumith
Member

soumith commented Dec 15, 2015

Thanks Sergey.

@szagoruyko szagoruyko deleted the avgpool branch December 15, 2015 10:09