Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file. In order for us to review and merge your code, please sign up at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need the corporate CLA signed. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!
This looks generally good, thanks!
I didn't try it myself but I'll merge this as is, let's see what happens :-)
    )
    return output

@staticmethod
You should ideally make it once_differentiable as well, given that there is no support for double backwards here.
Hi @fmassa, add once_differentiable like this?
@staticmethod
@once_differentiable
def backward(ctx, grad_output):
Yes, exactly, that's how it's done for RoIAlign / RoIPool
maskrcnn-benchmark/maskrcnn_benchmark/layers/roi_align.py
Lines 25 to 27 in b318c3e
@staticmethod
@once_differentiable
def backward(ctx, grad_output):
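For reference, a minimal self-contained sketch of this pattern. The op and its forward/backward math are made up for illustration; only the decorator usage mirrors what RoIAlign / RoIPool do:

import torch
from torch.autograd import Function
from torch.autograd.function import once_differentiable

class _DoubleOp(Function):
    @staticmethod
    def forward(ctx, input):
        # stand-in for a custom CUDA kernel
        return input * 2

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        # once_differentiable marks this backward as non-differentiable,
        # so a double-backward attempt raises a clear error instead of
        # silently producing wrong higher-order gradients
        return grad_output * 2

x = torch.randn(3, requires_grad=True)
_DoubleOp.apply(x).sum().backward()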
@zimenglan-sysu-512 Hi, thanks for your excellent contribution! I'm building my own model, but I've encountered an error when using the deformable convolution. To reproduce this error, just change this line from kernel_size=3 to kernel_size=5 and execute:
The error log:
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling …
terminate called after throwing an instance of 'c10::Error'
@fmassa @chengyangfu I'd really appreciate it if you could give me some advice on debugging!
My environment:
2020-04-19 14:20:09,676 maskrcnn_benchmark INFO:
OS: Ubuntu 16.04.5 LTS
Python version: 3.7
Nvidia driver version: 430.40
Versions of relevant libraries: …
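For anyone hitting the same error, a hedged sketch of one likely culprit: with the DeformConv layer this PR ports from mmdetection, the offset tensor must have 2 * deformable_groups * kernel_h * kernel_w channels, so bumping kernel_size from 3 to 5 also changes the required offset channels from 18 to 50. The import path below is assumed:

import torch
from maskrcnn_benchmark.layers import DeformConv  # assumed import path

kernel_size = 5
deformable_groups = 1
# offsets: one (dy, dx) pair per sampling location of the kernel
offset_channels = 2 * deformable_groups * kernel_size * kernel_size  # 50 for 5x5

offset_conv = torch.nn.Conv2d(256, offset_channels, kernel_size,
                              padding=kernel_size // 2).cuda()
deform_conv = DeformConv(256, 256, kernel_size, padding=kernel_size // 2,
                         deformable_groups=deformable_groups).cuda()

x = torch.randn(1, 256, 32, 32).cuda()
out = deform_conv(x, offset_conv(x))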
* make pixel indexes 0-based for bounding boxes in the pascal voc dataset
* replacing all instances of torch.distributed.deprecated with torch.distributed
* replacing all instances of torch.distributed.deprecated with torch.distributed
* add GroupNorm
* add GroupNorm -- sort out yaml files
* use torch.nn.GroupNorm instead, replace 'use_gn' with 'conv_block', and use 'BaseStem' & 'Bottleneck' to simplify the code
* modify the 'group_norm' and 'conv_with_kaiming_uniform' functions
* modify the yaml files in configs/gn_baselines/ and reduce the amount of indentation and code duplication
* use 'kaiming_uniform' to initialize resnet, disable gn after the fc layer, and add dilation into ResNetHead
* agnostic regression for bbox
* please set 'STRIDE_IN_1X1' to 'False' when the backbone uses GN
* add README.md for GN
* add dcn from mmdetection
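One commit above flags that STRIDE_IN_1X1 must be 'False' for GN backbones. A hedged sketch of that override via the repo's yacs-based config; the key path is taken from maskrcnn-benchmark's defaults, so verify it against your checkout:

from maskrcnn_benchmark.config import cfg

# GN-trained ResNet weights place the stride in the 3x3 conv, not the 1x1,
# hence the override recommended in the commit message above
cfg.merge_from_list(["MODEL.RESNETS.STRIDE_IN_1X1", False])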
Add deformable convolution and deformable pooling from mmdetection. Thanks to my friend Jinqiang, who helped me add them.