diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 497d1a751..494c6d284 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -198,7 +198,7 @@ jobs:
       - name: Build and install
         run: pip install -e .
       - name: Run unittests
-        run: coverage run --branch --source mmrotate -m pytest tests -sv
+        run: coverage run --branch --source mmrotate -m pytest tests
       - name: Generate coverage report
         run: |
           coverage xml
diff --git a/README.md b/README.md
index 8167549c0..7391fca6b 100644
--- a/README.md
+++ b/README.md
@@ -145,21 +145,21 @@ This project is released under the [Apache 2.0 license](LICENSE).
 ## Projects in OpenMMLab
 
 * [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
-* [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.
+* [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
 * [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
 * [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
-* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab next-generation platform for general 3D object detection.
+* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
 * [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
-* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab next-generation action understanding toolbox and benchmark.
-* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+* [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
 * [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
-* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
-* [MMOCR](https://github.com/open-mmlab/mmocr): A comprehensive toolbox for text detection, recognition and understanding.
-* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab next-generation toolbox for generative models.
-* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
-* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
 * [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
 * [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
 * [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
+* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
+* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
 * [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
-* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 116a85e36..fcf085962 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -141,23 +141,23 @@ MMRotate 是一款由不同学校和公司共同贡献的开源项目。我们
 
 * [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab 计算机视觉基础库
 * [MIM](https://github.com/open-mmlab/mim): MIM 是 OpenMMlab 项目、算法、模型的统一入口
-* [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab 图像分类工具箱与测试基准
-* [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab 检测工具箱与测试基准
-* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab 新一代通用3D目标检测平台
-* [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab 语义分割工具箱与测试基准
-* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab 新一代视频理解工具箱与测试基准
-* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab 一体化视频目标感知平台
-* [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab 姿态估计工具箱与测试基准
-* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab 图像视频编辑工具箱
+* [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab 图像分类工具箱
+* [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab 目标检测工具箱
+* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab 新一代通用 3D 目标检测平台
+* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab 旋转框检测工具箱与测试基准
+* [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab 语义分割工具箱
 * [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab 全流程文字检测识别理解工具包
-* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab 新一代生成模型工具箱
-* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab 光流估计工具箱与测试基准
-* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab 少样本学习工具箱与测试基准
+* [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab 姿态估计工具箱
 * [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 人体参数化模型工具箱与测试基准
 * [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab 自监督学习工具箱与测试基准
 * [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab 模型压缩工具箱与测试基准
+* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab 少样本学习工具箱与测试基准
+* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab 新一代视频理解工具箱
+* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab 一体化视频目标感知平台
+* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab 光流估计工具箱与测试基准
+* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab 图像视频编辑工具箱
+* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab 图片视频生成模型工具箱
 * [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab 模型部署框架
-* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab 旋转框检测工具箱与测试基准
 
 ## 欢迎加入 OpenMMLab 社区
 
diff --git a/demo/dota_demo.jpg b/demo/dota_demo.jpg
new file mode 100644
index 000000000..ef728ffaa
Binary files /dev/null and b/demo/dota_demo.jpg differ
diff --git a/demo/dota_demo.png b/demo/dota_demo.png
deleted file mode 100644
index 1b46b0377..000000000
Binary files a/demo/dota_demo.png and /dev/null differ
diff --git a/mmrotate/models/roi_heads/roi_extractors/rotate_single_level_roi_extractor.py b/mmrotate/models/roi_heads/roi_extractors/rotate_single_level_roi_extractor.py
index ba6fdf47c..d0b1604d2 100644
--- a/mmrotate/models/roi_heads/roi_extractors/rotate_single_level_roi_extractor.py
+++ b/mmrotate/models/roi_heads/roi_extractors/rotate_single_level_roi_extractor.py
@@ -98,19 +98,21 @@ def forward(self, feats, rois, roi_scale_factor=None):
         Returns:
             torch.Tensor: Scaled RoI features.
         """
-        out_size = self.roi_layers[0].out_size
+        if isinstance(self.roi_layers[0], ops.RiRoIAlignRotated):
+            out_size = nn.modules.utils._pair(self.roi_layers[0].out_size)
+        else:
+            out_size = self.roi_layers[0].output_size
         num_levels = len(feats)
-        expand_dims = (-1, self.out_channels * out_size * out_size)
+        expand_dims = (-1, self.out_channels * out_size[0] * out_size[1])
         if torch.onnx.is_in_onnx_export():
             # Work around to export mask-rcnn to onnx
             roi_feats = rois[:, :1].clone().detach()
             roi_feats = roi_feats.expand(*expand_dims)
-            roi_feats = roi_feats.reshape(-1, self.out_channels, out_size,
-                                          out_size)
+            roi_feats = roi_feats.reshape(-1, self.out_channels, *out_size)
             roi_feats = roi_feats * 0
         else:
             roi_feats = feats[0].new_zeros(
-                rois.size(0), self.out_channels, out_size, out_size)
+                rois.size(0), self.out_channels, *out_size)
         # TODO: remove this when parrots supports
         if torch.__version__ == 'parrots':
             roi_feats.requires_grad = True
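
Reviewer note on the rotate_single_level_roi_extractor.py hunk (illustration only, not part of the patch): RiRoIAlignRotated stores a scalar out_size, while the other rotated RoI layers used here expose a pair-valued output_size, so the patched forward() normalizes both to an (h, w) tuple via torch.nn.modules.utils._pair before computing expand_dims and allocating the zero feature tensor. The minimal sketch below reproduces just that normalization with hypothetical stand-in classes (DummyRiRoILayer, DummyRoILayer) so it runs with plain PyTorch and no mmcv/mmrotate import.

# Standalone sketch under the assumptions above; only _pair is the real
# helper the patch relies on, everything else is a stand-in.
import torch
from torch.nn.modules.utils import _pair


class DummyRiRoILayer:
    """Stand-in for a layer that stores a scalar out_size (e.g. 7)."""

    def __init__(self, out_size):
        self.out_size = out_size


class DummyRoILayer:
    """Stand-in for a layer that stores a pair-valued output_size."""

    def __init__(self, output_size):
        self.output_size = _pair(output_size)  # 7 -> (7, 7)


def resolve_out_size(roi_layer):
    """Normalize the RoI output resolution to an (h, w) tuple."""
    if isinstance(roi_layer, DummyRiRoILayer):
        return _pair(roi_layer.out_size)
    return roi_layer.output_size


def empty_roi_feats(num_rois, out_channels, roi_layer):
    """Allocate the zero feature tensor the extractor starts from."""
    out_size = resolve_out_size(roi_layer)
    # Mirrors expand_dims = (-1, out_channels * out_size[0] * out_size[1])
    # and the new_zeros(..., *out_size) call in the patched forward().
    return torch.zeros(num_rois, out_channels, *out_size)


if __name__ == '__main__':
    for layer in (DummyRiRoILayer(7), DummyRoILayer(7)):
        feats = empty_roi_feats(num_rois=4, out_channels=256, roi_layer=layer)
        assert feats.shape == (4, 256, 7, 7)
        print(type(layer).__name__, tuple(feats.shape))

With either layer type the allocated tensor comes out as (num_rois, out_channels, h, w), which is what lets the patched ONNX-export branch reshape with *out_size instead of repeating a scalar out_size twice.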