[Feature] Add SOD datasets #913
Conversation
Hi, thanks for your nice PR; we will review it as soon as possible. Salient object detection is an important branch of semantic segmentation. Could you also join us in supporting other representative datasets, such as MSRA, THUR15K, and ECSSD? Best,
Codecov Report
@@ Coverage Diff @@
## master #913 +/- ##
==========================================
- Coverage 89.62% 89.56% -0.06%
==========================================
Files 113 117 +4
Lines 6263 6307 +44
Branches 989 993 +4
==========================================
+ Hits 5613 5649 +36
- Misses 452 460 +8
Partials 198 198
About metrics in SOD
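One common SOD metric is MAE, the mean absolute error between the predicted saliency map and the binary ground truth. A minimal numpy sketch for illustration only; the PR's actual `calc_sod_metrics`/`eval_sod_metrics` implementations may differ:

```python
import numpy as np

def mae(pred, gt):
    # Mean absolute error between a saliency map and its ground truth,
    # both arrays scaled to [0, 1]; lower is better.
    return float(np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean())
```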
Hi @shinya7y
Sure. I will only check superficial issues because I'm not familiar with SOD and mmseg.

In salient object detection (SOD), HKU-IS is used for evaluation.
First, download [HKU-IS.rar](https://sites.google.com/site/ligb86/mdfsaliency/).
I cannot access https://sites.google.com/site/ligb86/mdfsaliency/
Is it the same as https://i.cs.hku.hk/~gbli/deep_saliency.html ?
### DUTS
First,download [DUTS-TR.zip](http://saliencydetection.net/duts/download/DUTS-TR.zip) and [DUTS-TE.zip](http://saliencydetection.net/duts/download/DUTS-TE.zip) .
There are three occurrences of `) .` in this file; please change them to `).`
@@ -99,7 +101,8 @@ def single_gpu_test(model,
     if pre_eval:
         # TODO: adapt samples_per_gpu > 1.
         # only samples_per_gpu=1 valid now
-        result = dataset.pre_eval(result, indices=batch_indices)
+        result = dataset.pre_eval(
+            result, return_logit, indices=batch_indices)
return_logit=return_logit
@@ -215,7 +223,8 @@ def multi_gpu_test(model,
     if pre_eval:
         # TODO: adapt samples_per_gpu > 1.
         # only samples_per_gpu=1 valid now
-        result = dataset.pre_eval(result, indices=batch_indices)
+        result = dataset.pre_eval(
+            result, return_logit, indices=batch_indices)
return_logit=return_logit
@@ -216,7 +216,7 @@ def whole_inference(self, img, img_meta, rescale):

         return seg_logit

-    def inference(self, img, img_meta, rescale):
+    def inference(self, img, img_meta, rescale, return_logit):
def inference(self, img, img_meta, rescale, return_logit=False):
Please add a docstring for the new argument.
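A possible docstring sketch for the new argument, in the Google style used elsewhere in mmseg; the wording of the other parameter descriptions is a guess:

```python
def inference(self, img, img_meta, rescale, return_logit=False):
    """Inference with slide/whole style.

    Args:
        img (Tensor): The input image of shape (N, 3, H, W).
        img_meta (dict): Image info dict.
        rescale (bool): Whether to rescale the result back to the
            original shape.
        return_logit (bool): Whether to return the raw seg logit
            (e.g. for SOD evaluation) instead of the post-softmax
            output. Defaults to False.
    """
```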
"""Simple test with single image.""" | ||
seg_logit = self.inference(img, img_meta, rescale) | ||
seg_pred = seg_logit.argmax(dim=1) | ||
seg_logit = self.inference(img, img_meta, rescale, return_logit) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Use `return_logit=return_logit` in the three calls of `self.inference`.
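A toy illustration of why the keyword form matters (hypothetical stand-in for `dataset.pre_eval`; the assumption is that `indices` comes before any new parameter, so a positional argument could bind to the wrong slot):

```python
def pre_eval(result, indices=None, return_logit=False):
    # Toy stand-in with the parameter order assumed above.
    return {'result': result, 'indices': indices, 'logit': return_logit}

# Positional call: True silently binds to `indices`, not `return_logit`.
wrong = pre_eval([0.9], True)
# Keyword call binds as intended.
right = pre_eval([0.9], return_logit=True, indices=[0])
```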
 __all__ = [
     'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore',
     'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics',
-    'intersect_and_union'
+    'intersect_and_union', 'calc_sod_metrics', 'eval_sod_metrics', 'pre_eval_to_sod_metrics'
Code style is different in many files.
Please refer to https://github.com/open-mmlab/mmsegmentation/blob/master/.github/CONTRIBUTING.md
print('Making directories...')
mmcv.mkdir_or_exist(out_dir)
mmcv.mkdir_or_exist(osp.join(out_dir, 'images'))
mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'validation'))
Many `mkdir_or_exist` calls in the dataset converters seem redundant.
if isinstance(pred_label, str):
    pred_label = torch.from_numpy(np.load(pred_label))
else:
    pred_label = torch.from_numpy((pred_label))
Are the doubled parentheses needed?
assert len(os.listdir(image_dir)) == DUT_OMRON_LEN \
    and len(os.listdir(mask_dir)) == \
    DUT_OMRON_LEN, 'len(DUT-OMRON) != {}'.format(DUT_OMRON_LEN)
assert len(os.listdir(image_dir)) == DUT_OMRON_LEN \
and len(os.listdir(mask_dir)) == DUT_OMRON_LEN, \
f'len(DUT-OMRON) != {DUT_OMRON_LEN}'
Similar modifications to the other converters will improve readability.
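The suggested pattern could also be factored into a small helper shared by the converters (hypothetical; `check_dataset_len` is not part of the PR):

```python
import os

def check_dataset_len(image_dir, mask_dir, expected_len):
    # Hypothetical helper wrapping the assert style suggested above:
    # both the image and mask directories must hold exactly
    # `expected_len` files.
    assert len(os.listdir(image_dir)) == expected_len \
        and len(os.listdir(mask_dir)) == expected_len, \
        f'len(dataset) != {expected_len}'
```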
@@ -30,9 +30,11 @@ def __init__(self,
     by_epoch=False,
     efficient_test=False,
     pre_eval=False,
+    return_logit=False,
Please add a docstring for this argument.
@@ -0,0 +1,67 @@
+# Copyright (c) OpenMMLab. All rights reserved.
Please add a comment for this script and give an example; you can refer to
https://github.com/open-mmlab/mmdetection/blob/9874180a015beee874c904357fe4ed4fab4b46a4/tools/analysis_tools/optimize_anchors.py#L2
Please do this for the other scripts too.
Hi @acdart
```shell
python tools/convert_datasets/hku_is.py /path/to/HKU-IS.rar
```
Could you modify the Chinese document accordingly?
    img_dir='images/training',
    ann_dir='annotations/training',
    pipeline=train_pipeline),
val=dict(
Can we use a concat dataset (#833) and do evaluation separately?
@@ -38,6 +38,7 @@ def single_gpu_test(model,
     efficient_test=False,
     opacity=0.5,
     pre_eval=False,
+    return_logit=False,
Please add a docstring for this new argument.
if return_logit:
    output = seg_logit
else:
    if seg_logit.shape[1] >= 2:
It would be better to add some comments for different cases (different shape).
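A numpy sketch of the three cases such comments might cover (a toy stand-in: the real code operates on torch tensors, and the sigmoid/softmax assumptions here are mine):

```python
import numpy as np

def to_output(seg_logit, return_logit=False):
    if return_logit:
        # Case 1: the caller wants raw logits, e.g. for SOD metrics
        # that need the continuous saliency map.
        return seg_logit
    if seg_logit.shape[1] >= 2:
        # Case 2: C >= 2 channels -> multi-class segmentation;
        # argmax over the channel axis gives the label map.
        return seg_logit.argmax(axis=1)
    # Case 3: a single channel -> binary/saliency output; threshold
    # the (assumed sigmoid-activated) map at 0.5.
    return (seg_logit[:, 0] > 0.5).astype(np.int64)
```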
Please merge the master branch into your branch, thank you.
Hi @acdart, sorry to bother you. Do you plan to continue finishing this PR? We can spare more time or human resources in the next month if you need it. Looking forward to your reply. Best,
Add class name and palette to
* polish localizer code
* add deprecated warnings
* update warning msg
Support the DUTS dataset, the most popular dataset in salient object detection.
DUTS contains 10,553 training images and 5,019 test images. All training images are collected from the ImageNet DET training/val sets, while test images are collected from the ImageNet DET test set and the SUN dataset. Both the training and test sets contain very challenging scenarios for saliency detection. Accurate pixel-level ground truths were manually annotated by 50 subjects.
DUTS is currently the largest saliency detection benchmark with an explicit training/test evaluation protocol. For fair comparison in future research, the training set of DUTS serves as a good candidate for learning DNNs, while the test set and other public datasets can be used for evaluation.
Related links:
https://paperswithcode.com/sota/salient-object-detection-on-duts-te
https://github.com/ArcherFMY/sal_eval_toolbox