add support for pascal voc dataset and evaluate (#207)
* add support for pascal voc dataset and evaluate
* optimization for adding voc dataset
* make inference.py dataset-agnostic; add use_difficult option to voc dataset
* handle voc difficult objects correctly
* Remove dependency on lxml plus minor improvements
* More cleanups
* More comments and improvements
* Lint fix
* Move configs to their own folder
Showing 17 changed files with 860 additions and 361 deletions.
configs/pascal_voc/e2e_faster_rcnn_R_50_C4_1x_1_gpu_voc.yaml (20 additions, 0 deletions)
```yaml
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50"
  RPN:
    PRE_NMS_TOP_N_TEST: 6000
    POST_NMS_TOP_N_TEST: 300
    ANCHOR_SIZES: (128, 256, 512)
  ROI_BOX_HEAD:
    NUM_CLASSES: 21
DATASETS:
  TRAIN: ("voc_2007_trainval",)
  TEST: ("voc_2007_test",)
SOLVER:
  BASE_LR: 0.001
  WEIGHT_DECAY: 0.0001
  STEPS: (50000, )
  MAX_ITER: 70000
  IMS_PER_BATCH: 1
TEST:
  IMS_PER_BATCH: 1
```
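As a minimal usage sketch (not part of the commit), this YAML is meant to be merged over maskrcnn_benchmark's default yacs config; the config path and the printed fields below are just illustrations:

```python
# Sketch only: assumes the standard maskrcnn_benchmark config entry point.
from maskrcnn_benchmark.config import cfg

# Merge the single-GPU VOC config over the defaults, then freeze it.
cfg.merge_from_file("configs/pascal_voc/e2e_faster_rcnn_R_50_C4_1x_1_gpu_voc.yaml")
cfg.freeze()

print(cfg.DATASETS.TRAIN)                  # ('voc_2007_trainval',)
print(cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES)  # 21 (20 VOC classes + background)
```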
configs/pascal_voc/e2e_faster_rcnn_R_50_C4_1x_4_gpu_voc.yaml (20 additions, 0 deletions)
```yaml
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50"
  RPN:
    PRE_NMS_TOP_N_TEST: 6000
    POST_NMS_TOP_N_TEST: 300
    ANCHOR_SIZES: (128, 256, 512)
  ROI_BOX_HEAD:
    NUM_CLASSES: 21
DATASETS:
  TRAIN: ("voc_2007_trainval",)
  TEST: ("voc_2007_test",)
SOLVER:
  BASE_LR: 0.004
  WEIGHT_DECAY: 0.0001
  STEPS: (12500, )
  MAX_ITER: 17500
  IMS_PER_BATCH: 4
TEST:
  IMS_PER_BATCH: 4
```
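The 4-GPU variant appears to follow the usual linear-scaling heuristic relative to the 1-GPU file: batch size times 4, learning rate times 4, schedule divided by 4. A quick check of the numbers (sketch, not taken from the commit):

```python
# Relationship between the two configs under the linear-scaling heuristic.
single = {"BASE_LR": 0.001, "STEPS": (50000,), "MAX_ITER": 70000, "IMS_PER_BATCH": 1}
scale = 4  # 4 images per batch instead of 1

scaled = {
    "BASE_LR": single["BASE_LR"] * scale,                 # 0.004
    "STEPS": tuple(s // scale for s in single["STEPS"]),  # (12500,)
    "MAX_ITER": single["MAX_ITER"] // scale,              # 17500
    "IMS_PER_BATCH": single["IMS_PER_BATCH"] * scale,     # 4
}
```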
maskrcnn_benchmark/data/datasets/__init__.py

```diff
@@ -1,5 +1,6 @@
 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
 from .coco import COCODataset
+from .voc import PascalVOCDataset
 from .concat_dataset import ConcatDataset

-__all__ = ["COCODataset", "ConcatDataset"]
+__all__ = ["COCODataset", "ConcatDataset", "PascalVOCDataset"]
```
maskrcnn_benchmark/data/datasets/evaluation/__init__.py

```python
from maskrcnn_benchmark.data import datasets

from .coco import coco_evaluation
from .voc import voc_evaluation


def evaluate(dataset, predictions, output_folder, **kwargs):
    """evaluate dataset using different methods based on dataset type.
    Args:
        dataset: Dataset object
        predictions(list[BoxList]): each item in the list represents the
            prediction results for one image.
        output_folder: output folder, to save evaluation files or results.
        **kwargs: other args.
    Returns:
        evaluation result
    """
    args = dict(
        dataset=dataset, predictions=predictions, output_folder=output_folder, **kwargs
    )
    if isinstance(dataset, datasets.COCODataset):
        return coco_evaluation(**args)
    elif isinstance(dataset, datasets.PascalVOCDataset):
        return voc_evaluation(**args)
    else:
        dataset_name = dataset.__class__.__name__
        raise NotImplementedError("Unsupported dataset type {}.".format(dataset_name))
```
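A minimal usage sketch of the dispatcher (the variables and argument values are illustrative; the real call site is maskrcnn_benchmark's inference code):

```python
# Illustrative only: shows how evaluate() dispatches on the dataset type.
from maskrcnn_benchmark.data.datasets.evaluation import evaluate

result = evaluate(
    dataset=voc_test_dataset,   # a PascalVOCDataset instance, assumed built elsewhere
    predictions=predictions,    # list[BoxList], one entry per image
    output_folder="inference/voc_2007_test",
    box_only=False,             # forwarded via **kwargs to voc_evaluation
)
```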
maskrcnn_benchmark/data/datasets/evaluation/coco/__init__.py (21 additions, 0 deletions)
```python
from .coco_eval import do_coco_evaluation


def coco_evaluation(
    dataset,
    predictions,
    output_folder,
    box_only,
    iou_types,
    expected_results,
    expected_results_sigma_tol,
):
    return do_coco_evaluation(
        dataset=dataset,
        predictions=predictions,
        box_only=box_only,
        output_folder=output_folder,
        iou_types=iou_types,
        expected_results=expected_results,
        expected_results_sigma_tol=expected_results_sigma_tol,
    )
```
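The VOC counterpart of this wrapper is not shown in this excerpt. A plausible shape for it, mirroring the COCO wrapper above; `do_voc_evaluation` and the exact argument handling are assumptions, not quotes from the commit:

```python
# Sketch of the matching VOC wrapper; not taken verbatim from the commit.
import logging

from .voc_eval import do_voc_evaluation  # assumed module/function name


def voc_evaluation(dataset, predictions, output_folder, box_only, **_):
    logger = logging.getLogger("maskrcnn_benchmark.inference")
    if box_only:
        # VOC evaluation computes mAP on final detections; proposal-only
        # (box_only) evaluation is a COCO-path feature.
        logger.warning("voc evaluation doesn't support box_only, ignored.")
    return do_voc_evaluation(
        dataset=dataset,
        predictions=predictions,
        output_folder=output_folder,
        logger=logger,
    )
```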
Comment on 9a1ba14:
hi @fmassa, I have a question: why do you add `dataset_name` as a parameter to the `inference` function in the `tools/test_net.py` file?

Reply on 9a1ba14:
Hi, it is now only needed for nicer logging / error messages, I believe, but those checks could be pushed outside, I think.
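For context, a hedged sketch of the kind of logging `dataset_name` enables inside `inference()`; the signature and the log message are assumptions, not quoted from the repository:

```python
# Illustrative snippet: dataset_name is only used to make log lines readable.
import logging


def inference(model, data_loader, dataset_name, output_folder=None, **kwargs):
    logger = logging.getLogger("maskrcnn_benchmark.inference")
    dataset = data_loader.dataset
    logger.info(
        "Start evaluation on {} dataset ({} images).".format(dataset_name, len(dataset))
    )
    # ... run the model over data_loader, collect per-image predictions ...
    # ... then dispatch to evaluate(dataset, predictions, output_folder, **kwargs) ...
```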