This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Support Binary Mask with transparent SegmentationMask interface #473

Merged
fmassa merged 15 commits into facebookresearch:master on Apr 9, 2019

Conversation

@botcs (Contributor) commented Feb 21, 2019

Following the major refactor in #150, I have implemented a generic SegmentationMask with the following in mind:

Segmentations come in either:
1) Binary masks
2) Polygons

Binary masks can be represented in a contiguous array
and operations can be carried out more efficiently,
therefore BinaryMaskList handles them together.

Polygons are handled separately for each instance,
by PolygonInstance and instances are handled by
PolygonList.

SegmentationMask is supposed to represent both,
therefore it wraps the functions of BinaryMaskList
and PolygonList to make it transparent.
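
The layering described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the real backends hold mask tensors and polygon coordinate lists and implement crop / resize / transpose / convert, while here the bodies are stubs.

```python
class PolygonList:
    """Stub backend: one coordinate list per instance, handled separately."""
    def __init__(self, polygons, size):
        self.polygons, self.size = list(polygons), size

    def __len__(self):
        return len(self.polygons)


class BinaryMaskList:
    """Stub backend: all instances kept together, as in one contiguous array."""
    def __init__(self, masks, size):
        self.masks, self.size = list(masks), size

    def __len__(self):
        return len(self.masks)


class SegmentationMask:
    """Transparent wrapper: callers never see which backend is active."""
    def __init__(self, instances, size, mode="poly"):
        # pick the underlying representation based on the mode flag
        if mode == "poly":
            self.instances = PolygonList(instances, size)
        elif mode == "mask":
            self.instances = BinaryMaskList(instances, size)
        else:
            raise ValueError("mode must be 'poly' or 'mask'")
        self.size, self.mode = size, mode

    def __len__(self):
        # every public method delegates to the wrapped backend like this
        return len(self.instances)
```

Since both backends expose the same methods, swapping one for the other is invisible to the rest of the library.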

Features:

  • Error handling: I have added multiple sanity checks at different levels of initialization, which should prevent invalid data from passing through.
  • Transparent interface - Generic backend: BinaryMaskLists and PolygonLists are now interchangeable, and both can serve as the underlying representation of the SegmentationMask.
  • Unit test: I have refactored @wangg12 's unit tests, and in this notebook I have provided visual feedback on the results.
  • Unit train: Quick-schedule trainings were run using configs/quick_schedules/e2e_mask_rcnn_R_50_FPN_quick.yaml. On coco_minival, training with the binary mask backend consistently outperformed (maskAP=5.8) training with the polygon representation. The polygon backend achieved performance (maskAP=5.6) similar to the current backend on the master branch.

Downsides:

I have changed the syntax to represent the structure more coherently, which affects two lines in the lib:

  • [maskrcnn_benchmark/data/datasets/coco.py-L83]
  • [maskrcnn_benchmark/modeling/roi_heads/mask_head/loss.py-L38]

@facebook-github-bot added the CLA Signed label on Feb 21, 2019

def convert_to_polygon(self):
    contours = self._findContours()
    return PolygonList(contours, self.size)
Contributor:

This is not generally true. Consider a donut shape mask, the contours would fail to represent the real shape.

Contributor Author (@botcs):

Please refer to L139:

contour, hierarchy = cv2.findContours(
  mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1
)

This solves the donut problem.

Contributor:

Have you tested with a binary mask with multiple holes? I am not sure whether it generalizes to all cases.

Contributor Author:

Can you give a counterexample where you cannot select the outermost contour?

Contributor:

@botcs Can you convert the contours of a donut-shaped object, or an object with holes, found by cv2 back to the original binary mask via the cocomask API? At least in my experience this is not trivial.

Besides, the function call you provided has a minor problem with my OpenCV version (OpenCV 3.x returns three values here). It should be:
_, contour, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1)

@botcs (Contributor Author) commented Feb 22, 2019:

I removed that part from the test since it did not change anything about the binary mask. This makes me think that the conversion is performed on both polygon entities, but there is no sign of the inner hole's mask being subtracted from the outer one.

One way to get around this is to convert each polygon entity of the PolygonInstance and combine them with an N-ary XOR, which evaluates to 1 iff the pixel is covered by an odd number of polygons (for a donut: inside the outer boundary but not the hole).

What do you think?
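
A minimal sketch of the N-ary XOR idea (a hypothetical helper in pure Python/NumPy rather than the cv2/COCO routines the PR actually uses): each polygon entity is rasterized separately with even-odd ray casting at pixel centres, and the per-entity masks are XORed so that a hole cancels the outer boundary.

```python
import numpy as np


def rasterize_xor(polygons, height, width):
    """XOR-combine per-polygon rasterizations: a pixel ends up 1 iff it is
    covered by an odd number of polygon entities, so holes cancel out."""

    def inside(px, py, poly):
        # classic even-odd ray-casting test for one polygon [x0,y0,x1,y1,...]
        xs, ys = poly[0::2], poly[1::2]
        hit, j = False, len(xs) - 1
        for i in range(len(xs)):
            if (ys[i] > py) != (ys[j] > py) and \
               px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]:
                hit = not hit
            j = i
        return hit

    mask = np.zeros((height, width), dtype=np.uint8)
    for poly in polygons:
        for y in range(height):
            for x in range(width):
                # sample at the pixel centre to avoid edge ambiguities
                mask[y, x] ^= inside(x + 0.5, y + 0.5, poly)
    return mask
```

For a donut given as two entities (outer square plus inner hole), the hole region is covered twice and XORs back to 0, which is exactly the behavior missing from the current polygon-to-mask path.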

@wangg12 (Contributor) commented Feb 22, 2019:

I think this is OK, but maybe a warning or comment should tell users that if their dataset contains objects with holes, this conversion is problematic and the binary_mask backend is preferred.

@botcs (Contributor Author) commented Feb 24, 2019:

I have now looked into the details of this polygon -> mask conversion carried out by the COCO API, and the following happens: each polygon entity (a part of a polygon instance) is converted to RLE, and the RLE is converted to a mask in segmentation_mask.py L303.

This unnecessary overhead could be eliminated by using cv2.fillPoly, which would also handle the output of the Mask->Polygon conversion in a very neat way. This modification could allow a probably lossless Mask<->Polygon bi-directional conversion.

What are your opinions @wangg12 @fmassa ?

@fmassa (Contributor):

@botcs I'd prefer not to have a dependency on OpenCV in the core training loop (I've seen reports in the past that it didn't mix well with pytorch multiprocessing, even though that might not apply here).

About lossless conversion from Mask to Polygon, I don't think this is actually possible, because there are discretization artifacts introduced when we go from a Polygon to a Mask that can't be exactly recovered. That being said, it would definitely be nice to have a way to convert back from mask to polygon, but in this case, given that polygons take less space, wouldn't it be recommended to (almost) always use polygons instead?

@botcs (Contributor Author) commented Feb 25, 2019:

@fmassa I agree with the dependency issue. Should I import it only when the conversion is called and raise a warning, or remove the feature?

I agree, but apart from corner cases, in the worst case the boundary pixels' coordinates would serve as the polygon representation. In reverse, if no shrinking or warping has been applied, I think the conversion is completely reversible.

given that polygons take less space, wouldn't it be recommended to (almost) always use polygons instead?

In theory yes; in practice I think it depends on how often the conversion happens and on the format of the original labels. Synthetic datasets contain ridiculously rough boundaries (e.g. grass/tree occlusion) that would be expensive to convert and store, and as long as polygon operations are not parallelized, masks can be dealt with more easily.

To sum up: in the current version, polygon conversion returns the outer boundary only, so polygons with holes cannot be represented (nor could they in the previous version). On the other hand, we don't have parallelized binary mask transformations, which causes a huge overhead when each element in the batch needs to be cropped / resized differently. As long as BinaryMasks are not parallelized, offline conversion to polygons is the best option.

if isinstance(masks, torch.Tensor):
    pass
elif isinstance(masks, (list, tuple)):
    masks = torch.stack(masks, dim=2)
Contributor:

I would like to add support for converting RLE masks to binary mask as well.

@botcs (Contributor Author) commented Feb 21, 2019:

In PR #150 RLE masks are discussed; however, I could not find RLE support anywhere in the code.
In my opinion, the COCO dataset seems to go fine without RLE, and I don't see why it would be generally useful.

As discussed by @fmassa and @wangg12 in the earlier reviews of #150:

In my opinion, the inner computation should be based on either polygons or binary mask. So if the mode is 'mask' or the user's input is not polygons, I'd like to convert any of it into binary mask and let the inner computation based on binary masks.

This suggests that RLE is not a self-contained representation, in the sense that for transformations like crop, resize, and transpose you would have to change the underlying representation.

Aside from that, one could implement an RLEList backing SegmentationMask to support that format without interfering with the other classes. Of course, for full compatibility one would have to take care of the conversions in both directions: RLEList <-> BinaryMaskList and RLEList <-> PolygonList.
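
For illustration, decoding an uncompressed COCO-style RLE (a dict with size and counts, where counts are runs of alternating 0s and 1s in column-major order, starting with zeros) could look like the sketch below. The helper name is hypothetical; real code would use pycocotools, which also handles compressed string-encoded RLEs.

```python
import numpy as np


def decode_uncompressed_rle(rle):
    """Turn an uncompressed COCO-style RLE dict into a binary mask.
    COCO stores run lengths of alternating 0/1 values (starting with
    zeros) over the mask flattened in column-major (Fortran) order."""
    h, w = rle["size"]
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0
    for run in rle["counts"]:
        flat[pos:pos + run] = val  # write the current run
        pos += run
        val ^= 1                   # alternate between 0 and 1
    return flat.reshape((h, w), order="F")
```

An RLEList could decode like this on construction and then delegate all transformations to BinaryMaskList, which matches the "convert at the input" suggestion below.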

Contributor:

Again, RLE -> polygons is not always possible (consider a donut-like shape). So simply converting it into a binary mask at the input of the binary mask class is enough.

Contributor Author:

Sounds reasonable, I will add it.

Contributor Author:

I have found the implementation for RLE conversions here, which suggests that RLEs are passed as a list of dicts, each of which has at least an entry with the key count, so I am checking for that in the initialization.

@wangg12 could you please run a test if it runs OK this way?

@fmassa (Contributor) left a comment:

Thanks for the refactoring!

I have a few comments. I'd need to run a few experiments to validate that there is no change in behavior, so it might take a bit more time than usual to merge this PR.

maskrcnn_benchmark/structures/segmentation_mask.py (outdated review threads, resolved)

''' This crashes the training way too many times...
for p in polygons:
    assert p[::2].min() >= 0
Contributor:

Can you explain why the training crashes with those asserts? Are the elements out of bounds, or negative?

Contributor Author:

Both cases happen in COCO training.
Ideally this assert should not be a problem; however, it seems that when the polygons are converted to binary masks, the implementation handles such coordinates without throwing warnings.

Contributor:

I still don't understand this correctly. This assumes that p is a polygon, right?

@botcs (Contributor Author) commented Feb 26, 2019:

@fmassa please take a look at the PolygonInstance.crop method (starting at L241), which is a bit odd to me: instead of clamping (as noted in the in-line comment), it just subtracts the left/top boundary of the cropping box from the polygon coordinates, and passes the reduced crop size as an argument to the new PolygonInstance.

This was in the original implementation, and I suppose it relies heavily on the polygon->binary conversion handling the issue by simply ignoring over- and under-flowing polygon coordinates. This could be why it works without these asserts and crashes with them.
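
The translate-only crop just described can be sketched as below (a hypothetical helper; the real PolygonInstance.crop operates on torch tensors): coordinates are shifted by the box's top-left corner and deliberately not clamped, on the assumption that rasterization later ignores out-of-range points.

```python
import numpy as np


def crop_polygon(polygon, box):
    """Translate-only crop of one polygon [x0, y0, x1, y1, ...].
    Coordinates are shifted by the box origin; anything that falls
    outside the new (w, h) canvas is intentionally NOT clamped."""
    x1, y1, x2, y2 = box
    p = np.asarray(polygon, dtype=np.float64).copy()
    p[0::2] -= x1  # shift x coordinates
    p[1::2] -= y1  # shift y coordinates
    return p, (x2 - x1, y2 - y1)  # shifted polygon and reduced crop size
```

Adding the asserts from the diff above on top of this helper would fire whenever the original polygon extends past the crop box, which is consistent with the crashes reported in this thread.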

@fmassa (Contributor):

Oh I see, yes, it makes sense.

The cropping was implemented that way to mimic the implementation in Detectron for the fused project_on_box. Actually, to make things really correct we would need to detect when a point is negative and add new edges to the polygon. Just clamping the coordinates of the polygon is unfortunately not enough.

@botcs (Contributor Author) commented Feb 24, 2019

I could not add a comment to @fmassa's comment, so I will post it below:

Actually I would rather have this here, but instead of a TODO as a FIXME, because this is currently a bottleneck in the training code

Agreed.
I was thinking about how this step could be sped up a bit:

1) Iterate
2) Crop
3) Resize
4) Stack

But since the Crop step is carried out differently for each entry, I didn't see a straightforward way to do this.
If you think that a simple Python multiprocessing approach could solve this, I would be more than happy to implement it; I am just curious whether it would collide with the torch backend.

@botcs (Contributor Author) commented Feb 25, 2019

I have a few comments. I'd need to run a few experiments to validate that there is no change in behavior, so it might take a bit more time to merge this PR than usual.

Hi @fmassa,
How exactly are you going to validate consistency? I can make a notebook comparing the old and the new implementation (both an elementwise check on the mask tensors and comments on the syntax), if that helps.

Thank you

@fmassa (Contributor) commented Feb 25, 2019

Hi @botcs sorry for the delay

About your first message:
PR #379 writes a CUDA kernel that replaces the Crop / Resize / Stack steps.

I was maybe hoping to have a way of decomposing the structures a bit more, but this might not be possible / needed.

About validating the consistency, my thought was to run a full training and compare end performances. This would indicate if the representation that we use brings any noticeable difference in the final model result.

@botcs (Contributor Author) commented Feb 25, 2019

Hi @fmassa,

About your first message:

This PR writes a cuda kernel that replaces the Crop / Resize / Stack #379

That sounds amazing! I really hope I can then utilize the BinaryMaskList's contiguous representation.


What would you think about a CityScapes training, where we could compare the old-poly, new-poly, and new-mask annotations?

By old-poly I mean the implementation on master, using COCO-style training (issue filed here) with the configs/cityscapes/e2e_mask_rcnn_R_50_FPN_1x_cocostyle.yaml config file.

For new-poly and new-mask I would make a CityScapesDataset that takes the original polygons or the binary masks, respectively, as ground truth.

@fmassa (Contributor) commented Feb 26, 2019

@botcs that sounds amazing.
Having a kind of official implementation that supports masks would be great.
The complication is around the testing code: we currently use COCO for evaluating everything, and we should either stick with it or rewrite the evaluation logic (some of which is in the pascal eval) so that it supports masks.
But that's a lot of work.

@wangg12 (Contributor) commented Feb 26, 2019

@fmassa Why would evaluation be a problem, since we could save the testing results in COCO style?

@fmassa (Contributor) commented Feb 26, 2019

@wangg12 yes, you are right: the predictions are masks anyway, and we convert them to RLE for COCO evaluation. But COCOeval expects polygons for the ground truth, so those polygons would need to be there as well.

But the way we currently dispatch to the evaluation code is hard-coded to either the pascal evaluation or the coco evaluation, depending on the dataset class.

Those are not blockers, just problems we need to address in order to make the code easy to use; solving them would definitely be a great contribution!

@wangg12 (Contributor) commented Feb 26, 2019

@fmassa I think if users use binary masks, they can convert their annotations losslessly to the COCO-style RLE format, so there is no need to change the evaluation code.

@botcs (Contributor Author) commented Feb 26, 2019

I am working on a generic COCO-style evaluation that is compatible (as much as possible) with segmentation datasets. However, it would definitely help if this PR were merged, since that evaluation refactoring is out of scope here.

@fmassa (Contributor) commented Feb 26, 2019

@wangg12 sounds good.

One thing I'd love to have is a generic way of calling into COCOeval without requiring the data to be in the COCODataset format.

@botcs (Contributor Author) commented Mar 1, 2019

Hi @fmassa,
I have finished training the R-50-FPN-MASK network, now with the correct solver arguments:

SOLVER:
  BASE_LR: 0.005
  WEIGHT_DECAY: 0.0001
  STEPS: (120000, 160000)
  MAX_ITER: 180000
  IMS_PER_BATCH: 4

The resulting scores are the following:
bbox mAP(.5:.95) = 0.355 (the reported score is higher: 37.8)
mask mAP(.5:.95) = 0.325 (the reported score is higher: 34.2)

Does this difference occur because of the 2-GPU setup (smaller batch size, smaller learning rate), or because of significantly different behavior with the new SegmentationMask interface?
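
For context, the BASE_LR above is consistent with the linear scaling heuristic used in Detectron-style schedules (learning rate proportional to images per batch, relative to the reference 16-image / 0.02 configuration). A quick check, assuming that heuristic applies here:

```python
def scaled_lr(base_lr_ref=0.02, ims_ref=16, ims_per_batch=4):
    """Linear scaling rule: LR scales with the global batch size,
    relative to a reference schedule (values above are the commonly
    cited 16-image / 0.02 Mask R-CNN reference)."""
    return base_lr_ref * ims_per_batch / ims_ref
```

With IMS_PER_BATCH: 4 this yields 0.005, matching BASE_LR in the config; MAX_ITER and STEPS are likewise scaled 2x from the 90k reference schedule. So the config itself follows the usual small-batch adjustment, and the remaining AP gap would need another explanation.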

A detailed summary of the precision and recall:

Evaluate annotation type *bbox*
DONE (t=24.68s).
Accumulating evaluation results...
DONE (t=3.63s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.355
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.571
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.384
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.205
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.385
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.464
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.298
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.470
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.494
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.307
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.530
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.625
Loading and preparing results...
DONE (t=1.90s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=28.11s).
Accumulating evaluation results...
DONE (t=3.52s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.325
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.538
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.343
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.145
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.350
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.480
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.283
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.433
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.452
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.258
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.491
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.604

@fmassa fmassa mentioned this pull request Mar 1, 2019
@botcs (Contributor Author) commented Mar 1, 2019

[COCOEval related]
Hi @fmassa,

I have made good progress with the plans for a new dataset-agnostic COCO-style evaluation approach; however, your help would be really appreciated. Please find my results visualized at:
https://nbviewer.jupyter.org/gist/botcs/904d96ebb50708f7690f52c831e76018

thanks,
Csabi

@botcs (Contributor Author) commented Mar 4, 2019

Hi @fmassa,

Are there any remaining to-dos before this PR can be merged or rejected?

BTW, I am now working on the 3rd point of the roadmap, and I would be happy to submit the PR for the 2nd point.

@fmassa (Contributor) commented Mar 5, 2019

Hey @botcs, sorry for the delay in replying.

Does this difference occur because of the 2-GPU setup (smaller batch size, smaller learning rate), or because of significantly different behavior with the new SegmentationMask interface?

This seems to indicate that the new SegmentationMask might give lower results than the previous polygon-based one. I'll try running the training on 8 GPUs again to verify, and I'll let you know tomorrow what results I get.

I have made good progress with the plans on a new dataset agnostic COCO-style evaluation approach, however your help would be really appreciated. Please find my results visualized at:
BTW, I am now doing the 3rd point in the roadmap, and I would be happy to submit the PR from the 2nd point.

I had a quick look and it looks very interesting and promising. I'll have a closer look tomorrow and make some more comments (I need to run now).

@fmassa (Contributor) commented Mar 7, 2019

@botcs I've launched two trainings yesterday, with 8 GPUs each, for Mask R-CNN R-50 FPN. They are still training, and the time per iteration is about 2.2915 s, which is 5x slower than the equivalent implementation using polygons.

Does that match your expectations?

@botcs (Contributor Author) commented Mar 7, 2019

@fmassa I have experienced a 2x slowdown, but this is because of the CPU-intensive operations; probably the config is different.

If you use the polygon representation, the speed is expected to be the same as with the original implementation; my current training on the CityScapes polygon annotations confirms this: time: 0.4781 (0.4750) data: 0.0091 (0.0093) for the original image size (1024x2048).
The main bottleneck is roi_heads/mask_head/loss.py, where each instance is cropped and resized one by one, appended to a list, and then stacked. Note that this operation is extremely cheap for polygons.

The discussion in #527 was mainly motivated by the increased processing time... at the moment no tricks or heuristics are applied, just the original mask / poly representation refactored behind SegmentationMask.

@wangg12 (Contributor) commented Mar 7, 2019

@fmassa I've experienced a training speed of about 1 s/iter using mmdetection, feeding annotations with RLE-based segmentations that are then converted into binary masks for training. So I expect the speed should be faster in this repo.

@botcs (Contributor Author) commented Mar 7, 2019

@wangg12 I agree.

We could speed up literally every for loop here simply by multithreading them, which could save a lot of time. Also, mmdetection's spectacular overall training speed with binary masks can depend on multiple factors (not even talking about the image size); as I mentioned in the previous comment, the mask_head does not utilize any of the advantages of having the segmentations in a single torch tensor.

But to be fair, the original implementation of SegmentationMask should perform similarly in time complexity to this PR's; my point is that the primary goal of this PR was not optimizing the binary mask processing, but unifying the interface for polygons and binary masks.
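
The multithreading idea could be sketched as below (a hypothetical helper using the stdlib ThreadPoolExecutor; not code from this PR). Note that for pure-Python work the GIL limits the benefit, which is why heavy tensor ops that release the GIL, or a process pool, would be needed for real gains.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def crop_instances_threaded(masks, boxes, workers=4):
    """Run the independent per-instance crops concurrently.
    Each instance still needs its own (x1, y1, x2, y2) box, but since
    the crops do not depend on each other they can be mapped in parallel."""
    def crop(mask_box):
        mask, (x1, y1, x2, y2) = mask_box
        return mask[y1:y2, x1:x2].copy()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(crop, zip(masks, boxes)))
```

This keeps result order deterministic (pool.map preserves input order), so the subsequent resize-and-stack step is unchanged.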

@fmassa (Contributor) commented Mar 11, 2019

@botcs I've finished training with masks, but evaluation failed with the same type of error as before (the 0-d tensor):

RuntimeError: invalid argument 2: non-empty 4D input tensor expected but got: [1 x 0 x 480 x 640] at /opt/conda/conda-bld/pytorch-nightly_1551330564688/work/aten/src/THNN/generic/SpatialUpSamplingBilinear.c:22

so the fixes are not yet enough for this to be merged. :-/

@botcs (Contributor Author) commented Mar 11, 2019

@fmassa I am sad to hear that; I thought the interpolation commit had fixed this issue. Can you please send me the full error stack? Is it possible that a slicing operation is called after extracting the internal data representation with .get_mask_tensor()?

@botcs (Contributor Author) commented Mar 12, 2019

@fmassa in the COCO evaluation script (maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py) there is no occurrence of the SegmentationMask class. The process converts prediction.get_field("mask") to RLE, and that field is the torch tensor itself, not a SegmentationMask instance (see #527 for full details). The reported error may instead be related to the original Masker implementation in maskrcnn_benchmark/modeling/roi_heads/mask_head/inference.py, where the interpolation function is imported from torch.nn.functional instead of maskrcnn_benchmark.layers.misc; this is the same problem that caused the error in this implementation.

EDIT: PR #559 proposes a small modification to avoid the error; does that help?

@botcs botcs closed this Apr 1, 2019
@botcs botcs deleted the pr150 branch April 1, 2019 12:36
@botcs botcs restored the pr150 branch April 1, 2019 12:38
@botcs botcs mentioned this pull request Apr 1, 2019
@botcs botcs deleted the pr150 branch April 1, 2019 12:40
@botcs botcs restored the pr150 branch April 8, 2019 02:41
@botcs botcs deleted the pr150 branch April 8, 2019 02:50
@botcs botcs restored the pr150 branch April 8, 2019 02:51
@fmassa fmassa reopened this Apr 9, 2019
@fmassa (Contributor) left a comment:

LGTM, thanks!

@fmassa fmassa merged commit b4d5465 into facebookresearch:master Apr 9, 2019
elif isinstance(masks[0], dict) and "count" in masks[0]:
    # RLE interpretation
    masks = mask_utils

@IssamLaradji:

Shouldn't this be different for the case of a list of RLEs?

Contributor:

cc @botcs

Contributor Author:

@IssamLaradji Nice catch!

Here is a quick guide for usage (and a sanity check on the update):
https://gist.github.com/botcs/a59f3f59e22e5df93e3e5e4f86718af3

Contributor Author:

Fixed in PR #657

@IssamLaradji:

Perfect! The updated version looks good!


Quick question: will this get merged into master :P?

Contributor Author:

:P cc @fmassa can you merge #657?

Thanks

@botcs botcs changed the title Support Binary Mask with transparent SementationMask interface Support Binary Mask with transparent SegmentationMask interface Apr 9, 2019
Lyears pushed a commit to Lyears/maskrcnn-benchmark that referenced this pull request Jun 28, 2020
…ookresearch#473)

* support RLE and binary mask

* do not convert to numpy

* be consistent with Detectron

* delete wrong comment

* [WIP] add tests for segmentation_mask

* update tests

* minor change

* Refactored segmentation_mask.py

* Add unit test for segmentation_mask.py

* Add RLE support for BinaryMaskList

* PEP8 black formatting

* Minor patch

* Use internal  that handles 0 channels

* Fix polygon slicing
Labels: CLA Signed
5 participants