AttributeError: 'COCO' object has no attribute 'get_cat_ids' #2913

Closed
sizhky opened this issue Jun 5, 2020 · 16 comments

sizhky commented Jun 5, 2020

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.

Describe the bug
I was trying to train SSD300 on a custom dataset with COCO-style annotations on my local system and encountered this error during training.

Reproduction

  1. What command or script did you run?
python tools/train.py configs/custom_training/ssd300_coco.py
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
  • I copied ssd300_coco.py directly and only modified the data and annotation paths.
  3. What dataset did you use?
  • A subset of Open Images

Environment

  1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
sys.platform: linux
Python: 3.7.6 (default, Jan  8 2020, 19:59:22) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0: GeForce GTX 1070
GCC: gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

TorchVision: 0.6.0
OpenCV: 4.2.0
MMCV: 0.5.9
MMDetection: 2.0.0+8fc0542
MMDetection Compiler: GCC 9.3
MMDetection CUDA Compiler: 10.1
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.

...
...
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/ssd300_coco'
gpu_ids = range(0, 1)

2020-06-06 00:18:26,654 - root - INFO - load model from: open-mmlab://vgg16_caffe
2020-06-06 00:18:26,691 - mmdet - WARNING - The model and loaded state dict do not match exactly

missing keys in source state_dict: extra.0.weight, extra.0.bias, extra.1.weight, extra.1.bias, extra.2.weight, extra.2.bias, extra.3.weight, extra.3.bias, extra.4.weight, extra.4.bias, extra.5.weight, extra.5.bias, extra.6.weight, extra.6.bias, extra.7.weight, extra.7.bias, l2_norm.weight

loading annotations into memory...
Done (t=0.09s)
creating index...
index created!
Traceback (most recent call last):
  File "tools/train.py", line 161, in <module>
    main()
  File "tools/train.py", line 136, in main
    datasets = [build_dataset(cfg.data.train)]
  File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/builder.py", line 56, in build_dataset
    build_dataset(cfg['dataset'], default_args), cfg['times'])
  File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/builder.py", line 63, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS, default_args)
  File "/home/yyr/anaconda3/lib/python3.7/site-packages/mmcv/utils/registry.py", line 168, in build_from_cfg
    return obj_cls(**args)
  File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/custom.py", line 71, in __init__
    self.data_infos = self.load_annotations(self.ann_file)
  File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/coco.py", line 38, in load_annotations
    self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES)
AttributeError: 'COCO' object has no attribute 'get_cat_ids'

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

lolipopshock commented Jun 5, 2020

I think the problem comes from this commit, #2088, which changed the API names used for COCO in the coco.py file.
You can either:

  1. revert the coco.py file to the previous version via git checkout 206107 -- mmdet/datasets/coco.py, or
  2. reinstall the mmdet-compatible version of the COCO API via pip install -U "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"
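
If neither is convenient, a rough third workaround is to add the snake_case names yourself as aliases. This is only a sketch: it assumes the stock pycocotools is installed and it covers only the renamed lookup methods (the open-mmlab fork may differ in other details, e.g. attribute names), so the two options above are still safer.

# Ad-hoc shim (assumes the official/stock pycocotools is installed): add the
# snake_case names that newer mmdet code calls as aliases of the original
# camelCase pycocotools methods. Run or import this before building datasets.
from pycocotools.coco import COCO

_aliases = {
    'get_cat_ids': 'getCatIds',
    'get_img_ids': 'getImgIds',
    'get_ann_ids': 'getAnnIds',
    'load_cats': 'loadCats',
    'load_imgs': 'loadImgs',
    'load_anns': 'loadAnns',
}

for _new, _old in _aliases.items():
    # don't clobber anything if an installed fork already provides the snake_case name
    if not hasattr(COCO, _new):
        setattr(COCO, _new, getattr(COCO, _old))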

sizhky commented Jun 6, 2020

Patching aside, doesn't this mean all COCO-related training runs are going to break?

hellock commented Jun 7, 2020

Since the official cocoapi repo is out of maintenance, we decided to have our own fork, which fixes bugs and compatibility issues with newer versions of numpy. It also unifies the APIs of COCO and LVIS.

ppwwyyxx commented Jun 9, 2020

The compatibility issue with newer version of numpy has actually been fixed in the official version in cocodataset/cocoapi#354
(I'm testing mmdet and also found this error)

hellock commented Jun 9, 2020

The compatibility issue with newer version of numpy has actually been fixed in the official version in cocodataset/cocoapi#354
(I'm testing mmdet and also found this error)

I see. Is there any chance that Piotr will ask someone to update the pypi package so that pycocotools can be put in install_requires? For this issue, we added some snake case aliases for methods in pycocotools in our fork and updated the installation guide.

ppwwyyxx commented Jun 9, 2020

Unfortunately we don't own the pypi package. It was created by some random guy, I think. Maybe we can try to get in contact.

xvjiarui commented Jun 9, 2020

When pycocotools (either the pypi or the official github version) already exists in the environment, running pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools" may not work.
This issue should already be fixed in open-mmlab/cocoapi#5.

@gravitychen

Same question here.
I think this happens when pycocotools is updated to a higher (official) version.
My solution is to change the calls back:
get_cat_ids ---> getCatIds
get_img_ids ---> getImgIds
....
If you don't want to change them one by one, copy the code below (it goes right under the variable CLASSES = ('...','...','...',) in coco.py).
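
# NOTE: the methods below mirror CocoDataset in mmdet/datasets/coco.py, with the
# pycocotools calls renamed back to the original camelCase API (getCatIds,
# getImgIds, getAnnIds, loadImgs, loadAnns, ...); the surrounding logic and the
# module-level imports of that file (np, mmcv, itertools, print_log, COCOeval, ...) are unchanged.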

def load_annotations(self, ann_file):
    self.coco = COCO(ann_file)
    self.cat_ids = self.coco.getCatIds(catNms=self.CLASSES)
    self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
    self.img_ids = self.coco.getImgIds()
    data_infos = []
    for i in self.img_ids:
        info = self.coco.loadImgs([i])[0]
        info['filename'] = info['file_name']
        data_infos.append(info)
    return data_infos

def get_ann_info(self, idx):
    img_id = self.data_infos[idx]['id']
    ann_ids = self.coco.getAnnIds(imgIds=[img_id])
    ann_info = self.coco.loadAnns(ann_ids)
    return self._parse_ann_info(self.data_infos[idx], ann_info)

def get_cat_ids(self, idx):
    img_id = self.data_infos[idx]['id']
    ann_ids = self.coco.getAnnIds(imgIds=[img_id])
    ann_info = self.coco.loadAnns(ann_ids)
    return [ann['category_id'] for ann in ann_info]

def _filter_imgs(self, min_size=32):
    """Filter images too small or without ground truths."""
    valid_inds = []
    ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values())
    for i, img_info in enumerate(self.data_infos):
        if self.filter_empty_gt and self.img_ids[i] not in ids_with_ann:
            continue
        if min(img_info['width'], img_info['height']) >= min_size:
            valid_inds.append(i)
    return valid_inds

def get_subset_by_classes(self):
    """Get img ids that contain any category in class_ids.

    Different from the coco.getImgIds(), this function returns the id if
    the img contains one of the categories rather than all.

    Args:
        class_ids (list[int]): list of category ids

    Return:
        ids (list[int]): integer list of img ids
    """

    ids = set()
    for i, class_id in enumerate(self.cat_ids):
        ids |= set(self.coco.catToImgs[class_id])  # catToImgs is the official pycocotools name for cat_img_map
    self.img_ids = list(ids)

    data_infos = []
    for i in self.img_ids:
        info = self.coco.loadImgs([i])[0]
        info['filename'] = info['file_name']
        data_infos.append(info)
    return data_infos

def _parse_ann_info(self, img_info, ann_info):
    """Parse bbox and mask annotation.

    Args:
        ann_info (list[dict]): Annotation info of an image.
        with_mask (bool): Whether to parse mask annotations.

    Returns:
        dict: A dict containing the following keys: bboxes, bboxes_ignore,
            labels, masks, seg_map. "masks" are raw annotations and not
            decoded into binary masks.
    """
    gt_bboxes = []
    gt_labels = []
    gt_bboxes_ignore = []
    gt_masks_ann = []

    for i, ann in enumerate(ann_info):
        if ann.get('ignore', False):
            continue
        x1, y1, w, h = ann['bbox']
        if ann['area'] <= 0 or w < 1 or h < 1:
            continue
        if ann['category_id'] not in self.cat_ids:
            continue
        bbox = [x1, y1, x1 + w, y1 + h]
        if ann.get('iscrowd', False):
            gt_bboxes_ignore.append(bbox)
        else:
            gt_bboxes.append(bbox)
            gt_labels.append(self.cat2label[ann['category_id']])
            gt_masks_ann.append(ann['segmentation'])

    if gt_bboxes:
        gt_bboxes = np.array(gt_bboxes, dtype=np.float32)
        gt_labels = np.array(gt_labels, dtype=np.int64)
    else:
        gt_bboxes = np.zeros((0, 4), dtype=np.float32)
        gt_labels = np.array([], dtype=np.int64)

    if gt_bboxes_ignore:
        gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32)
    else:
        gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32)

    seg_map = img_info['filename'].replace('jpg', 'png')

    ann = dict(
        bboxes=gt_bboxes,
        labels=gt_labels,
        bboxes_ignore=gt_bboxes_ignore,
        masks=gt_masks_ann,
        seg_map=seg_map)

    return ann

def xyxy2xywh(self, bbox):
    _bbox = bbox.tolist()
    return [
        _bbox[0],
        _bbox[1],
        _bbox[2] - _bbox[0],
        _bbox[3] - _bbox[1],
    ]

def _proposal2json(self, results):
    json_results = []
    for idx in range(len(self)):
        img_id = self.img_ids[idx]
        bboxes = results[idx]
        for i in range(bboxes.shape[0]):
            data = dict()
            data['image_id'] = img_id
            data['bbox'] = self.xyxy2xywh(bboxes[i])
            data['score'] = float(bboxes[i][4])
            data['category_id'] = 1
            json_results.append(data)
    return json_results

def _det2json(self, results):
    json_results = []
    for idx in range(len(self)):
        img_id = self.img_ids[idx]
        result = results[idx]
        for label in range(len(result)):
            bboxes = result[label]
            for i in range(bboxes.shape[0]):
                data = dict()
                data['image_id'] = img_id
                data['bbox'] = self.xyxy2xywh(bboxes[i])
                data['score'] = float(bboxes[i][4])
                data['category_id'] = self.cat_ids[label]
                json_results.append(data)
    return json_results

def _segm2json(self, results):
    bbox_json_results = []
    segm_json_results = []
    for idx in range(len(self)):
        img_id = self.img_ids[idx]
        det, seg = results[idx]
        for label in range(len(det)):
            # bbox results
            bboxes = det[label]
            for i in range(bboxes.shape[0]):
                data = dict()
                data['image_id'] = img_id
                data['bbox'] = self.xyxy2xywh(bboxes[i])
                data['score'] = float(bboxes[i][4])
                data['category_id'] = self.cat_ids[label]
                bbox_json_results.append(data)

            # segm results
            # some detectors use different scores for bbox and mask
            if isinstance(seg, tuple):
                segms = seg[0][label]
                mask_score = seg[1][label]
            else:
                segms = seg[label]
                mask_score = [bbox[4] for bbox in bboxes]
            for i in range(bboxes.shape[0]):
                data = dict()
                data['image_id'] = img_id
                data['bbox'] = self.xyxy2xywh(bboxes[i])
                data['score'] = float(mask_score[i])
                data['category_id'] = self.cat_ids[label]
                if isinstance(segms[i]['counts'], bytes):
                    segms[i]['counts'] = segms[i]['counts'].decode()
                data['segmentation'] = segms[i]
                segm_json_results.append(data)
    return bbox_json_results, segm_json_results

def results2json(self, results, outfile_prefix):
    """Dump the detection results to a json file.

    There are 3 types of results: proposals, bbox predictions, mask
    predictions, and they have different data types. This method will
    automatically recognize the type, and dump them to json files.

    Args:
        results (list[list | tuple | ndarray]): Testing results of the
            dataset.
        outfile_prefix (str): The filename prefix of the json files. If the
            prefix is "somepath/xxx", the json files will be named
            "somepath/xxx.bbox.json", "somepath/xxx.segm.json",
            "somepath/xxx.proposal.json".

    Returns:
        dict[str: str]: Possible keys are "bbox", "segm", "proposal", and
            values are corresponding filenames.
    """
    result_files = dict()
    if isinstance(results[0], list):
        json_results = self._det2json(results)
        result_files['bbox'] = f'{outfile_prefix}.bbox.json'
        result_files['proposal'] = f'{outfile_prefix}.bbox.json'
        mmcv.dump(json_results, result_files['bbox'])
    elif isinstance(results[0], tuple):
        json_results = self._segm2json(results)
        result_files['bbox'] = f'{outfile_prefix}.bbox.json'
        result_files['proposal'] = f'{outfile_prefix}.bbox.json'
        result_files['segm'] = f'{outfile_prefix}.segm.json'
        mmcv.dump(json_results[0], result_files['bbox'])
        mmcv.dump(json_results[1], result_files['segm'])
    elif isinstance(results[0], np.ndarray):
        json_results = self._proposal2json(results)
        result_files['proposal'] = f'{outfile_prefix}.proposal.json'
        mmcv.dump(json_results, result_files['proposal'])
    else:
        raise TypeError('invalid type of results')
    return result_files

def fast_eval_recall(self, results, proposal_nums, iou_thrs, logger=None):
    gt_bboxes = []
    for i in range(len(self.img_ids)):
        ann_ids = self.coco.getAnnIds(imgIds=self.img_ids[i])
        ann_info = self.coco.loadAnns(ann_ids)
        if len(ann_info) == 0:
            gt_bboxes.append(np.zeros((0, 4)))
            continue
        bboxes = []
        for ann in ann_info:
            if ann.get('ignore', False) or ann['iscrowd']:
                continue
            x1, y1, w, h = ann['bbox']
            bboxes.append([x1, y1, x1 + w, y1 + h])
        bboxes = np.array(bboxes, dtype=np.float32)
        if bboxes.shape[0] == 0:
            bboxes = np.zeros((0, 4))
        gt_bboxes.append(bboxes)

    recalls = eval_recalls(
        gt_bboxes, results, proposal_nums, iou_thrs, logger=logger)
    ar = recalls.mean(axis=1)
    return ar

def format_results(self, results, jsonfile_prefix=None, **kwargs):
    """Format the results to json (standard format for COCO evaluation).

    Args:
        results (list): Testing results of the dataset.
        jsonfile_prefix (str | None): The prefix of json files. It includes
            the file path and the prefix of filename, e.g., "a/b/prefix".
            If not specified, a temp file will be created. Default: None.

    Returns:
        tuple: (result_files, tmp_dir), result_files is a dict containing
            the json filepaths, tmp_dir is the temporal directory created
            for saving json files when jsonfile_prefix is not specified.
    """
    assert isinstance(results, list), 'results must be a list'
    assert len(results) == len(self), (
        'The length of results is not equal to the dataset len: {} != {}'.
        format(len(results), len(self)))

    if jsonfile_prefix is None:
        tmp_dir = tempfile.TemporaryDirectory()
        jsonfile_prefix = osp.join(tmp_dir.name, 'results')
    else:
        tmp_dir = None
    result_files = self.results2json(results, jsonfile_prefix)
    return result_files, tmp_dir

def evaluate(self,
             results,
             metric='bbox',
             logger=None,
             jsonfile_prefix=None,
             classwise=False,
             proposal_nums=(100, 300, 1000),
             iou_thrs=np.arange(0.5, 0.96, 0.05)):
    """Evaluation in COCO protocol.

    Args:
        results (list): Testing results of the dataset.
        metric (str | list[str]): Metrics to be evaluated.
        logger (logging.Logger | str | None): Logger used for printing
            related information during evaluation. Default: None.
        jsonfile_prefix (str | None): The prefix of json files. It includes
            the file path and the prefix of filename, e.g., "a/b/prefix".
            If not specified, a temp file will be created. Default: None.
        classwise (bool): Whether to evaluating the AP for each class.
        proposal_nums (Sequence[int]): Proposal number used for evaluating
            recalls, such as recall@100, recall@1000.
            Default: (100, 300, 1000).
        iou_thrs (Sequence[float]): IoU threshold used for evaluating
            recalls. If set to a list, the average recall of all IoUs will
            also be computed. Default: 0.5.

    Returns:
        dict[str: float]
    """

    metrics = metric if isinstance(metric, list) else [metric]
    allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast']
    for metric in metrics:
        if metric not in allowed_metrics:
            raise KeyError(f'metric {metric} is not supported')

    result_files, tmp_dir = self.format_results(results, jsonfile_prefix)

    eval_results = {}
    cocoGt = self.coco
    for metric in metrics:
        msg = f'Evaluating {metric}...'
        if logger is None:
            msg = '\n' + msg
        print_log(msg, logger=logger)

        if metric == 'proposal_fast':
            ar = self.fast_eval_recall(
                results, proposal_nums, iou_thrs, logger='silent')
            log_msg = []
            for i, num in enumerate(proposal_nums):
                eval_results[f'AR@{num}'] = ar[i]
                log_msg.append(f'\nAR@{num}\t{ar[i]:.4f}')
            log_msg = ''.join(log_msg)
            print_log(log_msg, logger=logger)
            continue

        if metric not in result_files:
            raise KeyError(f'{metric} is not in results')
        try:
            cocoDt = cocoGt.loadRes(result_files[metric])
        except IndexError:
            print_log(
                'The testing results of the whole dataset is empty.',
                logger=logger,
                level=logging.ERROR)
            break

        iou_type = 'bbox' if metric == 'proposal' else metric
        cocoEval = COCOeval(cocoGt, cocoDt, iou_type)
        cocoEval.params.catIds = self.cat_ids
        cocoEval.params.imgIds = self.img_ids
        if metric == 'proposal':
            cocoEval.params.useCats = 0
            cocoEval.params.maxDets = list(proposal_nums)
            cocoEval.evaluate()
            cocoEval.accumulate()
            cocoEval.summarize()
            metric_items = [
                'AR@100', 'AR@300', 'AR@1000', 'AR_s@1000', 'AR_m@1000',
                'AR_l@1000'
            ]
            for i, item in enumerate(metric_items):
                val = float(f'{cocoEval.stats[i + 6]:.3f}')
                eval_results[item] = val
        else:
            cocoEval.evaluate()
            cocoEval.accumulate()
            cocoEval.summarize()
            if classwise:  # Compute per-category AP
                # Compute per-category AP
                # from https://github.com/facebookresearch/detectron2/
                precisions = cocoEval.eval['precision']
                # precision: (iou, recall, cls, area range, max dets)
                assert len(self.cat_ids) == precisions.shape[2]

                results_per_category = []
                for idx, catId in enumerate(self.cat_ids):
                    # area range index 0: all area ranges
                    # max dets index -1: typically 100 per image
                    nm = self.coco.loadCats(catId)[0]
                    precision = precisions[:, :, idx, 0, -1]
                    precision = precision[precision > -1]
                    if precision.size:
                        ap = np.mean(precision)
                    else:
                        ap = float('nan')
                    results_per_category.append(
                        (f'{nm["name"]}', f'{float(ap):0.3f}'))

                num_columns = min(6, len(results_per_category) * 2)
                results_flatten = list(
                    itertools.chain(*results_per_category))
                headers = ['category', 'AP'] * (num_columns // 2)
                results_2d = itertools.zip_longest(*[
                    results_flatten[i::num_columns]
                    for i in range(num_columns)
                ])
                table_data = [headers]
                table_data += [result for result in results_2d]
                table = AsciiTable(table_data)
                print_log('\n' + table.table, logger=logger)

            metric_items = [
                'mAP', 'mAP_50', 'mAP_75', 'mAP_s', 'mAP_m', 'mAP_l'
            ]
            for i in range(len(metric_items)):
                key = f'{metric}_{metric_items[i]}'
                val = float(f'{cocoEval.stats[i]:.3f}')
                eval_results[key] = val
            ap = cocoEval.stats[:6]
            eval_results[f'{metric}_mAP_copypaste'] = (
                f'{ap[0]:.3f} {ap[1]:.3f} {ap[2]:.3f} {ap[3]:.3f} '
                f'{ap[4]:.3f} {ap[5]:.3f}')
    if tmp_dir is not None:
        tmp_dir.cleanup()
    return eval_results

hellock commented Jun 9, 2020

@gravitychen You can just install the new pycocotools from open-mmlab. Modifying coco.py is not a good solution.

pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"
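
If in doubt whether the fork actually shadows a previously installed pycocotools, a quick sanity check (assuming pycocotools is importable) is:

from pycocotools.coco import COCO

# The open-mmlab fork exposes both the snake_case and the original camelCase names.
print(hasattr(COCO, 'get_cat_ids'), hasattr(COCO, 'getCatIds'))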

azibit commented Jun 10, 2020

pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"

@hellock Thanks. This fixed get_cat_ids for me.

hellock closed this as completed Jun 11, 2020
@guotong1988

Thank you!

Mxbonn commented Jun 15, 2020

Can't mmlab keep aliases to the old function names in their fork? That way, people who use the git version of the official COCO API (which is up to date with the numpy changes) wouldn't have to change the coco file in mmdet.

I personally don't think forcing people to use your fork of the COCO API is the way to go.

hellock commented Jun 15, 2020

@Mxbonn We would not keep our own fork at all if the official one were well maintained.

Our fork contains both the original and the snake_case method names. It solves the following problems, and we think the benefits outweigh the drawbacks.

  • We add snake_case method aliases to make the COCO and LVIS APIs consistent, resulting in cleaner and simpler dataset implementations. Snake case is also recommended by PEP 8.
  • We relax the version limitation of requirements in lvis. (The original lvis api pins specific versions with numpy==xxx, matplotlib==xxxx that are unnecessary, and the authors are not responding to issues.)
  • The official repo does not provide timely bug fixes. E.g., the incompatibility with the latest numpy caused errors and we had to pin the version for quite a long time. It was fixed later, but can be fixed much faster in our own fork.

@ppwwyyxx

We have taken back control of the name "pycocotools" on PyPI. The package is now updated to match the one on GitHub.

luuuyi commented Jul 7, 2020

@hellock I vote for using mmlab's cocoapi. Thanks for your maintenance.

hachreak commented Aug 3, 2020

Hi everyone,
is there any news on updating the official pycocotools? Or on extending the pycocotools COCO class without overwriting the official code? Thanks
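
For the second option, something along these lines is what I have in mind (just a sketch, and it only covers the renamed lookup methods discussed in this thread, not any renamed attributes):

from pycocotools.coco import COCO as _COCO


class COCOWrapper(_COCO):
    """Subclass of the stock pycocotools COCO that adds the snake_case names
    newer mmdet code expects, without touching the installed package."""

    def get_cat_ids(self, cat_names=(), sup_names=(), cat_ids=()):
        return self.getCatIds(list(cat_names), list(sup_names), list(cat_ids))

    def get_img_ids(self, img_ids=(), cat_ids=()):
        return self.getImgIds(list(img_ids), list(cat_ids))

    def get_ann_ids(self, img_ids=(), cat_ids=(), area_rng=(), iscrowd=None):
        return self.getAnnIds(list(img_ids), list(cat_ids), list(area_rng), iscrowd)

    def load_anns(self, ids):
        return self.loadAnns(ids)

    def load_imgs(self, ids):
        return self.loadImgs(ids)

The dataset code would then need to construct COCOWrapper(ann_file) instead of COCO(ann_file), so it is not a zero-change solution either.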
