2022-01-28 22:33:24,528 - mmdet - INFO - Evaluating bbox...
Loading and preparing results...
2022-01-28 22:33:24,529 - mmdet - ERROR - The testing results of the whole dataset is empty.
/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/hooks/evaluation.py:374: UserWarning: Since `eval_res` is an empty dict, the behavior to save the best checkpoint will be skipped in this evaluation.
warnings.warn(
Traceback (most recent call last):
File "/home/superdisk/tensorflow-great-barrier-reef/tools/train.py", line 185, in <module>
main()
File "/home/superdisk/tensorflow-great-barrier-reef/tools/train.py", line 174, in main
train_detector(
File "/home/superdisk/tensorflow-great-barrier-reef/mmdet/apis/train.py", line 203, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 54, in train
self.call_hook('after_train_epoch')
File "/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
getattr(hook, fn_name)(self)
File "/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/hooks/evaluation.py", line 267, in after_train_epoch
self._do_evaluate(runner)
File "/home/superdisk/tensorflow-great-barrier-reef/mmdet/core/evaluation/eval_hooks.py", line 60, in _do_evaluate
self._save_ckpt(runner, key_score)
File "/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/hooks/evaluation.py", line 330, in _save_ckpt
if self.compare_func(key_score, best_score):
File "/root/anaconda3/lib/python3.9/site-packages/mmcv/runner/hooks/evaluation.py", line 77, in <lambda>
rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
TypeError: '>' not supported between instances of 'NoneType' and 'float'
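The last two frames of the traceback explain the failure: when evaluation yields an empty result dict, `key_score` ends up as `None`, and the `rule_map` comparison lambda then compares `None` against the stored best score. A minimal standalone reproduction of that failure mode (variable values are illustrative, not taken from the run):

```python
# The comparison table from mmcv/runner/hooks/evaluation.py.
rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
compare_func = rule_map['greater']

key_score = None   # evaluation produced an empty result dict
best_score = 0.5   # a previously saved best score (illustrative value)

try:
    compare_func(key_score, best_score)
    msg = None
except TypeError as exc:
    # Same error as in the traceback above.
    msg = str(exc)

print(msg)
```

This reproduces the `'>' not supported between instances of 'NoneType' and 'float'` error without running any training.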
Bug fix
I think we can fix this by modifying eval_hooks.py, but I'm not sure it's the right approach.
```python
def _do_evaluate(self, runner):
    """perform evaluation and save ckpt."""
    if not self._should_evaluate(runner):
        return

    from mmdet.apis import single_gpu_test
    results = single_gpu_test(runner.model, self.dataloader, show=False)
    runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
    key_score = self.evaluate(runner, results)
    # the key_score may be `None` so it needs to skip the action to save
    # the best checkpoint
    if self.save_best and key_score:
        self._save_ckpt(runner, key_score)
```
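A note on the guard in the proposed fix: `if self.save_best and key_score:` is a truthiness check, so it skips `None` (the empty-evaluation case), but it would also skip a legitimate score of exactly `0.0`. A tiny sketch with a hypothetical helper name, showing the behavior:

```python
def should_save_best(save_best, key_score):
    """Mirror the guard `if self.save_best and key_score:` from the fix.

    Skips saving when key_score is None (empty evaluation), but note it
    also skips a legitimate score of 0.0; `key_score is not None` would
    be stricter if that edge case matters.
    """
    return bool(save_best and key_score)

print(should_save_best(True, None))   # empty evaluation: no save
print(should_save_best(True, 0.42))   # normal case: save
print(should_save_best(True, 0.0))    # edge case: also skipped
```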
huang-jesse changed the title to "[eval_hooks.py] ERROR - The testing results of the whole dataset is empty." on Jan 29, 2022
huang-jesse changed the title to "[eval_hooks.py] TypeError: '>' not supported between instances of 'NoneType' and 'float'" on Jan 29, 2022
Hello @LuooChen, this is really a bug in mmdetection and your modification is right.
If possible, can you create a pull request following this guideline and contribute to mmdetection? We appreciate your contribution.
Checklist
Describe the bug
I think I hit the same issue as "ERROR - The testing results of the whole dataset is empty - YOLOX and COCO", which was fixed in mmcv, but it is not resolved in eval_hooks.py.
Reproduction
Environment
necessary environment information:
Error traceback
The error log is below: