
Running the test script raises KeyError: 'img_name' #99

Open
lyx-0213 opened this issue Jul 12, 2022 · 6 comments

Comments

@lyx-0213

How can I fix this? Thanks.

@xingchen-yao

I ran into the same problem. Did you solve it?

@wuhengliangliang

Has this problem been solved?

@tiamjiakun

tiamjiakun commented Oct 13, 2022

In the prepare_test_data method of pan_ic15.py (after line 441), add these three lines:

img_name = img_path.split("/")[-1]
img_meta.update(dict(img_path=img_path))
img_meta.update(dict(img_name=img_name))

That fixed it for me.
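For context, here is a minimal sketch of what the patched method could look like. It assumes the dataset class follows the usual PAN test pipeline; the helpers get_img and scale_aligned_short and the exact normalization values are taken from typical versions of the repo and may differ in your copy.

def prepare_test_data(self, index):
    img_path = self.img_paths[index]
    img = get_img(img_path)  # repo helper (assumed): loads the image as a numpy array

    img_meta = dict(org_img_size=np.array(img.shape[:2]))
    img = scale_aligned_short(img, self.short_size)  # repo helper (assumed)
    img_meta.update(dict(img_size=np.array(img.shape[:2])))

    # The three added lines: record path and file name so that
    # ResultFormat.write_result() can read img_metas['img_name'].
    img_name = img_path.split("/")[-1]
    img_meta.update(dict(img_path=img_path))
    img_meta.update(dict(img_name=img_name))

    img = Image.fromarray(img).convert('RGB')
    img = transforms.ToTensor()(img)
    img = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225])(img)

    return dict(imgs=img, img_metas=img_meta)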

@zeng-cy

zeng-cy commented Nov 22, 2022

prepare_test_data

Hi, could you say more specifically where this file is? I couldn't find it.

@894900372

I also hit this while testing on ctw1500. The root cause is that the img_metas dictionary really does not contain an 'img_name' key, presumably because the image name is never set or passed along during data loading or preprocessing.
I worked around it by modifying utils/result_format.py:
class ResultFormat(object):
    def __init__(self, data_type, result_path):
        self.data_type = data_type
        self.result_path = result_path
        self.img_index = 0  # initialize an image index

        if osp.isfile(result_path):
            os.remove(result_path)

        if result_path.endswith('.zip'):
            result_path = result_path.replace('.zip', '')

        if not osp.exists(result_path):
            os.makedirs(result_path)

    def write_result(self, img_metas, outputs):
        # generate an image name from the running index instead of img_metas['img_name']
        img_name = f'image_{self.img_index}'
        self.img_index += 1  # update the index

        if 'IC15' in self.data_type:
            self._write_result_ic15(img_name, outputs)
        elif 'TT' in self.data_type:
            self._write_result_tt(img_name, outputs)
        elif 'CTW' in self.data_type:
            self._write_result_ctw(img_name, outputs)
        elif 'MSRA' in self.data_type:
            self._write_result_msra(img_name, outputs)

If you want to check whether the key is actually present, you can add a debug print in test.py:

def test(test_loader, model, cfg):
    model.eval()

    with_rec = hasattr(cfg.model, 'recognition_head')
    if with_rec:
        pp = Corrector(cfg.data.test.type, **cfg.test_cfg.rec_post_process)

    if cfg.vis:
        vis = Visualizer(vis_path=osp.join('vis/', cfg.data.test.type))

    rf = ResultFormat(cfg.data.test.type, cfg.test_cfg.result_path)

    if cfg.report_speed:
        speed_meters = dict(
            backbone_time=AverageMeter(500),
            neck_time=AverageMeter(500),
            det_head_time=AverageMeter(500),
            det_post_time=AverageMeter(500),
            rec_time=AverageMeter(500),
            total_time=AverageMeter(500))

    print('Start testing %d images' % len(test_loader))
    for idx, data in enumerate(test_loader):
        print('Testing %d/%d\r' % (idx, len(test_loader)), end='', flush=True)

        # print img_metas to see what's inside
        print("img_metas:", data['img_metas'])

        # prepare input
        data['imgs'] = data['imgs'].cuda()
        data.update(dict(cfg=cfg))

        # forward
        with torch.no_grad():
            outputs = model(**data)

        if cfg.report_speed:
            report_speed(outputs, speed_meters)

        # post process of recognition
        if with_rec:
            outputs = pp.process(data['img_metas'], outputs)

        # save result
        rf.write_result(data['img_metas'], outputs)

        # visualize
        if cfg.vis:
            vis.process(data['img_metas'], outputs)

    print('Done!')
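A hypothetical variant (a sketch, not code from the repo): instead of always replacing the name with a counter, write_result could prefer img_metas['img_name'] when the dataset supplies it (see the prepare_test_data fix above) and fall back to the index only when it is missing, so the result files keep the original image names:

    def write_result(self, img_metas, outputs):
        # Prefer the real file name when the dataset provides it; fall back
        # to a running index otherwise. Note: with a DataLoader at batch
        # size 1, string fields in img_metas usually arrive as
        # single-element lists, hence the [0].
        if 'img_name' in img_metas:
            img_name = osp.splitext(img_metas['img_name'][0])[0]
        else:
            img_name = f'image_{self.img_index}'
        self.img_index += 1

        if 'IC15' in self.data_type:
            self._write_result_ic15(img_name, outputs)
        elif 'TT' in self.data_type:
            self._write_result_tt(img_name, outputs)
        elif 'CTW' in self.data_type:
            self._write_result_ctw(img_name, outputs)
        elif 'MSRA' in self.data_type:
            self._write_result_msra(img_name, outputs)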

@CodeSailor369


Thank you for your solution!
