About quick test own single image. #1

Open
alvinlin1271320 opened this issue Jun 4, 2023 · 4 comments

@alvinlin1271320

Thanks for the code. I would like to ask whether I could test a single image of my own rather than the SIDD dataset? If so, what should I do?

I ran your code like this, and below is the error message.
$ python test.py -s LGBPN_SIDD -c APBSN_SIDD/BSN_SIDD -g 0 -e 20 --test_img {my_image}

"src\util\util.py", line 300, in forward
b, c, h, w = x.shape
ValueError: not enough values to unpack (expected 4, got 2)

@Wang-XIaoDingdd
Owner

Hi alvinlin1271320:

  1. Testing one image is OK.
  2. It seems that the error comes from the incorrect shape of x.
    So I wonder if your input image has only 1 channel, so that x is of shape [h, w]? The input should have 3 channels (RGB) for real image denoising :)
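
(As an illustrative sketch only, not part of the repo: Pillow and NumPy can be used to check whether a test image is being loaded as single-channel grayscale, which would give x the shape [h, w], and to save a 3-channel RGB copy to pass to test.py. The file names below are placeholders.)

from PIL import Image
import numpy as np

img = Image.open('my_image.png')                  # placeholder path to the test image
print(np.array(img).shape)                        # (h, w) means grayscale: only 2 values to unpack
if img.mode != 'RGB':                             # grayscale (L) or RGBA input
    img.convert('RGB').save('my_image_rgb.png')   # 3-channel copy to pass via --test_img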

@gaoyuanhang
Copy link

I have also encountered this issue. I am sure that I passed in a color image with 3 channels, but I still get an error:
ValueError: not enough values to unpack (expected 4, got 3).
I hope you can answer this question.
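
(For context, a hedged sketch of what this variant of the error suggests: "got 3" usually means the tensor reaching the unpack b, c, h, w = x.shape is [c, h, w], i.e. the 3 channels are present but the batch dimension is missing, which in PyTorch can be reproduced and worked around like this:)

import torch

x = torch.rand(3, 256, 256)   # [c, h, w]: what the error message implies is arriving
x = x.unsqueeze(0)            # [1, c, h, w]: add the missing batch dimension
b, c, h, w = x.shape          # now unpacks without a ValueError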

@xzq-2000

xzq-2000 commented Jul 2, 2024

I also ran into this when testing on my dataset:
(mmdet) D:\programs\denoise_sonar_programs\LGBPN-master>python test.py --config APBSN_DND/BSN_DND --pretrained D:\programs\denoise_sonar_programs\LGBPN-master\output\test\checkpoint\test_020.pth --test_dir D:\programs\denoise_sonar_programs\LGBPN-master\dataset\Sonar\RN
model loaded : D:\programs\denoise_sonar_programs\LGBPN-master\output\test\checkpoint\test_020.pth
Start >>>
C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:3191.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
torch.Size([1, 24, 392, 520])
torch.Size([1, 24, 392, 520])
torch.Size([1, 24, 392, 520])
torch.Size([1, 208, 272])
torch.Size([1, 208, 272])
Traceback (most recent call last):
File "test.py", line 44, in
main()
File "test.py", line 40, in main
trainer.test()
File "C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\trainer\trainer.py", line 39, in test
self.test_dir(self.cfg['test_dir'])
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\trainer\base.py", line 480, in test_dir
self.test_img(os.path.join(direc, ff), os.path.join(direc, 'results'))
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\trainer\base.py", line 448, in test_img
denoised = self.denoiser(noisy)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\APBSN.py", line 67, in denoise
img_pd_bsn = self.forward(img=x, pd=self.pd_b)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\APBSN.py", line 58, in forward
pd_img_denoised = self.bsn(img, dict=dict)
File "C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\DBSNl.py", line 502, in forward
br2 = self.branch2(x, refine, dict=dict)
File "C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\DBSNl.py", line 156, in forward
x = self.Maskconv(x, refine, dict=dict, SIDD=self.SIDD)
File "C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\DBSNl.py", line 366, in forward
x_out = self.forward_chop(x, ratio=pd_test_ratio)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\DBSNl.py", line 423, in forward_chop
y = self.forward_chop(*p, shave=shave, min_size=min_size, ratio=ratio)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\model\DBSNl.py", line 403, in forward_chop
x_offset = P.data_parallel(deform_conv, *x, range(n_GPUs))
File "C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\nn\parallel\data_parallel.py", line 231, in data_parallel
return module(*inputs[0], **module_kwargs[0])
File "C:\Users\admin.conda\envs\mmdet\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "D:\programs\denoise_sonar_programs\LGBPN-master\src\util\util.py", line 301, in forward
b, c, h, w = x.shape
ValueError: not enough values to unpack (expected 4, got 3)
Is there any way to fix this? I have tried both three-channel and single-channel test images and get this error either way. I would be grateful if you could take a look at it when you have time.

@xzq-2000

xzq-2000 commented Jul 2, 2024

I tried modifying the corresponding part in the utils file as shown below; I am not sure whether this change affects the correctness of the denoising. I would also like to ask roughly how long it takes you to test a 768×1024 image, since it is very slow on my machine. Thank you.
# Proposed change in forward_chop: give any 3-D tensor a batch dimension before
# the four-way chop, so that b, c, h, w = x.shape can unpack later on.
x_chops = [
    torch.cat([
        a.unsqueeze(0)[..., top, left],
        a.unsqueeze(0)[..., top, right],
        a.unsqueeze(0)[..., bottom, left],
        a.unsqueeze(0)[..., bottom, right]
    ]) if a.dim() == 3 else torch.cat([
        a[..., top, left],
        a[..., top, right],
        a[..., bottom, left],
        a[..., bottom, right]
    ]) for a in args
]
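
(A possibly simpler equivalent, offered only as a sketch under the same assumption that the offending tensors in args are the 3-D ones: promote them to 4-D once up front, so the original four-way chop can stay unchanged.)

# Hypothetical alternative: add the batch dimension before slicing instead of inside it.
args = [a.unsqueeze(0) if a.dim() == 3 else a for a in args]
x_chops = [
    torch.cat([
        a[..., top, left],
        a[..., top, right],
        a[..., bottom, left],
        a[..., bottom, right]
    ]) for a in args
]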
