Performance of given model and trained model #15

Open
chaos5958 opened this issue Dec 30, 2017 · 2 comments
chaos5958 commented Dec 30, 2017

Hello, I tested the given model and a model I trained with the default parameters you provide.
I also constructed the test dataset using MATLAB and checked that the bicubic performance matches the paper.
However, the VDSR performance is slightly different from what you report in README.md.

Besides, I once implemented VDSR using TensorFlow+PIL, but the results were too bad, so I'm currently testing your repo using PyTorch+MATLAB.
Anyway, thank you for the code!

  1. Why do you 'shave_border' when evaluating PSNR? Is it the norm in super-resolution research?

  2. How can I reproduce your result (i.e., better than the paper)?

  3. Should I do bicubic interpolation on normalized values (e.g., 0-1)? The results seem slightly different.
    It's a little strange that bicubic is done on double values in the training stage, but on integer values in the test stage.
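To illustrate question 3, here is a small sketch of what I mean (using `scipy.ndimage.zoom` with `order=3` as a stand-in cubic interpolator, not the MATLAB imresize used for the dataset): quantizing to 8-bit integers before interpolation shifts the result compared to interpolating the double-valued image.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
hr = rng.random((16, 16))            # "double" image in [0, 1]
lr = zoom(hr, 0.25, order=3)         # low-resolution version

# Path A: interpolate the double-valued LR image directly.
up_double = zoom(lr, 4.0, order=3)

# Path B: quantize LR to uint8 first (as happens with saved PNGs),
# then normalize and interpolate.
lr_u8 = np.clip(np.round(lr * 255.0), 0, 255).astype(np.uint8)
up_int = zoom(lr_u8.astype(np.float64) / 255.0, 4.0, order=3)

# The two paths differ by up to the quantization error spread by the filter.
print(np.abs(up_double - up_int).max())  # nonzero
```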

Set 5, Scale 4, shave_border = 0
bicubic = 28.422
vdsr (given) = 30.797
vdsr (trained) = 30.651

Set 5, Scale 4, shave_border = 4
bicubic = 28.414
vdsr (given) = 30.880
vdsr (trained) = 30.727
vdsr (README) = 31.35 (I want to reproduce this one!)

Here is part of my test code.

import time

import scipy.io as sio
import torch
from torch.autograd import Variable

total_psnr_bicubic = 0.0
total_psnr_sr = 0.0

for filename in filenames:
    # Ground-truth and bicubic-upscaled Y channels, stored as 0-255 arrays.
    im_gt_y = sio.loadmat(filename)['im_label']
    im_b_y = sio.loadmat(filename)['im_input']

    # Normalize to [0, 1] and reshape to (batch, channel, height, width).
    im_input = im_b_y / 255.
    im_input = Variable(torch.from_numpy(im_input).float()).view(
        1, -1, im_input.shape[0], im_input.shape[1])

    im_input = im_input.cuda() if cuda else im_input.cpu()

    start_time = time.time()
    im_result_y = model(im_input)
    im_result_y = im_result_y.cpu()
    elapsed_time = time.time() - start_time
    print("Forward Time: {:.5f}sec".format(elapsed_time))

    im_h_y = im_result_y.data[0].numpy().astype(float)
    im_gt_y = im_gt_y.astype(float)
    im_b_y = im_b_y.astype(float)

    # Rescale the network output back to [0, 255] and clip.
    im_h_y = im_h_y * 255.
    im_h_y[im_h_y < 0] = 0
    im_h_y[im_h_y > 255.] = 255.
    im_h_y = im_h_y[0, :, :]

    psnr_bicubic = PSNR(im_gt_y, im_b_y, shave_border=opt.scale)
    print(psnr_bicubic)
    psnr_sr = PSNR(im_gt_y, im_h_y, shave_border=opt.scale)
    print(psnr_sr)

    total_psnr_bicubic += psnr_bicubic
    total_psnr_sr += psnr_sr

print("[{} Scale {}] PSNR_bicubic={:.3f}".format(
    opt.dataset, opt.scale, total_psnr_bicubic / len(filenames)))
print("[{} Scale {}] PSNR_sr={:.3f}".format(
    opt.dataset, opt.scale, total_psnr_sr / len(filenames)))
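For completeness, the `PSNR` helper I call above is essentially the standard border-shaving PSNR on 0-255 images; a minimal sketch (my actual helper may differ in details):

```python
import numpy as np

def PSNR(gt, pred, shave_border=0):
    """PSNR between two 0-255 images, optionally ignoring a border of
    `shave_border` pixels on every side (common in SR evaluation, since
    interpolation artifacts concentrate near image edges)."""
    if shave_border > 0:
        gt = gt[shave_border:-shave_border, shave_border:-shave_border]
        pred = pred[shave_border:-shave_border, shave_border:-shave_border]
    diff = gt.astype(np.float64) - pred.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(255.0 / np.sqrt(mse))
```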

Thank you,
Hyunho


twtygqyy commented Jan 4, 2018

Hi @chaos5958, please refer to https://github.com/twtygqyy/pytorch-SRDenseNet/blob/master/eval.py
Let me know if you have any further problems.

@designproj
@chaos5958 Hi, I wonder if you solved that problem. It also happens for me: there is a difference between the pre-trained weights and the weights I trained myself.
@twtygqyy Hi, I'd appreciate your help. How can I get the same PSNR as with your weights?
