Why is the super-resolution result the same size as the input image? #51

Open
howfars opened this issue Dec 10, 2019 · 6 comments


howfars commented Dec 10, 2019

I downloaded the repo as well as the pretrained model and tried to test the model. But I found that the size of the output image is the same as the input, and I think the resolution has not improved. This is the result; the left image is the original and the right one is the result.
[screenshot: original frame (left) and super-resolution result (right)]
Here are my training settings. I only modified the file_list and gpu_mode options; the other settings remain at their defaults.

# Training settings
parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
parser.add_argument('--upscale_factor', type=int, default=4, help="super resolution upscale factor")
parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
# parser.add_argument('--gpu_mode', type=bool, default=True)
# Changed gpu_mode to False because I don't have the CUDA toolkit installed
parser.add_argument('--gpu_mode', type=bool, default=False)
parser.add_argument('--chop_forward', type=bool, default=False)
parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
parser.add_argument('--gpus', default=1, type=int, help='number of gpu')
parser.add_argument('--data_dir', type=str, default='./Vid4')
parser.add_argument('--file_list', type=str, default='walk.txt')
parser.add_argument('--other_dataset', type=bool, default=True, help="use other dataset than vimeo-90k")
parser.add_argument('--future_frame', type=bool, default=True, help="use future frame")
parser.add_argument('--nFrames', type=int, default=7)
parser.add_argument('--model_type', type=str, default='RBPN')
parser.add_argument('--residual', type=bool, default=False)
parser.add_argument('--output', default='Results/', help='Location to save checkpoint models')
parser.add_argument('--model', default='weights/RBPN_4x.pth', help='sr pretrained base model')

Is there anything wrong? I'm a newbie in the super-resolution field, so can someone help me? Thank you.
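
For reference, with --upscale_factor 4 I would expect the saved result to be four times the input resolution in each dimension. Here is a quick sanity check I ran; the file paths are just examples from my local Vid4/walk test, not something defined by the repo:

# Quick sanity check: with --upscale_factor 4 the output should be 4x the input size.
# The paths below are placeholders for one input frame and the frame the test
# script wrote into the Results/ directory.
from PIL import Image

lr = Image.open('./Vid4/walk/frame_000.png')   # low-resolution input frame (example path)
sr = Image.open('./Results/frame_000.png')     # corresponding output frame (example path)

print('input size :', lr.size)
print('output size:', sr.size)   # expected: (4 * lr.width, 4 * lr.height)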


YuanZYF commented Dec 25, 2019

I also encountered the same problem. How can I solve it?


jsh-me commented Mar 26, 2020

I have the same problem. Could somebody tell me the solution, please?


NabihGit commented Apr 8, 2020

Does anybody know the solution? I have the same problem.


NabihGit commented Apr 8, 2020

@YuanZYF @jsh-me @howfars From what I found, look at dataset.py: the author downsamples the source images to 1/4 of their size. You can just delete that downsampling step (the resize call in dataset.py) for the target, input, and neighbor images, and the result image will come out at the right size. A rough sketch of the idea is below.
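
To illustrate what I mean, here is a simplified sketch, not the repo's actual dataset.py code (the function name and arguments are made up). The loader treats your frames as high-resolution ground truth and shrinks them by the upscale factor to create the network input, so the 4x output lands back at the original size. If your frames are already low-resolution, skip that resize and feed them in directly:

# Simplified sketch of the idea only, not the actual dataset.py from this repo.
# load_frame(), upscale_factor and treat_as_ground_truth are made-up names.
from PIL import Image

def load_frame(path, upscale_factor=4, treat_as_ground_truth=True):
    img = Image.open(path).convert('RGB')
    if treat_as_ground_truth:
        # Original behaviour: downsample the frame to 1/upscale_factor so it
        # becomes the LR network input; the 4x output then matches the
        # original size, which is why input and output look the same size.
        w, h = img.size
        img = img.resize((w // upscale_factor, h // upscale_factor), Image.BICUBIC)
    # For frames that are already low-resolution, return them unchanged so the
    # network's 4x output is genuinely larger than the input.
    return img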


jsh-me commented Apr 8, 2020

@NabihGit
Oh! Thank you for letting me know. :)


howfars commented Apr 8, 2020

@NabihGit Thank you so much!
