Support specifying scales in preprocessing div2k dataset #472
Conversation
Add a new script to generate training data pairs for blind super-resolution
Made some modifications
Hello, thank you for your contribution. May I ask why you pre-generate the LR images instead of generating them during training? I think the degradations could be more diverse if we apply them on the fly during training.
Yes, you're right. I thought pre-generating low-quality images would be convenient and intuitive, but if we generate LQ images during training, we no longer need the pre-cropped LR sub-images at all. Thanks for your advice! I have a new idea: we can add a new argument to the preprocessing script to control this.
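The on-the-fly alternative discussed above could be sketched roughly as follows. This is a hypothetical helper, not the project's actual code: it assumes isotropic Gaussian blur kernels and simple stride-based down-sampling, whereas a real blind-SR degradation model may be richer.

```python
import numpy as np
from scipy.ndimage import convolve


def random_gaussian_kernel(size=21, sigma_range=(0.2, 4.0), rng=None):
    # Sample an isotropic Gaussian blur kernel with a random width, so
    # every training iteration can see a different degradation.
    rng = rng or np.random.default_rng()
    sigma = rng.uniform(*sigma_range)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()


def degrade_on_the_fly(gt, scale=4, rng=None):
    # GT -> random blur -> down-sampling -> LQ, applied per sample
    # instead of being pre-generated on disk.
    kernel = random_gaussian_kernel(rng=rng)
    blurred = np.stack(
        [convolve(gt[..., c], kernel, mode='reflect')
         for c in range(gt.shape[-1])],
        axis=-1)
    return blurred[::scale, ::scale, :], kernel
```

Because the kernel is re-sampled on every call, the model sees a fresh degradation per iteration, which is the diversity argument made above.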
1. Remove preprocess_div2k_dataset_bsr.py.
2. Add a new argument to the preprocess_div2k_dataset.py script to control whether LR images are cropped.
3. Update the corresponding README.md and README_zh-CN.md.
I have modified my pull request as follows:
I think you can simply use the scale argument to control this.
Replace custom-degradation argument with scale
Restore README.md and README_zh-CN.md
Restore README.md and README_zh-CN.md
Thanks for your advice. I have updated it.
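The agreed-on interface can be sketched with argparse. This is a simplified stand-in, not the real script's full option set; the default values and help text here are assumptions.

```python
import argparse

# Minimal sketch of a --scales option: passing no values after the flag
# yields an empty list, which the script could interpret as "skip
# cropping LR images into sub-images" (hypothetical convention).
parser = argparse.ArgumentParser(description='Prepare DIV2K dataset (sketch)')
parser.add_argument(
    '--scales', nargs='*', type=int, default=[2, 3, 4],
    help='downsampling scales whose LR images should be cropped; '
         'pass no values to skip LR cropping entirely')

args = parser.parse_args(['--scales', '2', '4'])
print(args.scales)  # [2, 4]

args = parser.parse_args(['--scales'])
print(args.scales)  # []
```

With `nargs='*'`, a bare `--scales` parses to an empty list, so one argument covers both "crop these scales" and "crop nothing".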
Codecov Report
@@ Coverage Diff @@
## master #472 +/- ##
==========================================
- Coverage 80.56% 80.54% -0.02%
==========================================
Files 190 190
Lines 10338 10338
Branches 1533 1533
==========================================
- Hits 8329 8327 -2
- Misses 1780 1781 +1
- Partials 229 230 +1
Squashed commits:
* First commit: Add a new script to generate training data pairs for blind super-resolution
* Second commit: Made some modifications
* Third commit: 1. Remove preprocess_div2k_dataset_bsr.py; 2. Add a new argument to the preprocess_div2k_dataset.py script to control whether LR images are cropped; 3. Update the corresponding README.md and README_zh-CN.md
* Fourth commit: Replace custom-degradation argument with scale
* Fifth commit: Restore README.md and README_zh-CN.md
* Sixth commit: Restore README.md and README_zh-CN.md
* Update README_zh-CN.md
* Update annotations
* scale -> scales

Co-authored-by: lizz <[email protected]>
Hi @wileewang! First of all, we want to express our gratitude for your significant PR in this project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR.

We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator OpenMMLab on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/raweFPmdzG

If you are Chinese or use WeChat, welcome to join our community on WeChat. You can add our assistant: openmmlabwx. Please add "mmsig + GitHub ID" as a remark when adding friends. :)
Motivation
We want to build a dataset like DIV2KRK, which contains low-quality (LQ) images, ground-truth (GT) images, and blur kernels, and use it for training models like the Deep Alternating Network (DAN); this means we also need to crop them into sub-images. This PR creates a script to generate such a dataset.
Modification
We create a new file preprocess_div2k_dataset_bsr.py, a copy of preprocess_div2k_dataset.py, and add a function that generates blur kernels according to the configuration. Instead of cropping the DIV2K LR images into sub-images, we apply the pipeline used in DAN's paper: GT --> blur --> down-sampling --> LQ. We also store the corresponding blur kernel in .mat format using the same base name.
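The pair-generation and storage scheme described above might look like the following sketch. The kernel shape, the .npy format for the LQ array, and the file naming are assumptions for illustration; the real script's I/O details differ.

```python
import os
import tempfile

import numpy as np
from scipy.io import savemat
from scipy.ndimage import convolve


def save_pair(gt, kernel, scale, out_dir, base_name):
    # GT -> blur -> down-sampling -> LQ, then save the LQ image and the
    # blur kernel (.mat) under the same base name so they stay paired.
    blurred = np.stack(
        [convolve(gt[..., c], kernel, mode='reflect')
         for c in range(gt.shape[-1])],
        axis=-1)
    lq = blurred[::scale, ::scale, :]
    np.save(os.path.join(out_dir, base_name + '_lq.npy'), lq)
    savemat(os.path.join(out_dir, base_name + '.mat'), {'kernel': kernel})
    return lq


if __name__ == '__main__':
    gt = np.random.default_rng(0).random((32, 32, 3))
    kernel = np.full((5, 5), 1 / 25.0)  # uniform blur just for the demo
    with tempfile.TemporaryDirectory() as d:
        lq = save_pair(gt, kernel, scale=4, out_dir=d, base_name='0001')
        print(lq.shape)  # (8, 8, 3)
```

Sharing one base name between the LQ file and the kernel file is what lets a DAN-style dataset loader fetch the matching kernel for each LQ image.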