
[Feature Request] Adding support for only filtering a masked area #1

Open
kriNon opened this issue Mar 7, 2019 · 2 comments

@kriNon

kriNon commented Mar 7, 2019

Hey, I'm working on a script that uses waifu2x in vapoursynth, and I'm trying to speed it up. I'm using waifu2x as an anti-aliasing filter, so I run it in YUV mode in denoising mode on only the luma plane. It's working fairly well, but since it's an AA filter I only need to run waifu2x on the edges of the clip, so it would help if it were possible to give vs_mxnet an edge mask and have it process only those pixels.

This could also be used in other ways too, for example if someone were to create a function that generates a mask of areas with noise, then it would be possible to only denoise those areas.

Let me know what your thoughts are. If you're not interested at all then feel free to close this issue.

Thanks

@kice
Owner

kice commented Mar 7, 2019

Even if you only want the edge pixels, you still have to feed waifu2x the whole image. waifu2x is based on a CNN, which needs every pixel of the image to compute the final result, so I don't think adding a mask can give any speed improvement. If you're interested, you might google how CNNs work.

For partial image processing, I would suggest processing the whole image and then doing the mask merge yourself, again due to how CNNs work. The exception is a single rectangular mask, where you can crop the image to cut down the computational cost, but few people use that method.
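The "process the whole frame, then merge through the mask" workflow can be sketched in plain NumPy. The `masked_merge` helper below is made up for illustration; in vapoursynth itself the equivalent built-in is `std.MaskedMerge`:

```python
import numpy as np

def masked_merge(original, processed, mask):
    """Blend the processed plane back into the original using a mask.

    mask values are in [0, 1]: 1.0 keeps the processed pixel,
    0.0 keeps the original pixel.
    """
    return mask * processed + (1.0 - mask) * original

# Tiny illustration: a 2x2 luma plane where only the left column is "edges".
orig = np.array([[10.0, 10.0],
                 [10.0, 10.0]])
proc = np.array([[20.0, 20.0],
                 [20.0, 20.0]])
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])

merged = masked_merge(orig, proc, mask)
# left column takes the processed values, right column stays original:
# [[20, 10],
#  [20, 10]]
```

The CNN still runs over every pixel, but the merge means only the masked region is visibly changed in the output.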

If you have other suggestions or questions, please let me know. If you have your answer, feel free to close the issue.

@kriNon
Author

kriNon commented Mar 8, 2019

Hey thanks for the quick response!

Maybe I do misunderstand how waifu2x works. I would imagine that, at a basic level, waifu2x is just a more complicated convolution, and since a convolution is computed per output pixel, simply not calculating values for certain pixels should make it significantly faster.

I don't believe that feeding waifu2x the whole image is the slow part of the operation. I believe that the calculations for the convolutions are the slow part, and as such it should be faster if only part of the image is masked.
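For later readers, the per-pixel view being discussed can be sketched with a single 3x3 convolution in NumPy (`conv2d_at` is a hypothetical helper, not part of any library): each output pixel reads its full 3x3 neighbourhood, and with L stacked 3x3 layers that neighbourhood grows to (2L+1) x (2L+1), which is part of why irregular masks are hard to exploit in a CNN:

```python
import numpy as np

def conv2d_at(image, kernel, y, x):
    """Compute one output pixel of an odd-sized convolution.

    Even for a single output pixel, the whole kernel-sized
    neighbourhood around (y, x) must be read.
    """
    k = kernel.shape[0] // 2
    patch = image[y - k:y + k + 1, x - k:x + k + 1]
    return float(np.sum(patch * kernel))

img = np.arange(25, dtype=float).reshape(5, 5)
box = np.ones((3, 3)) / 9.0  # simple 3x3 averaging kernel

# One masked-in output pixel at (2, 2) still depends on 9 input pixels;
# stacking layers widens that dependency further.
center = conv2d_at(img, box, 2, 2)
# center == 12.0, the mean of img[1:4, 1:4]
```

Skipping masked-out output pixels is possible in principle, but GPU frameworks compute convolutions in dense batches, so in practice the savings from an irregular mask are hard to realise.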
