Hello, and thank you for sharing your repository.
Looking closely, I can see a plaid pattern in the reconstructed images, which I believe results from splitting the image into patches (the borders of the reconstructed patches have slightly lower quality). Do you know of any way to resolve this, perhaps overlapping patches (which may increase the compressed size)?
Thank you for your interest!
You are correct: the reconstructed image has these artifacts, with noisy edges where patches touch each other. To work around this, in a somewhat hacky manner, I wrote smoothing.py, which performs linear interpolation at the borders, making the image look cleaner.
This is an open problem. A possible cause, in my opinion, is that, given an image patch, the network has no explicit information about the surrounding patches, so it does not know how to treat the pixels on the border.
I have started to brush up the project, so far ensuring that training works with the latest PyTorch (1.7.0). I plan to run some experiments to fix the patching issues, but feel free to contribute if you have an idea!
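For anyone curious, the border interpolation idea can be sketched in a few lines. This is a minimal illustration of the technique, not the actual smoothing.py from the repo: it blends a small band of pixels on either side of each patch seam between the nearest untouched pixels (the function name, band width, and grayscale assumption are all just for this sketch):

```python
import numpy as np

def smooth_seams(img: np.ndarray, patch: int, band: int = 2) -> np.ndarray:
    """Linearly interpolate a small band of pixels across each patch seam.

    img:   H x W (grayscale) image assembled from non-overlapping patches.
    patch: patch side length used during reconstruction.
    band:  number of pixels on each side of a seam to blend (band < patch).
    """
    out = img.astype(np.float64).copy()
    h, w = img.shape
    # Vertical seams sit between columns x-1 and x for x = patch, 2*patch, ...
    for x in range(patch, w, patch):
        left, right = out[:, x - band - 1].copy(), out[:, x + band].copy()
        for i in range(2 * band):
            t = (i + 1) / (2 * band + 1)  # interpolation weight across the seam
            out[:, x - band + i] = (1 - t) * left + t * right
    # Horizontal seams: the same idea, row-wise.
    for y in range(patch, h, patch):
        top, bot = out[y - band - 1, :].copy(), out[y + band, :].copy()
        for i in range(2 * band):
            t = (i + 1) / (2 * band + 1)
            out[y - band + i, :] = (1 - t) * top + t * bot
    return out
```

This trades a sharp seam for a smooth ramp, which hides the plaid pattern but also slightly blurs genuine detail at the seams.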
Thanks for the explanation. I think the cause you described is correct: in other words, each latent variable mainly affects its corresponding pixels in the output, but it has a wide receptive field, so the latent variables of neighboring patches affect the border pixels of the current patch. Let me know if I'm wrong.
One possible fix is to not divide the image into patches at all and instead sweep the whole image with the convolutional network. I'm doing something similar to your work, using 32x32 patches as training data, and I tried this method. The result showed a different kind of artifact, which I believe arises because padding makes up a significant part of the training data, whereas at inference time padding is only a tiny fraction of a high-resolution image. Because you trained your network on 128x128 patches, you should not see this artifact as much. What do you think?
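The train/inference mismatch above can be made concrete with a little arithmetic: count what fraction of pixels lie within the padded border band. The band width r = 4 below is purely illustrative (it would depend on the network's actual depth and kernel sizes), but the contrast between 32x32 and full resolution holds for any fixed r:

```python
def border_fraction(size: int, r: int) -> float:
    """Fraction of pixels in a size x size image that lie within
    r pixels of the border (the region most influenced by padding)."""
    inner = max(size - 2 * r, 0)
    return 1.0 - (inner * inner) / (size * size)

# With an illustrative band of r = 4 pixels:
print(border_fraction(32, 4))    # ~0.44: padding touches nearly half the patch
print(border_fraction(1024, 4))  # ~0.016: negligible at full resolution
```

So a network trained on 32x32 crops sees padded borders in roughly 44% of its pixels, while at full resolution that drops below 2%, which matches the observation that the artifact is weaker for networks trained on larger patches.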