
MD FA Calculation Problem #6

Open
SherryZhaoXY opened this issue Oct 11, 2021 · 4 comments

Comments

@SherryZhaoXY

Hi there, thanks for your contribution. I noticed that you calculate MD and FA this way:

MD1 = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), output_images))
FA1 = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), 1 - output_images))

but in the paper it looks like MD = ||(S - S_0) × S_0||_2^2 and FA = ||(S - S_0) × (1 - S_0)||_2^2, where '×' denotes element-wise multiplication, '_2' denotes the L2 norm, and '^2' denotes the square. So there is no torch.pow(g1_out - output_images, 2) term in the paper's formula, and since the L2 norm is the square root of a sum of squares, torch.mean() doesn't reproduce it. Why do you calculate it like this? Looking forward to your reply.
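To make the discrepancy concrete, here is a minimal sketch comparing the repository's mean-based expression with a literal reading of the paper's formula. It uses NumPy stand-ins for the torch ops (torch.pow → **, torch.mul → *, torch.mean → np.mean), and the arrays S and S0 are made-up placeholders for g1_out and output_images:

```python
# Hypothetical numerical check; S and S0 are random stand-ins for
# g1_out and output_images, not values from the actual model.
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((4, 4))    # stand-in for g1_out (predicted map)
S0 = rng.random((4, 4))   # stand-in for output_images (target map)

# Repository code, translated literally:
# torch.mean(torch.mul(torch.pow(S - S0, 2), S0))
md_code = np.mean(((S - S0) ** 2) * S0)

# Paper formula read literally: || (S - S0) x S0 ||_2^2
# i.e. square the element-wise product, then sum.
md_paper = np.sum(((S - S0) * S0) ** 2)

# The two generally differ: the code weights the squared error by S0
# and averages, while the literal formula squares the weighted error
# and sums.
print(md_code, md_paper)
```

This only illustrates that the two expressions are not algebraically identical as written; which one the authors intended is exactly the question above.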

@wcyjerry

torch.mul(torch.pow(g1_out - output_images, 2), output_images) just does the work of the L2 norm followed by the square.
The L2 norm means squaring, summing, and taking the root; when it is followed by a square, the root is cancelled out.
As for torch.mean, I suppose it tries to match the definition: MD actually means the MD rate, so it should be divided by the total number of elements in the image. For training, as you can see there are hyper-parameters weighting MD and FA, so whether it is averaged or summed is not that important.
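The last point of the reply, that averaging versus summing only rescales the loss by a constant a hyper-parameter can absorb, can be sketched numerically. This is a generic check with NumPy (np.mean/np.sum behave like torch.mean/torch.sum here); x and lam are made-up placeholders, not values from the repository:

```python
# Minimal sketch: the mean is just the sum divided by the element
# count, so mean-based and sum-based losses differ only by a constant
# factor that a loss weight (hyper-parameter) can absorb.
import numpy as np

x = np.arange(12, dtype=float).reshape(3, 4)  # any per-pixel loss map
mean_loss = np.mean(x)
sum_loss = np.sum(x)

# mean == sum / number of elements
assert np.isclose(mean_loss, sum_loss / x.size)

# A weighted sum-loss equals a (rescaled) weighted mean-loss, so the
# choice of mean vs. sum just moves a constant into the weight.
lam = 0.5  # hypothetical loss weight
assert np.isclose(lam * sum_loss, (lam * x.size) * mean_loss)
```

In other words, if the training loss is lam * MD1, switching torch.mean to torch.sum is equivalent to retuning lam, which is consistent with the reply's claim that the averaging choice is not critical for training.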

@SherryZhaoXY

Thanks for your explanation!

@wcyjerry

You're welcome, glad if it helped. Actually, I'm a student of this paper's author now, haha.

@zsh2650720805

Hello, senior. It seems a few images in the currently released dataset are corrupted. Do you happen to have an uncorrupted version? Looking forward to your reply, and thank you in advance. [email protected]
