different implementation during testing? #13
Uncertainty cannot be calculated from ["all_samples"], because those values have been clamped to [-1, 1].
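To see why clamping breaks the uncertainty estimate, here is a minimal sketch. It assumes an entropy-style per-voxel uncertainty; the repository's exact formula may differ, and all shapes are toy values:

```python
import torch

def entropy(logits: torch.Tensor) -> torch.Tensor:
    # Binary predictive entropy per voxel (assumed uncertainty measure).
    p = torch.sigmoid(logits)
    eps = 1e-8
    return -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))

logits = torch.randn(1, 3, 16, 16, 16) * 5   # raw, unbounded model outputs
samples = logits.clamp(-1, 1)                # "all_samples" are clamped like this

# sigmoid of a value clamped to [-1, 1] only spans roughly [0.27, 0.73], so the
# entropy of the clamped samples is uniformly high and no longer separates
# confident voxels from uncertain ones.
print(entropy(logits).std().item(), entropy(samples).std().item())
```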
Thank you for your explanation, I understand now! As for my first question, yesterday I tested both your official code version (in the first row) and your paper version (in the second row; I just changed that one line). Btw, the training settings are the same:
The Dice score in the second row seems better, so this change may be helpful. Perhaps you could consider making this change as well, but it's all up to you. If I made any mistake in the above analysis, please let me know. Thanks again!
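For concreteness, the one-line difference being compared is presumably the substitution below. The surrounding variables (sample_return, the weight w, the toy shapes) are stand-ins so the two candidate lines run; only the two bracketed expressions come from this thread:

```python
import torch

shape = (1, 3, 4, 4, 4)
sample_outputs = [{"all_samples": [torch.randn(shape).clamp(-1, 1)]}]
uncer_out = torch.randn(shape)       # averaged raw model outputs (assumed)
w = torch.ones(shape)                # uncertainty-derived fusion weight (assumed)
sample_return = torch.zeros(shape)
i, index = 0, 0

# First row (official code): fuse the clamped DDIM samples.
sample_return += w * sample_outputs[i]["all_samples"][index].cpu()
# Second row (paper-style variant tested above): fuse the raw outputs instead.
sample_return += w * uncer_out
```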
Wow, thank you. I will modify this section.
Hi Jessie, this is the final result shown in the log after 300 epochs (validation results), and my settings are: I think my settings are similar to yours, since I did not change anything and kept the defaults. So I suspect the cause could be different package versions. Could you please share your package versions here, if you don't mind? Here are my package versions:
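A quick way to capture the versions for comparison; the package list here (torch, monai, numpy) is an assumption based on typical dependencies of this kind of project, so adjust it to the repository's actual requirements:

```python
import sys
import numpy
import torch
import monai

print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("monai :", monai.__version__)
print("numpy :", numpy.__version__)
```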
Thanks for your great work, but I have run into a small problem: why is the implementation in this line different from the formula in the paper?
Diff-UNet/BraTS2020/test.py
Line 93 in 2699001
Should I change sample_outputs[i]["all_samples"][index].cpu() to uncer_out in
Diff-UNet/BraTS2020/test.py
Line 87 in 2699001
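The following is a self-contained sketch of the uncertainty-weighted fusion being discussed. The helper compute_uncer, the weighting w = exp(-uncer), the loop structure, and all shapes are assumptions; only the names all_samples / uncer_out and the clamping to [-1, 1] come from the thread:

```python
import torch

def compute_uncer(logits: torch.Tensor) -> torch.Tensor:
    """Per-voxel predictive entropy of the sigmoid probabilities (assumed)."""
    p = torch.sigmoid(logits)
    eps = 1e-8
    return -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))

uncer_step = 4            # independent DDIM runs (assumed)
num_steps = 10            # sampling steps kept per run (assumed)
shape = (1, 3, 8, 8, 8)   # (batch, classes, D, H, W), toy size

# Fake per-run outputs standing in for sample_outputs[i]["all_model_outputs"]
# and sample_outputs[i]["all_samples"]:
all_model_outputs = [[torch.randn(shape) * 5 for _ in range(num_steps)]
                     for _ in range(uncer_step)]
all_samples = [[o.clamp(-1, 1) for o in run] for run in all_model_outputs]

sample_return = torch.zeros(shape)
for index in range(num_steps):
    # Average the raw (unclamped) model outputs across runs -> "uncer_out".
    uncer_out = torch.stack([all_model_outputs[i][index]
                             for i in range(uncer_step)]).mean(dim=0)
    uncer = compute_uncer(uncer_out)
    w = torch.exp(-uncer)  # down-weight voxels with high uncertainty
    for i in range(uncer_step):
        # Line-87 style (official code): fuse the clamped samples.
        sample_return += w * all_samples[i][index]
        # The paper-style alternative asked about here would fuse uncer_out:
        # sample_return += w * uncer_out
```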