1840 add tbf layer #5757
Conversation
…tyle and unit tests passed. Signed-off-by: Fabian Wagner <[email protected]>
Signed-off-by: Fabian Wagner <[email protected]>
/black
for more information, see https://pre-commit.ci
Signed-off-by: monai-bot <[email protected]>
/black
Hi @faebstn96 , I have read your code over the past two weeks and it looks great! I just left two comments directly on the code.
One more question: it seems that most of the logic in the backward pass is the same as in the forward pass (the gaussianKernel_xyz are also the same?). Would it make sense to generate filter_kernel_back (in bf_layer_gpu_backward.cu) directly in the forward pass and compute gradientOutputTensor from gradientInputTensor and the filter? If feasible, the backward code would be much simpler.
monai/csrc/filtering/trainable_bilateral/bf_layer_gpu_backward.cu
Hi @zehuanw,
You are right, the calculation of the spatial kernels gaussianKernel_xyz is identical in the forward and backward passes, so they could be computed in the forward pass only and then reused in the backward pass. However, the intensity range kernel needs to be calculated on the fly because the input intensities change locally (this is the more complex part of the calculation within the CUDA kernels).
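To illustrate the point above, here is a hypothetical, simplified sketch (plain NumPy, not the PR's CUDA code) of a 1-D bilateral filter. The spatial kernel depends only on pixel distance, so it can be precomputed once and shared between the forward and backward passes, while the range kernel depends on local intensity differences and must be recomputed at every pixel. The function name and parameters are illustrative only.

```python
import numpy as np

def bilateral_filter_1d(x, sigma_spatial=1.0, sigma_color=0.5, radius=3):
    """Toy 1-D bilateral filter (illustration, not the MONAI implementation)."""
    offsets = np.arange(-radius, radius + 1)
    # Spatial kernel: depends only on the pixel distance, so it can be
    # computed once up front (and reused by a backward pass).
    spatial_kernel = np.exp(-offsets**2 / (2.0 * sigma_spatial**2))
    out = np.empty_like(x, dtype=float)
    n = len(x)
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)  # clamp at the borders
        # Range kernel: depends on the local intensity differences around
        # pixel i, so it must be evaluated on the fly at every position.
        range_kernel = np.exp(-(x[idx] - x[i])**2 / (2.0 * sigma_color**2))
        weights = spatial_kernel * range_kernel
        out[i] = np.sum(weights * x[idx]) / np.sum(weights)
    return out
```

On a constant signal the range kernel is uniform and the filter acts as a plain (normalized) Gaussian, leaving the signal unchanged.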
This reverts commit 68ff70b. Signed-off-by: Fabian Wagner <[email protected]>
… into 1840-add_tbf_layer
Signed-off-by: Fabian Wagner <[email protected]>
Signed-off-by: monai-bot <[email protected]>
/build
Thanks @faebstn96, and thanks @zehuanw for the comments. The PR looks nice and I'm merging it soon. The fact that the variable notation is consistent with the descriptions in the paper makes the code logic easy to follow...
/build
Fixes #1840 .
Description
I integrated the trainable bilateral filter (TBF) layer into the MONAI repository as a new PyTorch filter layer. The TBF provides an analytical gradient derivation with respect to its filter parameters and its noisy input image, which enables gradient-based optimization within the PyTorch graph. See here for more details on the gradient derivation. Unit tests were added that check both the filter output and the gradient computation.
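The unit-test pattern described above (checking an analytical gradient against a numerical one) can be sketched as follows. This is a hypothetical stand-alone example, not the PR's test code: it verifies the analytical derivative of a Gaussian range-kernel weight w(d) = exp(-d^2 / (2*sigma^2)) with respect to the filter parameter sigma against a central finite difference. All function names here are illustrative.

```python
import math

def range_weight(d, sigma):
    """Gaussian range-kernel weight for an intensity difference d."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def range_weight_dsigma(d, sigma):
    """Analytical derivative: dw/dsigma = w(d) * d^2 / sigma^3."""
    return range_weight(d, sigma) * d * d / sigma**3

def gradient_error(d=0.7, sigma=1.3, eps=1e-6):
    """Absolute gap between the analytical and a central-difference gradient."""
    analytic = range_weight_dsigma(d, sigma)
    numeric = (range_weight(d, sigma + eps) - range_weight(d, sigma - eps)) / (2.0 * eps)
    return abs(analytic - numeric)
```

In a framework setting the same check is typically done with an autograd gradient checker rather than by hand, but the principle is identical: the analytical gradient must agree with the finite-difference estimate to within numerical tolerance.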
Types of changes
./runtests.sh -f -u --net --coverage
./runtests.sh --quick --unittests --disttests
make html command in the docs/ folder.