Add utility to configure PDHG with explicit TGV or TV regularisation #1766
base: master
Conversation
Hey all, this still needs a few changes. Currently the … Would you like me to do a PR or let you guys take care of it?
I have added unit tests for setting up TV.
I would appreciate a review of the code and those unit tests before I write the TGV unit tests, please.
Hi @lauramurgatroyd - I like the style of the unit tests for explicit TV. They are comprehensive without being expensive. I have suggested a few changes to them and made some comments on the new utility file.
from cil.optimisation.functions import MixedL21Norm, BlockFunction, L2NormSquared, ScaledFunction
from cil.optimisation.operators import BlockOperator, IdentityOperator, GradientOperator, \
    SymmetrisedGradientOperator, ZeroOperator
Can we call this file something other than PDHG.py? e.g. set_up_PDHG.py or something similar?
    Forward operator.
data : AcquisitionData
alpha : float
    Regularisation parameter.
Regularisation parameter for which part?
def setup_explicit_TGV(A, data, alpha, delta=1.0, omega=1):
    '''Function to setup LS + TGV problem for use with explicit PDHG
Need a TGV equation in here, defining alpha and beta and omega
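A hedged sketch of one standard form such an equation could take (this is not from the PR; it assumes second-order TGV with the delta and omega conventions used by the utility's parameters):

```latex
\min_{u,\,w}\;
  \omega\,\| A u - d \|_2^2
  \;+\; \alpha\,\| \nabla u - w \|_{2,1}
  \;+\; \beta\,\| \mathcal{E} w \|_{2,1},
\qquad \beta = \delta\,\alpha
```

where A is the forward operator, d the acquisition data, ∇ the gradient, and ℰ the symmetrised gradient; ω scales the data-fitting term (1 by default).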
delta : float, default 1.0
    The regularisation parameter for the symmetrised gradient, beta, can be controlled by delta
    via beta = delta * alpha.
omega : float, default 1.0
Least squares uses c instead of omega, perhaps we could follow suit?
    The constant in front of the data fitting term. Mathematicians like it to be 1/2 but it is 1 by default,
    i.e. it is ignored if it is 1.
Returns:
Could we add a code snippet to explain how to use the returned K and F in PDHG? Also need to explain briefly what we mean by "explicit"
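For what it's worth, the roles of the returned pieces can be illustrated with a self-contained NumPy toy: "explicit" here means the regulariser's operator is kept inside the block operator K, both terms are handled through the proxes of their convex conjugates in the dual step, and the primal prox (G = 0) is the identity. Everything below (names, stepsizes, the 1D anisotropic-TV stand-in for the CIL machinery) is illustrative only, not the PR's code:

```python
import numpy as np

def tv_pdhg_explicit(b, alpha, n_iter=5000):
    """Toy 'explicit' PDHG for min_u 0.5||u - b||^2 + alpha*||grad u||_1 (1D).

    K = [I; alpha*grad] stays explicit; the dual step applies the proxes of
    the conjugates of the data term and the l1 term, and the primal step is
    a plain gradient step because G = 0.  Plain-NumPy stand-in for the CIL
    BlockOperator/BlockFunction machinery -- names are illustrative only.
    """
    n = b.size
    grad = lambda u: np.append(u[1:] - u[:-1], 0.0)           # forward diff, Neumann
    grad_adj = lambda y: np.concatenate(([0.0], y[:-1])) - y  # its adjoint
    L = np.sqrt(1.0 + 4.0 * alpha ** 2)                       # bound on ||K||
    sigma = tau = 1.0 / L                                     # sigma*tau*||K||^2 <= 1
    x = np.zeros(n)
    xbar = x.copy()
    y1 = np.zeros(n)   # dual variable for the data term
    y2 = np.zeros(n)   # dual variable for the TV term
    for _ in range(n_iter):
        y1 = (y1 + sigma * (xbar - b)) / (1.0 + sigma)            # prox of (0.5||.-b||^2)^*
        y2 = np.clip(y2 + sigma * alpha * grad(xbar), -1.0, 1.0)  # project onto l_inf ball
        x_new = x - tau * (y1 + alpha * grad_adj(y2))             # G = 0 => identity prox
        xbar = 2.0 * x_new - x                                    # over-relaxation
        x = x_new
    return x
```

In CIL terms, (y1, y2) plays the role of the BlockDataContainer dual variable matched to the returned K, and the two dual updates correspond to the two components of the returned F.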
function = F[0].function
np.testing.assert_equal(type(function), L2NormSquared)
np.testing.assert_array_equal(function.b.as_array(), ad.as_array())
I would also be tempted to evaluate your functions on a small image: reduce the size of the geometry to save computational cost, create a random test_data, and then do
self.assertAlmostEqual(function(test_data), omega*LeastSquares(b=ad)(test_data))
np.testing.assert_equal(type(K[1].operator), GradientOperator)
ig = ad.geometry.get_ImageGeometry()
expected_grad = alpha*GradientOperator(ig)
np.testing.assert_allclose(expected_grad.direct(ad)[1].as_array(), expected_grad.direct(ad)[1].as_array())
Is that comparing the same thing?
np.testing.assert_allclose(expected_grad.direct(ad)[1].as_array(), K[1].direct(ad)[1].as_array(), 10**(-4))
np.testing.assert_allclose(expected_grad.direct(ad)[0].as_array(), K[1].direct(ad)[0].as_array(), 10**(-4))
I think the expected_grad should operate on an image geometry, not on acquisition data? Thus you should not be using ad.
# F[1]
np.testing.assert_equal(MixedL21Norm, type(F[1])) |
Similarly, can you test this on an object?
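For reference, a value-based check could compare F[1] against the quantity MixedL21Norm is defined to compute. A plain-NumPy sketch (arrays standing in for a BlockDataContainer; names illustrative only):

```python
import numpy as np

def mixed_l21(blocks):
    """||z||_{2,1}: l2 norm across the block axis, summed over all pixels.

    Plain-NumPy stand-in for evaluating CIL's MixedL21Norm on a
    BlockDataContainer (illustrative only).
    """
    return float(np.sum(np.sqrt(sum(b ** 2 for b in blocks))))

gy = np.array([[3.0, 0.0], [0.0, 1.0]])   # stands in for the y-gradient block
gx = np.array([[4.0, 0.0], [0.0, 0.0]])   # stands in for the x-gradient block
val = mixed_l21([gy, gx])                 # sqrt(3^2 + 4^2) + sqrt(1^2) = 6.0
```

A test could then assert that F[1] evaluated on a small random BlockDataContainer matches this hand-computed value.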
a910556 to f8b513b
Description
Add a utility to configure PDHG with explicit TGV or TV regularisation with Least Squares.
Example Usage
An example is in CIL-Demos: TGV_tomography.py
Changes
- max to BlockDataContainer
- dot between a BlockDataContainer and a DataContainer
Testing you performed
Related issues/links
Checklist
Contribution Notes
Please read and adhere to the developer guide and local patterns and conventions.