Deconvolution using Convolution Forward Operator with PDHG throwing `__rmul__` not defined error #442
Comments
You seem to have found a bug. Until we fix it, you should be able to get it working by setting |
Thanks @bwohlberg! I was able to unblock by making that suggested change. But it is really slow, taking 1-2 minutes per iteration. Also, I didn't follow why you say that the forward operator is non-trivial. We're using |
"Non-trivial" was perhaps a bit too vague. The prox of the l2 loss is cheap to compute when the forward operator is an identity or diagonal operator. Any other linear operator is currently solved via CG, which is very expensive. It should be straightforward to add support for a fast solution for |
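To make the diagonal case concrete: the prox of a squared l2 loss with a diagonal forward operator has an elementwise closed form, which is why it is cheap. A rough NumPy sketch (independent of scico's API; the function name here is illustrative):

```python
import numpy as np

def prox_diag_sql2(v, d, y, lam):
    """Prox of lam * (1/2) * ||d * x - y||^2 at v, for a diagonal operator d.

    Minimizes lam/2 * ||d*x - y||^2 + 1/2 * ||x - v||^2; setting the
    gradient to zero yields an elementwise closed-form solution, so no
    iterative (CG) solve is needed.
    """
    return (v + lam * d * y) / (1.0 + lam * d**2)

# Sanity check: the gradient of the prox objective vanishes at the solution.
rng = np.random.default_rng(0)
d, y, v = rng.normal(size=(3, 8))
lam = 0.7
x = prox_diag_sql2(v, d, y, lam)
grad = lam * d * (d * x - y) + (x - v)
print(np.abs(grad).max())  # ~0 (machine precision)
```

For a general linear operator the corresponding normal equations are not diagonal, which is where the expensive CG solve comes in.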
Thanks @bwohlberg, I tried:

```python
# ====== SOLVER ======
f = loss.SquaredL2Loss(y=Cx, A=C)
lbd = 5e-1  # L1 norm regularization parameter (alternative tried: 50)
g = lbd * functional.L21Norm()
D = linop.FiniteDifference(input_shape=im_jx.shape, circular=True)
maxiter = 20
mu, nu = ProximalADMM.estimate_parameters(D)
solver_padmm = ProximalADMM(
    f=f,
    g=g,
    A=C,
    rho=1e0,
    mu=mu,
    nu=nu,
    x0=Cx,
    maxiter=50,
    itstat_options={"display": True, "period": 10},
)
print("\nProximal ADMM solver")
solver_padmm.solve()
hist_padmm = solver_padmm.itstat_object.history(transpose=True)
```

But I'm getting the same error. Is this a bug too? Can you please help me get this to run, so I can try out the more complex forward operator with a sum over convolutions? Also, what about |
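As background on `estimate_parameters`: step parameters for solvers like ProximalADMM and PDHG are typically derived from the spectral norm of the constraint operator, which can be estimated by power iteration. A minimal NumPy sketch (illustrative names; not scico's implementation):

```python
import numpy as np

def spectral_norm_est(A, AT, shape, iters=100, seed=0):
    """Estimate ||A||_2 via power iteration on A^T A.

    A and AT are callables applying the operator and its adjoint.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = AT(A(x))
        x /= np.linalg.norm(x)
    # Rayleigh quotient gives the dominant eigenvalue of A^T A
    return float(np.sqrt(x.ravel() @ AT(A(x)).ravel()))

# Sanity check on a diagonal operator, whose spectral norm is max |d|
d = np.array([0.5, -3.0, 2.0])
est = spectral_norm_est(lambda x: d * x, lambda x: d * x, shape=3)
print(est)  # close to 3.0
```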
This is probably the same bug as before, and should be resolvable in the same way. Note, though, that you haven't set the problem up in a way that avoids the expensive l2 prox. Take a look at this example to see how it should be done. |
Changing label for this issue since the bug component has been moved to #445. |
Thanks, I'm trying to reformulate the problem as suggested. In the meantime, can you please tell me whether the |
For my problem, this is the setup (and thanks a lot for assisting with it!):

```python
im_jx = jax.device_put(im)  # im_s
u = U[:, :U.shape[1]].T.reshape((U.shape[1], *im_jx.shape))
w = W[:U.shape[1]].reshape((U.shape[1], *im_jx.shape))
u_jx = jax.device_put(u)
w_jx = jax.device_put(w)
C = linop.CircularConvolve(h=u_jx, input_shape=im_jx.shape, ndims=2)
D = linop.Diagonal(w_jx)
S = linop.Sum(input_shape=D.output_shape, axis=0)
Blur = S @ D @ C

# ProximalADMM
f = functional.ZeroFunctional()
g0 = loss.SquaredL2Loss(y=im_jx)
lbd = 5.0e-1  # L1 norm regularization parameter
g1 = lbd * functional.L21Norm()
g = functional.SeparableFunctional((g0, g1))
D = linop.FiniteDifference(input_shape=im_jx.shape, circular=True)  # alternative: append=0
A = linop.VerticalStack((Blur, D))
maxiter = 20  # number of ADMM iterations
mu, nu = ProximalADMM.estimate_parameters(D)
solver_padmm = ProximalADMM(
    f=f,
    g=g,
    A=A,
    rho=5e1,
    mu=float(mu),
    nu=float(nu),
    x0=im_jx,
    maxiter=maxiter,
    itstat_options={"display": True, "period": 10},
)
print("\nProximal ADMM solver")
solver_padmm.solve()
hist_padmm = solver_padmm.itstat_object.history(transpose=True)
```

Now, the solver executes but rapidly diverges. Also, my Python kernel crashes upon re-run as it takes up huge memory on device (after 14 iterations, the Python kernel was using a whopping 23 GB). Do you know if and how that can be mitigated?
|
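[Editor's note on the memory question above: if the large device-memory footprint stems from XLA's default behavior of preallocating most of GPU memory at startup (an assumption here; profiling would confirm), one commonly used mitigation is to disable preallocation before JAX is imported:]

```python
import os

# Must be set before `import jax`; tells XLA to allocate GPU memory
# on demand instead of reserving a large fraction up front.
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"
```

Holding references to solver objects across notebook re-runs can also keep device buffers alive; deleting them (or restarting the process) releases that memory.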
Closing since remaining questions now appear as separate issues. |
* Resolve #442
* Add utility function for checking if an object is a scalar or an array of unit size
* Add tests for operator mult/div by singleton arrays
* Modify conditional for scalar equivalence and corresponding test function
* Typo fix
* Simplify conditional
* Add an assertion
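The scalar-equivalence check described above can be illustrated with a small sketch (a hypothetical helper for illustration, not necessarily the exact function added in the PR):

```python
import numpy as np

def is_scalar_equiv(s):
    """Return True if s is a Python/NumPy scalar or an array with one element."""
    return np.isscalar(s) or (isinstance(s, np.ndarray) and s.size == 1)

print(is_scalar_equiv(2.0), is_scalar_equiv(np.array([2.0])), is_scalar_equiv(np.ones(3)))
# True True False
```

A check like this lets operator multiplication treat a singleton array the same as a plain scalar, which is the failure mode behind the original `__rmul__` error.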
Setting it up as a standalone question for making `PDHG` work with my simple deblur problem. I've set up a synthetic image and blurred it with an anisotropic Gaussian kernel. Now the optimization problem I'm trying to solve is:

Using `PDHG` throws this `TypeError`: `Operation __rmul__ not defined between and .` I'm getting this error when passing the forward operator, `A`, into `loss.SquaredL2Loss()`, which then gets passed to the `PDHG` constructor. Can you please help navigate this error?
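[Editor's note: the problem statement above originally included an equation that did not survive extraction. Judging from the code later in the thread (a squared l2 data fidelity plus an L21 norm of finite differences), it is presumably a TV-regularized deconvolution problem of roughly this form, with $A$ the blur operator and $D$ the finite-difference operator:]

```latex
\min_{\mathbf{x}} \; \frac{1}{2} \left\| A \mathbf{x} - \mathbf{y} \right\|_2^2
  + \lambda \left\| D \mathbf{x} \right\|_{2,1}
```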