
discrepancies in values of the jacobian #58

Open
richinex opened this issue Jan 4, 2022 · 2 comments
richinex commented Jan 4, 2022

I compared the Jacobian of a complex-valued function computed with numdifftools against the one computed with autograd, and the values differ only for the parameter that multiplies 1j (p[1]). I would like to know what went wrong.

from numdifftools import Jacobian, Hessian

import autograd.numpy as np
from autograd import jacobian, hessian

frequencies = np.array([
    1.00000000e+04, 7.94328235e+03, 6.30957344e+03, 5.01187234e+03,
    3.98107171e+03, 3.16227766e+03, 2.51188643e+03, 1.99526231e+03,
    1.58489319e+03, 1.25892541e+03, 1.00000000e+03, 7.94328235e+02,
    6.30957344e+02, 5.01187234e+02, 3.98107171e+02, 3.16227766e+02,
    2.51188643e+02, 1.99526231e+02, 1.58489319e+02, 1.25892541e+02,
    1.00000000e+02, 7.94328235e+01, 6.30957344e+01, 5.01187234e+01,
    3.98107171e+01, 3.16227766e+01, 2.51188643e+01, 1.99526231e+01,
    1.58489319e+01, 1.25892541e+01, 1.00000000e+01, 7.94328235e+00,
    6.30957344e+00, 5.01187234e+00, 3.98107171e+00, 3.16227766e+00,
    2.51188643e+00, 1.99526231e+00, 1.58489319e+00, 1.25892541e+00,
    1.00000000e+00])

Z = np.array([
     28.31206108+3.85713361e+00j,  30.65961255-6.15028952e-01j,
     34.24015216-1.53009927e+00j,  31.28832221+2.00862719e-01j,
     29.16894207-1.90759028e+00j,  31.35415498+8.12935902e+00j,
     33.29418445-8.44736332e-01j,  31.44975238-4.33909650e+00j,
     28.69757696-4.57807440e-01j,  32.60369585-7.49365253e+00j,
     38.67554243+1.94558309e+00j,  32.04682449+5.96513060e-02j,
     33.27666601+6.96674118e-02j,  34.03106231-1.63078686e+00j,
     31.61457921-3.37817364e+00j,  29.10184267-6.59263829e+00j,
     32.50730455-7.49033830e+00j,  36.43172672-3.28250914e+00j,
     38.06409634-6.87182171e+00j,  28.02173960-7.79409862e+00j,
     32.56764449-1.28634629e+01j,  35.51511763-2.12395710e+01j,
     31.93175540-2.85721873e+01j,  38.87220134-2.99072634e+01j,
     35.52914170-3.74944224e+01j,  42.95290930-4.04380053e+01j,
     40.94710166-5.09880048e+01j,  47.58460761-6.50587929e+01j,
     56.61773977-6.93842057e+01j,  70.84595799-8.97293571e+01j,
     91.28235232-9.63476214e+01j, 114.19032377-1.06793820e+02j,
    139.78246542-1.00266685e+02j, 161.07272489-1.00733766e+02j,
    181.23085400-9.02441714e+01j, 205.54084395-8.99491269e+01j,
    215.24421556-7.62099459e+01j, 223.62924894-6.40376670e+01j,
    229.93028184-4.76770260e+01j, 234.56144072-4.38847706e+01j,
    240.57421683-3.52116682e+01j])

params_init = np.array([10, 1e-5, 20, 50])

w = 2 * np.pi * frequencies  # angular frequencies
s = 1j * w

def z_fun(p):
    return p[0] + 1/(s*p[1] + 1/(p[2] + p[3]/np.sqrt(s)))

# define the objective function
def obj_fun_1(p, z):
    wtRe = 1/(z.real**2 + z.imag**2)  # modulus weighting
    return np.sum(np.absolute(wtRe * (z_fun(p) - z)**2))

# computing the jacobian using numdifftools

def num_jac_1(x, z):
    return Jacobian(lambda x: obj_fun_1(x, z), method='central')(x).real.squeeze()

# check the values
print(num_jac_1(params_init, Z))
# [-6.42302368e-01  1.45242757e-05 -1.07751884e-01 -2.99510421e-02]

# computing the jacobian using autograd
auto_jac_1 = jacobian(obj_fun_1)
# check the values
print(auto_jac_1(params_init, Z))
# [-6.42302368e-01  1.52096259e+05 -1.07751884e-01 -2.99510421e-02]

Now the problem is that when I supply the Jacobian from numdifftools (the difference is in the second parameter, p[1]), the optimization fails, but when I supply the one obtained with autograd, the optimization succeeds. I would like to know what the problem is.

pbrod (Owner) commented Feb 11, 2022

The problem here is that the function you are differentiating is very steep around the second parameter, so numdifftools fails: numdifftools is only reliable for smooth, slowly varying functions.
The only way to succeed in this case is to limit the steps taken.

If you limit the maximum stepsize to 0.001 like this:

def num_jac_1(x, z):
    return Jacobian(lambda x: obj_fun_1(x, z), method='central', base_step=0.001)(x).real.squeeze()

you get the result you wanted (matching the autograd value):
[-6.42302368e-01 1.52096259e+05 -1.07751884e-01 -2.99510421e-02]
[-6.42302368e-01 1.52096259e+05 -1.07751884e-01 -2.99510421e-02]

@richinex
Author

Ahhhhhhh! I see. I'm happy to see that it works. Thank you.
