
Feature-Request for Default Derivatives (Gradient and Jacobian) #573

Closed
dhathris opened this issue May 2, 2022 · 5 comments

dhathris commented May 2, 2022

Hello,
I have a feature request for IPOPT. Would it be possible for IPOPT to provide default derivatives for the objective function and the constraints, using the same finite-differences method that the derivative checker feature uses? MATLAB's fmincon does this, for example: to use fmincon, one does not need to supply the derivative functions explicitly.
Since a finite-differences method is already available for the derivative checker, I am assuming it could be repurposed to provide default derivatives, without the user having to implement the functions for the gradient(s) and Jacobian explicitly (a sketch of such a scheme follows).
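
For illustration, the kind of forward-difference scheme the derivative checker is based on could look like the following minimal sketch (a hypothetical standalone helper, not Ipopt code; the name fd_gradient and the step-size choice are made up for the example):

#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Hypothetical helper: approximate the gradient of f at x by forward
// differences, perturbing one coordinate at a time.
std::vector<double> fd_gradient(
   const std::function<double(const std::vector<double>&)>& f,
   std::vector<double> x)
{
   const double eps = std::sqrt(2.2e-16); // roughly sqrt(machine epsilon)
   const double fx = f(x);
   std::vector<double> grad(x.size());
   for( std::size_t i = 0; i < x.size(); ++i )
   {
      const double h = eps * std::max(1.0, std::fabs(x[i]));
      const double xi = x[i];
      x[i] = xi + h;              // perturb coordinate i
      grad[i] = (f(x) - fx) / h;  // forward-difference quotient
      x[i] = xi;                  // restore
   }
   return grad;
}

An n-dimensional gradient thus costs n+1 evaluations of f, which makes such approximations convenient for checking correctness but expensive inside an optimization loop.
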
Please let us know whether this feature request can be honored, and if so, whether there is anything I can help with for the implementation.

Thanks,
Dhathri

svigerske (Member) commented May 2, 2022

There is a jacobian_approximation option.
I don't think it is well tested, and it isn't recommended, since performance is not expected to be great: unlike fmincon, Ipopt needs second derivatives as well, so it would approximate the Hessian of the Lagrangian from an approximation of the Jacobian. But you could try it out.
You will still need to specify which variables appear in which constraint, i.e., the sparsity pattern of the Jacobian.
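
Even with the approximation, the structure callback must be implemented. Here is a sketch of a structure-only eval_jac_g, written against the HS071 example that appears later in this thread (the loop formulation and comments are ours, not the shipped example code):

#include "hs071_nlp.hpp" // Ipopt's HS071 example class; Index/Number typedefs

bool HS071_NLP::eval_jac_g(
   Index n, const Number* x, bool new_x,
   Index m, Index nele_jac,
   Index* iRow, Index* jCol, Number* values)
{
   if( values == NULL )
   {
      // sparsity pattern only: HS071's 2x4 Jacobian is dense, so all
      // 8 nonzeros are enumerated row by row
      Index idx = 0;
      for( Index row = 0; row < 2; row++ )
      {
         for( Index col = 0; col < 4; col++ )
         {
            iRow[idx] = row;
            jCol[idx] = col;
            idx++;
         }
      }
   }
   // with jacobian_approximation finite-difference-values, Ipopt does not
   // ask for the values branch (see the log further down)
   return true;
}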

There is nothing to approximate the gradient of the objective function. So if you have a nonlinear objective without a gradient, then you will have to move it into the constraints (min f(x) -> min z s.t. f(x) <= z).
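
In symbols, this is the epigraph reformulation:

\min_{x} f(x)
\quad\longrightarrow\quad
\min_{x,\,z}\; z
\quad\text{s.t.}\quad f(x) - z \le 0

The new objective z is linear, so its gradient is the constant vector (0, ..., 0, 1) and needs no approximation, while f(x) now sits in a constraint row, where jacobian_approximation applies.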

dhathris (Author) commented May 2, 2022

I believe the jacobian_approximation option you speak of is only used when the derivative check is enabled; it cannot be used outside of that feature, in the actual optimization portion of IPOPT. Also, default derivatives in IPOPT, like in fmincon, would be very beneficial for verifying that the implementation of a given NLP problem is correct. I am not particularly looking to use this in production, where performance is a concern; I am looking to verify the correctness of the NLP problem formulation.
As for the gradient, the derivative checker does verify the gradient as well, using the finite-differences approach, by default. So I believe that could also be reused to provide a default gradient.
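
For reference, the checker is enabled through the regular options mechanism; a minimal sketch (the helper function name is ours, assuming the usual C++ interface):

#include "IpIpoptApplication.hpp"

// hypothetical helper: turn on Ipopt's derivative checker for an application
void enable_derivative_checker(Ipopt::SmartPtr<Ipopt::IpoptApplication>& app)
{
   // compare user-provided first derivatives against finite differences
   app->Options()->SetStringValue("derivative_test", "first-order");
   // entries deviating by more than this tolerance are reported
   app->Options()->SetNumericValue("derivative_test_tol", 1e-4);
}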

svigerske (Member) commented May 2, 2022

From a look at https://github.com/coin-or/Ipopt/blob/stable/3.14/src/Interfaces/IpTNLPAdapter.cpp#L2738, I would say that this option does not only affect the derivative tester. Maybe it is confusing that the option is located in this section of the documentation.

Further, if I put a printf into the eval_jac_g of the HS071 test, i.e.,

--- a/examples/hs071_cpp/hs071_nlp.cpp
+++ b/examples/hs071_cpp/hs071_nlp.cpp
@@ -228,7 +228,7 @@ bool HS071_NLP::eval_jac_g(
    else
    {
       // return the values of the Jacobian of the constraints
-
+printf("eval_jac_g() called for values\n");
       values[0] = x[1] * x[2] * x[3]; // 0,0
       values[1] = x[0] * x[2] * x[3]; // 0,1
       values[2] = x[0] * x[1] * x[3]; // 0,2

then I get this log

This is Ipopt version 3.14.6, running with linear solver ma27.

Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:       10

eval_jac_g() called for values
eval_jac_g() called for values
Total number of variables............................:        4
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        4
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.6109693e+01 1.12e+01 5.28e-01   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
eval_jac_g() called for values
   1  1.7410406e+01 7.49e-01 2.25e+01  -0.3 7.97e-01    -  3.19e-01 1.00e+00f  1
eval_jac_g() called for values
   2  1.8001613e+01 7.52e-03 4.96e+00  -0.3 5.60e-02   2.0 9.97e-01 1.00e+00h  1
eval_jac_g() called for values
   3  1.7199482e+01 4.00e-02 4.24e-01  -1.0 9.91e-01    -  9.98e-01 1.00e+00f  1
eval_jac_g() called for values
   4  1.6940955e+01 1.59e-01 4.58e-02  -1.4 2.88e-01    -  9.66e-01 1.00e+00h  1
eval_jac_g() called for values
   5  1.7003411e+01 2.16e-02 8.42e-03  -2.9 7.03e-02    -  9.68e-01 1.00e+00h  1
eval_jac_g() called for values
   6  1.7013974e+01 2.03e-04 8.65e-05  -4.5 6.22e-03    -  1.00e+00 1.00e+00h  1
eval_jac_g() called for values
   7  1.7014017e+01 2.76e-07 2.18e-07 -10.3 1.43e-04    -  9.99e-01 1.00e+00h  1

Number of Iterations....: 7

If I also set the option jacobian_approximation to finite-difference-values, then I get this log:

This is Ipopt version 3.14.6, running with linear solver ma27.

Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:       10

Total number of variables............................:        4
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        4
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.6109693e+01 1.12e+01 5.28e-01   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.7410406e+01 7.49e-01 2.25e+01  -0.3 7.97e-01    -  3.19e-01 1.00e+00f  1
   2  1.8001613e+01 7.52e-03 4.96e+00  -0.3 5.60e-02   2.0 9.97e-01 1.00e+00h  1
   3  1.7199482e+01 4.00e-02 4.24e-01  -1.0 9.91e-01    -  9.98e-01 1.00e+00f  1
   4  1.6940955e+01 1.59e-01 4.58e-02  -1.4 2.88e-01    -  9.66e-01 1.00e+00h  1
   5  1.7003411e+01 2.16e-02 8.42e-03  -2.9 7.03e-02    -  9.68e-01 1.00e+00h  1
   6  1.7013974e+01 2.03e-04 8.65e-05  -4.5 6.22e-03    -  1.00e+00 1.00e+00h  1
   7  1.7014017e+01 2.76e-07 2.13e-07 -10.3 1.43e-04    -  9.99e-01 1.00e+00h  1

So eval_jac_g() from HS071 is no longer called to evaluate the Jacobian.
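
For completeness, the option used in the run above can be set in the ipopt.opt file (a line jacobian_approximation finite-difference-values) or programmatically; a minimal sketch using the standard IpoptApplication interface:

#include "IpIpoptApplication.hpp"

int main()
{
   Ipopt::SmartPtr<Ipopt::IpoptApplication> app = IpoptApplicationFactory();
   // request finite-difference values for the constraint Jacobian;
   // eval_jac_g is then only queried for the sparsity structure
   app->Options()->SetStringValue("jacobian_approximation", "finite-difference-values");
   app->Initialize();
   // ... register the TNLP and call app->OptimizeTNLP(nlp) as usual
   return 0;
}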

dhathris (Author) commented May 2, 2022

OK, thanks for the example. Please let me test this out on my own before we close this issue; I should get back to you by tomorrow morning.

Thanks,
Dhathri

dhathris (Author) commented May 4, 2022

Yes, this is working for me as shown in the sample example. I wish I had the same option for the gradient of the objective as well. I guess you can't win them all 😄

dhathris closed this as completed May 4, 2022
svigerske added a commit that referenced this issue Jun 23, 2022
- new option gradient_approximation
- requested in #573, shouldn't really be used
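
Applied to the sketch above, one extra line would switch the objective gradient to finite differences as well (hedged: the value name is assumed to mirror jacobian_approximation's, not confirmed in this thread):

// option added by the commit referenced above; value name assumed
app->Options()->SetStringValue("gradient_approximation", "finite-difference-values");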