
Cannot get NLPBlock via MOI.get #289

Closed
Robbybp opened this issue Mar 26, 2024 · 6 comments
Robbybp commented Mar 26, 2024

I'd like to profile function/derivative callbacks during a Knitro solve, and per the example in rosetta-opf, it seems the "standard" way to do this should be through MOI.get(unsafe_backend(model), MOI.NLPBlock()). However, this gives me an error that the NLPBlock() attribute is not supported. For example, the following script raises a GetAttributeNotAllowed error.

import JuMP
import KNITRO

model = JuMP.Model(KNITRO.Optimizer)

@JuMP.variable(model, x[1:2], start = 1.5)
@JuMP.constraint(model, x[1] * x[2]^1.1 == 1.1)
@JuMP.objective(model, Min, 2 * x[1]^2 + x[2]^2)

JuMP.optimize!(model)

nlpblock = JuMP.MOI.get(JuMP.unsafe_backend(model), JuMP.MOI.NLPBlock())
evaluator = nlpblock.evaluator

hessian_time = evaluator.eval_hessian_lagrangian_timer
println("Hessian eval. time: $hessian_time")

I can get the evaluator via model.moi_backend.optimizer.model.nlp_data.evaluator, but this (a) is a lot to remember and (b) does not seem to work when using Knitro in direct mode.

I'm not sure if what I'm attempting above is the correct way to do this, but I would like to be able to query NLP callback times in a solver-agnostic way.
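For what it's worth, one solver-independent workaround is to build a separate MOI.Nonlinear.Evaluator over the same expressions and time its callbacks directly, without going through any solver backend. This is only a sketch, assuming MOI ≥ 1.x; the eval_*_timer fields are internal details of MOI.Nonlinear.Evaluator and are not part of the documented API, so they may change between releases:

```julia
import MathOptInterface as MOI

# Build a solver-independent nonlinear model mirroring the constraints above.
nlp = MOI.Nonlinear.Model()
x = MOI.VariableIndex(1)
y = MOI.VariableIndex(2)
MOI.Nonlinear.add_constraint(nlp, :($x * $y^1.1), MOI.EqualTo(1.1))
MOI.Nonlinear.set_objective(nlp, :(2 * $x^2 + $y^2))

# Construct an AD evaluator and initialize the features we want to time.
evaluator = MOI.Nonlinear.Evaluator(nlp, MOI.Nonlinear.SparseReverseMode(), [x, y])
MOI.initialize(evaluator, [:Grad, :Jac, :Hess])

# Exercise a callback, then read the accumulated timer field.
MOI.eval_objective(evaluator, [1.5, 1.5])
println("objective eval. time: $(evaluator.eval_objective_timer)")
```

This only measures the cost of MOI's own AD backend, not whatever derivative machinery a particular solver uses internally.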

frapac (Collaborator) commented Mar 26, 2024

Indeed, a getter was missing. This should be fixed by #290

odow (Member) commented Mar 26, 2024

I think this particular case can be fixed by #290

> I'm not sure if what I'm attempting above is the correct way to do this, but I would like to be able to query NLP callback times in a solver-agnostic way.

This is not supported.

odow (Member) commented Mar 26, 2024

Closing as fixed by #290

odow closed this as completed Mar 26, 2024

odow (Member) commented Mar 26, 2024

I'll add: it's mainly not supported because we now have no control over how the solver implements derivatives. It's a property of the solver, not of JuMP or MOI. For example, querying the Hessian-of-the-Lagrangian timer doesn't make sense for AmplNLWriter or BARON.

Robbybp (Author) commented Mar 26, 2024

Yes, that makes sense. I assume all nonlinear models will have an NLPBlock and some NLP evaluator, but what the evaluator supports is up in the air?

odow (Member) commented Mar 26, 2024

> I assume all nonlinear models will have an NLPBlock

Not necessarily, and not one that they can get or set. NLPBlock was used by the "old" nonlinear interface; new models will use ScalarNonlinearFunction. Internally, the solver might decide to build an NLPBlock to query derivatives, but it doesn't have to.
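As a sketch of the newer interface (assuming MOI ≥ 1.x; the model type and expression tree below are illustrative, not how any particular solver receives the problem), a nonlinear constraint arrives at the solver as a ScalarNonlinearFunction expression tree rather than through an NLPBlock:

```julia
import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)

# x * x^1.1 == 1.1 expressed as a tree of ScalarNonlinearFunctions
f = MOI.ScalarNonlinearFunction(
    :*,
    Any[x, MOI.ScalarNonlinearFunction(:^, Any[x, 1.1])],
)
MOI.add_constraint(model, f, MOI.EqualTo(1.1))

# A solver may lower such constraints to an NLPBlock internally
# to compute derivatives, but nothing requires it to.
```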
