DNMY: experimental testing for fast resolves of NLP #3018

Closed
wants to merge 2 commits into master from od/nlp-resolve

Conversation

@odow (Member) commented Jul 14, 2022

This is NOT safe to merge because it doesn't update the expression graphs, and so will break AmplNLWriter etc.

Part of #1185

This needs some changes to Ipopt to see any potential benefits.

@odow added the Category: Nonlinear (related to nonlinear programming) and Type: Performance labels on Jul 14, 2022
codecov bot commented Jul 14, 2022

Codecov Report

Base: 97.62% // Head: 97.62% // No change to project coverage 👍

Coverage data is based on head (e5b5301) compared to base (59926a9).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #3018   +/-   ##
=======================================
  Coverage   97.62%   97.62%           
=======================================
  Files          32       32           
  Lines        4297     4297           
=======================================
  Hits         4195     4195           
  Misses        102      102           
Impacted Files Coverage Δ
src/optimizer_interface.jl 96.07% <100.00%> (ø)


@odow (Member, Author) commented Jul 18, 2022

So the problem is that some AD backends might not update their expressions (or AD calls) if a parameter value is updated after initialize:

julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> @variable(model, x)
x

julia> @NLparameter(model, p == 2)
p == 2.0

julia> @NLexpression(model, ex, p)
subexpression[1]: p

julia> @NLobjective(model, Min, x^ex)

julia> evaluator = NLPEvaluator(model)
Nonlinear.Evaluator with available features:
  * :Grad
  * :Jac
  * :JacVec
  * :Hess
  * :HessVec
  * :ExprGraph

julia> MOI.initialize(evaluator, [:ExprGraph])

julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 2.0)

julia> set_value(p, 3)
3

julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 2.0)

But adding a way to update a parameter in AbstractNLPEvaluator breaks the abstraction that you can create a Nonlinear.Model, and then convert it into an AbstractNLPEvaluator. Now you'll have to keep the model around and check for updated parameter values.
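For concreteness, a minimal sketch of the workflow this would force, using the documented MOI.Nonlinear API (the single variable and the SparseReverseMode backend are illustrative choices):

    import MathOptInterface as MOI

    x = MOI.VariableIndex(1)
    nlmodel = MOI.Nonlinear.Model()
    p = MOI.Nonlinear.add_parameter(nlmodel, 2.0)
    MOI.Nonlinear.set_objective(nlmodel, :($x^$p))
    evaluator = MOI.Nonlinear.Evaluator(nlmodel, MOI.Nonlinear.SparseReverseMode(), [x])
    MOI.initialize(evaluator, [:ExprGraph])

    # To change the parameter, the caller must have kept `nlmodel` around:
    nlmodel[p] = 3.0
    # ...and the evaluator must then notice the update before the next solve,
    # since (as shown above) its expression graph was fixed at initialize.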

@blegat (Member) commented Jul 18, 2022

This looks a lot like jump-dev/MathOptInterface.jl#1901
Maybe we should have something similar for AD backends?

@odow (Member, Author) commented Jul 20, 2022

initialize is essentially like final_touch. It says "I've finished building the model, and I'm ready to set everything up."

The problem is that we want people to be able to modify parameter values without having to rebuild everything, but the NLPEvaluators have no concept of a parameter.
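
To make the missing concept concrete, here is a purely hypothetical sketch; update_parameter! is an invented name and does not exist in MOI:

    # Hypothetical method, invented for illustration; MOI defines no such API.
    function update_parameter!(
        evaluator::MOI.Nonlinear.Evaluator,
        p::MOI.Nonlinear.ParameterIndex,
        value::Float64,
    )
        # This works only because Nonlinear.Evaluator happens to carry its
        # originating model; the generic AbstractNLPEvaluator interface gives
        # the caller no such handle.
        evaluator.model[p] = value
        return
    end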

@odow (Member, Author) commented Jul 22, 2022

So one way to move forward with this is to update MOI.Nonlinear so that initialize doesn't store expression graphs. Then we can special-case JuMP to avoid setting the evaluator if the backend is ReverseSparseAD and the previous backend was also ReverseSparseAD.

I don't think there's a generic solution, short of completely changing how the nonlinear interface is passed from JuMP to the solver.
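
As a rough illustration of that special case (all names here are invented; this is not JuMP's actual internal code):

    # Invented helper names, for illustration only.
    function _update_nlp_block!(model, new_backend)
        if model.last_ad_backend == :ReverseSparseAD &&
           new_backend == :ReverseSparseAD
            return nothing  # reuse the evaluator; only parameter values changed
        end
        model.last_ad_backend = new_backend
        # Anything else forces a full rebuild, expression graphs included.
        return _rebuild_nlp_block!(model)
    end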

@odow (Member, Author) commented Sep 2, 2022

Part of the problem is that we have an Evaluator which requires a fixed model. (You can't add new variables after creating the Evaluator object.)

We could switch to some mechanism where we pass the Nonlinear.Model object instead, but I wonder if it's simpler to just have a kwarg in optimize! that opts in to skipping the NLP update. Then people solving power flow (PF) problems in a loop can explicitly decide that things still work if they skip updating the Evaluator every time.

@odow (Member, Author) commented Sep 2, 2022

This allows a loop like:

    for i in 1:n
        # Randomly perturb each load's active and reactive demand by up to ±5%.
        for (k, load) in data["load"]
            set_value(pd_parameter[load["index"]], (1.0 + (rand() - 0.5) / 10) * load["pd_base"])
            set_value(qd_parameter[load["index"]], (1.0 + (rand() - 0.5) / 10) * load["qd_base"])
        end
        if i > 1
            # Warm-start from the previous solution.
            x = all_variables(model)
            x0 = value.(x)
            set_start_value.(x, x0)
        end
        # Skip rebuilding the NLP evaluator on every solve after the first.
        optimize!(model; _skip_nonlinear_update = i > 1)

        @assert termination_status(model) == LOCALLY_SOLVED
        @assert primal_status(model) == FEASIBLE_POINT
    end
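
Here the set_start_value lines warm-start each solve from the previous solution, and the experimental _skip_nonlinear_update keyword skips rebuilding the evaluator after the first solve, when only parameter values have changed.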

The problem with the previous "is the nonlinear model dirty" approach is that we'd also need to set the evaluator if the backend changed, even if the model didn't.
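
In other words (invented helper names, illustration only), the decision would have to combine both checks:

    # Illustration only: a dirty flag on the Nonlinear.Model alone cannot
    # decide whether the update is skippable; the AD backend matters too.
    if nonlinear_model_is_dirty(model) || ad_backend_changed(model)
        rebuild_nlp_evaluator!(model)
    end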

@odow (Member, Author) commented Nov 17, 2022

Closing because I think we need to pass the symbolic form to the solver to make this work.

x-ref jump-dev/MathOptInterface.jl#1998
x-ref jump-dev/MathOptInterface.jl#846

@odow odow closed this Nov 17, 2022
@odow odow deleted the od/nlp-resolve branch November 17, 2022 02:23