DNMY: experimental testing for fast resolves of NLP #3018
Conversation
Codecov Report: Base 97.62% // Head 97.62% // No change to project coverage 👍

```
@@ Coverage Diff @@
## master #3018 +/- ##
=======================================
Coverage    97.62%    97.62%
=======================================
Files           32        32
Lines         4297      4297
=======================================
Hits          4195      4195
Misses         102       102
```
So the problem is that some AD backends might not update their expressions (or AD calls) if a parameter value is updated after the evaluator has been initialized:

```julia
julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> @variable(model, x)
x

julia> @NLparameter(model, p == 2)
p == 2.0

julia> @NLexpression(model, ex, p)
subexpression[1]: p

julia> @NLobjective(model, Min, x^ex)

julia> evaluator = NLPEvaluator(model)
Nonlinear.Evaluator with available features:
  * :Grad
  * :Jac
  * :JacVec
  * :Hess
  * :HessVec
  * :ExprGraph

julia> MOI.initialize(evaluator, [:ExprGraph])

julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 2.0)

julia> set_value(p, 3)
3

julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 2.0)
```

But adding a way to update a parameter in …
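To confirm where the stale value lives: rebuilding the evaluator from the JuMP model picks up the new parameter value, because the expression graph (with parameters inlined) is re-extracted when the evaluator is constructed and initialized. A sketch continuing the session above (the printed expression is what we'd expect given the `^ 2.0` output earlier, not a verified transcript):

```julia
julia> evaluator = NLPEvaluator(model)  # rebuild after set_value(p, 3)
Nonlinear.Evaluator with available features:
  * :Grad
  * :Jac
  * :JacVec
  * :Hess
  * :HessVec
  * :ExprGraph

julia> MOI.initialize(evaluator, [:ExprGraph])

julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 3.0)
```

This is exactly the "rebuild everything" path we want to avoid; the question is how to propagate the update without it.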
This looks a lot like jump-dev/MathOptInterface.jl#1901.
The problem is that we want people to be able to modify parameter values without having to rebuild everything, but the …
So one way to move forward with this is to update MOI.Nonlinear so that … I don't think there's a generic solution, short of completely changing how the nonlinear interface is passed from JuMP to the solver.
This is NOT safe to merge because it doesn't update the expression graphs and so will break AmplNLWriter etc.

Part of the problem is that we have an … We could switch to some mechanism where we pass the …
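For context, a sketch of why passing the mutable nonlinear data structure itself to the solver would help (hypothetical usage; the exact hand-off mechanism is what this thread is debating). `MOI.Nonlinear.Model` stores parameter values internally, so anything holding a reference to it sees updates without re-extracting the expression graph:

```julia
using MathOptInterface
const MOI = MathOptInterface
const Nonlinear = MOI.Nonlinear

model = Nonlinear.Model()
x = MOI.VariableIndex(1)
p = Nonlinear.add_parameter(model, 2.0)  # parameter lives inside `model`
Nonlinear.set_objective(model, :($x^$p))

# Updating the parameter mutates `model` in place; a solver holding a
# reference to `model` would see 3.0 without rebuilding anything:
model[p] = 3.0
```

By contrast, once an `Expr` graph has been extracted (as `MOI.objective_expr` does above), the parameter value is baked in as a constant and later updates are invisible.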
This allows a loop like:

```julia
for i in 1:n
    for (_, load) in data["load"]
        set_value(pd_parameter[load["index"]], (1.0 + (rand() - 0.5) / 10) * load["pd_base"])
        set_value(qd_parameter[load["index"]], (1.0 + (rand() - 0.5) / 10) * load["qd_base"])
    end
    if i > 1
        x = all_variables(model)
        x0 = value.(x)
        set_start_value.(x, x0)
    end
    optimize!(model; _skip_nonlinear_update = i > 1)
    @assert termination_status(model) == LOCALLY_SOLVED
    @assert primal_status(model) == FEASIBLE_POINT
end
```

The problem with the previous "is the nonlinear model dirty" approach is that we'd also need to reset the evaluator if the backend changed, even if the model didn't.
Closing because I think we need to pass the symbolic form to the solver to make this work. x-ref jump-dev/MathOptInterface.jl#1998
This is NOT safe to merge because it doesn't update the expression graphs and so will break AmplNLWriter etc.
Part of #1185
This needs some changes to Ipopt to see any potential benefits.