
Error updating NLparameter #204

Closed
stelmo opened this issue Oct 26, 2021 · 3 comments · Fixed by #236

stelmo commented Oct 26, 2021

Hi, I seem to be unable to update parameters between re-solves when using the JuMP interface. Below is a MWE:

using JuMP
using KNITRO

# small test problem
model = JuMP.Model(
    optimizer_with_attributes(
        KNITRO.Optimizer,
        "honorbnds" => 1,
        ),
)

@variable(model, 0 <= x[1:5] <= 10)
@constraint(model, sum(x) == 10)
epsilon = @NLparameter(model, epsilon == 1)
@NLconstraint(model, x[1] + exp(x[2] * epsilon) <= 10)
@NLconstraint(model, x[2] + exp(x[3] * epsilon) <= 5)
@objective(model, Max, x[2])

# solve with first parameter
set_value(epsilon, 1e-1) # okay
optimize!(model)

# solve with second parameter
set_value(epsilon, 1e-2) # error :(
optimize!(model)

The error message is:

┌ Warning: Knitro encounters an exception in evaluation callback: UndefRefError()
└ @ KNITRO C:\Users\St. Elmo\.julia\packages\KNITRO\S3mx8\src\kn_callbacks.jl:261
ERROR: User routine for func_callback returned -500.
       Could not evaluate objective or constraints at the current point.
┌ Warning: Knitro encounters an exception in evaluation callback: UndefRefError()
└ @ KNITRO C:\Users\St. Elmo\.julia\packages\KNITRO\S3mx8\src\kn_callbacks.jl:261
ERROR: User routine for grad_callback returned -500.
       Could not evaluate first derivatives at the current point.
WARNING: Evaluation error in Knitro presolver.
         No presolve will be applied.
bar_conic_enable:        0
datacheck:               0
hessian_no_f:            1
honorbnds:               1
par_numthreads:          1
presolve:                0
xtol_iters:              3
Knitro shifted start point further inside bounds (1 variable).
┌ Warning: Knitro encounters an exception in evaluation callback: UndefRefError()
└ @ KNITRO C:\Users\St. Elmo\.julia\packages\KNITRO\S3mx8\src\kn_callbacks.jl:261
ERROR: User routine for func_callback returned -500.
       Could not evaluate objective or constraints at the current point.

EXIT: Callback function error.

===============================================================================

I am using Julia v1.6.3, JuMP v0.21.10, and KNITRO v0.10.
Any ideas on how to do this?


frapac commented Oct 26, 2021

Thanks for reporting this issue with the complete MWE. I confirm I can reproduce it locally.

A direct workaround

If this is blocking your work, I would suggest defining epsilon as a fixed optimization variable (Knitro will remove epsilon automatically during preprocessing anyway). For instance:

using JuMP
using KNITRO

# small test problem
model = JuMP.Model(
    optimizer_with_attributes(
        KNITRO.Optimizer,
        "honorbnds" => 1,
        ),
)

@variable(model, 0 <= x[1:5] <= 10)
@variable(model, epsilon == 1)
@constraint(model, sum(x) == 10)
@NLconstraint(model, x[1] + exp(x[2] * epsilon) <= 10)
@NLconstraint(model, x[2] + exp(x[3] * epsilon) <= 5)
@objective(model, Max, x[2])

# solve with first parameter
JuMP.fix(epsilon, 1e-1) # okay
optimize!(model)

# solve with second parameter
JuMP.fix(epsilon, 1e-2) # it works locally for me!
optimize!(model)

Addressing your issue

However, the workaround above does not fix the underlying bug, which I believe is inside Knitro's MOI wrapper. The issue arises inside the JuMP.NLPEvaluator: the following code breaks after the first solve:

# Evaluate the objective manually through the internal NLPEvaluator.
x0 = rand(5)
jump_evaluator = model.moi_backend.optimizer.model.nlp_data.evaluator
MOI.eval_objective(jump_evaluator, x0)

ERROR: UndefRefError: access to undefined reference
Stacktrace:
 [1] getproperty
   @ ./Base.jl:33 [inlined]
 [2] macro expansion
   @ ~/.julia/packages/JuMP/klrjG/src/nlp.jl:735 [inlined]
 [3] macro expansion
   @ ./timing.jl:287 [inlined]
 [4] eval_objective(d::NLPEvaluator, x::Vector{Float64})
   @ JuMP ~/.julia/packages/JuMP/klrjG/src/nlp.jl:734

Your MWE works fine with Ipopt, so I don't think the issue is inside JuMP. I suspect a nasty side-effect within Knitro's MOI wrapper.
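For comparison, here is the same parameter-update pattern with Ipopt (a sketch, assuming Ipopt.jl is installed; with Ipopt both solves complete without the UndefRefError):

```julia
using JuMP
using Ipopt

model = Model(Ipopt.Optimizer)

@variable(model, 0 <= x[1:5] <= 10)
@constraint(model, sum(x) == 10)
epsilon = @NLparameter(model, epsilon == 1)
@NLconstraint(model, x[1] + exp(x[2] * epsilon) <= 10)
@NLconstraint(model, x[2] + exp(x[3] * epsilon) <= 5)
@objective(model, Max, x[2])

# First solve.
set_value(epsilon, 1e-1)
optimize!(model)

# Second solve with an updated parameter: no error with Ipopt.
set_value(epsilon, 1e-2)
optimize!(model)
```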


odow commented Aug 17, 2022

I have a guess at what is wrong:

You initialize the evaluator only if nlp_loaded is false:

if !isa(model.nlp_data.evaluator, EmptyNLPEvaluator) && !model.nlp_loaded
    # Instantiate NLPEvaluator once and for all.
    features = MOI.features_available(model.nlp_data.evaluator)::Vector{Symbol}
    has_hessian = (:Hess in features)
    has_hessvec = (:HessVec in features)
    has_nlp_objective = model.nlp_data.has_objective
    num_nlp_constraints = length(model.nlp_data.constraint_bounds)
    has_nlp_constraints = (num_nlp_constraints > 0)
    # Build initial features for the solver.
    init_feat = Symbol[]
    has_nlp_objective && push!(init_feat, :Grad)
    # Knitro cannot mix the Hessian callback with the Hessian-vector callback.
    if has_hessian
        push!(init_feat, :Hess)
    elseif has_hessvec
        push!(init_feat, :HessVec)
    end
    if has_nlp_constraints
        push!(init_feat, :Jac)
    end
    MOI.initialize(model.nlp_data.evaluator, init_feat)

But setting the NLPBlockData doesn't reset nlp_loaded to false:

function MOI.set(model::Optimizer, ::MOI.NLPBlock, nlp_data::MOI.NLPBlockData)
    return model.nlp_data = nlp_data
end

So the first solve works okay, then JuMP sets a new NLPBlockData for the second solve, but KNITRO never re-initializes the evaluator, which results in the undefined reference.

This can probably be fixed by setting model.nlp_loaded = false in the MOI.set method of NLPBlockData.
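A minimal sketch of that fix inside the MOI wrapper (assuming the field names shown in the snippets above):

```julia
function MOI.set(model::Optimizer, ::MOI.NLPBlock, nlp_data::MOI.NLPBlockData)
    model.nlp_data = nlp_data
    # Mark the NLP as not-yet-loaded so the next solve re-runs
    # MOI.initialize on the fresh evaluator instead of reusing
    # stale references from the previous NLPBlockData.
    model.nlp_loaded = false
    return
end
```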

Related issues: jump-dev/JuMP.jl#1185, jump-dev/JuMP.jl#3018

odow added a commit that referenced this issue Aug 25, 2022
odow closed this as completed in #236 on Aug 25, 2022
stelmo commented Aug 25, 2022

Thank you very much!
