
Passing parameters and second derivatives #154

Closed · optimizationoptimist opened this issue Nov 23, 2020 · 14 comments

optimizationoptimist commented Nov 23, 2020

I have posted this question on Discourse (see link here). I am using JuMP for an optimisation problem that uses LD_SLSQP from NLopt. I would like to use NLopt directly rather than calling it from JuMP, but I have not been able to figure out how to do this correctly from the documentation. Below I have given my JuMP code and my attempt at implementing the same problem in NLopt directly.

The part I find difficult is how to pass parameters to the functions, and how to supply first and second derivatives, when using NLopt. Please let me know what corrections I need to make to my NLopt code.

function myfunc_nlopt(c, t)
	f(x) = sum(c[i] / (1 + x)^t[i] for i in 1:length(c))
	df(x) = sum(-t[i] * c[i] / (1 + x)^(t[i] + 1) for i in 1:length(c))
	ddf(x) = sum((t[i]^2 + t[i]) * c[i] / (1 + x)^(t[i] + 2) for i in 1:length(c))
	model = Model(NLopt.Optimizer)
	set_optimizer_attribute(model, "algorithm", :LD_SLSQP)
	@variable(model, -100.0 <= x <= 100.0)
	JuMP.register(model, :f, 1, f, df, ddf)
	@NLobjective(model, Min, f(x))
	@NLconstraint(model, f(x) == 0.0)
	JuMP.optimize!(model)
	rate = value(x)
	return rate
end

Runtime code

c= vcat(-1.2e6,ones(1200)*10000)   
t= collect(0:length(c)-1)/12
myfunc_nlopt(c,t)    

My attempt at implementing it in NLopt directly

function nlopt_direct(x::Vector, grad::Vector,c,t)
	grad[1] = sum(@views -t[i]*c[i]/(1+x)^(t[i]+1) for i in 1:length(c))
	grad[2] = sum(@views (t[i]^2+t[i])*c[i]/(1+x)^(t[i]+2) for i in 1:length(c))
	return sum(@views c[i]/(1+x)^t[i] for i in 1:length(c))
end
function myconstraint(x::Vector,c,t)
	sum(@views c[i]/(1+x)^t[i] for i in 1:length(c))
end
opt = Opt(:LD_SLSQP)
opt.xtol_abs=1e-9
opt.min_objective=nlopt_direct
equality_constraint!(opt,(x,c,t) ->myconstraint(x,c,t),0.0)
(minf,minx,ret) = NLopt.optimize(opt,[-10.0,10.0])
@stevengj (Collaborator) commented:

To pass parameters, just use a closure:

opt.min_objective= (x, grad) -> nlopt_direct(x, grad, c, t)

NLopt doesn't use second derivatives.
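
A minimal, self-contained sketch of this closure pattern on a toy one-variable problem (the name f_obj and the toy data below are hypothetical, not from the snippets above; note that NLopt may call back with an empty grad on some evaluations):

using NLopt

c, t = [1.0, 2.0], [0.0, 1.0]  # toy parameters captured by the closure

function f_obj(x, grad, c, t)
    # gradient of sum(c[i]/(1+x[1])^t[i]) with respect to x[1]
    length(grad) > 0 && (grad[1] = sum(-t[i] * c[i] / (1 + x[1])^(t[i] + 1) for i in 1:length(c)))
    return sum(c[i] / (1 + x[1])^t[i] for i in 1:length(c))
end

opt = Opt(:LD_SLSQP, 1)            # one optimization variable
opt.lower_bounds = [0.0]
opt.upper_bounds = [10.0]
opt.min_objective = (x, grad) -> f_obj(x, grad, c, t)  # closure captures c and t
(minf, minx, ret) = NLopt.optimize(opt, [0.5])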

@optimizationoptimist (Author) commented:

Thank you.

I have changed that now, but I get back the same answer as the bounds given on the variable. Please let me know what changes I need to make.

function nlopt_direct(x::Vector, grad::Vector,c,t)
grad[1] = sum(-t[i]*c[i]/(1+x)^(t[i]+1) for i in 1:length(c))
return sum(c[i]/(1+x)^([i) for i in 1:length(c))
end
function myconstraint(x::Vector,c,t,)
sum(c[i]/(1+x)^([i) for i in 1:length(c))
end
c=vcat(-1.2e6,ones(1200)*10000
t = collect(0:length
opt = opt(:LD_SLSQP,2)

OPT.MIN_OBJECTIVE = (x,grad) -> nlopt_direct(x,grad,c,t)
equality_constraint!(opt,(x,c,t) ->myconstraint(x,c,t),0.0)
opt.min
(minf,minx,ret) = NLopt.optimize(opt,[-100.0,10000.0])
end

@stevengj (Collaborator) commented Nov 26, 2020

You are only assigning a single component of grad — what about grad[2]?

(Your code snippet above is not runnable…)

@optimizationoptimist (Author) commented Nov 26, 2020

> You are only assigning a single component of grad — what about grad[2]?

x is just one variable and not a vector, so there is only one gradient component, grad[1].

> (Your code snippet above is not runnable…)

I made a few mistakes in copying over the code. Please find the code that reproduces the Inf values below. I have also given the equivalent code in JuMP, which produces the correct result. These two points are unclear to me:

1. How do I specify an equality constraint requiring the value of the function to be zero, when there is just one variable x?
2. How do I specify bounds on the variable?

function nlopt_direct(x::Vector, grad::Vector,c,t)
grad[1] = sum(-t[i]*c[i]/(1+x)^(t[i]+1) for i in 1:length(c))
return sum(c[i]/(1+x)^[i] for i in 1:length(c))
end
function myconstraint(x::Vector, grad::Vector,c,t)
    grad[1] = sum(-t[i]*c[i]/(1+x)^(t[i]+1) for i in 1:length(c))
return sum(c[i]/(1+x)^[i] for i in 1:length(c))
end
c= vcat(-1.2e6,ones(1200)*10000)   
t= collect(0:length(c)-1)/12
opt = Opt(:LD_SLSQP,1)

opt.min_objective = (x, grad) -> nlopt_direct(x, grad, c, t)
equality_constraint!(opt,(x,grad) ->myconstraint(x,grad,c,t),0.0)

(optf,optx,ret) = NLopt.optimize!(opt, [100.0])

Code in JuMP that works as expected

function xirr_nlopt(c, t)
	f(x) = sum(c[i] / (1 + x)^t[i] for i in 1:length(c))
	df(x) = sum(-t[i] * c[i] / (1 + x)^(t[i] + 1) for i in 1:length(c))
	ddf(x) = sum((t[i]^2 + t[i]) * c[i] / (1 + x)^(t[i] + 2) for i in 1:length(c))
	model = Model(NLopt.Optimizer)
	set_optimizer_attribute(model, "algorithm", :LD_SLSQP)
	set_optimizer_attribute(model, MOI.Silent(), true)
	@variable(model, -100.0 <= x <= 100.0)
	JuMP.register(model, :f, 1, f, df, ddf)
	@NLobjective(model, Min, f(x))
	@NLconstraint(model, f(x) == 0.0)
	JuMP.optimize!(model)
	rate = value(x)
	return rate
end
c= vcat(-1.2e6,ones(1200)*10000)   
t= collect(0:length(c)-1)/12
xirr_nlopt(c,t)    # This produces the correct answer of 0.10470788064468868

I also noticed an error in the example given in the documentation here: a variable model is used to define the JuMP model, but the variables and constraints are defined by reference to a variable m.

model = Model(NLopt.Optimizer)
@variable(m, x1)

@stevengj (Collaborator) commented Nov 26, 2020

You used Opt(:LD_SLSQP, 2) in your first example, so you were optimizing over 2 variables, and both x and the gradient have two components.

> How to specify bounds on the variable?

opt.lower_bounds and opt.upper_bounds
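
For example, for the one-variable problem above (NLopt bounds are vectors, one entry per variable):

opt.lower_bounds = [-100.0]
opt.upper_bounds = [100.0]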

@stevengj (Collaborator) commented:

Even if you have only a single variable, x is still a vector. The code sum(c[i]/(1+x)^[i] for i in 1:length(c)) makes no sense.

@stevengj (Collaborator) commented:

I'm closing this as it's not an actual issue with NLopt, just a question.

@optimizationoptimist (Author) commented Nov 26, 2020

> Even if you have only a single variable, x is still a vector. The code sum(c[i]/(1+x)^[i] for i in 1:length(c)) makes no sense.

Thanks. Is there anything in the documentation that explains how to express an objective function that is created by a loop/comprehension, such as the code sum(c[i]/(1+x)^[i] for i in 1:length(c)), in a form suitable for NLopt.jl?

@stevengj (Collaborator) commented Nov 28, 2020

You just need to write a Julia function that returns a number. NLopt doesn't need a special form of loop syntax — it is not JuMP. If you want to learn ordinary Julia syntax, see https://julialang.org/learning/

sum(c[i]/(1+x)^[i] for i in 1:length(c)) is not valid Julia syntax for working with an array x — I'm not sure what you think it is supposed to mean.
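
Presumably the intended expression is the following, as a sketch (using the c and t defined earlier, indexing t, and using the scalar x[1], since NLopt always passes x as a vector):

f(x::Vector) = sum(c[i] / (1 + x[1])^t[i] for i in 1:length(c))  # plain Julia, returns a number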

@optimizationoptimist (Author) commented:

Hi there, thanks for your help. I think there is some misunderstanding. I can write the objective function algebraically, if that is what you are referring to, but I have difficulty understanding the syntax of the inputs required by NLopt.

I have defined the objective function and its derivative algebraically, and I replaced x with x[1] since it is a vector (shown below). I am still getting the forced-stop error message, most likely because I am defining the equality constraint incorrectly. The equality constraint is that the objective function obj(x,c,t) should be equal to zero.

In my case there is one variable, so I am not sure of the correct way to define the equality constraint in NLopt. For a function of two variables, I could return something like obj(x,c,t) - x[2] from the my_constraint function.

using NLopt
using LinearAlgebra
c= vcat(-1.2e6,ones(1200)*10000)
t= collect(0:length(c)-1)/12

function obj(x, c, t)
    X_power = Vector{Float64}(undef, length(t))
    for i in eachindex(t)
        X_power[i] = 1 / (1 + x[1])^t[i]
    end
    return dot(c, X_power)
end

function deriv(x, c, t)
    X_power = Vector{Float64}(undef, length(t))
    T = similar(X_power)
    for i in eachindex(t)
        X_power[i] = 1 / (1 + x[1])^(t[i] + 1)  # exponent t[i]+1, matching df above
        T[i] = -t[i] * c[i]
    end
    return dot(T, X_power)
end

function nlopt_direct(x::Vector, grad::Vector)
    grad[1] = deriv(x, c, t)
    return obj(x, c, t)
end

function my_constraint(x::Vector, grad::Vector)
    # Equality constraint: the objective function should be equal to zero
    grad[1] = deriv(x, c, t)
    return obj(x, c, t)
end

opt = Opt(:LD_SLSQP, 1)
opt.min_objective = (x, grad) -> nlopt_direct(x, grad)
# The equality constraint has probably not been defined correctly
equality_constraint!(opt, (x, grad) -> my_constraint(x, grad), 0.0)
minx = optimize(opt, [0.5])

@mzaffalon (Contributor) commented:

SLSQP does not pass the gradient vector grad on every call: you get :FORCED_STOP because you are trying to write to a 0-length vector. You should check whether this is the case by testing length(grad) > 0:

function nlopt_direct(x::Vector,grad::Vector)
    length(grad) > 0 && (grad[1] = deriv(x,c,t))
    obj(x,c,t)
end

function my_constraint(x::Vector,grad::Vector)
    length(grad) > 0 && (grad[1] = deriv(x,c,t))
    obj(x,c,t)
end

I also wish the error message were a bit clearer, because I stumbled over the same problem as you in the past.

As for the rest, it is sufficient to write

opt.min_objective = nlopt_direct
equality_constraint!(opt,my_constraint,0.0)
(minf, minx, ret) = optimize(opt,[0.5])

When I execute your code, I get (-1.1564428858967801e-9, [0.10470788064468936], :ROUNDOFF_LIMITED).
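
For reference, here is a consolidated, runnable version of the corrected program (a sketch assembled from the snippets above; the constraint reuses the same function and gradient as the objective, since both are f(x) itself):

using NLopt

c = vcat(-1.2e6, ones(1200) * 10000)
t = collect(0:length(c)-1) / 12

obj(x, c, t) = sum(c[i] / (1 + x[1])^t[i] for i in 1:length(c))
deriv(x, c, t) = sum(-t[i] * c[i] / (1 + x[1])^(t[i] + 1) for i in 1:length(c))

function nlopt_direct(x::Vector, grad::Vector)
    length(grad) > 0 && (grad[1] = deriv(x, c, t))  # grad may be empty on some calls
    return obj(x, c, t)
end

opt = Opt(:LD_SLSQP, 1)
opt.min_objective = nlopt_direct
equality_constraint!(opt, nlopt_direct, 0.0)  # enforce f(x) == 0
(minf, minx, ret) = optimize(opt, [0.5])      # minx[1] ≈ 0.10470788…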

@optimizationoptimist (Author) commented Dec 2, 2020

> by testing with length(grad)>0

Thank you! This solves my problem.

@optimizationoptimist (Author) commented Dec 2, 2020

@mzaffalon I also noticed that one of the parameters listed in the documentation, constrtol_abs, is not accessible using the syntax provided: opt.constrtol_abs = 1.0e-9 gives an error, whereas other optional parameters such as opt.ftol_abs = 1.0e-9 work.

@mzaffalon (Contributor) commented:

I am afraid I cannot help you there; I have never used the NLopt MathOptInterface wrapper.
