Switch Nonlinear API to JuMP.GenericNonlinearExpr
#308
Conversation
Here are some typical NLP test cases:

```julia
using InfiniteOpt, Distributions

function optimal_control()
    γ, β, N = 0.303, 0.727, 1e5
    y0 = Dict(:s => 1 - 1/N, :e => 1/N, :i => 0, :r => 0)
    model = InfiniteModel()
    @infinite_parameter(model, t ∈ [0, 200], num_supports = 201)
    @infinite_parameter(model, ξ ~ Uniform(0.1, 0.6), num_supports = 100)
    @variable(model, y[keys(y0)] ≥ 0, Infinite(t, ξ))
    @variable(model, 0 ≤ u ≤ 0.8, Infinite(t), start = 0.2)
    set_upper_bound(y[:i], 0.02)
    @objective(model, Min, ∫(u, t))
    @constraint(model, [k in keys(y0)], y[k](0, ξ) == y0[k])
    @constraint(model, ∂(y[:s], t) == (u - 1) * β * y[:s] * y[:i])
    @constraint(model, ∂(y[:e], t) == (1 - u) * β * y[:s] * y[:i] - ξ * y[:e])
    @constraint(model, ∂(y[:i], t) == ξ * y[:e] - γ * y[:i])
    @constraint(model, ∂(y[:r], t) == γ * y[:i])
    build_optimizer_model!(model, check_support_dims = false)
end

function nested_measures()
    model = InfiniteModel()
    @infinite_parameter(model, x[1:3] in [-1, 1], independent = true, num_supports = 10)
    @variable(model, y, Infinite(x))
    @variable(model, q, Infinite(x[1]))
    @variable(model, z[1:1000])
    @objective(model, Min, sum(z) + ∫(sin(q) / q + ∫(∫(log(y^3.6 / cos(y)) + y^2 + y, x[3]), x[2]), x[1]))
    @constraint(model, [i in 1:1000], z[i] / q - exp(sin(y)) <= 34)
    build_optimizer_model!(model)
end

function large_expressions()
    model = InfiniteModel()
    @infinite_parameter(model, t in [0, 1], num_supports = 10)
    @variable(model, y[1:50000], Infinite(t))
    @variable(model, z[1:50000])
    @objective(model, Max, sum(2z[i]^2 + ∫(sin(1/y[i]), t) for i = 1:50000))
    @constraint(model, prod(ifelse(z[i] <= y[i]^3, log(y[i] / i), z[i] / cos(y[i])) for i in 1:50000) <= 42)
    @constraint(model, sum(z[i]^i + log(y[i]) for i in 1:50000) == 0)
    build_optimizer_model!(model)
end

@time optimal_control()
@time nested_measures()
@time large_expressions()
```

With this pull request and the most current version of jump-dev/JuMP.jl#3106, I get:

```
9.274992 seconds (46.83 M allocations: 2.204 GiB, 10.96% gc time, 48.40% compilation time)
20.659800 seconds (137.31 M allocations: 6.034 GiB, 47.38% gc time)
41.839613 seconds (225.99 M allocations: 10.678 GiB, 39.41% gc time)
```

With the InfiniteOpt master, I get:

```
7.100326 seconds (38.98 M allocations: 2.076 GiB, 13.08% gc time)
46.806975 seconds (184.33 M allocations: 10.240 GiB, 16.18% gc time)
ERROR: StackOverflowError:
```

So for problems with lots of short nonlinear expressions, the master appears to do better. Otherwise, expressions with lots of summation terms do better with this pull request. Moreover, deeply nested expressions lead to stack overflow errors. I wonder if we could do even better by more efficiently handling …
Updated timings with the current jump-dev/JuMP.jl#3106 (with the new …):

```
3.576631 seconds (34.51 M allocations: 1.538 GiB, 25.67% gc time)
13.754643 seconds (133.18 M allocations: 5.612 GiB, 35.05% gc time)
45.391207 seconds (198.35 M allocations: 65.224 GiB, 24.20% gc time)
```

So, this makes a great improvement. The 1st benchmark is about 3 times faster!
Nice. Did you have any ideas why it's so much faster?

My intuition is that with the reduced tree size for sums, iterating over the tree is appreciably faster, since it is faster to iterate over an array than a bunch of nested expressions. InfiniteOpt will iterate over each expression a handful of times, which magnifies this improvement.
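To illustrate the intuition above with a toy sketch in plain Julia (hypothetical types for demonstration only, not JuMP's actual `GenericNonlinearExpr`): a sum stored as nested binary nodes has depth proportional to the number of terms, while an n-ary node keeps the tree shallow and reduces each traversal to a flat array pass.

```julia
# Toy expression types, purely for illustration (not JuMP's internals).
abstract type ToyExpr end

struct Var <: ToyExpr
    id::Int
end

# x1 + (x2 + (x3 + ...)): depth grows with the number of terms.
struct BinarySum <: ToyExpr
    left::ToyExpr
    right::ToyExpr
end

# +(x1, x2, ..., xn): depth stays constant regardless of term count.
struct NArySum <: ToyExpr
    args::Vector{ToyExpr}
end

depth(::Var) = 1
depth(e::BinarySum) = 1 + max(depth(e.left), depth(e.right))
depth(e::NArySum) = 1 + maximum(depth.(e.args))

nested = foldl(BinarySum, Var.(1:1000))  # depth(nested) == 1000
flat = NArySum(Var.(1:1000))             # depth(flat) == 2
```

Walking `nested` requires 1000 levels of pointer chasing (and recursive calls) per traversal, whereas `flat` is one loop over an array, which is why repeated traversals magnify the difference.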
Rerunning this with the latest change to jump-dev/JuMP.jl#3106 (w/ jump-dev/JuMP.jl@472daa3), I get:

```
4.061993 seconds (34.51 M allocations: 1.538 GiB, 23.89% gc time)
15.238920 seconds (133.20 M allocations: 5.612 GiB, 29.53% gc time)
ERROR: StackOverflowError:
Stacktrace:
  [1] check_belongs_to_model(expr::NonlinearExpr{VariableRef}, model::Model)
    @ JuMP C:\Users\bbgui\.julia\packages\JuMP\nY991\src\nlp_expr.jl:389
  [2] check_belongs_to_model(expr::NonlinearExpr{VariableRef}, model::Model) (repeats 32640 times)
    @ JuMP C:\Users\bbgui\.julia\packages\JuMP\nY991\src\nlp_expr.jl:390
  [3] set_objective_function
    @ C:\Users\bbgui\.julia\packages\JuMP\nY991\src\objective.jl:125 [inlined]
  [4] set_objective
    @ C:\Users\bbgui\.julia\packages\JuMP\nY991\src\objective.jl:175 [inlined]
  [5] _set_objective(trans_model::Model, sense::MathOptInterface.OptimizationSense, expr::NonlinearExpr{VariableRef})
    @ InfiniteOpt.TranscriptionOpt C:\Users\bbgui\Documents\InfiniteOpt.jl\src\TranscriptionOpt\transcribe.jl:563
  [6] transcribe_objective!(trans_model::Model, inf_model::InfiniteModel)
    @ InfiniteOpt.TranscriptionOpt C:\Users\bbgui\Documents\InfiniteOpt.jl\src\TranscriptionOpt\transcribe.jl:586
  [7] build_transcription_model!(trans_model::Model, inf_model::InfiniteModel; check_support_dims::Bool)
    @ InfiniteOpt.TranscriptionOpt C:\Users\bbgui\Documents\InfiniteOpt.jl\src\TranscriptionOpt\transcribe.jl:894
  [8] #build_optimizer_model!#101
    @ C:\Users\bbgui\Documents\InfiniteOpt.jl\src\TranscriptionOpt\optimize.jl:34 [inlined]
  [9] build_optimizer_model!
    @ C:\Users\bbgui\Documents\InfiniteOpt.jl\src\TranscriptionOpt\optimize.jl:32 [inlined]
 [10] build_optimizer_model!(model::InfiniteModel; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ InfiniteOpt C:\Users\bbgui\Documents\InfiniteOpt.jl\src\optimize.jl:534
 [11] build_optimizer_model!
    @ C:\Users\bbgui\Documents\InfiniteOpt.jl\src\optimize.jl:529 [inlined]
 [12] large_expressions()
    @ Main .\REPL[6]:9
 [13] top-level scope
    @ .\timing.jl:220 [inlined]
 [14] top-level scope
    @ .\REPL[18]:0
```

So there are some minor degradations, and I now get a stack overflow error with large sums/products since, I presume, the expression is no longer flattened. Note that I don't explicitly use the new …
The code causing the stack overflow errors is in InfiniteOpt.jl/src/expressions.jl, lines 779 to 799 at b41bf51.
I haven't figured out an easy way for this to not use recursion, since I don't always know a priori what the variable reference type of the new mapped expression will be without significant workflow changes. Currently, it builds expressions with …
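A general pattern for sidestepping recursion-depth limits (sketched here in plain Julia with a hypothetical toy tree, not InfiniteOpt's actual mapping code) is to traverse with an explicit work stack instead of the call stack, so arbitrarily deep expressions cannot overflow:

```julia
# Toy expression node; the names are illustrative, not InfiniteOpt's API.
struct Node
    head::Symbol
    args::Vector{Union{Node, Float64}}
end

# Count the numeric leaves iteratively: an explicit stack replaces the
# recursive calls, so tree depth no longer bounds what we can process.
function count_leaves(root::Node)
    count = 0
    stack = Any[root]
    while !isempty(stack)
        item = pop!(stack)
        if item isa Node
            append!(stack, item.args)   # defer children instead of recursing
        else
            count += 1                  # hit a numeric leaf
        end
    end
    return count
end

# A pathologically deep chain that would overflow a naive recursive walk:
deep = Node(:+, Union{Node, Float64}[1.0])
for _ in 1:200_000
    deep = Node(:+, Union{Node, Float64}[deep, 1.0])
end
count_leaves(deep)  # → 200001, with no StackOverflowError
```

The wrinkle noted above still applies: when the output type of the mapped expression is only known after visiting leaves, an iterative rewrite also needs a second pass (or a deferred-construction queue) rather than a single stack loop.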
Yeah, I should have talked to you about this. My problem was that flattening the expressions eagerly in construction had quite a performance hit, because we don't know if we can …
That's fair, and I am working on a fix, but there are some nuances with …
For your recursion issue: so you can only tell the variable type after …
Yes, under the current workflow …
Now with the …

With the previous flattening performed via operator overloading, we got:

```
3.576631 seconds (34.51 M allocations: 1.538 GiB, 25.67% gc time)
13.754643 seconds (133.18 M allocations: 5.612 GiB, 35.05% gc time)
45.391207 seconds (198.35 M allocations: 65.224 GiB, 24.20% gc time)
```

Now I get:

```
3.062378 seconds (34.35 M allocations: 1.534 GiB, 13.63% gc time)
12.373132 seconds (133.74 M allocations: 5.631 GiB, 31.39% gc time)
27.535335 seconds (297.36 M allocations: 12.636 GiB, 29.51% gc time)
```

So, with the large summation and product benchmark (the last one), we do see a significant reduction in memory and time. The other 2 are essentially the same. Definitely a win then. Great work @odow! I still need to remove the recursion in …
👍 To recap, we went from the original:
to
This is great!
Yes, it is! The new interface goes a long way toward better enabling extensions.
The documentation is now only failing because of bad links that will be resolved once JuMP makes its next release. Thus, this will be ready to merge once JuMP releases …
Woooo!!!
This incorporates jump-dev/JuMP.jl#3106.
TODO list: