Lazy and generated constraint model much slower than complete model #618
Comments
I have made changes which I hope do not break anything. Lazy looks OK; Gen still does not work, and I will look at that, but it is slightly awkward debugging C++ called from Python. In debug mode my Complete model took 47 seconds for N=100 and Lazy took 38.
Thanks again for the rapid response and changes. This new version solves N=10 with lazy constraints quickly (0.85 CPU seconds). However, for N=11 the lazy variant does not converge within a 30 second time limit (the complete model does in 0.1 seconds) and was not able to find the optimal solution (it selects x_{N-2}=x_{N-1}=1, but x_N=0).
Looking at the solver's output, I'm surprised by the reported gap of 0.26 -- shouldn't that be 0.045 (= 0.211 - 0.1667)? Moreover, the constraint generation model now fails to find the optimal solution for N=8 and aborts after the 30 s time limit with a sub-optimal solution of 0.375 instead of 0.267.
I knew I might have made the Gen model worse; I was going to dig into that today.
Just ran Lazy with N=11. Result: Optimal solution found.
Have made more changes. Works for me with 8, 9, 10, 11, 50 and 100. One point that might be of interest to people using Cbc master and python-mip: to help me debug, I modified Cbc_C_Interface, which is used by python-mip. Now, if the environment variable COIN_CBC_OPTIONS is set, the code reads standard Cbc options, which adds some items not directly accessible from the C interface. The format is one parameter per line, without the leading -. Lines starting with * are comments. Any option that you can see by going into cbc and entering ? can be used.
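A sketch of the file format described above, assuming the mechanism works as stated (the file name here is made up; any readable path should do):

```shell
# Hypothetical file name; any readable path works.
cat > cbc_options.txt <<'EOF'
* lines starting with '*' are comments
* one option per line, without the leading '-'
increment 1.0e-6
seconds 30
EOF
export COIN_CBC_OPTIONS="$PWD/cbc_options.txt"
```

The option names (`increment`, `seconds`, ...) are the same ones the standalone cbc binary lists when you enter `?` at its prompt.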
Many thanks for these changes; Cbc now solves up to N=100 with ease -- and the lazy constraint variant is extremely fast, actually the fastest of the 3 variants. I guess the reason that the generated constraint model is slower is the Python interoperability overhead, isn't it? Or could there be something else? One thing I noticed, though, is that the lazy variant reports a sub-optimal solution as optimal. For example, with N=105 it selects x_N=x_{N-2}=1 and x_{N-1}=0 with an objective value of 0.0192325473879 instead of 0.0191391941392. Is that within some epsilon tolerance, or is this an issue? (Similarly for N=107 and N=109; interestingly, the even numbers 106 and 108 are not affected, nor are N>110, though I have not tested extensively.) Again, thanks a lot for your work to improve the lazy constraint generation feature. I hope debugging the solver when called from Python was not too bothersome, or at least that the minimalistic examples made up for it and helped to identify the root causes quickly.
The default cutoff increment is 0.0001, and 0.0191391 is not 0.0001 better than 0.019232, which is why you got that behaviour. I will get round to adding a setCutoffIncrement to the C interface, unless someone else wants to get involved with the C interface. However, you can work around the problem: I have a file /tmp/options_py.txt which contains just one line, increment 1.0e-6, and I export COIN_CBC_OPTIONS="/tmp/options_py.txt"
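To see why the N=105 solution was kept, here is a small illustration (not Cbc's actual code) of the acceptance test a cutoff increment implies, using the two objective values from this thread:

```python
# Illustration (not Cbc's actual code) of how a cutoff increment can make
# the solver discard a solution that is only marginally better.
incumbent = 0.0192325473879   # value reported as optimal for N=105
candidate = 0.0191391941392   # true optimum

def improves(candidate, incumbent, increment):
    # For minimisation, a new solution must beat the incumbent
    # by at least `increment` to be accepted.
    return candidate <= incumbent - increment

print(improves(candidate, incumbent, 1e-4))  # default increment -> False
print(improves(candidate, incumbent, 1e-6))  # with increment 1.0e-6 -> True
```

The gap between the two values is about 9.3e-5, just below the default 1e-4, which matches the observation that only some N were affected.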
While I do understand a certain overhead of lazy constraints, the following example seems overly drastic to me. Maybe there is some issue with lazy constraint handling, or maybe I am missing certain configuration options.
In the context of a Benders decomposition problem, I observed that the solver never found a feasible solution, since every cut slightly changed the objective function of a solution candidate, thus rendering that candidate infeasible. In Benders decomposition, a solution could easily be reported by adjusting the achieved objective value (is that possible from within a lazy constraint generator callback?), but such adjustments are probably not easy in a general situation, are they?
The following is a simple Benders-like problem that I investigated to better understand the situation. I was surprised how differently the solver behaved for lazy and generated constraints. I wonder if that is expected behaviour. Let's consider the simple problem:
The optimal solution is:
Comparing complete, lazy, and constraint generation models:
The complete model even solves a problem with N=100 easily, in 10 seconds.
The model with N=10 has a total of 46 constraints -- which are all the feasible solutions. It seems that the generated constraints are not added to the root model, though, and are regenerated many times. I would expect that with such a small number of constraints, the solver would eventually generate them all and then behave similarly to the complete model. Is there some configuration to improve the situation for such problems?
The experiment was run with the latest version (efdc0d9) from master, including the fix for #616.
Attached Files: Minimal Example using python-mip, and log output for N=8/9.
Log output for N=10 is not attached as it seems too verbose and probably not more helpful than N=9.
The constraint generation callback prints each time it is called with diagnostic information.
lazy-bender.py.txt
lazy-bender-n8.log
lazy-bender-n9.log