
add generic algorithm parameters API #365

Merged (8 commits) on Nov 19, 2020
7 changes: 7 additions & 0 deletions doc/docs/NLopt_Algorithms.md
@@ -311,6 +311,13 @@ At each point **x**, MMA forms a local approximation using the gradient of *f* a

I also implemented another CCSA algorithm from the same paper, `NLOPT_LD_CCSAQ`: instead of constructing local MMA approximations, it constructs simple quadratic approximations (or rather, affine approximations plus a quadratic penalty term to stay conservative). This is the ccsa_quadratic code. It seems to have similar convergence rates to MMA for most problems, which is not surprising as they are both essentially similar. However, for the quadratic variant I implemented the possibility of [preconditioning](NLopt_Reference.md#preconditioning-with-approximate-hessians): including a user-supplied Hessian approximation in the local model. It is easy to incorporate this into the proof in Svanberg's paper, and to show that global convergence is still guaranteed as long as the user's "Hessian" is positive semidefinite, and in practice it can greatly improve convergence if the preconditioner is a good approximation for the real Hessian (at least for the eigenvectors of the largest eigenvalues).

The `NLOPT_LD_MMA` and `NLOPT_LD_CCSAQ` algorithms support the following internal parameters, which can be
specified using the [`nlopt_set_param` API](NLopt_Reference.md#algorithm-specific-parameters):

* `inner_maxeval`: If > 0, gives the maximum number of "inner" iterations of the algorithm, in which it tries to ensure that its approximations are "conservative"; defaults to `0` (no limit). It can be useful to specify a finite number (e.g. `5` or `10`) for this parameter if inaccuracies in your gradient or objective function are preventing the algorithm from making progress.
* `dual_algorithm` (defaults to `NLOPT_LD_MMA`), `dual_ftol_rel` (defaults to `1e-14`), `dual_ftol_abs` (defaults to `0`), `dual_xtol_rel` (defaults to `0`), `dual_xtol_abs` (defaults to `0`), `dual_maxeval` (defaults to `100000`): These specify how the algorithm internally solves the "dual" optimization problem for its approximate objective. Because this subsidiary solve requires no evaluations of the user's objective function, it is typically fast enough that we can solve it to high precision without worrying too much about the details. However, in high-dimensional problems you may notice that MMA/CCSA is taking a long time between optimization steps, in which case you may want to increase `dual_ftol_rel` or make other changes. If these parameters are not specified, NLopt takes them from the [subsidiary-optimizer algorithm](NLopt_Reference.md#localsubsidiary-optimization-algorithm) if that has been specified, and otherwise uses the defaults indicated here.
* `verbosity`: If > 0, causes the algorithm to print internal status information on each iteration.

### SLSQP

Specified in NLopt as `NLOPT_LD_SLSQP`, this is a sequential quadratic programming (SQP) algorithm for nonlinearly constrained gradient-based optimization (supporting both inequality and equality constraints), based on the implementation by Dieter Kraft and described in:
15 changes: 15 additions & 0 deletions doc/docs/NLopt_C-plus-plus_Reference.md
@@ -250,6 +250,21 @@ Request the number of evaluations.

In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. You can do this by throwing *any* exception inside your objective/constraint functions: the exception will be caught, the optimization will be halted gracefully, and another exception (possibly not the same one) will be rethrown. See [Exceptions](#exceptions), below. The C++ equivalent of `nlopt_forced_stop` from the [C API](NLopt_Reference.md#forced-termination) is to throw an `nlopt::forced_stop` exception.


Algorithm-specific parameters
-----------------------------

Certain NLopt optimization algorithms allow you to specify additional parameters by calling
```cpp
void nlopt::opt::set_param(const char *name, double val);
bool nlopt::opt::has_param(const char *name);
double nlopt::opt::get_param(const char *name, double defaultval);
unsigned nlopt::opt::num_params();
const char *nlopt::opt::nth_param(unsigned n);
```
where the string `name` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. These methods are equivalent to the [C API](NLopt_Reference.md#algorithm-specific-parameters) functions of the corresponding names, except that `set_param` throws an exception on failure instead of returning an `nlopt_result`.


Performing the optimization
---------------------------

14 changes: 14 additions & 0 deletions doc/docs/NLopt_Python_Reference.md
@@ -236,6 +236,20 @@ Request the number of evaluations.

In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. You can do this by raising *any* exception inside your objective/constraint functions: the optimization will be halted gracefully, and the same exception will be raised to the caller. See [Exceptions](#exceptions), below. The Python equivalent of `nlopt_forced_stop` from the [C API](NLopt_Reference.md#forced-termination) is to raise an `nlopt.ForcedStop` exception.

Algorithm-specific parameters
-----------------------------

Certain NLopt optimization algorithms allow you to specify additional parameters by calling
```py
opt.set_param("name", val)
opt.has_param("name")
opt.get_param("name", defaultval)
opt.num_params()
opt.nth_param(n)
```
where the string `"name"` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. These methods are equivalent to the [C API](NLopt_Reference.md#algorithm-specific-parameters) functions of the corresponding names.


Performing the optimization
---------------------------

26 changes: 26 additions & 0 deletions doc/docs/NLopt_Reference.md
@@ -300,6 +300,32 @@ int nlopt_get_force_stop(nlopt_opt opt)

which returns the last force-stop value that was set since the last `nlopt_optimize`. The force-stop value is reset to zero at the beginning of `nlopt_optimize`. Passing `val=0` to `nlopt_set_force_stop` tells NLopt *not* to force a halt.


Algorithm-specific parameters
-----------------------------

Certain NLopt optimization algorithms allow you to specify additional parameters by calling

```c
nlopt_result nlopt_set_param(nlopt_opt opt, const char *name, double val);
```

where the string `name` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. For example, the MMA algorithm has a parameter `"inner_maxeval"`, an upper bound on the number of "inner" iterations of the algorithm, which you can set via `nlopt_set_param(opt, "inner_maxeval", 100)`.

You can also check whether a parameter is set or get the current value of a parameter with
```c
int nlopt_has_param(const nlopt_opt opt, const char *name);
double nlopt_get_param(const nlopt_opt opt, const char *name, double defaultval);
```
where `defaultval` is returned by `nlopt_get_param` if the parameter `name` has not been set.

To inspect the list of currently set parameters, you can use:
```c
unsigned nlopt_num_params(const nlopt_opt opt);
const char *nlopt_nth_param(const nlopt_opt opt, unsigned n);
```
which return the number of set parameters and the name of the `n`-th set parameter (from `0` to `num_params-1`), respectively.

Performing the optimization
---------------------------

24 changes: 15 additions & 9 deletions src/algs/mma/ccsa_quadratic.c
@@ -218,7 +218,7 @@ nlopt_result ccsa_quadratic_minimize(
double *x, /* in: initial guess, out: minimizer */
double *minf,
nlopt_stopping *stop,
nlopt_opt dual_opt)
nlopt_opt dual_opt, int inner_maxeval, unsigned verbose)
{
nlopt_result ret = NLOPT_SUCCESS;
double *xcur, rho, *sigma, *dfdx, *dfdx_cur, *xprev, *xprevprev, fcur;
@@ -233,6 +233,8 @@ nlopt_result ccsa_quadratic_minimize(
unsigned no_precond;
nlopt_opt pre_opt = NULL;

verbose = MAX(ccsa_verbose, verbose);

m = nlopt_count_constraints(mfc = m, fc);
if (nlopt_get_dimension(dual_opt) != m) {
nlopt_stop_msg(stop, "dual optimizer has wrong dimension %d != %d",
@@ -370,6 +372,7 @@ nlopt_result ccsa_quadratic_minimize(
nlopt_remove_equality_constraints(dual_opt);

while (1) { /* outer iterations */
int inner_nevals = 0;
double fprev = fcur;
if (nlopt_stop_forced(stop)) ret = NLOPT_FORCED_STOP;
else if (nlopt_stop_evals(stop)) ret = NLOPT_MAXEVAL_REACHED;
@@ -432,16 +435,17 @@
gi(m, dd.gcval, n, xcur, NULL, &dd);
}

if (ccsa_verbose) {
if (verbose) {
printf("CCSA dual converged in %d iters to g=%g:\n",
dd.count, dd.gval);
for (i = 0; i < MIN(ccsa_verbose, m); ++i)
for (i = 0; i < MIN(verbose, m); ++i)
printf(" CCSA y[%u]=%g, gc[%u]=%g\n",
i, y[i], i, dd.gcval[i]);
}

fcur = f(n, xcur, dfdx_cur, f_data);
++ *(stop->nevals_p);
++inner_nevals;
if (nlopt_stop_forced(stop)) {
ret = NLOPT_FORCED_STOP; goto done; }
feasible_cur = 1; infeasibility_cur = 0;
@@ -465,9 +469,11 @@
}
}

inner_done = inner_done || (inner_maxeval > 0 && inner_nevals == inner_maxeval);

if ((fcur < *minf && (inner_done || feasible_cur || !feasible))
|| (!feasible && infeasibility_cur < infeasibility)) {
if (ccsa_verbose && !feasible_cur)
if (verbose && !feasible_cur)
printf("CCSA - using infeasible point?\n");
dd.fval = *minf = fcur;
infeasibility = infeasibility_cur;
@@ -508,9 +514,9 @@
1.1 * (rhoc[i] + (fcval_cur[i]-dd.gcval[i])
/ dd.wval));

if (ccsa_verbose)
if (verbose)
printf("CCSA inner iteration: rho -> %g\n", rho);
for (i = 0; i < MIN(ccsa_verbose, m); ++i)
for (i = 0; i < MIN(verbose, m); ++i)
printf(" CCSA rhoc[%u] -> %g\n", i,rhoc[i]);
}

@@ -522,11 +528,11 @@

/* update rho and sigma for iteration k+1 */
rho = MAX(0.1 * rho, CCSA_RHOMIN);
if (ccsa_verbose)
if (verbose)
printf("CCSA outer iteration: rho -> %g\n", rho);
for (i = 0; i < m; ++i)
rhoc[i] = MAX(0.1 * rhoc[i], CCSA_RHOMIN);
for (i = 0; i < MIN(ccsa_verbose, m); ++i)
for (i = 0; i < MIN(verbose, m); ++i)
printf(" CCSA rhoc[%u] -> %g\n", i, rhoc[i]);
if (k > 1) {
for (j = 0; j < n; ++j) {
@@ -540,7 +546,7 @@
sigma[j] = MAX(sigma[j], 1e-8*(ub[j]-lb[j]));
}
}
for (j = 0; j < MIN(ccsa_verbose, n); ++j)
for (j = 0; j < MIN(verbose, n); ++j)
printf(" CCSA sigma[%u] -> %g\n",
j, sigma[j]);
}
24 changes: 15 additions & 9 deletions src/algs/mma/mma.c
@@ -148,7 +148,7 @@ nlopt_result mma_minimize(unsigned n, nlopt_func f, void *f_data,
double *x, /* in: initial guess, out: minimizer */
double *minf,
nlopt_stopping *stop,
nlopt_opt dual_opt)
nlopt_opt dual_opt, int inner_maxeval, unsigned verbose)
{
nlopt_result ret = NLOPT_SUCCESS;
double *xcur, rho, *sigma, *dfdx, *dfdx_cur, *xprev, *xprevprev, fcur;
@@ -160,6 +160,8 @@
double infeasibility;
unsigned mfc;

verbose = MAX(mma_verbose, verbose);

m = nlopt_count_constraints(mfc = m, fc);
if (nlopt_get_dimension(dual_opt) != m) {
nlopt_stop_msg(stop, "dual optimizer has wrong dimension %d != %d",
@@ -246,6 +248,7 @@ nlopt_result mma_minimize(unsigned n, nlopt_func f, void *f_data,
nlopt_remove_equality_constraints(dual_opt);

while (1) { /* outer iterations */
int inner_nevals = 0;
double fprev = fcur;
if (nlopt_stop_forced(stop)) ret = NLOPT_FORCED_STOP;
else if (nlopt_stop_evals(stop)) ret = NLOPT_MAXEVAL_REACHED;
@@ -278,16 +281,17 @@
}

dual_func(m, y, NULL, &dd); /* evaluate final xcur etc. */
if (mma_verbose) {
if (verbose) {
printf("MMA dual converged in %d iterations to g=%g:\n",
dd.count, dd.gval);
for (i = 0; i < MIN(mma_verbose, m); ++i)
for (i = 0; i < MIN(verbose, m); ++i)
printf(" MMA y[%u]=%g, gc[%u]=%g\n",
i, y[i], i, dd.gcval[i]);
}

fcur = f(n, xcur, dfdx_cur, f_data);
++ *(stop->nevals_p);
++inner_nevals;
if (nlopt_stop_forced(stop)) {
ret = NLOPT_FORCED_STOP; goto done; }
feasible_cur = 1; infeasibility_cur = 0;
@@ -316,9 +320,11 @@ nlopt_result mma_minimize(unsigned n, nlopt_func f, void *f_data,
}
}

inner_done = inner_done || (inner_maxeval > 0 && inner_nevals == inner_maxeval);

if ((fcur < *minf && (inner_done || feasible_cur || !feasible))
|| (!feasible && infeasibility_cur < infeasibility)) {
if (mma_verbose && !feasible_cur)
if (verbose && !feasible_cur)
printf("MMA - using infeasible point?\n");
dd.fval = *minf = fcur;
infeasibility = infeasibility_cur;
@@ -360,9 +366,9 @@
1.1 * (rhoc[i] + (fcval_cur[i]-dd.gcval[i])
/ dd.wval));

if (mma_verbose)
if (verbose)
printf("MMA inner iteration: rho -> %g\n", rho);
for (i = 0; i < MIN(mma_verbose, m); ++i)
for (i = 0; i < MIN(verbose, m); ++i)
printf(" MMA rhoc[%u] -> %g\n", i,rhoc[i]);
}

@@ -374,11 +380,11 @@

/* update rho and sigma for iteration k+1 */
rho = MAX(0.1 * rho, MMA_RHOMIN);
if (mma_verbose)
if (verbose)
printf("MMA outer iteration: rho -> %g\n", rho);
for (i = 0; i < m; ++i)
rhoc[i] = MAX(0.1 * rhoc[i], MMA_RHOMIN);
for (i = 0; i < MIN(mma_verbose, m); ++i)
for (i = 0; i < MIN(verbose, m); ++i)
printf(" MMA rhoc[%u] -> %g\n", i, rhoc[i]);
if (k > 1) {
for (j = 0; j < n; ++j) {
@@ -390,7 +396,7 @@
sigma[j] = MAX(sigma[j], 0.01*(ub[j]-lb[j]));
}
}
for (j = 0; j < MIN(mma_verbose, n); ++j)
for (j = 0; j < MIN(verbose, n); ++j)
printf(" MMA sigma[%u] -> %g\n",
j, sigma[j]);
}
5 changes: 3 additions & 2 deletions src/algs/mma/mma.h
@@ -32,14 +32,15 @@ extern "C"
#endif /* __cplusplus */

extern unsigned mma_verbose;
extern unsigned ccsa_verbose;

nlopt_result mma_minimize(unsigned n, nlopt_func f, void *f_data,
unsigned m, nlopt_constraint *fc,
const double *lb, const double *ub, /* bounds */
double *x, /* in: initial guess, out: minimizer */
double *minf,
nlopt_stopping *stop,
nlopt_opt dual_opt);
nlopt_opt dual_opt, int inner_maxeval, unsigned verbose);

nlopt_result ccsa_quadratic_minimize(
unsigned n, nlopt_func f, void *f_data,
@@ -51,7 +52,7 @@ nlopt_result ccsa_quadratic_minimize(
double *x, /* in: initial guess, out: minimizer */
double *minf,
nlopt_stopping *stop,
nlopt_opt dual_opt);
nlopt_opt dual_opt, int inner_maxeval, unsigned verbose);

#ifdef __cplusplus
} /* extern "C" */
6 changes: 6 additions & 0 deletions src/api/nlopt-in.hpp
@@ -434,6 +434,12 @@ namespace nlopt {
tol.empty() ? NULL : &tol[0]));
}

void set_param(const char *name, double val) { mythrow(nlopt_set_param(o, name, val)); }
double get_param(const char *name, double defaultval) const { return nlopt_get_param(o, name, defaultval); }
bool has_param(const char *name) const { return bool(nlopt_has_param(o, name)); }
const char *nth_param(unsigned n) const { return nlopt_nth_param(o, n); }
unsigned num_params() const { return nlopt_num_params(o); }

#define NLOPT_GETSET_VEC(name) \
void set_##name(double val) { \
mythrow(nlopt_set_##name##1(o, val)); \
8 changes: 8 additions & 0 deletions src/api/nlopt-internal.h
@@ -32,6 +32,11 @@ extern "C" {

/*********************************************************************/

typedef struct {
char *name;
double val;
} nlopt_opt_param;

struct nlopt_opt_s {
nlopt_algorithm algorithm; /* the optimization algorithm (immutable) */
unsigned n; /* the dimension of the problem (immutable) */
@@ -41,6 +46,9 @@ extern "C" {
nlopt_precond pre; /* optional preconditioner for f (NULL if none) */
int maximize; /* nonzero if we are maximizing, not minimizing */

nlopt_opt_param *params;
unsigned nparams;

double *lb, *ub; /* lower and upper bounds (length n) */

unsigned m; /* number of inequality constraints */
6 changes: 6 additions & 0 deletions src/api/nlopt.h
@@ -216,6 +216,12 @@ NLOPT_EXTERN(unsigned) nlopt_get_dimension(const nlopt_opt opt);

NLOPT_EXTERN(const char *) nlopt_get_errmsg(nlopt_opt opt);

/* generic algorithm parameters: */
NLOPT_EXTERN(nlopt_result) nlopt_set_param(nlopt_opt opt, const char *name, double val);
NLOPT_EXTERN(double) nlopt_get_param(const nlopt_opt opt, const char *name, double defaultval);
NLOPT_EXTERN(int) nlopt_has_param(const nlopt_opt opt, const char *name);
NLOPT_EXTERN(unsigned) nlopt_num_params(const nlopt_opt opt);
NLOPT_EXTERN(const char *) nlopt_nth_param(const nlopt_opt opt, unsigned n);

/* constraints: */
