diff --git a/doc/docs/NLopt_Algorithms.md b/doc/docs/NLopt_Algorithms.md
index c39ac54a..75eedc53 100644
--- a/doc/docs/NLopt_Algorithms.md
+++ b/doc/docs/NLopt_Algorithms.md
@@ -305,11 +305,11 @@ My implementation of the globally-convergent method-of-moving-asymptotes (MMA) a
This is an improved CCSA ("conservative convex separable approximation") variant of the original MMA algorithm published by Svanberg in 1987, which has become popular for topology optimization. (*Note:* "globally convergent" does *not* mean that this algorithm converges to the global optimum; it means that it is guaranteed to converge to *some* local minimum from any feasible starting point.)
-At each point **x**, MMA forms a local approximation using the gradient of *f* and the constraint functions, plus a quadratic "penalty" term to make the approximations "conservative" (upper bounds for the exact functions). The precise approximation MMA forms is difficult to describe in a few words, because it includes nonlinear terms consisting of a poles at some distance from *x* (outside of the current trust region), almost a kind of Pade approximant. The main point is that the approximation is both convex and separable, making it trivial to solve the approximate optimization by a dual method. Optimizing the approximation leads to a new candidate point **x**. The objective and constraints are evaluated at the candidate point. If the approximations were indeed conservative (upper bounds for the actual functions at the candidate point), then the process is restarted at the new **x**. Otherwise, the approximations are made more conservative (by increasing the penalty term) and re-optimized.
+At each point **x**, MMA forms a local approximation using the gradient of *f* and the constraint functions, plus a quadratic "penalty" term to make the approximations "conservative" (upper bounds for the exact functions). The precise approximation MMA forms is difficult to describe in a few words, because it includes nonlinear terms consisting of poles at some distance from *x* (outside of the current trust region), almost a kind of Padé approximant. The main point is that the approximation is both convex and separable, making it trivial to solve the approximate optimization by a dual method. Optimizing the approximation leads to a new candidate point **x**. The objective and constraints are evaluated at the candidate point. If the approximations were indeed conservative (upper bounds for the actual functions at the candidate point), then the process is restarted at the new **x**. Otherwise, the approximations are made more conservative (by increasing the penalty term) and re-optimized.
(If you contact [Professor Svanberg](http://researchprojects.kth.se/index.php/kb_7902/pb_2085/pb.html), he has been willing in the past to graciously provide you with his original code, albeit under restrictions on commercial use or redistribution. The MMA implementation in NLopt, however, is completely independent of Svanberg's, whose code we have not examined; any bugs are my own, of course.)
-I also implemented another CCSA algorithm from the same paper, `NLOPT_LD_CCSAQ`: instead of constructing local MMA approximations, it constructs simple quadratic approximations (or rather, affine approximations plus a quadratic penalty term to stay conservative). This is the ccsa_quadratic code. It seems to have similar convergence rates to MMA for most problems, which is not surprising as they are both essentially similar. However, for the quadratic variant I implemented the possibility of [preconditioning](NLopt_Reference#Preconditioning_with_approximate_Hessians.md): including a user-supplied Hessian approximation in the local model. It is easy to incorporate this into the proof in Svanberg's paper, and to show that global convergence is still guaranteed as long as the user's "Hessian" is positive semidefinite, and it practice it can greatly improve convergence if the preconditioner is a good approximation for the real Hessian (at least for the eigenvectors of the largest eigenvalues).
+I also implemented another CCSA algorithm from the same paper, `NLOPT_LD_CCSAQ`: instead of constructing local MMA approximations, it constructs simple quadratic approximations (or rather, affine approximations plus a quadratic penalty term to stay conservative). This is the ccsa_quadratic code. It seems to have similar convergence rates to MMA for most problems, which is not surprising as they are both essentially similar. However, for the quadratic variant I implemented the possibility of [preconditioning](NLopt_Reference.md#preconditioning-with-approximate-hessians): including a user-supplied Hessian approximation in the local model. It is easy to incorporate this into the proof in Svanberg's paper, and to show that global convergence is still guaranteed as long as the user's "Hessian" is positive semidefinite, and in practice it can greatly improve convergence if the preconditioner is a good approximation for the real Hessian (at least for the eigenvectors of the largest eigenvalues).
### SLSQP
diff --git a/doc/docs/NLopt_Introduction.md b/doc/docs/NLopt_Introduction.md
index 509d0d36..22aec283 100644
--- a/doc/docs/NLopt_Introduction.md
+++ b/doc/docs/NLopt_Introduction.md
@@ -4,6 +4,8 @@
In this chapter of the manual, we begin by giving a general overview of the optimization problems that NLopt solves, the key distinctions between different types of optimization algorithms, and comment on ways to cast various problems in the form NLopt requires. We also describe the background and goals of NLopt.
+[TOC]
+
Optimization problems
---------------------
diff --git a/doc/docs/NLopt_Python_Reference.md b/doc/docs/NLopt_Python_Reference.md
index ba8098b5..484a86c9 100644
--- a/doc/docs/NLopt_Python_Reference.md
+++ b/doc/docs/NLopt_Python_Reference.md
@@ -6,16 +6,17 @@ The NLopt includes an interface callable from the [Python programming language](
The main purpose of this section is to document the syntax and unique features of the Python API; for more detail on the underlying features, please refer to the C documentation in the [NLopt Reference](NLopt_Reference.md).
+[TOC]
+
Using the NLopt Python API
--------------------------
To use NLopt in Python, your Python program should include the lines:
+```py
+import nlopt
+from numpy import *
```
-import nlopt
-from numpy import *
-```
-
-which imports the `nlopt` module, and also imports the numpy ([NumPy](https://en.wikipedia.org/wiki/NumPy)) that defines the array data types used for communicating with NLopt.
+which imports the `nlopt` module, and also imports the numpy ([NumPy](https://en.wikipedia.org/wiki/NumPy)) module that defines the array data types used for communicating with NLopt.
@@ -24,10 +25,9 @@ The `nlopt.opt` class
The NLopt API revolves around an object of type `nlopt.opt`. Via methods of this object, all of the parameters of the optimization are specified (dimensions, algorithm, stopping criteria, constraints, objective function, etcetera), and then one finally calls the `opt.optimize` method in order to perform the optimization. The object should normally be created via the constructor:
+```py
+opt = nlopt.opt(algorithm, n)
```
-opt = nlopt.opt(algorithm, n)
-```
-
given an `algorithm` (see [NLopt Algorithms](NLopt_Algorithms.md) for possible values) and the dimensionality of the problem (`n`, the number of optimization parameters). Whereas the C algorithms are specified by `nlopt_algorithm` constants of the form `NLOPT_LD_MMA`, `NLOPT_LN_COBYLA`, etcetera, the Python `algorithm` values are of the form `nlopt.LD_MMA`, `nlopt.LN_COBYLA`, etcetera (with the `NLOPT_` prefix replaced by the `nlopt.` namespace).
@@ -37,25 +37,23 @@ If there is an error in the constructor (or copy constructor, or assignment), a
The algorithm and dimension parameters of the object are immutable (cannot be changed without constructing a new object), but you can query them for a given object by the methods:
-```
+```py
opt.get_algorithm()
opt.get_dimension()
```
-
You can get a string description of the algorithm via:
-```
+```py
opt.get_algorithm_name()
```
-
Objective function
------------------
The objective function is specified by calling one of the methods:
-```
+```py
opt.set_min_objective(f)
opt.set_max_objective(f)
```
@@ -63,81 +61,75 @@ opt.set_max_objective(f)
depending on whether one wishes to minimize or maximize the objective function `f`, respectively. The function `f` should be of the form:
+```py
+def f(x, grad):
+ if grad.size > 0:
+ ...set grad to gradient, in-place...
+ return ...value of f(x)...
```
-def f(x, grad):
- if grad.size > 0:
-```
-
-` `*`...set` `grad` `to` `gradient,` `in-place...`*
-` return `*`...value` `of` `f(x)...`*
The return value should be the value of the function at the point `x`, where `x` is a NumPy array of length `n` of the optimization parameters (the same as the dimension passed to the constructor).
In addition, if the argument `grad` is not empty, i.e. `grad.size>0`, then `grad` is a NumPy array of length `n` which should (upon return) be set to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i]` should upon return contain the partial derivative $\partial f / \partial x_i$, for $0 \leq i < n$, if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
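As a concrete sketch of this form (the quadratic objective here is an illustrative placeholder, not part of NLopt), such a function might look like:

```python
import numpy as np

def f(x, grad):
    # Illustrative objective: f(x) = sum_i x_i**2, whose gradient is 2*x.
    if grad.size > 0:
        grad[:] = 2 * x  # overwrite grad in-place, as NLopt requires
    return float(np.sum(x**2))
```

Since `f` is an ordinary Python function, you can call it directly with a NumPy `x` and a preallocated `grad` to sanity-check your gradient before passing it to `opt.set_min_objective(f)`.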
-Note that `grad` must be modified *in-place* by your function `f`. Generally, this means using indexing operations `grad[...]` `=` `...` to overwrite the contents of `grad`, as described below.
+Note that `grad` must be modified *in-place* by your function `f`. Generally, this means using indexing operations `grad[...] = ...` to overwrite the contents of `grad`, as described below.
### Assigning results in-place
Your objective and constraint functions must overwrite the contents of the `grad` (gradient) argument in-place (although of course you can allocate whatever additional storage you might need, in addition to overwriting `grad`). However, typical Python assignment operations do *not* do this. For example:
+```py
+grad = 2*x
```
-grad = 2*x
-```
-
might seem like the gradient of the function `sum(x**2)`, but it will *not work* with NLopt because this expression actually allocates a *new* array to store `2*x` and re-assigns `grad` to point to it, rather than overwriting the old contents of `grad`. Instead, you should do:
+```py
+grad[:] = 2*x
```
-grad[:] = 2*x
-```
-
-Assigning any [slice or view](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html) `grad[...]` of the array will overwrite the contents, which is what NLopt needs you to do. So, you should generally use indexing expressions `grad[...]` `=` `...` to assign the gradient result.
+Assigning any [slice or view](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html) `grad[...]` of the array will overwrite the contents, which is what NLopt needs you to do. So, you should generally use indexing expressions `grad[...] = ...` to assign the gradient result.
In specific cases, there are a few other NumPy and SciPy functions that are documented to operate in-place on their arguments, and you can also use such functions to modify `grad` if you want. If a function is not *explicitly documented to modify its arguments in-place*, however, you should assume that it does *not*.
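The distinction can be demonstrated with plain NumPy, independent of NLopt (a small sketch; the function names are made up for illustration):

```python
import numpy as np

def wrong(grad, x):
    grad = 2 * x      # rebinds the local name only; the caller's array is untouched

def right(grad, x):
    grad[:] = 2 * x   # writes through the array view; the caller's array is updated

x = np.array([1.0, 2.0])
g = np.zeros(2)
wrong(g, x)   # g is still [0., 0.]
right(g, x)   # g is now [2., 4.]
```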
Bound constraints
-----------------
-The [bound constraints](NLopt_Reference#Bound_constraints.md) can be specified by calling the methods:
+The [bound constraints](NLopt_Reference.md#bound-constraints) can be specified by calling the methods:
-```
+```py
opt.set_lower_bounds(lb)
opt.set_upper_bounds(ub)
```
-
where `lb` and `ub` are arrays (NumPy arrays or Python lists) of length *n* (the same as the dimension passed to the `nlopt.opt` constructor). For convenience, these are overloaded with functions that take a single number as arguments, in order to set the lower/upper bounds for all optimization parameters to a single constant.
To retrieve the values of the lower/upper bounds, you can call one of:
-```
+```py
opt.get_lower_bounds()
opt.get_upper_bounds()
```
-
both of which return NumPy arrays.
-To specify an unbounded dimension, you can use ±`float('inf')` (or ±`numpy.inf`) in Python to specify ±∞.
+To specify an unbounded dimension, you can use ±`float('inf')` (or ±`numpy.inf`) in Python to specify $\pm\infty$.
Nonlinear constraints
---------------------
-Just as for [nonlinear constraints in C](NLopt_Reference#Nonlinear_constraints.md), you can specify nonlinear inequality and equality constraints by the methods:
+Just as for [nonlinear constraints in C](NLopt_Reference.md#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods:
-```
-opt.add_inequality_constraint(fc, tol=0)
-opt.add_equality_constraint(h, tol=0)
+```py
+opt.add_inequality_constraint(fc, tol=0)
+opt.add_equality_constraint(h, tol=0)
```
-
where the arguments `fc` and `h` have the same form as the objective function above. The optional `tol` arguments specify a tolerance in judging feasibility for the purposes of stopping the optimization, as in C.
To remove all of the inequality and/or equality constraints from a given problem, you can call the following methods:
-```
+```py
opt.remove_inequality_constraints()
opt.remove_equality_constraints()
```
@@ -145,32 +137,27 @@ opt.remove_equality_constraints()
### Vector-valued constraints
-Just as for [nonlinear constraints in C](NLopt_Reference#Vector-valued_constraints.md), you can specify vector-valued nonlinear inequality and equality constraints by the methods
+Just as for [nonlinear constraints in C](NLopt_Reference.md#vector-valued-constraints), you can specify vector-valued nonlinear inequality and equality constraints by the methods
+```py
+opt.add_inequality_mconstraint(c, tol)
+opt.add_equality_mconstraint(c, tol)
```
-opt.add_inequality_mconstraint(c, tol)
-opt.add_equality_mconstraint(c, tol)
-```
-
Here, `tol` is an array (NumPy array or Python list) of the tolerances in each constraint dimension; the dimensionality *m* of the constraint is determined by `tol.size`. The constraint function `c` must be of the form:
-```
+```py
def c(result, x, grad):
- if grad.size > 0:
-```
-
-` `*`...set` `grad` `to` `gradient,` `in-place...`*
-` result[0] = `*`...value` `of` `c``0``(x)...`*
-` result[1] = `*`...value` `of` `c``1``(x)...`*
-```
- ...
+ if grad.size > 0:
+ ...set grad to gradient, in-place...
+ result[0] = ...value of c_0(x)...
+ result[1] = ...value of c_1(x)...
+ ...
```
+`result` is a NumPy array whose length equals the dimensionality *m* of the constraint (same as the length of `tol` above), which upon return should be set *in-place* ([see above](#assigning-results-in-place)) to the constraint results at the point `x` (a NumPy array whose length *n* is the same as the dimension passed to the constructor). Any return value of the function is ignored.
-`result` is a NumPy array whose length equals the dimensionality *m* of the constraint (same as the length of `tol` above), which upon return should be set *in-place* ([see above](#Assigning_results_in-place.md)) to the constraint results at the point `x` (a NumPy array whose length *n* is the same as the dimension passed to the constructor). Any return value of the function is ignored.
-
-In addition, if the argument `grad` is not empty, i.e. `grad.size>0`, then `grad` is a 2d NumPy array of size *m*×*n* which should (upon return) be set in-place ([see above](#Assigning_results_in-place.md)) to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i,j]` should upon return contain the partial derivative $\partial c_i / \partial x_j$ if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
+In addition, if the argument `grad` is not empty, i.e. `grad.size>0`, then `grad` is a 2d NumPy array of size *m*×*n* which should (upon return) be set in-place ([see above](#assigning-results-in-place)) to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i,j]` should upon return contain the partial derivative $\partial c_i / \partial x_j$ if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
An inequality constraint corresponds to $c_i \le 0$ for $0 \le i < m$, and an equality constraint corresponds to $c_i = 0$, in both cases with tolerance `tol[i]` for purposes of termination criteria.
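As an illustrative sketch (the two constraints here are made up for the example), an *m* = 2, *n* = 2 constraint function of this form could be written:

```python
import numpy as np

def c(result, x, grad):
    # Hypothetical constraints: c_0(x) = x[0] + x[1] - 1,  c_1(x) = x[0]**2 - x[1]
    if grad.size > 0:
        # grad is an m-by-n array: grad[i, j] = d c_i / d x_j
        grad[0, :] = [1.0, 1.0]
        grad[1, :] = [2.0 * x[0], -1.0]
    result[0] = x[0] + x[1] - 1.0
    result[1] = x[0]**2 - x[1]
```

This would be registered with, e.g., `opt.add_inequality_mconstraint(c, [1e-8, 1e-8])`, the length of the tolerance list determining *m*.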
@@ -179,119 +166,104 @@ An inequality constraint corresponds to $c_i \le 0$ for $0 \le i < m$, and an eq
Stopping criteria
-----------------
-As explained in the [C API Reference](NLopt_Reference#Stopping_criteria.md) and the [Introduction](NLopt_Introduction#Termination_conditions.md)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.)
+As explained in the [C API Reference](NLopt_Reference.md#stopping-criteria) and the [Introduction](NLopt_Introduction.md#termination-conditions), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.)
-For each stopping criteria, there are (at least) two methods: a `set` method to specify the stopping criterion, and a `get` method to retrieve the current value for that criterion. The meanings of each criterion are exactly the same as in the C API.
+For each stopping criterion, there are (at least) two methods: a `set` method to specify the stopping criterion, and a `get` method to retrieve the current value for that criterion. The meaning of each criterion is exactly the same as in the C API.
-```
+```py
opt.set_stopval(stopval)
opt.get_stopval()
```
-
Stop when an objective value of at least `stopval` is found.
-```
+```py
opt.set_ftol_rel(tol)
opt.get_ftol_rel()
```
-
Set relative tolerance on function value.
-```
+```py
opt.set_ftol_abs(tol)
opt.get_ftol_abs()
```
-
Set absolute tolerance on function value.
-```
+```py
opt.set_xtol_rel(tol)
opt.get_xtol_rel()
```
-
Set relative tolerance on optimization parameters.
-```
+```py
opt.set_xtol_abs(tol)
opt.get_xtol_abs()
```
-
Set absolute tolerances on optimization parameters. The `tol` input must be an array (NumPy array or Python list) of length `n` (the dimension specified in the `nlopt.opt` constructor); alternatively, you can pass a single number in order to set the same tolerance for all optimization parameters. `get_xtol_abs()` returns the tolerances as a NumPy array.
-```
+```py
opt.set_x_weights(w)
opt.get_x_weights()
```
-
-Set the weights used when the computing L₁ norm for the `xtol_rel` stopping criterion above.
+Set the weights used when computing the L₁ norm for the `xtol_rel` stopping criterion above.
-```
+```py
opt.set_maxeval(maxeval)
opt.get_maxeval()
```
-
Stop when the number of function evaluations exceeds `maxeval`. (0 or negative for no limit.)
-```
+```py
opt.set_maxtime(maxtime)
opt.get_maxtime()
```
-
Stop when the optimization time (in seconds) exceeds `maxtime`. (0 or negative for no limit.)
-
-
-```
+```py
opt.get_numevals()
```
-
Request the number of evaluations.
-
-
### Forced termination
-In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. You can do this by raise *any* exception inside your objective/constraint functions:the optimization will be halted gracefully, and the same exception will be raised to the caller. See [Exceptions](#Exceptions.md), below. The Python equivalent of `nlopt_forced_stop` from the [C API](NLopt_Reference#Forced_termination.md) is to throw an `nlopt.ForcedStop` exception.
+In certain cases, the caller may wish to *force* the optimization to halt for some reason unknown to NLopt: for example, if the user presses Ctrl-C, or if there is an error of some sort in the objective function. You can do this by raising *any* exception inside your objective/constraint functions: the optimization will be halted gracefully, and the same exception will be raised to the caller. See [Exceptions](#exceptions), below. The Python equivalent of `nlopt_forced_stop` from the [C API](NLopt_Reference.md#forced-termination) is to raise an `nlopt.ForcedStop` exception.
Performing the optimization
---------------------------
Once all of the desired optimization parameters have been specified in a given object `opt`, you can perform the optimization by calling:
-```
+```py
xopt = opt.optimize(x)
```
-
On input, `x` is an array (NumPy array or Python list) of length `n` (the dimension of the problem from the `nlopt.opt` constructor) giving an initial guess for the optimization parameters. The return value `xopt` is a NumPy array containing the optimized values of the optimization parameters.
You can call the following methods to retrieve the optimized objective function value from the last `optimize` call, and also the return code (including negative/failure return values) from the last `optimize` call:
-```
+```py
opt_val = opt.last_optimum_value()
result = opt.last_optimize_result()
```
-
-The return code (see below) is positive on success, indicating the reason for termination. On failure (negative return codes), `optimize()` throws an exception (see [Exceptions](#Exceptions.md), below).
+The return code (see below) is positive on success, indicating the reason for termination. On failure (negative return codes), `optimize()` throws an exception (see [Exceptions](#exceptions), below).
### Return values
-The possible return values are the same as the [return values in the C API](NLopt_Reference#Return_values.md), except that the `NLOPT_` prefix is replaced with the `nlopt.` namespace. That is, `NLOPT_SUCCESS` becomes `nlopt.SUCCESS`, etcetera.
+The possible return values are the same as the [return values in the C API](NLopt_Reference.md#return-values), except that the `NLOPT_` prefix is replaced with the `nlopt.` namespace. That is, `NLOPT_SUCCESS` becomes `nlopt.SUCCESS`, etcetera.
Exceptions
----------
-The [Error codes (negative return values)](NLopt_Reference#Error_codes_(negative_return_values).md) in the C API are replaced in the Python API by thrown exceptions. The following exceptions are thrown by the various routines:
+The [Error codes (negative return values)](NLopt_Reference.md#error-codes-negative-return-values) in the C API are replaced in the Python API by thrown exceptions. The following exceptions are thrown by the various routines:
```
-RunTimeError
+RuntimeError
@@ -324,11 +296,10 @@ Local/subsidiary optimization algorithm
Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances by calling:
-```
+```py
opt.set_local_optimizer(local_opt)
```
-
Here, `local_opt` is another `nlopt.opt` object whose parameters are used to determine the local search algorithm, its stopping criteria, and other algorithm parameters. (However, the objective function, bounds, and nonlinear-constraint parameters of `local_opt` are ignored.) The dimension `n` of `local_opt` must match that of `opt`.
This function makes a copy of the `local_opt` object, so you can freely change your original `local_opt` afterwards without affecting `opt`.
@@ -338,25 +309,23 @@ Initial step size
-Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference#Initial_step_size.md) for derivative-free optimization algorithms. The Python equivalents of the C functions are the following methods:
+Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference.md#initial-step-size) for derivative-free optimization algorithms. The Python equivalents of the C functions are the following methods:
-```
+```py
opt.set_initial_step(dx)
dx = opt.get_initial_step(x)
```
-
Here, `dx` is an array (NumPy array or Python list) of the (nonzero) initial steps for each dimension, or a single number if you wish to use the same initial steps for all dimensions. `opt.get_initial_step(x)` returns the initial step that will be used for a starting guess of `x` in `opt.optimize(x)`.
Stochastic population
---------------------
-Just as in the C API, you can [get and set the initial population](NLopt_Reference#Stochastic_population.md) for stochastic optimization algorithms, by the methods:
+Just as in the C API, you can [get and set the initial population](NLopt_Reference.md#stochastic-population) for stochastic optimization algorithms, by the methods:
-```
+```py
opt.set_population(pop)
opt.get_population()
```
-
(A `pop` of zero implies that the heuristic default will be used.)
Pseudorandom numbers
@@ -364,31 +333,28 @@ Pseudorandom numbers
For stochastic optimization algorithms, we use pseudorandom numbers generated by the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_twister) algorithm, based on code from Makoto Matsumoto. By default, the [seed](https://en.wikipedia.org/wiki/Random_seed) for the random numbers is generated from the system time, so that you will get a different sequence of pseudorandom numbers each time you run your program. If you want to use a "deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run, you can set the seed by calling:
-```
+```py
nlopt.srand(seed)
```
-
where `seed` is an integer. To reset the seed based on the system time, you can call:
-```
+```py
nlopt.srand_time()
```
-
(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling `nlopt.srand` to set a deterministic seed.)
Vector storage for limited-memory quasi-Newton algorithms
---------------------------------------------------------
-Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference#Vector_storage_for_limited-memory_quasi-Newton_algorithms.md) for limited-memory quasi-Newton algorithms, via the methods:
+Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms, via the methods:
-```
+```py
opt.set_vector_storage(M)
opt.get_vector_storage()
```
-
(The default is *M*=0, in which case NLopt uses a heuristic nonzero value.)
Version number
@@ -396,13 +362,12 @@ Version number
To determine the version number of NLopt at runtime, you can call:
-```
+```py
nlopt.version_major()
nlopt.version_minor()
nlopt.version_bugfix()
```
-
For example, NLopt version 3.1.4 would return `major=3`, `minor=1`, and `bugfix=4`.
diff --git a/doc/docs/NLopt_Reference.md b/doc/docs/NLopt_Reference.md
index 25aa9517..b8bd7d78 100644
--- a/doc/docs/NLopt_Reference.md
+++ b/doc/docs/NLopt_Reference.md
@@ -2,42 +2,40 @@
# NLopt Reference
---
-NLopt is a library, not a stand-alone program—it is designed to be called from your own program in C, C++, Fortran, Matlab, GNU Octave, or other languages. This reference section describes the programming interface (API) of NLopt in the C language. The reference manuals for other languages can be found at:
-
-- [NLopt C++ Reference](NLopt_C-plus-plus_Reference.md)
-- [NLopt Fortran Reference](NLopt_Fortran_Reference.md)
-- [NLopt Matlab Reference](NLopt_Matlab_Reference.md)
-- [NLopt Python Reference](NLopt_Python_Reference.md)
-- [NLopt Guile Reference](NLopt_Guile_Reference.md)
-- [NLopt Julia Reference](https://github.com/stevengj/NLopt.jl)
+NLopt is a library, not a stand-alone program—it is designed to be called from your own program in C, C++, Fortran, Matlab, GNU Octave, or other languages. This reference section describes the programming interface (API) of NLopt in the C language. Links to the reference manuals for other languages can be found in the left sidebar.
The old API from versions of NLopt prior to 2.0 is deprecated, but continues to be supported for backwards compatibility. You can find it described in the [NLopt Deprecated API Reference](NLopt_Deprecated_API_Reference.md).
-Other sources of information include the Unix [man page](https://en.wikipedia.org/wiki/Manual_page_(Unix)): On Unix, you can run e.g. `man` `nlopt` for documentation of C API. In Matlab and GNU Octave, the corresponding command is to type `help` `nlopt_optimize`.
+Other sources of information include the Unix [man page](https://en.wikipedia.org/wiki/Manual_page_(Unix)): On Unix, you can run e.g. `man nlopt` for documentation of C API. In Matlab and GNU Octave, the corresponding command is to type `help nlopt_optimize`.
+
+[TOC]
Compiling and linking your program to NLopt
-------------------------------------------
An NLopt program in C should include the NLopt header file:
-`#include <nlopt.h>`
-
-For programs in compiled languages like C or Fortran, when you compile your program you will have to link it to the NLopt library. This is *in addition* to including the header file (`#include <nlopt.h>` in C or `#include <nlopt.hpp>` in C++). On Unix, you would normally link with a command something like:
+```c
+#include <nlopt.h>
+```
-*`compiler`*` `*`...source/object` `files...`*` -lnlopt -lm -o myprogram`
+For programs in compiled languages like C or Fortran, when you compile your program you will have to link it to the NLopt library. This is *in addition* to including the header file (`#include <nlopt.h>` in C or `#include <nlopt.hpp>` in C++). On Unix, you would normally link with a command something like:
-where *compiler* is `cc`, `f77`, `g++`, or whatever is appropriate for your machine/language.
+```sh
+compiler ...source/object files... -lnlopt -lm -o myprogram
+```
+where `compiler` is `cc`, `f77`, `g++`, or whatever is appropriate for your machine/language.
*Note:* the `-lnlopt` `-lm` options, which link to the NLopt library (and the math library, which it requires), must come *after* your source/object files. In general, the rule is that if *A* depends upon *B*, then *A* must come before *B* in the link command.
-*Note:* the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like `/usr/lib` or `/usr/local/lib`). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a `-L` flag to tell the compiler where to find the library. See [the installation manual](NLopt_Installation#Changing_the_installation_directory.md).
+*Note:* the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like `/usr/lib` or `/usr/local/lib`). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a `-L` flag to tell the compiler where to find the library. See [the installation manual](NLopt_Installation.md#changing-the-installation-directory).
The `nlopt_opt` object
----------------------
The NLopt API revolves around an "object" of type `nlopt_opt` (an opaque pointer type). Via this object, all of the parameters of the optimization are specified (dimensions, algorithm, stopping criteria, constraints, objective function, etcetera), and then one finally passes this object to `nlopt_optimize` in order to perform the optimization. The object is created by calling:
-```
+```c
nlopt_opt nlopt_create(nlopt_algorithm algorithm, unsigned n);
```
@@ -46,46 +44,44 @@ which returns a newly allocated `nlopt_opt` object (or NULL if there was an erro
When you are finished with the object, you must deallocate it by calling:
-```
+```c
void nlopt_destroy(nlopt_opt opt);
```
Simple assignment (`=`) makes two pointers to the same object. To make an independent copy of an object, use:
-```
+```c
nlopt_opt nlopt_copy(const nlopt_opt opt);
```
The algorithm and dimension parameters of the object are immutable (cannot be changed without creating a new object), but you can query them for a given object by calling:
+```c
+nlopt_algorithm nlopt_get_algorithm(const nlopt_opt opt);
+unsigned nlopt_get_dimension(const nlopt_opt opt);
```
-nlopt_algorithm nlopt_get_algorithm(const nlopt_opt opt);
-unsigned nlopt_get_dimension(const nlopt_opt opt);
-```
-
You can get a descriptive (null-terminated) string corresponding to a particular algorithm by calling:
-```
-const char *nlopt_algorithm_name(nlopt_algorithm algorithm);
+```c
+const char *nlopt_algorithm_name(nlopt_algorithm algorithm);
```
You can convert an `nlopt_algorithm` to/from a string identifier (`NLOPT_FOO` converts to/from `"FOO"`) by calling:
-```
-const char *nlopt_algorithm_to_string(nlopt_algorithm algorithm);
+```c
+const char *nlopt_algorithm_to_string(nlopt_algorithm algorithm);
nlopt_algorithm nlopt_algorithm_from_string(const char *name);
```
-
Objective function
------------------
The objective function is specified by calling one of:
-```
+```c
nlopt_result nlopt_set_min_objective(nlopt_opt opt, nlopt_func f, void* f_data);
nlopt_result nlopt_set_max_objective(nlopt_opt opt, nlopt_func f, void* f_data);
```
@@ -93,7 +89,7 @@ nlopt_result nlopt_set_max_objective(nlopt_opt opt, nlopt_func f, void* f_
depending on whether one wishes to minimize or maximize the objective function `f`, respectively. The function `f` should be of the form:
-```
+```c
double f(unsigned n, const double* x, double* grad, void* f_data);
```
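For example, a minimal sketch of such a function (the name `myfunc` is hypothetical) that computes $f(\mathbf{x}) = x_0^2 + x_1^2$ and, when requested, its gradient:

```c
/* Hypothetical objective: f(x) = x0^2 + x1^2 on R^2, with optional gradient. */
double myfunc(unsigned n, const double *x, double *grad, void *f_data)
{
    (void) n; (void) f_data;   /* unused in this sketch */
    if (grad) {                /* grad is NULL for derivative-free algorithms */
        grad[0] = 2.0 * x[0];
        grad[1] = 2.0 * x[1];
    }
    return x[0] * x[0] + x[1] * x[1];
}
```

It would then be registered by `nlopt_set_min_objective(opt, myfunc, NULL);`.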
@@ -112,7 +108,7 @@ Most of the algorithms in NLopt are designed for minimization of functions with
These bounds are specified by passing arrays `lb` and `ub` of length `n` (the dimension of the problem, from `nlopt_create`) to one or both of the functions:
-```
+```c
nlopt_result nlopt_set_lower_bounds(nlopt_opt opt, const double* lb);
nlopt_result nlopt_set_upper_bounds(nlopt_opt opt, const double* ub);
```
@@ -129,22 +125,20 @@ Note, however, that some of the algorithms in NLopt, in particular most of the g
For convenience, the functions `nlopt_set_*_bounds1` are supplied in order to set the lower/upper bounds for all optimization parameters to a single constant (so that you don’t have to fill an array with a constant value), along with `nlopt_set_*_bound` to set the bound for
a single variable `x[i]`:
-```
+```c
nlopt_result nlopt_set_lower_bounds1(nlopt_opt opt, double lb);
nlopt_result nlopt_set_upper_bounds1(nlopt_opt opt, double ub);
nlopt_result nlopt_set_lower_bound(nlopt_opt opt, int i, double lb);
nlopt_result nlopt_set_upper_bound(nlopt_opt opt, int i, double ub);
```
-
The values of the lower and upper bounds can be retrieved by calling:
-```
+```c
nlopt_result nlopt_get_lower_bounds(const nlopt_opt opt, double* lb);
nlopt_result nlopt_get_upper_bounds(const nlopt_opt opt, double* ub);
```
-
where `lb` and `ub` are arrays of length `n` that, upon successful return, are set to copies of the lower and upper bounds, respectively.
Nonlinear constraints
@@ -152,58 +146,53 @@ Nonlinear constraints
Several of the algorithms in NLopt (`MMA`, `COBYLA`, and `ORIG_DIRECT`) also support arbitrary nonlinear inequality constraints, and some additionally allow nonlinear equality constraints (`ISRES` and `AUGLAG`). For these algorithms, you can specify as many nonlinear constraints as you wish by calling the following functions multiple times.
-In particular, a nonlinear inequality constraint of the form `fc`(*x*) ≤ 0, where the function `fc` is of the same form as the objective function described above, can be specified by calling:
+In particular, a nonlinear inequality constraint of the form $f_c(\mathbf{x}) \leq 0$, where the function $f_c$ (or `fc`) is of the same form as the objective function described above, can be specified by calling:
-```
+```c
nlopt_result nlopt_add_inequality_constraint(nlopt_opt opt, nlopt_func fc, void* fc_data, double tol);
```
+Just as for the objective function, `fc_data` is a pointer to arbitrary user data that will be passed through to the `fc` function whenever it is called. The parameter `tol` is a tolerance that is used for the purpose of stopping criteria *only*: a point $\mathbf{x}$ is considered feasible for judging whether to stop the optimization if $f_c(\mathbf{x}) \leq \mathrm{tol}$. A tolerance of zero means that NLopt will try not to consider any $\mathbf{x}$ to be converged unless $f_c$ is strictly non-positive; generally, at least a small positive tolerance is advisable to reduce sensitivity to rounding errors.
-Just as for the objective function, `fc_data` is a pointer to arbitrary user data that will be passed through to the fc function whenever it is called. The parameter `tol` is a tolerance that is used for the purpose of stopping criteria *only*: a point *x* is considered feasible for judging whether to stop the optimization if `fc`(*x*) ≤ `tol`. A tolerance of zero means that NLopt will try not to consider any x to be converged unless `fc` is strictly non-positive; generally, at least a small positive tolerance is advisable to reduce sensitivity to rounding errors.
-
-(The [return value](#Return_Values.md) is negative if there was an error, e.g. an invalid argument or an out-of-memory situation.)
+(The [return value](#return-values) is negative if there was an error, e.g. an invalid argument or an out-of-memory situation.)
-Similarly, a nonlinear equality constraint of the form `h`(*x*) = 0, where the function `h` is of the same form as the objective function described above, can be specified by calling:
+Similarly, a nonlinear equality constraint of the form $h(\mathbf{x}) = 0$, where the function $h$ (or `h`) is of the same form as the objective function described above, can be specified by calling:
-```
+```c
nlopt_result nlopt_add_equality_constraint(nlopt_opt opt, nlopt_func h, void* h_data, double tol);
```
-
-Just as for the objective function, `h_data` is a pointer to arbitrary user data that will be passed through to the `h` function whenever it is called. The parameter tol is a tolerance that is used for the purpose of stopping criteria *only*: a point *x* is considered feasible for judging whether to stop the optimization if |`h`(*x*)| ≤ `tol`. For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.
+Just as for the objective function, `h_data` is a pointer to arbitrary user data that will be passed through to the `h` function whenever it is called. The parameter `tol` is a tolerance that is used for the purpose of stopping criteria *only*: a point *x* is considered feasible for judging whether to stop the optimization if |`h`(*x*)| ≤ `tol`. For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.
(For any algorithm listed as "derivative-free" below, the `grad` argument to `fc` or `h` will always be `NULL` and need never be computed.)
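As an illustrative sketch (the name `myconstraint` is hypothetical, not part of NLopt), an inequality constraint enforcing $x_0 + x_1 \le 1$, written in the required form $f_c(\mathbf{x}) = x_0 + x_1 - 1 \le 0$, might look like:

```c
/* Hypothetical constraint fc(x) = x[0] + x[1] - 1 <= 0, i.e. x0 + x1 <= 1. */
double myconstraint(unsigned n, const double *x, double *grad, void *data)
{
    (void) n; (void) data;   /* unused in this sketch */
    if (grad) {              /* grad is NULL for derivative-free algorithms */
        grad[0] = 1.0;
        grad[1] = 1.0;
    }
    return x[0] + x[1] - 1.0;
}
```

It could then be registered via `nlopt_add_inequality_constraint(opt, myconstraint, NULL, 1e-8);` (the tolerance here is only illustrative).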
To remove all of the inequality and/or equality constraints from a given problem `opt`, you can call the following functions:
-```
+```c
nlopt_result nlopt_remove_inequality_constraints(nlopt_opt opt);
nlopt_result nlopt_remove_equality_constraints(nlopt_opt opt);
```
-
### Vector-valued constraints
In some applications with multiple constraints, it is more convenient to define a single function that returns the values (and gradients) of all constraints at once. For example, different constraint functions might share computations in some way. Or, if you have a large number of constraints, you may wish to compute them in parallel. This possibility is supported by the following function, which defines multiple constraints at once, or equivalently a vector-valued constraint function $\mathbf{c}: \mathbb{R}^n \to \mathbb{R}^m$:
-```
+```c
nlopt_result nlopt_add_inequality_mconstraint(nlopt_opt opt, unsigned m,
nlopt_mfunc c, void* c_data, const double *tol);
nlopt_result nlopt_add_equality_mconstraint(nlopt_opt opt, unsigned m,
nlopt_mfunc c, void* c_data, const double *tol);
```
-
Here, `m` is the dimensionality of the constraint result and `tol` points to an array of length `m` of the tolerances in each constraint dimension (or `NULL` for zero tolerances). The constraint function must be of the form:
-```
+```c
void c(unsigned m, double *result, unsigned n, const double* x, double* grad, void* f_data);
```
-
-This evaluates the constraint function(s) $\mathbf{c}(\mathbf{x})$ at the point `x`, an array of length `n` (the same as the dimension passed to `nlopt_create`). Upon return, the output value of the constraints should be stored in `result`, an array of length `m` (the same as the dimension passed to `nlopt_add_*_mconstraint`), so that `result[i]` stores *c**i*.
+This evaluates the constraint function(s) $\mathbf{c}(\mathbf{x})$ at the point `x`, an array of length `n` (the same as the dimension passed to `nlopt_create`). Upon return, the output value of the constraints should be stored in `result`, an array of length `m` (the same as the dimension passed to `nlopt_add_*_mconstraint`), so that `result[i]` stores $c_i$.
-In addition, if `grad` is non-`NULL`, then `grad` points to an array of length `m*n` which should, upon return, be set to the gradients of the constraint functions with respect to `x`. The `n` dimension of `grad` is stored contiguously, so that $\part c_i / \part x_j$ is stored in `grad[i*n` `+` `j]`.
+In addition, if `grad` is non-`NULL`, then `grad` points to an array of length `m*n` which should, upon return, be set to the gradients of the constraint functions with respect to `x`. The `n` dimension of `grad` is stored contiguously, so that $\partial c_i / \partial x_j$ is stored in `grad[i*n + j]`.
An inequality constraint corresponds to $c_i \le 0$ for $0 \le i < m$, and an equality constraint corresponds to $c_i = 0$, in both cases with tolerance `tol[i]` for purposes of termination criteria.
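As a sketch (the name `myvconstraint` and the bound array `b` are hypothetical), a vector-valued constraint $c_i(\mathbf{x}) = x_i^2 - b_i \le 0$ for $0 \le i < m$ (assuming $m \le n$), with the gradient layout described above, could be written:

```c
/* Hypothetical vector constraint c_i(x) = x[i]^2 - b[i] <= 0 for i = 0..m-1
   (assumes m <= n), with bounds b passed via c_data; the gradient entry
   dc_i/dx_j is stored contiguously at grad[i*n + j]. */
void myvconstraint(unsigned m, double *result, unsigned n,
                   const double *x, double *grad, void *c_data)
{
    const double *b = (const double *) c_data;
    for (unsigned i = 0; i < m; ++i) {
        result[i] = x[i] * x[i] - b[i];
        if (grad)   /* NULL for derivative-free algorithms */
            for (unsigned j = 0; j < n; ++j)
                grad[i * n + j] = (i == j) ? 2.0 * x[i] : 0.0;
    }
}
```

It could be registered via `nlopt_add_inequality_mconstraint(opt, m, myvconstraint, b, NULL);` (here with zero tolerances).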
@@ -212,29 +201,27 @@ An inequality constraint corresponds to $c_i \le 0$ for $0 \le i < m$, and an eq
Stopping criteria
-----------------
-Multiple stopping criteria for the optimization are supported (see also the [Introduction](NLopt_Introduction#Termination_conditions.md)), as specified by the functions to modify a given optimization problem `opt`. The optimization halts whenever any one of these criteria is satisfied. In some cases, the precise interpretation of the stopping criterion depends on the optimization algorithm above (although we have tried to make them as consistent as reasonably possible), and some algorithms do not support all of the stopping criteria.
+Multiple stopping criteria for the optimization are supported (see also the [Introduction](NLopt_Introduction.md#termination-conditions)), as specified by the functions to modify a given optimization problem `opt`. The optimization halts whenever any one of these criteria is satisfied. In some cases, the precise interpretation of the stopping criterion depends on the optimization algorithm above (although we have tried to make them as consistent as reasonably possible), and some algorithms do not support all of the stopping criteria.
**Note:** you do not need to use *all* of the stopping criteria! In most cases, you only need one or two, and can omit the remainder (all criteria are disabled by default).
-For each stopping criteria, there are (at least) two functions: a `set` function to specify the stopping criterion, and a `get` function to retrieve the current value for that criterion.
+For each stopping criterion, there are (at least) two functions: a `set` function to specify the stopping criterion, and a `get` function to retrieve the current value for that criterion.
-```
+```c
nlopt_result nlopt_set_stopval(nlopt_opt opt, double stopval);
double nlopt_get_stopval(const nlopt_opt opt);
```
-
-Stop when an objective value of at least stopval is found: stop minimizing when an objective value ≤ `stopval` is found, or stop maximizing a value ≥ `stopval` is found. (Setting `stopval` to `-HUGE_VAL` for minimizing or `+HUGE_VAL` for maximizing disables this stopping criterion.)
+Stop when an objective value of at least `stopval` is found: stop minimizing when an objective value ≤ `stopval` is found, or stop maximizing when a value ≥ `stopval` is found. (Setting `stopval` to `-HUGE_VAL` for minimizing or `+HUGE_VAL` for maximizing disables this stopping criterion.)
-```
+```c
nlopt_result nlopt_set_ftol_rel(nlopt_opt opt, double tol);
double nlopt_get_ftol_rel(const nlopt_opt opt);
```
-
Set relative tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than `tol` multiplied by the absolute value of the function value. (If there is any chance that your optimum function value is close to zero, you might want to set an absolute tolerance with `nlopt_set_ftol_abs` as well.) Criterion is disabled if `tol` is non-positive.
-```
+```c
nlopt_result nlopt_set_ftol_abs(nlopt_opt opt, double tol);
double nlopt_get_ftol_abs(const nlopt_opt opt);
```
@@ -242,7 +229,7 @@ double nlopt_get_ftol_abs(const nlopt_opt opt);
Set absolute tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the function value by less than `tol`. Criterion is disabled if `tol` is non-positive.
-```
+```c
nlopt_result nlopt_set_xtol_rel(nlopt_opt opt, double tol);
double nlopt_get_xtol_rel(const nlopt_opt opt);
```
@@ -250,7 +237,7 @@ double nlopt_get_xtol_rel(const nlopt_opt opt);
-Set relative tolerance on optimization parameters: stop when an optimization step (or an estimate of the optimum) causes a relative change the parameters $x$ by less than `tol`, i.e. $\Vert \Delta x \Vert_w < \mbox{tol}\cdot\Vert x \Vert_w$ as measured by a weighted L₁ norm $\Vert x \Vert_w = \sum_i w_i |x_i|$, where the weights $w_i$ default to 1.
+Set relative tolerance on optimization parameters: stop when an optimization step (or an estimate of the optimum) causes a relative change in the parameters $x$ by less than `tol`, i.e. $\Vert \Delta x \Vert_w < \mbox{tol}\cdot\Vert x \Vert_w$ as measured by a weighted L₁ norm $\Vert x \Vert_w = \sum_i w_i |x_i|$, where the weights $w_i$ default to 1.
(If there is any chance that the optimal $\Vert x \Vert$ is close to zero, you might want to set an absolute tolerance with `nlopt_set_xtol_abs` as well.) Criterion is disabled if `tol` is non-positive.
-```
+```c
nlopt_result nlopt_set_x_weights(nlopt_opt opt, const double *w);
nlopt_result nlopt_set_x_weights1(nlopt_opt opt, const double w);
nlopt_result nlopt_get_x_weights(const nlopt_opt opt, double *w);
@@ -258,7 +245,7 @@ nlopt_result nlopt_get_x_weights(const nlopt_opt opt, double *w);
-Set/get the weights used when the computing L₁ norm for the `xtol_rel` stopping criterion above, where `*w` must point to an array of length equal to the number of optimization parameters in `opt`. `nlopt_set_x_weights1` can be used to set all of the weights to the same value `w`. The weights default to `1`, but non-constant weights can be used to handle situations where the different parameters `x` have different units or importance, for example.
+Set/get the weights used when computing the L₁ norm for the `xtol_rel` stopping criterion above, where `*w` must point to an array of length equal to the number of optimization parameters in `opt`. `nlopt_set_x_weights1` can be used to set all of the weights to the same value `w`. The weights default to `1`, but non-constant weights can be used to handle situations where the different parameters `x` have different units or importance, for example.
-```
+```c
nlopt_result nlopt_set_xtol_abs(nlopt_opt opt, const double *tol);
nlopt_result nlopt_set_xtol_abs1(nlopt_opt opt, double tol);
nlopt_result nlopt_get_xtol_abs(const nlopt_opt opt, double *tol);
@@ -267,7 +254,7 @@ nlopt_result nlopt_get_xtol_abs(const nlopt_opt opt, double *tol);
-Set absolute tolerances on optimization parameters. `tol` is a pointer to an array of length `n` (the dimension from `nlopt_create`) giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter `x[i]` by less than `tol[i]`. (Note that `nlopt_set_xtol_abs` makes a copy of the `tol` array, so subsequent changes to the caller's `tol` have no effect on `opt`.) In `nlopt_get_xtol_abs`, `tol` must be an array of length `n`, which upon successful return contains a copy of the current tolerances. For convenience, the `nlopt_set_xtol_abs1` may be used to set the absolute tolerances in all `n` optimization parameters to the same value. Criterion is disabled if `tol` is non-positive.
+Set absolute tolerances on optimization parameters. `tol` is a pointer to an array of length `n` (the dimension from `nlopt_create`) giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter `x[i]` by less than `tol[i]`. (Note that `nlopt_set_xtol_abs` makes a copy of the `tol` array, so subsequent changes to the caller's `tol` have no effect on `opt`.) In `nlopt_get_xtol_abs`, `tol` must be an array of length `n`, which upon successful return contains a copy of the current tolerances. For convenience, `nlopt_set_xtol_abs1` may be used to set the absolute tolerances in all `n` optimization parameters to the same value. Criterion is disabled if `tol` is non-positive.
-```
+```c
nlopt_result nlopt_set_maxeval(nlopt_opt opt, int maxeval);
int nlopt_get_maxeval(nlopt_opt opt);
```
@@ -275,22 +262,19 @@ int nlopt_get_maxeval(nlopt_opt opt);
-Stop when the number of function evaluations exceeds `maxeval`. (This is not a strict maximum: the number of function evaluations may exceed maxeval slightly, depending upon the algorithm.) Criterion is disabled if `maxeval` is non-positive.
+Stop when the number of function evaluations exceeds `maxeval`. (This is not a strict maximum: the number of function evaluations may exceed `maxeval` slightly, depending upon the algorithm.) Criterion is disabled if `maxeval` is non-positive.
-```
+```c
nlopt_result nlopt_set_maxtime(nlopt_opt opt, double maxtime);
double nlopt_get_maxtime(nlopt_opt opt);
```
-
-Stop when the optimization time (in seconds) exceeds `maxtime`. (This is not a strict maximum: the time may exceed maxtime slightly, depending upon the algorithm and on how slow your function evaluation is.) Criterion is disabled if `maxtime` is non-positive.
+Stop when the optimization time (in seconds) exceeds `maxtime`. (This is not a strict maximum: the time may exceed `maxtime` slightly, depending upon the algorithm and on how slow your function evaluation is.) Criterion is disabled if `maxtime` is non-positive.
-```
+```c
int nlopt_get_numevals(nlopt_opt opt);
```
-
-Request the number of evaluations.
+Returns the number of function evaluations performed so far.
-
### Forced termination
In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. (This is used to implement exception handling in the NLopt wrappers for C++ and other languages.) In this case, it is possible to tell NLopt to halt the optimization gracefully, returning the best point found so far, by calling the following function from *within* your objective or constraint functions:
@@ -304,18 +288,16 @@ This causes `nlopt_optimize` to halt, returning the `NLOPT_FORCED_STOP` error co
If you want to provide a bit more information, you can call
-```
+```c
-nlopt_result nlopt_set_force_stop(nlopt_opt opt, int val)
+nlopt_result nlopt_set_force_stop(nlopt_opt opt, int val);
```
-
to set a forced-stop integer value `val`, which can be later retrieved by calling:
-```
+```c
-int nlopt_get_force_stop(nlopt_opt opt)
+int nlopt_get_force_stop(nlopt_opt opt);
```
-
which returns the last force-stop value that was set since the last `nlopt_optimize`. The force-stop value is reset to zero at the beginning of `nlopt_optimize`. Passing `val=0` to `nlopt_set_force_stop` tells NLopt *not* to force a halt.
Performing the optimization
@@ -323,11 +305,10 @@ Performing the optimization
Once all of the desired optimization parameters have been specified in a given object `opt`, you can perform the optimization by calling:
-```
+```c
nlopt_result nlopt_optimize(nlopt_opt opt, double *x, double *opt_f);
```
-
On input, `x` is an array of length `n` (the dimension of the problem from `nlopt_create`) giving an initial guess for the optimization parameters. On successful return, `x` contains the optimized values of the parameters, and `opt_f` contains the corresponding value of the objective function.
The return value (see below) is positive on success and negative on failure.
@@ -340,37 +321,37 @@ Most of the NLopt functions return an enumerated constant of type `nlopt_result`
### Successful termination (positive return values)
```
-NLOPT_SUCCESS` `=` `1
+NLOPT_SUCCESS = 1
```
Generic success return value.
```
-NLOPT_STOPVAL_REACHED` `=` `2
+NLOPT_STOPVAL_REACHED = 2
```
Optimization stopped because `stopval` (above) was reached.
```
-NLOPT_FTOL_REACHED` `=` `3
+NLOPT_FTOL_REACHED = 3
```
Optimization stopped because `ftol_rel` or `ftol_abs` (above) was reached.
```
-NLOPT_XTOL_REACHED` `=` `4
+NLOPT_XTOL_REACHED = 4
```
Optimization stopped because `xtol_rel` or `xtol_abs` (above) was reached.
```
-NLOPT_MAXEVAL_REACHED` `=` `5
+NLOPT_MAXEVAL_REACHED = 5
```
Optimization stopped because `maxeval` (above) was reached.
```
-NLOPT_MAXTIME_REACHED` `=` `6
+NLOPT_MAXTIME_REACHED = 6
```
Optimization stopped because `maxtime` (above) was reached.
@@ -378,39 +359,38 @@ Optimization stopped because `maxtime` (above) was reached.
### Error codes (negative return values)
```
-NLOPT_FAILURE` `=` `-1
+NLOPT_FAILURE = -1
```
Generic failure code.
```
-NLOPT_INVALID_ARGS` `=` `-2
+NLOPT_INVALID_ARGS = -2
```
Invalid arguments (e.g. lower bounds are bigger than upper bounds, an unknown algorithm was specified, etcetera).
```
-NLOPT_OUT_OF_MEMORY` `=` `-3
+NLOPT_OUT_OF_MEMORY = -3
```
Ran out of memory.
```
-NLOPT_ROUNDOFF_LIMITED` `=` `-4
+NLOPT_ROUNDOFF_LIMITED = -4
```
Halted because roundoff errors limited progress. (In this case, the optimization still typically returns a useful result.)
```
-NLOPT_FORCED_STOP` `=` `-5
+NLOPT_FORCED_STOP = -5
```
-Halted because of a [forced termination](#Forced_termination.md): the user called `nlopt_force_stop(opt)` on the optimization’s `nlopt_opt` object `opt` from the user’s objective function or constraints.
+Halted because of a [forced termination](#forced-termination): the user called `nlopt_force_stop(opt)` on the optimization’s `nlopt_opt` object `opt` from the user’s objective function or constraints.
-
You can convert an `nlopt_result` to/from a string identifier (`NLOPT_FOO` converts to/from `"FOO"`) by calling:
-```
+```c
const char *nlopt_result_to_string(nlopt_result result);
nlopt_result nlopt_result_from_string(const char *name);
```
@@ -421,11 +401,10 @@ Local/subsidiary optimization algorithm
Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances by calling:
-```
+```c
nlopt_result nlopt_set_local_optimizer(nlopt_opt opt, const nlopt_opt local_opt);
```
-
Here, `local_opt` is another `nlopt_opt` object whose parameters are used to determine the local search algorithm, its stopping criteria, and other algorithm parameters. (However, the objective function, bounds, and nonlinear-constraint parameters of `local_opt` are ignored.) The dimension `n` of `local_opt` must match that of `opt`.
This function makes a copy of the `local_opt` object, so you can freely destroy your original `local_opt` afterwards.
@@ -437,25 +416,22 @@ For derivative-free local-optimization algorithms, the optimizer must somehow de
You can modify the initial step size by calling:
-```
+```c
nlopt_result nlopt_set_initial_step(nlopt_opt opt, const double* dx);
```
-
Here, `dx` is an array of length `n` (the dimension of the problem from `nlopt_create`) containing the (nonzero) initial step size for each component of the optimization parameters `x`. If you pass `NULL` for `dx`, then NLopt will use its heuristics to determine the initial step size. For convenience, if you want to set the step sizes in every direction to be the same value, you can instead call:
-```
+```c
nlopt_result nlopt_set_initial_step1(nlopt_opt opt, double dx);
```
-
You can get the initial step size by calling:
-```
+```c
nlopt_result nlopt_get_initial_step(const nlopt_opt opt, const double *x, double *dx);
```
-
Here, `x` is the same as the initial guess that you plan to pass to `nlopt_optimize` – if you have not set the initial step and NLopt is using its heuristics, its heuristic step size may depend on the initial *x*, which is why you must pass it here. Both `x` and `dx` are arrays of length `n` (the dimension of the problem from `nlopt_create`), where `dx` on successful return contains the initial step sizes.
Stochastic population
@@ -463,11 +439,10 @@ Stochastic population
-Several of the stochastic search algorithms (e.g., `CRS`, `MLSL`, and `ISRES`) start by generating some initial "population" of random points *x*. By default, this initial population size is chosen heuristically in some algorithm-specific way, but the initial population can by changed by calling:
+Several of the stochastic search algorithms (e.g., `CRS`, `MLSL`, and `ISRES`) start by generating some initial "population" of random points *x*. By default, this initial population size is chosen heuristically in some algorithm-specific way, but the initial population can be changed by calling:
-```
+```c
nlopt_result nlopt_set_population(nlopt_opt opt, unsigned pop);
```
-
(A `pop` of zero implies that the heuristic default will be used.)
Pseudorandom numbers
@@ -475,20 +450,18 @@ Pseudorandom numbers
For stochastic optimization algorithms, we use pseudorandom numbers generated by the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_twister) algorithm, based on code from Makoto Matsumoto. By default, the [seed](https://en.wikipedia.org/wiki/Random_seed) for the random numbers is generated from the system time, so that you will get a different sequence of pseudorandom numbers each time you run your program. If you want to use a "deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run, you can set the seed by calling:
-```
+```c
void nlopt_srand(unsigned long seed);
```
-
Some of the algorithms also support using low-discrepancy sequences (LDS), sometimes known as quasi-random numbers. NLopt uses the Sobol LDS, which is implemented for up to 1111 dimensions.
To reset the seed based on the system time, you can call:
-```
+```c
void nlopt_srand_time(void);
```
-
(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling `nlopt_srand` to set a deterministic seed.)
Vector storage for limited-memory quasi-Newton algorithms
@@ -496,12 +469,11 @@ Vector storage for limited-memory quasi-Newton algorithms
Some of the NLopt algorithms are limited-memory "quasi-Newton" algorithms, which "remember" the gradients from a finite number *M* of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger *M* is, the more storage the algorithms require, but on the other hand they *may* converge faster for larger *M*. By default, NLopt chooses a heuristic value of *M*, but this can be changed/retrieved by calling:
-```
+```c
nlopt_result nlopt_set_vector_storage(nlopt_opt opt, unsigned M);
unsigned nlopt_get_vector_storage(const nlopt_opt opt);
```
-
-Passing *M*=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets *M* to 10 or at most 10 [MiB](W:Mebibyte.md) worth of vectors, whichever is larger.
+Passing *M*=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets *M* to 10 or at most 10 [MiB](https://en.wikipedia.org/wiki/Mebibyte) worth of vectors, whichever is larger.
Preconditioning with approximate Hessians
@@ -511,15 +483,14 @@ If you know the Hessian (second-derivative) matrix of your objective function, i
Currently, support for preconditioners in NLopt is somewhat experimental, and is only used in the `NLOPT_LD_CCSAQ` algorithm. You specify a preconditioned objective function by calling one of:
-```
+```c
nlopt_result nlopt_set_precond_min_objective(nlopt_opt opt, nlopt_func f, nlopt_precond pre, void *f_data);
-nlopt_result nlopt_set_precond_min_objective(nlopt_opt opt, nlopt_func f, nlopt_precond pre, void *f_data);
+nlopt_result nlopt_set_precond_max_objective(nlopt_opt opt, nlopt_func f, nlopt_precond pre, void *f_data);
```
-
which are identical to `nlopt_set_min_objective` and `nlopt_set_max_objective`, respectively, except that they additionally specify a preconditioner `pre`, which is a function of the form:
-```
+```c
void pre(unsigned n, const double *x, const double *v, double *vpre, void *f_data);
```
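The preconditioner is expected to compute `vpre` as the product of (an approximation of) the objective's Hessian at `x` with the vector `v`. As a sketch (the name `pre_sphere` is hypothetical), for an objective $f(\mathbf{x}) = x_0^2 + 3x_1^2$ whose Hessian is the constant diagonal matrix $H = \mathrm{diag}(2, 6)$:

```c
/* Hypothetical preconditioner for f(x) = x0^2 + 3*x1^2 on R^2: the Hessian is
   the constant diagonal matrix H = diag(2, 6), so vpre = H v. */
void pre_sphere(unsigned n, const double *x, const double *v,
                double *vpre, void *f_data)
{
    (void) x; (void) f_data;           /* H happens not to depend on x here */
    static const double h[2] = {2.0, 6.0};
    for (unsigned i = 0; i < n && i < 2; ++i)
        vpre[i] = h[i] * v[i];
}
```

It would be passed as the `pre` argument alongside the corresponding objective function.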
@@ -531,11 +502,10 @@ Version number
To determine the version number of NLopt at runtime, you can call:
-```
+```c
void nlopt_version(int *major, int *minor, int *bugfix);
```
-
For example, NLopt version 3.1.4 would return `*major=3`, `*minor=1`, and `*bugfix=4`.