From 3d8c5de341ca868ca99b208948d8d6f468343f9f Mon Sep 17 00:00:00 2001 From: Julien Schueller Date: Fri, 25 Oct 2024 20:37:37 +0200 Subject: [PATCH] Doc: Fixed links --- doc/docs/NLopt_Algorithms.md | 26 +++++++++++----------- doc/docs/NLopt_C-plus-plus_Reference.md | 24 ++++++++++---------- doc/docs/NLopt_Deprecated_API_Reference.md | 10 ++++----- doc/docs/NLopt_Fortran_Reference.md | 22 +++++++++--------- doc/docs/NLopt_Guile_Reference.md | 18 +++++++-------- doc/docs/NLopt_Introduction.md | 2 +- doc/docs/NLopt_Matlab_Reference.md | 14 ++++++------ doc/docs/NLopt_Python_Reference.md | 4 ++-- doc/docs/NLopt_Tutorial.md | 8 +++---- 9 files changed, 64 insertions(+), 64 deletions(-) diff --git a/doc/docs/NLopt_Algorithms.md b/doc/docs/NLopt_Algorithms.md index 477748d1..4abdf166 100644 --- a/doc/docs/NLopt_Algorithms.md +++ b/doc/docs/NLopt_Algorithms.md @@ -24,14 +24,14 @@ For any given optimization problem, it is a good idea to compare several of the However, comparing algorithms requires a little bit of care because the function-value/parameter tolerance tests are not all implemented in exactly the same way for different algorithms. So, for example, the same fractional 10−4 tolerance on the function value might produce a much more accurate minimum in one algorithm compared to another, and matching them might require some experimentation with the tolerances. -Instead, a more fair and reliable way to compare two different algorithms is to run one until the function value is converged to some value *f*A, and then run the second algorithm with the minf_max [termination test](NLopt_Introduction#termination-conditions) set to minf_max=*f*A. That is, ask how long it takes for the two algorithms to reach the same function value. +Instead, a more fair and reliable way to compare two different algorithms is to run one until the function value is converged to some value *f*A, and then run the second algorithm with the minf_max [termination test](NLopt_Introduction.md#termination-conditions) set to minf_max=*f*A. That is, ask how long it takes for the two algorithms to reach the same function value. Better yet, run some algorithm for a really long time until the minimum *f*M is located to high precision. Then run the different algorithms you want to compare with the termination test: minf_max=*f*M+Δ*f*. That is, ask how long it takes for the different algorithms to obtain the minimum to within an absolute tolerance Δ*f*, for some Δ*f*. (This is *totally different* from using the ftol_abs termination test, because the latter uses only a crude estimate of the error in the function values, and moreover the estimate varies between algorithms.) Global optimization ------------------- -All of the global-optimization algorithms currently require you to specify bound constraints on all the optimization parameters. Of these algorithms, only ISRES, AGS, and ORIG_DIRECT support nonlinear inequality constraints, and only ISRES supports nonlinear equality constraints. (However, any of them can be applied to nonlinearly constrained problems by combining them with the [augmented Lagrangian method](#augmented-lagrangian-algorithm) below.) +All of the algorithms currently require you to specify bound constraints on all the optimization parameters. Of these algorithms, only ISRES, AGS, and ORIG_DIRECT support nonlinear inequality constraints, and only ISRES supports nonlinear equality constraints. 
(However, any of them can be applied to nonlinearly constrained problems by combining them with the [augmented Lagrangian method](#augmented-lagrangian-algorithm) below.)

**Something you should consider** is that, after running the global optimization, it is often worthwhile to then use the global optimum as a starting point for a local optimization to "polish" the optimum to a greater accuracy. (Many of the global optimization algorithms devote more effort to searching the global parameter space than in finding the precise position of the local optimum accurately.)

@@ -75,7 +75,7 @@ The CRS algorithms are sometimes compared to genetic algorithms, in that they st

- Eligius M. T. Hendrix, P. M. Ortigosa, and I. García, "On success rates for controlled random search," *J. Global Optim.* **21**, p. 239-263 (2001).

-The initial population size for CRS defaults to 10×(*n*+1) in *n* dimensions, but this can be changed with the [nlopt_set_population](NLopt_Reference#stochastic-population) function; the initial population must be at least *n*+1.
+The initial population size for CRS defaults to 10×(*n*+1) in *n* dimensions, but this can be changed with the [nlopt_set_population](NLopt_Reference.md#stochastic-population) function; the initial population must be at least *n*+1.

Only bound-constrained problems are supported by this algorithm.

@@ -95,11 +95,11 @@ In either case, MLSL is a "multistart" algorithm: it works by doing a sequence o

The local-search portion of MLSL can use any of the other algorithms in NLopt, and in particular can use either gradient-based (`D`) or derivative-free algorithms (`N`) The local search uses the derivative/nonderivative algorithm set by `nlopt_opt_set_local_optimizer`.

-LDS-based MLSL with is specified as `NLOPT_G_MLSL_LDS`, while the original non-LDS original MLSL (using pseudo-random numbers, currently via the [Mersenne twister](https://en.wikipedia.org/wiki/Mersenne_twister) algorithm) is indicated by `NLOPT_G_MLSL`. In both cases, you must specify the [local optimization](NLopt_Reference#localsubsidiary-optimization-algorithm) algorithm (which can be gradient-based or derivative-free) via `nlopt_opt_set_local_optimizer`.
+LDS-based MLSL is specified as `NLOPT_G_MLSL_LDS`, while the original non-LDS MLSL (using pseudo-random numbers, currently via the [Mersenne twister](https://en.wikipedia.org/wiki/Mersenne_twister) algorithm) is indicated by `NLOPT_G_MLSL`. In both cases, you must specify the [local optimization](NLopt_Reference.md#localsubsidiary-optimization-algorithm) algorithm (which can be gradient-based or derivative-free) via `nlopt_opt_set_local_optimizer`.

**Note**: If you do not set a stopping tolerance for your local-optimization algorithm, MLSL defaults to ftol_rel=10−15 and xtol_rel=10−7 for the local searches. Note that it is perfectly reasonable to set a relatively large tolerance for these local searches, run MLSL, and then at the end run another local optimization with a lower tolerance, using the MLSL result as a starting point, to "polish off" the optimum to high precision.

-By default, each iteration of MLSL samples 4 random new trial points, but this can be changed with the [nlopt_set_population](NLopt_Reference#stochastic-population) function.
+By default, each iteration of MLSL samples 4 random new trial points, but this can be changed with the [nlopt_set_population](NLopt_Reference.md#stochastic-population) function.

Only bound-constrained problems are supported by this algorithm.
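As an illustration of the calls discussed above (not taken from the patched files — the quadratic objective, bounds, and population size below are placeholders), a minimal C program driving MLSL with a subsidiary local optimizer might look like:

```
#include <nlopt.h>

/* Placeholder 2-D objective in the standard nlopt_func form. */
static double myfunc(unsigned n, const double *x, double *grad, void *data)
{
    (void)n; (void)data;
    if (grad) { grad[0] = 2.0 * x[0]; grad[1] = 2.0 * x[1]; }
    return x[0] * x[0] + x[1] * x[1];
}

int main(void)
{
    double lb[2] = {-5.0, -5.0}, ub[2] = {5.0, 5.0};
    double x[2] = {1.0, 1.0}, minf;

    nlopt_opt opt   = nlopt_create(NLOPT_G_MLSL_LDS, 2); /* LDS-based MLSL */
    nlopt_opt local = nlopt_create(NLOPT_LD_LBFGS, 2);   /* gradient-based local search */
    nlopt_set_local_optimizer(opt, local);               /* the local settings are copied */
    nlopt_destroy(local);

    nlopt_set_lower_bounds(opt, lb);
    nlopt_set_upper_bounds(opt, ub);
    nlopt_set_min_objective(opt, myfunc, NULL);
    nlopt_set_population(opt, 8);      /* trial points per iteration (default is 4) */
    nlopt_set_maxeval(opt, 10000);     /* stopping criterion for the global search */

    nlopt_optimize(opt, x, &minf);
    nlopt_destroy(opt);
    return 0;
}
```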
@@ -157,7 +157,7 @@ It is a refinement of an earlier method described in: This is an independent implementation by S. G. Johnson (2009) based on the papers above. Runarsson also has his own Matlab implemention available from his web page [here](http://www3.hi.is/~tpr). -The evolution strategy is based on a combination of a mutation rule (with a log-normal step-size update and exponential smoothing) and differential variation (a Nelder–Mead-like update rule). The fitness ranking is simply via the objective function for problems without nonlinear constraints, but when nonlinear constraints are included the stochastic ranking proposed by Runarsson and Yao is employed. The population size for ISRES defaults to 20×(*n*+1) in *n* dimensions, but this can be changed with the [nlopt_set_population](NLopt_Reference#stochastic-population) function. +The evolution strategy is based on a combination of a mutation rule (with a log-normal step-size update and exponential smoothing) and differential variation (a Nelder–Mead-like update rule). The fitness ranking is simply via the objective function for problems without nonlinear constraints, but when nonlinear constraints are included the stochastic ranking proposed by Runarsson and Yao is employed. The population size for ISRES defaults to 20×(*n*+1) in *n* dimensions, but this can be changed with the [nlopt_set_population](NLopt_Reference.md#stochastic-population) function. This method supports arbitrary nonlinear inequality and equality constraints in addition to the bound constraints, and is specified within NLopt as `NLOPT_GN_ISRES`. @@ -206,7 +206,7 @@ The original code itself was written in Fortran by Powell and was converted to C NLopt's version is slightly modified in a few ways. First, we incorporated all of the NLopt termination criteria. Second, we added explicit support for bound constraints (although the original COBYLA could handle bound constraints as linear constraints, it would sometimes take a step that violated the bound constraints). Third, we allow `COBYLA` to increase the trust-region radius if the predicted improvement was approximately right and the simplex is OK, following a suggestion in the [SAS manual for PROC NLP](http://www.uc.edu/sashtml/iml/chap17/sect164.htm) that seems to improve convergence speed. Fourth, we pseudo-randomize simplex steps in COBYLA algorithm, improving robustness by avoiding accidentally taking steps that don't improve conditioning (which seems to happen sometimes with active bound constraints); the algorithm remains deterministic (a deterministic seed is used), however. Also, we support unequal initial-step sizes in the different parameters (by the simple expedient of internally rescaling the parameters proportional to the initial steps), which is important when different parameters have very different scales. -(The underlying COBYLA code only supports inequality constraints. Equality constraints are automatically [transformed into pairs](NLopt_Introduction#equality-constraints) of inequality constraints, which in the case of this algorithm seems not to cause problems.) +(The underlying COBYLA code only supports inequality constraints. Equality constraints are automatically [transformed into pairs](NLopt_Introduction.md#equality-constraints) of inequality constraints, which in the case of this algorithm seems not to cause problems.) It is specified within NLopt as `NLOPT_LN_COBYLA`. 
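As an illustration of the calling pattern (the objective and constraint functions below are placeholders, not from the NLopt docs), a minimal C program using `NLOPT_LN_COBYLA` with one inequality constraint might look like:

```
#include <nlopt.h>

/* Placeholder objective and constraint in nlopt_func form; COBYLA is
   derivative-free, so grad is always NULL and is simply ignored here. */
static double obj(unsigned n, const double *x, double *grad, void *data)
{
    (void)n; (void)grad; (void)data;
    return x[0] * x[0] + x[1] * x[1];
}

static double cons(unsigned n, const double *x, double *grad, void *data)
{
    (void)n; (void)grad; (void)data;
    return 1.0 - x[0] - x[1];   /* feasible when cons(x) <= 0, i.e. x0 + x1 >= 1 */
}

int main(void)
{
    double lb[2] = {0.0, 0.0}, x[2] = {2.0, 2.0}, minf;

    nlopt_opt opt = nlopt_create(NLOPT_LN_COBYLA, 2);
    nlopt_set_lower_bounds(opt, lb);
    nlopt_set_min_objective(opt, obj, NULL);
    nlopt_add_inequality_constraint(opt, cons, NULL, 1e-8);
    /* An equality constraint added with nlopt_add_equality_constraint is
       handled as the pair of inequalities described above. */
    nlopt_set_xtol_rel(opt, 1e-6);

    nlopt_optimize(opt, x, &minf);
    nlopt_destroy(opt);
    return 0;
}
```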
@@ -314,10 +314,10 @@ I also implemented another CCSA algorithm from the same paper, `NLOPT_LD_CCSAQ`:

For the quadratic variant I also implemented the possibility of [preconditioning](NLopt_Reference.md#preconditioning-with-approximate-hessians): including a user-supplied Hessian approximation in the local model. It is easy to incorporate this into the proof in Svanberg's paper, and to show that global convergence is still guaranteed as long as the user's "Hessian" is positive semidefinite, and in practice it can greatly improve convergence if the preconditioner is a good approximation for the real Hessian (at least for the eigenvectors of the largest eigenvalues).

The `NLOPT_LD_MMA` and `NLOPT_LD_CCSAQ` algorithms support the following internal parameters, which can be
-specified using the [`nlopt_set_param` API](NLopt_Reference#algorithm-specific-parameters):
+specified using the [`nlopt_set_param` API](NLopt_Reference.md#algorithm-specific-parameters):

* `inner_maxeval`: If ≥ 0, gives maximum number of "inner" iterations of the algorithm where it tries to ensure that its approximatations are "conservative"; defaults to `0` (no limit). It can be useful to specify a finite number (e.g. `5` or `10`) for this parameter if inaccuracies in your gradient or objective function are preventing the algorithm from making progress.
-* `dual_algorithm` (defaults to `NLOPT_LD_MMA`), `dual_ftol_rel` (defaults to `1e-14`), `dual_ftol_abs` (defaults to `0`), `dual_xtol_rel` (defaults to `0`), `dual_xtol_abs` (defaults to `0`), `dual_maxeval` (defaults to `100000`): These specify how the algorithm internally solves the "dual" optimization problem for its approximate objective. Because this subsidiary solve requires no evaluations of the user's objective function, it is typically fast enough that we can solve it to high precision without worrying too much about the details. Howeve,r in high-dimensional problems you may notice that MMA/CCSA is taking a long time between optimization steps, in which case you may want to increase `dual_ftol_rel` or make other changes. If these parameters are not specified, NLopt takes them from the [subsidiary-optimizer algorithm](NLopt_Reference#localsubsidiary-optimization-algorithm) if that has been specified, and otherwise uses the defaults indicated here.
+* `dual_algorithm` (defaults to `NLOPT_LD_MMA`), `dual_ftol_rel` (defaults to `1e-14`), `dual_ftol_abs` (defaults to `0`), `dual_xtol_rel` (defaults to `0`), `dual_xtol_abs` (defaults to `0`), `dual_maxeval` (defaults to `100000`): These specify how the algorithm internally solves the "dual" optimization problem for its approximate objective. Because this subsidiary solve requires no evaluations of the user's objective function, it is typically fast enough that we can solve it to high precision without worrying too much about the details. However, in high-dimensional problems you may notice that MMA/CCSA is taking a long time between optimization steps, in which case you may want to increase `dual_ftol_rel` or make other changes. If these parameters are not specified, NLopt takes them from the [subsidiary-optimizer algorithm](NLopt_Reference.md#localsubsidiary-optimization-algorithm) if that has been specified, and otherwise uses the defaults indicated here.
* `verbosity`: If > 0, causes the algorithm to print internal status information on each iteration.
* `rho_init`: if specified, should be a rough upper bound for the second derivative (the biggest eigenvalue of the Hessian of the objective or constraints); defaults to `1.0`. CCSA/MMA will adaptively adjust this as the optimization progresses, so even it if `rho_init` is completely wrong the algorithm will still converge. A `rho_init` that is too large will cause the algorithm to take overly small steps at the beginning, while a `rho_init` that is too small will cause it to take overly large steps (and have to backtrack) at the beginning. Similarly, you can also use the "initial stepsize" option ([NLopt reference](NLopt_Reference.md#initial-step-size)) to control the maximum size of the initial steps (half the diameter of the trust region). @@ -349,7 +349,7 @@ The original L-BFGS algorithm, based on variable-metric updates via Strang recur I converted Prof. Luksan's code to C with the help of [f2c](https://en.wikipedia.org/wiki/f2c), and made a few minor modifications (mainly to include the NLopt termination criteria). -One of the parameters of this algorithm is the number *M* of gradients to "remember" from previous optimization steps: increasing *M* increases the memory requirements but may speed convergence. NLopt sets *M* to a heuristic value by default, but this can be [changed by the set_vector_storage function](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms). +One of the parameters of this algorithm is the number *M* of gradients to "remember" from previous optimization steps: increasing *M* increases the memory requirements but may speed convergence. NLopt sets *M* to a heuristic value by default, but this can be [changed by the set_vector_storage function](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms). ### Preconditioned truncated Newton @@ -366,7 +366,7 @@ p. 190-212 (1983) . I converted Prof. Luksan's code to C with the help of [f2c](https://en.wikipedia.org/wiki/f2c), and made a few minor modifications (mainly to include the NLopt termination criteria). -One of the parameters of this algorithm is the number *M* of gradients to "remember" from previous optimization steps: increasing *M* increases the memory requirements but may speed convergence. NLopt sets *M* to a heuristic value by default, but this can be [changed by the set_vector_storage function](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms). +One of the parameters of this algorithm is the number *M* of gradients to "remember" from previous optimization steps: increasing *M* increases the memory requirements but may speed convergence. NLopt sets *M* to a heuristic value by default, but this can be [changed by the set_vector_storage function](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms). ### Shifted limited-memory variable-metric @@ -382,7 +382,7 @@ The algorithms are based on the ones described by: I converted Prof. Luksan's code to C with the help of [f2c](https://en.wikipedia.org/wiki/f2c), and made a few minor modifications (mainly to include the NLopt termination criteria). -One of the parameters of this algorithm is the number *M* of gradients to "remember" from previous optimization steps: increasing *M* increases the memory requirements but may speed convergence. NLopt sets *M* to a heuristic value by default, but this can be [changed by the set_vector_storage function](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms). 
+One of the parameters of this algorithm is the number *M* of gradients to "remember" from previous optimization steps: increasing *M* increases the memory requirements but may speed convergence. NLopt sets *M* to a heuristic value by default, but this can be [changed by the set_vector_storage function](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms). Augmented Lagrangian algorithm ------------------------------ @@ -394,7 +394,7 @@ There is one algorithm in NLopt that fits into all of the above categories, depe This method combines the objective function and the nonlinear inequality/equality constraints (if any) in to a single function: essentially, the objective plus a "penalty" for any violated constraints. This modified objective function is then passed to *another* optimization algorithm with *no* nonlinear constraints. If the constraints are violated by the solution of this sub-problem, then the size of the penalties is increased and the process is repeated; eventually, the process must converge to the desired solution (if it exists). -The subsidiary optimization algorithm is specified by the `nlopt_set_local_optimizer` function, described in the [NLopt Reference](NLopt_Reference#localsubsidiary-optimization-algorithm). (Don't forget to set a stopping tolerance for this subsidiary optimizer!) Since all of the actual optimization is performed in this subsidiary optimizer, the subsidiary algorithm that you specify determines whether the optimization is gradient-based or derivative-free. In fact, you can even specify a global optimization algorithm for the subsidiary optimizer, in order to perform global nonlinearly constrained optimization (although specifying a good stopping criterion for this subsidiary global optimizer is tricky). +The subsidiary optimization algorithm is specified by the `nlopt_set_local_optimizer` function, described in the [NLopt Reference](NLopt_Reference.md#localsubsidiary-optimization-algorithm). (Don't forget to set a stopping tolerance for this subsidiary optimizer!) Since all of the actual optimization is performed in this subsidiary optimizer, the subsidiary algorithm that you specify determines whether the optimization is gradient-based or derivative-free. In fact, you can even specify a global optimization algorithm for the subsidiary optimizer, in order to perform global nonlinearly constrained optimization (although specifying a good stopping criterion for this subsidiary global optimizer is tricky). The augmented Lagrangian method is specified in NLopt as `NLOPT_AUGLAG`. We also provide a variant, `NLOPT_AUGLAG_EQ`, that only uses penalty functions for equality constraints, while inequality constraints are passed through to the subsidiary algorithm to be handled directly; in this case, the subsidiary algorithm must handle inequality constraints (e.g. MMA or COBYLA). diff --git a/doc/docs/NLopt_C-plus-plus_Reference.md b/doc/docs/NLopt_C-plus-plus_Reference.md index 968c2127..6a32d40c 100644 --- a/doc/docs/NLopt_C-plus-plus_Reference.md +++ b/doc/docs/NLopt_C-plus-plus_Reference.md @@ -96,12 +96,12 @@ void nlopt::opt::set_max_objective(nlopt::func f, void* f_data); ``` -where `f` is of the same form as the [C objective function](NLopt_Reference#objective-function). +where `f` is of the same form as the [C objective function](NLopt_Reference.md#objective-function). 
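For reference, that C objective-function form is a callback like the following sketch (the sum-of-squares objective is only a placeholder; the signature is what matters):

```
/* The C form referred to above: return f(x) and, if grad is non-NULL, store
   the partial derivative df/dx[i] in grad[i]. */
double myfunc(unsigned n, const double *x, double *grad, void *f_data)
{
    (void)f_data;
    double sum = 0.0;
    for (unsigned i = 0; i < n; ++i) {
        sum += x[i] * x[i];
        if (grad) grad[i] = 2.0 * x[i];  /* grad is NULL for derivative-free algorithms */
    }
    return sum;
}
```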
Bound constraints ----------------- -The [bound constraints](NLopt_Reference#bound-constraints) can be specified by calling the methods: +The [bound constraints](NLopt_Reference.md#bound-constraints) can be specified by calling the methods: ``` void nlopt::opt::set_lower_bounds(const std::vector`` &lb); @@ -132,7 +132,7 @@ where the first two functions set their arguments (which must be vectors of leng Nonlinear constraints --------------------- -Just as for [nonlinear constraints in C](NLopt_Reference#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods: +Just as for [nonlinear constraints in C](NLopt_Reference.md#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods: ``` void nlopt::opt::add_inequality_constraint(nlopt::vfunc fc, void *fc_data, double tol=0); @@ -152,7 +152,7 @@ void nlopt::opt::remove_equality_constraints(); ### Vector-valued constraints -Just as for [nonlinear constraints in C](NLopt_Reference#vector-valued-constraints), you can specify nonlinear inequality and equality constraints by the methods: +Just as for [nonlinear constraints in C](NLopt_Reference.md#vector-valued-constraints), you can specify nonlinear inequality and equality constraints by the methods: ``` void nlopt::opt::add_inequality_mconstraint(nlopt::mfunc c, void *c_data, const vector`` &tol); @@ -167,7 +167,7 @@ Here, `tol` is a vector of the tolerances in each constraint dimension; the dime Stopping criteria ----------------- -As explained in the [C API Reference](NLopt_Reference#stopping-criteria) and the [Introduction](NLopt_Introduction#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) +As explained in the [C API Reference](NLopt_Reference.md#stopping-criteria) and the [Introduction](NLopt_Introduction.md#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) For each stopping criteria, there are (at least) two method: a `set` method to specify the stopping criterion, and a `get` method to retrieve the current value for that criterion. The meanings of each criterion are exactly the same as in the C API. @@ -248,7 +248,7 @@ Request the number of evaluations. ### Forced termination -In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. You can do this by throwing *any* exception inside your objective/constraint functions: the exception will be caught, the optimization will be halted gracefully, and another exception (possibly not the same one) will be rethrown. See [Exceptions](#exceptions), below. The C++ equivalent of `nlopt_forced_stop` from the [C API](NLopt_Reference#forced-termination) is to throw an `nlopt::forced_stop` exception. +In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. You can do this by throwing *any* exception inside your objective/constraint functions: the exception will be caught, the optimization will be halted gracefully, and another exception (possibly not the same one) will be rethrown. 
See [Exceptions](#exceptions), below. The C++ equivalent of `nlopt_forced_stop` from the [C API](NLopt_Reference.md#forced-termination) is to throw an `nlopt::forced_stop` exception. Algorithm-specific parameters @@ -262,7 +262,7 @@ double nlopt::opt::get_param(const char *name, double defaultval); unsigned nlopt::opt::num_params(); const char *nlopt::opt::nth_param(unsigned n); ``` -where the string `name` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. These functions are equivalent to the [C API](NLopt_Reference#algorithm-specific-parameters) functions of the corresponding names. +where the string `name` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. These functions are equivalent to the [C API](NLopt_Reference.md#algorithm-specific-parameters) functions of the corresponding names. Performing the optimization @@ -289,12 +289,12 @@ nlopt::result nlopt::opt::last_optimize_result() const; ### Return values -The possible return values are the same as the [return values in the C API](NLopt_Reference#return-values), except that the `NLOPT_` prefix is replaced with the `nlopt::` namespace. That is, `NLOPT_SUCCESS` becomes `nlopt::SUCCESS`, etcetera. +The possible return values are the same as the [return values in the C API](NLopt_Reference.md#return-values), except that the `NLOPT_` prefix is replaced with the `nlopt::` namespace. That is, `NLOPT_SUCCESS` becomes `nlopt::SUCCESS`, etcetera. Exceptions ---------- -The [Error codes (negative return values)](NLopt_Reference#error-codes-negative-return-values) in the C API are replaced in the C++ API by thrown exceptions. The following exceptions are thrown by the various routines: +The [Error codes (negative return values)](NLopt_Reference.md#error-codes-negative-return-values) in the C API are replaced in the C++ API by thrown exceptions. The following exceptions are thrown by the various routines: ``` std::runtime_error @@ -339,7 +339,7 @@ This function makes a copy of the `local_opt` object, so you can freely destroy Initial step size ----------------- -Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference#initial-step-size) for derivative-free optimization algorithms. The C++ equivalents of the C functions are the following methods: +Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference.md#initial-step-size) for derivative-free optimization algorithms. 
The C++ equivalents of the C functions are the following methods: ``` void nlopt::opt::set_initial_step(const std::vector &dx); @@ -351,7 +351,7 @@ void nlopt::opt::get_initial_step(const std::vector &x, std::vector< Stochastic population --------------------- -Just as in the C API, you can [get and set the initial population](NLopt_Reference#stochastic-population) for stochastic optimization algorithms, by the methods: +Just as in the C API, you can [get and set the initial population](NLopt_Reference.md#stochastic-population) for stochastic optimization algorithms, by the methods: ``` void nlopt::opt::set_population(unsigned pop); @@ -383,7 +383,7 @@ void nlopt::srand_time(); Vector storage for limited-memory quasi-Newton algorithms --------------------------------------------------------- -Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms, via the methods: +Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms, via the methods: ``` void nlopt::opt::set_vector_storage(unsigned M); diff --git a/doc/docs/NLopt_Deprecated_API_Reference.md b/doc/docs/NLopt_Deprecated_API_Reference.md index 83132b23..4eea62ee 100644 --- a/doc/docs/NLopt_Deprecated_API_Reference.md +++ b/doc/docs/NLopt_Deprecated_API_Reference.md @@ -19,7 +19,7 @@ where *compiler* is `cc`, `f77`, `g++`, or whatever is appropriate for your mach *Note:* the `-lnlopt` `-lm` options, which link to the NLopt library (and the math library, which it requires), must come *after* your source/object files. In general, the rule is that if *A* depends upon *B*, then *A* must come before *B* in the link command. -*Note:* the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like `/usr/lib` or `/usr/local/lib`). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a `-L` flag to tell the compiler where to find the library. See [the installation manual](NLopt_Installation#changing-the-installation-directory). +*Note:* the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like `/usr/lib` or `/usr/local/lib`). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a `-L` flag to tell the compiler where to find the library. See [the installation manual](NLopt_Installation.md#changing-the-installation-directory). C/C++ programming interface --------------------------- @@ -69,7 +69,7 @@ Starting guess and returned optimum: - `x` — an array of length `n` of the optimization parameters `x[0]`, ..., `x[n-1]`. On input, a starting guess for the optimum parameters; on output, the best found values of the parameters. (For a *local* optimization routine, the starting guess `x` determines which local optimum is found.) The starting guess is required to satisfy the bound constraints `lb` and `ub`; it need not satisfy the nonlinear inequality constraints `fc` (although it might be more efficient if you have a feasible starting guess.) 
- `minf` — on output, the minimum value of the objective function that was found (corresponding to the output value of the parameters `x`). -The remaining parameters specify the termination conditions. Please read the [introduction to the termination conditions](NLopt_Introduction#termination-conditions) for a general overview of these criteria. (In particular, note that you do *not* need to use *all* of these conditions; typically, you will use only one or two, and set the remainder to innocuous values.) +The remaining parameters specify the termination conditions. Please read the [introduction to the termination conditions](NLopt_Introduction.md#termination-conditions) for a general overview of these criteria. (In particular, note that you do *not* need to use *all* of these conditions; typically, you will use only one or two, and set the remainder to innocuous values.) - `minf_max` — stop if the objective function value drops below `minf_max`. (Set to `-HUGE_VAL` to ignore.) - `ftol_rel`, `ftol_abs` — relative and absolute tolerances in the objective function value. (Set to zero to ignore.) @@ -152,7 +152,7 @@ In particular, the constraint function `fc` will be called (at most) `m` times f ### Mixed global/local search algorithm -Some of the [global optimization algorithms](NLopt_Algorithms#Global_optimization.md) (currently, only MLSL) combine some global search scheme with a separate local optimization algorithm for local searches. For example, MLSL performs a sequence of local searches from semi-random starting points. +Some of the [global optimization algorithms](NLopt_Algorithms.md#global-optimization) (currently, only MLSL) combine some global search scheme with a separate local optimization algorithm for local searches. For example, MLSL performs a sequence of local searches from semi-random starting points. Using the following functions, you can control *which* local search algorithm is used for MLSL (and any similar algorithm that is added in the future), as well as specifying a maximum number of function evaluations for the local search: @@ -239,7 +239,7 @@ The parameters are similar to those of `nlopt_minimize_constrained` (see also th - `x` (double precision array(`n`), IN/OUT) — on input, an initial guess for the optimization parameters; on output, the best parameters found - `minf` (double precision, OUT) — on output, the minimum value of the objective function that was found -Termination conditions (see [introduction](NLopt_Introduction#termination-conditions)): +Termination conditions (see [introduction](NLopt_Introduction.md#termination-conditions)): - `minf_max` (double precision, IN) — stop if the an objective function value ≤ `minf_max` is found (set to `-Infinity`, or a huge negative number, to ignore). - `ftol_rel`, `ftol_abs` (double precision, IN) — relative and absolute tolerances in the objective function value (ignored if zero). @@ -419,7 +419,7 @@ If you have no nonlinear constraints, i.e. `fc` = `fc_data` = `{}`, then it is e - `stop.maxtime` — maximum run time in seconds - `stop.verbose` — > 0 indicates verbose output -You do *not* need to set all of these fields; termination conditions corresponding to any fields that you do not set are ignored. As discussed in the [introduction](NLopt_Introduction#termination-conditions), normally you only want one or two of these conditions. 
For example to set a relative **x** tolerance of 10−4 and run for no more than 5 minutes, you would do: +You do *not* need to set all of these fields; termination conditions corresponding to any fields that you do not set are ignored. As discussed in the [introduction](NLopt_Introduction.md#termination-conditions), normally you only want one or two of these conditions. For example to set a relative **x** tolerance of 10−4 and run for no more than 5 minutes, you would do: ``` stop.xtol_rel = 1e-4; diff --git a/doc/docs/NLopt_Fortran_Reference.md b/doc/docs/NLopt_Fortran_Reference.md index 2c5bc7a2..1b58c75e 100644 --- a/doc/docs/NLopt_Fortran_Reference.md +++ b/doc/docs/NLopt_Fortran_Reference.md @@ -25,12 +25,12 @@ When you compile your program you will have to link it to the NLopt library. On where *compiler* is `f77`, `gfortran`, or whatever is appropriate for your machine. -*Note:* the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like `/usr/lib` or `/usr/local/lib`). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a `-L` flag to tell the compiler where to find the library. See [the installation manual](NLopt_Installation#changing-the-installation-directory). +*Note:* the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like `/usr/lib` or `/usr/local/lib`). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a `-L` flag to tell the compiler where to find the library. See [the installation manual](NLopt_Installation.md#changing-the-installation-directory). Fortran vs. C API ----------------- -As explained in the [NLopt Tutorial](NLopt_Tutorial#example-in-fortran), there are a few simple rules that define the differences between the C and Fortran APIs: +As explained in the [NLopt Tutorial](NLopt_Tutorial.md#example-in-fortran), there are a few simple rules that define the differences between the C and Fortran APIs: - All `nlopt_` functions are converted into `nlo_` subroutines, with return values converted into the first argument. - The `nlopt_opt` type corresponds to `integer*8`. (Technically, we could use any type that is big enough to hold a pointer on all platforms; `integer*8` is big enough for pointers on both 32-bit and 64-bit machines.) 
@@ -114,7 +114,7 @@ The `f_data` argument can be used to pass through a single variable containing a Bound constraints ----------------- -The [bound constraints](NLopt_Reference#bound-constraints) can be specified by calling the methods: +The [bound constraints](NLopt_Reference.md#bound-constraints) can be specified by calling the methods: ``` double precision lb(n), ub(n) @@ -146,7 +146,7 @@ To specify an unbounded dimension, you can use ±`huge(lb(1))` in Fortran to spe Nonlinear constraints --------------------- -Just as for [nonlinear constraints in C](NLopt_Reference#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods: +Just as for [nonlinear constraints in C](NLopt_Reference.md#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods: ``` call nlo_add_inequality_constraint(ires, opt, fc, fc_data, tol) @@ -166,7 +166,7 @@ call nlo_remove_equality_constraints(ires, opt) ### Vector-valued constraints -Just as for [nonlinear constraints in C](NLopt_Reference#vector-valued-constraints), you can specify vector-valued nonlinear inequality and equality constraints by the subroutines +Just as for [nonlinear constraints in C](NLopt_Reference.md#vector-valued-constraints), you can specify vector-valued nonlinear inequality and equality constraints by the subroutines ``` double precision tol(m) @@ -206,7 +206,7 @@ An inequality constraint corresponds to $c_i \le 0$ for $1 \le i \le m$, and an Stopping criteria ----------------- -As explained in the [C API Reference](NLopt_Reference#stopping-criteria) and the [Introduction](NLopt_Introduction#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) +As explained in the [C API Reference](NLopt_Reference.md#stopping-criteria) and the [Introduction](NLopt_Introduction.md#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) For each stopping criteria, there are (at least) two subroutines: a `set` subroutine to specify the stopping criterion, and a `get` subroutine to retrieve the current value for that criterion. The meanings of each criterion are exactly the same as in the C API. The first argument `ires` of each `set` subroutine is an `integer` [return value](#return-values) (positive on success). @@ -286,7 +286,7 @@ Stop when the optimization time (in seconds) exceeds `maxtime` (`double` `precis ### Forced termination -In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. In this case, it is possible to tell NLopt to halt the optimization gracefully, returning the best point found so far, by calling the following subroutine from *within* your objective or constraint functions (exactly analogous to the corresponding [C routines](NLopt_Reference#forced-termination)): +In certain cases, the caller may wish to *force* the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. 
In this case, it is possible to tell NLopt to halt the optimization gracefully, returning the best point found so far, by calling the following subroutine from *within* your objective or constraint functions (exactly analogous to the corresponding [C routines](NLopt_Reference.md#forced-termination)): ``` call nlo_force_stop(ires, opt) @@ -318,7 +318,7 @@ On input, `x` is an array of length `n` (the dimension of the problem from the ` ### Return values -The possible return values are the same as the [return values in the C API](NLopt_Reference#return-values), with the corresponding integer constants defined in the `nlopt.f` include file. +The possible return values are the same as the [return values in the C API](NLopt_Reference.md#return-values), with the corresponding integer constants defined in the `nlopt.f` include file. Local/subsidiary optimization algorithm --------------------------------------- @@ -337,7 +337,7 @@ This function makes a copy of the `local_opt` object, so you can freely change o Initial step size ----------------- -Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference#initial-step-size) for derivative-free optimization algorithms. The Fortran equivalents of the C functions are the following methods: +Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference.md#initial-step-size) for derivative-free optimization algorithms. The Fortran equivalents of the C functions are the following methods: ``` double precision x(n) dx(n), dx1 @@ -352,7 +352,7 @@ Here, `dx` is an array of the (nonzero) initial steps for each dimension. For co Stochastic population --------------------- -Just as in the C API, you can [get and set the initial population](NLopt_Reference#stochastic-population) for stochastic optimization algorithms, by the methods: +Just as in the C API, you can [get and set the initial population](NLopt_Reference.md#stochastic-population) for stochastic optimization algorithms, by the methods: ``` call nlo_set_population(ires, opt, ipop) @@ -384,7 +384,7 @@ call nlosrt Vector storage for limited-memory quasi-Newton algorithms --------------------------------------------------------- -Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms: +Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms: ``` call nlo_set_vector_storage(ires, opt, M) diff --git a/doc/docs/NLopt_Guile_Reference.md b/doc/docs/NLopt_Guile_Reference.md index baf67cd7..ace2d5de 100644 --- a/doc/docs/NLopt_Guile_Reference.md +++ b/doc/docs/NLopt_Guile_Reference.md @@ -88,7 +88,7 @@ Note that `grad` must be modified `in-place` by your function `f`, by using `(ve Bound constraints ----------------- -The [bound constraints](NLopt_Reference#bound-constraints) can be specified by calling the methods: +The [bound constraints](NLopt_Reference.md#bound-constraints) can be specified by calling the methods: ``` (nlopt-opt-set-lower-bounds opt lb) @@ -113,7 +113,7 @@ To specify an unbounded dimension, you can use `(inf)` or `(-` `(inf))` in Guile Nonlinear constraints --------------------- -Just as for [nonlinear constraints in C](NLopt_Reference#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods: +Just as for [nonlinear 
constraints in C](NLopt_Reference.md#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by the methods: ``` (nlopt-opt-add-inequality-constraint opt fc tol) @@ -134,7 +134,7 @@ To remove all of the inequality and/or equality constraints from a given problem Stopping criteria ----------------- -As explained in the [C API Reference](NLopt_Reference#stopping-criteria) and the [Introduction](NLopt_Introduction#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) +As explained in the [C API Reference](NLopt_Reference.md#stopping-criteria) and the [Introduction](NLopt_Introduction.md#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) For each stopping criteria, there are (at least) two method: a `set` method to specify the stopping criterion, and a `get` method to retrieve the current value for that criterion. The meanings of each criterion are exactly the same as in the C API. @@ -234,12 +234,12 @@ The return code (see below) is positive on success, indicating the reason for te ### Return values -The possible return values are the same as the [return values in the C API](NLopt_Reference#return-values), except that the `NLOPT_` prefix is replaced with the `NLOPT-` namespace. That is, `NLOPT_SUCCESS` becomes `NLOPT-SUCCESS`, etcetera. +The possible return values are the same as the [return values in the C API](NLopt_Reference.md#return-values), except that the `NLOPT_` prefix is replaced with the `NLOPT-` namespace. That is, `NLOPT_SUCCESS` becomes `NLOPT-SUCCESS`, etcetera. Exceptions ---------- -The [Error codes (negative return values)](NLopt_Reference#error-codes-negative-return-values) in the C API are replaced in the Guile API by thrown exceptions. The exception key takes the form of a Scheme symbol. The following exception keys are thrown by the various routines: +The [Error codes (negative return values)](NLopt_Reference.md#error-codes-negative-return-values) in the C API are replaced in the Guile API by thrown exceptions. The exception key takes the form of a Scheme symbol. The following exception keys are thrown by the various routines: ``` runtime-error @@ -263,7 +263,7 @@ Ran out of memory (a memory allocation failed), equivalent to `NLOPT_OUT_OF_MEMO Halted because roundoff errors limited progress, equivalent to `NLOPT_ROUNDOFF_LIMITED`. `forced-stop` (subclass of `Exception`) -Halted because of a [forced termination](#forced-termination): the user called `opt.force_stop()` from the user’s objective function. Equivalent to `NLOPT_FORCED_STOP`. +Halted because of a forced termination: the user called `opt.force_stop()` from the user’s objective function. Equivalent to `NLOPT_FORCED_STOP`. Currently, NLopt does not catch any exceptions that you might throw from your objective or constraint functions. (In the future, we might catch these exceptions, halt the optimization gracefully, and then re-throw, as in Python or C++, but this is not yet implemented.) So, throwing an exception in your objective/constraint may result in a memory leak. 
@@ -300,7 +300,7 @@ This function makes a copy of the `local-opt` object, so you can freely change y

Initial step size
-----------------

-Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference#initial-step-size) for derivative-free optimization algorithms. The Guile equivalents of the C functions are the following methods:
+Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference.md#initial-step-size) for derivative-free optimization algorithms. The Guile equivalents of the C functions are the following methods:

```
(nlopt-opt-set-initial-step opt dx)
@@ -313,7 +313,7 @@ Here, `dx` is a vector or list of the (nonzero) initial steps for each dimension

Stochastic population
---------------------

-Just as in the C API, you can [get and set the initial population](NLopt_Reference#stochastic-population) for stochastic optimization algorithms, by the methods:
+Just as in the C API, you can [get and set the initial population](NLopt_Reference.md#stochastic-population) for stochastic optimization algorithms, by the methods:

```
(nlopt-opt-set-population opt pop)
@@ -345,7 +345,7 @@ where `seed` is an integer. o reset the seed based on the system time, you can c

Vector storage for limited-memory quasi-Newton algorithms
---------------------------------------------------------

-Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms, via the functions:
+Just as in the C API, you can get and set the [number *M* of stored vectors](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms) for limited-memory quasi-Newton algorithms, via the functions:

```
(nlopt-opt-set-vector-storage opt M)
diff --git a/doc/docs/NLopt_Introduction.md b/doc/docs/NLopt_Introduction.md
index aaf0d6ee..f1c0c71f 100644
--- a/doc/docs/NLopt_Introduction.md
+++ b/doc/docs/NLopt_Introduction.md
@@ -147,7 +147,7 @@ Because of this, the most reasonable termination criterion for global optimizati

I would advise you *not* to use function-value (ftol) or parameter tolerances (xtol) in global optimization. I made a half-hearted attempt to implement these tests in the various global-optimization algorithms, but it doesn't seem like there is any really satisfactory way to go about this, and I can't claim that my choices were especially compelling.

-For the [MLSL](NLopt_Algorithms#mlsl-multi-level-single-linkage) algorithm, you need to set the ftol and xtol [parameters of the local optimization algorithm](NLopt_Reference#localsubsidiary-optimization-algorithm) control the tolerances of the *local* searches, *not* of the global search; you should definitely set these, lest the algorithm spend an excessive amount of time trying to run local searches to machine precision.
+For the [MLSL](NLopt_Algorithms.md#mlsl-multi-level-single-linkage) algorithm, the ftol and xtol [parameters of the local optimization algorithm](NLopt_Reference.md#localsubsidiary-optimization-algorithm) control the tolerances of the *local* searches, *not* of the global search; you should definitely set these, lest the algorithm spend an excessive amount of time trying to run local searches to machine precision.
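To make this advice concrete, here is a minimal C sketch (the algorithm choice, objective, and numeric budgets are illustrative) in which the global search is stopped by evaluation and time budgets rather than by ftol/xtol:

```
#include <nlopt.h>

/* Placeholder objective; DIRECT-L is derivative-free, so grad is ignored. */
static double sphere(unsigned n, const double *x, double *grad, void *data)
{
    (void)grad; (void)data;
    double sum = 0.0;
    for (unsigned i = 0; i < n; ++i) sum += x[i] * x[i];
    return sum;
}

int main(void)
{
    double lb[2] = {-10.0, -10.0}, ub[2] = {10.0, 10.0}, x[2] = {3.0, 3.0}, minf;

    nlopt_opt opt = nlopt_create(NLOPT_GN_DIRECT_L, 2);  /* any global algorithm */
    nlopt_set_lower_bounds(opt, lb);
    nlopt_set_upper_bounds(opt, ub);
    nlopt_set_min_objective(opt, sphere, NULL);

    /* Stop the global search on a budget rather than on ftol/xtol: */
    nlopt_set_maxeval(opt, 20000);
    nlopt_set_maxtime(opt, 60.0);   /* seconds */
    /* (For MLSL, ftol/xtol would instead be set on the subsidiary local
       optimizer before calling nlopt_set_local_optimizer, as described above.) */

    nlopt_optimize(opt, x, &minf);
    nlopt_destroy(opt);
    return 0;
}
```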
Background and goals of NLopt ----------------------------- diff --git a/doc/docs/NLopt_Matlab_Reference.md b/doc/docs/NLopt_Matlab_Reference.md index 6a6b02bc..a3f6e2cd 100644 --- a/doc/docs/NLopt_Matlab_Reference.md +++ b/doc/docs/NLopt_Matlab_Reference.md @@ -50,19 +50,19 @@ The first return value should be the value of the function at the point `x`, whe In addition, if the caller requests two return values (`nargout` `>` `1`), then the second return value `gradient` should be a vector (row or column) of length `n` that is the gradient of the function with respect to the optimization parameters at `x`. That is, `grad(i)` should upon return contain the partial derivative $\partial f / \partial x_i$, for $1 < i \leq n$. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `nargout` will always be 1 and the gradient need never be computed. -If your objective function returns [NaN](https://en.wikipedia.org/wiki/NaN) (`nan` in Matlab), that will force the optimization to terminate, equivalent to calling [nlopt_force_stop](NLopt_Reference#forced-termination) in C. +If your objective function returns [NaN](https://en.wikipedia.org/wiki/NaN) (`nan` in Matlab), that will force the optimization to terminate, equivalent to calling [nlopt_force_stop](NLopt_Reference.md#forced-termination) in C. Bound constraints ----------------- -The [bound constraints](NLopt_Reference#bound-constraints) can be specified by setting `opt.lower_bounds` and/or `opt.upper_bounds` to vectors of length *n* (the same as the length of the initial guess passed to `nlopt_optimize`). +The [bound constraints](NLopt_Reference.md#bound-constraints) can be specified by setting `opt.lower_bounds` and/or `opt.upper_bounds` to vectors of length *n* (the same as the length of the initial guess passed to `nlopt_optimize`). To specify an unbounded dimension, you can use ±`inf` in Matlab to specify ±∞. Nonlinear constraints --------------------- -Just as for [nonlinear constraints in C](NLopt_Reference#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by setting `opt.fc` and `opt.h` to be [cell arrays](http://blogs.mathworks.com/loren/2006/06/21/cell-arrays-and-their-contents/) of function handles (of the same form as the objective function above) for the inequality and equality constraints, respectively. +Just as for [nonlinear constraints in C](NLopt_Reference.md#nonlinear-constraints), you can specify nonlinear inequality and equality constraints by setting `opt.fc` and `opt.h` to be [cell arrays](http://blogs.mathworks.com/loren/2006/06/21/cell-arrays-and-their-contents/) of function handles (of the same form as the objective function above) for the inequality and equality constraints, respectively. Recall that a cell array is specified via `{...}` in Matlab, e.g. `{` `@constraint1,` `@constraint2` `}`. @@ -71,7 +71,7 @@ Optionally, you can specify a tolerance in judging feasibility for the purposes Stopping criteria ----------------- -As explained in the [C API Reference](NLopt_Reference#stopping-criteria) and the [Introduction](NLopt_Introduction#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) The various stopping criteria can be specified via the following fields of your structure. 
+As explained in the [C API Reference](NLopt_Reference.md#stopping-criteria) and the [Introduction](NLopt_Introduction.md#termination-conditions)), you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.) The various stopping criteria can be specified via the following fields of your structure. ``` opt.stopval @@ -155,17 +155,17 @@ The fields of `opt.local_optimizer` are used to determine the local search algor Initial step size ----------------- -Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference#initial-step-size) for derivative-free optimization algorithms. In Matlab, you set the `opt.initial_step` field to a vector of the (nonzero) initial steps for each dimension. +Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference.md#initial-step-size) for derivative-free optimization algorithms. In Matlab, you set the `opt.initial_step` field to a vector of the (nonzero) initial steps for each dimension. Stochastic population --------------------- -Just as in the C API, you can [get and set the initial population](NLopt_Reference#stochastic-population) for stochastic optimization algorithms, by setting `opt.population` to an (integer) initial population. (An `opt.population` of zero implies that the heuristic default will be used.) +Just as in the C API, you can [get and set the initial population](NLopt_Reference.md#stochastic-population) for stochastic optimization algorithms, by setting `opt.population` to an (integer) initial population. (An `opt.population` of zero implies that the heuristic default will be used.) Vector storage for limited-memory quasi-Newton algorithms --------------------------------------------------------- -Just as in the C API, you can set the [number *M* of stored vectors for limited-memory quasi-Newton algorithms](NLopt_Reference#vector-storage-for-limited-memory-quasi-newton-algorithms), via: +Just as in the C API, you can set the [number *M* of stored vectors for limited-memory quasi-Newton algorithms](NLopt_Reference.md#vector-storage-for-limited-memory-quasi-newton-algorithms), via: ``` opt.vector_storage diff --git a/doc/docs/NLopt_Python_Reference.md b/doc/docs/NLopt_Python_Reference.md index b9b0355f..3e329d3c 100644 --- a/doc/docs/NLopt_Python_Reference.md +++ b/doc/docs/NLopt_Python_Reference.md @@ -247,7 +247,7 @@ opt.get_param("name", defaultval); opt.num_params(); opt.nth_param(n); ``` -where the string `"name"` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. These functions are equivalent to the [C API](NLopt_Reference#algorithm-specific-parameters) functions of the corresponding names. +where the string `"name"` is the name of an algorithm-specific parameter and `val` is the value you are setting the parameter to. These functions are equivalent to the [C API](NLopt_Reference.md#algorithm-specific-parameters) functions of the corresponding names. Performing the optimization @@ -321,7 +321,7 @@ This function makes a copy of the `local_opt` object, so you can freely change y Initial step size ----------------- -Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference#initial-step-size) for derivative-free optimization algorithms. 
The Python equivalents of the C functions are the following methods: +Just as in the C API, you can [get and set the initial step sizes](NLopt_Reference.md#initial-step-size) for derivative-free optimization algorithms. The Python equivalents of the C functions are the following methods: ```py opt.set_initial_step(dx) diff --git a/doc/docs/NLopt_Tutorial.md b/doc/docs/NLopt_Tutorial.md index b1fb5700..6e490ad3 100644 --- a/doc/docs/NLopt_Tutorial.md +++ b/doc/docs/NLopt_Tutorial.md @@ -51,7 +51,7 @@ double myfunc(unsigned n, const double *x, double *grad, void *my_func_ ``` -There are several things to notice here. First, since this is C, our indices are zero-based, so we have `x[0]` and `x[1]` instead of *x*1 and *x*2. The return value of our function is the objective $\sqrt{x_2}$. Also, if the parameter `grad` is not `NULL`, then we set `grad[0]` and `grad[1]` to the partial derivatives of our objective with respect to `x[0]` and `x[1]`. The gradient is only needed for [gradient-based algorithms](NLopt_Introduction#gradient-based-versus-derivative-free-algorithms); if you use a derivative-free optimization algorithm, `grad` will always be `NULL` and you need never compute any derivatives. Finally, we have an extra parameter `my_func_data` that can be used to pass additional data to `myfunc`, but no additional data is needed here so that parameter is unused. +There are several things to notice here. First, since this is C, our indices are zero-based, so we have `x[0]` and `x[1]` instead of *x*1 and *x*2. The return value of our function is the objective $\sqrt{x_2}$. Also, if the parameter `grad` is not `NULL`, then we set `grad[0]` and `grad[1]` to the partial derivatives of our objective with respect to `x[0]` and `x[1]`. The gradient is only needed for [gradient-based algorithms](NLopt_Introduction.md#gradient-based-versus-derivative-free-algorithms); if you use a derivative-free optimization algorithm, `grad` will always be `NULL` and you need never compute any derivatives. Finally, we have an extra parameter `my_func_data` that can be used to pass additional data to `myfunc`, but no additional data is needed here so that parameter is unused. For the constraints, on the other hand, we *will* have additional data. Each constraint is parameterized by two numbers *a* and *b*, so we will declare a data structure to hold this information: @@ -365,7 +365,7 @@ retcode = 4 ``` -(The [return code](NLopt_Reference#return-values) `4` corresponds to `NLOPT_XTOL_REACHED`, which means it converged to the specified *x* tolerance.) To switch to a derivative-free algorithm like COBYLA, we just change `opt.algorithm` parameter: +(The [return code](NLopt_Reference.md#return-values) `4` corresponds to `NLOPT_XTOL_REACHED`, which means it converged to the specified *x* tolerance.) To switch to a derivative-free algorithm like COBYLA, we just change `opt.algorithm` parameter: ```matlab opt.algorithm = NLOPT_LN_COBYLA @@ -482,7 +482,7 @@ result =  4 ``` -finding the same correct optimum as in the C interface (of course). (The [return code](NLopt_Reference#return-values) `4` corresponds to `nlopt.XTOL_REACHED`, which means it converged to the specified *x* tolerance.) +finding the same correct optimum as in the C interface (of course). (The [return code](NLopt_Reference.md#return-values) `4` corresponds to `nlopt.XTOL_REACHED`, which means it converged to the specified *x* tolerance.) 
### Important: Modifying `grad` in-place @@ -500,7 +500,7 @@ grad[:] = 2*x ``` -which *overwrites* the old contents of grad with `2*x`. See also the [NLopt Python Reference](NLopt_Python_Reference#assigning-results-in-place). +which *overwrites* the old contents of grad with `2*x`. See also the [NLopt Python Reference](NLopt_Python_Reference.md#assigning-results-in-place). Example in GNU Guile (Scheme) -----------------------------