# Benchmark examples

The following figures are examples of algorithm benchmarks which can be generated very easily with the platform. In all examples, we use independent experiments for the different x-values, so that consistent rankings between methods over several x-values have a statistical meaning.

If you want to run the examples yourself, please make sure you have installed nevergrad with the benchmark flag (see here).
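With recent versions of pip, this typically means installing the `benchmark` extra; a hedged example (check the installation documentation for the exact command on your setup):

```
pip install 'nevergrad[benchmark]'
```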

## Noisy optimization

Created with command:

```
python -m nevergrad.benchmark noise --seed=12 --repetitions=10 --plot
```

Here the variance of the noise does not vanish near the optimum. TBPSA uses the noise-management principles of pcCMSA-ES, reaching fast convergence rates. We compare it here to a sample of our algorithms, but it also performed very well against many other methods.

*Figure: noise*
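Outside the benchmark CLI, the same kind of setting can be sketched directly through nevergrad's Python API. A minimal sketch, looking up the registered optimizer name `TBPSA`; the dimension, budget, and noisy sphere objective are illustrative choices, not the benchmark's exact setup:

```python
import numpy as np
import nevergrad as ng

def noisy_sphere(x: np.ndarray) -> float:
    # additive Gaussian noise whose variance does not vanish near the optimum
    return float(np.sum(x ** 2) + np.random.normal(0.0, 1.0))

# TBPSA is looked up in the optimizer registry (illustrative dimension and budget)
optimizer = ng.optimizers.registry["TBPSA"](parametrization=ng.p.Array(shape=(10,)), budget=2000)
recommendation = optimizer.minimize(noisy_sphere)
print(recommendation.value)  # should be close to the zero vector
```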

## One-shot optimization

### dim 11 & select one feature

Created with command:

```
python -m nevergrad.benchmark dim10_select_one_feature --seed=12 --repetitions=400 --plot
```

One-shot optimization is the case in which all evaluations should be performed in parallel: the optimization algorithm can only decide, once and for all, which points are going to be evaluated (a minimal sketch of this pattern is given after the observations below). We consider here:

- an optimum which is translated by a standard centered Gaussian;
- 1 useful variable and 10 useless variables (a feature-selection context, as in https://arxiv.org/abs/1706.03200);
- the sphere function (restricted to the useful variable).

We see that:

- quasirandom sampling without scrambling is suboptimal;
- Cauchy sampling helps a lot in this feature-selection context (even though the optimum is normally drawn!);
- LHS performs equivalently to low-discrepancy sampling (which can be related to the fact that only one feature matters).
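To make the parallel setting concrete, here is a minimal sketch of the one-shot ask/tell pattern in nevergrad. The optimizer name `ScrHammersleySearch`, the budget, and the untranslated optimum are illustrative assumptions:

```python
import numpy as np
import nevergrad as ng

def sphere_one_feature(x: np.ndarray) -> float:
    # only the first of the 11 variables matters (feature-selection setting)
    return float(x[0] ** 2)

budget = 100
optimizer = ng.optimizers.registry["ScrHammersleySearch"](
    parametrization=ng.p.Array(shape=(11,)), budget=budget
)
# one-shot: all candidates are chosen up front, then evaluated (possibly in parallel)
candidates = [optimizer.ask() for _ in range(budget)]
for candidate in candidates:
    optimizer.tell(candidate, sphere_one_feature(candidate.value))
print(optimizer.provide_recommendation().value)
```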

*Figure: select1*

### dim 12 & select two features

We reproduce this experiment but with 2 useful variables:

```
python -m nevergrad.benchmark dim10_select_two_features --seed=12 --repetitions=400 --plot
```

LHS still performs very well, as do the scrambled methods; Cauchy is not as useful anymore.

*Figure: select2*

### dim 10 & small budget

With all variables useful, the situation becomes different: Cauchy is harmful, and scrambling is still very necessary. Vanilla LHS, which does not couple variables, is weak. Created with command:

```
python -m nevergrad.benchmark dim10_smallbudget --seed=12 --repetitions=400 --plot
```

*Figure: smallbudget*

### In dim 4

In moderate dimension, scrambling is less necessary (consistently with theory), and LHS becomes weaker as the budget increases (consistently with the discrepancy results in https://arxiv.org/abs/1707.08481). The following plot was created with command:

```
python -m nevergrad.benchmark doe_dim4 --seed=12 --repetitions=400 --plot
```

*Figure: smallbudget*

## Comparison-based methods for ill-conditioned problems

In this setting (ill-conditioned, rotated or not), we get excellent results with comparison-based methods. Created with command:

```
python -m nevergrad.benchmark compabasedillcond --seed=12 --repetitions=400 --plot
```
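For context, here is a minimal sketch of optimizing an ill-conditioned function with a comparison-based method; the choice of CMA, the cigar-like objective, and the budget are illustrative assumptions, not the benchmark's exact setup:

```python
import numpy as np
import nevergrad as ng

def cigar(x: np.ndarray) -> float:
    # strongly ill-conditioned: all but one direction are heavily penalized
    return float(x[0] ** 2 + 1e6 * np.sum(x[1:] ** 2))

optimizer = ng.optimizers.registry["CMA"](parametrization=ng.p.Array(shape=(10,)), budget=2000)
recommendation = optimizer.minimize(cigar)
print(cigar(recommendation.value))
```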

*Figure: compabased*

## Ill-conditioned function

SQP (which won the GECCO 2015 contest) performs great in the quadratic case, consistently with theory and intuition:

```
python -m nevergrad.benchmark illcond --seed=12 --repetitions=50 --plot
```
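A minimal sketch of the quadratic case, assuming `SQP` is available under that name in the optimizer registry; the dimension, conditioning, and budget are illustrative:

```python
import numpy as np
import nevergrad as ng

def ellipsoid(x: np.ndarray) -> float:
    # ill-conditioned quadratic: coordinate weights span six orders of magnitude
    d = len(x)
    weights = 10.0 ** (6.0 * np.arange(d) / (d - 1))
    return float(np.sum(weights * x ** 2))

optimizer = ng.optimizers.registry["SQP"](parametrization=ng.p.Array(shape=(10,)), budget=300)
recommendation = optimizer.minimize(ellipsoid)
print(ellipsoid(recommendation.value))
```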

*Figure: compabased*

## Discrete

The platform can also deal with discrete objective functions! Discrete domains can be handled either through a softmax representation or through discretization of continuous variables.

```
python -m nevergrad.benchmark discrete --seed=12 --repetitions=10 --plot
```
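The two encodings can be sketched with nevergrad's parametrization module, assuming `ng.p.Choice` for the softmax representation and `ng.p.TransitionChoice` for the discretized continuous one; the onemax-style objective, optimizer name, and budget are illustrative assumptions:

```python
import nevergrad as ng

def onemax(bits) -> float:
    # onemax phrased as minimization: count the positions that are not 1
    return sum(1 for b in bits if b != 1)

# softmax handling of the discrete domain
softmax_param = ng.p.Tuple(*(ng.p.Choice([0, 1]) for _ in range(20)))
# discretization of underlying continuous variables
discretized_param = ng.p.Tuple(*(ng.p.TransitionChoice([0, 1]) for _ in range(20)))

for param in (softmax_param, discretized_param):
    optimizer = ng.optimizers.registry["DiscreteOnePlusOne"](parametrization=param, budget=500)
    recommendation = optimizer.minimize(onemax)
    print(onemax(recommendation.value))
```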

We note that FastGA performs best. DoubleFastGA corresponds to a mutation rate ranging between 1/dim and (dim-1)/dim, instead of between 1/dim and 1/2; this is because the original range corresponds to a binary domain, whereas we consider arbitrary domains. The simple uniform mixing of mutation rates (https://arxiv.org/abs/1606.05551) performs well in several cases.

*Figure: discrete_leadingones*
*Figure: discrete_hardonemax*