- additional benchmarks
- adding tests in the CI
- helpers for choosing algorithms for different problems
- fix the issue with the Boston dataset
- Add many quasi-opposite variants
- Improve the Dagstuhloid benchmark
- Get rid of legacy PyPI solver in CircleCI
- Update PyPI in CircleCI
- Switch to Python 3.8
- Add the Dagstuhloid benchmark
- Add yet another group of metamodels
- Fix links
- Add metamodels
- Update for weighted multiobjective optimization with differential evolution
- Removed the `descriptor` field of parameters, which had been deprecated in previous versions. Use the `function` field instead to specify whether the function is deterministic or not #X.
- `TransitionChoice` behavior has been changed to use bins instead of a full float representation. This may lead to slight changes during optimizations. It can also be set as unordered for use with discrete (1+1) optimizers (experimental).
- Adding NGOptRW, presumably better than NGOpt for real-world problems.
- Making some dependencies optional because running was becoming too complicated.
- Adding the NLOPT library.
- Adding smoothness operators for discrete optimization.
- Adding YAPBBOB, with a parameter regulating YABBOB-like problems so that the distribution of the optimum is less rotationally invariant.
- Adding constrained counterparts of YABBOB: yapenbbob (a few constraints), yaonepenbbob (single constraint), yamegapenbbob (many constraints).
- Improvements in the photonics benchmarks.
- Externalizing CompilerGym.
- Making some tests less flaky.
- Adding Simulated annealing and Tabu search.
- Making the code more robust to Gym environments.
- `copy()` method of a `Parameter` does not change the parameter's random state anymore (it used to reset it to `None`) #1048.
- `MultiobjectiveFunction` does not exist anymore #1034.
- `Choice` and `TransitionChoice` have some of their API changed for uniformization. In particular, `indices` is now an `ng.p.Array` (and not an `np.ndarray`) which contains the selected indices (or index) of the `Choice`. The sampling is performed by specific "layers" that are applied to `Data` parameters #1065.
- `Parameter.set_standardized_data` does not take a `deterministic` parameter anymore #1068. This is replaced by the more general `with ng.p.helpers.deterministic_sampling(parameter)` context. One-shot algorithms are also updated to choose options of `Choice` parameters deterministically, since this is a simpler behavior to expect than sampling the standardized space deterministically and then sampling the option stochastically from there.
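  A minimal sketch of the new context manager (behavior details may vary slightly across versions):

  ```python
  import nevergrad as ng

  param = ng.p.Choice(["a", "b", "c"])
  # inside this context, sampling the parameter's value is deterministic
  with ng.p.helpers.deterministic_sampling(param):
      value = param.value
  ```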
- `RandomSearch` now defaults to sample values using `parameter.sample()` instead of a Gaussian #1068. The only difference comes with bounded variables, since in this case `parameter.sample()` samples uniformly (unless otherwise specified). The previous behavior can be obtained with `RandomSearchMaker(sampler="gaussian")`.
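  For instance, to recover the previous Gaussian sampling (a sketch; `RandomSearchMaker` is assumed here to be exposed in `ng.families`):

  ```python
  import nevergrad as ng

  GaussianRandomSearch = ng.families.RandomSearchMaker(sampler="gaussian")
  optimizer = GaussianRandomSearch(parametrization=2, budget=100)
  recommendation = optimizer.minimize(lambda x: float(sum(x**2)))
  ```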
- `PSO` API has been slightly changed #1073.
- `Parameter` instances' `descriptor` attribute is deprecated, in favor of a combination of an analysis function (`ng.p.helpers.analyze`) returning information about the parameter (e.g. whether it is continuous, deterministic, etc.) and a new `function` attribute which can be used to provide information about the function (e.g. whether it is deterministic) #1076.
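  A sketch of the new helpers (the exact content of the analysis result is an assumption and may differ across versions):

  ```python
  import nevergrad as ng

  param = ng.p.Choice(["a", "b", "c"])
  info = ng.p.helpers.analyze(param)   # characteristics of the parameter
  print(info)                          # e.g. continuity/determinism flags
  param.function.deterministic = True  # declare that the function itself is deterministic
  ```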
- Half the budget allotted to solve cheap constraints is now used by a sub-optimizer #1047. More changes to constraint management will land in the near future.
- Experimental methods `Array.set_recombination` and `Array.set_mutation(custom=.)` are removed in favor of layers changing `Array` behaviors #1086. Caution: this is still very experimental (and undocumented).
- Important bug correction on the shape of bounds if specified as tuple or list instead of `np.ndarray` #1221.
- The `master` branch has been renamed to `main`. See #1230 for more context.
- `Parameter` classes are undergoing heavy changes, please open an issue if you encounter any problem. The mid-term aim is to allow for simpler constraint management.
- `Parameter` classes have undergone heavy changes to ease the handling of their tree structure (#1029 #1036 #1038 #1043 #1044).
- `Parameter` classes now have a layer structure #1045 which simplifies changing their behavior. In future PRs this system will take charge of bounds, other constraints, sampling, etc.
- The layer structure allows disentangling bounds and log-distribution. This goal has been reached with #1053 but may create some instabilities. In particular, the representation (`__repr__`) of `Array` has changed, and their `bounds` attribute is no longer reliable for now. This change will eventually lead to a new syntax for setting bounds and distributions, but it's not ready yet.
- `DE` initial sampling has been updated to take bounds into account #1058.
- `Array` can now take `lower` and `upper` bounds as initialization arguments. The array is initialized at the average of the bounds if no `init` is provided and both bounds are given. In this case, sampling will be uniform between these bounds.
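  For instance:

  ```python
  import nevergrad as ng

  arr = ng.p.Array(shape=(2,), lower=-1.0, upper=1.0)
  print(arr.value)      # initialized at the average of the bounds: [0. 0.]
  child = arr.sample()  # sampling is uniform between the bounds
  ```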
- Bayesian optimizers are now properly using the bounds for bounded problems, which may improve performance #1222.
- The new `nevergrad.errors` module gathers errors and warnings used throughout the package (WIP) #1031.
- `EvolutionStrategy` now defaults to NSGA2 selection in the multiobjective case.
- A new experimental callback adds an early stopping mechanism #1054.
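  A minimal sketch of the callback (the stopping criterion here is an arbitrary toy example):

  ```python
  import nevergrad as ng

  optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=1000)
  # stop early once 100 points have been asked
  early_stopping = ng.callbacks.EarlyStopping(lambda opt: opt.num_ask > 100)
  optimizer.register_callback("ask", early_stopping)
  optimizer.minimize(lambda x: float(sum(x**2)))
  ```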
- `Choice`-like parameters now accept an integer as input instead of a list, as a shortcut for `range(num)` #1106.
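  For example:

  ```python
  import nevergrad as ng

  param = ng.p.Choice(5)  # shortcut for ng.p.Choice(range(5))
  ```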
- An interface with Pymoo optimizers has been added #1197.
- An interface with BayesOptim optimizers has been added #1179.
- Fix for abnormally slow iterations for large budgets using CMA in a portfolio #1350.
- A new `enable_pickling` option was added to optimizers. This is only necessary for some of them (among which `scipy`-based optimizers), and comes at the cost of additional memory usage #1356 #1358.
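  A sketch (assuming `enable_pickling` should be called right after creation):

  ```python
  import pickle
  import nevergrad as ng

  optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
  optimizer.enable_pickling()      # opt in before running the optimization
  state = pickle.dumps(optimizer)  # the optimizer state can now be pickled
  optimizer = pickle.loads(state)
  ```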
- `tell` method can now receive a list/array of losses for multi-objective optimization #775. For now it is neither robust, nor scalable, nor stable, nor optimal, so be careful when using it. More information in the documentation.
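  A minimal sketch of the new API (the optimizer and losses here are arbitrary):

  ```python
  import nevergrad as ng

  optimizer = ng.optimizers.CMA(parametrization=2, budget=100)
  for _ in range(optimizer.budget):
      candidate = optimizer.ask()
      x = candidate.value
      optimizer.tell(candidate, [abs(x[0] - 1.0), abs(x[1] + 1.0)])  # a list of two losses
  recommendation = optimizer.recommend()
  ```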
- The old way to perform multiobjective optimization, through the use of `MultiobjectiveFunction`, is now deprecated and will be removed after version 0.4.3 #1017.
- By default, the optimizer now returns the best set of parameters as recommendation #951, considering that the function is deterministic. The previous behavior would use an estimation of noise to provide the pessimistic best point, leading to unexpected behaviors #947. You can get back to this behavior by specifying `parametrization.descriptors.deterministic_function = False`.
- `DE` and its variants have been updated to make full use of the multi-objective losses #789. Other optimizers convert multiobjective problems to a volume minimization, which is not always as efficient.
- As an experimental feature, we have added some preliminary support for constraint management through penalties. From now on, the preferred option for penalties is to register a function returning a positive float when the constraint is satisfied. While we will wait for more testing before documenting it, this may already cause instabilities and errors when adding cheap constraints. Please open an issue if you encounter a problem.
- `tell` argument `value` is renamed to `loss` for clarification #774. This can be breaking when using named arguments!
- `ExperimentFunction` now automatically records arguments used for its instantiation so that they can both be used to create a new copy, and as descriptors if they are of type int/bool/float/str #914.
- From now on, code formatting needs to be `black` compliant. This is simply performed by running `black nevergrad`. A continuous integration check verifies that PRs are compliant, and the pre-commit hooks have been adapted. For PRs branching from an old master, you can run `black --line-length=110 nevergrad/<path_to_modified_file>` to make your code easier to merge.
- Pruning has been patched to make sure it is not activated too often upon convergence #1014. The bug used to lead to important slowdowns when reaching near convergence.
- `recommend` now provides an evaluated candidate when possible. For non-deterministic parametrizations like `Choice`, this means we won't resample, and we will actually recommend the best past evaluated candidate #668. Still, some optimizers (like `TBPSA`) may recommend a non-evaluated point.
- `Choice` and `TransitionChoice` can now take a `repetitions` parameter for sampling several times; it is equivalent to `Tuple(*[Choice(options) for _ in range(repetitions)])` but can be up to 30x faster for large numbers of repetitions #670 #696.
but can be be up to 30x faster for large numbers of repetitions #670 #696.- Defaults for bounds in
Array
is nowbouncing
, which is a variant ofclipping
avoiding over-sompling on the bounds #684 and #691.
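  A sketch (assuming the bound method is chosen through the `method` argument of `set_bounds`):

  ```python
  import nevergrad as ng

  bouncing = ng.p.Array(shape=(2,)).set_bounds(-1.0, 1.0)                     # new default
  clipping = ng.p.Array(shape=(2,)).set_bounds(-1.0, 1.0, method="clipping")  # explicit variant
  ```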
This version should be robust. Following versions may become more unstable as we will add more native multiobjective optimization as an experimental feature. We are also in the process of simplifying the naming pattern for the "NGO/Shiwa" type optimizers, which may cause some changes in the future.
- `Archive` now stores the best corresponding candidate. This requires twice the memory compared to before the change #594.
- `Parameter` now holds a `loss: Optional[float]` attribute which is set and used by optimizers after the `tell` method.
- Quasi-random samplers (`LHSSearch`, `HammersleySearch`, `HaltonSearch` etc.) now sample in the full range of bounded variables when `full_range_sampling` is `True` #598. This required some ugly hacks, help is most welcome to find nicer solutions.
- `full_range_sampling` is activated by default if both bounds are provided in `Array.set_bounds`.
- Propagate parametrization system features (generation tracking, ...) to `OnePlusOne`-based algorithms #599.
- Moved the `Selector` dataframe overlay so that basic requirements do not include `pandas` (only necessary for benchmarks) #609.
- Changed the version name pattern (removed the `v`) to unify with PyPI versions. Expect more frequent intermediary versions to be pushed (deployment has now been made pseudo-automatic).
- Started implementing more ML-oriented testbeds #642.
- Removed all deprecated code #499. That includes:
  - `instrumentation` as init parameter of an `Optimizer` (replaced by `parametrization`)
  - `instrumentation` as attribute of an `Optimizer` (replaced by `parametrization`)
  - `candidate_maker` (not needed anymore)
  - `optimize` method of `Optimizer` (renamed to `minimize`)
  - all the `instrumentation` subpackage (replaced by `parametrization`) and its legacy methods (`set_cheap_constraint_checker` etc.)
- Removed `ParametrizedOptimizer` and `OptimizerFamily` in favor of `ConfiguredOptimizer` with simpler usage #518 #521.
- Some variants of algorithms have been removed from the `ng.optimizers` namespace to simplify it. All such variants can be easily created using the corresponding `ConfiguredOptimizer`. Also, adding `import nevergrad.optimization.experimentalvariants` will populate `ng.optimizers.registry` with all variants, and they are all available for benchmarks #528.
- Renamed `a_min` and `a_max` to `lower` and `upper` in `Array`, `Scalar` and `Log` parameters for clarity. Using old names will raise a deprecation warning for the time being.
- `archive` is pruned much more often (e.g. for `num_workers=1`, usually pruned to 100 elements when reaching 1000), so you should not rely on it for storing all results; use a callback instead #571. If this is a problem for you, let us know why and we'll find a solution!
- Propagate parametrization system features (generation tracking, ...) to `TBPSA`, `PSO` and `EDA`-based algorithms.
- Rewrote multiobjective core system #484.
- Activated Windows CI (still a bit flaky, with a few deactivated tests).
- Better callbacks in `ng.callbacks`, including exporting to `hiplot`.
- Activated documentation on GitHub Pages.
- `Scalar` now takes optional `lower` and `upper` bounds at initialization, and `sigma` (and optionally `init`) is automatically set to a sensible default #536.
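  For instance:

  ```python
  import nevergrad as ng

  s = ng.p.Scalar(lower=-12.0, upper=12.0)  # sigma and init derived from the bounds
  print(s.value)                            # starts at the middle of the range: 0.0
  ```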
- First argument of optimizers is renamed to `parametrization` instead of `instrumentation` for consistency #497. There is currently a deprecation warning, but this will be breaking in v0.4.0.
- Old `instrumentation` classes now raise deprecation warnings, and will disappear in versions >0.3.2. Hence, prefer using parameters from `ng.p` rather than `ng.var`, and avoid using `ng.Instrumentation` altogether if you don't need it anymore (or import it through `ng.p.Instrumentation`).
- `CandidateMaker` (`optimizer.create_candidate`) raises `DeprecationWarning`s since new candidates/parameters can be straightforwardly created (`parameter.spawn_child(new_value=new_value)`).
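  For example:

  ```python
  import nevergrad as ng

  param = ng.p.Scalar()
  child = param.spawn_child(new_value=0.5)  # replaces optimizer.create_candidate
  ```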
- The `Candidate` class is completely removed, and is replaced by `Parameter` #459. This should not break existing code since `Parameter` can be straightforwardly used as a `Candidate`.
- New parametrization is now as efficient as in v0.3.0 (see the CHANGELOG for v0.3.1 for context)
- Optimizers can now hold any parametrization, not just `Instrumentation`. This for instance means that when you do `OptimizerClass(instrumentation=12, budget=100)`, the instrumentation (and therefore the candidates) will be of class `ng.p.Array` (and not `ng.p.Instrumentation`), and their attribute `value` will be the corresponding `np.ndarray` value. You can still use `args` and `kwargs` if you want, but it's no longer needed!
- Added experimental evolution-strategy-like algorithms using the new parametrization #471 (the behavior and API of these optimizers will probably evolve in the near future).
- `DE` algorithms comply with the new parametrization system and can be set to use parameters' recombination.
- Fixed array as bounds in `Array` parameters.
Note: this is the first step to propagate the instrumentation/parametrization framework. Learn more in the Facebook user group. If you are looking for stability, wait for version 0.4.0, but the intermediary releases will help by providing deprecation warnings.
- `FolderFunction` must now be accessed through `nevergrad.parametrization.FolderFunction`
- Instrumentation names are changed (possibly breaking for benchmark records)
- Old instrumentation classes now all inherit from the new parametrization classes #391. Both systems coexist, but optimizers use the old API at this point (they will use the new one in version 0.3.2).
- Temporary performance loss is expected in order to keep compatibility between `Variable` and `Parameter` frameworks.
- `PSO` now uses initialization by sampling the parametrization, instead of sampling all the real space. A new `WidePSO` optimizer was created, using the previous initial sampling method #467.
Note: this version is stable, but the following versions will include breaking changes which may cause instability. The aim of these changes is to update the instrumentation system for more flexibility. See PR #323 and the Fb user group for more information.
- `Instrumentation` is now a `Variable` for simplicity and flexibility. The `Variable` API has therefore heavily changed, and bigger changes are coming (`instrumentation` will become `parametrization` with a different API). This should only impact custom-made variables.
- `InstrumentedFunction` has been aggressively deprecated to solve bugs and simplify code, in favor of using the `Instrumentation` directly at the optimizer initialization, and of using `ExperimentFunction` to define functions to be used in benchmarks. Main differences are:
  - the `instrumentation` attribute is renamed to `parametrization` for forward compatibility,
  - `__init__` takes exactly two arguments (main function and parametrization/instrumentation), and
  - calls to `__call__` are directly forwarded to the main function (instead of converting from data space).
- `Candidates` now have a `uid` instead of a `uuid` for compatibility reasons.
- Updated archive `keys/items_as_array` methods to `keys/items_as_arrays` for consistency.
- Benchmark plots now show confidence area (using partially transparent lines).
- `Chaining` optimizer family enables chaining of algorithms.
- Cleaner installation.
- New simplified `Log` variable for log-distributed scalars.
- Cheap constraints can now be provided through the `Instrumentation`.
- Added preliminary multiobjective function support (may be buggy for the time being, and the API will change).
- New callback for dumping parameters and loss, and loading them back easily for display (display yet to come).
- Added a new parametrization module which is expected to soon replace the instrumentation module.
- Added new test cases: games, power system, etc (experimental)
- Added new algorithms: quasi-opposite one shot optimizers
- instrumentations now hold a `random_state` attribute which can be seeded (`optimizer.instrumentation.random_state.seed(12)`). Seeding `numpy`'s global random state before using the instrumentation still works (but if not, this change can break reproducibility). The random state is used by the optimizers through the `optimizer._rng` property.
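  A sketch using the API names of that era (`instrumentation` has since been renamed to `parametrization`):

  ```python
  import nevergrad as ng

  instrum = ng.Instrumentation(ng.var.Array(2))
  optimizer = ng.optimizers.OnePlusOne(instrumentation=instrum, budget=100)
  optimizer.instrumentation.random_state.seed(12)  # seed before the first ask() for reproducibility
  ```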
- added a `Scalar` variable as a shortcut to `Array(1).asscalar(dtype)` to simplify specifying instrumentation.
- added a `suggest` method to optimizers in order to manually provide the next `Candidate` from the `ask` method (experimental feature, name and behavior may change).
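  A sketch of the intended workflow (the exact signature of `suggest` may have evolved since):

  ```python
  import numpy as np
  import nevergrad as ng

  optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
  optimizer.suggest(np.zeros(2))  # the next ask() should provide this point
  candidate = optimizer.ask()
  ```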
- populated `nevergrad`'s namespace so that `import nevergrad as ng` gives access to `ng.Instrumentation`, `ng.var` and `ng.optimizers`. The `optimizers` namespace is quite messy; some non-optimizer objects will eventually be removed from there.
- renamed `optimize` to `minimize` to be more explicit. Using `optimize` will raise a `DeprecationWarning` for the time being.
- added first game-oriented testbed function in the `functions.rl` module. This is still experimental and will require refactoring before the API becomes stable.
- changed `tanh` to `arctan` as default for bounded variables (much wider range).
- changed cumulative Gaussian density to `arctan` for rescaling in `BO` (much wider range).
- renamed the `Array.asfloat` method to `Array.asscalar` and allowed casting to `int` as well through an argument.
- fixed `tell_not_asked` for the `DE` family of optimizers.
- added `dump` and `load` methods to `Optimizer`.
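  A sketch (assuming `load` is a class method, as in current versions):

  ```python
  import nevergrad as ng

  optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
  optimizer.dump("optimizer.pkl")  # pickle the full optimizer state
  restored = ng.optimizers.OnePlusOne.load("optimizer.pkl")
  ```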
- Added warnings against inefficient settings: `BO` algorithms with discontinuous or noisy instrumentations without appropriate parametrization, `PSO` and `DE` for low budget.
- improved benchmark plots legend.
- first parameter of optimizers is now `instrumentation` instead of `dimension`. This allows the optimizer to have information on the underlying structure. `int`s are still allowed as before and will set the instrumentation to `Instrumentation(var.Array(n))` (which is basically the identity).
- removed `BaseFunction` in favor of `InstrumentedFunction` and use instrumentation instead of defining specific transforms (breaking change for benchmark function implementation).
- `ask()` and `provide_recommendation()` now return a `Candidate` with attributes `args`, `kwargs` (depending on the instrumentation) and `data` (the array which was formerly returned). `tell` must now receive this candidate as well instead of the array.
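  A minimal loop with the API names of that era:

  ```python
  import nevergrad as ng

  optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
  candidate = optimizer.ask()
  loss = float(sum(x**2 for x in candidate.data))  # data holds the array formerly returned
  optimizer.tell(candidate, loss)
  ```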
- removed `tell_not_asked` in favor of `tell`. A new `num_tell_not_asked` attribute is added to check the number of `tell` calls with non-asked points.
- updated `bayesian-optimization` version to 1.0.1.
- from now on, optimizers should preferably implement `_internal_ask_candidate` and `_internal_tell_candidate` instead of `_internal_ask` and `_internal_tell`. This should take at most one more line: `x = candidate.data`.
- added an `_asked` private attribute to register the uuids of candidates that were asked for.
- solved `ArtificialFunction` delay bug.
- corrected a bug introduced by v0.1.5 for `PSO`.
- activated `tell_not_asked` for `PSO`, `TBPSA` and differential evolution algorithms.
- added a pruning mechanism for the optimizers' archive in order to avoid using a huge amount of memory.
- corrected typing after activating `numpy-stubs`.
- provided different install procedures for optimization, benchmark and dev (requirements differ).
- added an experimental `tell_not_asked` method to optimizers.
- switched to `pytest` for testing, and removed dependencies on `nosetests` and `genty`.
- made the archive more memory efficient by using bytes as keys instead of tuples of floats.
- started rewriting some optimizers as instances of a family of optimizers (experimental).
- added pseudotime in benchmarks for both steady mode and batch mode.
- made the whole chain from `Optimizer` to `BenchmarkChunk` stateful and able to restart from where it was stopped.
- started introducing the `tell_not_asked` method (experimental).
- fixed `PSO` in the asynchronous case
- started refactoring `instrumentation` in depth, and more specifically instantiation of external code (breaking change)
- Added Photonics and ARCoating test functions
- Added variants of algorithms
- multiple bug fixes
- multiple typo corrections (including modules changing names)
- added MLDA functions
- allowed steady state in experiments
- allowed custom file types for external code instantiation
- added dissymmetric noise case to `ArtificialFunction`
- prepared an `Instrumentation` class to simplify instrumentation (breaking changes will come)
- added new algorithms and benchmarks
- improved plotting
- added a transform method to `BaseFunction` (more breaking changes will come)
Work on `instrumentation` will continue and breaking changes will be pushed in the following versions.
Initial version