Add tests for the suggest method (#1316)
* testing_suggest

* fix

* fix

* fix

* fix

* fix

* Update nevergrad/optimization/test_optimizerlib.py

Co-authored-by: Jérémy Rapin <[email protected]>

* Update nevergrad/optimization/test_optimizerlib.py

Co-authored-by: Jérémy Rapin <[email protected]>

* Cleaning and rebasing "suggest" (#1380)

* Add architecture search for Gym (#1315)

* as

* black

* add_Test

* type_fix

* typing

* fix

* fix_bug

* fix_weird

* wtf

* fix

* fix

* fix

* black

* switch_to_choice

* mypy

* fix

* fix

* fix

* fix

* fix

* fix_type

* Adding optimization specifically for RL (#1303)

* Adding optimization specifically for RL

* fix

* fix

* Import benchmarks online for fixing dependencies (#1310)

* fix_dependencies

* fix_tests

* : missing

* fix_lint

* fix

* fix

* fix

* XPs with sparsity on Open AI Gym (#1319)

* XPs with sparsity on Open AI Gym

* Update gymexperiments.py

* Adding state of the art results on OpenAI Gym results (#1318)

* Update plotting.py

* fix

* ok_good

* fix

* Adding example for OpenAI Gym (#1320)

* testing more algorithms for permutations (#1321)

* testing more algorithms for permutations

* fix

* Adding yahdlbbbob and simplifying stuff (#1145)

* Update experiments.py

* black

* fix_case_1

* fix

* Update experiments.py

* Add Olympus benchmark (#1190)

* Add olympus function and benchmark

* remove surfaces with error

* Add noise and use average of noise free surfaces for evaluation_finction

* fix static tests

* rename olympus to olympussurfaces

* fix test_core olympus

* replace olympus with olymp in requirements

* fix olymp version in requirements

* black reformatting

* Add discrete and GaussianMixture surfaces

* Add discrete and GaussianMixture surfaces

* Move surfaces in a class variable

* no_need_for_42_because_noiseless

* nowindowsforcarraz

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* bettertest

* Add olympus emulators

* add BayesNeuralNet

* reformat code

* minor fix

* minor fix

* minor fix

* add missing package and correct seed problem

* remove unused packages

* fix silence_tensorflow version

* try after merge

* fix

* gamma_fix

* fix

* fix

* fix_naming

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* Update requirements/bench.txt

Co-authored-by: Jérémy Rapin <[email protected]>

* YEEEEEEEES_it_works

Co-authored-by: ncarraz <[email protected]>
Co-authored-by: Jérémy Rapin <[email protected]>

* Documentation for permutations (#1322)

* change sparsity in gym (#1328)

* change sparsity in gym

* fix

* Unstack ngopt (#1330)

* fix plot (#1332)

* fix plot

* black

* Fix gym: thread safety (#1338)

* fixgym

* fix

* fix

* fix

* Avoid bool check on params (#1349)

* Keep only final optimizer in portfolio (#1350)

* Fix nan in cma (#1347)

* fixnan

* fixnan

* fixnan

* fo

* Keep data in cache for CMA (#1351)

* Add installation instruction for MuJoCo (#1352)

* adding installation instruction for MuJoCo

* raise error

* better error handling

Co-authored-by: Mathurin Videau <[email protected]>

* enable_pickling for NGOpt (#1356)

* Bugfix in scrambled progressive optimization. (#1357)

* Add anisotropic progressive optimization (#1344)

* Adding anisotropic progressive optimization

* black

* Finish adding enable_pickling for all optimizers (Rescaled) (#1358)

* Residual controllers in RL. (#1359)

* Residual controllers in RL.

It's not exactly residual; it initializes the controller with something closer to the identity. It works pretty well.

* Update multigym.py

* fix

* fix

* fix

* black

* fix

* seed

* Update test_core.py

* fix

* fix

* fix

* Bump version to 0.4.3.post10 (#1364)

* Removing an incomplete sentence from the doc (#1367)

* Fix broken CI (#1370)

* docs: add GH button in support of Ukraine  (#1369)

* Add the FAO crop model (#1343)

* aquacrop

* fix

* fix

* fix

* Update ac.py

* black

* Update experiments.py (#1361)

* fix

* Update bench.txt

* fix

* fix

* fix

* tentative_pip3

* yet_another_tentative_fi

* yet_another_tentative_fi

* fix

* fix_suffering

* desperate_try

* desperate_try

* desperate_try

* desperate_try

* fix

* desperate_try

* desperate_try

* desperate_try

* desperate_try

* fix

* Update config.yml

* fix

* Update setup.py

* Update main.txt

* fix

* Use up-to-date headers (#1371)

* Add NLOPT as a solver (#1340)

* Update version and changelog to 0.5.0 (#1372)

* Deactivate mutation test in CI (#1374)

* Reduce noise in gym (#1333)

* Reduce noise in gym

* Update multigym.py

* Add comment

* NaN robustness in Gym (#1289)

* NaN robustness in Gym

* Update nevergrad/functions/gym/multigym.py

Co-authored-by: Jérémy Rapin <[email protected]>

* fix

* fix

Co-authored-by: Jérémy Rapin <[email protected]>

* Add more models for Gym control (#1346)

* more models for Gym control

* Update gymexperiments.py

* Update multigym.py

* fix

* Update gymexperiments.py

* update moremodels (#1378)


Co-authored-by: Jérémy Rapin <[email protected]>
Co-authored-by: Dmitry Vinnik <[email protected]>

* fix

* fix

* im_lost

* fix

* fix

* fix

Co-authored-by: Jérémy Rapin <[email protected]>
Co-authored-by: Dmitry Vinnik <[email protected]>

* Add a conformant GP experiment (#1337)

* Adding a conformant GP experiment

Because sometimes conformant planning, in spite of being super simple, performs incredibly well.

* fix

* High-speed differential evolution (#1366)

* High-speed differential evolution

* Update base.py

* Update optimizerlib.py

* fix

* fix

* clean

* clean

* Update differentialevolution.py

* Update nevergrad/optimization/differentialevolution.py

Co-authored-by: Jérémy Rapin <[email protected]>

* Update nevergrad/optimization/differentialevolution.py

Co-authored-by: Jérémy Rapin <[email protected]>

* fi

* fix

* fix

* fix

Co-authored-by: Jérémy Rapin <[email protected]>

* cleaning

* cleaning

* cleaning

Co-authored-by: ncarraz <[email protected]>
Co-authored-by: Jérémy Rapin <[email protected]>
Co-authored-by: mathuvu <[email protected]>
Co-authored-by: Mathurin Videau <[email protected]>
Co-authored-by: Jeremy Reizenstein <[email protected]>
Co-authored-by: Dmitry Vinnik <[email protected]>

* fix

* Update nevergrad/optimization/test_optimizerlib.py

Co-authored-by: Jérémy Rapin <[email protected]>

* Update test_optimizerlib.py

* fix

* Update test_optimizerlib.py

* Update test_optimizerlib.py

* fix

* Update test_suggest.py

* fix

Co-authored-by: Jérémy Rapin <[email protected]>
Co-authored-by: ncarraz <[email protected]>
Co-authored-by: mathuvu <[email protected]>
Co-authored-by: Mathurin Videau <[email protected]>
Co-authored-by: Jeremy Reizenstein <[email protected]>
Co-authored-by: Dmitry Vinnik <[email protected]>
7 people authored Mar 22, 2022
1 parent 07f6ec0 commit 213bbc8
Showing 2 changed files with 114 additions and 15 deletions.
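The tests below exercise nevergrad's ask/tell/suggest contract: a suggested point is tried by a subsequent `ask()`, and if it scores best it should become the recommendation. As a minimal sketch of that contract — a toy `MiniOptimizer` written for illustration only, not nevergrad's actual implementation — consider:

```python
import numpy as np
from typing import Callable, List, Optional, Tuple


class MiniOptimizer:
    """Toy random search honoring suggest(); illustrative only, not nevergrad's code."""

    def __init__(self, dimension: int, budget: int, seed: int = 0) -> None:
        self.rng = np.random.default_rng(seed)
        self.dimension = dimension
        self.budget = budget
        self._suggestions: List[np.ndarray] = []
        self._best: Optional[Tuple[float, np.ndarray]] = None

    def suggest(self, point: np.ndarray) -> None:
        # Queued suggestions are tried before any random sampling.
        self._suggestions.append(np.asarray(point, dtype=float))

    def ask(self) -> np.ndarray:
        if self._suggestions:
            return self._suggestions.pop(0)
        return self.rng.normal(size=self.dimension)

    def tell(self, point: np.ndarray, loss: float) -> None:
        # Keep the best (point, loss) pair seen so far.
        if self._best is None or loss < self._best[0]:
            self._best = (loss, point)

    def recommend(self) -> np.ndarray:
        assert self._best is not None, "tell() was never called"
        return self._best[1]


def minimize(opt: MiniOptimizer, func: Callable[[np.ndarray], float]) -> np.ndarray:
    for _ in range(opt.budget):
        x = opt.ask()
        opt.tell(x, func(x))
    return opt.recommend()
```

Usage mirrors the deleted `test_optimizers_suggest` below: suggest the optimum `[12.0] * 4`, minimize a quadratic centered on it, and the suggestion should be recommended since no random point can beat a zero loss.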
15 changes: 0 additions & 15 deletions nevergrad/optimization/test_optimizerlib.py
@@ -254,21 +254,6 @@ def recomkeeper() -> tp.Generator[RecommendationKeeper, None, None]:
keeper.save()


@testing.suppress_nevergrad_warnings()
@pytest.mark.parametrize("name", registry)  # type: ignore
def test_optimizers_suggest(name: str) -> None:  # pylint: disable=redefined-outer-name
    optimizer = registry[name](parametrization=4, budget=2)
    optimizer.suggest(np.array([12.0] * 4))
    candidate = optimizer.ask()
    try:
        optimizer.tell(candidate, 12)
        # The optimizer should recommend its suggestion, except for a few optimization methods:
        if name not in ["SPSA", "TBPSA", "StupidRandom"]:
            np.testing.assert_array_almost_equal(optimizer.provide_recommendation().value, [12.0] * 4)
    except base.errors.TellNotAskedNotSupportedError:
        pass


# pylint: disable=redefined-outer-name
@pytest.mark.parametrize("name", registry) # type: ignore
def test_optimizers_recommendation(name: str, recomkeeper: RecommendationKeeper) -> None:
114 changes: 114 additions & 0 deletions nevergrad/optimization/test_suggest.py
@@ -0,0 +1,114 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import pytest
import numpy as np
import sys
import nevergrad as ng
import nevergrad.common.typing as tp
from nevergrad.common import testing
from . import base
from .optimizerlib import registry


# decorator to be used when testing on Windows is unnecessary or cumbersome
skip_win_perf = pytest.mark.skipif(
    sys.platform == "win32", reason="Slow, and no need to test performance on all platforms"
)


def suggestable(name: str) -> bool:
    # Some methods are not good with suggestions.
    keywords = ["TBPSA", "BO", "EMNA", "EDA", "Stupid", "Pymoo"]
    return not any(x in name for x in keywords)


def suggestion_testing(
    name: str,
    instrumentation: ng.p.Array,
    suggestion: np.ndarray,
    budget: int,
    objective_function: tp.Callable[..., tp.Any],
    optimum: tp.Optional[np.ndarray] = None,
    threshold: tp.Optional[float] = None,
) -> None:
    optimizer_cls = registry[name]
    optim = optimizer_cls(instrumentation, budget)
    if optimum is None:
        optimum = suggestion
    optim.suggest(suggestion)
    optim.minimize(objective_function)
    if threshold is not None:
        assert (
            objective_function(optim.recommend().value) < threshold
        ), f"{name} proposes {optim.recommend().value} instead of {optimum} (threshold={threshold})"
        return
    assert np.all(
        optim.recommend().value == optimum
    ), f"{name} proposes {optim.recommend().value} instead of {optimum}"


@skip_win_perf  # type: ignore
@pytest.mark.parametrize("name", [r for r in registry if suggestable(r)])  # type: ignore
def test_suggest_optimizers(name: str) -> None:
    """Checks that each optimizer is able to converge when the optimum is given as a suggestion."""
    instrum = ng.p.Array(shape=(100,)).set_bounds(0.0, 1.0)
    instrum.set_integer_casting()
    suggestion = np.asarray([0] * 17 + [1] * 17 + [0] * 66)  # The optimum is the suggestion.
    target = lambda x: 0 if np.all(np.asarray(x, dtype=int) == suggestion) else 1
    suggestion_testing(name, instrum, suggestion, 7, target)


def good_at_suggest(name: str) -> bool:
    # Filter out optimizer variants that do not exploit suggestions well in the discrete setting.
    keywords = [
        "Noisy",
        "Optimistic",
        "Multi",
        "Anisotropic",
        "BSO",
        "Sparse",
        "Recombining",
        "PortfolioDiscreteOne",
    ]
    return not any(k in name for k in keywords)


@skip_win_perf  # type: ignore
@pytest.mark.parametrize("name", [r for r in registry if "iscre" in r and good_at_suggest(r)])  # type: ignore
def test_harder_suggest_optimizers(name: str) -> None:
    """Checks that discrete optimizers are good when a suggestion is nearby."""
    instrum = ng.p.Array(shape=(100,)).set_bounds(0.0, 1.0)
    instrum.set_integer_casting()
    optimum = np.asarray([0] * 17 + [1] * 17 + [0] * 66)
    target = lambda x: min(3, np.sum((np.asarray(x, dtype=int) - optimum) ** 2))
    suggestion = np.asarray([0] * 17 + [1] * 16 + [0] * 67)
    suggestion_testing(name, instrum, suggestion, 1500, target, optimum)
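The "harder" test above relies on a clipped squared-distance loss: far-away points are all capped at the same value, so the optimizer gets no gradient signal at a distance and must exploit the nearby suggestion. A standalone numpy check of that loss shape (values copied from the test above):

```python
import numpy as np

# Loss used in test_harder_suggest_optimizers: squared distance to the
# optimum, clipped at 3 so that distant points are indistinguishable.
optimum = np.asarray([0] * 17 + [1] * 17 + [0] * 66)
suggestion = np.asarray([0] * 17 + [1] * 16 + [0] * 67)


def target(x: np.ndarray) -> int:
    return min(3, int(np.sum((np.asarray(x, dtype=int) - optimum) ** 2)))


print(target(optimum))        # 0: the optimum has zero loss
print(target(suggestion))     # 1: the suggestion differs in exactly one coordinate
print(target(np.ones(100)))   # 3: distant points are all clipped to the same value
```

The suggestion flips a single bit (index 33), so it sits at loss 1, one step from the optimum; everything far away is flattened to 3, which is what makes the suggestion essential.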


@skip_win_perf  # type: ignore
def test_harder_continuous_suggest_optimizers() -> None:
    """Checks that some optimizers can converge when provided with a good suggestion."""
    instrum = ng.p.Array(shape=(100,)).set_bounds(0.0, 1.0)
    optimum = np.asarray([0] * 17 + [1] * 17 + [0] * 66)
    target = lambda x: min(2.0, np.sum((x - optimum) ** 2))
    suggestion = np.asarray([0] * 17 + [1] * 16 + [0] * 67)
    suggestion_testing("NGOpt", instrum, suggestion, 1500, target, optimum, threshold=0.9)


@testing.suppress_nevergrad_warnings()
@pytest.mark.parametrize("name", registry)  # type: ignore
def test_optimizers_suggest(name: str) -> None:  # pylint: disable=redefined-outer-name
    optimizer = registry[name](parametrization=4, budget=2)
    optimizer.suggest(np.array([12.0] * 4))
    candidate = optimizer.ask()
    try:
        optimizer.tell(candidate, 12)
        # The optimizer should recommend its suggestion, except for a few optimization methods:
        if name not in ["SPSA", "TBPSA", "StupidRandom"]:
            np.testing.assert_array_almost_equal(optimizer.provide_recommendation().value, [12.0] * 4)
    except base.errors.TellNotAskedNotSupportedError:
        pass
