
Minimum Score Not Recorded #947

Closed
pacowong opened this issue Dec 12, 2020 · 2 comments

Comments

pacowong (Contributor) commented Dec 12, 2020

Steps to reproduce

This is an issue discovered in my pull request (#902). To isolate the issue, I have created a smaller test case.

import nevergrad as ng

min_score = None  # lowest score ever passed to fake_training

def fake_training(score: int) -> float:
    # Deterministic objective: the loss is simply the score itself.
    # Track the minimum score observed across all evaluations.
    global min_score
    if min_score is None or min_score > score:
        min_score = score
    return float(score)

def test(choice_size=1000, budget=300):
    global min_score
    min_score = None
    parametrization = ng.p.Instrumentation(score=ng.p.Choice(list(range(choice_size))))
    optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=budget, num_workers=1)
    recommendation = optimizer.minimize(fake_training)
    print((min_score, recommendation.kwargs['score']))
    # The recommended score should equal the minimum loss actually observed.
    return min_score == recommendation.kwargs['score']

To run the test:

for i in range(100):
    if not test(1000, 300):
        break  # stop at the first run where the recommendation does not match the observed minimum

Observed Results

(0, 0)
(1, 1)
(0, 0)
(2, 2)
(10, 10)
(9, 21)

Expected Results

All 100 tuples should be printed, and the two elements of each tuple should be equal.

Investigation

The _update_archive_and_bests function in nevergrad/optimization/base.py appears to be responsible for the incorrect behaviour.
I have checked that the minimum loss is passed into this function, but the archiving process may have some problems:

  1. The value of mvalue can be None when reaching if mvalue is self.current_bests[name].
  2. An assertion fails if assert mvalue.parameter.loss == candidate.loss is added before if mvalue.parameter.loss > candidate.loss.
     I am not sure why this is the case, because fake_training is a deterministic function.
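
One way to probe this is to cross-check the externally tracked minimum against what the optimizer has archived, right after minimize() returns inside test(). This is only a sketch: the names optimizer.archive and mvalue.mean are assumptions about nevergrad's internal bookkeeping and may differ between versions.

# Hypothetical diagnostic, to run inside test() right after optimizer.minimize(fake_training).
# optimizer.archive and the .mean attribute of its values are assumed names for
# nevergrad's internal bookkeeping and may not match every version.
archived_min = min(mvalue.mean for mvalue in optimizer.archive.values())
# For a deterministic objective, the best archived loss should equal min_score.
print(archived_min, min_score)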
jrapin (Contributor) commented Dec 15, 2020

Hi @pacowong
Thanks for pointing this out. This was indeed expected behavior, aimed at dealing with noisy data. In this case, however, it was confusing and clearly suboptimal, so we have updated the optimizer code. It should behave much better now.
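
For context, the kind of situation that behavior was guarding against can be sketched with a noisy variant of the objective from the report; noisy_training below is purely illustrative and not part of the original test case.

import random

# Illustrative only: with a noisy objective, one unusually low loss may be luck
# rather than a genuinely good candidate, so recommending the raw archive minimum
# can be misleading; averaging repeated evaluations of a candidate is one way an
# optimizer can guard against this.
def noisy_training(score: int) -> float:
    return score + random.gauss(0.0, 5.0)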

jrapin closed this as completed Dec 15, 2020
pacowong (Contributor, Author) commented

Thanks.
