Steps to reproduce
This issue was discovered in my pull request (#902). To isolate it, I have created a smaller test case.
To run the test:
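The original test script is not included here. Below is a minimal sketch of the kind of reproduction described, assuming a deterministic `fake_training` objective modeled on nevergrad's documentation example; the exact parametrization, optimizer choice, and printed tuple are assumptions, not the author's original script.

```python
import nevergrad as ng

def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # Deterministic toy objective: optimum at learning_rate=0.2, batch_size=4, architecture="conv".
    return (learning_rate - 0.2) ** 2 + (batch_size - 4) ** 2 + (0.0 if architecture == "conv" else 10.0)

parametrization = ng.p.Instrumentation(
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    architecture=ng.p.Choice(["conv", "fc"]),
)

optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    optimizer.tell(candidate, fake_training(*candidate.args, **candidate.kwargs))
    recommendation = optimizer.provide_recommendation()
    # For a deterministic objective, the archived loss and the recomputed loss
    # should match, so both elements of each printed tuple should be equal.
    print((recommendation.loss, fake_training(*recommendation.args, **recommendation.kwargs)))
```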
Observed Results

Some of the printed tuples contain two different loss values, even though the objective is deterministic.
Expected Results
100 tuples are printed, and the two elements in each tuple should be equal.
Investigation
The `_update_archive_and_bests` function in `nevergrad/optimization/base.py` appears to be responsible for the incorrect behaviour. I have checked that the minimum loss is passed into this function, but the archiving process may have some problems:

- the value of `mvalue` can be `None` in `if mvalue is self.current_bests[name]`
- the assertion fails if `assert mvalue.parameter.loss == candidate.loss` is added before `if mvalue.parameter.loss > candidate.loss`

I am not sure why this is the case, because `fake_training` is a deterministic function.

Hi @pacowong
Thanks for pointing this out. This was indeed expected behavior, whose aim was to deal with noisy data. In this case, however, it was confusing and definitely suboptimal, so we have updated the optimizer code. It should be much better now.
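For versions that predate this fix, a simple workaround, sketched here under the same toy setup as above, is to recompute the loss of the recommendation from its arguments instead of relying on the archived `.loss` value:

```python
# Workaround sketch: with a deterministic objective, recompute the loss from
# the recommendation's arguments rather than trusting the archived value.
# fake_training and optimizer refer to the reproduction sketch above.
recommendation = optimizer.provide_recommendation()
true_loss = fake_training(*recommendation.args, **recommendation.kwargs)
print(true_loss)
```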