LES, Distributed ES & Fixes 🚀
### Added

- Adds exponential decay of mean/weight regularization to ES that update the mean (FD-ES and CMA variants). Simply provide a `mean_decay` != 0.0 argument at strategy instantiation. Note that covariance estimates may be a bit off, but this circumvents the constant increase of the mean norm caused by the stochastic nature of the process (see the first sketch after this list).
- Adds experimental distributed ES, which sample directly on all devices (no longer only on the host). Furthermore, we use `pmean`-like all-reduce ops to construct z-scored fitness scores and gradient accumulations to update the mean estimate. So far only FD-gradient-based ES are supported. Major benefits: scaling with the number of devices and support for larger populations/numbers of dimensions (see the second sketch after this list).
  - Supported distributed ES: `DistributedOpenES`
  - Import via: `from evosax.experimental.distributed import DistributedOpenES`
- Adds `RandomSearch` as a basic baseline.
- Adds `LES` (Lange et al., 2023) and a retrained checkpoint.
- Adds a separate example notebook showing how to use the `BBOBVisualizer`.
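For reference, a minimal usage sketch of the new `mean_decay` argument, here with `OpenES` on a toy sphere objective via the standard ask/tell loop; the concrete values are illustrative, not recommendations:

```python
import jax
import jax.numpy as jnp
from evosax import OpenES

# Minimal sketch (values are illustrative): pass mean_decay at instantiation.
rng = jax.random.PRNGKey(0)
strategy = OpenES(popsize=64, num_dims=20, mean_decay=1e-2)
params = strategy.default_params
state = strategy.initialize(rng, params)

for _ in range(10):
    rng, rng_ask = jax.random.split(rng)
    x, state = strategy.ask(rng_ask, state, params)   # sample candidates
    fitness = jnp.sum(x ** 2, axis=-1)                # toy sphere objective
    state = strategy.tell(x, fitness, state, params)  # mean update incl. decay
```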
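And a rough sketch of the experimental distributed ES. This assumes `DistributedOpenES` keeps the standard evosax ask/tell interface and that an `n_devices`-style constructor argument is available; treat the constructor details and array shapes as assumptions rather than the definitive API:

```python
import jax
import jax.numpy as jnp
from evosax.experimental.distributed import DistributedOpenES

# Rough sketch under assumptions: DistributedOpenES is assumed to keep the
# usual ask/tell interface while sharding sampling/updates across devices.
# The n_devices keyword and the returned array shapes are assumptions here.
rng = jax.random.PRNGKey(0)
strategy = DistributedOpenES(
    popsize=512,                          # total population across devices
    num_dims=1_000,
    n_devices=jax.local_device_count(),   # assumed constructor argument
)
params = strategy.default_params
state = strategy.initialize(rng, params)

x, state = strategy.ask(rng, state, params)        # candidates sampled per device
fitness = jnp.sum(x ** 2, axis=-1)                 # toy sphere objective
state = strategy.tell(x, fitness, state, params)   # pmean-style gradient/mean update
```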
### Changed

- `Sep_CMA_ES` automatic hyperparameter calculation runs into `int32` problems when `num_dims` > 40k. We therefore clip the number of dimensions to 40k for this calculation (see the illustration below).
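For intuition, a small, hypothetical illustration of the kind of `int32` overflow involved (not the exact `Sep_CMA_ES` computation): squaring a dimensionality stored as a 32-bit integer wraps around once it exceeds roughly 46341, so values well above 40k are unsafe.

```python
import numpy as np

# Hypothetical illustration, not the exact Sep_CMA_ES code: hyperparameter
# formulas involving num_dims ** 2 overflow int32 once num_dims grows past
# ~46341, so dimensionalities well above 40k are unsafe in 32-bit arithmetic.
num_dims = np.array(50_000, dtype=np.int32)
print(num_dims * num_dims)              # wraps around to -1794967296
print(num_dims.astype(np.int64) ** 2)   # 2500000000 with 64-bit integers

# The release clips the dimensionality used for this calculation to 40k.
print(min(50_000, 40_000))              # 40000
```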
### Fixed

- Fixed DES to also take flexible `fitness_kwargs`, `temperature`, `sigma_init` as inputs (sketch below).
- Fixed PGPE exponential decay option to account for the `sigma` update.
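A short sketch of passing these now-flexible DES inputs; the concrete values and the particular fitness-shaping keyword are illustrative assumptions:

```python
import jax
from evosax import DES

# Illustrative values only; temperature, sigma_init and the fitness-shaping
# kwargs (here centered_rank, an assumed option) are now forwarded by DES.
rng = jax.random.PRNGKey(0)
strategy = DES(
    popsize=32,
    num_dims=10,
    temperature=12.5,
    sigma_init=0.1,
    centered_rank=True,   # example fitness_kwargs entry (assumed name)
)
params = strategy.default_params
state = strategy.initialize(rng, params)
x, state = strategy.ask(rng, state, params)
```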