Right now the probabilities of crossover, p(xover), mutation, p(mut), and generating a new molecule from scratch, p(new), are set by the user and remain unchanged throughout a run (can @marco-foscato confirm this?). This is a simple strategy that works well for many applications, but I think in DENOPTIM's case we could use a more sophisticated scheme where these probabilities are changed dynamically during a run. I am confident such changes would improve DENOPTIM's performance. Here are my suggestions:
1. Drawing inspiration from simulated annealing, we start with a high p(mut) and lower it as the run progresses. We can make this even more sophisticated by increasing p(mut) when we see that the gene pool is stagnating and lowering it when the fitness is improving.
2. We use the generation of a new molecule from scratch to implement a sensible restart strategy.
Suggestion number 1 is a common way of making a GA "smarter". If the results from the simulated annealing approach look promising, we can implement the more sophisticated adaptive variant at a later stage. For the user this would mean that, instead of setting a fixed mutation probability, they would set some alpha controlling how fast or slow p(mut) responds to changes.
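To make the idea concrete, here is a minimal sketch of what such a schedule could look like. All names here (MutationSchedule, alpha, pMutAdaptive, the 0.1 boost factor) are hypothetical and only illustrate the idea; they are not existing DENOPTIM classes or keywords.

public class MutationSchedule
{
    private final double pMutStart; // initial mutation probability (high)
    private final double pMutMin;   // lower bound so mutation never vanishes
    private final double alpha;     // user-set decay rate, 0 < alpha < 1

    public MutationSchedule(double pMutStart, double pMutMin, double alpha)
    {
        this.pMutStart = pMutStart;
        this.pMutMin = pMutMin;
        this.alpha = alpha;
    }

    /** Annealing-style schedule: p(mut) decays exponentially with the generation number. */
    public double pMut(int generation)
    {
        return Math.max(pMutMin, pMutStart * Math.pow(alpha, generation));
    }

    /**
     * More sophisticated variant: boost p(mut) again when the population has
     * stagnated for a number of generations, capped at the starting value.
     * The linear 10% boost per stagnant generation is an arbitrary illustration.
     */
    public double pMutAdaptive(int generation, int generationsWithoutImprovement)
    {
        double boost = 1.0 + 0.1 * generationsWithoutImprovement;
        return Math.min(pMutStart, pMut(generation) * boost);
    }
}

The only user-facing parameters would then be pMutStart, pMutMin and alpha (or just alpha, if sensible defaults are chosen for the other two).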
Suggestion number 2 is a common way both of escaping local maxima and of actually speeding up convergence. Here is a quote from the Handbook of Metaheuristics that explains why convergence may be faster if we choose a good restart strategy:
[…] the algorithm [Greedy Randomized Adaptive Path-Relinking] finds a target solution in relatively few iterations: about 25% of the runs take at most 101 iterations; about 50% take at most 192 iterations; and about 75% take at most 345. However, some runs take much longer: 10% take over 1000 iterations; 5% over 2000; and 2% over 9715 iterations.
The book goes on to suggest that it is best to restart at regular intervals. The difficulty, of course, lies in choosing the length of these intervals. I suggest that we simply generate a new molecule from scratch after n successful modifications. We can find a good n experimentally by comparing convergence times for different n on two or three test cases. If the optimal n differs a lot between the cases, n should be exposed as a user setting; if not, we can hard-code the value into DENOPTIM.
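A sketch of the restart rule, assuming we can hook into the point where an accepted (successful) modification is registered. Again, the class and method names are hypothetical, not existing DENOPTIM code:

public class RestartCounter
{
    private final int n;      // restart interval: found experimentally or set by the user
    private int accepted = 0; // successful modifications since the last restart

    public RestartCounter(int n)
    {
        this.n = n;
    }

    /** Call whenever a crossover or mutation produces an accepted candidate. */
    public void registerSuccess()
    {
        accepted++;
    }

    /** True when the next candidate should be built from scratch instead of by crossover/mutation. */
    public boolean restartDue()
    {
        if (accepted >= n)
        {
            accepted = 0;
            return true;
        }
        return false;
    }
}

In effect this replaces the fixed p(new) with a deterministic rule: every n-th successful modification is followed by a fresh molecule.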