Add different selection methods #28
Epsilon Lexicase
It'd be great if you could provide a bit more detail, @echo66 ...
It's already implemented in https://github.com/lacava/few/. More info at:
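For anyone skimming the thread, here is a minimal sketch of how epsilon lexicase selection works, assuming each individual carries a vector of per-case errors; the function and variable names are illustrative only, not gplearn or few API:

```python
import numpy as np

def epsilon_lexicase_select(population, errors, rng=None):
    """Return one parent chosen by epsilon lexicase selection.

    population : list of candidate programs
    errors     : (n_individuals, n_cases) array of per-case errors
    """
    rng = rng or np.random.default_rng()
    n_individuals, n_cases = errors.shape
    # Epsilon per case: median absolute deviation (MAD) of that case's errors.
    medians = np.median(errors, axis=0)
    eps = np.median(np.abs(errors - medians), axis=0)

    candidates = np.arange(n_individuals)
    for case in rng.permutation(n_cases):
        if len(candidates) == 1:
            break
        case_errors = errors[candidates, case]
        best = case_errors.min()
        # Keep everyone within epsilon of the best error on this case.
        candidates = candidates[case_errors <= best + eps[case]]
    return population[rng.choice(candidates)]
```

Because epsilon is the MAD of each case's errors, what counts as "good enough" adapts to how hard each fitness case is, which is the key difference from plain lexicase selection.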
This would be most welcome, @wulfihm 👍
Beginning this now. Not sure when I will be done.
No worries @wulfihm, take your time, no rush. I'll be interested to see what you come up with. Would abstracting out the selector as its own class be possible?
So, as is evident, I did not get around to doing this. Maybe I will get around to doing a clean PR for this, but maybe not, so in the meantime, if anyone wants to do it, feel free. And if you have any questions about my code above, I am happy to answer them. Eplex and ParetoGP are defined here: https://github.com/wulfihm/gplearn_ba/blob/master/gplearn/selection.py I tried doing NSGA-II, but it really did not work at all. I also implemented other things, such as geometric semantic crossover and mutation, simplification of gplearn's solutions (not finished), another complexity measure I named 'Kommenda' after M. Kommenda et al., and the R2 score for regression. Another note: I implemented everything above with only regression in mind; I completely ignored the SymbolicTransformer. If you are interested at all, here is my bachelor thesis:
Really appreciate you sharing this, @wulfihm. If someone wants to take up the torch, it'd be very cool to see these added. Otherwise, maybe I'll take a few rainy weekend days this winter to play with your code 😄
+1 for this! I'm trying to switch to gplearn from Eureqa lately, and I'm also very interested in this. For context, I have a recent paper on converting neural networks into analytic equations to discover new physical laws: https://arxiv.org/abs/2006.11287. We use the following Pareto front technique: we look for the sharpest drop in log-error over expression length. It seems to work pretty well on a range of noisy datasets, compared to jointly optimizing loss and length. But I'd also be interested in trying out these others.
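For concreteness, here is a minimal sketch of that selection rule as described above, assuming the Pareto front is available as (complexity, error) pairs sorted by complexity; the function and the toy front are my own illustration, not code from the paper:

```python
import numpy as np

def select_by_log_error_drop(front):
    """Return the (complexity, error) point at the sharpest drop in
    log-error per unit of added complexity along the Pareto front.

    front : list of (complexity, error) pairs, sorted by increasing
            complexity, with strictly positive errors.
    """
    complexity = np.array([c for c, _ in front], dtype=float)
    log_error = np.log([e for _, e in front])
    # Drop in log-error per unit of complexity between neighbouring points;
    # the drop from point i to i+1 is credited to point i+1.
    scores = -np.diff(log_error) / np.diff(complexity)
    return front[int(np.argmax(scores)) + 1]

# Hypothetical front: (expression length, error) pairs.
front = [(1, 2.0), (3, 1.9), (5, 0.2), (9, 0.19)]
print(select_by_log_error_drop(front))  # -> (5, 0.2)
```

In the toy front, the jump from length 3 to length 5 buys a roughly tenfold error reduction, so that point wins even though longer expressions have marginally lower error.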
@MilesCranmer What exactly do you mean by "how to use these techniques"? Programmatically? Theoretically? :D
I mean programmatically, i.e., how to configure those methods in gplearn's .fit() loop for a particular problem if I were to use your fork.
I added additional hyperparameters/options: complexity => 'kommenda' (for the Kommenda complexity). ParetoGP works by selecting the first parent randomly from the Pareto front (the archive). The second parent is selected via the selection mechanism (which can be anything, e.g. tournament or eplex) from the normal population. See: https://doi.org/10.1007/0-387-23254-0_17 I used the code here: https://github.com/wulfihm/ba_code/blob/master/main.py It works via command-line arguments. The …
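Based on that description, a minimal sketch of the ParetoGP parent pairing could look like the following, with plain tournament selection swapped in for the second parent; all names here are hypothetical, not @wulfihm's actual interface:

```python
import random

def pareto_gp_parents(archive, population, fitness, tournament_size=7):
    """Return a (parent_a, parent_b) pair, ParetoGP style.

    archive    : current Pareto-optimal programs (the archive)
    population : programs in the regular population
    fitness    : callable mapping a program to its error (lower is better)
    """
    # First parent: drawn uniformly at random from the Pareto front.
    parent_a = random.choice(archive)
    # Second parent: any selection mechanism over the normal population;
    # a plain tournament stands in here as an example.
    contestants = random.sample(population, tournament_size)
    parent_b = min(contestants, key=fitness)
    return parent_a, parent_b
```

The asymmetry is the point of the method: one parent is always Pareto-optimal, which keeps pressure toward the front, while the other comes from the general population, which preserves diversity.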
That's awesome! I'm really looking forward to trying it out this week. Thanks for putting this online and offering assistance in configuring it. Cheers,