Improve training archive loading #90
Merged
Conversation
…anProcessLearner can use each other's archives as training archives.
…do not include any data from the training archive if one was provided at the beginning of the optimization.
… with respect to data in the training archive if one was provided at the start of the optimization.
…ed as training data archives for the machine-learning-based learners.
…ler to the learner for some controller types, which meant that the learner archive would be missing the results from the last run.
@charmasaur There are a lot of changes in this PR, but a lot of it is just reorganization. I did a test optimization with each learner, checked that GaussianProcessLearner and NeuralNetLearner could use any archive as a training archive, and checked that visualizations still worked. Everything seems to be running ok. Are you interested in taking a look or should I just merge it?
Feel free just to merge it. I might take a look tomorrow, but if I have any suggestions I can just send a follow-up PR.
zakv added a commit to zakv/M-LOOP that referenced this pull request on Dec 25, 2020:
…alizer and NeuralNetVisualizer use the default value for param_names instead of loading it from the archive.
Fully resolves #82. The GaussianProcessLearner and NeuralNetLearner can now use a learner archive from any type of learner as their training archive; previously they could only use each other's archives, not those from the DifferentialEvolutionLearner (and possibly the other learners as well). Note that it won't be possible for them to use old archives from other types of learners; they can only load archives from other types of learners if those optimizations were run with the changes included in this PR.
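For context, a minimal usage sketch (not taken from the PR itself): it assumes the training archive is passed to the controller via a training_filename keyword, and the interface, settings, and archive path below are made up for illustration.

```python
import mloop.interfaces as mli
import mloop.controllers as mlc
import numpy as np


class QuadraticInterface(mli.Interface):
    """Toy experiment whose cost is the squared distance from the origin."""

    def get_next_cost_dict(self, params_dict):
        params = params_dict['params']
        cost = float(np.sum(params ** 2))
        return {'cost': cost, 'uncer': 0.01, 'bad': False}


interface = QuadraticInterface()
controller = mlc.create_controller(
    interface,
    controller_type='gaussian_process',
    num_params=2,
    min_boundary=[-1.0, -1.0],
    max_boundary=[1.0, 1.0],
    max_num_runs=50,
    # Hypothetical path to a learner archive saved by a previous optimization
    # with a different learner type; the keyword name is an assumption here,
    # so check the M-LOOP documentation for the exact option.
    training_filename='M-LOOP_archives/learner_archive_2020-12-22_19-47.txt',
)
controller.optimize()
```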
Changes proposed in this pull request:

- GaussianProcessLearner and NeuralNetLearner can now use learner archives from other types of learners as training archives.
- The training data is read from the all_costs and all_params entries of the learner archive (see the sketch after this list). NelderMeadLearner, DifferentialEvolutionLearner, and RandomLearner have been modified to save these parameters to their archives; hence why their archives from old optimization runs won't work here.
- Controllers now send params, cost, uncer, and bad to their learners, which store them in their self.all_params, self.all_costs, self.all_uncers, and self.bad_run_indexs properties.
- RandomLearner now keeps track of its best parameters on its own since its controller doesn't send them anymore.
- RandomLearner now saves a learner archive by default.
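As a rough illustration of what these archive changes make available, the sketch below reads all_params and all_costs back out of a learner archive with mloop.utilities.get_dict_from_file; the archive path is hypothetical, and the exact set of stored keys may differ from what is shown.

```python
import mloop.utilities as mlu

# Hypothetical path to a learner archive from a previous optimization.
filename = 'M-LOOP_archives/learner_archive_2020-12-22_19-47.txt'

# Load the archive into a dictionary ('txt' format assumed here).
archive = mlu.get_dict_from_file(filename, 'txt')

# Per-run results that the learners now record, per the list above.
all_params = archive['all_params']  # parameters tried, one row per run
all_costs = archive['all_costs']    # measured cost of each run
print('Number of runs stored in the archive:', len(all_costs))
```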
Overall approach:

- num_params in the new optimization must match num_params in the archive.
- If num_params is None and a training archive is provided, then it will default to the value of num_params in the training archive (a sketch of this logic follows the list). This is needed for the visualization class so that it can call its parent's __init__() method without knowing num_params ahead of time.
- For settings such as cost_has_noise, use the user-supplied keyword argument values and usual defaults, ignoring the values in the archive.
- … __init__().
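A minimal sketch of the num_params resolution rule described above, assuming a mismatch is treated as an error; this is illustrative pseudologic rather than the actual M-LOOP implementation, and training_dict stands for a learner archive already loaded into a dictionary.

```python
def resolve_num_params(num_params, training_dict=None):
    """Illustrative sketch of the num_params rule, not M-LOOP's actual code."""
    if training_dict is not None:
        archive_num_params = int(training_dict['num_params'])
        if num_params is None:
            # Default to the archive's value, so e.g. a visualizer can call
            # its parent's __init__() without knowing num_params ahead of time.
            return archive_num_params
        if num_params != archive_num_params:
            # Assumed failure mode: mismatches are rejected outright.
            raise ValueError(
                'num_params does not match the value in the training archive.')
    return num_params
```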