Right now, `test` is applied to the parameters at the end of training. Sometimes these can be much worse than the peak parameters, which are what you'd report on any benchmark anyway. Ideally, `test` would automatically evaluate the best checkpoint for you.
This is the "DeepMind"-style testing. I've been thinking about the best way to handle this; there are a few options. I'm still not happy with the experiments package as a whole, so I may roll this into future refactoring.
If you need this urgently, you should be able to implement a version yourself by writing your own `run_experiment` that interleaves `experiment.train()`, `experiment.test()`, and `experiment.save()`.
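As a rough starting point, here is a minimal sketch of such a custom `run_experiment`. It assumes an `experiment` object exposing `train`, `test`, `save`, and `load`; the keyword arguments, the `load` method, and the return conventions shown here are hypothetical placeholders, not the package's actual API.

```python
def run_experiment(experiment, total_steps=100_000, eval_every=1_000,
                   checkpoint_path="best_params.ckpt"):
    """Train in chunks, evaluate periodically, and test with the peak
    parameters rather than the final ones. All signatures are assumed."""
    best_score = float("-inf")
    steps_done = 0

    while steps_done < total_steps:
        # Train for a chunk of steps, then evaluate.
        experiment.train(steps=eval_every)      # assumed signature
        steps_done += eval_every

        score = experiment.test()               # assumed to return a scalar
        if score > best_score:
            # Save the peak parameters whenever validation improves.
            best_score = score
            experiment.save(checkpoint_path)    # assumed signature

    # Restore the best checkpoint so the final test reflects peak performance.
    experiment.load(checkpoint_path)            # assumed signature
    return experiment.test()
```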