Problem
I would like to log metrics to a file during training and validation, as is already done for testing. Currently I have to open TensorBoard and read them off manually.
Desired solution
Is it possible to use a flag, like the one for TensorBoard logging, to enable or disable metric logging regardless of stage?
PR #253 introduces a `--collect_valid_results` flag for saving validation metrics. It should be merged in the next few days (I just need to find time for a final review), but you can install that version of AllenAct ahead of time if you'd like:
Saving the training metrics to a metrics file is something we hadn't thought of, so it might take some more effort. Note that, as a temporary workaround, you can always run testing but update the relevant experiment config file to use the training dataset instead of the test dataset. If we built a function that let you extract the training metrics from the TensorBoard event files, would that be sufficient for you? I.e., are you just looking for a way to get these metrics without having to run TensorBoard explicitly?
Thanks for your response! Yes, a function that extracts the training metrics from the TensorBoard files would work well for me, so that I don't have to run TensorBoard.
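Until such a helper exists in AllenAct, here is a minimal sketch of the idea using TensorBoard's own `EventAccumulator` (this assumes the `tensorboard` package is installed and that `logdir` points at the directory containing your event files; it is not an AllenAct API):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator


def extract_scalars(logdir: str) -> dict:
    """Read every scalar series from a TensorBoard log directory.

    Returns a dict mapping each scalar tag to a list of (step, value) pairs.
    """
    acc = EventAccumulator(logdir)
    acc.Reload()  # parse the event files on disk
    return {
        tag: [(event.step, event.value) for event in acc.Scalars(tag)]
        for tag in acc.Tags()["scalars"]
    }
```

You could then dump the returned dict to JSON (e.g. `json.dump(extract_scalars(logdir), open("metrics.json", "w"))`) to get a plain metrics document without ever starting the TensorBoard server.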