Replies: 1 comment
-
I strongly need this feature
-
Moving forward, I was wondering if there were any plans to allow the logging of non-scalar values through the `self.log(...)` method provided to PyTorch Lightning modules. For custom logging the user can currently access the underlying experiment object through `self.logger.experiment`, which custom loggers are free to overload, but this has its disadvantages.
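For reference, the status quo looks roughly like this (a minimal sketch with a made-up autoencoder, assuming the TensorBoard logger so that `self.logger.experiment` is a `SummaryWriter`; optimizer and data wiring are omitted):

```python
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl


class LitAutoEncoder(pl.LightningModule):
    """Made-up module showing the two logging paths available today."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 28 * 28))

    def training_step(self, batch, batch_idx):
        x, _ = batch
        recon = self.net(x).view_as(x)
        loss = F.mse_loss(recon, x)

        # Scalars go through the unified interface ...
        self.log("train_loss", loss)

        # ... but non-scalar values have to bypass it and talk to the
        # logger-specific experiment object directly (here a SummaryWriter).
        self.logger.experiment.add_image(
            "reconstruction", recon[0], global_step=self.global_step
        )
        return loss
```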
A potentially better alternative would be to allow the PyTorch Lightning module to expose variables to a logger (scalars, tensors, or potentially anything else), with the logger class responsible for processing them into the correct format for logging. This would allow the logging of images / videos / whatever else, provided the input to the logger can be generated from the exposed variables. One way to achieve this is to allow the `log` method to take arbitrary data types and, instead of converting them to scalars before passing them to the logger as `self.trainer.logger.agg_and_log_metrics(scalar_metrics, step=step)` in `LoggerConnector.log_metrics()`, convert them after, changing the `metrics_to_scalars` defined in `TrainerLoggingMixin` to filter out / convert any object that cannot be converted to a scalar for downstream tasks (see the sketch below). I am not sure if this last step is necessary.
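The filtering could be as simple as something along these lines (a standalone sketch; `split_metrics` is just an illustrative name, not the existing `metrics_to_scalars`):

```python
import numbers
import torch


def split_metrics(metrics):
    """Illustrative helper: separate scalar-convertible values from the rest.

    Anything reducible to a Python float stays on the scalar side, so existing
    downstream consumers keep working; everything else is passed through
    untouched for the logger to interpret.
    """
    scalars, non_scalars = {}, {}
    for key, value in metrics.items():
        if isinstance(value, torch.Tensor) and value.numel() == 1:
            scalars[key] = value.item()
        elif isinstance(value, numbers.Number):
            scalars[key] = float(value)
        else:
            non_scalars[key] = value
    return scalars, non_scalars


scalars, extras = split_metrics({
    "loss": torch.tensor(0.25),
    "lr": 1e-3,
    "reconstruction": torch.rand(3, 64, 64),  # e.g. an image for the logger to render
})
# scalars == {"loss": 0.25, "lr": 0.001}; extras still holds the image tensor.
```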
From looking at the code, there are two places where side effects might arise from this change in `logger.agg_and_log_metrics(...)`. In the first case, this seems to simply update a dictionary of logged variables, which is probably fine unless the variables in `scalar_metrics` are very large. In the second case, `scalar_metrics` are appended, which is not ideal; luckily this only occurs when running in debug mode, from what I can tell?

My question would be whether it is actually necessary to preserve logged variables in state. What is this used for? I was struggling to figure it out in the code. If it is not necessary, then the proposed change is pretty simple to implement. I have a working proof of concept for this approach, done by simply overloading the `agg_and_log_metrics(...)` in the `Trainer` class. The user is then able to do what they want with the tensor variables by defining their own `def log_metrics(self, metrics, step)`, along the lines of the sketch below.
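A minimal sketch of what such a logger could look like, assuming the trainer-side change above is in place so that tensors actually reach `log_metrics` (the class and its dispatch heuristics are made up for illustration):

```python
import torch
from pytorch_lightning.loggers import LightningLoggerBase
from pytorch_lightning.utilities import rank_zero_only
from torch.utils.tensorboard import SummaryWriter


class TensorAwareLogger(LightningLoggerBase):
    """Toy logger whose log_metrics accepts scalars *and* tensors."""

    def __init__(self, save_dir="lightning_logs"):
        super().__init__()
        self._save_dir = save_dir
        self._experiment = None

    @property
    def experiment(self):
        if self._experiment is None:
            self._experiment = SummaryWriter(log_dir=self._save_dir)
        return self._experiment

    @property
    def name(self):
        return "tensor_aware"

    @property
    def version(self):
        return "0"

    @rank_zero_only
    def log_hyperparams(self, params):
        pass  # not needed for this sketch

    @rank_zero_only
    def log_metrics(self, metrics, step):
        for key, value in metrics.items():
            if isinstance(value, torch.Tensor) and value.dim() >= 3:
                # e.g. a CHW image tensor exposed by the LightningModule
                self.experiment.add_image(key, value, global_step=step)
            elif isinstance(value, torch.Tensor):
                # reduce other tensors to a summary scalar
                self.experiment.add_scalar(key, value.float().mean().item(), global_step=step)
            else:
                self.experiment.add_scalar(key, float(value), global_step=step)
```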
At the moment the logger is also passed `'epoch'` in the dictionary `scalar_metrics`, but there are some cases where you might want the logger to log additional information which is not known until after the instantiation of the `Trainer` object (such as the length of the dataloader, etc.). For example, in my proof of concept I extract two other bits of information (`"n_steps"` and `"n_epochs"`). A better option might be to also give loggers access to the trainer object (just as the trainer object has access to the logger).
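Sketching roughly what that might look like (the `set_trainer` hook is purely hypothetical and does not exist in Lightning; the class just extends the toy logger above):

```python
from pytorch_lightning.utilities import rank_zero_only


class TrainerAwareLogger(TensorAwareLogger):
    """Toy extension of the sketch above: pull run-level info from the trainer."""

    def set_trainer(self, trainer):
        # Hypothetical hook the Trainer (or the user) would call once,
        # mirroring how the trainer already holds a reference to its logger.
        self._trainer = trainer

    @rank_zero_only
    def log_metrics(self, metrics, step):
        metrics = dict(metrics)
        # Run-level information only known once the Trainer exists / is set up.
        metrics.setdefault("n_epochs", self._trainer.max_epochs)
        metrics.setdefault("n_steps", self._trainer.num_training_batches)
        super().log_metrics(metrics, step)
```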
I would be more than happy to help contribute to this. I am a newcomer, but keen to learn and happy to spend some time helping out!