Describe the bug
I have set a learning rate different from the default 1e-4, as well as a different lr_decay value, for my training in config.json.
However, during training the lr displayed in Tensorboard stays constant at 1e-4 for all steps.
To Reproduce
Steps to reproduce the behavior:
Set a different lr or lr_decay, then observe the lr logged in Tensorboard.
Additional context
I wonder whether this is a display bug, or whether the manually set lr and lr_decay are no longer used after the Lightning migration.
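For reference, here is a minimal sketch of the lr curve I would expect Tensorboard to show, assuming lr_decay is applied as an exponential per-epoch factor (that is an assumption on my part; the repo may implement the schedule differently). The point is that with a nonzero decay the logged value should fall over time rather than sit flat at 1e-4:

```python
# Hypothetical sketch of the expected lr schedule, NOT the repo's actual code.
# Assumes lr_decay from config.json is an exponential per-epoch factor.
def expected_lr(base_lr: float, lr_decay: float, epoch: int) -> float:
    """Learning rate after `epoch` decay steps under exponential decay."""
    return base_lr * (lr_decay ** epoch)

# With e.g. lr=2e-4 and lr_decay=0.99 set in config.json, the curve
# should start at 2e-4 and decrease, not stay at a constant 1e-4.
for epoch in range(3):
    print(epoch, expected_lr(2e-4, 0.99, epoch))
```

If the Tensorboard scalar never deviates from 1e-4 regardless of these settings, that would suggest the config values are not reaching the optimizer/scheduler at all, rather than a logging issue.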