I wanted to ask about your experience training the UNet and UNetDS_Attention models.
What range of training and validation losses did you typically observe?
If there isn't a definitive value for what these losses "should be," could you share the approximate range of loss values you observed during training and validation in your experiments for the SmaAt-UNet precipitation paper?
With my data, I tend to observe values on the order of 10^8. For example:
Early stopping reached at epoch 9. Best monitored metric value: 531416352.000000
When training DGMR, I also observed losses roughly 100x greater (~3000 vs. ~30) on my data compared to the Nimrod dataset covering the UK. This makes me think I need to adjust my data.
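If the issue is unnormalized inputs, one common fix is scaling the radar frames into [0, 1] before training. A minimal sketch, assuming a dataset-wide maximum rain rate is available (the function name and the 120 mm/h cap here are hypothetical, not values from the paper or repo):

```python
import numpy as np

def normalize_precip(frames: np.ndarray, max_value: float) -> np.ndarray:
    """Scale radar frames to [0, 1] by a dataset-wide maximum.

    `max_value` is a hypothetical dataset statistic (e.g. the highest
    rain rate seen in the training set), not a value from the paper.
    """
    return np.clip(frames / max_value, 0.0, 1.0)

# Example: frames in mm/h with a peak near 120 mm/h
frames = np.array([[0.0, 12.5], [60.0, 120.0]])
normalized = normalize_precip(frames, max_value=120.0)
print(normalized)  # all values now lie in [0, 1]
```

Clipping guards against rare outliers above the chosen maximum; the same `max_value` must of course be reused at inference time.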
Thank you for your time and help!
Hi @rutkovskii,
When I look at my checkpoints I see val_loss being around 10 at the start and then after converging around 0.23.
But this depends very much on your loss metric and your data. For example, when the target is a single 144x144 image with predicted values between 0 and 1, the maximum possible MAE (summed over pixels) is 144x144 = 20736. With 6 target images instead, the maximum is 6x20736 = 124416.
When you calculate the loss on unnormalized data, these values can increase drastically, and using MSE pushes them up even further, since it squares the differences.
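The interaction of reduction mode (mean vs. sum), normalization, and MAE vs. MSE can be checked directly. A small sketch with synthetic data (the 100 mm/h range is an assumption for illustration, not a dataset statistic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target/prediction for one 144x144 frame; unnormalized
# values in mm/h can reach ~100, normalized values lie in [0, 1].
target_mm = rng.uniform(0, 100, size=(144, 144))
pred_mm = rng.uniform(0, 100, size=(144, 144))

# MAE with mean vs. sum reduction on unnormalized data;
# sum reduction scales with the pixel count (144*144 = 20736).
mae_mean = np.abs(pred_mm - target_mm).mean()
mae_sum = np.abs(pred_mm - target_mm).sum()

# The same errors after dividing by the data range shrink by that factor.
scale = 100.0
mae_mean_norm = np.abs(pred_mm / scale - target_mm / scale).mean()

# MSE squares the differences, so unnormalized losses grow quadratically.
mse_mean = ((pred_mm - target_mm) ** 2).mean()

print(f"MAE (mean, mm):         {mae_mean:.2f}")
print(f"MAE (sum, mm):          {mae_sum:.0f}")
print(f"MAE (mean, normalized): {mae_mean_norm:.4f}")
print(f"MSE (mean, mm^2):       {mse_mean:.2f}")
```

Comparing loss values across setups is only meaningful when the reduction mode and the data scale match; otherwise differences of several orders of magnitude are expected.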
Hi @HansBambel,
Thank you for your time and help!