
Question on Training and Validation Losses #35

Open · rutkovskii opened this issue Nov 21, 2024 · 1 comment

Comments

@rutkovskii

Hi @HansBambel,

I wanted to ask about your experience training the UNet and UNetDS_Attention models.

  1. What range of training and validation losses did you typically observe?
  2. If there isn’t a definitive value for what these losses “should be,” could you share the approximate range of loss values you encountered during training and validation in your experiments for the SmaAt-UNet precipitation paper?

With my data, I tend to observe values on the order of 10^8, e.g.:

Early stopping reached at epoch 9. Best monitored metric value: 531416352.000000

When training DGMR, I also observed roughly 100× greater losses on my data (in the 3000s versus the 30s) compared with the UK-covering Nimrod data. This makes me think that I need to adjust my data.
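For reference, this is the kind of rescaling I am considering before training; a minimal sketch, where the 100 mm/h clipping threshold is just a placeholder I picked, not a value from either repository:

```python
import numpy as np

# Minimal sketch: clip raw radar values to an assumed plausible range and
# rescale to [0, 1]. The 100 mm/h threshold is a placeholder, not a value
# taken from the SmaAt-UNet or DGMR repositories.
def normalize_precip(frames: np.ndarray, max_mm_per_h: float = 100.0) -> np.ndarray:
    return np.clip(frames, 0.0, max_mm_per_h) / max_mm_per_h

raw = np.array([0.0, 2.5, 40.0, 250.0])  # raw values in mm/h
print(normalize_precip(raw))             # [0.    0.025 0.4   1.   ]
```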

Thank you for your time and help!

@HansBambel (Owner)

Hi @rutkovskii,
When I look at my checkpoints, I see val_loss starting around 10 and converging to around 0.23.

But this very much depends on your loss metric and your data. For example, with a single 144×144 image as target, the maximum possible summed absolute error is 144 × 144 = 20736 (assuming predicted and target values are between 0 and 1; note this is the sum over pixels, not the per-pixel mean). With 6 target images instead, the maximum is 6 × 20736 = 124416.
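To make that arithmetic concrete, here is a minimal sketch using PyTorch's `L1Loss` with sum reduction, taking the worst case of predicting all zeros against a target of all ones (the sum reduction is an assumption about how the loss is aggregated, matching the numbers above):

```python
import torch
import torch.nn as nn

# Worst case: predict all zeros while the target is all ones, values in [0, 1].
target_1 = torch.ones(1, 1, 144, 144)  # a single 144x144 target image
target_6 = torch.ones(1, 6, 144, 144)  # six 144x144 target images

l1_sum = nn.L1Loss(reduction="sum")
print(l1_sum(torch.zeros_like(target_1), target_1))  # tensor(20736.)  = 144*144
print(l1_sum(torch.zeros_like(target_6), target_6))  # tensor(124416.) = 6*20736
```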

When you calculate the loss on unnormalized data, these values can increase drastically. Using MSE inflates them even further, since it squares the differences.
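As a hypothetical illustration (the numbers are made up, not from any experiment): the same 10% relative error per pixel gives a wildly different summed MSE on [0, 1]-normalized data than on data left on a 0–1000 scale:

```python
import torch
import torch.nn as nn

# Same 10% relative error per pixel, normalized vs. unnormalized scale.
pred_norm, tgt_norm = torch.full((144, 144), 0.9), torch.ones(144, 144)
pred_raw, tgt_raw = torch.full((144, 144), 900.0), torch.full((144, 144), 1000.0)

mse_sum = nn.MSELoss(reduction="sum")
print(mse_sum(pred_norm, tgt_norm))  # ~207.36  (0.1^2 per pixel, summed)
print(mse_sum(pred_raw, tgt_raw))    # ~2.07e8  (100^2 per pixel, summed)
```

Note that the unnormalized case already lands on the 10^8 scale you are seeing.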

So, again, it very much depends on your data :)
