When training for a classification task, the F-score is often more relevant than accuracy. After training, when I select the best model, I often end up with a subpar one: a better model with a higher F-score may have existed during training, but no checkpoint was logged for it.
Either use the F-score as the default monitor metric for classification, or add an additional task type such as "unbalanced_classification".
At the moment I have to manually set the macro F-score as the monitor metric and `max` as the monitor mode. This is very easy to fix; I just don't see why someone with a typical classification task would choose accuracy over the F-score as the target metric.
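To illustrate the point (this is a generic sketch, not code from the library in question): on an imbalanced dataset, a degenerate majority-class predictor can beat a genuinely useful model on accuracy while losing badly on macro F1. The label counts and both sets of predictions below are hypothetical, chosen only to make the effect visible.

```python
# Sketch: why checkpointing on accuracy can keep a subpar model
# on an imbalanced classification task. All data here is synthetic.
from sklearn.metrics import accuracy_score, f1_score

# Skewed binary labels: 90 negatives, 10 positives.
y_true = [0] * 90 + [1] * 10

# Model A: always predicts the majority class.
pred_majority = [0] * 100

# Model B: accepts some false positives to actually recall positives
# (80 true negatives, 10 false positives, 8 true positives, 2 false negatives).
pred_balanced = [0] * 80 + [1] * 10 + [1] * 8 + [0] * 2

# Accuracy prefers the degenerate model...
print(accuracy_score(y_true, pred_majority))   # 0.90
print(accuracy_score(y_true, pred_balanced))   # 0.88

# ...while macro F1 correctly prefers the useful one.
print(f1_score(y_true, pred_majority, average="macro"))
print(f1_score(y_true, pred_balanced, average="macro"))
```

A checkpoint callback monitoring accuracy would save Model A here and discard Model B, which is exactly the failure mode described above; monitoring macro F1 with mode `max` avoids it.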