Hi,
There might be a small bug in Laplace/laplace/utils/metrics.py, line 35 (commit 6b0618a): for classification, it requires the target to be a label index rather than a one-hot vector. Since the train_loader requires one-hot vectors (or maybe plain labels also work; in my case I only feed one-hot vectors), it might be more consistent to let the val_loader also require one-hot vectors, or to make both formats work (see the sketch below).
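To illustrate what I mean by "both formats", here is a minimal sketch (the helper name is made up, this is not the actual metrics.py code):

```python
import torch

def _as_class_indices(targets: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Accept both conventions: integer class indices or one-hot vectors."""
    if targets.ndim >= 2 and targets.shape[-1] == n_classes:
        # One-hot (or soft-label) targets: reduce to class indices.
        return targets.argmax(dim=-1)
    # Already class indices, possibly with a trailing singleton dimension.
    return targets.squeeze(-1) if targets.ndim > 1 else targets
```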
Best,
Rui
I don't quite get the first issue: in the docs, we specify that the targets tensor follows PyTorch's CrossEntropyLoss convention, i.e. it is an integer tensor of shape (...) or (..., 1), where ... indicates any leading dimensions (see e.g. this).
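For concreteness, this is the standard PyTorch convention, nothing Laplace-specific:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)             # (batch, n_classes)
targets = torch.tensor([0, 2, 1, 2])   # integer class indices, shape (batch,)
loss = F.cross_entropy(logits, targets)

# A one-hot tensor can be mapped back to this convention with argmax:
one_hot = F.one_hot(targets, num_classes=3)
assert torch.equal(one_hot.argmax(dim=-1), targets)
```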
As for your second question, that's a good point; we could be more flexible when defining the interval. For example, we could expose the base parameter of torch.logspace as a method parameter of optimize_prior_precision.
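Something along these lines (a sketch only; the base parameter and the grid helper are hypothetical, and the log_prior_prec_min / log_prior_prec_max / grid_size names are assumed to follow the existing optimize_prior_precision arguments):

```python
import math
import torch

def prior_prec_candidates(
    log_prior_prec_min: float = -4.0,
    log_prior_prec_max: float = 4.0,
    grid_size: int = 100,
    base: float = 10.0,  # hypothetical new parameter, forwarded to torch.logspace
) -> torch.Tensor:
    # Candidate prior precisions from base**log_min to base**log_max.
    return torch.logspace(log_prior_prec_min, log_prior_prec_max, grid_size, base=base)

# E.g. a natural-log-spaced grid instead of the default powers of ten:
grid = prior_prec_candidates(base=math.e)
```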
Feel free to open a pull request! (I will eventually work on this, but I have a lot in my queue.)