I tried to fine-tune the model: I took one example and trained on it for many epochs. The loss decreased significantly, indicating that the model was learning. However, when I loaded the saved LoRA adapter, the model behaved identically to the base model. I compared an overfitted model (trained for 25 epochs on 100 copies of the same example) and an underfitted model (trained for 1 epoch); surprisingly, both produced the same text and achieved identical scores.
What are the chances that the LoRA adapter was not saved correctly? Or could the problem be in the training parameters or the training process?
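For anyone hitting the same symptom, one quick diagnostic is to load the saved adapter with PEFT and inspect the LoRA B matrices, which are zero-initialized: if they are still all zeros after training, the saved adapter carries no update at all. Below is a minimal sketch assuming a PEFT/transformers setup; `BASE_MODEL` and `ADAPTER_DIR` are placeholders, not paths from this issue.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholders -- substitute your own base model and adapter directory.
BASE_MODEL = "base-model-name"
ADAPTER_DIR = "path/to/saved_lora"

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)

# LoRA initializes the B matrices to zeros, so a nonzero sum here shows
# the adapter weights actually changed during training and were saved.
total = 0.0
for name, param in model.named_parameters():
    if "lora_B" in name:
        total += param.detach().abs().sum().item()
print(f"Sum of |lora_B| entries: {total}")  # 0.0 => adapter is a no-op
```

If the sum is zero, the problem is on the training side (e.g., the LoRA parameters were frozen or the wrong checkpoint was saved). If it is nonzero but generations still match the base model, the adapter may simply not be attached at inference time, e.g., generating from the base model object instead of the returned `PeftModel`.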