Validation prediction insights : This tab displays model predictions for random, best, and worst validation samples. This tab becomes available after the first validation run and allows you to evaluate how well your model generalizes to new data.
@NanoCode012: Could you give me some pointers on where this should be added in Axolotl? I'll try to find time in the next week, while I'm training, to add and test this new feature. Thanks!
Callbacks should be placed in utils/callbacks.py, and then added to the Trainer in utils/trainer.py. You can see examples of existing callbacks and how they're registered in those files.
I think you could add an on_evaluate callback (if that's an option) to also predict over a few eval samples and save the responses.
🔖 Feature description
One of my favourite features from LLM Studio is the validation prediction insights: https://h2oai.github.io/h2o-llmstudio/guide/experiments/view-an-experiment#experiment-tabs
Since Axolotl is headless (no UI) this can instead be implemented with WandB logging.
Examples:
✔️ Solution
See https://wandb.ai/stacey/mnist-viz/reports/Visualize-Predictions-over-Time--Vmlldzo1OTQxMTk
❓ Alternatives
No response
📝 Additional Context
I'd be interested in contributing this, if the Axolotl team is interested and I can figure it out 😅