Hi, amazing work, and thank you for making it open source!
1. After reviewing your code, I noticed that several preference-pair selection strategies are included for building DPO preference pairs. Have you compared these strategies, and if so, which one tends to perform better?
2. When incorporating the chosen preference data into the original model via SFT, if the distribution of the original model's outputs is completely inconsistent with the chosen data and of lower quality, would you recommend using OOD chosen responses paired with model-generated responses as preference pairs for training, or only preference pairs generated by the original model?
Thanks in advance for your insights!
For 1, we have found that the max-min pair performs the best in our experiments.
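For anyone reading along, here is a minimal sketch of what a max-min pairing step could look like, assuming the N responses to a prompt have already been scored by a reward model. The function name and signature are illustrative only, not the repo's actual API:

```python
# Minimal sketch (illustrative, not the repo's actual code): build a max-min
# preference pair from N responses to the same prompt, scored by a reward model.
def max_min_pair(responses, rewards):
    """responses: list[str]; rewards: list[float] from the reward model."""
    best = max(range(len(rewards)), key=lambda i: rewards[i])   # highest-reward index -> chosen
    worst = min(range(len(rewards)), key=lambda i: rewards[i])  # lowest-reward index -> rejected
    return {"chosen": responses[best], "rejected": responses[worst]}
```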
For 2, we'd suggest conducting SFT first and then performing online DPO, so that your policy model is good enough to generate reasonable samples. If the policy model is not good enough, the sample efficiency will be very low (you would need best-of-n sampling with a large n to obtain a single good example).
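To make the suggested ordering concrete, here is a rough outline of the loop we have in mind. Note that `sft_train`, `sample_n`, `reward_model`, and `dpo_update` are hypothetical placeholders for your own training, sampling, scoring, and DPO-update routines, not functions from this repo:

```python
# Rough outline only; all helper functions below are hypothetical placeholders.
policy = sft_train(base_model, sft_dataset)             # 1) SFT so the policy can produce reasonable samples

for prompts in prompt_batches:                          # 2) online DPO iterations
    pairs = []
    for prompt in prompts:
        responses = sample_n(policy, prompt, n=8)       # best-of-n sampling; a weak policy needs a larger n
        rewards = [reward_model(prompt, r) for r in responses]
        pairs.append(max_min_pair(responses, rewards))  # chosen/rejected picked by RM score (see above)
    policy = dpo_update(policy, prompts, pairs)         # DPO step on these on-policy pairs
```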
Thank you so much for taking the time to respond! I truly appreciate your insights.
Regarding point 2, I wanted to seek further clarification: does the process involve performing SFT first, followed by DPO? Specifically, is the SFT step meant to align the distribution by fine-tuning on the chosen outputs from the DPO preference pairs? If so, does this imply that the chosen output in the DPO pairs needs to be of particularly high quality?
Or is it sufficient to use open-source instruction-tuning datasets to bring the model to a usable level, without worrying about the difference between the SFT data and the DPO pairs? In that case, would the primary criterion for deciding whether a DPO pair is usable simply be the RM score used to mark a sample as rejected?
Thank you again for your patience and for sharing your expertise!