Fail to load weight from pair-preference-model-LLaMA3-8B #4

Open
matouk98 opened this issue May 31, 2024 · 2 comments

@matouk98

Hi, congratulations on the great work and thanks for open-sourcing it!

I am running step 3.2 with pair-preference-model-LLaMA3-8B. However, I encountered the warning "Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at RLHFlow/pair-preference-model-LLaMA3-8B and are newly initialized: ['score.weight']". Could you please help me with the issue? Thanks a lot!

@WeiXiongUST
Contributor

The current code is for the Bradley-Terry reward model, which is an `AutoModelForSequenceClassification`.

In contrast, the pair-preference model is an `AutoModelForCausalLM`, and the way of using these two models is different. I will write another script for the pair-RM in the next few days.
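
In the meantime, here is a minimal sketch (not the official script) of how such a pairwise preference model is typically queried as a causal LM: the two candidate responses are labeled A and B in the prompt, and the preference probability is read from the next-token logits of "A" vs. "B". The prompt template below is an assumption; please check the model card for the exact format the checkpoint expects.

```python
# Minimal sketch of querying a pairwise preference model served as a causal LM.
# The prompt template is an assumption; the checkpoint may expect a different format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "RLHFlow/pair-preference-model-LLaMA3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).eval()

def preference_prob(prompt: str, response_a: str, response_b: str) -> float:
    """Return an estimate of P(response_a preferred over response_b)."""
    user_msg = (
        f"[CONTEXT] {prompt}\n"
        f"[RESPONSE A] {response_a}\n"
        f"[RESPONSE B] {response_b}\n"
        "Which response is better? Answer with A or B."
    )
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_msg}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # next-token logits
    id_a = tokenizer("A", add_special_tokens=False).input_ids[0]
    id_b = tokenizer("B", add_special_tokens=False).input_ids[0]
    # Softmax over the two label tokens gives the preference probability.
    probs = torch.softmax(logits[[id_a, id_b]].float(), dim=-1)
    return probs[0].item()
```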

Thanks for bringing this issue to us.

@hmzo

hmzo commented Jun 27, 2024

@WeiXiongUST Hello, is there any recent progress on this? I'm curious whether the pair-RM needs $C_k^2$ inferences for $k$ candidates. How can we get an absolute reward score for each candidate?
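
For reference, a full round-robin over $k$ candidates would indeed take $C_k^2 = k(k-1)/2$ pairwise calls. One rough sketch (an assumption, not the repo's method) is to average each candidate's win probabilities into a pseudo-score, reusing the hypothetical `preference_prob` helper above:

```python
# Rough sketch: rank k candidates by averaging pairwise win probabilities.
# Requires k(k-1)/2 calls to the (hypothetical) preference_prob helper above.
from itertools import combinations

def rank_candidates(prompt: str, candidates: list[str]) -> list[float]:
    k = len(candidates)
    wins = [0.0] * k
    for i, j in combinations(range(k), 2):  # C(k, 2) = k(k-1)/2 comparisons
        p_ij = preference_prob(prompt, candidates[i], candidates[j])
        wins[i] += p_ij
        wins[j] += 1.0 - p_ij
    # Average win probability against the other k-1 candidates as a pseudo reward.
    return [w / (k - 1) for w in wins]
```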
