YaRN tests #1161
Force-pushed from f74f49d to 63a52f6
Thanks for giving this a shot @viktor-ferenczi. YaRN models look impressive because of their low perplexity and long context windows, so I'm sure the community will love to test this out once it's ready.
Please finish this pull request; it will really help because this model is very good.
I don't have extensive LLM (or vLLM) development experience yet. I'm learning it on the job here, so it won't be implemented quickly (unless I get help on this). I'm committed to completing it at some point, but I also need to find the time to work on it. (I have a day job.)
@zhuohan123 @WoosukKwon please see this pull request
Force-pushed from 1fe53b2 to ca3800e
@casper-hansen There is #555 and #464. They seem to share code with YaRN, just with different RoPE scaling approaches. I suggest having a unified configuration and a partially shared implementation. #555 would be a good starting point if it can be reviewed and finalized first; it would be pointless to redo the work that has already been done there. That PR has different long-context test cases than the ones I wrote, so maybe they could be merged to use both. The new LLM option would look something like:

```python
rope_scaling=dict(
    type='linear',  # linear, dynamic or yarn
    factor=2.0,     # scaling factor
    # ... hyper-parameters if required, like YaRN's alpha and beta
)
```

Also, most of this is already defined in the YaRN models' configuration, except for the alpha and beta hyper-parameters. The paper mentions alpha=1 and beta=32 for Llama 2 models. In the 128k model (factor 32.0 × 4096 = 131072 tokens) the config has:

```json
"rope_scaling": {
    "factor": 32.0,
    "original_max_position_embeddings": 4096,
    "type": "yarn",
    "finetuned": true
}
```

The smaller 64k model has:

```json
"rope_scaling": {
    "factor": 16.0,
    "original_max_position_embeddings": 4096,
    "type": "yarn",
    "finetuned": true
}
```

What do you think?
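To make the proposal concrete, here is a minimal sketch of how the unified option might look from the user's side. Note that the `rope_scaling` keyword argument is only the suggestion above, not an existing vLLM parameter at this point, and the model name is just one example of a YaRN-finetuned checkpoint:

```python
from vllm import LLM

# Hypothetical usage of the proposed unified rope_scaling option.
# The rope_scaling keyword follows the dict layout suggested above;
# it is NOT an existing vLLM parameter at the time of writing.
llm = LLM(
    model="NousResearch/Yarn-Llama-2-13b-64k",  # example YaRN checkpoint
    rope_scaling=dict(
        type="yarn",
        factor=16.0,  # 16 x 4096 = 65536 token context
        original_max_position_embeddings=4096,
        alpha=1.0,    # YaRN hyper-parameters; paper defaults for Llama 2
        beta=32.0,
    ),
)
```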
Force-pushed from ca3800e to a7595e1
Hi @viktor-ferenczi, I would be willing to contribute the implementation, unless you have already started work on this.
I just added tests and haven't written the actual YaRN code yet. What may help you is that #464 was merged recently. Please go ahead with the implementation, because I lack the time to work on it right now.
Implementation PR: #1264
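Since alpha and beta come up above, here is a rough sketch of how YaRN's "NTK-by-parts" scheme uses them to blend RoPE interpolation and extrapolation per dimension. This paraphrases my reading of the paper, not the code in #1264, and it omits YaRN's attention temperature scaling:

```python
import math

def yarn_scaled_inv_freq(dim: int, base: float = 10000.0,
                         scale: float = 16.0, orig_ctx: int = 4096,
                         alpha: float = 1.0, beta: float = 32.0) -> list[float]:
    """Per-dimension rotary frequencies blended between position
    interpolation and extrapolation, following the YaRN paper's ramp.
    """
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    scaled = []
    for f in inv_freq:
        wavelength = 2 * math.pi / f
        r = orig_ctx / wavelength  # rotations over the original context
        if r < alpha:
            gamma = 0.0            # slow dims: interpolate fully (divide by scale)
        elif r > beta:
            gamma = 1.0            # fast dims: keep the original frequency
        else:
            gamma = (r - alpha) / (beta - alpha)  # linear ramp in between
        scaled.append(((1 - gamma) / scale + gamma) * f)
    return scaled
```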
Issue: #980
Currently the branch has preliminary code to test context window quality with passkey retrieval tasks. It does not plot a graph; that is not the goal of the test. It allows running the same test on both the reference implementation of YaRN (using Transformers) and on vLLM, with parameters chosen to produce comparable output, so our upcoming implementation can be compared against the reference one.
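For readers unfamiliar with the task: a passkey retrieval test hides a short secret inside long filler text and asks the model to repeat it back, which probes whether the whole context window is actually usable. A minimal sketch of such a prompt builder follows; the function name, prompt wording, and token sizing heuristic are mine, not necessarily what the branch does:

```python
import random

def make_passkey_prompt(context_len_tokens: int,
                        tokens_per_filler: int = 10) -> tuple[str, str]:
    """Build a prompt hiding a random passkey inside repeated filler text.

    Returns the prompt and the passkey the model is expected to echo back.
    """
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    # Rough sizing: repeat the filler until the prompt is long enough.
    n_repeats = max(1, context_len_tokens // tokens_per_filler)
    parts = [filler] * n_repeats
    # Insert the passkey sentence at a random position in the filler.
    parts.insert(random.randint(0, n_repeats - 1),
                 f"The pass key is {passkey}. Remember it. ")
    prompt = (
        "There is a pass key hidden in the text below. Find and remember it.\n\n"
        + "".join(parts)
        + "\nWhat is the pass key?"
    )
    return prompt, passkey
```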
TODO