
[v0.3.1] Release Tracker #2859

Closed
5 tasks done
WoosukKwon opened this issue Feb 13, 2024 · 12 comments · Fixed by #2887
Labels
release Related to new version release

Comments

@WoosukKwon
Collaborator

WoosukKwon commented Feb 13, 2024

ETA: Feb 14-16

Major changes

TBD

PRs to be merged before the release

WoosukKwon added the release label on Feb 13, 2024
@WoosukKwon
Collaborator Author

@simon-mo Please feel free to add more!

@simon-mo
Collaborator

I would really like #2804, but it seems to be blocked by FlashInfer or other libraries.

@WoosukKwon
Collaborator Author

@simon-mo I think the main concern here is AMD because the ROCm xformers patch uses xformers==0.0.23 while we're upgrading to xformers==0.0.24 for CUDA.

@simon-mo
Collaborator

I see. We might be able to distribute different versions with varying version pins...
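
One hypothetical way to carry the differing xformers pins mentioned above would be per-platform requirement sets (the file names below are purely illustrative, not vLLM's actual layout):

```
# requirements-cuda.txt (illustrative)
xformers==0.0.24

# requirements-rocm.txt (illustrative; matches the patched ROCm xformers)
xformers==0.0.23
```

Each platform-specific wheel or Docker image would then install only its own requirements file.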

@tutu329

tutu329 commented Feb 14, 2024

Need support for miqu-1-70b-sf-gptq. Thanks a lot!

@umarbutler

I would like to see a fix for #2795. Two other users and I have been unable to use the latest version of vLLM with Ray; it works perfectly well after downgrading to the previous version.

@casper-hansen
Contributor

#2761 brings back support for quantized MoE models like Mixtral/DeepSeek, and also delivers a great speedup (2-3x).

Would it be possible to include it in the next release so that quantized models are not broken?

@simon-mo
Collaborator

@umarbutler In this release, we will disable the custom all-reduce, which should address #2795.
@casper-hansen We should definitely get this PR in within the next two weeks, but we need to cut a release today to address critical issues.
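
For anyone hitting #2795 before the release lands, vLLM exposes a switch for this. A rough sketch of the server invocation (flag name taken from vLLM's engine arguments; model name is a placeholder, and you should verify the flag exists in your installed version):

```
python -m vllm.entrypoints.api_server \
    --model <your-model> \
    --tensor-parallel-size 2 \
    --disable-custom-all-reduce
```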

@pcmoritz
Collaborator

We will need to include #2875 in the release as well

@WoosukKwon
Collaborator Author

@pcmoritz Added. Thanks!

@WoosukKwon
Collaborator Author

@caoshiyi Would it be possible to merge #2517 before the release?

@caoshiyi
Contributor

@WoosukKwon Sorry for the delay. Will address the comments tonight.


7 participants