Add Support for DeBERTaV2 #207
Comments
Hi @sam-h-bean, before quantizing a DeBERTaV2 model, ONNX export support for it needs to be added to transformers first. Feel free to reach out if you have any other questions!
@JingyaHuang I opened a PR here! Excited to get to the quantizing once this makes its way into main. Or is there a way I can begin that work using my local copy of transformers?
@sam-h-bean Sure, feel free to test with your local copy of transformers installed from source.
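For reference, a minimal sketch of what dynamic quantization of a DeBERTaV2 sequence-classification model could look like with `optimum.onnxruntime` once export support lands. The checkpoint name and the exact API surface (`export=True`, `AutoQuantizationConfig.avx512_vnni`) are assumptions here and may differ between Optimum versions:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Hypothetical checkpoint, used only for illustration.
checkpoint = "microsoft/deberta-v3-base"

# Export the PyTorch model to ONNX (requires DeBERTaV2 ONNX export support).
ort_model = ORTModelForSequenceClassification.from_pretrained(checkpoint, export=True)

# Dynamic (weight-only) int8 quantization with ONNX Runtime.
quantizer = ORTQuantizer.from_pretrained(ort_model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="deberta-v2-quantized", quantization_config=qconfig)
```

Static quantization would additionally require a calibration dataset; the dynamic path above avoids that.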
Closed as completed.

P.S. As DeBERTa tokenizers output `token_type_ids` by default, which the exported model does not expect, disable them when tokenizing:

```diff
tokenizer = {processor_class}.from_pretrained("{checkpoint}")
model = {model_class}.from_pretrained("{checkpoint}")

-inputs = tokenizer("Optimum is nice.", return_tensors="pt")
+inputs = tokenizer("Optimum is nice.", return_tensors="pt", return_token_type_ids=False)

outputs = model(**inputs)
```

And for the pipeline API, we are working on taking these edge cases into consideration.
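To make the P.S. concrete, here is a rough end-to-end sketch of running the exported model with `token_type_ids` disabled; the checkpoint name and the `ORTModelForSequenceClassification` export path are assumptions for illustration, not the exact setup from this thread:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

# Hypothetical checkpoint, for illustration only.
checkpoint = "microsoft/deberta-v3-base"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = ORTModelForSequenceClassification.from_pretrained(checkpoint, export=True)

# DeBERTa tokenizers emit token_type_ids by default; the exported model may
# not accept them as an input, so drop them at tokenization time.
inputs = tokenizer("Optimum is nice.", return_tensors="pt", return_token_type_ids=False)
outputs = model(**inputs)
print(outputs.logits)
```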
I am getting an error with this.
@ice-americano Hi, could you open a new issue to track down this problem? Thanks a lot!
I would like to use DeBERTaV2 for sequence classification as a quantized model. Please let me know what needs to be done to open a PR to add this support!