generate_w_clip ERROR #2
Thanks for your interest! May I know which version of the transformers library and which model (e.g., LLaVA, InstructBLIP, or mPLUG-Owl) you are using? |
I am using llava-v1.5, and the transformers version is 4.31.0, as provided in your code. |
It seems LLaVA has changed some of its interface code (e.g., output_ids no longer contains input_ids). Adapting to the latest version of LLaVA will take me some time. In the meantime, could you run the code with LLaVA v1.1.3? That is the exact model version used in our experiments.
After installing LLaVA, you can then install the custom transformers package. Thanks! |
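To illustrate the interface change described above, here is a hypothetical sketch of the two behaviours (the helper name and token values are invented for illustration and are not taken from LLaVA's code):

```python
def strip_prompt(output_ids, input_len, includes_input):
    """Return only the newly generated token ids.

    includes_input: True for the old behaviour (output_ids contains the
    prompt tokens), False for the new behaviour (generation only).
    """
    return output_ids[input_len:] if includes_input else output_ids


# Old-style output: prompt ids [1, 2, 3] followed by generated ids.
old_output = [1, 2, 3, 42, 43]
# New-style output: generated ids only.
new_output = [42, 43]

assert strip_prompt(old_output, 3, includes_input=True) == [42, 43]
assert strip_prompt(new_output, 3, includes_input=False) == [42, 43]
```

Code written against the old convention that slices off the prompt will silently drop generated tokens under the new convention, which is why a version mismatch breaks decoding.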
Thank you for your reply. I installed LLaVA v1.1.3 as you suggested (with the weights from https://huggingface.co/liuhaotian/llava-v1.5-7b), but there are still problems:
I do not know whether the issue is caused by the open_clip version (open-clip-torch==2.24.0) or by the ViT-SO400M-14-SigLIP-384 weights (https://huggingface.co/timm/ViT-SO400M-14-SigLIP-384/tree/main). By the way, my clip_scorer is created locally like this:
|
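(The clip_scorer snippet above was an image and did not survive the export. For context, a minimal sketch of loading this model locally with open_clip, assuming the standard `create_model_and_transforms` / `get_tokenizer` entry points; the checkpoint path is illustrative, and this is not the reporter's exact code:)

```python
import open_clip

# Build the SigLIP model and load a locally downloaded checkpoint
# (the path below is illustrative). ViT-SO400M-14-SigLIP-384 is backed
# by a timm vision tower, which is why the timm version matters here.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-SO400M-14-SigLIP-384",
    pretrained="/path/to/open_clip_pytorch_model.bin",
)
tokenizer = open_clip.get_tokenizer("ViT-SO400M-14-SigLIP-384")
model.eval()
```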
I ran into the same error. |
Thanks for your feedback! This bug was previously mentioned in mlfoundations/open_clip#660 (comment). It can potentially be solved by using an updated timm package, such as
Please let me know if this solves the issue. Thanks! |
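(The suggested command itself did not survive the export. A typical upgrade along these lines would be the following; the version pin is a guess based on the versions tried in the next comment:)

```shell
# Upgrade timm to a recent release (version pin illustrative)
pip install --upgrade "timm>=0.9.8"
```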
Oh no, I just tried timm 0.9.8, 0.9.9, 0.9.10, 0.9.11, 0.9.12, 0.9.16, 1.0.3, and the latest dev version (1.0.4 dev), and none of them fix the problem; it's still the same bug. |
It is a bit weird XD. Just in case, could you check whether you might be using |
I figured out what the problem is; it only needs a slight change in the following code:
|
I will update the code later. Thanks for your time! :) |
Very good work!
I encountered the following error while running the generate_w_clip function.
Environment:
GPU: RTX 3090
Python: 3.9.19
torch: 2.0.0