[OOM] Fine tuning CLIP #1573
Comments
I'm also looking for examples of fine-tuning the CLIP model with sentence-transformers. Thanks!
Hey @nreimers, congrats on your move/promotion to cohere.ai. I would like to open a PR and address this issue. Any pointers on how to approach it?
@AndrMoura did you solve your problem?
I didn't. I used the HF library to train my own CLIP. As for the labels, check MultipleNegativesRankingLoss.
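For reference, a minimal sketch of what fine-tuning the sentence-transformers CLIP checkpoint with MultipleNegativesRankingLoss can look like. This is not code from this issue; the model name is the stock 'clip-ViT-B-32' checkpoint and the image files and captions are placeholders:

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader
from PIL import Image

# Pretrained CLIP checkpoint shipped with sentence-transformers.
model = SentenceTransformer('clip-ViT-B-32')

# Each InputExample pairs an image with its matching caption; the other
# captions in a batch act as in-batch negatives for the loss.
train_examples = [
    InputExample(texts=[Image.open('cat.jpg'), 'A photo of a cat']),
    InputExample(texts=[Image.open('dog.jpg'), 'A photo of a dog']),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)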
image = mapping.keys()
This is how I am computing my score. Is it correct?
Oh, how can I do that? For now I am simply using the sentence-transformers library to load my CLIP model and am getting good results, but I can't evaluate it, and this is where I am stuck.
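One common way to score a CLIP model with sentence-transformers is to embed an image and the candidate captions, then compare them with cosine similarity. A minimal sketch, assuming a hypothetical query image and caption list (the file name and captions below are placeholders, not data from this thread):

from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer('clip-ViT-B-32')

captions = ['A photo of a cat', 'A photo of a dog']

# Encode the image and the candidate captions into the shared CLIP space.
image_emb = model.encode(Image.open('query.jpg'), convert_to_tensor=True)
text_emb = model.encode(captions, convert_to_tensor=True)

# Cosine similarity between the image and every caption; the highest
# score is taken as the predicted match.
scores = util.cos_sim(image_emb, text_emb)
print(scores)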
Hi, I am having the same memory error.
Hello, I'm trying to fine-tune a CLIP model on my own data (image-description pairs) on a GPU, but mid-training I'm running out of RAM. Memory usage slowly grows during training until it hits OOM.
This is a sample from my code:
I believe the problem lies in the 2nd line. When I change train_examples to load text only:
train_examples = [InputExample(texts=[row[1]['description'], row[1]['description']]) for row in train_captions.iterrows()]
the model trains without any memory issues! I must be doing something wrong with the image loading. What is the proper way to load images into the train_examples variable?
Thank you.
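One pattern that avoids steady RAM growth is not building every PIL image up front, and instead opening each image lazily inside a Dataset so that only the current batch is decoded in memory. This is a sketch rather than the author's code; the 'image_path' and 'description' column names are assumptions about the train_captions DataFrame:

import pandas as pd
from torch.utils.data import Dataset, DataLoader
from sentence_transformers import InputExample
from PIL import Image

class LazyImageCaptionDataset(Dataset):
    """Opens each image only when its example is requested, so the full set
    of decoded images never sits in RAM at once."""

    def __init__(self, dataframe):
        # Assumes the DataFrame has 'image_path' and 'description' columns.
        self.rows = dataframe.reset_index(drop=True)

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows.iloc[idx]
        image = Image.open(row['image_path']).convert('RGB')
        return InputExample(texts=[image, row['description']])

# train_captions stands in for the DataFrame from the question above.
train_captions = pd.DataFrame({'image_path': ['cat.jpg'], 'description': ['A photo of a cat']})
train_dataloader = DataLoader(LazyImageCaptionDataset(train_captions), shuffle=True, batch_size=16)

The resulting DataLoader can then be passed to model.fit together with a loss such as MultipleNegativesRankingLoss, in place of the fully in-memory train_examples list.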