how to download / create single_caption_per_sample_val.json file #7
Comments
Hi, here are the instructions. Please let me know if you encounter any issues.
Hi, I had the same problem: I couldn't get single_caption_per_sample_val.json. Also, what does it mean to set dataset_mode to 0.5, 1.5, 2.5, etc. in embeddings_generator.py?
I gave up; I found other code on GitHub and ran the evaluation with that instead, referring to https://github.com/jmhessel/clipscore.
Hi, sorry for the confusion. The json (single_caption_per_sample_val) holds the caption data (per id) and is generated by the parse_karpathy script. Once you download the data from the sources mentioned in the readme, you can use parse_karpathy to pre-process it and generate a json in the single_caption_per_sample_val format. Then you simply use that json as the input for the embeddings_generator. The different dataset_modes in the embeddings_generator are just something internal that was useful for me, since I wanted a mode per dataset (it is easier for me to manage the different ~10 paths), but you can definitely ignore it and just assign your own json to 'annotations_path'. Hope this helps. Once I have some free time I'll update the code to make it easier to use.
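To make the workflow above concrete, here is a minimal sketch of the pre-processing step. It is not the repo's parse_karpathy script; it only assumes the standard Karpathy-split COCO json (dataset_coco.json) as input, and the output layout (a plain id-to-caption mapping) is an assumption about what single_caption_per_sample_val.json roughly contains.

```python
# Hedged sketch, not the repo's parse_karpathy: build an id -> single-caption
# mapping for the validation split from the Karpathy-split COCO json.
import json

def build_single_caption_val(karpathy_json_path, out_path):
    with open(karpathy_json_path) as f:
        data = json.load(f)

    captions = {}
    for img in data["images"]:
        if img["split"] != "val":          # keep only the validation split
            continue
        # take the first human caption for each image id
        # (output schema here is an assumption)
        captions[str(img["cocoid"])] = img["sentences"][0]["raw"]

    with open(out_path, "w") as f:
        json.dump(captions, f)

build_single_caption_val("dataset_coco.json", "single_caption_per_sample_val.json")
```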
@wxpqq826615304 I'll try this, thanks!
Can anyone please help me generate the single_caption_per_sample_val.json file referenced in embeddings_generator.py, as shown below?
annotations_path = f'/home/gamir/DER-Roei/davidn/myprivate_coco/annotations/single_caption_per_sample_val.json'
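Following the maintainer's reply above, one option is to ignore the dataset_mode machinery and hard-code your own path; the snippet below is only a sketch, and the path is a placeholder for wherever your parse_karpathy output lives.

```python
# Sketch: point annotations_path at your own pre-processed json instead of the
# repo's internal dataset_mode paths (path below is a placeholder).
import json

annotations_path = '/path/to/your/single_caption_per_sample_val.json'

with open(annotations_path) as f:
    annotations = json.load(f)   # id -> caption mapping (assumed format)
```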