
how to download / create single_caption_per_sample_val.json file #7

Open
BoiAkay opened this issue Mar 9, 2023 · 6 comments

BoiAkay commented Mar 9, 2023

can anyone please help me how to generate single_caption_per_sample_val.json file as mentioned in embeddings_generator.py file as shown below
annotations_path = f'/home/gamir/DER-Roei/davidn/myprivate_coco/annotations/single_caption_per_sample_val.json'

DavidHuji (Owner) commented Apr 4, 2023

Hi, here are the instructions. Please let me know if you encounter any issues.

gWeiXP commented Nov 25, 2023

> Hi, here are the instructions. Please let me know if you encounter any issue.

Hi, I had the same problem: I couldn't get single_caption_per_sample_val.json. Also, what does it mean to set dataset_mode to 0.5, 1.5, 2.5, etc. in embeddings_generator.py?


gWeiXP commented Dec 12, 2023

I gave up. I found other code on GitHub and used it for evaluation instead, referring to https://github.com/jmhessel/clipscore.
Simply save the generated captions and the reference captions into two lists, as done in clipscore.py.
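The exact entry point of clipscore.py isn't quoted in this thread, but the preparation step described above can be sketched. This is a minimal illustration (file names and the `{image_id: caption}` JSON layout are assumptions) of aligning generated and reference captions into two parallel lists before handing them to an evaluation script:

```python
import json

def align_captions(generated_path, references_path):
    """Load two {image_id: caption} JSON files and return two parallel
    lists ordered by the shared ids, ready to feed into an evaluation
    script such as clipscore.py."""
    with open(generated_path) as f:
        generated = json.load(f)
    with open(references_path) as f:
        references = json.load(f)
    # keep only ids present in both files, in a deterministic order
    shared_ids = sorted(set(generated) & set(references))
    candidates = [generated[i] for i in shared_ids]
    refs = [references[i] for i in shared_ids]
    return candidates, refs
```

The id intersection guards against captions for images that only appear on one side, which would otherwise silently misalign the two lists.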

DavidHuji (Owner) commented

Hi, sorry for the confusion. The JSON (single_caption_per_sample_val) holds the caption data (per id), and it is generated by the parse_karpathy script. So once you download the data from the sources mentioned in the README, you can use parse_karpathy to pre-process it and generate a JSON in the single_caption_per_sample_val format. Then you can simply use that JSON as the input for the embeddings_generator. The different dataset_mode values in the embeddings_generator are just internal to me: I wanted one mode per dataset because it made the ~10 different paths easier to manage. You can definitely ignore them, use your own JSON, and assign its path to 'annotations_path'. Hope this helps. Once I have some free time, I'll update the code to make it easier to use.
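The comment above describes the flow but not the schema, so here is a minimal sketch under stated assumptions: the input follows the usual Karpathy-split layout (an "images" list with "split", "cocoid", and "sentences" fields), and the output maps each validation image id to one caption. Field names and the output shape are assumptions, not the repository's confirmed format:

```python
import json

def build_single_caption_val(karpathy_json_path, out_path):
    """Pre-process a Karpathy-split JSON into a {image_id: caption}
    mapping with one caption per validation image (illustrative
    stand-in for what parse_karpathy is described as producing)."""
    with open(karpathy_json_path) as f:
        data = json.load(f)
    annotations = {}
    for img in data["images"]:
        if img.get("split") != "val":
            continue
        # keep only the first caption per sample ("single caption")
        annotations[img["cocoid"]] = img["sentences"][0]["raw"]
    with open(out_path, "w") as f:
        json.dump(annotations, f)
    return annotations
```

The resulting file could then be assigned to annotations_path in embeddings_generator.py, bypassing the internal dataset_mode switch entirely.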

qq123aa456 commented

Thank you so much for your reply. Could you please give us some instructions on how to compute scores like BLEU and CIDEr?
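The thread doesn't answer this question; the pycocoevalcap toolkit is a common route for BLEU and CIDEr on COCO-style captions. As a dependency-free illustration only (not a substitute for the official scorers), here is a minimal sentence-level BLEU-1: clipped unigram precision with a brevity penalty:

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram-precision BLEU with brevity penalty, for illustration.
    candidate/reference are whitespace-tokenised caption strings."""
    cand = candidate.split()
    ref = reference.split()
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    ref_counts = Counter(ref)
    # clipped matches: a candidate word counts at most as often as
    # it appears in the reference
    overlap = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = overlap / len(cand)
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

Real BLEU combines 1- to 4-gram precisions geometrically, and CIDEr additionally TF-IDF-weights n-grams across the reference corpus, so for reported numbers the standard toolkits should be used.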

qq123aa456 commented

@wxpqq826615304 I'll try this, thanks.

4 participants