Reference code for the Findings of ACL 2021 paper: Reordering Examples Helps during Priming-based Few-Shot Learning [1].
The code was written with, and depends on:
- Python 3.6
- Pytorch 1.7.0
- Transformers 3.4.0
- Create a virtualenv and install dependencies:
virtualenv -p python3 env
source env/bin/activate
pip3 install -r requirements.txt
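To optionally confirm that the installed versions match those listed above:
python3 -c "import torch, transformers; print(torch.__version__, transformers.__version__)"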
- Download the data following the instructions at https://github.com/ucinlp/autoprompt [2] and unzip it in the same folder as this repository. For fact-retrieval experiments, download the LAMA [3] data from https://github.com/facebookresearch/LAMA and copy the relations.jsonl file into data/fact-retrieval/original/.
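A minimal sketch of this setup, assuming the AutoPrompt archive is named data.zip and the LAMA data was extracted to lama_data/ (both names are placeholders; use whatever the linked instructions actually produce):
# unzip the AutoPrompt data archive in the repository root
unzip data.zip -d .
# copy the LAMA relations file into the expected location (lama_data/ is a placeholder)
cp lama_data/relations.jsonl data/fact-retrieval/original/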
- To run classification tasks using 10 training examples: <dataset> can be sst2 or sicke2b; <mode> can be pero or pero_abl (without sep token learning); <start_idx> specifies the training split (to reproduce the results from the paper, use 0, 10, 20, 30, and 40). An example invocation is given below the template.
bash run_clf.sh 0 <dataset> <mode> <start_idx> ./saved_models/outputdir1
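For example, to run pero on sst2 with the first training split:
bash run_clf.sh 0 sst2 pero 0 ./saved_models/outputdir1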
- Similarly, to run the fact retrieval task, with <mode> and <start_idx> taking the same values as above (an example follows the template):
bash run_fact_retrieval.sh 0 <mode> <start_idx> saved_models/outputdir2
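For example:
bash run_fact_retrieval.sh 0 pero 0 saved_models/outputdir2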
If you use this code, please consider citing:
[1] Kumar, Sawan, and Partha Talukdar. "Reordering Examples Helps during Priming-based Few-Shot Learning." To appear in Findings of ACL 2021. Association for Computational Linguistics.
[2] Shin, Taylor, et al. "Eliciting Knowledge from Language Models Using Automatically Generated Prompts." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.
[3] Petroni, Fabio, et al. "Language Models as Knowledge Bases?." Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019.
For any clarifications, comments, or suggestions, please create an issue or contact [email protected].