Assessing the syntactic abilities of BERT.
Evaluates Google's BERT-Base and BERT-Large models on the syntactic agreement datasets from Linzen, Goldberg and Dupoux (2016), Marvin and Linzen (2018), and Gulordava et al. (2018).
The code is quite messy, as I hacked it together between other things, but I believe it is accurate. This README lists the data files and shows how to run the evaluation. For more details and results, see the arXiv report.
The data is taken from the GitHub repos of Linzen, Goldberg and Dupoux (LGD), Marvin and Linzen (ML), and Gulordava et al.
File | Description |
---|---|
marvin_linzen_dataset.tsv | stimuli from Marvin and Linzen, dumped from the pickle files in the ML repo |
wiki.vocab | from LGD, used for verb inflections |
lgd_dataset.tsv | processed data from LGD |
generated.tab | data from Gulordava et al. |
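
To get a feel for these files before running anything, it can help to look at a few raw rows. The snippet below is a minimal sketch that prints the first tab-separated rows of a file; the script name (peek.py) and the default filename are just illustrative, and no particular column layout is assumed.

```python
# peek.py -- print the first few tab-separated rows of a dataset file.
# No column layout is assumed; this just shows the raw fields.
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "marvin_linzen_dataset.tsv"
with open(path, encoding="utf-8") as f:
    for i, line in enumerate(f):
        print(line.rstrip("\n").split("\t"))
        if i >= 4:  # five rows are enough for a sanity check
            break
```

For example: `python peek.py lgd_dataset.tsv`.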
lgd_dataset.tsv is created by:

    wget http://tallinzen.net/media/rnn_agreement/agr_50_mostcommon_10K.tsv.gz
    gunzip agr_50_mostcommon_10K.tsv.gz
    python make_linzen_goldberg_testset.py > lgd_dataset.tsv
Running the evaluation requires the pytorch_pretrained_bert package:

    pip install pytorch_pretrained_bert

Then run the evaluation. The first argument selects the dataset (LGD by default, marvin or gul otherwise), and adding base switches from BERT-Large to BERT-Base:

    python eval_bert.py > results/lgd_results_large.txt
    python eval_bert.py base > results/lgd_results_base.txt
    python eval_bert.py marvin > results/marvin_results_large.txt
    python eval_bert.py marvin base > results/marvin_results_base.txt
    python eval_bert.py gul > results/gulordava_results_large.txt
    python eval_bert.py gul base > results/gulordava_results_base.txt
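
The evaluation itself is a masked-LM comparison: the target verb in each stimulus is replaced by [MASK], and BERT is counted as correct when it assigns a higher score to the grammatical verb form than to the ungrammatical one (see the arXiv report for details). The sketch below shows the general idea with pytorch_pretrained_bert; it is not eval_bert.py itself, and the sentence and verb pair are only illustrative.

```python
# agreement_sketch.py -- compare BERT's masked-LM scores for a correct vs.
# incorrect verb form at a masked position (illustrative, not eval_bert.py).
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

model_name = "bert-large-uncased"  # or "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)
model.eval()

# Illustrative agreement pair: the verb position is masked, and we compare the
# scores BERT assigns to the grammatical and ungrammatical forms.
sentence = "the keys to the cabinet [MASK] on the table"
good, bad = "are", "is"

tokens = ["[CLS]"] + tokenizer.tokenize(sentence) + ["[SEP]"]
mask_index = tokens.index("[MASK]")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    scores = model(input_ids)  # (1, seq_len, vocab_size) masked-LM scores

good_id, bad_id = tokenizer.convert_tokens_to_ids([good, bad])
good_score = scores[0, mask_index, good_id].item()
bad_score = scores[0, mask_index, bad_id].item()
print("correct" if good_score > bad_score else "incorrect",
      good, good_score, bad, bad_score)
```

Candidate verbs that tokenize into more than one wordpiece would need extra handling, which this sketch ignores.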
Finally, generate the result tables:

    python gen_marvin_tbl.py
    python gen_lgd_tbl.py
    python gen_gul_tbl.py