CLUE MRC
zhezhaoa edited this page Aug 24, 2023 · 18 revisions
Here is a short summary of our solution on the CLUE machine reading comprehension (MRC) benchmark.
An example of fine-tuning and doing inference on the CMRC2018 dataset with cluecorpussmall_roberta_wwm_large_seq512_model.bin:
```
python3 finetune/run_cmrc.py --pretrained_model_path models/cluecorpussmall_roberta_wwm_large_seq512_model.bin \
                             --vocab_path models/google_zh_vocab.txt \
                             --config_path models/bert/large_config.json \
                             --train_path datasets/cmrc2018/train.json \
                             --dev_path datasets/cmrc2018/dev.json \
                             --output_model_path models/cmrc_model.bin \
                             --epochs_num 2 --batch_size 8 --seq_length 512

python3 inference/run_cmrc_infer.py --load_model_path models/cmrc_model.bin \
                                    --vocab_path models/google_zh_vocab.txt \
                                    --config_path models/bert/large_config.json \
                                    --test_path datasets/cmrc2018/test.json \
                                    --prediction_path datasets/cmrc2018/prediction.json \
                                    --seq_length 512
```
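CMRC2018 is an extractive task: the model predicts start and end positions of the answer span inside the passage. As a rough illustration of how such readers typically decode a span from the start/end logits (a hypothetical sketch, not the code in run_cmrc.py; the function name and `max_answer_len` limit are assumptions):

```python
import numpy as np

def best_span(start_logits, end_logits, max_answer_len=30):
    """Return (start, end) maximizing start_logits[s] + end_logits[e],
    subject to s <= e < s + max_answer_len."""
    best, best_score = (0, 0), -np.inf
    for s, s_logit in enumerate(start_logits):
        # Only consider end positions within the allowed answer length.
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

start = np.array([0.1, 2.0, 0.3, 0.2])
end = np.array([0.0, 0.5, 1.8, 0.1])
print(best_span(start, end))  # (1, 2)
```

The length cap prevents degenerate long spans when an early start position and a late end position both happen to score highly.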
An example of fine-tuning and doing inference on the ChID dataset with cluecorpussmall_roberta_wwm_large_seq512_model.bin:
```
python3 finetune/run_chid.py --pretrained_model_path models/cluecorpussmall_roberta_wwm_large_seq512_model.bin \
                             --vocab_path models/google_zh_vocab.txt \
                             --config_path models/bert/large_config.json \
                             --train_path datasets/chid/train.json --train_answer_path datasets/chid/train_answer.json \
                             --dev_path datasets/chid/dev.json --dev_answer_path datasets/chid/dev_answer.json \
                             --output_model_path models/multichoice_model.bin \
                             --report_steps 1000 \
                             --epochs_num 3 --batch_size 16 --seq_length 64 --max_choices_num 10

python3 inference/run_chid_infer.py --load_model_path models/multichoice_model.bin \
                                    --vocab_path models/google_zh_vocab.txt \
                                    --config_path models/bert/large_config.json \
                                    --test_path datasets/chid/test.json \
                                    --prediction_path datasets/chid/prediction.json \
                                    --seq_length 64 --max_choices_num 10
```
Notice that the postprocess_chid_predictions function is used at the inference stage; it is important for the performance on the ChID dataset.
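In ChID, the blanks of one passage share a candidate list, and the same idiom should not fill two blanks; per-blank argmax can therefore produce conflicting answers. A hypothetical illustration of one common way to resolve such conflicts, greedy assignment by confidence (this is a sketch of the idea, not necessarily what postprocess_chid_predictions does):

```python
def resolve_blanks(logits):
    """logits: list over blanks, each a list of scores per candidate idiom.
    Returns one candidate index per blank, with no candidate reused."""
    pairs = [(score, blank, cand)
             for blank, scores in enumerate(logits)
             for cand, score in enumerate(scores)]
    pairs.sort(reverse=True)  # highest-confidence assignments first
    answers, used = [None] * len(logits), set()
    for score, blank, cand in pairs:
        if answers[blank] is None and cand not in used:
            answers[blank] = cand
            used.add(cand)
    return answers

# Two blanks both prefer candidate 1; the greedy pass gives blank 1 the
# shared candidate (higher score) and blank 0 falls back to its runner-up.
print(resolve_blanks([[0.9, 1.0, 0.2], [0.1, 1.5, 0.3]]))  # [0, 1]
```

Without such a pass, both blanks above would independently pick candidate 1, and at least one would be wrong.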
An example of fine-tuning and doing inference on the C3 dataset with cluecorpussmall_roberta_wwm_large_seq512_model.bin:
```
python3 finetune/run_c3.py --pretrained_model_path models/cluecorpussmall_roberta_wwm_large_seq512_model.bin \
                           --vocab_path models/google_zh_vocab.txt \
                           --config_path models/bert/large_config.json \
                           --train_path datasets/c3/train.json --dev_path datasets/c3/dev.json \
                           --output_model_path models/multichoice_model.bin \
                           --learning_rate 1e-5 --epochs_num 5 --batch_size 8 --seq_length 512 --max_choices_num 4

python3 inference/run_c3_infer.py --load_model_path models/multichoice_model.bin \
                                  --vocab_path models/google_zh_vocab.txt \
                                  --config_path models/bert/large_config.json \
                                  --test_path datasets/c3/test.json \
                                  --prediction_path datasets/c3/prediction.json \
                                  --seq_length 512 --max_choices_num 4
```
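C3 is a multiple-choice task: each option is paired with the passage and question, scored by a classification head, and the answer is chosen across the options of one question. A minimal sketch of that final selection step, assuming questions with fewer options are padded up to --max_choices_num (the function name and masking scheme here are illustrative, not taken from run_c3.py):

```python
import numpy as np

def choose(choice_scores, num_real_choices):
    """Pick the answer via softmax over the real choices; padded slots
    (when a question has fewer options than max_choices_num) are masked
    out with -inf so they receive zero probability."""
    scores = np.array(choice_scores, dtype=float)
    scores[num_real_choices:] = -np.inf
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.argmax(probs))

# A question with 3 real options, padded to max_choices_num=4; the score
# in the padded slot is ignored no matter how large it is.
print(choose([0.2, 1.4, -0.3, 9.9], 3))  # 1
```

Masking the padding slots matters: without it, garbage scores in unused slots could be selected as the answer.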