I ran your code with:
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run_classifier_TABSA.py \
  --task_name semeval_QA_B \
  --data_dir data/semeval2014/bert-pair \
  --vocab_file uncased_L-12_H-768_A-12/vocab.txt \
  --bert_config_file uncased_L-12_H-768_A-12/bert_config.json \
  --init_checkpoint uncased_L-12_H-768_A-12/pytorch_model.bin \
  --eval_test \
  --do_lower_case \
  --max_seq_length 512 \
  --train_batch_size 24 \
  --learning_rate 2e-5 \
  --num_train_epochs 12.0 \
  --output_dir results_v2/semeval_result/QA_B \
  --seed 42
and got the best performance at epoch 10:
aspect_P = 0.9109792284866469
aspect_R = 0.8985365853658537
aspect_F = 0.9047151277013752
sentiment_Acc_4_classes = 0.8439024390243902
sentiment_Acc_3_classes = 0.8735868448098664
sentiment_Acc_2_classes = 0.9306029579067122
But according to your article, BERT-pair-QA-B should reach 85.9 / 89.9 / 95.6 (sentiment accuracy on 4 / 3 / 2 classes, respectively).
What should I do to reproduce your results? Thanks.
The recommended number of epochs for fine-tuning a BERT model is 3 or 4. Besides, you can try changing the random seed.
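Following this advice, the original command can be rerun with `--num_train_epochs` lowered to 4.0 and a different `--seed`; a sketch (the flags are taken from the command in the issue, and the seed value 123 is just an illustrative alternative, not a value from the paper):

```shell
# Same invocation as above, but with the recommended 4 fine-tuning epochs
# and a different random seed (123 is an arbitrary example).
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run_classifier_TABSA.py \
  --task_name semeval_QA_B \
  --data_dir data/semeval2014/bert-pair \
  --vocab_file uncased_L-12_H-768_A-12/vocab.txt \
  --bert_config_file uncased_L-12_H-768_A-12/bert_config.json \
  --init_checkpoint uncased_L-12_H-768_A-12/pytorch_model.bin \
  --eval_test \
  --do_lower_case \
  --max_seq_length 512 \
  --train_batch_size 24 \
  --learning_rate 2e-5 \
  --num_train_epochs 4.0 \
  --output_dir results_v2/semeval_result/QA_B \
  --seed 123
```

Since BERT fine-tuning is known to be sensitive to initialization, trying a few seeds and keeping the best run is a common practice when reproducing reported numbers.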