Dear Editor,
My first step is full-precision finetuning; I then set quant_mode: true and carry out the integer-only finetuning. When I test the integer-only finetuned model on MRPC, the result is very bad. Could you give some guidance? (When I test on an MRPC sample, the output is tensor([[0.5003, 0.4997]], grad_fn=).)
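For context, my two-stage recipe looks roughly like the sketch below (the kssteven/ibert-roberta-base checkpoint name and the local save paths are only placeholders for what I actually use):

import torch
from transformers import IBertConfig, IBertForSequenceClassification

# Stage 1: full-precision finetuning on MRPC with quant_mode disabled.
# "kssteven/ibert-roberta-base" and the paths below are placeholders.
config = IBertConfig.from_pretrained(
    "kssteven/ibert-roberta-base", quant_mode=False, num_labels=2
)
model = IBertForSequenceClassification.from_pretrained(
    "kssteven/ibert-roberta-base", config=config
)
# ... run the usual MRPC finetuning loop, then save to ./mrpc-fp32 ...

# Stage 2: integer-only finetuning, starting from the full-precision
# MRPC checkpoint but with quant_mode enabled.
config = IBertConfig.from_pretrained("./mrpc-fp32", quant_mode=True)
model = IBertForSequenceClassification.from_pretrained("./mrpc-fp32", config=config)
# ... finetune again on MRPC and save the integer-only model ...

The config of the resulting integer-only model is pasted below.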
{
  "_name_or_path": "/home/rram/storage/cailei/nlp_project/fine_tune/standard_ibert_weights/ibert-roberta-base",
  "architectures": [
    "IBertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "finetuning_task": "mrpc",
  "force_dequant": "none",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "not_equivalent",
    "1": "equivalent"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "equivalent": 1,
    "not_equivalent": 0
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "ibert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "quant_mode": true,
  "tokenizer_class": "RobertaTokenizer",
  "torch_dtype": "int8",
  "transformers_version": "4.12.0.dev0",
  "type_vocab_size": 1,
  "vocab_size": 50265
}
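The failing single-sample check I run looks roughly like this (the checkpoint path and the sentence pair are placeholders; I get near-uniform probabilities like the ones above for every pair I try):

import torch
from transformers import RobertaTokenizer, IBertForSequenceClassification

# Placeholder path to the integer-only finetuned MRPC checkpoint.
ckpt = "./mrpc-int8"

tokenizer = RobertaTokenizer.from_pretrained(ckpt)
model = IBertForSequenceClassification.from_pretrained(ckpt)
model.eval()

# One MRPC-style sentence pair (equivalent / not_equivalent).
sent1 = "The company said the quarter was strong."
sent2 = "The firm reported strong quarterly results."

inputs = tokenizer(sent1, sent2, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # near-uniform output, e.g. tensor([[0.5003, 0.4997]])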