Example in BertForSequenceClassification() conflicts with the API #54
Comments
Hi, firstly, thanks for the great work! But I encountered two problems when using it:

1. `UnicodeDecodeError: 'gbk' codec can't decode byte 0x85 in position 4527: illegal multibyte sequence` (the same problem as issue #52) when I execute `BertTokenizer.from_pretrained('bert-base-uncased')`, although `BertForNextSentencePrediction.from_pretrained('bert-base-uncased')` runs successfully.
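As a minimal sketch of a possible workaround, assuming the error comes from the vocab file being read with the platform default codec instead of UTF-8 (the local file name below is hypothetical):

```python
# Assumption: the UnicodeDecodeError occurs because the UTF-8 vocab file
# is opened with the platform default encoding (gbk on a Chinese-locale
# Windows install). Forcing UTF-8 when reading it avoids the error.
# "bert-base-uncased-vocab.txt" is a hypothetical local path to the
# downloaded vocabulary file.
with open("bert-base-uncased-vocab.txt", "r", encoding="utf-8") as f:
    vocab = {token.rstrip("\n"): index for index, token in enumerate(f)}
```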
2. In `pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py`, line 761, the docstring reads:

```
`token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length]
with the token type indices selected in [0, 1]. Type 0 corresponds to a `sentence A`
and type 1 corresponds to a `sentence B` token (see BERT paper for more details).
```
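For reference, the convention that docstring describes would produce segment ids like the following; this is a minimal sketch with made-up token ids, not an example taken from the library itself:

```python
import torch

# tokens:          [CLS]  how   are  [SEP] fine thanks  you [SEP]
# (token ids below are made up for illustration only)
input_ids = torch.tensor([[101, 2129, 2024, 102, 2986, 4283, 2017, 102]])

# Type 0 marks sentence A (including [CLS] and its [SEP]);
# type 1 marks sentence B (including the final [SEP]).
token_type_ids = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1]])
```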