example in BertForSequenceClassification() conflicts with the api #54

Closed
labixiaoK opened this issue Nov 24, 2018 · 1 comment

@labixiaoK
Hi, first of all, congratulations on the great work. I ran into two problems while using it:

1. `UnicodeDecodeError: 'gbk' codec can't decode byte 0x85 in position 4527: illegal multibyte sequence`, the same problem as issue #52, when I execute `BertTokenizer.from_pretrained('bert-base-uncased')`. However, `BertForNextSentencePrediction.from_pretrained('bert-base-uncased')` runs successfully. >.<
2. In `pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py`, the docstring at line 761 says:

```
token_type_ids: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token
types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to
a `sentence B` token (see BERT paper for more details).
```

but in the example that follows, **line 784** has `token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])`. Why does the `2` appear? I am confused. Also, is a pattern like `0, 1, 0` correct, or should it look like `[0 0 0 0 0 0 1 1 1 1 1 1]`, that is, continuous 0s followed by continuous 1s?

Thank you.
@thomwolf
Member
Hi,
(1) is fixed on master. I will publish a new release with the fixes on pip soon; in the meantime you can install from source if you want.
I fixed the typo in the docstring you mention in (2), thanks. It should be a 1 instead of a 2.
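For anyone landing here later, a minimal sketch of the intended usage (the sentence pair, the variable names, and `num_labels=2` below are illustrative, not the docstring's own example): `token_type_ids` only take the values 0 and 1, with 0 for every sentence-A token (including `[CLS]` and the first `[SEP]`) and 1 for every sentence-B token, so in practice they form a run of 0s followed by a run of 1s.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
model.eval()

# Illustrative sentence pair (not taken from the docstring).
text_a = "Who was Jim Henson ?"
text_b = "Jim Henson was a puppeteer"
tokens = (['[CLS]'] + tokenizer.tokenize(text_a) + ['[SEP]']
          + tokenizer.tokenize(text_b) + ['[SEP]'])
input_ids = torch.LongTensor([tokenizer.convert_tokens_to_ids(tokens)])

# Segment ids: 0 for [CLS], sentence A and the first [SEP]; 1 for the rest,
# i.e. a contiguous pattern like [0, 0, ..., 0, 1, 1, ..., 1].
len_a = len(tokenizer.tokenize(text_a)) + 2  # [CLS] + sentence A + [SEP]
token_type_ids = torch.LongTensor([[0] * len_a + [1] * (len(tokens) - len_a)])

with torch.no_grad():
    logits = model(input_ids, token_type_ids)
```

The token type embedding will technically accept any mix of 0s and 1s, but BERT is pretrained with contiguous segments, so a row like `[0, 0, 1]` matches the docstring while `[0, 1, 0]` does not correspond to a meaningful A/B split, and a `2` is out of range for the token type embedding (`type_vocab_size` defaults to 2).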
