Add RoCBert support for BetterTransformer #542

Merged

Conversation

@shogohida (Contributor) commented Dec 3, 2022

What does this PR do?

Adds RoCBert support for BetterTransformer.

Fixes huggingface/transformers#20372
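
For context, a minimal usage sketch of the feature this PR enables (the checkpoint name below is an illustrative assumption, not taken from this PR):

    from transformers import AutoModel
    from optimum.bettertransformer import BetterTransformer

    # Load a RoCBert checkpoint (name assumed for illustration).
    model = AutoModel.from_pretrained("weiweishi/roc-bert-base-zh")

    # Swap supported encoder layers for BetterTransformer fastpath layers.
    # keep_original_model=True returns a converted copy and leaves `model` intact.
    bt_model = BetterTransformer.transform(model, keep_original_model=True)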

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@shogohida marked this pull request as ready for review December 3, 2022 14:16
@HuggingFaceDocBuilderDev commented Dec 3, 2022

The documentation is not available anymore as the PR was closed or merged.

@shogohida (Contributor, Author) commented:

I got the following error, but I don't understand it:

https://github.com/huggingface/optimum/actions/runs/3608885843/jobs/6081897113
https://github.com/huggingface/optimum/actions/runs/3608885843/jobs/6081897157

 File "/Users/runner/work/optimum/optimum/tests/bettertransformer/testing_bettertransformer_utils.py", line 110, in test_raise_autocast
    _ = bt_model(**inputs)
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/transformers/models/roc_bert/modeling_roc_bert.py", line 1041, in forward
    embedding_output = self.embeddings(
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/transformers/models/roc_bert/modeling_roc_bert.py", line 282, in forward
    embedding_in = self.LayerNorm(embedding_in)
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/Users/runner/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type BFloat16 but found Float
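
The failure looks like a plain input/weight dtype mismatch in layer_norm. Here is a minimal sketch of that mismatch, assuming bfloat16 activations hitting float32 LayerNorm parameters; this is not the test code, and the exact behavior and message vary across PyTorch versions:

    import torch

    ln = torch.nn.LayerNorm(8)                    # parameters stay in float32
    x = torch.randn(2, 8, dtype=torch.bfloat16)   # activations in bfloat16

    # On PyTorch builds that require matching dtypes in layer_norm, this
    # raises a RuntimeError like "expected scalar type BFloat16 but found Float".
    out = ln(x)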

@younesbelkada (Contributor) left a comment:

Thanks so much for adding BetterTransformer support for this new architecture!
Since RoCBert uses exactly the same structure as BERT, we can leverage that and directly use BertLayerBetterTransformer as the converted layer :D
Regarding the failing test, let me dig a little and get back to you.
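
A sketch of what that reuse could look like, assuming the converted-layer mapping in optimum/bettertransformer/models/__init__.py is a dict keyed by the transformers layer class name (the dict name below is illustrative, not the actual identifier):

    from .encoder_models import BertLayerBetterTransformer

    # Illustrative name; the real mapping structure in the module may differ.
    BETTERTRANSFORMER_LAYER_MAPPING = {
        "BertLayer": BertLayerBetterTransformer,
        # RoCBert's encoder layer mirrors BERT's, so it can reuse the same
        # converted layer instead of defining a RoCBertLayerBetterTransformer.
        "RoCBertLayer": BertLayerBetterTransformer,
    }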

Review comments were left on optimum/bettertransformer/models/encoder_models.py and optimum/bettertransformer/models/__init__.py (since resolved).
@shogohida (Contributor, Author) commented:

Hi @younesbelkada
Thanks for your comment! I deleted RoCBertLayerBetterTransformer as you suggested. I then got another error:

FAILED onnxruntime/test_optimization.py::ORTOptimizerTest::test_compare_original_seq2seq_model_with_optimized_model_7 - FileNotFoundError: [Errno 2] No such file or directory: '/Users/runner/.cache/huggingface/hub/hf-internal-testing/tiny-random-onnx-mt5/decoder_model_o2_cpu.onnx'

@younesbelkada (Contributor) commented:

Hi @shogohida
Don't worry about this test, as it is flaky ;) I'll look into the failing BT test now.

@shogohida (Contributor, Author) commented:

There is another error that I don't understand...

ERROR: test_inference_speed (test_bettertransformer_encoder.BetterTransformersEncoderTest)
The converted models should be at least slightly faster than the native
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/runner/work/optimum/optimum/tests/bettertransformer/test_bettertransformer_encoder.py", line 169, in test_inference_speed
    _ = bt_model(input_ids, attention_mask=attention_mask)
  File "/opt/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1034, in forward
    pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
  File "/opt/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/hostedtoolcache/Python/3.8.14/x64/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 661, in forward
    first_token_tensor = hidden_states[:, 0]
IndexError: index 0 is out of bounds for dimension 1 with size 0

@fxmarty (Contributor) commented Dec 8, 2022

@shogohida Thanks for the work! This is a flaky test, fixed in #564. I reran the workflow and it should be fine.

@fxmarty closed this in #564 Dec 8, 2022
@fxmarty reopened this Dec 8, 2022
@fxmarty (Contributor) commented Dec 8, 2022

Yet another failing test:

======================================================================
ERROR: test_raise_autocast (test_bettertransformer_encoder.BetterTransformersEncoderTest)
A tests that checks if the conversion raises an error if the model is run under
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/optimum/optimum/tests/bettertransformer/testing_bettertransformer_utils.py", line 141, in test_raise_autocast
    _ = bt_model(**inputs)
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/transformers/models/roc_bert/modeling_roc_bert.py", line 1041, in forward
    embedding_output = self.embeddings(
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/transformers/models/roc_bert/modeling_roc_bert.py", line 282, in forward
    embedding_in = self.LayerNorm(embedding_in)
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type BFloat16 but found Float

@younesbelkada (Contributor) commented:

This failing test has to do with autocast and RoCBert. I think the fix should go on the transformers side, or we can just skip this test for RoCBert.

@shogohida (Contributor, Author) commented:

Thanks for your comments, guys! So what should I do? Can we skip the failing test, as Younes suggested?

@younesbelkada (Contributor) commented:

Hi @shogohida
Thanks so much for the heads-up!
Yes, let's skip the test for now. Can you create a separate testing class for RoCBert (check what we did for Vilt, for example)? Let me know if I can help!

@shogohida (Contributor, Author) commented:

@younesbelkada
Sure! I need to create a separate test class like this, right?

I'll let you know if I get stuck!

@younesbelkada (Contributor) commented:

This is correct, please proceed as suggested ;-)

@shogohida force-pushed the add-rocbert-support-for-bettertransformer branch from 74af5d4 to 92f024f January 8, 2023 00:55
@shogohida (Contributor, Author) commented:

@younesbelkada
Sorry for taking so long. I added a separate test class for RoCBert, but I didn't know how to change prepare_inputs_for_class... I read other parts of the code but couldn't find a solution, so I just copied from BetterTransformersEncoderTest.

61ff473

@fxmarty (Contributor) commented Jan 8, 2023

Hi @shogohida, thanks for working on it! Maybe

class BetterTransformersRoCBertTest(BetterTransformersEncoderTest):
    all_models_to_test = ["path-to-tiny-rocbert-model-here"]
    
    # unrelated issue with torch.amp.autocast with rocbert (expected scalar type BFloat16 but found Float)
    def test_raise_autocast(self):
        pass

would work
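
An equivalent variant (a sketch, not from the thread) that makes the skip explicit via unittest's decorator, keeping the placeholder model path from the suggestion above:

    import unittest

    class BetterTransformersRoCBertTest(BetterTransformersEncoderTest):
        all_models_to_test = ["path-to-tiny-rocbert-model-here"]

        # unrelated torch.amp.autocast issue with RoCBert
        # (expected scalar type BFloat16 but found Float)
        @unittest.skip("autocast dtype mismatch with RoCBert")
        def test_raise_autocast(self):
            pass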

@shogohida (Contributor, Author) commented Jan 8, 2023

Hi @fxmarty
Thanks for your comment! I changed the code as you indicated.

228aa15

It seems I removed a review request for you, but it was an accident when I requested a review from Younes 🙏

@shogohida requested review from younesbelkada and removed the request for fxmarty January 8, 2023 22:04
@shogohida (Contributor, Author) commented:

Still facing the same error:

https://github.com/huggingface/optimum/actions/runs/3882552677/jobs/6630181175
https://github.com/huggingface/optimum/actions/runs/3882552677/jobs/6630181346

Traceback (most recent call last):
  File "/Users/runner/work/optimum/optimum/tests/bettertransformer/testing_bettertransformer_utils.py", line 141, in test_raise_autocast
    _ = bt_model(**inputs)
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/transformers/models/roc_bert/modeling_roc_bert.py", line 1041, in forward
    embedding_output = self.embeddings(
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/transformers/models/roc_bert/modeling_roc_bert.py", line 282, in forward
    embedding_in = self.LayerNorm(embedding_in)
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/Users/runner/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type BFloat16 but found Float

@fxmarty (Contributor) commented Jan 12, 2023

Thank you for your contribution!

@fxmarty merged commit b412390 into huggingface:main Jan 12, 2023
@shogohida deleted the add-rocbert-support-for-bettertransformer branch January 12, 2023 23:19
@shogohida (Contributor, Author) commented:

Thanks for your review! It took time, but it was merged in the end. This was my first issue, so I hope to contribute more!

Successfully merging this pull request may close: Community contribution - BetterTransformer integration for more models!
4 participants