Add type annotations for Beit, ViT and Deit models - Pytorch #16092
Conversation
- At `BeitEmbeddings` - on the `forward` method the return is `PatchEmbeddings`.
- At `BeitSelfAttention` - on the `forward` method, `relative_position_bias` is `Optional[BeitRelativePositionBias]`.
- At `BeitSelfOutput` - on the `forward` method, I don't know what the type of `gamma` is (`Optional[float]`?). Also, I couldn't find where this parameter is used.
- At `BeitAttention` - on the `forward` method, `relative_position_bias` is `Optional[BeitRelativePositionBias]`.
- At `BeitLayer` - on the `forward` method, `relative_position_bias` is `Optional[BeitRelativePositionBias]`.
- At `BeitRelativePositionBias` - on the `forward` method the return is `BeitRelativePositionBias`.
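For types like these that refer to a class declared later in the same module, a string-literal ("forward reference") annotation works; here is a minimal sketch of the idea with stubbed-out bodies, not the actual Beit implementation:

```python
# Minimal sketch only: a quoted annotation lets forward() refer to
# BeitRelativePositionBias even though that class is defined further down
# in the file. The method bodies here are stubs, not the real Beit code.
from typing import Optional

import torch
from torch import nn


class BeitSelfAttention(nn.Module):
    def forward(
        self,
        hidden_states: torch.Tensor,
        relative_position_bias: Optional["BeitRelativePositionBias"] = None,
    ) -> torch.Tensor:
        # The quoted name is resolved lazily, so the reference is legal here.
        return hidden_states


class BeitRelativePositionBias(nn.Module):
    def forward(self) -> torch.Tensor:
        return torch.zeros(1)
```

(Adding `from __future__ import annotations` at the top of the module would make every annotation lazy, so the quotes could be dropped.)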
The documentation is not available anymore as the PR was closed or merged.
* 📝 first draft of audio/vision guides
* ✨ make fixup
* 🖍 fix typo
* 🖍 close parentheses
* 🖍 apply feedback
* 🖍 apply feedback, make fixup
* 🖍 more fixup for perceiver
* 🖍 apply feedback
* ✨ make fixup
* 🖍 fix data collator
* gather z3 params for new_lm_head * Update src/transformers/modeling_utils.py Co-authored-by: Stas Bekman <[email protected]> Co-authored-by: Stas Bekman <[email protected]>
* [WIP] add support for bf16 mode
* prep for bf16
* prep for bf16
* fix; zero2/bf16 is ok
* check bf16 is available
* test fixes
* enable zero3_bf16
* config files
* docs
* split stage_dtype; merge back to non-dtype-specific config file
* fix doc
* cleanup
* cleanup
* bfloat16 => bf16 to match the PR changes
* s/zero_gather_fp16_weights_on_model_save/zero_gather_16bit_weights_on_model_save/; s/save_fp16_model/save_16bit_model/
* test fixes/skipping
* move
* fix
* Update docs/source/main_classes/deepspeed.mdx Co-authored-by: Sylvain Gugger <[email protected]>
* backticks
* cleanup
* cleanup
* cleanup
* new version
* add note about grad accum in bf16

Co-authored-by: Sylvain Gugger <[email protected]>
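As a rough illustration of what that bf16/ZeRO work enables on the user side (an assumption-laden sketch, not the configuration added by the commit), a minimal DeepSpeed config might look like the dict below; in the HF integration such a dict, or an equivalent JSON file, is handed to the Trainer's DeepSpeed support:

```python
# Hedged sketch, not the exact config from the commit: enable bfloat16 together
# with ZeRO stage 3. Only a minimal subset of keys is shown; "auto" lets the
# HF Trainer integration fill in the value from its own arguments.
ds_config = {
    "bf16": {"enabled": True},          # bfloat16 mixed precision instead of fp16
    "zero_optimization": {"stage": 3},  # ZeRO stage 3 parameter partitioning
    "train_micro_batch_size_per_gpu": "auto",
}
```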
Hi, thank you for this! I won't merge it yet because I'm going to ask the other team members on Monday what we think about the missing annotations - it's possible to use forward references for this, but I want to make sure that plays well with our code analysis tools before we do that.
Already have a command that runs something like
…16089) * Add missing type hints for all flavors of LayoutLMv2 PyTorch models. * Fixed return types and added type hints for LayoutLM. * Fix removed arguments which breaks tests.
* Make Camembert great again! * Add Camembert to TensorFlow ONNX tests
… Encoder-Decoder Checkpoints (#16056) * Fix Loading of Flax(Speech)EncoderDecoderModel kwargs from PreTrained Encoder-Decoder Checkpoints * change wording
Configuration `tied-embeddings-all` implies `tied-embeddings-src`
* Make TF pt-tf equivalence test more aggressive
* Fix for TFConvNextModelTest and TFTransfoXLModelTest
* fix kwargs for outputs
* clean-up
* Add docstring for check_outputs()
* remove: need to rename encoder-decoder
* clean-up
* send PyTorch things to the correct device
* Add back the accidentally removed test case in test_pt_tf_model_equivalence()
* Fix: change to tuple before calling check_outputs()
* Fix: tfo could be a list
* use to_tuple()
* allow tfo only to be tuple or tensor
* allow tfo to be list or tuple for now + style change
* minor fix
* remove np.copy and update comments
* tfo -> tf_output, same for pt
* Add more detailed comment
* remove the incorrect comment

Co-authored-by: ydshieh <[email protected]>
Co-authored-by: ydshieh <[email protected]>
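To illustrate the idea behind that equivalence test (a hypothetical standalone helper, not the repository's actual `check_outputs`), the recursive comparison could look roughly like this:

```python
# Hypothetical helper, not the test from the commit: recursively compare TF and
# PT model outputs after normalizing ModelOutput objects to tuples via to_tuple().
import numpy as np


def check_outputs(tf_output, pt_output, atol=1e-5):
    if hasattr(tf_output, "to_tuple"):
        tf_output = tf_output.to_tuple()
    if hasattr(pt_output, "to_tuple"):
        pt_output = pt_output.to_tuple()

    if isinstance(tf_output, (list, tuple)):
        # Walk nested outputs (hidden states, attentions, ...) pairwise.
        for tf_o, pt_o in zip(tf_output, pt_output):
            check_outputs(tf_o, pt_o, atol=atol)
    elif tf_output is not None and pt_output is not None:
        tf_np = np.asarray(tf_output)             # eager tf.Tensor converts via __array__
        pt_np = pt_output.detach().cpu().numpy()  # bring the PT tensor to host memory
        assert np.allclose(tf_np, pt_np, atol=atol), "PT/TF outputs diverge"
```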
* Change unpacking of TF mobilebert inputs to use decorator * Move unpack_inputs as the top decorator * make fixup Co-authored-by: ChienVM <[email protected]>
Co-authored-by: Niels Rogge <[email protected]>
* Spanish translation of the file training.mdx
* Settings - Spanish translation of the file training.mdx
* Latest changes to the Spanish translation of the training.mdx file
* Delete Hugging.mdx
* Last changes to the Spanish version of the training file
* Latest modifications
* Latest changes, document ready for PR
* Nits

Co-authored-by: Yhary Arias <[email protected]>
Co-authored-by: Omar U. Espejel <[email protected]>
Hi @johnnv1, I just confirmed with the team that forward references are fine for types that aren't defined yet!
…Text2 (#16083) * Fix checkpoint name in docstring example Co-authored-by: ydshieh <[email protected]> Co-authored-by: Patrick von Platen <[email protected]>
* Replace input_processing * move unpack_inputs
* First pass * Fixup * Fix broken tests * Make unpack_inputs the first decorator
* Add type hints for TFDistilBert * Update src/transformers/models/distilbert/modeling_tf_distilbert.py Co-authored-by: Matt <[email protected]>
* Can choose framework for ONNX export * Fix docstring
* Add type hints for LukeModel * Add type hints for entitypairclassification * Remove blank space Co-authored-by: bhavika <bhavika@debian-BULLSEYE-live-builder-AMD64>
* Add type hints for SqueezeBert PyTorch * fixed unused List err * style fixes
* Add missing type hints - ELECTRA TF * bool -> Optional[bool]
* Runtime -> Devel * Torch before DeepSpeed
…v1/transformers into add_type_annotations/vision_models
@Rocketknight1 after rebase upstream/master the
Hi @johnnv1, that rebase made this pretty tricky to review! It might be easier to close this and start a new PR and just copy the changes over? If there are classes that are copies then the original source of the copy should be changed, followed by
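For context on the "classes that are copies" remark, a schematic illustration (hypothetical class names, not the actual ViT/DeiT sources): transformers marks mechanical copies with a `# Copied from ...` comment, so type-hint changes belong in the original class and are then propagated to the copies by the repository's consistency tooling.

```python
# Schematic only: class names and the comment text are illustrative, not the real
# Beit/ViT/DeiT files. The "Copied from" marker tells the consistency tooling that
# DeitSelfOutputExample mirrors ViTSelfOutputExample, so annotations should be
# edited on the original and regenerated on the copy.
import torch
from torch import nn


class ViTSelfOutputExample(nn.Module):  # hypothetical "original" class
    def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
        return hidden_states


# Copied from ViTSelfOutputExample (in the real codebase the marker names the full
# module path, e.g. transformers.models.vit.modeling_vit.ViTSelfOutput)
class DeitSelfOutputExample(nn.Module):
    def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
        return hidden_states
```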
What does this PR do?
I added type annotations for the Beit, ViT and Deit models in PyTorch, as described in #16059.
@Rocketknight1
In the Beit model, some annotations are missing because the type is another class in the same file that is declared later. Also, in the `BeitSelfOutput` class, I don't know the correct type for the `gamma` parameter of the `forward` method -- `gamma` and `input_tensor` seem not to be used, but I prefer not to modify this now.
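To make the open question about `gamma` concrete, here is one way the signature could be annotated while its type is unclear -- a hedged, simplified sketch: the `__init__` arguments are invented for self-containment, and `Optional[torch.Tensor]` is a guess, not necessarily the annotation that was ultimately merged.

```python
# Hedged sketch of BeitSelfOutput with conservative hints; the constructor arguments
# are simplified placeholders and the Optional[torch.Tensor] hint for `gamma` is a
# guess -- the parameter appears unused, so it is annotated rather than removed.
from typing import Optional

import torch
from torch import nn


class BeitSelfOutput(nn.Module):
    def __init__(self, hidden_size: int = 768, dropout_prob: float = 0.0) -> None:
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.dropout = nn.Dropout(dropout_prob)

    def forward(
        self,
        hidden_states: torch.Tensor,
        input_tensor: torch.Tensor,
        gamma: Optional[torch.Tensor] = None,  # type guessed; parameter seems unused
    ) -> torch.Tensor:
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        return hidden_states
```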