Fix llava next tie_word_embeddings config (huggingface#30640)
* fix llava next embedding

* add docstring

* Update src/transformers/models/llava_next/configuration_llava_next.py

Co-authored-by: NielsRogge <[email protected]>

---------

Co-authored-by: NielsRogge <[email protected]>
2 people authored and zucchini-nlp committed May 10, 2024
1 parent 4fdce25 commit 1d65c78
Showing 1 changed file with 4 additions and 1 deletion.
src/transformers/models/llava_next/configuration_llava_next.py:

@@ -55,6 +55,8 @@ class LlavaNextConfig(PretrainedConfig):
         image_grid_pinpoints (`List`, *optional*, defaults to `[[336, 672], [672, 336], [672, 672], [1008, 336], [336, 1008]]`):
             A list of possible resolutions to use for processing high resolution images. Each item in the list should be a tuple or list
             of the form `(height, width)`.
+        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+            Whether the model's input and output word embeddings should be tied.

     Example:
@@ -90,6 +92,7 @@ def __init__(
         vision_feature_select_strategy="default",
         vision_feature_layer=-2,
         image_grid_pinpoints=None,
+        tie_word_embeddings=False,
         **kwargs,
     ):
         self.ignore_index = ignore_index
@@ -138,4 +141,4 @@ def __init__(

         self.text_config = text_config

-        super().__init__(**kwargs)
+        super().__init__(tie_word_embeddings=tie_word_embeddings, **kwargs)
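The pattern behind the fix can be shown with a minimal, self-contained sketch. The class names below are hypothetical stand-ins, not the actual transformers classes: a config subclass that accepts `tie_word_embeddings` but does not forward it to its base class silently loses the setting, because the base falls back to its own default (in `PretrainedConfig`, that default is `True`).

```python
class BaseConfig:
    """Stand-in for a base config whose default is tie_word_embeddings=True."""

    def __init__(self, tie_word_embeddings=True, **kwargs):
        self.tie_word_embeddings = tie_word_embeddings


class BrokenConfig(BaseConfig):
    """Mirrors the pre-fix behavior: the argument is accepted but never forwarded."""

    def __init__(self, tie_word_embeddings=False, **kwargs):
        # tie_word_embeddings is silently dropped; the base default wins.
        super().__init__(**kwargs)


class FixedConfig(BaseConfig):
    """Mirrors the patched behavior: the value is forwarded explicitly."""

    def __init__(self, tie_word_embeddings=False, **kwargs):
        super().__init__(tie_word_embeddings=tie_word_embeddings, **kwargs)


print(BrokenConfig(tie_word_embeddings=False).tie_word_embeddings)  # True
print(FixedConfig(tie_word_embeddings=False).tie_word_embeddings)   # False
```

With the fix, the user-supplied value survives the call chain, which is what this commit restores for `LlavaNextConfig`.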
