Fix LLaVA-NeXT handling of non-square images #2097

Closed

danieldk wants to merge 2 commits from the bugfix/llava-unpad branch

Conversation

danieldk (Member)

What does this PR do?

We could get shape mismatches with non-square images, resulting in an
exception that crashed the backend.

When post-processing an image, features that correspond to padding are
removed if padding was applied. The same logic is mirrored in the calculation
of the number of image tokens, so that the correct number of slots is
reserved. However, the two did not agree: due to rounding, the image
post-processing could exclude fewer padding features than the slot
calculation assumed. This change updates the image token calculation to
match the image post-processing.
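
To make the rounding concrete, here is a minimal sketch (illustrative only, not the actual TGI code) of how many feature rows survive unpadding when the post-processing uses a centred slice, as in LLaVA-NeXT-style unpadding:

# Illustrative sketch only; not the actual TGI implementation.
def kept_feature_rows(grid_height: int, resized_height: int) -> int:
    # The post-processing removes padding with a centred slice,
    # features[padding : grid_height - padding], so the number of surviving
    # rows is grid_height - 2 * padding rather than resized_height itself.
    padding = (grid_height - resized_height) // 2
    return grid_height - 2 * padding

# Example: grid_height = 27, resized_height = 20 gives padding = 3 and keeps
# 21 rows, one more than resized_height. A slot calculation based on
# resized_height alone would therefore reserve one row of image tokens too few.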

Fixes #1777.

While investigating this, I found another issue: the upstream code contains
a bug that swaps the height and width dimensions after computing the image
grid shape. Since the models were also trained with this bug, we have to
reproduce it to ensure that we generate the same features.
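
As a purely hypothetical illustration of what such a swap does (placeholder names below, not the real upstream code):

# Hypothetical illustration only; placeholder names, not the upstream code.
def grid_shape(best_resolution_hw: tuple[int, int], patch_size: int) -> tuple[int, int]:
    height, width = best_resolution_hw
    return height // patch_size, width // patch_size

# A caller that unpacks the result in the opposite order, e.g.
#     num_patch_width, num_patch_height = grid_shape((672, 336), 336)
# treats a 2-rows-by-1-column grid as 1 row by 2 columns. Since the published
# checkpoints were trained with the swapped layout, reproducing the same
# ordering is needed to obtain identical features.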

Draft: needs to be rebased after #2080.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

danieldk added 2 commits June 20, 2024 09:21
Before this change, the number of reserved image tokens was not the
same as the number of images. Fixes #2029.

While at it, also remove all the image token handling duplication
in `prepare_input`.
@@ -39,15 +39,14 @@ def get_anyres_image_grid_shape(image_size, grid_pinpoints, patch_size):
     return height // patch_size, width // patch_size


-def image_text_replacement(image_input, config, image_id) -> str:
+def image_text_replacement(processor, image_input, config, image_id) -> str:
     if config.model_type == "idefics2":
         # TODO technically depends on image splitting which is not implemented.
Collaborator

Suggested change
-        # TODO technically depends on image splitting which is not implemented.

         image_seq_len = 64
         image_str = f"<fake_token_around_image>{'<image>' * image_seq_len}<fake_token_around_image>"
+        if processor.image_processor.do_image_splitting:
+            image_str *= 5
Collaborator

https://github.com/huggingface/transformers/blob/0dd65a03198424a41ec6948e445c313e9f292939/src/transformers/models/idefics2/processing_idefics2.py#L191-L193

We still need that.

Instead of doing it that inefficiently, maybe we could just conditionally prefix/postfix (also to make the intent clearer). Wdyt?
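
One possible reading of this suggestion, sketched under assumptions (the function name and token layout below are not taken from the PR):

# Sketch of one possible reading of "conditionally prefix/postfix"; the
# function name and structure are assumptions, not what the PR implements.
def idefics2_image_string(do_image_splitting: bool, image_seq_len: int = 64) -> str:
    fake, image = "<fake_token_around_image>", "<image>"
    # With image splitting, idefics2 encodes 4 crops plus the original image
    # (5 blocks of image tokens); without it, a single block.
    blocks = 5 if do_image_splitting else 1
    # Put fake tokens only at the block boundaries instead of repeating the
    # whole wrapped string, which avoids back-to-back fake tokens.
    return fake + fake.join(image * image_seq_len for _ in range(blocks)) + fake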

@Narsil Narsil (Collaborator) left a comment

Overall looks great.

Ideally we add a test for it (maybe in an existing test).
At the very least, a unit test for the slots + text/input_ids.
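
One possible shape for such a test, sketched with pytest; the two helpers are stand-ins rather than the actual TGI functions: one slices a fake feature grid the way the post-processing does, the other uses the arithmetic a slot calculation would rely on, and the test checks that they agree for non-square, padded cases.

import numpy as np
import pytest


def rows_kept_by_postprocessing(grid_height: int, resized_height: int) -> int:
    # Mimic the post-processing: a centred slice of a fake feature grid.
    features = np.zeros((grid_height, 4))
    padding = (grid_height - resized_height) // 2
    return features[padding : grid_height - padding].shape[0]


def rows_reserved_for_slots(grid_height: int, resized_height: int) -> int:
    # The arithmetic a slot calculation would use; it has to mirror the slice
    # above, since resized_height alone is one row short whenever
    # grid_height - resized_height is odd.
    padding = (grid_height - resized_height) // 2
    return grid_height - 2 * padding


@pytest.mark.parametrize("grid_height", range(1, 30))
@pytest.mark.parametrize("resized_height", range(1, 30))
def test_slots_match_postprocessing(grid_height, resized_height):
    if resized_height > grid_height:
        pytest.skip("no padding to remove in this direction")
    assert rows_reserved_for_slots(grid_height, resized_height) == rows_kept_by_postprocessing(
        grid_height, resized_height
    )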

@danieldk (Member, Author)

Closing; this was merged as part of #2080.

danieldk closed this Jun 27, 2024
danieldk deleted the bugfix/llava-unpad branch June 27, 2024 14:38

Successfully merging this pull request may close these issues.

Llava Next crashes on certain image sizes