The last unit test of the QDQBert model, "test_inference_no_head_absolute_embedding", did not pass when using official safetensors #31486

Closed
2 of 4 tasks
jiangyichen830 opened this issue Jun 19, 2024 · 3 comments

Comments


jiangyichen830 commented Jun 19, 2024

System Info

  • transformers version: 4.41.2
  • Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
  • Python version: 3.9.17
  • Huggingface_hub version: 0.23.2
  • Safetensors version: 0.4.3
  • Accelerate version: 0.21.0
I want to use QDQBert, but I found that when I copied tests/models/qdqbert/test_modeling_qdqbert.py from the commit "update ruff version (https://github.com/huggingface/transformers/pull/30932)", the test named "test_inference_no_head_absolute_embedding" did not pass and showed a large difference; the tolerance had to be relaxed from 1e-4 to 7e-2 for it to pass. I didn't make any changes, I just ran pytest -v. I want to know what caused such a large accuracy error.

Who can help?

@ArthurZucker @younesbelkada @amyeroberts
Hope I can get your help solving this problem. Thanks!

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Check out a code version that still includes the QDQBert model, such as the commit mentioned above. Then run pytest -v tests/models/qdqbert/ (and remember to set RUN_SLOW=1) and you will see the error.
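For reference, here is a minimal sketch of the comparison that test performs, assuming pytorch-quantization is installed. The checkpoint name and input ids below are assumptions modeled on similar transformers integration tests, not copied from the test; see test_modeling_qdqbert.py for the exact values.

```python
import torch
from transformers import QDQBertModel

# Assumed checkpoint; the actual test may reference a different one.
model = QDQBertModel.from_pretrained("bert-base-uncased")

# Illustrative inputs in the style of the BERT integration tests.
input_ids = torch.tensor([[0, 345, 232, 328, 740, 140, 1695, 69, 6078, 1588, 2]])
attention_mask = torch.tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])

with torch.no_grad():
    output = model(input_ids, attention_mask=attention_mask)[0]

# Expected values as quoted in this issue (see Expected behavior below).
expected_slice = torch.tensor(
    [[[0.4571, -0.0735, 0.8594], [0.2774, -0.0278, 0.8794], [0.3548, -0.0473, 0.7593]]]
)

# The test asserts agreement within atol=1e-4 on a 3x3 slice of the output.
print(torch.allclose(output[:, 1:4, 1:4], expected_slice, atol=1e-4))
```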

Expected behavior

The result of my local run is [[[0.4352, -0.0278, 0.8552], [0.2360, -0.0271, 0.8560], [0.2916, -0.0897, 0.7542]]], but the expected result in the source code is [[[0.4571, -0.0735, 0.8594], [0.2774, -0.0278, 0.8794], [0.3548, -0.0473, 0.7593]]]. There is a huge difference between the two results.
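For what it's worth, the gap between the two slices quoted above can be quantified directly; this is just arithmetic on the values in this issue, not part of the test:

```python
import torch

local = torch.tensor(
    [[[0.4352, -0.0278, 0.8552], [0.2360, -0.0271, 0.8560], [0.2916, -0.0897, 0.7542]]]
)
expected = torch.tensor(
    [[[0.4571, -0.0735, 0.8594], [0.2774, -0.0278, 0.8794], [0.3548, -0.0473, 0.7593]]]
)

# Maximum absolute difference is about 0.063, which is why the test only
# passes once atol is relaxed to roughly 7e-2, far above the default 1e-4.
print((local - expected).abs().max())
```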

@jiangyichen830 (Author)

These are two screenshots of my test results.
[screenshots: pytest failure output, showing a maximum difference around 6e-2]

@amyeroberts (Collaborator)

Hi @jiangyichen830, thanks for raising an issue!

QDQBert has been deprecated, so we won't be accepting updates to the model and will no longer run its tests.

The differences you're seeing could have a variety of causes, e.g. the hardware you're running on or other libraries installed in the environment.
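If you want to rule environment differences in or out, one hedged first step is to record the exact stack on each machine that produces different numbers; the snippet below only uses standard torch/transformers attributes. TF32 matmul on recent NVIDIA GPUs is one example of a setting that can shift float outputs by more than 1e-4.

```python
import torch
import transformers

# Record the stack that produced the numbers, for comparison across machines.
print("transformers:", transformers.__version__)
print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu")
print("tf32 matmul allowed:", torch.backends.cuda.matmul.allow_tf32)
```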

github-actions (bot)

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot closed this as completed Aug 3, 2024