Accelerate version: 0.21.0
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info
transformers version: 4.41.2

I want to use QDQBERT, but when I copied tests/models/qdqbert/test_modeling_qdqbert.py from the commit for "update ruff version" (https://github.com/huggingface/transformers/pull/30932), the test test_inference_no_head_absolute_embedding failed with a large numerical difference; the tolerance has to be loosened from 1e-4 to 7e-2 before it passes. I did not make any changes; I just ran pytest -v. I want to know what caused such a large accuracy error.
Who can help?
@ArthurZucker @younesbelkada @amyeroberts
Hope I can get your help to solve this problem. Thanks!
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)

Reproduction
Find a code version that still includes the QDQBERT model, such as the commit mentioned above, then run (RUN_SLOW=1 is required, otherwise the slow integration tests are skipped):

RUN_SLOW=1 pytest -v tests/models/qdqbert/

and you will see the failure.

Expected behavior
My local run produces [[[0.4352, -0.0278, 0.8552], [0.2360, -0.0271, 0.8560], [0.2916, -0.0897, 0.7542]]], but the expected result hard-coded in the test is [[[0.4571, -0.0735, 0.8594], [0.2774, -0.0278, 0.8794], [0.3548, -0.0473, 0.7593]]]. The difference between the two is far larger than the test's tolerance.
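For reference, the element-wise gap between the two slices can be checked directly. A minimal pure-Python sketch, with the nested lists copied from the outputs above:

```python
# Hidden-state slices copied from the report above.
local = [[0.4352, -0.0278, 0.8552],
         [0.2360, -0.0271, 0.8560],
         [0.2916, -0.0897, 0.7542]]
expected = [[0.4571, -0.0735, 0.8594],
            [0.2774, -0.0278, 0.8794],
            [0.3548, -0.0473, 0.7593]]

# Largest element-wise deviation between the two slices.
max_diff = max(abs(a - b)
               for row_a, row_b in zip(local, expected)
               for a, b in zip(row_a, row_b))
print(f"max abs diff: {max_diff:.4f}")  # 0.0632
```

The largest gap, about 6.3e-2, is far above the original 1e-4 tolerance and just under 7e-2, which matches the observation that the test only passes once the tolerance is loosened to 7e-2.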
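For context, test_inference_no_head_absolute_embedding compares a slice of the model's output hidden states against a hard-coded expected tensor using torch.allclose with a tight absolute tolerance (1e-4 in the test). A rough pure-Python stand-in for that check, applied to the two slices reported above (the allclose helper here is illustrative, not the torch API):

```python
def allclose(xs, ys, atol=1e-4):
    """Rough stand-in for torch.allclose on nested lists (absolute tolerance only)."""
    return all(abs(a - b) <= atol
               for row_a, row_b in zip(xs, ys)
               for a, b in zip(row_a, row_b))

local = [[0.4352, -0.0278, 0.8552], [0.2360, -0.0271, 0.8560], [0.2916, -0.0897, 0.7542]]
expected = [[0.4571, -0.0735, 0.8594], [0.2774, -0.0278, 0.8794], [0.3548, -0.0473, 0.7593]]

print(allclose(local, expected, atol=1e-4))  # False: fails at the test's default tolerance
print(allclose(local, expected, atol=7e-2))  # True: passes only after loosening to 7e-2
```

This reproduces the reported behavior numerically: the assertion can only succeed once the tolerance is relaxed by nearly three orders of magnitude.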