
Update igpu performance default transformers version from 4.36.2 to 4.37.0 #11841

Conversation

@Oscilloscope98 (Contributor) commented on Aug 19, 2024

Description

Update the default transformers version used by the iGPU performance tests from 4.36.2 to 4.37.0.
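
For context, a minimal sketch (an assumption, not code from this PR) of the kind of guard a perf harness could run to confirm the environment actually picked up the new default version; the check itself is hypothetical:

```python
# Hypothetical guard (not from this PR): confirm the perf environment
# resolved the new default transformers version after the upgrade.
from packaging.version import Version

import transformers

installed = Version(transformers.__version__)
if installed < Version("4.37.0"):
    raise RuntimeError(
        f"perf run expects transformers>=4.37.0, found {installed}"
    )
```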

@Oscilloscope98 changed the title from "Update igpu performance from transformers 4.36.2 to 4.37.0" to "Update igpu performance default transformers version from 4.36.2 to 4.37.0" on Aug 19, 2024
@Oscilloscope98 Oscilloscope98 merged commit 1ac3440 into intel-analytics:perf-transformers-437-test Aug 19, 2024
Oscilloscope98 added a commit that referenced this pull request Aug 20, 2024
Update performance test regarding updated default `transformers==4.37.0` (#11869)

* Update igpu performance from transformers 4.36.2 to 4.37.0 (#11841)

* upgrade arc perf test to transformers 4.37 (#11842)

* fix load low bit com dtype (#11832)

* feat: add mixed_precision argument on ppl longbench evaluation

* fix: delete extra code

* feat: upgrade arc perf test to transformers 4.37

* fix: add missing codes

* fix: keep perf test for qwen-vl-chat in transformers 4.36

* fix: remove extra space

* fix: resolve pr comment

* fix: add empty line

* fix: add pip install for spr and core test

* fix: delete extra comments

* fix: remove python -m for pip

* Revert "fix load low bit com dtype (#11832)"

This reverts commit 6841a9a.

---------

Co-authored-by: Zhao Changmin <[email protected]>
Co-authored-by: Jinhe Tang <[email protected]>

* add transformers==4.36 for qwen vl in igpu-perf (#11846)

* add transformers==4.36.2 for qwen-vl

* Small update

---------

Co-authored-by: Yuwen Hu <[email protected]>

* fix: remove qwen-7b on core test (#11851)

* fix: remove qwen-7b on core test

* fix: change delete to comment

---------

Co-authored-by: Jinhe Tang <[email protected]>

* replace filename (#11854)

* fix: remove qwen-7b on core test

* fix: change delete to comment

* fix: replace filename

---------

Co-authored-by: Jinhe Tang <[email protected]>

* fix: delete extra comments (#11863)

* Remove transformers installation for temp test purposes

* Small fix

* Small update

---------

Co-authored-by: Chu,Youcheng <[email protected]>
Co-authored-by: Zhao Changmin <[email protected]>
Co-authored-by: Jinhe Tang <[email protected]>
Co-authored-by: Zijie Li <[email protected]>
Co-authored-by: Chu,Youcheng <[email protected]>
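
Several commits in this squash keep Qwen-VL-Chat on transformers 4.36.2 while the rest of the iGPU perf matrix moves to 4.37.0 (#11846, #11851). A minimal sketch of that per-model pinning pattern, with hypothetical names throughout (the dict, helper, and model key below are illustrative and not taken from the ipex-llm CI):

```python
# Illustrative sketch of the per-model pinning described above: the default
# moves to 4.37.0, while Qwen-VL-Chat stays on 4.36.2. All names here are
# hypothetical, not from the ipex-llm workflows.
import subprocess
import sys

DEFAULT_TRANSFORMERS = "4.37.0"
TRANSFORMERS_PIN = {
    # per "add transformers==4.36.2 for qwen-vl" (#11846)
    "qwen-vl-chat": "4.36.2",
}

def install_transformers_for(model: str) -> None:
    """Install the transformers version pinned for `model`, else the default."""
    version = TRANSFORMERS_PIN.get(model, DEFAULT_TRANSFORMERS)
    # The commit "fix: remove python -m for pip" suggests the CI calls plain
    # `pip`; `sys.executable -m pip` is used here only to keep the sketch
    # portable across environments.
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", f"transformers=={version}"]
    )

install_transformers_for("qwen-vl-chat")  # -> pip install transformers==4.36.2
```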
hkvision pushed a commit that referenced this pull request Aug 20, 2024
* feat: update readme for ppl test

* fix: textual adjustments

* fix: textual adjustments

* Add ipex-llm npu option in setup.py (#11858)

* add ipex-llm npu release

* update example doc

* meet latest release changes

* optimize phi3 memory usage (#11867)

* Update `ipex-llm` default transformers version to 4.37.0 (#11859)

* Update default transformers version to 4.37.0

* Add dependency requirements for qwen and qwen-vl

* Temp fix transformers version for these not yet verified models

* Skip qwen test in UT for now as it requires transformers<4.37.0

* Update performance test regarding updated default `transformers==4.37.0` (#11869)

* Pytorch models transformers version update (#11860)

* yi sync

* delete 4.34 constraint

* delete 4.34 constraint

* delete 4.31 constraint

* delete 4.34 constraint

* delete 4.35 constraint

* added <=4.33.3 constraint

* added <=4.33.3 constraint

* switched to Chinese prompt

* Update compresskv model forward type logic (#11868)

* update

* fix

* Update local import for ppl (#11866)

Co-authored-by: jenniew <[email protected]>

* fix: textual adjustment

---------

Co-authored-by: SONG Ge <[email protected]>
Co-authored-by: Yishuo Wang <[email protected]>
Co-authored-by: Yuwen Hu <[email protected]>
Co-authored-by: Zhao Changmin <[email protected]>
Co-authored-by: Jinhe Tang <[email protected]>
Co-authored-by: Zijie Li <[email protected]>
Co-authored-by: Yina Chen <[email protected]>
Co-authored-by: RyuKosei <[email protected]>
Co-authored-by: jenniew <[email protected]>