Benchmarks - Add LLaMA-2 Models #668
base: main
Conversation
Please use python3 setup.py lint to check the format, and run python3 setup.py format to format the code.
@abuccts, can I get access to the unit test logs?
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #668 +/- ##
==========================================
- Coverage 85.77% 84.90% -0.87%
==========================================
Files 97 98 +1
Lines 6925 7116 +191
==========================================
+ Hits 5940 6042 +102
- Misses 985 1074 +89
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
LGTM, thanks! Please fix the UT failures with Python 3.10. And since the CUDA tests run on a K80, which is a very old GPU, we can skip the "cuda-unit-test" and just make sure "cpu-unit-test" passes.
Added the LLaMA benchmark (training and inference) in accordance with the existing PyTorch model implementations such as GPT-2, LSTM, etc.
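For readers unfamiliar with what such a model benchmark measures, here is a minimal, self-contained sketch of the training and inference steps for a LLaMA-2-style model. It uses Hugging Face transformers' LlamaConfig and LlamaForCausalLM with deliberately down-scaled, made-up hyperparameters and synthetic token batches; it is only an illustration of the general pattern, not the code added in this PR, which follows SuperBench's existing PyTorch model-benchmark structure.

# Minimal sketch of LLaMA-2-style train/inference steps on synthetic data.
# The tiny hyperparameters below are illustrative only, not real LLaMA-2 sizes.
import time

import torch
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=32000,
    hidden_size=512,        # LLaMA-2-7B uses 4096
    num_hidden_layers=4,    # LLaMA-2-7B uses 32
    num_attention_heads=8,
    intermediate_size=1376,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = LlamaForCausalLM(config).to(device)

batch_size, seq_len = 2, 128
input_ids = torch.randint(0, config.vocab_size, (batch_size, seq_len), device=device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Training step: forward pass with labels, backward pass, optimizer update.
model.train()
start = time.time()
outputs = model(input_ids=input_ids, labels=input_ids)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"train step: {time.time() - start:.3f}s, loss={outputs.loss.item():.3f}")

# Inference step: forward pass only, no gradients.
model.eval()
with torch.no_grad():
    start = time.time()
    model(input_ids=input_ids)
print(f"inference step: {time.time() - start:.3f}s")

A benchmark built on this pattern would repeat these steps many times, discard warm-up iterations, and report per-step latency or throughput for both the training and inference paths.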