fix model compression error #1043
Conversation
@pkulzy plz check the rocm implementation, thanks~
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##            devel    #1043   +/-  ##
=======================================
  Coverage   75.10%   75.10%
=======================================
  Files          87       87
  Lines        6953     6953
=======================================
  Hits         5222     5222
  Misses       1731     1731
please add the restriction to the known restrictions section of the doc
I'll address it.
* fix model compression error
* add doc for model compression limitation
During the code review process, I found that, due to the initial shared-memory settings inside the kernel tabulate_fusion_grad_fifth_order_polynomial, the GPU implementation of model compression silently required the last layer of the embedding net to be smaller than 128. No error message was emitted when this limit was exceeded, which posed a potential risk. This PR fixes the problem and adds an error message. The size of the last layer of the embedding net in model compression must now be less than 1024.