
Investigate storing results from ggml operations in F16 format #959

Closed
ggerganov opened this issue Apr 14, 2023 · 1 comment
Labels: help wanted · high priority · performance · research 🔬

Comments

ggerganov (Owner) commented Apr 14, 2023

Currently, all ggml operations return the results in F32 format.

The goal of this task is to see if there is an elegant way to add support for keeping the results in F16 format.
This would ideally be controlled by a parameter on the ggml_context and would also involve adding support for F16 operands in most of the existing operators, preferably without duplicating the entire codebase.
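A minimal sketch of what such a context-level switch could look like (hypothetical: the actual `ggml_init_params` struct in `ggml.h` has no such field, and the existing fields are abbreviated here):

```c
// Hypothetical sketch only -- not the current ggml API.
// Op constructors would consult result_type when allocating their dst tensors.
struct ggml_init_params {
    size_t mem_size;   // bytes of memory reserved for the context
    void * mem_buffer; // optional user-provided buffer
    bool   no_alloc;   // do not allocate tensor data

    enum ggml_type result_type; // hypothetical: GGML_TYPE_F32 (default) or GGML_TYPE_F16
};
```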

Note that the internal floating-point accumulators in the different operations can and should remain in F32 format.
Only when we store the results into the dst tensor would we cast them to F16.
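As a rough illustration (not the actual ggml code; `a`, `b`, `dst_row`, `ne` and `j` are placeholder names), an inner loop would keep accumulating in F32 and convert only at the store:

```c
// Sketch: F32 accumulation, with the cast to F16 happening only when writing
// the result. GGML_FP16_TO_FP32 / GGML_FP32_TO_FP16 are ggml's conversion helpers.
float sum = 0.0f;
for (int i = 0; i < ne; ++i) {
    sum += GGML_FP16_TO_FP32(a[i]) * GGML_FP16_TO_FP32(b[i]);
}

if (dst->type == GGML_TYPE_F16) {
    ((ggml_fp16_t *) dst_row)[j] = GGML_FP32_TO_FP16(sum); // F16 output
} else {
    ((float *)       dst_row)[j] = sum;                    // F32 output (current behavior)
}
```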

Storing intermediate results in F16 halves their size, which would significantly reduce memory pressure and could lead to noticeable speed improvements. Hopefully, the loss in quality would be marginal, and in any case there would always be the option of switching back to full F32 precision.

I am looking for suggestions and initial prototypes of how we can achieve this in an elegant way.

Related:

Edit: An initial quick-and-dirty implementation that simply goes over the existing LLaMA-related operators and changes their return type to F16 would help determine whether such functionality is useful and how much performance gain we can expect. If it is worthwhile, we can then think in more detail about how exactly to support it.
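For reference, the quick-and-dirty experiment would amount to changing the dst type in the relevant op constructors, roughly along these lines (illustrative snippet only; the real op builders set more fields):

```c
// Inside an op constructor such as ggml_mul_mat():
// allocate the result tensor in F16 instead of F32.
struct ggml_tensor * result =
    ggml_new_tensor_2d(ctx, GGML_TYPE_F16 /* was GGML_TYPE_F32 */, ne0, ne1);
```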

ggerganov (Owner, Author) commented:

Very basic tests, changing certain tensor formats and adding F32 -> F16 casts at hot spots, indicate that this might not be a viable approach for improving performance. Will close this for now, as I no longer think it can lead to improvements, but if anyone has other observations, feel free to reopen.
