
Improve Caching and compile with MX #1056

Closed · Giuseppe5 opened this issue Oct 14, 2024 · 1 comment
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@Giuseppe5 (Collaborator)

Is your feature request related to a problem? Please describe.
Currently, torch.compile is not compatible with MX data types, mostly because it is not yet clear to me how to cache data so that they are readily available when entering quant_inference_mode.
The main issue is that MX data have two views: a groupwise view with an extra group dimension, and a compressed view without the extra group dimension (but where the scales are expanded, so much more memory is used).
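For context, here is a minimal sketch of the two views in plain PyTorch. This is not Brevitas code; the group size of 32 and the tensor shapes are illustrative assumptions.

```python
import torch

group_size = 32
x = torch.randn(64, 128)  # hypothetical weight tensor

# Groupwise view: an extra group dimension, one scale per group.
x_grouped = x.view(64, 128 // group_size, group_size)   # (64, 4, 32)
scales = x_grouped.abs().amax(dim=-1, keepdim=True)     # (64, 4, 1)

# Compressed view: no group dimension, but the scales must be
# expanded to match, which uses much more memory.
x_flat = x_grouped.view(64, 128)                        # (64, 128)
scales_expanded = scales.expand(-1, -1, group_size).reshape(64, 128)
```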

Describe the solution you'd like
Cache the scale/zero_point and all the other quant metadata needed for inference computation. It is not yet clear what the best solution will be.
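One possible shape for such a cache, purely as a sketch: the class name and helper method below are hypothetical, not Brevitas API, and assume groupwise scales/zero points are precomputed once before inference.

```python
from dataclasses import dataclass
import torch

@dataclass
class QuantMetadataCache:
    # Hypothetical container for precomputed quant metadata; not Brevitas API.
    scale: torch.Tensor       # groupwise scales, shape (..., num_groups, 1)
    zero_point: torch.Tensor  # groupwise zero points, same shape as scale
    group_size: int

    def expanded_scale(self) -> torch.Tensor:
        # Materialize the compressed-layout scales only on demand, so the
        # memory-hungry expanded copy is not kept alive between uses.
        *lead, num_groups, _ = self.scale.shape
        expanded = self.scale.expand(*lead, num_groups, self.group_size)
        return expanded.reshape(*lead, num_groups * self.group_size)
```

A helper like expanded_scale is one way the view switch mentioned under Additional context below could be exposed.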

Additional context
The best way to approach this is still not clear; the proxy might need to expose some extra helper methods to perform the views. Reach out with a proposed solution so we can discuss!

@Giuseppe5 added the enhancement and good first issue labels on Oct 14, 2024
@Giuseppe5 (Collaborator, Author)

Solved in #1133
