
torchao: add int8; quanto: add NF4; torch compile fixes + ability to compile optim #986

Merged
merged 30 commits into main from feature/torchao on Sep 28, 2024

Conversation

bghira (Owner) commented Sep 24, 2024

No description provided.

bghira changed the title from "torchao: fp8/autoquant" to "torchao: fp8/int8" on Sep 25, 2024
bghira (Owner, Author) commented Sep 25, 2024

@sayakpaul I can't get this to work on NVIDIA devices, but it does work on Apple MPS. Have you had any luck training with torchao? On CUDA it complains that it cannot cast (tensor, tensor) to tensor.
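For context, a minimal sketch of what the torchao int8 path looks like, assuming torchao's `quantize_` / `int8_weight_only` API; the toy model below is a placeholder, not this PR's actual code:

```python
# Minimal sketch of int8 weight-only quantization with torchao.
# The toy model is a stand-in for the trainer's real network;
# all names here are illustrative only.
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.GELU(),
    torch.nn.Linear(64, 64),
)

# Replace each Linear's weight with an int8 tensor subclass, in place.
quantize_(model, int8_weight_only())

x = torch.randn(2, 64)
y = model(x)  # matmuls now run against the int8 weights
```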

bghira (Owner, Author) commented Sep 25, 2024

Currently the main benefit of torchao that I can see is that we get torch.compile support, but torch.compile doesn't even work on MPS.

On MPS, torchao is 3 seconds per step slower than quanto, which itself is roughly the same speed as bf16 (tested on Flux Dev).

So if there's no NVIDIA support for training and it runs slower on MPS, I'm not sure there's a point in merging this PR.
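For reference, the "ability to compile optim" piece of this PR's eventual title follows the standard PyTorch pattern of wrapping the optimizer step in `torch.compile`; this is a minimal sketch, not the exact wiring in the PR:

```python
# Minimal sketch: compiling the optimizer step with torch.compile.
# Illustrative only; the PR's actual integration may differ.
import torch

model = torch.nn.Linear(64, 64)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

@torch.compile(fullgraph=False)
def optimizer_step():
    opt.step()

for _ in range(3):
    loss = model(torch.randn(8, 64)).sum()
    loss.backward()
    optimizer_step()  # first call triggers compilation
    opt.zero_grad()
```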

(A comment from bghira here was marked as outdated.)

bghira changed the title from "torchao: fp8/int8" to "(wip, does not work) torchao: fp8/int8" on Sep 26, 2024
bghira marked this pull request as a draft on September 26, 2024 at 17:23
bghira closed this on Sep 26, 2024
bghira (Owner, Author) commented Sep 27, 2024

I changed my mind: I got int8 working on NVIDIA after messing around with the internals, with help from the GPUMODE Discord.
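Roughly, the working CUDA shape is int8 weight-only quantization followed by compiling the module; a hedged sketch only, with every name illustrative and the actual fix living in this PR's diff:

```python
# Hedged sketch of the int8 + torch.compile combination on CUDA.
# The real setup lives in the PR diff; this shows only the mechanics.
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Linear(128, 128, bias=False).cuda()
quantize_(model, int8_weight_only())

# Compiling the quantized module is where the int8 path pays off on CUDA.
model = torch.compile(model)

x = torch.randn(4, 128, device="cuda")
y = model(x)
```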

bghira reopened this on Sep 27, 2024
bghira changed the title from "(wip, does not work) torchao: fp8/int8" to "(wip, int8 only) torchao: fp8/int8" on Sep 27, 2024
bghira changed the title from "(wip, int8 only) torchao: fp8/int8" to "int8-torchao, nf4-quanto" on Sep 28, 2024
bghira changed the title from "int8-torchao, nf4-quanto" to "torchao: add int8; quanto: add NF4; torch compile fixes + ability to compile optim" on Sep 28, 2024
bghira marked this pull request as ready for review on September 28, 2024 at 17:36
bghira merged commit 8298530 into main on Sep 28, 2024 (1 check passed)
bghira deleted the feature/torchao branch on September 28, 2024 at 17:36