
adding quant_format, mantissa, and exponent options to evaluate script #717

Merged
7 commits merged into Xilinx:dev on Oct 19, 2023

Conversation

fabianandresgrob
Contributor

Adds the following options to the ptq_evaluate.py script:

--quant_format {int,float}
                      Quantization format to use for weights and activations
                      (default: int)
--layerwise-first-last-mantissa-bit-width LAYERWISE_FIRST_LAST_MANTISSA_BIT_WIDTH
                      TODO
--layerwise-first-last-exponent-bit-width LAYERWISE_FIRST_LAST_EXPONENT_BIT_WIDTH
                      TODO
--weight-mantissa-bit-width WEIGHT_MANTISSA_BIT_WIDTH
                      TODO
--weight-exponent-bit-width WEIGHT_EXPONENT_BIT_WIDTH
                      TODO
--act-mantissa-bit-width ACT_MANTISSA_BIT_WIDTH
                      TODO
--act-exponent-bit-width ACT_EXPONENT_BIT_WIDTH
                      TODO

Also changed the scale_factor_type argument so that ptq_common.py handles weight_scale_type correctly. Good explanations for the added options are still missing.
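
For reference, a minimal sketch of how flags like these could be declared with argparse in ptq_evaluate.py. The option names mirror the listing above (the layerwise first/last options would follow the same pattern), while the types, defaults, and help strings are illustrative assumptions, not the PR's actual implementation:

    # Hypothetical argparse wiring for the new float-format options.
    # Flag names follow the listing above; defaults and help text are assumptions.
    import argparse

    parser = argparse.ArgumentParser(description='PTQ evaluation (sketch)')
    parser.add_argument(
        '--quant_format', default='int', choices=['int', 'float'],
        help='Quantization format to use for weights and activations (default: int)')
    parser.add_argument(
        '--weight-mantissa-bit-width', type=int, default=4,
        help='Mantissa bit width for float-quantized weights (assumed default)')
    parser.add_argument(
        '--weight-exponent-bit-width', type=int, default=3,
        help='Exponent bit width for float-quantized weights (assumed default)')
    parser.add_argument(
        '--act-mantissa-bit-width', type=int, default=4,
        help='Mantissa bit width for float-quantized activations (assumed default)')
    parser.add_argument(
        '--act-exponent-bit-width', type=int, default=3,
        help='Exponent bit width for float-quantized activations (assumed default)')
    args = parser.parse_args()
    # argparse exposes hyphenated flags as underscored attributes,
    # e.g. args.weight_mantissa_bit_width

With a parser along these lines, a float-format run could look like: python ptq_evaluate.py --quant_format float --weight-mantissa-bit-width 4 --weight-exponent-bit-width 3 (other required arguments omitted; the bit-width values are only examples).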

@Giuseppe5 Giuseppe5 self-assigned this Oct 11, 2023
@Giuseppe5 Giuseppe5 merged commit 1412174 into Xilinx:dev Oct 19, 2023
18 of 22 checks passed