fix bug: correct the tuple element order in the double-quant config check in test_rtn.py
Signed-off-by: xin3he <[email protected]>
xin3he committed Jun 21, 2024
1 parent 33bb948 commit c7808e4
Showing 1 changed file with 1 addition and 1 deletion.
test/3x/torch/quantization/weight_only/test_rtn.py (2 changes: 1 addition & 1 deletion)
@@ -241,7 +241,7 @@ def test_double_quant_params(self, dtype, double_quant_bits, double_quant_group_size):
         out = model(self.example_inputs)[0]
         atol_true = (out - self.q_label).amax()
         # compare atol, this case is an ideal case.
-        if not (dtype, double_quant_bits, double_quant_group_size) == (256, 6, "nf4"):
+        if not (dtype, double_quant_bits, double_quant_group_size) == ("nf4", 6, 256):
             assert (
                 atol_false < atol_true
             ), "asym for double quant should have smaller atol because scales is bigger than zero, please double check."
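The change swaps the order of the literal tuple so it matches the variable order on the left-hand side, (dtype, double_quant_bits, double_quant_group_size); with the old literal (256, 6, "nf4"), the guard could never match the intended configuration. A minimal standalone Python sketch (not part of the repository) of why element order matters in tuple equality:

# Minimal sketch (not from the repository): tuple equality compares positionally,
# so the literal must list values in the same order as the variables.
dtype, double_quant_bits, double_quant_group_size = "nf4", 6, 256

# Old check: compares a str against 256 and an int against "nf4" -> always False.
print((dtype, double_quant_bits, double_quant_group_size) == (256, 6, "nf4"))   # False

# Fixed check: element order matches the left-hand tuple -> True for this config.
print((dtype, double_quant_bits, double_quant_group_size) == ("nf4", 6, 256))   # True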
