Commit
add FP8 quantization link in README.md (#1273)
(cherry picked from commit 17da417)
xin3he authored and chensuyue committed Sep 27, 2023
1 parent 5580899 commit aff4131
Showing 1 changed file with 2 additions and 1 deletion.
README.md (3 changes: 2 additions & 1 deletion)
```diff
@@ -120,7 +120,8 @@ q_model = fit(
 <td colspan="2" align="center"><a href="./docs/source/smooth_quant.md">SmoothQuant</td>
 </tr>
 <tr>
-<td colspan="8" align="center"><a href="./docs/source/quantization_weight_only.md">Weight-Only Quantization (INT8/INT4/FP4/NF4) </td>
+<td colspan="4" align="center"><a href="./docs/source/quantization_weight_only.md">Weight-Only Quantization (INT8/INT4/FP4/NF4) </td>
+<td colspan="4" align="center"><a href="https://github.com/intel/neural-compressor/blob/fp8_adaptor/docs/source/fp8.md">FP8 Quantization </td>
 </tr>
 </tbody>
 <thead>
```
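For context, the weight-only quantization entry this diff splits refers to mapping float weights onto a low-bit grid plus a scale factor. Below is a minimal sketch of symmetric per-tensor INT4 weight quantization; it is illustrative only and does not reflect Neural Compressor's actual implementation (the function names are hypothetical):

```python
import numpy as np

def quantize_weight_int4(w):
    """Illustrative symmetric per-tensor INT4 quantization.

    Signed 4-bit integers cover [-8, 7]; a single scale maps the
    tensor's max absolute value onto that range.
    """
    qmax = 7
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from INT4 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.9], dtype=np.float32)
q, s = quantize_weight_int4(w)
w_hat = dequantize(q, s)
```

FP8 quantization (the link this commit adds) follows the same scale-and-round idea but targets an 8-bit floating-point format instead of an integer grid; see the linked `docs/source/fp8.md` for the project's own description.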