From de0fa21cd9d6291b521281b2b5fc8f6519cb84ae Mon Sep 17 00:00:00 2001
From: "Huang, Tai"
Date: Fri, 9 Aug 2024 22:32:37 +0800
Subject: [PATCH] Fix broken link in docs (#1969)

Signed-off-by: Huang, Tai
---
 docs/source/3x/PT_MixedPrecision.md | 2 +-
 docs/source/3x/TF_Quant.md          | 2 +-
 docs/source/3x/TF_SQ.md             | 2 +-
 docs/source/3x/quantization.md      | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/source/3x/PT_MixedPrecision.md b/docs/source/3x/PT_MixedPrecision.md
index a8b62d866a4..3fbd1db6bbf 100644
--- a/docs/source/3x/PT_MixedPrecision.md
+++ b/docs/source/3x/PT_MixedPrecision.md
@@ -107,5 +107,5 @@ best_model = autotune(model=build_torch_model(), tune_config=custom_tune_config,
 
 ## Examples
 
-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch\cv\mixed_precision
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch/cv/mixed_precision
 ) on how to quantize a model with Mixed Precision.
diff --git a/docs/source/3x/TF_Quant.md b/docs/source/3x/TF_Quant.md
index d80c25ecada..9314a3c8200 100644
--- a/docs/source/3x/TF_Quant.md
+++ b/docs/source/3x/TF_Quant.md
@@ -13,7 +13,7 @@ TensorFlow Quantization
 
 `neural_compressor.tensorflow` supports quantizing both TensorFlow and Keras model with or without accuracy aware tuning.
 
-For the detailed quantization fundamentals, please refer to the document for [Quantization](../quantization.md).
+For the detailed quantization fundamentals, please refer to the document for [Quantization](quantization.md).
 
 ## Get Started
diff --git a/docs/source/3x/TF_SQ.md b/docs/source/3x/TF_SQ.md
index 5225138e502..1d3a08836b5 100644
--- a/docs/source/3x/TF_SQ.md
+++ b/docs/source/3x/TF_SQ.md
@@ -50,4 +50,4 @@ best_model = autotune(
 
 ## Examples
 
-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models\quantization\ptq\smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models/quantization/ptq/smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
diff --git a/docs/source/3x/quantization.md b/docs/source/3x/quantization.md
index b26c49470a9..26ba158d54f 100644
--- a/docs/source/3x/quantization.md
+++ b/docs/source/3x/quantization.md
@@ -396,7 +396,7 @@ For supported quantization methods for `accuracy aware tuning` and the detailed
 
 User could refer to below chart to understand the whole tuning flow.
 
-accuracy aware tuning working flow
+accuracy aware tuning working flow