diff --git a/README.md b/README.md
index 5fbad1d0434..2b935164ca6 100644
--- a/README.md
+++ b/README.md
@@ -5,12 +5,12 @@ Intel® Neural Compressor
@@ -28,7 +28,7 @@ support AMD CPU, ARM CPU, and NVidia GPU through ONNX Runtime with limited testi
## What's New
* [2024/07] From the 3.0 release, the framework extension API is recommended for quantization.
-* [2024/07] Performance optimizations and usability improvements on [client-side](https://github.com/intel/neural-compressor/blob/master/docs/source/3x/client_quant.md).
+* [2024/07] Performance optimizations and usability improvements on [client-side](./docs/source/3x/client_quant.md).
## Installation
### Install Framework
@@ -140,7 +140,7 @@ quantized_model = fit(model=float_model, conf=static_quant_conf, calib_dataloade
Architecture |
- Workflow |
+ Workflow |
APIs |
LLMs Recipes |
Examples |