
Commit

feat(//examples/int8/qat): Install pytorch-quantization with requirements.txt

Signed-off-by: Naren Dasan <[email protected]>
narendasan committed Aug 23, 2021
1 parent 68ba63c commit 1ca1484
Showing 2 changed files with 7 additions and 4 deletions.
6 changes: 4 additions & 2 deletions examples/int8/training/vgg16/README.md
@@ -22,7 +22,7 @@ python3 main.py --lr 0.01 --batch-size 128 --drop-ratio 0.15 --ckpt-dir $(pwd)/v
 You can monitor training with tensorboard, logs are stored by default at `/tmp/vgg16_logs`

-### Quantization
+### Quantization Aware Fine Tuning (for trying out QAT Workflows)

 To perform quantization aware training, it is recommended that you finetune your model obtained from previous step with quantization layers.

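For a concrete picture of what this fine-tuning step involves, here is a minimal sketch assuming pytorch-quantization's `quant_modules` API and a hypothetical checkpoint layout (the example's own training script is authoritative):

```python
# Hedged sketch: swap in fake-quantized layers before fine tuning.
# Assumes NVIDIA's pytorch-quantization (installed from pypi.ngc.nvidia.com);
# the model import and checkpoint layout below are assumptions, not the
# example's exact script.
import torch
from pytorch_quantization import quant_modules

# Replace torch.nn layers (Conv2d, Linear, ...) with quantized equivalents so a
# freshly constructed VGG16 carries TensorQuantizer nodes.
quant_modules.initialize()

from vgg16 import vgg16  # model definition shipped with this example (assumed path)

model = vgg16(num_classes=10).cuda()
ckpt = torch.load("vgg16_ckpts/ckpt_epoch100.pth")                # hypothetical checkpoint path
model.load_state_dict(ckpt["model_state_dict"], strict=False)     # assumed key; strict=False because
                                                                  # the quantizer buffers are new

# From here, run a few epochs of the normal training loop at a reduced learning
# rate so the weights and quantizer ranges settle.
```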
@@ -51,12 +51,14 @@ After QAT is completed, you should see the checkpoint of QAT model in the `$pwd/

 Use the exporter script to create a torchscript module you can compile with TRTorch

+### For PTQ
 ```
 python3 export_ckpt.py <path-to-checkpoint>
 ```

-It should produce a file called `trained_vgg16.jit.pt`
+The checkpoint file should be from the original training and not quantization aware fine tuning. The script should produce a file called `trained_vgg16.jit.pt`

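As a rough sketch of what an exporter along the lines of `export_ckpt.py` does (paths, checkpoint keys, and input shape are assumptions; the shipped script is authoritative):

```python
# Hedged sketch of the TorchScript export step: load the float (non-QAT)
# checkpoint, trace the model, and serialize it for compilation with TRTorch.
import torch
from vgg16 import vgg16  # model definition shipped with this example (assumed path)

ckpt = torch.load("vgg16_ckpts/ckpt_epoch100.pth", map_location="cpu")  # hypothetical path
model = vgg16(num_classes=10)
model.load_state_dict(ckpt["model_state_dict"])  # assumed checkpoint key
model.eval()

# Trace with a CIFAR10-shaped input (assumed 3x32x32) and save the module name
# referenced in the README.
example_input = torch.randn(1, 3, 32, 32)
traced = torch.jit.trace(model, example_input)
torch.jit.save(traced, "trained_vgg16.jit.pt")
```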
+### For QAT
 To export a QAT model, you can run

```
5 changes: 3 additions & 2 deletions examples/int8/training/vgg16/requirements.txt
@@ -1,2 +1,3 @@
-torch>=1.4.0
-tensorboard>=1.14.0
+torch>=1.9.0
+tensorboard>=1.14.0
+pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com
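The intent of this change is that a plain `pip install -r requirements.txt` now also pulls `pytorch-quantization` from NVIDIA's index. A small sanity check, assuming the package exposes `__version__`:

```python
# Hedged post-install check that the NGC-hosted pytorch-quantization package and
# the newer torch pin are both importable. __version__ on pytorch_quantization
# is an assumption; verify against your installed release.
import torch
import pytorch_quantization

print("torch:", torch.__version__)                                # requirements now pin >=1.9.0
print("pytorch-quantization:", pytorch_quantization.__version__)
```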
