[NBs] Update notebooks to only use QONNX export
auphelia committed Jul 7, 2023
1 parent 391cd76 commit 7924bf7
Showing 8 changed files with 52 additions and 352 deletions.
@@ -6,7 +6,7 @@
"source": [
"# Importing Brevitas networks into FINN with the QONNX interchange format\n",
"\n",
"**Note: This notebook is very similar to the 1a notebook, in that it shows the same concepts for the QONNX ingestion as 1a does for FINN-ONNX. Section 1 is identical in both notebooks.**\n",
"**Note: Previously it was possible to export the FINN-ONNX interchange format directly from Brevitas and pass it to the FINN compiler. This support is deprecated; FINN now uses the QONNX export as its front end, while internally it still uses the FINN-ONNX format.**\n",
"\n",
"In this notebook we'll go through an example of how to import a Brevitas-trained QNN into FINN. The steps will be as follows:\n",
"\n",
@@ -318,7 +318,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.10.6"
}
},
"nbformat": 4,
321 changes: 0 additions & 321 deletions notebooks/basics/1a_brevitas_network_import_via_FINN-ONNX.ipynb

This file was deleted.

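The updated import flow in the cnv notebook below repeatedly applies graph rewrites via `model = model.transform(...)`, where each transformation returns a new model. As an illustration of that chaining idiom, here is a toy stand-in (not the real qonnx `ModelWrapper` or its transformations, just the shape of the API):

```python
class ToyModel:
    # Minimal stand-in for qonnx's ModelWrapper: transform() applies a
    # transformation function and returns a new model, so rewrites chain.
    def __init__(self, ops):
        self.ops = list(ops)

    def transform(self, transformation):
        return ToyModel(transformation(self.ops))


def fold_constants(ops):
    # toy analogue of FoldConstants: drop ops marked as constant
    return [op for op in ops if op != "const"]


def give_unique_names(ops):
    # toy analogue of GiveUniqueNodeNames: suffix each op with its index
    return [f"{op}_{i}" for i, op in enumerate(ops)]


m = ToyModel(["conv", "const", "relu"])
m = m.transform(fold_constants).transform(give_unique_names)
print(m.ops)  # ['conv_0', 'relu_1']
```

Each `transform` call leaves the previous model untouched and returns a fresh one, which is why the notebook code rebinds `model` on every line.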
21 changes: 14 additions & 7 deletions notebooks/end2end_example/bnn-pynq/cnv_end2end_example.ipynb
@@ -72,7 +72,7 @@
"source": [
"## 1. Brevitas Export, FINN Import and Tidy-Up\n",
"\n",
"Similar to what we did in the TFC-w1a1 end-to-end notebook, we will start by exporting the [pretrained CNV-w1a1 network](https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq) to ONNX, importing that into FINN and running the \"tidy-up\" transformations to have a first look at the topology."
"Similar to what we did in the TFC-w1a1 end-to-end notebook, we will start by exporting the [pretrained CNV-w1a1 network](https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq) to ONNX, importing that into FINN and running the \"tidy-up\" transformations to have a first look at the topology. The network will be exported in QONNX format and then converted into the FINN-ONNX format to prepare it for the FINN compiler."
]
},
{
@@ -84,15 +84,20 @@
"import torch\n",
"import onnx\n",
"from finn.util.test import get_test_model_trained\n",
"from brevitas.export import export_finn_onnx\n",
"from brevitas.export import export_qonnx\n",
"from qonnx.util.cleanup import cleanup as qonnx_cleanup\n",
"from qonnx.core.modelwrapper import ModelWrapper\n",
"from finn.transformation.qonnx.convert_qonnx_to_finn import ConvertQONNXtoFINN\n",
"from qonnx.transformation.infer_shapes import InferShapes\n",
"from qonnx.transformation.fold_constants import FoldConstants\n",
"from qonnx.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames, RemoveStaticGraphInputs\n",
"\n",
"cnv = get_test_model_trained(\"CNV\", 1, 1)\n",
"export_finn_onnx(cnv, torch.randn(1, 3, 32, 32), build_dir + \"/end2end_cnv_w1a1_export.onnx\")\n",
"model = ModelWrapper(build_dir + \"/end2end_cnv_w1a1_export.onnx\")\n",
"export_onnx_path = build_dir + \"/end2end_cnv_w1a1_export.onnx\"\n",
"export_qonnx(cnv, torch.randn(1, 3, 32, 32), export_onnx_path)\n",
"qonnx_cleanup(export_onnx_path, out_file=export_onnx_path)\n",
"model = ModelWrapper(export_onnx_path)\n",
"model = model.transform(ConvertQONNXtoFINN())\n",
"model = model.transform(InferShapes())\n",
"model = model.transform(FoldConstants())\n",
"model = model.transform(GiveUniqueNodeNames())\n",
@@ -149,10 +154,12 @@
"# preprocessing: torchvision's ToTensor divides uint8 inputs by 255\n",
"totensor_pyt = ToTensor()\n",
"chkpt_preproc_name = build_dir+\"/end2end_cnv_w1a1_preproc.onnx\"\n",
"export_finn_onnx(totensor_pyt, torch.randn(ishape), chkpt_preproc_name)\n",
"export_qonnx(totensor_pyt, torch.randn(ishape), chkpt_preproc_name)\n",
"qonnx_cleanup(chkpt_preproc_name, out_file=chkpt_preproc_name)\n",
"pre_model = ModelWrapper(chkpt_preproc_name)\n",
"pre_model = pre_model.transform(ConvertQONNXtoFINN())\n",
"\n",
"# join preprocessing and core model\n",
"pre_model = ModelWrapper(chkpt_preproc_name)\n",
"model = model.transform(MergeONNXModels(pre_model))\n",
"# add input quantization annotation: UINT8 for all BNN-PYNQ models\n",
"global_inp_name = model.graph.input[0].name\n",
@@ -633,7 +640,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.10.6"
}
},
"nbformat": 4,
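The preprocessing model exported above wraps torchvision's `ToTensor`, whose relevant behavior here is dividing uint8 pixel values by 255 so raw image bytes land in [0.0, 1.0]. A minimal plain-Python sketch of that normalization (`to_tensor_scale` is a hypothetical helper for illustration; the notebook exports the real `ToTensor` module):

```python
def to_tensor_scale(pixels):
    """Scale uint8 pixel values (0..255) to floats in [0.0, 1.0],
    mirroring how torchvision's ToTensor handles uint8 inputs."""
    return [p / 255.0 for p in pixels]


print(to_tensor_scale([0, 128, 255]))  # 0 -> 0.0, 255 -> 1.0
```

Baking this scaling into the merged ONNX graph lets the deployed accelerator accept raw UINT8 images directly, which is why the input tensor is annotated as UINT8 right after the merge.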
