ncnn
CoinCheung committed Feb 5, 2023
1 parent e81fb12 commit 0773fff
Showing 4 changed files with 35 additions and 21 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -117,7 +117,9 @@ pretrained/*
run.sh
openvino/build/*
openvino/output*
ncnn/models/*
*.onnx
tis/cpp_client/build/*
log*txt

tvm/
1 change: 1 addition & 0 deletions lib/models/resnet.py
@@ -8,6 +8,7 @@

resnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth'


from torch.nn import BatchNorm2d


43 changes: 25 additions & 18 deletions ncnn/README.md
@@ -1,20 +1,14 @@

### My platform

* raspberry pi 3b
* 2022-04-04-raspios-bullseye-armhf-lite.img
* cpu: 4 core armv8, memory: 1G



### Install ncnn

#### 1. dependencies
```
$ python -m pip install onnx-simplifier
```

#### 2. build ncnn
Just follow the official ncnn tutorial [build-for-linux](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux) to install ncnn. The following steps are all carried out on my raspberry pi:

**step 1:** install dependencies
@@ -25,21 +19,26 @@ $ sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler l
**step 2:** (optional) install vulkan
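
If you do want Vulkan, a rough sketch (package names assume stock Raspberry Pi OS bullseye / Debian; note the Pi 3 GPU has no Vulkan driver, so this mainly matters if you build on another board):
```
$ sudo apt install libvulkan-dev vulkan-tools
# then pass -DNCNN_VULKAN=ON instead of OFF to cmake in step 3
```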

**step 3:** build
I am using commit `5725c028c0980efd`, and I have not tested other commits.
I am using commit `6869c81ed3e7170dc0`, and I have not tested other commits.
```
$ git clone https://github.com/Tencent/ncnn.git
$ cd ncnn
$ git reset --hard 5725c028c0980efd
$ git reset --hard 6869c81ed3e7170dc0
$ git submodule update --init
$ mkdir -p build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=ON -DCMAKE_TOOLCHAIN_FILE=../toolchains/pi3.toolchain.cmake ..
$ make -j2
$ make install
```
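
A quick way to confirm that the tools and headers used in the following sections ended up where expected (paths match the build directory above):
```
$ ls /path/to/ncnn/build/tools/onnx/onnx2ncnn
$ ls /path/to/ncnn/build/tools/ncnnoptimize
$ ls /path/to/ncnn/build/install/include/ncnn/net.h
```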

### Convert model, build and run the demo
### Convert pytorch model to ncnn model

#### 1. convert pytorch model to ncnn model via onnx
#### 1. dependencies
```
$ python -m pip install onnx-simplifier
```
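
onnx-simplifier is what produces the `model_v2_sim.onnx` used in the next step; a typical invocation (the input file name here is only a placeholder, the repo's actual export command sits in the collapsed lines below) looks like:
```
$ python -m onnxsim model_v2.onnx model_v2_sim.onnx
```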

#### 2. convert pytorch model to ncnn model via onnx
On your training platform:
```
$ cd BiSeNet/
@@ -52,13 +51,21 @@ Then copy your `model_v2_sim.onnx` from training platform to raspberry device.
On raspberry device:
```
$ /path/to/ncnn/build/tools/onnx/onnx2ncnn model_v2_sim.onnx model_v2_sim.param model_v2_sim.bin
$ cd BiSeNet/ncnn/
$ mkdir -p models
$ mv model_v2_sim.param models/
$ mv model_v2_sim.bin models/
```
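
The generated `.param` file is plain text, so a quick peek is enough to confirm the conversion worked and to see the layer and blob counts:
```
$ head -n 3 models/model_v2_sim.param
```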

#### 2. compile demo code
You can optimize the ncnn model by fusing layers and saving the weights in fp16 format.
On raspberry device:
```
$ /path/to/ncnn/build/tools/ncnnoptimize model_v2_sim.param model_v2_sim.bin model_v2_sim_opt.param model_v2_sim_opt.bin 65536
$ mv model_v2_sim_opt.param model_v2_sim.param
$ mv model_v2_sim_opt.bin model_v2_sim.bin
```
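
The trailing 65536 asks ncnnoptimize to store the fused weights as fp16; passing 0 instead keeps them as fp32.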

You can also quantize the model for int8 inference, following this [tutorial](https://github.com/Tencent/ncnn/wiki/quantized-int8-inference). Make sure your device supports int8 inference.
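
A rough sketch of that flow with ncnn's post-training quantization tools (the calibration image folder, input shape, and mean/norm values below are placeholders; use the preprocessing that matches your training setup):
```
$ find /path/to/calibration/images -name "*.jpg" > imagelist.txt
$ /path/to/ncnn/build/tools/quantize/ncnn2table models/model_v2_sim.param models/model_v2_sim.bin imagelist.txt model_v2_sim.table mean=[123.675,116.28,103.53] norm=[0.01712,0.01751,0.01743] shape=[512,512,3] pixel=BGR thread=4 method=kl
$ /path/to/ncnn/build/tools/quantize/ncnn2int8 models/model_v2_sim.param models/model_v2_sim.bin models/model_v2_sim_int8.param models/model_v2_sim_int8.bin model_v2_sim.table
```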


### Build and run the demo
#### 1. compile demo code
On raspberry device:
```
$ mkdir -p BiSeNet/ncnn/build
@@ -67,7 +74,7 @@ $ cmake .. -DNCNN_ROOT=/path/to/ncnn/build/install
$ make
```

#### 3. run demo
#### 2. run demo
```
./segment
```
10 changes: 7 additions & 3 deletions ncnn/segment.cpp
@@ -51,12 +51,16 @@ void inference() {
mod.opt.use_vulkan_compute = 1;
mod.set_vulkan_device(1);
#endif
// ncnn enables fp16 by default, so we do not need these options
// int8 depends on the model itself, so we do not set it here
//// switch off fp16
// bool use_fp16 = false;
// mod.opt.use_fp16_packed = use_fp16;
// mod.opt.use_fp16_storage = use_fp16;
// mod.opt.use_fp16_arithmetic = use_fp16;
//// switch on bf16
// mod.opt.use_packing_layout = true;
// mod.opt.use_bf16_storage = true;
//// reduce cpu usage
// mod.opt.openmp_blocktime = 0;
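// winograd speeds up 3x3 convolutions on cpu at the cost of extra memory for the transformed weights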
mod.opt.use_winograd_convolution = true;

// we should set opt before loading the model
@@ -78,7 +82,7 @@ void inference() {

// set input, run, get output
ncnn::Extractor ex = mod.create_extractor();
ex.set_light_mode(true); // not sure what this means
ex.set_light_mode(true);
ex.set_num_threads(nthreads);
#if NCNN_VULKAN
ex.set_vulkan_compute(true);
