Four models:
- simple_graph: Python -> Java / JavaScript / C / C++
- resnet_v2_50: Python -> Java / JavaScript / C++
- big_gan_512: Python -> C
- mobilenet_v2: Python -> Android
- Get ImageNet classnames (for Android only):
wget -P /tmp/assets https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt
- Create virtualenv and install dependencies:
virtualenv --system-site-packages -p python3.6 ./venv && source ./venv/bin/activate
pip install opencv-python tensorflow tensorflow_hub
- Train and save frozen model (TFLite model for Android):
$ python train.py
C++ API: https://www.tensorflow.org/guide/extend/cc
- Build OpenCV from source: https://opencv.org/releases.html (-D CMAKE_INSTALL_PREFIX=/tmp/opencv-3.4/install) and add the install directory to PATH: PATH="$PATH:/tmp/opencv-3.4/install"
- Install Bazel: https://docs.bazel.build/versions/master/install-ubuntu.html
- Clone the TensorFlow GitHub repository: https://github.com/tensorflow/tensorflow
- Place the BUILD and main.cpp files in the tensorflow/cc/project directory (a minimal main.cpp sketch appears after this section's build and run steps)
- Add the following to main repository WORKSPACE file:
new_local_repository(
name = "opencv",
path = "/tmp/opencv-3.4/install",
build_file = "opencv.BUILD")
- Place opencv.BUILD in the same directory as the main repository WORKSPACE file, with the following content:
cc_library(
name = "opencv",
srcs = glob(["lib/*.so*"]),
hdrs = glob([ "include/opencv2/**/*.h", "include/opencv2/**/*.hpp", ]),
includes = ["include"],
visibility = ["//visibility:public"],
linkstatic = 1)
Then the project can depend on @opencv//:opencv to link in the .so files under lib/ and reference the headers under include/.
- Build from repository workspace:
$ bazel build --jobs 6 --ram_utilization_factor 50 //tensorflow/cc/project:main
- Run from repository workspace:
$ ./bazel-bin/tensorflow/cc/project/main
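A minimal sketch of the main.cpp referenced above (not the exact program from this repository). It assumes a hypothetical frozen graph at /tmp/frozen_graph.pb with an input node "x" taking a 1x224x224x3 float image and an output node "y"; substitute the paths, node names and input size exported by train.py. The project's BUILD target is assumed to depend on the TensorFlow core library and on @opencv//:opencv from the WORKSPACE entry above.
// main.cpp -- minimal sketch; paths, node names and input size are assumptions.
#include <algorithm>
#include <iostream>
#include <memory>
#include <vector>

#include "opencv2/opencv.hpp"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // Load the frozen graph.
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "/tmp/frozen_graph.pb", &graph_def));

  // Create a session and attach the graph to it.
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Read and preprocess an image with OpenCV (note: imread returns BGR;
  // convert with cv::cvtColor if the model expects RGB input).
  cv::Mat image = cv::imread("/tmp/image.jpg");
  cv::resize(image, image, cv::Size(224, 224));
  image.convertTo(image, CV_32FC3, 1.0 / 255.0);

  // Copy the pixels into an input tensor of shape [1, 224, 224, 3].
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 224, 224, 3}));
  std::copy_n(image.ptr<float>(), input.NumElements(),
              input.flat<float>().data());

  // Run the graph and print the output tensor.
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"x", input}}, {"y"}, {}, &outputs));
  std::cout << outputs[0].DebugString() << std::endl;
  return 0;
}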
Java API: https://www.tensorflow.org/install/lang_java
- Install Maven: https://maven.apache.org/install.html
- Create a Maven project, placing pom.xml in the workspace directory and Main.java in the src/main/java directory
- Build from the workspace directory, creating a JAR file:
$ mvn install
- Run from workspace directory:
java -cp target/resnet-1.0-SNAPSHOT.jar:~/.m2/repository/org/tensorflow/libtensorflow/1.12.0/libtensorflow-1.12.0.jar:~/.m2/repository/org/tensorflow/libtensorflow_jni/1.12.0/libtensorflow_jni-1.12.0.jar:~/.m2/repository/org/openpnp/opencv/3.4.2-1/opencv-3.4.2-1.jar Main
For GPU support, use the libtensorflow_jni_gpu package instead of libtensorflow_jni and change the dependency in the Maven pom.xml file accordingly.
Write a C++ program which uses the TensorFlow C API. The project can be built without Bazel and outside the TensorFlow source tree (a minimal sketch appears after this section's build and run steps).
C API: https://www.tensorflow.org/install/lang_c
C API function headers: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h
- Build OpenCV from source: https://opencv.org/releases.html (-D CMAKE_INSTALL_PREFIX=/tmp/opencv-3.4/install) and add the install directory to PATH: PATH="$PATH:/tmp/opencv-3.4/install"
- Download TensorFlow C library: https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-1.12.0.tar.gz
- Extract TensorFlow C library:
sudo mkdir /tmp/tf-1.12
sudo tar -xzf libtensorflow-gpu-linux-x86_64-1.12.0.tar.gz -C /tmp/tf-1.12
- Configure linker environment variables:
export LIBRARY_PATH=$LIBRARY_PATH:/tmp/tf-1.12/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/tmp/tf-1.12/lib
export LIBRARY_PATH=$LIBRARY_PATH:/tmp/opencv-3.4/install/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/tmp/opencv-3.4/install/lib
- Build project:
g++ -I/tmp/tf-1.12/include -L/tmp/tf-1.12/lib main.cpp -I/tmp/opencv-3.4/install/include -L/tmp/opencv-3.4/install/lib -ltensorflow -lopencv_core -lopencv_imgcodecs -lopencv_imgproc -o main
- Run executable:
./main
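A minimal sketch of a main.cpp that builds with the g++ command above. It only verifies that the C library links and that a graph and session can be created; the actual inference program would import the frozen GraphDef with TF_GraphImportGraphDef and run it with TF_SessionRun.
// main.cpp -- minimal linking check against the TensorFlow C API,
// not the full inference program for this project.
#include <cstdio>

#include "tensorflow/c/c_api.h"

int main() {
  std::printf("TensorFlow C library version %s\n", TF_Version());

  TF_Status* status = TF_NewStatus();
  TF_Graph* graph = TF_NewGraph();
  TF_SessionOptions* options = TF_NewSessionOptions();
  TF_Session* session = TF_NewSession(graph, options, status);
  if (TF_GetCode(status) != TF_OK) {
    std::printf("Failed to create session: %s\n", TF_Message(status));
    return 1;
  }

  // A real program would import the frozen graph here
  // (TF_GraphImportGraphDef) and run it (TF_SessionRun).

  TF_CloseSession(session, status);
  TF_DeleteSession(session, status);
  TF_DeleteSessionOptions(options);
  TF_DeleteGraph(graph);
  TF_DeleteStatus(status);
  return 0;
}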
TensorFlow.js: https://js.tensorflow.org
- Install Yarn (and Node.js): https://yarnpkg.com/en. On Ubuntu 18.04, first remove cmdtest (it provides a conflicting yarn command): sudo apt remove cmdtest.
- Install tfjs-converter: https://github.com/tensorflow/tfjs-converter
- Install http-server:
sudo apt install npm
sudo npm install http-server -g
- Convert existing TensorFlow model to TensorFlow.js Web format:
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_node_names='y' \
--saved_model_tags=serve \
/tmp/python/saved_model \
/tmp/javascript/model
- For the loadFrozenModel function to work with local files, those files need to be served by an HTTP server:
http-server -c1 --cors /tmp/javascript/model -p 8081
Add origin=* to the URL query parameters to resolve a missing CORS 'Access-Control-Allow-Origin' header.
- Start a local development HTTP server which watches the filesystem for changes:
yarn
yarn watch
- Generate dist/ folder which contains the build artifacts and can be used for deployment:
yarn
yarn build
- Use the GPU package for higher performance:
yarn add @tensorflow/tfjs-node-gpu
- Build opencv.js: https://docs.opencv.org/3.4/d4/da1/tutorial_js_setup.html
- Place the opencv.js file in the same directory as index.html and index.js (or host it on the internet)
- Install tfjs-converter: https://github.com/tensorflow/tfjs-converter
- Convert the saved model to TensorFlow.js format:
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_node_names='probabilities, predictions_renamed' \
--saved_model_tags=serve \
/tmp/resnet_v2_50/python/saved_model \
/tmp/resnet_v2_50/javascript/model
- Copy imagenet_names.txt to /tmp/resnet_v2_50/javascript/model
- For JavaScript to work with local files, those files need to be served by an HTTP server: http-server -c1 --cors /tmp/resnet_v2_50/javascript/model -p 8080
- Open index.html in the browser
Note that when deploying a model built from a TF-Hub module on a remote computer, the data from /tmp/tfhub_modules (or the directory set with TFHUB_CACHE_DIR) must be copied to exactly the same absolute path on the remote computer. This cannot be done by manipulating the TFHUB_CACHE_DIR environment variable, because the absolute path of the tfhub directory is hardcoded inside the model when it is saved.
TensorFlow Lite (Android): https://www.tensorflow.org/lite
- Install Android Studio through Ubuntu Software
- Install Android SDK: Tools -> SDK Manager -> Android SDK -> SDK Tools
- Install emulator and create virtual device (optional): Tools -> AVD Manager
- Install Android OpenCV:
- Download OpenCV for Android: wget https://sourceforge.net/projects/opencvlibrary/files/opencv-android/3.4.1/opencv-3.4.1-android-sdk.zip/download && unzip download -d /tmp
- Import the OpenCV module: File -> New -> Import Module, add /tmp/OpenCV-android-sdk/sdk/java and resolve automatically using Android Studio (first sync the project, then do the refactor)
- Add the OpenCV module to the project: File -> Project Structure -> app Dependencies -> add the OpenCV module with the + mark
- Copy the native libs into the Android app: cp -r /tmp/OpenCV-android-sdk/sdk/native/libs {app_dir}/app/src/main && mv {app_dir}/app/src/main/libs {app_dir}/app/src/main/jniLibs
- Change build.gradle (Module: openCV) so it uses the same compileSdkVersion and targetSdkVersion as build.gradle (Module: app)
- Install Android TensorFlow:
- Add implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly' to dependencies in build.gradle(Module:app) and sync project
- Copy assets (ImageNetLabels.txt and model.tflite) to the Android app: cp -r /tmp/assets {app_dir}/app/src/main
- Put the phone into developer mode and load the application
- Add camera permissions: Device settings -> Applications -> App -> Permissions -> turn on Camera (or use Dexter to request them from the application)