This is a platform for real-time visualization of Convolutional Neural Networks.
The aim of the platform is to be a handy tool for quick, interactive analysis of networks.
Activation maps of convolutional layers as well as activations of fully connected layers are visualized. Visualized activations can be clicked interactively to apply more advanced visualization techniques to the corresponding neurons.
The visualization FPS is roughly 0.4× the FPS of the visualized network itself. For example, ResNet50 is visualized at ~40 FPS on a GTX 1080 Ti. This is achieved by building a single graph for all the visualizations, so that for a given input frame all the visualizations required at that moment are computed on the GPU in a single pass through the graph, without extra forward and backward data transfers to and from the GPU.
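Conceptually, this amounts to one Keras model whose outputs are the predictions plus every activation tensor to be shown, so a single call per frame returns everything. Below is a minimal sketch of that idea using tf.keras with illustrative layer names; it is not the repository's actual code.

```python
# Sketch of the single-graph idea: one model, many outputs, one pass per frame.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model

base = ResNet50(weights="imagenet")

# Tensors to visualize (layer names are illustrative, not a fixed choice).
activations = [base.get_layer(name).output
               for name in ("conv1_relu", "conv5_block3_out")]

# Predictions and all activation maps come out of the same graph.
viz_model = Model(inputs=base.inputs, outputs=[base.output] + activations)

# preds, act_early, act_late = viz_model.predict(frame_batch)  # single GPU pass
```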
It is recommended to run on a GPU, as with the CPU version the FPS will be very low. To run on a GPU, the following are also required:
- Recent NVIDIA drivers (`nvidia-384` on Ubuntu)
- NVIDIA Docker (a quick check is shown below)
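To verify that the GPU is reachable from inside Docker, a generic NVIDIA Docker smoke test can be used (not part of this repository; the CUDA image tag is only an example):

```
docker run --runtime nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```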
GPU:

```
docker build -t basecv . # Build Docker image which contains all the requirements
docker run --runtime nvidia --env DISPLAY=$DISPLAY -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v=$(pwd)/.keras:/root/.keras -v="$(pwd)/..:$(pwd)/.." -w=$(pwd) -it basecv python3 main.py --stream "your/stream/uri"
```
CPU:

```
docker build -t basecv -f Dockerfile.cpu . # Build Docker image which contains all the requirements
docker run --env DISPLAY=$DISPLAY -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v=$(pwd)/.keras:/root/.keras -v="$(pwd)/..:$(pwd)/.." -w=$(pwd) -it basecv python3 main.py --stream "your/stream/uri"
```
```
python3 main.py -h # Gives information on available parameters

usage: main.py [-h] [--stream STREAM] [--network NETWORK]

optional arguments:
  -h, --help         show this help message and exit
  --stream STREAM    Video stream URI, webcam number or path to a video based
                     on which the network is visualized
  --network NETWORK  Network to visualise: One of built in keras applications
                     (VGG16, ResNet50 ...) or path to .h5 file
```
For example, one could visualize YOLO by converting it to Keras as described in https://github.com/qqwweee/keras-yolo3 and then passing the path to the resulting .h5 file via the --network parameter.
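A hypothetical invocation with such a converted model and the default webcam would look like this (the .h5 path is a placeholder):

```
python3 main.py --network path/to/yolo.h5 --stream 0
```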
The X Server should allow connections from the Docker container: run `xhost +local:docker`.
Currently available algorithms include Grad-CAM (see below). The platform is extendable with other algorithms whose required computation is on the order of a forward/backward pass through the network.
Visualization algorithms reside in single files and can also be applied to still images:
```
$ python3 gradcam.py -h
usage: gradcam.py [-h] [-i INPUT] [-o OUTPUT] [-n NETWORK]
                  [--convindex CONVINDEX]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input image
  -o OUTPUT, --output OUTPUT
                        Output Image
  -n NETWORK, --network NETWORK
                        Network (VGG16,ResNet50 ...)
  --convindex CONVINDEX
                        Index of convolutional layer to use in the algorithm
                        (-1 for last layer)
```
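For example, to apply Grad-CAM to a single image with ResNet50 and its last convolutional layer (the file names below are placeholders):

```
python3 gradcam.py -i input.jpg -o output.jpg -n ResNet50 --convindex -1
```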
Example Grad-CAM results for ResNet50 and VGG16, computed on the last and second-to-last convolutional layers, for the top-1 and top-2 predicted classes.