Faster and more precise than Grad-CAM.
The results above were measured under the following conditions:
- The image size is 96×96.
- We used MobileNet V2 with ArcFace.
- Processing time was measured on Google Colaboratory (Tesla P4).
Adapting Grad-CAM for Embedding Networks (arXiv, Jan 2020)
We made the following changes:
- Replaced the Triplet loss with ArcFace.
- Reduced the number of k-means clusters from 50 to 10.
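The k-means change follows the paper's idea: Grad-CAM channel weights are computed by backprop offline, averaged per cluster of training embeddings, and at inference the heatmap is produced by a simple cluster lookup instead of a backward pass. A minimal sketch of that lookup (names, shapes, and the random stand-in data are illustrative, not taken from this repo):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)

# Offline (illustrative data): embeddings of training images and the
# Grad-CAM channel weights computed for each of them by backprop.
embeddings = rng.rand(200, 16)          # (n_images, embed_dim)
gradcam_weights = rng.rand(200, 32)     # (n_images, n_channels)

# Cluster the embeddings (this repo uses 10 clusters instead of 50)
# and store the average Grad-CAM weights per cluster.
kmeans = KMeans(n_clusters=10, random_state=0, n_init=10).fit(embeddings)
cluster_weights = np.stack([
    gradcam_weights[kmeans.labels_ == c].mean(axis=0) for c in range(10)
])

def fast_cam(feature_map, embedding):
    """Heatmap without backprop: look up the precomputed weights of
    the cluster nearest to this image's embedding."""
    c = kmeans.predict(embedding[None, :])[0]
    cam = np.tensordot(feature_map, cluster_weights[c], axes=([2], [0]))
    return np.maximum(cam, 0)           # ReLU, as in Grad-CAM

heatmap = fast_cam(rng.rand(3, 3, 32), embeddings[0])
```

Because the per-cluster weights are fixed after training, inference cost drops to one nearest-centroid lookup plus a weighted sum of feature maps.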
- Keras 2.2.4
- TensorFlow 1.9.0
- scikit-learn 0.19.0
- OpenCV 3.4.3.18
- Raspberry Pi 3 Model B (used for the results below) or a PC
Run the following command:

python3 janken_demo.py

Press [s] to switch modes (like object detection).
Details are here (Japanese).
See Train_Faster-Grad-CAM.ipynb.
- Replace MobileNet V2 with V3, because V3 is faster than V2 on CPU.
- Replace the Raspberry Pi 3 with a Pi 4 (or Jetson Nano).
- Apply quantization like this.
1. Anomaly detection
When combined with self-supervised learning, the anomalous region can be visualized with Faster-Grad-CAM.
In the next example, a circle is normal, while images with an extra line or a missing line are anomalies.
In the result above, only normal images were used for training!
Real-time visualization looks like this.
You can run anomaly detection and visualization at the same time.
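Once Faster-Grad-CAM produces a heatmap, the anomalous region can be localized by normalizing and thresholding it. A minimal sketch (the threshold value and array shapes are illustrative, not taken from this repo):

```python
import numpy as np

def localize_anomaly(heatmap, threshold=0.7):
    """Return a binary mask and the bounding box of the hot region."""
    h = heatmap - heatmap.min()
    h = h / (h.max() + 1e-8)          # normalize to [0, 1]
    mask = h >= threshold
    if not mask.any():
        return mask, None             # nothing anomalous enough
    ys, xs = np.where(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # x0, y0, x1, y1
    return mask, bbox

# Toy heatmap with one hot spot at row 2, column 3
hm = np.zeros((6, 6))
hm[2, 3] = 1.0
mask, bbox = localize_anomaly(hm)
print(bbox)  # (3, 2, 3, 2)
```

In the real-time demo this would run on every frame, drawing the mask or box over the camera image.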
2. Auto-Annotation
Auto-Annotation is based on Grad-CAM and Bayesian optimization.
Using Faster-Grad-CAM instead of Grad-CAM reduces the total time by 25% (from 20 s to 15 s).
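The idea behind Auto-Annotation is to binarize the heatmap into a bounding box and tune the binarization threshold against a few hand-labelled boxes. The sketch below uses a plain grid search where the repo uses Bayesian optimization, and all data, shapes, and the reference box are illustrative:

```python
import numpy as np

def bbox_from_heatmap(heatmap, threshold):
    """Binarize the heatmap and return (x0, y0, x1, y1), or None."""
    mask = heatmap >= threshold
    if not mask.any():
        return None
    ys, xs = np.where(mask)
    return (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Toy heatmap with an "object" region and one hand-labelled box
rng = np.random.RandomState(0)
hm = rng.rand(8, 8) * 0.3               # background below 0.3
hm[2:6, 1:5] += 0.7                     # object region at/above 0.7
ref_box = (1, 2, 5, 6)                  # x0, y0, x1, y1

# Grid search over thresholds (the repo tunes this by Bayesian optimization)
best = max((iou(bbox_from_heatmap(hm, t), ref_box), t)
           for t in np.linspace(0.35, 0.65, 7)
           if bbox_from_heatmap(hm, t) is not None)
```

With the tuned threshold, heatmaps from unlabelled images can be converted into boxes automatically, which is where Faster-Grad-CAM's speed-up cuts the total annotation time.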
- janken_dataset (karaage0703)