Text detector based on the FCOS architecture with a MobileNetV2-like backbone, intended for indoor/outdoor scenes with mostly horizontal text.
The key benefit of this model compared to the base model is its smaller size and faster performance.
| Metric                                                          | Value    |
| ---------------------------------------------------------------- | -------- |
| F-measure (harmonic mean of precision and recall on ICDAR2013)  | 88.45%   |
| GFlops                                                           | 7.78     |
| MParams                                                          | 2.26     |
| Source framework                                                 | PyTorch* |
Image, name: `input`, shape: `1, 3, 704, 704` in the format `1, C, H, W`, where:

- `C` - number of channels
- `H` - image height
- `W` - image width

Expected color order - BGR.
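For illustration, below is a minimal preprocessing and inference sketch using the OpenVINO Python API. The model path, image path, and device name are assumptions and should be adjusted to the actual setup.

```python
import cv2
import numpy as np
from openvino.runtime import Core

# Hypothetical paths - replace with the actual IR and image locations.
MODEL_XML = "text-detection.xml"
IMAGE_PATH = "sample.jpg"

core = Core()
model = core.read_model(MODEL_XML)
compiled = core.compile_model(model, "CPU")

# OpenCV loads images in BGR order, which matches the expected color order.
image = cv2.imread(IMAGE_PATH)
resized = cv2.resize(image, (704, 704))                            # W x H
blob = resized.transpose(2, 0, 1)[np.newaxis].astype(np.float32)   # 1, C, H, W

results = compiled([blob])  # dict-like: output port -> numpy array
```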
- The `boxes` output is a blob with the shape `100, 5` in the format `N, 5`, where `N` is the number of detected bounding boxes. For each detection, the description has the format [`x_min`, `y_min`, `x_max`, `y_max`, `conf`], where:

  - (`x_min`, `y_min`) - coordinates of the top left bounding box corner
  - (`x_max`, `y_max`) - coordinates of the bottom right bounding box corner
  - `conf` - confidence for the predicted class

- The `labels` output is a blob with the shape `100` in the format `N`, where `N` is the number of detected bounding boxes. For text detection, it is equal to `0` for each detected box.
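As a sketch of how these outputs can be consumed, the helper below filters detections by confidence; the function name and the 0.4 threshold are illustrative choices, not part of the model definition.

```python
import numpy as np

def filter_text_boxes(boxes: np.ndarray, labels: np.ndarray, conf_threshold: float = 0.4):
    """Keep detections whose confidence exceeds the threshold.

    boxes  - array of shape (N, 5): [x_min, y_min, x_max, y_max, conf] per row.
    labels - array of shape (N,): always 0 for this text detector.
    The 0.4 default is an arbitrary example value.
    """
    kept = []
    for (x_min, y_min, x_max, y_max, conf), label in zip(boxes, labels):
        if conf >= conf_threshold:
            kept.append({
                "box": (int(x_min), int(y_min), int(x_max), int(y_max)),
                "conf": float(conf),
                "label": int(label),  # 0 = text
            })
    return kept
```

Assuming the output names match those listed above, the raw arrays can be fetched from the inference sketch as `results[compiled.output("boxes")]` and `results[compiled.output("labels")]`.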
The OpenVINO Training Extensions provide a training pipeline that allows fine-tuning the model on a custom dataset.
[*] Other names and brands may be claimed as the property of others.