Could not use custom model in object detection #22
Comments
Routing to @fanzhanggoogle @chuoling
One issue is that to use a quantized model, you should add
Please pull the latest commits; quantized models may not have been supported in the previous release.
Closing the issue. Please re-open if you still have issues.
Recently I tried to replace the model `ssdlite_object_detection.tflite` with my own custom model, trained with ssd_mobilenetv2_coco (a float model). To get the .tflite file, I used the script `export_ssdlite_graph.py` with the flag `add_postprocessing_op=False` as mentioned in the tutorial, then used TFLiteConverter to quantize the model (weights only) and obtain the .tflite graph. On the mobile side, I modified `model_path`, `label_map_path`, `num_classes`, and `num_boxes` (1917 in my case instead of 2034) in `object_detection_android_gpu.pbtxt`. I also replaced the model and label txt file in the objectdetectiongpu/BUILD file. The APK builds and installs without errors, but when I run inference on my phone, no bounding boxes are detected. Did I miss something? Thanks for your help!