Object detection Example with float32 model #18
Comments
Can you share your console log corresponding to your print statements? This looks like an input/output format mismatch issue; I would be able to help better with some debug output. Thanks.
Thanks. All of the print output:
Restarted application in 626ms.
flutter: Interpreter Created Successfully
flutter: _inputShape[0]: 1
flutter: _inputShape[1]: 256
flutter: _inputShape[2]: 256
flutter: _inputShape[3]: 3
flutter: _outputShape[0]: 1
flutter: _outputShape[1]: 256
flutter: _outputShape[2]: 256
flutter: _outputType: TfLiteType.float32
flutter: _intputType: TfLiteType.float32
image_picker: compressing is not supported for type (null). Returning the image with original quality
flutter: input image data type: TfLiteType.uint8
flutter: crop size: 3024
flutter: input image width: 256
flutter: input image height: 256
flutter: input image data type: TfLiteType.uint8
flutter: Time to load image: 66 ms
flutter: input buffer: Instance of '_ByteBuffer'
flutter: output buffer: Instance of '_ByteBuffer'
flutter: ⛔ Bad state: failed precondition
flutter: #0      checkState
package:quiver/check.dart:73
#1      Tensor.setTo
package:tflite_flutter/src/tensor.dart:150
#2      Interpreter.runForMultipleInputs
package:tflite_flutter/src/interpreter.dart:194
#3      Interpreter.run
package:tflite_flutter/src/interpreter.dart:165
#4      Classifier.predict
package:tensorflow_poc/classifier.dart:113
#5      _MyHomePageState._predict
package:tensorflow_poc/main.dart:69
#6      _MyHomePageState.getImage.<anonymous closure>
package:tensorflow_poc/main.dart:63
#7      State.setState
package:flutter/…/widgets/framework.dart:1267
#8      _MyHomePageState.getImage
package:tensorflow_poc/main.dart:57
<asynchronous suspension>
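The log above shows the image loaded as `TfLiteType.uint8` while the model's input tensor is `TfLiteType.float32` — a byte-size mismatch that would trip the `checkState` precondition inside `Tensor.setTo`. A minimal sketch of fixing this (assuming the tflite_flutter_helper API; the `preprocess` function name is illustrative, the 256×256 shape is taken from the log) is to build the `TensorImage` with the tensor's own type instead of the uint8 default:

```dart
import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

TensorImage preprocess(Interpreter interpreter, img.Image image) {
  // Create the TensorImage with the model's input type (float32 here)
  // rather than the default uint8, so the buffer size passed to
  // Tensor.setTo matches the tensor's byte size.
  final inputType = interpreter.getInputTensor(0).type;
  var tensorImage = TensorImage(inputType);
  tensorImage.loadImage(image);

  // Resize to the 1x256x256x3 input and scale pixels into [0, 1].
  final processor = ImageProcessorBuilder()
      .add(ResizeOp(256, 256, ResizeMethod.BILINEAR))
      .add(NormalizeOp(0, 255))
      .build();
  return processor.process(tensorImage);
}
```

The normalization constants are an assumption; a given float model may expect a different mean/std (for example 127.5/127.5), so check the model's documentation.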
Hello, I am also new to TensorFlow (and sorry for my English, I'm from France), but I am probably facing the same issue as you, @funwithflutter. I built my own custom model from scratch (following this tutorial on YouTube) with my own dataset. When I convert it to TensorFlow Lite format, I end up with a float32[1, 320, 320, 3] input type, which seems to be the "standard" input type. I used netron.app to visualize the differences between @am15h's model (which is similar to the official one provided by TensorFlow) and mine. @am15h's model, just like the official TensorFlow one, uses quantization (explained here) to reduce the model size (as far as I can understand). One consequence is that the input type becomes uint8. In my case I do not want to use quantization; I would like to keep my float32 input type and still be able to perform object detection in a Flutter app. @am15h, would you have any resources (an online tutorial, a GitHub repo, or anything else) for performing object detection in a Flutter app without the quantization optimization? Or for using your package with a float32[1, size, size, 3] input type? Thank you! 🙂 Stacktrace from Android Studio with the float32-input custom model:
EDIT: Solved this by using flutter_tflite, thanks to its example app.
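A float32 detection model needs no dequantization on the output side either; the buffers just have to be allocated as floats. A hedged sketch (assuming an SSD-style model with four output tensors and the tflite_flutter/tflite_flutter_helper APIs; the tensor shapes are illustrative):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

void detect(Interpreter interpreter, TensorImage inputImage) {
  // SSD-style detection models typically expose four output tensors:
  // bounding-box locations, class indices, scores, and detection count.
  final outputLocations = TensorBufferFloat([1, 10, 4]);
  final outputClasses = TensorBufferFloat([1, 10]);
  final outputScores = TensorBufferFloat([1, 10]);
  final numDetections = TensorBufferFloat([1]);

  // Map each output tensor index to its destination buffer.
  final outputs = {
    0: outputLocations.buffer,
    1: outputClasses.buffer,
    2: outputScores.buffer,
    3: numDetections.buffer,
  };

  interpreter.runForMultipleInputs([inputImage.buffer], outputs);
  print('detections: ${numDetections.getDoubleValue(0)}');
}
```

The output tensor order varies between models, so verify it with `interpreter.getOutputTensor(i)` or a viewer such as netron.app before hard-coding the indices.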
Hi, I am very sorry for the late reply; I missed these notifications. I would suggest using this implementation class instead of the ClassifierQuant one. Let me know if you can get it working this way. Please feel free to ask if you need more help with this.
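For context, the float and quantized classifier variants in the image classification example differ essentially in the normalization they apply around inference. A rough sketch of that difference (the class and getter names are assumptions based on the example's structure, not verbatim code):

```dart
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

// Float model: scale raw uint8 pixels [0, 255] into [0, 1] before
// inference; outputs are already floats, so no dequantization needed.
class ClassifierFloat {
  NormalizeOp get preProcessNormalizeOp => NormalizeOp(0, 255);
  NormalizeOp get postProcessNormalizeOp => NormalizeOp(0, 1);
}

// Quantized model: feed raw uint8 values unchanged, then map the
// quantized uint8 outputs back into [0, 1] probabilities.
class ClassifierQuant {
  NormalizeOp get preProcessNormalizeOp => NormalizeOp(0, 1);
  NormalizeOp get postProcessNormalizeOp => NormalizeOp(0, 255);
}
```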
Thank you for this! Would you have an example for float models with real-time object detection?
@am15h, any example with the object detection part, please? Working with a float32-input model.
I am transferring this issue to https://github.com/am15h/object_detection_flutter.
Sorry, I don't know what to label this issue as. I think it's more likely an error on my side than something wrong with the package. Any help will be appreciated (I'm new to TensorFlow in general). Also, thanks for the amazing package!
I'm trying to use this model: https://tfhub.dev/intel/lite-model/midas/v2_1_small/1/lite/1
It computes depth from an image.
As far as I can see I'm doing all the necessary steps: I copied the code from your image classification example and double-checked against the Android example provided in the link above.
I'm using the tflite_flutter_helper package.
I'm getting a `failed precondition` error in Quiver at the following point (when I call `interpreter.run`). Stacktrace:
Something that also has me confused is that `interpreter.getInputTensor(0).type` returns `TfLiteType.float32`, but I expected this to be `uint8` from the model description.
Below is my classifier class (I'm using this classifier in the Image Classification example from this package):
And implementation class:
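When a model card and the runtime disagree like this, the interpreter's tensor metadata is the authoritative source. A small sketch of inspecting it (the asset path is illustrative; the shapes in the comments are the ones printed in the log above):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> inspectModel() async {
  // Load the model bundled as a Flutter asset (path is an assumption).
  final interpreter = await Interpreter.fromAsset('midas_v2_1_small.tflite');

  // Print what the interpreter actually expects, rather than trusting
  // the model description on the hub page.
  final input = interpreter.getInputTensor(0);
  final output = interpreter.getOutputTensor(0);
  print('input  shape=${input.shape} type=${input.type}');   // e.g. [1, 256, 256, 3], float32
  print('output shape=${output.shape} type=${output.type}'); // e.g. [1, 256, 256], float32

  interpreter.close();
}
```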