Precondition failed when attempting to use with a custom model #49
Comments
This usually happens when the input you give to the model has the wrong shape or type. I suggest printing out the shapes and types of your tensors to see if there are any mismatches. That helped me a lot with this issue :)
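The shape/type check suggested above can be sketched with numpy (Python here purely for illustration; the thread itself is Dart/Flutter, and the [1, 15600] spec is taken from later in this thread):

```python
import numpy as np

# Expected input spec, as reported later in this thread: Float32, shape [1, 15600]
expected_shape = (1, 15600)
expected_dtype = np.float32

# A candidate input buffer with the one-dimensional "example" shape
audio = np.zeros(15600, dtype=np.float32)

print("got     :", audio.shape, audio.dtype)
print("expected:", expected_shape, expected_dtype)

# A mismatch like this is exactly the kind of thing that triggers the precondition error
print("shape matches:", audio.shape == expected_shape)  # False: (15600,) != (1, 15600)
print("dtype matches:", audio.dtype == expected_dtype)  # True
```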
Hi @dewball345, I am facing the same issue with the audio classifier example code: the example's model has input shape [15600], while mine has shape [1, 15600]. I am not sure what changes I should make to the predict() function inside these lines so it fits the model I have... Would you know?
@tiofabby Another important thing: make sure the data types match (e.g. if the input is Float32 but the model accepts Float16, this error will occur).
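A quick numpy illustration of why the dtype matters (Python for illustration, not the tflite_flutter API): Float32 and Float16 buffers holding the same values have different byte sizes, so an interpreter checking the input buffer size against the expected type will reject the mismatched one.

```python
import numpy as np

x32 = np.array([0.1, 0.2, 0.3], dtype=np.float32)
x16 = x32.astype(np.float16)  # explicit down-cast

# Same element count, half the bytes: a model expecting Float16 input
# will reject a Float32 buffer (and vice versa) on a size/type check.
print(x32.nbytes, x16.nbytes)  # 12 6
```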
Hi @dewball345, thanks for your quick feedback. My data type is okay; it is Float32, the same as in the example. But my input is two-dimensional [1, 15600], unlike the one-dimensional input of the example. So I am trying to understand how I can fill the input buffer with a two-dimensional array instead of a one-dimensional one...? I tried different things without success. I cannot find a TensorAudio function that allows me to do that, and I am also not sure whether I should use runForMultipleInputs for the two-dimensional case I have..?
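One way to think about the [15600] vs [1, 15600] question (a numpy sketch, not the tflite_flutter API): adding a leading batch dimension does not change the underlying bytes at all, so the same raw buffer can back either shape; only the shape metadata handed to the interpreter differs.

```python
import numpy as np

flat = np.arange(15600, dtype=np.float32)  # 1-D audio buffer, shape (15600,)
batched = flat.reshape(1, -1)              # add a batch dimension -> (1, 15600)

print(batched.shape)                        # (1, 15600)
# The raw bytes are identical; only the shape metadata changed.
print(batched.tobytes() == flat.tobytes())  # True
```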
@tiofabby Is there a way to convert TensorAudio to a raw byte stream? From there, there should be an attribute to get the type.
Hi @dewball345, I cannot find a way to convert TensorAudio to a raw byte stream... and I am not sure I get your point either...
but interpreter.run() fails with <> so I guess this is not the way multidimensional inputs should be handled....
Thanks for the code - I asked for the types because if the types aren't the same you will get this error. Could you change the UInt8List variable bytes to a Float32 list?
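On converting a byte buffer to a Float32 list: the key is to reinterpret the bytes, not to cast each byte to a float. A numpy sketch (Python for illustration; in Dart the analogue would be viewing the UInt8List's underlying buffer as a Float32List rather than mapping elements):

```python
import numpy as np

# Raw bytes as they might come from a UInt8List (here: four float32 values)
raw = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32).tobytes()

good = np.frombuffer(raw, dtype=np.float32)                  # reinterpret: 4 floats
bad = np.frombuffer(raw, dtype=np.uint8).astype(np.float32)  # per-byte cast: 16 "floats"

print(good)      # [1. 2. 3. 4.]
print(bad.size)  # 16 -- wrong length and wrong values
```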
Sure @dewball345, I just tried that but am still getting the same error:
@tiofabby Likely something being passed is null? I honestly don't know how to fix that problem.
@tiofabby Have you managed to fix this yet? I also have the same issue.
Hi @SanaSizmic, no, I have not been able to find a way to fix this. If you do, please let me know... I would be happy to get this fixed too.
Hi, I am trying to use the library with a custom model that takes an image as input and produces a mask as output; it was originally converted from PyTorch. The model in question is u2net, which removes backgrounds from images. I get the following error when invoking interpreter.run:

I assume the problem is incorrectly formatted input data, but through experimentation I've ruled out the obvious issues, so I've reached the stage where I'd like to get some troubleshooting tips from others.
Here are some relevant context details:

- Input: TfLiteType.float32, shape: [1, 320, 320, 3], data: 1228800 (a 320x320 image in RGB)
- Output: TfLiteType.float32, shape: [1, 320, 320, 1], data: 409600 (a grayscale 320x320 image)

Here's an outline of what I do:
When run is invoked I get high CPU use for a few minutes (running this in an Android emulator), and then the program crashes with the error above. Running it on an actual smartphone yields the same results. Running the original model with PyTorch takes less than a second on the same hardware, so I am sure this is not inherent complexity in the model (especially since the tflite version is simplified, so it should probably be faster). How can I understand the root cause of this problem and address it?
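For what it's worth, the input spec above pins down the buffer exactly: [1, 320, 320, 3] Float32 is 307200 elements, i.e. 1228800 bytes, which matches the "data: 1228800" figure. A numpy sketch of building such a buffer from a decoded RGB image (Python for illustration; the /255 scaling is an assumption, and u2net variants may additionally apply mean/std normalization):

```python
import numpy as np

h, w = 320, 320
rgb = np.zeros((h, w, 3), dtype=np.uint8)  # stand-in for a decoded 320x320 RGB image

x = rgb.astype(np.float32) / 255.0  # scale to [0, 1] (assumed normalization)
x = np.expand_dims(x, axis=0)       # add batch dimension -> NHWC [1, 320, 320, 3]

print(x.shape, x.dtype)  # (1, 320, 320, 3) float32
print(x.nbytes)          # 1228800 -- matches the reported input buffer size
```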