
Precondition failed when attempting to use with a custom model #49

Open
ralienpp opened this issue Dec 24, 2021 · 11 comments

@ralienpp

Hi, I am trying to use the library with a custom model that takes an image as input and produces a mask as output; it was originally converted from PyTorch. The model in question is u2net, which removes backgrounds from images.

I get the following error when invoking interpreter.run:

E/flutter ( 3988): [ERROR:flutter/lib/ui/ui_dart_state.cc(209)] Unhandled Exception: Bad state: failed precondition
E/flutter ( 3988): #0      checkState (package:quiver/check.dart:74:5)
E/flutter ( 3988): #1      Tensor.setTo (package:tflite_flutter/src/tensor.dart:146:5)
E/flutter ( 3988): #2      Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:186:33)
E/flutter ( 3988): #3      Interpreter.run (package:tflite_flutter/src/interpreter.dart:157:5)
E/flutter ( 3988): #4      removeBackground (bgremover_local_ex.dart:58:17)
E/flutter ( 3988): <asynchronous suspension>

I assume the problem is incorrectly formatted input data, but through experimentation I've ruled out the obvious issues, so I've reached the stage where I'd like to get some troubleshooting tips from others.

Here are some relevant context details:

  • the input is TfLiteType.float32, shape: [1, 320, 320, 3], data: 1228800 (a 320x320 RGB image)
  • the output is TfLiteType.float32, shape: [1, 320, 320, 1], data: 409600 (a grayscale 320x320 image)
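(For reference, those data values are consistent with float32 byte counts: 1 × 320 × 320 × 3 elements × 4 bytes = 1,228,800 bytes for the input, and 1 × 320 × 320 × 1 × 4 = 409,600 bytes for the output.)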

Here's an outline of what I do:

    final interpreter = await tfl.Interpreter.fromAsset(
        'magic_float32.tflite');

    ImageProcessor imageProcessor = ImageProcessorBuilder()
        .add(ResizeOp(320, 320, ResizeMethod.NEAREST_NEIGHBOUR))
        .build();
    TensorImage tensorImage = TensorImage.fromFile(File('images/input.jpg'));
    tensorImage = imageProcessor.process(tensorImage);

    // I presume that at this step the image is resized to the right shape,
    // though I also experimented with manual a priori resizing, just to make
    // sure the issue is not at this stage:
    // Image imgIn = decodeJpg(File('images/input.jpg').readAsBytesSync());
    // Image imgResized = copyResize(imgIn, width: 320, height: 320);
    // TensorImage tensorImage = TensorImage.fromImage(imgResized);

    TensorBuffer outputBuffer = TensorBuffer.createFixedSize(
        interpreter.getOutputTensor(0).shape,
        interpreter.getOutputTensor(0).type);

    interpreter.run(tensorImage.buffer, outputBuffer);

When run is invoked I get high CPU use for a few minutes (running this in an Android Emulator), and then the program crashes with the error above. Running it on an actual smartphone yields the same results.

Running the original model with PyTorch takes less than a second on the same hardware, so I am sure this is not inherent complexity in the model (especially since the tflite version is simplified, so it should probably be faster).

How can I understand the root cause of this problem and address it?

@dewball345

This usually happens when the input that you give to the model is the wrong shape or type. I suggest printing out the shapes and types of your tensors to see if there are any mismatches. Helped me a lot with this issue :)
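For example, a minimal sketch of that check with the Interpreter API (the tensor getters below are the ones used elsewhere in this thread; tensorImage is the TensorImage from your snippet):

    // Print what the model actually expects vs. what you are feeding it:
    final inputTensor = interpreter.getInputTensor(0);
    final outputTensor = interpreter.getOutputTensor(0);
    print('model input:  ${inputTensor.shape} ${inputTensor.type}');
    print('model output: ${outputTensor.shape} ${outputTensor.type}');
    // Your actual buffer size, for comparison. If I remember right,
    // TensorImage defaults to uint8 storage, so a float32 model would
    // expect 4x the bytes - exactly the kind of mismatch to look for:
    print('my input: ${tensorImage.buffer.lengthInBytes} bytes');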

@tiofabby

tiofabby commented Jun 8, 2022

Hi @dewball345, I am facing the same issue with the audio classifier example code: the example's model has shape [15600], while mine has shape [1, 15600]. I am not sure what changes I should make in the predict() function, in these lines:

    TensorAudio tensorAudio = TensorAudio.create(
        TensorAudioFormat.create(1, sampleRate), _inputShape[0]);
    tensorAudio.loadShortBytes(bytes);

so that it fits the model I have... Would you know?
Thank you
Fabrice

@dewball345

@tiofabby Another important thing - make sure the data types match (e.g. if your input is float32 but the model accepts float16, this error will occur).
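A hedged sketch of that check, computing the expected byte count from the shape (assuming a float32 input at 4 bytes per element; bytes is the Uint8List from your predict()):

    final t = interpreter.getInputTensor(0);
    // Total element count, e.g. 1 * 15600:
    final elements = t.shape.reduce((a, b) => a * b);
    // float32 = 4 bytes per element:
    print('model wants ${elements * 4} bytes of ${t.type}');
    print('you are passing ${bytes.buffer.lengthInBytes} bytes');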

@tiofabby

tiofabby commented Jun 9, 2022

Hi @dewball345, thanks for your quick feedback. My data type is okay, it is float32 like the one in the example, but my input is two-dimensional [1, 15600], unlike the example's one-dimensional input. So I am trying to understand how I can fill the input buffer with a two-dimensional array instead of a one-dimensional one. I tried different things without success. I cannot find a TensorAudio function that allows this, and I am also not sure whether I should use runForMultipleInputs for this two-dimensional case.
Any help would be greatly appreciated as I feel stuck.
Thank you
Fabrice

@dewball345

@tiofabby is there a way to convert the TensorAudio to a raw byte stream? From there, there should be an attribute to get the type.
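Something along these lines, maybe (a sketch only; tensorBuffer and getBuffer() appear in your code already, but getDataType() is my assumption that the Dart helper mirrors the Java TFLite Support API):

    // TensorAudio wraps a TensorBuffer, which exposes the raw bytes:
    final tb = tensorAudio.tensorBuffer;
    print(tb.getBuffer().lengthInBytes); // raw byte size
    print(tb.getDataType());             // assumed API: the stored type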

@tiofabby

tiofabby commented Jun 9, 2022

Hi @dewball345, I cannot find a way to convert the TensorAudio to a raw byte stream... and I am not sure I get your point either...
I am trying to create an _inputBuffer so that I can feed a two-dimensional audio model input with _inputShape = [1, 15600]:

  Future<void> loadModel() async {
    try {
      interpreter = await Interpreter.fromAsset(_modelFileName,
          options: _interpreterOptions);
      _inputShape = interpreter.getInputTensor(0).shape;
      _inputType = interpreter.getInputTensor(0).type;
      _outputShape = interpreter.getOutputTensor(0).shape;
      _outputType = interpreter.getOutputTensor(0).type;

      _inputBuffer = TensorBuffer.createFixedSize(_inputShape, _inputType);
      _outputBuffer = TensorBuffer.createFixedSize(_outputShape, _outputType);

    } catch (e) {
      print('Unable to create interpreter, Caught Exception: ${e.toString()}');
    }
  }

  List<Category> predict(List<int> audioSample) {
    final pres = DateTime.now().millisecondsSinceEpoch;
    Uint8List bytes = Uint8List.fromList(audioSample);
    TensorAudio tensorAudio = TensorAudio.create(
        TensorAudioFormat.create(1, sampleRate), _inputShape[1]);
    tensorAudio.loadShortBytes(bytes);

    _inputBuffer.loadBuffer(tensorAudio.tensorBuffer.getBuffer(),
        shape: _inputShape);

    final pre = DateTime.now().millisecondsSinceEpoch - pres;
    final runs = DateTime.now().millisecondsSinceEpoch;

    interpreter.run(_inputBuffer.getBuffer(), _outputBuffer.getBuffer());

but interpreter.run() fails with "Null check operator used on a null value".

So I guess this is not the way multidimensional inputs should be handled...
I hope this piece of code helps to better explain my concern.
Thank you!

@dewball345

> Hi @dewball345, I cannot find a way to convert the TensorAudio to a raw byte stream... and I am not sure I get your point either... I am trying to create an _inputBuffer so that I can feed a two-dimensional audio model input with _inputShape = [1, 15600] [...]

Thanks for the code - I asked for the types because if the types aren't the same you will get this error. Could you change the Uint8List variable bytes to a Float32List?
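As an untested aside: tflite_flutter's run() also accepts plain nested Dart lists, so for a [1, 15600] float32 input you may be able to bypass TensorAudio and TensorBuffer entirely, something like:

    // Feed the samples as a nested list matching shape [1, 15600]:
    final input = [audioSample.map((s) => s.toDouble()).toList()];
    // Shape the output to match the model, e.g. [1, numClasses], where
    // numClasses is whatever _outputShape reports for your model:
    final output = [List<double>.filled(_outputShape[1], 0.0)];
    interpreter.run(input, output);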

@tiofabby

tiofabby commented Jun 9, 2022

Sure @dewball345, I just tried that, but I am still getting the same error:
Null check operator used on a null value
with the following changes, which I understood you to be suggesting:

  List<Category> predict(List<int> audioSample) {

    List<double> audioSampleDoubles =
        audioSample.map((i) => i.toDouble()).toList();
    Float32List bytes = Float32List.fromList(audioSampleDoubles);
    TensorAudio tensorAudio = TensorAudio.create(
        TensorAudioFormat.create(1, sampleRate), _inputShape[1]);

    tensorAudio.loadDoubleList(bytes);

    _inputBuffer.loadBuffer(tensorAudio.tensorBuffer.getBuffer(),
        shape: _inputShape);

    interpreter.run(_inputBuffer.getBuffer(), _outputBuffer.getBuffer());

@dewball345

@tiofabby - likely something being passed is null? I honestly don't know how to fix that problem.
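If I had to guess (and it is only a guess): loadModel() catches and swallows any exception, so if Interpreter.fromAsset fails, fields like _inputShape are never assigned, and the next access to them blows up with a null check error. Rethrowing would surface the real failure:

    } catch (e) {
      print('Unable to create interpreter, Caught Exception: ${e.toString()}');
      rethrow; // surface the original failure instead of a later null check
    }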

@SanaSizmic

> Hi @dewball345, I am facing the same issue with the audio classifier example code: the example's model has shape [15600], while mine has shape [1, 15600]. [...]

@tiofabby Have you managed to fix this yet? I also have the same issue.

@tiofabby

tiofabby commented Aug 4, 2022

Hi @SanaSizmic, no, I have not been able to find a way to fix this. If you do, please let me know... I would be happy to get it fixed too.
