Bad state: failed precondition #167

Closed

sDobrzanski opened this issue Dec 25, 2021 · 17 comments

Comments

@sDobrzanski

sDobrzanski commented Dec 25, 2021

Hello,
I'm working on an app that uses my own custom object detection .tflite model. I followed the code described in this tutorial:
https://github.com/am15h/object_detection_flutter but I'm getting a Bad state: failed precondition error on this line in tensor.dart:
checkState(tfLiteTensorCopyFromBuffer(_tensor, ptr.cast(), bytes.length) == TfLiteStatus.ok);
Here is my code:

  Future<List<Recognition>> detectObjects2(XFile imageFile) async {
    TensorImage tensorImage = await createTensorImage(imageFile);

    final Interpreter interpreter =
        await Interpreter.fromAsset('mobilenet.tflite');
    List<String> labels = await FileUtil.loadLabels("assets/labelmap.txt");

    var outputTensors = interpreter.getOutputTensors();

    List<List<int>> _outputShapes = [];
    List<TfLiteType> _outputTypes = [];

    for (var tensor in outputTensors) {
      _outputShapes.add(tensor.shape);
      _outputTypes.add(tensor.type);
    }
    TensorBuffer outputScores = TensorBufferFloat(_outputShapes[0]);
    TensorBuffer outputLocations = TensorBufferFloat(_outputShapes[1]);
    TensorBuffer numLocations = TensorBufferFloat(_outputShapes[2]);
    TensorBuffer outputClasses = TensorBufferFloat(_outputShapes[3]);
    List<Object> inputs = [tensorImage.buffer];

    Map<int, Object> outputs = {
      0: outputScores.buffer,
      1: outputLocations.buffer,
      2: numLocations.buffer,
      3: outputClasses.buffer,
    };

    interpreter.runForMultipleInputs(inputs, outputs);

    int resultsCount = min(15, numLocations.getIntValue(0));
    int labelOffset = 1;
    List<Recognition> recognitions = [];
    for (int i = 0; i < resultsCount; i++) {
      // Prediction score
      var score = outputScores.getDoubleValue(i);
      // Label string
      var labelIndex = outputClasses.getIntValue(i) + labelOffset;
      var label = labels.elementAt(labelIndex);

      if (score > 0.4) {
        recognitions.add(
          Recognition(i, label, score),
        );
      }
    }
    return recognitions;
  }

  Future<TensorImage> createTensorImage(XFile imageFile) async {
    final bytes = await File(imageFile.path).readAsBytes();
    final img.Image? image = img.decodeImage(bytes);
    TensorImage _inputImage = TensorImage(TfLiteType.float32);
    _inputImage.loadImage(image!);
    int padSize = max(_inputImage.height, _inputImage.width);
    ImageProcessor imageProcessor = ImageProcessorBuilder()
        .add(ResizeWithCropOrPadOp(padSize, padSize))
        .add(ResizeOp(300, 300, ResizeMethod.BILINEAR))
        .build();
    _inputImage = imageProcessor.process(_inputImage);
    return _inputImage;
  }

and console output:
[log] Error AiCubit: Bad state: failed precondition (log created by me, nothing comes from library itself)
Thanks in advance for any help ;)

@ralienpp

This is similar to what I described here: am15h/tflite_flutter_helper#49 - in case any discussion occurs there, you might want to be a part of it.

Note that over there I used TensorImage.fromFile(File('input.jpg')) to load my image directly with the primitives provided by the library, hoping that it would keep me from doing anything incorrect in the kind of hand-rolled loading you did. However, that didn't solve the problem.

@sDobrzanski
Author

sDobrzanski commented Dec 26, 2021

This can be closed; I just had to resize my image to match the input tensor, so I changed .add(ResizeOp(300, 300, ResizeMethod.BILINEAR)) to .add(ResizeOp(640, 640, ResizeMethod.BILINEAR)). @ralienpp in your case, try setting a float32 type on your input TensorImage, as I did in the createTensorImage function.
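
For reference, a minimal sketch of my corrected createTensorImage, with only the resize target changed (640x640 is what my model expects; read the size off your own input tensor rather than copying it):

  Future<TensorImage> createTensorImage(XFile imageFile) async {
    final bytes = await File(imageFile.path).readAsBytes();
    final img.Image? image = img.decodeImage(bytes);
    // float32 here must match the model's input tensor type.
    TensorImage inputImage = TensorImage(TfLiteType.float32);
    inputImage.loadImage(image!);
    int padSize = max(inputImage.height, inputImage.width);
    return ImageProcessorBuilder()
        .add(ResizeWithCropOrPadOp(padSize, padSize))
        // Resize to the model's input size, not a hard-coded 300x300.
        .add(ResizeOp(640, 640, ResizeMethod.BILINEAR))
        .build()
        .process(inputImage);
  }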

@ralienpp

Thank you for the hint. Indeed, after your remark I noticed that the type was uint8 (when using the method I described initially), while with your approach the type is float32. This got me successfully past that error.

Although I still don't get all the results I need, this is probably an issue with my model itself.
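
One way to avoid guessing the type is to read it off the interpreter before building the TensorImage; a sketch using the tflite_flutter Tensor API:

  // Sketch: match the TensorImage type to what the model actually declares,
  // instead of hard-coding uint8 or float32.
  final inputTensor = interpreter.getInputTensor(0);
  print('expected shape: ${inputTensor.shape}, type: ${inputTensor.type}');

  // tfLiteTensorCopyFromBuffer fails its precondition when the byte length
  // of the buffer differs from the tensor's, e.g. uint8 data for a float32 tensor.
  final tensorImage = TensorImage(inputTensor.type);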

@gabrielglbh

gabrielglbh commented Dec 29, 2021

It may not be an issue with your model. I was having the same trouble as you, @ralienpp.

  TensorImage _processImage(Image image) {
    TensorImage img = TensorImage(TfLiteType.float32);
    img.loadImage(image);
    return ImageProcessorBuilder()
      .add(ResizeOp(width, height, ResizeMethod.BILINEAR))
      .add(NormalizeOp(0, 255))
      .build()
      .process(img);
  }

  List<Category> predict(Image image) {
    //TensorImage im = _processImage(image);
    var _inputImage = List<List<double>>.generate(height, (i) =>
      List.generate(width, (j) => 0.0)).reshape<double>([1, height, width, 1]);

    for (int x = 0; x < height; x++) {
      for (int y = 0; y < width; y++) {
        double val = image[(x * width) + y].toDouble();
        val = val > 50 ? 1.0 : 0;
        _inputImage[0][x][y][0] = val;
      }
    }

    TensorBuffer outputBuffer = TensorBuffer.createFixedSize(
        interpreter.getOutputTensor(0).shape,
        interpreter.getOutputTensor(0).type);

    interpreter.run(_inputImage, outputBuffer);

    final probabilityProcessor = TensorProcessorBuilder().build();

    Map<String, double> labeledProb = TensorLabel.fromList(
            labels, probabilityProcessor.process(outputBuffer))
        .getMapWithFloatValue();

    final pred = getProbability(labeledProb).toList();
    List<Category> categories = [];

    for (int x = 0; x < pred.length; x++) {
      categories.add(Category(pred[x].key, pred[x].value));
    }

    return categories;
  }

If I uncomment the _processImage call and comment out the whole loop, the input shape [64, 64, 3] does not match the interpreter's input shape [1, 64, 64, 1]. But if I keep the for loop below _processImage and comment out _processImage, so that the loop populates the image to match the interpreter's input size, the Bad state error does not occur.

Although the error of this issue is "solved" for me, I get the exact same output every time I run the interpreter on different images: all 0.0 in the outputBuffer. I am thinking it might be some problem with the input, but I do not know.

Hope this helps someone. I am really keen to get this working!
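
When chasing these mismatches, it can help to log every tensor the model declares against what you actually feed it; a small debugging sketch with the tflite_flutter API:

  // Debugging sketch: dump the declared input/output tensors. The
  // "Bad state: failed precondition" is raised when the byte size of a
  // buffer you pass differs from the byte size of the tensor it fills.
  for (final t in interpreter.getInputTensors()) {
    print('input:  shape=${t.shape} type=${t.type}');
  }
  for (final t in interpreter.getOutputTensors()) {
    print('output: shape=${t.shape} type=${t.type}');
  }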

@espbee

espbee commented Dec 30, 2021

Have any of you run into an issue like this only after updating from 0.8.0?

@gabrielglbh

gabrielglbh commented Dec 30, 2021

I only started using this plugin at version 0.9.0, so I cannot tell, but I was about to post that I managed to find the solution and get proper predictions.

Following this comment's code:

  • Just remove the _processImage. Processing the image with TensorImage does not seem to be the best solution for custom models yet. You will have to process and transform the image by hand to match your model's input, as in the code below.
  import 'package:image/image.dart' as i;

  ...

  Map<String, double> predict(i.Image image) {
    final resizedImage = i.copyResize(image, width: width, height: height);

    var _inputImage = List<List<double>>.generate(height, (i) =>
        List.generate(width, (j) => 0.0)).reshape<double>([1, height, width, 1]);

    for (int x = 0; x < height; x++) {
      for (int y = 0; y < width; y++) {
        double val = resizedImage[(x * width) + y].toDouble();
        val = val > 50 ? 1.0 : 0;
        _inputImage[0][x][y][0] = val;
      }
    }

    TensorBuffer outputBuffer = TensorBuffer.createFixedSize(
      interpreter.getOutputTensor(0).shape,
      interpreter.getOutputTensor(0).type);

    interpreter.run(_inputImage, outputBuffer.getBuffer());

    final probabilityProcessor = TensorProcessorBuilder()
        .add(NormalizeOp(0, 1)).build();

    return TensorLabel.fromList(
        labels, probabilityProcessor.process(outputBuffer))
        .getMapWithFloatValue();
  }
  • It is very important that you use i.copyResize to resize your image to the model's needs before anything else.
  • Then prepopulate _inputImage with 0.0; in my case it was of type List<List<List<List<double>>>>.
  • After that, iterate through the resized image, normalizing its pixel values (val > 50 ? 1.0 : 0) and writing them into the _inputImage list.
  • Create the outputBuffer and run the interpreter with _inputImage and outputBuffer.getBuffer(). Note that the getBuffer() on the outputBuffer is very important there.
  • Then create the probabilityProcessor, adding a NormalizeOp if needed; this will depend on the model.
  • And finally, you get the ACTUAL predictions and not all 0.

Hope this helps someone.

@espbee

espbee commented Dec 30, 2021

@gabrielglbh are you applying padding at any point?
@sDobrzanski did you get this running while using the ImageProcessor?

@gabrielglbh

@espbee As far as I am concerned, no padding applied.

@espbee

espbee commented Dec 30, 2021

@gabrielglbh thanks.

@espbee

espbee commented Dec 30, 2021

why would the model affect what's happening with the pre-inference image conversions? (asking in all earnestness, i'm just trying to sort through my issues)

@sDobrzanski
Author

sDobrzanski commented Dec 31, 2021

@espbee Yes, I managed to run it with the image processor, but the predictions from my object detection model do not match the image itself, so I think there is still something wrong with my implementation. Regarding @gabrielglbh's input image processing: it is only valid for single input/output models.

@gabrielglbh

@espbee The thing I could not do with the ImageProcessorBuilder was transforming the input image from [64, 64, 3] to [1, 64, 64, 1], that is, converting it to grayscale through the Image.grayscale() function and creating a single batch. I think that if I manage to transform the image into the needed form, the ImageProcessorBuilder will do just fine.
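
For what it's worth, here is an untested sketch of that transformation done by hand with package:image (assuming the 3.x API, where getPixel() returns the pixel as an int with the red channel in the low byte):

  import 'package:image/image.dart' as i;

  // Sketch: [height, width, 3] RGB -> [1, height, width, 1] grayscale batch.
  List<List<List<List<double>>>> toGrayscaleBatch(
      i.Image image, int width, int height) {
    final gray = i.grayscale(i.copyResize(image, width: width, height: height));
    // After grayscale() all channels are equal, so the red channel of each
    // pixel is the luminance.
    return [
      List.generate(height, (y) =>
          List.generate(width, (x) =>
              [(gray.getPixel(x, y) & 0xFF).toDouble()])),
    ];
  }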

@espbee

espbee commented Dec 31, 2021

@gabrielglbh thanks. and thanks for the post generally. happy new year-- side question: did you have any issues with the TensorFlowLiteC.framework ?

@gabrielglbh

@espbee I did not try it with TensorFlowLiteC.framework, I am mainly focusing on the Android side for now. Happy new year :)

@espbee

espbee commented Jan 1, 2022

@sDobrzanski thanks a million. in the older lib i didn't have to specify the TensorImage as float32. that was it. my labors on this count are over. happy new year.

@yingshaoxo
Contributor

yingshaoxo commented Jan 14, 2023

I didn't see TensorImage in the newest version on the Flutter side.

How do you guys do the NormalizeOp then?

@yingshaoxo
Contributor

> I didn't see TensorImage in the newest version on the Flutter side.
>
> How do you guys do the NormalizeOp then?

Well, I found this:

import 'package:image/image.dart' as imglib;

  imglib.Image normalize_image_into_range(
      imglib.Image image, num min, num max) {
    return imglib.normalize(image, min: min, max: max);
  }

face_image = normalize_image_into_range(face_image, 0, 1);
