Adding metadata to TFLite models #5784
@chainyo I'm not sure, I'm not a TF specialist myself, but on the surface this appears to be a TFLite issue rather than a YOLOv5 issue. Perhaps raise this on the TF repo, the TF forums, or even Stack Overflow. Naturally this would be nice to have here too, as TFLite inference with detect.py currently lacks class names, so from our side at least adding the class names as model metadata would be useful.
This is the actual custom script I have to add metadata.

The outputs don't match the metadata because YOLOv5 has 1 tensor as output while the metadata expects 4 tensors:

ValueError: The number of output tensors (1) should match the number of output tensor metadata (4)

The goal would be to understand how to make it fit the YOLOv5 output.

```diff
- subgraph.outputTensorGroups = [group]
+ subgraph.outputTensorGroups = [[group]]
```

```python
import os

from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb

model_file = "[path_to_model].tflite"

# Creates model info.
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "Model Name"
model_meta.description = "Model Description"
model_meta.version = "v1"
model_meta.author = "Model Author"

# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "image"
input_meta.description = "Input Description"
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = (
    _metadata_fb.ColorSpaceType.RGB)
input_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.ImageProperties)
input_normalization = _metadata_fb.ProcessUnitT()
input_normalization.optionsType = (
    _metadata_fb.ProcessUnitOptions.NormalizationOptions)
input_normalization.options = _metadata_fb.NormalizationOptionsT()
input_normalization.options.mean = [127.5]
input_normalization.options.std = [127.5]
input_meta.processUnits = [input_normalization]
input_stats = _metadata_fb.StatsT()
input_stats.max = [255]
input_stats.min = [0]
input_meta.stats = input_stats

# Creates output info.
output_location_meta = _metadata_fb.TensorMetadataT()
output_location_meta.name = "location"
output_location_meta.description = "The locations of the detected boxes."
output_location_meta.content = _metadata_fb.ContentT()
output_location_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.BoundingBoxProperties)
output_location_meta.content.contentProperties = (
    _metadata_fb.BoundingBoxPropertiesT())
output_location_meta.content.contentProperties.index = [1, 0, 3, 2]
output_location_meta.content.contentProperties.type = (
    _metadata_fb.BoundingBoxType.BOUNDARIES)
output_location_meta.content.contentProperties.coordinateType = (
    _metadata_fb.CoordinateType.RATIO)
output_location_meta.content.range = _metadata_fb.ValueRangeT()
output_location_meta.content.range.min = 2
output_location_meta.content.range.max = 2

output_class_meta = _metadata_fb.TensorMetadataT()
output_class_meta.name = "category"
output_class_meta.description = "The categories of the detected boxes."
output_class_meta.content = _metadata_fb.ContentT()
output_class_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_class_meta.content.contentProperties = (
    _metadata_fb.FeaturePropertiesT())
output_class_meta.content.range = _metadata_fb.ValueRangeT()
output_class_meta.content.range.min = 2
output_class_meta.content.range.max = 2
label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename("label.txt")
label_file.description = "Label of objects that this model can recognize."
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_VALUE_LABELS
output_class_meta.associatedFiles = [label_file]

output_score_meta = _metadata_fb.TensorMetadataT()
output_score_meta.name = "score"
output_score_meta.description = "The scores of the detected boxes."
output_score_meta.content = _metadata_fb.ContentT()
output_score_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_score_meta.content.contentProperties = (
    _metadata_fb.FeaturePropertiesT())
output_score_meta.content.range = _metadata_fb.ValueRangeT()
output_score_meta.content.range.min = 2
output_score_meta.content.range.max = 2

output_number_meta = _metadata_fb.TensorMetadataT()
output_number_meta.name = "number of detections"
output_number_meta.description = "The number of the detected boxes."
output_number_meta.content = _metadata_fb.ContentT()
output_number_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_number_meta.content.contentProperties = (
    _metadata_fb.FeaturePropertiesT())

# Creates subgraph info.
group = _metadata_fb.TensorGroupT()
group.name = "detection result"
group.tensorNames = [
    output_location_meta.name, output_class_meta.name,
    output_score_meta.name
]
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
subgraph.outputTensorMetadata = [
    output_location_meta, output_class_meta, output_score_meta,
    output_number_meta
]
subgraph.outputTensorGroups = [group]
model_meta.subgraphMetadata = [subgraph]

b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()

populator = _metadata.MetadataPopulator.with_model_file(model_file)
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(["[path_to_labels].txt"])
populator.populate()
```
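For reference, a `TENSOR_VALUE_LABELS` associated file like the `label.txt` used above is expected to be a plain-text file with one class name per line, where the 0-based line index is the class id the model emits. A minimal sketch of writing and reading such a file (the class names here are hypothetical placeholders, not taken from any particular model):

```python
# Write a TFLite-style label file: one class name per line,
# where the 0-based line number is the class id.
# The class names below are illustrative placeholders.
class_names = ["person", "bicycle", "car"]

with open("label.txt", "w") as f:
    f.write("\n".join(class_names) + "\n")

# Reading it back at inference time is just the reverse mapping.
with open("label.txt") as f:
    labels = [line.strip() for line in f if line.strip()]

print(labels[2])  # class id 2 -> "car"
```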
Yes, I know, but I was expecting some hints from @zldrobit because he has worked a lot on TFLite conversions 🤗
By the way, it could be a really nice addition to the TFLite conversions. And if I find a way to do it, I'll open a PR 😄
@chainyo the 4-output format is a special TF format that they apply to older SSD models. We talked to Google (Sachin Joglekar, https://github.com/srjoglekar246) about applying it to YOLOv5, but the talks stagnated. You might want to contact him and see if he'd be interested in pointing you in the right direction or providing more official support for YOLOv5. References:
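To bridge the gap between the two formats, YOLOv5's single output can in principle be post-processed on the host into the four SSD-style tensors. A rough numpy sketch, under the usual assumption that each YOLOv5 row is (cx, cy, w, h, objectness, class probabilities) in normalized coordinates; the function name and thresholds are mine, and NMS is omitted for brevity:

```python
import numpy as np

def yolo_to_ssd_outputs(pred, conf_thres=0.25, max_det=100):
    """Convert one YOLOv5 prediction array of shape (N, 5 + num_classes),
    rows = (cx, cy, w, h, objectness, class probs...) in [0, 1] coordinates,
    into SSD-style tensors: boxes (ymin, xmin, ymax, xmax), class ids,
    scores, and detection count. Confidence filtering only, no NMS."""
    cx, cy, w, h = pred[:, 0], pred[:, 1], pred[:, 2], pred[:, 3]
    scores_all = pred[:, 4] * pred[:, 5:].max(axis=1)   # obj * best class prob
    classes_all = pred[:, 5:].argmax(axis=1)
    boxes_all = np.stack([cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2], axis=1)

    keep = np.flatnonzero(scores_all >= conf_thres)
    keep = keep[np.argsort(-scores_all[keep])][:max_det]  # highest score first
    return boxes_all[keep], classes_all[keep], scores_all[keep], np.array([keep.size])

# Two synthetic rows with 2 classes: one confident hit, one below threshold.
pred = np.array([
    [0.5, 0.5, 0.2, 0.2, 0.9, 0.1, 0.8],   # strong class-1 detection
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.5],   # filtered out by conf_thres
])
boxes, classes, scores, count = yolo_to_ssd_outputs(pred)
print(count[0])  # -> 1
```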
Thanks for the links and stuff! 🤝
@chainyo Adding
Because it will run on Android mobile, yes, GPU support is not needed. I will take a look and try your hints, thanks. I'll keep the thread up to date. I will also try to contact Sachin Joglekar as Glenn suggested.
That works really well with

It seems that MLKit only works with standard models from 3 years ago...
@chainyo the four outputs you are showing have nothing in common with the 4 Google outputs:
@glenn-jocher It is probably because I used the YoloV5

I will try to use the tf converter to see if it changes anything 🤗

EDIT: With
After some discussion with Google employees, in order to make custom models fit

They use
@chainyo did you have any success converting a TFLite model for ML Kit?
@chainyo @dcboy NMS support for TFLite models was added in #5938. You could refer to the Colab notebook as an example, though detection for TFLite models with NMS is not yet supported by the master branch.
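For anyone post-processing the raw single-tensor output themselves, a minimal greedy NMS in numpy looks like the sketch below. This is a generic illustration, not the implementation added in that PR:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one (ymin, xmin, ymax, xmax) box against an array of boxes."""
    ymin = np.maximum(box[0], boxes[:, 0])
    xmin = np.maximum(box[1], boxes[:, 1])
    ymax = np.minimum(box[2], boxes[:, 2])
    xmax = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(ymax - ymin, 0, None) * np.clip(xmax - xmin, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat."""
    order = np.argsort(-scores)
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        order = order[1:][iou(boxes[i], boxes[order[1:]]) <= iou_thres]
    return np.array(keep)

boxes = np.array([[0.0, 0.0, 0.5, 0.5],
                  [0.0, 0.0, 0.5, 0.45],   # heavy overlap with box 0
                  [0.6, 0.6, 0.9, 0.9]])   # separate object
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # box 1 is suppressed by box 0
```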
The above code is giving me an error:
Hi @chainyo
@matinmoezzi Not really... I trained some Google models (EfficientDet) instead, or kept YoloV5 without MLKit.
@chainyo Thank you
Facing the same issue
@PadalaKavya can you please submit a PR to attach metadata to TFLite models and then read the same data back at inference time? We have this in place for many existing formats like ONNX, but not TFLite yet.
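On the reading side, `MetadataPopulator` packs associated files so that the populated `.tflite` is simultaneously a valid ZIP archive, which means the label file can be recovered at inference time with the standard library alone. A sketch; the file names are placeholders, and the demo uses a plain ZIP as a stand-in for a real populated model:

```python
import zipfile

def read_labels(model_path, label_name="label.txt"):
    """Extract class names from a metadata-populated .tflite model.
    Associated files are appended so the model file is also a valid ZIP."""
    if not zipfile.is_zipfile(model_path):
        return []  # no metadata/associated files attached
    with zipfile.ZipFile(model_path) as zf:
        with zf.open(label_name) as f:
            return f.read().decode("utf-8").splitlines()

# Demo with a stand-in archive: a real populated model behaves
# like a ZIP in exactly the same way.
with zipfile.ZipFile("demo_model.tflite", "w") as zf:
    zf.writestr("label.txt", "person\nbicycle\ncar")

print(read_labels("demo_model.tflite"))  # -> ['person', 'bicycle', 'car']
```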
@KingWu I'm not very familiar with ML Kit, so you should probably raise an issue directly there requesting support for YOLOv5 TFLite models.
The last time I talked to the Google employee managing the TFLite converter, she told me that MLKit wasn't supposed to be compatible with anything other than Google models like EfficientDet. So I think you have to do the TFLite implementation yourself when you want to use the YoloV5 model.
That's my problem: PT to TFLite plus adding metadata, with tensor outputs like this: Below is my previous post:
Adding metadata to an EfficientDet or YOLO model will not work, because ML Kit is not compatible with them for now. It could be in the future! Although I can recreate your models and deploy them in your Android app, contact me! WhatsApp: +8801770293055
@ShahriarAlom adding metadata to an EfficientDet or YOLO model for compatibility with ML Kit is currently not supported. ML Kit has its own compatibility requirements, and it may not work with models that are not specifically designed for it. You can contact the mentioned phone number for assistance in recreating and deploying the model in your Android app.
Crazy, I am having similar issues right now in 2024.
Currently, YOLOv5 models don't directly support ML Kit due to output format differences. You might consider using alternative models like EfficientDet for ML Kit compatibility.
Search before asking
Question
I looked at a lot of issues and the @zldrobit repo for TFLite export and inference. But I'm still lacking answers on how to add metadata to my TFLite model. I know how to use the model in a Python script or even with tf.js, but I need to use my converted model in ML Kit (a solution by Google), and this platform requires metadata, especially output metadata, to run.

I also tried to add metadata to my model manually with the tflite_support package. Unfortunately the default object detection template doesn't work, because the YoloV5 output is only 1 tensor and not the 4 tensors expected by the template. I tried to customize the metadata added via this package, but nothing seems to work. I'm still looking for a solution to add metadata to my TFLite custom model.
I looked at this issue's answer: #5030 (comment). @zldrobit any suggestions?