How can I convert .pt weight to tflite? #4586

Closed · jaskiratsingh2000 opened this issue Aug 29, 2021 · 56 comments
Labels: question (Further information is requested), Stale (Stale and scheduled for closing soon)

Comments

@jaskiratsingh2000

Hi @glenn-jocher, I want to convert PyTorch weights to TFLite. How can I do that?

jaskiratsingh2000 added the question label on Aug 29, 2021
@glenn-jocher
Member

@jaskiratsingh2000 see tf.py:

yolov5/models/tf.py

Lines 1 to 12 in bbfafea

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
TensorFlow/Keras and TFLite versions of YOLOv5
Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127
Usage:
$ python models/tf.py --weights yolov5s.pt --cfg yolov5s.yaml
Export int8 TFLite models:
$ python models/tf.py --weights yolov5s.pt --cfg models/yolov5s.yaml --tfl-int8 \
--source path/to/images/ --ncalib 100

@jaskiratsingh2000
Author

jaskiratsingh2000 commented Aug 29, 2021

@glenn-jocher Hey, Thanks for referring me to this issue. I appreciate that.

Can you also let me know: instead of the "yolov5s.pt" weights, can I use a custom-trained weights file as well, such as "best.pt" or "last.pt"?

@glenn-jocher Another quick question: will this get exported automatically, and where can I find the output afterwards?

@glenn-jocher
Member

@jaskiratsingh2000 you can export any YOLOv5 model; that's the main purpose of the function. It wouldn't be much use if it only exported official models.

Exported models are placed in the same parent directory as the source model.
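For example, a custom checkpoint can be exported the same way (the path below is illustrative), and the converted file will appear next to it:

python models/tf.py --weights runs/train/exp/weights/best.pt --cfg models/yolov5s.yaml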

@jaskiratsingh2000
Author

jaskiratsingh2000 commented Aug 29, 2021 via email

@JNaranjo-Alcazar

Following the usage indications:

python models/tf.py --weights runs/train/exp6/weights/best.pt --cfg yolov5s.yaml

I got:

Starting TensorFlow GraphDef export with TensorFlow 2.6.0...
TensorFlow GraphDef export failure: name 'keras_model' is not defined
Traceback (most recent call last):
  File "models/tf.py", line 491, in <module>
    tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
NameError: name 'keras_model' is not defined

Starting TFLite export with TensorFlow 2.6.0...

TFLite export failure: name 'keras_model' is not defined
Traceback (most recent call last):
  File "models/tf.py", line 521, in <module>
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
NameError: name 'keras_model' is not defined

However, when running the same script as:

python models/tf.py --weights runs/train/exp6/weights/best.pt --cfg models/yolov5s.yaml 

the conversion runs correctly. This may be a typo in the documentation or a code issue...

Hope this feedback helps!

Thanks for the amazing work!

@glenn-jocher
Member

@JNaranjo-Alcazar thanks for the bug report! I'm not able to reproduce this, when I run the usage example in Colab everything works well:

# Setup
!git clone https://github.com/ultralytics/yolov5  # clone repo
%cd yolov5
%pip install -qr requirements.txt  # install dependencies

import torch
from IPython.display import Image, clear_output  # to display images

clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")

# Reproduce
!python models/tf.py --weights yolov5s.pt --cfg yolov5s.yaml

@JNaranjo-Alcazar

I had this bug using a Docker container (not the one available in the repository) with Python 3.7, after installing all the requirements and TensorFlow.

I think I should use Colab then.

Thanks for the quick reply

@Ronald-Kray

@glenn-jocher
After running tf.py, I can't find the result file.
Where can I find the .tflite file?

@glenn-jocher
Member

@Ronald-Kray TF exports are handled by export.py now:

python export.py --weights yolov5s.pt --include tflite
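The same command works for custom checkpoints (the path below is illustrative); the converted file is written next to the source weights with an -fp16.tflite suffix, as later comments in this thread show:

python export.py --weights runs/train/exp/weights/best.pt --include tflite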

@github-actions
Contributor

github-actions bot commented Oct 21, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label on Oct 21, 2021
@kashishgoyal31

PYTHONPATH=. python3 models/tf.py --weights weights/yolov5s.pt --cfg models/yolov5s.yaml --img 320 --tfl-int8 --source /data/dataset/coco/coco2017/train2017 --ncalib 100

I am trying to convert my trained model to tflite. Please advise on the weights: should I use my trained best.pt? And what should be passed as --source?

@kashishgoyal31

Also, I am getting an 'Illegal instruction' error.

@glenn-jocher
Member

@kashishgoyal31

python export.py --weights yolov5s.pt --include tflite

@kashishgoyal31

kashishgoyal31 commented Nov 17, 2021 via email

@glenn-jocher
Member

@kashishgoyal31 --source can be anything you want. See detect.py for Usage examples instead of asking:

yolov5/detect.py

Lines 5 to 12 in 562191f

Usage:
    $ python path/to/detect.py --weights yolov5s.pt --source 0                                 # webcam
                                                             img.jpg                           # image
                                                             vid.mp4                           # video
                                                             path/                             # directory
                                                             path/*.jpg                        # glob
                                                             'https://youtu.be/Zgi9g1ksQHc'    # YouTube
                                                             'rtsp://example.com/media.mp4'    # RTSP, RTMP, HTTP stream

@kashishgoyal31

kashishgoyal31 commented Nov 17, 2021 via email

@glenn-jocher
Member

glenn-jocher commented Nov 17, 2021

@kashishgoyal31 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible to produce the problem
  • Complete – Provide all parts someone else needs to reproduce the problem
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

For Ultralytics to provide assistance your code should also be:

  • Current – Verify that your code is up-to-date with GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been solved in master.
  • Unmodified – Your problem must be reproducible using official YOLOv5 code without changes. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

@kashishgoyal31

I have converted the weights from best.pt to tflite using the command below:
!python3 export.py --weights /content/best.pt --img 320 --include tflite
and then ran detect.py with:
!python3 detect.py --weights /content/best-fp16.tflite --img 320 --source /content/freshapple546.jpeg

The output image does not use my class names; detections are labeled class 0 instead of my custom class name.

@glenn-jocher
Member

@kashishgoyal31 class names are not in tflite files. You can manually add them here:

stride, names, pt, jit, onnx, engine = model.stride, model.names, model.pt, model.jit, model.onnx, model.engine
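As a minimal illustration (the class names below are placeholders, not from this thread), the names loaded from the model can simply be overwritten right after that line:

names = ['fire', 'smoke']  # hypothetical: replace the names read from the model with your custom classes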

@kashishgoyal31

kashishgoyal31 commented Nov 24, 2021 via email

@kashishgoyal31

kashishgoyal31 commented Nov 24, 2021 via email

@endeavorhh

python3 export.py --weights /content/best.pt --img 320 --include tflite
but fire-fp16.tflite does not reflect my trained class. I trained on 'fire', yet the detection results show person/bicycle, like yolov5s.pt.

@glenn-jocher
Member

TFLite models don't have class names attached. You can pass a --data YAML during detect if you'd like to use alternative annotation names:

python detect.py --data custom_data.yaml
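A fuller example (weights and source paths are illustrative):

python detect.py --weights best-fp16.tflite --data custom_data.yaml --source path/to/images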

@ramchandra-bioenable

I want to convert a .pt file to .tflite.
Is it possible to set the input image size for the .tflite model?

@glenn-jocher
Member

@c1p31068 TFLite models don't have class-name metadata attached; you can find the names manually in your data.yaml
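For reference, the class names live under the names key of the dataset YAML; a minimal illustration with placeholder values:

# data.yaml (illustrative)
nc: 2
names: ['fire', 'smoke']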

@c1p31068

c1p31068 commented Aug 4, 2022

Thanks for the reply. What is the best way to set the names manually in data.yaml? Can this be done with Google Colab? Also, I am a beginner; is this task feasible for me? (Apologies if the phrasing is off; I'm using a translator.)

@c1p31068

c1p31068 commented Aug 4, 2022

Hi, can .tflite be used with yolov5?

@glenn-jocher
Member

@c1p31068 to export from PyTorch to TFLite

python export.py --weights yolov5s.pt --include tflite

@c1p31068

c1p31068 commented Aug 4, 2022

Thanks for the reply. I was able to export! However, my custom labels are not reflected: when I run YOLOv5 for testing, the detections come out as 'person'.
I can't follow some of the previous answers.

@glenn-jocher
Member

@c1p31068 if you're using YOLOv5 for inference you can pass your --data to specify your names, i.e.

python detect.py --weights model.tflite --data your_data.yaml

@c1p31068

c1p31068 commented Aug 4, 2022

Thank you! We've solved the problem! I am so glad to have found you. Thank you so much!

@c1p31068

c1p31068 commented Sep 2, 2022

Hi, is it possible to specify a version of tflite to convert?

@glenn-jocher
Member

@c1p31068 what do you mean by a version of tflite?

Whatever version of TensorFlow is installed is used during export.
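So if your target runtime (e.g. an Android Studio project) expects a particular TensorFlow Lite version, you can pin TensorFlow in the export environment first; the version number below is illustrative:

pip install tensorflow==2.13.1
python export.py --weights best.pt --include tflite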

@c1p31068

c1p31068 commented Sep 5, 2022

Thanks for the reply.
I got a version-mismatch error when I used the exported model in Android Studio, so I suspected the TensorFlow Lite version of the file I converted from YOLOv5 did not match.

@HripsimeS

@jaskiratsingh2000 Hello. Did you try deploying the converted/exported tflite model weights in Android Studio for object detection in a mobile app? If yes, did you have any issues with the converted/exported tflite model?

@glenn-jocher
Member

glenn-jocher commented Nov 22, 2022

@HripsimeS 👋 Hello! Thanks for asking about Export Formats. YOLOv5 🚀 offers export to almost all of the common export formats. See our TFLite, ONNX, CoreML, TensorRT Export Tutorial for full details.

Formats

YOLOv5 inference is officially supported in the 12 formats listed below:

💡 ProTip: Export to ONNX or OpenVINO for up to 3x CPU speedup. See CPU Benchmarks.
💡 ProTip: Export to TensorRT for up to 5x GPU speedup. See GPU Benchmarks.

Format                     export.py --include    Model
PyTorch                    -                      yolov5s.pt
TorchScript                torchscript            yolov5s.torchscript
ONNX                       onnx                   yolov5s.onnx
OpenVINO                   openvino               yolov5s_openvino_model/
TensorRT                   engine                 yolov5s.engine
CoreML                     coreml                 yolov5s.mlmodel
TensorFlow SavedModel      saved_model            yolov5s_saved_model/
TensorFlow GraphDef        pb                     yolov5s.pb
TensorFlow Lite            tflite                 yolov5s.tflite
TensorFlow Edge TPU        edgetpu                yolov5s_edgetpu.tflite
TensorFlow.js              tfjs                   yolov5s_web_model/
PaddlePaddle               paddle                 yolov5s_paddle_model/

Benchmarks

Benchmarks below run on a Colab Pro instance with the YOLOv5 tutorial notebook. To reproduce:

python benchmarks.py --weights yolov5s.pt --imgsz 640 --device 0

Colab Pro V100 GPU

benchmarks: weights=/content/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/content/yolov5/data/coco128.yaml, device=0, half=False, test=False
Checking setup...
YOLOv5 🚀 v6.1-135-g7926afc torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)
Setup complete ✅ (8 CPUs, 51.0 GB RAM, 46.7/166.8 GB disk)

Benchmarks complete (458.07s)
                   Format  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        0.4623                10.19
1             TorchScript        0.4623                 6.85
2                    ONNX        0.4623                14.63
3                OpenVINO           NaN                  NaN
4                TensorRT        0.4617                 1.89
5                  CoreML           NaN                  NaN
6   TensorFlow SavedModel        0.4623                21.28
7     TensorFlow GraphDef        0.4623                21.22
8         TensorFlow Lite           NaN                  NaN
9     TensorFlow Edge TPU           NaN                  NaN
10          TensorFlow.js           NaN                  NaN

Colab Pro CPU

benchmarks: weights=/content/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/content/yolov5/data/coco128.yaml, device=cpu, half=False, test=False
Checking setup...
YOLOv5 🚀 v6.1-135-g7926afc torch 1.10.0+cu111 CPU
Setup complete ✅ (8 CPUs, 51.0 GB RAM, 41.5/166.8 GB disk)

Benchmarks complete (241.20s)
                   Format  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        0.4623               127.61
1             TorchScript        0.4623               131.23
2                    ONNX        0.4623                69.34
3                OpenVINO        0.4623                66.52
4                TensorRT           NaN                  NaN
5                  CoreML           NaN                  NaN
6   TensorFlow SavedModel        0.4623               123.79
7     TensorFlow GraphDef        0.4623               121.57
8         TensorFlow Lite        0.4623               316.61
9     TensorFlow Edge TPU           NaN                  NaN
10          TensorFlow.js           NaN                  NaN

Export a Trained YOLOv5 Model

This command exports a pretrained YOLOv5s model to TorchScript and ONNX formats. yolov5s.pt is the 'small' model, the second-smallest model available. Other options are yolov5n.pt, yolov5m.pt, yolov5l.pt and yolov5x.pt, along with their P6 counterparts, e.g. yolov5s6.pt, or your own custom training checkpoint, e.g. runs/exp/weights/best.pt. For details on all available models please see our README table.

python export.py --weights yolov5s.pt --include torchscript onnx

💡 ProTip: Add --half to export models at FP16 half precision for smaller file sizes

Output:

export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx']
YOLOv5 🚀 v6.2-104-ge3e5122 Python-3.7.13 torch-1.12.1+cu113 CPU

Downloading https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 274MB/s]

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)

TorchScript: starting export with torch 1.12.1+cu113...
TorchScript: export success ✅ 1.7s, saved as yolov5s.torchscript (28.1 MB)

ONNX: starting export with onnx 1.12.0...
ONNX: export success ✅ 2.3s, saved as yolov5s.onnx (28.0 MB)

Export complete (5.5s)
Results saved to /content/yolov5
Detect:          python detect.py --weights yolov5s.onnx 
Validate:        python val.py --weights yolov5s.onnx 
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.onnx')
Visualize:       https://netron.app/

The exported models are saved alongside the original PyTorch model.

Netron Viewer is recommended for visualizing exported models.

Exported Model Usage Examples

detect.py runs inference on exported models:

python detect.py --weights yolov5s.pt                 # PyTorch
                           yolov5s.torchscript        # TorchScript
                           yolov5s.onnx               # ONNX Runtime or OpenCV DNN with --dnn
                           yolov5s_openvino_model     # OpenVINO
                           yolov5s.engine             # TensorRT
                           yolov5s.mlmodel            # CoreML (macOS only)
                           yolov5s_saved_model        # TensorFlow SavedModel
                           yolov5s.pb                 # TensorFlow GraphDef
                           yolov5s.tflite             # TensorFlow Lite
                           yolov5s_edgetpu.tflite     # TensorFlow Edge TPU
                           yolov5s_paddle_model       # PaddlePaddle

val.py runs validation on exported models:

python val.py --weights yolov5s.pt                 # PyTorch
                        yolov5s.torchscript        # TorchScript
                        yolov5s.onnx               # ONNX Runtime or OpenCV DNN with --dnn
                        yolov5s_openvino_model     # OpenVINO
                        yolov5s.engine             # TensorRT
                        yolov5s.mlmodel            # CoreML (macOS Only)
                        yolov5s_saved_model        # TensorFlow SavedModel
                        yolov5s.pb                 # TensorFlow GraphDef
                        yolov5s.tflite             # TensorFlow Lite
                        yolov5s_edgetpu.tflite     # TensorFlow Edge TPU
                        yolov5s_paddle_model       # PaddlePaddle

Use PyTorch Hub with exported YOLOv5 models:

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt')
                                                       'yolov5s.torchscript')        # TorchScript
                                                       'yolov5s.onnx')               # ONNX Runtime
                                                       'yolov5s_openvino_model')     # OpenVINO
                                                       'yolov5s.engine')             # TensorRT
                                                       'yolov5s.mlmodel')            # CoreML (macOS Only)
                                                       'yolov5s_saved_model')        # TensorFlow SavedModel
                                                       'yolov5s.pb')                 # TensorFlow GraphDef
                                                       'yolov5s.tflite')             # TensorFlow Lite
                                                       'yolov5s_edgetpu.tflite')     # TensorFlow Edge TPU
                                                       'yolov5s_paddle_model')       # PaddlePaddle

# Images
img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

# Inference
results = model(img)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.

OpenCV DNN inference

OpenCV inference with ONNX models:

python export.py --weights yolov5s.pt --include onnx

python detect.py --weights yolov5s.onnx --dnn  # detect
python val.py --weights yolov5s.onnx --dnn  # validate
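As a minimal Python sketch of the same OpenCV DNN flow (the image path is a placeholder; letterbox pre-processing, decoding, and NMS are omitted for brevity):

import cv2

net = cv2.dnn.readNetFromONNX('yolov5s.onnx')                           # load the exported ONNX model
img = cv2.imread('data/images/zidane.jpg')                              # placeholder image path
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True)   # resize and normalize to the export size
net.setInput(blob)
pred = net.forward()                                                    # raw (1, 25200, 85) prediction tensor
print(pred.shape)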

C++ Inference

YOLOv5 OpenCV DNN C++ inference on exported ONNX model examples:

YOLOv5 OpenVINO C++ inference examples:

Good luck 🍀 and let us know if you have any other questions!

@paramkaur10

@glenn-jocher I am trying to convert best.pt to tflite but keep encountering the following error:

export: data=data/coco128.yaml, weights=['runs/train/results_128/weights/best.pt'], imgsz=[256], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=17, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['tflite']
YOLOv5 🚀 v7.0-66-g9650f16 Python-3.9.13 torch-1.12.1 CPU

Fusing layers... 
Model summary: 267 layers, 46113663 parameters, 0 gradients, 107.7 GFLOPs

PyTorch: starting from runs/train/results_128/weights/best.pt with output shape (1, 4032, 7) (88.4 MB)
TensorFlow SavedModel: export failure ❌ 1.3s: module 'tensorflow.python.util.dispatch' has no attribute 'add_fallback_dispatch_list'

TensorFlow Lite: starting export with tensorflow 2.6.0...
TensorFlow Lite: export failure ❌ 0.0s: 'NoneType' object has no attribute 'call'
Traceback (most recent call last):
  File "/home/ec2-user/SageMaker/yolov5/export.py", line 653, in <module>
    main(opt)
  File "/home/ec2-user/SageMaker/yolov5/export.py", line 648, in main
    run(**vars(opt))
  File "/home/ec2-user/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/ec2-user/SageMaker/yolov5/export.py", line 589, in run
    add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
AttributeError: 'NoneType' object has no attribute 'outputs'

@YasmineeBa

YasmineeBa commented Apr 29, 2023

Hi, I want to convert the weights file 'best.pt' into tflite. I modified the YOLOv5 architecture (I replaced some C3 modules with a transformer block), but export.py raises an error that C3STR is not defined.

TensorFlow SavedModel: export failure 4.4s: name 'C3STR' is not defined

TensorFlow Lite: starting export with tensorflow 2.12.0...
TensorFlow Lite: export failure 0.0s: 'NoneType' object has no attribute 'call'

    return func(*args, **kwargs)
  File "C:\Users\jodo\Documents\yolov5\yolov5\export.py", line 610, in run
    add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
AttributeError: 'NoneType' object has no attribute 'outputs'

@YasmineeBa

Hi, I want to convert the weights file 'best.pt' into tflite. I modified the YOLOv5 architecture (I replaced the final C3 module with a transformer block), but export.py raises an error that C3STR is not defined. Please help me if there is a solution, or tell me if it is impossible to convert to tflite when the architecture is modified.

TensorFlow SavedModel: export failure 4.4s: name 'C3STR' is not defined

TensorFlow Lite: starting export with tensorflow 2.12.0...
TensorFlow Lite: export failure 0.0s: 'NoneType' object has no attribute 'call'

    return func(*args, **kwargs)
  File "C:\Users\jodo\Documents\yolov5\yolov5\export.py", line 610, in run
    add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
AttributeError: 'NoneType' object has no attribute 'outputs'

@glenn-jocher
Member

@YasmineeBa hello, it seems like the error you're encountering is related to the modified architecture that you're using. The error message states that 'C3STR' is not defined, which suggests that the modified architecture is expecting a layer or function that is not defined.

Regarding your question about whether it is possible to convert a modified YOLOv5 architecture to TFLite, the answer is that it should be possible as long as the modified architecture can be defined using the available TensorFlow operations. However, it might require some additional customization and configuration to ensure that the conversion process is successful.

One suggestion is to try and define the 'C3STR' layer or function in your modified architecture and see if that resolves the issue. Additionally, you could try modifying the TFLite export script to account for the changes in your architecture. If the issue persists, it might be helpful to share more details about the modified architecture and the changes that were made in order to provide more specific guidance.

I hope this helps!

@YasmineeBa

Thank you for the response. I did define all the functions related to C3STR in common.py and declared them in yolo.py using PyTorch, and I modified yolov5s.yaml with the new architecture. But when I run the script to export the weights to tflite, the code shows me the default yolov5s architecture (model summary).
So do you mean that I need to port the PyTorch implementation of these functions to TensorFlow in the tf.py file?

@glenn-jocher
Member

@YasmineeBa hello! It's great to hear that you've defined all the necessary functions related to 'C3STR' in the common.py file and declared them in yolo.py using PyTorch. If you're modifying the yolov5s.yaml file to incorporate these changes, you should see the modified architecture when running the training script with PyTorch.

However, if you're encountering issues when exporting the model to TFLite, it's possible that some changes may need to be made in the tf.py file to ensure that the TensorFlow implementation is consistent with the changes made in the PyTorch implementation. It may be helpful to review the documentation and examples related to exporting PyTorch models to TensorFlow and TFLite to ensure that the conversion process is executed correctly.
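As a purely illustrative sketch (the class name, constructor signature, and registration details are assumptions, not taken from models/tf.py), a TensorFlow-side counterpart of a custom block is typically written as a Keras layer that mirrors the PyTorch forward pass:

import tensorflow as tf
from tensorflow import keras

class TFC3STR(keras.layers.Layer):
    # Hypothetical placeholder: a real port must reproduce the PyTorch C3STR computation.
    # A single 1x1 convolution is used here only to show where the TF implementation would live.
    def __init__(self, c2):
        super().__init__()
        self.conv = keras.layers.Conv2D(c2, 1, padding='same', use_bias=False)

    def call(self, inputs):
        return self.conv(inputs)

# quick smoke test
x = tf.zeros((1, 64, 64, 32))
print(TFC3STR(64)(x).shape)  # (1, 64, 64, 64)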

Please feel free to provide more details about the specific issues you're encountering with the TFLite export process, and we can work together to find a solution.

@YasmineeBa

YasmineeBa commented May 1, 2023 via email

@glenn-jocher
Member

Hi @YasmineeBa,

It's good to hear that you were able to define all the necessary functions related to 'C3STR' in the common.py file and resolved the issue related to undefined functions. However, it seems like you're encountering a new error during the TFLite export process.

In order to provide more specific guidance, it would be helpful to review the error message that you're encountering. Based on the error message, we can suggest possible solutions to resolve the issue. Please feel free to share more details about the error message that you're encountering, and we can work together to find a solution.

In the meantime, it may be helpful to review the documentation and examples related to exporting PyTorch models to TensorFlow and TFLite to ensure that the conversion process is executed correctly.

Let us know if you have further questions or concerns.

Best regards.

@YasmineeBa

YasmineeBa commented May 1, 2023 via email

@glenn-jocher
Member

Hi @YasmineeBa,

Thanks for updating us and letting us know that the issue has been resolved! It's great to hear that you were able to identify and fix the problem by reviewing your function declarations.

If you encounter any further issues or have any additional questions, please don't hesitate to reach out for assistance. We're always here to help.

Best regards.

@VishalShinde16

Hey, I am converting a YOLOv7 .pb model to tflite, but when I add metadata using the TensorFlow object detection metadata writer, output_tensor_metadata shows only 'location' instead of 'category', 'score', and 'location'.

!python export.py --weights /content/drive/MyDrive/ObjectDetection/yolov7/runs/train/yolov7-custom/weights/best.pt --grid --end2end --simplify
--topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640

!onnx-tf convert -i /content/drive/MyDrive/ObjectDetection/yolov7/runs/train/yolov7-custom/weights/best.onnx -o /content/drive/MyDrive/ObjectDetection/yolov7/tfmodel

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('/content/drive/MyDrive/ObjectDetection/yolov7/tfmodel')
tflite_model = converter.convert()

with open('/content/drive/MyDrive/ObjectDetection/yolov7/yolov7_model.tflite', 'wb') as f:
    f.write(tflite_model)

# Adding metadata (imports assumed from the TFLite Support metadata writer API)
from tflite_support.metadata_writers import object_detector, writer_utils

ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = "/content/drive/MyDrive/ObjectDetection/yolov7/yolov7_model.tflite"
_LABEL_FILE = "/content/drive/MyDrive/ObjectDetection/yolov7/labels.txt"
_SAVE_TO_PATH = "/content/drive/MyDrive/ObjectDetection/yolov7/yolov7_model_metadata.tflite"

_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Create the metadata writer.
writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)

Output:
{
"name": "ObjectDetector",
"description": "Identify which of a known set of objects might be present and provide information about their positions within the given image or a video stream.",
"subgraph_metadata": [
{
"input_tensor_metadata": [
{
"name": "image",
"description": "Input image to be detected.",
"content": {
"content_properties_type": "ImageProperties",
"content_properties": {
"color_space": "RGB"
}
},
"process_units": [
{
"options_type": "NormalizationOptions",
"options": {
"mean": [
127.5
],
"std": [
127.5
]
}
}
],
"stats": {
"max": [
1.0
],
"min": [
-1.0
]
}
}
],
"output_tensor_metadata": [
{
"name": "location",
"description": "The locations of the detected boxes.",
"content": {
"content_properties_type": "BoundingBoxProperties",
"content_properties": {
"index": [
1,
0,
3,
2
],
"type": "BOUNDARIES"
},
"range": {
"min": 2,
"max": 2
}
},
"stats": {
}
}
],
"output_tensor_groups": [
{
"name": "detection_result",
"tensor_names": [
"location",
"category",
"score"
]
}
]
}
]
}

/usr/local/lib/python3.10/dist-packages/tensorflow_lite_support/metadata/python/metadata.py:395: UserWarning: File, 'labels.txt', does not exist in the metadata. But packing it to tflite model is still allowed.
warnings.warn(

@glenn-jocher
Member

@VishalShinde16 hello,

It seems that you are encountering an issue with the TensorFlow Object Detection Metadata Writer when adding metadata to your converted .tflite model. Specifically, you mentioned that in the "output_tensor_metadata" section, only the "location" is shown instead of "category," "score," and "location".

To address this issue, one potential solution is to check your label file, "labels.txt", to ensure that it exists and is correctly formatted. The warning message you received indicates that the file might not exist, but it is still allowed to pack it into the .tflite model.

Please verify that the "labels.txt" file is present and correctly specifies the categories for your objects. Ensure that each category is listed on a separate line in the file.
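For example, a labels.txt for a two-class detector would contain exactly one class name per line (the names below are placeholders):

fire
smoke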

Once you have confirmed the presence and correctness of the label file, rerun the process of adding metadata to your model using the TensorFlow Object Detection Metadata Writer. This should ensure that the metadata properly includes the "category" and "score" information along with the "location."

I hope this helps! Let me know if you have any further questions or concerns.

@rochellemadulara

@glenn-jocher Hello sir, I would like to ask how to convert the .pt file from my custom training in Google Colab to tflite. Should I also run the conversion in Google Colab, or is it okay to run it on my local Windows computer? Thank you.

@glenn-jocher
Member

@rochellemadulara hello! 😊 You can convert the .pt file from your custom training to .tflite either in Google Colab or on your Windows local computer. The environment where you perform the conversion isn't strictly important as long as you have the necessary libraries installed.

For conversion, ensure you have TensorFlow installed. Here's a brief example of how you might do it in Python:

import torch

# Load your trained model (YOLOv5 checkpoints store the model under the 'model' key)
model = torch.load('your_model.pt')['model'].float().eval()

# Example input matching the training image size
dummy_input = torch.zeros(1, 3, 640, 640)

# Convert to ONNX format first
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=12)

# Then convert ONNX to TensorFlow and finally to TFLite
# Note: you may need additional libraries and steps; refer to the TensorFlow documentation for details.

Remember to adjust paths and parameters as necessary. If you're doing this on your local computer, make sure you have the required environment set up, similar to what you used in Colab.

Hope this helps! If you have further questions, feel free to ask.

@rochellemadulara

Hello again, sir. When I run my code, these are the errors:

2024-04-05 22:49:02.875687: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-04-05 22:49:03.617639: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
WARNING:tensorflow:From C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tf_keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

WARNING:tensorflow:From C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_probability\python\internal\backend\numpy_utils.py:48: The name tf.logging.TaskLevelStatusMessage is deprecated. Please use tf.compat.v1.logging.TaskLevelStatusMessage instead.

WARNING:tensorflow:From C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_probability\python\internal\backend\numpy_utils.py:48: The name tf.control_flow_v2_enabled is deprecated. Please use tf.compat.v1.control_flow_v2_enabled instead.

C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_addons\utils\tfa_eol_msg.py:23: UserWarning:

TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).

For more information see: tensorflow/addons#2807

warnings.warn(
C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_addons\utils\ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.12.0 and strictly below 2.15.0 (nightly versions are not supported).
The versions of TensorFlow you are currently using is 2.16.1 and is not supported.
Some things might work, some things might not.
If you were to encounter a bug, do not file an issue.
If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version.
You can find the compatibility matrix in TensorFlow Addon's readme:
https://github.com/tensorflow/addons
warnings.warn(
Traceback (most recent call last):
  File "c:\Users\user\yolov5\convert_tflite.py", line 3, in <module>
    from onnx_tf.backend import prepare
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\onnx_tf\__init__.py", line 1, in <module>
    from . import backend
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\onnx_tf\backend.py", line 28, in <module>
    from onnx_tf.common.handler_helper import get_all_backend_handlers
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\onnx_tf\common\handler_helper.py", line 3, in <module>
    from onnx_tf.handlers.backend import *  # noqa
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\onnx_tf\handlers\backend\hardmax.py", line 3, in <module>
    import tensorflow_addons as tfa
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_addons\__init__.py", line 23, in <module>
    from tensorflow_addons import activations
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_addons\activations\__init__.py", line 17, in <module>
    from tensorflow_addons.activations.gelu import gelu
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_addons\activations\gelu.py", line 19, in <module>
    from tensorflow_addons.utils.types import TensorLike
  File "C:\Users\user\anaconda3New\envs\yolov5\Lib\site-packages\tensorflow_addons\utils\types.py", line 29, in <module>
    from keras.src.engine import keras_tensor
ModuleNotFoundError: No module named 'keras.src.engine'

This is the list of my pip packages for Keras and TensorFlow:
keras 3.1.1
tf_keras 2.16.0
tensorflow 2.16.1
tensorflow-addons 0.22.0
tensorflow-intel 2.16.1
tensorflow-io-gcs-filesystem 0.31.0
tensorflow-probability 0.24.0

@glenn-jocher
Member

Hello @rochellemadulara,

It looks like you're experiencing compatibility issues between different TensorFlow and add-on versions. Specifically, the error at the end (ModuleNotFoundError: No module named 'keras.src.engine') suggests an issue with your TensorFlow Addons (TFA) and its compatibility with TensorFlow 2.16.1.

Firstly, TensorFlow Addons is not compatible with TensorFlow 2.16.1 as per the warning you received. You have a couple of options to solve this issue:

  1. Downgrade TensorFlow to a compatible version that works well with TensorFlow Addons 0.22.0. As suggested, TensorFlow versions above or equal to 2.12.0 and strictly below 2.15.0 are supported by TFA 0.22.0.

    You can downgrade TensorFlow using pip:

    pip install tensorflow==2.14.0
    
  2. Update TensorFlow Addons if a compatible version is available for TensorFlow 2.16.1, though the warning suggests that you might be using the latest compatible release.

Ensure your environment is activated when running these commands to apply changes to the correct Python environment. After correcting the version mismatch, your code should run without the stated error. Unfortunately, if TFA does not support TensorFlow 2.16.1 yet, you might need to stick with option 1.
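A quick way to confirm which versions are actually active in the environment after reinstalling (illustrative one-liner):

python -c "import tensorflow as tf, tensorflow_addons as tfa; print(tf.__version__, tfa.__version__)"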

Hope this helps resolve the issue! If you encounter further errors, feel free to ask.

@mekidhamza

mekidhamza commented Jul 15, 2024

I ran this command to transform my .pt weights to TFLite, but every time I get TFLite weights for the object detection COCO dataset, not my trained weights.

!python export.py --weight /content/yolov5/runs/train/exp47best/weights/best.pt --include tflite --img 768

@glenn-jocher
Member

Hello @mekidhamza,

Thank you for reaching out! It sounds like you're encountering an issue where the exported TFLite model is not reflecting your custom-trained weights but instead defaults to the COCO dataset weights.

To ensure that your custom-trained weights are correctly exported to TFLite, please verify the following:

  1. Correct Path: Double-check the path to your custom-trained weights. Ensure there are no typos and that the file exists at the specified location.

  2. Latest Version: Make sure you are using the latest version of YOLOv5 and its dependencies. You can update your repository and dependencies with the following commands:

    git pull
    pip install -U -r requirements.txt
  3. Command Syntax: Ensure the command syntax is correct. Here’s an example command for exporting custom-trained weights to TFLite:

    python export.py --weights /content/yolov5/runs/train/exp47/weights/best.pt --include tflite --img 768
  4. Model Verification: After exporting, you can verify the model by running inference on a sample image to ensure it uses your custom-trained weights. Here’s an example:

    import numpy as np
    import tensorflow as tf
    
    # Load the TFLite model
    interpreter = tf.lite.Interpreter(model_path="path/to/your_model.tflite")
    interpreter.allocate_tensors()
    
    # Get input and output tensors
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    
    # Prepare your input data (a random tensor of the expected shape works as a quick smoke test)
    input_data = np.random.rand(*input_details[0]['shape']).astype(input_details[0]['dtype'])
    
    # Run inference
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    
    # Get the results
    output_data = interpreter.get_tensor(output_details[0]['index'])
    print(output_data)

If the issue persists, please provide more details about any error messages or unexpected behaviors you encounter. This will help us diagnose the problem more effectively.

Thank you for your patience and for using YOLOv5! 😊
