
Figure out how to run yolov8 on TPU #2

Open
shaunmulligan opened this issue Jul 3, 2023 · 8 comments
Labels
wontfix This will not be worked on

Comments

@shaunmulligan
Owner

No description provided.

@shaunmulligan
Owner Author

  1. Add the Coral Debian package repository to your system:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
  2. Install the Edge TPU runtime:
sudo apt-get install libedgetpu1-std
  3. Now connect the USB Accelerator to your computer using the provided USB 3.0 cable. If you already plugged it in, remove it and replug it so the newly-installed udev rule can take effect.
  4. Install the PyCoral library:
sudo apt-get install python3-pycoral
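After step 3 it can be handy to confirm the accelerator actually enumerated on USB. Here's a small helper of my own (not part of pycoral) that checks `lsusb` output for the Coral's USB IDs, which per the Coral getting-started docs are `1a6e:089a` before the runtime first initializes the device and `18d1:9302` afterwards:

```python
import re

# USB IDs for the Coral USB Accelerator (per the Coral getting-started docs):
# 1a6e:089a before the runtime first initializes the device, 18d1:9302 after.
CORAL_USB_IDS = {"1a6e:089a", "18d1:9302"}

def find_coral(lsusb_output: str) -> bool:
    """Return True if a Coral USB Accelerator ID appears in `lsusb` output."""
    ids = set(re.findall(r"ID ([0-9a-f]{4}:[0-9a-f]{4})", lsusb_output))
    return bool(ids & CORAL_USB_IDS)

sample = "Bus 002 Device 003: ID 18d1:9302 Google Inc."
print(find_coral(sample))  # True
```

In practice you'd feed it the real output, e.g. `subprocess.run(["lsusb"], capture_output=True, text=True).stdout`.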

Install the latest build of opencv-python from piwheels; luckily, ultralytics only requires version 4.6 or above. https://www.piwheels.org/project/opencv-python/

 pip install opencv-python==4.6.0.66

Install ultralytics:

pip install ultralytics

Trying to run model.export:

I needed to install a specific CMake version so that I could install some deps for ultralytics (some ONNX packages), but CMake itself also needed the openssl-dev package :/

https://linuxhint.com/install-cmake-on-debian/

sudo apt-get install libssl-dev

Relevant yolov8 convert-to-tflite issue: ultralytics/ultralytics#1185, and the PR to get it working: ultralytics/ultralytics#1695 (comment)

It also seems the edgetpu_compiler can't run on ARM, so I think the tflite -> edge_tpu model conversion needs to be done on Colab.

https://colab.research.google.com/github/google-coral/tutorials/blob/master/compile_for_edgetpu.ipynb
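For reference, what that notebook boils down to is a single edgetpu_compiler invocation on the quantized tflite file. A sketch of building that command for subprocess (the `-s` and `-o` flag names are from the edgetpu_compiler docs; verify against your compiler version):

```python
import shlex

def compiler_cmd(tflite_path: str, out_dir: str = ".") -> list:
    """Build the edgetpu_compiler command line for a quantized tflite model.

    -s prints a per-op compilation summary, -o sets the output directory
    (flag names taken from the edgetpu_compiler docs; adjust if needed).
    """
    return ["edgetpu_compiler", "-s", "-o", out_dir, tflite_path]

# e.g. subprocess.run(compiler_cmd("yolov8n_int8.tflite", "compiled"), check=True)
print(shlex.join(compiler_cmd("yolov8n_int8.tflite")))
```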

Unfortunately, it seems the conversion is still failing:

pi@cm4-hypecycle:~/code/ultralytics $ yolo export model=yolov8n.pt data=coco128.yaml format=tflite int8
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.0.3) or chardet (4.0.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Ultralytics YOLOv8.0.124 🚀 Python-3.9.2 torch-2.0.1 CPU
YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs

PyTorch: starting from yolov8n.pt with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)

TensorFlow SavedModel: starting export with tensorflow 2.12.0...

ONNX: starting export with onnx 1.14.0 opset 17...
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

ONNX: simplifying with onnxsim 0.4.33...
ONNX: export success ✅ 13.0s, saved as yolov8n.onnx (12.2 MB)

TensorFlow SavedModel: running 'onnx2tf -i yolov8n.onnx -o yolov8n_saved_model -nuo --non_verbose -oiqt -qt per-tensor'
Traceback (most recent call last):
  File "/home/pi/.local/bin/onnx2tf", line 8, in <module>
    sys.exit(main())
  File "/home/pi/.local/lib/python3.9/site-packages/onnx2tf/onnx2tf.py", line 2149, in main
    model = convert(
  File "/home/pi/.local/lib/python3.9/site-packages/onnx2tf/onnx2tf.py", line 1077, in convert
    tflite_model = converter.convert()
  File "/home/pi/.local/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1897, in convert
    return super(TFLiteConverterV2, self).convert()
  File "/home/pi/.local/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 962, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "/home/pi/.local/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 954, in _convert_and_export_metrics
    return flatbuffer_utils.convert_object_to_bytearray(model_object)
  File "/home/pi/.local/lib/python3.9/site-packages/tensorflow/lite/tools/flatbuffer_utils.py", line 86, in convert_object_to_bytearray
    model_offset = model_object.Pack(builder)
  File "/home/pi/.local/lib/python3.9/site-packages/tensorflow/lite/python/schema_py_generated.py", line 12169, in Pack
    operatorCodes = builder.EndVector()
TypeError: EndVector() missing 1 required positional argument: 'vectorNumElems'
TensorFlow SavedModel: export success ✅ 147.6s, saved as yolov8n_saved_model (13.2 MB)

TensorFlow Lite: starting export with tensorflow 2.12.0...
TensorFlow Lite: export success ✅ 0.0s, saved as yolov8n_saved_model/yolov8n_int8.tflite (0.0 MB)

Export complete (154.7s)
Results saved to /home/pi/code/ultralytics
Predict:         yolo predict task=detect model=yolov8n_saved_model/yolov8n_int8.tflite imgsz=640 
Validate:        yolo val task=detect model=yolov8n_saved_model/yolov8n_int8.tflite imgsz=640 data=coco128.yaml 
Visualize:       https://netron.app

@shaunmulligan
Owner Author

According to the yolov8 Discord and this issue (ultralytics/ultralytics#1185), this is currently a no-go, but it does seem like people are working on it. Hopefully we can use it in the near future.

@shaunmulligan shaunmulligan added the wontfix This will not be worked on label Jul 3, 2023
@dvando

dvando commented Jul 13, 2023

Hi @shaunmulligan, have you found any solution related to the issue?
You could try this: https://github.com/michaelnguyen11/ultralytics/commit/e0348e43a185b8b69ce0def505ac4c9a55217551.
It's basically the same as the one from ultralytics/ultralytics#1695, but it's adapted to the new YOLO project structure.
I tried to run it on TPU with some minor changes to the model, and it works great (with a slight decrease in accuracy, ofc).

@shaunmulligan
Owner Author

Hey @dvando, thanks for the tip, I haven't found a solution yet. I will give this a try. What TPU are you using? I'm using the USB-based Edge TPU from coral.ai, so that might make it more difficult :P

@dvando

dvando commented Jul 14, 2023

Same with me, I'm using a USB Coral accelerator too, and it works great 👌

@shaunmulligan
Owner Author

Okay awesome, so all you did was use the standard yolov8s model and export it to edge_tpu format using the above?

@dvando

dvando commented Jul 14, 2023

The only noticeable change I made to the model was switching all of its activation functions to ReLU. After that, I followed the steps in the commit I mentioned above: I exported it to tflite int8, and then used the edgetpu_compiler to compile the model to run on the TPU.
Btw, it would be time well spent to read this.
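A minimal sketch of the "swap every activation to ReLU" mechanics dvando describes. With torch you'd iterate `named_children()`, `setattr()` a replacement for each matching child, and recurse into the rest; tiny stand-in classes are used below so the snippet is self-contained (and note the swap alone isn't enough: the model typically needs fine-tuning afterwards, which matches the slight accuracy drop mentioned above):

```python
# Stand-in for torch.nn.Module: children are Module-typed attributes.
class Module:
    def named_children(self):
        return [(k, v) for k, v in vars(self).items() if isinstance(v, Module)]

class SiLU(Module): pass   # yolov8's default activation
class ReLU(Module): pass   # Edge-TPU-friendly replacement

def swap_activations(module, old_cls=SiLU, new_cls=ReLU):
    """Recursively replace every old_cls child with a fresh new_cls instance.

    The same pattern applies to real torch modules: walk named_children(),
    setattr() the replacement, and recurse into non-matching children.
    """
    for name, child in module.named_children():
        if isinstance(child, old_cls):
            setattr(module, name, new_cls())
        else:
            swap_activations(child, old_cls, new_cls)

# Toy model: a conv block holding a SiLU activation, nested inside a model.
class Conv(Module):
    def __init__(self):
        self.act = SiLU()

class Model(Module):
    def __init__(self):
        self.conv = Conv()

m = Model()
swap_activations(m)
print(type(m.conv.act).__name__)  # ReLU
```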

@shaunmulligan
Owner Author

That's awesome @dvando, thanks so much for the tips!
