Example to run yolo model on NPU #664

Open
fobrs opened this issue Nov 7, 2024 · 2 comments

fobrs commented Nov 7, 2024

I made some sample code to show how to use the NPU to run a YOLO model on mp4 files.

It currently runs in real time on my Snapdragon X Elite Dev Box, twice as fast as the Yolov4 GPU DirectML sample while using less than half the power.

You can see it here: https://github.com/fobrs/yolov9_npu

Should I make a pull request?
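
For anyone who wants a feel for what NPU inference looks like before digging into the repository, here is a minimal sketch using ONNX Runtime's Python API with the QNN execution provider (the Snapdragon NPU backend). This is not the code from the linked sample; the model file, input size, and provider options are assumptions.

```python
# Hypothetical sketch: run a YOLO ONNX model on the Snapdragon NPU via
# ONNX Runtime's QNN execution provider (requires the onnxruntime-qnn package).
# Model path, input size, and post-processing are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "yolov9.onnx",                       # assumed model file
    providers=[
        ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),  # NPU (HTP) backend
        "CPUExecutionProvider",          # fallback if the NPU path is unavailable
    ],
)

# A single dummy 640x640 RGB frame; in a real demo each decoded mp4 frame
# would be resized, normalized, and transposed to NCHW like this.
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])        # raw detection tensors, before NMS
```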

fosteman commented:

Wow, nice work, man. Thanks!

I've been thinking long and hard about selecting a platform to deploy an assembly of YOLO models onto, and a Surface Pro with the Snapdragon X Elite chip comes to mind. But I have no confidence in it.

fobrs (author) commented Nov 27, 2024

I had hoped Qualcomm would have had all NPU drivers 100% ready when they started shipping the Copilot+ laptops, but that's not the case. DirectML is not 100% supported: not all ONNX models run without errors, only Qualcomm-approved ones from their AI Hub. The latest version of the demo supports running some models on the GPU (press G). A Yolov8s-seg.onnx model, not approved by Qualcomm, runs fine on the GPU but produces errors on the NPU.
And a word about speed: my five-year-old Dell Precision with an NVIDIA RTX 3000 graphics adapter runs these YOLO models as fast as the Qualcomm NPU. I suspect the speed Qualcomm advertises comes from running quantized (int8) models.
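
The int8 theory is straightforward to test with ONNX Runtime's quantization tooling, which converts a float32 model into a QDQ int8 model, the form NPU backends generally prefer. The sketch below is purely illustrative: the file names, the input tensor name "images", and the random calibration frames are placeholders, not anything from the demo.

```python
# Hypothetical sketch: produce an int8 (QDQ) version of a YOLO ONNX model with
# ONNX Runtime's static quantization tooling. File names and the calibration
# data are placeholders; a real run would feed representative video frames.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomFrames(CalibrationDataReader):
    """Feeds a handful of dummy 640x640 frames as calibration data."""
    def __init__(self, input_name: str, count: int = 8):
        self._batches = iter(
            [{input_name: np.random.rand(1, 3, 640, 640).astype(np.float32)}
             for _ in range(count)]
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    "yolov9.onnx",                 # assumed float32 model
    "yolov9_int8.onnx",            # quantized output
    RandomFrames("images"),        # "images" is an assumed input tensor name
    quant_format=QuantFormat.QDQ,  # QDQ is what NPU backends typically expect
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)
```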
