Building tensorflow lite GPU #1529
OK, after reading the docs I see that the list of header files in @Platform for the -gpu extension is not added to the existing list; I need to re-declare all of them. Having done that, the build seems much closer to compiling. There are still some errors like this:
and
I see that the above two members are defined inside this block in tensorflow/lite/delegates/gpu/delegate_options.h:
But I'm not sure how to work around this issue; any pointers would be appreciated. And the other error is this:
Which seems similar... TfLiteGpuDelegateV2CreateAsync is defined in tensorflow/lite/delegates/gpu/delegate.h like this:
Any tips for fixing those? Thanks in advance
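For context, the per-extension include list mentioned above lives on the preset's @Platform annotation. A minimal sketch of what re-declaring it might look like (the class shape and header names here are illustrative assumptions, not the preset's actual contents):

```java
// Hypothetical sketch of a javacpp preset re-declaring the full include
// list for the -gpu extension; the base headers must be repeated, then
// the GPU delegate headers appended. Names are illustrative only.
@Properties(
    value = @Platform(
        extension = "-gpu",
        include = {
            "tensorflow/lite/c/c_api.h",                         // base headers repeated...
            "tensorflow/lite/delegates/gpu/delegate_options.h",  // ...then GPU headers added
            "tensorflow/lite/delegates/gpu/delegate.h"
        }
    ),
    target = "org.bytedeco.tensorflowlite"
)
public class tensorflowlite_gpu { }
```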
You mean the include list? We can manipulate it later on with LoadEnabled.init() like this:
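A rough sketch of that approach (the property keys and header names below are assumptions for illustration, not the preset's actual code):

```java
// Hypothetical sketch: adjusting the include list at load time via
// javacpp's LoadEnabled.init(), so the GPU headers are only parsed
// when building with the -gpu extension.
public class tensorflowlite implements LoadEnabled {
    @Override
    public void init(ClassProperties properties) {
        String extension = properties.getProperty("platform.extension");
        List<String> includes = properties.get("platform.include");
        if (extension != null && extension.contains("-gpu")) {
            // append the GPU delegate headers for the -gpu extension only
            includes.add("tensorflow/lite/delegates/gpu/delegate_options.h");
            includes.add("tensorflow/lite/delegates/gpu/delegate.h");
        }
    }
}
```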
We can skip anything problematic like that rather easily:
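For example, in the preset's InfoMap (the symbol name below is a placeholder for whichever declaration fails to parse, since the exact members were not quoted above):

```java
// Hypothetical: tell the javacpp parser to skip a problematic declaration
// ("SomeProblematicMember" is a placeholder, not a real symbol)
infoMap.put(new Info("SomeProblematicMember").skip());
```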
Actually, no, we want the interface to be the same for all platforms, so just add the header file to the include list for all platforms.
But for functions that are not actually there to link with, we can annotate them with something like |
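Something along these lines, assuming javacpp's Info.annotations() can be used to restrict a generated binding to the -gpu extension (a sketch under that assumption, not the confirmed fix):

```java
// Hypothetical: only generate/link this function when building with the
// -gpu extension, while keeping the same Java interface everywhere
infoMap.put(new Info("TfLiteGpuDelegateV2CreateAsync")
        .annotations("@Platform(extension = \"-gpu\")"));
```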
I have got it compiling and generating the tensorflow-lite-linux-x86-64-gpu.jar file, and I'm able to create the GPU delegate using ModifyGraphWithDelegate + TfLiteGpuDelegateV2Create.

I have run into some issues actually using my NVIDIA GPU from within my Windows WSL2 installation, though. In particular, tflite uses OpenCL to interact with the GPU, and getting OpenCL and NVIDIA GPUs working together under WSL2 is a known problem - microsoft/WSL#6951. I was able to get tflite to use my integrated Intel GPU via OpenCL, so I don't think it's an issue with javacpp/tensorflowlite, but rather with the execution environment. I also followed the guide here - https://medium.com/@tackboon97_98523/how-to-install-opencl-on-wsl-ubuntu-to-detect-a-cuda-gpu-device-30f334a415ec - to get the NVIDIA GPU working with OpenCL via POCL, but when running my application I end up with messages like this:
Anyway, I think I might park this for now and investigate using ONNX Runtime instead. Would you be interested in building a GPU-enabled version of TFLite by default as part of your normal build? In that case I will try to clean up what I've done and submit a PR.
Sure, please open a PR with what you've got! Thanks
Hi,
I'm trying to build tensorflow lite for linux-x86_64 with the -gpu extension. I figured that the easiest way would be to use GitHub actions and just modify some of the workflow files, which I've done here - master...barrypitman:javacpp-presets:v1.5.10-GPU
I was initially able to build the tensorflow-lite-2.15.0-1.5.10-linux-x86_64-gpu.jar file by passing ext=-gpu. The resulting libjnitensorflowlite.so file is larger than the default one without GPU support (seems like a good thing).
However, when I try to include that tensorflow-lite-2.15.0-1.5.10-linux-x86_64-gpu.jar file in my project as a dependency, I can't create the GPU delegate, e.g. via "TfLiteGpuDelegateV2Create" as described here - https://www.tensorflow.org/lite/android/delegates/gpu_native#enable_gpu_acceleration. The class doesn't exist.
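For reference, the TFLite C API usage described in that guide, mapped to hypothetical javacpp-generated bindings, would look roughly like this (the static imports from the generated global class are an assumption, not verified against the actual jar):

```java
// Sketch of the intended GPU-delegate usage via javacpp bindings;
// names follow the TFLite C API (TfLiteGpuDelegateV2*), but the Java
// mapping shown here is assumed, not confirmed.
TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateV2OptionsDefault();
TfLiteDelegate delegate = TfLiteGpuDelegateV2Create(options);

// interpreter is an existing tflite Interpreter instance
interpreter.ModifyGraphWithDelegate(delegate);

// ... run inference ...

TfLiteGpuDelegateV2Delete(delegate);
```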
Then I tried to link the relevant header files to generate the necessary java classes, i.e.
But that caused the build to fail with a lot of compilation errors - https://github.com/barrypitman/javacpp-presets/actions/runs/10420729781/job/28861326724
Any tips or pointers for how to build the linux-x86_64 version of tensorflow lite with GPU support would be appreciated!
Thanks