Calculator::Process() for node "facedetectionfrontgpu__ImageToTensorCalculator" failed: Only BGRA/RGBA textures are supported, passed format: 24 #1311
Comments
Hello, I faced the same problem while testing the new palm_detection_gpu and hand_landmark_tracking_gpu graphs on an Ubuntu desktop (mediapipe tag 0.8.0).

1. I modified the demo_run_graph_main_gpu.cc source file to convert camera_frame_raw with the cv::COLOR_BGR2RGBA flag instead of cv::COLOR_BGR2RGB, and then used mediapipe::ImageFormat::SRGBA to create the ImageFrame. These changes resolved the "Only BGRA/RGBA textures are supported, passed format: 24" error, but then I got an error stack like this:

I20201119 13:44:13.844982 3912 demo_run_graph_main_gpu.cc:88] Start running the calculator graph.
[mutex.cc : 1365] RAW: Acquiring 0x7fd48ca378e8 Mutexes held: 0x55b189860558 0x7fd48c0afc98
[mutex.cc : 1381] RAW: mutex@0x7fd48c0afc98 stack:
[mutex.cc : 1386] RAW: dying due to potential deadlock

I wonder whether you would get something similar if you tried these solutions.

2. My second guess was to replace some calculators with their older versions. Since the ImageTransformationCalculator supports not only RGBA but also RGB input, it could run with the original input from demo_run_graph_main_gpu.cc without any additional pixel format conversion. The graph could run and detect hands, but it performed far worse than the CPU version of the same release (tag 0.8.0) or the earlier GPU versions.

While trying to find other solutions, I found a note about this in the new TensorConverterCalculator, so I assume this is a known issue in the latest version. I would also appreciate any help on this.
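For reference, a minimal sketch of the change described in point 1, assuming the surrounding code matches the stock demo_run_graph_main_gpu.cc from tag 0.8.0 (variable names taken from that file):

```cpp
// Convert the captured BGR frame to RGBA instead of RGB.
cv::Mat camera_frame;
cv::cvtColor(camera_frame_raw, camera_frame, cv::COLOR_BGR2RGBA);

// Create the ImageFrame with the matching SRGBA format so the GPU path
// receives a supported texture format.
auto input_frame = absl::make_unique<mediapipe::ImageFrame>(
    mediapipe::ImageFormat::SRGBA, camera_frame.cols, camera_frame.rows,
    mediapipe::ImageFrame::kGlDefaultAlignmentBoundary);
cv::Mat input_frame_mat = mediapipe::formats::MatView(input_frame.get());
camera_frame.copyTo(input_frame_mat);
```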
Just a quick update related to the error stack mentioned in my last comment at point 1: after this fix, the graph can run with GPU support, but it still performs poorly (in terms of how often it detects hands). The CPU version somehow performs much better on the same video sample. I did not see such a difference in earlier versions.
Sorry, I made some more tests, and eventually, with the two fixes I mentioned before, the GPU and CPU versions perform similarly.

1. Either:
a, change row 101 to: cv::cvtColor(camera_frame_raw, camera_frame, cv::COLOR_BGR2RGBA);
OR
b, change input_stream: "input_video" to input_stream: "input_videoraw" in row 4 and add a converting node (see the sketch just after this comment). In this case you need to change demo_run_graph_main_gpu.cc row 33 to constexpr char kInputStream[] = "input_videoraw"; Also do not forget to add the calculator to mobile_calculators in the build file mediapipe/graphs/face_detection/BUILD, for example at row 22.

2.
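The node body in point 1b above was truncated, so the exact calculator used is not shown. Given that the earlier comment points out that ImageTransformationCalculator accepts RGB as well as RGBA input, a plausible but unverified reconstruction of the converting node would be:

```
# Hypothetical reconstruction of the truncated node: passes the raw RGB
# GPU frames through ImageTransformationCalculator, which outputs RGBA.
node {
  calculator: "ImageTransformationCalculator"
  input_stream: "IMAGE_GPU:input_videoraw"
  output_stream: "IMAGE_GPU:input_video"
}
```

Under that assumption, the matching dependency to add to mobile_calculators in mediapipe/graphs/face_detection/BUILD would be:

```
"//mediapipe/calculators/image:image_transformation_calculator",
```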
Wow, this is great! Thank you so much for your help. Just a quick question: I am working on adding a feature to the desktop version of multi hands that labels left versus right. I thought I saw this out there as a modification to a calculator at some point, but I don't see it anymore. Any chance you have this around as well?
If I understand correctly, you are looking for the mediapipe/graphs/hand_tracking/subgraphs/hand_renderer_gpu.pbtxt file. If you want to render the handedness around the related hands, you need to use the multi_hand_rects to position the labels.
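For illustration, a minimal hypothetical sketch (not code from the thread) of how one of the multi_hand_rects could be turned into a pixel position for a handedness label; it assumes the rect is a mediapipe::NormalizedRect, whose x_center/y_center/width/height fields are normalized to [0, 1]:

```cpp
#include "mediapipe/framework/formats/rect.pb.h"

// Hypothetical helper: returns a pixel position slightly above the top
// edge of a hand rect, horizontally centered, where a label could go.
struct LabelPos { int x; int y; };

LabelPos LabelPositionForHand(const mediapipe::NormalizedRect& rect,
                              int frame_width, int frame_height) {
  const int x = static_cast<int>(rect.x_center() * frame_width);
  const int y = static_cast<int>(
      (rect.y_center() - rect.height() / 2.0f) * frame_height) - 10;
  return {x, y};
}
```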
Yes sir, and I have both of those functions working, but the current hand landmarks only show 0 and 1, where 0 is the first hand detected and 1 is the second hand (regardless of whether it is left or right). I have seen some code that uses the palm vector in relation to the thumb position to tell left from right, but I cannot find it now. I was able to do it in Python, but for speed's sake I would prefer to do it in the calculator as you mentioned; I am just struggling a bit in C++ without things like NumPy. I was hoping it was lying around and I missed it. Again, thank you for your prompt response and help! You have been great.
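The palm-vector snippet mentioned above does not appear in this thread; below is a minimal hypothetical sketch of the general idea in plain C++, using a stand-in Landmark struct instead of MediaPipe's landmark protos. The landmark indices follow MediaPipe's hand model (0 = wrist, 5 = index finger MCP, 17 = pinky MCP), and which sign of the cross product maps to which hand depends on your coordinate conventions, so treat the labels as placeholders to calibrate:

```cpp
#include <string>

// Stand-in for mediapipe::NormalizedLandmark.
struct Landmark { float x, y, z; };

// Heuristic: the 2D cross product of the wrist->index_mcp and
// wrist->pinky_mcp vectors flips sign between a left and a right hand
// (for a palm facing the camera). If the back of the hand faces the
// camera the sign flips as well, so this is only a rough classifier.
std::string ClassifyHandedness(const Landmark& wrist,
                               const Landmark& index_mcp,
                               const Landmark& pinky_mcp) {
  const float ux = index_mcp.x - wrist.x;
  const float uy = index_mcp.y - wrist.y;
  const float vx = pinky_mcp.x - wrist.x;
  const float vy = pinky_mcp.y - wrist.y;
  const float cross = ux * vy - uy * vx;
  return cross > 0.0f ? "Left" : "Right";
}
```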
Confirmed that your solution solved my original issue. I did number 1, and I also changed row 442 in tensors_to_detections_calculator.cc from: glDispatchCompute(num_boxes_, 1, 1);
Follow-up on the handedness: I was compiling the wrong version (too many adjustments trying to fix my original version). I see exactly what you are talking about, and I am going to attempt the modifications to label the hands above the hand images.
Great! I'm happy it helped!
Having an issue with all mediapipe builds on my system. Everything compiles fine, but when I run it I get the BGRA/RGBA texture error:
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/face_detection/face_detection_gpu --calculator_graph_config_file=mediapipe/graphs/face_detection/face_detection_mobile_gpu.pbtxt
Result:
I20201118 09:54:38.296219 8095 demo_run_graph_main_gpu.cc:51] Get calculator graph config contents: # MediaPipe graph that performs face mesh with TensorFlow Lite on GPU.

# GPU buffer. (GpuBuffer)
input_stream: "input_video"

# Output image with rendered results. (GpuBuffer)
output_stream: "output_video"

# Detected faces. (std::vector<Detection>)
output_stream: "face_detections"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessary computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}

# Subgraph that detects faces.
node {
  calculator: "FaceDetectionFrontGpu"
  input_stream: "IMAGE:throttled_input_video"
  output_stream: "DETECTIONS:face_detections"
}

# Converts the detections to drawing primitives for annotation overlay.
node {
  calculator: "DetectionsToRenderDataCalculator"
  input_stream: "DETECTIONS:face_detections"
  output_stream: "RENDER_DATA:render_data"
  node_options: {
    [type.googleapis.com/mediapipe.DetectionsToRenderDataCalculatorOptions] {
      thickness: 4.0
      color { r: 255 g: 0 b: 0 }
    }
  }
}

# Draws annotations and overlays them on top of the input images.
node {
  calculator: "AnnotationOverlayCalculator"
  input_stream: "IMAGE_GPU:throttled_input_video"
  input_stream: "render_data"
  output_stream: "IMAGE_GPU:output_video"
}
I20201118 09:54:38.296731 8095 demo_run_graph_main_gpu.cc:57] Initialize the calculator graph.
I20201118 09:54:38.297314 8095 demo_run_graph_main_gpu.cc:61] Initialize the GPU.
I20201118 09:54:38.305438 8095 gl_context_egl.cc:158] Successfully initialized EGL. Major : 1 Minor: 5
I20201118 09:54:38.363840 8106 gl_context.cc:324] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 455.45.01)
I20201118 09:54:38.363948 8095 demo_run_graph_main_gpu.cc:67] Initialize the camera or load the video.
I20201118 09:54:38.848876 8095 demo_run_graph_main_gpu.cc:88] Start running the calculator graph.
I20201118 09:54:38.849045 8095 demo_run_graph_main_gpu.cc:93] Start grabbing and processing frames.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
DEQUANTIZE:
164 operations will run on the GPU, and the remaining 0 operations will run on the CPU.
I20201118 09:54:39.205772 8095 demo_run_graph_main_gpu.cc:175] Shutting down.
E20201118 09:54:39.238298 8095 demo_run_graph_main_gpu.cc:186] Failed to run the graph: CalculatorGraph::Run() failed in Run:
Calculator::Process() for node "facedetectionfrontgpu__ImageToTensorCalculator" failed: Only BGRA/RGBA textures are supported, passed format: 24
I am running Ubuntu 18.04 with a 2080 Ti. NVIDIA drivers 455.
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
Any help is greatly appreciated.