
Failed to run hand_tracking_gpu and face_detection_gpu on Nvidia Jetson Nano: core dumped #1315

Closed
cg3dland opened this issue Nov 20, 2020 · 3 comments


After updating the library path from "lib/x86_64-linux-gnu/..." to "lib/aarch64-linux-gnu/..." in third_party/opencv_linux.BUILD, the hand_tracking_gpu and face_detection_gpu examples build successfully with the following commands:

bazel build -c opt --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu
bazel build -c opt --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 mediapipe/examples/desktop/face_detection:face_detection_gpu
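For reference, the path switch described above can be done in one pass with sed. This is just a sketch, assuming OpenCV is installed under the stock Ubuntu multiarch directories on the Nano:

```shell
# Rewrite every x86_64 multiarch library path in the OpenCV BUILD file to
# the Jetson Nano's aarch64 layout (lib/x86_64-linux-gnu/... ->
# lib/aarch64-linux-gnu/...). Run from the MediaPipe repo root.
sed -i 's|x86_64-linux-gnu|aarch64-linux-gnu|g' third_party/opencv_linux.BUILD
```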

but when these two examples are run on the Jetson Nano with the following commands, they crash with a core dump:
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hand_tracking/hand_tracking_gpu --calculator_graph_config_file=mediapipe/graphs/hand_tracking/hand_tracking_desktop_live_gpu.pbtxt
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/face_detection/face_detection_gpu --calculator_graph_config_file=mediapipe/graphs/face_detection/face_detection_mobile_gpu.pbtxt

The outputs of hand_tracking_gpu are:
I20201120 13:46:59.447275 654 demo_run_graph_main_gpu.cc:51] Get calculator graph config contents: # MediaPipe graph that performs multi-hand tracking with TensorFlow Lite on GPU.

# Used in the examples in
# mediapipe/examples/android/src/java/com/mediapipe/apps/handtrackinggpu.

# GPU image. (GpuBuffer)
input_stream: "input_video"

# GPU image. (GpuBuffer)
output_stream: "output_video"

# Collection of detected/predicted hands, each represented as a list of
# landmarks. (std::vector<NormalizedLandmarkList>)
output_stream: "hand_landmarks"

# Generates side packet containing max number of hands to detect/track.

node {
  calculator: "ConstantSidePacketCalculator"
  output_side_packet: "PACKET:num_hands"
  node_options: {
    [type.googleapis.com/mediapipe.ConstantSidePacketCalculatorOptions]: {
      packet { int_value: 2 }
    }
  }
}

# Detects/tracks hand landmarks.

node {
  calculator: "HandLandmarkTrackingGpu"
  input_stream: "IMAGE:input_video"
  input_side_packet: "NUM_HANDS:num_hands"
  output_stream: "LANDMARKS:hand_landmarks"
  output_stream: "HANDEDNESS:handedness"
  output_stream: "PALM_DETECTIONS:palm_detections"
  output_stream: "HAND_ROIS_FROM_LANDMARKS:hand_rects_from_landmarks"
  output_stream: "HAND_ROIS_FROM_PALM_DETECTIONS:hand_rects_from_palm_detections"
}

# Subgraph that renders annotations and overlays them on top of the input
# images (see hand_renderer_gpu.pbtxt).

node {
  calculator: "HandRendererSubgraph"
  input_stream: "IMAGE:input_video"
  input_stream: "DETECTIONS:palm_detections"
  input_stream: "LANDMARKS:hand_landmarks"
  input_stream: "HANDEDNESS:handedness"
  input_stream: "NORM_RECTS:0:hand_rects_from_palm_detections"
  input_stream: "NORM_RECTS:1:hand_rects_from_landmarks"
  output_stream: "IMAGE:output_video"
}
I20201120 13:46:59.449458 654 demo_run_graph_main_gpu.cc:57] Initialize the calculator graph.
I20201120 13:46:59.459513 654 demo_run_graph_main_gpu.cc:61] Initialize the GPU.
I20201120 13:46:59.485337 654 gl_context_egl.cc:158] Successfully initialized EGL. Major : 1 Minor: 5
I20201120 13:46:59.528771 660 gl_context.cc:324] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 32.4.3)
I20201120 13:46:59.529000 654 demo_run_graph_main_gpu.cc:67] Initialize the camera or load the video.
I20201120 13:47:00.270992 654 demo_run_graph_main_gpu.cc:88] Start running the calculator graph.
I20201120 13:47:00.273991 654 demo_run_graph_main_gpu.cc:93] Start grabbing and processing frames.
INFO: Created TensorFlow Lite delegate for GPU.
I20201120 13:47:01.172703 654 demo_run_graph_main_gpu.cc:175] Shutting down.
Segmentation fault (core dumped)

The outputs of face_detection_gpu are:
I20201120 13:48:13.271037 759 demo_run_graph_main_gpu.cc:51] Get calculator graph config contents: # MediaPipe graph that performs face mesh with TensorFlow Lite on GPU.

# GPU buffer. (GpuBuffer)
input_stream: "input_video"

# Output image with rendered results. (GpuBuffer)
output_stream: "output_video"

# Detected faces. (std::vector<Detection>)
output_stream: "face_detections"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessary computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.

node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}

# Subgraph that detects faces.

node {
  calculator: "FaceDetectionFrontGpu"
  input_stream: "IMAGE:throttled_input_video"
  output_stream: "DETECTIONS:face_detections"
}

# Converts the detections to drawing primitives for annotation overlay.

node {
  calculator: "DetectionsToRenderDataCalculator"
  input_stream: "DETECTIONS:face_detections"
  output_stream: "RENDER_DATA:render_data"
  node_options: {
    [type.googleapis.com/mediapipe.DetectionsToRenderDataCalculatorOptions] {
      thickness: 4.0
      color { r: 255 g: 0 b: 0 }
    }
  }
}

# Draws annotations and overlays them on top of the input images.

node {
  calculator: "AnnotationOverlayCalculator"
  input_stream: "IMAGE_GPU:throttled_input_video"
  input_stream: "render_data"
  output_stream: "IMAGE_GPU:output_video"
}
I20201120 13:48:13.273288 759 demo_run_graph_main_gpu.cc:57] Initialize the calculator graph.
I20201120 13:48:13.275235 759 demo_run_graph_main_gpu.cc:61] Initialize the GPU.
I20201120 13:48:13.299504 759 gl_context_egl.cc:158] Successfully initialized EGL. Major : 1 Minor: 5
I20201120 13:48:13.343075 765 gl_context.cc:324] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 32.4.3)
I20201120 13:48:13.343400 759 demo_run_graph_main_gpu.cc:67] Initialize the camera or load the video.
I20201120 13:48:14.090574 759 demo_run_graph_main_gpu.cc:88] Start running the calculator graph.
I20201120 13:48:14.090939 759 demo_run_graph_main_gpu.cc:93] Start grabbing and processing frames.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
DEQUANTIZE:
164 operations will run on the GPU, and the remaining 0 operations will run on the CPU.
I20201120 13:48:14.989559 759 demo_run_graph_main_gpu.cc:175] Shutting down.
Segmentation fault (core dumped)

What is going wrong when these two examples are built and run?

BTW, unfortunately, I could not find a core dump file in $HOME/mediapipe, bazel-bin/mediapipe/examples/desktop/hand_tracking, or bazel-bin/mediapipe/examples/desktop/face_detection.
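As an aside on the missing core file: on many Linux setups core dumps are disabled by default (size limit 0), or the kernel's core_pattern redirects them to a handler such as apport or systemd-coredump instead of the working directory. A quick check, assuming a bash shell on the Nano:

```shell
# Allow core files of unlimited size for this shell session.
ulimit -c unlimited

# Show where the kernel writes core dumps: a plain pattern like "core"
# means the crashing process's working directory, while a pattern starting
# with "|" means a handler (apport, systemd-coredump, ...) collects them.
cat /proc/sys/kernel/core_pattern
```

After setting the limit, re-running the crashing binary from the same shell should leave a core file wherever core_pattern points.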

@cg3dland (Author):

The CPU versions build and run as expected.

@AlvarezAti90:

Hello,

It might be the same issue as #1311. Can you try the steps proposed there?

@cg3dland (Author):

Yes, this is the same issue as #1311; it was resolved by following the guidance from #1311.
