Commit f5df228

Project import generated by Copybara.

PiperOrigin-RevId: 264105834

MediaPipe Team authored and camillol committed Aug 19, 2019
1 parent 71a47bb commit f5df228

Showing 28 changed files with 276 additions and 257 deletions.
31 changes: 10 additions & 21 deletions mediapipe/docs/face_detection_mobile_gpu.md
@@ -8,33 +8,24 @@ that performs face detection with TensorFlow Lite on GPU.

## Android

-Please see [Hello World! in MediaPipe on Android](hello_world_android.md) for
-general instructions to develop an Android application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu)

-The graph below is used in the
-[Face Detection GPU Android example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu).
-To build the app, run:
+To build and install the app:

```bash
bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu
```

-To further install the app on an Android device, run:

```bash
adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu/facedetectiongpu.apk
```
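If the device already holds an older or differently signed build, the install
step can fail; removing the old package first usually resolves it. A minimal
sketch, assuming `adb` is on `PATH` and the package name is
`com.google.mediapipe.apps.facedetectiongpu` (an illustrative guess, not
confirmed by this doc):

```bash
# Confirm adb can see the attached device.
adb devices

# Remove any previously installed build (ignore errors if absent), then
# reinstall the freshly built APK.
adb uninstall com.google.mediapipe.apps.facedetectiongpu || true
adb install -r bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu/facedetectiongpu.apk
```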

## iOS

-Please see [Hello World! in MediaPipe on iOS](hello_world_ios.md) for general
-instructions to develop an iOS application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/facedetectiongpu).

+See the general [instructions](./mediapipe_ios_setup.md) for building iOS
+examples and generating an Xcode project. This will be the FaceDetectionGpuApp
+target.

-The graph below is used in the
-[Face Detection GPU iOS example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/facedetectiongpu).
-To build the app, please see the general
-[MediaPipe iOS app building and setup instructions](./mediapipe_ios_setup.md).
-Specific to this example, run:
+To build on the command line:

```bash
bazel build -c opt --config=ios_arm64 mediapipe/examples/ios/facedetectiongpu:FaceDetectionGpuApp
@@ -51,7 +42,7 @@ below and paste it into [MediaPipe Visualizer](https://viz.mediapipe.dev/).

```bash
# MediaPipe graph that performs face detection with TensorFlow Lite on GPU.
-# Used in the example in
+# Used in the examples in
# mediapipe/examples/android/src/java/com/mediapipe/apps/facedetectiongpu and
# mediapipe/examples/ios/facedetectiongpu.

@@ -227,9 +218,7 @@ node {
}
}

-# Draws annotations and overlays them on top of a GPU copy of the original
-# image coming into the graph. The calculator assumes that image origin is
-# always at the top-left corner and renders text accordingly.
+# Draws annotations and overlays them on top of the input images.
node {
calculator: "AnnotationOverlayCalculator"
input_stream: "INPUT_FRAME_GPU:throttled_input_video"
58 changes: 16 additions & 42 deletions mediapipe/docs/hair_segmentation_mobile_gpu.md
@@ -8,20 +8,12 @@ that performs hair segmentation with TensorFlow Lite on GPU.

## Android

-Please see [Hello World! in MediaPipe on Android](hello_world_android.md) for
-general instructions to develop an Android application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu)

-The graph below is used in the
-[Hair Segmentation GPU Android example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu).
-To build the app, run:
+To build and install the app:

```bash
bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu
```

-To further install the app on an Android device, run:

```bash
adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu/hairsegmentationgpu.apk
```

@@ -37,7 +29,7 @@ below and paste it into [MediaPipe Visualizer](https://viz.mediapipe.dev/).
```bash
# MediaPipe graph that performs hair segmentation with TensorFlow Lite on GPU.
# Used in the example in
-# mediapipe/examples/ios/hairsegmentationgpu.
+# mediapipe/examples/android/src/java/com/mediapipe/apps/hairsegmentationgpu.

# Images on GPU coming into and out of the graph.
input_stream: "input_video"
@@ -84,14 +76,11 @@ node: {
}
}

-# Waits for a mask from the previous round of hair segmentation to be fed back
-# as an input, and caches it. Upon the arrival of an input image, it checks if
-# there is a mask cached, and sends out the mask with the timestamp replaced by
-# that of the input image. This is needed so that the "current image" and the
-# "previous mask" share the same timestamp, and as a result can be synchronized
-# and combined in the subsequent calculator. Note that upon the arrival of the
-# very first input frame, an empty packet is sent out to jump start the feedback
-# loop.
+# Caches a mask fed back from the previous round of hair segmentation, and upon
+# the arrival of the next input image sends out the cached mask with the
+# timestamp replaced by that of the input image, essentially generating a packet
+# that carries the previous mask. Note that upon the arrival of the very first
+# input image, an empty packet is sent out to jump start the feedback loop.
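#
# Illustrative timeline (an editorial sketch, not part of the graph file): for
# the first image at t=0 an empty PREV_LOOP packet is emitted so downstream
# nodes can fire; the mask m0 computed for t=0 is fed back and re-emitted as
# PREV_LOOP at t=1, m1 at t=2, and so on, so "current image" and "previous
# mask" always share a timestamp.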
node {
calculator: "PreviousLoopbackCalculator"
input_stream: "MAIN:throttled_input_video"
@@ -114,9 +103,9 @@ node {

# Converts the transformed input image on GPU into an image tensor stored in
# tflite::gpu::GlBuffer. The zero_center option is set to false to normalize the
-# pixel values to [0.f, 1.f] as opposed to [-1.f, 1.f].
-# With the max_num_channels option set to 4, all 4 RGBA channels are contained
-# in the image tensor.
+# pixel values to [0.f, 1.f] as opposed to [-1.f, 1.f]. With the
+# max_num_channels option set to 4, all 4 RGBA channels are contained in the
+# image tensor.
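#
# As an illustration (not in the original graph comment): with zero_center
# false, each 8-bit channel value v maps to v / 255.0, giving [0.f, 1.f]; with
# zero_center true it would map to v / 127.5 - 1.0, giving [-1.f, 1.f].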
node {
calculator: "TfLiteConverterCalculator"
input_stream: "IMAGE_GPU:mask_embedded_input_video"
@@ -147,7 +136,7 @@ node {
node {
calculator: "TfLiteInferenceCalculator"
input_stream: "TENSORS_GPU:image_tensor"
output_stream: "TENSORS:segmentation_tensor"
output_stream: "TENSORS_GPU:segmentation_tensor"
input_side_packet: "CUSTOM_OP_RESOLVER:op_resolver"
node_options: {
[type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
@@ -157,23 +146,15 @@ node {
}
}

-# The next step (tensors to segmentation) is not yet supported on iOS GPU.
-# Convert the previous segmentation mask to CPU for processing.
-node: {
-calculator: "GpuBufferToImageFrameCalculator"
-input_stream: "previous_hair_mask"
-output_stream: "previous_hair_mask_cpu"
-}

# Decodes the segmentation tensor generated by the TensorFlow Lite model into a
-# mask of values in [0.f, 1.f], stored in the R channel of a CPU buffer. It also
+# mask of values in [0.f, 1.f], stored in the R channel of a GPU buffer. It also
# takes the mask generated previously as another input to improve the temporal
# consistency.
node {
calculator: "TfLiteTensorsToSegmentationCalculator"
input_stream: "TENSORS:segmentation_tensor"
input_stream: "PREV_MASK:previous_hair_mask_cpu"
output_stream: "MASK:hair_mask_cpu"
input_stream: "TENSORS_GPU:segmentation_tensor"
input_stream: "PREV_MASK_GPU:previous_hair_mask"
output_stream: "MASK_GPU:hair_mask"
node_options: {
[type.googleapis.com/mediapipe.TfLiteTensorsToSegmentationCalculatorOptions] {
tensor_width: 512
@@ -185,13 +166,6 @@ node {
}
}

-# Send the current segmentation mask to GPU for the last step, blending.
-node: {
-calculator: "ImageFrameToGpuBufferCalculator"
-input_stream: "hair_mask_cpu"
-output_stream: "hair_mask"
-}

# Colors the hair segmentation with the color specified in the option.
node {
calculator: "RecolorCalculator"
46 changes: 28 additions & 18 deletions mediapipe/docs/hand_detection_mobile_gpu.md
@@ -20,33 +20,32 @@ confidence score to generate the hand rectangle, to be further utilized in the

## Android

-Please see [Hello World! in MediaPipe on Android](hello_world_android.md) for
-general instructions to develop an Android application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu)

-The graph below is used in the
-[Hand Detection GPU Android example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu).
-To build the app, run:
+An arm64 APK can be
+[downloaded here](https://drive.google.com/open?id=1qUlTtH7Ydg-wl_H6VVL8vueu2UCTu37E).
+
+To build the app yourself:

```bash
bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu
```

-To further install the app on an Android device, run:
+Once the app is built, install it on an Android device with:

```bash
adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu/handdetectiongpu.apk
```
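Note that the downloaded APK and a locally built APK are typically signed with
different keys, so installing one over the other can fail. A minimal sketch of
the workaround, assuming the package name is
`com.google.mediapipe.apps.handdetectiongpu` (an illustrative guess, not
confirmed by this doc):

```bash
# Uninstall whichever variant is currently on the device (ignore errors if
# none is installed), then install the new APK.
adb uninstall com.google.mediapipe.apps.handdetectiongpu || true
adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu/handdetectiongpu.apk
```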

## iOS

-Please see [Hello World! in MediaPipe on iOS](hello_world_ios.md) for general
-instructions to develop an iOS application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/handdetectiongpu).

+See the general [instructions](./mediapipe_ios_setup.md) for building iOS
+examples and generating an Xcode project. This will be the HandDetectionGpuApp
+target.

-The graph below is used in the
-[Hand Detection GPU iOS example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/handdetectiongpu).
-To build the app, please see the general
-[MediaPipe iOS app building and setup instructions](./mediapipe_ios_setup.md).
-Specific to this example, run:
+To build on the command line:

```bash
bazel build -c opt --config=ios_arm64 mediapipe/examples/ios/handdetectiongpu:HandDetectionGpuApp
@@ -70,14 +69,24 @@ Visualizing Subgraphs section in the

```bash
# MediaPipe graph that performs hand detection with TensorFlow Lite on GPU.
-# Used in the example in
-# mediapipe/examples/android/src/java/com/mediapipe/apps/handdetectiongpu.
+# Used in the examples in
+# mediapipe/examples/android/src/java/com/mediapipe/apps/handdetectiongpu and
+# mediapipe/examples/ios/handdetectiongpu.

# Images coming into and out of the graph.
input_stream: "input_video"
output_stream: "output_video"

+# Throttles the images flowing downstream for flow control. It passes through
+# the very first incoming image unaltered, and waits for HandDetectionSubgraph
+# downstream in the graph to finish its tasks before it passes through another
+# image. All images that come in while waiting are dropped, limiting the number
+# of in-flight images in HandDetectionSubgraph to 1. This prevents the nodes in
+# HandDetectionSubgraph from queuing up incoming images and data excessively,
+# which leads to increased latency and memory usage, unwanted in real-time
+# mobile applications. It also eliminates unnecessary computation, e.g., the
+# output produced by a node in the subgraph may get dropped downstream if the
+# subsequent nodes are still busy processing previous inputs.
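#
# Rough numbers as an illustration (not in the original comment): if the
# camera delivers a frame every 33 ms but the subgraph needs 50 ms per frame,
# the limiter admits a new frame only when the previous one finishes, so
# roughly every other camera frame is dropped instead of queuing up and
# inflating latency.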
node {
calculator: "FlowLimiterCalculator"
input_stream: "input_video"
@@ -89,6 +98,7 @@ node {
output_stream: "throttled_input_video"
}

+# Subgraph that detects hands (see hand_detection_gpu.pbtxt).
node {
calculator: "HandDetectionSubgraph"
input_stream: "throttled_input_video"
@@ -123,7 +133,7 @@ node {
}
}

-# Draws annotations and overlays them on top of the input image into the graph.
+# Draws annotations and overlays them on top of the input images.
node {
calculator: "AnnotationOverlayCalculator"
input_stream: "INPUT_FRAME_GPU:throttled_input_video"
@@ -271,8 +281,8 @@ node {
}
}

-# Maps detection label IDs to the corresponding label text. The label map is
-# provided in the label_map_path option.
+# Maps detection label IDs to the corresponding label text ("Palm"). The label
+# map is provided in the label_map_path option.
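#
# For reference (an editorial note, not in the graph file): the label map is a
# plain-text file with one label per line, the zero-based line number serving
# as the label ID; for palm detection it is expected to be a single line
# reading "Palm".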
node {
calculator: "DetectionLabelIdToTextCalculator"
input_stream: "filtered_detections"