
MaskOverlayCalculator black screen issue #3405

Closed
UnlimitedX opened this issue Jun 7, 2022 · 22 comments
Assignees
Labels
legacy:selfie segmentation Issues related to selfie segmentation platform:android Issues with Android as Platform type:support General questions

Comments

@UnlimitedX

UnlimitedX commented Jun 7, 2022

System information (Please provide as much relevant information as possible)

  • Android
  • Solution: SelfieSegmentation
  • Programming Language and version (e.g. C++, Python, Java): Java

Describe the expected behavior:
I'd like to use the MaskOverlayCalculator node but I get a black screen as output.

My custom graph

I've customized the SelfieSegmentation GPU graph to be able to receive a background video.
I'm sending packets to the graph in Java using the VideoInput java class (solutioncore package).

videoBackgroundInput.setNewFrameListener(textureFrame -> {
    // Log.d("Segmenter", "New texture frame received");
    Packet videoPacket = processor.getPacketCreator().createGpuBuffer(textureFrame);
    processor.getGraph().addConsumablePacketToInputStream(
        BACKGROUND_VIDEO_STREAM_NAME, videoPacket, textureFrame.getTimestamp());
    videoPacket.release();
});

The background_video is properly displayed if I display it using the RecolorCalculator node. Same for the input_video.
But when I want to use the MaskOverlayCalculator, I get a black screen as output.
Here is the graph below with some comments.

# MediaPipe graph that performs selfie segmentation with TensorFlow Lite on GPU.

## GPU buffer. (GpuBuffer)
input_stream: "input_video"

# Custom Background video
input_stream: "background_video"

# Output image with rendered results. (GpuBuffer)
output_stream: "output_video"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessarily computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}


node {
  calculator: "FlowLimiterCalculator"
  input_stream: "background_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_background_video"
}

node {
 calculator: "ImageTransformationCalculator"
 #input_stream: "IMAGE_GPU:background_video"
 input_stream: "IMAGE_GPU:throttled_background_video"
 output_stream: "IMAGE_GPU:scaled_background_video"
 node_options: {
    [type.googleapis.com/mediapipe.ImageTransformationCalculatorOptions] {
      output_width: 1280
      output_height: 720
    }
  }
}

# Subgraph that performs selfie segmentation.
node {
  calculator: "SelfieSegmentationGpu"
  input_stream: "IMAGE:throttled_input_video"
  output_stream: "SEGMENTATION_MASK:segmentation_mask"
}



# <- I'd like to use this calculator, but the result is a black screen. No error appears in the logs, although ExternalTextureConverter seems to be allocating new textures continuously.
#node {
#  calculator: "MaskOverlayCalculator"
#  input_stream: "VIDEO:0:scaled_background_video"
#  input_stream: "VIDEO:1:throttled_input_video"
#  input_stream: "MASK:segmentation_mask"
#  output_stream: "OUTPUT:output_video"
#}

# Colors the selfie segmentation with the color specified in the option.
node {
  calculator: "RecolorCalculator"
  #input_stream: "IMAGE_GPU:throttled_input_video" # <- with this setting enabled: works well, displays camera as output with segmentation mask applied
  input_stream: "IMAGE_GPU:scaled_background_video" # <- with this setting enabled: displays the background video properly as output, although the segmentation mask is not applied
  input_stream: "MASK_GPU:segmentation_mask"
  output_stream: "IMAGE_GPU:output_video"
  node_options: {
    [type.googleapis.com/mediapipe.RecolorCalculatorOptions] {
      color { r: 0 g: 125 b: 255 }
      mask_channel: RED
      invert_mask: true
      adjust_with_luminance: false
    }
  }
}

Looking forward to hearing from you!
Thanks

@UnlimitedX UnlimitedX added the type:support General questions label Jun 7, 2022
@sureshdagooglecom sureshdagooglecom added platform:android Issues with Android as Platform legacy:selfie segmentation Issues related to selfie segmentation labels Jun 8, 2022
@sureshdagooglecom

Hi @UnlimitedX ,
Could you share a reference video for the above issue?

@sureshdagooglecom sureshdagooglecom added the stat:awaiting response Waiting for user response label Jun 9, 2022
@UnlimitedX
Author

UnlimitedX commented Jun 9, 2022

Hi @sureshdagooglecom ,
Thanks for your prompt reply.

I use the Recolor Calculator to check if my videos are adequately provided to the Mediapipe graph.

When scaled_background_video is the input to the RecolorCalculator, the background plays, which shows that the background_video packets are properly injected (although the video is flipped vertically, I have no idea why). However, the selfie segmentation mask is not applied, and I don't know why.
[background video](https://noldo.fr/dev/asiaf/2022_06_09_10_40_10.mp4)

When throttled_input_video is active in the RecolorCalculator, it works as expected. (I blurred my face for privacy reasons and only took a screenshot.)
image

When the RecolorCalculator is disabled and the MaskOverlayCalculator is enabled, I get a black screen.
image
And I can see such logs in the console:

....
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 33 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 31 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 36 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(16156): Created output texture: 36 width: 720 height: 720
....

Those logs do not show up when I use the Recolor Calculator.

The expected result when using the MaskOverlayCalculator would be (mockup edited in Gimp):

image

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Jun 9, 2022
@sureshdagooglecom sureshdagooglecom added the stat:awaiting response Waiting for user response label Jun 10, 2022
@UnlimitedX
Author

@sureshdagooglecom
image
Do you need more details?

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Jun 10, 2022
@sureshdagooglecom sureshdagooglecom added the stat:awaiting googler Waiting for Google Engineer's Response label Jun 13, 2022
@UnlimitedX
Author

Hi @jiuqiant , I'm looking forward to hearing from you :-)

@sureshdagooglecom

Hi @UnlimitedX ,

  1. Have you validated that the mask produced by SelfieSegmentationGpu is usable with the RecolorCalculator, so that the only problem is MaskOverlayCalculator?
  2. Please ensure that the RGBA channel produced by SelfieSegmentationGpu matches the channel expected by MaskOverlayCalculator. I see that the default for MaskOverlayCalculator is the RED channel.
  3. Please try a constant mask and two different input video streams to gain insight into the non-black output from MaskOverlayCalculator.
  4. Please try passing the video stream through a final GL calculator to gain more insight into the output of MaskOverlayCalculator.

@sureshdagooglecom sureshdagooglecom added stat:awaiting response Waiting for user response and removed stat:awaiting googler Waiting for Google Engineer's Response labels Jun 17, 2022
@google-ml-butler

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

@UnlimitedX
Author

UnlimitedX commented Jun 24, 2022

Have you validated that the mask produced by SelfieSegmentationGpu is usable with the RecolorCalculator, so that the only problem is MaskOverlayCalculator?

Yes it works well.

Please ensure that the RGBA channel produced by SelfieSegmentationGpu matches the channel expected by MaskOverlayCalculator. I see that the default for MaskOverlayCalculator is the RED channel.
Please try a constant mask and two different input video streams to gain insight into the non-black output from MaskOverlayCalculator.
Please try passing the video stream through a final GL calculator to gain more insight into the output of MaskOverlayCalculator.

Ok thanks, I'm currently doing some tests and I'll keep you updated about the progress.

@google-ml-butler google-ml-butler bot removed stat:awaiting response Waiting for user response stale labels Jun 24, 2022
@UnlimitedX
Author

UnlimitedX commented Jun 24, 2022

Hi @sureshdagooglecom
I can confirm the SelfieSegmentationGpu Mask is in the RED channel.

image

Testing with same inputs

I've tested the MaskOverlayCalculator with the same two inputs for testing purposes, which is the camera input:

node {
  calculator: "MaskOverlayCalculator"
  input_stream: "VIDEO:0:scaled_input_video"
  input_stream: "VIDEO:1:scaled_input_video"
  input_stream: "MASK:segmentation_mask"
  output_stream: "OUTPUT:output_video"
}

I don't get a black screen and the camera is displayed, so the output video is non-black in this case.

Testing back with background video

When reverting back to my background video, the black screen returns.

node {
  calculator: "MaskOverlayCalculator"
  input_stream: "VIDEO:0:background_video"   <=========
  input_stream: "VIDEO:1:scaled_input_video"
  input_stream: "MASK:segmentation_mask"
  output_stream: "OUTPUT:output_video"
}

Testing with background video + constant mask

I've also tried using the constant mask and the black screen remains.

node {
  calculator: "MaskOverlayCalculator"
  input_stream: "VIDEO:0:background_video"
  input_stream: "VIDEO:1:scaled_input_video"
  input_stream: "CONST_MASK:segmentation_mask_float"  <=========
  output_stream: "OUTPUT:output_video"
}

And the constant is injected like this:
image

Testing with RecolorCalculator and the background video

Since the RecolorCalculator is working well with the camera input (scaled_input_video) and the segmentation mask, I've tried to replace the camera input by the background_video.
It does not work: the background video is displayed as is, but it is not segmented using the mask.

Video format problem? (RGBA, encoding, or something else?)

I've tried multiple background videos and the problem remains the same.
So I tried adding a SetAlphaCalculator on the background video, in case the alpha channel was not set at all before injecting the video into the MaskOverlayCalculator, but with no effect.
The black screen is still there, and I wonder whether the video is being sent to MediaPipe in the right encoding.
Could you suggest the right GL calculator to gain more insight into the output of MaskOverlayCalculator?
Any suggestions?

Thanks!

@UnlimitedX
Author

Hi @jiuqiant @sureshdagooglecom , any news?
I'm looking forward to hearing from you :-)

@sureshdagooglecom sureshdagooglecom added the stat:awaiting googler Waiting for Google Engineer's Response label Jul 13, 2022
@mcclanahoochie

Hi,

I don't see anything obviously wrong here. It seems the combination of camera and video streams is causing an issue somewhere (but either by itself is OK).

A few other things I can think to check are:

  • both the camera and video packets sent into the graph have the SAME timestamp, and are both valid each frame
  • add some logging inside the mask overlay calculator to make sure Process() is getting called, all packets are present, and it finishes properly; example: LOG(INFO) << "my log";
  • try simplifying the graph and temporarily removing the flow limiter calculators
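For the first point, timestamps could be compared on the app side with a minimal plain-Java sketch (a hypothetical helper, not part of MediaPipe; in the app you would call the record methods from the two frame listeners with textureFrame.getTimestamp() before adding each packet to the graph):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper (not a MediaPipe class): tracks the latest timestamp
// seen on each input stream so mismatches between the camera and background
// feeds can be detected and logged.
final class TimestampAlignmentChecker {
  private final AtomicLong cameraTs = new AtomicLong(Long.MIN_VALUE);
  private final AtomicLong backgroundTs = new AtomicLong(Long.MIN_VALUE);

  void recordCamera(long timestampUs) { cameraTs.set(timestampUs); }

  void recordBackground(long timestampUs) { backgroundTs.set(timestampUs); }

  // True once both streams have produced a frame and the latest timestamps
  // match; with the default input stream handler, a calculator with several
  // input streams only fires when all of them have a packet at the same
  // timestamp.
  boolean aligned() {
    long camera = cameraTs.get();
    long background = backgroundTs.get();
    return camera != Long.MIN_VALUE && camera == background;
  }
}
```

If aligned() never becomes true while the graph runs, the two feeds never share a timestamp, which could explain why MaskOverlayCalculator sees an empty mask or video packet in Process().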

@UnlimitedX
Author

UnlimitedX commented Jul 18, 2022

Hi @mcclanahoochie,
Thanks for your reply.

I've tried simplifying the graph by removing the flow limiter calculators but the app crashes after a few seconds.
Here is the log:

D/MediaPlayerNative( 7245): Message: MEDIA_SET_VIDEO_SIZE(5), ext1=1920, ext2=1080
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
D/MediaPlayerNative( 7245): Message: MEDIA_INFO(200), ext1=MEDIA_INFO_RENDERING_START(3), ext2=0
W/MediaPlayerNative( 7245): info/warning (3, 0)
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
D/ExternalTextureConv( 7245): Created output texture: 5 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 6 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 7 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 8 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 9 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 10 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 11 width: 720 height: 720
D/ExternalTextureConv( 7245): Created output texture: 12 width: 1280 height: 720
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
I/native  ( 7245): I20220718 10:38:10.372817  7589 graph.cc:474] Start running the graph, waiting for inputs.
I/native  ( 7245): I20220718 10:38:10.377311  7589 gl_context_egl.cc:163] Successfully initialized EGL. Major : 1 Minor: 4
I/native  ( 7245): I20220718 10:38:10.384033  7652 gl_context.cc:331] GL version: 3.2 (OpenGL ES 3.2 v1.r18p0-01rel0.1ba20d94cbd7b05d6630eff84a60e5f7)
D/ExternalTextureConv( 7245): Created output texture: 14 width: 720 height: 720
D/ExternalTextureConv( 7245): Created output texture: 13 width: 1280 height: 720
I/native  ( 7245): I20220718 10:38:10.397828  7645 resource_util_android.cc:89] Successfully loaded: selfie_segmentation.tflite
D/ExternalTextureConv( 7245): Created output texture: 16 width: 720 height: 720
I/native  ( 7245): I20220718 10:38:10.427770  7588 jni_util.cc:41] GetEnv: not attached
I/tflite  ( 7245): Initialized TensorFlow Lite runtime.
I/tflite  ( 7245): Created TensorFlow Lite delegate for GPU.
I/tflite  ( 7245): Replacing 246 node(s) with delegate (TfLiteGpuDelegate) node, yielding 1 partitions.
D/ExternalTextureConv( 7245): Created output texture: 18 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 19 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 20 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 21 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 22 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 23 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 24 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 25 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 26 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 27 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 28 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 29 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 30 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 31 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 32 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 33 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 34 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 35 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 36 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 37 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 38 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 39 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 40 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 41 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 42 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 43 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 44 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 45 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 46 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 47 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 48 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 49 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 50 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 51 width: 1280 height: 720
V/PlayerBase( 7245): baseRelease() piid=13103 state=0
D/ExternalTextureConv( 7245): Created output texture: 52 width: 1280 height: 720
W/System  ( 7245): A resource failed to call release.
D/ExternalTextureConv( 7245): Created output texture: 53 width: 1280 height: 720
W/System  ( 7245): A resource failed to call release.
D/ExternalTextureConv( 7245): Created output texture: 54 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 55 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 56 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 57 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 58 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 59 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 60 width: 1280 height: 720
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
D/ExternalTextureConv( 7245): Created output texture: 61 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 62 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 63 width: 720 height: 720
D/ExternalTextureConv( 7245): Created output texture: 64 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 65 width: 720 height: 720
D/ExternalTextureConv( 7245): Created output texture: 69 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 14 width: 720 height: 720
D/ExternalTextureConv( 7245): Created output texture: 63 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 72 width: 720 height: 720
D/ExternalTextureConv( 7245): Created output texture: 76 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 79 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 84 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 5 width: 1280 height: 720
D/ExternalTextureConv( 7245): Created output texture: 85 width: 1280 height: 720
4
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
D/MediaPlayerNative( 7245): Message: Unknown MediaEventType(6), ext1=0, ext2=0x0
6
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
D/MediaPlayerNative( 7245): Message: Unknown MediaEventType(6), ext1=0, ext2=0x0
3
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
W/native  ( 7245): W20220718 10:38:16.180394  7588 calculator_graph.cc:1184] Resolved a deadlock by increasing max_queue_size of input stream: segmentation_mask to: 101. Consider increasing max_queue_size for better performance.
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
W/native  ( 7245): W20220718 10:38:18.321895  7588 calculator_graph.cc:1184] Resolved a deadlock by increasing max_queue_size of input stream: scaled_input_video to: 151. Consider increasing max_queue_size for better performance.
2
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
D/MediaPlayerNative( 7245): Message: Unknown MediaEventType(6), ext1=0, ext2=0x0
3
D/MediaPlayerNative( 7245): [notify] : [1170] callback app listenerNotNull=1, send=1
Lost connection to device.
Exited (sigterm)

I've also added some logs following your advice.
image

And it seems to be stuck before the "process 2" log.
image

So that means mask_packet.IsEmpty() is true (I added a test directly in the if condition later to make sure it stops here).
With or without the flow limiter, the logs are almost the same; only the frequency differs.

image

mask_packet should not be empty since it is working well with the RecolorCalculator (I tried again to make sure it's still working).

image

But from what I understand, video 1 should at least be displayed:
image

Nevertheless, I still have a black background.

- both the camera and video packets sent into the graph have the SAME timestamp, and are both valid each frame

Could you please guide me through this process? Since the camera feed is sent directly into the processor and the background feed is sent through consumable packets, I don't know how to check the timestamps at the same place for both. Should I do it in Process() of the MaskOverlayCalculator?

How the background feed is sent:
image

The camera feed is set in the way given in the examples.

Note:
The SetupBackgroundVideo function is called at the very end of the initialization process.
image

Looking forward to hearing from you!

@UnlimitedX
Author

Hi @mcclanahoochie ,

Have you had a moment to check my previous post?

Thank you very much.

@mcclanahoochie

I don't think having two flow limiter calculators is necessary. Try using a packet cloner for background video instead:

input_stream: "input_video"

input_stream: "background_video"

output_stream: "output_video"

node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}

 node {
   calculator: "PacketClonerCalculator"
   input_stream: "TICK:throttled_input_video"
   input_stream: "background_video"
   output_stream: "throttled_background_video"
  # # Try using this setting also:
  # input_stream_handler {
  #   input_stream_handler: "ImmediateInputStreamHandler"
  # }
 }

node {
 calculator: "ImageTransformationCalculator"
 input_stream: "IMAGE_GPU:throttled_background_video"
 output_stream: "IMAGE_GPU:scaled_background_video"
 node_options: {
    [type.googleapis.com/mediapipe.ImageTransformationCalculatorOptions] {
      output_width: 1280
      output_height: 720
    }
  }
}

node {
  calculator: "SelfieSegmentationGpu"
  input_stream: "IMAGE:throttled_input_video"
  output_stream: "SEGMENTATION_MASK:segmentation_mask"
}

node {
  calculator: "MaskOverlayCalculator"
  input_stream: "VIDEO:0:scaled_background_video"
  input_stream: "VIDEO:1:throttled_input_video"
  input_stream: "MASK:segmentation_mask"
  output_stream: "OUTPUT:output_video"
}

The packet cloner will output a packet with every TICK signal.
You may need the ImmediateInputStreamHandler option (try with and without it).
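For intuition, the cloner's behavior can be sketched in plain Java (an illustration with made-up names, not MediaPipe code): the calculator holds the most recent packet from the cloned stream and re-emits a copy of it at the timestamp of every tick packet.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only (not MediaPipe code): mimics what PacketClonerCalculator
// does with a TICK stream. Strings stand in for packets.
final class PacketClonerSketch {
  private String latestBackground;                  // last background frame seen
  private final List<String> output = new ArrayList<>();

  void onBackground(String frame) { latestBackground = frame; }

  // Each camera (tick) frame triggers a clone of the latest background frame
  // at the tick's timestamp; ticks arriving before any background frame
  // produce no output.
  void onTick(long timestampUs) {
    if (latestBackground != null) {
      output.add(latestBackground + "@" + timestampUs);
    }
  }

  List<String> outputs() { return output; }
}
```

So once the first background frame arrives, every throttled camera frame yields a matching background packet at the same timestamp, which is what a multi-input calculator like MaskOverlayCalculator needs in order to fire.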

@UnlimitedX
Author

Hi @mcclanahoochie
Thank you for your reply.

I tried and got this error :
image

I realized I do not have the latest version of MediaPipe, and the TICK tag was added recently.
So I tried the following configuration, based on the packet_cloner_calculator.cc file in my Docker image:
image

With the ImmediateInputStreamHandler, it crashed after 5 seconds of black screen; without it, the black screen remained but there was no crash.

Would you advise updating MediaPipe to its latest version?
If so, do you have any guide on doing that seamlessly? (I am on Docker.)

Thanks!

@UnlimitedX
Author

UnlimitedX commented Jan 23, 2023

Hi @mcclanahoochie ,
I hope you are doing well.

I've installed the latest version of MediaPipe today and compiled the graph with the exact same code you provided.
We still have a black screen, with the following logs:

D/ExternalTextureConv(29428): Created output texture: 38 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 38 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 38 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/MediaPlayerNative(29428): [notify] : [1170] callback app listenerNotNull=1, send=1
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/MediaPlayerNative(29428): Message: Unknown MediaEventType(6), ext1=0, ext2=0x0
2
D/MediaPlayerNative(29428): [notify] : [1170] callback app listenerNotNull=1, send=1
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/MediaPlayerNative(29428): [notify] : [1170] callback app listenerNotNull=1, send=1
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 37 width: 720 height: 720
D/ExternalTextureConv(29428): Created output texture: 40 width: 720 height: 720

Do you have any suggestions?
The build file is as follows:
image

@UnlimitedX
Author

It's working with the ImmediateInputStreamHandler!

@UnlimitedX
Author

UnlimitedX commented Feb 10, 2023

Hi @mcclanahoochie ,
I'm reopening this one because I get an error from time to time when starting the graph.
The screen remains black and I can hear the background video; then it crashes a few seconds later. In this case, the camera stream doesn't appear.

F/libc    ( 5451): Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 5972 (Thread-10), pid 5451 (m.karaocine.app)
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Build fingerprint: 'HONOR/JSN-L21/HWJSN-H:10/HONORJSN-L21/10.0.0.224C432:user/release-keys'
Revision: '0'
ABI: 'arm64'
Timestamp: 2023-02-10 09:39:02+0100
pid: 5451, tid: 5972, name: Thread-10  >>> com.karaocine.app <<<
uid: 10316
signal 6 (SIGABRT), code -1 (SI_QUEUE), fault addr --------
    x0  0000000000000000  x1  0000000000001754  x2  0000000000000006  x3  0000007b2dd053a0
    x4  473a3a6570697061  x5  473a3a6570697061  x6  473a3a6570697061  x7  7265666675427570
    x8  00000000000000f0  x9  8a097744807d1e91  x10 0000000000000001  x11 0000000000000000
    x12 fffffff0fffffbdf  x13 6e61206465766965  x14 0000000000000010  x15 0000007c6219939a
    x16 0000007c6219e908  x17 0000007c6217e700  x18 0000007b2ccda000  x19 000000000000154b
    x20 0000000000001754  x21 00000000ffffffff  x22 0000007b46e3af28  x23 0000007b46e3af70
    x24 0000007b2dd06020  x25 0000007b46e247b0  x26 0000007b46c14d14  x27 0000000000000000
    x28 0000007b46c15603  x29 0000007b2dd05440
    sp  0000007b2dd05380  lr  0000007c62133580  pc  0000007c621335ac
backtrace:
      #00 pc 00000000000705ac  /apex/com.android.runtime/lib64/bionic/libc.so (abort+160) (BuildId: 8ee0932e058bcf6b353f0a24aaef0e4e)
      #01 pc 000000000059066c  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (google::logging_fail()+4)
      #02 pc 000000000058fe18  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (google::LogMessage::SendToLog()+580)
      #03 pc 000000000058ffdc  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (google::LogMessage::Flush()+184)
      #04 pc 0000000000590e64  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (google::LogMessageFatal::~LogMessageFatal()+12)
      #05 pc 00000000000bf390  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GpuBuffer const& mediapipe::Packet::Get<mediapipe::GpuBuffer>() const+168)
      #06 pc 00000000000d74d8  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (_ZNSt6__ndk110__function6__funcIZN9mediapipe21MaskOverlayCalculator7ProcessEPNS2_17CalculatorContextEE3$_0NS_9allocatorIS6_EEFN4absl12lts_202206236StatusEvEEclEv+260)
      #07 pc 00000000004c8dd4  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (_ZNSt6__ndk110__function6__funcIZN9mediapipe9GlContext3RunENS_8functionIFN4absl12lts_202206236StatusEvEEEiNS2_9TimestampEE3$_8NS_9allocatorISB_EES8_EclEv+24)
      #08 pc 00000000004c530c  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GlContext::DedicatedThread::Run(std::__ndk1::function<absl::lts_20220623::Status ()>)+60)
      #09 pc 00000000004c5b28  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GlContext::Run(std::__ndk1::function<absl::lts_20220623::Status ()>, int, mediapipe::Timestamp)+284)
      #10 pc 0000000000481eb4  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GlCalculatorHelper::RunInGlContext(std::__ndk1::function<absl::lts_20220623::Status ()>, mediapipe::CalculatorContext*)+96)
      #11 pc 0000000000481f84  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GlCalculatorHelper::RunInGlContext(std::__ndk1::function<absl::lts_20220623::Status ()>)+84)
      #12 pc 00000000000d6dbc  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::MaskOverlayCalculator::Process(mediapipe::CalculatorContext*)+60)
      #13 pc 00000000004b7f28  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::CalculatorNode::ProcessNode(mediapipe::CalculatorContext*)+432)
      #14 pc 00000000004a8484  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::internal::SchedulerQueue::RunCalculatorNode(mediapipe::CalculatorNode*, mediapipe::CalculatorContext*)+480)
      #15 pc 00000000004a8038  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::internal::SchedulerQueue::RunNextTask()+112)
      #16 pc 00000000004c8f4c  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (_ZNSt6__ndk110__function6__funcIZN9mediapipe9GlContext17RunWithoutWaitingENS_8functionIFvvEEEE3$_9NS_9allocatorIS7_EES5_EclEv+12)
      #17 pc 00000000004c5260  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GlContext::DedicatedThread::ThreadBody()+204)
      #18 pc 00000000004c4edc  /data/app/com.karaocine.app-kI1nWhi6yAx5vA8rCmN11Q==/base.apk (offset 0xf75c000) (mediapipe::GlContext::DedicatedThread::ThreadBody(void*)+4)
      #19 pc 00000000000cf700  /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36) (BuildId: 8ee0932e058bcf6b353f0a24aaef0e4e)
      #20 pc 00000000000720e8  /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64) (BuildId: 8ee0932e058bcf6b353f0a24aaef0e4e)
Lost connection to device.
Exited (sigterm)

I spotted the MaskOverlayCalculator / Packet::Get&lt;mediapipe::GpuBuffer&gt;() error in the backtrace, so I thought one stream was starting too quickly and the other was not arriving fast enough (the background video was streaming from the network).

Here is what I tried, without success:
image

Looking forward to hearing from you!
Thanks :)

@mcclanahoochie

The error hints at a video packet being empty.

Try checking that the VIDEO inputs are not empty, similar to the MASK check here: https://github.com/google/mediapipe/blob/6cdc6443b6a7ed662744e2a2ce2d58d9c83e6d6f/mediapipe/calculators/image/mask_overlay_calculator.cc#L111-L121

Just return an OK status if input1_packet or input0_packet is empty.
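A minimal sketch of that guard, added at the top of `MaskOverlayCalculator::Process()` and modeled on the existing MASK check linked above. This is a non-compiling fragment, not a tested patch; the `"VIDEO"` tag names are taken from the calculator's input declarations and should be verified against your graph config:

```cpp
// Sketch: early-out before any Packet::Get<GpuBuffer>() call, which is
// the fatal CHECK shown in the backtrace when a packet is empty.
absl::Status MaskOverlayCalculator::Process(CalculatorContext* cc) {
  // If either video stream has no packet at this timestamp (e.g. the
  // network-streamed background has not delivered a frame yet), skip
  // this timestamp instead of dereferencing an empty packet.
  if (cc->Inputs().Get("VIDEO", 0).IsEmpty() ||
      cc->Inputs().Get("VIDEO", 1).IsEmpty()) {
    return absl::OkStatus();
  }
  // ... existing blending code runs only when both frames are present ...
}
```

Returning `absl::OkStatus()` here simply drops the output for that timestamp, so downstream nodes see no frame rather than the graph aborting; once both streams are flowing, rendering resumes on its own.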

@kuaashish kuaashish removed the stat:awaiting googler Waiting for Google Engineer's Response label Apr 26, 2023
@kuaashish
Collaborator

Hello @UnlimitedX,
We are upgrading the MediaPipe Legacy Solutions to the new MediaPipe Solutions. However, the libraries, documentation, and source code for all the MediaPipe Legacy Solutions will continue to be available in our GitHub repository and through library distribution services such as Maven and NPM.

You can continue to use those legacy solutions in your applications if you choose. However, we would encourage you to check out the new MediaPipe Solutions, which can help you more easily build and customize ML solutions for your applications. These new solutions will provide a superset of the capabilities available in the legacy solutions. Thank you

@kuaashish kuaashish added the stat:awaiting response Waiting for user response label Apr 26, 2023
@github-actions

github-actions bot commented May 4, 2023

This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.

@github-actions github-actions bot added the stale label May 4, 2023
@github-actions

This issue was closed due to lack of activity after being marked stale for past 7 days.


@kuaashish kuaashish removed stat:awaiting response Waiting for user response stale labels May 11, 2023