Running YouTube-8M model_inference with local videos takes a long time to render the labeled video and my computer crashes #204
Besides, if model_inference can't run, I'd like to run the starter code's inference.py, so output.tfrecord's format should be the same as the test data of the YT8M dataset.
Please give model_inference more time to finish the video rendering. The warning messages are ok in this case because video rendering is expected to be very slow; it's not a real deadlock issue. Also, may I know what you mean by crashing your computer? For your second question, if you want to use the output tfrecord generated by the feature extraction pipeline as the input of inference.py, a converter script is needed. It needs to convert the audio and rgb features from floatlist to bytelist and also rewrite the feature_list keys by applying the following mappings:
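The floatlist-to-bytelist conversion mentioned above is a uniform quantization step. As a hedged sketch (the exact key mappings were omitted above, and this function name is an assumption; the [-2, 2] range matches the quantize range confirmed later in this thread):

```python
import numpy as np

def quantize(features, min_val=-2.0, max_val=2.0):
    """Map float features in [min_val, max_val] to uint8 bytes.

    Hypothetical sketch of the floatlist -> bytelist conversion, not the
    actual converter script; values outside the range are clipped.
    """
    clipped = np.clip(np.asarray(features, dtype=np.float32), min_val, max_val)
    scaled = (clipped - min_val) * 255.0 / (max_val - min_val)
    return np.round(scaled).astype(np.uint8)

# Endpoints map to 0 and 255; the midpoint rounds to 128.
print(quantize([-2.0, 0.0, 2.0]))  # -> [  0 128 255]
```

The resulting uint8 array can then be serialized as the bytes of a BytesList feature in the output SequenceExample.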
My computer froze; I couldn't even move my mouse cursor. But maybe it's just a GUI problem, I'll run it again later. Thanks a lot!
In theory, MediaPipe should pick an appropriate number of threads to use depending on the number of available processors. However, if you prefer to limit the CPU resources used by MediaPipe, you can add
@jiuqiant I re-ran the code on another computer and it seems to work. But in the last step, it ran for a while and then got a Killed message. The annotated video was created but I can't open it, and its header appears to be corrupted, as shown in the link below. Is there any suggestion? Here's the annotated video's info: And process log:
Thanks for reporting this issue. After investigation, I realized that there is a bug in the code when the duration of the video is longer than the number of feature points in /tmp/mediapipe/features.pb. In your particular case, I believe your video is longer than 120 seconds, and the feature extractor pipeline is only asked to extract features for the first 120 seconds of the video. That triggers the bug. Please make the following change in opencv_video_decoder_calculator.cc, then rebuild and rerun the model_inference binary. The rebuild should be very fast because of the bazel cache you already have.
We will include this fix in the next release if that works for you. Thanks!
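To illustrate the failure mode described above (the names and numbers here are a hypothetical sketch, not MediaPipe's actual code): the renderer looks up one feature vector per frame timestamp, so a video longer than the extracted range indexes past the end of the feature list unless the index is clamped.

```python
def label_index_for_frame(frame_ts_sec, num_features, features_per_sec=1.0):
    """Return which feature vector should annotate the frame at frame_ts_sec.

    Hypothetical sketch: without the min() clamp, a 150 s video with only
    120 s of extracted features would ask for index 150 and fail.
    """
    idx = int(frame_ts_sec * features_per_sec)
    return min(idx, num_features - 1)  # clamp to the last available feature

print(label_index_for_frame(150.0, 120))  # -> 119
```

Frames past the extracted range simply reuse the last available feature vector, which is why the rendering can finish instead of reading out of bounds.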
Thanks for helping! The annotated video rendered successfully!
Yes, the [-2, 2] quantize range is ok.
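For reference, recovering approximate float features from bytes quantized over [-2, 2] is just the inverse linear mapping. A minimal sketch (the function name and signature are assumptions, not the exact starter-code API):

```python
import numpy as np

def dequantize(byte_features, min_val=-2.0, max_val=2.0):
    # Invert uint8 quantization over [min_val, max_val]: byte b maps back
    # to b * step + min_val, so round-trip error is at most half a step.
    step = (max_val - min_val) / 255.0
    return np.asarray(byte_features, dtype=np.float32) * step + min_val
```

With a [-2, 2] range the step is 4/255 ≈ 0.0157, so at most ~0.008 of precision is lost per value in the round trip.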
@sedez, joeyhng is the author of the old yt8m feature extractor. His answer can be trusted. |
I'm trying to run
https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/youtube8m
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/youtube8m/model_inference --calculator_graph_config_file=mediapipe/graphs/youtube8m/local_video_model_inference.pbtxt --input_side_packets=input_sequence_example_path=/tmp/mediapipe/output.tfrecord,input_video_path=/home/tl32rodan/Downloads/v1.mp4,output_video_path=/tmp/mediapipe/annotated_video.mp4,segment_size=5,overlap=4 &> model_inference_error_log
After a while, my computer crashes and /tmp/mediapipe is automatically deleted.
I tried one more time and recorded the log below before it crashed.
Any help would be appreciated.