Question about static_masks, dynamic_masks #9
Thanks for your great work!
I found that for custom data, motion masks must be provided. How can I get the motion masks for a custom video?

Hi, unfortunately we don't plan to release that part, since it depends on some internal codebase. But for custom videos containing common objects such as humans, dogs, or cats, you can use the code from https://github.com/zhengqili/Neural-Scene-Flow-Fields/blob/main/nsff_scripts/run_flows_video.py#L87 to obtain a raw motion mask. You could also implement our motion segmentation from the description in our paper, which should not be too hard, and I am happy to answer any questions you have. To get the static and dynamic masks, apply morphological erosion and dilation (with radius 5) to the raw motion masks, so that the moving regions in the dynamic masks are smaller, and the moving regions in the static masks are larger, than the true moving regions in the original videos (similar to the idea in https://github.com/erikalu/omnimatte).

Thanks for your reply! I will try it.

I wonder if I should use the method given by Omnimatte to get the dynamic mask. And have you extracted the masks of the demo videos using the Omnimatte algorithm?
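The erosion/dilation step described above could be sketched roughly as follows. This is only a sketch, not the authors' code: it assumes a binary raw motion mask as input, uses `scipy.ndimage` with a disk-shaped structuring element of radius 5, and the function name `make_masks` is made up for illustration.

```python
import numpy as np
from scipy import ndimage


def make_masks(raw_motion_mask, radius=5):
    """Derive dynamic/static masks from a binary raw motion mask.

    dynamic_mask: moving regions shrunk by erosion, so they lie strictly
    inside the true moving regions.
    static_mask: the complement of the dilated moving regions, so pixels
    marked static never touch true motion.
    """
    # Disk-shaped structuring element of the given radius.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx ** 2 + yy ** 2) <= radius ** 2

    motion = raw_motion_mask.astype(bool)
    dynamic_mask = ndimage.binary_erosion(motion, structure=disk)
    static_mask = ~ndimage.binary_dilation(motion, structure=disk)
    return dynamic_mask, static_mask
```

Note that with this construction a pixel near a motion boundary can fall in neither mask; that uncertain band between the eroded and dilated regions is exactly what the conservative erosion/dilation is meant to produce.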