This repository has been archived by the owner on Dec 12, 2024. It is now read-only.

Question about static_masks, dynamic_masks #9

Open
sandy-ssdut opened this issue Jun 25, 2023 · 3 comments

Comments

@sandy-ssdut

Thanks for your great work!
I noticed that motion masks need to be provided for custom data.
How can I get the motion masks for a custom video?

@zhengqili
Contributor

zhengqili commented Jun 26, 2023

Hi,

Unfortunately, we don't plan to release that part, since it depends on some internal codebase. But for custom videos that contain common objects such as humans, dogs, or cats, you can use the code from https://github.com/zhengqili/Neural-Scene-Flow-Fields/blob/main/nsff_scripts/run_flows_video.py#L87 to obtain a raw motion mask.
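
For illustration, a minimal, self-contained sketch of how a raw motion mask for one frame could be built from an off-the-shelf instance segmentation model. This is not the linked script's exact approach; it assumes torchvision's pretrained Mask R-CNN (COCO labels) as a stand-in, and simply unions the instance masks of a few commonly moving classes.

```python
# Illustrative only: union the instance masks of a few commonly moving COCO
# classes (person=1, cat=17, dog=18) predicted by torchvision's Mask R-CNN.
import numpy as np
import torch
import torchvision
from PIL import Image

MOVING_CLASS_IDS = {1, 17, 18}  # person, cat, dog (COCO category ids)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def raw_motion_mask(image_path, score_thresh=0.5, mask_thresh=0.5):
    img = torchvision.transforms.functional.to_tensor(
        Image.open(image_path).convert("RGB"))           # CxHxW, values in [0, 1]
    with torch.no_grad():
        pred = model([img])[0]                            # boxes, labels, scores, masks
    h, w = img.shape[1:]
    mask = np.zeros((h, w), dtype=np.uint8)
    for label, score, m in zip(pred["labels"], pred["scores"], pred["masks"]):
        if score >= score_thresh and int(label) in MOVING_CLASS_IDS:
            mask |= (m[0].numpy() >= mask_thresh).astype(np.uint8)
    return mask * 255                                     # 255 = potentially moving
```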

You can also implement our motion segmentation from the description in our paper, which should not be too hard, and I am happy to answer any questions you have.

To get the static and dynamic masks, apply morphological erosion and dilation (with radius 5) to the raw motion masks, so that the moving regions in the dynamic masks are smaller, and the moving regions in the static masks are larger, than the true moving regions in the original videos (similar to the idea in https://github.com/erikalu/omnimatte).
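
As a concrete illustration, here is a minimal sketch of that erosion/dilation step with OpenCV, assuming the raw motion mask is a uint8 image where 255 marks moving pixels. Whether the saved static mask stores the dilated moving regions or their complement depends on the repo's convention, so compare against the demo masks shipped with the code.

```python
import cv2

def split_motion_mask(raw_motion_mask, radius=5):
    """raw_motion_mask: uint8 image, 255 = moving, 0 = static (assumed convention)."""
    # Disk-like structuring element of the given radius.
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * radius + 1, 2 * radius + 1))
    # Dynamic mask: eroded moving regions, smaller than the true moving regions.
    dynamic_mask = cv2.erode(raw_motion_mask, kernel)
    # Static mask's moving regions: dilated, larger than the true moving regions.
    static_mask_moving = cv2.dilate(raw_motion_mask, kernel)
    return dynamic_mask, static_mask_moving
```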

@sandy-ssdut
Author

Thanks for your reply!

I will try it.

@yavon818

> the moving regions in the dynamic masks are smaller, and the moving regions in the static masks are larger, than the true moving regions

I wonder if I should use the method from Omnimatte to get the dynamic masks. Also, did you extract the masks for the demo videos using the Omnimatte algorithm?
