I ran test_flow.py in an effort to reproduce the results shown in Figure 4 of the paper. However, my qualitative results differ quite a bit from those reported there.
Comparing Figure 4 from the paper with my results, one immediately sees that my soft consensus mask has the opposite contrast to the one shown in the paper. (The paper states that high values of m indicate static scene pixels.) A merely flipped contrast would not be a problem in itself, but even after accounting for the flip, the comparison still puzzles me.
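For reference, this is what I mean by a merely flipped contrast. A minimal NumPy sketch, assuming the soft consensus mask is a float array in [0, 1] (toy values, not taken from the repo):

```python
import numpy as np

# Toy soft consensus mask: high values = static pixels (per the paper).
m = np.array([[0.95, 0.10],
              [0.80, 0.30]])

# If only the display contrast were inverted, the two would be related
# by a simple complement: high values would then indicate motion.
m_flipped = 1.0 - m
```

If the discrepancy were only this complement, the masks would otherwise match pixel for pixel, which is not what I observe.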
In the left-most of my examples, there seems to be some kind of saturation (ceiling/floor) effect that produces a white rim around the image, especially at the bottom and along the sides. I presume this falsely marks those peripheral pixels as nonrigid. The same effect is visible to some extent in the original figure, but it is not as strong. Consequently, the model predicts large patches of nonrigid motion: the train tracks and trees on the left and the grass on the right. The middle example shows a similar problem: there are quite large white areas where no black appears in the original. This may explain why the model predicts too much nonrigid motion on the right side, where there is just grass in shadow. The fourth example from the left likewise shows too much nonrigid motion on the right side, where there is only a building. Perhaps the motion segmentation does not work properly? Just guessing...
I have added the model predictions for the samples shown in Figure 7 of the paper. The problems are easier to see here. The parked cars are falsely predicted to be moving, and the consensus masks do not look very similar to those shown in the paper, again possibly indicating that something is off in the motion segmentation.
The results look a lot better when test_flow.py computes the mask in the same way as test_mask.py does. I used the default mask threshold of 0.94 to achieve these results.
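To illustrate the change: a minimal sketch of the thresholding step as I understand it from test_mask.py, assuming the soft consensus mask is a float array in [0, 1]. The function name here is my own for illustration, not an identifier from the repo:

```python
import numpy as np

def binarize_consensus_mask(soft_mask, threshold=0.94):
    """Return a boolean mask: True where a pixel counts as static.

    Pixels whose soft consensus value meets or exceeds the threshold
    (default 0.94, the value I used) are treated as rigid/static.
    """
    return soft_mask >= threshold

# Toy example
soft_mask = np.array([[0.99, 0.50],
                      [0.95, 0.10]])
rigid = binarize_consensus_mask(soft_mask)
```

With this hard thresholding in test_flow.py instead of whatever it does by default, my outputs match the paper's figures much more closely.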
I ran the code as follows:
Cheers,
Michael