Video lagging while running #82
If you have a usable threaded queue, I would be interested. I avoided it for the moment, because it is probably better to drop frames than to use queues, which may increase latency. But when the bottleneck is running the filters, it may be useful.
I tried setting
Kind of difficult since I'm on OSX using akvirtualcamera.
Here you go.
I didn't have the chance to test on macOS at all, so I am happy it runs at all. Other optimizations you can try:
Do you have the threaded queue you used with the program? I wonder where it would require locks for dropping frames that are too old. For example, one would probably not want to drop frames after segmentation and before the filters, as the segmentation step is already costly. On the other hand, a frame may be late because of some of the more complex effects, which may be hard to predict. And when one can predict it, there should probably be a warning "Your current filters slow down the cam" that lets the user adjust which filters they want to use.
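Not code from the project, but a minimal sketch of what such a warning could look like, assuming the effects are available as a list of named callables (the `filters` structure and `frame_interval` budget here are hypothetical):

```python
import time

def apply_filters_with_warning(frame, filters, frame_interval=1 / 30):
    """Apply each filter and warn when the whole chain exceeds the frame budget.

    `filters` is a hypothetical list of (name, callable) pairs; the real
    project may structure its effects differently.
    """
    timings = []
    start = time.perf_counter()
    for name, apply_filter in filters:
        t0 = time.perf_counter()
        frame = apply_filter(frame)
        timings.append((name, time.perf_counter() - t0))
    total = time.perf_counter() - start
    if total > frame_interval:
        slowest_name, slowest_time = max(timings, key=lambda t: t[1])
        print(f"Your current filters slow down the cam: "
              f"{total * 1000:.1f} ms/frame, slowest is '{slowest_name}' "
              f"({slowest_time * 1000:.1f} ms)")
    return frame
```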
Mobilenet
That's an iris video device on a MacBook Pro. I don't think it will do anything, but can you point to an example if that's the case?
The way that I used the threaded queue is to acquire frames in a separate thread and enqueue them for later processing, in hopes of reducing latency.
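For reference, a minimal sketch of that pattern, assuming OpenCV's `cv2.VideoCapture` as the frame source. A single-slot `queue.Queue` is already thread-safe, so the only "locking" needed to drop a stale frame is discarding the old item before putting the new one:

```python
import queue
import threading

import cv2

class ThreadedGrabber:
    """Grab frames in a background thread and keep only the newest one,
    so the processing loop never works on a stale frame."""

    def __init__(self, device=0):
        self.cap = cv2.VideoCapture(device)
        self.frames = queue.Queue(maxsize=1)
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.running:
            ok, frame = self.cap.read()
            if not ok:
                continue
            if self.frames.full():
                try:
                    self.frames.get_nowait()  # drop the stale frame
                except queue.Empty:
                    pass  # consumer took it in the meantime
            self.frames.put(frame)  # never blocks: this is the only producer

    def read(self):
        return self.frames.get()  # blocks until a fresh frame is available
```

Usage would simply be `grabber = ThreadedGrabber()` followed by `frame = grabber.read()` in the processing loop.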
I have no idea and would need to search if it is possible myself.
I don't think grabbing frames is expensive. The neural net is expensive, and the operations on the numpy arrays seem to be more expensive than one would expect as well (and can possibly be optimized further). So one would probably rather want to process multiple frames at the same time to better utilize the CPU. And that is where it gets tricky: you cannot reduce the latency of a single frame passing through the pipeline, so it is not clear how many frames should be buffered without introducing even more latency because of overly large buffers.
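As an illustration of that trade-off (not code from the project): a small pool that keeps a fixed number of frames in flight. `grab_frame`, `segment` and `emit` are hypothetical stand-ins, and threads only help here if the heavy work releases the GIL, as OpenCV and most neural-net backends do in their native code.

```python
from concurrent.futures import ThreadPoolExecutor

IN_FLIGHT = 2  # more in-flight frames = better CPU use, but higher latency

def run_pipeline(grab_frame, segment, emit):
    """Process up to IN_FLIGHT frames concurrently while emitting results in order."""
    with ThreadPoolExecutor(max_workers=IN_FLIGHT) as pool:
        pending = []
        while True:
            frame = grab_frame()
            if frame is None:  # hypothetical end-of-stream signal
                break
            pending.append(pool.submit(segment, frame))
            if len(pending) >= IN_FLIGHT:
                emit(pending.pop(0).result())  # oldest result first, keeps order
        for fut in pending:  # drain what is left
            emit(fut.result())
```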
Got it, maybe some of the heavy processing code could be written in Cython? Finally, just to reiterate the following error
No, it is because OpenCV was not able to lower the buffer size of the webcam.
Here we try to reduce the buffer that OpenCV (or maybe your webcam driver, controlled by OpenCV) uses. The function returns True when the property was set successfully, and reducing the buffer lowers the latency that occurs before the program even starts processing the frames. I am not sure how to debug it, or whether you can change anything when OpenCV cannot change that property.
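For anyone debugging this, the check boils down to something like the following sketch (the device index and message text are illustrative):

```python
import cv2

cap = cv2.VideoCapture(0)

# Ask OpenCV (or the underlying driver) to keep at most one frame buffered.
# set() returns True only if the backend accepted the property; when it
# returns False, you end up processing frames that are already a few frames old.
if not cap.set(cv2.CAP_PROP_BUFFERSIZE, 1):
    print("Failed to reduce capture buffer size. Latency will be higher!")
```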
Thanks for the clarification! I thought that might be one of the reasons for the after-image effect, but now I see it's not related to that. Cheers!
I've noticed that the video lags quite a bit in real time, introducing noticeable latency and making the overall experience quite unpleasant for the other users watching your video.
Even though I tried to create a threaded queue to store frames for processing, I still experience the same lag.
If I store more than 2 frames in the queue I get a lot of latency in the video. The reason for creating the threaded queue in the first place was that I was getting the following error:
'Failed to reduce capture buffer size. Latency will be higher!'
I thought that with the queue I would get lower latency, but the overall experience is still quite bad compared to Snap Camera.
Any suggestions?