
Video lagging while running #82

Open
kirk86 opened this issue Dec 6, 2021 · 8 comments
Labels
question Further information is requested

Comments

@kirk86

kirk86 commented Dec 6, 2021

I've noticed that the video lags quite a bit in real time, introducing considerable latency and making the overall experience quite unpleasant for the other users watching your video.

Even though I tried to create a threaded queue to store frames for processing, I still experience the same thing.

If I store more than 2 frames in the queue I get a lot of latency in the video. The reason for creating the threaded queue in the first place was that I was getting the following error:
'Failed to reduce capture buffer size. Latency will be higher!'

I thought that with the queue I would get lower latency, but the overall experience is still quite bad compared to Snap Camera.

Any suggestions?

allo- added the question (Further information is requested) label on Dec 6, 2021
@allo-
Owner

allo- commented Dec 6, 2021

  • Try to reduce the fps
  • Averaging images reduces artifacts and flickering, but creates afterimages (see the sketch after this list)
  • You can try using akvcam instead of v4l2loopback. v4l2loopback freezes when the program isn't providing images fast enough and it is up to the recording program how it handles the drop in fps. akvcam can interpolate (I think by providing the same image over and over again)
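
For the second point, here is a minimal sketch (not the project's actual code; the class name and alpha parameter are made up for illustration) of what temporal averaging looks like and why it trades flicker for afterimages:

```python
import numpy as np

# Minimal sketch of temporal averaging (illustrative only, not the project's code).
# An exponential moving average over frames smooths mask flicker and artifacts,
# but moving objects leave ghost trails ("afterimages") because older frames
# still contribute to the output.
class FrameAverager:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # higher alpha = less smoothing, weaker afterimages
        self.state = None    # running average kept as float32

    def update(self, frame):
        frame = frame.astype(np.float32)
        if self.state is None:
            self.state = frame
        else:
            self.state = self.alpha * frame + (1.0 - self.alpha) * self.state
        return self.state.astype(np.uint8)
```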

If you have a usable threaded queue, I would be interested. I avoided one for the moment, because it is probably better to drop frames than to use queues that may increase latency. But when the bottleneck is running the filters, it may be useful.

@kirk86
Author

kirk86 commented Dec 6, 2021

  • Try to reduce the fps

I tried setting fps = 2 in config.yaml but I'm still getting the afterimages.

  • You can try using akvcam instead of v4l2loopback. v4l2loopback freezes when the program isn't providing images fast enough and it is up to the recording program how it handles the drop in fps. akvcam can interpolate (I think by providing the same image over and over again)

Kind of difficult since I'm on OSX using akvirtualcamera.

If you have a usable threaded queue, I would be interested.

Here you go.
queue.zip

@allo-
Owner

allo- commented Dec 6, 2021

I haven't had the chance to test on macOS at all, so I'm happy when it runs at all.

Other optimizations you can try:

  • Which model are you using? resnet is slow, mobilenet should be usable. Experimental support for mediapipe (Replace the backend with Mediapipe #75) should be the fastest option
  • Maybe you can set up tensorflow to use the GPU

Do you have the threaded queue integrated with the program? I wonder where it would require locks for dropping frames that are too old. For example, one would probably not want to drop frames after segmentation and before the filters, as the segmentation step is already costly. On the other hand, a frame may be late because of some of the more complex effects, which may be hard to predict. And when one can predict it, there should probably be a warning like "Your current filters slow down the cam" that lets the user adjust which filters they want to use.

@kirk86
Author

kirk86 commented Dec 6, 2021

  • Which model are you using?

Mobilenet

  • Maybe you can setup tensorflow to use the GPU

That's an Iris video device on a MacBook Pro; I don't think it will do anything, but can you point to an example if that's the case?

Do you have the threaded queue used with the program?

The way that I used the threaded queue is to acquire frames in a separate thread and enqueue them for later processing, in the hope of reducing latency.
In that case I simply replaced cap with the threaded queue object, if that makes sense. When I do that I get slightly lower latency than without the threaded queue, but I still get the afterimages.
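
Roughly, the wrapper looks like this (a simplified sketch of the approach, not the exact contents of queue.zip; the class and method names are just for illustration):

```python
import threading
import cv2

# Simplified sketch of the approach described above (not the exact code in queue.zip).
# A background thread keeps reading frames and only the newest one is kept,
# so the main loop never works on stale frames.
class ThreadedCapture:
    def __init__(self, device=0):
        self.cap = cv2.VideoCapture(device)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.running:
            ret, frame = self.cap.read()
            if not ret:
                continue
            with self.lock:
                self.frame = frame  # overwrite: older frames are simply dropped

    def read(self):
        # Same signature as cv2.VideoCapture.read(), so it can stand in for cap.
        with self.lock:
            if self.frame is None:
                return False, None
            return True, self.frame.copy()

    def release(self):
        self.running = False
        self.cap.release()
```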

@allo-
Owner

allo- commented Dec 6, 2021

That's an Iris video device on a MacBook Pro; I don't think it will do anything, but can you point to an example if that's the case?

I have no idea and would need to search myself to find out whether it is possible.

The way that I used the threaded queue is to acquire frames in a separate thread and enqueue them for later processing in hopes of reducing latency.

I don't think grabbing frames is expensive. The neural net is expensive, and the operations on the numpy arrays seem to be more expensive than one would expect as well (and can possibly be optimized further).

So one would probably rather want to process multiple frames at the same time to better utilize the CPU. And that's where it gets tricky: you cannot reduce the latency of a single frame passing through the pipeline, so it is not clear how many frames should be buffered without introducing even more latency because of overly large buffers.
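
For what it's worth, bounded pipelining could look roughly like this (a hypothetical sketch, not part of the project; process() and the output callback stand in for the segmentation and filter steps):

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of bounded pipelining: at most MAX_IN_FLIGHT frames are
# processed concurrently, so the CPU is kept busier without letting an
# unbounded queue accumulate latency. Frames are emitted in capture order.
MAX_IN_FLIGHT = 2

def process(frame):
    return frame  # placeholder for segmentation + filters

def run_pipeline(capture, output):
    pending = deque()
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        while True:
            ret, frame = capture.read()
            if not ret:
                break
            pending.append(pool.submit(process, frame))
            # Block only once the small buffer is full.
            if len(pending) >= MAX_IN_FLIGHT:
                output(pending.popleft().result())
        while pending:
            output(pending.popleft().result())
```

Whether threads actually help here depends on the heavy steps releasing the GIL; otherwise a process pool would be the alternative.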

@kirk86
Author

kirk86 commented Dec 6, 2021

it is not clear how many frames should be buffered without introducing even more latency because of overly large buffers.

Got it. Maybe some of the heavy processing code could be written in Cython?

Finally, just to reiterate: the error 'Failed to reduce capture buffer size. Latency will be higher!' is caused by the program not providing images fast enough, right?

@allo-
Owner

allo- commented Dec 6, 2021

No, it is because OpenCV was not able to lower the buffer size of the webcam.

```python
# Attempt to reduce the buffer size
if not cap.set(cv2.CAP_PROP_BUFFERSIZE, 1):
    print('Failed to reduce capture buffer size. Latency will be higher!')
```

Here we try to reduce the buffer that OpenCV (or maybe your webcam driver, controlled by OpenCV) uses. The function returns True when the property was set successfully, and setting it reduces the latency that occurs before the program even starts processing the frames.

I am not sure how to debug it or if you can change anything when OpenCV cannot change that property.
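
One thing that might help narrow it down (a rough debugging sketch, not from the project; it assumes a reasonably recent OpenCV, and some backends just report -1 or 0 for properties they don't support):

```python
import cv2

cap = cv2.VideoCapture(0)
print("backend:", cap.getBackendName())
print("buffer size before:", cap.get(cv2.CAP_PROP_BUFFERSIZE))
print("set returned:", cap.set(cv2.CAP_PROP_BUFFERSIZE, 1))
print("buffer size after:", cap.get(cv2.CAP_PROP_BUFFERSIZE))
cap.release()
```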

@kirk86
Author

kirk86 commented Dec 6, 2021

Thanks for the clarification! I thought that might be one of the reasons for the afterimage effect, but now I see it's not related to that. Cheers!
