Testing the FPS in code #8417
Comments
I want to add that I've been very happy with the camera so far.
Do you have a Discord?
Hi @arothenberg. There is not a RealSense Discord support account. Public support for RealSense is provided on the GitHub forums and at the Intel RealSense Help Center forum: https://support.intelrealsense.com/hc/en-us/community/topics

Advice about FPS calculation and frame drops, to compare against your own method, is provided in the link below. If a "hiccup" occurs in the depth stream, the SDK will try to go back to the last good frame and continue onward from there, which is why you may sometimes see the same timestamp repeated twice in a row. So whilst you can assume that the SDK has done its best to collect "good" frames, examining the list of timestamps may indicate whether problems are occurring during streaming.

You can expect a greater processing burden during align operations. This is why it can be useful to offload processing from the CPU onto a GPU to accelerate it. On computing devices with an Nvidia GPU, this can be done with CUDA support. For non-Nvidia GPUs, GLSL processing blocks provide offloading and are 'vendor neutral', meaning that they should work with any GPU brand (though the improvement may not be noticeable on low-end computing devices). The link below provides a good discussion of the pros and cons of using GLSL.
Thanks Marty. I know the align operation takes time, so that's why I used it as a benchmark. The result is probably comparable to what I'd get if I just take the depth frame and don't align or remove any background at 90 FPS (on the camera). I'll try the GLSL blocks and see if that helps.
The discussion in the link below about aligning in a project with low-power computers may provide helpful insights.
Y'know Marty, I tested my code (not rs-align-advanced, which I've been using as a benchmark) and I might be getting 65 FPS, which is much more than enough for me. I suppose the aligned tables take a lot of time. I'll close this.
Thanks very much for the update @arothenberg - I'm pleased that your code produces results that are satisfactory for your needs.
And thanks for that link to the GLSL blocks. That might be important. Tyvm.
Just in case someone browses this: I was not getting 65 FPS. I was getting something comparable to the numbers above - around 10 FPS.
Issue Description
I'm trying to measure how many frames per second I am able to process from the camera.
I used rs-align-advanced.cpp as the template for my own project. But I don't use the color stream and I do other things...
Anyway, I tested the speed of rs-align-advanced.cpp adding the following code to the main procedure:
I got around 80 frames in 10 seconds. That was with the remove_background function making a pass over the depth/video tables.
This is the last phase of my project before I use real data (which costs), and I want to make sure I get every depth frame possible for analysis. The motions captured can be quick. Right now, on some gestures, I get 2 frames to work with. I assume that's something I'm doing and not the camera. I worked with the Kinect, using C#, writing my own bitmap readers, and I think I got better FPS - and that was 30 FPS, and I'm using the default settings in rs-align-advanced, which may be 30 FPS.
Thanks in advance.