Overflow in depth images after 4 metres #9765
Comments
Hi @SteveGaukrodger - would it be possible to update your RealSense SDK version, please? If firmware version 5.12.8.200 or newer is installed then a minimum SDK version of 2.39.0 should be used, due to internal changes in the firmware from 5.12.8.200 onwards. If you need to use SDK 2.33.1 then the recommended firmware for that SDK version is 5.12.3.0, though 5.12.7.100 (the last version before the internal firmware changes that require 2.39.0) should be okay too.
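As a quick compatibility check, the installed firmware version can be read through the C# wrapper and compared against the 5.12.8.200 cutoff mentioned above. This is a minimal sketch, assuming the Intel.RealSense package and a connected device; the device-info keys are the same ones used in the capture code later in this thread:

```csharp
using System;
using Intel.RealSense;

class FirmwareCheck
{
    static void Main()
    {
        using (var ctx = new Context())
        {
            var devices = ctx.QueryDevices();
            if (devices.Count == 0)
            {
                Console.WriteLine("No RealSense device found.");
                return;
            }
            var dev = devices[0];

            // Firmware versions are dotted strings, e.g. "5.12.8.200".
            string fw = dev.Info[CameraInfo.FirmwareVersion];
            Console.WriteLine($"Firmware: {fw}");

            // Per the advice above: firmware from 5.12.8.200 onwards
            // needs librealsense 2.39.0 or newer.
            if (new Version(fw) >= new Version("5.12.8.200"))
                Console.WriteLine("This firmware requires SDK 2.39.0 or newer.");
        }
    }
}
```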
Hi Marty, I'll try upgrading the SDK now and get back to you. Kind regards, Steve
Hi Marty, I downloaded the latest SDK - 2.49.3474 - and did a clean and rebuild with the new DLLs. Unfortunately, it is still happening. I also did some more measurements and, although it's difficult to be precise, it really does seem to be at almost exactly 4m - when logging the maximum depth value of the frames it's 3985-3995. In case it helps, I've included my capture code below. It's pretty much from the tutorial.
Kind regards, Steve
It looks as though you are applying a depth-color align filter in your script. The RealSense Viewer does not have depth to color alignment in its 2D view (the one that you provided an image of), so that is a notable difference between the Viewer image and yours. The simplest C# real-time depth rendering example I have seen is one that a RealSense team member suggested in #3413 (comment) for displaying depth in a Windows form. Alternatively, a hacky approach to the problem may be to define a threshold filter in C# that excludes from the image any depth data with values greater than 4 meters. A RealSense user created such a script and shared it in image form.
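A threshold filter along those lines might look like the following. This is a minimal sketch, assuming the Intel.RealSense wrapper's ThresholdFilter processing block and its standard MinDistance/MaxDistance options (both in meters); the `Run` helper and its pipeline parameter are hypothetical names for illustration:

```csharp
using Intel.RealSense;

class ThresholdExample
{
    // Sketch: clamp depth output to the 0.1 m - 4.0 m range using the
    // SDK's threshold processing block, applied to one arriving frame.
    public static void Run(Pipeline pipeline)
    {
        var threshold = new ThresholdFilter();
        threshold.Options[Option.MinDistance].Value = 0.1f; // meters
        threshold.Options[Option.MaxDistance].Value = 4.0f; // meters

        using (var frames = pipeline.WaitForFrames())
        using (var depth = frames.DepthFrame)
        using (var filtered = threshold.Process(depth))
        {
            // Pixels beyond 4 m should be zeroed out rather than wrapped,
            // so any residual "wrapping" points at the incoming data itself.
        }
    }
}
```

If depth values beyond the threshold still appear to wrap rather than drop to zero, that would suggest the corruption happens before the filter sees the frame.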
Hi, I tried the threshold solution with and without the color alignment. In both cases I get the same problem. If I set the threshold to 4m then I don't see any thresholding at all. This strongly suggests to me that the problem is in the data coming from the camera (or in the API interpreting it). I don't for a moment think it's a RealSense bug or someone would have noticed, but maybe there's a camera option I'm not setting properly? I've tried doing a hardware reset, tried using a D435, tried changing the units. In all cases, the same problem keeps happening. Kind regards, Steve
So I went through the Capture and Processing C# wrapper tutorials. When I use the old approach (a task factory) the depth is correct. When I use the block processing approach I get the 4m limit. I have no idea why this would be. This code now works (please let me know if there's anything wrong with how I'm doing alignment - I hacked this together):
public void Start(string filename, bool intrinsicsOnly = false)
{
try
{
using (Context ctx = new Context())
{
var devices = ctx.QueryDevices();
var dev = devices[0];
Console.WriteLine("\nUsing device 0, an {0}", dev.Info[CameraInfo.Name]);
Console.WriteLine(" Serial number: {0}", dev.Info[CameraInfo.SerialNumber]);
Console.WriteLine(" Firmware version: {0}", dev.Info[CameraInfo.FirmwareVersion]);
var sensors = dev.QuerySensors();
var depthSensor = sensors[0];
var colorSensor = sensors[1];
var depthProfile = depthSensor.StreamProfiles
.Where(p => p.Stream == Stream.Depth && p.As<VideoStreamProfile>().Height == 480 && p.As<VideoStreamProfile>().Width == 640)
.OrderBy(p => p.Framerate)
.Select(p => p.As<VideoStreamProfile>()).First();
var colorProfile = colorSensor.StreamProfiles
.Where(p => p.Stream == Stream.Color && p.As<VideoStreamProfile>().Height == 480 && p.As<VideoStreamProfile>().Width == 640)
.OrderBy(p => p.Framerate)
.Select(p => p.As<VideoStreamProfile>()).First();
var cfg = new Config();
cfg.EnableStream(Stream.Depth, depthProfile.Width, depthProfile.Height, depthProfile.Format, depthProfile.Framerate);
cfg.EnableStream(Stream.Color, colorProfile.Width, colorProfile.Height, colorProfile.Format, colorProfile.Framerate);
var pp = pipeline.Start(cfg);
_device = pp.Device;
Log.Information("Getting intrinsics");
var intrinsics = pp.GetStream<VideoStreamProfile>(Stream.Color).GetIntrinsics();
Task.Factory.StartNew(() =>
{
while (!tokenSource.Token.IsCancellationRequested)
{
// We wait for the next available FrameSet and using it as a releaser object that would track
// all newly allocated .NET frames, and ensure deterministic finalization
// at the end of scope.
using (var frames = pipeline.WaitForFrames())
{
var alignedFrames = frames.ApplyFilter(align).DisposeWith(frames);
var alignedFrameSet = alignedFrames.AsFrameSet().DisposeWith(frames);
using (var depth = alignedFrameSet.DepthFrame)
using (var color = alignedFrameSet.ColorFrame)
{
var depthImage = DepthFrameToGrayImage(depth).DisposeWith(frames);
var colorImage = VideoFrameToRgbImage(color).DisposeWith(frames);
OnFrameReceived(depthImage, colorImage, frames.Timestamp);
}
}
}
Log.Debug("Loop finished");
pipeline.Stop();
pipeline.Dispose();
}, tokenSource.Token);
}
}
catch (Exception ex)
{
Log.Error(ex.Message);
Log.Error(ex.StackTrace);
Log.Error(ex.ToString());
throw;
}
}
I have no idea what I'm doing differently when I use the processing/block approach. Perhaps someone can run the tutorial code in a hallway to check whether they can replicate the issue. I'm happy for someone to close this, but I'm also happy to help get to the bottom of what's happening.
Great to see that you were able to develop a solution, @SteveGaukrodger :) Alignment in the C# wrapper has historically been awkward, as described in #4719
Thanks for the help MartyG. And glad to know that I'm not missing something obvious on the alignment :) |
Case closed due to solution achieved and no further comments received. |
When I point the camera at objects more than 4 metres away (possibly at exactly 4.095 m), the distance wraps around like an integer overflow: a pixel 5 metres away shows as being 1 metre away. This doesn't happen in the RealSense Viewer, so I'm clearly doing something wrong but I can't figure out what.
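The numbers reported here are at least consistent with 16-bit Z16 depth values being truncated to 12 bits somewhere along the path (4095 mm = 0x0FFF, matching the 4.095 m cutoff, and a 5 m pixel would then read as roughly 0.9 m). The following is purely an illustrative sketch of that hypothesis, not a confirmed cause:

```csharp
using System;

class WrapAroundDemo
{
    static void Main()
    {
        // Z16 depth is a 16-bit value in depth units (1 mm by default).
        ushort fiveMetres = 5000;

        // Masking to 12 bits reproduces the observed behaviour:
        // 5000 mm becomes 904 mm, i.e. a 5 m pixel reads as ~0.9 m.
        int truncated = fiveMetres & 0x0FFF;
        Console.WriteLine(truncated); // 904

        // The largest representable distance before wrapping is 4095 mm.
        Console.WriteLine(0x0FFF); // 4095
    }
}
```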
In the viewer, the centre of the image is more than 4m away and shows up correctly.
When I capture images using librealsense the area circled in red should be white since it is further away. However, it is grey because it's being read as closer than 4m.
I've used raw pixels read into a byte array and confirmed that it is happening in the frames coming from the camera, it's not just an artefact of how I do the conversion.
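For reference, reading the raw Z16 pixels into a managed array to inspect the maximum value can be done roughly as follows. This is a minimal sketch, assuming the Intel.RealSense wrapper's `DepthFrame.Data` pointer and the default 1 mm depth unit on D400-series cameras; `MaxDepth` is a hypothetical helper name:

```csharp
using System.Runtime.InteropServices;
using Intel.RealSense;

static class RawDepth
{
    // Copy a Z16 depth frame into a managed buffer and return the
    // maximum value, in depth units (millimetres by default).
    public static ushort MaxDepth(DepthFrame depth)
    {
        int count = depth.Width * depth.Height;
        var raw = new short[count];
        Marshal.Copy(depth.Data, raw, 0, count);

        ushort max = 0;
        foreach (short s in raw)
        {
            // Reinterpret the signed buffer as the unsigned Z16 values.
            ushort v = unchecked((ushort)s);
            if (v > max) max = v;
        }
        // An overflow in the source data would show up here as a maximum
        // that never exceeds ~4095, even for more distant scenes.
        return max;
    }
}
```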
Any suggestions appreciated.