Overflow in depth images after 4 metres #9765

Closed
SteveGaukrodger opened this issue Sep 13, 2021 · 9 comments

@SteveGaukrodger

Required Info
Camera Model: D415
Firmware Version: 05.12.15.50
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.33.1.1360
Language: C#
Segment: others

When I point the camera at objects more than 4 metres away (possibly exactly 4.095 m), the reported distance wraps around like an integer overflow: a pixel 5 metres away shows as being 1 metre away. This doesn't happen in the RealSense Viewer, so I'm clearly doing something wrong, but I can't figure out what.

[Image: RealsenseViewer-hallway]
In the Viewer, the centre of the image is more than 4 m away and shows up correctly.
[Image: Realsense-MyApp-Hallway]
When I capture images using librealsense, the area circled in red should be white since it is further away. However, it is grey because it is being read as closer than 4 m.

I've used raw pixels read into a byte array and confirmed that it is happening in the frames coming from the camera; it's not just an artefact of how I do the conversion.
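
Roughly, the kind of check I mean (a simplified sketch, not my exact code; LogMaxRawDepth is just an illustrative helper name and System.Linq is assumed for Max()):

    // Log the largest raw Z16 value in a frame. With the default 1 mm depth
    // units, a hard ceiling around 4000 would correspond to roughly 4 metres.
    private static void LogMaxRawDepth(DepthFrame frame)
    {
        var raw = new ushort[frame.Width * frame.Height];
        frame.CopyTo(raw);   // raw depth values in sensor depth units
        Console.WriteLine($"Max raw depth this frame: {raw.Max()}");
    }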

Any suggestions appreciated.

@MartyG-RealSense
Collaborator

Hi @SteveGaukrodger, would it be possible to update your RealSense SDK version, please? If firmware version 5.12.8.200 or newer is installed then a minimum SDK version of 2.39.0 should be used, due to internal changes in the firmware from 5.12.8.200 onwards.

[image]

If you need to use SDK 2.33.1 then the recommended firmware for that SDK version is 5.12.3.0, though 5.12.7.100 (the last version before the internal firmware changes that require 2.39.0) should be okay too.
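
For reference, you can confirm at runtime which firmware the connected camera is actually running; a minimal sketch using the C# wrapper's Device.Info indexer:

    // Print the connected device's name, serial number and firmware version so it
    // can be checked against the SDK/firmware pairing guidance above.
    using (var ctx = new Context())
    {
        var devices = ctx.QueryDevices();
        var dev = devices[0];   // first connected camera
        Console.WriteLine("Device:   {0}", dev.Info[CameraInfo.Name]);
        Console.WriteLine("Serial:   {0}", dev.Info[CameraInfo.SerialNumber]);
        Console.WriteLine("Firmware: {0}", dev.Info[CameraInfo.FirmwareVersion]);
    }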

@SteveGaukrodger
Author

Hi Marty,

I'll try upgrading the SDK now and get back to you.

Kind regards,

Steve

@SteveGaukrodger
Author

Hi Marty,

I downloaded the latest SDK (2.49.3474) and did a clean and rebuild with the new DLLs.

Unfortunately, it is still happening. I also did some more measurements and, although it's difficult to be precise, it really does seem to be at almost exactly 4 m: when I log the maximum value of the depth frames it is 3985-3995.

In case it helps, I've included my capture code below. It's pretty much from the tutorial.

            try
            {
                var cfg = new Config();
                cfg.EnableStream(Stream.Depth, FrameWidth, FrameHeight, framerate: _frameRate);
                cfg.EnableStream(Stream.Color, FrameWidth, FrameHeight, Format.Rgb8, framerate: _frameRate);
                var pp = pipeline.Start(cfg);
                _device = pp.Device;
                var intrinsics = pp.GetStream<VideoStreamProfile>(Stream.Color).GetIntrinsics();

                Intrinsics = new RealSenseIntrinsics(intrinsics);
                if (intrinsicsOnly)
                {
                    pipeline.Stop();
                    return;
                }

                var sensor = pp.Device.QuerySensors().First(s => s.Is(Extension.DepthSensor));
                var blocks = sensor.ProcessingBlocks.ToList();
                // Custom block: run the depth sensor's recommended processing blocks, then apply the align filter.
                block = new CustomProcessingBlock((f, src) =>
                {
                    using (var releaser = new FramesReleaser())
                    {
                        foreach (ProcessingBlock p in blocks)
                            f = p.Process(f).DisposeWith(releaser);

                        f = f.ApplyFilter(align).DisposeWith(releaser);

                        src.FrameReady(f);
                    }
                });

                block.Start(f =>
                {
                    using (var frames = f.As<FrameSet>())
                    {
                        var colorFrame = frames.ColorFrame.DisposeWith(frames);
                        var depthFrame = frames.First<DepthFrame>(Stream.Depth).DisposeWith(frames);
                        var depthImage = DepthFrameToGrayImage(depthFrame).DisposeWith(frames);
                        var colorImage = VideoFrameToRgbImage(colorFrame).DisposeWith(frames);
                        OnFrameReceived(depthImage, colorImage, frames.Timestamp);
                    }
                });

                var token = tokenSource.Token;

                var t = Task.Factory.StartNew(() =>
                {
                    while (!token.IsCancellationRequested)
                    {
                        using (var frames = pipeline.WaitForFrames())
                        {
                            block.Process(frames);
                        }
                    }
                    pipeline.Stop();
                    pipeline.Dispose();

                }, token);
            }
            catch (Exception ex)
            {
                Log.Error(ex.Message);
                Log.Error(ex.StackTrace);
                Log.Error(ex.ToString());
                throw;
            }
        private Image<Gray, ushort> DepthFrameToGrayImage(DepthFrame frame)
        {
            if (frame == null) throw new NullReferenceException("Depth frame cannot be null");
            if (_depthFrame == null)
            {
                _depthFrame = new ushort[frame.Width * frame.Height];
            }
            frame.CopyTo(_depthFrame);

            // Examining _depthFrame shows that the pixel values are already wrong at this point.

            Image<Gray, ushort> depthImage = new Image<Gray, ushort>(frame.Width, frame.Height);
            Buffer.BlockCopy(_depthFrame, 0, depthImage.Data, 0, _depthFrame.Length * sizeof(ushort));
            
            return depthImage;
        }

Kind regards

Steve

@SteveGaukrodger SteveGaukrodger changed the title Overflow in depth images after 4.095 metres Overflow in depth images after 4 metres Sep 13, 2021
@MartyG-RealSense
Collaborator

It looks as though you are applying a depth-color align filter in your script. The RealSense Viewer does not have depth to color alignment in its 2D view (the one that you provided an image of), so that is a notable difference between the Viewer image and yours.

The simplest C# real-time depth rendering example I have seen is one that a RealSense team member suggested in #3413 (comment) for displaying depth in a Windows form.

Alternatively, a hacky workaround may be to define a threshold filter in C# that excludes from the image any depth data with values greater than 4 metres. A RealSense user created such a script and shared it in image form.

[image]

[image]
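
The script is only shared as images above, but the general shape would be something like the following (a sketch, not tested code; I'm assuming the wrapper's ThresholdFilter processing block and the Option.MinDistance / Option.MaxDistance enum values, with distances in metres):

    // Sketch of a depth threshold: pixels outside [MinDistance, MaxDistance]
    // are zeroed before the frame is rendered.
    var threshold = new ThresholdFilter();
    threshold.Options[Option.MinDistance].Value = 0.15f;
    threshold.Options[Option.MaxDistance].Value = 4.0f;

    using (var frames = pipeline.WaitForFrames())
    using (var depth = frames.DepthFrame)
    using (var filtered = threshold.Process(depth))
    {
        // render or copy 'filtered' instead of the raw depth frame
    }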

@SteveGaukrodger
Author

SteveGaukrodger commented Sep 14, 2021

Hi,

I tried the threshold solution with and without the color alignment. In both cases I get the same problem.
I've included a photo where the depth threshold is set to 3.5 m. You can see that there is a gap where things are thresholded out, but then the distance starts to loop again.

[Image: Thresh3.5]

If I set the threshold to 4 m then I don't see any thresholding at all.

[Image: Thresh4.0]

This strongly suggests to me that the problem is in the data coming from the camera (or in how the API interprets it). I don't for a moment think it's a RealSense bug, or someone would have noticed, but maybe there's a camera option I'm not setting properly?

I've tried doing a hardware reset, tried using a D435, and tried changing the depth units. In all cases, the same problem keeps happening.
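
For completeness, the units check is roughly along these lines (a rough sketch, not my exact code; I'm assuming the sensor's Options container exposes Option.DepthUnits, and dev is the Device obtained as in my code above):

    // Read the depth scale in effect so raw ushort values can be converted to
    // metres by hand and compared with what the Viewer reports.
    var depthSensor = dev.QuerySensors().First(s => s.Is(Extension.DepthSensor));
    float depthUnits = depthSensor.Options[Option.DepthUnits].Value;  // typically 0.001 (1 mm)
    Console.WriteLine($"Depth units: {depthUnits} m per raw unit");
    Console.WriteLine($"Raw 4000 => {4000 * depthUnits} m");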

Kind regards,

Steve

@SteveGaukrodger
Author

SteveGaukrodger commented Sep 14, 2021

So I went through the Capture and Processing C# wrapper tutorials. When I use the old approach (a task factory), the depth is correct. When I use the block-processing approach I get the 4 m limit.

I have no idea why this would be.

This code now works (please let me know if there's anything wrong with how I'm doing alignment - I hacked this together)

        public void Start(string filename, bool intrinsicsOnly = false)
        {
            try
            {
                using (Context ctx = new Context())
                {
                    var devices = ctx.QueryDevices();
                    var dev = devices[0];

                    Console.WriteLine("\nUsing device 0, an {0}", dev.Info[CameraInfo.Name]);
                    Console.WriteLine("    Serial number: {0}", dev.Info[CameraInfo.SerialNumber]);
                    Console.WriteLine("    Firmware version: {0}", dev.Info[CameraInfo.FirmwareVersion]);

                    var sensors = dev.QuerySensors();
                    var depthSensor = sensors[0];
                    var colorSensor = sensors[1];

                    var depthProfile = depthSensor.StreamProfiles
                        .Where(p => p.Stream == Stream.Depth && p.As<VideoStreamProfile>().Height == 480 && p.As<VideoStreamProfile>().Width == 640)
                        .OrderBy(p => p.Framerate)
                        .Select(p => p.As<VideoStreamProfile>()).First();

                    var colorProfile = colorSensor.StreamProfiles
                        .Where(p => p.Stream == Stream.Color && p.As<VideoStreamProfile>().Height == 480 && p.As<VideoStreamProfile>().Width == 640)
                        .OrderBy(p => p.Framerate)
                        .Select(p => p.As<VideoStreamProfile>()).First();

                    var cfg = new Config();
                    cfg.EnableStream(Stream.Depth, depthProfile.Width, depthProfile.Height, depthProfile.Format, depthProfile.Framerate);
                    cfg.EnableStream(Stream.Color, colorProfile.Width, colorProfile.Height, colorProfile.Format, colorProfile.Framerate);


                    var pp = pipeline.Start(cfg);
                    _device = pp.Device;
                    Log.Information("Getting intrinsics");
                    var intrinsics = pp.GetStream<VideoStreamProfile>(Stream.Color).GetIntrinsics();
                    
                    Task.Factory.StartNew(() =>
                    {
                        while (!tokenSource.Token.IsCancellationRequested)
                        {
                            // We wait for the next available FrameSet and use it as a releaser object that tracks
                            // all newly allocated .NET frames, ensuring deterministic finalization
                            // at the end of scope.
                            using (var frames = pipeline.WaitForFrames())
                            {
                                var alignedFrames = frames.ApplyFilter(align).DisposeWith(frames);
                                var alignedFrameSet = alignedFrames.AsFrameSet().DisposeWith(frames);
                                using(var depth = alignedFrameSet.DepthFrame)
                                using (var color = alignedFrameSet.ColorFrame)
                                {
                                    var depthImage = DepthFrameToGrayImage(depth).DisposeWith(frames);
                                    var colorImage = VideoFrameToRgbImage(color).DisposeWith(frames);
                                    OnFrameReceived(depthImage, colorImage, frames.Timestamp);
                                }
                            }
                        }
                        Log.Debug("Loop finished");
                        pipeline.Stop();
                        pipeline.Dispose();
                    }, tokenSource.Token);
                }
            }
            catch (Exception ex)
            {
                Log.Error(ex.Message);
                Log.Error(ex.StackTrace);
                Log.Error(ex.ToString());
                throw;
            }

        }

I have no idea what I'm doing differently when I use the processing-block approach. Perhaps someone can run the tutorial code in a hallway to check whether they can reproduce this?

I'm happy for someone to close this, but I'm also happy to help get to the bottom of what's happening.

@MartyG-RealSense
Collaborator

Great to see that you were able to develop a solution, @SteveGaukrodger :)

Alignment in the C# wrapper has historically been awkward, as described in #4719.
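
For reference, the alignment pattern in your working code boils down to roughly this (a sketch; the key point is that the Align block is constructed once and reused, not created per frame):

    // Align depth to the colour stream, then re-wrap the result as a FrameSet,
    // as in the working code above.
    var align = new Align(Stream.Color);

    using (var frames = pipeline.WaitForFrames())
    {
        var aligned = frames.ApplyFilter(align).DisposeWith(frames);
        using (var frameset = aligned.AsFrameSet())
        using (var depth = frameset.DepthFrame)
        using (var color = frameset.ColorFrame)
        {
            // depth is now registered to the colour camera's viewpoint
        }
    }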

@SteveGaukrodger
Author

Thanks for the help, MartyG. And glad to know that I'm not missing something obvious on the alignment :)

@MartyG-RealSense
Collaborator

Case closed due to solution achieved and no further comments received.
