
Error Calibrating IMU and failed to receive IMU frames when stream Depth Color IMU at the same time #6370

Closed
juliussin opened this issue May 8, 2020 · 25 comments


@juliussin

juliussin commented May 8, 2020


Required Info
Camera Model D435i
Firmware Version 05.12.03.00
Operating System & Version JetPack 4.3
Kernel Version (Linux Only) Linux 4.9.140-tegra aarch64
Platform NVIDIA Jetson Nano
SDK Version 2.32.1
Language Python
Segment Robot

Issue Description

I'm having some trouble with my RealSense. First, I get this error when I open realsense-viewer:

 09/05 04:24:38,836 WARNING [548395409856] (types.cpp:49) Accel Sensitivity:hwmon command 0x4f failed. Error type: No data to return (-21).

After looking for a solution, I tried to calibrate my RealSense with the rs-imu-calibration.py script provided in the SDK. But after calibration, it failed to write the results to the EEPROM:

Writing files:
accel_ast.txt
gyro_ast.txt
[-2.77070314e-03  7.31204644e-06 -3.71226973e-05]
[1000 1000 1000 1000 1000 1000]
using 6000 measurements.
[[ 1.02466962  0.02833392 -0.00701211]
 [-0.01204428  1.01828694  0.03595002]
 [ 0.00251626 -0.00895735  1.01212725]
 [-0.06575837  0.04205819  0.02977264]]
residuals: [  4.46210107  42.70104708 194.89635213]
rank: 4
singular: [438.4666239  428.57253125 424.56741096  77.4509522 ]
norm (raw data  ): 9.626653
norm (fixed data): 9.803630 A good calibration will be near 9.806650
Would you like to write the results to the camera's eeprom? (Y/N)Y
Writing calibration to device.

Done. failed to set power state

I'm wondering what's wrong with my device.

I'm also having a lot of problems with the Python wrapper: I can't stream color + depth + IMU at the same time, similar to #5628 and #6031. I ran the code kindly provided there, and it failed to read the IMU frames. Strangely, sometimes after I restart my Jetson Nano the code works, but on the second run the IMU frames time out. I stop the pipelines properly, so I don't think that's the problem; it looks like a bug. I've been restarting my device frequently to narrow down the problem.

I'm using only one RealSense, with the USB cable provided in the product box, and my Jetson Nano is properly powered with a sufficient power supply, in case you were going to guess those. :)

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 9, 2020

I considered the calibration EEPROM writing problem carefully. As a first step to try, I would recommend performing a "gold reset" on the camera to return it to factory defaults. If there is a corruption in the calibration table that is stored inside the camera then the gold reset can correct it.

https://forums.intel.com/s/question/0D70P000006EFHESA4

There have been a couple of previous cases related to difficulty with using Python to stream color, depth and IMU together. I hope that the links below will provide useful insights.

#5628
#6031

The second link relates specifically to streaming color, depth and IMU at the same time on Nano.

@juliussin
Author


Thank you for your reply! For the problem of streaming color, depth and IMU together in Python, I already tried those solutions and their code, but it didn't work either. Let's leave that aside while we try to fix the calibration error; maybe after that it will work fine!

Referring to https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_D400_Dyn_Calib_User_Guide.pdf, I failed to get the deb package:

$ sudo apt-key adv --keyserver keys.gnupg.net --recv-key C8B3A55A6F3EFCDE || sudo apt-key adv -- keyserver hkp://keyserver.ubuntu.com:80 --recv-key C8B3A55A6F3EFCDE
Executing: /tmp/apt-key-gpghome.gtcD6BCs9P/gpg.1.sh --keyserver keys.gnupg.net --recv-key C8B3A55A6F3EFCDE
gpg: key C8B3A55A6F3EFCDE: ""CN = Intel(R) Intel(R) Realsense", O=Intel Corporation" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1

$ sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo xenial main" -u
Get:1 file:/var/cuda-repo-10-0-local-10.0.326  InRelease
Ign:1 file:/var/cuda-repo-10-0-local-10.0.326  InRelease
Get:2 file:/var/visionworks-repo  InRelease
Ign:2 file:/var/visionworks-repo  InRelease
Get:3 file:/var/visionworks-sfm-repo  InRelease
Ign:3 file:/var/visionworks-sfm-repo  InRelease
Get:4 file:/var/visionworks-tracking-repo  InRelease
Ign:4 file:/var/visionworks-tracking-repo  InRelease
Get:5 file:/var/cuda-repo-10-0-local-10.0.326  Release [574 B]              
Get:6 file:/var/visionworks-repo  Release [1.999 B]                            
Get:7 file:/var/visionworks-sfm-repo  Release [2.003 B]                        
Get:5 file:/var/cuda-repo-10-0-local-10.0.326  Release [574 B]                 
Get:8 file:/var/visionworks-tracking-repo  Release [2.008 B]                   
Get:6 file:/var/visionworks-repo  Release [1.999 B]                            
Get:7 file:/var/visionworks-sfm-repo  Release [2.003 B]                        
Get:8 file:/var/visionworks-tracking-repo  Release [2.008 B]                   
Hit:11 http://ports.ubuntu.com/ubuntu-ports bionic InRelease                   
Hit:14 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease           
Get:15 http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo bionic InRelease [3.230 B]
Hit:16 http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo xenial InRelease
Hit:17 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease         
Hit:18 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease          
Get:19 https://repo.download.nvidia.com/jetson/common r32 InRelease [2.541 B]
Get:20 https://repo.download.nvidia.com/jetson/t210 r32 InRelease [2.555 B]
Fetched 8.326 B in 4s (2.053 B/s)     
Reading package lists... Done

$ sudo rm -f /etc/apt/sources.list.d/realsense-public.list

$ sudo apt-get update
...

$ sudo apt-get install librscalibrationapi
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package librscalibrationapi

Can you help me with this problem before I perform the gold reset?

@MartyG-RealSense
Collaborator

Someone else had problems installing librscalibrationapi but was able to install librscalibrationtool.

https://forums.intel.com/s/question/0D50P00004WIrgvSAD/possible-to-calibrate-d435-from-ubuntu-1804?language=en_US

librscalibrationtool is actually the one that you want. The 'Calibration API' is a more advanced tool for users who want to create their own custom calibration tool.

If you installed librscalibrationtool first and then tried installing the optional librscalibrationapi second, then that second optional step can be skipped as you do not need the custom calibration tool creation functions.


@juliussin
Author

@MartyG-RealSense Yes, same error with librscalibrationtool; it cannot find the package. But that's okay, I used Windows instead and it worked. I got this result:

>Intel.Realsense.CustomRW.exe -g
CustomRW for Intel Realsense D400, Version: 2.6.8.0

  Device PID: 0B3A
  Device name: Intel RealSense D435I
  Serial number: 8**********8
  Firmware version: 05.12.03.00

Calibration on device successfully reset to default gold factory settings.

And then I ran rs-imu-calibration again and got the same error:

Would you like to write the results to the camera's eeprom? (Y/N)Y
Writing calibration to device.

Done. failed to set power state

@MartyG-RealSense
Collaborator

Okay, thanks for trying the gold reset. At least we have eliminated the calibration table as a potential cause of your problem.

Your opening mention of having a sufficient power supply suggests that you are using a mains-powered USB 3 hub connected to a wall socket?

@juliussin
Author


Thanks! I don't use any USB 3 hub; I connect the camera directly to the Jetson Nano USB port, and my Jetson Nano is powered through the 5V barrel jack with a maximum current of 3.5A. I have never (yet) had my Jetson Nano fail to power any USB device through its USB port. I can also stream depth + color continuously with Python code. (That means the connection can work, and I don't think EEPROM writing or IMU calibration needs more power than a depth + color stream; am I wrong?)

@MartyG-RealSense
Collaborator

It's more an issue of whether the power supplied to the USB port is stable, rather than the overall amount of power supplied to the board. An enterprise computer with a big power supply can still experience USB port problems.

What a mains powered USB hub does is provide stable power to the USB port, because the hub is supplied from the mains electricity. Whereas if the camera is plugged directly into the computer's USB port, it is still a "passive" connection that relies on the computer's power supply (whilst a mains powered USB hub is an "active" connection).

This guide to the Nano power supply suggests having both a barrel jack and a powered hub:

https://desertbot.io/blog/jetson-nano-power-supply-barrel-vs-micro-usb

Having said that, you are able to run all three streams on the first run of the program, so the hardware is capable of performing it without the help of a powered hub. I have also heard of cases like this, where the application fails on a subsequent run, seemingly because something was not closed properly after the first run.

Are you using syncer instead of pipeline in your program, please? There was a reported problem earlier this week with syncer not being shut down properly after the stop and then close of a sensor.

#6337

@juliussin
Author

@MartyG-RealSense The idea behind using a Jetson Nano and RealSense was to make a portable, battery-powered wearable device. I'm afraid it is not possible for me to use a powered hub in a simple portable battery-powered wearable design.

Sure, I'm also aware of that issue with pipelines, which is why I said in the first post that I stop all my pipelines correctly. I didn't use syncer, since I used and modified many of the provided sample codes. I even tried to stop the pipelines twice (in case my try ... finally ... statement didn't work), but on the second stop I got 'cannot stop pipeline before start', which means both of my pipeline stops (one for the IMU pipeline and one for the depth + color pipeline) worked correctly.

Also, when I open realsense-viewer it can stream IMU, depth and color together. If you want me to check or try something, I can provide it.

@MartyG-RealSense
Collaborator

It is not unusual for a process to work fine in the RealSense Viewer (which is written in C++) but be more problematic to reproduce in Python due to differences between the languages.

Could you please test whether the problem with the IMU occurs if you reset the camera (but not the Nano) by unplugging it after the first successful run and then plugging it back in before starting the program for the second time?
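
If manual replugging is inconvenient, a programmatic equivalent is a hardware reset of the device. A minimal sketch (the re-enumeration delay is a guess and may need tuning):

import time
import pyrealsense2 as rs

# Power-cycle every connected RealSense device; roughly equivalent to
# unplugging the USB cable and plugging it back in.
ctx = rs.context()
for dev in ctx.query_devices():
    dev.hardware_reset()
time.sleep(5)  # wait for the camera to re-enumerate on the USB bus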

@juliussin
Author

juliussin commented May 9, 2020

This is my code:

import pyrealsense2 as rs
import numpy as np
import cv2
import time

print('\nusing librealsense sdk version: ' + str(rs.__version__))

# Stream Setting
device_id = None  # Default: None
enable_imu = True
accel_fps = 63
gyro_fps = 200
enable_depth = True
depth_resolution = (640, 360)
depth_fps = 60
enable_rgb = True
rgb_resolution = (640, 360)
rgb_fps = 60
do_align = True
if do_align and (not enable_depth or not enable_rgb):
    print('do align only if depth and color enabled!')
    exit()

if enable_imu:  # IMU Pipeline Configuration
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    if device_id is not None:
        imu_config.enable_device(device_id)
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, accel_fps)
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, gyro_fps)
    imu_profile = imu_pipeline.start(imu_config)

if enable_depth or enable_rgb:  # Depth + Color Pipeline Configuration
    pipeline = rs.pipeline()
    config = rs.config()
    if device_id is not None:
        config.enable_device(device_id)
    if enable_depth:
        config.enable_stream(rs.stream.depth, depth_resolution[0], depth_resolution[1], rs.format.z16, depth_fps)
    if enable_rgb:
        config.enable_stream(rs.stream.color, rgb_resolution[0], rgb_resolution[1], rs.format.bgr8, rgb_fps)
    profile = pipeline.start(config)
    if do_align:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        align_to = rs.stream.color
        align = rs.align(align_to)

time.sleep(2)  # Delay after starting pipeline(s)

try:
    frame_count = 0  # If needed
    while True:
        time1 = time.time()
        if enable_imu and do_align:
            frames = pipeline.wait_for_frames()
            imu_frames = imu_pipeline.wait_for_frames()
            aligned_frames = align.process(frames)
            aligned_depth_frame = aligned_frames.get_depth_frame()
            aligned_color_frame = aligned_frames.get_color_frame()
            accel_frame = imu_frames.first_or_default(rs.stream.accel).as_motion_frame()
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro).as_motion_frame()
            if (not aligned_depth_frame or not aligned_color_frame) and (not accel_frame and not gyro_frame):
                print('aligned_depth_frame: %r' % aligned_depth_frame)
                print('aligned_color_frame: %r' % aligned_color_frame)
                print('accel_frame        : %r' % accel_frame)
                print('gyro_frame         : %r' % gyro_frame)
                continue

            if aligned_depth_frame and aligned_color_frame:
                depth_image = np.asanyarray(aligned_depth_frame.get_data())
                color_image = np.asanyarray(aligned_color_frame.get_data())
            else:
                depth_image = None
                color_image = None
            if accel_frame:
                accel_sample = np.asanyarray(accel_frame.get_motion_data())
            else:
                accel_sample = None
            if gyro_frame:
                gyro_sample = np.asanyarray(gyro_frame.get_motion_data())
            else:
                gyro_sample = None

        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        cv2.imshow("Color", color_image)
        cv2.imshow("Depth Colormap", depth_colormap)
        print('Accelerometer: ' + str(accel_sample))
        print('Gyroscope: ' + str(gyro_sample))
        key = cv2.waitKey(1)
        if key == 27:
#            if enable_imu:
#                imu_pipeline.stop()
#            if enable_depth or enable_rgb:
#                pipeline.stop()
            break
        print('Done 1 Frame! Time: {0:0.3f}ms & FPS: {1:0.2f}'.format(
            (time.time()-time1)*1000,
            1/(time.time()-time1)
        ))

finally:
    if enable_imu:
        imu_pipeline.stop()
    if enable_depth or enable_rgb:
        pipeline.stop()
    cv2.destroyAllWindows()

First, after a restart: it worked, then exited properly.

$ python3 stream3in1.py 

using librealsense sdk version: 2.32.1
Accelerometer: x: 0.284393, y: -9.64974, z: 0.509946
Gyroscope: x: 0, y: 0.00349066, z: 0
Done 1 Frame! Time: 213.311ms & FPS: 4.69
Accelerometer: x: 0.26478, y: -9.66936, z: 0.490332
Gyroscope: x: -0.00174533, y: 0.00349066, z: 0
Done 1 Frame! Time: 71.718ms & FPS: 13.94
Accelerometer: x: 0.284393, y: -9.66936, z: 0.490332
Gyroscope: x: -0.00523599, y: -0.00174533, z: 0
Done 1 Frame! Time: 70.885ms & FPS: 14.11
Accelerometer: x: 0.304006, y: -9.66936, z: 0.490332
Gyroscope: x: 0, y: -0.00174533, z: -0.00349066
Done 1 Frame! Time: 31.395ms & FPS: 31.85
Accelerometer: x: 0.304006, y: -9.64974, z: 0.490332
Gyroscope: x: 0, y: 0, z: 0
Done 1 Frame! Time: 67.800ms & FPS: 14.75
Accelerometer: x: 0.284393, y: -9.64974, z: 0.490332
Gyroscope: x: 0, y: -0.00523599, z: 0
Done 1 Frame! Time: 29.272ms & FPS: 34.16
Accelerometer: x: 0.284393, y: -9.68897, z: 0.509946
Gyroscope: x: -0.00523599, y: 0, z: 0.00349066
Done 1 Frame! Time: 43.204ms & FPS: 23.14
Accelerometer: x: 0.304006, y: -9.66936, z: 0.529559
Gyroscope: x: -0.00349066, y: 0.00174533, z: 0
Done 1 Frame! Time: 56.655ms & FPS: 17.65
Accelerometer: x: 0.284393, y: -9.66936, z: 0.490332
Gyroscope: x: 0, y: 0.00174533, z: -0.00349066
Done 1 Frame! Time: 34.464ms & FPS: 29.01
Accelerometer: x: 0.26478, y: -9.66936, z: 0.490332
Gyroscope: x: -0.00698132, y: 0.00174533, z: 0
Done 1 Frame! Time: 58.594ms & FPS: 17.07
Accelerometer: x: 0.284393, y: -9.66936, z: 0.509946
Gyroscope: x: -0, y: -0.00349066, z: 0.00174533
Done 1 Frame! Time: 61.028ms & FPS: 16.38
Accelerometer: x: 0.284393, y: -9.66936, z: 0.509946
Gyroscope: x: -0.00523599, y: -0.00174533, z: 0
Done 1 Frame! Time: 55.823ms & FPS: 17.91
Accelerometer: x: 0.284393, y: -9.66936, z: 0.490332
Gyroscope: x: -0.00698132, y: 0, z: 0
Done 1 Frame! Time: 46.720ms & FPS: 21.39
Accelerometer: x: 0.284393, y: -9.68897, z: 0.470719
Gyroscope: x: -0.00349066, y: 0, z: 0
Done 1 Frame! Time: 51.219ms & FPS: 19.52
Accelerometer: x: 0.284393, y: -9.64974, z: 0.490332
Gyroscope: x: -0.00174533, y: 0.00174533, z: 0
Done 1 Frame! Time: 41.964ms & FPS: 23.83
Accelerometer: x: 0.304006, y: -9.66936, z: 0.509946
Gyroscope: x: 0, y: 0.00349066, z: 0
Done 1 Frame! Time: 33.170ms & FPS: 30.14
Accelerometer: x: 0.304006, y: -9.66936, z: 0.490332
Gyroscope: x: 0, y: 0.00523599, z: -0.00174533
Done 1 Frame! Time: 35.163ms & FPS: 28.44
Accelerometer: x: 0.284393, y: -9.64974, z: 0.509946
Gyroscope: x: -0.00349066, y: 0, z: 0
Done 1 Frame! Time: 56.777ms & FPS: 17.61
Accelerometer: x: 0.284393, y: -9.66936, z: 0.509946
Gyroscope: x: -0.00174533, y: -0.00174533, z: -0.00174533
Done 1 Frame! Time: 29.440ms & FPS: 33.96
Accelerometer: x: 0.304006, y: -9.66936, z: 0.509946
Gyroscope: x: -0.00698132, y: 0, z: 0.00174533
Done 1 Frame! Time: 43.360ms & FPS: 23.06
Accelerometer: x: 0.284393, y: -9.68897, z: 0.509946
Gyroscope: x: -0.00174533, y: 0, z: 0
Done 1 Frame! Time: 45.114ms & FPS: 22.16
Accelerometer: x: 0.284393, y: -9.66936, z: 0.509946
Gyroscope: x: -0.00523599, y: 0, z: -0.00174533
Done 1 Frame! Time: 35.628ms & FPS: 28.06

I noticed that with these simple tasks I got very poor FPS, even though my setting was 60 FPS for depth + color. Yes, this is not a rigorous way to measure FPS, just an indicator.
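
(As a side note, single-frame timing like the print above is very noisy; a sliding-window average gives a steadier number. A small hypothetical helper, not part of the SDK:)

import time

class FpsMeter:
    # Averages FPS over the last `window` frames instead of a single frame.
    def __init__(self, window=30):
        self.window = window
        self.stamps = []

    def tick(self):  # call once per processed frame
        self.stamps.append(time.time())
        if len(self.stamps) > self.window:
            self.stamps.pop(0)

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        return (len(self.stamps) - 1) / (self.stamps[-1] - self.stamps[0])

Calling meter.tick() in the loop and printing meter.fps() every few frames would smooth out the numbers above.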
Second, I tried to run the program again:

$ python3 stream3in1.py 

using librealsense sdk version: 2.32.1
Accelerometer: x: 0.284393, y: -9.68897, z: 0.509946
Gyroscope: x: -0.00174533, y: 0, z: 0.00174533
Done 1 Frame! Time: 3570.331ms & FPS: 0.28
Traceback (most recent call last):
  File "stream3in1.py", line 57, in <module>
    imu_frames = imu_pipeline.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

Third, I unplugged the RealSense for a minute, plugged it back in, and ran the program again:

$ python3 stream3in1.py 

using librealsense sdk version: 2.32.1
Accelerometer: x: 0.166713, y: -8.50237, z: 0.402073
Gyroscope: x: -0.00523599, y: 0, z: 0
Done 1 Frame! Time: 3551.788ms & FPS: 0.28
Traceback (most recent call last):
  File "stream3in1.py", line 57, in <module>
    imu_frames = imu_pipeline.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

I realized it managed to get the first IMU frames, but with no depth + color (the cv2.imshow() windows didn't show anything). I tried once more to make sure:

$ python3 stream3in1.py 

using librealsense sdk version: 2.32.1
Traceback (most recent call last):
  File "stream3in1.py", line 57, in <module>
    imu_frames = imu_pipeline.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

Now it didn't even manage to get the first IMU frames. Very strange, huh!

@MartyG-RealSense
Collaborator

Frame didn't arrive within 5000 is a useful indicator. It suggests that the pipeline might be getting jammed up with frames because they are not being released properly, causing timeouts because the pipeline cannot respond to a poll for new frames in a timely manner.
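
One way to keep the application alive while investigating this (a sketch, assuming your pyrealsense2 build exposes try_wait_for_frames(), which returns a success flag instead of raising on timeout):

import pyrealsense2 as rs

imu_pipeline = rs.pipeline()
imu_config = rs.config()
imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
imu_pipeline.start(imu_config)

try:
    for _ in range(100):
        ok, imu_frames = imu_pipeline.try_wait_for_frames(1000)  # 1 s timeout
        if not ok:
            print('No IMU frameset within 1 s; skipping this iteration')
            continue
        accel = imu_frames.first_or_default(rs.stream.accel)
        if accel:
            print(accel.as_motion_frame().get_motion_data())
finally:
    imu_pipeline.stop()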

@juliussin
Author

@MartyG-RealSense Sure, thank you for your explanation. I don't fully understand how the pipeline works; for example, if I have a pipeline for the gyro and accel with different FPS, at what FPS will I receive the framesets?

@MartyG-RealSense
Collaborator

The discussion in the link below is a good reference for the timing differences between the depth and IMU streams:

#3205.

@juliussin
Author

juliussin commented May 10, 2020

Okay, so I've tried a callback function to deal with it. If I may conclude: the cause of this error when streaming depth + RGB + IMU is having more than one pipeline. I don't know why, but if I open only one pipeline and stream all three through it with a callback function, I can get all the IMU data (accelerometer and gyroscope at different FPS), but depth arrives only as single corrupted frames. If I use two pipelines, then one of the pipelines does not work correctly, or receives frames very rarely (like one every 5-15 seconds).

@MartyG-RealSense If I may ask, do you know of any reference about the Python callback function giving only a single frame, and not a composite_frame, when I stream RGB and depth? In the C++ callback example code, the callback is supposed to receive a frameset (which I think corresponds to composite_frame in Python). That's why I said depth arrives corrupted: when the callback function (of the pipeline with color + depth enabled) is called, it receives only one frame with a stream type of stream.depth, but the frame cannot be cast as a depth_frame.

Here is the script:

import pyrealsense2 as rs
import numpy as np
import cv2
import time

print('\nusing librealsense sdk version: ' + str(rs.__version__))

frame = None
frame_new = False
process_done = True


def callback(frames):  # Callback just simply put frame(s) to global variable
    global frame, frame_new, process_done
    if process_done:
        frame = frames
        frame_new = True


# Stream Setting
device_id = None  # Default: None
accel_fps = 63
gyro_fps = 200
depth_resolution = (640, 360)
depth_fps = 30
rgb_resolution = (640, 360)
rgb_fps = 30


pipeline = rs.pipeline()
config = rs.config()
if device_id is not None:
    config.enable_device(device_id)
config.enable_stream(rs.stream.depth, depth_resolution[0], depth_resolution[1], rs.format.z16, depth_fps)
config.enable_stream(rs.stream.color, rgb_resolution[0], rgb_resolution[1], rs.format.bgr8, rgb_fps)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, accel_fps)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, gyro_fps)
profile = pipeline.start(config, callback)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
align = rs.align(rs.stream.color)

try:
    frame_count = 0  # If needed
    while True:
        if frame_new:
            process_done = False
            print(type(frame))
            print(str(frame.get_profile().stream_type()))
            if frame.get_profile().stream_type() == rs.stream.gyro:
                print('Gyro: ' + str(frame.as_motion_frame().get_motion_data()))
            if frame.get_profile().stream_type() == rs.stream.accel:
                print('Accel: ' + str(frame.as_motion_frame().get_motion_data()))
            
            if frame.get_profile().stream_type() == rs.stream.depth:
                print('DEPTH!!!')
                print(np.asanyarray(frame.as_depth_frame().get_data()))
            if frame.get_profile().stream_type() == rs.stream.color:
                print('COLOR!!!')
                print(np.asanyarray(frame.as_video_frame().get_data()))
                # depth_image = np.asanyarray(depth_frame.get_data())
                # depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
                # cv2.imshow("Depth", depth_colormap)
                # cv2.waitKey(1)
            process_done = True
            frame_new = False

finally:
    pipeline.stop()
    # cv2.destroyAllWindows

The output:

using librealsense sdk version: 2.32.1
<class 'pyrealsense2.frame'>
stream.gyro
Gyro: x: -0.0122173, y: 0, z: -0.00872665
<class 'pyrealsense2.frame'>
stream.gyro
Gyro: x: -0.00349066, y: 0.00523599, z: -0.00174533

...

<class 'pyrealsense2.frame'>
stream.accel
Accel: x: 0.0490332, y: -9.66936, z: 0.627626
<class 'pyrealsense2.frame'>
stream.gyro
Gyro: x: -0.00174533, y: 0, z: 0
<class 'pyrealsense2.frame'>
stream.gyro
Gyro: x: -0.00349066, y: 0, z: 0
<class 'pyrealsense2.frame'>
stream.gyro
Gyro: x: -0.00349066, y: 0, z: 0
<class 'pyrealsense2.frame'>
stream.depth
DEPTH!!!
Traceback (most recent call last):
  File "/home/jetson/project/TA/to_share.py", line 57, in <module>
    print(np.asanyarray(frame.as_depth_frame().get_data()))
RuntimeError: null pointer passed for argument "frame_ref"

@MartyG-RealSense
Collaborator

This subject is somewhat outside my current level of programming knowledge. I did, though, find a reference for using a callback with depth and RGB in Python.

#5417
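
For reference, the essence of the approach in that issue appears to be enabling all streams on a single pipeline and testing the type of each incoming frame in the callback before casting it. A sketch (untested here):

import pyrealsense2 as rs

def callback(frame):
    # Video streams arrive bundled as a frameset; IMU samples arrive as
    # individual motion frames, so check the type before casting.
    if frame.is_frameset():
        fs = frame.as_frameset()
        depth = fs.get_depth_frame()
        color = fs.get_color_frame()
        # ... hand depth/color off for processing ...
    elif frame.is_motion_frame():
        sample = frame.as_motion_frame().get_motion_data()  # accel or gyro
        # ... hand the IMU sample off for processing ...

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 360, rs.format.bgr8, 30)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
pipeline.start(config, callback)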

@juliussin
Author


WOW! Right on target. Thank you very much! Yes, it works and it helps me a lot! I also tried to search other issues but didn't find that one; I should have been more thorough.

Okay, I think this three-stream problem is solved. Maybe this is just an imperfect feature, since frameset comes from C++ and I don't know what the difference is between it and composite_frame, or why the developers made it different in C++ and Python. But I still can't align directly, since the align function needs the composite_frame data type. I think I can manage to make a new composite_frame with the depth and color data using the #5417 technique; a sketch of the idea is below. I will share my code after I do some tests. Can I propose this as an enhancement?
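
A sketch of that idea, assuming depth and color arrive together as one frameset: pass a frame_queue as the callback so the work happens outside the callback thread, then feed the frameset to rs.align directly:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 360, rs.format.bgr8, 30)

queue = rs.frame_queue(2)  # a frame_queue can serve as the pipeline callback
pipeline.start(config, queue)
align = rs.align(rs.stream.color)

try:
    for _ in range(300):
        frame = queue.wait_for_frame()
        if frame.is_frameset():
            aligned = align.process(frame.as_frameset())
            depth = aligned.get_depth_frame()
            color = aligned.get_color_frame()
finally:
    pipeline.stop()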

And last but not least, about the IMU calibration: currently the uncalibrated IMU doesn't affect me, but I'd still like to find out the reason why. Again, thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 10, 2020

I'm very glad that the link was able to help you!

Yes, feel free to post a new issue containing your feature request and I will label it as Enhancement.

My understanding of the reason for calibrating the IMU is that otherwise you can get offset values from the accelerometer and gyro even when the camera is stationary: at rest, the gyro should read zero, and the accelerometer magnitude should be close to gravity.
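
A quick stationary sanity check along those lines (a sketch): at rest, the accelerometer magnitude should be close to standard gravity, 9.80665 m/s², the same figure rs-imu-calibration compares against:

import math
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    a = frames.first_or_default(rs.stream.accel).as_motion_frame().get_motion_data()
    magnitude = math.sqrt(a.x ** 2 + a.y ** 2 + a.z ** 2)
    print('|accel| = %.5f m/s^2 (expect ~9.80665 when stationary)' % magnitude)
finally:
    pipeline.stop()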

@MartyG-RealSense
Collaborator

Case closed due to no further comments received. Thanks again!

@zamora18

@juliussin do you have updated code you could share? Did you end up using/testing with two pipelines? Can you share the frame rates at which you actually receive gyroscope and accelerometer data (were you able to get them at 200 FPS and 63 FPS using the callback)?
I am getting the null pointer passed for argument "frame_ref" error. I was not able to see how #5417 resolved the issue of extracting IMU or RGB data; I tried a call similar to their example, frame.as_frameset().get_motion_frame().get_data(), but it returned an error.

@a7u7a

a7u7a commented Oct 23, 2020

Hi, I'm getting a segmentation fault when trying to use a callback on the pipeline to handle framesets, as mentioned here: #5417

This is my code:

import pyrealsense2 as rs

pipeline = rs.pipeline()
def test_callback(fs): 
    depth_data = fs.as_frameset().get_depth_frame().get_data()
    pipeline.stop() 
profile = pipeline.start(test_callback)

I tried debugging with faulthandler, but it returns nothing at all when run with: $ python3 -q -X faulthandler test.py

@MartyG-RealSense
Collaborator

Hi @a7u7a I looked at your code carefully. Your script does not seem to define what the contents of fs should be, so you could use the default frame in its place. I would also change test_callback to callback, and edit the pipeline start instruction appropriately to reflect this change.

Also, I believe that you cannot call pipeline.stop() if the pipeline has not already been started, so I moved pipeline.stop() below the start statement.

import pyrealsense2 as rs

pipeline = rs.pipeline()

def callback(frame):
    depth_data = frame.as_frameset().get_depth_frame().get_data()

profile = pipeline.start(callback)
pipeline.stop()

@a7u7a

a7u7a commented Dec 14, 2020

Hi MartyG, sorry for the late reply. Your comment was very helpful, many thanks!

@07hokage


Hi @MartyG-RealSense, I tried this sample script, but the callback function doesn't seem to be invoked in my case (I'm using a D455 device). I put some print statements in the callback function, and they never get printed, which confirms the callback is not being invoked.

@07hokage


Issue cleared! Adding a delay of a few milliseconds before accessing the callback data did the trick.
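
A sketch of that fix, with a hypothetical callback that stores the latest frameset in a global (the exact delay needed may vary):

import time
import pyrealsense2 as rs

latest = {}

def callback(frame):
    if frame.is_frameset():
        latest['fs'] = frame.as_frameset()

pipeline = rs.pipeline()
pipeline.start(callback)
time.sleep(0.5)  # give the callback time to deliver the first frames
if 'fs' in latest:
    print('Got a frameset with %d frames' % latest['fs'].size())
pipeline.stop()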

@MartyG-RealSense
Collaborator

Hi @07hokage It is great to hear that you succeeded - thanks very much for the update!
