
D455 on TX2 functional in realsense-viewer & example code, but code from a different directory shows RuntimeError #8904

Closed
screw-44 opened this issue Apr 26, 2021 · 10 comments


@screw-44


Required Info
Camera Model: D455
Firmware Version: 05.12.12.100
Operating System & Version: Linux (Ubuntu 14/16/17)
Kernel Version (Linux Only): 4.9.201-tegra
Platform: NVIDIA Jetson
SDK Version: latest
Language: Python
Segment: {Robot/Smartphone/VR/AR/others}

Issue Description

My D455 works normally on x86 Ubuntu. When I switched to the TX2, I followed the official steps:
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md

On the first try, I used CMake:
cmake ../ -DCMAKE_BUILD_TYPE=Release -DBUILD_EXAMPLES=true -DFORCE_RSUSB_BACKEND=ON -DBUILD_PYTHON_BINDINGS:bool=true -DBUILD_WITH_CUDA=true
and then ran make install on the source.

The result: realsense-viewer is fully functional, and the code in the C++ examples and the Python examples (in wrappers) runs normally. But if I use my own code (correct code that works on x86 Ubuntu using pip3 install pyrealsense2), an error occurs.

My code finds that the device_product_line is D400.

But at this line:
profile_IMU = pipeline_IMU.start(config_IMU)
it raises:
RuntimeError: No device connected

My question is: how can I fix it?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 26, 2021

Hi @screw-44 Assuming that the Python wrapper has installed correctly (since you are able to run the Python wrapper examples), is your script able to run if you remove the reference to a custom configuration from within the brackets of the pipeline start instruction, please?

profile_IMU = pipeline_IMU.start()
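The difference between the two start calls can also be captured in a small fallback helper. This is only a sketch with a generic pipeline object (the function name and return convention are assumptions, not part of the pyrealsense2 API): it tries the custom config first and falls back to the default configuration if that raises.

```python
def start_with_fallback(pipeline, config=None):
    """Try pipeline.start(config); if that raises RuntimeError
    (e.g. "No device connected"), retry with the default
    configuration so the two failure modes can be told apart.

    Returns (profile, used_custom_config)."""
    if config is not None:
        try:
            return pipeline.start(config), True
        except RuntimeError:
            pass
    # Fall back to the default configuration
    return pipeline.start(), False
```

If the default configuration starts while the custom one does not, the problem is likely in the requested streams rather than in device detection.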

@screw-44
Author

> Hi @screw-44 Assuming that the Python wrapper has installed correctly (since you are able to run the Python wrapper examples), is your script able to run if you remove the reference to a custom configuration from within the brackets of the pipeline start instruction, please?
>
> profile_IMU = pipeline_IMU.start()

I tried to fix the problem yesterday by re-running CMake and make install on the source. Now I can't even import pyrealsense2, though realsense-viewer still works just fine. The examples I mentioned in my previous post are the ones in the build folder,
~/Desktop/librealsense/build/examples; I can't get the code in ~/Desktop/librealsense/examples or in ~/Desktop/librealsense/wrappers/python/examples working. The former C++ code doesn't respond to my make command, and the Python code raises: ModuleNotFoundError: No module named 'pyrealsense2'
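For what it's worth, when pyrealsense2 is built from source the compiled module often lands outside Python's default search path (the exact directory depends on the CMake install step), and the import fails until that directory is added. A minimal sketch of the usual workaround; the directory shown is only an assumption and should be replaced with wherever your build actually put pyrealsense2*.so:

```python
import sys

# Assumed location: replace with wherever your build installed
# pyrealsense2*.so (e.g. ~/Desktop/librealsense/build/wrappers/python,
# or /usr/local/lib after `sudo make install`).
PYREALSENSE2_DIR = "/usr/local/lib"

if PYREALSENSE2_DIR not in sys.path:
    sys.path.append(PYREALSENSE2_DIR)

# After this, `import pyrealsense2 as rs` should succeed if the
# module file is actually in that directory.
```

Setting the PYTHONPATH environment variable to the same directory achieves the same thing without editing the script.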

@screw-44
Author

Can I reinstall the SDK using the Debian packages (the method in https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md) to make it work normally in both C++ and Python?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 27, 2021

An installation based on Debian packages would not include wrappers such as Python in the librealsense build.

You could make another attempt to build from source with RSUSB over an internet connection - a method that is not dependent on Linux or kernel versions and does not require patching. I would recommend performing a Python-equipped RSUSB build with CMake using the Jetson guide in the link below if you did not use this guide originally.

#6964 (comment)

@screw-44
Author

Thanks for the solution; it did fix some problems, and I can now use the D455 as a camera normally.

But I still can't read the IMU data from the camera.
I tried my code on both x86 and the Jetson; x86 works just fine, but the TX2 raises this error:

Traceback (most recent call last):
  File "Cam_IMU.py", line 85, in <module>
    cap = CapManager(debug = True)
  File "Cam_IMU.py", line 22, in __init__
    self.IMU_pipeline.start(IMU_config)
RuntimeError: No device connected

My code is:

        # Create IMU pipeline
        self.IMU_pipeline = rs.pipeline()
        IMU_config = rs.config()
        IMU_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
        IMU_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
        self.IMU_pipeline.start(IMU_config)

It is really weird that it works on x86 but not on the Jetson. Also, on x86 I installed pyrealsense2 with pip3.
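One way to narrow this down is to check whether the device on the Jetson actually advertises accel and gyro stream profiles before calling start(). The hardware query needs pyrealsense2 and a connected camera, so the check itself is factored into a plain helper here (a sketch; the commented-out query is the assumed pyrealsense2 pattern, not something confirmed in this thread):

```python
def has_motion_streams(stream_types):
    """Given the stream type names reported by a device's sensors,
    report whether both accelerometer and gyroscope streams are
    available."""
    types = set(stream_types)
    return {"accel", "gyro"} <= types

# With real hardware this would be fed from pyrealsense2, roughly:
#   import pyrealsense2 as rs
#   ctx = rs.context()
#   types = [p.stream_type().name for d in ctx.query_devices()
#            for s in d.query_sensors() for p in s.get_stream_profiles()]
#   print(has_motion_streams(types))
```

If the helper reports False on the Jetson but True on x86, the issue is in device enumeration over the RSUSB backend rather than in the script.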

This is the full code:

import pyrealsense2 as rs
import numpy as np
import cv2 as cv
import math
import time


class CapManager():
    def __init__(self,cfg = None,camera_number = 0,debug = False):
        self.debug = debug

        # Create frame pipeline (the frame pipeline and the IMU pipeline must be separated)
        self.frame_pipeline = rs.pipeline()
        frame_config = rs.config()
        frame_config.enable_stream(rs.stream.depth, rs.format.z16, 30)
        frame_config.enable_stream(rs.stream.color, rs.format.bgr8, 30)
        self.frame_pipeline.start(frame_config)
        # Create IMU pipeline
        self.IMU_pipeline = rs.pipeline()
        IMU_config = rs.config()
        IMU_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
        IMU_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
        self.IMU_pipeline.start(IMU_config)

        # Create an align object
        # rs.align allows us to perform alignment of depth frames to others frames
        # The "align_to" is the stream type to which we plan to align depth frames.
        align_to = rs.stream.color
        self.align = rs.align(align_to)
        
        # Get device product line
        pipeline_wrapper = rs.pipeline_wrapper(self.frame_pipeline)
        pipeline_profile = frame_config.resolve(pipeline_wrapper)
        device = pipeline_profile.get_device()
        device_product_line = str(device.get_info(rs.camera_info.product_line))
        print("device_product_line: {}".format(device_product_line))


    def read(self):
        try:
            # Get color image as well as depth image
            frames = self.frame_pipeline.wait_for_frames()

            # Align the depth frame to color frame
            aligned_frames = self.align.process(frames)

            color_frame = frames.get_color_frame()
            depth_frame = aligned_frames.get_depth_frame()
            color_image = np.asanyarray(color_frame.get_data())
            depth_image = np.asanyarray(depth_frame.get_data())
            # Get IMU data            
            IMU = self.IMU_pipeline.wait_for_frames()
            accel = np.asarray( [ IMU[0].as_motion_frame().get_motion_data().x,
                                  IMU[0].as_motion_frame().get_motion_data().y,
                                  IMU[0].as_motion_frame().get_motion_data().z ] )
            gyro  = np.asarray( [ IMU[1].as_motion_frame().get_motion_data().x, 
                                  IMU[1].as_motion_frame().get_motion_data().y, 
                                  IMU[1].as_motion_frame().get_motion_data().z ] )

            return ( color_image, depth_image, accel, gyro )
        except Exception as e:
            print("exception is : {}".format(e))

    def reConfigCamera(self, fps=60):
        # first stop current pipeline
        self.frame_pipeline.stop()
        # re config the pipeline
        self.frame_pipeline = rs.pipeline()
        frame_config = rs.config()
        frame_config.enable_stream(rs.stream.depth, rs.format.z16, fps)
        frame_config.enable_stream(rs.stream.color, rs.format.bgr8, fps)
        self.frame_pipeline.start(frame_config)
   




    def __del__(self):
        self.frame_pipeline.stop()
        self.IMU_pipeline.stop()
            
    

if __name__ == "__main__":
    cap = CapManager(debug = True)
    fps = 60
    cv.namedWindow("color frame")
    cv.createTrackbar("fps settings: ", "color frame", 1, 3, lambda x:x+1)

    while True:
        color_image, depth_image, accel, gyro = cap.read()
        
        if cap.debug:
            print("accelerometer: ", accel)
            print("gyro: ", gyro)
            print("color_frame size", color_image.shape)
            print("depth_frame size", depth_image.shape)
        
        cv.imshow("color frame", color_image)
        cv.imshow("depth frame", depth_image)

        if fps != cv.getTrackbarPos("fps settings: ", "color frame"):
            fps = cv.getTrackbarPos("fps settings: ", "color frame")
            cap.reConfigCamera(fps*30)

        key = cv.waitKey(1)
        if key == ord('q'):
            break

    cv.destroyAllWindows()
    

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 28, 2021

Your definition of the separate pipeline configurations for IMU and depth / color looks consistent with the code in what I consider to be the best Python script reference for doing so (in the link below).

#5628 (comment)

You confirmed earlier that the Python examples work, suggesting that the Python wrapper itself is installed correctly.

It may be worth running a Python test script that only accesses the IMU, to test whether the problem can be replicated with other IMU scripts. With the script below, the RealSense user in that case initially thought the script was faulty but later found that it worked fine on a different computer.

#8492 (comment)
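As a side note, an IMU-only test script becomes shorter if the per-axis get_motion_data() calls are folded into one helper. This is only a sketch assuming pyrealsense2's motion-frame interface (as_motion_frame(), get_motion_data() with .x/.y/.z fields):

```python
import numpy as np

def motion_to_array(frame):
    """Convert a pyrealsense2 motion frame to a 3-element numpy array,
    calling get_motion_data() once instead of once per axis."""
    m = frame.as_motion_frame().get_motion_data()
    return np.asarray([m.x, m.y, m.z])

# With a running IMU pipeline this would be used roughly as:
#   frames = pipeline.wait_for_frames()
#   accel = motion_to_array(frames[0])
#   gyro  = motion_to_array(frames[1])
```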

@screw-44
Author

Thanks for the suggestion.

I tried the code in #8492 and it works.
Then I added the frame pipeline to it, and that works too (the code is listed below).

But when I tried to write it as a class, just copying and pasting the code, it showed a different error:

        p = rs.pipeline()
        conf = rs.config()
        conf.enable_stream(rs.stream.accel)
        conf.enable_stream(rs.stream.gyro)
        prof = p.start(conf)
        self.imu_pipeline = p

        fp = rs.pipeline()
        conf = rs.config()
        conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
        conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
        prof = fp.start(conf)
        self.frame_pipeline = fp
Traceback (most recent call last):
  File "Cam_IMU.py", line 105, in <module>
    cap = CapManager(debug = True)
  File "Cam_IMU.py", line 36, in __init__
    prof = fp.start(conf)
RuntimeError: failed to set power state
Exception ignored in: <bound method CapManager.__del__ of <__main__.CapManager object at 0x7f97da8a90>>
Traceback (most recent call last):
  File "Cam_IMU.py", line 99, in __del__
    self.frame_pipeline.stop()
AttributeError: 'CapManager' object has no attribute 'frame_pipeline'

Is it because I can't use two pipelines in one class? I find it is always the later pipeline that raises the error.

The code that doesn't involve a class looks like this:

import pyrealsense2 as rs
import numpy as np
import cv2 as cv

p = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.accel)
conf.enable_stream(rs.stream.gyro)
prof = p.start(conf)

fp = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
prof = fp.start(conf)

align_to = rs.stream.color
align = rs.align(align_to)


while True:
    f = p.wait_for_frames()
    accel = f[0].as_motion_frame().get_motion_data()
    gyro = f[1].as_motion_frame().get_motion_data()
    accel = np.asarray([accel.x, accel.y, accel.z])
    gyro = np.asarray([gyro.x, gyro.y, gyro.z])
    print("accelerometer: ", accel)
    print("gyro: ", gyro)

    # Get color image as well as depth image
    frames = fp.wait_for_frames()
    # Align the depth frame to color frame
    aligned_frames = align.process(frames)
    color_frame = frames.get_color_frame()
    depth_frame = aligned_frames.get_depth_frame()
    color_image = np.asanyarray(color_frame.get_data())
    depth_image = np.asanyarray(depth_frame.get_data())
    cv.imshow("color image", color_image)
    cv.imshow("depth_image", depth_image)
    cv.waitKey(10)

The full code that involves a class looks like this:

import pyrealsense2 as rs
import numpy as np
import cv2 as cv
import math
import time


class CapManager():
    def __init__(self,cfg = None,camera_number = 0,debug = False):
        self.debug = debug

        # Create frame pipeline (the frame pipeline and the IMU pipeline must be separated)
        #self.frame_pipeline = rs.pipeline()
        #self.frame_config = rs.config()
        #self.frame_config.enable_stream(rs.stream.depth, rs.format.z16, 60)
        #self.frame_config.enable_stream(rs.stream.color, rs.format.bgr8, 60)
        #self.frame_pipeline.start(self.frame_config)
        # Create IMU pipeline
        #self.imu_pipeline = rs.pipeline()  
        #self.conf = rs.config()
        #self.conf.enable_stream(rs.stream.accel)
        #self.conf.enable_stream(rs.stream.gyro)
        #self.imu_pipeline.start(self.conf)

        p = rs.pipeline()
        conf = rs.config()
        conf.enable_stream(rs.stream.accel)
        conf.enable_stream(rs.stream.gyro)
        prof = p.start(conf)
        self.imu_pipeline = p

        fp = rs.pipeline()
        conf = rs.config()
        conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
        conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
        prof = fp.start(conf)
        self.frame_pipeline = fp

        #self.IMU_pipeline = rs.pipeline()
        #IMU_config = rs.config()
        #IMU_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
        #IMU_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
        #self.IMU_pipeline.start(IMU_config)

        # Create an align object
        # rs.align allows us to perform alignment of depth frames to others frames
        # The "align_to" is the stream type to which we plan to align depth frames.
        align_to = rs.stream.color
        self.align = rs.align(align_to)
        
        # Get device product line
        pipeline_wrapper = rs.pipeline_wrapper(self.frame_pipeline)
        pipeline_profile = frame_config.resolve(pipeline_wrapper)
        device = pipeline_profile.get_device()
        device_product_line = str(device.get_info(rs.camera_info.product_line))
        print("device_product_line: {}".format(device_product_line))


    def read(self):
        try:
            # Get color image as well as depth image
            frames = self.frame_pipeline.wait_for_frames()

            # Align the depth frame to color frame
            aligned_frames = self.align.process(frames)

            color_frame = frames.get_color_frame()
            depth_frame = aligned_frames.get_depth_frame()
            color_image = np.asanyarray(color_frame.get_data())
            depth_image = np.asanyarray(depth_frame.get_data())
            # Get IMU data            
            IMU = self.imu_pipeline.wait_for_frames()
            accel = np.asarray( [ IMU[0].as_motion_frame().get_motion_data().x,
                                  IMU[0].as_motion_frame().get_motion_data().y,
                                  IMU[0].as_motion_frame().get_motion_data().z ] )
            gyro  = np.asarray( [ IMU[1].as_motion_frame().get_motion_data().x, 
                                  IMU[1].as_motion_frame().get_motion_data().y, 
                                  IMU[1].as_motion_frame().get_motion_data().z ] )

            return ( color_image, depth_image, accel, gyro )
        except Exception as e:
            print("exception is : {}".format(e))

    def reConfigCamera(self, fps=60):
        # first stop current pipeline
        self.frame_pipeline.stop()
        # re config the pipeline
        self.frame_pipeline = rs.pipeline()
        frame_config = rs.config()
        frame_config.enable_stream(rs.stream.depth, rs.format.z16, fps)
        frame_config.enable_stream(rs.stream.color, rs.format.bgr8, fps)
        self.frame_pipeline.start(frame_config)
   




    def __del__(self):
        self.frame_pipeline.stop()
        self.imu_pipeline.stop()
            
    

if __name__ == "__main__":
    p = rs.pipeline()
    conf = rs.config()
    conf.enable_stream(rs.stream.accel)
    conf.enable_stream(rs.stream.gyro)
    prof = p.start(conf)

    fp = rs.pipeline()
    conf = rs.config()
    conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
    conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
    prof = fp.start(conf)

    cap = CapManager(debug = True)
    fps = 60
    cv.namedWindow("color frame")
    cv.createTrackbar("fps settings: ", "color frame", 1, 3, lambda x:x+1)

    while True:
        color_image, depth_image, accel, gyro = cap.read()
        
        if cap.debug:
            print("accelerometer: ", accel)
            print("gyro: ", gyro)
            print("color_frame size", color_image.shape)
            print("depth_frame size", depth_image.shape)
        
        cv.imshow("color frame", color_image)
        cv.imshow("depth frame", depth_image)

        if fps != cv.getTrackbarPos("fps settings: ", "color frame"):
            fps = cv.getTrackbarPos("fps settings: ", "color frame")
            cap.reConfigCamera(fps*30)

        key = cv.waitKey(1)
        if key == ord('q'):
            break

    cv.destroyAllWindows()

Thank you very much for your patience and help; it really matters a lot to me!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 28, 2021

My Python programming knowledge does not extend to the use of classes. However, I did find an SDK Python example program, pyglet_pointcloud_viewer.py, that makes use of them, so it can act as an example of classes in a librealsense Python program.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/pyglet_pointcloud_viewer.py#L62

I note that in this example, the class name does not have brackets after it:

(screenshot of the class definition in pyglet_pointcloud_viewer.py)

@screw-44
Author

The example you mentioned doesn't seem to contain the part that gets the IMU data from the camera.

I tried putting the two pipelines in different classes, and the error still occurs. It seems it won't let me start two pipelines.

For now, I have sort of worked around the issue by using only one pipeline:

        self.pipeline = rs.pipeline()
        conf = rs.config()

        conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
        conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)

        conf.enable_stream(rs.stream.accel)
        conf.enable_stream(rs.stream.gyro)

        prof = self.pipeline.start(conf)

This works on my Jetson setup.
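With a single combined pipeline, the read path has to separate image frames from motion frames itself. A sketch of that split (the helper name is an assumption; it relies on frames answering is_motion_frame(), which pyrealsense2 frames do):

```python
def split_frameset(frames):
    """Partition a combined frameset into (video_frames, motion_frames),
    so depth/color processing and IMU processing can stay separate even
    though they now share one pipeline."""
    video, motion = [], []
    for f in frames:
        (motion if f.is_motion_frame() else video).append(f)
    return video, motion
```

In a read() method, the frameset returned by wait_for_frames() on the one pipeline would feed this directly, e.g. video, motion = split_frameset(frames).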

@MartyG-RealSense
Collaborator

Thanks very much @screw-44 for the update - great news that you developed a solution that works for you!
