D455 on TX2 functional in realsense-viewer & example codes, but codes from a different directory show RuntimeError #8904
Hi @screw-44 Assuming that the Python wrapper has installed correctly (since you are able to run the Python wrapper examples), is your script able to run if you remove the reference to a custom configuration from within the brackets of the pipeline start instruction, please?

```python
profile_IMU = pipeline_IMU.start()
```

---
I tried to fix the problem yesterday by re-running cmake / make install on the source code. Now I can't even import pyrealsense2, but realsense-viewer still works just fine. The examples I mentioned in my previous post are the examples in the build folder.

---
Can I reinstall the SDK using Debian packages (the method in https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md) to make it work normally in both C++ and Python?

---
An installation based on Debian packages would not include wrappers such as Python in the librealsense build. You could make another attempt to build from source with RSUSB over an internet connection - a method that is not dependent on Linux versions or kernel versions and does not require kernel patching. I would recommend performing a Python-equipped RSUSB build with CMake using the Jetson guide in the link below if you did not use this guide originally.

---
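The RSUSB build referred to above boils down to a CMake configuration along these lines. This is a sketch assuming a fresh clone of librealsense; the exact flag set (e.g. CUDA support) depends on your setup:

```shell
# Sketch of an RSUSB + Python build on a Jetson (paths illustrative).
# RSUSB talks to the camera over libusb, so no kernel patching is needed.
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense && mkdir build && cd build

cmake ../ \
    -DCMAKE_BUILD_TYPE=Release \
    -DFORCE_RSUSB_BACKEND=ON \
    -DBUILD_PYTHON_BINDINGS:bool=true \
    -DBUILD_EXAMPLES=true

make -j$(nproc)
sudo make install
```

After installing, make sure the directory containing the built `pyrealsense2` module is on `PYTHONPATH` before testing `import pyrealsense2`.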
Thanks for the solution, it did fix some problems. I can now use the D455 as a camera normally, but I still can't read the IMU data from it:

```
Traceback (most recent call last):
  File "Cam_IMU.py", line 85, in <module>
    cap = CapManager(debug = True)
  File "Cam_IMU.py", line 22, in __init__
    self.IMU_pipeline.start(IMU_config)
RuntimeError: No device connected
```

My code is:

```python
# Create IMU pipeline
self.IMU_pipeline = rs.pipeline()
IMU_config = rs.config()
IMU_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
IMU_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
self.IMU_pipeline.start(IMU_config)
```

It is really weird that it works on x86 but not on the Jetson. This is the full code I used on x86:

```python
import pyrealsense2 as rs
import numpy as np
import cv2 as cv
import math
import time


class CapManager():
    def __init__(self, cfg=None, camera_number=0, debug=False):
        self.debug = debug
        # Create frame pipeline (frame pipeline and IMU pipeline must be separated)
        self.frame_pipeline = rs.pipeline()
        frame_config = rs.config()
        frame_config.enable_stream(rs.stream.depth, rs.format.z16, 30)
        frame_config.enable_stream(rs.stream.color, rs.format.bgr8, 30)
        self.frame_pipeline.start(frame_config)
        # Create IMU pipeline
        self.IMU_pipeline = rs.pipeline()
        IMU_config = rs.config()
        IMU_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
        IMU_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
        self.IMU_pipeline.start(IMU_config)
        # Create an align object
        # rs.align allows us to perform alignment of depth frames to other frames
        # The "align_to" is the stream type to which we plan to align depth frames.
        align_to = rs.stream.color
        self.align = rs.align(align_to)
        # Get device product line
        pipeline_wrapper = rs.pipeline_wrapper(self.frame_pipeline)
        pipeline_profile = frame_config.resolve(pipeline_wrapper)
        device = pipeline_profile.get_device()
        device_product_line = str(device.get_info(rs.camera_info.product_line))
        print("device_product_line: {}".format(device_product_line))

    def read(self):
        try:
            # Get color image as well as depth image
            frames = self.frame_pipeline.wait_for_frames()
            # Align the depth frame to color frame
            aligned_frames = self.align.process(frames)
            color_frame = frames.get_color_frame()
            depth_frame = aligned_frames.get_depth_frame()
            color_image = np.asanyarray(color_frame.get_data())
            depth_image = np.asanyarray(depth_frame.get_data())
            # Get IMU data
            IMU = self.IMU_pipeline.wait_for_frames()
            accel = np.asarray([IMU[0].as_motion_frame().get_motion_data().x,
                                IMU[0].as_motion_frame().get_motion_data().y,
                                IMU[0].as_motion_frame().get_motion_data().z])
            gyro = np.asarray([IMU[1].as_motion_frame().get_motion_data().x,
                               IMU[1].as_motion_frame().get_motion_data().y,
                               IMU[1].as_motion_frame().get_motion_data().z])
            return (color_image, depth_image, accel, gyro)
        except Exception as e:
            print("exception is : {}".format(e))

    def reConfigCamera(self, fps=60):
        # First stop the current pipeline
        self.frame_pipeline.stop()
        # Reconfigure the pipeline
        self.frame_pipeline = rs.pipeline()
        frame_config = rs.config()
        frame_config.enable_stream(rs.stream.depth, rs.format.z16, fps)
        frame_config.enable_stream(rs.stream.color, rs.format.bgr8, fps)
        self.frame_pipeline.start(frame_config)

    def __del__(self):
        self.frame_pipeline.stop()
        self.IMU_pipeline.stop()


if __name__ == "__main__":
    cap = CapManager(debug=True)
    fps = 60
    cv.namedWindow("color frame")
    cv.createTrackbar("fps settings: ", "color frame", 1, 3, lambda x: x + 1)
    while True:
        color_image, depth_image, accel, gyro = cap.read()
        if cap.debug:
            print("accelerometer: ", accel)
            print("gyro: ", gyro)
            print("color_frame size", color_image.shape)
            print("depth_frame size", depth_image.shape)
        cv.imshow("color frame", color_image)
        cv.imshow("depth frame", depth_image)
        if fps != cv.getTrackbarPos("fps settings: ", "color frame"):
            fps = cv.getTrackbarPos("fps settings: ", "color frame")
            cap.reConfigCamera(fps * 30)
        key = cv.waitKey(1)
        if key == ord('q'):
            break
    cv.destroyAllWindows()
```

---
Your definitions of the separate pipeline configurations for IMU and depth / color look consistent with the code in what I consider to be the best script reference for doing so in Python (in the link below). You confirmed earlier that the Python examples work, suggesting that the Python wrapper itself is installed correctly. It may be worth running a Python test script that only accesses the IMU, to check whether the problem can be replicated with other IMU scripts. With the script below, the RealSense user in that case thought the script was faulty but later found that it worked fine on a different computer.

---
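One way to sanity-check accelerometer readings once they do arrive is to compare their magnitude against standard gravity: a stationary IMU should report roughly 9.8 m/s² in total, so zeros or garbage stand out immediately. The helper below is a hypothetical addition by the editor, independent of pyrealsense2:

```python
import math

GRAVITY = 9.80665  # standard gravity, m/s^2


def accel_looks_valid(sample, tolerance=2.0):
    """Return True if a stationary accelerometer sample (x, y, z) has a
    magnitude close to standard gravity, i.e. the IMU is delivering
    plausible data rather than zeros or garbage."""
    magnitude = math.sqrt(sum(c * c for c in sample))
    return abs(magnitude - GRAVITY) <= tolerance


# A stationary D455 lying flat reports roughly (0, -9.8, 0):
print(accel_looks_valid((0.0, -9.8, 0.05)))  # True
print(accel_looks_valid((0.0, 0.0, 0.0)))    # False: no gravity component
```

Feeding the `accel` array from the scripts in this thread into such a check distinguishes "IMU streaming garbage" from "IMU not streaming at all".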
Thanks for the suggestion. I tried the code in #8492 and it works. But when I tried to write it into a class - I just copied and pasted the code - it shows a different error.

```python
p = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.accel)
conf.enable_stream(rs.stream.gyro)
prof = p.start(conf)
self.imu_pipeline = p

fp = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
prof = fp.start(conf)
self.frame_pipeline = fp
```

```
Traceback (most recent call last):
  File "Cam_IMU.py", line 105, in <module>
    cap = CapManager(debug = True)
  File "Cam_IMU.py", line 36, in __init__
    prof = fp.start(conf)
RuntimeError: failed to set power state
Exception ignored in: <bound method CapManager.__del__ of <__main__.CapManager object at 0x7f97da8a90>>
Traceback (most recent call last):
  File "Cam_IMU.py", line 99, in __del__
    self.frame_pipeline.stop()
AttributeError: 'CapManager' object has no attribute 'frame_pipeline'
```

Is it because I can't use two pipelines in a class? I find it is always the later pipeline that pops the error. The code that doesn't involve a class looks like this:

```python
import pyrealsense2 as rs
import numpy as np
import cv2 as cv

p = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.accel)
conf.enable_stream(rs.stream.gyro)
prof = p.start(conf)

fp = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
prof = fp.start(conf)

align_to = rs.stream.color
align = rs.align(align_to)

while True:
    f = p.wait_for_frames()
    accel = f[0].as_motion_frame().get_motion_data()
    gyro = f[1].as_motion_frame().get_motion_data()
    accel = np.asarray([accel.x, accel.y, accel.z])
    gyro = np.asarray([gyro.x, gyro.y, gyro.z])
    print("accelerometer: ", accel)
    print("gyro: ", gyro)
    # Get color image as well as depth image
    frames = fp.wait_for_frames()
    # Align the depth frame to color frame
    aligned_frames = align.process(frames)
    color_frame = frames.get_color_frame()
    depth_frame = aligned_frames.get_depth_frame()
    color_image = np.asanyarray(color_frame.get_data())
    depth_image = np.asanyarray(depth_frame.get_data())
    cv.imshow("color image", color_image)
    cv.imshow("depth_image", depth_image)
    cv.waitKey(10)
```

The full code that involves the class looks like this:

```python
import pyrealsense2 as rs
import numpy as np
import cv2 as cv
import math
import time


class CapManager():
    def __init__(self, cfg=None, camera_number=0, debug=False):
        self.debug = debug
        # Create frame pipeline (frame pipeline and IMU pipeline must be separated)
        #self.frame_pipeline = rs.pipeline()
        #self.frame_config = rs.config()
        #self.frame_config.enable_stream(rs.stream.depth, rs.format.z16, 60)
        #self.frame_config.enable_stream(rs.stream.color, rs.format.bgr8, 60)
        #self.frame_pipeline.start(self.frame_config)
        # Create IMU pipeline
        #self.imu_pipeline = rs.pipeline()
        #self.conf = rs.config()
        #self.conf.enable_stream(rs.stream.accel)
        #self.conf.enable_stream(rs.stream.gyro)
        #self.imu_pipeline.start(self.conf)
        p = rs.pipeline()
        conf = rs.config()
        conf.enable_stream(rs.stream.accel)
        conf.enable_stream(rs.stream.gyro)
        prof = p.start(conf)
        self.imu_pipeline = p
        fp = rs.pipeline()
        conf = rs.config()
        conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
        conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
        prof = fp.start(conf)
        self.frame_pipeline = fp
        #self.IMU_pipeline = rs.pipeline()
        #IMU_config = rs.config()
        #IMU_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
        #IMU_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
        #self.IMU_pipeline.start(IMU_config)
        # Create an align object
        # rs.align allows us to perform alignment of depth frames to other frames
        # The "align_to" is the stream type to which we plan to align depth frames.
        align_to = rs.stream.color
        self.align = rs.align(align_to)
        # Get device product line
        pipeline_wrapper = rs.pipeline_wrapper(self.frame_pipeline)
        pipeline_profile = frame_config.resolve(pipeline_wrapper)
        device = pipeline_profile.get_device()
        device_product_line = str(device.get_info(rs.camera_info.product_line))
        print("device_product_line: {}".format(device_product_line))

    def read(self):
        try:
            # Get color image as well as depth image
            frames = self.frame_pipeline.wait_for_frames()
            # Align the depth frame to color frame
            aligned_frames = self.align.process(frames)
            color_frame = frames.get_color_frame()
            depth_frame = aligned_frames.get_depth_frame()
            color_image = np.asanyarray(color_frame.get_data())
            depth_image = np.asanyarray(depth_frame.get_data())
            # Get IMU data
            IMU = self.imu_pipeline.wait_for_frames()
            accel = np.asarray([IMU[0].as_motion_frame().get_motion_data().x,
                                IMU[0].as_motion_frame().get_motion_data().y,
                                IMU[0].as_motion_frame().get_motion_data().z])
            gyro = np.asarray([IMU[1].as_motion_frame().get_motion_data().x,
                               IMU[1].as_motion_frame().get_motion_data().y,
                               IMU[1].as_motion_frame().get_motion_data().z])
            return (color_image, depth_image, accel, gyro)
        except Exception as e:
            print("exception is : {}".format(e))

    def reConfigCamera(self, fps=60):
        # First stop the current pipeline
        self.frame_pipeline.stop()
        # Reconfigure the pipeline
        self.frame_pipeline = rs.pipeline()
        frame_config = rs.config()
        frame_config.enable_stream(rs.stream.depth, rs.format.z16, fps)
        frame_config.enable_stream(rs.stream.color, rs.format.bgr8, fps)
        self.frame_pipeline.start(frame_config)

    def __del__(self):
        self.frame_pipeline.stop()
        self.imu_pipeline.stop()


if __name__ == "__main__":
    p = rs.pipeline()
    conf = rs.config()
    conf.enable_stream(rs.stream.accel)
    conf.enable_stream(rs.stream.gyro)
    prof = p.start(conf)
    fp = rs.pipeline()
    conf = rs.config()
    conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
    conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
    prof = fp.start(conf)
    cap = CapManager(debug=True)
    fps = 60
    cv.namedWindow("color frame")
    cv.createTrackbar("fps settings: ", "color frame", 1, 3, lambda x: x + 1)
    while True:
        color_image, depth_image, accel, gyro = cap.read()
        if cap.debug:
            print("accelerometer: ", accel)
            print("gyro: ", gyro)
            print("color_frame size", color_image.shape)
            print("depth_frame size", depth_image.shape)
        cv.imshow("color frame", color_image)
        cv.imshow("depth frame", depth_image)
        if fps != cv.getTrackbarPos("fps settings: ", "color frame"):
            fps = cv.getTrackbarPos("fps settings: ", "color frame")
            cap.reConfigCamera(fps * 30)
        key = cv.waitKey(1)
        if key == ord('q'):
            break
    cv.destroyAllWindows()
```

Thank you very much for your patience and help, it really matters a lot to me!

---
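A side note on the main loop above: the OpenCV trackbar position (intended range 1-3) is scaled by 30 to pick the stream frame rate via `cap.reConfigCamera(fps*30)`. Since `cv.createTrackbar` always allows position 0, that mapping can request 0 fps. A small pure function (a hypothetical refactor by the editor, not part of the original script) makes the mapping explicit and clamps the low end:

```python
def trackbar_to_fps(position, step=30):
    """Map an OpenCV trackbar position (intended range 1..3) to a stream
    frame rate: 1 -> 30, 2 -> 60, 3 -> 90. Position 0 is clamped to 1 so
    the camera is never asked for a 0 fps stream."""
    return max(1, position) * step


print(trackbar_to_fps(1))  # 30
print(trackbar_to_fps(3))  # 90
print(trackbar_to_fps(0))  # 30 (clamped)
```

In the loop this would replace `cap.reConfigCamera(fps*30)` with `cap.reConfigCamera(trackbar_to_fps(fps))`.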
My Python programming knowledge does not extend to the use of classes. I did find an SDK Python example program, pyglet_pointcloud_viewer.py, that makes use of them though, so that can act as an example of classes in a librealsense Python program. I note that in this example, the class name does not have brackets after it:

---
The example you mentioned doesn't seem to contain the part that gets IMU data from the camera. I tried putting the two pipelines in different classes, and the error still occurs. It seems it won't let me start two pipelines. Now I have sort of worked around the issue by using only one pipeline:

```python
self.pipeline = rs.pipeline()
conf = rs.config()
conf.enable_stream(rs.stream.depth, rs.format.z16, 60)
conf.enable_stream(rs.stream.color, rs.format.bgr8, 60)
conf.enable_stream(rs.stream.accel)
conf.enable_stream(rs.stream.gyro)
prof = self.pipeline.start(conf)
```

It works in my Jetson setup.

---
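With the single-pipeline workaround above, each `wait_for_frames()` frameset mixes video and motion frames, so the reading code has to separate them by stream type rather than relying on two dedicated pipelines. The sketch below illustrates that separation logic with stand-in objects (`FakeFrame` and `split_frames` are illustrative names invented here, not pyrealsense2 API):

```python
# Illustrative stand-in for a frame that carries its stream type;
# the real pyrealsense2 frame exposes this via its stream profile.
class FakeFrame:
    def __init__(self, stream_type):
        self.stream_type = stream_type


def split_frames(frameset):
    """Separate a combined frameset into motion frames (accel/gyro)
    and video frames (depth/color), as needed when a single pipeline
    streams all four."""
    motion, video = [], []
    for frame in frameset:
        if frame.stream_type in ("accel", "gyro"):
            motion.append(frame)
        else:
            video.append(frame)
    return motion, video


frames = [FakeFrame("depth"), FakeFrame("accel"),
          FakeFrame("color"), FakeFrame("gyro")]
motion, video = split_frames(frames)
print([f.stream_type for f in motion])  # ['accel', 'gyro']
print([f.stream_type for f in video])   # ['depth', 'color']
```

In the real script, the video frames would still go through `rs.align` while the motion frames are converted to numpy arrays as before.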
Thanks very much @screw-44 for the update - great news that you developed a solution that works for you!

---
Issue Description

My D455 works normally on x86 Ubuntu. When I switched to the TX2, I followed the official steps:
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md

On the first try, I used CMake:

```
cmake ../ -DCMAKE_BUILD_TYPE=Release -DBUILD_EXAMPLES=true -DFORCE_RSUSB_BACKEND=ON -DBUILD_PYTHON_BINDINGS:bool=true -DBUILD_WITH_CUDA=true
```

and ran make install on the source files.

The result is a fully functional realsense-viewer, and the code in the C++ examples and Python examples (in wrappers) functions normally. But if I use my own code (correct code that works on x86 Ubuntu using `pip3 install pyrealsense2`), an error occurs. My code finds that the device_product_line is D400, but at this line:

```python
profile_IMU = pipeline_IMU.start(config_IMU)
```

it pops the error:

```
RuntimeError: No device connected
```

My question is: how can I fix it?