Replies: 11 comments 5 replies
-
Hi @yash7321 The links below are a few examples of Python SLAM tools, though they are quite old and not tested with RealSense cameras.
https://github.com/DanielsKraus/SLAM-python
isl-org/Open3D#473 may be a helpful reference for generating a RealSense pointcloud in Open3D.
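As a rough illustration of the Open3D route mentioned above, here is a minimal sketch (the stream resolution, frame rate and the overall structure are assumptions, not taken from the linked issue) that grabs one aligned RealSense depth/color pair and turns it into an Open3D pointcloud:

```python
import numpy as np
import pyrealsense2 as rs
import open3d as o3d

# Basic depth + color pipeline (848x480 @ 30 fps is an assumption).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)
profile = pipeline.start(config)
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    # Wrap the raw frame buffers as Open3D images.
    depth_o3d = o3d.geometry.Image(np.asanyarray(depth.get_data()))
    color_o3d = o3d.geometry.Image(np.asanyarray(color.get_data()))

    # RealSense depth units -> metres: Open3D's depth_scale is "units per metre".
    depth_units = profile.get_device().first_depth_sensor().get_depth_scale()
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color_o3d, depth_o3d,
        depth_scale=1.0 / depth_units,
        convert_rgb_to_intensity=False)

    # After alignment, the colour intrinsics describe both images.
    intr = color.profile.as_video_stream_profile().intrinsics
    pinhole = o3d.camera.PinholeCameraIntrinsic(
        intr.width, intr.height, intr.fx, intr.fy, intr.ppx, intr.ppy)

    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole)
    o3d.visualization.draw_geometries([pcd])
finally:
    pipeline.stop()
```

If you want a continuously updating view rather than a single snapshot, the same pointcloud can be pushed into an o3d.visualization.Visualizer loop instead of draw_geometries.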
-
The RGB and depth images I get from the RealSense (via get_depth_frame() and get_color_frame()): do I need to preprocess them in any way, and how can I get more accurate data in the depth image?
-
You do not need to perform rectification on the images, as they are rectified by hardware inside the camera (the Vision Processor D4) before they are sent through the cable to the computer. In regard to transforming disparity to depth, the Python reference at #11015 (comment) may be helpful.
Accuracy can be negatively affected by a range of different factors, including the environment, the lighting and the distance of the camera from the observed object / surface. In general though, the Medium Density camera configuration preset provides a good balance between accuracy and the amount of detail on the depth image. You can use the Python instruction rs.option.visual_preset to select the Medium Density preset from the list of available presets; #10014 has an example of doing so. On the line of that example that selects the preset, change HIGH_DENSITY to MEDIUM_DENSITY.
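As a rough sketch of what selecting that preset looks like in pyrealsense2 (the stream settings here are assumptions, not taken from #10014):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
profile = pipeline.start(config)

# Select the Medium Density visual preset on the depth sensor
# (a balance between accuracy and fill rate / detail).
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.visual_preset,
                        int(rs.rs400_visual_preset.medium_density))
```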
-
Thank you for the quick response. I am new to vision processing, so it would be of great help if you could help with the following:
-
https://dev.intelrealsense.com/docs/api-architecture#low-level-device-api
More information about the Low-Level API can be found at IntelRealSense/realsense-ros#1409 (comment), as well as information relevant to question 3 about reducing depth noise.
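For context, a minimal sketch of the Low-Level Device API in pyrealsense2 (the chosen stream profile and the callback body are assumptions): instead of a pipeline, you open a sensor's stream profile directly and receive frames in a callback.

```python
import time
import pyrealsense2 as rs

ctx = rs.context()
device = ctx.query_devices()[0]

# On a D400 device the first sensor is normally the stereo depth module.
depth_sensor = device.query_sensors()[0]

# Pick a depth stream profile to open (848x480 Z16 @ 30 fps is an assumption).
depth_profile = next(
    p for p in depth_sensor.get_stream_profiles()
    if p.stream_type() == rs.stream.depth
    and p.format() == rs.format.z16
    and p.fps() == 30
    and p.as_video_stream_profile().width() == 848
    and p.as_video_stream_profile().height() == 480)

def on_frame(frame):
    # Called from the SDK's own thread for every new depth frame.
    print("depth frame", frame.get_frame_number(), frame.get_timestamp())

depth_sensor.open(depth_profile)
depth_sensor.start(on_frame)
time.sleep(2)
depth_sensor.stop()
depth_sensor.close()
```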
If the instruction wait_for_frames() is used in a program script then the RealSense SDK should also try to find the closest timestamp match between the depth and RGB streams. In regard to IMU, each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames.
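A small sketch of reading timestamped depth, gyro and accel data through wait_for_frames() (the D435i IMU rates chosen here, 200 Hz gyro and 63 Hz accel, are assumptions):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
pipeline.start(config)

try:
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        gyro = frames.first_or_default(rs.stream.gyro)
        accel = frames.first_or_default(rs.stream.accel)
        if depth and gyro and accel:
            # Timestamps come from the same hardware clock, so they can be
            # compared directly for temporal synchronization.
            g = gyro.as_motion_frame().get_motion_data()
            print(depth.get_timestamp(), gyro.get_timestamp(), (g.x, g.y, g.z))
finally:
    pipeline.stop()
```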
(a) Aligning depth to color and then using an instruction called rs2_deproject_pixel_to_point() to convert 2D pixels into 3D world points.
(b) Using the instructions map_to() and pc.calculate() to map depth and RGB together and generate a pointcloud.
map_to / pc.calculate should be a slightly more accurate way of generating a pointcloud than alignment and deprojection, though the difference is not large.
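A minimal sketch of method (b) with pyrealsense2 (the pipeline setup is an assumption):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)
pipeline.start(config)

pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    # Map the pointcloud's texture to the color stream, then calculate
    # vertices from the depth frame.
    pc.map_to(color)
    points = pc.calculate(depth)

    # Vertices are (x, y, z) in metres; texture coordinates are (u, v) in the
    # color image, normalised to 0..1.
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
    print(verts.shape, tex.shape)
finally:
    pipeline.stop()
```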
https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i
-
Below are a few examples of Python SLAM projects, though none of their documentation specifically mentions making use of an IMU.
https://github.com/gisbi-kim/PyICP-SLAM
-
The recommended order for applying a list of post-processing filters is shown at the link below.
https://dev.intelrealsense.com/docs/post-processing-filters#using-filters-in-application-code
The RealSense Viewer tool has a range of post-processing filters enabled by default; the ones that are enabled by default have a blue icon beside them. Depth data will still be displayable even if no post-processing filters are applied, though.
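A sketch of applying filters to a depth frame in the order recommended at that link (decimation, depth-to-disparity, spatial, temporal, disparity-to-depth, hole filling); the filter options are left at their defaults here:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Filters constructed once, in the recommended processing order.
decimation = rs.decimation_filter()
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    filtered = decimation.process(depth)
    filtered = depth_to_disparity.process(filtered)
    filtered = spatial.process(filtered)
    filtered = temporal.process(filtered)
    filtered = disparity_to_depth.process(filtered)
    filtered = hole_filling.process(filtered)
finally:
    pipeline.stop()
```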
-
You could export a bag file from the Viewer, which is like a video recording of camera streams, and then import the bag file into your script. The script will then use the bag file data as though it were a real live camera. You cannot save post-processing to a bag file though, so you would have to first load the bag file and then apply the post-processing filters to the bag's data in real-time. #1672 (comment) has a Python example of applying post-processing filters to depth data.
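A sketch of reading from a bag file and applying a filter to its depth frames (the file name recording.bag is a placeholder, and the spatial filter is just one example of a post-processing filter):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

# Play back a bag recording as if it were a live camera
# ("recording.bag" is a placeholder path).
config.enable_device_from_file("recording.bag")
pipeline.start(config)

spatial = rs.spatial_filter()  # example post-processing filter

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    filtered_depth = spatial.process(depth)
finally:
    pipeline.stop()
```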
-
If the shadow that you describe is a vertical strip of black empty space on the left edge of the image, then this is called an Invalid Depth Band. The phenomenon is described on pages 87-88 of the current edition of the data sheet document for the 400 Series cameras. You cannot remove this band, but it is normal and harmless.
https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet
If the shadow is on the left side of an object or person, like in the above image, then this phenomenon is an occlusion. This is also normal and not something that needs to be corrected.
-
align = rs.align(rs.stream.color)
In the above code, the depth frame I get is with respect to the colour frame, right (i.e. from the same viewpoint)? And if I want to create a point cloud from that depth image, do I need to use any intrinsics or extrinsics? If so, what values do I need to use?
-
The above code aligns the depth image to the color image. This means that the depth field of view is resized to match the field of view of the color stream, and the depth coordinates are mapped onto the corresponding color coordinates. It is not necessary to use intrinsics in this alignment code when a pointcloud is not being generated, as demonstrated by the RealSense SDK's align_depth2color.py Python alignment example program. Intrinsics do have to be used when generating a pointcloud, like in the opencv_pointcloud_viewer.py example. #11031 (comment) provides a few different methods for producing a pointcloud with Python code using intrinsics.
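As a sketch of how intrinsics come into it once you do want 3D points: after alignment you can take the colour stream's intrinsics and deproject a pixel with rs2_deproject_pixel_to_point() (the pixel coordinates and stream settings below are assumptions):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    # After alignment, depth pixels line up with color pixels, so the color
    # stream's intrinsics are the ones to deproject with.
    intrin = color.profile.as_video_stream_profile().intrinsics

    u, v = 424, 240                  # example pixel (image centre)
    dist = depth.get_distance(u, v)  # depth in metres at that pixel
    point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist)
    print("3D point (metres):", point)
finally:
    pipeline.stop()
```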
-
1) I am trying to do SLAM with only a D435i in Python, without using ROS. What would be the best approach here?
2) Is there a way to directly get pointcloud data from the camera and use it to visualize live in Open3D?