How to convert a pixel to x, y in meters? #2481

Closed · sachinsdate opened this issue Oct 5, 2018 · 4 comments

@sachinsdate

Hi,
I am a newbie to RealSense. Is there a way to convert the position of a pixel (x, y) in the RGB frame to a corresponding (X, Y) position in meters? Here X would be the horizontal distance in meters from the camera axis to the point in the FOV that appears at pixel column x, and Y the vertical distance in meters from the camera axis to the point that appears at pixel row y. The assumption is that the camera axis is along the Z dimension.

Thanks
Sachin

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 5, 2018

I would recommend investigating the rs2_deproject_pixel_to_point function, as discussed in the link below.

#1413

The example program Measure also provides an explanation of 2D pixel to 3D point conversion.

https://github.com/IntelRealSense/librealsense/tree/master/examples/measure
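
For reference, a minimal sketch of that call in Python (pyrealsense2); the default stream configuration and the sample pixel coordinates here are placeholders, not part of the original discussion:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # default depth stream

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()

    # Intrinsics describe the lens model needed for deprojection
    intrin = depth_frame.profile.as_video_stream_profile().intrinsics

    x, y = 320, 240  # example pixel (center of a 640x480 frame)
    depth_m = depth_frame.get_distance(x, y)  # depth in meters

    # Returns [X, Y, Z] in meters in the camera coordinate system
    point = rs.rs2_deproject_pixel_to_point(intrin, [x, y], depth_m)
    print(point)
finally:
    pipeline.stop()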

@sachinsdate
Author

Thanks for the pointer @MartyG-RealSense! I went through #1413 and rs-measure, and that worked.
My working code looks as follows (in case it helps anybody else on this forum):

import numpy as np
import pyrealsense2 as rs

bag_file_path = "recording.bag"  # path to your recorded .bag file

pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, bag_file_path)
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)

profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()  # meters per raw depth unit

# Align depth to color so both frames share the same pixel grid
align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()
        aligned_frames = align.process(frames)
        aligned_depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        num_rows = depth_image.shape[0]
        num_cols = depth_image.shape[1]

        # Intrinsics of the aligned depth stream, needed for deprojection
        depth_intrin = aligned_depth_frame.profile.as_video_stream_profile().intrinsics

        # Sample every 5th pixel to keep the loop tractable
        for r in range(0, num_rows, 5):
            for c in range(0, num_cols, 5):
                depth = aligned_depth_frame.get_distance(c, r)  # meters
                depth_point_in_meters_camera_coords = rs.rs2_deproject_pixel_to_point(
                    depth_intrin, [c, r], depth)
                # some other stuff I do here with this depth point, such as
                # projecting it to world coordinates etc.

finally:
    pipeline.stop()

But now I have a different problem:

I translated the camera coordinates of each point found by deprojection into world coordinates, ignored the Y coordinate, and projected the X and Z coordinates onto the XZ world plane. When I look down on the XZ plane from above, I see long sweeping trails (see the picture below). It's as if the D415 is computing multiple depth values for each pixel in the frame! Or am I interpreting these artifacts incorrectly?
Why is the D415 producing these sweeping trails (I have marked them in the picture below)?

[image: top-down view of the XZ plane with the sweeping trails marked]
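
For context, a minimal sketch of the camera-to-world step described above; the rotation R and translation t are hypothetical placeholders, since the actual tripod pose isn't given in this thread:

import numpy as np

# Hypothetical camera pose in the world frame: rotation R and translation t.
# Replace these with the actual extrinsics of the tripod-mounted camera.
R = np.eye(3)                   # camera axes aligned with world axes
t = np.array([0.0, 1.5, 0.0])   # e.g. camera 1.5 m above the world origin

def camera_to_world(point_camera):
    # Map an [X, Y, Z] point in camera coordinates to world coordinates
    return R @ np.asarray(point_camera) + t

def project_to_xz(point_world):
    # Drop the Y (height) coordinate to get a top-down (X, Z) view
    return point_world[0], point_world[2]

# Usage with a point returned by rs2_deproject_pixel_to_point:
p_world = camera_to_world([0.2, -0.1, 3.4])
x, z = project_to_xz(p_world)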

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 10, 2018

I'm very glad you were able to resolve your measuring problem. Thanks so much for sharing your script with the community. :)

In regard to your new question: could you tell me please what the camera is observing? It looks like a human in motion, such as a person diving from a swimming pool high-board.

If it is a person in motion, the D415 may not be the most suitable camera model, as it has a slow 'rolling shutter' that can cause streaks on images of objects in motion. The D435 is more suited to motion, as it has a faster 'global shutter'.

This is why applications such as cameras ascending into the atmosphere on a balloon tend to use global shutters instead of rolling shutters.

If you prefer to continue with your D415, some users have adjusted their shutter speed indirectly by changing exposure, since shutter speed is also known as 'exposure time'.
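
For anyone trying that, a minimal sketch of setting manual exposure on the depth sensor via pyrealsense2; the 8500 µs value is an arbitrary example, and the valid range varies per camera:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Disable auto-exposure so a manual value takes effect
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)

# Query the supported range, then set a shorter exposure (microseconds)
exp_range = depth_sensor.get_option_range(rs.option.exposure)
depth_sensor.set_option(rs.option.exposure, min(8500, exp_range.max))

pipeline.stop()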

@sachinsdate
Author

The camera is mounted on a stationary tripod and is looking at one wall of a room. There is no motion in the frame. I figured out the reason for the long trails: the wall has two windows, and the trails appear to be created by distances measured to objects (e.g. trees and such) that lie outside the room, seen through the windows.
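
One way to suppress those out-of-room returns is to discard depth readings beyond the far wall; a minimal sketch, assuming a hypothetical 4 m room depth:

def in_room(depth_m, max_range_m=4.0):
    # Zero readings are invalid; anything past max_range_m is likely
    # seen through a window rather than a point on the wall.
    return 0.0 < depth_m <= max_range_m

# Inside the capture loop from the script above:
#     depth = aligned_depth_frame.get_distance(c, r)
#     if not in_room(depth):
#         continue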

Thanks @MartyG-RealSense for brainstorming with me on this one.

I am closing this issue. Thanks for your help!
