
get_texture_coordinate #6339

Closed
nevynwong opened this issue May 2, 2020 · 11 comments

Comments

@nevynwong

| Camera Model | D415 |
| Operating System & Version | Win 10 |
| Platform | PC |
| Language | Python |

I've asked before about color and point cloud alignment, and I know that pc.map_to(color) and pc.calculate(depth) can align the point cloud with the color image, but I've found that for some points in the color image, the mapping to the point cloud seems to have some rotational change. What is the reason?
Part of my program is this:
.....
pc.map_to(color_frame)
points = pc.calculate(depth_frame)
tex_coord = np.asanyarray(points.get_texture_coordinates())
pc_coord = np.asanyarray(points.get_vertices())
....

Then I match the index positions of some points in tex_coord to the index positions in pc_coord to get the corresponding point cloud coordinates, because I need not only the point cloud data but also the data of the corresponding points. I want to ask: is this the right way to do it? If not, what do you suggest?
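(For reference, a hedged sketch of that index correspondence: get_vertices() and get_texture_coordinates() return parallel, index-aligned buffers with one entry per depth pixel; the .view/.reshape pattern below follows the SDK's Python examples.)

import numpy as np

# Both buffers have one entry per depth pixel, so index i in one
# corresponds to index i in the other.
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)            # (N, 3) xyz
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)  # (N, 2) uv
assert len(verts) == len(tex)  # entry i of each describes the same point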

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 2, 2020

I hope that the long Python discussion in the link below will provide useful guidance if you have not seen it already.

#1231 (comment)

@nevynwong
Author

My current problem is that I obtained the coordinates of some points in the color image ahead of time (these coordinates were taken from the extracted 2D image, not from points.get_texture_coordinates()), and now I want to get the corresponding 3D coordinates. My idea is to find the locations of those coordinates in points.get_texture_coordinates() and then look up the corresponding entries in get_vertices(). Is this the right approach?

@MartyG-RealSense
Collaborator

The usual approach for converting 2D coordinates to 3D points would be rs2_deproject_pixel_to_point.

#3688
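A minimal sketch of that call, assuming a default pipeline already streaming depth (the pixel values below are illustrative):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Intrinsics of the frame the pixel belongs to
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics

u, v = 424, 240                                   # example pixel (x, y)
depth_value = depth_frame.get_distance(u, v)      # depth in metres
point_3d = rs.rs2_deproject_pixel_to_point(depth_intrin, [u, v], depth_value)
print(point_3d)                                   # [X, Y, Z] in metres
pipeline.stop()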

@nevynwong
Author

I want to know if my idea is correct, because it makes it easier for me to manipulate the other data. Here's part of my program.
...
pc.map_to(color_frame)

# Generate the pointcloud and texture mappings
points = pc.calculate(frame)
tex_coord = np.asanyarray(points.get_texture_coordinates())
pc_coord = np.asanyarray(points.get_vertices())

width = 848
height = 480
tex_points = np.zeros((848 * 480, 2))
for l in range(848 * 480):
    # Scale normalised texture coordinates to integer pixel positions
    tex_points[l][0] = int(tex_coord[l][0] * width + 0.5)
    tex_points[l][1] = int(tex_coord[l][1] * height + 0.5)

for i in range(49):
    depth_pixel = []
    for j in range(2):
        a = int(face_marks[i][j][m - 1] + 0.5)  # face_marks holds the 2D points I need to convert
        depth_pixel.append(a)

    x1 = depth_pixel[0]
    y1 = depth_pixel[1]

    # Find the location of the 2D point in tex_points
    index = np.where((tex_points[:, 0] == x1) & (tex_points[:, 1] == y1))
    # Obtain the 3D coordinates of the corresponding point
    face_3D_marks[i][0][m - 1] = pc_coord[index][0][0]
    face_3D_marks[i][1][m - 1] = pc_coord[index][0][1]
    face_3D_marks[i][2][m - 1] = pc_coord[index][0][2]
...

@MartyG-RealSense
Collaborator

I am still learning RealSense programming, so I will have to refer your question to another RealSense team member. I apologise for the wait in the meantime.

@dorodnic Could you take a look at the points.get_texture_coordinates() code of @nevynwong above please?

@ev-mp
Collaborator

ev-mp commented May 4, 2020

@nevynwong, please elaborate on what you mean by:

...but I've found that for some points in the color image, the mapping to the point cloud seems to have some rotational change. What is the reason?

Specify precisely the Librealsense calls made and the results received.
It is also recommended that you provide illustrations or screenshots to substantiate this.

@nevynwong
Author

nevynwong commented May 4, 2020

I first extracted the color image sequence from the .bag file, then placed some markers on the human face, then read them in Python and mapped them to the textured point cloud. Here are the results of my markers in the 2D image and in the 3D reconstruction.

Sorry, the picture cannot be sent; I am still trying.

@ev-mp
Collaborator

ev-mp commented May 4, 2020

It seems that the Librealsense-related code is correct, though it could be further optimized. Once you arrive at the core algorithm segment, the texture coordinates

tex_points[l][0] = int(tex_coord[l][0]*width+0.5)
tex_points[l][1] = int(tex_coord[l][1]*height+0.5)

can be applied directly onto the depth frame to find the corresponding vertices.
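For illustration, a hedged NumPy sketch of the same mapping without the per-pixel Python loop (assuming points, width and height as in the code above; the example pixel is arbitrary):

import numpy as np

verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)

# Normalised texture coordinates -> integer RGB pixel positions, in one step
tex_px = (tex * [width, height] + 0.5).astype(int)
tex_px = np.clip(tex_px, 0, [width - 1, height - 1])  # coordinates can fall outside the image

# Indices of all vertices whose texture lands on RGB pixel (x1, y1)
x1, y1 = 424, 240
idx = np.flatnonzero((tex_px[:, 0] == x1) & (tex_px[:, 1] == y1))
candidates = verts[idx]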

For some RGB pixels the corresponding depth data will be zero ("no data"); this is normal. If you find too many "no data" references, I would recommend expanding the search window to NxN RGB pixels, or using more aggressive "hole filling" filter settings. See the sketch below.
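A hedged sketch of that NxN fallback, building on tex_px and verts from the previous sketch (the function name and window size are illustrative):

def find_vertex(tex_px, verts, x, y, max_half_window=3):
    for r in range(max_half_window + 1):      # r = 0 is the exact pixel
        near_x = np.abs(tex_px[:, 0] - x) <= r
        near_y = np.abs(tex_px[:, 1] - y) <= r
        idx = np.flatnonzero(near_x & near_y)
        idx = idx[verts[idx][:, 2] > 0]       # drop zero-depth ("no data") vertices
        if idx.size:
            return verts[idx].mean(axis=0)    # average the surviving candidates
    return None                               # nothing within the window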

The content of the face marks and the way they are applied are not part of the SDK; it is up to you to verify their correctness.

Still, can you explain the meaning of the "rotation changes"?

@nevynwong
Author

Because I need the aligned color and point cloud data, I can only do it this way. I tried to use the depth frame directly to find the corresponding point cloud, but the result was not ideal; maybe I used the parameters of the deproject function incorrectly. Part of my earlier program is here.
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics

## post-processing
frame = depth_frame
frame = decimation.process(frame)
frame = depth_to_disparity.process(frame)
frame = spatial.process(frame)
frame = temporal.process(frame)
frame = disparity_to_depth.process(frame)
frame = hole_filling.process(frame)

## depth data
frame = frame.as_depth_frame()
depth_image = np.asanyarray(frame.get_data())

## find 3D face marks
for i in range(49):
    print(i)
    depth_pixel = []
    for j in range(2):
        a = int(face_marks[i][j][m - 1] + 0.5)
        depth_pixel.append(a)

    # NumPy indexing is [row, col], i.e. [y, x]
    row = depth_pixel[1]
    col = depth_pixel[0]

    depth_value = depth_image[row, col] * depth_scale
    print("\n\t depth_value: " + str(depth_value))

    # From pixel to 3D point
    depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_value)
    print("\n\t 3D depth_point: " + str(depth_point))

    face_3D_marks[i][0][m - 1] = depth_point[0]
    face_3D_marks[i][1][m - 1] = depth_point[1]
    face_3D_marks[i][2][m - 1] = depth_point[2]
  1. Should the intrinsics parameter passed to rs2_deproject_pixel_to_point be the intrinsics of the depth frame after aligning depth to color?

  2. In the above program, when mapping the RGB mark points to the point cloud, I noticed that some of the points differed from the positions on the face in the RGB image, so I wondered whether the color obtained after using pc.map_to(color_frame) and pc.calculate(depth_frame) differs from the RGB exported by rs-convert.exe? My mark points were marked on the RGB exported from rs-convert.exe.

@ev-mp
Collaborator

ev-mp commented May 6, 2020

@nevynwong,

  1. When deprojecting data you should use the intrinsics obtained from the frame to which the data belongs (see color_frame intrinsics vs align intrinsics (are same ?) #5658). To answer whether it is used correctly in the above code, you need to establish whether the data comes from the original or the aligned depth frame:

Part of my earlier program is here.
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics

Check what the source of depth_frame is. My understanding is that it comes from depth aligned to color; otherwise the following wouldn't make sense:

a = int(face_marks[i][j][m-1]+0.5)
depth_pixel.append(a)

and if that assumption is correct, the deproject call should be rectified to:

current_depth_intrin = depth_aligned_to_color.profile.as_video_stream_profile().intrinsics
depth_point = rs.rs2_deproject_pixel_to_point(current_depth_intrin, depth_pixel, depth_value)
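An end-to-end hedged sketch of that approach (align depth to color, then deproject with the aligned frame's intrinsics; pixel values are illustrative):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.color)             # align depth onto the color frame

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth_aligned_to_color = aligned.get_depth_frame()

current_depth_intrin = depth_aligned_to_color.profile.as_video_stream_profile().intrinsics
u, v = 424, 240                               # a pixel in color-image coordinates
depth_value = depth_aligned_to_color.get_distance(u, v)
depth_point = rs.rs2_deproject_pixel_to_point(current_depth_intrin, [u, v], depth_value)
pipeline.stop()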

  2. I'm sorry, but I'm not sure I follow you on this. Can you post screenshots to explain it graphically? Two notes as a basic check:
  • The rs-convert tool exports the raw RGB and depth frames; no manipulations are applied to the raw data.
  • pc.map_to(color_frame) and pc.calculate(depth_frame) do not modify the content of the RGB frame, only the depth.

@nevynwong
Author

I have understood the first question. Thank you for your reply.
The second question is simply whether the RGB pixel positions are unchanged between the rs-convert.exe output and pc.map_to(color).
