
# Lidar 3D Cloud Projection

This project demonstrates how to associate regions in a camera image with Lidar points in 3D space.

For more details, please check my blog.

## Dependencies for Running Locally

## Basic Build Instructions

  1. Clone this repo.
  2. Make a build directory in the top level project directory: `mkdir build && cd build`
  3. Compile: `cmake .. && make`
  4. Run it: `./project_lidar_to_camera`

## Steps

  1. Filtering Lidar Points

     The code below shows how a filter can be applied to remove Lidar points that do not satisfy a set of constraints, i.e. points that are …

     1. … positioned behind the Lidar sensor and thus have a negative x coordinate.
     2. … too far away in x-direction and thus exceed an upper distance limit.
     3. … too far off to the sides in y-direction and thus not relevant for collision detection.
     4. … too close to the road surface in negative z-direction.
     5. … showing a reflectivity close to zero, which might indicate low reliability.

     ```cpp
     const float maxX = 25.0, maxY = 6.0, minZ = -1.4;
     for (auto it = lidarPoints.begin(); it != lidarPoints.end(); ++it)
     {
         if (it->x > maxX || it->x < 0.0 || std::abs(it->y) > maxY || it->z < minZ || it->r < 0.01)
         {
             continue; // skip to next point
         }
         // ... process the remaining points here
     }
     ```
  2. Convert the current Lidar point into homogeneous coordinates and store it in the 4D variable `X`.

     ```cpp
     cv::Mat X(4, 1, cv::DataType<double>::type);
     X.at<double>(0, 0) = it->x;
     X.at<double>(1, 0) = it->y;
     X.at<double>(2, 0) = it->z;
     X.at<double>(3, 0) = 1;
     ```
    
  3. Then apply the projection equation to map `X` onto the image plane of the camera and store the result in `Y`.

     ```cpp
     Y = P_rect_00 * R_rect_00 * RT * X;
     ```
    
  4. Once this is done, transform `Y` back into Euclidean coordinates by dividing its first two components by the third, and store the result in the variable `pt`. Note that `Y` is a 3×1 vector, so the perspective divide uses `Y.at<double>(2, 0)`.

     ```cpp
     cv::Point pt;
     pt.x = Y.at<double>(0, 0) / Y.at<double>(2, 0);
     pt.y = Y.at<double>(1, 0) / Y.at<double>(2, 0);
     ```