tanaysaha edited this page Jun 29, 2019 · 36 revisions

helipad_det

The aim of this ROS package is to provide detection and pose estimation of an H-shaped helipad for the purpose of landing a UAV. The approach is based on the paper listed in the References section below. It provides robust and accurate detection of the helipad from different angles and orientations.

Helipad Detection

Idea

Given below is the image of the helipad.

The H

We obtain a closed contour of the H, given by a parameterized function γ : {0, 1, …, N−1} → R², where each γ(i) represents a point of the contour on the image. We define the size of the contour to be its cardinality, N.
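For concreteness, the contour parameterization can be sketched in Python (the package itself is C++/OpenCV; the square below is only a hypothetical stand-in for a detected H contour):

```python
# A closed contour represented as a parameterized function gamma over {0, ..., N-1}.
# A 10x10 square traced point by point stands in for the H contour.
square = ([(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)]
          + [(x, 9) for x in range(8, -1, -1)] + [(0, y) for y in range(8, 0, -1)])

def gamma(i, contour=square):
    """gamma(i): the i-th point of the closed contour; indices wrap modulo N."""
    return contour[i % len(contour)]

N = len(square)  # the size of the contour: its cardinality
```

Because the contour is closed, indexing wraps around: γ(N) is the same point as γ(0).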

H Contour

Define a function f : {0, 1, …, N−1} → R, where f(i) gives the 'sharpness' of the contour around the point γ(i). This can be achieved by considering the distance of γ(i) to the line connecting γ(i−k) and γ(i+k), for some fixed offset k. This distance will be close to 0 for points along relatively straight segments and will reach a maximum around corners. So,

f(i) = d( γ(i), line(γ(i−k), γ(i+k)) )

is the required distance for each point on the contour. This gives us a 'signature' of that particular contour.

Given below is the representation of this distance for a point on a relatively straight stretch of the contour, followed by the case where the point is near a corner.

The Signature of an experimentally detected H.
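The sharpness computation can be sketched as follows (Python for illustration; the package implements this in C++, and a 10x10 square stands in for the H, with the window offset k=3 an assumed parameter):

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def signature(contour, k=3):
    """f(i): distance from contour[i] to the chord joining contour[i-k] and contour[i+k]."""
    n = len(contour)
    return [point_to_line_distance(contour[i], contour[(i - k) % n], contour[(i + k) % n])
            for i in range(n)]

# A 10x10 square traced clockwise: a hypothetical stand-in for the H contour.
square = ([(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)]
          + [(x, 9) for x in range(8, -1, -1)] + [(0, y) for y in range(8, 0, -1)])
sig = signature(square, k=3)
```

On the square, `sig` is zero in the middle of each edge (the chord passes through the point itself) and peaks at the four corners, which is exactly the behaviour the signature exploits.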

This signature is then processed to retain only the peaks. We compare it with the signature of the required H by checking the ratio of the distances between the maxima to the contour size in each signature. For the centre of the H, we use the centroid of the corners, obtained by averaging γ(i) over every index i at which the signature attains a maximum.
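The centre computation follows directly from the peak indices. A minimal sketch (Python for illustration; `h_centre` is a hypothetical name, not a function from the package):

```python
def h_centre(contour, peak_indices):
    """Centroid of the corners: the average of gamma(i) over the indices i
    at which the contour's signature attains a maximum."""
    xs = [contour[i][0] for i in peak_indices]
    ys = [contour[i][1] for i in peak_indices]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

For a square with corners at (0,0), (9,0), (9,9), (0,9), this gives the centre (4.5, 4.5), as expected.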

Technical Approach

The raw image obtained from the subscribed image stream is converted to grayscale. A Gaussian blur and thresholding are applied so as to accentuate the H-shaped marker, followed by a Canny edge detector. The contours of this image are then generated, analyzed with the pointToLineDistance function, and stored in a vector. A smoothing operation is applied to the distance data and a corresponding signature is made. This signature is compared to an ideal H signature, allowing some degree of tolerance. The centre of the H is computed in the image frame, and this image coordinate is then transformed into the global frame.

Software Pipeline

The entire framework is built upon the robotics middleware ROS. The detection part of the package relies on the OpenCV library. We use built-in OpenCV functions for pre-processing the raw image obtained from the /usb_cam/image_raw topic; the preprocess.h header file contains the functions that perform these tasks. Contours are generated by OpenCV's built-in findContours function. We also impose a condition on the areas of the contours to reduce processing and discard small erroneous detections. The contours are then passed to the pointToLineDistance function, which computes the distance data and stores it in a vector. The smooth function retains only the peak values and thus forms a signature vector for each of these contours; it also handles noise and outlier values. Once the signature is formed, the vector values are shifted so that the largest gap comes first. This new signature is then compared with the ideal 'H' signature: the lengths of the segments of the 'H' are compared with the corresponding lengths of the ideal 'H', allowing some tolerance (a percentage of the total length of the contour) in the comparison. Once the signature is confirmed to be the 'H' marker, the centre of the contour is found, to be used for pose estimation. The detected H is then published on the topic /detected_h.
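The shift-and-compare step can be sketched as follows (Python for illustration; the package implements this in C++, and `rotate_to_largest_gap` / `matches_ideal`, along with the 5% tolerance and the example segment fractions, are hypothetical):

```python
def rotate_to_largest_gap(peaks, n):
    """Shift sorted peak indices (on a contour of size n) so that the
    largest gap between consecutive peaks comes first."""
    m = len(peaks)
    gaps = [(peaks[(j + 1) % m] - peaks[j]) % n for j in range(m)]
    start = gaps.index(max(gaps))
    return peaks[start:] + peaks[:start], gaps[start:] + gaps[:start]

def matches_ideal(gaps, n, ideal_fractions, tol=0.05):
    """Compare segment lengths with the ideal 'H' signature, allowing a
    tolerance of tol * n (a percentage of the total contour length)."""
    if len(gaps) != len(ideal_fractions):
        return False
    return all(abs(g - f * n) <= tol * n for g, f in zip(gaps, ideal_fractions))

# Example: peaks at indices 2, 10, 30, 50 on a contour of 60 points.
peaks, gaps = rotate_to_largest_gap([2, 10, 30, 50], 60)
```

Rotating to the largest gap gives both contours a common starting point, so the segment-by-segment comparison is invariant to where findContours happened to start tracing.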

Pose Estimation

References

https://link.springer.com/article/10.1007/s10846-018-0933-2
