
[Video]

YOLO-Object-Detection-for-Pick-and-Place-task-using-ROS-on-KUKA-iiwa

Robots use computer vision in many operations. This repository addresses one of them, pick and place, which is among the most widely used tasks in production lines and assembly processes. The aim of this project is to select 3 of the 6 objects in the robot's work environment and place each one in a colored region specified in advance by the operator.

The main challenges of this project are as follows:

  • Object Detection: YOLOv4
  • Box Detection: Color Segmentation in HSI and Morphology Operations
  • Pick and Place: Object Center and Orientation, Box Center and Orientation
  • Localization w.r.t. Robot Base: Eye-In-Hand Camera Calibration and Transformations
  • Hardware Implementation: KUKA iiwa and ROS

Workspace Setup (Objects)

The dataset is attached above.

Labelling was done on Roboflow.

Dataset preprocessing and augmentations
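
The preprocessing and augmentation were done on Roboflow; purely as an illustration, the sketch below shows the kind of flip and brightness augmentations typically applied to such a dataset (the file paths are hypothetical, not from this repo):

```python
# Illustrative augmentation sketch (the repo's actual augmentations run on Roboflow).
import cv2
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return a few augmented variants of a BGR training image."""
    flipped = cv2.flip(image, 1)                               # horizontal flip
    brighter = cv2.convertScaleAbs(image, alpha=1.0, beta=40)  # raise brightness
    darker = cv2.convertScaleAbs(image, alpha=1.0, beta=-40)   # lower brightness
    return [flipped, brighter, darker]

if __name__ == "__main__":
    img = cv2.imread("dataset/sample.jpg")                     # hypothetical path
    for i, aug in enumerate(augment(img)):
        cv2.imwrite(f"dataset/sample_aug{i}.jpg", aug)
```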

YOLO Model Testing for Object Detection

  • Test1

Problems arise due to illumination changes, camera orientation, and false positives (e.g., detecting the gripper or the background as an object).

Solution: apply a threshold on the detection confidence score.
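
A minimal sketch of that fix using OpenCV's DNN module is shown below; the config/weights file names and the 0.6 threshold are assumptions, and the actual values used in this repo may differ:

```python
# Sketch: run YOLOv4 with OpenCV DNN and drop low-confidence detections,
# which suppresses false positives such as the gripper or the background.
import cv2
import numpy as np

CONF_THRESHOLD = 0.6   # assumed value; tuned empirically in practice
NMS_THRESHOLD = 0.4

net = cv2.dnn.readNetFromDarknet("yolov4-custom.cfg", "yolov4-custom.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(frame):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences, class_ids = [], [], []
    h, w = frame.shape[:2]
    for output in net.forward(layer_names):
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence < CONF_THRESHOLD:      # the thresholding fix
                continue
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)
    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    return [(boxes[i], confidences[i], class_ids[i]) for i in np.array(keep).flatten()]
```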

  • Test2

Object Center and Orientation
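
One common way to recover a detected object's center and in-plane orientation is a rotated bounding box around its largest contour; the sketch below assumes a binary mask of the object has already been produced (the masking step itself is not shown):

```python
# Sketch: object center and orientation from a binary object mask.
import cv2
import numpy as np

def center_and_orientation(mask: np.ndarray):
    """mask: binary image where object pixels are nonzero."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)  # rotated bounding box
    return (cx, cy), angle   # pixel center and in-plane rotation in degrees
```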

Box Detection

- Color Segmentation in HSI
- Morphology operations
- Box center and orientation
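
A minimal sketch of this pipeline follows, using OpenCV's HSV color space as a stand-in for HSI; the HSV range below is a placeholder for one colored region and must be tuned per color:

```python
# Sketch: color segmentation + morphological cleanup + box center/orientation.
import cv2
import numpy as np

def detect_box(frame_bgr, lower_hsv=(0, 120, 70), upper_hsv=(10, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Morphology: opening removes speckle noise, closing fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (cx, cy), _, angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    return (cx, cy), angle   # box center (pixels) and orientation (degrees)
```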

Localization w.r.t. Robot Base

- Camera Intrinsics
- Eye-In-Hand Camera Calibration
- Transformations
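
The sketch below outlines the eye-in-hand idea with OpenCV's `cv2.calibrateHandEye`: recover the fixed camera-to-gripper transform from paired robot/target poses, then chain transforms to express a camera-frame point in the robot-base frame. The pose lists and function names here are illustrative, not the repo's actual code:

```python
# Sketch: eye-in-hand calibration and camera-to-base transformation.
import cv2
import numpy as np

def calibrate_eye_in_hand(R_g2b, t_g2b, R_t2c, t_t2c):
    """R_g2b/t_g2b: gripper->base poses read from the robot;
    R_t2c/t_t2c: target->camera poses from e.g. a checkerboard,
    one pair per calibration station."""
    return cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)

def cam_point_to_base(p_cam, R_cam2gripper, t_cam2gripper, R_g2b, t_g2b):
    """Chain camera->gripper->base to localize a point w.r.t. the robot base."""
    p_gripper = R_cam2gripper @ np.asarray(p_cam) + t_cam2gripper.ravel()
    return R_g2b @ p_gripper + t_g2b.ravel()
```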

Hardware Implementation

The KUKA iiwa robot is controlled through ROS using iiwa_stack.
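
As a rough sketch of how a Cartesian goal could be sent through iiwa_stack, see below; the topic `/iiwa/command/CartesianPose`, the `geometry_msgs/PoseStamped` message type, and the `iiwa_link_0` frame name are assumptions based on the iiwa_stack documentation and should be checked against the version used here:

```python
#!/usr/bin/env python
# Sketch: publish a Cartesian pick pose to the iiwa via iiwa_stack (rospy).
import rospy
from geometry_msgs.msg import PoseStamped

def send_pose(x, y, z, qx, qy, qz, qw):
    pub = rospy.Publisher("/iiwa/command/CartesianPose", PoseStamped, queue_size=1)
    rospy.sleep(1.0)                      # give the publisher time to connect
    msg = PoseStamped()
    msg.header.frame_id = "iiwa_link_0"   # robot base frame (assumed name)
    msg.header.stamp = rospy.Time.now()
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
    (msg.pose.orientation.x, msg.pose.orientation.y,
     msg.pose.orientation.z, msg.pose.orientation.w) = qx, qy, qz, qw
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("pick_and_place_demo")
    send_pose(0.5, 0.0, 0.3, 0.0, 1.0, 0.0, 0.0)  # example pose, gripper down
```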

Demo

The full video is attached: Demo