
Workshop 2 ‐ Basic Functionality


The goal of this session is to refresh your knowledge of creating your own ROS2 packages and to try out some basic ROS2 functionality that enables the autonomous counting of coloured objects.

Create your own ROS2 package

  1. If you have not done so yet, fork the repository to your own GitHub account. Then clone it to your local machine and open it in VSC.

  2. In the src folder, create a package called rob2002_project with a simple publisher node (no need for the subscriber node) using the procedure outlined in the official ROS2 tutorial. Pay particular attention to declaring the dependencies correctly. Build the package using colcon, source the workspace, and run the publisher node as per the instructions in Section 4 of the tutorial. If you build your package with colcon build --symlink-install, you will not need to rebuild it every time you change the node scripts.

  3. Modify the node so that it continuously publishes a Twist message on the cmd_vel topic. First, change the message type from String to Twist (imported from geometry_msgs.msg) and the topic name from topic to cmd_vel. Adjust the timer_period to 2.0 s and set the linear x velocity of the Twist message to 0.1. Remember to add geometry_msgs to your dependencies in package.xml. Try the new publisher with the simulator or the real robot running. Rename the script to mover_basic.py to differentiate it from other scripts, and consider renaming the classes and instances in the code to reflect their actual function. If you struggle with any of these steps, have a sneak peek at mover_basic from the rob2002_tutorial package (a minimal sketch of the result also appears after this list).

  4. Once everything is confirmed to work, commit the changes to your repository and sync them with your GitHub account; you can use the VSC Source Control panel for that. It is good programming practice to update your projects incrementally. It is also worth taking note of all the steps required for building and modifying your own ROS packages: why not create your own Wiki page listing them, so that you have them handy next time and learn good practice in documenting your code at the same time?
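For reference, here is a minimal sketch of what the modified publisher from step 3 might look like. It follows the standard rclpy publisher template; the class name is illustrative, and the actual mover_basic node in rob2002_tutorial may differ in detail.

```python
# Minimal sketch of a Twist publisher (step 3); compare with mover_basic
# from the rob2002_tutorial package, which may differ in detail.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class MoverBasic(Node):
    def __init__(self):
        super().__init__('mover_basic')
        # velocity commands go out on cmd_vel instead of the tutorial's 'topic'
        self.publisher_ = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(2.0, self.timer_callback)  # 2.0 s period

    def timer_callback(self):
        msg = Twist()
        msg.linear.x = 0.1  # slow forward motion
        self.publisher_.publish(msg)

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(MoverBasic())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

For ros2 run to find the script, it also needs an entry in the entry_points section of setup.py, along these lines:

```python
# in setup.py of the rob2002_project package
entry_points={
    'console_scripts': [
        'mover_basic = rob2002_project.mover_basic:main',
    ],
},
```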

Basic movement behaviours

The following behaviours demonstrate two basic autonomous robot movements that can serve as a starting point for your projects.

  1. Using the above code as a template, implement a behaviour which rotates the robot in place by a fixed angle in a fixed number of repeating steps. To achieve that, you will need to modify the Twist message to specify the angular speed, and introduce a counter check in timer_callback which stops the Timer and the repetitions (use the Timer.cancel() function). Adjust the angular velocity and the number of steps so that the robot executes a full rotation in 8 steps (see the first sketch after this list).

  2. The provided rob2002_tutorial package contains the mover_laser.py node which implements a simple laser-based obstacle avoidance behaviour. Add the node to your own rob2002_project package (do not forget about the dependencies and about declaring the new script in setup.py, as in the snippet shown earlier) and try it out with the robot deployed around the geometric shapes. You might need to stack the shapes to make them visible to the laser.

  3. Modify the mover_laser behaviour so that its execution is time-limited: introduce a time-limit parameter and a check in laser_callback that stops the roaming behaviour after a set number of seconds (a sketch of such a check appears after this list). Then try out different time limits (e.g. 1, 2 and 3 min.) and note, subjectively, how much of the area was covered for each setting.
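One possible shape for the stepped rotation from the first point above is sketched below. The angular speed is derived open-loop from the step period, so expect some drift on the real robot; all names and values are illustrative and will need tuning.

```python
# Sketch of an in-place rotation executed in a fixed number of steps.
# The speed is computed open-loop, so expect some drift on a real robot.
import math
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class MoverRotator(Node):
    STEPS = 8          # full rotation in 8 steps
    STEP_PERIOD = 2.0  # seconds per timer step

    def __init__(self):
        super().__init__('mover_rotator')
        self.publisher_ = self.create_publisher(Twist, 'cmd_vel', 10)
        self.step = 0
        self.timer = self.create_timer(self.STEP_PERIOD, self.timer_callback)

    def timer_callback(self):
        if self.step >= self.STEPS:
            self.timer.cancel()  # stop the Timer and hence the repetitions
            return
        msg = Twist()
        # each step should cover 2*pi/STEPS radians within one timer period
        msg.angular.z = (2.0 * math.pi / self.STEPS) / self.STEP_PERIOD
        self.publisher_.publish(msg)
        self.step += 1

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(MoverRotator())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```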
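For the time-limited roaming, one option is to record the start time in the constructor and check the elapsed time at the top of laser_callback. The skeleton below uses illustrative attribute and topic names (scan is an assumption), not the actual ones from mover_laser.py:

```python
# Illustrative time-limited roamer skeleton; attribute and topic names
# are assumptions, not taken from mover_laser.py.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

class TimeLimitedMover(Node):
    def __init__(self):
        super().__init__('time_limited_mover')
        self.publisher_ = self.create_publisher(Twist, 'cmd_vel', 10)
        self.subscription = self.create_subscription(
            LaserScan, 'scan', self.laser_callback, 10)
        self.time_limit = 120.0                  # roam for 2 minutes
        self.start_time = self.get_clock().now()

    def laser_callback(self, msg):
        # elapsed time since start-up, in seconds
        elapsed = (self.get_clock().now() - self.start_time).nanoseconds * 1e-9
        if elapsed > self.time_limit:
            self.publisher_.publish(Twist())  # a zero Twist stops the robot
            return
        # ...the original obstacle-avoidance logic goes here...
```

Spin the node with rclpy.spin() as in the earlier sketches.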

Object detection and counting

The following nodes demonstrate the basic functionality required for detecting and counting coloured objects and should be a good starting point for your own implementations.

  1. The tutorial package provides the detector_basic.py node which demonstrates how to subscribe to LIMO's image topics and perform colour thresholding and object detection. Study the code first, then incorporate the node into your own package (rob2002_project). To test the node, run the simulator, insert a coloured object (e.g. a red Cricket ball) from the object library in Gazebo, and place it in front of the robot. You might need to move the robot and the ball manually to a location with more free space around them. Then run your detector node from a different terminal window (do not forget to source your workspace!). You should see the debug windows visualising the image processing pipeline. The node publishes the detected objects on the /object_polygon topic (a stripped-down sketch of the pipeline appears after this list).

  2. Place more objects around and note how the detector's output changes. Try out different colours of objects and adjust the range of the RGB colour filter accordingly. To change the colour of a simulated object, right-click on the object and select "Edit model". In the Model Editor, right-click on the object again and select "Open Link Inspector". In the Visual tab, select the visual/Material/Script/Name field and change its value to a different Gazebo material (e.g. Gazebo/Green). Click OK, save the model (as unit_sphere_green, for example) and close the Model Editor. Restart the simulator if needed. See the following list for more information about the Gazebo materials.

  3. Try out the object detector on the real robot. You will need to change the subscribed image topic, as the topic names differ from the simulation, and adjust the colour range in the RGB colour filter.

  4. The counter_basic.py node demonstrates how to subscribe to the object detector and implement a simple object counting functionality. Follow a similar procedure as with the object detector to incorporate the node into your project. Then run it alongside the detector and observe the output while adding objects in front of the robot. Try it both in simulation and on the real robot (see the counter sketch after this list).
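To give a feel for what detector_basic.py does internally, here is a stripped-down sketch of the subscribe/threshold/contour pipeline. The image topic name and the BGR thresholds are assumptions for illustration; take the real values from detector_basic.py.

```python
# Stripped-down colour-detection pipeline; topic name and thresholds are
# placeholders -- use the values from detector_basic.py.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class DetectorSketch(Node):
    def __init__(self):
        super().__init__('detector_sketch')
        self.bridge = CvBridge()
        self.subscription = self.create_subscription(
            Image, '/camera/image_raw', self.image_callback, 10)

    def image_callback(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        # keep only pixels within a BGR range (here: strong red, little blue/green)
        mask = cv2.inRange(img, (0, 0, 80), (50, 50, 255))
        # each external contour of the mask is one candidate object
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        self.get_logger().info(f'detected {len(contours)} candidate object(s)')
        cv2.imshow('mask', mask)  # debug visualisation
        cv2.waitKey(1)

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(DetectorSketch())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```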
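The counting side can be equally compact. The sketch below assumes that /object_polygon carries geometry_msgs/msg/Polygon messages; check detector_basic.py and counter_basic.py for the actual message type before reusing it.

```python
# Naive counter sketch; assumes /object_polygon carries Polygon messages --
# verify the actual message type in detector_basic.py / counter_basic.py.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Polygon

class CounterSketch(Node):
    def __init__(self):
        super().__init__('counter_sketch')
        self.count = 0
        self.subscription = self.create_subscription(
            Polygon, '/object_polygon', self.polygon_callback, 10)

    def polygon_callback(self, msg):
        # naive counting: every detection message increments the total,
        # so an object that stays in view is counted repeatedly
        self.count += 1
        self.get_logger().info(f'objects counted so far: {self.count}')

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(CounterSketch())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

Note that such a naive counter counts the same object many times while it stays in view; dealing with this is one of the challenges hinted at below.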

You now have all the functionality needed for the autonomous object counter! Try running the movement nodes together with the detector and counter nodes, and note any potential challenges you might face in future sessions.