Workshop 2 ‐ Basic Functionality
The goal of this session is to refresh your knowledge of creating your own ROS2 packages and to try out some basic ROS2 functionality enabling the autonomous counting of colourful objects.
- If you have not done it yet, fork the repository to your own GitHub account. Then clone it to your local machine and open it in VSC.
- In the `src` folder, create a package called `rob2002_project` with a simple publisher node (no need for the subscriber node) using the procedure outlined in the official ROS2 tutorial. Pay attention when declaring dependencies. Build the package using `colcon`, source the workspace and run the publisher node as per the instructions in Section 4 of the tutorial. If you build your package with `colcon build --symlink-install`, you will not need to rebuild the package every time you make changes to the node scripts.
- Modify the node so that it continuously publishes a `Twist` message on the `cmd_vel` topic. First, change the message type from `String` to `Twist` (import from `geometry_msgs.msg`) and the topic name from `topic` to `cmd_vel`. Adjust the `timer_period` to 2.0 s and set the linear x velocity of the `Twist` message to 0.1. Remember to add `geometry_msgs` to your dependencies in `package.xml`. Try the new publisher with the simulator or the real robot running. Change the node's name to `mover_basic.py` to differentiate it from other scripts. You can also change the names of scripts, classes and instances in the code to reflect their actual function. If you struggle with any of these steps, have a sneak peek at `mover_basic` from the `rob2002_tutorial` package.
- Once everything is confirmed to work, commit the changes to your repository and sync with your GitHub account; you can use the VSC Source Control panel for that. It is good programming practice to update your projects incrementally. It is also worth noting down all the steps required for building and modifying your own ROS packages. Why not create your own Wiki page listing all the steps there, so that you have them handy next time, learning good practice in documenting your code at the same time?
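The `Twist` modification above boils down to filling in one field of the message each timer tick. Below is a minimal framework-free sketch of that step, using dataclass stand-ins for `geometry_msgs.msg` so it runs without ROS2 installed; in the real node you would import `Twist` from `geometry_msgs.msg` instead:

```python
from dataclasses import dataclass, field

# Stand-ins for the real message classes; in the node itself you would
# write `from geometry_msgs.msg import Twist` instead.
@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class Twist:
    linear: Vector3 = field(default_factory=Vector3)
    angular: Vector3 = field(default_factory=Vector3)

def make_forward_cmd(speed=0.1):
    """Build the command published on cmd_vel every tick (timer_period = 2.0 s)."""
    msg = Twist()
    msg.linear.x = speed  # drive forward at 0.1 m/s; all other fields stay zero
    return msg

cmd = make_forward_cmd()
print(cmd.linear.x, cmd.angular.z)  # 0.1 0.0
```

In the rclpy node, the timer callback would simply publish this message, e.g. `self.publisher_.publish(msg)`.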
The following behaviours demonstrate two basic autonomous robot movements that can serve as a starting point for your projects.
- Using the above code as a template, implement a behaviour which rotates the robot in place by an angle in a fixed number of repeating steps. To achieve that, you will need to modify the `Twist` message to specify the angular speed, and introduce a counter check in `timer_callback` which stops the Timer and the repetitions (use the `Timer.cancel()` function). Adjust the angular velocity and number-of-steps parameters so that the robot executes a full rotation in 8 steps.
- The provided `rob2002_tutorial` package contains the `mover_laser.py` node, which implements a simple laser-based obstacle-avoidance behaviour. Add the node to your own `rob2002_project` package (do not forget about dependencies and declaring the new script in `setup.py`) and try it out with the robot deployed around the geometric shapes. You might need to stack the shapes to make them visible to the laser.
- Modify the `mover_laser` behaviour so that its execution is time-limited: introduce a parameter in `laser_callback` that stops the roaming behaviour after a set number of seconds. Then try out different time limits (e.g. 1, 2 and 3 min.) and note the subjective area coverage for each setting.
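For the rotate-in-place behaviour, the angular velocity follows directly from the timer period and the number of steps. Here is a quick sketch of that arithmetic and of the counter-then-cancel pattern (pure Python; the loop's exit stands in for `Timer.cancel()`, and the exact speed will need tuning on a real robot, since open-loop rotation drifts):

```python
import math

STEPS = 8          # complete a full rotation in 8 steps
TIMER_PERIOD = 2.0 # seconds between timer callbacks, as set earlier

step_angle = 2 * math.pi / STEPS           # radians to turn per step
angular_speed = step_angle / TIMER_PERIOD  # value for msg.angular.z (rad/s)

# Simulate the counting timer_callback: each tick turns the robot for one
# period; once the counter reaches STEPS the timer would be cancelled.
count = 0
turned = 0.0
while count < STEPS:
    count += 1
    turned += angular_speed * TIMER_PERIOD  # rotation accumulated this tick
# at this point the real node would call self.timer.cancel()

print(round(angular_speed, 3), round(turned, 3))  # 0.393 6.283
```

The time-limited `mover_laser` variant follows the same pattern, except the callback compares elapsed time against a limit instead of checking a step counter.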
The following nodes demonstrate the basic functionality required for detecting and counting coloured objects and should be a good starting point for your own implementations.
- The tutorial package provides the `detector_basic.py` node, which demonstrates how to subscribe to LIMO's image topics and perform colour thresholding and object detection. Study the code first, then incorporate the node into your own package (`rob2002_project`). To test the node, run the simulator, insert a coloured object (e.g. a red Cricket ball) from the object library in Gazebo and place the object in front of the robot. You might need to move the robot and the ball manually into a location with more free space around. Then run your detector node from a different terminal window (do not forget to source your package!). You should see the debug windows visualising the image-processing pipeline. The node outputs the detected objects on the `/object_polygon` topic.
- Place more objects around and note the change in the detector's output. Try out different colours of objects and adjust the range of the RGB colour filter accordingly. To change the colour of a simulated object, right-click on the object and select "Edit model". In the Model Editor, right-click on the object again and select "Open Link Inspector". In the Visual tab, select the visual/Material/Script/Name field and change its value to a different Gazebo material (e.g. Gazebo/Green). Click OK, save the model as `unit_sphere_green`, for example, and close the Model Editor. Restart the simulator if needed. See the following list for more information about the Gazebo materials.
- Try out the object detector on a real robot. You will need to change the subscribed image topic, as these differ from the simulation, and adjust the colour range of the RGB colour filter.
- The `counter_basic.py` node demonstrates how to subscribe to the object detector and enables simple object-counting functionality. Perform a similar procedure as with the object detector to incorporate the node into your project. Then run it alongside the detector and observe the output while adding objects in front of the robot. Try it both in simulation and on the real robot.
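The detector/counter pair above reduces to two steps: threshold the image for in-range pixels, then count the resulting blobs. The following framework-free sketch illustrates the idea with plain Python lists in place of OpenCV images; the colour range, test image and helper names are illustrative, not taken from `detector_basic.py`:

```python
# Illustrative "red" range as (R, G, B) lower/upper bounds, in the spirit
# of cv2.inRange; adjust per object colour, as the exercise suggests.
LOWER, UPPER = (150, 0, 0), (255, 100, 100)

def in_range(px):
    """True if every channel of the pixel lies within the colour bounds."""
    return all(lo <= c <= hi for c, lo, hi in zip(px, LOWER, UPPER))

def count_objects(image):
    """Count 4-connected blobs of in-range pixels (one blob = one object)."""
    h, w = len(image), len(image[0])
    mask = [[in_range(image[y][x]) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]  # flood-fill the whole blob
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return count

RED, BLACK = (200, 20, 20), (0, 0, 0)
img = [[BLACK] * 6 for _ in range(4)]
img[1][1] = img[1][2] = RED   # object 1 (two adjacent pixels, one blob)
img[3][4] = RED               # object 2
print(count_objects(img))     # 2
```

In the real pipeline the detector publishes each blob as a polygon on `/object_polygon`, and the counter node subscribes to that topic instead of sharing memory with the detector.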
You now have all the functionality needed for the autonomous object counter! Try running the movement nodes together with the detector and counter nodes, and note any potential challenges you might face in future sessions.