
Controlling a SubT team encompasses access to each robot's sensors and actuators, the ability to navigate each robot to the right location, and, optionally, communication among team members to distribute tasks. Additionally, the locations of multiple artifacts need to be reported. SubT provides a combination of ROS and C++ interfaces to manage these tasks. This section explains these interfaces, grouped by type.

## Navigation

### Unmanned Ground Vehicles (UGVs)

The UGVs implement velocity controllers. They accept Twist messages and continuously publish Odometry messages. With this interface, you can command linear and angular velocities to each robot and obtain its current position estimate.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/cmd_vel | Target velocity | geometry_msgs/Twist |
| /<ROBOT_NAME>/odom | Odometry | nav_msgs/Odometry |
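As a minimal sketch of this interface (assuming a robot named X1, as in the launch examples below), a ROS node can command a constant forward velocity like this:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ugv_cmd_vel_example");
  ros::NodeHandle nh;

  // Velocity commands go to the robot's cmd_vel topic.
  ros::Publisher cmdVelPub =
      nh.advertise<geometry_msgs::Twist>("/X1/cmd_vel", 1);

  ros::Rate rate(10);  // Re-send the command at 10 Hz.
  while (ros::ok())
  {
    geometry_msgs::Twist msg;
    msg.linear.x = 0.5;   // Forward velocity [m/s].
    msg.angular.z = 0.1;  // Yaw rate [rad/s].
    cmdVelPub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```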

The following robots have unique control interfaces while still using a cmd_vel (geometry_msgs/Twist) message for commands:

* The CERBERUS ANYmal B legged robot has a cerberus_anymal_locomotion package to interface between commanded velocities and leg control. This can be integrated into your robot's solution Docker container. See the robot's specifications.md for details.
* The Robotika Kloubak two-section wheeled robot uses separate velocity controllers on the topics cmd_vel/front and cmd_vel/rear for the two sections, which are connected by a passive joint. See the robot's specifications.md for details.

### Unmanned Aerial Vehicles (UAVs)

The linear velocity and yaw rate of UAVs can be controlled through Twist messages. The velocities are expressed in the body frame of the UAV. Each vehicle also has a topic that enables or disables the controller; the controller starts in the enabled state by default. If a disable message (a value of false) is received, the controller sends a final zero-velocity command to the rotors and disables itself. If the vehicle is in the air, disabling the controller will cause it to fall. If an enable message (a value of true) is received, the controller becomes enabled and waits for Twist messages.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/cmd_vel | Target velocity | geometry_msgs/Twist |
| /<ROBOT_NAME>/velocity_controller/enable | Velocity controller state | std_msgs/Bool |

Note that for the Tunnel Circuit, the /<ROBOT_NAME>/cmd_vel topic was an interface to a thrust controller. For the Urban Circuit and later, the topic is an interface to a built-in velocity controller.
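For example, a node might re-enable the controller after a disable and then command a climb (a sketch; the robot name X3 and the rates are placeholders):

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <std_msgs/Bool.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "uav_cmd_vel_example");
  ros::NodeHandle nh;

  ros::Publisher cmdVelPub =
      nh.advertise<geometry_msgs::Twist>("/X3/cmd_vel", 1);
  ros::Publisher enablePub =
      nh.advertise<std_msgs::Bool>("/X3/velocity_controller/enable", 1, true);

  // The controller starts enabled; this is only needed after a disable.
  std_msgs::Bool enable;
  enable.data = true;
  enablePub.publish(enable);

  // Command a climb in the body frame.
  geometry_msgs::Twist msg;
  msg.linear.z = 0.5;  // Vertical velocity [m/s].

  ros::Rate rate(10);
  while (ros::ok())
  {
    cmdVelPub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```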

## Sensors

These are the sensor specifications:

| Sensor type | Specification |
| --- | --- |
| 2D short range 270-deg lidar | 5 m, 270-deg planar lidar |
| 2D long range 270-deg lidar | 30 m, 270-deg planar lidar** |
| 2D long range 360-deg lidar | 25 m, 360-deg planar lidar |
| 3D medium range lidar | 100 m, 360-deg, +/-15-deg vertical lidar** |
| 3D long range lidar | 300 m, 360-deg, +/-15-deg vertical lidar |
| Point lidar | Range sensor (40 m) |
| HD camera | 1280x960, 60-deg 2D camera |
| HD wide camera | 1280x960, 115-deg 2D camera** |
| VGA wide camera | 640x360, 130-deg 2D camera |
| QVGA camera | 320x240, 60-deg 2D camera |
| QVGA RGBD camera | 320x240, 60-deg RGBD camera, 10 m range** |
| VGA RGBD camera | 640x480, 60-deg RGBD camera, 10 m range** |
| RGBD ToF camera | 224x171, 62-deg RGBD camera, 4 m range |
| IMU | 3-axis accelerometer and gyroscope |
| Magnetometer | Returns the magnetic field vector |
| Air pressure sensor | Returns absolute fluid pressure |
| Gas detector | Returns true when in a gas-emitting room |

Each robot can be configured with different sensor suites. The table in the Robots Wiki summarizes all payload options for each robot. Notice that all configurations have at least one inertial measurement unit (IMU) and a gas detector.

**Note:** Some team-submitted models have slightly different sensor specifications to better match real sensors. Details can be found in each robot's SubT Tech Repo item and in the `submitted_models/<ROBOT_NAME>/specifications.md` file.

Each sensor suite is selected via ign launch command-line arguments. The following example launches a team of robots where X1, X2, X3, and X4 each use sensor configuration #1:

```
ign launch -v 4 competition.ign robotName1:=X1 robotConfig1:=X1_SENSOR_CONFIG_1 robotName2:=X2 robotConfig2:=X2_SENSOR_CONFIG_1 robotName3:=X3 robotConfig3:=X3_SENSOR_CONFIG_1 robotName4:=X4 robotConfig4:=X4_SENSOR_CONFIG_1
```

Each sensor publishes data over a ROS topic. Here's a summary of the topics used by each sensor type.

| Sensor type | ROS topic | Message type |
| --- | --- | --- |
| Planar lidar | /<ROBOT_NAME>/front_scan | sensor_msgs/LaserScan |
| 3D lidar | /<ROBOT_NAME>/points | sensor_msgs/PointCloud2 |
| 2D camera | /<ROBOT_NAME>/front/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/camera_info | sensor_msgs/CameraInfo |
|  | /<ROBOT_NAME>/front/optical/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/optical/camera_info | sensor_msgs/CameraInfo |
| Stereo camera | /<ROBOT_NAME>/front/left/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/left/camera_info | sensor_msgs/CameraInfo |
|  | /<ROBOT_NAME>/front/right/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/right/camera_info | sensor_msgs/CameraInfo |
|  | /<ROBOT_NAME>/front/left/optical/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/left/optical/camera_info | sensor_msgs/CameraInfo |
|  | /<ROBOT_NAME>/front/right/optical/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/right/optical/camera_info | sensor_msgs/CameraInfo |
| RGBD camera | /<ROBOT_NAME>/rgbd_camera/depth/points | sensor_msgs/PointCloud2 |
|  | /<ROBOT_NAME>/front/camera_info | sensor_msgs/CameraInfo |
|  | /<ROBOT_NAME>/front/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/depth | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/optical/camera_info | sensor_msgs/CameraInfo |
|  | /<ROBOT_NAME>/front/optical/image_raw | sensor_msgs/Image |
|  | /<ROBOT_NAME>/front/optical/depth | sensor_msgs/Image |
| IMU | /<ROBOT_NAME>/imu/data | sensor_msgs/Imu |
| Magnetometer | /<ROBOT_NAME>/magnetic_field | sensor_msgs/MagneticField |
| Pressure sensor | /<ROBOT_NAME>/air_pressure | sensor_msgs/FluidPressure |
| Battery state | /<ROBOT_NAME>/battery_state | sensor_msgs/BatteryState |
| Gas detector | /<ROBOT_NAME>/gas_detected | std_msgs/Bool |
| Point lidar | /<ROBOT_NAME>/front_cliff_scan | sensor_msgs/LaserScan |

For 2D camera, stereo camera, and RGBD camera sensors, the image and depth data are also available in the optical frame (z forward, x right, y down) via topics containing the /optical/ string in the topic name. More information on coordinate frame conventions can be found in REP 103.
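As an illustration, here is a minimal listener for the planar lidar and front camera topics above (a sketch assuming a robot named X1 carrying those sensors):

```cpp
#include <algorithm>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/LaserScan.h>

// Report the closest return from the planar lidar.
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  float closest =
      *std::min_element(scan->ranges.begin(), scan->ranges.end());
  ROS_INFO("Closest lidar return: %.2f m", closest);
}

// Report basic metadata about each camera frame.
void imageCallback(const sensor_msgs::Image::ConstPtr& img)
{
  ROS_INFO("Image %ux%u (%s)", img->width, img->height,
           img->encoding.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sensor_listener_example");
  ros::NodeHandle nh;
  ros::Subscriber scanSub =
      nh.subscribe("/X1/front_scan", 1, scanCallback);
  ros::Subscriber imageSub =
      nh.subscribe("/X1/front/image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}
```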

## Gimbal

Some robots carry a pan/tilt gimbal that rotates sensors to view different angles around the robot.

The pan and tilt angles can be read from the /<ROBOT_NAME>/joint_state topic under the joint names pan_gimbal_joint and tilt_gimbal_joint. The gimbal pan and tilt angles may be controlled by publishing ROS messages on the command topics listed below.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/joint_state | Gimbal joint angles [rad] and rates [rad/s] | sensor_msgs/JointState |
| /<ROBOT_NAME>/pan_tilt/pan_rate_cmd_double | Gimbal pan angular rate command | std_msgs/Float64 |
| /<ROBOT_NAME>/pan_tilt/tilt_rate_cmd_double | Gimbal tilt angular rate command | std_msgs/Float64 |
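For instance, a node could pan the gimbal slowly while monitoring the joint angles (a sketch assuming a robot named X1 equipped with a gimbal):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <std_msgs/Float64.h>

// Log the gimbal angles reported on the joint_state topic.
void jointStateCallback(const sensor_msgs::JointState::ConstPtr& msg)
{
  for (size_t i = 0; i < msg->name.size(); ++i)
  {
    if (msg->name[i] == "pan_gimbal_joint" ||
        msg->name[i] == "tilt_gimbal_joint")
    {
      ROS_INFO("%s: %.3f rad", msg->name[i].c_str(), msg->position[i]);
    }
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "gimbal_example");
  ros::NodeHandle nh;

  ros::Subscriber jointSub =
      nh.subscribe("/X1/joint_state", 1, jointStateCallback);
  ros::Publisher panPub = nh.advertise<std_msgs::Float64>(
      "/X1/pan_tilt/pan_rate_cmd_double", 1);

  std_msgs::Float64 panRate;
  panRate.data = 0.2;  // Pan rate command [rad/s].

  ros::Rate rate(10);
  while (ros::ok())
  {
    panPub.publish(panRate);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```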

## Communications

A robot can send data to one or more robots on the team. The main communication features are:

* Packets are sent over a single hop, similar to UDP. No MANET algorithm or other packet relay system is provided. Teams can implement their own relay systems if they like.
* If two robots are within communication range, one can attempt to send a packet to the other.
* There is a maximum range beyond which two robots are not neighbors (and thus cannot communicate directly).
* When a robot tries to send a packet to a neighbor, the packet is dropped with some probability.
* There is a maximum size (e.g., 1500 octets) per message.
* There is a maximum data rate allowed among robots communicating over the same network segment (e.g., 54000 kbps).

These features are implemented in a C++ library (libCommsClient.so). The library exposes the CommsClient class, which lets you manage communication within the team. The CommsClient class contains the following relevant functions:

* `CommsClient(localAddress)`: The class constructor; it accepts a local address. An address is a string representing a unique identifier.

  **Important:** In SubT, the address should match the name of the robot in Gazebo.

* `Bind(callback, address, port)`: Binds an address and port to a virtual socket, and delivers incoming messages to the specified callback (see the sketch after this list).

* `SendTo(data, address, port)`: Allows an agent to send data to another individual robot (unicast), all robots (broadcast), or a group of robots (multicast). The addresses for sending broadcast and multicast messages are `kBroadcast` and `kMulticast`, respectively.

  The address `subt::kBaseStationName` can be used to communicate with the base station and report the location of an artifact. The artifact parameter is a serialized Google Protobuf message that you should populate and serialize (see the artifact message description in the SubT repo). Here is an example:

  ```cpp
  // We only need the position.
  ignition::msgs::Pose pose;
  pose.mutable_position()->set_x(1);
  pose.mutable_position()->set_y(2);
  pose.mutable_position()->set_z(3);

  // Fill the type and pose.
  subt::msgs::Artifact artifact;
  artifact.set_type(static_cast<uint32_t>(subt::ArtifactType::TYPE_BACKPACK));
  artifact.mutable_pose()->CopyFrom(pose);

  // Serialize the artifact.
  std::string serializedData;
  if (!artifact.SerializeToString(&serializedData)) {
    std::cerr << "ReportArtifact(): Error serializing message\n"
              << artifact.DebugString() << std::endl;
  }

  // Report it.
  this->client->SendTo(serializedData, subt::kBaseStationName);
  ```
    
  **Important:** Calling `SendTo()` to the base station does not guarantee that the base station receives your message. You are sending a regular message, subject to the same rules as any other message.
    
* `Host()`: Returns the address of the robot.

* `SendBeacon()`: Manually triggers a beacon (broadcast) packet, e.g., to stimulate neighbor discovery.

* `StartBeaconInterval(ros::Duration period)`: Starts a timer to periodically send beacon packets.

* `Neighbors()`: Returns neighbor state information (based on received packets).
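Putting these pieces together, a minimal client setup could look like the sketch below (the header path is an assumption; check the SubT repo for the exact include):

```cpp
#include <string>
#include <ros/ros.h>
// Header path is an assumption; check the SubT repo for the exact include.
#include <subt_communication_broker/subt_communication_client.h>

// All incoming payloads (teammate data, base station ACKs) arrive here.
void onMessage(const std::string &_srcAddress, const std::string &_dstAddress,
               const uint32_t _dstPort, const std::string &_data)
{
  ROS_INFO("Received %zu bytes from [%s]", _data.size(), _srcAddress.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "comms_example");
  ros::NodeHandle nh;

  subt::CommsClient client("X1");  // Must match the Gazebo robot name.
  client.Bind(&onMessage, "X1");   // Deliver incoming messages to onMessage.
  client.StartBeaconInterval(ros::Duration(1.0));  // Beacon once per second.

  ros::spin();
  return 0;
}
```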

## Breadcrumbs

A breadcrumb is a communications device, attached to a ground vehicle, that can be dropped into the world. The goal of breadcrumbs is to extend the team's communication range. In the SubT context, once a breadcrumb is deployed, it cannot move; it operates as a static network relay.

A breadcrumb deployment can be requested by publishing a ROS message on the topic /<ROBOT_NAME>/breadcrumb/deploy (e.g., /X1/breadcrumb/deploy). The type of the message is std_msgs/Empty. The ground vehicle configurations that include breadcrumbs, and the number available on each, are listed on the Robots Wiki.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/breadcrumb/deploy | Drop a breadcrumb | std_msgs/Empty |
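For example, to drop a breadcrumb from robot X1 on the command line (mirroring the detach example in the Marsupials section):

```
rostopic pub /X1/breadcrumb/deploy std_msgs/Empty "{}"
```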

Messages are delivered to their final destination if the source and destination are within direct communication range, or if there is a path from the source through a sequence of one or more relays to the destination. Keep in mind that every hop (source to first relay, relay to relay, and last relay to destination) must be within communication range.

## Marsupials

Refer to the Marsupial Tutorial for information about marsupial vehicles.

A marsupial detachment can be requested by publishing a ROS message on the topic /<CHILD_ROBOT_NAME>/detach (e.g., /Xb/detach). The type of the message is std_msgs/Empty.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<CHILD_ROBOT_NAME>/detach | Detach from a marsupial parent | std_msgs/Empty |

This example spawns two marsupial pairs:

```
ign launch -v 4 cave_circuit.ign robotName1:=Xa robotConfig1:=X1_SENSOR_CONFIG_1 robotName2:=Xb robotConfig2:=X3_SENSOR_CONFIG_1 robotName3:=Xc robotConfig3:=X1_SENSOR_CONFIG_1 robotName4:=Xd robotConfig4:=X3_SENSOR_CONFIG_1 marsupial1:=Xa:Xb marsupial2:=Xc:Xd
```

To detach Xb, run:

```
rostopic pub /Xb/detach std_msgs/Empty "{}"
```

## Artifacts

One of the main tasks to complete during the competition is to identify and report the poses of multiple artifacts. Teams should use the SendTo() function of the CommsClient class to report artifact locations to the base station.

The artifact pose should be reported relative to the artifact origin. There is no physical object representing the artifact origin; instead, robots can query the ROS service /subt/pose_from_artifact_origin, which returns the robot's pose relative to the artifact origin. Note that this service is only available while the robot is in the staging area.
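A sketch of querying this service follows; it assumes the subt_msgs::PoseFromArtifact service type, with a robot_name request field and pose/success response fields, so verify the .srv definition in the SubT repo:

```cpp
#include <ros/ros.h>
// Service type and field names are assumptions; verify the .srv definition.
#include <subt_msgs/PoseFromArtifact.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "origin_query_example");
  ros::NodeHandle nh;

  subt_msgs::PoseFromArtifact srv;
  srv.request.robot_name.data = "X1";  // Robot must be in the staging area.

  if (ros::service::call("/subt/pose_from_artifact_origin", srv) &&
      srv.response.success)
  {
    ROS_INFO("Pose relative to the artifact origin: [%.2f, %.2f, %.2f]",
             srv.response.pose.pose.position.x,
             srv.response.pose.pose.position.y,
             srv.response.pose.pose.position.z);
  }
  else
  {
    ROS_ERROR("Failed to query /subt/pose_from_artifact_origin");
  }
  return 0;
}
```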

When a robot communicates with the base station, received artifact requests are acknowledged with a message of type ArtifactScore. Keep in mind that the ACK can be lost, like any other message. Repeated artifact requests (a request is considered repeated when it contains the same artifact type and position as a previously sent request) do not count as new requests for scoring purposes. In other words, a team will not lose an extra token when retrying to send a previously reported artifact.

Like all other incoming messages, ACKs arrive through the callback registered with Bind(). From that callback, you can check the source and destination of the message, and obtain the contents by deserializing the incoming data, for example:

```cpp
void boundCallback(const std::string &_srcAddress,
                   const std::string &_dstAddress,
                   const uint32_t _dstPort,
                   const std::string &_data)
{
  subt::msgs::ArtifactScore res;
  if (!res.ParseFromString(_data))
  {
    ROS_ERROR("CommClientCallback(): Error deserializing message.");
  }

  // Add code to handle communication callbacks.
  ROS_INFO("Message from [%s] to [%s] on port [%u]:\n [%s]", _srcAddress.c_str(),
      _dstAddress.c_str(), _dstPort, res.DebugString().c_str());
}
```

**Important:** The scoring plugin will find the closest artifact to the reported pose that matches the reported type (e.g., if you report a backpack at pose [10, 20, 0, 0, 0, 0], the scoring plugin will compare your reported pose with the closest backpack). The score is based on the accuracy of the report and its timing.

| Service | Description |
| --- | --- |
| /subt/pose_from_artifact_origin | Robot pose relative to the artifact origin |

## Score

The current score is periodically published on the /subt/score topic. You can check your task score at any time with:

```
rostopic echo /subt/score
```

| Topic | Description |
| --- | --- |
| /subt/score | Current score |

See also: /clock ROS topic

## Starting and Finishing a Trial

Two ROS services are provided for starting and finishing a trial. Both of these services are optional. The start service can be called whenever you are ready to begin the competition. The start service is also automatically called when the first robot leaves the staging area. It's good practice to call the finish service when you are done with a competition run. If any robot instance calls the /subt/finish service, the entire simulation instance will be terminated. The finish service will automatically be called when any of the termination criteria are met (refer to the SubT Challenge rules document).

| Service | Description | Request Message | Response Message |
| --- | --- | --- | --- |
| /subt/start | Start a trial | std_srvs::SetBool::Request | std_srvs::SetBool::Response |
| /subt/finish | Complete a trial | std_srvs::SetBool::Request | std_srvs::SetBool::Response |
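For example, calling the finish service at the end of a run (a sketch using the std_srvs types listed above):

```cpp
#include <ros/ros.h>
#include <std_srvs/SetBool.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "finish_example");
  ros::NodeHandle nh;

  std_srvs::SetBool srv;
  srv.request.data = true;  // Request that the trial be finished.

  if (ros::service::call("/subt/finish", srv))
    ROS_INFO("Finish response: success=%d, message=%s",
             srv.response.success, srv.response.message.c_str());
  return 0;
}
```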

## Custom data logging

Debugging can be difficult on Cloudsim, since all submitted solutions run automatically and users cannot ssh into a robot. This problem is mitigated by recording, with rosbag, all topics matching /robot_data(.*). Recording starts automatically; any robot that wants data recorded should publish messages on a topic matching /robot_data(.*).

Up to 2GB of data, separated into two 1GB files, can be recorded.
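For example, a node could publish debug strings on such a topic (the suffix /X1/debug below is just an illustration; any topic matching /robot_data(.*) is recorded):

```cpp
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "data_logger_example");
  ros::NodeHandle nh;

  // Topics matching /robot_data(.*) are recorded automatically by rosbag.
  ros::Publisher logPub =
      nh.advertise<std_msgs::String>("/robot_data/X1/debug", 10);

  ros::Rate rate(1);
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "waypoint reached";  // Any debug payload you want recorded.
    logPub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```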

## Ground truth pose

For debugging purposes, the flag enableGroundTruth:=true can be passed in the arguments to ign launch, which causes TF messages to be published with ground truth transforms for each vehicle. The /<ROBOT_NAME>/pose topic will also include a transform from the world frame to each vehicle. Note that this topic also includes transforms from the model frame to the links and sensors of the vehicle, so using it directly would require filtering to extract only the world-to-vehicle transform. The recommended way to obtain ground truth information is via TF.
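A sketch of reading the ground truth pose via TF follows; the frame names "world" and "X1" are assumptions, so verify the published transforms with rostopic echo /tf:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/TransformStamped.h>
#include <tf2_ros/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ground_truth_example");
  ros::NodeHandle nh;

  tf2_ros::Buffer tfBuffer;
  tf2_ros::TransformListener tfListener(tfBuffer);

  ros::Rate rate(1);
  while (ros::ok())
  {
    try
    {
      // Frame names are assumptions; verify them on the /tf topic.
      geometry_msgs::TransformStamped tf =
          tfBuffer.lookupTransform("world", "X1", ros::Time(0));
      ROS_INFO("Ground truth position: [%.2f, %.2f, %.2f]",
               tf.transform.translation.x,
               tf.transform.translation.y,
               tf.transform.translation.z);
    }
    catch (const tf2::TransformException &ex)
    {
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}
```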