
The control of a SubT Challenge team encompasses access to each robot's sensors and actuators, the ability to navigate each robot to the right location, and, optionally, communication among team members to distribute tasks. Additionally, the locations of multiple artifacts need to be reported. The SubT Virtual Testbed provides a combination of ROS and C++ interfaces to manage these tasks. This section explains these interfaces, grouped by type.

## Navigation

### Unmanned Ground Vehicles (UGVs)

The UGVs implement velocity controllers. They accept Twist messages and continuously publish Odometry messages. With this interface, you can command linear and angular velocities to each robot and obtain its current position estimate.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/cmd_vel | Target velocity | geometry_msgs/Twist |
| /<ROBOT_NAME>/odom | Odometry | nav_msgs/Odometry |
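For a quick check of this interface, here is a minimal roscpp sketch; the robot name X1 and the node name are illustrative, so substitute whatever names you spawned:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <nav_msgs/Odometry.h>

// Print the current position estimate reported by the UGV.
void odomCallback(const nav_msgs::Odometry::ConstPtr &_msg)
{
  ROS_INFO("Robot at x=%.2f y=%.2f z=%.2f",
           _msg->pose.pose.position.x,
           _msg->pose.pose.position.y,
           _msg->pose.pose.position.z);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "ugv_velocity_test");
  ros::NodeHandle nh;

  // "X1" is only an example robot name.
  ros::Publisher cmdPub =
      nh.advertise<geometry_msgs::Twist>("/X1/cmd_vel", 1);
  ros::Subscriber odomSub = nh.subscribe("/X1/odom", 10, odomCallback);

  // Drive forward slowly while turning slightly.
  geometry_msgs::Twist cmd;
  cmd.linear.x = 0.5;   // m/s
  cmd.angular.z = 0.1;  // rad/s

  ros::Rate rate(10);
  while (ros::ok())
  {
    cmdPub.publish(cmd);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```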

The following robots have unique control interfaces while still utilizing a cmd_vel (geometry_msgs/Twist) message for commands:

  • The CERBERUS ANYmal B and CERBERUS ANYmal C legged robots have a cerberus_anymal_locomotion package to interface between commanded velocities and leg control. This can be integrated into your robot's solution docker container. See the robot's specifications.md for details.
  • The Spot legged robot has a subt_spot package to interface between commanded velocities and leg control. This can be integrated into your robot's solution docker container. See the robot's specifications.md for details.
  • The CTU-CRAS-NORLAB Lily legged robot has a ctu_cras_norlab_hexapod_controller package to interface between commanded velocities and leg control. This can be integrated into your robot's solution docker container. See the robot's specifications.md for details.
  • The Robotika Kloubak two-section wheeled robot uses separate velocity controllers on topics cmd_vel/front and cmd_vel/rear for the two sections, which are connected by a passive joint. See the robot's specifications.md for details.

### Unmanned Aerial Vehicles (UAVs)

The linear velocity and yaw rate of the UAVs can be controlled through Twist messages. The velocities are expressed in the body frame of the UAV. Each vehicle also has a topic that can be used to enable or disable the velocity controller, which starts in the enabled state by default. If a disable message (a value of false) is received, the controller sends a final zero-velocity command to the rotors and disables itself; if the vehicle is in the air, disabling the controller will cause it to fall. If an enable message (a value of true) is received, the controller becomes enabled and waits for Twist messages.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/cmd_vel | Target velocity | geometry_msgs/Twist |
| /<ROBOT_NAME>/velocity_controller/enable | Velocity controller state | std_msgs/Bool |
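The same pattern applies to a UAV. The sketch below (the robot name X3 is only an example) makes sure the velocity controller is enabled and then commands a slow climb; remember that the velocities are interpreted in the body frame:

```cpp
#include <ros/ros.h>
#include <std_msgs/Bool.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "uav_takeoff_test");
  ros::NodeHandle nh;

  // "X3" is only an example robot name.
  ros::Publisher enablePub = nh.advertise<std_msgs::Bool>(
      "/X3/velocity_controller/enable", 1, true /*latched*/);
  ros::Publisher cmdPub =
      nh.advertise<geometry_msgs::Twist>("/X3/cmd_vel", 1);

  // The controller starts enabled; this is only needed after a disable.
  std_msgs::Bool enable;
  enable.data = true;
  enablePub.publish(enable);

  // Climb at 0.5 m/s while yawing slowly (body-frame velocities).
  geometry_msgs::Twist cmd;
  cmd.linear.z = 0.5;
  cmd.angular.z = 0.2;

  ros::Rate rate(20);
  while (ros::ok())
  {
    cmdPub.publish(cmd);
    rate.sleep();
  }
  return 0;
}
```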

Note that for the Tunnel Circuit, the /<ROBOT_NAME>/cmd_vel topic was an interface to a type of thrust controller. For the Urban Circuit and later, the topic is an interface to a built-in velocity controller.

## Sensors

These are the sensor specifications:

| Sensor type | Specification |
| --- | --- |
| 2D short range 270-deg lidar | 5 m, 270-deg planar lidar |
| 2D long range 270-deg lidar | 30 m, 270-deg planar lidar** |
| 2D long range 360-deg lidar | 25 m, 360-deg planar lidar |
| 3D medium range lidar | 100 m, 360-deg, +/-15-deg vertical lidar** |
| 3D long range lidar | 300 m, 360-deg, +/-15-deg vertical lidar |
| Point lidar | Range sensor (40 m) |
| HD camera | 1280x960, 60-deg 2D camera** |
| HD wide camera | 1280x960, 115-deg 2D camera** |
| VGA wide camera | 640x360, 130-deg 2D camera |
| QVGA camera | 320x240, 60-deg 2D camera |
| QVGA RGBD camera | 320x240, 60-deg RGBD camera, 10 m range** |
| VGA RGBD camera | 640x480, 60-deg RGBD camera, 10 m range** |
| ToF camera | 224x171, 62-deg depth camera, 4 m range |
| Thermal camera | 384x288, 8-bit camera, -20 to 400 degC temperature range** |
| IMU | 3-axis accelerometer and gyroscope |
| Magnetometer | Returns the magnetic field vector |
| Air pressure sensor | Returns absolute fluid pressure |
| Gas detector | Returns true when in a gas-emitting room |

Each robot can be configured with different sensor suites. The table in the Robots Wiki summarizes all payload options for each robot. Notice that all configurations have at least one inertial measurement unit (IMU) and a gas detector.

**Note: Some team-submitted models have slightly different sensor specifications to better match real sensors. Details can be found in each robot's SubT Tech Repo item and the submitted_models/<ROBOT_NAME>/specifications.md file.

The sensor suite for each robot is selected via ign launch command-line arguments. The following example launches a team of four robots, X1 through X4, each using sensor configuration #1 of its robot model:

```
ign launch -v 4 competition.ign robotName1:=X1 robotConfig1:=X1_SENSOR_CONFIG_1 robotName2:=X2 robotConfig2:=X2_SENSOR_CONFIG_1 robotName3:=X3 robotConfig3:=X3_SENSOR_CONFIG_1 robotName4:=X4 robotConfig4:=X4_SENSOR_CONFIG_1
```

Each sensor publishes data over a ROS topic. Here's a summary of the topics used by each sensor type.

| Sensor type | ROS topic | Message type |
| --- | --- | --- |
| Planar Lidar | /<ROBOT_NAME>/front_scan | sensor_msgs/LaserScan |
| 3D Lidar | /<ROBOT_NAME>/points | sensor_msgs/PointCloud2 |
| 2D camera | /<ROBOT_NAME>/front/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/front/optical/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/optical/camera_info | sensor_msgs/CameraInfo |
| Stereo camera | /<ROBOT_NAME>/front/left/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/left/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/front/right/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/right/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/front/left/optical/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/left/optical/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/front/right/optical/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/right/optical/camera_info | sensor_msgs/CameraInfo |
| RGBD camera | /<ROBOT_NAME>/rgbd_camera/depth/points | sensor_msgs/PointCloud2 |
| | /<ROBOT_NAME>/front/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/front/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/depth | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/optical/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/front/optical/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/front/optical/depth | sensor_msgs/Image |
| ToF camera | /<ROBOT_NAME>/tof_front/depth/points | sensor_msgs/PointCloud2 |
| | /<ROBOT_NAME>/tof_front/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/tof_front/depth | sensor_msgs/Image |
| | /<ROBOT_NAME>/tof_front/optical/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/tof_front/optical/depth | sensor_msgs/Image |
| Thermal camera | /<ROBOT_NAME>/thermal_camera/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/thermal_camera/image_raw | sensor_msgs/Image |
| | /<ROBOT_NAME>/thermal_camera/optical/camera_info | sensor_msgs/CameraInfo |
| | /<ROBOT_NAME>/thermal_camera/optical/image_raw | sensor_msgs/Image |
| IMU | /<ROBOT_NAME>/imu/data | sensor_msgs/Imu |
| Magnetometer | /<ROBOT_NAME>/magnetic_field | sensor_msgs/MagneticField |
| Pressure sensor | /<ROBOT_NAME>/air_pressure | sensor_msgs/FluidPressure |
| Battery state | /<ROBOT_NAME>/battery_state | sensor_msgs/BatteryState |
| Gas detector | /<ROBOT_NAME>/gas_detected | std_msgs/Bool |
| Point lidar | /<ROBOT_NAME>/front_cliff_scan | sensor_msgs/LaserScan |
| Contact | /<ROBOT_NAME>/leg_contacts/front_left | bosdyn_spot/ContactsStamped and bosdyn_spot/LogicalContact |

For all camera sensors, the image and depth data (when applicable) are also available in the optical frame (z forward, x right, y down) via topics containing /optical/ in the topic name. More information on coordinate frame conventions can be found in REP 103.
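As an illustration of consuming these topics, the sketch below subscribes to the planar lidar and the front camera. The robot name X1 is assumed, and each topic only exists if the selected sensor configuration actually carries that sensor:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/Image.h>

// Report the closest valid return in the planar lidar scan.
void scanCallback(const sensor_msgs::LaserScan::ConstPtr &_scan)
{
  float closest = _scan->range_max;
  for (const float r : _scan->ranges)
    if (r >= _scan->range_min && r < closest)
      closest = r;
  ROS_INFO("Closest obstacle: %.2f m", closest);
}

// Report basic metadata of the front camera image.
void imageCallback(const sensor_msgs::Image::ConstPtr &_img)
{
  ROS_INFO("Image %ux%u (%s)", _img->width, _img->height,
           _img->encoding.c_str());
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "sensor_listener");
  ros::NodeHandle nh;

  // "X1" is only an example robot name.
  ros::Subscriber scanSub = nh.subscribe("/X1/front_scan", 10, scanCallback);
  ros::Subscriber imgSub = nh.subscribe("/X1/front/image_raw", 1, imageCallback);

  ros::spin();
  return 0;
}
```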

## Joints & Gimbals

Some robots use a gimbal to rotate sensors toward different view angles around the robot, or have other active joints (e.g., legs or flippers) that increase maneuverability over uneven terrain.

The angles of the relevant robot joints can be read from the /<ROBOT_NAME>/joint_state topic. For example, the MARBLE Husky's pan-tilt gimbal has joints named pan_gimbal_joint and tilt_gimbal_joint. The joint angles can be controlled by publishing ROS messages on the command topics listed below. Because robot models were submitted by Systems Track teams to match their hardware counterparts, the command topics and message types vary across robots, as listed.

| ROS Topic | Description | Message type | Applicable Robot(s) |
| --- | --- | --- | --- |
| /<ROBOT_NAME>/joint_state | Joint angles [rad] and rates [rad/s] | sensor_msgs/JointState | All robots with active joints |
| /<ROBOT_NAME>/pan_tilt/<AXIS>_rate_cmd_double | Angular rate command for a single axis of a pan/tilt gimbal | std_msgs/Float64 | MARBLE Husky, HD2 |
| /<ROBOT_NAME>/lidar_gimbal/<AXIS>_rate_cmd_double | Angular rate command for a single axis of a lidar gimbal (roll or pan axis) | std_msgs/Float64 | e.g., OzBot ATR, DTR, Absolem |
| /<ROBOT_NAME>/flippers_cmd_pos/<FLIPPER> | Angular position command for a single flipper (front_right, rear_left, etc.) | std_msgs/Float64 | Tracked robots with flippers, e.g., Absolem |
| /<ROBOT_NAME>/flippers_cmd_vel/<FLIPPER> | Angular velocity command for a single flipper (front_right, rear_left, etc.) | std_msgs/Float64 | Tracked robots with flippers, e.g., Absolem |
| /<ROBOT_NAME>/desired_joint_positions | Leg joint angular position and velocity commands for all joints | mav_msgs/Actuators | Legged robots, e.g., ANYmal B |
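For instance, a minimal sketch for slewing the MARBLE Husky's pan axis could look like the following; it assumes the robot is named X1 and that <AXIS> resolves to pan, and it reads the gimbal angles back from the joint_state topic:

```cpp
#include <ros/ros.h>
#include <std_msgs/Float64.h>
#include <sensor_msgs/JointState.h>

// Print the gimbal joint angles (joint names taken from the MARBLE Husky).
void jointCallback(const sensor_msgs::JointState::ConstPtr &_msg)
{
  for (size_t i = 0; i < _msg->name.size(); ++i)
    if (_msg->name[i] == "pan_gimbal_joint" ||
        _msg->name[i] == "tilt_gimbal_joint")
      ROS_INFO("%s: %.3f rad", _msg->name[i].c_str(), _msg->position[i]);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "gimbal_test");
  ros::NodeHandle nh;

  // "X1" stands in for a MARBLE Husky configuration with a pan/tilt gimbal.
  ros::Subscriber jointSub = nh.subscribe("/X1/joint_state", 10, jointCallback);
  ros::Publisher panPub =
      nh.advertise<std_msgs::Float64>("/X1/pan_tilt/pan_rate_cmd_double", 1);

  // Slew the pan axis at 0.2 rad/s; publish 0.0 to stop it.
  std_msgs::Float64 rateCmd;
  rateCmd.data = 0.2;

  ros::Rate loop(10);
  while (ros::ok())
  {
    panPub.publish(rateCmd);
    ros::spinOnce();
    loop.sleep();
  }
  return 0;
}
```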

## Collision Damage

Any robot that experiences a collision with a change in kinetic energy above a given threshold will become disabled by the Damage Plugin to simulate mechanical failure. The robot's sensors and communications will continue to operate, but the robot will be unable to move its active joints (e.g., wheels, legs, propellers, gimbals). There is no ROS-message-based indication of robot damage, as mechanical failures are often undetectable by sensors.

## Communications

A robot can send data to one or more robots on the team. The main communication features are:

  • Packets are sent over a single hop, similar to UDP. No MANET algorithm or other packet relay system is provided; teams can implement their own relay systems if they like.
  • If two robots are within communication range, one can try to send a packet to the other.
  • There is a maximum range beyond which two robots are not neighbors (and thus cannot communicate directly).
  • When a robot tries to send a packet to a neighbor, that packet is dropped with some probability.
  • There is a maximum size (e.g., 1500 octets) per message.
  • There is a maximum data rate allowed among robots communicating over the same network segment (e.g., 54000 kbps).

These features are implemented in a C++ library (libCommsClient.so). This library exposes the CommsClient class, which lets you exercise communication within the team. The CommsClient class contains the following relevant functions (a combined usage sketch follows the list):

  • CommsClient(localAddress): This is the class constructor and accepts a local address. An address is a string representing a unique identifier.

    Important: The address should match the name of the robot in Gazebo.

  • Bind(callback, address, port): This method binds an address and port to a virtual socket, and sends incoming messages to the specified callback.

  • SendTo(data, address, port): This method allows an agent to send data to another individual robot (unicast), all robots (broadcast), or a group of robots (multicast). The addresses for sending broadcast and multicast messages are "kBroadcast" and "kMulticast", respectively.

    The address "subt::kBaseStationName" can be used to communicate with the base station and report the location of an artifact. The artifact parameter is a serialized Google Protobuf message that you should populate and serialize. Here's the message description. Next you can find an example:

      // We only need the position.
      ignition::msgs::Pose pose;
      pose.mutable_position()->set_x(1);
      pose.mutable_position()->set_y(2);
      pose.mutable_position()->set_z(3);
    
      // Fill the type and pose.
      subt::msgs::Artifact artifact;
      artifact.set_type(static_cast<uint32_t>(subt::ArtifactType::TYPE_BACKPACK));
      artifact.mutable_pose()->CopyFrom(pose);
    
      // Serialize the artifact.
      std::string serializedData;
      if (!artifact.SerializeToString(&serializedData)) {
        std::cerr << "ReportArtifact(): Error serializing message\n"
                  << artifact.DebugString() << std::endl;
      }
    
      // Report it.
      this->client->SendTo(serializedData, subt::kBaseStationName);
    
    **Important:** Calling `SendTo()` to the base station does not guarantee that the base station receives your message. You are sending a regular message to the base station, subject to the same rules as any other message.
    
  • Host(): This method returns the address of the robot.

  • SendBeacon(): Manually trigger a beacon (broadcast) packet (for stimulating neighbor discovery).

  • StartBeaconInterval(ros::Duration period): Start a timer to periodically send beacon packets.

  • Neighbors(): Return neighbor state information (based on received packets).
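Here is a minimal sketch of how these calls typically fit together in a solution node. The header path shown is an assumption (check where the CommsClient header lives in your workspace), the callback signature matches the one used in the Artifacts section below, and the teammate address "X2" is only illustrative:

```cpp
// Header location is an assumption; verify it against your SubT workspace.
#include <subt_communication_broker/subt_communication_client.h>

#include <ros/ros.h>
#include <string>

// All incoming packets (including base station ACKs) arrive here.
void onPacket(const std::string &_srcAddress,
              const std::string &_dstAddress,
              const uint32_t _dstPort,
              const std::string &_data)
{
  ROS_INFO("Packet from [%s] (%zu bytes)", _srcAddress.c_str(), _data.size());
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "comms_test");
  ros::NodeHandle nh;

  // The local address must match the robot name used in Gazebo.
  subt::CommsClient client("X1");
  client.Bind(&onPacket, "X1");

  // Broadcast a beacon once per second so neighbors can discover this robot.
  client.StartBeaconInterval(ros::Duration(1.0));

  // Send a small unicast payload to a teammate named "X2" (example name).
  client.SendTo("hello from X1", "X2");

  ros::spin();
  return 0;
}
```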

## Breadcrumbs

A breadcrumb is a communications device, attached to a ground vehicle, that can be dropped into the world. The goal of breadcrumbs is to extend the communication range of the team. In the SubT Simulator, once a breadcrumb is deployed, it cannot move; it operates as a static network relay.

A breadcrumb deployment can be requested by publishing a ROS message on topic /<ROBOT_NAME>/breadcrumb/deploy (e.g.: /X1/breadcrumb/deploy). The type of the message is std_msgs/Empty. The ground vehicle configurations that include breadcrumbs, and the finite number available, are listed on the Robots Wiki.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<ROBOT_NAME>/breadcrumb/deploy | Drop a breadcrumb | std_msgs/Empty |
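A deployment can also be triggered from code. The sketch below publishes a single std_msgs/Empty message; the robot name X1 is illustrative and must correspond to a configuration that actually carries breadcrumbs:

```cpp
#include <ros/ros.h>
#include <std_msgs/Empty.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "drop_breadcrumb");
  ros::NodeHandle nh;

  // "X1" is only an example; the robot must carry breadcrumbs.
  ros::Publisher deployPub = nh.advertise<std_msgs::Empty>(
      "/X1/breadcrumb/deploy", 1, true /*latched*/);

  // Give the latched publisher a moment to connect, then request one drop.
  ros::Duration(1.0).sleep();
  deployPub.publish(std_msgs::Empty());
  ros::Duration(1.0).sleep();

  return 0;
}
```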

Messages are delivered to their final destinations if the source and destination are directly within communication range, or if there is a path from the source through a sequence of one or more relays to the destination. Keep in mind that every hop (from the source to the first relay, between intermediate relays, and from the last relay to the destination) needs to be within communication range.

## Marsupials

Refer to the Marsupial Tutorial for information about marsupial vehicles.

A marsupial detachment can be requested by publishing a ROS message on topic /<childRobotName>/detach (e.g.: /Xb/detach). The type of the message is std_msgs/Empty.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /<CHILD_ROBOT_NAME>/detach | Detach from a marsupial parent | std_msgs/Empty |

This example spawns two marsupials:

```
ign launch -v 4 competition.ign worldName:=simple_cave_01 circuit:=cave robotName1:=Xa robotConfig1:=X1_SENSOR_CONFIG_1 robotName2:=Xb robotConfig2:=X3_SENSOR_CONFIG_1 robotName3:=Xc robotConfig3:=X1_SENSOR_CONFIG_1 robotName4:=Xd robotConfig4:=X3_SENSOR_CONFIG_1 marsupial1:=Xa:Xb marsupial2:=Xc:Xd
```

To detach Xb, run:

```
rostopic pub /Xb/detach std_msgs/Empty "{}"
```

## Artifacts

One of the main tasks to complete during the competition is to identify and report the pose of multiple artifacts. Teams should use the SendTo() function from the CommsClient class to report the location of artifacts to the base station.

The artifact pose should be reported relative to the artifact origin. There is no physical object representing the artifact origin. Instead, robots can query the ROS service /subt/pose_from_artifact_origin, which returns the robot's pose relative to the artifact origin. Note that this service is only available while the robot is in the staging area.
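A sketch of querying this service from roscpp follows. The service type and field names used here (subt_msgs/PoseFromArtifact with a robot_name request and a pose/success response) are assumptions; verify them against the subt_msgs package in your installation:

```cpp
#include <ros/ros.h>
// Service type is an assumption; check the subt_msgs package.
#include <subt_msgs/PoseFromArtifact.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "origin_query");
  ros::NodeHandle nh;

  // The robot must still be inside the staging area for this call to succeed.
  subt_msgs::PoseFromArtifact srv;
  srv.request.robot_name.data = "X1";  // example robot name

  if (ros::service::call("/subt/pose_from_artifact_origin", srv) &&
      srv.response.success)
  {
    ROS_INFO("Robot pose relative to the artifact origin: [%.2f, %.2f, %.2f]",
             srv.response.pose.pose.position.x,
             srv.response.pose.pose.position.y,
             srv.response.pose.pose.position.z);
  }
  else
  {
    ROS_ERROR("Failed to query /subt/pose_from_artifact_origin");
  }
  return 0;
}
```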

When a robot communicates with the base station, received artifact requests are acknowledged by sending back a message of type ArtifactScore. Keep in mind that the ACK can be lost just like any other message. Repeated artifact requests (a request is considered repeated when it contains the same artifact type and position as a previously sent request) do not count as new requests for scoring purposes. In other words, a team will not lose an extra token while retrying to send a previously reported artifact. The ACK sent for a duplicate report will contain the score_change of the original report.

Like all other incoming messages, the ACKs come through the callback connected with Bind(). From that callback, you can check the source and destination of the message. The message contents can be obtained by deserializing the incoming data into a message, for example:

```cpp
void boundCallback(const std::string &_srcAddress,
                   const std::string &_dstAddress,
                   const uint32_t _dstPort,
                   const std::string &_data)
{
  subt::msgs::ArtifactScore res;
  if (!res.ParseFromString(_data))
  {
    ROS_ERROR("CommClientCallback(): Error deserializing message.");
  }

  // Add code to handle communication callbacks.
  ROS_INFO("Message from [%s] to [%s] on port [%u]:\n [%s]", _srcAddress.c_str(),
      _dstAddress.c_str(), _dstPort, res.DebugString().c_str());
}
```

Important: The scoring plugin will find the closest artifact near the reported pose that matches the requested type (e.g.: if you're reporting a backpack at pose [10, 20, 0, 0, 0, 0], the scoring plugin will compare your reported pose with the closest backpack). The score is based on the accuracy of the report and its timing.

| Service | Description |
| --- | --- |
| /subt/pose_from_artifact_origin | Robot pose relative to the artifact origin |

## Score

The current score is periodically published under the /subt/score topic. You can check your task score at any time with:

```
rostopic echo /subt/score
```

| Topic | Description |
| --- | --- |
| /subt/score | Current score |

## Competition Clock

The competition clock counts down the time remaining in a phase. There are three phases:

  1. setup: This phase is active from when simulation starts to when the scored run period starts.
  2. run: This phase is active from when the scored competition run starts to when the run duration has elapsed, or earlier if the run is ended manually. See Starting and Finishing a Trial for instructions on manually starting/ending this phase.
  3. finished: This phase is active when the competition run finishes.

The competition clock requires the launch argument durationSec:=<sim_seconds_allowed>, otherwise the run phase will display zero.

```
rostopic echo /subt/run_clock
```

| Topic | Description |
| --- | --- |
| /subt/run_clock | Current phase and time remaining |

See also: /clock ROS topic for tracking simulation time.

## Starting and Finishing a Trial

Two ROS services are provided for manually starting and finishing the "run" phase when competition scoring is active. Both of these services are optional.

The run phase can begin in one of three ways:

  • manually, by calling the /subt/start service whenever your robots are ready to begin the competition run, OR
  • automatically, when the first robot leaves the staging area, OR
  • automatically, after 15 minutes of simulation time have passed

It's good practice to call the finish service when you are done with a competition run. If any robot instance calls the /subt/finish service, the run will end and the simulation instance will be terminated. The finish service will also be called automatically when any of the termination criteria are met (refer to the SubT Challenge Rules document on the Resources page).

| Service | Description | Request Message | Response Message |
| --- | --- | --- | --- |
| /subt/start | Start a trial | std_srvs::SetBool::Request | std_srvs::SetBool::Response |
| /subt/finish | Complete a trial | std_srvs::SetBool::Request | std_srvs::SetBool::Response |
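A minimal sketch for calling both services is shown below; it assumes that setting the boolean request field to true triggers the corresponding transition:

```cpp
#include <ros/ros.h>
#include <std_srvs/SetBool.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "trial_control");
  ros::NodeHandle nh;

  std_srvs::SetBool srv;
  srv.request.data = true;  // assumed to mean "trigger the transition"

  // Start the scored run once the team is ready to leave the staging area.
  if (!ros::service::call("/subt/start", srv))
    ROS_WARN("Failed to call /subt/start");

  // ... run the mission ...

  // Calling finish ends the run and terminates the simulation instance.
  if (!ros::service::call("/subt/finish", srv))
    ROS_WARN("Failed to call /subt/finish");

  return 0;
}
```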

## Mapping Server

Teams are strongly encouraged to provide a window into their competition progress by transmitting map, telemetry, and marker information from their system to DARPA to aid in evaluation.

Mapping information consists of either two-dimensional or three-dimensional representations of the subterranean environment that the team has constructed using sensing and processing. Telemetry information consists of real-time positions for elements of the team’s system. Marker information consists of any other visualization objects the team wishes to submit to DARPA, e.g., artifact detections, locations of dropped communications nodes, or traveled or planned trajectories of robots. Together, these types of information provide insight into the progress and success of a team’s approach. All information is expected to be reported in the DARPA frame which aligns with the artifact_origin frame in simulation.

Teams should send mapping data over the following topics to the Virtual Mapping Server, which can also be found in mapping_server_relays.launch. These topics will be automatically sent to the mapping server to be saved and analyzed for each robot that publishes to the specified topic names. If mapping data is sent by a single robot, the map is assumed to be merged and representative of all robots’ mapping contributions. If sent by multiple robots, the maps will be merged by the mapping server based upon their alignment with the DARPA frame before analysis.

| ROS Topic | Description | Message type |
| --- | --- | --- |
| /cloud | 3D volumetric map data, up to 1 update per 10 seconds | sensor_msgs/PointCloud2 |
| /grid | 2D map data, up to 1 update per 10 seconds | nav_msgs/OccupancyGrid |
| /poses | Robot telemetry data, up to 1 update per second | geometry_msgs/PoseArray |
| /markers | Other visualization objects (e.g., planned trajectories, detected artifacts, etc.), up to 1 update per second | visualization_msgs/MarkerArray |
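The sketch below publishes a single-robot telemetry stream at the allowed rate. The frame_id string used here is an assumption; what matters is that the poses are expressed in the DARPA frame aligned with artifact_origin:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseArray.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "telemetry_relay");
  ros::NodeHandle nh;

  ros::Publisher posePub = nh.advertise<geometry_msgs::PoseArray>("/poses", 1);

  ros::Rate rate(1.0);  // telemetry is limited to 1 update per second
  while (ros::ok())
  {
    geometry_msgs::PoseArray msg;
    msg.header.stamp = ros::Time::now();
    // The frame_id value is an assumption; the poses themselves must be
    // expressed in the DARPA frame (aligned with artifact_origin).
    msg.header.frame_id = "darpa";

    geometry_msgs::Pose p;  // replace with each robot's estimated pose
    p.orientation.w = 1.0;
    msg.poses.push_back(p);

    posePub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```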

## Custom data logging

Debugging can be difficult when using Cloudsim, since all submitted solutions run automatically and users cannot ssh into a robot. This problem is mitigated by recording, with rosbag, all topics that match /robot_data(.*). Recording starts automatically; any robot that wants data recorded should publish messages on a /robot_data(.*) topic.

Up to 2GB of data, separated into two 1GB files, can be recorded.
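For example, a node like the sketch below periodically publishes a status string that ends up in the recorded bags; the topic suffix /robot_data/status and the payload are purely illustrative:

```cpp
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "debug_logger");
  ros::NodeHandle nh;

  // Any topic matching /robot_data(.*) is recorded; the "status" suffix
  // is just an illustrative choice.
  ros::Publisher logPub =
      nh.advertise<std_msgs::String>("/robot_data/status", 10);

  ros::Rate rate(0.2);  // keep the volume low: only 2 GB is retained
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "example debug payload";  // replace with your own diagnostics
    logPub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```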

## Ground truth pose

For debugging purposes, the flag enableGroundTruth:=true can be set in the arguments to ign launch, which will cause TF messages to be published with ground truth transforms for each vehicle. The topic <robot name>/pose will also include a transform from the world frame to each vehicle. Note that this topic and <robot name>/pose_static also include transforms from the model frame to the links and sensors in the vehicle. Therefore, using the topic directly would require some filtering to get only the transform from world to the vehicle. The recommended way to obtain ground truth information is via TF. Please note that the ground truth robot pose is not available during practice or competition runs on Cloudsim. Teams should incorporate robot odometry into their solutions to explore and map the underground environments.