
Execute ROS2 migration testing plan #387

Open

ShaanGondalia opened this issue Dec 8, 2022 · 0 comments
Labels
systems Related to building and automation (docker/github)

Comments


ShaanGondalia commented Dec 8, 2022

Background

Before merging #350, we need to create and execute a thorough test plan to make sure all of our systems work in ROS2 as well as they do in ROS. All tests in the testing plan should be run both on a local setup and on the robot computer unless otherwise stated.

Important Notes

This issue is missing testing plans for offboard_comms, task_planning, execute, and gui. Once these packages are migrated, they should be added to the testing plan below.

Running ROS2 docker images

To run the updated docker images:

  1. Comment /build-artifacts under Migrate to ROS2 #373. This will trigger the build-artifacts workflow, which builds the docker images and makes them available as artifacts that can be downloaded from GitHub. Alternatively, the images can be built locally, but this is generally slower.
  2. Navigate to Actions > build-artifacts and select the workflow corresponding to your comment. The artifacts will be located at the bottom of the window. Download onboard and landside.
  3. Navigate to the downloaded files on your machine. Note that the onboard artifact contains two images. We only need amd64-onboard.tar.gz.
  4. Load the images with docker load -i amd64-onboard.tar.gz and docker load -i amd64-landside.tar. This will create images called dukerobotics/robosub-ros:onboard-amd64 and dukerobotics/robosub-ros:landside.
  5. Re-tag the images with the following sequence of commands:
docker tag dukerobotics/robosub-ros:onboard-amd64 dukerobotics/robosub-ros2:onboard
docker image rm dukerobotics/robosub-ros:onboard-amd64
docker tag dukerobotics/robosub-ros:landside dukerobotics/robosub-ros2:landside
docker image rm dukerobotics/robosub-ros:landside
  6. Note that if you had a dukerobotics/robosub-ros:landside image before, it will be untagged after step 4. If you want to retag it, use docker image ls to find its image ID, then run docker tag <image_id> dukerobotics/robosub-ros:landside after step 5. This keeps both the original ROS1 and ROS2 images available on your machine, so you can swap between the two easily.
  7. Change the image names in docker-compose.yaml so that the ROS2 images are referenced:
services:
  onboard:
    image: dukerobotics/robosub-ros2:onboard
  ...
  landside:
    image: dukerobotics/robosub-ros2:landside
  8. Now you can run docker compose up -d or docker run as normal to start the ROS2 containers!
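The load and re-tag steps above can be condensed into one shell sketch. Filenames and image names are taken from the steps above; run this from the directory containing the downloaded artifacts:

```shell
# Load the downloaded artifacts (filenames from the build-artifacts workflow)
docker load -i amd64-onboard.tar.gz
docker load -i amd64-landside.tar

# Re-tag under the robosub-ros2 name and drop the old tags
docker tag dukerobotics/robosub-ros:onboard-amd64 dukerobotics/robosub-ros2:onboard
docker image rm dukerobotics/robosub-ros:onboard-amd64
docker tag dukerobotics/robosub-ros:landside dukerobotics/robosub-ros2:landside
docker image rm dukerobotics/robosub-ros:landside

# Confirm the new tags exist before starting the containers
docker image ls dukerobotics/robosub-ros2
docker compose up -d
```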

Running ROS2 code

  1. Once inside the docker containers, run ./build.sh to build the corresponding ROS2 workspaces.
  2. Run source ${COMPUTER_TYPE}/ros2_ws/install/setup.bash to make our custom packages recognizable.
  3. To run nodes: ros2 run <package> <node> --ros-args -p <param_1_name>:=<param_1_value> -p <param_2_name>:=<param_2_value>. Don't use a .py extension when calling the node.
  4. To run launch files: ros2 launch <package> <launch_file>.launch.py <arg_1_name>:=<arg_1_value>. Note that the .launch.py extension is required, otherwise your launch file won't be recognized.
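As a concrete sketch of steps 3 and 4 (the package, node, and parameter names here are placeholders, not actual names from this repo):

```shell
# Build and source the workspace (COMPUTER_TYPE is set inside the container)
./build.sh
source ${COMPUTER_TYPE}/ros2_ws/install/setup.bash

# Run a single node with parameters -- no .py extension on the node name
ros2 run my_package my_node --ros-args -p rate:=10 -p frame_id:=base_link

# Run a launch file -- the .launch.py extension is required
ros2 launch my_package my_file.launch.py sim:=true
```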

Testing Plan

This testing plan encompasses all of the general functionality we need before we can safely migrate to ROS2 (along with our CI checks).

Landside

camera_view

  • Record a bag from the cameras using the ROS2 bag CLI. Add usage to the README.
  • Run bag_to_video to convert the bag to an avi file, and verify that the feed looks correct.
  • Run video_to_bag to convert the avi file back to a bag file. Note that the size should increase drastically due to frame padding; this is expected (use a small original bag file).
  • View stereo and mono camera feeds from landside while avt_camera is publishing. Update README to include new commands for viewing feeds.
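Recording a bag with the ROS2 bag CLI might look like the following (the topic name is a placeholder; use whichever camera topic avt_camera actually publishes):

```shell
# Record the camera topic into a bag directory named camera_test
ros2 bag record -o camera_test /camera/front/image_raw

# Inspect the recording before converting it with bag_to_video
ros2 bag info camera_test
```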

joystick

  • Connect F310 joystick to the robot computer and run F310.launch.py. Verify that joystick/raw and controls/desired_power are receiving messages.
  • Connect Thrustmaster joystick to the robot computer and run thrustmaster.launch.py. Verify that joystick/raw and controls/desired_power are receiving messages.
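One way to verify that the topics are receiving messages, assuming the topic names given above sit at the root namespace:

```shell
# Check that messages are flowing and at what rate
ros2 topic hz /joystick/raw

# Spot-check the actual values being sent to controls
ros2 topic echo /controls/desired_power
```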

simulation

  • Run test_sim_comm.launch.py in an empty scene and verify that there are no errors (the robot doesn't need to move in a square).
  • Run test_sim_comm.launch.py in a scene with an object (e.g. a gate) and verify that fake_cv_maker.py works correctly.

Onboard

acoustics

  • Run acoustics.launch.py in simulation and use the ROS2 action cli to generate some sample data. Verify that the results are expected.
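The ROS2 action CLI usage might look like this; the action name below is a placeholder, and the goal type and fields should be taken from the acoustics package itself:

```shell
# Discover the action exposed by the acoustics node
ros2 action list
ros2 action info /acoustics/process_data

# Send a sample goal with the discovered type; fields are package-specific:
# ros2 action send_goal /acoustics/process_data <action_type> "{...}"
```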

avt_camera

  • Connect the left and right Allied Vision cameras to the robot computer and run mono_camera on both. Verify that the cameras connect and the corresponding topics are being published to (camera/left/image_raw and camera/left/camera_info).
  • Run stereo_cameras.launch.py and verify that all of the corresponding topics are being published.

controls

  • Run controls.launch.py transform:=true to verify that controls can be tested without simulation.
  • Run controls.launch.py sim:=true while the simulation is running. Then run test_state_publisher with pose, velocity, and power control and verify that there are no errors (movement will probably be pretty bad).
  • Run controls.launch.py on the robot computer while state.launch.py is running. Verify that the robot moves.

cv

  • Move a test model to the models folder and rebuild the package.
  • Run cv.launch.py and ros2 run cv test_images and verify that the expected topics are published to without error. Note that test_images is currently configured to send images to the left camera topic.

data_pub

  • Run pub_dvl.launch.py and verify that the computer connects to the dvl. Verify that the dvl/raw and dvl/odom topics are being published to. Make sure the data is reasonable and the publishing rate is adequate.
  • Run pub_imu.launch.py and verify that the computer connects to the imu. Verify that the sensors/imu/imu and sensors/imu/mag topics are being published to. Make sure the data is reasonable and the publishing rate is adequate.
  • Run pub_depth.launch.py and verify that offboard/pressure is receiving values and sensors/depth is being published to. Make sure the data is reasonable and the publishing rate is adequate. This requires offboard_comms to be running to receive pressure sensor data from the Arduino.
  • Run pub_all.launch.py to make sure all of the sensors work together.
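A quick rate and sanity check for the sensor topics listed above (topic names taken from the steps above; the leading slash assumes they are in the root namespace):

```shell
# Check publishing rates for each sensor topic
ros2 topic hz /dvl/odom
ros2 topic hz /sensors/imu/imu
ros2 topic hz /sensors/depth

# Spot-check that the raw DVL data is reasonable
ros2 topic echo /dvl/raw
```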

sensor_fusion

  • Run fuse.launch.py while the DVL and IMU are publishing. Verify that /state is published and has reasonable values.
  • Verify that we don't need to publish robot_description to get tf2 transforms. Our old documentation says that this is needed but I don't think this is the case anymore.
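Both checks above can be done from the CLI. Note the tf2_tools entry point name varies between ROS2 distros (view_frames vs view_frames.py), so check which one your distro ships:

```shell
# Confirm the fused state estimate is publishing
ros2 topic hz /state

# Dump the current tf2 tree to a file to see which transforms exist
ros2 run tf2_tools view_frames
```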

static_transforms

  • Run static_transforms.launch.py and verify that the correct transform values are being published.
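To inspect the published values, assuming the transforms go out on the standard latched /tf_static topic:

```shell
# Static transforms are published once, latched, on /tf_static
ros2 topic echo /tf_static
```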

system_utils

  • Run system_info and verify that the correct system usage messages are published.
  • Run remote_launch and verify that the start_node and stop_node services are created. Use the ros2 service cli to start and stop a test node and test launch file.
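Using the ros2 service CLI against the remote_launch services might look like the following; the service names below come from the step above, but the service types and request fields are package-specific and should be discovered first:

```shell
# Confirm the services exist and discover their types
ros2 service list
ros2 service type /start_node

# Call the service with the discovered type; request fields are placeholders:
# ros2 service call /start_node <service_type> "{...}"
```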
@ShaanGondalia ShaanGondalia added the systems Related to building and automation (docker/github) label Dec 8, 2022
@ShaanGondalia ShaanGondalia mentioned this issue Dec 8, 2022