
Implement MVP MoveToMouth #42

Merged: 24 commits into ros2-devel from amaln/move_to_mouth, Aug 30, 2023

Conversation

@amalnanavati (Contributor) commented Jul 29, 2023

Description

This PR implements the MVP MoveToMouth. Specifically, it implements a py_tree (sketched after the list below) that:

  1. Toggles on face detection.
  2. Moves the arm to the side-staging location (move to configuration with orientation path constraints to keep the fork straight).
  3. Waits for a face to be detected.
  4. Moves the face mesh in the planning scene to the detected mouth center position.
  5. Computes the pose goal for the robot to move towards from the detected mouth center.
  6. Moves the robot to that pose, with the orientation path constraint to keep the fork straight.
  7. Toggles off face detection.
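For concreteness, here is a minimal sketch of how these steps could compose into a py_trees Sequence. The Placeholder class and behavior names are stand-ins for illustration, not the actual behavior classes in this PR, and the memory argument assumes py_trees >= 2.2:

```python
import py_trees


class Placeholder(py_trees.behaviour.Behaviour):
    """Stand-in for the real behaviors in this PR (hypothetical)."""

    def update(self) -> py_trees.common.Status:
        return py_trees.common.Status.SUCCESS


def build_move_to_mouth_tree() -> py_trees.composites.Sequence:
    # Ordering mirrors steps 1-7 above; the two motion behaviors would carry
    # the orientation path constraint that keeps the fork straight.
    root = py_trees.composites.Sequence(name="MoveToMouth", memory=True)
    root.add_children(
        [
            Placeholder(name="ToggleFaceDetectionOn"),
            Placeholder(name="MoveToStagingConfiguration"),
            Placeholder(name="WaitForFaceDetection"),
            Placeholder(name="MoveFaceMeshInPlanningScene"),
            Placeholder(name="ComputeMouthPoseGoal"),
            Placeholder(name="MoveToMouthPose"),
            Placeholder(name="ToggleFaceDetectionOff"),
        ]
    )
    return root
```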

In order to implement this, this PR also makes the following changes to existing behaviors, decorators, and trees (a blackboard sketch follows the list):

  1. Changes the permissions on the action feedback from EXCLUSIVE_WRITE to WRITE, to account for the fact that one tree may have multiple move_to actions.
    1. Note that having multiple "thinking/moving" progress bars may render poorly in the app. We may need to come up with another way of rendering progress for actions with multiple robot motions.
  2. Adds a tree_root_name parameter to every MoveToTree, because we ended up with trees whose behaviors have multiple nested namespaces, but the MoveTo behaviors must write to the top-level tree namespace in order for their output to be read. Therefore, each tree's name parameter can be as nested as one wants, but the tree_root_name parameter must be the same across the tree.
  3. Adds an optional keys_to_not_write_to_blackboard set to every MoveToTree, to allow us to specify certain keys that should not be hardcoded into the blackboard when initializing the tree, and will instead be written to the blackboard by another behavior.
  4. Improves error handling:
    1. create_action_servers.py no longer crashes when a tree raises an arbitrary exception.
    2. move_to now handles the scenario where a call to MoveIt finishes before the second tick of the tree.
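To illustrate points 1-3 above, here is a rough sketch of how a behavior in a nested namespace writes to the tree root's namespace on the py_trees blackboard. The key and namespace names here are hypothetical:

```python
import py_trees

# The tree's `name` may be arbitrarily nested (e.g. "MoveToMouth/MoveToStaging"),
# but `tree_root_name` must be the top-level tree name so that every MoveTo
# behavior reads and writes the same top-level key.
tree_root_name = "MoveToMouth"
blackboard = py_trees.blackboard.Client(
    name="MoveToStaging", namespace=tree_root_name
)

# WRITE rather than EXCLUSIVE_WRITE: multiple move_to behaviors in one tree
# may each need to write this shared feedback key.
blackboard.register_key(key="feedback", access=py_trees.common.Access.WRITE)
blackboard.feedback = "planning"

# keys_to_not_write_to_blackboard would name keys (e.g. a detected mouth pose)
# that another behavior writes at runtime, so the tree skips hardcoding them
# at initialization.
```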

Testing procedure

Pull the latest code on the pymoveit2 branch amaln/allowed_collision_matrix.

As of now, this PR has only been tested in sim, with dummy face detection (waiting on #36). Before merging, it must be tested on the real robot, with actual face detection.

Initialization

  1. Ensure you are on the right branch of each of the dependencies and have the latest code: the PRL fork of pymoveit2 (branch: amaln/allowed_collision_matrix), the PRL fork of py_trees_ros (branch: amaln/service_client), and feeding_web_interface (branch: main).
  2. Build your workspace: colcon build
  3. Run the code.
    1. Sim
      1. Launch the F/T Sensor: ros2 run ada_feeding dummy_ft_sensor.py
      2. Launch the Dummy RealSense Node: ros2 run feeding_web_app_ros2_test DummyRealSense
      3. Launch Dummy FaceDetection: ros2 run feeding_web_app_ros2_test FaceDetection
      4. Launch the feeding nodes: ros2 launch ada_feeding ada_feeding_launch.xml
      5. Launch MoveIt: ros2 launch ada_moveit demo_feeding.launch.py sim:=mock
    2. Real Robot
      1. Launch the F/T Sensor: ros2 run forque_sensor_hardware forque_sensor_hardware --ros-args -p host:=xxx.xxx.x.xx (replace the x's with the correct IP)
      2. Launch the RealSense: See the README in ada_feeding_perception
      3. Launch Face Detection: ros2 launch ada_feeding_perception ada_feeding_perception_launch.xml
      4. Launch the feeding nodes: ros2 launch ada_feeding ada_feeding_launch.xml
      5. Launch MoveIt: ros2 launch ada_moveit demo_feeding.launch.py

Testing

Reset the robot to the resting position: ros2 action send_goal /MoveToRestingPosition ada_feeding_msgs/action/MoveTo "{}" --feedback (you must do this every time before calling MoveToMouth). Then call the MoveToMouth action: ros2 action send_goal /MoveToMouth ada_feeding_msgs/action/MoveTo "{}" --feedback. Verify the following, in both sim and real.

  • It successfully runs to completion. This includes that it:
    • Toggles on face detection.
    • Moves the arm to the side-staging location.
    • Waits for a face to be detected.
    • Moves the face mesh in the planning scene to the detected mouth center position.
    • Moves the robot to a position in front of the mouth.
    • Toggles off face detection.
    • Throughout all motions, the fork is kept mostly straight.
  • You can terminate it at any of the above stages and it terminates cleanly.
    • Toggles on face detection.
    • Moves the arm to the side-staging location.
    • Waits for a face to be detected.
    • Moves the face mesh in the planning scene to the detected mouth center position.
    • Moves the robot to a position in front of the mouth.
    • Toggles off face detection.
    • Throughout all motions, the fork is kept mostly straight.

Future Work

Currently, on the final approach to the user's mouth, the robot moves to a position 5 cm away from their detected mouth center in the y direction of the base link (perpendicular to the wheelchair back), with an orientation within 0.5 radians of pointing straight at the wheelchair back. However:

  1. Ideally, the fork should be perpendicular to the user's face, not to the wheelchair back. So if their face/mouth is rotated, the fork should also rotate accordingly (this requires FaceDetection to perceive the orientation of the user's face).
  2. The fork should not be restricted to moving to a single point 5 cm in front of the user's mouth. Rather, it should be able to move anywhere that distance away from the mouth, within a range of angles (think in terms of spherical coordinates centered on the mouth); see the sampling sketch after this list. In order to do this, we need to add multiple options for goal constraints to the MoveIt planning call.
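As a rough illustration of the spherical-coordinates idea in point 2, here is a sketch (the function name and the use of numpy are assumptions, not part of this PR) that samples candidate goal positions a fixed distance from the mouth center within a cone of angles:

```python
import numpy as np


def candidate_goal_positions(mouth_center, distance=0.05, max_angle=0.5, n=16):
    """Sample n positions `distance` meters from mouth_center, within a cone
    of half-angle max_angle (radians) about the -y axis of base_link (i.e.,
    pointing away from the wheelchair back). Hypothetical helper; defaults
    mirror the 5 cm / 0.5 rad values above."""
    rng = np.random.default_rng()
    # Uniform sampling on the spherical cap: polar angle within the cone...
    theta = np.arccos(rng.uniform(np.cos(max_angle), 1.0, n))
    # ...and azimuth around the cone axis.
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    # Unit direction vectors at angle theta from the -y axis.
    dirs = np.stack(
        [
            np.sin(theta) * np.cos(phi),  # x
            -np.cos(theta),  # y (cone axis)
            np.sin(theta) * np.sin(phi),  # z
        ],
        axis=1,
    )
    return np.asarray(mouth_center) + distance * dirs
```

Each sampled position, paired with an orientation pointing back at the mouth, could then be passed as one of several goal options in the MoveIt planning call.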

Before opening a pull request

  • Format your code using the black formatter: python3 -m black .
  • Run your code through pylint and address (most) warnings/errors: pylint --recursive=y .

Before Merging

  • Squash & Merge

@amalnanavati amalnanavati changed the title Implement MVP MoveToMouth Implemented MVP MoveToMouth Jul 29, 2023
@amalnanavati amalnanavati changed the title Implemented MVP MoveToMouth Implement MVP MoveToMouth Jul 29, 2023
@amalnanavati (Contributor, Author)

Note: If you follow the instructions in the README, you need the latest code on feeding_web_interface.

@amalnanavati (Contributor, Author) commented Aug 5, 2023

I just remembered that it is possible for the app to get into a situation where the robot is at the above-plate position and the app then calls MoveToMouth (e.g., if the user decides to skip acquisition because the fork already has food on it). In this case, the start position does not satisfy the orientation constraints, and the plan will either fail or be a weird two-part plan (because ompl_planning.yaml has MoveIt attempt to fix cases where the start state is out-of-constraints). Instead, MoveToMouth should not apply orientation path constraints (for the motion to the staging position) if the robot starts at the above-plate position.

There are a few possible ways to address this:

  1. Have the MoveToMouth action first check if the starting position satisfies the orientation constraints, and only then add the orientation constraints. (This requires adding another behavior to subscribe to tf and check whether orientation constraints are satisfied.)
  2. Have the MoveToMouth action first try moving with the orientation constraints, and if that fails (for any reason) have it retry without orientation constraints.
  3. Change the MoveTo action interface to take in the app state it is being called from. Since the app states (theoretically) directly correspond to a robot arm position, that should give MoveToMouth all the info it needs to decide whether or not to add orientation constraints.

I'm leaning towards 3, but lmk what you think after reviewing it.
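For reference, option 1 boils down to a check like the following sketch (quaternions as [x, y, z, w]; the function name and tolerance are hypothetical, and q_current would come from a tf lookup of the end-effector frame):

```python
import numpy as np


def satisfies_orientation_constraint(q_current, q_target, tol_rad=0.5):
    """Return True if the rotation from q_target to q_current is within
    tol_rad radians. Hypothetical helper for illustration."""
    # The absolute dot product handles the q / -q double cover.
    dot = abs(float(np.dot(q_current, q_target)))
    angle = 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))
    return angle <= tol_rad
```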

@egordon (Collaborator) commented Aug 15, 2023

@amalnanavati Note: the branch in the doc doesn't exist for pymoveit2; I'm assuming you meant your latest branch, amaln/allowed_collision_matrix, instead.

@amalnanavati (Contributor, Author)

Yes, you are right. The latest branch is documented here. I also edited the PR comment.

Base automatically changed from amaln/planning_scene to ros2-devel August 15, 2023 22:09
@amalnanavati (Contributor, Author)

Rebased onto the new ros2-devel. Will test in-person, with face detection, tomorrow.

@amalnanavati amalnanavati changed the base branch from ros2-devel to amaln/pre_moveto_idiom August 25, 2023 23:35
@amalnanavati (Contributor, Author)

I re-ran all the sim tests and verified that it works (with the most up-to-date code in ros2-devel and amaln/pre_moveto_idiom). I will test it in-person on Mon, but if folks want to get a head-start on the review they can.

@Raidakarim (Contributor)

I pulled all the new code and checked out the branches mentioned in the testing procedure. I hit some obstacles when testing in sim:

The command Launch MoveIt: ros2 launch ada_moveit demo_feeding.launch.py sim:=mock does not seem to work. When I modified it to ros2 launch ada_moveit demo.launch.py sim:=mock, it worked. [screenshot]

After launching all 5 terminals listed under Sim (a-e), I opened another terminal and ran ros2 action send_goal /MoveToRestingPosition ada_feeding_msgs/action/MoveTo "{}" --feedback. I got an error in the launch-file terminal, and the move-action terminal didn't output anything. [screenshots]

Then I modified create_action_servers.py, changing ada_watchdog_listener and ADAWatchdogListener to ada_watchdog and ADAWatchdog, respectively. [screenshot] When I re-ran the move action for the resting position, I got this error: [screenshot]

I didn't make any further changes.

@amalnanavati (Contributor, Author)

@Raidakarim You also need to have pulled the most recent ada_ros2 code. Also ensure you have the latest pymoveit2 code on the amaln/allowed_collision_matrix branch.

Base automatically changed from amaln/pre_moveto_idiom to ros2-devel August 30, 2023 00:57
@amalnanavati amalnanavati mentioned this pull request Aug 30, 2023
- Added behavior to move face to detected position
- Incorporated into MoveToMouth
- Removed py_trees_ros from dependencies -- I think ament_python packages can't be found by CMakeLists.txt?
- Small bug fixes
(Several follow-up comments between @Raidakarim and @amalnanavati were marked as off-topic.)

@amalnanavati (Contributor, Author)

@Raidakarim Please take this discussion onto Slack, off of this PR. Comments on this PR should focus on code feedback, not debugging separate issues.

@amalnanavati (Contributor, Author)

TODO: For the real F/T sensor, we should look into whether the re-tare service returns during the ~0.75 secs when the F/T readings are paused, or after. The current pre_moveto_config toggles the watchdog listener immediately after the re-tare service returns, but maybe there should be a sleep in between.
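If testing shows the service does return before the pause ends, the fix could be as simple as this sketch (both helpers are hypothetical stand-ins, not the actual code):

```python
import time

FT_PAUSE_DURATION_S = 0.75  # window during which F/T readings are paused


def retare_ft_sensor() -> None:
    """Stand-in for the re-tare service call (hypothetical)."""


def set_watchdog_listener(enabled: bool) -> None:
    """Stand-in for toggling the watchdog listener (hypothetical)."""


# Waiting out the pause window before re-enabling the watchdog avoids the
# listener reacting to paused/stale F/T readings.
retare_ft_sensor()
time.sleep(FT_PAUSE_DURATION_S)
set_watchdog_listener(enabled=True)
```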

@taylorkf (Contributor) left a review:

Looks good. We still have an (infrequent) error where the robot does not approach a mouth even when it is detected and within 1.5 meters. We'll keep an eye out for this happening again and track the error messages.

@amalnanavati amalnanavati merged commit 4e6dd1e into ros2-devel Aug 30, 2023
@amalnanavati amalnanavati deleted the amaln/move_to_mouth branch August 30, 2023 21:06
@Raidakarim (Contributor) commented Sep 19, 2023

For Initialization Step 3i, Sim point d, I had to add use_estop:=false to the given command. So, the command I used to run it successfully is: ros2 launch ada_feeding ada_feeding_launch.xml use_estop:=false

If I don't add the use_estop part and just use the given command, it doesn't run successfully and gives errors. Also, I ran it on the ros2-devel branch after this PR was merged, so this might be true not only for this PR but also for the ros2-devel branch.

@amalnanavati (Contributor, Author)

Updated the README to account for that in this commit.
