Implement MVP MoveToMouth #42
Conversation
Force-pushed from d3dc826 to 2514074.
Force-pushed from d96db73 to eefbd95.
Note: If you follow the instructions in the README, you need the latest code on …
I just remembered that it is possible for the app to get into a situation where the robot is at the above-plate position and the app then calls … There are a few possible ways to address this:

I'm leaning towards 3, but let me know what you think after reviewing it.
@amalnanavati Note: the branch in the doc doesn't exist for pymoveit2, so I'm assuming your latest branch …
Yes, you are right. The latest branch is documented here. I also edited the PR comment.
Force-pushed from a6a6655 to c894542.
Rebased onto the new …
Force-pushed from c894542 to 1a53884.
Force-pushed from e4f66a2 to df715f5.
I re-ran all the sim tests and verified that it works (with the most up-to-date code in …).
Force-pushed from 83dc7f3 to b82d2c7.
@Raidakarim you also need to have pulled the most recent …
- Added behavior to move face to detected position; incorporated into MoveToMouth
- Removed py_tree_ros from dependencies (I think ament_python packages can't be found by CMakeLists.txt?)
- Small bug fixes
Force-pushed from 6b74e7c to 482d24b.
This comment was marked as off-topic.
@Raidakarim Please take this discussion onto Slack, off of this PR. Comments on this PR should focus on code feedback, not debugging separate issues.
TODO: For the real F/T sensor, we should look into whether the re-tare service returns during the 0.75 secs when the F/T readings are paused, or after. Because the current …
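One empirical way to answer this TODO is to time the blocking re-tare call against the 0.75 s pause window. The sketch below uses a stub in place of the real service client (which would be an rclpy client against the F/T driver); all names are illustrative.

```python
import time

RETARE_PAUSE_S = 0.75  # duration the F/T readings are paused (from the PR comment)


def call_retare(stub_duration_s: float) -> None:
    """Stand-in for the real re-tare service call; replace with an rclpy
    service client against the actual F/T sensor driver."""
    time.sleep(stub_duration_s)


def retare_returns_after_pause(call) -> bool:
    """Time the blocking re-tare call and report whether it returned
    during the pause window (False) or after it (True)."""
    start = time.monotonic()
    call()
    elapsed = time.monotonic() - start
    return elapsed >= RETARE_PAUSE_S
```

Running this once against the real service on the robot would tell us which branch the current code needs to handle.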
Looks good. We still have an (infrequent) error where the robot does not approach a mouth even when it is detected and within 1.5 meters. We'll keep an eye out for this to happen again and track the error messages.
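The 1.5-meter gate mentioned in this comment amounts to a simple distance check on the detected mouth position. A minimal sketch (the function name and the choice of reference frame are assumptions; only the 1.5 m threshold comes from the comment):

```python
import math

MAX_MOUTH_DISTANCE_M = 1.5  # detections farther than this are not approached


def within_range(mouth_xyz: tuple) -> bool:
    """Gate a detected mouth position by Euclidean distance from the
    origin of the reference frame (assumed here to be the robot base)."""
    return math.dist((0.0, 0.0, 0.0), mouth_xyz) <= MAX_MOUTH_DISTANCE_M
```

When the failure recurs, logging the computed distance alongside the raw detection would show whether the gate, or something downstream, is rejecting the mouth.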
For Initialization Step 3i, in Sim point d, I had to set use_estop to false in the given command. The command I used to run it successfully is: … If I don't add the use_estop part and just use the given command as-is, it doesn't run successfully and gives errors. Also, I ran it in …
Updated the README to account for that in this commit.
Description
This PR implements the MVP MoveToMouth. Specifically, it implements a py_tree that:
In order to implement this, this PR also makes the following changes to existing behaviors, decorators, and trees:
- Changed blackboard write access from `EXCLUSIVE_WRITE` to `WRITE`, to account for the fact that one tree may have multiple move_to actions.
- Added a `tree_root_name` parameter to every `MoveToTree`, because we ended up with trees that had multiple nested namespaces for behaviors, but the `MoveTo` behaviors must write to the top-level tree namespace in order for it to be read. Therefore, each tree's `name` parameter can be as nested as one wants, but the `tree_root_name` parameter must be the same.
- Added a `keys_to_not_write_to_blackboard` set to every `MoveToTree`, to allow us to specify certain keys that should not be hardcoded into the blackboard when initializing the tree and will instead be written to the blackboard by another behavior.
- Updated `create_action_servers.py` to not crash when there are arbitrary exceptions in the trees.
- Updated `move_to` to deal with the scenario where a call to MoveIt finishes before a second tick of the tree.

Testing procedure
Pull the latest code in the pymoveit2 branch `amaln/allowed_collision_matrix`. As of now, this PR has only been tested in sim, with dummy face detection (waiting on #36). Before merging, it must be tested on the real robot, with actual face detection.
Initialization
Pull the latest code from pymoveit2 (branch: `amaln/allowed_collision_matrix`), the PRL fork of py_trees_ros (branch: `amaln/service_client`), and feeding_web_interface (branch: `main`), then `colcon build`.

In sim:
- `ros2 run ada_feeding dummy_ft_sensor.py`
- `ros2 run feeding_web_app_ros2_test DummyRealSense`
- `ros2 run feeding_web_app_ros2_test FaceDetection`
- `ros2 launch ada_feeding ada_feeding_launch.xml`
- `ros2 launch ada_moveit demo_feeding.launch.py sim:=mock`

On the real robot:
- `ros2 run forque_sensor_hardware forque_sensor_hardware --ros-args -p host:=xxx.xxx.x.xx` (replace the x's with the correct IP)
- Launch `ada_feeding_perception`: `ros2 launch ada_feeding_perception ada_feeding_perception_launch.xml`
- `ros2 launch ada_feeding ada_feeding_launch.xml`
- `ros2 launch ada_moveit demo_feeding.launch.py`
Testing
Reset the robot to the resting position: `ros2 action send_goal /MoveToRestingPosition ada_feeding_msgs/action/MoveTo "{}" --feedback` (you must do this every time before calling `MoveToMouth`). Then call the `MoveToMouth` action: `ros2 action send_goal /MoveToMouth ada_feeding_msgs/action/MoveTo "{}" --feedback`. Verify the following, in both sim and real.

Future Work
Currently, on the final approach to the user's mouth, the robot moves to a position 5cm away from their detected mouth center in the y direction of the base link (perpendicular to the wheelchair back), and an orientation within 0.5 radians of pointing straight at the wheelchair back. However:
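The approach-pose computation described above can be sketched as follows. Only the 5 cm offset and 0.5 rad bound come from this PR; the function names, the sign of the y offset, and the use of yaw as the single clamped angle are all assumptions for illustration.

```python
from dataclasses import dataclass

APPROACH_OFFSET_M = 0.05    # 5 cm stand-off from the detected mouth center
MAX_YAW_OFFSET_RAD = 0.5    # allowed deviation from facing the wheelchair back


@dataclass
class Point:
    x: float
    y: float
    z: float


def approach_position(mouth: Point) -> Point:
    """Offset the detected mouth center along the base link's y axis
    (perpendicular to the wheelchair back). The offset sign is an
    assumption; the PR only states the 5 cm magnitude."""
    return Point(mouth.x, mouth.y - APPROACH_OFFSET_M, mouth.z)


def clamp_yaw(desired_yaw: float, straight_at_back_yaw: float = 0.0) -> float:
    """Clamp the approach yaw to within 0.5 rad of pointing straight
    at the wheelchair back."""
    delta = desired_yaw - straight_at_back_yaw
    delta = max(-MAX_YAW_OFFSET_RAD, min(MAX_YAW_OFFSET_RAD, delta))
    return straight_at_back_yaw + delta
```

Making the offset direction follow the user's actual head orientation, rather than a fixed base-link axis, is one natural direction for the future work discussed here.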
Before opening a pull request
- `python3 -m black .`
- `pylint --recursive=y .`
Before Merging
Squash & Merge