
ROS nodes FPS performance measurements #419

Merged: 80 commits merged into develop from ros-performance on Apr 11, 2023
Conversation

@tsampazk (Collaborator) commented on Mar 13, 2023:

This PR adds time-performance measurement of the tools' inference. For each node, the time it takes to run (only) inference is measured and published on a performance topic. Publishing this message is optional, i.e. it happens only if the relevant topic is set via argparse. The message can be subscribed to, or echoed, to show the current FPS.
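
Roughly, the node-side measurement looks like this (a minimal sketch: the `std_msgs/Float32` message type and the class/helper names are assumptions for illustration, not the actual node code):

```python
from time import perf_counter

import rospy
from std_msgs.msg import Float32


class ExamplePerformanceAwareNode:
    """Illustrative wrapper; `learner` stands in for any OpenDR tool with an infer() method."""

    def __init__(self, learner, performance_topic=None):
        self.learner = learner
        # The publisher exists only when a performance topic is given (e.g. via argparse).
        self.performance_publisher = (
            rospy.Publisher(performance_topic, Float32, queue_size=1)
            if performance_topic else None
        )

    def callback(self, data):
        if self.performance_publisher:
            start_time = perf_counter()

        result = self.learner.infer(data)  # only the inference call is timed

        if self.performance_publisher:
            fps = 1.0 / (perf_counter() - start_time)
            self.performance_publisher.publish(Float32(data=fps))

        return result
```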

If a more accurate performance measurement is needed, one can use the new performance node, which subscribes to the performance topic, calculates a running average of the FPS measurements, and prints it in the console along with the time it took to process the last frame.
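
The averaging side could look like the following hypothetical sketch (the topic name, message type, and window size are assumptions, not necessarily what the node in this PR uses):

```python
from collections import deque

import rospy
from std_msgs.msg import Float32


class PerformanceNode:
    def __init__(self, input_topic="/opendr/performance", window_size=20):
        # Keep the last N FPS measurements for a running average (window size assumed).
        self.fps_window = deque(maxlen=window_size)
        rospy.Subscriber(input_topic, Float32, self.callback, queue_size=1)

    def callback(self, msg):
        self.fps_window.append(msg.data)
        avg_fps = sum(self.fps_window) / len(self.fps_window)
        last_frame_ms = 1000.0 / msg.data if msg.data > 0 else float("nan")
        rospy.loginfo("Average FPS: %.2f (last frame: %.1f ms)", avg_fps, last_frame_ms)


if __name__ == "__main__":
    rospy.init_node("performance_node")
    PerformanceNode()
    rospy.spin()
```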

  • ROS1 Pose estimation node proof of concept
  • ROS1 Performance node
  • ROS1 Documentation update
  • ROS1 Other nodes
  • ROS2 Nodes
  • ROS2 Documentation update

@thomaspeyrucain @ad-daniel Let me know how this looks and whether it's sufficient and/or convenient, before I add it to the rest of the ROS1/2 nodes.

@tsampazk added the enhancement (New feature or request) label on Mar 13, 2023
@tsampazk self-assigned this on Mar 13, 2023
@ad-daniel (Collaborator) commented:

Implementation-wise, I think this is a nice way of doing it. I have some doubts about whether measuring just the infer method is enough, though, because in that case I don't expect any real difference between the value obtained from this method and the values already reported in the documentation when the tools were benchmarked with the Python API; it's measuring the same thing. So the question is, can we actually use these values to "show" that the performance requirements of the use case have been met? By taking only the infer method, it sounds to me like an evaluation of the tool, not of a task/use case. What else should be included, and where the boundary should lie, I don't know, but this should be settled before making the change for all nodes.

@tsampazk (Collaborator, Author) commented:

> Implementation-wise, I think this is a nice way of doing it. I have some doubts about whether measuring just the infer method is enough, though, because in that case I don't expect any real difference between the value obtained from this method and the values already reported in the documentation when the tools were benchmarked with the Python API; it's measuring the same thing. So the question is, can we actually use these values to "show" that the performance requirements of the use case have been met? By taking only the infer method, it sounds to me like an evaluation of the tool, not of a task/use case. What else should be included, and where the boundary should lie, I don't know, but this should be settled before making the change for all nodes.

Oh, I was under the impression that this was what we wanted (measuring only tool inference). I'll wait for more input before moving forward.
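
For reference, the two measurement boundaries under discussion could be sketched roughly as follows (illustrative only, with placeholder helper names; a later commit in this PR moves the performance start time before image preprocessing):

```python
from time import perf_counter


def timed_callback(learner, msg, preprocess, postprocess):
    # Boundary A: tool evaluation only -- time nothing but the infer() call.
    image = preprocess(msg)
    t0 = perf_counter()
    result = learner.infer(image)
    infer_time = perf_counter() - t0

    # Boundary B: closer to the use case -- also include the pre-/post-processing
    # that the ROS callback performs around infer(). Inference is repeated here
    # only so the two boundaries can be compared side by side.
    t1 = perf_counter()
    output = postprocess(learner.infer(preprocess(msg)))
    callback_time = perf_counter() - t1

    return infer_time, callback_time, output
```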

@tsampazk marked this pull request as ready for review on March 24, 2023, 11:21
@stefaniapedrazzi (Collaborator) left a comment:

Thank you (also for all the code cleanup)!
In general, everything seems fine to me.
I just have a couple of minor comments.

@tsampazk mentioned this pull request on Mar 29, 2023
@passalis (Collaborator) left a comment:

Thank you for all this extensive work! I have only two minor suggestions for the performance node (switch from FPS to inferences/sec since some nodes do not receive frames as input).

@passalis (Collaborator) left a comment:

Thank you!

@stefaniapedrazzi (Collaborator) left a comment:

Thank you!

@tsampazk merged commit 0c2edb0 into develop on Apr 11, 2023
@tsampazk deleted the ros-performance branch on April 11, 2023, 06:46
lucamarchionni pushed a commit to lucamarchionni/opendr that referenced this pull request Jun 10, 2024
* Added performance node

* Added time measurement for tool inference and publishing

* Moved performance start time before image preprocessing

* audiovisual_emotion_recognition_node.py performance and some formatting

* binary_high_resolution_node.py performance

* continual_skeleton_based_action_recognition_node.py performance and some rearrangement

* face_detection_retinaface_node.py performance

* face_recognition_node.py performance

* facial_emotion_estimation_node.py performance

* fall_detection_node.py performance

* heart_anomaly_detection_node.py performance

* hr_pose_estimation_node.py performance

* landmark_based_facial_expression_recognition_node.py performance

* object_detection_2d_centernet_node.py performance

* object_detection_2d_detr_node.py performance

* object_detection_2d_gem_node.py performance

* object_detection_2d_nanodet_node.py performance

* object_detection_2d_ssd_node.py performance

* object_detection_2d_yolov3_node.py performance

* object_detection_2d_yolov5_node.py performance

* object_detection_3d_voxel_node.py performance

* object_tracking_2d_deep_sort_node.py performance

* object_tracking_2d_fair_mot_node.py performance

* object_tracking_2d_siamrpn_node.py performance

* object_tracking_3d_ab3dmot_node.py performance

* panoptic_segmentation_efficient_lps_node.py performance

* panoptic_segmentation_efficient_ps_node.py performance

* pose_estimation_node.py minor fixes

* rgbd_hand_gesture_recognition_node.py performance

* semantic_segmentation_bisenet_node.py performance

* skeleton_based_action_recognition_node.py performance

* speech_command_recognition_node.py performance

* video_activity_recognition_node.py performance

* Added section for utility nodes and entry for the performance node

* Added entry in notes for logging performance

* Added entries for the new performance topic in all nodes

* audiovisual_emotion_recognition_node.py ROS2 performance

* binary_high_resolution_node.py ROS2 performance

* continual_skeleton_based_action_recognition_node.py ROS2 performance

* face_detection_retinaface_node.py ROS2 performance

* face_recognition_node.py ROS2 performance

* facial_emotion_estimation_node.py ROS2 performance

* fall_detection_node.py ROS2 performance

* heart_anomaly_detection_node.py ROS2 performance

* hr_pose_estimation_node.py ros1 renamed class

* hr_pose_estimation_node.py ROS2 performance

* landmark_based_facial_expression_recognition_node.py ROS2 performance

* object_detection_2d_centernet_node.py ROS2 performance

* object_detection_2d_detr_node.py ROS2 performance

* object_detection_2d_gem_node.py ROS2 performance

* object_detection_2d_nanodet_node.py ROS1 minor fix

* object_detection_2d_nanodet_node.py ROS2 performance

* object_detection_2d_ssd_node.py ROS2 performance

* object_detection_2d_yolov3_node.py ROS1 class name fix

* object_detection_2d_yolov3_node.py ROS2 performance

* object_detection_2d_yolov5_node.py ROS1 class name fix

* object_detection_2d_yolov5_node.py ROS2 performance

* object_detection_3d_voxel_node.py ROS2 performance

* object_tracking_2d_deep_sort_node.py ROS2 performance

* object_tracking_2d_fair_mot_node.py ROS2 performance

* object_tracking_2d_siamrpn_node.py ROS2 performance

* object_tracking_3d_ab3dmot_node.py ROS2 performance

* panoptic_segmentation_efficient_lps_node.py ROS1 some imports rearrangement

* panoptic_segmentation_efficient_lps_node.py ROS2 performance

* panoptic_segmentation_efficient_ps_node.py ROS1 some imports rearrangement

* panoptic_segmentation_efficient_ps_node.py ROS2 performance

* pose_estimation_node.py ROS2 performance

* rgbd_hand_gesture_recognition_node.py ROS2 performance

* semantic_segmentation_bisenet_node.py ROS2 performance

* skeleton_based_action_recognition_node.py ROS2 performance and some refactoring

* speech_command_recognition_node.py ROS2 performance

* video_activity_recognition_node.py ROS2 performance

* Added ROS2 performance_node.py

* ROS2 readme updates for performance topics/node

* Apply suggestions from code review

Co-authored-by: Stefania Pedrazzi <[email protected]>

* Disable running tracking infer if no tracking is needed for ab3dmot ROS2

Co-authored-by: Stefania Pedrazzi <[email protected]>

* Disable running tracking infer if no tracking is needed for ab3dmot ROS1

* Apply suggestions from code review

Co-authored-by: Nikolaos Passalis <[email protected]>

* Removed additional spaces after suggested change

* Applied review suggestion to ROS2 node as well

---------

Co-authored-by: Stefania Pedrazzi <[email protected]>
Co-authored-by: Nikolaos Passalis <[email protected]>
Labels: enhancement (New feature or request), test sources (Run style checks)
4 participants