Fall detection (#237)
* Initial version of fall detector learner with naive fall detection implementation

* Added alternative ways to retrieve data from Keypoint, similar to Pose class

* Formatted imports

* Initial version of fall detection demo and empty README

* Reverted changes in target.py for Keypoint class

* Reverted to default Keypoint.data access and fixed download path

* Added convenience __getitem__ method and properties for accessing Keypoint data.

* Fall detection evaluation on UR Fall Dataset WIP

* Improved reading of UR Fall Dataset and completed basic evaluation

* Inference demo to run fall detection on predetermined images

* Renamed fall_detection.py to webcam_demo.py

* Webcam demo cleanup

* Removed unused time import

* Infer now returns a list of detections

* Inference demo now works with modified learner for multiple poses

* Infer now returns the pose as well

* Inference demo works on multiple detections and prints appropriate messages and graphics

* Webcam demo now works with multiple fall detections

* Made some modifications to the eval method and added a docstring with explanations.

* Evaluation demo

* Changed the way naive fall detection calculates angles, for a major increase in sensitivity and a minor decrease in specificity

* Changed the way naive fall detection calculates leg position, avoiding some false positives, and fixed a minor bug

* Added condition for calves angle, increasing sensitivity significantly

* Added tests for fall detector

* Added download method and made extensive changes throughout to work with the tests

* Finalized inference demo with images downloaded from FTP

* Finalized eval demo with image download from FTP and argparse

* Finalized webcam demo

* Added fall detection demo readme

* Added NotImplementedError on fit and made some methods private

* Added documentation for fall detector

* Minor fixes

* Added tutorial notebook and updated README

* Update tests_suite.yml

Added fall detection tests

* Added missing references on fall detection tests

* Fixes according to review

* Couple of fixes based on review

* Added fall detection to packages.txt

* Added a dependencies.ini for fall-detection

* Removed notes section

* Update dep installation

* Added changelog entry for fall detection tool

* Added fall detection node

* Added fall detection node instructions

* Added Fall Detection entry in list of nodes

* Added python ros node for fall detection

* Review fixes

* Temporary test

* Revert "Temporary test"

This reverts commit 40dba12.

Co-authored-by: ad-daniel <[email protected]>
Co-authored-by: Nikolaos Passalis <[email protected]>
Co-authored-by: ad-daniel <[email protected]>
4 people authored Apr 27, 2022
1 parent 3d82381 commit 00a9fb4
Showing 20 changed files with 1,451 additions and 18 deletions.
6 changes: 5 additions & 1 deletion .github/workflows/tests_suite.yml
@@ -67,6 +67,7 @@ jobs:
- perception/object_tracking_2d
- perception/object_detection_3d
- perception/pose_estimation
- perception/fall_detection
- perception/speech_recognition
- perception/skeleton_based_action_recognition
- perception/semantic_segmentation
@@ -171,6 +172,7 @@ jobs:
- perception/multimodal_human_centric
- perception/object_tracking_2d
- perception/pose_estimation
- perception/fall_detection
- perception/speech_recognition
- perception/skeleton_based_action_recognition
- perception/semantic_segmentation
@@ -240,6 +242,7 @@ jobs:
- perception/multimodal_human_centric
- perception/object_tracking_2d
- perception/pose_estimation
- perception/fall_detection
- perception/speech_recognition
- perception/skeleton_based_action_recognition
- perception/semantic_segmentation
@@ -283,6 +286,7 @@ jobs:
# The following two are dependencies for some other packages and pip cannot automatically install them if they are not on a repo
pip install ./artifact/wheel-artifact/opendr-toolkit-compressive-learning-*.tar.gz
pip install ./artifact/wheel-artifact/opendr-toolkit-object-detection-2d-*.tar.gz
pip install ./artifact/wheel-artifact/opendr-toolkit-pose-estimation-*.tar.gz
# Install specific package for testing
package=$(sed "s/_/-/g" <<< ${{ matrix.package }})
@@ -294,7 +298,6 @@ jobs:
# Utils contains hyperparameter tuning
if [ "$package" == "utils" ]; then
pip install ./artifact/wheel-artifact/opendr-toolkit-hyperparameter-tuner-*.tar.gz
else
pip install ./artifact/wheel-artifact/opendr-toolkit-$package-*.tar.gz
fi
@@ -315,6 +318,7 @@ jobs:
- perception/multimodal_human_centric
- perception/object_tracking_2d
- perception/pose_estimation
- perception/fall_detection
- perception/speech_recognition
- perception/skeleton_based_action_recognition
- perception/semantic_segmentation
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -11,6 +11,7 @@ Released on XX, XXth, 2022.
- Improved the structure of the toolkit by moving `io` from `utils` to `engine.helper` ([#201](https://github.com/opendr-eu/opendr/pull/201)).
- Added support for `post-install` scripts and `opendr` dependencies in `.ini` files ([#201](https://github.com/opendr-eu/opendr/pull/201)).
- Updated toolkit to support CUDA 11.2 and improved GPU support ([#215](https://github.com/opendr-eu/opendr/pull/215)).
- Added a standalone pose-based fall detection tool ([#237](https://github.com/opendr-eu/opendr/pull/237)).
- Bug Fixes:
- Updated wheel building pipeline to include missing files and removed unnecessary dependencies ([#200](https://github.com/opendr-eu/opendr/pull/200)).
- `panoptic_segmentation/efficient_ps`: updated dataset preparation scripts to create correct validation ground truth ([#221](https://github.com/opendr-eu/opendr/pull/221)).
108 changes: 108 additions & 0 deletions docs/reference/fall-detection.md
@@ -0,0 +1,108 @@
## fall_detection module

The *fall_detection* module contains the *FallDetectorLearner* class, which inherits from the abstract class *Learner*.

### Class FallDetectorLearner
Bases: `engine.learners.Learner`

The *FallDetectorLearner* class contains the implementation of a naive fall detector algorithm.
It can be used to perform fall detection on images (inference) using a pose estimator.
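
For intuition, here is a toy sketch of the kind of geometric test such a naive pose-based detector applies (illustrative only, not the actual OpenDR implementation, which combines several keypoint lines, including leg and calf angles):
```python
import math

def torso_angle_from_vertical(neck, hip):
    """Angle, in degrees, between the neck-hip line and the vertical axis."""
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    return abs(math.degrees(math.atan2(dx, dy)))

# (x, y) image coordinates of two pose keypoints, with y growing downwards
neck, hip = (120, 80), (200, 95)
# A torso close to horizontal suggests a person lying on the ground
if torso_angle_from_vertical(neck, hip) > 60:
    print("Possible fall")
```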

The [FallDetectorLearner](/src/opendr/perception/fall_detection/fall_detector_learner.py) class has the
following public methods:

#### `FallDetectorLearner` constructor
```python
FallDetectorLearner(self, pose_estimator)
```

Constructor parameters:

- **pose_estimator**: *object*\
  The pose estimator object used to detect the poses from which the fall detector determines whether a person has fallen.

#### `FallDetectorLearner.eval`
```python
FallDetectorLearner.eval(self, dataset, verbose)
```

This method is used to evaluate the naive fall detector algorithm on an evaluation dataset.
Returns a dictionary containing statistics regarding the evaluation.

Parameters:

- **dataset**: *object*\
  Object that holds the evaluation dataset.
  Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
- **verbose**: *bool, default=True*\
  If set to True, enables maximum verbosity.
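
For illustration, a minimal evaluation sketch (`dataset` is a placeholder for a compatible dataset object you provide, e.g. one wrapping the UR Fall Dataset; the exact keys of the returned dictionary are implementation-defined):
```python
from opendr.perception.fall_detection import FallDetectorLearner
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

pose_estimator = LightweightOpenPoseLearner(device="cuda", mobilenet_use_stride=False)
pose_estimator.download(verbose=True)
pose_estimator.load("openpose_default")

fall_detector = FallDetectorLearner(pose_estimator)

# `dataset` stands for an `ExternalDataset` or a custom `DatasetIterator`
results = fall_detector.eval(dataset, verbose=True)
print(results)  # dictionary of evaluation statistics
```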

#### `FallDetectorLearner.infer`
```python
FallDetectorLearner.infer(self, img)
```

This method is used to perform fall detection on an image.
Returns a list of tuples, one for each detected person. Each tuple contains an `engine.target.Category`, a list of the three keypoints that define the two lines used to determine whether the person has fallen, and the detected pose itself.
It returns an empty list if no poses were detected.

The `engine.target.Category` is `1` if the person has fallen, `-1` if the person is standing, and `0` if a person is detected but the algorithm cannot determine whether they are standing or fallen.

Parameters:

- **img**: *object*\
  Object of type `engine.data.Image`.
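
A minimal sketch of consuming the return value (assuming `fall_detector` and `img` are set up as in the example at the end of this page):
```python
detections = fall_detector.infer(img)
for category, keypoints, pose in detections:
    if category.data == 1:
        print("Fallen person detected")
    elif category.data == -1:
        print("Standing person detected")
    else:
        print("Person detected, but the result is inconclusive")
```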

#### `FallDetectorLearner.download`
```python
FallDetectorLearner.download(path, mode, verbose, url)
```

Utility method for downloading fall detection test images and annotations.

Parameters:

- **path**: *str, default=None*\
  Local path to save the files; defaults to '.' if None.
- **mode**: *str, default="test_data"*\
  Which files to download; currently only "test_data" is supported.
- **verbose**: *bool, default=False*\
  Whether to print messages to the console.
- **url**: *str, default=OpenDR FTP URL*\
  URL of the FTP server.
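
For instance (a sketch; the signature above is documented without `self`, so the method is assumed to be callable directly on the class as well as on an instance, as in the example below):
```python
from opendr.perception.fall_detection import FallDetectorLearner

# Fetch the test images and annotations into the current directory
FallDetectorLearner.download(path=".", mode="test_data", verbose=True)
```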


#### Examples

* **Inference and result drawing example on a test image using OpenCV.**
```python
import cv2
from opendr.engine.data import Image
from opendr.perception.fall_detection import FallDetectorLearner
from opendr.perception.pose_estimation import LightweightOpenPoseLearner
from opendr.perception.pose_estimation import draw, get_bbox

pose_estimator = LightweightOpenPoseLearner(device="cuda", mobilenet_use_stride=False)
pose_estimator.download(verbose=True) # Download the default pretrained mobilenet model
pose_estimator.load("openpose_default")

fall_detector = FallDetectorLearner(pose_estimator)

# Download a sample dataset
fall_detector.download(verbose=True)

img = Image.open("test_images/fallen.png")
detections = fall_detector.infer(img)
fallen = detections[0][0].data # Get fallen int from first detection
pose = detections[0][2] # Get pose from first detection
img = img.opencv()
draw(img, pose) # Draw the detected pose

if fallen == 1:
    x, y, w, h = get_bbox(pose)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(img, "Detected fallen person", (5, 12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)

cv2.imshow('Result', img)
cv2.waitKey(0)
```
6 changes: 5 additions & 1 deletion docs/reference/index.md
@@ -61,10 +61,12 @@ Neither the copyright holder nor any applicable licensor will be liable for any
- heart anomaly detection:
- [gated_recurrent_unit Module](gated-recurrent-unit-learner.md)
- [attention_neural_bag_of_feature_learner Module](attention-neural-bag-of-feature-learner.md)
- fall detection:
- [fall_detection Module](fall-detection.md)

- `control` Module
- [mobile_manipulation Module](mobile-manipulation.md)
- [single_demo_grasp Module](single-demonstration-grasping.md)
- [single_demo_grasp Module](single-demonstration-grasping.md)

- `simulation` Module
- [human_model_generation Module](human_model_generation.md)
@@ -119,6 +121,8 @@ Neither the copyright holder nor any applicable licensor will be liable for any
- [bisnet Demo](/projects/perception/semantic_segmentation/bisenet)
- action recognition:
- [skeleton_based_action_recognition Demo](/projects/perception/skeleton_based_action_recognition)
- fall detection:
- [fall_detection Demo](/projects/perception/fall_detection.md)
- [full_map_posterior_slam Module](/projects/perception/slam/full_map_posterior_gmapping)
- `simulation` Module
- [SMPL+D Human Models Dataset](/projects/simulation/SMPL%2BD_human_models)
1 change: 1 addition & 0 deletions packages.txt
@@ -3,6 +3,7 @@ perception/speech_recognition
perception/semantic_segmentation
perception/face_recognition
perception/pose_estimation
perception/fall_detection
perception/compressive_learning
perception/heart_anomaly_detection
simulation/human_model_generation
31 changes: 16 additions & 15 deletions projects/opendr_ws/README.md
@@ -41,18 +41,19 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes:

### [Perception](src/perception/README.md)
1. Pose Estimation
2. 2D Object Detection
3. Face Detection
4. Panoptic Segmentation
5. Face Recognition
6. Semantic Segmentation
7. RGBD Hand Gesture Recognition
8. Heart Anomaly Detection
9. Video Human Activity Recognition
10. Landmark-based Facial Expression Recognition
11. Skeleton-based Human Action Recognition
12. Speech Command Recognition
13. Voxel Object Detection 3D
14. AB3DMOT Object Tracking 3D
15. FairMOT Object Tracking 2D
16. Deep Sort Object Tracking 2D
2. Fall Detection
3. 2D Object Detection
4. Face Detection
5. Panoptic Segmentation
6. Face Recognition
7. Semantic Segmentation
8. RGBD Hand Gesture Recognition
9. Heart Anomaly Detection
10. Video Human Activity Recognition
11. Landmark-based Facial Expression Recognition
12. Skeleton-based Human Action Recognition
13. Speech Command Recognition
14. Voxel Object Detection 3D
15. AB3DMOT Object Tracking 3D
16. FairMOT Object Tracking 2D
17. Deep Sort Object Tracking 2D
1 change: 1 addition & 0 deletions projects/opendr_ws/src/perception/CMakeLists.txt
@@ -29,6 +29,7 @@ include_directories(

catkin_install_python(PROGRAMS
scripts/pose_estimation.py
scripts/fall_detection.py
scripts/object_detection_2d_detr.py
scripts/object_detection_2d_gem.py
scripts/semantic_segmentation_bisenet.py
18 changes: 18 additions & 0 deletions projects/opendr_ws/src/perception/README.md
@@ -40,6 +40,24 @@ rosrun perception pose_estimation.py
3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_pose_annotated`) or
`rostopic echo /opendr/poses`

## Fall Detection ROS Node
Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can

1. Start the node responsible for publishing images. If you have a USB camera, you can use the corresponding node (assuming you have installed the corresponding package):

```shell
rosrun usb_cam usb_cam_node
```

2. You are then ready to start the fall detection node:

```shell
rosrun perception fall_detection.py
```

3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_fall_annotated`) or
`rostopic echo /opendr/falls`, where the node publishes the bounding boxes of detected fallen poses.
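
For reference, a minimal sketch of a custom subscriber for the detection topic (the node publishes `vision_msgs/Detection2DArray` messages on `/opendr/falls`; the node name below is arbitrary):
```python
import rospy
from vision_msgs.msg import Detection2DArray

def callback(msg):
    # Each detection carries the bounding box of one detected fallen person
    rospy.loginfo("Received %d fall detection(s)", len(msg.detections))

rospy.init_node('fall_listener', anonymous=True)
rospy.Subscriber("/opendr/falls", Detection2DArray, callback)
rospy.spin()
```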

## Face Recognition ROS Node
Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can

133 changes: 133 additions & 0 deletions projects/opendr_ws/src/perception/scripts/fall_detection.py
@@ -0,0 +1,133 @@
#!/usr/bin/env python
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import rospy
import torch
import cv2
from vision_msgs.msg import Detection2DArray
from sensor_msgs.msg import Image as ROS_Image
from opendr_bridge import ROSBridge
from opendr.perception.pose_estimation import get_bbox
from opendr.perception.pose_estimation import LightweightOpenPoseLearner
from opendr.perception.fall_detection import FallDetectorLearner
from opendr.engine.data import Image
from opendr.engine.target import BoundingBox, BoundingBoxList


class FallDetectionNode:

    def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_fall_annotated",
                 fall_annotations_topic="/opendr/falls", device="cuda"):
        """
        Creates a ROS Node for fall detection
        :param input_image_topic: Topic from which we are reading the input image
        :type input_image_topic: str
        :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not
        publishing the annotated image)
        :type output_image_topic: str
        :param fall_annotations_topic: Topic to which we are publishing the annotations (if None, we are not
        publishing annotations)
        :type fall_annotations_topic: str
        :param device: device on which we are running inference ('cpu' or 'cuda')
        :type device: str
        """
        if output_image_topic is not None:
            self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
        else:
            self.image_publisher = None

        if fall_annotations_topic is not None:
            self.fall_publisher = rospy.Publisher(fall_annotations_topic, Detection2DArray, queue_size=10)
        else:
            self.fall_publisher = None

        self.input_image_topic = input_image_topic

        self.bridge = ROSBridge()

        # Initialize the pose estimator used internally by the fall detector
        self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=2,
                                                         mobilenet_use_stride=False,
                                                         half_precision=False)
        self.pose_estimator.download(path=".", verbose=True)
        self.pose_estimator.load("openpose_default")

        self.fall_detector = FallDetectorLearner(self.pose_estimator)

    def listen(self):
        """
        Start the node and begin processing input data
        """
        rospy.init_node('opendr_fall_detection', anonymous=True)
        rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
        rospy.loginfo("Fall detection node started!")
        rospy.spin()

    def callback(self, data):
        """
        Callback that processes the input data and publishes to the corresponding topics
        :param data: input message
        :type data: sensor_msgs.msg.Image
        """
        # Convert sensor_msgs.msg.Image into OpenDR Image
        image = self.bridge.from_ros_image(data, encoding='bgr8')

        # Run fall detection
        detections = self.fall_detector.infer(image)

        # Get an OpenCV image back
        image = image.opencv()

        bboxes = BoundingBoxList([])
        for detection in detections:
            fallen = detection[0].data
            pose = detection[2]

            if fallen == 1:
                color = (0, 0, 255)
                x, y, w, h = get_bbox(pose)
                bbox = BoundingBox(left=x, top=y, width=w, height=h, name=0)
                bboxes.data.append(bbox)

                cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
                cv2.putText(image, "Detected fallen person", (5, 55), cv2.FONT_HERSHEY_SIMPLEX,
                            0.75, color, 1, cv2.LINE_AA)

        # Convert detected boxes to ROS type and publish
        ros_boxes = self.bridge.to_ros_boxes(bboxes)
        if self.fall_publisher is not None:
            self.fall_publisher.publish(ros_boxes)

        if self.image_publisher is not None:
            message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
            self.image_publisher.publish(message)


if __name__ == '__main__':
    # Select the device for running inference
    try:
        if torch.cuda.is_available():
            print("GPU found.")
            device = 'cuda'
        else:
            print("GPU not found. Using CPU instead.")
            device = 'cpu'
    except Exception:
        device = 'cpu'

    fall_detection_node = FallDetectionNode(device=device)
    fall_detection_node.listen()