Robotti human detection simulation demo #451

Merged · 12 commits · Aug 29, 2023
23 changes: 23 additions & 0 deletions projects/python/perception/robotti_human_detection/Makefile
@@ -0,0 +1,23 @@
# Copyright 2020-2023 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

.PHONY: release debug profile clean

release debug profile clean:
+@echo "# compile libraries"
+@make -s -C webots/libraries/bvh_util $@
+@echo "# compile controller"
+@make -s -C webots/controllers/bvh_animation $@
+@echo "# compile plugins"
+@make -s -C webots/plugins/robot_windows/robotti_window $@
51 changes: 51 additions & 0 deletions projects/python/perception/robotti_human_detection/README.md
@@ -0,0 +1,51 @@
# Simulation of human detection with Robotti

This folder contains an example of how to perform human detection with the Robotti model in simulation.
Human detection is performed using YOLOv5x.

### Set up the environment

To run this simulation, you need to install:
- Webots R2023b or newer ([installation instructions](https://cyberbotics.com/doc/guide/installing-webots))
- the `perception` module of the OpenDR toolkit ([installation instructions](https://github.com/opendr-eu/opendr/blob/master/docs/reference/installation.md))
- the following additional libraries:
```sh
pip install gym
sudo apt install libopenblas0
```

Then, compile the libraries needed by the simulation: open a terminal, navigate
to this folder, i.e. `/opendr/projects/python/perception/robotti_human_detection`, and run:
```sh
export WEBOTS_HOME=/path/to/webots/installation
make
```

### Run the simulation

First open a terminal and navigate to this folder.

Start Webots and open the `webots/worlds/robotti_human_detection.wbt` world file:
```sh
export WEBOTS_HOME=/path/to/webots/installation
$WEBOTS_HOME/webots webots/worlds/robotti_human_detection.wbt
```

In a different terminal, navigate to your OpenDR root and activate the toolkit environment:
```sh
source bin/activate.sh
```
Then navigate to this folder and start the controller program of the Robotti:
```sh
export WEBOTS_HOME=/path/to/webots/installation
$WEBOTS_HOME/webots-controller webots/controllers/human_detection/human_detection.py
```
Finally, start the simulation by hitting the play button in Webots.

By default, YOLOv5x runs on the CPU.
To use a CUDA device instead, append `--cuda` to the previous command:
```sh
$WEBOTS_HOME/webots-controller webots/controllers/human_detection/human_detection.py --cuda
```

The Robotti should now start to move, and the camera image annotated with the detected persons should appear in the robot window.
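
The detection logic itself lives in `webots/controllers/human_detection/human_detection.py`. As a rough, hypothetical sketch of the approach (not the demo's actual code; the camera device name and the OpenDR detector wrapper used here are assumptions), a minimal controller could look like this:
```python
# Minimal sketch, assuming a camera device named "camera" and OpenDR's
# YOLOv5 detector wrapper; the demo's actual controller may differ.
import numpy as np
from controller import Robot
from opendr.perception.object_detection_2d import YOLOv5DetectorLearner

robot = Robot()
time_step = int(robot.getBasicTimeStep())
camera = robot.getDevice("camera")  # assumed device name
camera.enable(time_step)

detector = YOLOv5DetectorLearner(model_name="yolov5x", device="cpu")  # "cuda" with --cuda

while robot.step(time_step) != -1:
    # Webots delivers BGRA bytes; reorder the first three channels to RGB.
    raw = np.frombuffer(camera.getImage(), dtype=np.uint8)
    frame = raw.reshape((camera.getHeight(), camera.getWidth(), 4))[:, :, 2::-1]
    boxes = detector.infer(frame)
    persons = [box for box in boxes if box.name == "person"]
    print(f"{len(persons)} person(s) detected")
```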
@@ -0,0 +1 @@
/bvh_animation
@@ -0,0 +1,31 @@
# Copyright 2020-2023 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Webots Makefile system
#
# You may add some variable definitions hereafter to customize the build process
# See documentation in $(WEBOTS_HOME_PATH)/resources/Makefile.include

ifndef WEBOTS_SKIN_ANIMATION_PATH
WEBOTS_SKIN_ANIMATION_PATH = ../../libraries
endif

INCLUDE = -I"$(WEBOTS_SKIN_ANIMATION_PATH)/bvh_util/include"
LIBRARIES = -L"$(WEBOTS_SKIN_ANIMATION_PATH)/bvh_util" -lbvh_util

### Do not modify: this includes Webots global Makefile.include
null :=
space := $(null) $(null)
WEBOTS_HOME_PATH?=$(subst $(space),\ ,$(strip $(subst \,/,$(WEBOTS_HOME))))
include $(WEBOTS_HOME_PATH)/resources/Makefile.include
@@ -0,0 +1,251 @@
/*
* Copyright 2020-2023 OpenDR European Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

#include <webots/bvh_util.h>
#include <webots/robot.h>
#include <webots/skin.h>
#include <webots/supervisor.h>

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TIME_STEP 32

static void print_usage(const char *command) {
printf("Usage: %s -d <skin_device_name> [-f <motion_file_path> | -s <start_frame_index> | -e <end_frame_index> | -l]\n",
command);
printf("Options:\n");
printf(" -d: Skin device name.\n");
printf(" -f: path to motion file.\n");
printf(" -s: scale factor for motion translation. Default is 20.\n");
printf(" -e: index of ending motion frame.\n");
printf(" -l: loop motion without resetting to initial position.\n");
}

int main(int argc, char **argv) {
wb_robot_init();

WbFieldRef rotation_field = wb_supervisor_node_get_field(wb_supervisor_node_get_self(), "rotation");
WbFieldRef children_field = wb_supervisor_node_get_field(wb_supervisor_node_get_self(), "children");
WbFieldRef translation_field =
wb_supervisor_node_get_field(wb_supervisor_field_get_mf_node(children_field, 0), "translation");

char *skin_device_name = NULL;
char *motion_file_path = NULL;
int end_frame_index = 0;
int scale = 20;
bool loop = false;
int c;
while ((c = getopt(argc, argv, "d:f:s:e:l")) != -1) {
switch (c) {
case 'd':
skin_device_name = optarg;
break;
case 'f':
motion_file_path = optarg;
break;
case 's':
scale = atoi(optarg);
break;
case 'e':
end_frame_index = atoi(optarg);
break;
case 'l':
loop = true;
break;
    case '?':
      if (optopt == 'd' || optopt == 'f' || optopt == 's' || optopt == 'e')
        fprintf(stderr, "Option -%c requires an argument.\n", optopt);
      else
        fprintf(stderr, "Unknown option `-%c'.\n", optopt);
      // fall through to print the usage and exit
default:
print_usage(argv[0]);
return 1;
}
}

if (skin_device_name == NULL || motion_file_path == NULL) {
fprintf(stderr, "Missing required arguments -d and -f.\n");
print_usage(argv[0]);
return 1;
}

WbDeviceTag skin = wb_robot_get_device(skin_device_name);

// Open a BVH animation file.
WbuBvhMotion bvh_motion = wbu_bvh_read_file(motion_file_path);
if (bvh_motion == NULL) {
wb_robot_cleanup();
return -1;
}

int i, j;

// Get the number of bones in the Skin device
const int skin_bone_count = wb_skin_get_bone_count(skin);
  if (skin_bone_count == 0) {
    printf("The Skin model has no bones to animate.\n");
    wbu_bvh_cleanup(bvh_motion);
    wb_robot_cleanup();
    return 0;
  }

// Get the number of joints and frames in the BVH file.
const int bvh_joint_count = wbu_bvh_get_joint_count(bvh_motion);
const int bvh_frame_count = wbu_bvh_get_frame_count(bvh_motion);
printf("The BVH file \"%s\" has %d joints, and %d frames.\n", motion_file_path, bvh_joint_count, bvh_frame_count);

// Get the bone names in the Skin device
char **joint_name_list;
joint_name_list = (char **)malloc((skin_bone_count) * sizeof(char *));
int root_bone_index = -1;
for (i = 0; i < skin_bone_count; ++i) {
const char *name = wb_skin_get_bone_name(skin, i);
joint_name_list[i] = (char *)malloc(strlen(name) + 1);
strcpy(joint_name_list[i], name);
if (strcmp(name, "Hips") == 0)
root_bone_index = i;
}

if (root_bone_index < 0)
fprintf(stderr, "Root joint not found\n");

  // Find correspondences between the Skin's bones and the BVH's joints.
  // For example, 'hip' could be bone 0 in the Skin device and joint 5 in the BVH motion file.
int *index_skin_to_bvh = (int *)malloc(skin_bone_count * sizeof(int));
for (i = 0; i < skin_bone_count; ++i) {
index_skin_to_bvh[i] = -1;

    // Skip bones that have no counterpart in the BVH file (indices hard-coded for this skin model).
    if (i == 24 || i == 25 || i == 26 || i == 15 || i == 16 || i == 17)
continue;

const char *skin_name = joint_name_list[i];
for (j = 0; j < bvh_joint_count; ++j) {
const char *bvh_name = wbu_bvh_get_joint_name(bvh_motion, j);
if (strcmp(skin_name, bvh_name) == 0)
index_skin_to_bvh[i] = j;
}
}

// Pass absolute and relative joint T pose orientation to BVH utility library
for (i = 0; i < skin_bone_count; ++i) {
if (index_skin_to_bvh[i] < 0)
continue;
const double *global_t_pose = wb_skin_get_bone_orientation(skin, i, true);
wbu_bvh_set_model_t_pose(bvh_motion, global_t_pose, index_skin_to_bvh[i], true);
const double *local_t_pose = wb_skin_get_bone_orientation(skin, i, false);
wbu_bvh_set_model_t_pose(bvh_motion, local_t_pose, index_skin_to_bvh[i], false);
}

// Set factor converting from BVH skeleton scale to Webots skeleton scale.
// Only translation values are scaled by this factor.
wbu_bvh_set_scale(bvh_motion, scale);

double root_position_offset[3] = {0.0, 0.0, 0.0};
double initial_translation[3];
const double *it = wb_supervisor_field_get_sf_vec3f(translation_field);
for (i = 0; i < 3; ++i)
initial_translation[i] = it[i];
if (root_bone_index >= 0) {
const double *current_root_position = wbu_bvh_get_root_translation(bvh_motion);
// Use initial Skin position as zero reference position
for (i = 0; i < 3; ++i)
root_position_offset[i] = -current_root_position[i];
}

  // Check the end frame index.
  if (end_frame_index > 0) {
    if (end_frame_index >= bvh_frame_count) {
      fprintf(stderr, "Invalid end frame index %d. This motion has %d frames.\n", end_frame_index, bvh_frame_count);
      end_frame_index = bvh_frame_count;
    }
  } else
    end_frame_index = bvh_frame_count;

int current_step = 0;
while (wb_robot_step(TIME_STEP) != -1) {
for (i = 0; i < skin_bone_count; ++i) {
if (index_skin_to_bvh[i] < 0)
continue;

// Get joint rotation for each joint.
// Note that we need to pass the joint index according to BVH file.
const double *orientation = wbu_bvh_get_joint_rotation(bvh_motion, index_skin_to_bvh[i]);
wb_skin_set_bone_orientation(skin, i, orientation, false);
}

// Offset the position by a desired value if needed.
const double *root_position;
if (root_bone_index >= 0) {
root_position = wbu_bvh_get_root_translation(bvh_motion);
double position[3];
position[0] = root_position[2] + root_position_offset[2] + initial_translation[0];
position[1] = root_position[0] + root_position_offset[0] + initial_translation[1];
position[2] = initial_translation[2];
wb_supervisor_field_set_sf_vec3f(translation_field, position);
}

// Fetch the next animation frame.
// The simulation update rate is lower than the BVH frame rate, so 4 BVH motion frames are fetched.
const int current_frame_index = wbu_bvh_get_frame_index(bvh_motion);
const int remaining_frames = end_frame_index - current_frame_index;
if (remaining_frames <= 4) {
if (loop && root_bone_index >= 0) {
        // Save the new global position offset based on the last frame,
        // not on the last loaded frame (only one frame in four is loaded).
wbu_bvh_goto_frame(bvh_motion, end_frame_index - 1);
root_position = wbu_bvh_get_root_translation(bvh_motion);
const double *translation = wb_supervisor_field_get_sf_vec3f(translation_field);
for (i = 0; i < 3; ++i)
initial_translation[i] = translation[i];
}
wbu_bvh_goto_frame(bvh_motion, 1); // skip initial pose
} else {
int f = 4;
while (f > 0) {
wbu_bvh_step(bvh_motion);
--f;
}
}

    // Every 500 steps, rotate the character by 180 degrees and mirror its
    // position so that it walks back and forth.
    if (current_step > 500) {
const double *rotation = wb_supervisor_field_get_sf_rotation(rotation_field);
double new_rotation[4];
for (int i = 0; i < 4; i++)
new_rotation[i] = rotation[i];
new_rotation[3] += M_PI;
wb_supervisor_field_set_sf_rotation(rotation_field, new_rotation);
const double *translation = wb_supervisor_field_get_sf_vec3f(translation_field);
initial_translation[0] = -translation[0];
initial_translation[1] = -translation[1];
initial_translation[2] = translation[2];
wb_supervisor_field_set_sf_vec3f(translation_field, initial_translation);
wbu_bvh_goto_frame(bvh_motion, 1); // skip initial pose
current_step = 0;
}
current_step++;
}

// Cleanup
for (i = 0; i < skin_bone_count; ++i)
free(joint_name_list[i]);
free(joint_name_list);
free(index_skin_to_bvh);
wbu_bvh_cleanup(bvh_motion);
wb_robot_cleanup();

return 0;
}
@@ -0,0 +1,26 @@
# Copyright 2020-2023 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from human_detection_env import Env

env = Env(args=sys.argv)

env.reset()
while True:
    obs, reward, dones, _ = env.step(0)
    if dones:
        obs = env.reset()
        print("DONE")